Bugbear Thoughts


Nginx 99: Cannot assign requested address to upstream

Table of Contents

- The Problem
- Solution 1: Enabling KeepAlive between Nginx and your Backend (1: enabling KeepAlive inside Nginx, 2: enabling KeepAlive in your backend)
- Solution 2: Setting tcp_tw_reuse to 1
- Solution 3: Using multiple backend IP addresses (1: make your backend listen on multiple IPs, 2: configure your Nginx upstream to load balance across them)

If you are using Nginx as a reverse or caching proxy and you are handling a decent amount of traffic, sooner or later you are going to have issues with the TCP connections between Nginx and your backend.

You will start getting error messages looking like this: 

[crit] 2323#0: *535353 connect() to 127.0.0.1:8080 failed (99: Cannot assign requested address) while connecting to upstream

When you use Nginx to proxy to a backend, each proxied request opens an additional TCP session to that backend.

In TCP/IP each connection is uniquely identified by the following: 

src_ip:src_port –> dst_ip:dst_port

So every additional TCP session you open to the backend needs a unique source port.

The number of dynamic source ports you can get per IP is defined by the “ip_local_port_range”, which could be checked by issuing: 

cat  /proc/sys/net/ipv4/ip_local_port_range

and usually is:  32768 – 60999

So basically you are limited to fewer than 30 000 concurrent TCP connections.

Now add the fact that each connection stays at least 60 seconds in the TCP TIME_WAIT state, and you will soon realize that exhausting your dynamic source ports is pretty easy under slightly higher traffic.

This can lead not only to Nginx-related problems; it can also affect other applications that try to create TCP sessions and obtain a dynamic port on the same IP.

You can check your current connections by issuing: 
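One common way — assuming the ss tool from iproute2 is installed — is:

```shell
# overall socket summary (totals per TCP state)
ss -s

# count only the sockets stuck in TIME_WAIT
ss -tan state time-wait | wc -l
```

On older systems, netstat -an | grep TIME_WAIT gives a roughly equivalent view.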

Over time I have found multiple solutions to this problem, and I'm going to go through all of them.

Solution 1: Enabling KeepAlive between Nginx and your Backend 

The idea of KeepAlive is to reuse already opened connections. For this to work, you will need to configure Nginx to support KeepAlive (which is the harder part) and also enable KeepAlive in your backend server (whatever it is).

You need to make the following changes inside Nginx in order to activate KeepAlive.

Add the following to the location {} block that contains your proxy_pass:

proxy_http_version 1.1;
proxy_set_header Connection "";

Define a KeepAlive-enabled upstream in your http {} config.

If you are using 127.0.0.1:80 as a backend, your upstream could look like this:
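A minimal sketch (the upstream name localhost_80 matches the proxy_pass shown further down; the keepalive count of 32 is an example value, not a recommendation):

```nginx
upstream localhost_80 {
    server 127.0.0.1:80;

    # number of idle keepalive connections each worker process
    # keeps open to this upstream; tune to your traffic
    keepalive 32;
}
```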

Modify your proxy_pass to use the upstream definition instead of the direct address.

If your proxy_pass looks similar to this:

proxy_pass http://127.0.0.1:80;

it should be changed to:

proxy_pass http://localhost_80;

Finally, you should enable KeepAlive in your backend.

If you are using the Apache web server as a backend, you could add the following to your httpd.conf:
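For example (these are standard Apache directives; the values shown are common defaults rather than tuned recommendations):

```apache
# allow multiple requests per TCP connection
KeepAlive On
# cap requests per connection; 100 is the Apache default
MaxKeepAliveRequests 100
# seconds to wait for the next request on an idle connection
KeepAliveTimeout 5
```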

By doing all this, the number of open connections between Nginx and your upstream should drop significantly.

Solution 2: Setting tcp_tw_reuse to 1

If for some reason you don't want to (or can't) use KeepAlive between Nginx and the upstream/backend, you could try the tcp_tw_reuse kernel setting, which lets the kernel reuse sockets in TIME_WAIT state for new outgoing connections.

At least for me, this option worked perfectly and solved the connection problem when KeepAlive is disabled.

You can turn this option on by doing the following:

Edit /etc/sysctl.conf and add:

net.ipv4.tcp_tw_reuse=1

then apply it by issuing:

sysctl -p

(Note that tcp_tw_reuse only affects new outgoing connections, which is exactly the Nginx-to-upstream case.)

Solution 3: Using multiple backend ip addresses

If solutions 1 and 2 don't work for you because you have really extreme traffic volumes, then you should think about adding additional backend IP addresses.

The concept is pretty easy and straightforward; what you need to do is the following:

If your Nginx and backend are running on the same machine, this is pretty easy: you can either use the public IP plus localhost (127.0.0.1), or you can add any additional private IP addresses you like and use those.

So, for example, if you are going to use several IPs — say 127.0.0.1, 127.0.0.2 and 127.0.0.3 (loopback aliases, purely as an example) — you should configure your backend to listen on all of them.

After your backend is configured, you need to configure your upstream {} definition in Nginx so that it uses all of the configured IP addresses.

For example, it could look like this:
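A sketch with three loopback aliases (example addresses and name; substitute your real backend IPs):

```nginx
upstream backends {
    server 127.0.0.1:80;
    server 127.0.0.2:80;
    server 127.0.0.3:80;
}
```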

By adding such an upstream definition, Nginx will load balance requests to the backend equally, using the default round-robin mechanism.

If you are using 3 different IPs, your pool of dynamic source ports is effectively tripled.

Related posts:

Nginx: Cannot assign requested address for upstream

Mattias Geniar, November 02, 2015


A few days ago, I ran into the following interesting Nginx error message.

My configuration was very simple. This was an Nginx proxy that did all the SSL encryption and sent traffic to a Varnish instance, running on port :80 on localhost. The big takeaway here is that it was a pretty high traffic Nginx proxy.

Even with keepalive enabled in the nginx upstream, the error popped up. But what did it mean?

TCP ports and limits

It’s good to know a thing or two about networking besides just servers once in a while. The problem occurred because the server couldn’t get a free TCP port quickly enough to make the connection to 127.0.0.1 .

The ss tool gives you stats on the sockets/ports on the server. In this case, I had 51,582 TCP sessions in use (whether active, closed, or waiting to be closed, …).

A normal server has around 28,000 possible TCP ports it can use to make a TCP connection to a remote (or local) system. Everything that talks via an IP address will pick a free port from this range to serve as the source port for the outgoing connection. This port range is defined by the ip_local_port_range sysctl parameter.

The format is "minimum maximum" port. So 61000 – 32768 = 28,232 available source ports.

An nginx SSL proxy that connects to a Varnish instance running on localhost will look like this in your netstat .
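For example, a single proxied connection in TIME_WAIT might show up as follows (a reconstructed sample; a busy proxy will show thousands of such rows, each with a different source port):

```
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.1:37713         127.0.0.1:80            TIME_WAIT
```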

The key takeaway here is the source connection 127.0.0.1:37713 connecting to its endpoint 127.0.0.1:80. For every new connection, a TCP source port is selected from the range in the ip_local_port_range parameter.

The combination of a source IP, source port, destination IP and destination port needs to be unique. This is what's called a quadruplet in networking terms. You likely can't (easily) change the source IP. The source port is dynamically picked. That only leaves the destination IP and the destination port free to play with.

Solving the source port limitation

There are a couple of easy fixes. First, the ip_local_port_range can be increased on a Linux machine (for more reading material, see increase ip_local_port_range TCP port range in Linux ).

This effectively increases the total port range from its default 28 232 ports to 49 000 ports.
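A sketch of how that might look (the exact range values are an example, not the only choice):

```shell
# show the current ephemeral port range
cat /proc/sys/net/ipv4/ip_local_port_range

# widening it (as root) could be done with, e.g.:
#   sysctl -w net.ipv4.ip_local_port_range="15000 64000"
# which yields 64000 - 15000 = 49000 usable source ports
```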

If that’s not enough, you can add more destination IPs to connect to. Remember that each connection consists of the 4 parts (called quadruplets ) with source IP and source port, destination IP and destination port. If you can’t change the source port or IP, just change the destination IPs.

Consider this kind of upstream definition in Nginx;
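For example (the upstream name and the second address are hypothetical):

```nginx
upstream varnish {
    server 127.0.0.1:80;
    server 192.168.0.10:80;   # e.g. the server's own public or DHCP IP
}
```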

Such a definition can be used in your nginx configurations with the proxy_pass directive.

Now, if you know that each server usually has 2 or more IPs, it's very easy to add more quadruplets to your networking stack by adding an additional IP to your Nginx upstream. You've already added 127.0.0.1, but your server will have another IP (its public or DHCP IP) that you can safely add too, if your webserver binds to all addresses.

Every IP you add to your upstream effectively adds another 49,000 local ports to your networking stack. You can even add non-routable local IPs to your server, as interface aliases, just to use as new destination IPs for your proxy configurations.



nginx proxy: connect() to ip:80 failed (99: Cannot assign requested address)

An nginx/1.0.12 running as a proxy on Debian 6.0.1 starts throwing the following error after running for a short time:

Not all requests produce this error, so I suspect that it has to do with the load of the server and some kind of limit it hit.

I have tried raising ulimit -n to 50k and worker_rlimit_nofile to 50k as well, but that does not seem to help. lsof -n shows a total of 1200 lines for nginx. Is there a system limit on outgoing connections that might prevent nginx from opening more connections to its upstream server?


3 Answers

Seems like I just found the solution to my own question: allocating more outgoing ports (by widening the local port range) solved the problem.
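The widening itself is typically done via the ip_local_port_range parameter discussed above (the values below are an example):

```shell
# check the current range first
cat /proc/sys/net/ipv4/ip_local_port_range

# as root, widen it, e.g.:
#   echo "10240 65535" > /proc/sys/net/ipv4/ip_local_port_range
```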

Each TCP connection has to have a unique quadruple source_ip:source_port:dest_ip:dest_port

source_ip is hard to change, source_port is chosen from ip_local_port_range but can't be more than 16 bits. The other thing left to adjust is dest_ip and/or dest_port. So add some IP aliases for your upstream server:

upstream foo {
    server ip1:80;
    server ip2:80;
    server ip3:80;
}

Where ip1, ip2 and ip3 are different IP addresses for the same server.

Or it might be easier to have your upstream listen on more ports.


modify /etc/sysctl.conf:
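A sketch of the kind of settings usually meant here (the values are assumptions, not the original answer):

```
net.ipv4.ip_local_port_range = 10240 65535
net.ipv4.tcp_tw_reuse = 1
```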




nginx - connect() failed upstream under load testing

I've been doing some load testing with wrk of my nginx reverse proxy -> my web app setup and I noticed that when I get to 1000+ concurrent connections, nginx starts returning 502s and the following error message:

the wrk command was:

I'm trying to figure out what might have gone wrong here. My web application is listening for requests proxied by nginx on port 3004. Is nginx running out of ports? Is the web application unable to handle this many requests? Are requests timing out? I'm not clear on this and would love more insight into it.


2 Answers

Already answered here: https://stackoverflow.com/questions/14144396/nginx-proxy-connect-to-ip80-failed-99-cannot-assign-requested-address

The message suggests you've run out of local sockets/ports.

Try increasing the networking limits, starting with the ip_local_port_range fix from the linked answer.

Alternatively you may try unix sockets to see if it helps.
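Unix sockets sidestep the problem entirely, since no TCP quadruplet (and hence no ephemeral source port) is involved. A sketch (the socket path and upstream name are hypothetical; your backend must be configured to listen on that socket):

```nginx
upstream app {
    # no TCP ports involved, so no source-port exhaustion
    server unix:/var/run/app.sock;
}
```

The upstream is then used with proxy_pass http://app; as usual.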


Overview of Network Sockets

When a connection is established over TCP, a socket is created on both the local and the remote host. The remote IP address and port belong to the server side of the connection, and must be determined by the client before it can even initiate the connection. In most cases, the client automatically chooses which local IP address to use for the connection, but sometimes it is chosen by the software establishing the connection. Finally, the local port is randomly selected from a defined range made available by the operating system. The port is associated with the client only for the duration of the connection, and so is referred to as ephemeral. When the connection is terminated, the ephemeral port is available to be reused.

Solution: Enabling Keepalive Connections

Use the keepalive directive to enable keepalive connections from NGINX to upstream servers, defining the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. Without keepalives you are adding more overhead and being inefficient with both connections and ephemeral ports.
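A minimal sketch of that directive in context (the names, the port and the keepalive count are example values):

```nginx
upstream backend {
    server 127.0.0.1:3004;

    # keep up to 16 idle connections per worker process;
    # least recently used connections are closed beyond that
    keepalive 16;
}

server {
    location / {
        # both lines are required for keepalive to upstream servers
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://backend;
    }
}
```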

more : https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/




failed (99: Cannot assign requested address) while connecting to upstream, #938


IT Dead Inside

Christopher Laine

Nov 9, 2019

Docker Containers and localhost: Cannot Assign Requested Address

Look out for your use of 'localhost' when using Docker and Docker Compose.

This one might not be obvious at first glance. I hope this helps someone out there who may run into this same issue.

I’ve got an ASP.NET Core website which connects to an API. The website uses a proxy class…
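The pitfall can be sketched with a hypothetical Compose file: inside the web container, localhost resolves to that container itself, so the API must be addressed by its Compose service name instead (all names here are assumptions for illustration):

```yaml
services:
  web:
    build: ./web
    environment:
      # not http://localhost:5000 — inside this container,
      # localhost points back at the web container itself
      API_URL: "http://api:5000"
    depends_on:
      - api
  api:
    build: ./api
```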


mattgadient.com

Fixing “cannot assign requested address” for nginx + ipv6 on ubuntu 18.04.

Okay, so before we get started, I'm going to assume the following: you're using a host that gives you IPv6 addresses (and IPv6 is enabled on their end), and you're on Ubuntu 18.04 or later (technically at least 17.10).

Ubuntu 18.04 - Cannot assign requested address in nginx over IPv6

I hit the "cannot assign requested address" error in 2 circumstances. First, nginx wouldn't start at all because it couldn't bind. Once that was fixed, the second issue was that it would fail to bind when the server rebooted, though it worked when nginx was manually restarted afterwards.

I’ve run into these in years past, but things changed between Ubuntu 16.04 and 18.04. Beginning in 17.10, Ubuntu changed from ifupdown to netplan which made the process a little different.

In any case, here’s how I fixed each:

Issue #1: NGINX not starting at all when there is an IPv6 listen directive

If you take a look at your /etc/network/interfaces file, you’ll probably find it empty except for a message mentioning that “ifupdown has been replaced by netplan(5) on this system” .

The new configuration is in /etc/netplan/10-ens3.yaml . Edit it and you’ll see something like this:

Netplan default file

You’ll have to add the IPv6 address here, so it looks like this:

Netplan file modified for IPv6

…essentially,  addresses: ['1234:5678:9abc:def0:1:2:3:4/64'] was added. Note the indentation and that you need the prefix (/64). There are prefix calculators on the web if you’re not sure. This was all I needed in my case. However if you have a few to add, they go in the same block but are comma-separated. You should be able to add IPv4 addresses here too, so    addresses: ['x:x:x:x:1:2:3:4/64', 'x:x:x:x:4:3:2:1/64', 1.2.3.4/24] would be an example of that.
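Putting that together, the modified file could look roughly like this (the interface name, renderer and DHCP settings are assumptions that will differ per system; the address is the example from above):

```yaml
# /etc/netplan/10-ens3.yaml — a sketch
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp4: true
      addresses: ['1234:5678:9abc:def0:1:2:3:4/64']
```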

Once you're all set, you need to run the following:

sudo netplan generate
sudo netplan apply

…if there was an issue, the first line will usually spit out the problem. If everything went well, try starting nginx:

sudo service nginx start

Hopefully nginx starts up now!

If you run into other hiccups or if your server was configured a little differently and the above doesn’t quite work, Ubuntu does have a little more on their blog at https://blog.ubuntu.com/2017/12/01/ubuntu-bionic-netplan . Another site with some configuration examples can be found at  https://websiteforstudents.com/configuring-static-ips-ubuntu-17-10-servers/ .

Issue #2: nginx now works if manually started, but has the “bind / requested address” error when the server is rebooted

You won’t know if you have this issue until you reboot and try a service nginx status  to see the error, followed by a service nginx start to verify it does work when manually started.

Whether you hit this issue is probably going to depend on the way your host has the network set up. IPv4 tends to come up fast in the networking process, but IPv6 can potentially take a while. If nginx starts before the IPv6 address is up… well… nginx doesn't start.

To fix the issue, we want to make nginx wait for the "network-online" signal before it starts, rather than just the normal "network" signal (which comes earlier). To do this, modify nginx's systemd service file:

The file will look something like this after the modification:

nginx.service file for Ubuntu 18.04
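The relevant change is making the unit wait for network-online.target. As a sketch, an override (created e.g. with sudo systemctl edit nginx, assuming a systemd-packaged nginx) would contain:

```ini
# /etc/systemd/system/nginx.service.d/override.conf
[Unit]
After=network-online.target
Wants=network-online.target
```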

That should be it! Restart your machine and make sure nginx is running!


