Bugbear Thoughts

Nginx 99: Cannot assign requested address to upstream
Table of Contents
- The Problem
- Solution 1: Enabling KeepAlive between Nginx and your Backend
  1) Enabling KeepAlive inside Nginx
  2) Enabling KeepAlive in your backend
- Solution 2: Setting tcp_tw_reuse to 1
- Solution 3: Using multiple backend IP addresses
  1) Make your backend listen on multiple IPs
  2) Configure your Nginx upstream to load balance across them
If you are using Nginx as a reverse or caching proxy and you are handling a good amount of traffic, sooner or later you are going to run into issues with the TCP connections between Nginx and your backend.
You will start getting error messages like this:
[crit] 2323#0: *535353 connect() to 127.0.0.1:8080 failed (99: Cannot assign requested address) while connecting to upstream
When you use Nginx to proxy to a backend, each proxied request opens an additional TCP session to the backend.
In TCP/IP each connection is uniquely identified by the following:
src_ip:src_port –> dst_ip:dst_port
So each additional TCP session to the backend needs a unique source port.
The number of dynamic source ports you can get per IP is defined by ip_local_port_range, which can be checked by issuing:
cat /proc/sys/net/ipv4/ip_local_port_range
and usually is: 32768 – 60999
So basically you are limited to roughly 28,000 TCP connections per destination.
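A quick sanity check of that limit, using the default range quoted above:

```shell
# With the default range 32768–60999, the number of usable source ports is:
echo $((60999 - 32768 + 1))   # prints 28232
```

Note that this budget applies per (source IP, destination IP, destination port) combination, which is why the solutions below work by reusing connections or adding destinations.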
Now add the fact that each connection stays for at least 60 seconds in the TCP TIME_WAIT state, and you will soon realize that exhausting your dynamic source ports is pretty easy under slightly higher traffic.
This can lead not only to Nginx-related problems, but can also affect other applications that try to create TCP sessions and obtain dynamic ports for the same IP.
You can check your current connections by issuing, for example, ss -s.
Over time I have found multiple solutions to this problem, and I'm going to go through all of them.
Solution 1: Enabling KeepAlive between Nginx and your Backend
The idea of KeepAlive is to reuse already opened connections. For this to work, you will need to configure Nginx to support KeepAlive (which is the harder part) and also enable KeepAlive in your backend server (whatever it is).
You need the following settings inside Nginx in order to activate KeepAlive.
Add the following to the location {} block that contains your proxy_pass:
proxy_http_version 1.1; proxy_set_header Connection "";
Define a KeepAlive-enabled upstream in the http {} config.
If you are using 127.0.0.1:80 as the backend, your upstream could look like this:
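The upstream block itself was lost in extraction; a minimal sketch, assuming the name localhost_80 used below and an illustrative pool of 16 idle connections:

```nginx
# inside the http {} context
upstream localhost_80 {
    server 127.0.0.1:80;
    # number of idle keepalive connections cached per worker process
    keepalive 16;
}
```

The keepalive directive only takes effect together with the proxy_http_version and Connection header settings shown above.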
Modify your proxy_pass to use the upstream definition instead of the direct address.
If your proxy_pass looks similar to this:
proxy_pass http://127.0.0.1:80;
it should now be changed to:
proxy_pass http://localhost_80;
Finally, you should enable KeepAlive in your backend.
If you are using the Apache web server as a backend, you could add the following to your httpd.conf:
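The httpd.conf snippet was elided; Apache's keepalive behaviour is controlled by these directives (the values here are illustrative, not the article's originals):

```apache
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
```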
By doing all this, the number of open connections between Nginx and your upstream should drop significantly.
Solution 2: Setting tcp_tw_reuse to 1
If for some reason you don't want to (or can't) use KeepAlive between Nginx and the upstream/backend, you could try the tcp_tw_reuse kernel setting.
At least for me, this option worked perfectly and solved the connection problem when KeepAlive was disabled.
You could turn on this option by doing the following:
Edit /etc/sysctl.conf and add:
net.ipv4.tcp_tw_reuse=1
then issue sysctl -p to apply the change.
Solution 3: Using multiple backend ip addresses
If solutions 1 and 2 don't work for you because you have really extreme traffic volumes, then you should think about adding additional backend IP addresses.
The concept is pretty simple and straightforward; what you need to do is the following:
If your Nginx and backend are running on the same machine, this is pretty easy, because you can either use the public IP plus localhost (127.0.0.1), or you can add any additional private IP addresses you like and use them.
So for example, if you are going to use the following IPs, you should configure your backend to listen on all of them.
After your backend is configured, you need to configure your upstream {} definition in Nginx so that it uses all of the configured IP addresses.
For example it should look like:
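The example was lost in extraction; a sketch, assuming three illustrative backend addresses all serving the same application on port 8080:

```nginx
upstream backend {
    server 127.0.0.1:8080;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}
```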
With such an upstream definition, Nginx will load balance requests to the backend equally, using the default round-robin mechanism.
If you are using 3 different IPs, your dynamic source port capacity is effectively tripled.
Nginx: Cannot assign requested address for upstream
Mattias Geniar, November 02, 2015
A few days ago, I ran into the following interesting Nginx error message.
My configuration was very simple: an Nginx proxy that did all the SSL encryption and sent traffic to a Varnish instance running on port :80 on localhost. The big takeaway here is that it was a pretty high-traffic Nginx proxy.
Even with keepalive enabled in the nginx upstream, the error popped up. But what did it mean?
TCP ports and limits
It’s good to know a thing or two about networking besides just servers once in a while. The problem occurred because the server couldn’t get a free TCP port quickly enough to make the connection to 127.0.0.1 .
The ss tool gives you stats on the sockets/ports on the server. In this case, I had 51,582 TCP sessions in use (either active, closed, awaiting to be closed, …).
A normal server has around 28,000 possible TCP ports it can use to make a TCP connection to a remote (or local) system. Everything that talks via an IP address will pick a free port from this range to serve as the source port for the outgoing connection. This port range is defined by the ip_local_port_range sysctl parameter.
The format is "minimum maximum" port. So 61000 – 32768 = 28,232 available source ports.
An nginx SSL proxy that connects to a Varnish instance running on localhost will look like this in your netstat.
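The netstat output itself didn't survive extraction; based on the ports cited below, it would have looked something like this:

```text
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.1:37713         127.0.0.1:80            ESTABLISHED
```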
The key takeaway here is the source connection 127.0.0.1:37713 connecting to its endpoint 127.0.0.1:80. For every source connection, a new TCP source port is selected from the range in the ip_local_port_range parameter.
The combination of source IP, source port, destination IP and destination port needs to be unique. This is what's called a quadruplet in networking terms. You likely can't (easily) change the source IP, and the source port is dynamically picked. That only leaves the destination IP and the destination port free to play with.
Solving the source port limitation
There are a couple of easy fixes. First, the ip_local_port_range can be increased on a Linux machine (for more reading material, see increase ip_local_port_range TCP port range in Linux ).
This effectively increases the total port range from its default 28 232 ports to 49 000 ports.
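For example, a range that yields the 49,000 ports mentioned above could be set persistently like this (the exact bounds are a choice, not a requirement):

```ini
# /etc/sysctl.conf — apply with `sysctl -p`
net.ipv4.ip_local_port_range = 15000 64000
```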
If that’s not enough, you can add more destination IPs to connect to. Remember that each connection consists of the 4 parts (called quadruplets ) with source IP and source port, destination IP and destination port. If you can’t change the source port or IP, just change the destination IPs.
Consider this kind of upstream definition in Nginx:
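The definition itself was elided; based on the article's setup (Varnish on localhost:80, keepalive enabled), it likely resembled the following sketch. The upstream name is illustrative:

```nginx
upstream varnish {
    server 127.0.0.1:80;
    keepalive 16;
}
```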
Such a definition can be used in your nginx configurations with the proxy_pass directive.
Now, since each server usually has 2 or more IPs, it's very easy to add more quadruplets to your networking stack by adding an additional IP to your Nginx upstream. You've already added 127.0.0.1, but your server will have another IP (its public or DHCP IP) that you can safely add too, if your webserver binds to all addresses.
Every IP you add to your upstream effectively adds another 49,000 local ports to your networking stack. You can even add non-routable local IPs to your server, as interface aliases, just to use as new destination IPs for your proxy configurations.
nginx proxy: connect() to ip:80 failed (99: Cannot assign requested address)
An nginx/1.0.12 running as a proxy on Debian 6.0.1 starts throwing the following error after running for a short time:
Not all requests produce this error, so I suspect that it has to do with the load of the server and some kind of limit it hit.
I have tried raising ulimit -n to 50k and worker_rlimit_nofile to 50k as well, but that does not seem to help. lsof -n shows a total of 1200 lines for nginx. Is there a system limit on outgoing connections that might prevent nginx from opening more connections to its upstream server?
3 Answers
Seems like I just found the solution to my own question: allocating more outgoing ports via
echo "10240 65535" > /proc/sys/net/ipv4/ip_local_port_range
solved the problem.
- 4 You can also make this change persistent by adding a net.ipv4.ip_local_port_range = 10240 65535 line to /etc/sysctl.conf and calling sudo sysctl -p – Himura Sep 9, 2020 at 13:55
Each TCP connection has to have a unique quadruple source_ip:source_port:dest_ip:dest_port
source_ip is hard to change, source_port is chosen from ip_local_port_range but can't be more than 16 bits. The other thing left to adjust is dest_ip and/or dest_port. So add some IP aliases for your upstream server:
upstream foo { server ip1:80; server ip2:80; server ip3:80; }
Where ip1, ip2 and ip3 are different IP addresses for the same server.
Or it might be easier to have your upstream listen on more ports.
modify /etc/sysctl.conf:
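The sysctl lines in this answer were lost in extraction; judging from the comments below, they likely included the TIME_WAIT settings (note the warnings in those comments about both):

```ini
# /etc/sysctl.conf — see the comments below for why these are discouraged
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
```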
- I'm not sure if this would have helped because the problem wasn't a DOS attack resulting in lots of TIME_WAIT but just a huge number of regular traffic that was supposed to go through and not be killed with a faster TIME_WAIT timeout. – mariow Nov 11, 2014 at 15:13
- @mariow, on my server there is a huge quantity of outgoing requests (crawler), so fast TIME_WAIT reuse is essential. – diyism Dec 26, 2014 at 2:20
- 3 net.ipv4.tcp_tw_recycle is broken and was removed from Linux 4.12: git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/… reuse is dangerous as well: vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux Better don't use these options. – pva Dec 5, 2017 at 19:16
Server Fault is a question and answer site for system and network administrators. It only takes a minute to sign up.
nginx - connect() failed upstream under load testing
I've been doing some load testing with wrk against my nginx reverse proxy -> web app setup, and I noticed that when I get to 1000+ concurrent connections, nginx starts returning 502s and the following error message:
the wrk command was:
I'm trying to figure out what might have gone wrong here. My web application is listening for requests proxied by nginx on port 3004. Is nginx running out of ports? Is the web application unable to handle this many requests? Are requests being timed out? I'm not clear on this and would love more insight into it.
- reverse-proxy
- load-testing
- 1 Seems you've run out of local ports due to sockets in TIME-WAIT state. You can try using bigger local port range, set keepalive for connections, or using unix sockets to connect to backends. See serverfault.com/questions/649262/… – Federico Sierra Apr 17, 2015 at 21:15
- Consider github.com/lebinh/ngxtop for additional insights. NgxTop shows many more metrics based on those logs. – JayMcTee Apr 11, 2016 at 9:12
2 Answers
Already answered here: https://stackoverflow.com/questions/14144396/nginx-proxy-connect-to-ip80-failed-99-cannot-assign-requested-address
The message suggests you've run out of local sockets/ports.
Try to increase networking limits:
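The commands were elided; the limit in question is the ephemeral port range discussed in the Stack Overflow answer linked above, which can be raised persistently like this (values taken from that answer's comments):

```ini
# /etc/sysctl.conf — apply with `sysctl -p`
net.ipv4.ip_local_port_range = 10240 65535
```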
Alternatively you may try unix sockets to see if it helps.
Overview of Network Sockets
When a connection is established over TCP, a socket is created on both the local and the remote host. The remote IP address and port belong to the server side of the connection, and must be determined by the client before it can even initiate the connection. In most cases, the client automatically chooses which local IP address to use for the connection, but sometimes it is chosen by the software establishing the connection. Finally, the local port is randomly selected from a defined range made available by the operating system. The port is associated with the client only for the duration of the connection, and so is referred to as ephemeral. When the connection is terminated, the ephemeral port is available to be reused.
Solution: Enabling Keepalive Connections
Use the keepalive directive to enable keepalive connections from NGINX to upstream servers, defining the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. Without keepalives you are adding more overhead and being inefficient with both connections and ephemeral ports.
More: https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/

IT Dead Inside

Nov 9, 2019
Docker Containers and localhost: Cannot Assign Requested Address
Look out for your use of 'localhost' when using Docker and Docker Compose.
This one might not be obvious at first glance. I hope this helps someone out there who may run into this same issue.
I’ve got an ASP.NET Core website which connects to an API. The website uses a proxy class…
mattgadient.com
Fixing "cannot assign requested address" for nginx + IPv6 on Ubuntu 18.04
Okay, so before we get started, I’m going to assume the following:
- You’re using a host that gives you IPv6 addresses and you do have IPv6 enabled on their end.
- You are on Ubuntu 18.04 or later (technically at least 17.10 for this)
- You put an IPv6 address (i.e. X:X:X:1:2:3:4) as a listen directive in an nginx server block. Example: listen [XX:XX:XXXX:XX:1:2:3:4]:443 ssl http2;
- You’re certain the nginx config itself is fine.

I hit the "cannot assign requested address" error in 2 circumstances. First, nginx wouldn't start at all because it couldn't bind. Once that was fixed, the second issue was that it would fail to bind after a server reboot, though it worked when nginx was manually started.
I’ve run into these in years past, but things changed between Ubuntu 16.04 and 18.04. Beginning in 17.10, Ubuntu changed from ifupdown to netplan which made the process a little different.
In any case, here’s how I fixed each:
NGINX not starting at all when there is an ipv6 listen directive
If you take a look at your /etc/network/interfaces file, you’ll probably find it empty except for a message mentioning that “ifupdown has been replaced by netplan(5) on this system” .
The new configuration is in /etc/netplan/10-ens3.yaml . Edit it and you’ll see something like this:

You’ll have to add the IPv6 address here, so it looks like this:

…essentially, addresses: ['1234:5678:9abc:def0:1:2:3:4/64'] was added. Note the indentation and that you need the prefix (/64). There are prefix calculators on the web if you're not sure. This was all I needed in my case. However, if you have a few to add, they go in the same block, comma-separated. You should be able to add IPv4 addresses here too, so addresses: ['x:x:x:x:1:2:3:4/64', 'x:x:x:x:4:3:2:1/64', '1.2.3.4/24'] would be an example of that.
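Since the screenshots didn't survive extraction, here's a sketch of what the edited /etc/netplan/10-ens3.yaml ends up looking like (interface name and addresses are illustrative):

```yaml
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: true
      addresses: ['1234:5678:9abc:def0:1:2:3:4/64']
```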
Once you're all set, you need to run sudo netplan generate followed by sudo netplan apply.
…if there was an issue, the first command will usually spit out the problem. If everything went well, try service nginx start.
Hopefully nginx starts up now!
If you run into other hiccups or if your server was configured a little differently and the above doesn’t quite work, Ubuntu does have a little more on their blog at https://blog.ubuntu.com/2017/12/01/ubuntu-bionic-netplan . Another site with some configuration examples can be found at https://websiteforstudents.com/configuring-static-ips-ubuntu-17-10-servers/ .
Issue #2: nginx now works if manually started, but has the “bind / requested address” error when the server is rebooted
You won’t know if you have this issue until you reboot and try a service nginx status to see the error, followed by a service nginx start to verify it does work when manually started.
Whether you hit this issue will probably depend on how your host has the network set up. IPv4 tends to come up fast in the networking process, but IPv6 can potentially take a while. If nginx starts before the IPv6 address is up… well… nginx doesn't start.
To fix the issue, we want to make nginx wait for the "network-online" target before it starts, which is reached after the normal "network" target. To do this:
- Edit the /lib/systemd/system/nginx.service file.
- Find the line that says After=network.target and change it to After=network-online.target
- Save the file
- Run systemctl disable nginx.service followed by systemctl enable nginx.service
The file will look something like this after the modification:
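The screenshot is gone; the relevant excerpt of /lib/systemd/system/nginx.service after the change is sketched below (only the After= line differs from the stock unit; the Description text may vary by distribution):

```ini
[Unit]
Description=A high performance web server and a reverse proxy server
After=network-online.target
```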

That should be it! Restart your machine and make sure nginx is running!

