Max number of sockets on Linux


It seems that the server is limited to ~32,720 sockets... I have tried every known variable change to raise this limit, but the server stays limited to 32,720 open sockets, even though there are still 4 GB of free memory and 80% idle CPU...

Here's the configuration

~# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63931
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 798621
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 2048
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63931
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

net.netfilter.nf_conntrack_max = 999999
net.ipv4.netfilter.ip_conntrack_max = 999999
net.nf_conntrack_max = 999999

Any thoughts?


If you're dealing with OpenSSL and threads, check /proc/sys/vm/max_map_count and try to raise it.
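For example (262144 is only an illustrative value, not a recommendation for your workload):

echo 262144 > /proc/sys/vm/max_map_count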


In IPv4, the TCP header has 16 bits for the destination port and 16 bits for the source port.

see http://en.wikipedia.org/wiki/Transmission_Control_Protocol

Seeing that your limit is ~32K, I would expect that you are actually hitting the limit on outbound TCP connections you can make. You should be able to get a maximum of 65K sockets (this would be the protocol limit); this is the limit on the total number of named connections. Fortunately, binding a port for incoming connections only uses one. But if you are trying to test the number of connections from the same machine, you can only have about 65K total outgoing connections (for TCP). To test the number of incoming connections, you will need multiple computers.

Note: you can call socket(AF_INET,...) up to the number of file descriptors available, but you cannot bind them without increasing the number of ports available. To increase the range, do this:

echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range (cat it to see what you currently have--the default is 32768 to 61000)

Perhaps it is time for a new TCP like protocol that will allow 32 bits for the source and dest ports? But how many applications really need more than 65 thousand outbound connections?

The following will allow 100,000 incoming connections on Linux Mint 16 (64-bit); you must run it as root to set the limits:

#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Print the current soft and hard limits on open file descriptors. */
void ShowLimit()
{
   struct rlimit lim;
   int err = getrlimit(RLIMIT_NOFILE, &lim);
   printf("%d limit: %ld,%ld\n", err, (long)lim.rlim_cur, (long)lim.rlim_max);
}

int main()
{
   ShowLimit();

   /* Raise the file-descriptor limit to 100,000 (raising the hard limit needs root). */
   struct rlimit lim;
   lim.rlim_cur = 100000;
   lim.rlim_max = 100000;
   int err = setrlimit(RLIMIT_NOFILE, &lim);
   printf("set returned %d\n", err);

   ShowLimit();

   /* Listen on port 80 on all interfaces. */
   int sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
   struct sockaddr_in maddr;
   memset(&maddr, 0, sizeof(maddr));
   maddr.sin_family = AF_INET;
   maddr.sin_port = htons(80);
   maddr.sin_addr.s_addr = INADDR_ANY;

   err = bind(sock, (struct sockaddr *)&maddr, sizeof(maddr));

   err = listen(sock, 1024);

   /* Accept connections forever, keeping every accepted socket open. */
   int sockets = 0;
   while (1)
   {
      struct sockaddr_in raddr;
      socklen_t rlen = sizeof(raddr);
      int fd = accept(sock, (struct sockaddr *)&raddr, &rlen);
      if (fd >= 0)
      {
         ++sockets;
         printf("%d sockets accepted\n", sockets);
      }
   }
}
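To try it (the file name is only an example), compile with gcc and run it as root so the setrlimit() call is allowed to raise the hard limit:

gcc -o accept_test accept_test.c
sudo ./accept_test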


Check the real limits of the running process with:

cat /proc/{pid}/limits

The max for nofile is determined by the kernel; the following, run as root, would increase the max to 100,000 "files", i.e. 100k concurrent connections:

echo 100000 > /proc/sys/fs/file-max

To make it permanent, edit /etc/sysctl.conf:

fs.file-max = 100000
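and reload the settings:

sysctl -p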

You then need the server to ask for more open files; this is different for each server. In nginx, for example, you set

worker_rlimit_nofile 100000;

Restart nginx and check /proc/{pid}/limits again.
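For example (assuming pidof -s returns the PID of the nginx process you care about):

grep "open files" /proc/$(pidof -s nginx)/limits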

To test this you need 100,000 sockets in your client; in testing, you are limited by the number of TCP ports available per IP address.

To increase the local port range to the maximum:

echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range

This gives you ~64000 ports to test with.

If that is not enough, you need more IP addresses. When testing on localhost you can bind the source/client to an IP other than 127.0.0.1 / localhost.

For example, you can bind your test clients to IPs randomly selected from 127.0.0.1 to 127.0.0.5.

Using apache-bench you would set

-B 127.0.0.x

Node.js sockets would use

localAddress
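In plain C, the equivalent is to bind() the client socket to the chosen source address before calling connect(). A minimal sketch, assuming 127.0.0.2 as the source address and a server under test listening on 127.0.0.1:80:

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main()
{
    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    /* Bind the client side to 127.0.0.2; port 0 lets the kernel pick a free local port. */
    struct sockaddr_in src;
    memset(&src, 0, sizeof(src));
    src.sin_family = AF_INET;
    src.sin_port = htons(0);
    inet_pton(AF_INET, "127.0.0.2", &src.sin_addr);
    if (bind(fd, (struct sockaddr *)&src, sizeof(src)) < 0)
        perror("bind");

    /* Connect to the server under test. */
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(80);
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);
    if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("connect");

    return 0;
}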

/etc/security/limits.conf configures PAM limits; it's usually irrelevant for a server daemon.

If the server is proxying requests over TCP, using upstream or mod_proxy for example, the server itself is limited by ip_local_port_range. This could easily be the source of a ~32,000 limit.


Which server are you talking about? It might have a hardcoded maximum, or run into other limits (max threads, out of address space, etc.).

http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-1 describes some of the tuning needed to achieve a lot of connections, but it doesn't help if the server application limits them in some way or another.


If you're considering an application where you believe you need to open thousands of sockets, you will definitely want to read about The C10k Problem. That page discusses many of the issues you will face as you scale up your number of client connections to a single server.


On GNU/Linux, the maximum is what you wrote. This number is (probably) stated somewhere in the networking standards. I doubt you really need so many sockets; you should optimize the way you are using sockets instead of creating dozens all the time.


In net/socket.c the fd is allocated in sock_alloc_fd(), which calls get_unused_fd().

Looking at linux/fs/file.c, the only limit to the number of fd's is sysctl_nr_open, which is limited to

int sysctl_nr_open_max = 1024 * 1024; /* raised later */

/// later...
sysctl_nr_open_max = min((size_t)INT_MAX, ~(size_t)0/sizeof(void *)) &
                         -BITS_PER_LONG;

and can be read using sysctl fs.nr_open which gives 1M by default here. So the fd's are probably not your problem.
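For example, to read it (and, if needed, raise it; the value below is only illustrative):

sysctl fs.nr_open
sysctl -w fs.nr_open=2097152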

Edit: you probably checked this as well, but would you care to share the output of

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main() {
    struct rlimit limit;
    getrlimit(RLIMIT_NOFILE, &limit);
    printf("cur: %ld, max: %ld\n", (long)limit.rlim_cur, (long)limit.rlim_max);
}

with us?


Generally, having too many live connections is a bad thing. However, everything depends on the application and the patterns with which it communicates with its clients.

I suppose there are cases where clients have to be permanently async-connected and it is the only way a distributed solution might work.

Assuming there are no bottlenecks in memory/CPU/network for the current load, and keeping in mind that leaving idle connections open may be the only way a distributed application consumes fewer resources (say, connection setup time and overall/peak memory), overall OS network performance might be higher than when following the best practices we all know.

Good question, and it needs a solution. The problem is that nobody can answer it from here. I would suggest using a divide-and-conquer technique; when the bottleneck is found, come back to us.

Take your application apart on a testbed and you will find the bottleneck.
