Hi, here's a six-core machine with 32 GB RAM. I've installed MySQL 5.1.47 (backport). The config is nearly standard, except max_connections, which is set to 2000. On top of that there is PHP 5.3/FastCGI on nginx, serving a very simple PHP application. nginx can handle thousands of requests in parallel on this machine. The application accesses MySQL via mysqli. When using non-persistent connections in mysqli, there is a problem once I reach about 100 concurrent connections:
[error] 14074#0: *296 FastCGI sent in stderr: "PHP Warning: mysqli::mysqli(): [2002] Resource temporarily unavailable (trying to connect via unix:///tmp/mysqld.sock) in /var/www/libs/db.php on line 7
I have no idea how to solve this. Connecting to MySQL via TCP is terribly slow. The interesting thing is: when using persistent connections (adding 'p:' to the hostname in mysqli), the first 5,000-10,000 requests fail with the same error as above, until the webserver's max connections (set to 1500) is reached. After those first requests MySQL keeps its 1500 connections open and all is fine, so I can make my 1500 concurrent requests. Huh? Is it possible that this is a problem with PHP FastCGI?
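For reference, this is roughly what the two connect variants look like in db.php (a minimal sketch; credentials and database name are placeholders):

// non-persistent: a new connection per request, torn down when the script ends
$db = new mysqli('localhost', 'user', 'pass', 'appdb', 3306, '/tmp/mysqld.sock');

// persistent alternative (PHP >= 5.3): the 'p:' prefix lets the FastCGI
// child keep the connection open and reuse it across requests
// $db = new mysqli('p:localhost', 'user', 'pass', 'appdb', 3306, '/tmp/mysqld.sock');

if ($db->connect_errno) {
    trigger_error('connect failed: ' . $db->connect_error, E_USER_WARNING);
}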
-
It's possible that, if you connect 10,000 times at once, 1500 of those 10,000 will be accepted and the remaining 8500 will be rejected (as in, will not be delayed until the persistent connection is available). Then, subsequent requests will use the 1500-large persistent pool.
Is there a problem with opening the persistent connections once when you start up the server?
Borealid : @cuti123: So why don't you set up the persistent connections before starting the application for real? The problem may be that your DB server is limiting the maximum rate of opening new connections. From Borealid -
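One possible shape of that warm-up (a sketch, assuming a dedicated warmup.php endpoint that is hit roughly once per FastCGI child before real traffic arrives; credentials are placeholders):

// warmup.php - opening the persistent connection here means each FastCGI
// child that serves one of these requests already holds its connection
// when real traffic arrives
$db = new mysqli('p:localhost', 'user', 'pass', 'appdb', 3306, '/tmp/mysqld.sock');
echo $db->connect_errno ? "fail\n" : "warm\n";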
I wrote that the "resource temporarily unavailable" error occurs every time, even when using only a few hundred non-persistent connections. Now I've added
usleep(rand(500000, 1500000)); // sleep for a random 0.5-1.5 s before connecting
before the mysqli connect call in my PHP script, so that the script sleeps for a random time (0.5-1.5 seconds) before connecting to MySQL. And tadaa, the problem doesn't occur with non-persistent connections anymore. So could it possibly be that MySQL has a problem with massive concurrent thread creation?
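A variant of this workaround (a sketch; credentials and the retry limit are assumptions) only sleeps when the connect actually fails, instead of delaying every request:

$db = null;
for ($attempt = 0; $attempt < 5; $attempt++) {
    $db = @new mysqli('localhost', 'user', 'pass', 'appdb', 3306, '/tmp/mysqld.sock');
    if (!$db->connect_errno) {
        break; // connected
    }
    // back off for a random 0.5-1.5 s before retrying, as above
    usleep(rand(500000, 1500000));
}
if ($db->connect_errno) {
    trigger_error('giving up: ' . $db->connect_error, E_USER_WARNING);
}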
Borealid : Or maybe it's that MySQL (sensibly) rate-limits your connections to protect itself? See http://mysqlha.blogspot.com/2007/08/rate-limiting-for-innodb.html : I don't think so, because the problem is already the connection to MySQL, not a query. At this point it has nothing to do with InnoDB (IMHO). -
This sounds like maybe connections aren't being cleaned up in a timely manner. "show processlist" should show the connections if that's the case. Looking at the MySQL documentation, it appears this may be common with PHP unless certain PHP parameters are tweaked. You may want to look through the article and comments at http://dev.mysql.com/doc/refman/5.0/en/too-many-connections.html
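A quick way to watch this from PHP (a sketch; credentials are placeholders) is to poll the Threads_connected counter, which reflects the same information as the processlist length:

$db  = new mysqli('localhost', 'user', 'pass', '', 3306, '/tmp/mysqld.sock');
$res = $db->query("SHOW STATUS LIKE 'Threads_connected'");
$row = $res->fetch_row();
printf("open connections: %s\n", $row[1]);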
Another possibility, suggested by the very slow TCP connections, is that MySQL (or tcpwrappers) is trying to do a hostname lookup for access checks and the lookup is slow. This seems unlikely when using unix sockets, but in case it's trying to match localhost or system hostnames, it may be worth checking whether any access rules could be moved to IP addresses or removed.
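One way to probe that theory (a sketch; credentials are placeholders) is to time a unix-socket connect against a TCP connect, since MySQL may do a reverse-DNS lookup for TCP clients (unless skip-name-resolve is set) but never for socket connections:

foreach (array('unix socket' => array('localhost', '/tmp/mysqld.sock'),
               'tcp'         => array('127.0.0.1', null)) as $label => $cfg) {
    $t  = microtime(true);
    $db = @new mysqli($cfg[0], 'user', 'pass', '', 3306, $cfg[1]);
    printf("%s connect: %.3f s%s\n", $label, microtime(true) - $t,
           $db->connect_errno ? ' (failed)' : '');
}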
From Jeremy M -
Thanks for your replies. To me it looks like a problem with concurrent thread creation in MySQL: once 1500 persistent connections are established, no more errors occur, and these connections can be used by 1500 scripts at the same time, so I don't believe it's a problem with parallel access to the socket or a TCP issue.
To sum up:
- ulimit=32768, max_connections=2000, php_fastcgi_children=1500
- execution of 1500 concurrent requests with non-persistent connections always fails with the "Resource temporarily unavailable" error (only a small portion of the requests is handled)
- adding a random delay of 0.5-1.5 seconds before connecting to the database (with non-persistent connections) makes the problem disappear completely
- with persistent connections, the problem keeps occurring until all PHP FastCGI children have established their persistent connection to the database (so while the MySQL processlist shows fewer than 1500 connections, the failure still happens, just less often)
- once all PHP processes have established their persistent connection to the DB, the failure doesn't occur anymore
-
What OS is this running on top of? Windows has a terribly long timeout on TCP connections, something on the order of 90 seconds. You have to make some registry edits to bring it back down to something shorter. It also helps to raise the 10k port limit to something higher.
: I'm using Debian Lenny. The following additional settings were made:
echo 131072 > /proc/sys/net/nf_conntrack_max
echo 131072 > /proc/sys/net/netfilter/nf_conntrack_max
echo 3 > /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_time_wait
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
But remember, all interprocess communication here goes via unix sockets, so these settings don't affect it. From aduljr -
Have you also adjusted the max_user_connections parameter? It could be that nginx is treated as a single user, and is capped by that parameter.
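For reference, the effective limit can be read at runtime (a sketch; credentials are placeholders); a value of 0 means no per-user cap, so only max_connections applies:

$db  = new mysqli('localhost', 'user', 'pass', '', 3306, '/tmp/mysqld.sock');
$res = $db->query("SHOW VARIABLES LIKE 'max_user_connections'");
$row = $res->fetch_row();
printf("%s = %s\n", $row[0], $row[1]);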
From blacklotus