Apache, PHP, MySQL and Runaway Scripts

Sometimes, due to a programming error or a very complex query, a PHP script can keep running long after the user has stopped waiting for the page to render and gone off browsing other sites.
Looking at Apache's server-status, I've sometimes seen scripts executing for hours, which is obviously a problem – they hold an Apache slot, a MySQL connection and other resources.

I had a discussion today with someone who thought Apache would kill the script once the “Timeout” specified in the Apache configuration is reached – this has not been my experience, so I decided to run a couple of tests to check.

I wrote a couple of very simple scripts which demonstrate the different behaviors of long-running scripts. For simplicity I did not use MySQL but a different system call – sleep() – which behaves similarly for the sake of the experiment:

Script which simply runs in a tight loop “forever”:
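The original listing is not shown here; a minimal sketch of such a script might look like this:

```php
<?php
// tightloop.php – a busy loop that never exits on its own.
// It burns CPU time, so PHP's max_execution_time applies.
set_time_limit(30);
for ($i = 0; ; $i++) {
    // nothing – just spin
}
```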

Script which runs in a tight loop but makes a lot of sleep() calls – so it takes a lot of wall-clock time but uses hardly any CPU time:
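A sketch of such a script, assuming a simple sleep() loop:

```php
<?php
// sleeploop.php – lots of wall-clock time, almost no CPU time.
// Time spent inside sleep() is not counted towards max_execution_time,
// so the 30-second limit never triggers.
set_time_limit(30);
for ($i = 0; $i < 10000; $i++) {
    sleep(1);
}
```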

Script which is the same as the previous one but also outputs some data in the loop:
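A sketch, assuming the output is flushed explicitly each iteration:

```php
<?php
// sleepecho.php – same as the previous script, but sends a byte of
// output per iteration. ob_flush()/flush() force a real network write,
// which is what allows a client disconnect to be noticed.
set_time_limit(30);
for ($i = 0; $i < 10000; $i++) {
    echo ".";
    ob_flush();
    flush();
    sleep(1);
}
```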

Script which checks the connection status in the loop and exits if the status is not 0 (NORMAL).
See the connection handling manual for details.
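A sketch of the connection-status check (CONNECTION_NORMAL is the constant for status 0):

```php
<?php
// connstatus.php – polls the connection status every iteration and
// exits as soon as it is no longer CONNECTION_NORMAL (0).
set_time_limit(30);
for ($i = 0; $i < 10000; $i++) {
    echo ".";
    ob_flush();
    flush();
    if (connection_status() != CONNECTION_NORMAL) {
        exit;          // client went away – stop wasting resources
    }
    sleep(1);
}
```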

The testing procedure was as follows: I ran each script and watched “/server-status” to see whether it died after the specified timeout. As a next step I pressed the STOP button in the browser to abort the connection and checked whether the script continued to run.
ignore_user_abort was set to FALSE for these tests.

Tests were done with PHP 5.2.5 and Apache 2.0.59.

Results were a bit surprising:

The “tight loop” script terminated after 30 seconds with “Fatal error: Maximum execution time of 30 seconds exceeded in /var/www/html/tightloop.php on line 5”, just as expected.

The rest of the scripts ran for as long as I wished, and even when a script did not output anything, the 120-second “Timeout” set on the Apache side did not seem to take effect.

STOP-button detection (and hence script abortion) worked only when the script was outputting something. If the script is just doing something like sleep() or MySQL calls and does not output anything to the browser, neither automatic termination nor a connection_aborted() check seems to work. Moreover, you need both ob_flush() and flush() together with the output for it to work reliably: if no low-level network send is attempted, the connection abort from the client will not be discovered.

This is unfortunately a very serious limitation in practice when you're using MySQL with PHP, as a single runaway query can take a lot of time while preventing you from doing any output (to see if the user is still waiting) while it happens.

Happily, there is the pcntl extension, which you can enable to get access to process-control system calls, including pcntl_alarm(). Using this function you can set an alarm and have the script terminated after a certain number of seconds:
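A minimal sketch (the 10-second budget is an arbitrary value for illustration; with no handler installed, SIGALRM's default action simply kills the process):

```php
<?php
// Abort after 10 wall-clock seconds no matter how the time is spent.
pcntl_alarm(10);

sleep(1000);   // stands in for a long-running MySQL query
```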

Unlike set_time_limit(), which counts CPU time, pcntl_alarm() counts wall-clock seconds, so it is much more useful for scripts which spend most of their time waiting.

There is also the pcntl_signal() function, which you can use to install your own timeout handler – for example, to print a nice error message or do some cleanup work, such as opening another MySQL connection to kill the query that was running (otherwise the query may well continue to run even after the PHP script is aborted).
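A sketch of such a handler, written against the old mysql_* API of the post's era; the connection credentials are placeholders, and mysql_thread_id() is used to learn which query to KILL:

```php
<?php
declare(ticks = 1);   // needed for PHP 5.2 to deliver signals

function timeout_handler()
{
    global $thread_id;
    // Open a second connection and kill the query still running
    // on the first one, then report the failure.
    $kill = mysql_connect('localhost', 'user', 'pass', true);
    mysql_query('KILL ' . (int) $thread_id, $kill);
    echo "Sorry, this request took too long.\n";
    exit(1);
}

$link      = mysql_connect('localhost', 'user', 'pass');
$thread_id = mysql_thread_id($link);

pcntl_signal(SIGALRM, 'timeout_handler');
pcntl_alarm(30);                           // 30-second wall-clock budget

mysql_query('SELECT SLEEP(1000)', $link);  // the "runaway" query
```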

UPDATE: There are some comments saying that experimenting with the sleep() function is wrong, and that if I used real MySQL queries the time would be counted towards max_execution_time and the script would be terminated within a few seconds.

This is NOT the case. I tested it with two simple scripts which run for 1000 seconds (spending that time inside MySQL): one runs a single long query, the other runs 1000 queries of one second each. Neither of them gets killed:
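A sketch of the first script, assuming a single SELECT SLEEP() query (credentials are placeholders):

```php
<?php
// One query that spends 1000 seconds inside MySQL – not counted
// towards the 30-second max_execution_time.
set_time_limit(30);
$link = mysql_connect('localhost', 'user', 'pass');
mysql_query('SELECT SLEEP(1000)', $link);
```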

Another one:
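A sketch of the second, issuing 1000 one-second queries:

```php
<?php
// 1000 queries of one second each – the same total MySQL time,
// still not counted towards max_execution_time.
set_time_limit(30);
$link = mysql_connect('localhost', 'user', 'pass');
for ($i = 0; $i < 1000; $i++) {
    mysql_query('SELECT SLEEP(1)', $link);
}
```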

Comments (21)

  • sleetdrop

    but pcntl is not suggested to be used in a web environment.

    PHP manual says:

    Process Control support in PHP implements the Unix style of process creation, program execution, signal handling and process termination. Process Control should not be enabled within a web server environment and unexpected results may happen if any Process Control functions are used within a web server environment.

    May 20, 2008 at 11:48 pm
  • peter


    Good Catch. In fact I used this last technique a lot with lighty and PHP-FastCGI mode – in this case it works quite well.

    May 21, 2008 at 12:45 am
  • Ven

    I have not encountered the same problem as you.
    When it reaches the max_execution_time (set in php.ini; I set it to 60), the script automatically stopped and printed an error: “Fatal error: Maximum execution time of 60 seconds exceeded…”

    May 21, 2008 at 2:43 am
  • Andrey Hristov

    “STOP-button detection (and hence script abortion) worked only when the script was outputting something. If the script is just doing something like sleep() or MySQL calls and does not output anything to the browser, neither automatic termination nor a connection_aborted() check seems to work. Moreover, you need both ob_flush() and flush() together with the output for it to work reliably: if no low-level network send is attempted, the connection abort from the client will not be discovered.”

    That’s normal for a TCP/IP connection. When the server sends a packet and it reaches the client and if the connection has been closed by the client, it will send a packet back stating it. Without this information interchange it’s not possible to detect the client disconnect. It’s not a PHP problem, it’s a web app problem. Probably a solution will be to send 0x20 for example before sending the real output. The whitespace will be discarded by the browser but the server will know that the client is there.

    The MySQL SLEEP function gives me thoughts about possible SQL Injection attack for DoS. If the application is prone to SQL Injection, one can inject SLEEP(99999999) (choose your number) and disconnect. Doing it till there are no more web processes/threads, or the MySQL connections will be exhausted earlier.

    May 21, 2008 at 3:26 am
  • Joe

    That’s a great article! Today I was thinking about how to solve exactly the same problem. I have some slow SQL queries and the Apache thread seems to continue running permanently. I end up with a lot of Sleeping threads in MySQL and eventually max number of connections is reached! I thought DB connection wasn’t being closed properly, but I even created a destructor to handle this.

    What a pity this extension is not for production websites.

    Are there any other ways to handle this?

    May 21, 2008 at 7:37 am
  • peter


    Do you see it for ALL the scripts shown here? You should see an error for the first script (which runs in a tight loop and so consumes a lot of CPU time) but the others should not be killed.

    May 21, 2008 at 8:17 am
  • peter


    I’m not saying it is unusual. In fact it is quite close to the MySQL case – if the client is gone, SELECT * FROM tbl will be aborted virtually instantly, while if a temporary table or external sort is used it will continue to run.

    Regarding constantly sending “ ” to check for user abort, followed by ob_flush() and flush() – it is possible though not overly practical, as this would generate a lot of small TCP/IP packets, while your goal is normally good buffering to get decent performance. In most applications, running too long and user abort are exceptions rather than the normal flow.

    Also note there is an RST packet sent by the browser if the user clicks STOP. It should be possible to check from the server side whether the connection is alive. connection_aborted() could in theory do that, but it does not – which is a good thing to keep in mind.

    The MySQL SLEEP() function is indeed great for bringing down sites via SQL injection, but it is not the only one. You can also use the BENCHMARK() function.

    May 21, 2008 at 12:06 pm
  • peter


    It can be used for production web sites quite well. It works well with PHP-FastCGI which you can use 🙂

    May 21, 2008 at 12:13 pm
  • Alex.W474

    You can use set_time_limit to specify the limit; most PHP scripts require less than 10 seconds to run.
    See also Troubleshooting “Fatal error: Maximum execution time exceeded” in PHP.

    May 21, 2008 at 2:31 pm
  • peter


    Again – set_time_limit defines CPU time, not wall-clock time. Check the example scripts provided.
    It works fine for computational scripts, but it does not work for database-bound or network-bound ones.

    May 21, 2008 at 2:36 pm
  • jeffatrackaid

    I think the hanging behavior is what I see on a client’s site. On this site, they have some horrendous selects that dump to TMP tables, a few of which take nearly a minute to run. I think the client abandons the page but the script, or at least the select, continues to run. I will have to check more closely to see if we can replicate this and confirm the results. On the database that runs the queries, the selects quickly pile up due to table locking. The issue then snowballs, as the web servers are quite busy during peak with hundreds of requests/sec. As the searches hang the Apache slots, the site’s performance degrades.

    May 22, 2008 at 6:07 pm
  • peter

    Absolutely. If PHP is querying MySQL, the Apache child will be busy and a client STOP will not abort the process.

    May 22, 2008 at 9:37 pm
  • Rasmus

    Using signals in userspace PHP is a really bad idea. You have no idea how the web server code you are running inside might be affected which is why that note is in the documentation. Apache1 will send a SIGALRM when the configured timeout expires, but signal handling in Apache2 is completely different, so when you start debugging signals and timeouts you need to be very explicit about which environment you are in, and for Apache2, which mpm you are running. In order to not clash with Apache’s signal handling, PHP uses an itimer which triggers a SIGPROF (normally used for profiling). A SIGPROF counts cpu time, so yes, if you do nothing but sleep or you are blocked on io from somewhere else, then you need to rely on the Apache SIGALRM to time you out. But for any sort of busy loop, it will kick in.

    The nasty thing is that 3rd-party libraries can hijack any of these signals, and it can be really hard to figure out who is doing that to you if you aren’t getting the timeouts you expect. There isn’t much either Apache or PHP can do about that.

    May 23, 2008 at 9:48 pm
  • peter

    Thanks Rasmus,

    But what about using this approach with PHP-FastCGI, which is exactly the environment where I found it working?

    And if we stick to Apache, what do you propose as an alternative? Some watchdog script checking running scripts and killing Apache processes if they have been serving a request for too long? Not overly user-friendly, though I guess you can make the front-end server detect the error and display a nicer message.

    May 24, 2008 at 12:14 am
  • lauri

    I’m not sure that you can actually have a completely runaway MySQL query. Eventually max_execution_time will be reached, as the execution of mysql_*, PDO etc. is counted towards max_execution_time, and the script is aborted. In your tests, however, the sleep() function that you used to ‘simulate’ a query is not counted towards max_execution_time. In fact, in the 2nd example only the for() instructions count as execution time. So even this script will eventually abort, but as the for() instructions take very little CPU time it seems to just go on forever. As for runaway queries, the tests surely show something, but exactly what?

    May 24, 2008 at 12:11 pm
  • peter


    I did the test with mysql_xxx functions for you. There is no difference – MySQL calls are not counted towards execution time. If it works differently in your case please provide example code and PHP version – I’m really curious.

    May 24, 2008 at 5:37 pm
  • PHP