Avatar of akohan
akohan

asked on

Fatal error - Maximum execution time of 30 seconds exceeded


Hello,

The line marked in the snippet causes a fatal error. Would somebody please give me a heads-up on why I'm getting this?
// some sample code

while ($current_record = mysql_fetch_array($sql_result)) {

    $ar_test1[$index] = array(
        "social_security" => $current_record["ssn"],
        "full_name"       => $current_record["name"],
        "status"          => $current_record["status_desc"],
        "date"            => $current_record["date"],
        "agency"          => '',
        "balance"         => ''
    );

    $str_getIndividualRec =
        "SELECT ............ ";

    $sql_indvidual_result = mysql_query($str_getIndividualRec) or die(mysql_error());  //// <<<< line causes Fatal Error

    while ($records = mysql_fetch_array($sql_indvidual_result)) {

        $ar_test1[$index]['agency']  = $records['agency'];
        $ar_test1[$index]['balance'] = $records['balance'];

    }
    $index++;
}


Avatar of Dirtpatch-Jenkins
Dirtpatch-Jenkins

The files you're trying to work with are exceeding the web server's time-out, which is apparently set to 30 seconds for that execution.

Do you have access to the server .ini files?
If you have access to the web server's php.ini file, look for this line:

max_execution_time = 30

Simply change the 30 to 60 or whatever you need.
There is probably a problem with your select query, as it uses more than 30 seconds to execute. Show us the query. Run the query prefixed with the word "explain", and post the result (aka execution plan).

EXPLAIN SELECT ....
You will need to run the EXPLAIN SELECT ... in phpmyadmin or some other mysql client.
Avatar of Tomeeboy
Should be a simple fix if this is being caused by PHP's script execution time limit.  You can adjust this right within your script, and only for this script, without having to change the settings for your entire server.  Just add this to the top of your PHP script:

<?php
set_time_limit(xx);
?>

Replace "xx" with the amount of time (in seconds) you want to increase the limit to (maybe try 60, then 90, and so forth).  This will only change the time limit for that particular script.  You can modify the setting for all scripts by editing PHP.ini.
More information on set_time_limit():
http://www.php.net/set_time_limit
Avatar of akohan

ASKER



Hello Cxr,

This is the SQL. It is a query that runs inside a loop.




///////////////
 
      $str_getIndividualRec =
          "SELECT collectionAgenciess.coll_company AS agency, " .
          "SUM(tranc.trans_amount) AS balance " .
          "FROM  client, collectionAgenciess, trans " .
          "WHERE client.CollectionAgencyID=collectionagenciess.coll_id " .
          "AND   client.st_id='" . $current_record['id'] . "' " .
          "AND   tranc.st_id=client.st_id ";


How many records are there in each table? Do you have an index on all relevant columns? There should be an index on all of these:

client.st_id
tranc.st_id
collectionagenciess.coll_id

Possibly also on client.CollectionAgencyID, depending on how the server decides to do this.
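
For reference, such indexes could be added with ALTER TABLE, roughly like this (just a sketch based on the column names above; the index names are placeholders, coll_id is probably already the primary key of collectionAgenciess, and you should double-check whether the table is really called trans or tranc):

ALTER TABLE client ADD INDEX idx_client_st_id (st_id);
ALTER TABLE trans ADD INDEX idx_trans_st_id (st_id);
ALTER TABLE collectionAgenciess ADD INDEX idx_coll_id (coll_id);
ALTER TABLE client ADD INDEX idx_client_agency (CollectionAgencyID);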

The output of EXPLAIN SELECT ... would help us determine what is going on.
How many entries are currently in the tables? Did you recently change servers, or hardware? Is this an existing application you are fixing, or a new one entirely?

As cxr said, if you execute an EXPLAIN statement and post us the results it would be a great help.
Avatar of akohan

ASKER


Right. On Linux I know how to use DESCRIBE or EXPLAIN, but on Windows (my current platform) I don't know how.
Any idea?
Run the mysql client from the command line, just as you would on linux.

 > mysql -uUserName -p DatabaseName

You may need to cd to the mysql bin directory; on my Windows machine I do this:

> cd "c:\Program Files\MySQL\MySQL Server 5.0\bin"
Or unlimited:
set_time_limit(0);
Avatar of akohan

ASKER


Hello Blumi,

Yes, that's true, but I believe there should be a better solution, such as optimizing my SQL script or code. I think the default setting covers most cases, so let me try other things first, but I will keep it in mind.

Regards,
ak
We don't even know for sure that this MySQL query is what is using up all of the execution time, as we don't have the entire script in front of us to look at.  The query is in a loop, which is going through ANOTHER query's results, so it seems there are at least two queries here that could potentially be eating up execution time.  This one may simply be the final straw, depending on what else is in your script.

You can use set_time_limit() function, as recommended, which may allow your script to finish executing.  Once the script is no longer timing out, it should be a lot easier to determine exactly where the slowdown is by measuring your script execution time.  For example, add this to the start of your code:

<?php
function getTime()
{
    $a = explode(' ', microtime());
    return (double)$a[0] + $a[1];
}

$Start = getTime();
?>

Then put this at the end of your code:

<?php
$Finish = getTime();
echo "Time taken = ".number_format(($Finish - $Start),2)." seconds";
?>

That will tell you how long your entire script is taking to execute.  You can then move the $Start & $Finish lines to right before and after your MySQL query in the code.  Compare the times and see exactly how much of the script execution time is actually being eaten up by the query.  You might find that the query is fine, but the combined execution times caused by the loop and other code are adding up to more than 30 seconds.  You can continue to move the code and pinpoint exactly where the slowdown is.  If it's this particular query eating up most of the execution time, then at least we'll know for sure and won't just be assuming.
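
For instance, wrapped around just the inner query from your first snippet, it could look roughly like this (a sketch only, reusing the getTime() helper from above and the variable names from your own code):

<?php
$Start = getTime();
$sql_indvidual_result = mysql_query($str_getIndividualRec) or die(mysql_error());
$Finish = getTime();
echo "Inner query time = ".number_format(($Finish - $Start),4)." seconds<br />";
?>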
Avatar of akohan

ASKER



Hello Tomeeboy,

You are right. OK I will get back to you guys soon.


Thanks.
ak
Avatar of akohan

ASKER


Hello again,

I followed what you said and calculated the nested while loop's time, and it came up as:

Time taken = 0.45, 0.46 or 0.48 seconds (iterations 1 to 61)

I'm wondering why such a small query causes this, since it only returns 2 fields and it runs very fast when I run it in a SQL editor.

Thanks.



Did you mean to say 45, 46 and 48 seconds? or are those decimal points correct?  It looks like you're saying that the nested while loop is executing in about a half second.
Avatar of akohan

ASKER


Not 45 seconds; it's less than a second, 0.45 seconds.

>>It looks like you're saying that the nested while loop is executing in about a half second?

Yes, but only for the first 61 iterations; then it gets stuck.

Any advice?
61 iterations multiplied by about 0.5 seconds is about 30 seconds. How many iterations should there be? How many rows are found by the outer query?

Two possible ways to solve this:

1) Speed up the inner query. If you can get it down to 0.05 you can manage 600 iterations.

2) Join the two queries into one. If it is possible, it will be much faster.
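
As a rough illustration of option 2: since the inner query only changes client.st_id on each iteration, it could be run once for all clients with a GROUP BY, something along these lines (a sketch only, built from the inner query you posted and assuming tranc/trans refer to the same table; the real outer query is not shown, so how it joins back is an assumption):

SELECT client.st_id,
       collectionAgenciess.coll_company AS agency,
       SUM(trans.trans_amount) AS balance
FROM   client, collectionAgenciess, trans
WHERE  client.CollectionAgencyID = collectionAgenciess.coll_id
AND    trans.st_id = client.st_id
GROUP  BY client.st_id, collectionAgenciess.coll_company

Your PHP loop would then read agency and balance from this one result set instead of issuing a query per row.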
Oh, sorry, that's what I thought you were saying... So the loop is running 61 times, at a little less than a half second each time?  That does add up to nearly 30 seconds, and a lot of queries.  The query itself seems like it's executing in a decent amount of time.  The problem here is that all those half seconds are adding up.

You're either going to have to come up with a way of more efficiently pulling this information (not by optimizing that query, but by figuring out a way to pull all of the data without that long loop), or leave the time limit extended.

From what I can tell by your code, you have already pulled up the records of a person, or persons, with a query that is not shown in the above code snippet.  From there, you loop through these records and pull up more detailed records for each person.

How many people are you wanting to pull information and detailed records for with this?

In my opinion half a second is not a decent amount of time for a single query, unless there are veeeery many rows involved. Most "normal" queries execute in a fraction of that time.

akohan: how many rows are there in each of your tables? Can you find out if there are indexes on the relevant columns? Can you provide us with an execution plan (EXPLAIN SELECT)?

If you can also show us the outer query, we could maybe find out a way to combine the two queries. That would be much faster.
Avatar of akohan

ASKER



>> how many rows are there in each of your tables?
I need to work with 3 or 4 tables, but the main table has 62413 records.

>> Can you find out if there are indexes on the relevant columns?
There are several indexes (this is not my database and I have just started working on it).

I will send you the EXPLAIN SELECT output soon.

Thanks.
Avatar of akohan

ASKER


When I come to think of it (in general), even after optimizing a SQL query, there is still a high chance of getting "FATAL ERROR Maximum Time ..." errors at runtime, since it will happen anyway as the number of users increases. Right?!

It seems we should always be worried about this matter!
There are two ways to interpret your comment about an increasing number of users: an increasing number of web site users, or an increasing number of user records in the database. I don't know which you meant.

Either way, there should not be a high chance of getting a timeout. With optimized queries, a normal page load time should be less than a second. For a heavy page with few users (backend) a few seconds is acceptable, maybe even 10 seconds. Any more than that, and the users will start to think the page is broken, they will click reload multiple times, and THEN you will have a problem. Many simultaneous requests that each need 10 seconds of db processing, and you will soon bring the db server to its knees, even with a relatively small number of users.

With only 62K rows there is no reason for your query to be slow. But there must be indexes defined for the relevant columns. You have not told us how many rows the other tables have. It is highly relevant. It is not the total number of rows that is important, but the product of the number of rows in each involved table. For example, if you have 62K rows in one table, 10K in another and 1000 in another, the potential total number of rows for the db to examine in a join is up to 62K x 10K x 1K = 620,000,000,000 rows! That's a worst case with no indexes and no server side optimization, but it illustrates how relatively small tables add up (or rather, "multiply up") to large amounts of data to process. This is why indexes are so important.
Another thing you may want to consider here is to break up your results into multiple pages.  You could then limit the number of records being pulled by the query and reduce or completely eliminate any chance of it appearing slow or timing out.
Avatar of akohan

ASKER



Hello Tomeeboy,

That's a good idea. How can I do that then?! Any sample?


regards,
ak
Avatar of akohan

ASKER


Hello Cxr,

Thank you for explaining the situation in such detail. It totally makes sense and gives me a better view for future development steps.

table a = 62413
table b = 88
table c = 116577

Based on what you explained, that is a big number, and again, based on your advice, I'm writing a single new query rather than two queries with a nested loop, hoping that will fix the issue.

Long story short, I'm working on an old system written by a person I have never met, and without documentation. The scary part is that the main table has more than 200 fields (I still don't know why it is designed this way). The bottom line is that I have to make it work for the reports I'm generating.

As for the DESCRIBE table_name command, I do get a big list of fields. Do you need it? Is that what you meant by the EXPLAIN command?

Regards,
ak



Avatar of akohan

ASKER


>>There are two ways to interpret your comment about increasing number of users: increasing number of web site users, or increased number of user records in the database. I don't know which you meant.

I would appreciate it if you address both cases, since I'm still learning and it would be a huge help. My current project is for the users whose information is stored in a DB, so I need to start from there.

Regards,
ak
>> As far as describe table_name command I do get a big list of fields. Do you need it?

No, I don't think so.

>> is that what you meant by explain command?

No, I meant EXPLAIN SELECT ...: put the word "EXPLAIN" in front of the slow query, and run it in a mysql client. The output is a so-called "execution plan"; it will show which indexes are used and in what order mysql is reading the tables. It will show what potential for improvements you have.

>> I will appreciate it if you address both of the cases

I did. "Either way..."

Your reply to Tomeeboy made me wonder... are you trying to list 62000 records on one web page? That will not work. Use LIMIT to limit the number of rows: SELECT ... LIMIT 0,100 lists the first 100 records, SELECT ... LIMIT 100,100 lists the next 100 records, LIMIT 200,100 lists the next 100, and so on. The first number is the "offset", the second number is the number of rows to list. The first number is optional; LIMIT 100 also means to list the first 100 rows, same as LIMIT 0,100.
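
A minimal sketch of how a page number could drive the LIMIT clause (the page size of 100 and the $_GET['page'] parameter are just illustrative assumptions):

<?php
$per_page = 100;
$page = isset($_GET['page']) ? (int)$_GET['page'] : 1;
if ($page < 1) { $page = 1; }
$offset = ($page - 1) * $per_page;

// keep the rest of your query as it is, only append the LIMIT
$sql = "SELECT ... LIMIT $offset, $per_page";
?>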
Here's a tutorial that gives some examples of basic pagination (breaking your results up into pages):
http://www.phpfreaks.com/tutorial/basic-pagination/page3

I linked to the page of the tutorial that has the actual code you would need to look at.  You will definitely need to break results up into pages, rather than trying to spew out thousands of records on one page.

Between adding pagination and combining your two queries into one (if you're able to), I think the remainder of your problem will be fixed.
Avatar of akohan

ASKER


Hello Cxr,

>>Your reply to Tomeeboy made me wonder... are you trying to list 62000 records on one web page?

No, not at all. Even the worst query will not return that many lines; I guess I shouldn't have mixed it up in here. It's just that I need to learn how to put the results of a query into pages when there are more than 20 or 30 lines.

Regards,
ak
Avatar of akohan

ASKER



Hello Cxr,

That was a nice command to learn. Thank you indeed! I just ran it on the previous SQL script I had a problem with (the one nested in the loop):
id  select_type  table         type  possible_keys  key  key_len  ref  rows    Extra
1   SIMPLE       collection    ALL   PRIMARY                           2
1   SIMPLE       transactions  ALL                                     116577  Using where


If I have understood it right, I think having 116577 rows in a loop is just not right, especially with an increasing number of requests.

And here is the one which was OK (outside the loop):
id  select_type  table     type    possible_keys        key      key_len  ref            rows   Extra
1   SIMPLE       client    ALL                                                           62413  Using where
1   SIMPLE       statuses  eq_ref  PRIMARY,IX_statuses  PRIMARY  3        client.status  1      Using where

Although I'm taking a different approach to this, I would appreciate any advice you can give me on it.

Regards,
ak



type=ALL means that the server reads all rows for that table. When you have two ALL, like in the first example, it is a "worst case", and you can multiply the two numbers in the "rows" column to see the total number of rows the server needs to examine. In this case it is 2 x 116577 = 233154 rows. You have not shown us this query, so I don't know if this is necessary or not. I would guess there should have been an index on one (or more) of the columns used in the WHERE clause of that query.

The second example has type=ALL in the first row, and type=eq_ref in the second row. It means the server reads all 62413 rows from the client table, and uses the primary key to do a lookup on the statuses table. This is good: the "rows" column shows 1 for the statuses table, and 1 x 62413 = 62413, so this lookup on statuses does not "cost" anything extra. But, unless the goal of that query was to list all clients, there should probably have been an index on one or more of the columns used in the WHERE clause. You have not shown us the query, so I can only guess.
 
The query we are interested in is the one that joins three tables: client, collectionAgenciess and trans:

$str_getIndividualRec =  
            "SELECT collectionAgenciess.coll_company AS agency, " .
            "SUM(tranc.trans_amount) AS balance " .
            "FROM  client, collectionAgenciess, trans " .
            "WHERE client.CollectionAgencyID=collectionagenciess.coll_id " .
            "AND   client.st_id='" . $current_record['id'] . "' " .
            "AND   tranc.st_id=client.st_id ";  

Insert a value for $current_record['id'], and run this with EXPLAIN SELECT.

It would also be useful to see the outer query, the one that is executed only once and populates $current_record.  Maybe these two queries can be combined into one.
By the way... there is an error in that query: it reads FROM ... trans, but fetches the column tranc.trans_amount and has tranc.st_id in the WHERE clause. Is it tranc or trans? There is also a case mismatch, collectionagenciess/collectionAgenciess; this works on Windows but not on other systems (Linux/Unix).
Avatar of akohan

ASKER


Hello again Cxr,

First, thank you for your help and advice. As for trans/tranc, yes, that is a typo, but regarding your description of the query, that is totally right.
I have now turned them into one query, and it works fine, and of course I'm no longer getting the scary message (Fatal Error). But since I ran into this situation, I must run EXPLAIN on the new query as well.

One important question I have is: where can I learn the things you mentioned? I mean, how do I interpret EXPLAIN results? Do I need to buy a book where I would find those things explained by example?

Once again thank you!

ak



Avatar of akohan

ASKER


One other thing: EXPLAIN gives the following feedback on my query. I'm kind of worried since I do see two ALLs in there. Does having two ALLs mean I still have to keep working until I get down to one ALL?


id  select_type  table         type    possible_keys        key           key_len  ref                 rows   Extra
1   SIMPLE       collection    ALL                                                                     2
1   SIMPLE       client        ALL                                                                     62413  Using where
1   SIMPLE       statuses      eq_ref  PRIMARY,IX_statuses  PRIMARY       3        sis.client.status   1      Using where
1   SIMPLE       transactions  ref     idxClientNum         idxClientNum  12       sis.client.user_id  11
Avatar of akohan

ASKER


So 62413 x 2 = 124826

Should I consider this a potential issue?

How to interpret EXPLAIN results is somewhat explained here:

http://dev.mysql.com/doc/refman/5.1/en/using-explain.html

About type=ALL: Preferably you should have none of those. In some cases, like with the "collection" table, it does not matter, because there are so few rows in the table.

The output of EXPLAIN SELECT is hard to comment on when I have not seen the query. Please post the query you used to get the execution plan above, the last one with the collection, client, statuses and transactions tables. The impact of that query is 2 x 62413 x 1 x 11 = 1373086 rows; it should be possible to improve the speed considerably, easing the burden on the mysql server. How many rows does this query return?

>> So 62413 x 2 = 124826
>> should I consider this as a potential issue?

Unless the intention of that query was to list all records twice, yes. I need to see the query. There was probably a WHERE clause; if any of the columns in the WHERE clause had an index, the numbers would have been reduced, and the query would have been faster.
Avatar of akohan

ASKER


Please find the query script below.

Regards,
ak


///
 
 
 SELECT client.id, transactions.trans_amt,
        CONCAT(SUBSTRING(client.ssn, 1, 3), '-',
               SUBSTRING(client.ssn, 4, 2), '-',
               SUBSTRING(client.ssn, 6, 4)) AS _Social_security,
        CONCAT(client.lname, ',  ', client.fname) AS _Name,
        statuses.status_desc AS _Status,
        DATE_FORMAT(client.CollectionAgencyDate, '%m/%d/%y') AS _Collection_date,
        collection.coll_company
 FROM   client, statuses, transactions, collection
 WHERE  client.IsInCollections = 1
 AND    client.status = statuses.status_id
 AND    client.st_id = transactions.ar_id

And how many rows does it return? Are these the rows that you want to show on a single web page?

The collection table is not mentioned in the WHERE clause. It seems there are only two rows in the collection table. I would guess that this query returns all results twice: once for each collection.coll_company. This is probably an error. There should be some condition in the WHERE clause for the collection table.

The execution plan says that transactions is joined using client.user_id, but the query says it is client.st_id... are you sure this query created that execution plan?

Please fix the collection WHERE clause, and run the query to get the total number of rows returned. If you expect very many, you can just run SELECT count(*) FROM ..., keeping the exact same query after FROM. Then it will only return the result count. How much time did it use? Then run it with EXPLAIN SELECT ... and post it all again: the corrected query, the execution plan, the result count, and the execution time for the SELECT without the EXPLAIN.
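
As a concrete sketch of that count check, using the FROM/WHERE from the query you posted (the missing collection condition still has to be added, so the last line is only a placeholder):

SELECT COUNT(*)
FROM   client, statuses, transactions, collection
WHERE  client.IsInCollections = 1
AND    client.status = statuses.status_id
AND    client.st_id = transactions.ar_id
AND    ...  -- the corrected condition joining client to collection goes here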
Avatar of akohan

ASKER


>>And how many rows does it return?

Time: 00:00:01, Records: 44300, Fields: 7
Returned 44300 records.

>> Is this the rows that you want to show on a single web page?
Yes, but since that is too many, I must learn how to show them in pages.

>>There should be some condition in the WHERE clause for the collection table.
OK, correct me if I'm wrong. Consider the clients, statuses, transactions and collection tables.
The clients table has a field, collection_id, through which I can get the name of the agency that does the collection (I just don't like these companies). Anyway, in my SQL query, should I add the following WHERE clause?

 WHERE client.IsInCollections = 1
 AND   client.status=statuses.status_id
 AND   client.st_id=transactions.ar_id
 and    client.collectionAgnecyId=collection_id   <<<<<<<<<<< should get added?


is this right?




ASKER CERTIFIED SOLUTION
Avatar of Roger Baklund
Roger Baklund
This solution is only available to members.
Avatar of akohan

ASKER

Yes, it is my habit to use tb_name.fld_name.

As for the WHERE clause, amazingly, I added the criteria you advised me to include, and this time the query returned 22150 records!

Wow!
Avatar of akohan

ASKER



Thank you, it works just fine, but I guess I will stay picky as long as I see two or more ALLs in the rows returned by EXPLAIN.
Last question: is it OK to use try/catch (exception error handling) when I run a query in general, rather than getting the ugly Fatal error message?

Thanks indeed for your help.
Regards,
ak
Yes and no. There is nothing wrong with using try/catch to catch errors. After all, that is what it is for. :) But the special case of "maximum execution time exceeded" only occurs when it is already too late to execute more of your own code... There are some tricks that you can try, though. Read more here:

https://www.experts-exchange.com/questions/23948338/Maximum-execution-time-exceeded-Error-Handling.html
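
One such trick, sketched below, is to register a shutdown function; PHP still runs it after a fatal error, so you can at least log the timeout or show a friendlier message. This is only an illustration (the handler name is made up), and you cannot resume the script from there:

<?php
function my_shutdown_handler() {
    $err = error_get_last();
    if ($err !== null && $err['type'] === E_ERROR) {
        // a fatal error (such as the max execution time error) stopped the script
        echo "Sorry, this report took too long to generate.";
    }
}
register_shutdown_function('my_shutdown_handler');
?>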
Avatar of akohan

ASKER


Thank you!