Solved

Problem with PHP file not completely finishing on GoDaddy shared Deluxe Linux site

Posted on 2010-08-25
538 Views
Last Modified: 2013-12-13


Problem:  Have a web application running on GoDaddy - shared Linux Deluxe
          plan - using PHP 4 and MySQL.

          (I am totally uneducated in PHP, MySQL, and Apache - a software
          consultant did the project but is no longer available at all -
          a second software consultant cannot find the problem.)
          (GoDaddy insists they can do nothing to help with software that
           a site's owners wrote themselves; once they said "it's your
           .htaccess file".)

          Once a day (early AM) we run an import_both.php job (via cron)
          that rebuilds parts of the MySQL data files.  import_both.php
          runs both import_custdata.php and import_otherdata.php.
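          (For reference, we believe import_both.php is essentially just
          a wrapper along these lines - a simplified sketch; the actual
          file may differ:)

<?php
// Simplified guess at import_both.php's structure; the actual file may differ.
include 'import_custdata.php';
include 'import_otherdata.php';
?>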

          From the end of Dec 2009 through Jun 17, 2010, import_both.php
          ran successfully most times - very rarely would the job not finish.
          (The job echoes a line at every 100th record loaded, since
          initially we had problems getting the job to complete.)
          (We now echo at every 500th record - just in case it
          might help - it didn't seem to.)

          Originally, it reloaded approximately 80,000 records in import_custdata.php
          and close to 3,000 in import_otherdata.php.

          Starting early on June 18, 2010, the files load completely
          on at most 40% of the days.

          In desperation, we removed the least-used records,
          so we are now loading somewhere around 50,000 records.

          The input file size for the job is approximately 8MB.

          The site is used more now than originally, but it is still
          not heavily used.  We assume that most of the time no user
          would be on the site very early in the AM, but occasionally
          someone could be.

          The email from GoDaddy would only show the echoes of
          each 100 records up to the point where the job stopped.

          BUT - we could also run the job by keying its entire
          address into Internet Explorer, entering the user name/password
          and then watching it run.  (https: ...  (secure site))

Often it would get:
 
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, support@supportwebsite.com and inform them of the time the error occurred, and anything you might have done that may have caused the error.

More information about this error may be available in the server error log.



--------------------------------------------------------------------------------

Apache/1.3.33 Server at unitedreaders.com Port 443


BUT - if we hit the 'refresh' key it would often finish.

Or we would try it multiple times and often one of those times would finish.

We have researched this and tried the suggestions we found, but it is still a problem.


Went into the GoDaddy hosting GUI, Settings, then File Extensions, and changed
the file association for PHP4 files from "PHP 4.x FastCGI" to just "PHP 4.x".

Now we no longer get the 'Internal Server Error' message when we run
the job via Internet Explorer - it just seems to stall and not continue -
but usually hitting REFRESH seems to restart it, and then it often finishes.


          When a user is on the site, there are coding errors
          (some missing files) that will go into GoDaddy's log
          (if we set it on GoDaddy's site to create a log), but
          these errors seem mostly not to affect the user - they
          just do not see a certain 'picture' or icon.

Here is the php.ini file in the directory that holds the PHP
file that sometimes finishes, sometimes doesn't:
(Many of the directories have .htaccess and php.ini files that are
different from these - e.g. the ones in effect for a user visiting
the site.)

register_globals = off
allow_url_fopen = off

expose_php = Off
max_input_time = 1800
max_execution_time = 1800
mysql.connect_timeout = 240
memory_limit = 128M
post_max_size = 64000000
upload_max_filesize = 64000000
variables_order = "EGPCS"
extension_dir = ./
upload_tmp_dir = /tmp
precision = 12
SMTP = relay-hosting.secureserver.net
url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=,fieldset="

[Zend]
zend_extension=/usr/local/zo/ZendExtensionManager.so
zend_extension=/usr/local/zo/4_3/ZendOptimizer.so


Here is the .htaccess file in the directory that holds the PHP
file that sometimes finishes, sometimes doesn't:

AuthUserFile /home/dir1/dir2/dir3/dir4/dir5/htconfig/.htpassword.ghtpassword
AuthType Basic
AuthGroupFile /dev/null
AuthName "Secure Area"
Require user admin1 admin2 admin3

ANY help would be MOST appreciated.  Unfortunately, the true testing
of any changes can only happen once a day - and about 40% of the time
the job does run successfully.
Question by:Mid-Atlantic-Data
21 Comments
 
LVL 82

Expert Comment

by:Dave Baldwin
On the "Deluxe Linux" plan, you can have 25 MySQL databases and 150GB of disk space.  I suggest you clone the database and PHP files with a slight name change and use them for testing.  That way you should be able to test whenever you want.

The most likely thing is that since it is shared hosting, other people's sites take up part of the CPU time and your job times out.  If that turns out to be the case, then scheduling the cron job to run something like 4 times, five minutes apart, in the middle of the night might fix the problem.  That might require a little re-write though.  In doing a 'refresh', you are probably still in the 'session' for that file.
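For example, crontab entries along these lines would do it - four runs, five minutes apart (the PHP binary path, script path, and times here are only placeholders):

# Hypothetical crontab - run the import four times, five minutes apart.
0  4 * * * /usr/local/bin/php /home/dir1/html/import/import_both.php
5  4 * * * /usr/local/bin/php /home/dir1/html/import/import_both.php
10 4 * * * /usr/local/bin/php /home/dir1/html/import/import_both.php
15 4 * * * /usr/local/bin/php /home/dir1/html/import/import_both.php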
 

Author Comment

by:Mid-Atlantic-Data
Good suggestion to create a test environment ... even though our skills are minimal.

Re the job timing out - in the php.ini file, the directives max_input_time
and max_execution_time seem more than sufficient - might it be worth it
to increase them?  Or are there other directives that also affect time?

Re running the job 4 times - import_custdata.php completely replaces all the custdata records - so if the 4th (or last) run fails, we are left with a partially incomplete website - not acceptable.  That's why - if the cron job doesn't complete -
we run it via Internet Explorer until it does complete.  But this is not really satisfactory to do 7 days a week.

Since switching to plain PHP 4 - not PHP 4 FastCGI - when we run the job via Internet Explorer, hitting the REFRESH key starts the echo of the 100th (or 500th) record load point all over again - at least on the screen display.
 
LVL 82

Expert Comment

by:Dave Baldwin
If the problem does turn out to be running out of time, then re-writing it to do smaller pieces every five minutes until it gets it done is a way of getting around that.

I am curious why you are replacing the entire database every day.  And you might consider this an opportunity to upgrade to PHP 5 while you're at it.  You're probably going to have to hire a consultant to make some changes anyway.
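A sketch of that chunked approach - every name here (the input file, the progress file, the batch size) is invented, and one record per input line is assumed:

<?php
// Sketch of a chunked, resumable import.  File names and the batch size
// are invented for illustration; one record per input line is assumed.
$batch_size = 10000;

// Read how far the previous run got (0 on the first run of the night).
$offset = 0;
if (file_exists('import_progress.txt')) {
    $offset = (int) trim(file_get_contents('import_progress.txt'));
}

$fp = fopen('custdata.txt', 'r');

// Skip the lines a previous run already imported.
for ($i = 0; $i < $offset && !feof($fp); $i++) {
    fgets($fp);
}

// Import up to $batch_size records, then stop; the next cron run
// (five minutes later) picks up where this one left off.
$done = 0;
while ($done < $batch_size && ($line = fgets($fp)) !== false) {
    // ... parse $line and INSERT/UPDATE the record here ...
    $done++;
}
fclose($fp);

// Record the new position for the next run.
$out = fopen('import_progress.txt', 'w');
fwrite($out, $offset + $done);
fclose($out);
?>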
 

Author Comment

by:Mid-Atlantic-Data
OK - you haven't specifically said to increase those 2 max values, but I will for tonight.

Our main software runs on an older in-house software program.

The easiest way to give customers some web access was just to replace all the fields they see every day - not deluxe,
but it seemed quick and dirty and satisfactory - that is, until June 18.
 
LVL 82

Expert Comment

by:Dave Baldwin
There are a number of ways to address this problem.  The first thing is that I doubt all those values are changing every day.  Do you know how much the data changes each day?  If it's only a few percent, then you could do updates instead of replacing the whole database.  Another alternative is to use two sets of tables.  At any given time, one is the working set and the other is the update set.  When the update is done, you switch them.

I have several web sites on GoDaddy.  I always make my MySQL tables externally accessible.  Then I can do updates remotely from my servers here in my office.

The first step is to gather information.  How much does the data change?  How long does it take to actually do the updates?
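To sketch the alternating-tables idea (table names are invented, and an open MySQL connection is assumed): load into a spare copy of the table, then swap the two in a single atomic statement:

<?php
// Sketch of the alternating-tables approach.  Table names are invented;
// a MySQL connection opened via mysql_connect() is assumed.

// 1. Empty the spare table, then load all the new data into it while
//    'custdata' stays live and untouched for site visitors.
mysql_query("TRUNCATE TABLE custdata_new");
// ... run the full import into custdata_new here ...

// 2. Swap the tables.  Both renames happen in one atomic statement,
//    so visitors never see a half-loaded table.
mysql_query("RENAME TABLE custdata TO custdata_old, custdata_new TO custdata");

// 3. Keep yesterday's data around as the spare for tomorrow's load.
mysql_query("RENAME TABLE custdata_old TO custdata_new");
?>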
 

Author Comment

by:Mid-Atlantic-Data
The data actually changes very little.  The one table would be relatively simple to send only the updates for.  The other data changes in many places in our software.  We would probably need to do a diff between the old file and the newest replacement and then send only the differences to GoDaddy.

Was very surprised that 80,000 records could be a problem.

Since your last suggestion, we changed the php.ini file: the 2 max fields which were 1800 are now 6000.  The mysql one which was 240 is now 960.

Also changed the memory_limit from 128M to 32M - just in case the import stops if suddenly enough memory is not available???

Tried it last night 30 min. after making the changes and it worked (of course, it works 40% of the time anyway).
Worked again at the cron time a little while ago.

Have never timed the import when run via Internet Explorer - but guess sometimes it's as fast as 10 seconds - other times it may take a minute or so - and then sometimes it just 'sits' and even after several minutes we get no more displays of every 100th record - so it seems to have stopped without finishing.
 
LVL 82

Expert Comment

by:Dave Baldwin
Don't think of it as 80,000 records; think of 200-plus users on shared hosting also doing their work in the middle of the night.  If you were the only one on the server, 80,000 records could take as long as 80 seconds.

Until you're pretty sure it's been fixed, you need to keep a log of what time the update is done, how long it took, and whether it finished.  In addition, you might want to do something to see when the update data is actually available.  Figure out what the last record is and make up a query to get that one.  GoDaddy's MySQL servers are on different machines than the web servers.
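For example (table, column, and key value all invented), a quick check for the last record might be:

<?php
// Hypothetical completeness check: query for the record that should be
// loaded last (table, column, and key value are invented) and log the result.
$result = mysql_query("SELECT COUNT(*) FROM custdata WHERE cust_id = 'Z99999'");
$row    = mysql_fetch_row($result);
echo date('Y-m-d H:i:s') . ' - last record ' .
     ($row[0] > 0 ? 'present' : 'MISSING') . "\n";
?>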
 

Author Comment

by:Mid-Atlantic-Data
Your initial thought seemed to be that it ran out of time.

Now:

        max_execution_time = 6000
        max_input_time = 6000
        mysql.connect_timeout = 960

.  What do you think re increasing those maxes?

.  Have no idea what mysql.connect_timeout really does -> what do you think of the 960 value?

.  Have been logging whether the job completed or not and what time the email from GoDaddy is time stamped - but we would need to change the import...php file to
somehow echo the actual start and end time.  Really don't know how to do that.
 
LVL 82

Accepted Solution

by:
Dave Baldwin earned 400 total points
The times seem fine, but they aren't the only issue.  Part of it on shared hosting will always be other users.  It is not impossible that sometimes many other users are also trying to get things done at the same time.

I strongly feel that the application should be changed to make sure it completes.  Part of that would be to minimize the changes by doing updates instead of complete reloads.  If only 500 out of 50,000 records need to change, that reduces the time requirement by 99%.  Using alternating tables would reduce the switch-over itself to the microseconds it takes to rename the tables.  In either of those situations, the time limits would no longer matter.
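To give a feel for the updates-only idea (the file, table, and column names here are invented, and custdata is assumed to have a primary key on its ID column): feed only the day's changed records through REPLACE, which inserts new rows and overwrites changed ones:

<?php
// Sketch of an updates-only load.  File, table, and column names are
// invented; custdata is assumed to have a primary key on cust_id, and
// a MySQL connection is assumed to be open already.
$fp = fopen('custdata_changes.csv', 'r');
while (($line = fgets($fp)) !== false) {
    $line = trim($line);
    if ($line == '') {
        continue;
    }
    $f = array_map('mysql_real_escape_string', explode(',', $line));
    // REPLACE inserts a new row, or overwrites the existing row with the
    // same primary key - so only changed records are touched.
    mysql_query("REPLACE INTO custdata (cust_id, name, balance)
                 VALUES ('$f[0]', '$f[1]', '$f[2]')");
}
fclose($fp);
?>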
 

Author Comment

by:Mid-Atlantic-Data
You obviously have a lot of successful experience in these matters.

We will consider doing updates only - but it will take a bit for us to switch
the software both internally and on the website.

re alternating tables - no idea how to do that ...

In the meantime, we have 2 successful imports completed - will see what the
next several days bring ...
If they are successful, we will accept the solution.
...  may even try the 80,000 records we really want rather than the 50,000 we are getting by with.
Thank you VERY much for all your suggestions!
LVL 82

Expert Comment

by:Dave Baldwin
Glad to help.  Most of my experience is in figuring out how things get screwed up and what to do next.  Most of us test our databases on our own servers... which don't have hundreds of other users.

If you're expecting growth on your website, you should still make a longer-term plan to make sure the updates get completed.  There are a variety of software methods for doing that.
 
LVL 6

Assisted Solution

by:birwin
birwin earned 100 total points
I have some real-world experience with large cron imports of data on GoDaddy.  I had a client that imported over 100,000 lines of .csv data daily, with each line containing 38 records.
The bottom line was that GoDaddy admitted their system could not handle it.  I moved the client onto my own dedicated server, and the problems disappeared.
The good news was that, even though the client had prepaid GoDaddy for three years of hosting, they refunded his fees once they realized their system could not handle the data.  That was very classy, and it has led me to recommend GoDaddy to several clients as a good budget host.  But they are a budget host, and your needs probably exceed their capacity, or that of any low-end host.
 

Author Comment

by:Mid-Atlantic-Data
job failed again - this time after 8000 records, but before 8500

 
LVL 82

Expert Comment

by:Dave Baldwin
Does your email tell you the start and stop times so you'll know how long it was active?
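If it doesn't, a couple of lines at the top and bottom of the script would capture it - something like this sketch (adjust it to fit the actual file):

<?php
// At the very top of import_both.php:
$start = time();
echo 'Import started:  ' . date('Y-m-d H:i:s', $start) . "\n";

// ... the existing import code runs here ...

// At the very bottom of import_both.php:
$end = time();
echo 'Import finished: ' . date('Y-m-d H:i:s', $end) . "\n";
echo 'Elapsed: ' . ($end - $start) . " seconds\n";
?>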
 

Author Comment

by:Mid-Atlantic-Data
It doesn't.

But the email is time stamped, and the cron job is always set to run at the same time every day.
Could not find any consistent correlation between them.

E.g. the incomplete import of 8000+ records is email time stamped 1 minute after the cron job's scheduled time.

So it seems the job could only have been running for under 2 minutes.
 
LVL 82

Expert Comment

by:Dave Baldwin
Save the file below as 'phpchk.php' and run it on your web site.  Let me know what it says for 'max_execution_time'.  On two of my Godaddy sites it is 30 seconds.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">

<html>
<head>
<title>PHP Check</title>
</head>
<body>
<p>PHP Check</p>
<?php phpinfo() ?>
<?php echo phpversion(); ?>
</body>
</html>

 

Author Comment

by:Mid-Atlantic-Data
Ran the file from my import dir:

    max_execution_time 1800

This is the max_execution_time from the 'root' directory's php.ini,
but it is now set to 9000 in the import directory's php.ini.
 
LVL 82

Expert Comment

by:Dave Baldwin
Since the last failed job apparently ran less than 120 seconds (two minutes), that's probably not going to make any difference.  I think it's clear that you need a better method to make sure the updates get completed.
 

Author Comment

by:Mid-Atlantic-Data
Re birwin's comment about a large import of 3.8 million records - we used to do
80,000 with no problem - and that's a lot less - but it's still not working consistently
now with 50,000 records.  But we appreciate the thought that we could be trying to do the impossible with GoDaddy.

Re only sending the records that changed rather than replacing the whole file -
upon reflection, we realized that once a month approximately 50% of the records
change on 1 day (the other days relatively few change).  But that day would always be a problem.  When we first began the website, we quickly switched to replacing the entire file because updating individual records seemed to be way too time consuming.  We could program around it by doing different jobs on different dates, but would much rather not.

The website is getting errors when a user accesses it because a few files called in the code are missing.  We are in the process of correcting that.

'"robots.txt" file does not exist' is not uncommon.  We are going to try to eliminate that error.

??  If the cron job doing the file update is in progress and a visitor to the site does something that causes an error to be written to the log file (like the ones mentioned above), is it possible that could cause the import to halt???
 

Author Closing Comment

by:Mid-Atlantic-Data
Thank you both for your help - it just really seems that GoDaddy cannot consistently handle this amount of data (which really doesn't seem that large to me).
So we will either load the data differently or choose another provider.
 
LVL 82

Expert Comment

by:Dave Baldwin
I'm afraid you're right.  Thanks for the points.
