We have an online database hosted on a dedicated Unix-based web server by the German company '1 and 1', and we are having an intermittent problem with the speed of writes when we update records in the database. Reads from the database are always fine - the problem affects writes only.
The database drives an ecommerce website and contains approx. 120,000 records. As part of the routine management of the database, a retired guy comes in for a couple of hours a day and modifies things such as catalogue numbers, stock quantities, prices etc.
I cannot stress this enough - this is a very low-key business, not Amazon - at his busiest he probably updates 50-100 records a day. Records are accessed through an HTML form: a search box pulls up a sheet with a record's details (approx. 15 fields per record), individual fields are edited on-screen, and an update button saves the record back via an SQL UPDATE statement constructed by the receiving Perl script (called als.cgi).
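For context, the write path is roughly as follows. This is only a sketch from memory - the table name, column names, DSN and credentials below are placeholders (I'm showing a MySQL-style DSN purely for illustration), not the real ones:

    #!/usr/bin/perl
    # Rough outline of what als.cgi does on an update.
    # Table/column names, DSN and credentials are placeholders.
    use strict;
    use warnings;
    use CGI;
    use DBI;

    my $q   = CGI->new;
    my $dbh = DBI->connect('dbi:mysql:database=shop', 'dbuser', 'dbpass',
                           { RaiseError => 1, AutoCommit => 1 });

    # One UPDATE per saved record, rewriting every field from the form
    my $sth = $dbh->prepare(
        'UPDATE products SET catalogue_no = ?, stock_qty = ?, price = ? WHERE id = ?'
    );
    $sth->execute(
        scalar $q->param('catalogue_no'),
        scalar $q->param('stock_qty'),
        scalar $q->param('price'),
        scalar $q->param('id'),
    );

    print $q->header('text/html'), "Record updated\n";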
The problem occurs after a variable amount of time - sometimes he can update 30 records, sometimes 10, sometimes 2 or 3. Normally, when he clicks the update button the write takes 1-2 seconds; when this issue arises an update can take 2-3 minutes! No surprise, the hosting company's support line was totally unhelpful - they sent me a 150-page PDF about using SQL and told me I was using 'slow queries'.
The one helpful thing they did tell me was to SSH in and use the top command to see what processes were running on the server at the time of the slow updates. I did, and was amazed to find that when the update issue arose, for 2-3 minutes there were up to 6 instances of the script (als.cgi) in the process list (all taking 0% processor time). After a few minutes of bobbing up and down the list they vanished one at a time, and the updates went back to normal speed until the next time.
The code for the admin routines used to make the updates lives on the back-end of the ecommerce website, in the same Perl file as the routines that drive the front-end, so I assumed the other instances of als.cgi were people browsing the products on the site (which is not usually very busy). But whenever this happens there are always the same 4-6 instances of the script in the list. It was my understanding that a Perl CGI script runs as a fresh process on each request - all resources are created on entry and destroyed on exit - so how can any instances of the script be hanging about in the process list?
To be sure, I re-coded the admin screen (which I wrote about 4 years ago) using more modern techniques: AJAX for the user interface and separate .cgi files for read access (get.cgi) and write access (put.cgi) to the database. I also coded the script to update only the database fields that had been changed on the form (rather than all fields in the record, as it does at present) - see the sketch below - which should have dramatically reduced the load on the server. To my amazement, after a while it did exactly the same thing - updates began to run slowly, with multiple copies of als.cgi in the process list. Bear in mind, als.cgi wasn't even being executed for updates at this point - put.cgi/get.cgi were being used instead, yet there was no sign of them in the process list! I presume this confirms my hypothesis that people are browsing the site (the routines for which are contained in als.cgi) at the same time as we are updating records on the back-end.
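The changed-fields-only logic in put.cgi is along these lines (again just a sketch - the column whitelist, table name and connection details are made-up placeholders):

    #!/usr/bin/perl
    # Sketch of the changed-fields-only update in put.cgi.
    # Whitelisted column names are placeholders; fields the form
    # did not send back are left untouched.
    use strict;
    use warnings;
    use CGI;
    use DBI;

    my $q   = CGI->new;
    my $dbh = DBI->connect('dbi:mysql:database=shop', 'dbuser', 'dbpass',
                           { RaiseError => 1, AutoCommit => 1 });

    my @allowed = qw(catalogue_no stock_qty price description);
    my (@set, @values);

    # Build the SET clause only from the fields the AJAX form submitted
    for my $col (@allowed) {
        next unless defined $q->param($col);
        push @set,    "$col = ?";
        push @values, scalar $q->param($col);
    }

    if (@set) {
        my $sql = 'UPDATE products SET ' . join(', ', @set) . ' WHERE id = ?';
        $dbh->do($sql, undef, @values, scalar $q->param('id'));
    }

    print $q->header('application/json'), qq({"status":"ok"}\n);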
The hosting company seem to be sticking to the line that the site is inefficient and that it is a quota issue. I think that is rubbish - we have our own server, and I would expect it to handle one guy slowly updating a few dozen record fields a day without breaking a sweat.