ckoning (United States of America) asked:
Trouble With High Volume Select Statements

I have written a custom PHP session handler that uses the default session IDs but stores the session data in a MySQL database. The site running the handler serves roughly 30k visitors a day, each of whom generates roughly five session writes and lookups in very rapid succession due to AJAX requests. The same issue has arisen using MS SQL Server as the storage platform.

The problem occurs on session writes. The write function first tries to locate an existing entry in the session table by session ID (used as the primary key) and then either inserts or updates the record accordingly. Occasionally (on 1%-5% of session writes) the initial lookup comes back empty even though the session is not new and a row for that session ID exists in the database. The logic then tries to insert a new record, only to run afoul of the primary key uniqueness constraint. This causes unacceptable failures: each one is handled relatively gracefully, but the user experience is significantly degraded.

After many attempts, I have been unable to resolve the issue. The table uses the InnoDB engine and the utf8 character set with the utf8_general_ci collation. The web app runs on Zend Framework and uses the framework's DB functionality, but the problem still occurs when regular mysqli connections are employed instead. Any insight would be greatly appreciated. Simplified code for the session write is provided below.
function sessionWrite($id, $sessionData)
{
    global $db; // mysqli connection, established elsewhere
    // Look for an existing row for this session ID
    $stmt = $db->prepare('SELECT 1 FROM sessions WHERE sessionId = ?');
    $stmt->bind_param('s', $id);
    $stmt->execute();
    $stmt->store_result();
    // Occasionally comes back with an empty result set even though
    // the session record exists
    if ($stmt->num_rows > 0) {
        updateRecord($id, $sessionData);
    } else {
        // Then tries to insert a duplicate row and violates the
        // primary key uniqueness constraint
        insertNewRecord($id, $sessionData);
    }
}

ASKER CERTIFIED SOLUTION
Bernard Savonet (France)
[Accepted solution available to Experts Exchange members only.]

SOLUTION
[Additional solution available to Experts Exchange members only.]
While trying to get an AJAX setup to work, I was once looking at about ten different JS functions, each making a separate AJAX call. I had to change it so that the code determined what data each function needed, combined the separate requests, and made a single AJAX transfer for all ten data segments. In some AJAX setups I also had to add a JavaScript timing tracker to make sure the user could not flood the responder. My guess is that your problem comes from too many rapid requests hitting the AJAX responder. You might consider reworking your JavaScript to make fewer requests and to keep at least two seconds between them. You are probably stuck with whatever DB performance your system has, but there should be a way to reduce or limit the DB requests that arrive in very rapid succession due to AJAX.
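Building on that advice, here is a minimal server-side sketch of one way to cut the write traffic: hash the data read at session start and skip the database write when nothing has changed, so read-only AJAX requests never touch the sessions table on shutdown. The fetchFromDb() and persistToDb() helpers are hypothetical stand-ins for the asker's own lookup and write code.

class DbSessionHandler
{
    private $readHash = null;

    public function read($id)
    {
        $data = $this->fetchFromDb($id); // hypothetical DB lookup helper
        $this->readHash = md5($data);    // remember what PHP was handed
        return $data;
    }

    public function write($id, $sessionData)
    {
        // Skip the DB round trip when this request never changed the session
        if (md5($sessionData) === $this->readHash) {
            return true;
        }
        return $this->persistToDb($id, $sessionData); // hypothetical DB write helper
    }
}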
ckoning (Asker):
The INSERT ... ON DUPLICATE KEY UPDATE syntax was exactly what I needed. Well, it doesn't so much address the underlying race condition as bypass it entirely. Atomicity was a concern, but it seems the statement is atomic (see http://lists.mysql.org/internals/33846 for reference). There were many helpful suggestions for improving performance, but unfortunately this approach is a band-aid until we can address some fundamental design flaws that require caching data in the session for AJAX handling. Thanks for the input.
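For reference, a minimal sketch of that atomic upsert using mysqli, assuming a sessions table whose primary key is sessionId and whose payload lives in a data column (both column names are assumptions):

function sessionWrite(mysqli $db, $id, $sessionData)
{
    // One atomic statement: insert the row when the session ID is new,
    // otherwise update the existing row, so the lookup/insert race disappears
    $sql = 'INSERT INTO sessions (sessionId, data)
            VALUES (?, ?)
            ON DUPLICATE KEY UPDATE data = VALUES(data)';
    $stmt = $db->prepare($sql);
    $stmt->bind_param('ss', $id, $sessionData);
    return $stmt->execute();
}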