Attempt a SELECT statement more than once

MS-SQL: 12.00.5546
PHP: 5.6.25

Have a simple SELECT statement that needs to retrieve results:

$stmt = sqlsrv_query( $db, "SELECT * FROM Vendor WHERE VendorID = '$vendorID'" );

To ensure a result, a safety loop is set up in the PHP like this:

$attempt = 0;
$stmt = false;
while( !$stmt ){
   $stmt = sqlsrv_query( $db, "SELECT * FROM Vendor WHERE VendorID = '$vendorID'" );
   if( ++$attempt > 10 ) die( print_r( sqlsrv_errors(), true ) );
}

Is there a way to use something like TRY...CATCH to avoid a PHP-style loop?  If not, is there another way to force a SELECT to retry?
Joe Weinpert, Sr Software Engineer, asked:

gr8gonzo, Consultant, commented:
While PHP does have a try/catch mechanism, it's not a replacement for a loop, and sqlsrv_query doesn't throw exceptions - it just returns the boolean false if the query fails.

What you have is probably the most efficient solution if you want to try up to X times to get a result back, although just for code clarity, I might suggest a do...while() loop instead of a while().

$attempt = 0;
do {
   $stmt = sqlsrv_query( $db, "SELECT * FROM Vendor WHERE VendorID = '$vendorID'" );
   if( ++$attempt > 10 ) die( print_r( sqlsrv_errors(), true ) );
} while( !$stmt );

Something to consider is that if your query fails for some reason other than a connection / resources problem (e.g. the query itself is bad because $vendorID is malformed), then you're basically running a bad query up to 10 times. You might also want to check what the error message is before you loop again, and if it shows a more fundamental problem like a malformed query, just use break; to escape the loop early.
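For instance, a minimal sketch of that idea (the error numbers checked here are only examples of what a transient failure might look like; adjust them to whatever your own logs actually show):

$attempt = 0;
do {
   $stmt = sqlsrv_query( $db, "SELECT * FROM Vendor WHERE VendorID = '$vendorID'" );
   if( $stmt === false ){
      $errors = sqlsrv_errors();
      $code = $errors[0]['code'];           // native error number reported by the driver
      if( $code != 1205 && $code != -2 ){   // e.g. not a deadlock (1205) or timeout: stop retrying
         die( print_r( $errors, true ) );
      }
   }
   if( ++$attempt > 10 ) die( print_r( sqlsrv_errors(), true ) );
} while( !$stmt );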

On a side note, it's pretty rare for a database to fail a query like that 10 times unless something is seriously wrong. If this code actually works and it's failing 7 times and working on the 8th, then you should really consider looking for a root cause on the database side (e.g. the DB might be stretched far beyond its hardware capacity). If a query is valid, then a properly-managed database should return results the first time you ask for them.
Julian Hansen commented:
First of all - why do you want the select to retry? What are you expecting that requires a loop to keep trying a query that failed the first time?

Either the query fails because it is malformed - in which case a loop does nothing.
Or it fails because of a connection problem - in which case, if the server is so regularly unavailable that you need a loop to get around it, then maybe you need to look at why the server is not available.

A try / catch just provides a construct for you to handle a failure: the try block executes until an error is thrown, at which point execution jumps to the catch block. If you want to keep trying the server, you would still need to implement a loop - but I reiterate my question as to why you would want to do that.
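For illustration only: sqlsrv_query() doesn't throw, so a try / catch like the one below would only apply if you used a driver that raises exceptions (e.g. PDO with PDO::ERRMODE_EXCEPTION, assuming $pdo is such a connection) - and even then the retry still needs a loop around it:

$attempt = 0;
do {
   try {
      // PDO sketch only -- not the sqlsrv_* API used elsewhere in this question
      $stmt = $pdo->query( "SELECT * FROM Vendor WHERE VendorID = " . $pdo->quote( $vendorID ) );
      break;  // success, leave the loop
   } catch ( PDOException $e ) {
      if( ++$attempt > 10 ) die( $e->getMessage() );
   }
} while ( true );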
Joe Weinpert, Sr Software Engineer (Author), commented:
The SELECT is not always returning results from records that I know exist. Within the HTML, a jQuery AJAX request to a PHP script is submitted to update a distinct record in the table. The distinction is the VendorID.


// thisVendor and finalVendor are defined elsewhere in the page
for( var i = 0; i < vendorArray.length; i++ ){
   var array = {};
   array.Vendor = vendorArray[i];
   var ajaxData = { "processVendor": JSON.stringify( array ) };

   $.ajax({
      url: "updateVendor.php",
      async: true,
      data: ajaxData,
      type: "POST",
      cache: false,
      complete: function(){
         if( thisVendor == finalVendor ){
            finishVendors();
         }
      },
      success: function( answer, status, xhr ){
         thisVendor = answer;
      },
      error: function( xhr, status, error ){
         console.log( "status:", status, "\nerror:", error, "\nxhr:" ); console.dir( xhr );
      }
   });
}

The table has 100 vendors.  A single record based on the VendorID field is being updated each time through the loop.  The PHP gets the record data via the SELECT statement posted in the initial question.

If I set ajax.async to false, there is never a problem.  But there is a problem if I don't.  I assume that a record is locked by MS-SQL when its fields are being updated via one AJAX request.  Why then can't another request read the table via SELECT to get data from a different vendor record than the one being updated?  That is what appears to be happening.

I placed code to write out the number of loops it took to SELECT a record from the table.  Most of the time it was the second attempt that was successful.  And the results are never the same ... unless I make the requests synchronous rather than asynchronous.

I am not sure why locking a record for updating may also block other records from being read.

If there is something I can do via MS-SQL statements, that would be great.

gr8gonzo, Consultant, commented:
Have you looked at the logs and the error message that comes back when you produce the problem?

Locking behavior can become complicated. You have different types of locking, transaction isolation levels, etc... that all can play a part in this behavior. So you should start with your error messages and logs to figure out what is actually happening first.
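For example, one simple way to capture that information at the point where the query comes back false (SQLSRV_ERR_ALL includes informational messages as well as errors):

if( $stmt === false ){
   // Write everything the driver reported to the PHP error log for later review
   error_log( print_r( sqlsrv_errors( SQLSRV_ERR_ALL ), true ) );
}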

Julian Hansen commented:
Why are you making an AJAX call for every item in the array?

Why don't you pass the array in ONE AJAX Call and then iterate over that array in your PHP script?
Joe Weinpert, Sr Software Engineer (Author), commented:
The idea was to speed things up by having the MS-SQL server handle 100 separate requests almost at once rather than a single request for 100 records.  It would be something like having 100 separate users hit a request button a few ms apart for their distinct record in the table.
gr8gonzo, Consultant, commented:
You're probably going to have SLOWER performance by trying to push 100 parallel requests to the server simply because of the overhead. Basically, each separate AJAX request has to make its way to the server, then it has to be processed by the web server (which could result in the requests sitting in a queue), then the web server has to load PHP 100 times (once per request, unless you're using FastCGI with sufficient connections), then PHP has to run the script and connect to the database (100 connections or however many concurrent clients are allowed by the web server / PHP services and however many concurrent connections are allowed by MSSQL), and THEN you are running the 100 queries, which results in another 100 network trips between PHP and MSSQL.

Compare that to a single AJAX call that could potentially use one single SQL transaction to update all the necessary records in one big query, and you'll almost certainly have MUCH better performance without locking issues.
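A rough sketch of what that single call could look like on the PHP side (this assumes $db is the open sqlsrv connection, $vendors is the decoded array sent by the one AJAX request, and SomeField is a placeholder column name): every update runs over one connection, inside one transaction, so the whole batch commits or rolls back as a unit.

if( sqlsrv_begin_transaction( $db ) === false ){
   die( print_r( sqlsrv_errors(), true ) );
}
foreach( $vendors as $v ){
   $stmt = sqlsrv_query( $db,
      "UPDATE Vendor SET SomeField = ? WHERE VendorID = ?",
      array( $v['SomeField'], $v['VendorID'] ) );
   if( $stmt === false ){
      sqlsrv_rollback( $db );
      die( print_r( sqlsrv_errors(), true ) );
   }
}
sqlsrv_commit( $db );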
Joe Weinpert, Sr Software Engineer (Author), commented:
This all started because of another application that we created to build a BOM (Bill-of-Materials) table containing over 180,000 distinct records.  Looping with AJAX works 100% of the time.  The difference between that and this is that in the BOM application we are creating a table via INSERT.  There is no need to SELECT records to get informational data.  Creating the table used to take 11 minutes.  Now it only takes 40 seconds.  There has never been an error yet.
Joe Weinpert, Sr Software Engineer (Author), commented:
The suggestion to pass the complete array would be great if there was a way to make requests in a PHP loop to the SQL Server without needing to wait each time for a request to complete before looping to the next one.
gr8gonzo, Consultant, commented:
It sounds like you might be trying to insert one row at a time if it's taking that long. If you're trying to insert 180,000 rows into a table, you should be using bulk inserts. Running 180,000 individual queries will DEFINITELY take forever (again, because of all the overhead of sending 180,000 queries from the client to the server, and having each one commit). Splitting it up into AJAX can SORT OF make it faster, but it's the wrong way to do it, because you're taking up LOTS of other resources unnecessarily.

You should be using this syntax when inserting lots of rows:

INSERT INTO yourtable (columnA, columnB, columnC) VALUES
  (record1valueA, record1valueB, record1valueC),
  (record2valueA, record2valueB, record2valueC),
  (record3valueA, record3valueB, record3valueC);


Example:
INSERT INTO fruits (name, color, shape, quantity) VALUES
  ('apple', 'red', 'round', 100),
  ('orange', 'orange', 'round', 56),
  ('genetically engineered watermelon', 'green', 'square', 2);

Then send that single query over to the database (you might have to break it up into a couple of queries based on how long the query is and what your configured allowed packet / query size limit is on MSSQL).

What that does is it inserts all 3 records (or however many you define) all within one transaction, which HUGELY optimizes the data throughput. Give it a try and I'll bet you'll see the table created in far less than 40 seconds. I would not be surprised if you can get all 180,000 inserted in less than 5 seconds with one script (if all records fit within one query). I just ran a test on a smaller table (couple of columns with an auto-incrementing ID and one secondary index) inserting 180,000 records in one query and it ran in JUST over 1 second.

And if you want to optimize it even further, leave off all secondary indexes when you first create the table. Add all your data, THEN add your secondary indexes afterwards, and they'll insert even faster.
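To make the earlier point about splitting the statement concrete, here is a sketch of batching the insert in PHP (it assumes $rows already holds escaped value triplets; 1,000 is SQL Server's row limit for a single multi-row VALUES list):

foreach( array_chunk( $rows, 1000 ) as $batch ){
   $values = array();
   foreach( $batch as $r ){
      $values[] = "('{$r[0]}', '{$r[1]}', '{$r[2]}')";
   }
   $sql  = "INSERT INTO yourtable (columnA, columnB, columnC) VALUES " . implode( ",\n", $values );
   $stmt = sqlsrv_query( $db, $sql );
   if( $stmt === false ) die( print_r( sqlsrv_errors(), true ) );
}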

If you want to better understand the difference, imagine you need to go buy 100 things from the supermarket.

Scenario A: You get in your car, drive to the supermarket, buy 1 item, drive home, get out, and drop off the item. Then you get back in your car and go back to the supermarket for the next item, etc, etc... until you have all 100 items. This is what's happening when you run 180,000 queries to insert 180,000 records.

Scenario B: You realize Scenario A takes too long, so you call up 99 of your friends, and all 100 of you try to go to the supermarket at the same time and each buy 1 item. The supermarket parking lot only holds 50 cars at a time, so 50 items are purchased, then as the parking lot empties, the rest of the friends get in and they buy their item. You might speed things up a bit, but you're taking up the parking lot that could be used by other people, and you're taking up the time of your friends. This is what's happening when you run 100 AJAX calls (and chances are that your browser will only send a handful of those AJAX requests at a time because of its own per-domain connection limits). Again, it's somewhat faster but VERY inefficient / wasteful. Plus, it takes longer for the supermarket cashier to check out 100 people with 1 item each versus 1 person with 100 items.

Scenario C: You go to the supermarket alone and buy all 100 items at once. This is the bulk insert query that I've mentioned above. The code is making one big trip, and the database (cashier) can quickly handle all 100 items in one session, so it goes REALLY quick.
Julian Hansen commented:
I think Gr8gonzo has covered this - but to add to it - there is no merit (that I can see) in running multiple AJAX calls - the overhead of each call, coupled with the complexity of synchronising all those calls (async: false is not a good idea), results in a greater performance hit than doing it all at once.

BTW, if you wanted to do multiple calls for something else, what you do is store the promise returned by each async call and then use something like Promise.all to wait for all the requests to complete before running the next bit of code.
However, I can pretty much guarantee this will be slower than a single submit.

You will still need a loop on the PHP side but this will be to create the query - not to execute it.

Something like this
<?php
// ...
$parameters = isset($_POST['vendors']) ? $_POST['vendors'] : false;
if ($parameters) {
   $query = "INSERT INTO Vendor (field1, field2, field3) VALUES ";
   foreach ($parameters as $p) {
      // NOTE: escape or parameterize these values before using this in production
      $query .= "('{$p['field1']}', '{$p['field2']}', '{$p['field3']}'),";
   }
   $query = rtrim($query, ',');
   // Execute query here
}

Joe Weinpert, Sr Software Engineer (Author), commented:
Multiple-value INSERTs (up to 1,000 rows at a time) are used throughout many of the apps that we have created here.  I understand them and appreciate the time and effort by all to remind me of them.

But a Bill-of-Materials table is built recursively, not by just adding a record at a time.   A top-level item will recursively work through itself to add all of the records for all of its sub-components and all of their sub-components and down and down and down.   Using a recursive CTE is fabulous.

But it was a bad example on my part to mention the BOM because it is not the original problem.

The only reason I am using a looping AJAX call is to force the SQL Server to handle many requests at nearly the same time to work on a table.  Each request is for a different, unique record in the table.  Passing an array and having the PHP loop through it just means that it has to wait for the Server in each loop to work on the next record.  So it is modifying fields one record at a time rather than all the records nearly at once.

It would be great to have the PHP-side fire off many requests to the Server without needing to wait for an answer each time.  The client-side AJAX.complete: function does just that.

Looping like this causes a problem when the PHP reads one record's data while another record's data is being changed.  Sometimes the result is "false" rather than the data, even though the record exists and is in the table.  Very troubling, indeed.

But looping like this causes no result problems if it is building a table without the need to read any of its records.  This is what the BOM application does.  It loops through a 5,000 top-level item array sending each to the PHP to recursively build the table.  And the SQL Server has yet to hiccup while doing it.
Julian Hansen commented:
"So it is modifying fields one record at a time rather than all the records nearly at once."
This is a big misconception - if you are sending multiple requests from your AJAX code - this is no different from sending those requests from your PHP code. The same work is being done.

Let us look at the scenario where you are looping in AJAX

for(var i = 0; i < somevalue; i++) {
    $.ajax({
      ...
    }).then(function() {
        // handle complete here BUT !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! see below
    });
}

Anything you do to the interface is going to fire on the FIRST completion AND on the SECOND AND on the THIRD etc.
If the update of the interface should happen AFTER all queries are run, then you are going to have a problem with this pattern - as you have already discovered - where you resorted to async: false, which is a very, very bad solution to this problem.

This tells me you have to wait for all AJAX calls to complete before you can proceed - which negates any performance advantage you had from splitting them up unless you are hoping that spawning multiple simultaneous requests on the server will speed up the process - as opposed to one bulk update. The bulk update will always be faster.

What I am getting from your subsequent posts is that you have a situation where bulk inserts are not possible: you do one insert and, based on that, you do another, etc.

Let's for the moment assume that firing the requests from AJAX in a loop is the way to go - what you are going to have to do is save the AJAX returns and then use $.when.apply to wait for all those requests to complete.

Here is some sample code
<script>
var list = [1,2,3,4,5];
$(function() {
	// Kick off all the requests, then wait for every one of them to finish
	var ajaxCalls = checkAll();
	$.when.apply($, ajaxCalls).done(function() {
		console.log('all done');
	});
});
function checkAll()
{
	var ajaxList = [];

	$.each(list, function(i, e) {
		console.log('Starting AJAX: ' + e);
		// Start the request and keep its promise so the caller can wait on it
		var ajax = $.ajax({
			url: 't2737.php',
			data: {id: e},
			type: 'post'
		});
		ajaxList.push(ajax);
		ajax.then(function(resp) {
			console.log(resp);
		});
	});

	return ajaxList;
}
</script>

You can see it working here

I cannot comment on the alternative PHP side as I am not clear on how your nested data works - that seems to be a late entry to the question - maybe you can give us an illustration.
gr8gonzo, Consultant, commented:
Alright, so if all your logic is in the database via a CTE and you're doing a recursive CTE to populate your table and want it to run async, you might want to use the Service Broker to set it up:
http://rusanu.com/2009/08/05/asynchronous-procedure-execution/

Just pay attention to the bit about max_queue_readers at the end. While I don't usually just link to a remote article, it's mostly code with a few explanations that can't really be summarized here - you'd have to just read the thing (or do a Google search for async MSSQL Service Broker to find a similar article), and I don't know whether it actually addresses this specific problem (you mentioned the BOM is not the original problem, but I don't know if the recursive CTE is still in play on -this- problem).

Maybe even use a UDF to set it all up for one unique record, then just call the UDF a bunch of times from a single script to essentially "queue up" the operations.
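As a rough illustration of "queueing up" the operations from one script over a single connection (dbo.ProcessVendor is a made-up name standing in for whatever routine wraps the per-record logic):

foreach( $vendorIDs as $vendorID ){
   // ODBC call syntax; each call reuses the one existing connection in $db
   $stmt = sqlsrv_query( $db, "{call dbo.ProcessVendor(?)}", array( $vendorID ) );
   if( $stmt === false ) die( print_r( sqlsrv_errors(), true ) );
   sqlsrv_free_stmt( $stmt );
}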

If the query is failing, that's not really PHP's fault here. Chances are that the database is returning some error message, and like I said further above, you need to immediately look at the resulting error message when the query comes back as false (and possibly check the server logs, too). That error message should contain the response from the database explaining why it's not returning your expected query data.

Finally, while I'm 50/50 on whether to include business logic in the database vs. code (I don't know enough about your system and environment to suggest a different way - personally I would have let the code recursively prepare all of the data and then insert the final result, but that might not work if you have other independent applications that execute that same CTE), I will say that if you're going to stick with putting that logic into the database, then you need to rely on the database to handle the parameters in one go rather than trying to push multiple connections to the database (e.g. pass all your parameters in and have an SP loop through them).

When you have one client intentionally opening up multiple connections to the database, 99% of the time, you're doing it wrong. One client should ideally only ever have one connection. MSSQL has an absolute ton of features and capabilities that you are paying quite a bit to have available to you, including a myriad of ways to run asynchronous queries. If you ignore those and use a "duct tape" method to perform multiple connections, you're going to run into other problems like this. They'll either show up now or in the future.

My own personal suggestion is to place as much of the load onto cheap web servers versus expensive database servers, whenever possible. So usually I build out any hefty business logic into the client-side (in this case, PHP) code, and let it recursively do its thing to prepare all the data in a series of queries, and then execute the INSERT in the bulk fashion mentioned above.
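A loose sketch of that approach (buildRows(), the field names, and the item structure here are all invented for illustration; the point is that the recursion happens entirely in PHP, and only the final bulk INSERT touches the database):

// Walk an item and its sub-components, collecting one flat row per record
function buildRows( $item, array &$rows ) {
   $rows[] = array( $item['id'], $item['parentId'], $item['qty'] );
   foreach ( $item['children'] as $child ) {
      buildRows( $child, $rows );
   }
}

$rows = array();
buildRows( $topLevelItem, $rows );
// ...then build one multi-row INSERT from $rows (as shown earlier) and run it once.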

MSSQL licenses are not cheap, and you usually want it running on some hefty hardware. If you place a lot of the burden of data creation and preparation onto the DB server so that it eventually starts having performance problems when READING/USING the data, then you're going to quickly run into the need for another database server.

Meanwhile, web servers are a dime a dozen. PHP is free and so is Apache. And if you're running a *NIX OS, then you are probably only paying for a fairly low amount of hardware (compared to a DB server's hardware). However, it's ultimately your call...
Joe Weinpert, Sr Software Engineer (Author), commented:
OK ... plenty of ideas and directions for me to follow.  I'm closing this question down, but will probably end up posting more as I delve deeper.  

Thanks!  Even though a particular answer has to be marked as best ... all of the answers were the best.