MySQL 5.5 slow updates

Hello,

The server runs MySQL 5.5 on a 512 GB SSD, with an Intel Core i7-3820 @ 3.60 GHz, 64 GB of RAM, and 1 Gbps connectivity.

We have to UPDATE about 5,000 rows per request sent from another server, and this currently takes about 142-150 seconds (roughly 28-30 ms per row). The table this gets done in is InnoDB and is properly indexed. We need to get this time as low as possible.

This is the query that gets executed:

$sql = "UPDATE bc_status SET sa_bill_status='" . $bill_status . "', sa_individual_id='" . $individual_id . "', semaphore='0' WHERE recipient='" . $number . "' 
AND sa_XBDLID='" . $id_delivery . "'";
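
For context, this statement runs once per row inside a PHP loop. As a point of comparison, here is the same call as a parameterized sketch, assuming a PDO connection in $pdo (our production code uses a different DB wrapper); preparing once outside the loop avoids re-parsing the SQL 5,000 times and sidesteps the quoting issues of string concatenation:

// Sketch only -- assumes PDO, which is not what our wrapper uses.
$stmt = $pdo->prepare(
    "UPDATE bc_status
        SET sa_bill_status = ?, sa_individual_id = ?, semaphore = '0'
      WHERE recipient = ? AND sa_XBDLID = ?"
);
// One execute per row; values are bound rather than concatenated.
$stmt->execute(array($bill_status, $individual_id, $number, $id_delivery));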

This is our my.cnf:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
bind-address = x.x.x.x.x
#skip-networking
max_connections = 400
key_buffer = 384M
myisam_sort_buffer_size = 64M
join_buffer_size = 60M
read_buffer_size = 60M
sort_buffer_size = 120M
table_cache = 1800
thread_cache_size = 384
wait_timeout = 7200
connect_timeout = 10
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 64M
max_connect_errors = 1000
read_rnd_buffer_size = 524288
bulk_insert_buffer_size = 32M
query_cache_limit = 64M
query_cache_size = 512M
query_cache_type = 0
query_prealloc_size = 65536
query_alloc_block_size = 131072
default-storage-engine = MyISAM
## careful with this
#innodb_thread_concurrency=32
#innodb_flush_method            = O_DIRECT
#innodb_log_files_in_group	= 2
#innodb_log_file_size           = 512M
#innodb_flush_log_at_trx_commit = 1
innodb_file_per_table          = 1
innodb_flush_log_at_trx_commit = 2
innodb_buffer_pool_size        = 45000M
#innodb_print_all_deadlocks = 1
#innodb_use_native_aio = 1
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Someone commented out some of the InnoDB settings because we really have no idea how to optimize them; some of these values came from Percona's my.cnf generator.
Asked by Mario Bernheim (CTO)

COBOLdinosaur commented:
Without seeing the structure of the data, the format of the inputs, and the script that is running the query, it is unlikely that we will be able to offer much help. Why are you assuming that the problem is in the configuration settings?

Cd&

Mario Bernheim (CTO, Author) commented:
Let me paste the structure, the format of the inputs, and the script that is running the query... I am assuming it is because we're just reading the values, exploding them, and updating one by one...

Mario Bernheim (CTO, Author) commented:
<?php
// Processes one confirmation batch: marks the batch and its delivery as
// handled, then updates bc_status row by row from the pipe-delimited payload.
$time_start = microtime(true);
set_time_limit(0);
include ("lib.php");

$batch_id = $argv[1];

// Load the raw batch payload for this batch id.
$sql = "SELECT * FROM bc_confirmations_log WHERE id='" . $batch_id . "'";
$rs = $conn->Execute($sql);

$id_delivery = $rs->fields['id_deliver'];
$all_results = $rs->fields['result_data'];

// Flag the batch as processed.
$sql = "UPDATE bc_confirmations_log SET processed='yes' WHERE id='" . $batch_id . "' LIMIT 1";
$rs = $conn->Execute($sql);

// Flag the delivery as having received a response.
$rsql = "UPDATE bc_status_log SET got_response='yes' WHERE id_deliver='" . $id_delivery . "' LIMIT 1";
try {
	$ar = $conn->Execute($rsql);
	if ($conn->Affected_Rows() < 1) {
		insert_ledger("no rows were changed on id_delivery for $id_delivery -> $rsql");
	}
} catch (Exception $e) {
	insert_ledger("could not change status got_response on $id_delivery with exception $e");
}

// Payload format: individual_id,recipient,status records separated by "|".
$all_results = explode("|", $all_results);

$counter = 0;
foreach ($all_results as $values) {
	$counter++;
	$insertstrings = explode(",", $values);

	$individual_id = $insertstrings[0];
	$number        = $insertstrings[1];
	$bill_status   = $insertstrings[2];

	// One UPDATE per record -- this loop is where nearly all the time goes.
	$sql = "UPDATE bc_status SET sa_bill_status='" . $bill_status . "', sa_individual_id='" . $individual_id . "', semaphore='0' WHERE recipient='" . $number . "' AND sa_XBDLID='" . $id_delivery . "'";

	try {
		$conn->Execute($sql);
	} catch (Exception $e) {
		insert_ledger("TRY CATCH ERROR -> $e when trying to run $sql");
	}
}

// Record how long the batch took and how many rows it contained.
$time_end = microtime(true);
$time = $time_end - $time_start;
$sql = "UPDATE bc_confirmations_log SET how_long='" . $time . "', amount='" . $counter . "' WHERE id='" . $batch_id . "' LIMIT 1";
$rs = $conn->Execute($sql);
insert_ledger("$batch_id confirmation DONE!");
?>

Ray Paseur commented:
My first two thoughts, without going into the code:

Do you have an index on every column used in WHERE, ORDER BY, JOIN, GROUP BY, etc.?

Do you have LIMIT clauses on every query that does not absolutely require a complete table scan?
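
One quick way to verify that: MySQL 5.5 cannot EXPLAIN an UPDATE directly (that arrived in 5.6), but you can check the access path with a SELECT that uses the same WHERE clause. A sketch, with illustrative values standing in for your real ones:

EXPLAIN SELECT *
  FROM bc_status
 WHERE recipient = '50586636608'        -- illustrative recipient
   AND sa_XBDLID = 'your-delivery-id';  -- substitute a real delivery id

-- If "key" comes back NULL or "rows" is large, each UPDATE is scanning
-- rather than seeking, and a composite index on (recipient, sa_XBDLID)
-- would be the first thing to try.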

Mario Bernheim (CTO, Author) commented:
Hi Ray,

Thanks for taking the time to answer. Yes, there is an index on every column used in WHERE; it's all properly indexed.

There isn't a LIMIT clause on this query:

$sql = "UPDATE bc_status SET sa_bill_status='" . $bill_status . "', sa_individual_id='" . $individual_id . "', semaphore='0' WHERE recipient='" . $number . "' AND sa_XBDLID='" . $id_delivery . "'";

Should we add one? sa_individual_id is the unique identifier; there are no duplicates.

Ray Paseur commented:
Yes, if an UPDATE is expected to hit only one row, then by all means use LIMIT.

As I read the query, recipient and sa_XBDLID are the columns that determine the unique identity of the row, right?
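
Concretely, a sketch of the statement with the clause added (values illustrative):

UPDATE bc_status
   SET sa_bill_status = '400',
       sa_individual_id = '4E6407FE3B542',
       semaphore = '0'
 WHERE recipient = '50586636608'
   AND sa_XBDLID = 'your-delivery-id'
 LIMIT 1;   -- stop after the first matching row

Note that LIMIT only saves work if the WHERE clause is not already resolved by a unique index; with a unique lookup MySQL stops at one row anyway.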

Mario Bernheim (CTO, Author) commented:
Hello Ray,

Yes. In fact, XBDLID is the batch ID number for the entire request; recipient and sa_individual_id then determine the unique identity of the row.

I've added LIMIT.

That gave us about a 20-second speedup...

The data is read in this format:

4E6407FE3B542,50586636608,400|4E6407FE3E03D,50589705033,400|etc..etc..etc..

Ray Paseur commented:
What do you get if you use var_dump($all_results) after this line?

$all_results = explode("|", $all_results);

That seems to imply that there is more than one value in a table column, and that is usually a recipe for trouble with a database.

I am thinking the database needs to be redesigned with "normalization" in mind. To see what I am talking about, search Google for the exact phrase "Should I Normalize my Database" and read the very interesting arguments on the different sides of that question.
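
For instance, rather than packing all 5,000 results into one result_data column, a normalized layout would give each result its own row. A sketch, with illustrative names and types:

CREATE TABLE bc_confirmation_results (
    batch_id      INT         NOT NULL,  -- points at bc_confirmations_log.id
    individual_id CHAR(13)    NOT NULL,  -- e.g. 4E6407FE3B542
    recipient     VARCHAR(15) NOT NULL,  -- e.g. 50586636608
    bill_status   SMALLINT    NOT NULL,  -- e.g. 400
    PRIMARY KEY (batch_id, individual_id)
) ENGINE=InnoDB;

Each incoming batch then becomes one multi-row INSERT instead of a string that PHP has to explode and walk.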

Mario Bernheim (CTO, Author) commented:
All values in $all_results are unique; it ends like so:

4E6407FE3B542,50586636608,400|4E6407F23B542,56586636608,400

This is what I get if I var_dump it:
  string(29) "4E641CA3E3F64,50589481451,400"
  [5082]=>
  string(29) "4E641CA3E52EE,50589469212,400"
  [5083]=>
  string(29) "4E641CA3E5EA7,50582212603,400"
  [5084]=>
  string(29) "4E641CA3E4F05,50589295313,400"
  [5085]=>
  string(29) "4E641CA3E7DE9,50557136211,400"
  [5086]=>
  string(29) "4E641CA3E8D8B,50589282020,400"
  [5087]=>
  string(29) "4E641CA3EA4FD,50589482102,400"
  [5088]=>
  string(29) "4E641CA3E6E48,50583337602,400"
  [5089]=>
  string(29) "4E641CA3EACCE,50583341565,400"
  [5090]=>
  string(29) "4E641CA3EBC6F,50589278573,400"
  [5091]=>
  string(29) "4E641CA3E9944,50582025694,400"
  [5092]=>
  string(29) "4E641CA3EE382,50589293871,400"
}

4E641CA3EE382 <-- will never repeat itself.

Ray Paseur commented:
I'm thinking that a query returning, in effect, over 5,000 rows may need to be rethought. A database table that has more than one data element in a column often causes this sort of slow response.
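
In the meantime, one thing worth measuring before any redesign: if each of the 5,000 UPDATEs runs as its own autocommit transaction, wrapping the loop in a single transaction lets InnoDB pay the commit cost once per batch instead of once per row. A sketch, assuming $conn is an ADOdb connection (a guess based on the Execute()/Affected_Rows() calls in your script):

// Sketch: batch all the per-row UPDATEs into one transaction.
$conn->StartTrans();   // ADOdb "smart" transaction
foreach ($all_results as $values) {
    list($individual_id, $number, $bill_status) = explode(",", $values);
    $conn->Execute(
        "UPDATE bc_status SET sa_bill_status=" . $conn->qstr($bill_status) . ",
             sa_individual_id=" . $conn->qstr($individual_id) . ", semaphore='0'
         WHERE recipient=" . $conn->qstr($number) . "
           AND sa_XBDLID=" . $conn->qstr($id_delivery) . " LIMIT 1"
    );
}
$conn->CompleteTrans();   // commits, or rolls back if any statement failed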

Mario Bernheim (CTO, Author) commented:
That's how the carrier sends it to us: in batches of 5,000.

Mario Bernheim (CTO, Author) commented:
Also... we receive about 30-40 POST requests of 5,000 results each... these are billing results and must be updated as quickly as possible... could there be an async solution?

Ray Paseur commented:
An async solution could work if you do not require the updates to appear in real time. You can have your script make a POST request to the async script, and the async script can update the table. If it encounters any errors, it can write to an error log with trigger_error(). A CRON job can monitor the error log and send you an email or text message when something appears in the log.
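
A minimal sketch of the fire-and-forget handoff (host name and path are placeholders for wherever the worker script ends up):

// Fire-and-forget POST: open a socket, write the request, and close
// without reading the response, so the caller returns immediately.
function async_post($host, $path, array $data)
{
    $body = http_build_query($data);
    $fp = @fsockopen($host, 80, $errno, $errstr, 5);
    if (!$fp) {
        trigger_error("async_post failed: $errstr ($errno)", E_USER_WARNING);
        return;
    }
    fwrite($fp, "POST $path HTTP/1.0\r\n"
              . "Host: $host\r\n"
              . "Content-Type: application/x-www-form-urlencoded\r\n"
              . "Content-Length: " . strlen($body) . "\r\n"
              . "Connection: Close\r\n\r\n"
              . $body);
    fclose($fp);
}

async_post('db-server.example.com', '/process_batch.php', array('batch_id' => $batch_id));

The worker script should call ignore_user_abort(true) so it keeps running after the caller disconnects, and it can log failures with trigger_error() as described above.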

Mario Bernheim (CTO, Author) commented:
I'm going to go ahead and give that a try: run nginx + php-fpm on the database server and send the update queries to it.