
How else can you speed up this query?

This is for a very high-traffic site I'm consulting for. The query below is fast, but not fast enough: 30% of the time it runs between 0.1 and 0.3 seconds, when as much as possible it should be at the 0.001-second level. Wondering how else it can be sped up.

SELECT `userid`, `username`, `email`, `level`, `lang_pref`, `ver_pref`, `fl_location` FROM users_table WHERE `userid` = 123456 AND `level` != 0;

A snippet of the DESC output with the relevant columns is below:

+-------------+---------------------+------+-----+---------+----------------+
| Field       | Type                | Null | Key | Default | Extra          |
+-------------+---------------------+------+-----+---------+----------------+
| userid      | int(10) unsigned    | NO   | PRI | NULL    | auto_increment |
| username    | varchar(100)        | NO   | MUL | NULL    |                |
| lang_pref   | varchar(10)         | NO   |     | en      |                |
| email       | varchar(255)        | NO   | MUL | NULL    |                |
| level       | tinyint(3) unsigned | YES  |     | 0       |                |
| fl_location | varchar(2)          | NO   |     |         |                |
| ver_pref    | tinyint(1)          | NO   |     | NULL    |                |
+-------------+---------------------+------+-----+---------+----------------+

Lock time statistics (in seconds):
+----------------+------------------+---------------------+-------------------+
| min(lock_time) | max(lock_time)   | avg(lock_time)      | stddev(lock_time) |
+----------------+------------------+---------------------+-------------------+
|              0 | 1.79318904876709 | 0.00769399780482583 | 0.017644606359121 |
+----------------+------------------+---------------------+-------------------+

One thing we noticed is that, for a single user, this call is repeated more than a hundred times, sometimes 3000 times, within a single hour. I'm still checking with the client why this is so, but I still need to find the best way to deal with it: whether there are optimizations that can still be done within the database, whether to recommend that the client change the frequency of the calls (if possible), or whether an external cache like memcached is needed.

Any suggestions?  Thanks.
3 Solutions
 
Eyal commented:
Add WITH (NOLOCK) after the table name.
 
Eyal commented:
Sorry, wrong DB syntax (that's SQL Server, not MySQL).
 
johanntagle (Author) commented:
Hmmm, the MySQL equivalent of that would be:

SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
<The Actual Query>
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ ;

Not sure if we can change the transaction isolation level thousands of times in an hour, even if just for the session. Need to study this further (if people have more input on this, please do give it).
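
One possible refinement (a sketch, relying on MySQL's documented rule that SET TRANSACTION without the SESSION keyword applies only to the next transaction, so no reset statement is needed):

-- Affects only the next transaction; the session default
-- (REPEATABLE READ) comes back automatically afterwards.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT `userid`, `username`, `email`, `level`, `lang_pref`, `ver_pref`, `fl_location` FROM users_table WHERE `userid` = 123456 AND `level` != 0;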

Thanks Eyal.

Anyone else have inputs?
 
blue_hunter commented:
Please check the following:
1. Table type (MyISAM or InnoDB).
2. Tables are properly indexed.
3. SET the transaction isolation level (as you mentioned; yes, the isolation level can be set and reset within each of your sessions).
4. Check your innodb_flush_log_at_trx_commit variable; if your DB doesn't mind losing up to about a second of transactions in a crash, set it to "2" instead (see the sketch after this list).
5. Profile your SQL query to check where the bottleneck is.
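
A minimal sketch of item 4 (innodb_flush_log_at_trx_commit is a dynamic global variable, so this assumes you have the privileges to set it at runtime):

-- Check the current value (1, the default, flushes the log to disk at every commit).
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- 2 writes the log at each commit but flushes to disk only about once per second,
-- so an OS crash or power loss can cost roughly the last second of commits.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;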
 
johanntagle (Author) commented:
Hi blue_hunter,

1. InnoDB.
2. We're accessing via the primary key, so even though the other column in the filter is not indexed, there is only one row left to check after the primary-key lookup. I also tried adding an index on both columns; MySQL still just uses the primary key (see the EXPLAIN sketch after this list).
3. My question with setting the isolation level is how it will affect other queries. Does it make sense to change it every time that query is run? While it runs thousands of times per hour, that is still less than 20% of all the queries run.
4. Will check on that one with the client. We lose up to about a second's worth of transactions in case of a crash, right?
5. How else do you suggest profiling, aside from what I described above?
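
For reference, a sketch of how to verify that access path; with a primary-key equality lookup, EXPLAIN should report type=const, key=PRIMARY, rows=1:

EXPLAIN SELECT `userid`, `username`, `email`, `level`, `lang_pref`, `ver_pref`, `fl_location` FROM users_table WHERE `userid` = 123456 AND `level` != 0;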

Thanks!

Johann
 
blue_hunter commented:
3. It's all about client acceptance: whether newly committed records should become visible to currently running transactions. Of course, you may implement it as GLOBAL instead of SESSION; it all depends on your client's acceptance and concerns.
In my opinion, it does make sense to execute the command each time the query is run, because it targets just this one query.
4. Yes.
5. SET profiling = 1;
   <run the actual query>
   SHOW PROFILES;
   SHOW PROFILE FOR QUERY <query no>;

 
Guy Hengel [angelIII / a3], Billing Engineer, commented:
If your DB uses InnoDB, every query is part of a transaction, hence it may take time.
Your first paths to speeding it up are:
* reduce the load (by ensuring the DB is not queried that often, for example by keeping data in memory)
* tune the MySQL parameters to take more data into memory:
  => http://www.mysqlperformanceblog.com/2006/09/29/what-to-tune-in-mysql-server-after-installation/
* more memory
* faster disks
* change or copy the table to the MyISAM or MEMORY table type:
  => http://dev.mysql.com/doc/refman/5.0/en/memory-storage-engine.html

Important notes for the MEMORY table:
=> once you restart mysql, the table is "lost", so you have to "reimport" it
=> so every write (insert, update, delete) should go to both the actual table and the memory table
=> the SELECTs go only to the memory table (see the sketch below)
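
A minimal sketch of that setup (the users_table_mem name is illustrative, and this assumes the table has no BLOB/TEXT columns, which the MEMORY engine does not support):

-- Create an empty copy with the same columns and indexes, then switch its engine.
-- (CREATE TABLE ... AS SELECT would not copy the primary key.)
CREATE TABLE users_table_mem LIKE users_table;
ALTER TABLE users_table_mem ENGINE=MEMORY;
-- Load it; this must be redone after every mysqld restart.
INSERT INTO users_table_mem SELECT * FROM users_table;

-- Reads go to the in-memory copy:
SELECT `userid`, `username`, `email`, `level`, `lang_pref`, `ver_pref`, `fl_location` FROM users_table_mem WHERE `userid` = 123456 AND `level` != 0;

-- Writes must be applied to both tables to keep them in sync:
UPDATE users_table SET `lang_pref` = 'fr' WHERE `userid` = 123456;
UPDATE users_table_mem SET `lang_pref` = 'fr' WHERE `userid` = 123456;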

hope this helps
 
johanntagle (Author) commented:
My apologies for not getting back to you; other tasks took priority. Will give feedback within the week.
 
johanntagle (Author) commented:
Thanks to everyone who participated. angelIII is right: optimizing the code so that the query is not called so frequently gave the biggest impact.
