How do I optimize this MySQL query/table for speed?

We need to make our MySQL queries faster on a large table (1,544,980 rows):

SELECT id,sender,recipient,message,priority,flash_message
FROM movistar_mt3_saca_nic
WHERE process=0
ORDER BY priority DESC, id ASC
LIMIT 10



It's a transactional table: we select 10 numbers, try to bill them within 1 second for services, and then update them with the billing status so they don't get billed twice. A timer then selects another set of 10 numbers. The query is taking 3.5 seconds.
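For context, the claim-then-bill step described above can be sketched as a single transaction that locks the selected rows, so two timer ticks can never bill the same numbers. This is only a sketch of the pattern, not our actual application code; the `process = 1` "in flight" status is an assumption:

```sql
-- Hypothetical sketch: claim 10 unprocessed rows atomically.
START TRANSACTION;

SELECT id, sender, recipient, message, priority, flash_message
FROM movistar_mt3_saca_nic
WHERE process = 0
ORDER BY priority DESC, id ASC
LIMIT 10
FOR UPDATE;   -- row locks keep a concurrent timer from claiming the same rows

-- Mark the claimed rows so the next tick skips them
-- (process = 1 as an "in flight" status is an assumption).
UPDATE movistar_mt3_saca_nic
SET process = 1
WHERE id IN (/* ids returned above */);

COMMIT;
```

On MySQL/Percona Server 8.0+, `FOR UPDATE SKIP LOCKED` would additionally let concurrent workers skip rows another worker has already locked instead of waiting on them.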

the structure of the table is this:

CREATE TABLE IF NOT EXISTS `movistar_mt3_saca_nic` (
  `id` bigint(14) NOT NULL AUTO_INCREMENT,
  `batch_id` varchar(20) NOT NULL,
  `sender` varchar(50) NOT NULL,
  `recipient` varchar(50) NOT NULL,
  `message` varchar(180) NOT NULL,
  `sequence_number` int(11) NOT NULL DEFAULT '0',
  `command_status` varchar(11) DEFAULT NULL,
  `message_id` varchar(255) DEFAULT NULL,
  `done_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `service` bigint(14) DEFAULT NULL,
  `send_time` datetime(3) DEFAULT NULL,
  `confirm_time` datetime(3) DEFAULT NULL,
  `sequence_sending` int(11) NOT NULL DEFAULT '1',
  `process` int(11) NOT NULL DEFAULT '0',
  `priority` int(11) NOT NULL DEFAULT '0',
  `flash_message` int(11) NOT NULL DEFAULT '0',
  `transaction_id` varchar(50) DEFAULT NULL,
  `delivery_number` int(11) DEFAULT '1',
  `part_number` int(11) DEFAULT '1',
  `qty_attempts` int(11) DEFAULT '0',
  `product_id` varchar(50) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `service` (`service`),
  KEY `send_time` (`send_time`,`confirm_time`),
  KEY `procesados` (`process`),
  KEY `message` (`message`),
  KEY `recipient` (`recipient`),
  KEY `sender` (`sender`),
  KEY `priority` (`priority`),
  KEY `done_date` (`done_date`),
  KEY `index19` (`delivery_number`),
  KEY `index20` (`part_number`),
  KEY `index21` (`qty_attempts`),
  KEY `index13` (`priority`,`id`)
) ENGINE=InnoDB  DEFAULT CHARSET=latin1 AUTO_INCREMENT=4636815 ;



Our previous DBA quit and left us with this. We use the Percona edition of MySQL.

Asked by: Mario Bernheim (CTO)


David Favor (Linux/LXD/WordPress/Hosting Savant) commented:
You can always run EXPLAIN on the query in question to find missing indexes and other optimization clues.
Mario Bernheim (CTO, author) commented:
SQL query: EXPLAIN SELECT id,sender,recipient,message,priority,flash_message FROM movistar_mt3_saca_nic WHERE process=0 ORDER BY priority DESC, id ASC LIMIT 10;
Rows: 1

id  select_type  table                  type  possible_keys  key         key_len  ref    rows    Extra
1   SIMPLE       movistar_mt3_saca_nic  ref   procesados     procesados  4        const  716107  Using where; Using filesort
Kent Olsen (DBA) commented:
The query is finding 716,107 rows, sorting them, and then returning the first 10 of the sorted list. Depending on your system and configuration, 3.5 seconds may not be unreasonable.

Can you apply a second filter to reduce the number of rows from nearly 3/4 million to about 100,000?  Or less?
Mario Bernheim (CTO, author) commented:
I believe this server is running SSD disks, with 128 GB of RAM and 40 cores?
Kent Olsen (DBA) commented:
Can you create a composite index over the key columns?

CREATE INDEX idx_1 ON movistar_mt3_saca_nic (process, priority DESC, id ASC);

I'd test this on an offline system. It may help this query, but it could also create quite a bit of overhead when storing/updating rows.
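One caveat worth checking before relying on that index: in MySQL 5.7 and earlier, ASC/DESC in an index definition is parsed but ignored (all index columns are stored ascending), so a mixed-direction `ORDER BY priority DESC, id ASC` still needs a filesort. True descending index columns arrived in MySQL 8.0. If the server is 8.0+, the index can be created and verified roughly like this (`idx_process_prio_id` is a hypothetical name):

```sql
-- Assumes MySQL/Percona Server 8.0+, where DESC index columns are honored.
CREATE INDEX idx_process_prio_id
    ON movistar_mt3_saca_nic (process, priority DESC, id ASC);

-- Verify: Extra should no longer show "Using filesort".
EXPLAIN
SELECT id, sender, recipient, message, priority, flash_message
FROM movistar_mt3_saca_nic
WHERE process = 0
ORDER BY priority DESC, id ASC
LIMIT 10;
```

On 5.7, the closest equivalent is an all-ascending index plus accepting the sort over the (now much smaller) matching set.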
Mario Bernheim (CTO, author) commented:
I actually removed ALL the indexes and it lowered the query time from 3.5 s to 1.1 s.
Mario Bernheim (CTO, author) commented:
This is our my.cnf:

[mysql]

# CLIENT #
port                           = 3306
socket                         = /var/lib/mysql/mysql.sock

[mysqld]

# GENERAL #
user                           = mysql
default-storage-engine         = InnoDB
socket                         = /var/lib/mysql/mysql.sock
pid-file                       = /var/lib/mysql/mysql.pid

# MyISAM #
key-buffer-size                = 32M
myisam-recover                 = FORCE,BACKUP

# SAFETY #
max-allowed-packet             = 16M
max-connect-errors             = 1000000
innodb                         = FORCE

# DATA STORAGE #
datadir                        = /var/lib/mysql/

# CACHES AND LIMITS #
tmp-table-size                 = 32M
max-heap-table-size            = 32M
query-cache-type               = 0
query-cache-size               = 0
max-connections                = 7000
thread-cache-size              = 50
open-files-limit               = 65535
table-definition-cache         = 1024
table-open-cache               = 2048

# INNODB #
innodb-flush-method            = O_DIRECT
innodb-log-files-in-group      = 2
innodb-log-file-size           = 512M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table          = 1
innodb-buffer-pool-size        = 108G

# LOGGING #
log-error                      = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes  = 1
slow-query-log                 = 1
slow-query-log-file            = /var/lib/mysql/mysql-slow.log

Mario Bernheim (CTO, author) commented:
Any thoughts on the config?
theGhost_k8 (Database Consultant) commented:
Hello Mario,

Here are some quick comments:

* On the table structure:
- Avoid the index on `message`.
- Don't include the PK in a secondary index -> KEY `index13` (`priority`,`id`).
- Is the `priority` field boolean, or could it be covered by a smaller datatype, say TINYINT?
- All your integer columns are signed; make them unsigned to use the full range:
        `process` int(11) UNSIGNED NOT NULL DEFAULT '0'

* On "I actually removed ALL indexes and it lowered the query time from 3.5s to 1.1":
- Not a good idea. Removing indexes can improve WRITEs, but not READs. And an InnoDB table without a PK is not good at all.
- You're now doing a full table scan, reading the rows one by one. This is fast if your table is in memory, and I see you have a generous innodb-buffer-pool. If you run your query with SQL_NO_CACHE (SELECT SQL_NO_CACHE ...), you should see the real picture without the index.

* About the query:
Do you really want to process all the records, even ones created 2 years ago? If not, add a filter on date, which will substantially reduce the data to process, e.g.:

SELECT ... WHERE send_time > DATE_SUB(CURDATE(), INTERVAL 30 DAY)
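Putting that date filter together with the original query, a sketch might look like the following. The 30-day window and the index name are assumptions to be tuned against the real workload:

```sql
-- Hypothetical supporting index: equality/range filter columns first.
CREATE INDEX idx_proc_sendtime
    ON movistar_mt3_saca_nic (process, send_time);

SELECT id, sender, recipient, message, priority, flash_message
FROM movistar_mt3_saca_nic
WHERE process = 0
  AND send_time > DATE_SUB(CURDATE(), INTERVAL 30 DAY)
ORDER BY priority DESC, id ASC
LIMIT 10;
```

With this index the sort still happens, but over a far smaller set of rows than the 716,107 the current plan reads.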
Tomas Helgi Johannsson commented:
Hi,

The index KEY `procesados` (`process`) has too few columns for this query.
Try the index
KEY `procesados` (process, priority DESC, id ASC, sender, recipient, message, flash_message)

This index makes the query index-only and eliminates the sort, because process, priority, and id lead the index in the same order the query filters and sorts by.

Note that an index with only one column has many performance disadvantages compared to indexes with 2+ columns, especially as the table grows. There are quite a few single-column indexes in this table; look at which queries use each of them and optimize the indexes against that list of queries.

Regards,
    Tomas Helgi
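As a concrete illustration of that single-column-index cleanup: an index whose only column already leads a wider composite index is redundant and can be dropped. This is a sketch only; verify each index against the real query workload before dropping anything:

```sql
-- `priority` alone is redundant: it is the leading column of
-- KEY `index13` (`priority`,`id`), so index13 serves the same lookups.
ALTER TABLE movistar_mt3_saca_nic
    DROP INDEX `priority`;

-- Likewise, `procesados` (`process`) would become redundant if a wider
-- composite index starting with `process` were added (an assumption here).
-- ALTER TABLE movistar_mt3_saca_nic DROP INDEX `procesados`;
```

Fewer redundant indexes also reduces the per-row write overhead on this heavily updated table.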
theGhost_k8 (Database Consultant) commented:
Whatever solution you choose, do NOT add the primary key to any secondary index.
Tomas Helgi Johannsson commented:
Hi,

>For whatever solution you choose -> Do NOT add a Primary Key in any secondary index.

Adding extra columns to an index (whether they belong to a primary key or not) benefits every query whose WHERE clause uses multiple columns that match the columns of a regular, non-unique index. This has proven true across database systems.
The same columns appearing in unique indexes and primary keys (which are unique) are a whole other matter.
Having only one column in an index is not good practice when it comes to database and query performance.

Regards,
    Tomas Helgi
theGhost_k8 (Database Consultant) commented:
Hello Tomas,

Generally speaking the idea you're proposing is right, but for MySQL this is what the documentation says:

In InnoDB, each record in a secondary index contains the primary key columns for the row, as well as the columns specified for the secondary index.
Ref: https://dev.mysql.com/doc/refman/5.7/en/innodb-index-types.html

So "(whether it belongs to a primary key or not)" -> this does make a difference in the case of InnoDB (MySQL): the PK columns are appended to every secondary index automatically, so listing them explicitly only wastes space.

Regards.
Tomas Helgi Johannsson commented:
Hi,

In InnoDB, each record in a secondary index contains the primary key columns for the row, as well as the columns specified for the secondary index.
This does not mean you shouldn't add the columns (or a subset of the columns) of a primary key to a secondary index. As I read it, it means the primary key is used internally as a row pointer. Every database has such internal pointers in its table and index pages, to identify pages and/or rows so it can be ACID compliant.

The order in which columns are placed in an index can make a great deal of difference to performance. The ultimate goal is to find the smallest set of rows a query is searching for as quickly as possible; that is, the smallest set of rows you can filter out from the first index scan of a suitable index. If that is not possible, the optimizer chooses additional indexes and/or touches the table, or in the worst case runs a tablescan (scans the entire table).

So adding a subset or all of the primary key columns to a secondary index (as I proposed in my first comment) will do nothing but good for the query. In fact, this proves my point.

I have done a lot of database/query tuning for over a decade (almost two) on all sorts of databases, including MySQL and MariaDB, with good results, so I think I know what I'm talking about.

Best regards,
  Tomas Helgi