Mike Paradis

asked on

Best use of NVMe with MySQL?

I'm using FreeBSD to run dedicated MySQL servers on hardware, no virtual machines.

When installing the hardware, I installed 16GB NVMe devices in the blades.


There are a couple of ways I could use them to help speed up MySQL, such as putting the log files on them or using them as some sort of cache, but I think there is something else I read about that would really speed up MySQL. What might that be?


nickg5

Here are ten ways to compare against the testing you have already done with regard to speed.

https://seo-explorer.io/blog/five-ways-to-improve-mysql-select-speed-part-1/
Member_2_231077

At only 16GB it's most likely intended as a cache, such as Optane acceleration; I'd need more detail on the blade and device hardware to confirm.
How big are the database(s)?
Mike Paradis

ASKER

I could put larger ones in there, but I wanted to see what 16GB alone could do. We always seem to throw more and more hardware at things, and sometimes it's nice to know what using a little less can mean :).

The live DB is only around 5GB. The app uses multiple DBs, but this main one is the one that needs to be fast.
The buffer pool is set to 98GB of the blade's 128GB.
The innodb_log_file_size is set to 1GB.

It's just a test server, but it's got 128GB of RAM and an E-2660 I think, so 32 cores.

Mainly, I just want to know what the best way to utilize this would be and how to do it so I can take notes.
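
For reference, the relevant my.cnf lines for the setup described above, plus how the NVMe could come into play, might look something like this (the /nvme mount point and paths are assumptions, not what's actually configured):

[mysqld]
# settings as described above
innodb_buffer_pool_size   = 98G
innodb_log_file_size      = 1G

# hypothetical: put the redo logs and binlog on the NVMe device,
# assuming it is mounted at /nvme
innodb_log_group_home_dir = /nvme/mysql-redo
log_bin                   = /nvme/mysql-binlog/mysql-bin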
SOLUTION
gr8gonzo

@nickg5, thanks for the article. It seems to be mostly about optimizing databases and not quite about how to improve performance using NVMe.

@gr8gonzo, thanks for your input.
>Given how much RAM you have, you might also check to see if you can set up a process to sync
>up any read-heavy tables to temporary memory tables so that the nvme isn't competing

In another question, using a ramdisk came up. The data is not financial, and I keep a 1GB recovery log file going, which should be plenty. The usage pattern is mostly a high number of connections with very small payloads, and we move unneeded data to secondary DB servers to keep the main one slim (5GB right now). The servers are in data centers, so power is quite reliable.

>but it could help extend the life of the nvme and optimize its I/O.

If this is something I would have to consider, then NVMe is not for us. It would end up being one more potential failure point and something to monitor. Maybe it's simpler to add more drives and use that memory.

The storage devices themselves are SAS SSDs, so good performance to begin with.
I don't know how NVMe compares to those SSDs, but aren't they pretty much the same in terms of lifetime?

@skullnobrains mentioned this in another post of mine:
actually, even for write-intensive workloads, a sata array and a single tiny nvme drive for the zfs intent log or mariadb binlog will outperform a sas array on a raid card and be MUCH more cost effective. and you can use that cost effectiveness to add more drives. 4 sata drives will definitely outperform 2 sas drives even without the extra intent log drive on a db.
---
hardware optimisations, if needed later on, would be to throw in a couple of small slc nvme drives, partition them 1/4 raid1 for the zfs intent log, and the rest in raid0 for the zfs ARC read cache. that will improve performance by orders of magnitude if needed. additionally, the lifetime of the other drives will improve significantly, likely double. sticking the binlog and the likes on different drives is imho useless on zfs
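
As a rough sketch, that scheme would translate to something like the following on FreeBSD. The pool name "tank" and device names nda0/nda1 are assumptions, with each drive already GPT-partitioned into a small p1 and a large p2:

# mirrored SLOG (ZFS intent log) on the small partitions
zpool add tank log mirror nda0p1 nda1p1

# L2ARC read cache across the large partitions
# (cache devices cannot be mirrored; listing both spreads reads over them)
zpool add tank cache nda0p2 nda1p2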



Maybe there is no point then?

If the NVMe has a potentially shorter lifespan, it may not be worth using, and maybe it's better to just stick another couple of SAS SSDs in the blade.
SAS and NVMe have the same storage technology available, so they should last about the same length of time; bear in mind that different makes and SLC/MLC/TLC etc. will make the difference here. Also, on a RAID controller there is no TRIM, but that is compensated for by more reserve space (e.g. 960GB vs. 1TB usable) for background garbage collection.

You might also consider persistent memory if your hardware supports it; instead of being connected to the PCIe bus, it sits in a DIMM socket, so it's even faster, and it can be configured as a hard disk and even booted from. It's fairly new technology, so relatively expensive: 128GB costs $500. Bear in mind you can only use software for mirroring/redundancy, but that may apply to NVMe as well.
When I talked about extending the lifespan, I wasn't suggesting that NVMe has a shorter lifespan; ANY device has -some- kind of lifespan. I was saying that by distributing the I/O, you can simply maximize its lifespan and effectiveness.

Also, I was referring to using the MEMORY table engine in MySQL, not a ramdisk. Both ultimately store data in memory, but a ramdisk sits a little "lower" in the architecture. Using a MEMORY table would make it easier for code to initialize the table(s) as necessary.
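
A minimal sketch of that sync pattern, assuming a hypothetical read-heavy table called hot_lookup with no TEXT/BLOB columns (which the MEMORY engine doesn't support):

-- one-time: create an in-memory copy of the read-heavy table
CREATE TABLE hot_lookup_mem LIKE hot_lookup;
ALTER TABLE hot_lookup_mem ENGINE=MEMORY;

-- on a schedule, or after writes: refresh the in-memory copy
TRUNCATE TABLE hot_lookup_mem;
INSERT INTO hot_lookup_mem SELECT * FROM hot_lookup;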
BTW, I'm not using the hardware controller. Part of why I went with BSD was so I could skip hardware RAID and use ZFS instead. So I believe the new servers will be RAIDZ1 with two SAS SSD drives.

>persistent memory if your hardware supports it, instead of being connected to the PCIe bus it's in a
>DIMM socket so even faster, can be configured as a hard disk and even booted from.

That sounds fun. I'd never heard of this :).

>Fairly new technology so relatively expensive, 128GB costs $500.

That's kind of in line with memory cost, so it's not actually that bad.

>When I talked about extending the lifespan, I wasn't suggesting that NVMe had a shorter lifespan,

Got it, thanks for the clarification.

>Also, I was referring to using the MEMORY table engine in MySQL, not a ramdisk. Both of them are
>ultimately storing data in memory, but a ramdisk is a little "lower" in the architecture. Using a
>MEMORY table would be easier for code to initialize the table(s) as necessary.

Ok, something new to look into I guess.

So, yea or nay on using a 16GB NVMe in some useful way? If it's just extra hardware for nothing, I'd prefer pulling it out and using the memory or a config method.

Mainly, I'd like to lower the amount of drive use, and I thought there might be an advantage, but I don't think there is, since the SAS SSDs will be about as fast and using memory even faster.
ASKER CERTIFIED SOLUTION
Forgot to mention: DO NOT install the OS on your faster drives. It is a common but foolish recommendation.

In your case, the expected performance gain is next to nil and might even be negative: throughput will not be better. Reads will hardly benefit, since the buffer pool can hold your whole DB in memory; maybe marginally during warmup. Writes will commit much faster, but you have few of them, so there probably won't be much of a difference.
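
One way to check that is with the standard InnoDB status counters; if Innodb_buffer_pool_reads (requests that had to go to disk) stays tiny next to Innodb_buffer_pool_read_requests, reads are already being served from memory:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';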

Also note that if you have a bunch of server-grade SAS drives in RAID 10, it is quite possible that your existing array outperforms two NVMe drives for everything except time to commit writes. Remember that DBs are mostly about IOPS.
I understand what you're saying.
On some VMware servers, for example, the OS runs on USB or an onboard 15GB SSD while the rest runs on the fast drives. I do the same on NAS servers.
USB and SD cards are no longer supported for VMware boot :(
I guess that's OK. I've read many threads where folks were saying that SD cards are not very reliable; USB, maybe a tiny bit more.

I've also used SATA DOM devices as boot/OS drives and mini SSD SATA cards on IBM blades, and never had a problem.
SD cards and USB keys are not very suitable for OS installation; a disk is best. No point in using a very fast one, though. Mini SSDs have little in common with SD cards and should be considered regular disks.

SD cards and USB keys are perfectly usable with targeted OSes, but those normally write to disk seldom or not at all. In your case it would work fine, though, since writes would typically be limited to a few tmp files. Sticking tmp and run and even var in ramdisks would be better, though (see the fstab sketch below).

... but there is little point in doing so. The OS will be loaded from the pool and hardly written to or read from after boot anyway.
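
For completeness, the tmp/run ramdisk idea would look something like this in /etc/fstab on FreeBSD (a sketch; the size limits are arbitrary):

# RAM-backed tmpfs for /tmp and /var/run
tmpfs  /tmp      tmpfs  rw,mode=1777,size=2g  0  0
tmpfs  /var/run  tmpfs  rw,size=64m           0  0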