Magnus_Bleken

asked on 

Hyper-V server/storage

We sometimes have delays on our servers. We have tried tweaking the memory, which helped a lot, but apparently not enough. CPU on the Hyper-V hosts ranges anywhere from 20 to 80%.
I'm not sure where the bottleneck is, and I'm hoping for some feedback comparing IOPS on a P2000 G3 with 10K SAS disks connected via SAS HBA vs. an MSA 2040 using SSDs connected via SAS HBA.

I have checked with HP support and they recommended setting up round-robin, but that doesn't work when using a SAS HBA. I've searched "all the web", so please, if you have any real answers/experience/suggestions, let me know!
Do we need another setup? New servers (recommendations)? Another storage solution (recommendations)? We prefer HP...

We have about 300 users connected through Remote Desktop Services (virtual RDS servers on 2008 R2, 2012 R2 and 2016) running on a total of four HP ProLiant DL380 G7 boxes with 2x E5620 CPUs and 192GB RAM each.
These Hyper-V servers are also running other VMs (24 in total, including the 7 for RDS).

These are connected to an HP P2000 G3 SAS DC with 24 x 900GB 10K SAS disks: the first 8 running RAID-5, the next 8 running RAID-10, the next 6 running RAID-10, and 2 as global spares.
Checking the stats on the HP P2000 G3 SAS, the average response times are as follows:
- 6-disk RAID-10 (low 2ms, high 20ms, avg 12ms) - throughput peaking at 40MB/s
- 8-disk RAID-10 (low 2ms, high 27ms, avg 11ms) - throughput peaking at 80MB/s
- 8-disk RAID-5 (low 8ms, high 25ms, avg 16ms) - throughput peaking at 230MB/s
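The posted figures already allow some back-of-envelope math. Here's a minimal Python sketch, assuming a 64 KiB average I/O size for the throughput numbers (an assumption, as the array stats don't state I/O size) and using Little's Law to estimate how many I/Os are in flight:

```python
# Rough back-of-envelope math from the array stats above.
# Assumptions (not from the array itself): 64 KiB average I/O size
# for the throughput figures, and Little's Law for in-flight I/O.

KIB = 1024

def iops_from_throughput(mb_per_s: float, io_kib: float) -> float:
    """Approximate IOPS implied by a throughput figure at a given I/O size."""
    return (mb_per_s * 1000 * 1000) / (io_kib * KIB)

def outstanding_io(iops: float, latency_ms: float) -> float:
    """Little's Law: average I/Os in flight = arrival rate * response time."""
    return iops * (latency_ms / 1000)

# The three vdisks: (peak throughput MB/s, average response time ms) as posted.
vdisks = {
    "6-disk RAID-10": (40, 12),
    "8-disk RAID-10": (80, 11),
    "8-disk RAID-5":  (230, 16),
}

for name, (mbps, avg_ms) in vdisks.items():
    iops = iops_from_throughput(mbps, 64)
    print(f"{name}: ~{iops:,.0f} IOPS @64KiB, "
          f"~{outstanding_io(iops, avg_ms):.1f} I/Os in flight")
```

At a smaller I/O size (e.g. 4 KiB, typical for RDS workloads) the same throughput implies far more IOPS, which is why raw MB/s alone doesn't identify the bottleneck.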

I have a couple of questions:
- How would an MSA 2040 DC SFF (the one with SSD cache) compare regarding IOPS? Yes, I have read all the PDFs from HP, but they compare using 192 disks in the P2000 and 144 disks in the MSA 2040 if I'm not mistaken and... well, that's a bit too many for me ;)
- How would speed (throughput, IOPS, latency etc.) using iSCSI (on a dedicated 10Gbps NIC) compare to the 6Gb SAS we use now?
- Are there any other storage units we should consider, like the QNAP SS-EC2479? Pros/cons?
- Is there any "max IOPS" on a P2000 G3 controller? If so, what's the max on the MSA 2040?
- Any other suggestions?
- We use 10Gbps SFP+ NICs now; any loss/latency if we switch to 10Gbps copper NICs/switches?
Tags: Hyper-V, Storage, Virtualization

Philip Elder

MPIO set up using the Microsoft DSM or HP DSM?
Magnus_Bleken

ASKER

Microsoft...
The VHDs are defragmented and fixed-size.
Philip Elder

We have been building and testing storage and compute cluster solutions for quite a long time, and we have a number of blog posts about our experiences.

The first article answers a lot of your specific questions. The others have some key bits of information, with the last being posts tagged "performance".


Having three different arrays on the P2000 is probably one of the key performance hits. That's a lot of calculation for both controllers to do at any given time, especially during logon storms.

The average 10K SAS disk can yield about 250 to 450 IOPS depending on how the storage stack is set up.
The average SAS SSD can yield about 25,000 to 45,000 IOPS depending on how the storage stack is set up.
The Know Your Workload blog post explains a bit more about that.

For smart shelves we set up our storage with all disks in one array. We carve out a small LUN for the cluster's Witness Disk, with the rest split 50/50 between two LUNs so that each controller owns one. We do the same in the cluster, where one node owns CSV0 and another node owns CSV1. That helps distribute the I/O load as far as this configuration type allows.

And one last thing: we don't deploy anything less than RAID 6 anymore. Today's 10K SAS disks are so dense that they have pretty good IOPS and throughput; in an eight-disk 10K SAS RAID 6 array we tend to see about 800MB/s throughput and 2,800 IOPS. Depending on how we set up the stack, we can push more throughput or pull more IOPS out of the setup.
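For a rough sense of how array layout affects these numbers, here's a sketch using the common rule-of-thumb RAID write penalties (RAID-10: 2, RAID-5: 4, RAID-6: 6 back-end I/Os per front-end write). The 350 per-disk IOPS midpoint and the 70/30 read/write mix are illustrative assumptions, not measurements:

```python
# Rule-of-thumb effective-IOPS model using standard RAID write penalties.
# Per-disk IOPS (midpoint of the 250-450 range for 10K SAS above) and the
# 70/30 read/write mix are assumptions for illustration only.

WRITE_PENALTY = {"RAID-10": 2, "RAID-5": 4, "RAID-6": 6}

def effective_iops(disks: int, per_disk_iops: int, raid: str,
                   read_frac: float = 0.7) -> float:
    """Front-end IOPS an array can sustain for a given read/write mix."""
    raw = disks * per_disk_iops
    write_frac = 1 - read_frac
    # Each front-end write costs WRITE_PENALTY back-end I/Os.
    return raw / (read_frac + write_frac * WRITE_PENALTY[raid])

# Eight 10K SAS disks at ~350 IOPS each:
for raid in ("RAID-10", "RAID-5", "RAID-6"):
    print(f"8x 10K SAS, {raid}: ~{effective_iops(8, 350, raid):,.0f} IOPS")
```

The model shows why read-heavy workloads narrow the gap between RAID levels: the penalty applies only to the write fraction.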
Magnus_Bleken

ASKER

I read everywhere that RAID 6 is going to be a bottleneck, so we use RAID 5 for "slow storage" and RAID 10 for databases etc.

No one here has gone from a P2000 G3 SAS to an MSA 2040 SAS with SSD cache who can comment on the speed difference (throughput/IOPS)? Not using 192 disks...
Philip Elder

We'd never risk production workloads to RAID 5. That's just too risky.

In our testing, a 10K SAS RAID 6 array across eight disks does about 800MB/s, or about 1,500 to 2,300 IOPS depending on storage stack setup.

We have plenty of active sites running on 8 to 24 disks in a RAID 6 array with more spindles adding more throughput and IOPS.

How would the SSD cache be set up? There would be some performance gains to be had but a lot has to do with how the data gets cached and whether the cache algorithm can work with VHDX.
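The cache-hit question matters because average latency is dominated by the miss path. A toy model (hypothetical numbers: ~0.2 ms for SSD-cached reads, ~12 ms for HDD reads, roughly matching the averages posted for the P2000 arrays) shows why the hit rate the caching algorithm achieves against VHDX traffic is the key variable:

```python
# Illustrative model (assumption, not MSA 2040-specific): effect of SSD
# read-cache hit rate on average read latency. The 0.2 ms SSD and 12 ms
# HDD figures are hypothetical placeholders.

def avg_read_latency_ms(hit_rate: float,
                        ssd_ms: float = 0.2,
                        hdd_ms: float = 12.0) -> float:
    """Blended average read latency for a given cache hit rate."""
    return hit_rate * ssd_ms + (1 - hit_rate) * hdd_ms

for hr in (0.0, 0.5, 0.8, 0.95):
    print(f"hit rate {hr:.0%}: ~{avg_read_latency_ms(hr):.1f} ms average read latency")
```

Even an 80% hit rate cuts the blended average from 12 ms to under 3 ms in this model, but a cache that can't recognize the hot blocks inside large VHDX files would sit near the 0% line.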
Magnus_Bleken

ASKER

We use the recommended setup from http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=mmr_kc-0112012 regarding RAID levels.

The MSA 2040 comes in two versions: one with SSD cache and one without.
I'm wondering whether an upgrade to the MSA 2040 will actually be worth the cost. I cannot find any suitable comparison between the P2000 G3 SAS and the MSA 2040 SAS (with and without SSD cache), only data where they used 192 and 144 disks, and we're not going to use that many.
Philip Elder

The P2000 is a 6Gbps SAS part.
The MSA 2040 is a 12Gbps SAS part.

The HBAs should be 12Gbps to fully utilize the bandwidth available.

12Gbps means more bandwidth and even lower latency.
Magnus_Bleken

ASKER

This really doesn't answer my question.
If you use 24 x 600GB 10K SAS disks in a P2000 G3 connected using an HP SC08e SAS HBA vs. 8 x SSDs in the MSA 2040, what would the IOPS difference be?
I can find comparisons on HP's pages, but they use 192 disks for the P2000 and 144 for the MSA 2040. I need a test with fewer disks (a "normal setup").
ASKER CERTIFIED SOLUTION
Philip Elder

[Solution text hidden behind the site's membership paywall.]
Magnus_Bleken

ASKER

Thank you.
I've googled quite a bit but couldn't find anywhere to set/change the queue depth. Isn't this possible, or is it done automatically?
And regarding the P2000 G3 and the SC08e HBA: I cannot find any "max IOPS" figure anywhere, just to check whether the SC08e could be a bottleneck.
Philip Elder

I have a couple of questions:
 - How would an MSA 2040 DC SFF (the one with SSD cache) compare regarding IOPS? Yes, I have read all the PDFs from HP, but they compare using 192 disks in the P2000 and 144 disks in the MSA 2040 if I'm not mistaken and... well, that's a bit too many for me ;)
 - How would speed (throughput, IOPS, latency etc.) using iSCSI (on a dedicated 10Gbps NIC) compare to the 6Gb SAS we use now?
 - Are there any other storage units we should consider, like the QNAP SS-EC2479? Pros/cons?
 - Is there any "max IOPS" on a P2000 G3 controller? If so, what's the max on the MSA 2040?
 - Any other suggestions?
 - We use 10Gbps SFP+ NICs now; any loss/latency if we switch to 10Gbps copper NICs/switches?

In all reality, this whole discussion is moot. If iSCSI is to be used at 10Gb Ethernet speeds, that would be the bottleneck.

One SAS cable actually carries four 6Gbps SAS lanes in it. So that's 24Gbps of _SAS_ throughput at the virtually-zero latency of the SAS bus.

As shown above, two SAS cables can handle a sum total of 377K IOPS. There's no way iSCSI across 10GbE can come even close to that; the latency would kill it.

The max IOPS for the 12Gbps SAS setup would be about 700K via two HBAs, each with one 12Gbps SAS cable, per server node. Microsoft's experience at 6Gbps is similar to our results: Achieving Over 1-Million IOPS from Hyper-V VMs in a Scale-Out File Server Cluster Using Windows Server 2012 R2.
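The link-speed arithmetic above can be sanity-checked with a short sketch. The four-lanes-per-cable figure matches a standard wide SAS cable; the 90% wire-efficiency factor and 4 KiB I/O size are illustrative assumptions:

```python
# Sanity-check on the link-speed claims: wide SAS cables carry four lanes,
# so raw cable bandwidth is lanes * per-lane rate. The efficiency factor
# and 4 KiB I/O size below are illustrative assumptions.

def cable_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Raw bandwidth of a wide SAS cable."""
    return lanes * gbps_per_lane

def iops_ceiling(link_gbps: float, io_kib: float,
                 efficiency: float = 0.9) -> float:
    """Upper bound on IOPS the wire itself allows at a given I/O size."""
    bytes_per_s = link_gbps * 1e9 / 8 * efficiency
    return bytes_per_s / (io_kib * 1024)

print(f"One 6Gbps SAS cable : {cable_gbps(4, 6):.0f} Gbps")
print(f"One 12Gbps SAS cable: {cable_gbps(4, 12):.0f} Gbps")
print(f"10GbE 4KiB IOPS wire ceiling: ~{iops_ceiling(10, 4):,.0f}")
```

Even before protocol overhead and round-trip latency, a single 10GbE link tops out below 300K 4 KiB IOPS in this model, while one 6Gbps SAS cable alone offers 2.4x the raw bandwidth.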
Magnus_Bleken

ASKER

I have not mentioned 10Gbps or iSCSI once; I'm talking about SAS HBAs...
My question is: how would x (let's say 8) SSDs in an MSA 2040 compare to 24 x 10K SAS disks in a P2000? Both connected using SAS HBAs...
The P2000 doesn't support HP SSDs, but the MSA 2040 does.
I need "real world data"... :)
Philip Elder

I believe my examples above are "real world data" as they are direct from our experience building solutions for our clients.
Magnus_Bleken

ASKER

Ahh, hehe, yeah, sorry about that. That was one of my many questions; I wasn't sure whether to use iSCSI or a SAS HBA if we make any changes, but I'll stick with the SAS HBA.
Thank you for your answers :)
Philip Elder

You're welcome. :0)