Would it make much of a difference if, when we put in a new host, in addition to the SAS HDDs (RAID5) we added one 100GB enterprise SSD or EFD? This will be an HP DL380 in a remote office, with heavy database and web traffic. This server will only have local storage.
Dual 8 Core CPU with 192GB of RAM.
ESXi 5.1 U1 Standard.

If you have some white papers or studies on this, that would help a lot.

BusbarSolutions ArchitectCommented:
An SSD is definitely much faster, but adding a single disk poses a risk, since SSDs are susceptible to failure.
"SAS HDDs (Raid 5) "

See gun pointed at foot.
Do not, and I repeat, do NOT use RAID5 for production virtual machine storage.
It is far more important to avoid RAID5 for local storage than to do anything fancy with an SSD.

To put it this way: RAID5 gives you the worst of both worlds. Horrible I/O performance for real-world virtualization workloads (unless your intention is to run just one VM with 99% cryogenically cold data), and horrible resiliency. On HDDs larger than 600GB, the risk of total volume loss is rather high, because with RAID5 there is a good chance of failing to successfully rebuild the array after a disk failure.
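To put a rough number on that rebuild risk, here is a back-of-envelope sketch (my own illustration, not from a vendor paper): rebuilding RAID5 means reading every surviving disk end to end, and with a typical nearline URE spec of 1 error per 10^14 bits read, the odds of hitting at least one unrecoverable error climb quickly with capacity.

```python
# Rough estimate of the chance of hitting an unrecoverable read error
# (URE) during a RAID5 rebuild. Assumptions (mine, for illustration):
# every surviving disk is read end to end, and errors are independent.

def rebuild_failure_probability(disk_tb, surviving_disks, ure_rate=1e-14):
    """Probability of at least one URE while reading all surviving disks.

    ure_rate is errors per bit read; 1e-14 is a common nearline-SATA
    spec, enterprise SAS is often rated 1e-15 or better.
    """
    bits_read = disk_tb * 1e12 * 8 * surviving_disks
    p_clean = (1 - ure_rate) ** bits_read   # chance every bit reads back fine
    return 1 - p_clean

# 4-disk RAID5 of 1TB nearline drives: 3 surviving disks must be read.
print(round(rebuild_failure_probability(1.0, 3), 2))  # roughly 0.21
```

Even with these optimistic independence assumptions, a plain 1TB-class RAID5 rebuild carries on the order of a one-in-five chance of tripping over a URE; enterprise SAS drives fare better, but the trend against RAID5 is the same as capacities grow.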

An SSD for the system boot image makes sense in a cluster, to reduce power consumption; or rather, a pair of mirrored SSDs.

The best use for a single SSD will most likely be the vFRC (vSphere Flash Read Cache) feature, which will be introduced in the upcoming release of vSphere, vSphere 5.5.

The only other major benefit of an enterprise SSD in an ESXi server at the current time is to use it as SSD swap cache, BUT that doesn't make local storage a good idea.

Buying an SSD for vFRC or swap cache is no substitute for designing appropriate backend storage for your workload; these are mechanisms for dealing with burst loads.
Seth SimmonsSr. Systems AdministratorCommented:
If you are dealing with a heavy database, it's probably not good to virtualize in the first place, let alone virtualize on RAID5. Any VMware configuration with local storage should be RAID10. I noticed better performance, and rebuild time was fairly quick after a recently failed 300GB SAS drive, though rebuild time will vary depending on the I/O load of your guests. As mentioned, don't do RAID5 for virtualization.

"If you are dealing with heavy database, probably not good to virtualize in the first place"

Hey, virtualizing heavy databases can be beneficial, much as virtualizing other servers can. Of course, different application owners will have different ideas of what constitutes a "heavy" workload, but large Exchange and SQL servers, and many similar applications, can be virtualized well and gain the management functions, high-availability features, DR capabilities, and cost benefits provided by the virtualization software.

On the other hand, it won't work very well, and you won't achieve very high consolidation ratios, if the storage design isn't up to the task.

In particular, you should confer with virtualization, storage, or database architects about the specifics of your environment before rushing much production into a virtual environment, so they can help you design it to run the workloads within SLA and keep the virtualization cost-efficient.

Generally, RAID10 is appropriate for virtualization, and there are circumstances where RAID6 (or RAID60) implementations are appropriate in more complicated configurations involving 10+ spindles per array.

I do not believe there is any major vendor left seriously suggesting RAID5 as a good or reasonable practice for production, under any circumstances or in any configuration. It is an obsolete level that is not adequately resilient at current media capacities. The remaining use would be archival/nearline systems, where resiliency and performance do not matter, there are higher-level application checksums to verify data integrity, and there is at least one other backup aside from the archival system.
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:

RickEpnetAuthor Commented:
Thank you for this information.
RickEpnetAuthor Commented:
I appreciate the information on the RAID5. I should have mentioned that there will be a hot spare, and this is a DR server, so it will not be in production very often. Production has an EMC SAN.
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
No problems.
You have the option of booting an ESXi server from SD card or USB drive, but it is not a best practice in most cases, due to significantly longer boot times, additional risks, and scratch-disk requirements; a RAID1 pair of small boot drives for a dedicated boot volume is still recommended, particularly outside HA clusters.

In general, adding more DRAM will be much more beneficial than adding an SSD as swap cache.

Even for a DR server, RAID5, even with two hot spares, is generally not going to be acceptable for production virtualization workloads.

You may have missed the point: in RAID5 arrays built on currently common media capacities (>600GB), the chance of further failures during a rebuild is high. Hot spares are already factored in; the storage vendors expect them to be there.

But the immediate degradation after one failure, AND the read/write penalties under normal operations, are bigger issues for small arrays.

If production requires an EMC SAN, that should tell you something about the I/O requirements after a true DR failover event.

That's important to keep in mind, because I/O systems do not degrade gracefully once their capabilities are exceeded; unlike network, memory, and CPU, there is a mechanical element involved. Make sure that, after factoring in the RAID5 write penalty, your DR workload won't peak at too many IOPS per spindle.
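As a sketch of that sizing check (the per-spindle IOPS and the 70/30 read/write mix are my illustrative assumptions, not measured figures): RAID5 turns each random write into 4 backend I/Os (read data, read parity, write data, write parity), so the usable IOPS of a small array shrink quickly under a write-heavy database load.

```python
# Back-of-envelope effective IOPS for a small array, factoring in the
# RAID write penalty (RAID5 = 4 backend I/Os per write, RAID10 = 2).
# 175 IOPS per 15k SAS spindle and a 70% read mix are assumptions.

def effective_iops(spindles, iops_per_spindle=175, read_frac=0.7, write_penalty=4):
    raw = spindles * iops_per_spindle
    # Backend I/Os consumed per host I/O: reads cost 1, writes cost
    # `write_penalty`, weighted by the read/write mix.
    backend_per_io = read_frac * 1 + (1 - read_frac) * write_penalty
    return raw / backend_per_io

# Four 15k SAS spindles, 70% read workload:
print(round(effective_iops(4, write_penalty=4)))  # RAID5:  368
print(round(effective_iops(4, write_penalty=2)))  # RAID10: 538
```

Same four spindles, roughly 45% more usable IOPS on RAID10 at this mix, and the gap widens as the write fraction grows; that is the per-spindle math to run before trusting a small local RAID5 set with a failed-over production database.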