janhoedt asked:
RAID 0 & 5 instead of RAID 5 only on 5 disks

Hi,

I have a RAID 5 NAS on which I run two VMFS volumes for my ESX hosts.
Performance is poor, so I'd like to reconfigure the NAS.

My idea is to set up RAID 5 on 3 disks and put my data on it, and RAID 0 on the 2 other disks so I have some more performance there for the more important machines.

So what I would do:
- 3 disks in RAID 5 = 4 TB (2 TB each) with my data + 1 VMFS store
- 2 disks in RAID 0 = 4 TB with 1 or 2 VMFS stores
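A quick sanity check of those capacity numbers (a throwaway helper, assuming equal-sized 2 TB disks):

```python
# Usable capacity per RAID level, assuming 5 disks of 2 TB each,
# split 3 + 2 as described above. Figures are simple arithmetic,
# not vendor-reported capacities.

def usable_tb(level: str, disks: int, size_tb: float) -> float:
    """Usable capacity in TB for a few common RAID levels."""
    if level == "raid0":      # striping only: all capacity usable
        return disks * size_tb
    if level == "raid5":      # one disk's worth lost to parity
        return (disks - 1) * size_tb
    if level == "raid10":     # mirrored pairs: half the capacity
        return disks // 2 * size_tb
    raise ValueError(f"unknown RAID level: {level}")

print(usable_tb("raid5", 3, 2.0))   # 4.0 TB for the data + VMFS store
print(usable_tb("raid0", 2, 2.0))   # 4.0 TB for the fast VMFS store(s)
```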


VMs that need to be fast I can put on the RAID 0 (backing them up with VMware Data Recovery); machines for monitoring etc. would go on the RAID 5.

RAID 10 is also an option (I configured that before), but then if things get slow I have no way to migrate a slow VM to faster storage.

Please advise.
J.
ASKER CERTIFIED SOLUTION
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper):
NO RAID 0

1. With RAID 0 you have a 100% certainty that you will eventually experience data loss due to an unreadable block or a drive failure.
2. Unless you are bonding multiple 1 Gbit Ethernet ports or have something faster to access the NAS, even a single disk drive can read/write faster than you can get data to/from it over the network.
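To put rough numbers on point 2 (back-of-the-envelope assumptions, not measurements):

```python
# Back-of-the-envelope: 1 Gbit Ethernet vs. a single SATA disk.
# Both the overhead factor and the disk rate are assumptions.

GBIT_ETHERNET_MB_S = 1000 / 8      # 1 Gbit/s = 125 MB/s raw
PROTOCOL_OVERHEAD = 0.10           # assume ~10% lost to TCP/iSCSI framing
SINGLE_SATA_DISK_MB_S = 130        # typical 7200 rpm sequential rate (assumed)

net = GBIT_ETHERNET_MB_S * (1 - PROTOCOL_OVERHEAD)
print(f"usable network path:     ~{net:.0f} MB/s")    # ~112 MB/s
print(f"single disk, sequential: ~{SINGLE_SATA_DISK_MB_S} MB/s")
# The wire saturates before even one disk does, so striping two
# disks (RAID 0) buys little for sequential traffic over 1 GbE.
```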

If your performance is "poor", then you had best quantify that first. You could very well be throwing your money away by redoing the RAID. In fact, depending on what make/model of NAS you have and the specific make/model of disks, you won't see much of an improvement no matter how you have the RAID configured.
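As a starting point for quantifying it, even a timed sequential write tells you whether you are anywhere near wire speed (a rough sketch only; the datastore path below is hypothetical, it ignores caching and random I/O, and esxtop on the host or a dedicated benchmark tool will give far better data):

```python
# Minimal sequential-write probe against a mounted datastore path.
# This is a crude sketch, not a proper benchmark: it says nothing
# about random I/O or queue depth, which matter most for VMs.
import os
import time

TEST_FILE = "/path/to/datastore/bench.tmp"   # hypothetical mount point
BLOCK = b"\0" * (1024 * 1024)                # 1 MB blocks
COUNT = 256                                  # 256 MB total

start = time.monotonic()
with open(TEST_FILE, "wb") as f:
    for _ in range(COUNT):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())                     # force data out to the NAS
elapsed = time.monotonic() - start

print(f"sequential write: {COUNT / elapsed:.1f} MB/s")
os.remove(TEST_FILE)
```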
teomcam:
If I were you I would not use RAID 0 for any service in your server room, except for very unimportant data-store jobs. Since you are not happy with RAID 5, RAID 10 is the best option. You will lose all data if 1 HDD fails in RAID 0. RAID 10 and RAID 5 will both give you 1-HDD fault tolerance.
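To illustrate that difference (a toy model assuming independent failures at a made-up 3% annual rate, ignoring rebuild-window and unreadable-block risk):

```python
# Probability each layout survives one year without data loss,
# under a naive independent-failure model. p is an assumed
# per-disk annual failure probability, not a measured figure.

p = 0.03  # assumed 3% annual failure rate per disk

raid0_2  = (1 - p) ** 2                          # any failure is fatal
raid5_3  = (1 - p) ** 3 + 3 * p * (1 - p) ** 2   # survives one failure
raid10_4 = (1 - p ** 2) ** 2                     # dies only if a whole mirror pair dies

print(f"RAID 0  (2 disks): {raid0_2:.4f}")   # ~0.9409
print(f"RAID 5  (3 disks): {raid5_3:.4f}")   # ~0.9973
print(f"RAID 10 (4 disks): {raid10_4:.4f}")  # ~0.9982
```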
janhoedt (Asker):
The NAS is a Synology DS1511+ and I'm running iSCSI on it.
Note: for about 2 years I ran RAID 0 on a different NAS (Synology DS210j) with never any problem or corruption. My 10-year-old PC still boots from its 10-year-old disk.
As mentioned, RAID 0 would be used for performance, and the data would be backed up daily.
I haven't heard or read anything yet that convinces me not to use it.

RAID 0 on 2 disks would be for fast iSCSI VMs (backed up); RAID 5 on 3 disks for my data and slower VMs. When I see the performance of a VM drop (sometimes due to a data copy or something else), I can at least move it to other storage, which I can't do now and couldn't with RAID 10.
RAID 0 cannot survive any dropped drive, whereas RAID 10 can survive 2 failed drives if they are in separate mirrored pairs, since RAID 10 is a RAID 0 stripe across RAID 1 mirrors. That's the benefit of nested RAID.
So because you got lucky for 2 years and didn't lose an HDD, you are somehow immune to drive failures for all eternity??
I know the RAID advantages. I had RAID 10, but now I see disadvantages for my ESX environment.
Right now performance is impacted for ALL VMs if storage latency occurs (due to a backup running or something else). If I split the datastores over several disks, I can make sure performance is never impacted for certain machines.

I'm not immune to drive failures, which is why I take backups at regular intervals. That's why I want to split into a redundant part (my data and a datastore) plus a non-redundant one.
The thing is: I need multiple datastores to ensure performance for my most important VMs, and I don't see how else to achieve this than by splitting into multiple RAID configs.
You are not considering the performance advantages of redundant RAID levels (this assumes you have a decent controller). For example, in RAID 1 you have twice the IOPS of a single drive: the controller will give an I/O request to whichever disk can handle it first, so in a perfect world you have twice the read IOPS and twice the read throughput of a single disk. On writes, no advantage. HOWEVER, the larger the logical device, the larger the block size that VMware has to use. I don't remember the defaults, but there is a point where VMware has to increase the block size, so there is a point where your IOPS get divided by two because the logical drive is too big.

Also, you are not considering the impact of error recovery. Hit an ECC error and, even with premium disks, it could take several seconds to retry. With a redundant RAID level and load balancing, the controller can get the I/O from the other drive while the original one goes through the time-expensive recovery/reallocation of a bad block.

There are other advantages of certain RAID levels beyond these two. My point is that it is NOT just about redundancy and how many drives it takes to lose your data. Performance is affected by additional variables and common failure scenarios.
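As a rough illustration of those trade-offs, here are the usual rule-of-thumb formulas (assuming identical disks, an ideal controller, and the standard write penalties; idealized numbers, not measurements):

```python
# Rule-of-thumb effective IOPS per RAID level, assuming n identical
# disks and the usual write penalties: RAID 1/10 write each block
# twice, RAID 5 needs 4 I/Os per small random write.

def effective_iops(level: str, n: int, disk_iops: int, read_frac: float) -> float:
    raw = n * disk_iops                     # all spindles serve reads
    penalty = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4}[level]
    return raw * read_frac + raw * (1 - read_frac) / penalty

# Assumed workload: 70% reads, 80 IOPS per 7200 rpm disk.
for level, n in [("raid0", 2), ("raid5", 3), ("raid10", 4)]:
    print(level, round(effective_iops(level, n, disk_iops=80, read_frac=0.7)))
# raid0 160, raid5 186, raid10 272 -- the redundant layouts win here
# simply because they have more spindles serving reads.
```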
OK, I understand your explanation, but what are you trying to say?
I need my data streams (data backups) not to impact my VM activity. I only see that as possible by creating two RAID configs.
Either get rid of VMware entirely, or use VMDirectPath and dedicated controllers for each VM so the individual VMs get 100% of all I/O for the disks you assign to them. Disk performance is the bottleneck, and when VMware virtualizes the I/O, your performance for this type of load decreases significantly.
I bought my NAS especially for my ESX environment, so getting rid of it is not an option.
I'm not sure whether my HP MicroServers support DirectPath, how to configure it, or what the advantages/disadvantages are.
HP MicroServers do not support DirectPath.
You bought the wrong tool for the job. If you want better disk I/O, then don't virtualize. If you had needed a system to be a web server and screwed up and got one with 100 Mbit Ethernet, would you fix the problem by getting faster Ethernet, or by redesigning all your web content to minimize the size of each page to improve overall performance? If you want anything more than an incremental performance improvement, then you have to do something radical.
It's only a lab, so I don't expect lightspeed performance, just as good as possible. I already bought SSD drives and will run my host cache on them. Then I might run some VMs on local SSD (not redundant, I know). Some extra performance might also come from splitting up the RAID (RAID 5 and RAID 0, or RAID 1 if I stick to redundancy).

I might go for an extra (budget) NAS by buying another HP MicroServer and running a NAS on it, or run a virtual SAN (hanccocka has an article posted on that). But I don't know the details of those options yet (price, time to configure; after all, I've spent too much on it already and would finally like to play around with my VMs).
Anyhow, it remains a lab, so more performance would be nice but not critical. I want it to be as performant as I can afford budget-wise, and I'm pretty much at my limit now.