Clifford Jenkins

asked on

RAID 5 and Upgrading drives

I was taught that when you use RAID 5, all drives should be the same. Currently I have 4 drives in a RAID 5 configuration; one has gone bad, and I feel now would be a good time to increase the size. I have four 1TB drives and would like to replace the bad drive with a 2TB, and eventually replace them all with 2TB drives. I'm using an Iomega StorCenter ix4-200d, if that helps any.
ASKER CERTIFIED SOLUTION
Seth Simmons
Clifford Jenkins (ASKER)

I did notice that this model does come in a 12TB configuration (4 x 3TB); however, in RAID 5 that would still only be 9TB. I have two of these devices, both with a bad drive. I'm guessing I can purchase a 3TB and see where it takes me.
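For reference, here's a quick back-of-the-envelope sketch (plain Python, nothing specific to the Iomega firmware) of how RAID 5 usable space works out. It is limited by the smallest member, which is why the 4 x 3TB model nets 9TB and why a single larger drive adds nothing until every drive has been upgraded.

```python
# Rough RAID 5 capacity arithmetic: usable space is
# (number of drives - 1) * size of the smallest member.
def raid5_usable_tb(drive_sizes_tb):
    n = len(drive_sizes_tb)
    return (n - 1) * min(drive_sizes_tb)

print(raid5_usable_tb([1, 1, 1, 1]))  # 3 TB - the current four 1TB drives
print(raid5_usable_tb([2, 1, 1, 1]))  # 3 TB - one 2TB swapped in, no gain yet
print(raid5_usable_tb([3, 3, 3, 3]))  # 9 TB - the 12TB (4 x 3TB) model
```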
DON'T use RAID 5. It is highly unreliable, particularly with the large disk sizes you get today. Maybe two decades ago using RAID 5 was understandable, but today it is absolutely obsolete.

Do an extra backup with the LAN disconnected, then replace the disks with larger ones, create a new array, for example RAID 1 or 6, then restore the backup you just made.

Also, if this is the array you are booting the OS from, and your OS is an m$ OS, you won't be able to use any space on the array beyond 2TB. So either make sure you don't get disks that are too large, or make sure your RAID controller allows you to split the array into one small array for the OS and the rest for your data. That large array you can then set up as a GPT disk and utilize all its space. GPT disks can go beyond 2TB, but m$ OSes can't boot from GPT disks unless your BIOS is set to UEFI and the m$ OS is 64-bit. But for that to work, you would have to reinstall the OS. There are no such limits in OSes like Linux.
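As an aside, the ~2TB figure falls straight out of the MBR partition format, which addresses sectors with 32-bit values; a quick illustrative calculation (generic arithmetic, nothing vendor-specific):

```python
# MBR stores sector addresses as 32-bit LBAs; with 512-byte sectors the
# largest addressable partition is therefore 2^32 * 512 bytes (2 TiB).
max_mbr_bytes = (2 ** 32) * 512
print(max_mbr_bytes)             # 2199023255552
print(max_mbr_bytes / 10 ** 12)  # ~2.2 TB in decimal units
```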
SOLUTION
But with RAID-6 you'll lose two disks to parity instead of one.
At least nowadays, modern HDDs are very reliable. I've used RAID-5 in my HP StorageWorks 2600: 12 x 4TB SATA HDDs with no issues.
If you have only 4 disks, you might as well do RAID 10 instead of RAID 6.  There will be some speed benefit to RAID 10.

I also see nothing wrong with RAID 5 if you have a decent backup and a decent replacement plan. You just need to plan replacements well before you expect disk failure; don't run past the drives' warranty if the data is important to you. A basic 4 to 7 disk NAS is cheap, so you might as well buy a replacement every 2-3 years and rotate out the older NAS well before you expect the disks and systems to fail. I never had a problem with the small groups that didn't have the money for a full EMC or NetApp array when I did it that way. I had them keep the older one as a spare and a backup copy until they exceeded its capacity, then kept it as an archive, and then bought a newer, higher-capacity NAS and continued the rotation. With the rotation cycle and backups that were also duplicated to the older unit (at least until they exceeded its capacity), they had immediate redundancy and time in case a second RAID 5 disk failed. It can all be recreated.
Member_2_231077

Anyone using 12 * 4TB disks in RAID 5 should be called Junior IT Systems Engineer; the chances of punctured stripes due to bad blocks on the remaining disks during a rebuild are so high that you are not only risking your data but risking errors in it that you don't even know about. If you don't believe us, then phone HP support and ask them.
The Iomega StorCenter ix4-200d only allows RAID 5 and RAID 10. The drives in it are Seagate ST31000520AS, CC38, from about 2011. This is the backup solution from a previous-previous person. Due to a myriad of other things, I am trying to get these two up and running as quickly as possible. At the same time, due to the limitations of the Iomega, I will be looking for a much better and more up-to-date solution. At another site, we're running a dual EqualLogic setup.
"Anyone using 12 * 4TB disks in RAID 5 should be called Junior IT Systems Engineer"
If you are doing a single RAID 5 set with 12 disks then you are indeed an amateur.
"the chances of punctured stripes due to bad blocks on the remaining disks during a rebuild is so high that you are not only risking your data"
If you're using a 12-disk RAID, it's because you have a use case for it that involves more users, more capacity and much heavier access. You will indeed have more trouble when the disks are more heavily used.

Most of those home NASes don't even have a RAID 6 option. RAID 6 on a 4-disk system is quite wasteful; there's overhead in calculating parity that you don't have with RAID 10. With only 4 disks, you lose the same amount of space with RAID 10 as with RAID 6, but the RAID 10 will be faster. If this is for a small, light-use office, then these disks won't wear out as quickly as that 12-disk RAID, even with just a RAID 5.
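To put numbers on that, a small sketch (generic arithmetic, not tied to any particular controller) of usable space for a 4-drive set under the three layouts discussed here; RAID 6 and RAID 10 give the same capacity on 4 drives, so you'd be paying RAID 6's parity-calculation overhead without gaining any space over the mirrored set.

```python
# Usable capacity for n equal drives of size_tb under each layout.
def usable_tb(n, size_tb, level):
    if level == "raid5":
        return (n - 1) * size_tb   # one drive's worth of parity
    if level == "raid6":
        return (n - 2) * size_tb   # two drives' worth of parity
    if level == "raid10":
        return (n // 2) * size_tb  # mirrored pairs
    raise ValueError(level)

for level in ("raid5", "raid6", "raid10"):
    print(level, usable_tb(4, 2, level), "TB usable from 4 x 2TB")
# raid5 6 TB, raid6 4 TB, raid10 4 TB
```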


"The drives in it are Seagate ST31000520AS, CC38, from about 2011."
It's time to get new disks. Those are getting old and I wouldn't recommend them as production disks. They're fine for an additional tertiary backup to an existing backup.
@JuniorSystemsEngineer - I am with Andy on RAID-5. 20 years ago RAID-5 was it; we all used it and coped with its poor write performance. But things change (as they always do in IT): once spindles got above 600GB, the rebuild times started to become too long and the risk of a second disk failure while the first disk was rebuilding became unacceptably high.

Now we are starting to see spindles in the 12-14TB size bracket, and another issue raises its head: the Bit Error Rate, which is 1 in 10^14 bits for a consumer disk (approx 12.5TB). It's not quite as simple as that, but this article is a good starting point: http://www.theregister.co.uk/2015/05/07/flash_banishes_the_spectre_of_the_unrecoverable_data_error/
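To make that concrete, here's a simplified estimate of the chance of hitting at least one unrecoverable read error while reading every surviving drive during a RAID 5 rebuild, assuming independent bit errors at the quoted consumer-class rate of 1 in 10^14; real drives don't fail quite this neatly, as the linked article explains.

```python
# Probability of at least one unrecoverable read error (URE) while reading
# all surviving drives to rebuild a failed RAID 5 member, assuming a bit
# error rate of 1 per 1e14 bits and independent errors (a simplification).
def p_ure_during_rebuild(surviving_drives, drive_tb, ber=1e-14):
    bits_to_read = surviving_drives * drive_tb * 1e12 * 8
    return 1.0 - (1.0 - ber) ** bits_to_read

print(p_ure_during_rebuild(3, 1))    # ~0.21 - four 1TB drives, one failed
print(p_ure_during_rebuild(3, 3))    # ~0.51 - four 3TB drives, one failed
print(p_ure_during_rebuild(11, 4))   # ~0.97 - the 12 x 4TB example above
```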
It's always a good idea to consider these things:

- What is the *entire* process for replacing a failed or failing drive in a RAID system? Unfortunately, when systems are sold or built for small businesses this is almost an afterthought, so often the process is only defined when it's needed instead of up front. Things that should be considered include:
a) what hard drives will be acceptable as replacements?  Will they be available?  What's the plan?
b) What process will be used throughout the replacement process?  Does the operating system have to be booted and running to accomplish this?
c) Are the instructions for replacement available and clear?

- What is the purpose of using RAID? This should not be considered in a vacuum. What is the likely time frame in which the replacement capability will actually be needed? (I acknowledge that I'm not dealing with any speed advantages here.)
a) Generally I should think that the #1 purpose should be that operations can continue without interruption.
b) Next may be that reconstructing the system is so labor-intensive and time-consuming that having a ready method for replacement is worth planning for. Note that this may not at all deal with other recovery steps such as going to an earlier restore point, which are intended to deal with other issues (i.e. going back instead of moving forward). What plans for bare-metal backup are there?

Having considered these things seriously, one may find that software RAID is unacceptable and favor hardware RAID instead.
If continuous operation isn't a real issue then might not other redundancy methods be equally acceptable?
Either way, the *entire* process for recovery should be understood. Otherwise one is buying a warm security blanket without any notion of whether it will work as needed or as intended.
Thank you for your help and for staying on topic for what I was asking.