RAID 1+0 Array, 2TB Volume, Increase size / space

We are running RAID 1+0 on an Areca ARC-1680 controller with five 1TB Western Digital drives, giving an array volume of 2TB. More space is needed, so we would like to swap the 1TB drives, one at a time, for 2TB or 3TB drives to grow the array volume to 4TB or 6TB.

Is this possible? Or would it be necessary to copy off the data, rebuild the entire array and restore?

It is an NFS server running Ubuntu. It holds primarily backup images, but a portion is also used as secondary production storage (a mapped Windows data drive) for a VMware server.
ziceman asked:
DavidPresident commented:
That approach (swapping drives one by one and rebuilding after each swap) won't work. You need a full bare-metal-restore backup. Then blow the RAID away, replace the drives, and restore. Commercial backup packages will automatically resize the partition.

NOTE: you will NOT be able to boot from a logical drive larger than 2TB unless your machine has UEFI firmware and your controller supports UEFI boot for your OS.
DavidPresident commented:
Here is a better idea, assuming you have the space. Rebuild the system with 2 x (1 or 2TB) SAS disks in a RAID1 for the OS, swap, scratch table space, indexes, and anything else that is write-intensive. Then use the remaining slots for 2 x 4TB drives in a second RAID1. This will allow you to boot, and you'll get great performance.

SAS drives will significantly outperform SATA drives.  Use them for the boot partition.
ziceman (author) commented:
Thanks much for the reply. I figured this was likely the case but was hoping otherwise.

Both the SuperMicro chassis and the Areca controller card are somewhere between three and five years old, so I'm thinking the chances are low that UEFI is supported. Is that a logical assumption? I would likely need to update the Ubuntu OS and the controller card's drivers and firmware.

If I can confirm or otherwise establish UEFI boot support, would it be better to stick with a RAID 1+0 configuration to get the benefit of striping across the entire 4TB or 6TB array volume?
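One quick sanity check worth knowing here (a sketch, not a definitive firmware probe): on a running Linux box, the presence of `/sys/firmware/efi` tells you whether the *current* boot used UEFI. Its absence only means this boot was BIOS-style; it does not by itself prove the firmware lacks a UEFI mode.

```shell
# Report whether the running system was booted via UEFI or legacy BIOS.
# Absence of /sys/firmware/efi means this boot was BIOS-style; the firmware
# might still offer a UEFI mode that simply wasn't used.
if [ -d /sys/firmware/efi ]; then
    echo "Booted via UEFI"
else
    echo "Booted via legacy BIOS (or UEFI CSM)"
fi
```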
rindi commented:
Since your OS is Linux, you don't need a UEFI BIOS to boot from GPT disks; that is only required for Microsoft OSes. In my view, though, you should still separate the data from the OS, so it still makes sense to keep the OS on a separate, small disk and the data on another. I don't know your Areca controller, but many controllers let you create separate volumes on the array, so the OS sees two disks as a result: a small one you can set up as an MBR disk for the OS, and a GPT disk for the data. You could also use GPT for the OS disk, but in my view that only complicates matters.
DavidPresident commented:
It is highly likely that any 4-5 year old consumer-class controller will have problems with drives larger than 0xFFFFFFFF blocks (about 2.2TB with 512-byte sectors). In fact, since you ARE using Linux and want RAID1/RAID10, you should just use the md software RAID driver. You'll get read load balancing, so in a perfect world 2x the read speed of a single drive, and writes will be no worse than they are now. You will not get write-back caching, but that is dangerous unless you have a UPS anyway.
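That block-count limit comes from 32-bit LBA addressing: at most 2^32 sectors of 512 bytes each. A quick shell check of the arithmetic:

```shell
# 32-bit LBA: at most 2^32 addressable sectors of 512 bytes each.
SECTORS=$(( 0xFFFFFFFF ))          # 4294967295 sectors
LIMIT_BYTES=$(( SECTORS * 512 ))
echo "limit: $LIMIT_BYTES bytes (~$(( LIMIT_BYTES / 1000000000 )) GB)"
# → limit: 2199023255040 bytes (~2199 GB)
```

Hence any drive or logical volume past roughly 2.2TB falls outside what such a controller (or an MBR partition table) can address.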

So personally, I'd buy an inexpensive SAS/SATA JBOD controller on eBay. Use the two SATA ports for the two disks you mirror for booting, and the other controller for the remaining disks. This gives you good balancing of backplane I/O.

Also, since this is Unix (sorry, I missed that in my original response), you could image / resize the existing LUNs attached to the Areca card with a partitioning tool by booting from a USB stick, so everything is mounted read-only. You'll have to do a little work mucking with paths and using mdadm and the md driver (you use /dev/md0 instead of /dev/sda, and create a setup where /dev/sda + /dev/sdb form a RAID1 called /dev/md0, and so on). Plenty of guides online walk you through this.

But it will allow you to migrate without any risk of data loss or rebuilding the system. Image the RAID array onto a single disk, then convert that disk online to a software mirror.
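The single-disk-to-mirror conversion above can be sketched roughly as follows. The device names (/dev/sda1, /dev/sdb1) and mount points are illustrative assumptions, not your actual layout; the commands are echoed rather than executed so the sketch is safe to read and run as-is. Swap the `run` definition (and run as root, after backups) to do it for real.

```shell
# Sketch: convert a single data disk into one half of an md RAID1 mirror.
# Device names and paths below are illustrative assumptions only.
run() { echo "+ $*"; }   # echo only; replace with run() { "$@"; } to execute

# 1. Create a degraded RAID1 with only the new disk; "missing" holds the
#    second slot open so the old disk can join later.
run mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# 2. Make a filesystem on the array and copy the data over.
run mkfs.ext4 /dev/md0
run rsync -aHAX /mnt/olddisk/ /mnt/md0/

# 3. Add the old disk; the kernel resyncs the mirror in the background.
run mdadm --manage /dev/md0 --add /dev/sda1

# 4. Watch resync progress.
run cat /proc/mdstat
```

Once the resync finishes, `mdadm --detail /dev/md0` should show both members active and the array clean.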
ziceman (author) commented:
In doing some digging around on Areca and "expanding array", I came across this - http://hardforum.com/showthread.php?t=1356904&page=2

Is this possible because the volume is RAID 5 instead of RAID 1+0?
DavidPresident commented:
The question was asked and answered. RAID5 just will NOT work for you, because you will exceed the 2TB limit for the boot drive.