Ryan Rowley asked:

Replacing RAID drives with higher-capacity drives and resizing

This is more an academic discussion than a question.
I want to find out whether there has been any real advancement in RAID management or technology. I have two identical fileservers. Each has 24 hot-swap 500GB SATA drives on two 3ware controllers, set up as RAID 5 and configured as one large LVM data volume, with two drives reserved as hot spares (one per controller). They run CentOS 5 on two separate U320 SCSI hard drives, mirrored. So the RAID 5s are data only, and both fileservers are approaching 100% capacity. Normally, if I replaced a 500GB SATA drive with a 750GB or 1TB drive, it would only rebuild to 500GB. If I slowly replace each 500GB drive with a higher-capacity drive and let it rebuild, is there a mechanism, once all are replaced, to reclaim the untapped 5.5 to 11 terabytes per fileserver and grow the volume?
 
ezaton

I'm not sure about 3ware (you could check their web site); however, I KNOW this can be done on HP, IBM, and Dell server controllers.
As ezaton stated, this can be done. If the 3ware hardware supports growing the underlying volume, it is simply a matter of growing the filesystem once the LVM layer has been expanded. There are also other options if it does not support growing the existing volume but does allow creating another volume on the newly acquired free space; for instance, you could create another volume and use some form of volume management on the host to concatenate the two volumes together. Again, though, this assumes you are also able to extend the filesystem once the volume has been grown.
In Linux (and actually, most other *nix systems - replace the lingo with your preferred *nix lingo) you just define the additional space as an additional PV and extend the existing VG with this PV. Then your LVM system can extend an existing volume (online) to the desired size.
Isn't that what I just said? :)
Almost. As we say here - God is in the details.
LVM is a volume management system. It allows for concatenating or for extending a volume. You can't create another LVM and use another (TM) volume management solution to bind the additional space to the existing one. You just use LVM for it. Everything is LVM! :-)
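Roughly, the flow would look like this - a sketch only, with made-up device and volume names (/dev/sdb, /dev/sdc, vg_data, lv_data), assuming the 3ware firmware actually exposes the extra capacity after the last rebuild, and assuming the PVs sit directly on the units rather than inside partitions:

    # Case 1: the controller grew the existing unit in place.
    # Have the kernel re-read the unit's new size, then grow the PV to match.
    echo 1 > /sys/block/sdb/device/rescan
    pvresize /dev/sdb

    # Case 2: the controller only lets you carve a second unit out of the
    # reclaimed space - add that unit to the volume group as a new PV.
    pvcreate /dev/sdc
    vgextend vg_data /dev/sdc

    # Either way, extend the logical volume online into the new free extents
    # (older LVM2 may need an explicit -L size instead of +100%FREE).
    lvextend -l +100%FREE /dev/vg_data/lv_data

Whether you pvresize the existing PV or add a second one is mostly a matter of what the controller allows; LVM is happy either way.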
SOLUTION from aaron757
(This solution is only available to members.)
Ryan Rowley (ASKER)

I mentioned the LVM setup because it may be a factor that helps or limits my options. The main point is replacing smaller drives with larger drives, letting the RAID rebuild, and, when all drives are the larger size, recapturing the wasted space. Also consider doing the same with ext3 filesystems broken into 11 mounted 1TB RAID volumes.

With our 11TB under LVM, we have had no past problems growing or shrinking the logical volume space.
I am also using the XFS filesystem.  
XFS, as far as I recall, cannot be shrunk. And back to your question - it can easily be done (although it will take a long while to reconstruct each time).
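For completeness, the filesystem step once the logical volume has been extended would look roughly like this (the mount point /data and volume /dev/vg_data/lv_ext3 are made-up example names):

    # XFS grows online and is addressed by mount point; there is no XFS shrink.
    xfs_growfs /data

    # ext3 grows with resize2fs; a new enough kernel and e2fsprogs can do it
    # online, otherwise unmount the filesystem first.
    resize2fs /dev/vg_data/lv_ext3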

Aaron - I assumed that the usage of LVM, especially since the asker seems to know what he's saying (it's a compliment, believe me :-) ), was referring to Linux LVM. It seems I was right.
True, different storage layers exist, and almost all of them use some sort of logical translation layer. However, Linux LVM is Linux LVM.
ezaton - I have had ext3 and ReiserFS on both of these fileservers before. XFS currently works best for our needs. I have upgraded these systems before: they started with 250GB drives, then 450GB drives. Each time we did an upgrade, we had to back up the system, replace the drives, create the volumes and/or filesystems, then restore the data. Whenever I take these systems down, no one can do their work. When we put these systems into service, the backup/restore option was the only way to reclaim the unallocated disk space. I am looking to see whether there are any advancements I have missed that would reduce the impact on my clients.
I can say with some certainty, at least regarding Dell servers (which is what I work with), that it is not possible to accomplish what you are attempting. You will have to back up your data, delete the existing array, and recreate the array with your new higher-capacity drives. Again, this is the only validated way I know of with Dell servers.
The controller is made by 3ware, but as far as I recall, this can be done with Dell's PERC controllers.
SOLUTION
(This solution is only available to members.)
It would seem that the technology has not advanced in this area. With the rapid advancement of disk hardware, there is a need for this capability. From my software-engineering perspective, it makes me wonder what it would take to rewrite the RAID software to provide this functionality (see the sketch below).

I would hate to have to back up a 10000TB RAID on 173GB drives to replace them with 750GB drives. (I have worked with a NAS of this size and larger.)
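As an aside - and this is only a sketch with a made-up device name, not our 3ware setup - if I understand mdadm correctly, Linux software RAID (md) already exposes this capability:

    # After every member of /dev/md0 has been swapped for a larger drive
    # and has finished resyncing, tell md to use the full size of each member.
    mdadm --grow /dev/md0 --size=max

    # Then the new space propagates up the stack (PV, LV, filesystem) as above.
    pvresize /dev/md0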
ASKER CERTIFIED SOLUTION
(This solution is only available to members.)
Well, I should be able to test some of the ideas soon. I have ordered 100 750GB SATA drives.
What type of enclosure are you putting the drives into?
Fileservers that I built. Each holds two U320 SCSI drives and 24 SATA drives on 3ware controllers.
They run CentOS Linux on dual-Xeon Supermicro motherboards. All SATA drives are hot-swappable.
Very nice enclosures: three hot-swap power supplies, with a slim DVD drive and floppy on the front alongside the 24 HD bays.
This discussion has answered the question posed at the top: it was an academic discussion regarding methods to increase the available size of an existing disk array. I think points should be given here.

Thanks.
Ez
I spent last week testing this on one of my file servers. It did not work; however, I cannot say that it would never work. Other factors may have caused the problem.
I will split the points between ezaton, aaron757, and eagle0468.
Can you elaborate on the problems you encountered? I would like to know, for future reference, what the problem was.

Thanks
While working with the 3ware software, I noticed it was not behaving correctly.
I believe the 3ware software or firmware was bad.

Drives and RAID units would appear and disappear, and sometimes would not perform as directed.
Maybe you have a cabling problem? I did not encounter any problems when I used 3ware devices (I have done it about three times so far).
Cabling does not seem to be the problem. Once the RAID is created, it performs without error.
Performing a software refresh and turning the old drives into JBODs helped clear up the problems.

You have to turn the old drives into JBODs before removing them. If you don't, the drives you remove will be locked and unusable. 3ware does have some software that will unlock the drives, but it only works with an 8500-series controller. I learned this the hard way: I had to buy an 8500 to unlock 50 drives.