Solved

Replacing raid drives with high capacity and resizing

Posted on 2007-07-25
25
Medium Priority
891 Views
Last Modified: 2013-11-14
This is more an academic discussion than a question.
I want to find out if there has been any real advancement in RAID management or technology. I have two identical fileservers. Each has 24 hot-swap 500GB SATA drives on two 3ware controllers set up as RAID 5 and configured as one large LVM data volume, with two drives reserved as hot spares (one per controller). They run CentOS 5 on two separate U320 SCSI hard drives, mirrored. So the RAID 5s are data only, and both fileservers are approaching 100% capacity. Normally, if I replaced a 500GB SATA drive with a 750GB or 1TB drive, it would only rebuild to 500GB. Now, if I slowly replace each 500GB drive with a higher-capacity drive and let it rebuild, is there a mechanism, once all are replaced, to reclaim the untapped space and grow the volume to use the extra 5.5 to 11 terabytes per fileserver?
 
0
Comment
Question by:Ryan Rowley
24 Comments
 
LVL 7

Expert Comment

by:ezaton
ID: 19569193
I'm not sure about 3ware (you could check their web site); however, I KNOW this can be done on HP, IBM, and Dell server controllers.
0
 
LVL 3

Expert Comment

by:aaron757
ID: 19569594
As ezaton stated, this can be done. If the 3ware hardware supports the growth of the underlying LVM structure, it is simply a matter of growing the filesystem once the LVM has been expanded. There are also other options if it does not support growing the LVM but does allow for the creation of another LVM on the newly acquired free space; for instance, you could create another LVM and use some form of volume management on the host to concatenate the two volumes together. Again, though, this assumes you are also able to extend the filesystem once the volume has been grown.
0
 
LVL 7

Expert Comment

by:ezaton
ID: 19570170
In Linux (and actually, most other *nix systems) you just define the additional space (replace lingo with your preferred *nix lingo) as an additional PV and extend the existing VG with that PV. Then your LVM system can extend an existing volume (online) to the desired size.
0
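The steps above can be sketched as shell commands. This is only a hedged sketch: the device name /dev/sdc1 and the names vg_data and lv_data are placeholders for illustration, not names from the poster's systems; substitute your own.

```shell
# Assume the new space shows up as a block device, e.g. /dev/sdc1 (hypothetical).
pvcreate /dev/sdc1                         # initialize the new space as an LVM physical volume
vgextend vg_data /dev/sdc1                 # add the new PV to the existing volume group
vgdisplay vg_data                          # confirm the VG now shows additional free extents
lvextend -L +500G /dev/vg_data/lv_data     # grow the logical volume, online
```

Note that after lvextend the filesystem on the logical volume still has to be grown separately (e.g. resize2fs for ext3, xfs_growfs for XFS).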
 
LVL 3

Expert Comment

by:aaron757
ID: 19571136
Isn't that what I just said? :)
0
 
LVL 7

Expert Comment

by:ezaton
ID: 19571158
Almost. As we say here, God is in the details.
LVM is a volume manager. It allows for concatenation or for extending a volume. You can't create another LVM and use another (TM) volume management solution to bind the additional space to the existing one; you just use LVM for it. Everything is LVM! :-)
0
 
LVL 3

Assisted Solution

by:aaron757
aaron757 earned 450 total points
ID: 19571294
I was using highstar1's terminology when I used the acronym LVM. But it is possible to use multiple layers of different LVMs to address this situation, as I originally stated. For instance, your storage array may have its own volume manager that maintains the physical devices as well as the creation of the RAID volumes (and possibly even the carving up of those volumes into smaller LUNs that get assigned to a system). You could then have a wholly different LVM on the host to manipulate the LUNs that are presented, e.g. concatenation of LUNs, RAID on top of RAID (not recommended, but possible), and so on; the filesystem would then reside on the logical devices created by the OS LVM. The filesystem itself would really be the determining factor in whether or not you could utilize the new space. Most filesystems are mature enough now that growing them is of little consequence, and most LVMs are fairly on par with each other with regard to the functionality they provide, depending on the level at which they are utilized within the architecture.
0
 
LVL 2

Author Comment

by:Ryan Rowley
ID: 19574220
I mentioned the LVM setup because it may be a factor that helps or limits my options. The main point is replacing smaller drives with larger drives, letting the RAID rebuild, and, when all drives are of the larger size, recapturing the wasted space. Also consider doing the same with ext3 filesystems broken into 11 mounted 1TB RAID volumes.

Within our 11TB under LVM, we have had no problem in the past growing or shrinking the logical volume space.
I am also using the XFS filesystem.
0
 
LVL 7

Expert Comment

by:ezaton
ID: 19575223
XFS, as far as I recall, cannot be shrunk. And back to your question: it can easily be done (although it will take a long while to reconstruct each time).

Aaron - I assumed that the mention of LVM was referring to Linux LVM, especially since highstart seems to know what he's saying (it's a compliment, believe me :-) ). It seems I was right.
True, different storage layers exist, and most of them use some sort of logical translation system. However, Linux LVM is Linux LVM.
0
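For what it's worth, while XFS cannot be shrunk, it can be grown online while mounted, once the underlying logical volume has been extended. A minimal sketch, assuming a hypothetical mountpoint /data:

```shell
# Grow a mounted XFS filesystem to fill its (already-extended) device.
# xfs_growfs takes the mountpoint, not the block device; /data is a placeholder.
xfs_growfs /data
df -h /data    # verify the new size
```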
 
LVL 2

Author Comment

by:Ryan Rowley
ID: 19575984
ezaton - I have had ext3 and ReiserFS on both of these fileservers before. XFS currently works best for our needs. I have upgraded these systems before: they started with 250GB drives, then 450GB drives. Each time we did an upgrade, we had to back up the system, replace the drives, create the volumes and/or filesystems, then restore the data. Whenever I take these systems down, no one can do their work. When we put these systems into service, the backup/restore option was the only way to reclaim the unallocated disk space. I am looking to see if there are any advancements I have missed that would reduce the impact on my clients.
0
 
LVL 3

Expert Comment

by:eagle0468
ID: 19589819
I can say with some certainty, at least regarding Dell servers (which is what I work with), that it is not possible to accomplish what you are attempting. You will have to back up your data, delete the existing array, and recreate the array with your new higher-capacity drives. Again, this is the only validated way I know of with Dell servers.
0
 
LVL 7

Expert Comment

by:ezaton
ID: 19590295
The controller is made by 3ware, but as far as I recall, it can be done on Dell's PERC controllers.
0
 
LVL 3

Assisted Solution

by:eagle0468
eagle0468 earned 450 total points
ID: 19590783
ezaton, I work with Dell PERC controllers on a daily basis. I know for a fact that it can't be done. What will happen (and I know this because I tested it yesterday evening) is that the additional space becomes available for a new array, but cannot be used to reconstruct the existing array. The only way I know to add space to an existing array without increasing the number of physical disks is to back up the data, put the new drives in, create a new array, and restore from backup. It is also the quickest and safest way to do so, as it requires a data backup that will not take as long as a reconstruct does, and it ensures data integrity in the process.
0
 
LVL 2

Author Comment

by:Ryan Rowley
ID: 19592537
It would seem that the technology has not advanced in this area. With the rapid advancement of disk hardware, there is a need for this capability. From my software engineering perspective, it makes me wonder what it would take to rewrite the RAID software to provide this functionality.

I would hate to have to back up a 10000TB RAID on 173GB drives to replace them with 750GB drives. (I have worked with a NAS of this size and larger.)
0
 
LVL 7

Accepted Solution

by:
ezaton earned 600 total points
ID: 19592792
Wait. If you can add an additional array over the increased space (and not extend the current array, as eagle0468 said), you could add it as a PV to the existing LVM and still extend the size of the volume. Should work just right.

As always, keeping an up-to-date backup of your data is good advice (which is hard to follow in some cases, but still...).
0
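The whole sequence described above can be sketched end to end. This is a hedged outline, not a tested procedure for these 3ware units: the device /dev/sdc (the new array as presented to the OS), the names vg_data and lv_data, and the mountpoint /data are all hypothetical placeholders.

```shell
pvcreate /dev/sdc                            # turn the new array into an LVM physical volume
vgextend vg_data /dev/sdc                    # fold it into the existing volume group
lvextend -l +100%FREE /dev/vg_data/lv_data   # give all the new space to the data LV, online
xfs_growfs /data                             # grow the mounted XFS filesystem to match
```

The -l +100%FREE form hands every newly freed extent to the logical volume in one step; use -L +<size> instead if you want to grow by a fixed amount.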
 
LVL 2

Author Comment

by:Ryan Rowley
ID: 19704811
Well I should be able to test some of the ideas soon. I have ordered 100 750GB SATA drives.
0
 
LVL 3

Expert Comment

by:eagle0468
ID: 19734731
What type of enclosure are you putting the drives into?
0
 
LVL 2

Author Comment

by:Ryan Rowley
ID: 19771713
Fileservers that I built. Each holds two U320 SCSI drives and 24 SATA drives on 3ware controllers.
They run CentOS Linux on dual-Xeon Supermicro motherboards. All SATA drives are hot-swappable.
Very nice enclosures: three hot-swap power supplies, with a thin DVD drive and a floppy on the front alongside the 24 HD bays.
0
 
LVL 7

Expert Comment

by:ezaton
ID: 19996257
This discussion has answered the question posed at the top. It was an academic discussion regarding methods to increase the available size of an existing disk array. I think points should be awarded here.

Thanks.
Ez
0
 
LVL 2

Author Comment

by:Ryan Rowley
ID: 19998861
I spent last week testing this question on one of my file servers. It did not work; however, I cannot say that it would never work. Other factors may have caused the problem.
0
 
LVL 2

Author Comment

by:Ryan Rowley
ID: 19998908
I will split the points between ezaton, aaron757, and eagle0468.
0
 
LVL 7

Expert Comment

by:ezaton
ID: 19999193
Can you elaborate on the problems you encountered? I would like to know, for future reference, what the problem was.

Thanks
0
 
LVL 2

Author Comment

by:Ryan Rowley
ID: 20009676
When working with the 3ware software, I noticed it was not behaving correctly.
I believe that the 3ware software or firmware was bad.

Drives and RAIDs would appear and disappear, and sometimes would not perform as directed.
0
 
LVL 7

Expert Comment

by:ezaton
ID: 20011006
Maybe you have a cabling problem? I did not encounter a problem when I used 3ware devices (I've done it about three times so far).
0
 
LVL 2

Author Comment

by:Ryan Rowley
ID: 20040448
Cabling does not seem to be the problem. Once the RAID is created, it performs without error.
Performing a software refresh and turning the old drives into JBODs helped clear up the problems.

You have to turn the old drives into JBODs before removing them. If you don't, the drives you remove will be locked and unusable. 3ware does have some software that will unlock the drives, but it only works with an 8500-series controller. I learned this the hard way: I had to buy an 8500 to unlock 50 drives.
0
