FoxKeegan (United States) asked:

Is it possible to expand a 3 disk RAID-5 into a 4 disk RAID-5 without losing the data with a Perc5i/Integrated card?

I'd like to, in the future, expand a 3 disk RAID-5 array running on a PowerEdge 2950 that has a Perc5i / Integrated card.

It will be running ESXi 4.1 Update 1.

The goal is to avoid moving all the VMs on the data store off this array, so I don't have to destroy it to expand it.

I've been told this isn't possible; I've also been told it is possible, with the OpenManage software from Dell.

To date, I've managed to run vihostupdate from a vSphere Management Assistant instance against the oem-dell-openmanage-esxi_6.2.0-A00.zip file (after editing the XML file inside it to permit installation on 4.1.0). It reported success, but nothing looks different and I'm not sure how to access it, so I assume there is other software I need in order to take advantage of these new 'patches' I just put in. I'm also still not certain it will permit me to access the PERC software from the ESXi OS. Tomorrow I'll probably open the SSH ports and attempt to install OM-SrvAdmin-Dell-Web-LX-6.5.0-2247.ESX41.i386_A01.tar.gz.
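For anyone following along, a minimal sketch of that vihostupdate step as run from the vMA. The host name is a hypothetical placeholder; the bundle name is the one from this thread, and a reboot is generally needed before the OpenManage providers show up:

```shell
# Sketch only: host name is a hypothetical placeholder.
HOST=esxi01.example.com
BUNDLE=oem-dell-openmanage-esxi_6.2.0-A00.zip

# vihostupdate is the vSphere CLI / vMA patch tool for ESX/ESXi 4.x.
CMD="vihostupdate --server $HOST --install --bundle $BUNDLE"
echo "$CMD"

# After a successful install, reboot the host, then verify:
echo "vihostupdate --server $HOST --query   # list installed bulletins"
```

The `--query` call is how you confirm the bundle actually landed, since the host UI itself shows nothing different.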

I'd really just like to figure the correct way to get the most tools available to manage my servers and upgrade them as painlessly as possible in the future.

Thank you in advance.
SOLUTION by David (United States)
Is the VMware ESXi 4.1 installation also on the same array as the datastore?

It is possible to expand the VMFS datastore into the new free space on the disk, provided there are no more than 4 primary partitions on the disk. If you have installed ESXi 4.1 onto the same array that contains the initial datastore, this may cause issues.

It's recommended to install ESXi separately and have a dedicated array for the VMFS datastore.
FoxKeegan (Asker):
I always seem to forget one bit of critical information: the ESXi 4.1u1 install runs off a bootable USB stick.
I'm presently playing with three 2TB disks using 8MB blocks. I'm still trying to expand the existing datastore first (another goal I'm trying to solve myself, so I need not ask that question here), but I was hoping I could just expand it again after adding another disk. Thank you.

Can the controller resize the device from the BIOS menu? (CTRL+R at boot) or am I correct in assuming I must install some sort of software that runs on ESXi itself?

If it's possible in BIOS, I've not yet seen the option, but can keep digging/researching.  (Not the most intuitive BIOS I've played with)
An 8MB block size is very wrong for a RAID controller. Any time one of the VMs so much as ACCESSES a file, anywhere, this will require 64MB worth of disk I/O.
While it's getting a little off the primary topic question, I'll respond:
Is the best practice, then, to simply make as many small drives on the array as needed and split large VMs into several smaller files? To my knowledge, 8MB is required to form a 2TB partition. I'll look for other materials on best practices in this matter, but any documentation you may know of is appreciated.
The following block sizes are required on the formatted datastore, depending on the size of the virtual machines you are using. So if you required a 2TB virtual disk, you would use an 8MB block size on the formatted VMFS datastore. It's not often used, because 2TB virtual disks are uncommon.

2MB and 4MB block sizes are more common.

• 1MB block size – 256GB maximum file size
• 2MB block size – 512GB maximum file size
• 4MB block size – 1024GB maximum file size
• 8MB block size – 2048GB maximum file size
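Those maximums follow a simple VMFS-3 rule: the maximum file size is the block size times roughly 256K file blocks, which works out to block size (in MB) x 256 GB. A quick arithmetic sketch:

```shell
# VMFS-3 rule of thumb: max file size = block size x ~256K file blocks,
# i.e. block size in MB x 256 gives the maximum file size in GB.
for bs_mb in 1 2 4 8; do
  max_gb=$((bs_mb * 256))
  echo "${bs_mb}MB block size -> ${max_gb}GB maximum file size"
done
```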

If creating a 2TB single LUN/Array for ESX, remember the maximum size of a VMFS formatted partition is 2TB - 512 bytes (so just a little smaller!)
Getting way off the initial topic here; I clearly need to ask another question regarding block size in a separate topic.

Regarding dlethe's first response: "The controller will resize the device, but VMware won't automagically know what to do with it."

So the controller will resize the device.  Given the rest of that statement, it sounds like it can be resized without destroying the data.

Can this resize process be done from the controller's BIOS, or does it require Dell OpenManage software?  (Which I'm still trying to figure out how to install on ESXi, but I've been too busy to work on it the past few days)
There is confusion here... the RAID controller and VMware each have block sizes, or better, defined I/O sizes. I am saying that an 8MB block size on the controller is way too big for a RAID-5.
If you resize the array using the controller, you will then need to resize the VMFS partition on the datastore, using the Increase Datastore wizard or extents, provided your ESX/ESXi installation is not on the same array as the VMFS datastore.
Definite confusion.

I get the feeling what I'm attempting to do is foolish for some reason, so rather than getting an answer on how to do it, I'm being told a different way to do it entirely.  I'll write up everything I'm trying to do, in detail, tomorrow, and rather than asking the original question, perhaps I can be told what I should be doing instead.

Hanccocka: You said: "If you resize the array using the controller." How do you expand a RAID-5 array from three disks to four (or more) using a Perc5 Integrated controller, without losing the data? Is this done in the controller software accessible during the boot sequence, or with some other software? This is my question.

I realize this must be getting frustrating.  Thank you for your patience. I'll try to give a "big picture" of everything tomorrow.
SOLUTION
You may have an option to expand the array using the controller's BIOS option at POST.

Now we're getting somewhere. I haven't found anywhere to do this yet. I'm at work now, but I'll check the firmware version of the controller when I get home. It's a Perc5i / Integrated controller (that is the controller version). This was the first part of my original question.

Supposedly there is a "Reconstruction Wizard" as part of a larger suite designed to handle the PERC, but if this is required (Dell OpenManage, perhaps?), I've yet to figure out how to get it to work on ESXi. This was the second part of my original question.

Backstory
I've two Dell PowerEdge 2950s. They are going to two different locations, linked by VPN. The plan was to install three 2TB drives into each, in a RAID-5 configuration. Each would have ESXi 4.1u1 installed on a bootable USB thumbdrive. Each would have a Windows 2008 VM to act as a domain controller. Each would also have a Windows 2008 member-server VM to act as an Exchange server. Finally, each would have a CentOS VM that RSYNCs with the other, acting as a large NAS. This VM would likely hold VMs via NFS for other machines to connect to (mostly PowerEdge 2850s).

The plan was to purchase additional 2TB hard drives in the future to add to and expand the original RAID-5 without data loss.  I realize Microsoft Exchange prefers RAID-1 to RAID-5, but the workload isn't high enough on these machines to worry about that performance hit.  The data capacity saved by one large RAID-5 is more valuable than the performance gained by RAID-1.
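For the capacities involved here: RAID-5 usable space is (n-1) x disk size, so growing from three to four 2TB disks adds exactly one disk's worth of usable space. A quick sketch of that arithmetic:

```shell
# RAID-5 usable capacity = (number of disks - 1) x disk size
DISK_TB=2
for n in 3 4; do
  usable=$(( (n - 1) * DISK_TB ))
  echo "RAID-5 with ${n} x ${DISK_TB}TB disks -> ${usable}TB usable"
done
```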

For the sake of simplicity (Although we're probably a bit past that, eh?) please don't worry about block sizes, or that I will need to add additional LUNs in the VMFS after expansion.  I'm aware of this, that's fine. (Especially since ESXi can't make a drive bigger than 2TB, so a separate LUN is required regardless.)  I'll likely post another question regarding block sizes of both the RAID array and VMFS LUNs.

At this point I'm starting to believe it's simply going to be easier to move the Windows server VMs elsewhere, destroy the RSYNC duplicate, rebuild the array the old fashioned way, destroying the data and move the Windows server VMs back. Trying to expand the arrays seems highly complicated.
Yes, that plan would make it easier. I don't know if you are using a free or licensed version of ESX/ESXi, but you could download a trial of one of the many backup products to make this easier for you, and treat it as a real DR exercise:

1. Backup ALL VMs.
2. Flatline the box.
3. Restore ALL VMs.
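A rough sketch of those three steps as commands, assuming SSH access to the host. The datastore path and backup destination are hypothetical placeholders, and the real commands are shown via echo since they depend on your environment:

```shell
# Sketch of the manual backup/rebuild/restore approach.
# The datastore path and backup target below are hypothetical placeholders.
DATASTORE=/vmfs/volumes/datastore1
BACKUP=backuphost:/backups

# 1. Power off the VMs, then copy their folders off the array:
echo "vim-cmd vmsvc/getallvms              # list VM IDs on the host"
echo "scp -r $DATASTORE/* $BACKUP          # copy VM folders to backup"
# 2. "Flatline the box": rebuild the array in the PERC BIOS (Ctrl+R)
#    and recreate the VMFS datastore.
# 3. Copy the folders back and re-register each VM:
echo "vim-cmd solo/registervm $DATASTORE/myvm/myvm.vmx"
```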
ASKER CERTIFIED SOLUTION
This page contains a reference which is also useful and relevant to the subject: http://support.dell.com/support/edocs/software/svradmin/5.1/en/omss_ug/html/vdmgmt.html

An excerpt:

Virtual Disk Task: Reconfigure (Step 1 of 3)

Does my controller support this feature? See "Appendix: Supported Features."

The Reconfigure task enables you to change the virtual disk configuration. Using this task, you can change the RAID level and increase the virtual disk size by adding physical disks. On some controllers, you can also remove physical disks.

Before continuing with the virtual disk reconfiguration, you should be familiar with the information in "Starting and Target RAID Levels for Virtual Disk Reconfiguration" and "Choosing RAID Levels and Concatenation".

       NOTE: You cannot reconfigure a virtual disk on a controller that is operating in cluster mode.
       NOTE: On the PERC 5/E controller, you can create no more than 64 virtual disks. After you have reached this limit, you will no longer be able to reconfigure any of the virtual disks on the controller.
       NOTE: On Linux, if you do a reconfigure on the same controller on which the operating system resides, you may experience extremely slow system performance until the reconfigure is complete.
       NOTE: You may want to review "Virtual Disk Considerations for PERC 3/SC, 3/DCL, 3/DC, 3/QC, 4/SC, 4/DC, 4e/DC, 4/Di, 4e/Si, 4e/Di, CERC ATA100/4ch, PERC 5/E and PERC 5/i, and SAS 5/iR Controllers". This section contains considerations that also apply to reconfiguring a virtual disk on these controllers.
To Reconfigure a Virtual Disk: Step 1 of 3

Select the physical disks that you want to include in the virtual disk. You can expand the virtual disk's capacity by adding additional physical disks. On some controllers, you can also remove physical disks.

The changes you make to the physical disk selection are displayed in the Selected Physical Disks table.

       NOTE: For a controller that has more than one channel, it may be possible to configure a virtual disk that is channel-redundant. See "Channel Redundancy and Thermal Shutdown" for more information.
Click Continue to go to the next screen or Exit Wizard if you want to cancel.

To locate this task in Storage Management:

Expand the Storage tree object to display the controller objects.

Expand a controller object.

Select the Virtual Disks object.

Select Reconfigure from the Available Tasks drop-down menu.

Click Execute.

Virtual Disk Task: Reconfigure (Step 2 of 3)

Does my controller support this feature? See "Appendix: Supported Features."

This screen enables you to select the RAID level and size for the reconfigured virtual disk.

To Reconfigure a Virtual Disk: Step 2 of 3

Select the new RAID level for the virtual disk. The available RAID levels depend on the number of physical disks selected and the controller. The following describes possible RAID levels:

Depending on the controller, Concatenated enables you to combine the storage capacity of several disks or to create a virtual disk using only a single physical disk. See "Number of Physical Disks per Virtual Disk" for information on whether the controller supports a single physical disk or two or more when using Concatenated. Using Concatenated does not provide data redundancy nor does it affect the read and write performance.

Select RAID 0 for striping. This selection groups n disks together as one large virtual disk with a total capacity of n disks. Data is stored to the disks alternately so that they are evenly distributed. Data redundancy is not available in this mode. Read and write performance is enhanced.

Select RAID 1 for mirroring disks. This selection groups two disks together as one virtual disk with a capacity of one single disk. The data is replicated on both disks. When a disk fails, the virtual disk continues to function. This feature provides data redundancy and good read performance, but slightly slower write performance. Your system must have at least two disks to use RAID 1.

Select RAID 1-concatenated to span a RAID 1 disk group across more than a single pair of physical disks. RAID 1-concatenated combines the advantages of concatenation with the redundancy of RAID 1. No striping is involved in this RAID type.

Select RAID 5 for striping with distributed parity. This selection groups n disks together as one large virtual disk with a total capacity of (n-1) disks. When a disk fails, the virtual disk continues to function. This feature provides better data redundancy and read performance, but slower write performance. Your system must have at least three disks to use RAID 5.

Select RAID 10 for striping over mirror sets. This selection groups n disks together as one large virtual disk with a total capacity of (n/2) disks. Data is striped across the replicated mirrored pair disks. When a disk fails, the virtual disk continues to function. The data is read from the surviving mirrored pair disk. This feature provides the best failure protection, read and write performance. Your system must have at least four disks to use RAID 10.

Type the size for the reconfigured virtual disk in the Size text box. The minimum and maximum allowable size is displayed under the Size text box. These values reflect the new capacity of the virtual disk after any addition or deletion of physical disks which you may have chosen in "Virtual Disk Task: Reconfigure (Step 1 of 3)".

       NOTE: On the CERC SATA1.5/2s controller, you must specify the maximum virtual disk size.
       NOTE: The PERC 3/SC, 3/DCL, 3/DC, 3/QC, 4/SC, 4/DC, 4e/DC, 4/Di, 4e/Si, 4e/Di, and CERC ATA100/4ch controllers do not allow you to change or reconfigure the virtual disk size.
Click Continue to go to the next screen or Exit Wizard if you want to cancel.

Virtual Disk Task: Reconfigure (Step 3 of 3)

Does my controller support this feature? See "Appendix: Supported Features."

This screen enables you to review your changes before completing the virtual disk reconfiguration.

To Reconfigure a Virtual Disk: Step 3 of 3

Review your changes. The New Virtual Disk Configuration table displays the changes you have made to the virtual disk. The Previous Virtual Disk Configuration displays the original virtual disk prior to reconfiguration.

Click Finish to complete the virtual disk reconfiguration. If you want to exit without changing the original virtual disk, click Exit Wizard.

       NOTE: On some controllers, performing a Rescan while a reconfiguration is in progress will cause the virtual disk configuration and the physical disk state to display incorrectly. For example, changes to the virtual disk's RAID level may not be displayed and the state of physical disks that were added to the virtual disk may display as Ready instead of Online.
Considerations for Concatenated to RAID 1 Reconfiguration on PERC 3/Si, 3/Di, and CERC SATA1.5/6ch Controllers

When reconfiguring a concatenated virtual disk to a RAID 1 on a PERC 3/Si, 3/Di, or CERC SATA1.5/6ch controller, the reconfigured virtual disk may display the Resynching state. When reconfiguring from a concatenated virtual disk to a RAID 1, data is copied from the single concatenated disk to the RAID 1 mirror. The controller perceives this operation as similar to resynching a mirror, and therefore may display the Resynching state.

Performing a controller rescan during the virtual disk reconfiguration may also cause the virtual disk to display a Resynching state.

While the virtual disk displays a Resynching state, the "Pause Check Consistency" and "Cancel Check Consistency" tasks will be available. Executing either of these tasks on the virtual disk while it is in Resynching state will cause the virtual disk to be in a Failed Redundancy state.
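The usable capacities described in the excerpt reduce to simple formulas for n identical disks of size s: RAID 0 gives n x s, RAID 1 gives s, RAID 5 gives (n-1) x s, and RAID 10 gives (n/2) x s. A quick sketch, using four 2TB disks as a hypothetical example:

```shell
# Usable capacity per RAID level, per the formulas in the excerpt above.
n=4; s=2                      # hypothetical example: four 2TB disks
r0=$(( n * s ))               # RAID 0: striping, no redundancy
r1=$s                         # RAID 1: two-disk mirror
r5=$(( (n - 1) * s ))         # RAID 5: one disk's worth of parity
r10=$(( n / 2 * s ))          # RAID 10: half the disks hold mirrors
echo "RAID 0: ${r0}TB  RAID 1: ${r1}TB  RAID 5: ${r5}TB  RAID 10: ${r10}TB"
```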
Full answer from author.  Points awarded for highly appreciated help from experts.