burkem3434 (United States of America) asked:
Anyone have experience creating an image on RAID 1 and restoring to RAID 10 (Server 2012 w/SQL)?

Evening,

A brand-new site today has 29 GB left of a 1.2 TB hot-plug dual-port SAS volume in a RAID 1 config. It is all data, with no temp cleanups left. My first thought was to image the disk to a larger drive, but larger drives aren't compatible with that server according to HPE, and 1.6 TB doesn't really buy a lot of space.

My second thought is to build a RAID 10 with four 1.2 TB drives. The "sounds good on paper" belief is that I can create a bare-metal image from the one drive and restore it to the new RAID 10 volume. What is anyone's experience with this process, and what software did you use to create the bare-metal image? Any special considerations for SQL, or is that irrelevant with bare metal?

Thanks for all insights.
ASKER CERTIFIED SOLUTION
Kimputer
(Solution content available to Experts Exchange members only.)
burkem3434 (Asker):
OK, great, that sounds like what I had hoped for. If a bare-metal image is created from a single drive, it can be restored to the RAID 10 volume without any changes necessary. What software do you prefer? Acronis?
SOLUTION
(Solution content available to Experts Exchange members only.)
Windows Server Backup will likely do the job... for free.
What resources are available, and how much downtime is acceptable?
What is the existing storage layout, from the OS to the data drives?

Does the system have four free bays, and can you get four disks to create a new RAID 10 volume to which the SQL DBs can be relocated?
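For reference, the built-in Windows Server Backup mentioned above can be driven from the command line with `wbadmin`. A minimal sketch, assuming a hypothetical network share `\\backupsrv\images` as the target (substitute your own backup destination):

```bat
:: One-off bare-metal-recovery backup with Windows Server Backup (wbadmin).
:: -allCritical includes everything needed for a bare-metal restore
:: (OS volume, boot files, system state).
:: \\backupsrv\images is a placeholder path - use your own share or disk.
wbadmin start backup -backupTarget:\\backupsrv\images -allCritical -vssFull -quiet
```

The restore side is then done by booting the install/recovery media and choosing System Image Recovery, pointing it at the new RAID 10 logical disk.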
Two 1.2 TB 10k SAS drives, currently in a RAID 1 configuration. Six bays available. It was set up as a single C: drive using all 1.2 TB (a folder called Shares contains all the data).
How old is the system, and what is the upgrade cycle? Potentially, you are within a year of going through an upgrade cycle.

Is it the sole server in the setup? Are you considering virtualizing?

You could add a new volume, then transition the data and SQL at your own pace.
Member_2_231077:
If you have a battery backed Smart Array controller you can simply add two drives and convert from RAID 1 to RAID 10 or any other RAID level on the fly and extend the logical disk. No need to image, no need to shut down but the process does take a day or so to redistribute the data blocks.
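On a Smart Array controller that online conversion is driven with HPE's CLI (`ssacli`, formerly `hpssacli`/`hpacucli`). A sketch of the idea; the slot number, array letter, logical drive number, and bay IDs below are assumptions, so confirm your own layout with the `show config` command first:

```bat
:: Show the current controller, array, and logical drive layout.
ssacli ctrl slot=0 show config

:: Add the two new physical drives to the existing array
:: (the port:box:bay IDs here are examples only).
ssacli ctrl slot=0 array A add drives=1I:1:3,1I:1:4

:: Migrate the logical drive from RAID 1 to RAID 1+0.
:: This runs in the background; the server stays online throughout.
ssacli ctrl slot=0 ld 1 modify raid=1+0
```

As noted above, the transformation needs the cache battery/supercap present, and the block redistribution can take a day or so to finish.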

There is one problem, and it applies to imaging as well: you will cross the 2 TB boundary, and if the disk is currently MBR you can only use 2 TiB of it, since it is the boot volume. Older HPE servers do not support GPT boot because they lack UEFI firmware. You can get around that by keeping a 2 TB logical disk and using the rest of the space as D:.
If this is non-controller-based RAID, then the following applies: if the RAID 1 is also your boot device, think again... You can boot from mirrored devices, but not from striped devices (BIOSes handle the mirror as a single member disk; that cannot work with striped disks).

I don't think HPE makes a server with a non-RAID SAS HBA; it is almost guaranteed to be Smart Array based, although it could be the fakeRAID S series. Those are not the easiest things to use with imaging software, since their drivers are proprietary and not normally included on the default boot media; Paragon, for example, only puts open-source drivers on their DVDs/ISOs.

Swapping one drive at a time for 2.4 TB drives works too. They may not be in the QuickSpecs for the particular model, but once HPE moves on to a new generation they stop updating the QuickSpecs for newer drives, so only the lower-capacity disks and RAM are listed. It is generally pretty easy to prove they're supported from the "applies to" lists in firmware bundles, etc. Again, though, logical disk extension needs the battery/supercap for some absurd reason.
Even with a RAID controller, the disks can be JBOD.
IMHO, the main issue is: what is the risk in this endeavour, and how long will the system stay down, versus is there another approach that achieves the same result without the added risk?
I would add a new pair (or four) of drives and set them up as an additional volume to which the SQL databases and the shares can be migrated at a time and place of your choosing, with no downtime for the shares.
The DBs will have only as much downtime as it takes to stop MS SQL, copy the files, and detach and reattach them in the new location.
The system DBs (master, msdb, tempdb) are a bit more complex: sqlcmd ....
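For a user database, the detach/copy/reattach step above can be scripted with `sqlcmd`. A minimal sketch assuming a default local instance with Windows auth, a hypothetical database `MyDb`, and a new D: volume (all names and paths are placeholders):

```bat
:: Detach the database (requires exclusive access - close all connections first).
sqlcmd -S localhost -E -Q "EXEC sp_detach_db 'MyDb';"

:: Copy the data and log files to the new volume.
copy /Y "C:\Shares\SQL\MyDb.mdf" "D:\SQLData\"
copy /Y "C:\Shares\SQL\MyDb_log.ldf" "D:\SQLData\"

:: Reattach from the new location.
sqlcmd -S localhost -E -Q "CREATE DATABASE MyDb ON (FILENAME='D:\SQLData\MyDb.mdf'), (FILENAME='D:\SQLData\MyDb_log.ldf') FOR ATTACH;"
```

The system databases cannot be detached this way; moving master requires changing the service startup parameters, and msdb/model/tempdb are moved with `ALTER DATABASE ... MODIFY FILE` plus a service restart, per Microsoft's "Move System Databases" guidance.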
The way I suggested, the system would stay down for exactly 0 minutes if there is a cache battery, or about 5-10 minutes to fit one if not.
I thought of the RAID 1 to RAID 10 conversion as well, but when I looked to confirm whether the HP MSA or Smart Array makes it a straightforward process, several of the links I found suggested it is not as straightforward when it is the boot drive.

As others noted, the complication is that it is the OS disk, plus the 2 TB boot media issue.

Since it is Windows Server 2012, an upgrade is likely to occur within the next two years; adding a pair of drives in a RAID 1 configuration and migrating the data will achieve the same result at the same cost... unless you want RAID 10 for the new volume, at the additional cost of two more drives.
It's pretty simple: the first stage is to add the disks to the array, the second is to extend the logical disk to 2.15 TB / 1.95 TiB, and then create a second logical disk with the remaining space. Adding another array and moving the shares to it is also good.
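Those two stages map onto `ssacli` roughly as follows; this is a sketch with assumed IDs (slot 0, array A, logical drive 1), so verify your real layout and the size unit your `ssacli` build expects before running anything:

```bat
:: Stage 1: extend logical drive 1 to ~2 TB to stay under the MBR boot limit
:: (check whether your ssacli version takes the size in GB or MB).
ssacli ctrl slot=0 ld 1 modify size=2000

:: Stage 2: create a second logical drive from the remaining space in array A.
ssacli ctrl slot=0 array A create type=ld size=max
```

Once the controller finishes, extend the C: partition in Disk Management (or diskpart), then bring the new logical disk online and format it as D:.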
The process could very well be simple. Looking back, though, what I had seen was always a conversion on a data volume, not on the primary boot volume; the risk at that stage was subject to a different assessment.

I think, given the same underlying process, post-expansion one would still have to relocate the database files and shares.