Avatar of SeeDk

asked on

Cloning OS+data onto new Virtual Disk / RAID?

This is on a Dell PowerEdge R720.
It has RHEL6 installed and is configured to use a RAID 0 Virtual Disk.
RAID 0 is no good for redundancy so I want to build a new Virtual Disk and configure it as a RAID 1.

After setting up the new RAID, I would like to clone the OS and the data from the RAID 0 to the RAID 1.
Then, change the boot options so the server boots from the RAID 1 only and get rid of the RAID 0 array entirely.

Is there a way I can do this? I am hoping to avoid having to completely re-install the OS and applications.
Avatar of Juan Jose Perez
Juan Jose Perez

Hello SeeDk,

Norton Ghost v8 or higher has an option to convert from RAID-0 to RAID-1:

- You will need to back up your RAID-0 data to a USB drive or hard disk.
- Reconfigure the RAID 0 as RAID 1.
- Restore the image from Norton Ghost.
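If Ghost isn't available, a file-level backup with stock Linux tools can play the same role. A minimal sketch, rehearsed on scratch directories as stand-ins for the real paths (the real source would be the filesystem root and the destination a USB mount point):

```shell
# Stand-ins for the real paths -- adjust for your system.
SRC=$(mktemp -d)     # stand-in for the filesystem being backed up
DEST=$(mktemp -d)    # stand-in for the USB drive mount point
RESTORE=$(mktemp -d) # stand-in for the rebuilt RAID 1 filesystem

echo "app data" > "$SRC/app.conf"

# Back up, preserving permissions (for a full system backup you would also
# want --numeric-owner and to exclude /proc, /sys, /dev on a live system).
tar -C "$SRC" -czpf "$DEST/backup.tar.gz" .

# After rebuilding the array as RAID 1 and recreating the filesystem,
# restore onto the new volume:
tar -C "$RESTORE" -xzpf "$DEST/backup.tar.gz"

cat "$RESTORE/app.conf"   # -> app data
```

This is a sketch of the backup/restore step only; reinstalling the boot loader on the new volume is a separate step.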

Avatar of Member_2_231077

If you are buying two more similar-size hard disks, then it would be far easier to just add them to the server and migrate the RAID level from RAID 0 to RAID 10 using OMSA; you wouldn't have any downtime that way.
Avatar of SeeDk



Thanks, backing up seems like it could work. I don't have Ghost though. Would restoring from a regular Linux backup work?

That would have been great! But I bought much larger drives for the RAID1.

I saw this post from HP Unit:

Let me see if PERC has that option.


Here is a post from Dell; it says:

- You will need to add a hard disk.
- The RECONFIGURE option then needs to be followed, from RAID-0 to RAID-1, selecting the new HDD.

Avatar of PowerEdgeTech
You can't migrate to a RAID 10, but as long as your RAID 0 is a single (1) disk, you can add a disk (of the same type - SAS/SATA, HDD/SSD, etc.) and convert it to a RAID 1 using the Reconfigure option in OMSA (if running a version of RHEL on which OMSA is supported - OR boot to a "live" disc to do the reconfigure; you may be able to do this in the BIOS, depending on your controller).

If you are using more than one disk in a RAID 0, then you will need to do a backup/restore, including, as andy suggested, cloning directly from one VD to another. Depending on your controller (which you didn't mention), you can change the "boot" VD on the CTRL MGMT screen of the CTRL-R utility.

This is all possible from the H710 controller.
Avatar of arnold

Avatar of SeeDk


That could work. It is a PERC H710 Mini. I don't see those options now, but that may be because the new disks are not inserted yet.

That would be ideal and yes i will double check the backups are fine before doing this.
Thanks for the help - here is the output:

Filesystem                      1K-blocks       Used Available Use% Mounted on
/dev/mapper/vg_servername-LogVol00  280788868  152836324 113689248  58% /
tmpfs                           132242256         72 132242184   1% /dev/shm
/dev/sda1                          198337      59034    129063  32% /boot
//servera/folder           2123775996 1634560864 489215132  77% /mnt/linux_backup
//serverb/shared            1073608700  912087644 161521056  85% /shared
//serverc/folder            209582076  112800004  96782072  54% /server
serverd:/export/data              575663104  487401472  59019264  90% /mnt/data
//servere/linuxdump           2844785660 2381308740 463476920  84% /mnt/linuxdump


cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 02 Id: 00 Lun: 00
  Vendor: DELL     Model: PERC H710P       Rev: 3.13
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: PLDS     Model: DVD-ROM DS-8D9SH Rev: UD51
  Type:   CD-ROM                           ANSI  SCSI revision: 05


fdisk -l
Disk /dev/sda: 292.3 GB, 292326211584 bytes
255 heads, 63 sectors/track, 35539 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00074a48

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26       35540   285268992   8e  Linux LVM

Disk /dev/mapper/vg_servername-LogVol00: 292.1 GB, 292112302080 bytes
255 heads, 63 sectors/track, 35513 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


  /dev/ram0  [      16.00 MiB] 
  /dev/root  [     272.05 GiB] 
  /dev/ram1  [      16.00 MiB] 
  /dev/sda1  [     200.00 MiB] 
  /dev/ram2  [      16.00 MiB] 
  /dev/sda2  [     272.05 GiB] LVM physical volume
  /dev/ram3  [      16.00 MiB] 
  /dev/ram4  [      16.00 MiB] 
  /dev/ram5  [      16.00 MiB] 
  /dev/ram6  [      16.00 MiB] 
  /dev/ram7  [      16.00 MiB] 
  /dev/ram8  [      16.00 MiB] 
  /dev/ram9  [      16.00 MiB] 
  /dev/ram10 [      16.00 MiB] 
  /dev/ram11 [      16.00 MiB] 
  /dev/ram12 [      16.00 MiB] 
  /dev/ram13 [      16.00 MiB] 
  /dev/ram14 [      16.00 MiB] 
  /dev/ram15 [      16.00 MiB] 
  1 disk
  17 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume


  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg_servername
  PV Size               272.05 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              69645
  Free PE               0
  Allocated PE          69645
  PV UUID               hYnOGs-qrZ8-cqC9-tM1R-6ZsD-4oEM-folmrE


 --- Volume group ---
  VG Name               vg_servername
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               272.05 GiB
  PE Size               4.00 MiB
  Total PE              69645
  Alloc PE / Size       69645 / 272.05 GiB
  Free  PE / Size       0 / 0   
  VG UUID               HwWo7v-bmZE-otUk-rela-XiZN-IiSL-tLM8iq


  --- Logical volume ---
  LV Path                /dev/vg_servername/LogVol00
  LV Name                LogVol00
  VG Name                vg_servername
  LV UUID                xxj36t-0PU7-Cx7P-qcIu-mjIn-cLbU-Q0zOk5
  LV Write Access        read/write
  LV Creation host, time, 2014-03-10 09:37:34 -0400
  LV Status              available
  # open                 1
  LV Size                272.05 GiB
  Current LE             69645
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0


Well, once the new disk was inserted, did the option appear to convert the single RAID 0 drive to a two-drive RAID 1, with the original drive as the reference?
Or are you looking to add the new disk as yet another single RAID 0 drive, presented to the system as sdb, and then go through software RAID (mdadm)?
The new drive sdb needs to be partitioned identically in terms of space, but the partition type has to be Linux raid autodetect.
Then you attach /dev/sdb1 as the single-"drive" member of a RAID 1 array, /dev/md0; this /dev/md0 would need to be formatted the same as your existing sda1 (ext3/ext4).
mount /dev/md0 /mnt
cd /boot
find . | cpio -pdvmu /mnt
The above will copy/clone the /boot data to the new RAID volume.
You would also need to install the boot sector on /dev/sdb, etc.

As noted, PowerEdgeTech's suggestion of converting the single drive to a RAID volume on the PERC would achieve what you want in a single step, transparent to the OS.
Avatar of SeeDk


That option did not appear... probably because the new disks were of different sizes. Would have been nice if it did, since the process would be so much simpler.

On OMSA, I created a RAID 1 Virtual Group for the two new disks.
In Linux it is showing as /dev/sdf when I do fdisk -l.

Disk /dev/sdf: 599.6 GB, 599550590976 bytes
255 heads, 63 sectors/track, 72891 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

If I'm understanding you correctly, you're saying I need to partition this sdf drive exactly the same as the /dev/sda drive:
  Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26       35540   285268992   8e  Linux LVM

So I would create an sdf1 and sdf2 partition (both ext4 in this system) with the same start/end numbers?
Then I copy the data in sda1 to sdf1 and in sda2 to sdf2?

How does the /dev/mapper/... disk fit into this?
Would copying sda1 and sda2 be equivalent of copying ALL the data on the server or only the OS data?
I am still very much a Linux newbie and trying to wrap my head around all this.
You can partition it any way you want, but you have to make sure that your new drive's /boot is a raw partition (ext3, ext4, etc.).
/dev/mapper holds the LVM-related devices, named VolumeGroup-LogicalVolume.

You would have to clone/copy /dev/sda1, which is your /boot, to /dev/sdf1 if partitioned similarly.
Then /dev/sdf2, if partitioned as an LVM volume, would show up ....

The transition becomes more complicated if you need to pull the existing drive and reorder the RAID controller to designate which volume is the boot volume, as that may lead to what is now /dev/sdf reverting to /dev/sda as the reference...

This rapidly gets involved, and one has to keep track of every item.
Avatar of SeeDk


It seems there's a lot of things that can go wrong if I try doing this manual copying/partitioning and I don't think I'd be able to handle any Linux related hiccups along the way.
Maybe the best way is to just take a full backup of the OS even if it means there is some downtime.
In the Windows world, I would just download some software like Clonezilla or Acronis to take an image of the disk which I could then restore from.
I hear Clonezilla can work in Linux but how reliable would it be in this situation?

Just to reiterate:
The current running system is on a 272GB RAID 0 virtual disk. (2 physical disks)
The new disk is a 558GB RAID 1 virtual disk (2 physical disks)
RHEL is 6.5
The disks are on a PERC H710P controller
Avatar of SeeDk


Got bogged down with other things and stopped this.
Got more familiar with Linux in the meantime and am looking back at this server - I'm thinking why not just use dd to copy the data over?

I tried it on a VM and it seemed to work great. Took only 20 minutes to copy 40GB of data. Maybe a little over 2 hours for 300GB and that is fine.
I imagine the system can be up and running while the copy is occurring, so there would be very little downtime. The only downside I see is that if one is careless and gets the syntax wrong... everything is deleted. That said, it seems simple. In this case it'd be:

dd if=/dev/sda of=/dev/sdf bs=64K conv=noerror,sync


Any other reasons this would not be the best option?
dd is a low-level, sector-by-sector/byte-by-byte copy of the media; that will take a long time compared to other options.
Avatar of SeeDk


Long time meaning ...4 - 6 hours?
Depending on size of volume being copied it can be longer or shorter.

You could use dump/restore to clone at the file level; dd might only be needed to set the boot sector, the first 512 bytes.....
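The boot-sector part of this can be rehearsed on scratch files before touching real devices. Here /dev/sda and /dev/sdf from the thread are replaced by temp files, and only the boot-code portion is copied (the first 446 bytes of the MBR, since bytes 446-511 hold the partition table and copying them would clobber the target's new layout):

```shell
# Stand-ins for the old and new disks.
OLD=$(mktemp)
NEW=$(mktemp)

# Fake a disk with a recognizable "boot sector" at the start.
printf 'GRUB-stage1-goes-here' > "$OLD"
truncate -s 1M "$OLD"
truncate -s 1M "$NEW"

# Copy only the MBR boot code, not the whole device; conv=notrunc keeps
# the rest of the target intact.
dd if="$OLD" of="$NEW" bs=446 count=1 conv=notrunc 2>/dev/null

head -c 21 "$NEW"   # -> GRUB-stage1-goes-here
```

On the real system, though, running grub-install against the new disk is the safer way to get a working boot sector, since copied boot code embeds disk-specific block locations.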
Avatar of SeeDk


Thanks arnold, I came back to this after getting more familiar with Linux.
Used the dd copy option, but the commands you showed me helped me better understand how the disks are structured by the OS.