Dr. Larrymey Hawkins, PhD
Throughout my career I’ve read many articles and performed due diligence on recovering VMs that broke on the XenServer platform. From my experience and perspective, there seems to be an ample amount of confusion about a good base method to start with. While there are many routes to recovery, there isn’t any single “step one” way to accomplish this.

I hope to explain a process that I have used many times with success. The first step is to create a new disk, attach it to a new VM, and instantiate the disk image on physical storage. The simplest way of doing this is to use vdi-create. All commands given below are run from the XenServer command line interface. vdi-create accepts the following parameters.

The first is name-label. This parameter is a human-readable name for the disk.

      ( name-label=name for the VDI )

The second parameter is type. This sets the type of VDI to create.

      ( type=system | user | suspend | crashdump )

The third parameter is sr-uuid. This is the reference to the Storage Repository where the VM's data will be stored.

      ( sr-uuid=UUID of the SR where you want to create the VDI )

The fourth and final parameter is virtual-size. This sets the size of the virtual disk in bytes. Break out your calculator.

      ( virtual-size=size of virtual disk )

      ( vdi-create sr-uuid=UUID of the SR name-label=name of the VDI type=system | user | suspend | crashdump virtual-size=size of virtual disk  )

vdi-create causes XenServer to create a blank disk image on the default XenServer storage. Don’t panic just yet if you don’t know the UUIDs of your VMs or SRs; I will cover UUID discovery later in the article.
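
Because virtual-size is given in bytes, the calculator work can be scripted. A minimal sketch, assuming a 20 GiB disk; the SR UUID and name-label in the commented xe line are placeholders to substitute on your own host:

```shell
# 20 GiB expressed in bytes: 20 * 1024^3
SIZE=$((20 * 1024 * 1024 * 1024))
echo "$SIZE"    # prints 21474836480

# The resulting call, run on the XenServer console (SR UUID is a placeholder):
# xe vdi-create sr-uuid=<SR uuid> name-label="recovery-disk" \
#     type=user virtual-size=$SIZE
```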

The disk image is represented on physical storage according to the type of SR used in the creation of the VDI. If the SR is LVM-based, the new disk image will be in LVM format; if the SR is NFS-based, the new disk image will be a VHD created on your NFS share.

So far we have a broken VM and a fresh VDI that we just created. These are both independent objects that exist on the XenServer. So the next step is to create a link, associating the VDI with our VM.

The attachment is formed by creating a "connector" object called a VBD (Virtual Block Device). This is needed to later recover and extract the data from the VM. To create our VBD we run the vbd-create command. Keep in mind you must create or know the UUIDs of your VM and your VDI, depending on the direction of your recovery. vbd-create accepts the following parameters.

The first is vdi-uuid. This is the UUID of the VDI that you want to attach.

      (  vbd-create vdi-uuid=UUID )

The second parameter is vm-uuid. This is the UUID of the VM that the VDI will attach to.

      ( vbd-create vm-uuid=UUID  )

The third parameter is mode. This decides whether the VDI will be attached read-only or read/write.

      ( mode=RW | RO )

The fourth parameter is device. This specifies the device position at which the disk appears to the VM (for example, 0 for the first disk).

      ( device=device )

The fifth parameter is type. This sets whether the device is presented as a hard disk or a CD-ROM drive.

      ( type=Disk | CD )

The sixth and final parameter is bootable. This determines if the VDI is bootable.

      ( bootable=true )

( vbd-create vm-uuid=UUID of the VM device=device value vdi-uuid=UUID of the VDI the VBD will connect to [bootable=true] [type=Disk | CD] [mode=RW | RO]  )

vbd-create creates an object on XenServer and links it to the references specified. Next we start our recovery process. Previously I mentioned the UUIDs of VMs and SRs. You can breathe easy now: the commands to list them are below.
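
Putting these parameters together, a hypothetical end-to-end attachment might look like the following sketch. Both UUIDs are placeholders; the vbd-plug step, which hot-attaches a VBD while the VM is running, is an addition beyond the parameters listed above:

```shell
# Hypothetical example; substitute real UUIDs from the list commands.
xe vbd-create vm-uuid=<VM uuid> vdi-uuid=<VDI uuid> \
    device=1 mode=RW type=Disk bootable=false
# vbd-create prints the new VBD's UUID; if the VM is running,
# the VBD can then be hot-attached with vbd-plug:
xe vbd-plug uuid=<VBD uuid>
```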

      ( xe vm-list ) This command lists the VM UUIDs and names.

      ( xe sr-list ) This command lists the SR UUIDs and names.

      ( xe vdi-list ) This command lists the VDI UUIDs and names.
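
When many objects share similar names, the output of these list commands can be filtered. A sketch using the params= and --minimal options of the xe CLI; the name-label values here are placeholders:

```shell
# Print only the UUID of the VM matching a given name-label:
xe vm-list name-label="Broken VM" params=uuid --minimal

# Likewise for the SR and the VDI:
xe sr-list name-label="Local storage" params=uuid --minimal
xe vdi-list name-label="recovery-disk" params=uuid --minimal
```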

Next we use kpartx to create device maps from the partition tables available on the VBD we created earlier. If you are recovering a Windows VM the device will appear in hda format; if it is Linux based it will be in sda format. kpartx supports the parameters listed below.

      The first parameter is -a. This parameter adds partition devmappings.

      ( -a )

            ( kpartx -a /dev/VG_XenStorage-xxxx/LV-<vm uuid>.hda or .sda )

The second parameter is -l. This parameter lists partition devmappings.

      ( -l )

            ( kpartx -l /dev/VG_XenStorage-xxxx/LV-<vm uuid>.sda )

LV-<vm uuid>.sda is the logical volume on the XenServer that you want to attach to the VM as a disk.

Now for the moment of truth! The mounting process! To mount the device map created by the kpartx command on /mnt, use the mount command specified below.

      ( mount /dev/mapper/LV-<vm uuid>.sda1 /mnt )

After mounting, run the kpartx list command to verify that the partitions have mounted. You should see the LVs mapped earlier. Start the VM and recover your data normally. This is one of many methods of recovering data from broken VMs.
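
The kpartx and mount steps above can be sketched end to end. The volume group and LV names below follow this article's naming and are placeholders; adjust them to what your host actually shows, and note that the copy destination is an assumed example path:

```shell
# Map the partitions inside the logical volume:
kpartx -a /dev/VG_XenStorage-xxxx/LV-<vm uuid>.sda

# Mount the first mapped partition and copy the data out:
mount /dev/mapper/LV-<vm uuid>.sda1 /mnt
cp -a /mnt/important-data /root/recovered/

# Clean up when finished:
umount /mnt
kpartx -d /dev/VG_XenStorage-xxxx/LV-<vm uuid>.sda
```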

All of this can be avoided by taking regular snapshots and backing them up to a remote location such as an SR or USB drive using XenCenter. Always be aware of which snapshots you have of your VMs. In XenServer Free you have to create them manually; for XenServer Advanced customers and above, you can create scheduled snapshots.
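
On editions without scheduled snapshots, a manual snapshot can also be taken from the CLI. A sketch; the VM name and snapshot label are placeholders:

```shell
# Take a snapshot of a VM by name, labeled with today's date:
xe vm-snapshot vm="My VM" new-name-label="nightly-$(date +%F)"

# List existing snapshots to stay aware of what you have:
xe snapshot-list
```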

I hope this article benefits those left tired and weary from sitting in front of a XenServer console in the early hours of the morning.

Larrymey Hawkins
Cloud 9 Computing L.L.C.

Comments (2)

Thank you so much for this tutorial.

I kindly ask you to complement it with this parallel: on NTFS, sometimes a few bad sectors "in the right place" on a hard drive can make an entire partition disappear.

Really good (paid) data recovery programs will scan that drive and many times they will not only find the missing partition, but also the entire tree with directory and file names, and allow you to get the data back.

Now translate this breakage to a Citrix XenServer where, e.g., on one single 1 TB drive, XenServer is installed (the first 4 GB, more or less, plus swap) and the rest of the drive has been assigned as an SR. On that SR is, e.g., one single VM guest.

I have one situation where I can find the first partition, the Citrix Linux partition and tree, but the rest of the disk is seen as raw space.

Using one of those capable data recovery programs (able to support Linux filesystems), what do you expect to find/search for?
Should that software support/recognize LVM/LVM2? And what if none of them supports LVM/LVM2?
(BTW, does Citrix use LVM or LVM2?)

Thank you for any hints.

Dr. Larrymey Hawkins, PhD


Is the VHD file accessible across the SR? If so, can it be mounted by another VM?
If so, remount it in a new VM and run recovery software across the mounted drive as you would on any normal server.

If not, the external recovery I have run has been with a Linux kernel that works on LVM and LVM2.

Alternatively, I have also used the Knoppix 5.1 LiveCD and run the standard RAID and volume recovery procedures:

Use the dd command to extract the first part of the partition and write it to a text file.

Create the file /etc/lvm/backup/VolGroup00:

vi /etc/lvm/backup/VolGroup00

Rewrite the config data.

Start LVM:

/etc/init.d/lvm start

Read in the volume.
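
The last step, reading the volume back in, is typically done with the standard LVM scan and activate commands; a sketch assuming the VolGroup00 name used above:

```shell
# Rescan for volume groups, then activate the restored one:
vgscan
vgchange -ay VolGroup00
```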

