I have a client with an HP ML350 G5 running VMware ESXi 4.1. It has 12GB of RAM and a 1TB RAID 5 array built from three 500GB SATA disks on an E200i RAID controller with 128MB of cache. The original array is carved into three logical drives: logical drive 1 is 16GB for VMware, logical drive 2 is 480GB for the OS volumes, and logical drive 3 is 500GB for the data volumes. The server hosts one 32-bit Windows 2003 SBS guest and five Windows XP guests on an Active Directory domain.
The RAID has been problematic: we've lost at least one drive every six months, and rebuilds take forever, making everything crawl while they run.
I recommended upgrading the server to 16GB of RAM, adding a new P400 RAID controller with 512MB of cache, and filling the rest of the drive cage with three 146GB SAS drives, possibly to boot ESX from and to hold the server's OS volume.
I connected the drive cage to the new RAID controller and it saw the original array and its three logical disks. However, when I created a new array from the three SAS drives, the existing logical disks 1, 2, and 3 became logical disks 2, 3, and 4. The new SAS array became logical disk 1, so the server now attempts to boot from it. That's fine, but when I install ESX on logical disk 1, the install completes, yet when I boot ESX it fails with some sort of conflict because there is already a VMware install on logical disk 2. I can't DELETE logical disk 2 using the ACU, because it is not the LAST logical disk on the original array. I even booted SmartStart, and I can't do anything with that either.
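Before I do anything destructive I want to be sure which block device maps to which logical drive. Here's a minimal sketch of what I plan to run from a Linux live CD to match devices to sizes; the device naming is an assumption on my part (with a Smart Array controller the logical drives may show up under /sys/block as cciss!c0d0 .. cciss!c0d3 with the older cciss driver, or sda .. sdd with the newer hpsa driver):

```python
#!/usr/bin/env python
# Sketch: list block devices and their sizes so I can confirm which one
# is the old 16GB ESX logical drive before touching anything.
import os

SECTOR = 512  # /sys/block/*/size is reported in 512-byte sectors

for dev in sorted(os.listdir('/sys/block')):
    size_path = os.path.join('/sys/block', dev, 'size')
    try:
        with open(size_path) as f:
            sectors = int(f.read().strip())
    except (IOError, ValueError):
        continue
    gib = sectors * SECTOR / float(1024 ** 3)
    if gib > 1:  # skip ram/loop devices and other tiny entries
        # sysfs encodes '/' in device names as '!' (cciss!c0d0 -> cciss/c0d0)
        print('%-16s %8.1f GiB' % (dev.replace('!', '/'), gib))
```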
My question is: how do I DISABLE the ESX installation on what is now logical drive 2, so that I can boot ESX from logical drive 1? That will obviously be faster, since it's built from 15K SAS drives.
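If blanking the boot code on the old 16GB drive is the right way to "disable" it, here's roughly what I had in mind, again from a live Linux environment. The device path is a placeholder until I've confirmed it with the listing above, and I'd back up the sector first; I'm also not certain the conflict is the boot loader rather than something like a duplicate VMFS volume, so please correct me if this is the wrong approach:

```python
#!/usr/bin/env python
# Sketch: make the old ESX logical drive non-bootable by zeroing the
# MBR boot code (first 440 bytes) while leaving the partition table
# (bytes 446-509) and boot signature intact.

DEVICE = '/dev/cciss/c0d1'  # PLACEHOLDER: confirm this is the 16GB drive!
BOOT_CODE_LEN = 440         # MBR boot code only; partition table untouched

# Back up the full first sector before modifying anything.
with open(DEVICE, 'rb') as dev, open('/tmp/old-esx-mbr.bak', 'wb') as bak:
    bak.write(dev.read(512))

# Overwrite just the boot code region with zeros.
with open(DEVICE, 'r+b') as dev:
    dev.write(b'\x00' * BOOT_CODE_LEN)
```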