Abdurahman Almatrodi

asked on

PowerEdge 2850 RAID + Windows 2008r2 - Error in installation

Hi

I have a PowerEdge 2850 with RAID 1 that was working correctly with Windows 2008 R2. I swapped the hard disks for two 300GB 15K Fujitsu MBA3300NC SCSI U320 drives and created a new RAID 1 array.

Now, when I try to install Windows 2008 R2, or even Windows 2012, I get this error message:
"Setup was unable to create a new system partition or locate an existing system partition."

I tried putting the old hard disks back in, but the issue is the same.

I am new to this situation, and I think there is some problem with the RAID creation.
How can I solve this, please?
John Jennings

Is this a hardware RAID configuration? If that's the case, you'll need to rebuild the RAID inside the server first so that it presents as a logical drive. Then your install should work just fine.

This article is a little old, but I don't think the process has changed much since it was written (I set up four PowerEdge machines two years ago, and I remember it being very much the same):

http://www.thegeekstuff.com/2008/07/step-by-step-guide-to-configure-hardware-raid-on-dell-servers-with-screenshots/
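
Once the array is rebuilt, a quick sanity check is to confirm that the controller presents it as a single logical drive: at the Windows Setup screen, press Shift+F10 to open a command prompt and run diskpart (a minimal sketch; the disk number on your system may differ):

   diskpart
   DISKPART> list disk    (the RAID 1 should appear as one ~279 GB disk, not two)
   DISKPART> exit

If no disk shows up at all, setup is missing a driver for the PERC controller, and you will need to supply one via the "Load Driver" option.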
ServerWorkscc

The Dell PowerEdge 2850 is a good, solid server; however, it doesn't like being messed with.

Upgrade the BIOS to the latest version.
Upgrade the RAID firmware to the latest version.

Reset the card to its defaults.

Insert a single drive and check that the lights come up green. If OK, remove it and repeat with the second drive.

Then insert both drives and boot.

Press Ctrl+M (it's this on most of them) to access the RAID card.

Create a new array by selecting the 2 drives, then initialize it.
THIS WILL DESTROY ALL DATA.

Reboot and install Windows.
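
If the "unable to create a new system partition" error still appears once the fresh array is built, the usual culprits are stale partition data on the disk or setup picking the wrong boot device. A common workaround, destructive like the step above, is to clean the disk from the setup command prompt (Shift+F10); "disk 0" below is an assumption, so confirm it with list disk first:

   diskpart
   DISKPART> list disk
   DISKPART> select disk 0    (assumption: the new RAID 1 virtual disk)
   DISKPART> clean            (wipes all partition data from the selected disk)
   DISKPART> exit

Then refresh setup's drive list and retry the install. Also unplug any USB sticks or other disks that setup might mistake for the boot device.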
I also wanted to add this: Were your old hard drives SCSI as well, or were they SATA/SAS drives?
If you did a swap upgrade, i.e., replaced disk 1 with a bigger drive, let it rebuild, and repeated with disk 2, then you wasted your time. This controller doesn't expose the extra space.

The first thing the controller does is resize the larger disk down to the size of the one you had before.

You *should* have added the 2 new disks, built the RAID 1, and then migrated the data from the existing array with a partition manager by booting to a UNIX USB stick or CD.
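
For what it's worth, from a UNIX live USB that migration can be done as a raw clone. Purely a sketch: /dev/sda and /dev/sdb are placeholder device names, so verify them with lsblk first, because swapping source and destination destroys the data.

   lsblk                          # identify the old array (source) and new array (destination)
   dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync
   # for drives with bad sectors, GNU ddrescue copes better than plain dd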

Going forward is painful.
1. Add a NON-RAID controller to this system.
2. Attach the 2 old disks to the non-RAID controller.
3. Attach the 2 larger new disks to the RAID controller.
4. Build a fresh new RAID 1 with those 2 new disks from the BIOS and let it initialize (takes hours).
5. Boot that system to Windows (external USB drive, or another scratch drive, sorry).
6. Go to runtime.org and download RAID Reconstructor (free to try, pay if successful).
7. Boot the system to Windows and kick off RAID Reconstructor. Have it look at the 2 busted RAID drives on the non-RAID controller and reconstruct them into a virtual RAID 1 image (takes seconds).
8. If that image looks good, pay them. Get the key, install the key.
9. Then do an image backup of the virtual RAID 1 array onto the new physical RAID 1 on the larger disks.
10. Shut down the box, remove the non-RAID controller and disks, and set the boot path to boot to the new RAID 1.
11. Boot, and as long as you didn't screw something up and wipe the data originally, you should be good. (Then use the native Windows tools to resize the partitions or create a new one in the free space; see the diskpart sketch below.)

There are other variations on this theme, but it is all going to be about the same amount of work.
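
On the resize in step 11, the native Windows tool is diskpart's extend command. A minimal sketch, assuming the volume to grow is C: (run list volume to find the right letter):

   diskpart
   DISKPART> list volume
   DISKPART> select volume C    (assumption: the volume to grow into the new free space)
   DISKPART> extend             (grows the volume into the adjacent unallocated space)
   DISKPART> exit

The same thing can be done from Disk Management (diskmgmt.msc) with "Extend Volume".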
Abdurahman Almatrodi (Asker)

At first, I'd like to thank all of you.

Dear JohnThePro:
I've seen this before and followed it, but nothing changed. My old HDDs are SCSI.

Dear ServerWorkscc:
The BIOS and RAID firmware are already up to date, and the drives are working. I also created a new array as you described and initialized it, but nothing changed.

Dear dlethe:
I am not an expert with servers like this one, but I tried to rebuild them and I think I am stuck: it has been rebuilding for about 10 hours now! Even when I reboot the server, it still shows rebuilding.
The reason it is taking so long is that you are also doing bad-block recovery. Depending on the HDD, it can take 2-60 seconds PER bad block, and you have hundreds of millions of blocks to check.

Just let it run; it may take days, but rebooting it just causes it to start over.
Still facing the same problem; here are some pictures:

[Four screenshots attached]
I have another PowerEdge without RAID, so I removed the PERC from this server and installed it in the second one. It works!

What is the problem with the first one?
I couldn't possibly tell you what is wrong without having the equipment in my lab and running some diagnostic tools we developed in-house that rely on information we obtained under non-disclosure (because this is what we do to pay the bills).

Dell, HP, and the others who OEM these controllers do not offer tools to get the information needed to diagnose problems. The reason is political. I can't tell you how many times a manufacturer has said they don't want to empower end users with detailed information, because users will inevitably misinterpret something, send back perfectly good equipment, and come away with the perception that the hardware is problematic.

You pretty much get only the things that have a 100% confidence level, like a disk being offline or a LUN being critical. If you want to know WHY something happened, you have to pay for an expensive consulting gig, or you'd better be buying millions of dollars' worth of equipment to warrant the engineering time.

Sorry, but that is the way it works in the real world ;)
ASKER CERTIFIED SOLUTION
PowerEdgeTech
[The accepted solution is available to Experts Exchange members only.]