Mounting a RAID5 volume on Ubuntu Server

I am a Linux newbie and have built an Ubuntu 9.04 x64 server on an Intel S3200SHV motherboard with an integrated SATA RAID controller. The OS installed fine on RAID1. The disk config is:

RAID1 - Ubuntu OS
RAID5 - space for storage

I am trying to mount the RAID5 volume but I'm not sure how. I can see the three disks that are in my array as sdc, sdd and sde.

I also did not do the install with LVM.  I just did a straight guided install.

Let me know if you need more info.  Thanks in advance for the help.
convergencetech asked:
russell124 commented:
When you did the Ubuntu install, did you use the raid option in the installer, or did it detect your RAID1 array as 1 volume?

I too find it strange that the RAID1 is working but the RAID5 isn't. Is it possible that you used Linux software RAID during installation, thinking that it was your fakeraid controller?

Can you run:

sudo cat /proc/mdstat

at the command line to check to see if you have software raid enabled already?

If you get output that looks something like this:

md0 : active raid1 sda1[0] sdb1[1]
      488134912 blocks [2/2] [UU]

you already have software raid1 enabled for your OS volumes.
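If you just want a quick yes/no, a small helper along these lines (a sketch, assuming an mdstat-style file as input) counts how many md arrays are active:

```shell
# Count active md arrays by looking for "mdN : active" lines
# in a /proc/mdstat-style file passed as the first argument.
count_active_arrays() {
    grep -c '^md[0-9].*: active' "$1"
}

# Typical usage on a live system:
# count_active_arrays /proc/mdstat
```

If the count is non-zero, software RAID is already in play on this box.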

Unless you are planning on dual-booting this system with Windows, I would ditch the fakeraid and go with software RAID under Linux via mdadm. In both cases the CPU is actually doing the processing for the RAID operations. Fakeraid just makes it look like a single nice volume in Windows.

Have a look at this guide for setting up software raid 5:
http://bfish.xaedalus.net/?p=188
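For reference, a minimal sketch of the mdadm route, assuming the three 250GB disks really are sdc, sdd and sde and that they hold no data you need (the create step is destructive — verify the device names against your own `fdisk -l` output first):

```shell
# WARNING: destroys any existing data on sdc/sdd/sde.
# Create a RAID 5 array from the three whole disks:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde

# Watch the initial sync progress:
cat /proc/mdstat

# Put a filesystem on the array and mount it:
sudo mkfs.ext3 /dev/md0
sudo mkdir -p /storage
sudo mount /dev/md0 /storage

# Record the array so it assembles automatically on boot:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

The mount point `/storage` is just an example; add a matching line to /etc/fstab if you want it mounted at boot.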
 
jools commented:
How do you see the disks in the RAID array? If you used the hardware RAID, it would usually be presented as one LUN.

Can you also post the output of `fdisk -l` and `df -k`?



 
convergencetech (author) commented:
fdisk -l

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000af120

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        9327    74919096   83  Linux
/dev/sda2            9328        9728     3221032+   5  Extended
/dev/sda5            9328        9728     3221001   82  Linux swap / Solaris

Disk /dev/sdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000af120

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        9327    74919096   83  Linux
/dev/sdb2            9328        9728     3221032+   5  Extended
/dev/sdb5            9328        9728     3221001   82  Linux swap / Solaris

Disk /dev/sdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000dab3a

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdd: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00b200b1

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sde: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00b200b1

   Device Boot      Start         End      Blocks   Id  System



df -k

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/isw_bihjhjihbc_Volume01
                      73742752   2723152  67273648   4% /
tmpfs                  4040400         0   4040400   0% /lib/init/rw
varrun                 4040400       108   4040292   1% /var/run
varlock                4040400         0   4040400   0% /var/lock
udev                   4040400       176   4040224   1% /dev
tmpfs                  4040400       352   4040048   1% /dev/shm
lrm                    4040400      2760   4037640   1% /lib/modules/2.6.28-11-server/volatile




 
convergencetech (author) commented:
Sorry... to answer the first part of your question: it is hardware RAID, but I'm not sure how to view it as a LUN (I'm a serious newbie).

The `df -k` output shows that Volume01 is the one I am trying to get to, but when I browse to /dev/mapper it is just a block device file.
 
jools commented:
OK, a little strange, but we'll carry on.

You have two disks, sda and sdb, which are both 80GB, and two disks, sdc and sde, which are 250GB.
You also say you have a hardware RAID1.

Can you post back the output of `vgdisplay -v` so we can see what volumes the LVM is using... it seems the install used LVM anyway.

It sounds like you want to create a RAID 5 array from the sdb, sdc and sde disks, which is doable, but the disks should be the same size, so you could end up with a RAID 5 of 80GB and lose the rest. I'm thinking that your system is not RAIDed at the moment, though.

When the system boots up can you get into the RAID setup and look at the disks? Did you specifically configure RAID 1 in the hardware setup?

I think having the vgdisplay output may answer some questions.

 
convergencetech (author) commented:
Thanks jools.  I appreciate all of the help.

There are actually three (3) 250GB drives in a RAID5 array - sdc, sdd, sde.
The RAID1 is sda and sdb.

Basically I configured it for a RAID1 OS array and a RAID5 array for data storage (in this case virtual machine storage for VMware Workstation).

All of the RAID config was done through the configuration utility on the RAID controller prior to installing the OS, so it is all hardware-based RAID.

vgdisplay is actually not installed, but if you feel that it is necessary, I will install it and post the results of the command. Thanks.

 
jools commented:
I misread your earlier post!

I got the impression that the mapper device was an LVM object. You did try `vgdisplay` as opposed to `VGDISPLAY`?
 
convergencetech (author) commented:
Yes... I ran `vgdisplay -v` and got the following message:

The program 'vgdisplay' is currently not installed.  You can install it by typing:
apt-get install lvm2
-su: vgdisplay: command not found
 
jools commented:
I've no idea how your RAID works; it seems very weird. I would usually expect the RAID 1 set to just show as one disk, perhaps sda, and not see both sda and sdb in the disk list.

It doesn't seem to be a full RAID card; I've seen posts referring to it as fakeraid.
   http://linuxmafia.com/faq/Hardware/sata.html
   http://www.experts-exchange.com/OS/Linux/Q_23922521.html

You will probably need to configure the device using dmraid.
   http://linux.die.net/man/8/dmraid
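As a rough sketch of what that might look like (the set name below is hypothetical — dmraid will report your controller's actual name, likely similar to the isw_..._Volume prefix already visible in your `df -k` output):

```shell
# List the RAID sets described by the BIOS/fakeraid metadata on the disks:
sudo dmraid -r

# Show the sets dmraid recognises, then activate them:
sudo dmraid -s
sudo dmraid -ay

# If activation succeeds, the RAID5 set should appear under /dev/mapper.
# A fresh array needs a filesystem before mounting (this erases the volume):
ls /dev/mapper
sudo mkfs.ext3 /dev/mapper/isw_xxxxxxxxxx_Volume02   # hypothetical set name
sudo mkdir -p /data
sudo mount /dev/mapper/isw_xxxxxxxxxx_Volume02 /data
```

Substitute the real set name from the `ls /dev/mapper` output before running the mkfs step.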

If you Google this board with Linux, it seems to have caused some users issues.


Alas, I've not used the Intel boards in quite a while and have never used dmraid myself.

Worth checking the man page for dmraid, though...



 
convergencetech (author) commented:
I was reading the fakeraid stuff too.  I will check out dmraid and see what happens.  Thanks again for all of the help.
 
convergencetech (author) commented:
When I did the install it detected my RAID1 volume automatically.  However, it did not see the RAID5 volume during the install.

I will run the commands and see what output I get a little later today.
 
jools commented:
There is a possibility that dmraid does not support RAID5.

I prefer plain software RAID or a full hardware RAID card myself. I'd be quite wary of problems if you have hardware issues.
 
convergencetech (author) commented:
I split the points because both moving to a hardware RAID solution and using the software RAID 5 guide worked. Thanks again.
Question has a verified solution.
