

Setting up gvinum raid5 on FreeBSD 7 ( amd64 )

Posted on 2008-06-22
Medium Priority
Last Modified: 2013-12-06
Hello, I'm trying to get gvinum raid5 running on FreeBSD 7 and have hit a roadblock. Here's the rundown.

I have an IDE HD (200 GB), which has the FreeBSD install on it, and 4 500 GB hard drives (all the exact same model) connected via SATA II. I have thoroughly tested all 5 hard drives on other systems and they are all good.

After installing FreeBSD I did the following:

1) Labeled the four SATA drives

#bsdlabel -w /dev/ad4
#bsdlabel -w /dev/ad5
#bsdlabel -w /dev/ad6
#bsdlabel -w /dev/ad7

2) Created my raid config file "/etc/raid.conf"

drive disk_1 device /dev/ad4
drive disk_2 device /dev/ad5
drive disk_3 device /dev/ad6
drive disk_4 device /dev/ad7
volume raid5
plex org raid5 261k
sd drive disk_1
sd drive disk_2
sd drive disk_3
sd drive disk_4

3) Created the array

#gvinum create /etc/raid.conf

4) Created a filesystem on the new array

#newfs /dev/gvinum/raid5

5) Created a directory and mounted the volume on it

#mkdir /wmfiles
#mount /dev/gvinum/raid5 /wmfiles

And voilà, the volume, plex, and subdisks were all up, and I could write to /wmfiles perfectly fine.

I then added the new mount to /etc/fstab:

/dev/gvinum/raid5    /wmfiles    ufs    rw    2    2

and added it to /boot/loader.conf.
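The post doesn't show the actual line added; the standard loader.conf entry for loading gvinum at boot on FreeBSD is:

```
geom_vinum_load="YES"
```

Without this line the kernel never loads the gvinum module at boot, so the volume is unavailable when fstab is processed.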


Here's where things go wrong: when I reboot, the boot fails because of the new volume. It throws the following error and drops me into a crippled single-user mode where I must remount /, /usr, and /var just to use /rescue/vi:

        ufs: /dev/gvinum/raid5 (/wmfiles)
Automatic file system check failed; help!

When I run

#fsck /dev/gvinum/raid5

I get


which seems to be the first block, followed by (after I continue):

THE FOLLOWING DISK SECTORS COULD NOT BE READ: 128, 129 [etc etc] 142, 143,
ioctl (GCINFO): Inappropriate ioctl for device
fsck_ufs: /dev/gvinum/raid5: can't read disk label

#gvinum list

returns that the volume and plex are down and the subdisks are stale. If I remove the fstab line and the loader.conf line, reboot, and then try to mount and start the volume manually, nothing happens.

Anyone have any ideas as to how to fix this? I am a relative UNIX newbie, so please tell me exactly how to do something if you'd like more information (i.e., if you ask for a config file printout, tell me how to get said printout). Thanks.

Question by:achaean1
LVL 62

Accepted Solution

gheist earned 1500 total points
ID: 21842429
The raid5 volume's parity has not been initialized.

dd if=/dev/zero of=/dev/gvinum/raid5 bs=64k

will do the job.

Otherwise you will keep ending up with unreadable sectors in the areas where parity has not yet been written.

Partitioning with fdisk and labeling with disklabel are highly recommended given the error messages you are receiving (sysinstall does both the easy way once the plex and volume are running).

Have a look at the smartmontools package for physical disk monitoring.
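For the smartmontools suggestion, a minimal /usr/local/etc/smartd.conf sketch covering the four SATA members might look like this (the weekly short self-test schedule and the mail target are illustrative assumptions, not from the thread):

```
# -a: monitor all attributes; -s: short self-test Sundays at 02:00; -m: mail on failure
/dev/ad4 -a -s S/../../7/02 -m root
/dev/ad5 -a -s S/../../7/02 -m root
/dev/ad6 -a -s S/../../7/02 -m root
/dev/ad7 -a -s S/../../7/02 -m root
```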


Author Comment

ID: 21846986
Thanks, I will try it out when I get home from work. I'll reset the raid config and partition with fdisk instead of newfs.

Author Comment

ID: 21851418
OK, so I don't understand how to use sysinstall to do this. When I run disklabel, the disks show up as their separate ad4-ad7 selves rather than as a single raid array. There is an ar0; is this the raid array? The size it presents is not the array size, though. Only ad0 shows up in the disklabel tool.

I attempted to run the dd if=/dev/zero command; unfortunately it doesn't seem to show progress. I stopped it after 10 minutes and it had only processed about 600 MB, which would work out to roughly 15 days for the whole array, so I'm probably confused amid my lack of knowledge here.
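The estimate is in the right ballpark. A quick back-of-envelope sketch, assuming roughly 1.5 TB of usable space on the raid5 volume (three data disks' worth, with the fourth disk's worth going to parity):

```python
# Rough ETA for zeroing the raid5 volume at the observed dd rate.
mb_written = 600                      # observed in 10 minutes
elapsed_s = 10 * 60
rate_mb_s = mb_written / elapsed_s    # ~1 MB/s

usable_mb = 3 * 500 * 1000            # ~1.5 TB usable (4 disks minus one of parity)
eta_days = usable_mb / rate_mb_s / 86400

print(f"{rate_mb_s:.1f} MB/s -> ~{eta_days:.0f} days to zero the volume")
```

So 15+ days really is what that write rate implies; the useful fix is raising the write speed, not waiting it out.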



Expert Comment

ID: 21853054
Errrmmm... your setup really is that slow... and gvinum is not even loaded yet...

Author Comment

ID: 21854580
I didn't think my setup was slow: Athlon 64 X2 5400+, 7200 RPM hard drives, 2 GB RAM.

What do you mean gvinum is not loaded yet?

Expert Comment

ID: 21858903
A write speed of around 1 MB per second is very low.

Given your description, gvinum was not loaded at the time of creation, so odds are high that the metadata was not written correctly.

Author Comment

ID: 21859460
Thanks for the assistance thus far.

Any suggestions on how to speed it up? Should I be trying anything specific aside from playing with the plex org size?


Expert Comment

ID: 21870268
What is the CPU usage of the gvinum kernel module when you run the dd command? (Check with top.)

RAID5 has slow writes by definition:
Basically, when a 64 KB block is written, the gvinum module reads 261 KB off the four disks and writes 261 KB back to two disks.
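The cost comes from the RAID5 parity update: each small write turns into reads of the old data and parity plus writes of the new data and parity. An illustrative Python sketch of the XOR bookkeeping (not gvinum's actual implementation):

```python
# RAID5 keeps parity = XOR of the data blocks in a stripe.
def update_parity(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    # Read-modify-write: new_parity = old_parity ^ old_data ^ new_data,
    # so updating one block costs 2 reads + 2 writes.
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

data = [bytes([1] * 4), bytes([2] * 4), bytes([3] * 4)]   # 3 data disks
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))       # parity disk

new_block = bytes([9] * 4)
parity = update_parity(data[0], parity, new_block)
data[0] = new_block

# Invariant holds: parity is still the XOR of all data blocks.
assert parity == bytes(a ^ b ^ c for a, b, c in zip(*data))
```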

Let me suggest RAID 10 (http://en.wikipedia.org/wiki/Nested_RAID_levels#RAID_1.2B0), because it is highly fault tolerant in a hot-swap enclosure.

Normally the disk DMA size is 32 KB, so stripes bigger than that do not contribute to IO performance; let me recommend setting the plex stripe size to 32 KB and redoing the IO benchmarks to see the improvement.

If that is still unacceptable, go with RAID 10.


Expert Comment

ID: 21870290
Let me suggest the iobench benchmark, as it is able to simulate different loads: randomly reading/writing multiple 2 GB slices in 16 KB blocks as PostgreSQL does, reading multiple <1 MB files in 32 KB blocks like Apache, or writing 500 files under 16 MB in 16 KB blocks like sendmail.


Expert Comment

ID: 21870340
Getting the block size below the page size, which is 4 KB on amd64, would put unnecessary load on the memory manager.

Author Comment

ID: 21920183
It seems that regardless of what I do, I can't get it going faster than 7 MB/s, so I've decided to just shell out the money for a RAID card. I never really found out whether your solution worked, but you've earned the points for your effort.

Expert Comment

ID: 21920973
Have you tried the RAID0/RAID1 versions? I guess the parity calculation is the delay factor. I get RAID0 nearly n times faster on reads and writes, even using old, bad disk arrays. I know it is playing with fire, but it sometimes helps. That's a drop box for g4u images, so I do not care - I just bring the "array" down on SMART warnings, copy the disk, and restart the array. That's complicated only the first time; after that it goes quite easily.

RAID cards accommodate NCQ etc. and present a simpler SCSI-like interface, so the kernel has less work to do.

