
Preparing disks in Openfiler for DRBD

jonas-p asked

I'm configuring my Openfilers for DRBD, following the article "http://wiki.the-mesh.org/wiki/OpenFilerHaSetup".
But I'm having trouble with the following commands:
- [root@openfiler2 ~]# drbdadm create-md cluster_metadata
- [root@openfiler2 ~]# drbdadm create-md vg0_drbd

I have the same problem on my openfiler1, but I'll only walk through this one. You will see the error I get in the code snippets.

I've also provided a disk layout. In my previous question (ID: 37127700) I already asked about this problem, and someone said that I had to delete the partitions on drbd0 and drbd1.

Thanks in advance.
Kind regards.

[root@openfiler2 ~]# drbdadm create-md cluster_metadata

--== This is a new installation of DRBD ==--
Please take part in the global DRBD usage count at http://usage.drbd.org.

The counter works anonymously. It creates a random number to identify
your machine and sends that random number, along with the kernel and
DRBD version, to usage.drbd.org.

The benefits for you are:
 * In response to your submission, the server (usage.drbd.org) will tell you
   how many users before you have installed this version (8.3.10).
 * With a high counter LINBIT has a strong motivation to
   continue funding DRBD's development.


In case you want to participate but know that this machine is firewalled,
simply issue the query string with your favorite web browser or wget.
You can control all of this by setting 'usage-count' in your drbd.conf.

* You may enter a free form comment about your machine, that gets
  used on usage.drbd.org instead of the big random number.
* If you wish to opt out entirely, simply enter 'no'.
* To count this node without comment, just press [RETURN]

  --==  Thank you for participating in the global usage survey  ==--
The server's response is:

you are the 6153th user to install this version

From now on, drbdadm will contact usage.drbd.org only when you update
DRBD or when you use 'drbdadm create-md'. Of course it will continue
to ask you for confirmation as long as 'usage-count' is at its default
value of 'ask'.

Just press [RETURN] to continue: md_offset 485926891520
al_offset 485926858752
bm_offset 485912027136

Found some data

 ==> This might destroy existing data! <==

Do you want to proceed?
[need to type 'yes' to confirm]
Operation canceled.


[root@openfiler2 ~]# drbdadm create-md vg0_drbd
md_offset 4998268186624
al_offset 4998268153856
bm_offset 4998115618816
Found some data
 ==> This might destroy existing data! <==
Do you want to proceed?
[need to type 'yes' to confirm] yes
Operation canceled.



Top Expert 2009

What I told you previously is:

DRBD will complain only when it sees that the hard drive is already formatted.

That's why I said: when you create the partition, don't format it.

Example: when you create the partitions

/dev/sd1 and /dev/sd2

you then use /dev/sd1 for drbd0
and /dev/sd2 for drbd1.

Now you create the metadata:

drbdadm create-md drbd0
drbdadm create-md drbd1

Once the metadata has been created, then format the partition:

mkfs.ext3 /dev/drbd0
mkfs.ext3 /dev/drbd1

Does it make sense?
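Putting the steps above in order: drbdadm create-md writes DRBD's metadata near the end of the raw partition, so the partition must not already carry a filesystem when you run it. A hedged sketch of the sequence, with illustrative device and resource names (yours will differ):

```shell
# Illustrative only - substitute your own devices and resource names.
fdisk /dev/sdb              # create the partitions; do NOT run mkfs yet
drbdadm create-md drbd0     # write DRBD metadata onto the raw partitions
drbdadm create-md drbd1
drbdadm up drbd0            # attach and connect the resources
drbdadm up drbd1
mkfs.ext3 /dev/drbd0        # only now create filesystems, on the DRBD devices
mkfs.ext3 /dev/drbd1
```

The key point is that mkfs runs against /dev/drbdX, never against the backing partition itself.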

Top Expert 2009
Have a look at this one.

From that website:

 Initialise metadata on /dev/drbd0 (cluster_metadata) and /dev/drbd1 (vg0drbd) on both nodes:

root@filer01 ~# drbdadm create-md cluster_metadata
root@filer01 ~# drbdadm create-md vg0drbd
root@filer02 ~# drbdadm create-md cluster_metadata
root@filer02 ~# drbdadm create-md vg0drbd

Note: if the commands above generate errors about needing to zero out the file system, use the following command:

root@filer01 ~# dd if=/dev/zero of=/dev/sda3

As I said: after you use fdisk /dev/sda, don't format the partition with ext3; leave it unformatted. You will create the filesystem later with:

root@filer01 ~# mkfs.ext3 /dev/drbd0
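The dd command from the guide clears the old filesystem signature by overwriting the partition with zeros. A minimal sketch of the effect, run against a scratch file standing in for the partition (the path /dev/sda3 and the sizes here are illustrative):

```shell
# Scratch file standing in for a partition such as /dev/sda3 (illustrative).
disk=$(mktemp)
dd if=/dev/urandom of="$disk" bs=1M count=4 2>/dev/null   # simulate existing data
# Zero the start of the "device", where filesystem signatures live:
dd if=/dev/zero of="$disk" bs=1M count=1 conv=notrunc 2>/dev/null
# Count non-zero bytes left in the first MiB:
leftover=$(head -c 1048576 "$disk" | tr -d '\0' | wc -c)
echo "non-zero bytes in signature area: $leftover"   # prints 0 leftover bytes
rm -f "$disk"
```

Note that conv=notrunc matters when the target is a regular file; against a real block device, dd without count (as in the guide) zeroes the entire partition, which takes much longer but is equally effective.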


Hello fosiul01,

Thanks for the information. That worked for me, but I had to run the command:
"dd if=/dev/zero of=/dev/sda3"
I ran it on both Openfilers.
Then I followed the instructions you provided (the HTML link).
But now I'm stuck on the following:

[root@openfiler1 ~]# chkconfig --level 2345 heartbeat on
error reading information on service heartbeat: No such file or directory
You have new mail in /var/spool/mail/root

Can you help me?
Thanks in advance.

Kind regards.
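For context on the error above: chkconfig manages SysV init scripts under /etc/init.d, so "No such file or directory" usually means that script was never installed. A quick hedged check (service name assumed to be heartbeat):

```shell
# Check whether the SysV init script chkconfig expects actually exists.
svc=heartbeat
if [ -e "/etc/init.d/$svc" ]; then
    msg="$svc: init script present; chkconfig should work"
else
    msg="$svc: init script missing - install the package first"
fi
echo "$msg"
```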
Top Expert 2009

I believe you did not install the heartbeat packages:

yum install heartbeat

That should install the packages.



[root@openfiler1 ~]# yum install heartbeat
-bash: yum: command not found

Top Expert 2009

I believe heartbeat installs automatically with Openfiler...

You are installing Openfiler from the Openfiler CD, aren't you?

What about:

service heartbeat start

What does that command do?

Also type:

locate heartbeat
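One caveat about locate: it searches a database built periodically by updatedb, so an empty result can simply mean the database is stale or was never built. A find-based search is slower but always current; this sketch demonstrates it on a temporary directory rather than the real filesystem root:

```shell
# Demo on a temp tree; on the real system you would search / (or /etc, /usr).
root=$(mktemp -d)
mkdir -p "$root/etc/init.d"
touch "$root/etc/init.d/heartbeat"
found=$(find "$root" -name 'heartbeat*' 2>/dev/null)
echo "$found"
rm -rf "$root"
```

On the real machine, running updatedb first and then retrying locate heartbeat would settle whether the files are truly absent.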



I ran all the commands on my openfiler1 but no success (it's Openfiler v2.9); this is the first one I set up. Additionally I installed openfiler2 (same v2.9).

That's why I tried it on openfiler2 as well. Same problem there: the service is not recognized. But when I used the locate command I got the following (see code):

What do I need to do next? Should I reinstall openfiler1 (same as openfiler2) so the heartbeat files are on it, or should I wait?

[root@openfiler2 ~]# locate heartbeat
[root@openfiler2 ~]#


Top Expert 2009

When I installed Openfiler following that guide, it went straight through; I never had to install heartbeat...

Can you please follow this

and try to install heartbeat again, maybe from the Openfiler GUI interface or from the command line.

Install conary packages:

I tried the following commands on both filers:

conary update conary
conary updateall --replace-files --no-conflict-check

It installed different packages on each, but the same result when I try locate heartbeat or start the service:

On openfiler1: nothing found.
On openfiler2: found in the /usr directory.

Is heartbeat OK on openfiler2, or is something wrong there as well and I only need to fix openfiler1? I will retry on openfiler1 tomorrow after I reinstall it.