BakerSyd (Australia) asked:

ESXi error mounting raw device mapping from EVA


I am trying to add a 100 GB disk as a raw LUN to a server that runs on our development ESXi box.

The 100 GB disk has been created in our EVA and presented directly to the ESXi box. I did a Rescan All and it found the new LUN.

I then proceeded to add a new hard disk as normal, selected raw LUN mapping, and followed all the steps. When I click OK I get the following error:

Reconfigure virtual machine SERVER A general system error occurred: Failed to create journal file provider: Failed to open "/var/log/vmware/journal/1378689-776.25" for write: No such file or directory domain\username 09/09/2013 11:22:17 AM 09/09/2013 11:22:17 AM 09/09/2013 11:22:17 AM

I have read the following article from VMware:

but it doesn't really help me out much. I have checked the locations mentioned in the article and there is nothing in there. There is plenty of space on the vCenter Server (5.3 GB available) and nothing in the Journals location.

[screenshot of the empty Journals location]
I logged on to the ESXi host directly with PuTTY and ran the commands to check available space, and it looks like there is plenty there...

[screenshot of the disk-space command output]
Our ESXi build is 5.0.0, build 623860.

I'm not sure where to go from here or how to resolve this. Has anybody else experienced something like this?


Andrew Hancock (VMware vExpert PRO / EE Fellow / British Beekeeper) replied:

What is the output of:

vdf -h
BakerSyd replied:


Not sure which part you are looking for, so here is the full result of vdf -h:

~ # vdf -h
Tardisk                  Space      Used
misc-cni.v00               28K       25K
net-bnx2.v00              316K      314K
net-bnx2.v01                1M        1M
net-cnic.v00              140K      139K
net-tg3.v00               304K      303K
scsi-bnx.v00              268K      267K
scsi-bnx.v01              212K      211K
scsi-bfa.v00                2M        2M
ima-be2i.v00                1M        1M
net-be2n.v00              324K      320K
scsi-be2.v00              532K      529K
scsi-lpf.v00                1M        1M
char-hpc.v00               36K       32K
char-hpi.v00               32K       30K
hp-ams.v00                  2M        2M
hp-build.v00                8K        6K
hp-smx-p.v00               28M       28M
hpacucli.v00               11M       11M
hpbootcf.v00               36K       32K
hponcfg.v00                92K       88K
scsi-hps.v00              184K      183K
scsi-hpv.v00                2M        2M
vmware-e.v00               52K       50K
net-igb.v00               252K      251K
net-ixgb.v00              352K      349K
scsi-mpt.v00              460K      459K
net-mlx4.v00              456K      452K
ima-qla4.v00                1M        1M
net-qlcn.v00              396K      395K
scsi-qla.v00              408K      404K
ata-pata.v00               44K       42K
ata-pata.v01               32K       28K
ata-pata.v02               32K       30K
ata-pata.v03               32K       31K
ata-pata.v04               40K       36K
ata-pata.v05               36K       32K
ata-pata.v06               32K       28K
ata-pata.v07               40K       36K
block-cc.v00               84K       81K
ehci-ehc.v00               88K       86K
s.v00                     348M      347M
ipmi-ipm.v00               40K       39K
ipmi-ipm.v01              108K      105K
ipmi-ipm.v02              100K       98K
misc-dri.v00                3M        3M
net-e100.v00              300K      296K
net-e100.v01              248K      246K
net-enic.v00              148K      144K
net-forc.v00              128K      124K
net-nx-n.v00                1M        1M
net-r816.v00              136K      134K
net-r816.v01               84K       80K
net-s2io.v00              244K      241K
net-sky2.v00              112K      111K
ohci-usb.v00               56K       55K
sata-ahc.v00               96K       95K
sata-ata.v00               60K       58K
sata-sat.v00               88K       85K
sata-sat.v01               44K       41K
sata-sat.v02               44K       40K
sata-sat.v03               36K       32K
scsi-aac.v00              168K      167K
scsi-adp.v00              424K      423K
scsi-aic.v00              296K      292K
scsi-fni.v00              152K      151K
scsi-ips.v00              132K      130K
scsi-meg.v00               96K       93K
scsi-meg.v01              160K      156K
scsi-meg.v02               92K       91K
scsi-mpt.v01              516K      512K
scsi-mpt.v02              436K      433K
scsi-rst.v00              748K      745K
uhci-usb.v00               60K       56K
hpnmi.v00                  40K       37K
scsi-qla.v01                1M        1M
imgdb.tgz                 292K      290K
state.tgz                  20K       20K
Ramdisk                   Size      Used Available Use% Mounted on
root                       32M       32M        0B 100% --
etc                        28M      200K       27M   0% --
tmp                       192M       32K      191M   0% --
hostdstats                791M        7M      783M   0% --
BakerSyd added:


What I've learnt since I posted the info above is that the root ramdisk is quite clearly full, at 100%. We've tried clearing some logs, but looking at the host there is no visible log file hogging all the space, or any file for that matter that is unusually large.

It looks like there is an update to ESXi that is meant to address this sort of ramdisk issue, so we are going to schedule an upgrade. Unfortunately there is only one ESXi host in this dev environment, so downtime will have to be scheduled.

Let me know if you have any ideas on what else we can check.
Yes, as you can see, the issue is that root is 100% full!
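A quick way to confirm which ramdisk is exhausted is to filter the vdf -h output rather than eyeballing it. The sketch below is a minimal example assuming only a POSIX shell with awk (which the ESXi busybox shell provides); it uses a canned two-line sample in place of the live command, so it can be tried anywhere. On the host you would pipe vdf -h straight into the same awk filter.

```shell
#!/bin/sh
# Sketch: flag any ramdisk at >= 95% utilisation in `vdf -h` style output.
# The sample text stands in for the live command; on the host you would run:
#   vdf -h | awk 'NR > 1 && $5 + 0 >= 95 {print $1 " is " $5 " full"}'
sample='Ramdisk                   Size      Used Available Use% Mounted on
root                       32M       32M        0B 100% --
etc                        28M      200K       27M   0% --'

# Skip the header row (NR > 1), then compare the Use% column numerically:
# awk's "$5 + 0" coerces a value like "100%" to the number 100.
printf '%s\n' "$sample" | awk 'NR > 1 && $5 + 0 >= 95 {print $1 " is " $5 " full"}'
```

Against the vdf output posted above, this would single out the root ramdisk and nothing else.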
Kerem ERSOY replied:


You should have checked /var/log/vmware/journal/ on the host, but your original post suggests you checked C:\Document settings\... on the vCenter Server after checking the VMware application.

Please go to the VMware server and issue:

df /var/log/vmware/journal


Remove any stale logs and dumps and you should be OK.
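A hedged sketch of that cleanup, using a temporary directory as a stand-in for /var/log/vmware/journal so it can be tried safely off the host; the file names are made up. The find expression removes journal files untouched for more than seven days while leaving fresh ones (such as an active .lck file) alone.

```shell
#!/bin/sh
# Sketch: remove stale files from a journal directory, keeping recent ones.
# JOURNAL_DIR is a stand-in; on the ESXi host it would be /var/log/vmware/journal.
JOURNAL_DIR=$(mktemp -d)
touch "$JOURNAL_DIR/fresh.lck"                  # modified just now: kept
touch -t 201301010000 "$JOURNAL_DIR/stale.lck"  # backdated to Jan 2013: removed
# -mtime +7 matches files last modified more than 7 days ago
find "$JOURNAL_DIR" -type f -mtime +7 -exec rm -f {} \;
ls "$JOURNAL_DIR"
```

The age threshold is arbitrary; the point is to delete by modification time rather than blindly emptying the directory, since a live lock file may belong to a running process.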

BakerSyd replied:


When I run the df command I get this:

~ # df /var/log/vmware/journal
Filesystem         Bytes          Used     Available Use% Mounted on
VMFS-5     5000684109824 3172504436736 1828179673088  63% /vmfs/volumes/Data
VMFS-5      141465485312   89166708736   52298776576  63% /vmfs/volumes/datastore1
vfat          4293591040      22020096    4271570944   1% /vmfs/volumes/5024229b-c45c6d50-fea4-2c768a4f08c1
vfat           261853184     149065728     112787456  57% /vmfs/volumes/7b4d4e7c-296829e9-af16-74b2c42a18c2
vfat           261853184          8192     261844992   0% /vmfs/volumes/87076576-dd072337-5992-e1db8cf35b11
vfat           299712512     188481536     111230976  63% /vmfs/volumes/50242294-9788e7f0-6be5-2c768a4f08c1


Not sure what you mean by removing stale logs... I see no logs there?
compdigit44 replied:

Have you tried to zero out this volume?


To work around this issue, either zero out the LUN from the storage array or apply a compatible disk label type using the partedUtil command on the ESXi host. Applying a legacy MBR (msdos) or GPT type disk label to the LUN allows it to be used with the Add Storage wizard.

Warning: This procedure has the potential to cause data loss if it is applied against the incorrect LUN. Validate that you have the correct NAA identifier before proceeding. For more information, see Identifying disks when working with VMware ESX/ESXi (1014953).

To apply a legacy msdos type partition label, run this command:

# partedUtil mklabel /dev/disks/naa_id msdos

For example:
# partedUtil mklabel /dev/disks/naa.60060480000190103951533030311234 msdos

After running this command, the partition table should be readable using:

# partedUtil getptbl /vmfs/devices/disks/naa_id
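Given the data-loss warning above, it may be worth printing the exact command lines with the target device path filled in and reviewing them before running anything. This is only a sketch; the NAA identifier below is the made-up example from the KB excerpt, not a real device.

```shell
#!/bin/sh
# Sketch: build the partedUtil command lines for review before executing.
# NAA_ID is illustrative only; substitute the identifier of the intended LUN
# after double-checking it against the storage array.
NAA_ID="naa.60060480000190103951533030311234"
DISK="/vmfs/devices/disks/${NAA_ID}"
echo "partedUtil mklabel ${DISK} msdos"
echo "partedUtil getptbl ${DISK}"
```

Once the echoed lines match the LUN you intend to relabel, run them on the host as-is.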
Kerem ERSOY added:

Can you also use this command:

du -h /var/log/vmware/journal

and see what uses the most space in there?
BakerSyd replied:


Yeah, there's nothing in there...

~ # du -h /var/log/vmware/journal
8.0k    /var/log/vmware/journal/1377667571.1.lck
12.0k   /var/log/vmware/journal


compdigit44 added:

Hi BakerSyd, have you tried to zero out the volume?
BakerSyd replied:


I'm going to close this as "could not find a solution".

I'm going to log a ticket with VMware. Upgrading to 5.1 or 5.5 is on the cards, so I expect that will resolve this.
[BakerSyd posted an accepted solution; it is hidden behind the site's paywall.]