Problem with xfs file system

Hi,
I have a 2TB XFS filesystem created on a Coraid device.
I can successfully mount it and browse files and directories, but when I start my backup program (BackupPC) the filesystem becomes unusable.
In fact:

# mount /coraid
# ls /coraid
cpool log pc share trash
# /etc/init.d/backuppc start
Starting backuppc: ok.
# ls /coraid
ls: reading directory .: Input/output error

This is my dmesg:

Filesystem "etherd/e3.2": Corruption of in-memory data detected.  Shutting down filesystem: etherd/e3.2
Please umount the filesystem, and rectify the problem(s)
xfs_force_shutdown(etherd/e3.2,0x1) called from line 424 of file fs/xfs/xfs_rw.c.  Return address = 0xf8c91788
xfs_force_shutdown(etherd/e3.2,0x1) called from line 424 of file fs/xfs/xfs_rw.c.  Return address = 0xf8c91788
Filesystem "etherd/e3.2": Disabling barriers, not supported by the underlying device
XFS mounting filesystem etherd/e3.2
Starting XFS recovery on filesystem: etherd/e3.2 (logdev: internal)
Ending XFS recovery on filesystem: etherd/e3.2 (logdev: internal)
xfs_da_do_buf: bno 8388639
dir: inode 2148761461
Filesystem "etherd/e3.2": XFS internal error xfs_da_do_buf(1) at line 1992 of file fs/xfs/xfs_da_btree.c.  Caller 0xf8c59b97
 [<f8c597bc>] xfs_da_do_buf+0x387/0x705 [xfs]
 [<f8c59b97>] xfs_da_read_buf+0x19/0x1e [xfs]
 [<f8c59b97>] xfs_da_read_buf+0x19/0x1e [xfs]
 [<f8c622f9>] xfs_dir2_leafn_toosmall+0x15a/0x28f [xfs]
 [<f8c622f9>] xfs_dir2_leafn_toosmall+0x15a/0x28f [xfs]
 [<f8c5aa79>] xfs_da_join+0xa0/0x631 [xfs]
 [<f8c59b97>] xfs_da_read_buf+0x19/0x1e [xfs]
 [<f8c5a6a6>] xfs_da_fixhashpath+0x47/0xe1 [xfs]
 [<f8c61117>] xfs_dir2_node_removename+0x432/0x452 [xfs]
 [<f8c5c951>] xfs_dir_removename+0xe2/0xe9 [xfs]
 [<f8c74b21>] xfs_log_reserve+0x56c/0x5b2 [xfs]
 [<f8c88eee>] kmem_zone_zalloc+0x1d/0x41 [xfs]
 [<f8c86e6b>] xfs_remove+0x23d/0x3a9 [xfs]
 [<f8c8ec1e>] xfs_vn_unlink+0x17/0x3b [xfs]
 [<f8c69a8a>] xfs_iunlock+0x51/0x6d [xfs]
 [<f8c83ad3>] xfs_access+0x34/0x3a [xfs]
 [<f8c8ee2b>] xfs_vn_permission+0x0/0x13 [xfs]
 [<f8c8ee3a>] xfs_vn_permission+0xf/0x13 [xfs]
 [<c01655f7>] permission+0xa3/0xb6
 [<c0165ba9>] may_delete+0x32/0xe3
 [<c01660f1>] vfs_unlink+0xa3/0xd9
[<c0167be1>] do_unlinkat+0x85/0x113
 [<c013fb27>] handle_IRQ_event+0x23/0x49
 [<c013fc00>] __do_IRQ+0xb3/0xe8
 [<c013fc24>] __do_IRQ+0xd7/0xe8
 [<c0102c11>] sysenter_past_esp+0x56/0x79
Filesystem "etherd/e3.2": XFS internal error xfs_trans_cancel at line 1138 of file fs/xfs/xfs_trans.c.  Caller 0xf8c86fb1
 [<f8c7e5f5>] xfs_trans_cancel+0x4d/0xd6 [xfs]
 [<f8c86fb1>] xfs_remove+0x383/0x3a9 [xfs]
 [<f8c86fb1>] xfs_remove+0x383/0x3a9 [xfs]
 [<f8c8ec1e>] xfs_vn_unlink+0x17/0x3b [xfs]
 [<f8c69a8a>] xfs_iunlock+0x51/0x6d [xfs]
 [<f8c83ad3>] xfs_access+0x34/0x3a [xfs]
 [<f8c8ee2b>] xfs_vn_permission+0x0/0x13 [xfs]
 [<f8c8ee3a>] xfs_vn_permission+0xf/0x13 [xfs]
 [<c01655f7>] permission+0xa3/0xb6
 [<c0165ba9>] may_delete+0x32/0xe3
 [<c01660f1>] vfs_unlink+0xa3/0xd9
 [<c0167be1>] do_unlinkat+0x85/0x113
 [<c013fb27>] handle_IRQ_event+0x23/0x49
 [<c013fc00>] __do_IRQ+0xb3/0xe8
 [<c013fc24>] __do_IRQ+0xd7/0xe8
 [<c0102c11>] sysenter_past_esp+0x56/0x79
xfs_force_shutdown(etherd/e3.2,0x8) called from line 1139 of file fs/xfs/xfs_trans.c.  Return address = 0xf8c91788
Filesystem "etherd/e3.2": Corruption of in-memory data detected.  Shutting down filesystem: etherd/e3.2
Please umount the filesystem, and rectify the problem(s)

I tried xfs_repair, and this is the output:

# xfs_repair -L /dev/etherd/e3.2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2

Killed

Can anyone help me? Thank you.
 
Armitage318 asked:
 
Armitage318 (Author) commented:
Solved by using a more recent version of xfs_repair.
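
For anyone who hits this later, roughly what that looks like (the device path is the one from the question; the xfsprogs package name and install command vary by distribution, and the -n dry run is worth doing before letting it write):

# xfs_repair -V
(shows the installed xfs_repair/xfsprogs version)
# apt-get install xfsprogs
(Debian/Ubuntu example; use yum/zypper elsewhere, or build a newer xfsprogs from source)
# umount /coraid
# xfs_repair -n /dev/etherd/e3.2
(no-modify mode: reports what it would fix without writing anything)
# xfs_repair /dev/etherd/e3.2
(the real repair; -L only if log replay itself keeps failing)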
 
farzanj commented:
First, before doing anything, you need to back up -- I know your backup program is not working. You need to create a tar archive or rsync over the network, or simply copy to a USB device. 2TB is not a high volume these days, and you can simply get an external drive with more than 2TB of capacity. This is the first thing to do, and it is very important. Even if cp works, you should back up your filesystem first.
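
A minimal sketch of that copy; the host name backuphost and the destination directories are placeholders, not from the original post. Mount the source read-only so nothing writes to the damaged filesystem, and keep -H because a BackupPC pool is mostly hardlinks:

# mount -o ro /dev/etherd/e3.2 /coraid
# rsync -aHx --progress /coraid/ backuphost:/srv/coraid-copy/
(or, to a locally attached USB drive)
# rsync -aHx --progress /coraid/ /mnt/usb/coraid-copy/
(-a preserves permissions and times, -H preserves hardlinks, -x stays on this one filesystem)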
 
DavidPresident commented:
Just because you can mount it and browse the directory doesn't prove much. For example, if this were a RAID 5 that had been reassembled out of order but with the first disk in the right place, you would see similar results.

Anything you aren't revealing? Is this a RAID config, and did you have problems with it at one time?
 
coredatarecovery commented:
It could have been caused by an unstable drive in the RAID array; I'd check the SMART status on the drives (a smartctl sketch follows below).

The XFS filesystem should not break on the RAID unless you have a memory error on the card or a drive dropping out.
I'd take a hard look at what the root cause was before continuing to store data on this box.

Is this a RAID 0? If so, check it as above. If it's a RAID 1 or a RAID 5, it could be a card/hardware issue.

Good luck, and as always: backup, backup, backup.
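
A minimal sketch of that SMART check, assuming the member disks show up on the host as /dev/sd* (on a Coraid shelf they may only be visible from the appliance itself, not from the AoE client):

# smartctl -H /dev/sda
(overall health self-assessment for one drive; repeat for each member disk)
# smartctl -a /dev/sda
(full attribute dump; watch Reallocated_Sector_Ct, Current_Pending_Sector and the CRC error counts)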
 
Armitage318 (Author) commented:
I think the solution was fine.