Solved

Linux / mdraid / ext4: ~3TB of data vanished, fsck -n ok but fsck -n -b <backup super> reports errors?

Posted on 2013-12-03
Last Modified: 2013-12-04
Right, so: I discovered some mangled directory names on my 10TB RAID6 / ext4 array,
where I expected some 3TB of files & directories.
So: unmounted and fscked.
fsck -fn reports clean.
Remounted: the filesystem tree is clean but missing several branches, and df reports ~3TB more free space than yesterday :-(

fsck -n with a backup superblock spams my terminal with "Free blocks count wrong for group #..."
Any ideas? "Free blocks count" sounds promising, as I certainly have those.
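
(For anyone wondering where the backup superblock numbers come from: a dry run of mke2fs lists them. -n prints what mkfs *would* do without writing anything - double-check that flag before hitting enter - and the block size must match the real filesystem, 4096 here.)

# -n = dry run: prints where the superblock backups would live, writes nothing
# -b 4096 must match the existing filesystem's block size
mke2fs -n -b 4096 /dev/md3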

Mounting this read-only (mount -o ro) with a backup superblock fails, and I don't want to run fsck without -n until I'm sure it's reasonably safe.
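
(The mount attempt took roughly this shape. Note that mount's sb= option counts in 1KiB units, unlike fsck's -b, which takes filesystem blocks - so fs block 32768 becomes sb=131072 with 4KiB blocks:)

# sb= is in 1KiB units: 32768 fs blocks x 4 KiB = sb=131072
mount -t ext4 -o ro,sb=131072 /dev/md3 /mnt/archive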

TestDisk lists the filesystem but only says "filesystem appears damaged".

Any more info I can provide?

echo "check" >/sys/block/md3/md/sync_action
Check completed ok, no errors.
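
(For the record, the result can be confirmed after the fact - the mismatch count should read zero:)

cat /sys/block/md3/md/mismatch_cnt   # 0 after a clean check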
Where to next?
I note that fsck with the backup superblock reports all groups' free blocks = 32768, e.g. "Free blocks count wrong for group #68503 (32768, counted=14952)"
This 32768 is a somewhat suspicious number to me...

Appreciate any suggestions.

Debian 7.2, x86_64

More info:

~# dumpe2fs -h -o superblock=0 /dev/md3 | tee dump.prisbh 
dumpe2fs 1.42.5 (29-Jul-2012)
Filesystem volume name:   Archive
Last mounted on:          /mnt/archive
Filesystem UUID:          d1439567-35a6-4944-b155-f705d62b7301
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              366264320
Block count:              2930114304
Reserved block count:     0
Free blocks:              2364373670
Free inodes:              366021769
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      325
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         4096
Inode blocks per group:   256
RAID stride:              128
RAID stripe width:        512
Flex block group size:    16
Filesystem created:       Sun Apr 22 11:51:25 2012
Last mount time:          Wed Dec  4 09:13:51 2013
Last write time:          Wed Dec  4 09:13:51 2013
Mount count:              33
Maximum mount count:      -1
Last checked:             Fri May 24 10:19:30 2013
Check interval:           0 (<none>)
Lifetime writes:          11 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      d9212338-b97a-4d17-874c-fa7715c7c922
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x0024792d
Journal start:            0



~# dumpe2fs -h -o superblock=32768 /dev/md3 | tee dump.bupsbh 
dumpe2fs 1.42.5 (29-Jul-2012)
Filesystem volume name:   Archive
Last mounted on:          /mnt/archive
Filesystem UUID:          d1439567-35a6-4944-b155-f705d62b7301
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         not clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              366264320
Block count:              2930114304
Reserved block count:     0
Free blocks:              1018225394
Free inodes:              366033340
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      325
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         4096
Inode blocks per group:   256
RAID stride:              128
RAID stripe width:        512
Flex block group size:    16
Filesystem created:       Sun Apr 22 11:51:25 2012
Last mount time:          Fri May 24 08:22:56 2013
Last write time:          Fri May 24 11:11:53 2013
Mount count:              0
Maximum mount count:      -1
Last checked:             Fri May 24 10:19:30 2013
Check interval:           0 (<none>)
Lifetime writes:          10 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      d9212338-b97a-4d17-874c-fa7715c7c922
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x0024792d
Journal start:            0



Why such a difference between the primary superblock and backups?
If I read this right, the backup superblocks have not been updated since May...

Needless to say, I would quite like that data back.

Cheers,

Steve.
Question by:v_evets
18 Comments
 
Expert Comment by:xterm (ID: 39694578)
Have you checked back in syslog for errors preceding the event?

What physical disks are under this RAID, and have you checked that none are reporting failures? (Truthfully, if you've lost more than your parity allotment, I'm not sure that will help you recover the data, but you at least need to know.)
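
Something along these lines, say (zgrep also covers the rotated logs; the patterns are only examples):

# md, ext4 or ATA-layer complaints around the time the data vanished
zgrep -Ei 'md3|ext4|ata[0-9]+' /var/log/syslog* /var/log/kern.log*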

Author Comment by:v_evets (ID: 39694627)
Nothing unusual in /var/log/*

The disks are rubbish, ST2000DM001

# for disk in i h g d l e k j; do smartctl -H /dev/sd$disk; done
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

[identical PASSED output for the remaining seven drives]
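
(If it helps, a sketch for pulling the more telling raw attributes rather than just the overall verdict - attribute names as smartmontools prints them, drive list as in the loop above:)

for disk in i h g d l e k j; do
  echo "== /dev/sd$disk =="
  smartctl -A /dev/sd$disk | egrep 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC'
done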


Author Comment by:v_evets (ID: 39694629)
Most relevant event: mdadm scheduled check (completed ok)
The data can be replaced, though it will be a royal PITA as the real backup is on optical media :-(
I would very much like to know how this can happen even if it comes to that.
All the files still accessible are just fine. Pretty specific for a disk-level fault, no? Wouldn't that be more likely to trash the RAID entirely?
 
Expert Comment by:xterm (ID: 39694650)
I note that fsck with the backup superblock reports all groups' free blocks = 32768, e.g. "Free blocks count wrong for group #68503 (32768, counted=14952)"
This 32768 is a somewhat suspicious number to me...


That 32768 actually matches your "Blocks per group" value, so the number itself isn't alarming; the "counted=14952" is to me more the red flag - as if it was unable to see all of them.

What does cat /proc/mdstat show?

What about this?

for disk in i h g d l e k j; do fdisk -l /dev/sd$disk; done

Expert Comment by:xterm (ID: 39694654)
All the files still accessible are just fine. Pretty specific for a disk-level fault, no? Wouldn't that be more likely to trash the RAID entirely?

Yes, one would think. I've NEVER had good fortune with software RAID on Linux, though I've also never lost a single byte of data to it. Given your lack of any system complaints whatsoever, I would be more suspicious of an admin or a script/cron making a mistake.

When you described "mangled" directory names - did they contain weird blinking control characters?

Author Comment by:v_evets (ID: 39694655)
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] 
md3 : active raid6 sdi1[0] sdh1[11] sdg1[8] sdd1[10] sdl1[4] sde1[9] sdk1[2] sdj1[1]
      11720457216 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

md5 : active raid1 sdc3[0] sdb3[1]
      973631352 blocks super 1.2 [2/2] [UU]
      
md4 : active raid1 sdc1[0] sdb1[2]
      975860 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>



and

# for disk in i h g d l e k j; do fdisk -l /dev/sd$disk; done

Disk /dev/sdi: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d65ae

   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1            2048  3906824191  1953411072   da  Non-FS data

Disk /dev/sdh: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1            2048  3906824191  1953411072   da  Non-FS data

Disk /dev/sdg: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1            2048  3906824191  1953411072   da  Non-FS data

Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048  3906824191  1953411072   da  Non-FS data

Disk /dev/sdl: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000c55e8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdl1            2048  3906824191  1953411072   da  Non-FS data

Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048  3906824191  1953411072   da  Non-FS data

Disk /dev/sdk: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001644c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdk1            2048  3906824191  1953411072   da  Non-FS data

Disk /dev/sdj: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c44e2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdj1            2048  3906824191  1953411072   da  Non-FS data



Author Comment by:v_evets (ID: 39694665)
Mangled directory names: unfortunately I didn't think to record them. No control characters - mostly caps alpha and "~", more than 10 characters long. I only really looked at a couple of directories before unmounting.

Author Comment by:v_evets (ID: 39694678)
More grepping of logs turned this up from further back. Not sure if it's relevant:
Nov 20 08:34:43 damnation kernel: [2112721.528101] INFO: task updatedb.mlocat:25586 blocked for more than 120 seconds.
Nov 20 08:34:43 damnation kernel: [2112721.528104] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 08:34:43 damnation kernel: [2112721.528106] updatedb.mlocat D ffff880224e7e8f0     0 25586  25582 0x00000000
Nov 20 08:34:43 damnation kernel: [2112721.528109]  ffff880224e7e8f0 0000000000000082 0000000000000000 ffff8801531eb690
Nov 20 08:34:43 damnation kernel: [2112721.528112]  0000000000013780 ffff880006f6ffd8 ffff880006f6ffd8 ffff880224e7e8f0
Nov 20 08:34:43 damnation kernel: [2112721.528115]  ffffffff810135d2 ffffffff81066245 ffff8801157bc2f0 ffff88022fc93fd0
Nov 20 08:34:43 damnation kernel: [2112721.528118] Call Trace:
Nov 20 08:34:43 damnation kernel: [2112721.528124]  [<ffffffff810135d2>] ? read_tsc+0x5/0x14
Nov 20 08:34:43 damnation kernel: [2112721.528127]  [<ffffffff81066245>] ? timekeeping_get_ns+0xd/0x2a
Nov 20 08:34:43 damnation kernel: [2112721.528130]  [<ffffffff8111d315>] ? wait_on_buffer+0x28/0x28
Nov 20 08:34:43 damnation kernel: [2112721.528132]  [<ffffffff8134e141>] ? io_schedule+0x59/0x71
Nov 20 08:34:43 damnation kernel: [2112721.528134]  [<ffffffff8111d31b>] ? sleep_on_buffer+0x6/0xa
Nov 20 08:34:43 damnation kernel: [2112721.528136]  [<ffffffff8134e584>] ? __wait_on_bit+0x3e/0x71
Nov 20 08:34:43 damnation kernel: [2112721.528139]  [<ffffffff81120e40>] ? bio_alloc_bioset+0x43/0xb6
Nov 20 08:34:43 damnation kernel: [2112721.528141]  [<ffffffff8134e626>] ? out_of_line_wait_on_bit+0x6f/0x78
Nov 20 08:34:43 damnation kernel: [2112721.528143]  [<ffffffff8111d315>] ? wait_on_buffer+0x28/0x28
Nov 20 08:34:43 damnation kernel: [2112721.528145]  [<ffffffff8105fcad>] ? autoremove_wake_function+0x2a/0x2a
Nov 20 08:34:43 damnation kernel: [2112721.528167]  [<ffffffffa01bc247>] ? ext4_find_entry+0x1bd/0x298 [ext4]
Nov 20 08:34:43 damnation kernel: [2112721.528170]  [<ffffffff8110b5ac>] ? __d_lookup+0x3e/0xce
Nov 20 08:34:43 damnation kernel: [2112721.528177]  [<ffffffffa01bc350>] ? ext4_lookup+0x2e/0x11c [ext4]
Nov 20 08:34:43 damnation kernel: [2112721.528179]  [<ffffffff8110b1d3>] ? __d_alloc+0x12c/0x13c
Nov 20 08:34:43 damnation kernel: [2112721.528182]  [<ffffffff81102709>] ? d_alloc_and_lookup+0x3a/0x60
Nov 20 08:34:43 damnation kernel: [2112721.528184]  [<ffffffff811031ad>] ? walk_component+0x219/0x406
Nov 20 08:34:43 damnation kernel: [2112721.528186]  [<ffffffff81104041>] ? path_lookupat+0x7c/0x2bd
Nov 20 08:34:43 damnation kernel: [2112721.528189]  [<ffffffff81036628>] ? should_resched+0x5/0x23
Nov 20 08:34:43 damnation kernel: [2112721.528191]  [<ffffffff8134deec>] ? _cond_resched+0x7/0x1c
Nov 20 08:34:43 damnation kernel: [2112721.528193]  [<ffffffff8110429e>] ? do_path_lookup+0x1c/0x87
Nov 20 08:34:43 damnation kernel: [2112721.528195]  [<ffffffff81105d27>] ? user_path_at_empty+0x47/0x7b
Nov 20 08:34:43 damnation kernel: [2112721.528198]  [<ffffffff810fdbad>] ? cp_new_stat+0xe6/0xfa
Nov 20 08:34:43 damnation kernel: [2112721.528200]  [<ffffffff810fdd7a>] ? vfs_fstatat+0x32/0x60
Nov 20 08:34:43 damnation kernel: [2112721.528202]  [<ffffffff810fdedb>] ? sys_newlstat+0x12/0x2b
Nov 20 08:34:43 damnation kernel: [2112721.528205]  [<ffffffff81354212>] ? system_call_fastpath+0x16/0x1b



And no, I didn't name this box :-)

Expert Comment by:xterm (ID: 39694679)
Probably not what you want to hear, but I can't see a thing wrong with your RAID, nor with the setup of your physical disk partitions. Are there any crons that ran prior to this discovery, and have you looked at root's (or any other administrator's) shell command histories, just to see if there's a chance a mistake was made?
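
A quick sweep along these lines might turn something up (the grep patterns are only examples - widen as needed):

# scan shell histories for destructive commands
for h in /root/.bash_history /home/*/.bash_history; do
  echo "== $h =="
  grep -n -E 'rm |mv |mkfs|dd ' "$h"
done

# and what cron actually ran in the window
grep CRON /var/log/syslog | tail -n 200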

Expert Comment by:xterm (ID: 39694682)
Nov 20 08:34:43 damnation kernel: [2112721.528101] INFO: task updatedb.mlocat:25586 blocked for more than 120 seconds.
Nov 20 08:34:43 damnation kernel: [2112721.528104] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 08:34:43 damnation kernel: [2112721.528106] updatedb.mlocat D ffff880224e7e8f0


Do you have updatedb set (via PRUNEPATHS) to ignore your RAID? If so, then it's certainly not relevant; but if it scans your RAID each time it runs and hung while doing so, that may indicate something was amiss at the time. It could've been anything, though - a stale NFS handle, etc. Do you have any inclination as to whether your data was still present after 11/20?

Expert Comment by:xterm (ID: 39694688)
Perhaps a bit of a digression here, but I find it unusual that updatedb was running at 8:34 AM - my cron.daily containing that job runs at 4:02 AM.

What time does yours run, and is it possible it's still indexing millions of files on your RAID many hours later?

BTW, if your data was truly only lost today, the missing files will still be visible via the locate command, and you can isolate the time frame. If they're not there, it happened before the last run of that cron (assuming you index your RAID).
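
Assuming the stock Debian mlocate layout, something like this shows both whether the array is indexed and when the database last saw the files:

# is the array excluded from indexing?
grep PRUNEPATHS /etc/updatedb.conf

# does the current database still list the lost files?
locate /mnt/archive | head

# the database's mtime brackets when they were last seen
ls -l /var/lib/mlocate/mlocate.db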

Author Comment by:v_evets (ID: 39694708)
Cronjobs: all Debian stock, plus:
mysqldump --defaults-extra-file=/etc/mysql/debian.cnf -e -A --events | gzip > /var/backups/mysql_dump_`date +\%A`.sql.gz

grep -q 'Seagate' /etc/mtab && rsync -aAXHh --delete --delete-excluded --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* --exclude=/run/* --exclude=/mnt/* --exclude=/media/* --exclude=/lost+found /* /mnt/Seagate/Backups/$hostname

/usr/bin/rkhunter --update --cronjob && /usr/bin/rkhunter --check --cronjob

root's .bash_history has nothing dangerous in it (but it's only the default 500 lines long); no other users have write access.
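
(Note to self for next time: a longer, timestamped history would have answered this outright. These are stock bash settings, e.g. appended to /root/.bashrc:)

# keep far more history, timestamped, appended rather than overwritten
HISTSIZE=100000
HISTFILESIZE=100000
HISTTIMEFORMAT='%F %T  '
shopt -s histappend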

I agree that's what it looks like - I just can't see how.

Might be a good time to migrate to ZFS :-)

Author Comment by:v_evets (ID: 39694723)
Been indexing daily for years without issue, but it's possible.

25 6 * * *      root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )

Good point - they're not in the locate db now.

This could have occurred up to 2 days ago.

Most of it, at any rate, was accessible around 2/12.

Author Comment by:v_evets (ID: 39694753)
How do I make sure the ext4 fs is actually OK? If it is, then this must be a SNAFU, since any problem lower down (i.e. in the RAID) would show up as corruption at the filesystem level, yes?
This backup superblock thing is still nagging at me - is it normal for fsck -b to throw errors?

Author Comment by:v_evets (ID: 39694883)
Looks like maybe they _were_ deleted: debugfs ls -d shows the missing files.
Still mystified as to how.
Is this conclusive? Best suggestion for recovery?
extundelete can't recover anything except one small file that actually had been deleted recently.
Writes to this volume would have been on the order of 200MB/day, confined to one directory (which is OK).
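
(For reference, roughly what I ran - the path is just an example; extundelete wants the filesystem unmounted and restores into ./RECOVERED_FILES by default:)

# read-only poke around; debugfs's 'ls -d' also lists deleted entries
debugfs -R 'ls -d /some/missing/dir' /dev/md3

# bulk recovery attempt (filesystem must be unmounted first)
umount /mnt/archive
extundelete /dev/md3 --restore-all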

Accepted Solution by:xterm (earned 2000 total points; ID: 39695884)
That happens to me every time I use extundelete - I get stuff I don't want, and can't get anything I do want.

As to your previous question: yes, your ext4 is surely fine - if you think about it, FS errors are very verbose and usually blow up syslog with thousands of repeated entries.

I'm sorry for your data loss - I hope you have better success with undeleting than I've had. I see your rsync cron uses "--delete", so if it ran after the data loss, that backup is unfortunately useless too. You might consider dropping that option from the dailies and only doing a delete pass once in a while as a cleanup, unless your data changes greatly each day.
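
Something like this split, say (paths are placeholders; with --delete, adding --backup/--backup-dir parks displaced files instead of destroying them):

# daily: no --delete, so source-side deletions don't propagate immediately
rsync -aAXHh /source/ /backup/current/

# occasional cleanup: deleted/overwritten files get parked in a dated attic
rsync -aAXHh --delete --backup --backup-dir=/backup/attic/$(date +%F) /source/ /backup/current/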

Author Comment by:v_evets (ID: 39696511)
Good point, though that rsync just covers the OS array anyway. Undeleting may well take longer than a restore from the originals. Looks like I need to employ a disc-swapping monkey for a while :-(
Thanks for your help.

Expert Comment by:xterm (ID: 39696524)
Any time!  As inconvenient as it will be, I'm relieved for you that you at least have backups.
