Solved

shadow protect images - at some point some files may not be recoverable? Or how do you do it right?

Posted on 2014-02-03
10
809 Views
Last Modified: 2014-02-04
It dawned on me recently, and I want to check whether this is correct: under some situations, a file can't be recovered if

a) it wasn't on the server long, or
b) it's a long time after the file was deleted / corrupted?

Specifically with ShadowProtect, but I would think it would apply to other apps too.  Is there a way to avoid the problem?  Do you point this out to the client?  How? It seems like an obscure situation they may not grasp, and they might just get more afraid when you say things might not be recoverable.

Say a file is created on the server on the 2nd of the month.  You are doing 15-minute continuous incrementals.  It winds up getting deleted / corrupted within a couple weeks - before the end of the month.

It IS on the 15-minute incrementals, the daily consolidated, the weekly consolidated... but not on the monthly consolidated.

Then over time, the retention settings for ShadowProtect throw out the unneeded dailies and weeklies that have the file in them.  Yes, using the defaults of SP, that's weeks / months later.  But then the client says, 'We just noticed this file we made a while ago was deleted...'

It could be unrecoverable, right?  Even if you 'archive'.  If you have an archive of the 1st or 31st of the month, the file will not be there.  Yes, maybe an obscure situation.  But a possible situation?

How do you deal with that?  Avoid it?  Etc.
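The scenario above can be sketched in a few lines of Python. This is a minimal made-up model, not ShadowProtect's actual consolidation algorithm; the dates and the assumption that only a month-end image survives long-term are illustrative only:

```python
from datetime import date

# Hypothetical retention model (NOT ShadowProtect's real algorithm):
# daily incrementals exist for a while, but after retention expires
# only a monthly image taken on the last day of the month survives.

def snapshots_containing(created, deleted, snapshot_days):
    """Return the snapshot dates on which the file actually existed."""
    return [d for d in snapshot_days if created <= d < deleted]

created = date(2014, 1, 2)    # file appears on the 2nd of the month
deleted = date(2014, 1, 20)   # and is gone before month-end

dailies = [date(2014, 1, d) for d in range(1, 32)]  # one snapshot per day
monthly = [date(2014, 1, 31)]                       # the only long-term image

print(len(snapshots_containing(created, deleted, dailies)))  # 18 dailies have it
print(len(snapshots_containing(created, deleted, monthly)))  # 0 -> unrecoverable
```

So while the file sat in 18 daily snapshots, it is in zero of the images that outlive the retention window - which is exactly the granularity problem being asked about.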
0
Comment
10 Comments
 
LVL 38

Accepted Solution

by:
Hypercat (Deb) earned 250 total points
ID: 39830794
When explaining any kind of backup system to a client or user, you need to be very clear that a backup is NOT forever and not infallible.  With ShadowProtect set to do 1-hour incrementals and then daily, weekly and monthly rollups, I usually simplify it by saying that any single file can most likely be recovered within 14 days. If they question why I say "most likely," I'll give an example of someone who creates a file and then deletes it within an hour of creating it.  Because I schedule my backups to be hourly, I'll say that in that case the file may not have been backed up before it was deleted.

Also, we often will put an additional program on heavily used file servers to preserve multiple file versions.  One that we've used for years is Undelete, which can be set to save a number of file versions for things like documents, etc., so if someone deletes or even edits a file by mistake you can recover a previous version without even going to a backup. You can of course also use Microsoft's own volume shadow copy services, but I prefer Undelete because it has better management tools.
0
 
LVL 38

Expert Comment

by:Philip Elder
ID: 39831309
Our client firms require that we have at least two years worth of data recoverable.

So, we grandfather, grandmother, and in some cases father the backup drives (6, 12, 24 months).

This means that our rotations must include enough drives to cover the regular rotations (2 sometimes 3 sets) and then the archive sets.

The data is encrypted so we don't worry too much about the drives sitting in our vault. :)

Philip
0
 

Author Comment

by:BeGentleWithMe-INeedHelp
ID: 39831511
Sorry if I am being stubborn...  

Hyper - I like how you make a point to mention the shorter (14-day) window. As much as we all talk of backups for months... some weird situations might not have the files months later.

Philip - I understand what you are saying, but do you agree with Hyper that there are situations where an admittedly tiny number of files might not be there when you look at the backups at some point in the future?

Like Hyper's created-and-deleted-between-snapshots case, but also my thinking that even a file that exists for weeks might not be recoverable?
0
 
LVL 38

Expert Comment

by:Philip Elder
ID: 39831538
The only time we have experienced file loss was in the case of an IDE RAID 5 setup where some of the disks started to experience wobbly bits (bad sectors).

Those wobbly bits led to a garbage-in, garbage-out situation with the backups, as they did not manifest themselves until the server went full-stop.

After a hard reset the server came up and all was seemingly happy. Then the NTFS 55 errors came along. Then it went full-stop again with no recovery.

A perfect storm of events led to the backups being relatively useless (BackupExec to DAT libraries).

That was the last time we experienced file loss. Out of 650GB of data we recovered everything but one partner's 24 files.

To date we have gone through some very spectacular recoveries in part due to ShadowProtect and in part due to the skills learned via the SwingIT migration techniques (www.sbsmigration.com - best $400 I've ever spent). No data was lost.

Since switching to SAS only disks and hardware RAID we have not experienced any data loss. We have had a few lost drives over the years but no data lost.

So, to answer your question straight up: I am 100% confident that our backups are good all the way through.

But again, we _test_ those backups with full bare-metal restores (to Hyper-V or physical box).

Oh, and our primary vertical is accounting offices. They touch _everything_ stored on their systems year after year.

Philip
0
 

Author Comment

by:BeGentleWithMe-INeedHelp
ID: 39831550
Great to hear things are working!  And yes, I love ShadowProtect also.  The examples you give are for restores of the whole hard drive.

I am not saying there's a flaw in SP!  I'm just throwing out this idea, asking if I am mistaken in thinking there's a conceivable situation, as tiny as it may be: in some situations, you may not be able to recover a file if, months after it was deleted, people realize it's missing / corrupt.  Again, not because of a flaw in SP or anything like that, just that by the nature of the daily, weekly, monthly rollups / consolidations, you lose - I guess the word to use here is granularity.  Within a week or two of a problem, you can restore down to a 15-minute window (assuming that's how often you take images).  But weeks later, you can only restore down to a certain day.  Farther out, you can only restore a file from the weekly rollups.  And then after a longer time, you can only restore down to a monthly image, UNLESS you are saving all the individual snapshots, which over time takes up lots of space.
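That granularity decay can be written out as a tiny lookup. The tier boundaries here are made-up assumptions for illustration, not ShadowProtect's actual retention defaults:

```python
def recovery_granularity(age_days):
    """Finest restore window available for a backup of a given age.
    Hypothetical retention tiers - real defaults vary per site."""
    if age_days <= 14:
        return "15 minutes"   # raw incrementals still retained
    if age_days <= 60:
        return "1 day"        # only daily consolidated images left
    if age_days <= 180:
        return "1 week"       # only weekly rollups left
    return "1 month"          # only monthly rollups survive

for age in (1, 30, 90, 365):
    print(age, recovery_granularity(age))
```

Any file whose entire lifetime fits inside the current granularity window (e.g. created and deleted within the same month, once only monthlies remain) can fall through the cracks.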

If you don't mind, just a simple yes or no to the idea that, while I readily admit it's (very) unlikely, it is in the realm of possibility?  Or am I wrong?

And if it is possible, how do you minimize that possibility, or do you just acknowledge the limits of the idea of backups?
0
 
LVL 38

Expert Comment

by:Philip Elder
ID: 39831630
Stick with SAS and hardware RAID, and chances are virtually nil.

Bits are bits. They are either there or not.

Zeroes and Ones. ;)

Our hosting partners deal with Petabytes and more. No bits lost there in the years we've been dealing with them.

The longest business relationship we have is about 14 years now. Other than the 24 files listed above not one bit has been missed. :)

That spans IDE, SATA, and now SAS on hardware RAID  (was 5 now 6).

Philip
0
 
LVL 20

Assisted Solution

by:SelfGovern
SelfGovern earned 250 total points
ID: 39832120
Yes, it is certainly possible for a file to be created and deleted and then not appear on a later weekly or monthly backup set.

Weird things can happen when a file is renamed, also.

So explain your backup methodology to your clients as hypercat outlines.  If that's not good enough for the client, you can increase the robustness of that client's backups, at an increased cost.  It is possible, for instance, to have a journaling file system that keeps track of all changes.  But it's not possible to have an inexpensive journaling file system.
0
 
LVL 38

Expert Comment

by:Philip Elder
ID: 39832577
This is how we structure the backup regimen:
 + Volume Shadow Copy snapshots at 0737, 1037, 1237, 1537, 1737, and 1937
 + ShadowProtect snapshots at 0837, 1137, 1337, 1637, 1837, and 2037

Between those two layers we will catch most everything.
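The staggered schedule above can be checked with a quick calculation: interleaving the VSS and ShadowProtect times keeps consecutive recovery points about an hour apart through most of the workday (with a couple of two-hour gaps). A small sketch, using the example times from the comment:

```python
# Example schedule from the comment: VSS and ShadowProtect snapshots
# interleaved at :37 past alternating hours.
vss = ["07:37", "10:37", "12:37", "15:37", "17:37", "19:37"]
sp  = ["08:37", "11:37", "13:37", "16:37", "18:37", "20:37"]

def minutes(t):
    """Convert 'HH:MM' to minutes since midnight."""
    h, m = map(int, t.split(":"))
    return h * 60 + m

merged = sorted(minutes(t) for t in vss + sp)
gaps = [b - a for a, b in zip(merged, merged[1:])]
print(min(gaps), max(gaps))  # -> 60 120 (most gaps are one hour)
```

So the two layers together roughly double the number of intraday recovery points versus either layer alone.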

There is no accounting for the user element. Yet, we have managed to keep everything that requires keeping.

And this after users get click happy and delete client files, nest folders elsewhere, and so on.

The Previous Versions tab is a user's best friend. :)

Philip
0
 

Author Comment

by:BeGentleWithMe-INeedHelp
ID: 39832632
Philip - maybe it's me.  

You are a legend in my mind - 3rd tier, your lengthy blogs, MVP, etc.  I really look to how you do things. I really just wanted to get acknowledgement from you that my feeble mind figured something out. Or that I am wrong - that's fine too.

But I'm not feeling I'm getting a yea or nea (sp?) from you on this.

I'm not questioning how often you are doing the snapshots

(by the way - any significance for you for the number 37 : )  ??

I'm not questioning RAID vs. JBOD, SATA vs. SAS, etc.

That last reply 'we will catch MOST everything'.  Most isn't all.  I'm NOT trying to rag on a company, procedure, process, etc... Just wanted to know if this thing that dawned on me is accurate.

As the other guys said - yeah, make a file at x:39 and delete it by x+1:33 and it won't be recoverable.  But 'worse' - even with the S/F/G or all that other stuff, if you toss the incremental snapshots, there's a chance down the road that a file that existed for days or weeks WILL NOT be on the consolidated files and won't be recoverable.  And if you acknowledge that, is there anything you do to try to prevent that (seems to me that keeping all the incrementals forever is the only way) or try to explain that to the client.

Not sure if I wasn't clear or why you didn't give a 'you are right' or 'you are wrong' in my thinking.
0
 
LVL 38

Expert Comment

by:Philip Elder
ID: 39832680
Okay, straight up:

No. No data loss. That's 15 years experience speaking.

The 37 is due to folks tending to do things on the hour in groups. The above was an example. In some cases we run SP at :17 and VSC at :37.

Focus on the forest not the tree's branches.

Philip
0