Line One

asked on

Windows Server Backup full backup

We notice that even though we have chosen 'fast' in the performance settings for Windows Server Backup (Windows Server 2012 R2), so that it does incremental backups, it will still do a full backup at various points in time.  When this happens, our ability to restore to an earlier point in time seems to change: if before the unexpected full backup we could restore to points from two weeks ago, after the new full backup we might only be able to restore to points within the last week.  How does Windows Server Backup do this? Does it somehow merge the difference files up to last week with the base image to create a new base image, then adjust the unmerged difference files so that they are differences from the new base image, leaving us with restore points only from the last week?
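For illustration, here is a toy sketch of the merge the question hypothesizes, assuming chained block-level differences (each diff relative to the state just before it); all names and structures here are invented, not Microsoft's documented mechanism. With chained diffs, folding the oldest ones into a new base leaves the surviving diffs valid as-is; cumulative diffs taken against the base would indeed need rewriting, which is the manipulation the question asks about.

```python
# Toy model of "merge the oldest differences into a new base image".
# Illustrative only - not Microsoft's documented VSS/VHD mechanism.

def apply_diff(image, diff):
    """Return a new image with the diff's changed blocks applied."""
    merged = dict(image)
    merged.update(diff)
    return merged

def restore_point(base, diffs, i):
    """Reconstruct the state at restore point i (base + diffs 0..i)."""
    image = dict(base)
    for d in diffs[: i + 1]:
        image = apply_diff(image, d)
    return image

def consolidate(base, diffs, keep):
    """Fold the oldest diffs into a new base, keeping only `keep` diffs.
    Chained diffs stay valid unchanged, because each one is relative
    to whatever state precedes it."""
    fold = len(diffs) - keep
    new_base = dict(base)
    for d in diffs[:fold]:
        new_base = apply_diff(new_base, d)
    return new_base, diffs[fold:]

base = {1: "a0", 2: "b0", 3: "c0"}          # initial full backup
diffs = [{2: "b1"}, {3: "c1"}, {1: "a1"}]   # three days of changed blocks

new_base, new_diffs = consolidate(base, diffs, keep=2)
# The oldest restore point is gone, but the surviving ones are identical:
assert restore_point(new_base, new_diffs, 1) == restore_point(base, diffs, 2)
```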
SOLUTION
noxcho
Dr. Klahn

Side note:  Personally I believe "all full backups" is the best policy.  It's tempting to say "one full backup a week, then incrementals for six days," or worse, "one full backup on day 1, then 30 incrementals."  But this policy multiplies the chances (by 6 or 30 times!) of there being a problem with the restore.  If any incremental has a problem, that day and all succeeding days cannot be restored.

With full backups, each day can be restored individually all the way back to the head end of the backup rotation.
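To put rough numbers on the chain-break argument above, here is a sketch assuming an independent per-backup failure probability p; the value of p is invented purely for scale.

```python
# Rough illustration of the chain-break argument, assuming each backup
# file independently has probability p of being unreadable (p is an
# invented number, used only to show the scale of the effect).
p = 0.01

# Full backups: day N is restorable iff that one backup is good.
full_ok = 1 - p

# Old-style chain: day N needs the full plus all N prior incrementals.
def chain_ok(n_increments, p=p):
    return (1 - p) ** (n_increments + 1)

print(f"single full backup restorable: {full_ok:.4f}")   # 0.9900
print(f"full + 6 incrementals:         {chain_ok(6):.4f}")   # ~0.9321
print(f"full + 30 incrementals:        {chain_ok(30):.4f}")  # ~0.7323
```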

Big disks are cheap today, but lost data is always expensive.  There's no excuse for doing incremental backups when a 3 TB drive only costs $200.  It'll cost more than that in administrator time alone to restore a full backup plus six incrementals, not counting the two days the system will be down while doing restores.  Stuff four 3 TB drives into the system and JBOD them.  Then you've got a 12 TB volume and can probably do a month of full backups to it, especially with compression enabled.
@Dr. Klahn - I understand those who have a long-standing aversion to any form of incremental backup. With older, simpler backup technology, you are correct: one bad incremental breaks the chain at that point, and you're not safe until your next full backup. That is understandably something to avoid.

Fortunately, technology has progressed. Modern incremental backups are not like the old ones, and the old way of thinking about them need not apply. You still have to be careful that the technology you choose has the right next-generation incremental capabilities. With the better ones, an incremental backup cannot be "bad." It is not simply created by blindly writing changed data to a drive and hoping it is OK. Unlike the old tape systems that had to run through the whole tape to verify a backup, the new systems verify on the fly, so they cannot create a bad backup.

Ah, but what if the storage medium is bad or goes bad? Will that "break the chain"? With backup technology that verifies as it backs up, you don't need to worry about good data written to bad media. The system detects that and avoids the bad sectors, in effect doing ongoing test restores.

Data rot - losing data to media that goes bad later - is a real risk with traditional technology. But now systems can find and repair data rot automatically with very little overhead compared to old verification techniques.
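As a sketch of the verify-as-you-write and scrubbing ideas above: per-block checksums let a system detect corruption both at write time and later. Real products implement this at the storage layer; this toy only shows the shape of the technique.

```python
# Minimal sketch of checksum-based verification and scrubbing,
# using per-block SHA-256 digests. Illustrative only.
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def write_backup(blocks):
    """Store each block alongside its checksum. A real system would
    re-read each block from the medium here to verify on the fly."""
    return [(block, checksum(block)) for block in blocks]

def scrub(store):
    """Periodic scrub: re-hash every block to catch later data rot."""
    return [i for i, (block, digest) in enumerate(store)
            if checksum(block) != digest]

store = write_backup([b"block-one", b"block-two"])
store[1] = (b"bit-rotted!", store[1][1])   # simulate silent corruption
print(scrub(store))                        # -> [1]
```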

Just as we have clients who will never trust uploading their confidential data "to the Internet" despite what anyone says about encryption technology, I am sure there are those who will always object to any form of incremental backup. They may have experienced first-hand or with their clients the pain of loss resulting from failed incremental backups. But exclusive dedication to full backups has consequences.

Here's an example. One of our clients has 160 GB of data on their server. They add about 0.5 GB per day. It is backed up incrementally and continuously to our cloud. After one year, that works out to 340 GB (not counting reductions by deduplication and compression, variables we'll ignore here).

Let's say they insisted on daily full backups. That's 91 terabytes of backups on 31 external hard drives, costing $516 per month! (160 GB on day one, 340 GB on day 365; average 250 GB/day * 365 days = 91 TB.) Sure, you can change the numbers, reuse drives, etc. But the new technology gives you continuous backups, the best remedy against one of the biggest threats: ransomware. It is better, faster, and cheaper than daily full backups.
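For anyone who wants to check the figures, here is the arithmetic spelled out; the results land within rounding of the numbers above.

```python
# Reproducing the arithmetic in the example above.
import math

start_gb = 160.0        # data on day one
daily_growth_gb = 0.5   # added per day
days = 365

size_on_last_day = start_gb + daily_growth_gb * days      # ~342.5 GB
avg_daily_full = (start_gb + size_on_last_day) / 2        # ~250 GB
total_fulls_tb = avg_daily_full * days / 1000             # ~91 TB
drives = math.ceil(total_fulls_tb * 1000 / 3000)          # 3 TB drives
monthly_cost = drives * 200 / 12                          # $200 per drive

print(f"data after a year:  {size_on_last_day:.0f} GB")
print(f"daily fulls total:  {total_fulls_tb:.1f} TB on {drives} drives")
print(f"drive cost:         ${monthly_cost:.0f}/month")
```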
Line One

ASKER

Fellows,

Thanks for the good arguments for different points of view.

However, in the case of Windows Server Backup I don't think incremental is the correct word - it is differential. From what I understand, when you first do a backup WSB creates a full backup - that becomes your base image. The next backup is a differential - WSB only writes what is different from the base that day. The backup after that is again a differential against the base image, but now it incorporates two days of changes. If you did a full recovery from it, you would have all the data from day 1 that had been changed but not deleted; if something was deleted, you wouldn't have it, though the first differential would. And so on. So if you lose one of the differentials, you lose some granularity of recovery - the files that were deleted since the previous differentials. If somebody deleted 10 files every day, the differential following those deletions could not recover the deleted files, but it could recover everything else between the base image and the current differential.

Just as WSB is not really straight incremental - 'rolling incremental', as it was described somewhere in my readings - its full backup is not really a 'full backup' in the ordinary sense. I am not sure what it does when it decides to do a full backup - there is some kind of mysterious algorithm at play that MS doesn't seem to detail very much. No matter that you have 'fast' chosen in the Performance settings, it will decide to make a full backup; but if you actually check the size of this 'full backup', it is not really a full backup. If you have a 4 TB USB drive and you do a full backup of 1 TB and then do it again, it's not as if the drive will then hold 2 TB. Windows has done some kind of magic, so it's really a different type of differential than if you had chosen 'fast'. I have yet to have this explained to me by anyone - in particular Microsoft. I find it odd that there is no MS deep dive for WSB - just scattered notes and white papers. There is no TechNet webinar that I have found - 'WSB Behind the Scenes: More Than You Thought You Wanted to Know', two hours of excruciatingly in-depth esoteric knowledge of one of the most critical functions of the OS. I can find that for Replica and a lot of other topics, but not for WSB. (My mouse manual is more detailed.)
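As a toy model of the differential behavior described above (cumulative differences against the base, with deletions recorded as tombstones - an assumption, since WSB's actual on-disk format isn't documented here), the following shows why losing a differential loses only the files that existed solely at that restore point:

```python
# Toy model: each differential is cumulative against the base image
# and records deletions as tombstones. Illustrative only - not WSB's
# actual on-disk format.

def differential(base, current):
    """Changed/new files vs the base, plus tombstones for deletions."""
    changed = {f: v for f, v in current.items() if base.get(f) != v}
    deleted = set(base) - set(current)
    return changed, deleted

def restore(base, diff):
    """Full recovery from the base image plus one differential."""
    changed, deleted = diff
    image = {f: v for f, v in base.items() if f not in deleted}
    image.update(changed)
    return image

base = {"a.txt": 1, "b.txt": 1}
day1 = {"a.txt": 1, "b.txt": 1, "new.txt": 1}   # new.txt created
day2 = {"a.txt": 2}                             # b.txt, new.txt deleted

d1 = differential(base, day1)
d2 = differential(base, day2)

print(restore(base, d1))  # {'a.txt': 1, 'b.txt': 1, 'new.txt': 1}
print(restore(base, d2))  # {'a.txt': 2}
# If d1 is lost, new.txt is unrecoverable: it never existed in the base,
# and the later differential only records its absence. b.txt is still
# safe, because the base image contains it.
```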

If I am wrong in my understanding above, or if anybody can illuminate the algorithm and logistics of any point I have raised, I would be most appreciative.
ASKER CERTIFIED SOLUTION
I was waiting for more responses.  As none are forthcoming, I am awarding points and closing.
Thanks for sticking to the topic.