Pau Lo

asked on

Backup Failures - what goes wrong

Can I ask, regardless of what backup solution you use:

1) Have you ever had backup jobs fail? If so, what caused them to fail, and is it quite common for them to fail, or a once-a-year type scenario?

2) What mechanisms do you have in place to identify a backup job that didn't complete successfully, and what can you do about a failed backup job? Or is it a case of: if it failed, it failed, and there's not much you can do about it?
ASKER CERTIFIED SOLUTION
giltjr

SOLUTION
Pau Lo

ASKER

>> 1) Yes.  Various reasons.  Not enough space left in the backup pool.  Files being locked that should not be locked.  I would say a few times a year we may have jobs fail.


Re files being locked, can you provide an example? I.e. users with files open? Or something else (please keep answers management / low-tech friendly)...
SOLUTION
Pau Lo

ASKER

>>If you regularly change backups i.e. adding and removing sources and destinations then this can lead to failure


Can you elaborate a little on this? Especially what you mean by sources/destinations?
Pau Lo

ASKER

>>i.e. file share backups file in use errors


And can you elaborate on this too?
"Files being locked that should not be locked."  

This is typically on our z/OS system.  Under z/OS when you open a file you can open it "exclusive" use only.  Meaning no other process can open the file while your process has it open.   This is because when you open a file the system has no clue if you are going to open the file for read only or write.  This prevents two process from trying to update the same file at the same time.  

Unfortunately there is no real equivalent of this on most distributed systems.  Which is why you can have two, or more, processes update the same file.
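To make the contrast concrete on a non-mainframe box, here is a rough sketch (Python on Linux, with a made-up file path) of that last point: locks there are only advisory, so nothing stops two writers opening the same file at once.

import fcntl

# Hypothetical file path, purely for illustration.
path = "/tmp/example.dat"

# Nothing stops two opens of the same file for write at the same time
# (two processes are simulated here with two opens in one script).
f1 = open(path, "a")
f2 = open(path, "a")        # second open succeeds - no exclusive use

# Locks do exist, but they are advisory: f1 takes an exclusive lock...
fcntl.flock(f1, fcntl.LOCK_EX)

try:
    # ...and f2 only notices if it also asks for the lock.
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    print("second opener would have to wait for the lock")

# A writer that never takes the lock is not blocked at all.
f2.write("this write succeeds, lock or no lock\n")

f1.close()
f2.close()

Windows file shares are a slightly different story: opens there carry a sharing mode, and an application that opens a file without allowing sharing will make other opens (including a backup agent's) fail with a sharing violation, which is roughly what surfaces as a "file in use" error on file share backups.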
Say you have a backup job that does file server backups from fileserver1.

Changing the source means adding fileserver2 into that backup job.

Changing the destination means writing to different disks, or a different tape drive.

This can lead to errors if it's not been tested fully.
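In rough terms, a job definition has a list of sources and a destination; sketched below as a Python dict with invented names, not any particular product's format.

# Made-up job definitions, only to illustrate "source" and "destination".
original_job = {
    "name": "file_server_backup",
    "sources": [r"\\fileserver1\shares"],    # what gets backed up
    "destination": "tape_library_1",         # where it gets written
    "schedule": "daily 22:00",
}

# Changing the source: fileserver2 is added to the same job.
# Changing the destination: the job now writes to a different disk pool.
modified_job = {
    **original_job,
    "sources": [r"\\fileserver1\shares", r"\\fileserver2\shares"],
    "destination": "disk_pool_2",
}

print(modified_job)

Either change alters how much data the job moves and where it lands, which is why an untested change can push a job past its window or fill the new target and make it fail.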

Some of the less enterprise-level backup solutions can give more issues than others.

Using Backup Exec in previous roles, there was no end of issues that caused multiple failures across multiple jobs.