Solved

Is online backup a good choice?

Posted on 2000-02-15
Last Modified: 2013-12-06
Leaving aside how much system resource online backup consumes, and the slow system response that causes, I was wondering how we can ensure that an online backup is safe in terms of data consistency and integrity. Assuming one backup takes about 2 hours to complete, and the Oracle data being backed up must be in sync, how can we ensure during this 2-hour period that the backed-up Oracle data is consistent with itself?
Question by:kslzzg
12 Comments
 
LVL 20

Expert Comment

by:tfewster
ID: 2522592
If you are using the Oracle backup tools, you won't have a problem. When the backup starts, the database instance is locked to ensure consistency and all data changes are written only to the redo logs. The backup then backs up the redo logs as well. Finally, the instance is unlocked and the changes in the redo logs are applied.
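For reference, the Oracle side of such a backup is typically driven per tablespace using hot-backup mode (`ALTER TABLESPACE ... BEGIN/END BACKUP`). A minimal sketch only; the tablespace name, data file path, and backup directory below are made up, and `sqlplus` is assumed to be on the PATH:

```shell
# Hypothetical sketch of Oracle hot-backup mode for one tablespace.
# Tablespace/file names are illustrative placeholders.
hot_backup() {
    # Put the tablespace into backup mode; changes go to the redo logs
    sqlplus -s "/ as sysdba" <<'EOF'
ALTER TABLESPACE users BEGIN BACKUP;
EOF
    # Copy the data files while they are in a backup-consistent state
    cp /u01/oradata/users01.dbf /backup/
    # Take the tablespace out of backup mode again
    sqlplus -s "/ as sysdba" <<'EOF'
ALTER TABLESPACE users END BACKUP;
EOF
}
```

The redo logs generated during the copy must also be backed up, since they are needed to make the copied files consistent on restore.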

 
 
LVL 14

Expert Comment

by:chris_calabrese
ID: 2523196
If you're running the VxFS filesystem (available for most commercial Unix flavors)  you might also be able to create a "snapshot" of the real filesystem and back up the snapshot.  This way the database doesn't have to be locked during the backup.
 

Author Comment

by:kslzzg
ID: 2525381
I am running the HFS filesystem on HP-UX 10.20 and using HP OpenView OmniBack II A.02.55 to back up all the filesystems containing Oracle data. In this case, do you still think online backup is applicable? What kind of Oracle backup tools are available on HP-UX 10.20? I saw some kind of raw disk backup, but I cannot understand what that means. Does it mean that Oracle data can just reside on raw disks, which don't need to be initialised before use? Another point I can't understand: if the database instances are locked when the online backup starts, will we still be able to access Oracle data at that moment? If we have to wait a long time until they are unlocked, that means users will suffer very poor response, right?
 
LVL 20

Expert Comment

by:tfewster
ID: 2526669

>  using HP OpenView OmniBack II A.02.55 to backup all the filesystems containing Oracle data.  In this case, do you still think the online backup is applicable?

I doubt that OmniBack is "Oracle aware", so the chances are your backups are not valid, i.e. data files will be inconsistent with each other because they are backed up at slightly different times. I hope I'm wrong on this, but I would only use OmniBack when the database was shut down.

> What kind of Oracle backup tools are available on HP-UX 10.20?

I don't know, I'm not an Oracle DBA - I assume there are some basic online backup utilities plus you can buy additional packages to help you manage your backups. However, the O/S will not be an issue.

> Does it mean that Oracle data can just reside on those raw disks which don't need to be initialised before using them?

Yes, Oracle can handle raw disk without using HP-UX filesystems. This is meant to improve performance, but it complicates matters like backups.

> if the database instances are locked when online backup starts, will we still be able to access Oracle data at the moment?

The instance is locked to writes (which are just sent to the redo logs), but you can read from it. In practice, the users should notice no difference, though the system may be slower because of heavy disk access.

To summarise, I suspect your backups are unsafe - But I don't have the knowledge to tell you how to put it right! Try the Oracle topic on EE...
 
LVL 14

Expert Comment

by:chris_calabrese
ID: 2527155
The way you have things set up now, the data from the database is going to be corrupted if there are any writes to the database during the backup window, and you'll not be able to restore your backups.

OmniBack II is not Oracle aware by default, but there is an add-on package for it that makes it Oracle aware.  This works by locking the database to writes during the backup window.

It's possible to do backups of raw-disk partitions, but you'll need "pre-exec" and "post-exec" scripts to lock the database to writes during the backup window (actually, given the journal and query optimizer activity, that might not even be enough unless the journal is on a separate disk that you don't back up).

The only other option is to convert to VxFS and use pre-exec and post-exec scripts to create snapshots and then back up the snapshots.
 
LVL 20

Expert Comment

by:tfewster
ID: 2528259
chris, you obviously have more Oracle knowledge than I do and your comment was more constructive than mine - feel free to make it an answer.

However, may I comment on your comment?

> the data from the database is going to be corrupted if there are any writes to the database during the backup window

I know you don't mean the data on the disk, but that's how it reads...and it's a bit alarming! (Although knowing all your backups are corrupt is bad enough)

> (Omniback) works by locking the database to writes during the backups window.

Does this mean attempted writes will fail, or does it work as I suggested at the start, the writes are made to the journal and then applied to the database later?

 
LVL 14

Accepted Solution

by:chris_calabrese (earned 30 total points)
ID: 2528783
I haven't dealt with Oracle backups too much, but I've done database backups in general (including using OmniBack), and they all work the same at this level (with the exception of Interbase, which does its locking vastly differently from other database systems).

>> the data from the database is going to be corrupted if there are any writes to the database during the backup window

>  I know you don't mean the data on the disk, but that's how it reads...and it's a bit alarming! (Although knowing all your backups are corrupt is bad enough)

You're right, I meant the data on the backups.   The data on the disk will still be OK.

>> (Omniback) works by locking the database to writes during the backups window.

> Does this mean attempted writes will fail, or does it work as I suggested at the start, the writes are made to the journal and then applied to the database later?

It depends.

Generally, other processes will write to the journal until it fills, at which point they will hang waiting for more journal space.  They'll also hang waiting for their transactions to complete when they do a commit.

However, with so many processes waiting around, there's a high likelihood that some of them will block each other.  For example, if A reads part of the database and then writes to the journal (but can't commit), and B then comes along wanting to read/write the same record(s), B will hang until A commits, which won't happen until after the backups are done.  I'm pretty sure the likelihood of deadlocks is also increased, but I can't think of why right now.
 

Author Comment

by:kslzzg
ID: 2529623
Since there is a high likelihood that users' transactions may hang, possibly in deadlock, until the backups are done, while at the same time the data on the backup is under threat of corruption and performance suffers, do you still recommend online backup in this case?  Have you ever done it before?  How was the outcome, and were you able to restore the data?  One more question, just out of curiosity: is there any such thing as online restore, since there is online backup?

As for the pre-exec and post-exec scripts to do snapshots on VxFS, can you elaborate on how to write these scripts?  I am new to this.  Can you also tell me where the journal is, what it means, and how it works?  Does the journal belong to the Oracle part, the backup part, or the OS part?  Or is it one of the concepts within VxFS?


 
LVL 14

Expert Comment

by:chris_calabrese
ID: 2531339
1.  No, I do not recommend online backup.  Instead I recommend the VxFS snapshot method where you're backing up a "snapshot" of an off-line database, but the database can continue running.

2.  If you do online backup, you can only restore the data successfully if the database was locked during the backup (the semi-online approach).

3.  No, there's no such thing as online restore.  If you restore a database, there's an assumption that something really bad has happened and you need to get back to a known good database state.  This is something like a total meltdown of the hardware (like after a fire or earthquake) or a total meltdown of the software resulting in serious corruption of the data.  Database backups should _NOT_ be relied upon to deal with minor hardware issues such as losing a single disk.  Use RAID technology to deal with that problem.  Similarly, they should not be relied upon to deal with minor software issues such as the OS crashing.  The database journal system should deal with that by itself.

4.  VxFS snapshots work by creating a new mount point that is a "frozen" copy of an existing filesystem.  You'll need to convert from HFS to VxFS to make use of this feature.  To integrate this with Oracle and OmniBack, you typically do something like this:
  a.  Write a script that locks the database (by calling some Oracle stuff), creates the snapshot (see the mount_vxfs man page, specifically the snapof= option), and then unlocks the database.
  b.  Write another script that unmounts the snapshot.
  c.  Set up the OmniBack datalists to back up the snapshot filesystem, calling the script from 4.a as the pre-exec script and the script from 4.b as the post-exec script (easy to do in the GUI, or -pre-exec and -post-exec in the config files).
  d.  Test each step independently, then test the scripts, then test it with OmniBack.
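Steps 4.a and 4.b might be sketched like this.  It is only a sketch: the device names, mount points, and Oracle lock scripts below are all placeholders, and the snapof= usage follows the mount_vxfs man page:

```shell
# Hypothetical pre-exec: lock Oracle, mount a VxFS snapshot, unlock.
# All device and mount-point names are illustrative placeholders.
pre_exec() {
    # 1. Put the database in backup mode (Oracle-specific; ask your DBA)
    su - oracle -c 'sqlplus -s "/ as sysdba" @begin_backup.sql'
    # 2. Snapshot the live filesystem (mount_vxfs snapof= option)
    mount -F vxfs -o snapof=/oradata /dev/vg01/snaplv /snap
    # 3. Release the database; writes resume while OmniBack reads /snap
    su - oracle -c 'sqlplus -s "/ as sysdba" @end_backup.sql'
}

# Hypothetical post-exec: discard the snapshot once the backup is done.
post_exec() {
    umount /snap
}
```

OmniBack would then be pointed at /snap, with these two scripts wired in as the pre-exec and post-exec for that datalist.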

5.  The journal is a database thing not related to the OS or VxFS.  It is used to store changes to the database before the transaction is committed.  From a backups standpoint, the important thing to remember is that the journal should be on a different filesystem/partition than the actual data.  Actually, from a performance standpoint, it should be on different physical disks than the actual data.
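A layout along these lines (paths and volume names invented) keeps the journal off the data disks; this is a config-style sketch, not real output:

```shell
# Hypothetical volume layout: journal/redo on separate disks from data
# /dev/vg01/lvdata   mounted on /u01/oradata   # data files (backed up)
# /dev/vg02/lvredo   mounted on /u02/oraredo   # redo logs  (separate disks)
```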
 

Author Comment

by:kslzzg
ID: 2533397
When I installed OmniBack, I saw some database-related filesets within the OmniBack bundles, such as the SAP R/3, Oracle, Sybase, and Informix integration packages.  May I know what they do and how they work?
 
LVL 2

Expert Comment

by:GP1628
ID: 2555111
I know this may be a short-lived answer, but RIGHT NOW the price of hard drives is way low - usually lower than any backup hardware I have looked at.

We looked at improving our backup procedure.....
For us, we found it simpler to get another drive and do a mirror. Then backup the mirror. It was fast, easier to verify at least the mirror, enabled us to return the work-drive to service quickly, and gave us an exact match drive that we could slip into place if drive one crashed.

We found that a mirror drive was easier and more secure an answer than either trying to add backup capability to each machine, or trying to network the backups. That may have just been our situation though.

Gandalf
 
LVL 14

Expert Comment

by:chris_calabrese
ID: 2555494
Yes, there are definitely interesting things you can do with drive mirroring.  In particular, for an on-line transactional database, RAID in general is clearly a necessity to keep yourself on the air in the event of disk failures, etc.

However, it's not very good for archival storage (several generations' worth of old copies of the database), which is usually required for robust recovery in the face of true disasters (both environmental and software) and also to satisfy legal requirements.  The typical archival schedule is dailies for a week, monthlies for a year, and yearlies for a minimum of seven years.

On this schedule, a one-month old machine would have around 8 generations of backups, a one-year old machine would have around 19 generations of backups, and a 10 year old machine would have around 26 generations of backups.  I'll assume 20 generations as the average for most machines (giving an average age of a little over a year).

On a 100gig system (a typical size for a database server), it would take between 3 and 10 disks to back up the system using pure mirroring, depending on disk size (bigger disks are cheaper per byte, but more expensive per disk and slower).  At, say, $1k apiece for 9gig disks or $2k for 36gig (we're not getting 5,000 RPM IDE drives, you know), that's between around 3 * 2k * 20 = $120,000 and 10 * 1k * 20 = $200,000 for the backup pool.  It's also 60-200 very delicate disks that have to be carefully shipped and stored off site.

With tape, you can easily fit 100gig on two to three DLT VII's, which run about $75 each.  3 * 75 * 20 = $4,500 for the backup pool, and 40-60 fairly indestructible tapes that can be hauled off and stored off site in inexpensive rigid plastic cartons.
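The arithmetic is easy to check against the figures above (20 generations, 3-10 disks at $2k/$1k each, 2-3 tapes at $75 each):

```shell
# Check the backup-pool cost figures from the comment above
generations=20
disk_low=$((3 * 2000 * generations))    # 3 x 36gig disks at $2k each
disk_high=$((10 * 1000 * generations))  # 10 x 9gig disks at $1k each
tape=$((3 * 75 * generations))          # 3 DLT tapes at $75 each
echo "disk pool: \$$disk_low-\$$disk_high, tape pool: \$$tape"
# prints: disk pool: $120000-$200000, tape pool: $4500
```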

One possible way of leveraging disk mirrors is to have one set of disks that's placed in the mirror normally, then break the mirror and back up off the "spare" disks to tape.  This is conceptually similar to the VxFS snapshot mechanism I suggested, but doing the "snapshot" at the disk-driver layer rather than the filesystem layer.  Note that you'll actually need three disks in each mirror set, since you still want mirrored disks in operation during the backup in case there's a disk failure during the backup window.
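On HP-UX with mirrored LVM volumes, the break-the-mirror approach might look roughly like this.  A sketch under stated assumptions: lvsplit/lvmerge come with the LVM mirroring product, the volume and mount names are placeholders, and the exact options should be checked against the lvsplit(1M) and lvmerge(1M) man pages:

```shell
# Hypothetical sketch of breaking a mirror copy out for backup.
# Volume/mount names are placeholders; verify options in the man pages.
break_and_backup() {
    # Split one copy out of the (three-way) mirror; writes continue on the rest
    lvsplit /dev/vg01/oralv
    # The split copy may need a clean-up before mounting (raw path is a placeholder)
    fsck -F vxfs /dev/vg01/roralvb
    mount /dev/vg01/oralvb /backup/ora
    # ...run the tape backup against /backup/ora here...
    umount /backup/ora
    # Merge the copy back in; LVM resyncs it with the live mirror
    lvmerge /dev/vg01/oralvb /dev/vg01/oralv
}
```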