marrowyung asked:
Protecting an Oracle EBS database

Hi,

How do we usually protect an Oracle EBS database from disaster? Is it still the usual approach: Oracle RAC plus Data Guard?
ASKER CERTIFIED SOLUTION by Mark Geerlings (United States) [available to Experts Exchange members only]
marrowyung (ASKER):
Hi,

Thanks.

"using DataPump Export for a full database export is usually not practical."

Why? Is the database too big? So what should we do instead: just use RMAN?

Does the Oracle EBS database just sit in a regular Oracle database, i.e. is EBS only the name of the application?

"it is very important to have good backups of the EBS application software regularly."

How do we back it up?
SOLUTION [available to Experts Exchange members only]
"or DataGuard."

For another copy at a DR site? Data Pump export/import would be too slow for that, then.

By the way, if the data at the primary site gets mixed up, Data Guard will replicate all the bad data to the DR site anyway, so it is still not very good, right?

Why does an Oracle E-Business Suite database always have such a big size? What does it usually store to make it that big?
SOLUTION [available to Experts Exchange members only]
"for an EBS database, that will not help you do a DataPump export.  And yes, if your primary site gets data corruption, then that same corruption will usually be copied to your DataGuard site, unless the problem is only in the physical disks at the primary site."

Then, in the Oracle world, how can we solve that?

"Because that is the way that Oracle designs it.  It contains over 21,000 tables, 25,000 views and 41,000 PL/SQL packages.  And that is just following an initial install of EBS with no data in it yet for your organization."

Haha, so when we buy that, we have to buy the Oracle hardware too?

So they sell hardware and earn a lot because of this?
"in Oracle world, how can we solve [the problem of data corruption]?"
1. Enable the two optional init parameters: "db_block_checking" and "db_block_checksum".  These will protect you from most physical block corruptions.
2. Set up a logical DataGuard instance instead of (or in addition to) a physical DataGuard instance.  The logical instance will *NOT* contain any physical block corruption that the primary instance may have.
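As a sketch, both parameters can be set dynamically from SQL*Plus. The parameter names are real Oracle init parameters; the chosen values (FULL checksums, MEDIUM checking) are one reasonable trade-off between protection and CPU overhead, not the only valid choice:

```shell
# Command sketch: enable Oracle's block-consistency checks as SYSDBA.
# Assumes OS authentication on the database server; adjust to your setup.
sqlplus / as sysdba <<'EOF'
-- Compute and verify a checksum for every block (FULL also verifies in memory):
ALTER SYSTEM SET db_block_checksum = FULL SCOPE=BOTH;
-- Logically check block contents after changes (MEDIUM covers most objects):
ALTER SYSTEM SET db_block_checking = MEDIUM SCOPE=BOTH;
-- Confirm the current settings:
SHOW PARAMETER db_block_check
EOF
```

SCOPE=BOTH applies the change to the running instance and persists it in the spfile, so it survives a restart.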

If you buy Oracle's EBS software, yes you will have to plan to spend a lot of money on server and storage hardware.  You may choose to buy Oracle servers for this, but you don't have to.  Yes, Oracle will be happy to take your money for hardware as well as for software.
"1. Enable the two optional init parameters: "db_block_checking" and "db_block_checksum".  These will protect you from most physical block corruptions."

Sorry, could you briefly explain what these two are for?

"2. Set up a logical DataGuard instance instead of (or in addition to) a physical DataGuard instance.  The logical instance will *NOT* contain any physical block corruption that the primary instance may have."

Good! But there must be pros and cons between logical and physical Data Guard; what are they, at a high level?
These are two different data-protection mechanisms that Oracle offers.  You would have to read Oracle's documentation on these to get the exact details.  But basically they are a way of calculating a "checksum" of the bytes that should be written to a database block, then reading the block back from disk and calculating a "checksum" again.  Those checksums should match.  If they don't, the block wasn't written correctly, or a physical problem or corruption in the disk is indicated.
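The write-then-verify idea described above can be demonstrated with ordinary shell tools. This is only a toy analogy using `cksum`, not Oracle code: checksum the bytes at "write" time, recompute at "read" time, and flag any mismatch as corruption.

```shell
#!/bin/sh
# Toy analogy for db_block_checksum using plain POSIX tools.
set -e
BLOCK=$(mktemp)
printf 'hello block data' > "$BLOCK"

# "Write" time: record a checksum of the block's bytes.
SUM_AT_WRITE=$(cksum < "$BLOCK" | awk '{print $1}')

# "Read" time: recompute and compare; matching sums mean the block is intact.
SUM_AT_READ=$(cksum < "$BLOCK" | awk '{print $1}')
[ "$SUM_AT_WRITE" = "$SUM_AT_READ" ] && echo "block OK"

# Simulate corruption: change one byte, and the checksum no longer matches.
printf 'hellO block data' > "$BLOCK"
SUM_CORRUPT=$(cksum < "$BLOCK" | awk '{print $1}')
[ "$SUM_AT_WRITE" != "$SUM_CORRUPT" ] && echo "corruption detected"
rm -f "$BLOCK"
```

Oracle does the equivalent inside each 8 KB block header, which is why a bad disk write is caught the next time the block is read.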

Yes, there are pros and cons to both physical and logical Oracle standby databases.  A physical dataguard (or standby) database is simple in concept: every block of information in the source database gets copied to the physical standby (and any errors in the blocks usually also get copied).  A logical standby database is different.  This usually starts with a physical copy, but then instead of applying all block changes, only the SQL commands are copied to the standby, then those same SQL commands get applied to the standby.  This means that block corruption from the primary is *NOT* copied to the standby.  Also, this allows the standby to have additional indexes that can be helpful for reporting, if some of your reports can be directed to use the standby instead of the primary.
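As a command sketch, opening a physical standby read-only for reporting uses standard statements like the following. This assumes a mounted physical standby with managed recovery running; note that keeping redo apply running while the database is open read-only requires the Active Data Guard licence.

```shell
# Command sketch: run on the STANDBY database as SYSDBA.
sqlplus / as sysdba <<'EOF'
-- Pause redo apply, then open the standby read-only for reporting:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
-- Later, resume redo apply in the background (with Active Data Guard,
-- the database can stay open read-only while redo apply continues):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
EOF
```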
"This means that block corruption from the primary is *NOT* copied to the standby."

But if a physical block is already damaged when the initial physical copy is made, it still gets copied to the standby, right?

"Also, this allows the standby to have additional indexes that can be helpful for reporting, if some of your reports can be directed to use the standby instead of the primary."

That sounds like Active Data Guard? Is it that Data Guard means a physical standby and Active Data Guard means a logical standby?
"in Oracle world, how can we solve [the problem of data corruption]?"
1. Enable the two optional init parameters: "db_block_checking" and "db_block_checksum".  These will protect you from most physical block corruptions.
2. Set up a logical DataGuard instance instead of (or in addition to) a physical DataGuard instance.  The logical instance will *NOT* contain any physical block corruption that the primary instance may have.

Can we do either one, or both?


"And yes, if your primary site gets data corruption, then that same corruption will usually be copied to your DataGuard site, unless the problem is only in the physical disks at the primary site."

Sorry, why "unless the problem is only in the physical disks"? Do you mean that at the logical level, the tables have no problem?
SOLUTION [available to Experts Exchange members only]
So you agree that for option 2, the first copy can already be corrupted.

Is there any way for us to verify that the Oracle DB is healthy before the copy?

"Either a physical or logical standby can also be open for read-only access to allow queries to run from the standby, so either one can be an "Active DataGuard" instance."

So the only difference is whether SQL statements are re-applied or blocks keep being copied? I don't see any other difference.
"so you agree that for 2 [a logical standby] the first copy can already [be] corrupted."
Yes, if you have just your primary Oracle database with no standby set up yet, your primary database could have data or block corruption.  That is always a possibility.  Is it likely?  Probably not, but it is always possible.

"[Is there] any way for us to verify [that an] oracle DB is healthy before copy?"
Yes, there are at least three options:
1. Run the dbverify utility that Oracle includes as part of every Oracle database.
2. You can also do a full RMAN backup and a full RMAN restore to a different server.  If that succeeds, that indicates your primary database has no corruption.
3. You can run a full DataPump export and a DataPump import into a different server.  If those both succeed, that also indicates your primary database has no corruption.
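Hedged command sketches of the three checks follow. The file path, block size, and Data Pump directory/dump names are placeholders for your environment; for option 2, RMAN's lighter-weight `BACKUP VALIDATE` is also shown, since it checks every block without actually writing a backup (the full backup-and-restore described above remains the stronger end-to-end test).

```shell
# 1. dbverify on one datafile (repeat for every datafile in the database):
dbv file=/u01/oradata/EBSDB/system01.dbf blocksize=8192

# 2. RMAN: scan all blocks for corruption without producing backup pieces:
rman target / <<'EOF'
BACKUP VALIDATE CHECK LOGICAL DATABASE;
EOF

# 3. Full DataPump export (then import it into another server to finish
#    the test); expdp prompts for the password when none is given:
expdp system full=y directory=DATA_PUMP_DIR dumpfile=full.dmp logfile=full.log
```

After the RMAN validate, corrupt blocks (if any) are listed in the `V$DATABASE_BLOCK_CORRUPTION` view.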
"1. Run the dbverify utility that Oracle includes as part of every Oracle database."

Can we launch this tool from SQL Developer? How long does it usually take? Much shorter than options 2 and 3?

"2. You can also do a full RMAN backup and a full RMAN restore to a different server.  If that succeeds, that indicates your primary database has no corruption."

That seems to take much longer than option 1?

"3. You can run a full DataPump export and a DataPump import into a different server.  If those both succeed, that also indicates your primary database has no corruption."

And that takes even longer than option 2, right?
1. From the Oracle 12c documentation: "DBVERIFY is an external command-line utility that performs a physical data structure integrity check."  See: https://docs.oracle.com/database/121/SUTIL/GUID-05C9875B-1AC1-433D-A71E-B7C0667EE1F1.htm#SUTIL013

No, you do not run this in SQL Developer.  This is a command-line utility that you run from the O/S on the database server.  How long does it take?  That will depend on the size of your database data files.  You need to run dbverify for each datafile in your database.  Normally, this will take less time than an RMAN backup and it may be faster than a DataPump export.

2. Yes, a full RMAN backup will normally take longer than the time required to run dbverify on each datafile.

3. No, a DataPump export will usually be faster than an RMAN backup, and the resulting "dump" file will usually be much smaller than the size of a full RMAN backup.  That is because the export "dump" file does not include any space for deleted records, or any of the free space that database blocks usually contain.  Also, for indexes, the "dump" file only includes the SQL commands necessary to rebuild the indexes.  The "dump" file does *NOT* contain all of the space used by existing indexes.
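Since dbverify must be run per datafile, a small script can drive it from the data dictionary so that no file is missed by hand. This is a sketch assuming OS authentication on the database server; the block size and temp-file path are placeholders:

```shell
# Command sketch: list every datafile from V$DATAFILE, then run dbv on each.
sqlplus -s / as sysdba <<'EOF' > /tmp/datafiles.txt
SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SELECT name FROM v$datafile;
EOF

while read -r f; do
  echo "== Checking $f =="
  dbv file="$f" blocksize=8192
done < /tmp/datafiles.txt
```

Driving the loop from `V$DATAFILE` removes the "forgot one file" risk of typing each dbv command manually.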
"That will depend on the size of your database data files.  You need to run dbverify for each datafile in your database."

Sure, then it can take a long, long time, just like MS SQL's DBCC CHECKDB! Thanks, that makes sense.

But I can't believe it needs to be run on EACH data file! It would be easy to miss one by mistake!

Thanks.