Database Sync

Environment: 50+ databases running on 11gR2 (mix of RAC & standalone), OS: Solaris 10

Task: We need to refresh almost all of these databases from PROD on a regular basis, in line with different releases, so that developers can develop, test, and verify their code, and QA can later run regression and performance testing as well.

a. What are our best and fastest options for this task? We currently use Data Pump export/import, but is there a better option or practice out there?

b. Because of the size of PROD, a lot of manual work goes into reducing the size of these new replica databases; we neither want nor need that much storage in dev and QA. Please share any better or more expert ideas for handling this in a cleaner way.

c. We also have to encrypt some of the data, and this is also done manually, which is why these sync-ups take so long.

On my last assignment we synced our databases using LVM snapshots and virt-manager, but that was on Linux with virtual machines; it was all a file-system snapshot/copy and replication process, and it was very fast.

Is there anything like LVM snapshots on Solaris? Any ideas or guidance would be highly appreciated.

Please advise on what our best and most appropriate options are.

Thanks a lot in advance for sharing your expert thoughts, suggestions, and advice.
Mushfique Khan (Director, Operations) asked:
MikeOM_DBA commented:
Some ideas:

a.  Best & fastest options:

   Data Pump over a network link
   Transportable tablespaces
   Clone from an RMAN database copy (e.g. BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY ...)
   RMAN DUPLICATE FROM ACTIVE DATABASE (see the sketch below)
   Advanced replication (with/without a logical standby)
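
A minimal sketch of the active-duplicate route, assuming TNS entries exist for both databases and the auxiliary instance is already started NOMOUNT with a matching password file (DEVDB and the paths here are placeholder names, not from the original post):

   $ rman TARGET sys@PROD AUXILIARY sys@DEVDB

   RMAN> DUPLICATE TARGET DATABASE TO DEVDB
           FROM ACTIVE DATABASE
           SPFILE
             SET db_file_name_convert  '/u01/oradata/PROD/','/u02/oradata/DEVDB/'
             SET log_file_name_convert '/u01/oradata/PROD/','/u02/oradata/DEVDB/'
           NOFILENAMECHECK;

This copies the datafiles over the network straight from the open PROD database, so no intermediate dump files or backups need to be staged.
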
b.  Reduce these new replica db sizes ...

   Data Pump with queries (see the parfile sketch below):
      1) create temporary tables with a subset of primary keys and the corresponding FKs,
      2) use these tables to build queries to export the data.
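
A minimal sketch of that approach, assuming a driver table TMP_ORDER_KEYS holding the primary-key subset has already been built (the schema, table, and directory names are placeholders, not from the original post):

   $ cat exp_subset.par
   schemas=APPUSER
   directory=DP_DIR
   dumpfile=appuser_subset.dmp
   logfile=appuser_subset.log
   query=APPUSER.ORDERS:"WHERE order_id IN (SELECT order_id FROM appuser.tmp_order_keys)"
   query=APPUSER.ORDER_ITEMS:"WHERE order_id IN (SELECT order_id FROM appuser.tmp_order_keys)"

   $ expdp system parfile=exp_subset.par

Only the rows matching the subqueries are exported, which keeps the dev/QA copies small.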

c.  Encrypt the data (or mask it?):
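One possible masking sketch, purely as an illustration (the schema, table, and column names are placeholders, and this is not necessarily what was meant here):

   UPDATE appuser.customers
      SET ssn   = LPAD(TRUNC(DBMS_RANDOM.VALUE(0, 1e9)), 9, '0'),
          email = 'user' || customer_id || '@example.com';
   COMMIT;

Scripting updates like this against the refreshed copy could replace some of the manual work on every sync.
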
Mark Wills (Topic Advisor) commented:
From a philosophical perspective, we used to have a purpose-built QA environment designed to test specific database elements gleaned from live experience and supposed worst-case or broken relationships (such as orphan rows, like headers with no detail and vice versa, or customers with no contacts and vice versa).

It was a fairly involved process to begin with and took a fair bit of scripting. But then we produced a series of "known outcomes" as part of a scripted test process and user testing encompassing "using it to work", "breaking it", "robustness", "integrity", "historical relationships" etc etc...

Now because that also reflected the live environment, we ended up using it for DEV by being able to copy across at any stage. The "master" for QA was pretty much locked and only available to a few.

If it passed QA, we would then go into user acceptance testing, done by a group of individuals given the added responsibility for their departments to get involved in testing on a backup of live, upgraded according to the release, and used the same way as if it were live. The inputs were retrospective samples over the past weeks, month end, and periodic demands, and were compared to the outcomes in live (and by the QA team back to the expected/known outcomes).

But it isn't for the faint-hearted.

It did take quite a bit of work in the early stages, and it required quite some thought and commitment to make it happen and keep it current. Pretty much everything was scripted, so that if we ever needed to we could do the unthinkable and start from scratch again. But the main reason was the payroll subsystem: we had to mask reality in such a way that we could still test as in real life without exposing highly sensitive data.

The advantages were huge in terms of productivity among staff (dev, QA, and user), and it was always a relatively easy task to recover or emulate any environment, or to seed the dev environment. The release processes into live were always dependable, with only a rare exception if something went bump (normally a people problem more than a process problem).

Might sound extreme, but it was very efficient in the end.
Alexander Eßer [Alex140181] (Software Developer) commented:
Take a look at this product:

We tested the trial recently and it's far less complicated than, e.g., GoldenGate. IMHO, it is very powerful (and much cheaper)...

Kind regards,
Geert G (Oracle DBA) commented:
loool > "& fastest"
We're just starting on a QAS refresh too.

A full export with expdp is scheduled globally, all at the same time.
 > why? Because in a few weeks they'll figure out they forgot something,
 > and they want all their data in sync at the same point in time.

Everything is done with scripts and SQL*Plus.
The worst thing is that the columns are not always in the same order, so for each such table
the QAS table must be recreated, or
> it has to be imported into a go-between schema and rebuilt with a CREATE TABLE ... AS SELECT in the right column order (see the sketch below).
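
A minimal sketch of that go-between approach (the schema and table names are placeholders): import into a staging schema first, then rebuild the QAS table with the column order it expects:

   CREATE TABLE qas.orders AS
   SELECT order_id, customer_id, order_date, status   -- explicit column list fixes the order
     FROM stage.orders;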

Don't forget to recreate the sequences; the production data will have higher sequence numbers (see the sketch below).
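
For example, a sketch of bumping one sequence past the imported data (the sequence, table, and column names are placeholders):

   DECLARE
     v_start NUMBER;
   BEGIN
     SELECT NVL(MAX(order_id), 0) + 1 INTO v_start FROM appuser.orders;
     EXECUTE IMMEDIATE 'DROP SEQUENCE appuser.orders_seq';
     EXECUTE IMMEDIATE 'CREATE SEQUENCE appuser.orders_seq START WITH ' || v_start;
   END;
   /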