blf2vkr

asked on

TOAD EXPORT

Hi there:

Environment: Oracle 9i on Windows 2000.

I want to use the TOAD Export utility to copy data from Production into the Test system (create the tables, views, synonyms, etc. on Test and load data into them). The Export utility will create the scripts to load the data; however, the tables and views in the Production database have constraints, synonyms, indexes, primary and foreign keys, etc., so the load must be done in the proper order.

I want to create a master file containing all the scripts to load.

How do I know which object's script I should run before the others, so that the load proceeds in order and there is never any contention?
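One general way to answer "which script first?" is a topological sort of the foreign-key dependency graph, so that every parent table is loaded before its children. Below is a minimal illustrative sketch (not a TOAD feature; the table names and FK pairs in the test are hypothetical, and in Oracle the child/parent pairs could be derived from the USER_CONSTRAINTS dictionary view):

```python
from collections import defaultdict, deque

def load_order(tables, fks):
    """Return a load order where every FK parent table precedes its children.

    tables: iterable of table names
    fks:    list of (child_table, parent_table) pairs (FK references)
    """
    deps = defaultdict(set)      # child  -> set of parents it waits on
    children = defaultdict(set)  # parent -> set of dependent children
    for child, parent in fks:
        if child != parent:      # ignore self-referencing FKs
            deps[child].add(parent)
            children[parent].add(child)
    # Kahn's algorithm: start with tables that depend on nothing
    ready = deque(sorted(t for t in tables if not deps[t]))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in sorted(children[t]):
            deps[c].discard(t)
            if not deps[c]:
                ready.append(c)
    if len(order) != len(set(tables)):
        raise ValueError("circular FK dependencies; disable FKs instead")
    return order
```

If the graph has a cycle (mutually referencing tables), no such order exists, which is exactly the case where disabling the FK constraints during the load is the practical answer.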


Please reply.

Thanks,

blf2vkr
gvsbnarayana

Hi,
   It seems that you are trying to load records into a schema that already contains some records with constraints enabled, and you are getting errors such as "parent key not found".
You can:
Disable the FK constraints on the target system
Load all the files you have
Enable the FK constraints

You can use the following queries to prepare the scripts that disable and re-enable the FK constraints:
To disable, take the output of the query:
select 'alter table ' || table_name || ' disable constraint ' || constraint_name || ' ; ' from user_constraints where constraint_type='R';


To enable, take the output of the query:
select 'alter table ' || table_name || ' enable constraint ' || constraint_name || ' ; ' from user_constraints where constraint_type='R';
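To turn those queries into runnable scripts, one common SQL*Plus pattern is to spool the generated ALTER statements to a file and then execute that file (a sketch only; the filename and formatting settings are illustrative):

```sql
-- Generate disable_fks.sql; repeat with ENABLE for the reverse script
SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 200 TRIMSPOOL ON
SPOOL disable_fks.sql
SELECT 'ALTER TABLE ' || table_name ||
       ' DISABLE CONSTRAINT ' || constraint_name || ';'
  FROM user_constraints
 WHERE constraint_type = 'R';
SPOOL OFF
-- Then run the generated script:
@disable_fks.sql
```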

HTH
Regards,
Badri.
blf2vkr

ASKER

The request came as follows. I am new to TOAD and do not know what to do; please help.

Thanks,

blf2vkr



We need to be able to load the current state of the Production database (PORMF02) into the other environments (acceptance and test) from a set of scripts. Please use Toad to generate the export scripts (a set of insert statements for all tables), then write a cleanup script and a master/main script to load them in the appropriate order. Be careful about how you use the schema owner; make it a parameter if you have to. Please check your code into the CVS py.source repository. Please plan ahead for running the master/main script based on an export date (i.e. the Toad export scripts will be stored in different directories, and the master script will run them based on an export-date parameter). We need to have this done COB today. Let me know if you have any questions.
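A minimal sketch of such a master script (the `exports/<date>/` directory layout, the parameter names, and the commented sqlplus call are assumptions based on the request above, not an existing script; numbering the generated files `01_`, `02_`, ... controls the load order):

```shell
#!/bin/sh
# master_load: run the Toad-generated insert scripts for one export date,
# in filename order, against the given schema owner.
# Usage: master_load <export_date> <schema_owner>
master_load() {
    export_date="$1"
    schema_owner="$2"   # would be used in the real sqlplus connect string
    dir="exports/$export_date"
    [ -d "$dir" ] || { echo "no export directory: $dir" >&2; return 1; }
    for f in "$dir"/*.sql; do
        [ -e "$f" ] || continue
        # In a real run this line would be something like:
        #   sqlplus -s "$schema_owner/password@connect_string" @"$f"
        echo "RUN $f"
    done
}
```

Because the shell glob expands in sorted order, prefixing the export files with `01_`, `02_`, ... is enough to enforce the parent-before-child load sequence.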
ASKER CERTIFIED SOLUTION
gvsbnarayana

Yes, MohanKNair has also explained how to execute the exp utility from the command prompt, and he is right. The parameter rows=n will export everything except the row data, which is faster.

To take the export of the schema from the prod database with data, as MohanKNair suggested, you will do this:

exp userid=<username>/<password>@<connect string> owner=SCOTT file=file1.dmp statistics=none

Now import that export of the prod database (with data) into the target schema like this:

imp userid=<target username>/<password>@<connect string> file=file1.dmp

Here you do not have to specify rows=y, because it is the default.

Good luck!
regards.

Well, Oracle's export/import is the best tool for this job, no doubt about it.
But the user is trying to do what his manager/TL has advised.
There may be times when you need to send the data as a script of SQL statements taken from the existing database, and TOAD export provides exactly that.

Regards,
Badri.
exp/imp should be fine.
However, if a lot of data has to be transferred, export/import could become a bottleneck because of its low performance.
FastReader should work well for syncing the prod and test environments when the database is large:
http://www.wisdomforce.com
Hi.
I think it is about time you distributed the points, as we have set aside precious time of our own to help you.
Now it is time for you to award the points we deserve.
Forced accept.

Computer101
EE Admin
Thank you.

From my answer, I have learned that a "direct path" export is more efficient than a conventional-path export.

So eliminate the two lines that mention compress=y and consistent=y and add the following:

direct=y
RECORDLENGTH=65535

In this way your exports will be faster.
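Putting those suggestions together, an exp parameter file might look like the following (a sketch; the username, connect string, and file name are placeholders, not values from this thread):

```
userid=scott/tiger@PRODDB
owner=SCOTT
file=prod_scott.dmp
statistics=none
direct=y
recordlength=65535
```

You would then run it with `exp parfile=prod_scott.par`, which keeps the command line short and lets you keep one parameter file per environment.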