Solved

Export Data Pump

Posted on 2013-06-18
572 Views
Last Modified: 2013-06-25
Hi,
TNS for Linux: Version 10.2.0.4.0
I have 2 Oracle instances (PROD and DEV), Enterprise Edition Release 10.2.0.4.0 - 64bit.
I need to export the full database from the PROD instance and create the DUMPFILE on the DEV machine.
I can create a database link from PROD to DEV and vice versa.

Can I create, from PROD, a new Oracle directory that points to a mount point on the DEV machine?


Thanks in advance!
Question by:ralph_rea
6 Comments
 
LVL 76

Expert Comment

by:slightwv (Netminder)
ID: 39256521
If the prod server can mount a disk on dev, then yes.

However, you should NEVER link dev and prod servers.  Not even a database link!  It opens up way too many issues, and it's a good way to trash production.

Since you can create database links, they are on the same network.  Just run exp from the dev server and point it at the production database through tnsnames.  Still risky, but less so than physically linking the servers.

Or just create the file on production and copy/ftp/??? it over.
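
A minimal sketch of the exp-from-dev approach mentioned above (the credentials, TNS alias, and paths here are assumptions, not from the thread):

# Run on the DEV server; PROD is the tnsnames.ora alias for the production db
exp userid=system/xxxx@PROD full=y file=/u01/dump/prod_full.dmp log=/u01/dump/prod_full.log

Because classic exp is a client-side tool, the dump file is written wherever the client runs, i.e. on the DEV machine.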
 

Author Comment

by:ralph_rea
ID: 39256646
I cannot create the dump file on production because I have very little space on the mount point.

So, if I understand it, your solution is:

on DEV:
1) Create an Oracle directory (a sketch of this step is below)
2) modify tnsnames.ora to point to the production database
3) run expdp:
expdp "sys/password@PROD as sysdba" directory=dumpdir dumpfile=prod_exp.dmp full=y

Correct?
I know that expdp is a server process and runs on the server machine. In this case, will the prod_exp.dmp file end up on the PROD or the DEV machine?
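
For reference, step 1 on DEV might look something like this (a sketch only; the directory name, path, and grantee are examples):

-- On DEV, as a privileged user:
CREATE OR REPLACE DIRECTORY dumpdir AS '/u01/app/oracle/dump';
GRANT READ, WRITE ON DIRECTORY dumpdir TO system;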
 
LVL 76

Assisted Solution

by:slightwv (Netminder)
slightwv (Netminder) earned 100 total points
ID: 39256709
I've never tried Data Pump exports remotely.  It looks like you can do it, but it does require a database link:
http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_export.htm#SUTIL856

If I were the production DBA, I wouldn't let the development database create a link to production.  Way too many things could go wrong.

I know classic exp/imp will work for you.
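
Per that doc, the remote Data Pump export would be run on the DEV server, so the dump file lands on DEV.  Roughly (a sketch; the link name, directory, and credentials are assumptions):

# On DEV, after creating a database link to PROD (e.g. prod_link) and a
# directory object (e.g. dumpdir), pull the data across the link:
expdp system/xxxx directory=dumpdir network_link=prod_link full=y dumpfile=prod_exp.dmp logfile=prod_exp.log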
 
LVL 23

Assisted Solution

by:David
David earned 50 total points
ID: 39256767
1.  If you are so tight on production storage that you cannot remove any log or other trash files, you may soon have an even bigger problem.

2.  Are you backing up production, either with RMAN or Data Pump?  Consider restoring to DEV from your last backup set.  If you are not backing up PROD, see above about the bigger problem.

3.  Moving production data into a less secure copy, where security is typically less enforced, is also unwise.  Bigger problem, yada yada.

One win-win approach may be to clone the PROD database template using DBCA, use the template on DEV to create the new database, then populate the DEV side with a set of generated baseline data.  Remember that you don't need all the rows of the PROD tables in order to test constraints.  Bring over your stats to simulate the load for the optimizer, and bring over the SQL profiles.
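
For the stats piece, one way to move them is with DBMS_STATS (a sketch; the schema and stats-table names are examples):

-- On PROD: stage the schema stats in a transportable stats table
EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'PROD_STATS');
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'PROD_STATS');
-- Move the PROD_STATS table to DEV (exp/imp will do), then on DEV:
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'PROD_STATS');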
 
LVL 34

Assisted Solution

by:johnsone
johnsone earned 50 total points
ID: 39257021
To use exp/imp, you don't actually need to store the dump file at all.  You can do it easily with a named pipe.  The basic commands would be:

mknod data.dmp p
exp userid=user/pass@prod file=data.dmp ...
imp userid=user/pass@dev file=data.dmp ...

You would need to run the exp and imp commands in different OS sessions, but it definitely works, even at a large scale (we did a 1.5TB database once), and you never need more than a few blocks of disk space to do it.
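
A slightly fuller sketch of that flow (the credentials and pipe path here are examples):

# On the DEV server: create the pipe, start exp reading from PROD in the
# background, then run imp against DEV from the same pipe.  exp blocks
# until imp starts consuming, so nothing is ever stored on disk.
mknod /tmp/data.dmp p
exp userid=system/xxxx@prod full=y file=/tmp/data.dmp &
imp userid=system/xxxx@dev full=y ignore=y file=/tmp/data.dmp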
 

Accepted Solution

by:mkhandba
mkhandba earned 300 total points
ID: 39260650
You can do it directly ... a direct import into your DEV from PROD (not recommended); no need for any export at all.

Create a db link at DEV pointing to your PROD, and then you can do this directly (a sketch of the link is below).
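
For reference, creating the link on DEV might look like this (a sketch; the user, password, and TNS alias are examples, and the link name matches the NETWORK_LINK value in the parfile below):

-- On DEV, as a user with the CREATE DATABASE LINK privilege:
CREATE DATABASE LINK PROD.WORLD CONNECT TO system IDENTIFIED BY xxxx USING 'PROD';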

UNIX cmd:
nohup impdp system/xxxx parfile=imp_name.par &

imp_name.par:
LOGFILE=imp_name.log
DIRECTORY=DP_DIR
NETWORK_LINK=PROD.WORLD
SCHEMAS=SCOTT
REMAP_TABLESPACE=a:b,c:d,e:f    # only if tablespaces differ between source and target
EXCLUDE=STATISTICS              # increases speed; gather stats after the import
PARALLEL=12
METRICS=YES
TRACE=480300

Now you can change this to suit.  Here I'm doing a schema import, directly into the target from the source, in one step, using a DB link, and also doing a REMAP of tablespaces in case they are different.

HTH
