expdp remote machine

Hi

I am trying to export some tables which are more than 20 GB.
The problem is that the database is on a remote server.
Is it possible to export the dump file directly onto my local machine?
How can I do that?

Can you help me with an example?

Thanks
basiranaAsked:

basiranaAuthor Commented:
Remote DB name is rdb1
Local DB name is ldb1
remote machine is rmachine1
local machine name is lmachine1
basiranaAuthor Commented:
Tables that need to be exported are ASP_D, ASP_E, ASP_F.
Is it possible to schedule it, since it is 20 GB? How can I do that?

Thanks
basiranaAuthor Commented:
Is it possible to rebuild the indexes later? Because the indexes are more than 40 GB in size.
I just want to import tables and constraints.
mrjoltcolaCommented:
With expdp I don't believe you can do that. Data Pump exports into an Oracle DIRECTORY object, which is local to the database server. You could try creating a network-mapped directory.

Data Pump does have a network mode, but it does not create an interim dump file; it is a direct database-to-database transfer.

So I think you need to try the regular old export (exp), allocate space on the server in a regular directory, or, again, try a network-mounted directory.
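A hedged sketch of that last idea, assuming the local machine exports a share that the remote server mounts at /mnt/lmachine1_dump (the mount point, the directory name remote_dump, and the user dhana are placeholders, not anything confirmed in this thread):

SQL> -- run in rdb1 on rmachine1; /mnt/lmachine1_dump is assumed to be an NFS/CIFS mount coming from lmachine1
SQL> CREATE DIRECTORY remote_dump AS '/mnt/lmachine1_dump';
SQL> GRANT READ, WRITE ON DIRECTORY remote_dump TO dhana;

$ expdp dhana@rdb1 DIRECTORY=remote_dump DUMPFILE=asp%U.dmp LOGFILE=asp_exp.log TABLES=ASP_D,ASP_E,ASP_F

The dump files then land on the local machine's disk through the mount, though throughput is limited by the network filesystem.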
schwertnerCommented:
It is possible:

With the Oracle 10g import utility you can use a network connection to directly import one or more tables from one server to another. Here are the steps you need to perform in order to import a table or tables from one server to another. First, assume that you have two servers, Server A and Server B, and you want to import tables such as EMP and DEPT from Server A into Server B.

1. Go to Server B and create a database link that will access the database in Server A.
Example:
SQL> CREATE DATABASE LINK mylink2a CONNECT TO scott IDENTIFIED BY password USING 'mylink2a';
Note that 'scott' is a user in the database on Server A and that its password is 'password'. The name in the USING clause ('mylink2a' here) must be a TNS alias defined on Server B that points to the database on Server A.

2. Then, perform the following import dump command.
Example:
# impdp scott2/password2 TABLES=emp,dept DIRECTORY=dpump1 NETWORK_LINK=mylink2a
Note that 'scott2' is a user in the database in Server B and that its password is 'password2'.
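For completeness, the two commands above assume that Server B already has a TNS alias matching the name in the USING clause and a directory object called dpump1. A minimal sketch of those prerequisites (host name, service name, and path are placeholders only):

# tnsnames.ora on Server B - 'mylink2a' must resolve to the database on Server A
mylink2a =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = servera_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = servera_service))
  )

SQL> -- run as a privileged user in the database on Server B
SQL> CREATE DIRECTORY dpump1 AS '/u01/app/oracle/dpump1';
SQL> GRANT READ, WRITE ON DIRECTORY dpump1 TO scott2;

In this network mode impdp pulls the rows over the database link and loads them straight into Server B's tables; the dpump1 directory is only used for the log file, and no dump file is written.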
schwertnerCommented:
I do not see the need to use files here, but if you decide to use a plain single-instance export via Data Pump you may hit the 2 GB file-size barrier (not in all cases, but this happens often on 32-bit machines).

The workaround:
1. Use the FILESIZE parameter, for example:
filesize=1900M
2. Use the %U wildcard in DUMPFILE:
DUMPFILE=expdat%U.dmp

So Data Pump will create many dump files.
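Put together, a hedged example (the user, directory object, and file names are placeholders) might look like:

expdp scott/password DIRECTORY=dpump1 DUMPFILE=expdat%U.dmp FILESIZE=1900M LOGFILE=expdat.log TABLES=ASP_D,ASP_E,ASP_F

Data Pump rolls over to expdat01.dmp, expdat02.dmp, and so on each time a file reaches 1900 MB, so no single file exceeds the limit.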
basiranaAuthor Commented:
We have an AIX box (uname: bceyenetdb1 3 5 00C8AADD4C00), which is 64-bit and has 8 processors.
I need help doing a Data Pump export of the tables without the indexes, and I want to run it in parallel.
Any help?
schwertnerCommented:
It is easy.
Use
PARALLEL=8

Also, to skip the indexes while keeping rows, grants and constraints, the Data Pump equivalent of the classic INDEXES=n / GRANTS=y / ROWS=y / CONSTRAINTS=y switches is:

EXCLUDE=INDEX

(Rows, grants and constraints are exported by default, and the old CONSISTENT=n has no direct expdp counterpart.)
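As a sketch only (credentials, directory object, and file names are placeholders, and %U is used so that each parallel worker can write its own file):

expdp system/password DIRECTORY=dpump1 DUMPFILE=sl%U.dmp PARALLEL=8 EXCLUDE=INDEX LOGFILE=sl_exp.log TABLES=ASP_D,ASP_E,ASP_F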
schwertnerCommented:
But PARALLEL has little practical benefit if no dump files are being written.
You are using the network variant, so there is effectively only one medium: the network.
Normally I use PARALLEL=4, but only when a real dump file is being written, and on a machine with 2 Xeon processors.
basiranaAuthor Commented:
Below is the expdp command I have written; does it work? If it works, how can I schedule it to run at 5 PM today?

expdp dhana@SOURCE_DB directory=dpump tables=MYDATA.SL_AN,MYDATA.SL_AT,MYDATA.SL_AN parallel=4 dumpfile=SL%U.dat network_link=source_link

After the export, how can I do the import from multiple files?

Thanks
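Two hedged sketches, neither of which appears elsewhere in this thread: scheduling with the Unix at command (cron would work equally well), and an import that relies on the same %U wildcard to pick up every generated file. The password, the TARGET_DB alias, and the log file names are placeholders.

$ echo "expdp dhana/password@SOURCE_DB directory=dpump tables=MYDATA.SL_AN,MYDATA.SL_AT parallel=4 dumpfile=SL%U.dat network_link=source_link logfile=sl_exp.log" | at 17:00

$ impdp dhana/password@TARGET_DB directory=dpump dumpfile=SL%U.dat parallel=4 logfile=sl_imp.log

The password has to be supplied on the command line (or via a wallet) because a job run by at cannot prompt for it.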
basiranaAuthor Commented:
CREATE DIRECTORY DUMP_DIR AS '/d032/oradata/dpump';

expdp dhana@SOURCE_DB
DIRECTORY=dump_dir
TABLES=MYDATA.SL_AN,MYDATA.SL_AT,MYDATA.SL_AN
PARALLEL=4
DUMPFILE=SL%U.dat
NETWORK_LINK=source_link
EXCLUDE=INDEX
schwertnerCommented:
Using impdp.

In contrast to the classic import utility, the Data Pump import takes the same parameter names as the export :)

But possibly you misunderstood me.

The approach in post ID:24796696 does not need dump files at all; it works over the network.
You need files when you make a pure server-side dump, transfer it to another machine, and import it there. But when you use a DB link, you do not use files.
basiranaAuthor Commented:
How can I do it using a DB link? Can you help me with an example?
schwertnerCommented:
It is explained in detail in post ID:24796696.
basiranaAuthor Commented:
schwertner

The posting explains impdp, but I do not see expdp. Can you help me step by step? I am new to Data Pump.
mrjoltcolaCommented:
schwertner, will the network mode create an actual .dmp file? I was under the impression that the network mode for Data Pump does not create an interim file at all.

I will admit I do not use Data Pump much yet; I am still using exp/imp for most databases.
schwertnerCommented:
Data Pump dump files cannot be written directly to a remote machine,
unless you use a common SAN, NAS, remote disk, or some other kind of shared disk drive.
Import comes into it because Data Pump dump files can only be consumed by Data Pump import.
Alternatively, an ftp transfer of the dump file can be scheduled on the remote server.

If you would like to use Data Pump, then create a local empty DB,
do the Data Pump import via the DB link, and after that export the local DB into files.

If not, use the classic export via a client, but set NLS_LANG appropriately.
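A hedged sketch of that client-side classic export; the character set shown is only an assumption, so check the real one first (for example with SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET'):

$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8    # must match the source database character set; AL32UTF8 is assumed here
$ exp dhana/password@rdb1 FILE=asp.dmp LOG=asp_exp.log TABLES=ASP_D,ASP_E,ASP_F

Because exp is a pure client tool, the dump file is written on the machine where you run it, which is what the original question asked for.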
mrjoltcolaCommented:
That is what I thought. I checked the docs yesterday, and it is correct: exporting remotely with Data Pump requires an actual Oracle instance plus a DB link, which will not exist for a pure client. So expdp cannot work like the old remote exp method. That is one thing I will miss if they ever remove exp completely.

schwertnerCommented:
The Data Pump export runs between two instances.
You can use an existing instance on the client side and then export the dump file from it.
I guess you need a Data Pump dump for some reason. If that is not so, use the old export,
but you need a client installation for it.
basiranaAuthor Commented:
I am not running expdp on my local machine; it is on the development machine, which has the dev DB running.
I am planning to export from prod to the dev box over a DB link, and then import the data into development.
All I need are the steps to export the tables and import them using Data Pump.
As we do not have enough disk space, I want to export onto more than one disk.

I have disk1 and disk5 free, which are 15 GB each, but the table size is more than 23 GB, so we need to export onto different disks and then import.

I am looking for steps to perform this operation.

Thanks
schwertnerCommented:
The steps are described above.
Syntactically they are not long, but they need some practice.
If you do not have enough space, think about compressed tablespaces, but also about the danger of hitting some bugs.
Also please be aware that, with the method from my notes, the data is entered directly into the database tables, so the tablespaces should already be created. Do not expect to get dmp files: the method works the other way around; it is an import and it loads the data immediately.
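So before running the network import, make sure the target tablespaces exist or remap them; a minimal sketch, with the tablespace name, datafile path, and size as placeholders:

SQL> CREATE TABLESPACE mydata_ts DATAFILE '/d032/oradata/mydata_ts01.dbf' SIZE 10G AUTOEXTEND ON;

If the source tablespace name does not exist on the dev database, impdp can instead redirect the segments with REMAP_TABLESPACE=old_ts:new_ts.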
basiranaAuthor Commented:
I specified a smaller FILESIZE and gave two destination directories; the files are distributed evenly across both directories and it is now working fine.

Export the scott schema from the orcl database and split the dump into multiple files. The dump files will be stored in three different locations. This method is especially useful if you do not have enough space in one file system to perform the complete expdp job. After the export is successful, import the dump files into the ordb database.

Expdp parfile content:

userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
filesize=50M
schemas=scott
include=table
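The matching import parfile is not shown above; a hedged sketch of it, assuming the same three directory objects (dump1, dump2, dump3) and the dumplocation log directory also exist on the side where impdp runs:

userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
schemas=scott

impdp finds all of the pieces through the per-file directory prefixes, exactly as expdp wrote them.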
schwertnerCommented:
In this case

parallel=n

will help.

Oracle recommends setting n to about 2 times the number of CPU cores.
schwertnerCommented:
Back up the LISTENER.ORA file first!

In ...\network\admin open listener.ora

see

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = D:\Ora10g)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = test)
      (ORACLE_HOME = D:\Ora10g)
    )
  )

Add an entry like
    (SID_DESC =
      (SID_NAME = test)
      (ORACLE_HOME = D:\Ora10g)
    )
adjusted to your particular ORACLE_HOME and SID name.


In Oracle 11g you have to create the SID_LIST_LISTENER section shown above from scratch in the listener.ora file.
Restart the listener:
c:>lsnrctl stop
c:>lsnrctl start
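A quick, optional check after the restart (assuming it is run from the same Oracle home):

c:>lsnrctl status

The SID or service you just added should show up in the listener's services list.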
schwertnerCommented:
Goooood luck!