Solved

Export bigger than 2GB

Posted on 2003-04-01
14
Medium Priority
1,370 Views
Last Modified: 2009-12-16
I am having problems trying to do an Oracle export.

System specs: AIX 4.3
Oracle: 7.3.4

Error displayed when doing database export:

++++++++++++++++++++++++++++++++++++++++++++++++++++++
EXP-00015: error on row 89473 of table EXCEPTIONS,
column CONSTRAINT, datatype 1
EXP-00002: error in writing to export file
EXP-00002: error in writing to export file
error closing export file
EXP-00000: Export terminated unsuccessfully
++++++++++++++++++++++++++++++++++++++++++++++++++++++

As far as I know, this type of error is related to a 2GB file size limit on the Oracle side: the export file grows until it reaches exactly 2147483647 bytes (just under 2GB) and then stops.

Can anyone please help?
Question by:hchibuda
14 Comments
 
LVL 1

Expert Comment

by:suzieqt
ID: 8244671
It may be an O/S limit.

In any case, you can export to two or more files using FILE=name1,name2,...

Then import using the same syntax.

Suzanne
 
LVL 1

Expert Comment

by:suzieqt
ID: 8244685
Sorry, I forgot to say that you also specify FILESIZE to set the maximum size of each file.
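Putting both suggestions together, the invocation might look like the sketch below. The file names are placeholders, and note that the FILESIZE parameter only exists in newer exp releases, so check whether your 7.3.4 exp supports it before relying on this:

```shell
# Split the export across multiple dump files, each capped below the 2GB limit
# (file names are placeholders; FILESIZE availability depends on the exp version)
exp system/manager FULL=Y FILESIZE=1900M FILE=exp1.dmp,exp2.dmp,exp3.dmp

# Import using the same file list and size
imp system/manager FULL=Y FILESIZE=1900M FILE=exp1.dmp,exp2.dmp,exp3.dmp
```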
 

Expert Comment

by:AspireDBA
ID: 8244954
Hi hchibuda,

Is it a 32-bit OS or a 64-bit OS? Try raising the ulimit. If that doesn't work, you can specify multiple files to exp and use Unix pipes to do the import...

HTH

Regards,

Suresh

www.aspiredba.com

 

Author Comment

by:hchibuda
ID: 8245169
Here are my size limits, which seem to be okay:

>ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        32768
memory(kbytes)       32768
coredump(blocks)     unlimited
nofiles(descriptors) 2000

What I'm doing right now is exporting table by table, but that's not what I want. I'd like to know how to split the export file, maybe into two or more files. Suresh, you mention using pipes; how is that done?

Regards,

 

Author Comment

by:hchibuda
ID: 8245268
By the way, it's a 64-bit OS.
 
LVL 48

Accepted Solution

by: schwertner (earned 1000 total points)
ID: 8245301
1. Check your operating system specific documentation for file size limitations. Most Unix systems, for example, have a file size limit of 2GB.
2. Check the available space on the device or file system. If you are close to 100% utilization, lack of device space is the likely cause of the problem. You need to free up extra space on the file system or device to accommodate the export dump file.
3. Check the size of the generated export dump file. If it is close to the operating system limit, you have probably hit the limit. Your options at this point include:
- exporting directly to tape (use VOLSIZE)
- piping the export file through a compression utility such as "compress" in UNIX. If you are running Oracle on Unix, see the sample shell script that does just that.


The following Unix shell script will pipe an Oracle export file through the Unix compress utility to obtain a compressed export dump file.
#
# EXPORT WITH COMPRESS SCRIPT
# ---------------------------
#
# BEFORE RUNNING THIS SCRIPT, use the mknod command to create a named pipe
# called /dev/PIPE.dmp, e.g. "mknod /dev/PIPE.dmp p" (the trailing p makes it
# a pipe; PIPE.dmp is an arbitrary name).
# Then set environment variables for ORACLE_HOME, ORACLE_SID, and DUMPDIR, as
# well as PATH if needed.
# If you save the script as, say, exp_comp.sh, type
# % exp_comp.sh & 
# at the Unix prompt to run it.
# WARNING: You must have unique names for each pipe if you have
# multiple concurrent executions of this script.
trap '' 1 # nohup
# Set these to appropriate values
ORACLE_HOME=
ORACLE_SID=
DUMPDIR=
# Customize PATH if needed
PATH=/bin:/usr/bin:${ORACLE_HOME}/bin:/usr/local/bin:/usr/lbin
export ORACLE_HOME ORACLE_SID PATH
LOGFILE=$DUMPDIR/exp${ORACLE_SID}.log
exec >$LOGFILE 2>&1
echo "Exporting $ORACLE_SID database. start `date`"
# Customize this as needed
exp system/manager GRANTS=Y FULL=Y INDEXES=Y COMPRESS=Y FILE=/dev/PIPE.dmp &
compress </dev/PIPE.dmp > $DUMPDIR/exp$ORACLE_SID.dmp.Z &
wait # let exp and compress finish before the script exits
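As a side note, the pipe mechanics in the script above can be demonstrated without Oracle at all. In the sketch below, cat stands in for exp/imp and gzip/gunzip for compress/zcat; these substitutions and the temp-directory paths are assumptions for the sake of a runnable demo, not part of the original script:

```shell
# Demonstration of the named-pipe technique: `cat` stands in for exp/imp,
# gzip/gunzip for compress/zcat (assumptions so the demo runs anywhere).
set -e
DIR=$(mktemp -d)
mkfifo "$DIR/PIPE.dmp"                  # named pipe; "mknod PIPE.dmp p" also works
printf 'pretend export data\n' > "$DIR/source.txt"

# "Export": the reader (gzip) drains the pipe in the background
# while the writer (exp in the real script) feeds it.
gzip < "$DIR/PIPE.dmp" > "$DIR/exp.dmp.gz" &
cat "$DIR/source.txt" > "$DIR/PIPE.dmp"
wait

# "Import": reversed -- gunzip feeds the pipe, the consumer (imp) reads it.
gunzip -c "$DIR/exp.dmp.gz" > "$DIR/PIPE.dmp" &
RESTORED=$(cat "$DIR/PIPE.dmp")
wait

echo "$RESTORED"                        # prints: pretend export data
rm -rf "$DIR"
```

Because the dump never lands on disk uncompressed, the 2GB file limit only applies to the (much smaller) compressed file.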

 
LVL 1

Expert Comment

by:Swany
ID: 8249940
What OS and filesystem are you using?

You may just need to pass a parameter to the mount command
or specify an option in your vfstab (fstab on some
variants) to enable large file support.

VxFS doesn't support large files by default, but UFS
usually does.
 

Author Comment

by:hchibuda
ID: 8251639
AIX 4.3 BOS
VFS filesystem.
 
LVL 1

Expert Comment

by:Swany
ID: 8251831
Is VFS Veritas file system (ie, VxFS?)

If so, then add -o largefiles to your mount commands and
you should be able to create files > 2GB (up to 2TB in
fact)

I did a google on AIX and VFS and didn't find any
specific references to a VFS file system, but there were
lots of references for vxfs, so I am hoping that is
what you are running.
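If it is VxFS, the remount might look like the sketch below; the device path and mount point are placeholders, so substitute your own:

```shell
# Enable large files on a VxFS filesystem (device and mount point are placeholders)
umount /u01
mount -F vxfs -o largefiles /dev/vx/dsk/datadg/oravol /u01

# To make it persistent, add "largefiles" to the mount options
# field of the filesystem's entry in /etc/vfstab
```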
 
LVL 1

Expert Comment

by:Swany
ID: 8251839
Oh, and I just thought about the fact that you might be using 32-bit Oracle (you can run 32-bit Oracle on a 64-bit platform). If you are using 32-bit Oracle, then large files can't be created by Oracle and you are out of luck.
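One quick way to check is to inspect the oracle binary with the Unix file command (this assumes $ORACLE_HOME is set in your environment):

```shell
# Reports the binary format, e.g. "64-bit XCOFF executable" on AIX
file $ORACLE_HOME/bin/oracle
```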
 

Author Comment

by:hchibuda
ID: 8259642
schwertner,

I've used the script for the export successfully; now how do I do the import bit? When I tried to uncompress the dump file, it hit the same limit the export did. SOS.
 
LVL 1

Expert Comment

by:Swany
ID: 8261618
You can use the same named pipe that you used for exporting. Instead of sending the data to the pipe from exp and reading it with compress, you need to send the data to the pipe with compress (or zcat) and read it with imp.

Modify the above script and replace the last two lines with:

zcat $DUMPDIR/exp$ORACLE_SID.dmp.Z > /dev/PIPE.dmp & 
imp system/manager FULL=Y FILE=/dev/PIPE.dmp &

 
LVL 1

Expert Comment

by:Swany
ID: 8261660
When I say modify I mean copy it and then modify it. You
need to use the first script for export and the second
script for import.

If you just try to uncompress the file it will fail because
your OS doesn't support large files.  You need to stream
the data to/from exp and imp for it to work.

In step one, you use exp to write the uncompressed data
to a named pipe (/dev/PIPE.dmp).  compress reads the data
from the pipe and writes it out in compressed format.

In step two, zcat takes a compressed file and outputs it
to stdout. The output is redirected to the pipe.  Oracle
import reads the uncompressed data from the pipe.
 

Author Comment

by:hchibuda
ID: 8281933
Thanks schwertner you've saved my day!
