ON SOLARIS: Doing an Oracle data import, getting an error ORA-01237

Posted on 2002-05-01
Medium Priority
Last Modified: 2013-12-05
As stated above, on Solaris the data import initially looks as if it is taking place normally, and then fails with the above error.

There is no space restriction on the server.
The maximum number of extents is set to unlimited.
The maximum size was set to unlimited but seems to switch over to a size of 32Gb (which should be fine, as it is far more than the single table being imported).
Question by:Robj
LVL 48

Accepted Solution

schwertner earned 800 total points
ID: 6983086
fact: Oracle Server - Enterprise Edition 8.0
symptom: Extend datafile size over 2 GB generates error
symptom: ORA-01237: cannot extend datafile %s
symptom: ORA-01110: data file %s: '%s'
symptom: ORA-27059: skgfrsz: could not reduce file size
symptom: SVR4 error:  22 invalid argument
cause: Limitation of the Operating System


Add More Datafiles to the Tablespace Instead of Extending Size of Datafiles
Above 2 GB

Expert Comment

ID: 6983095
Why is 2Gb a Special Number ?
  Many CPUs and system call interfaces (APIs) in use today use a word
  size of 32 bits. This word size imposes limits on many operations.
  In many cases the standard APIs for file operations use a 32-bit signed
  word to represent both file size and current position within a file (byte
  displacement). A 'signed' 32bit word uses the top most bit as a sign
  indicator, leaving only 31 bits to represent the actual value (positive or
  negative). In hexadecimal the largest positive number that can be
  represented in 31 bits is 0x7FFFFFFF, which is +2147483647 decimal.
  This is ONE less than 2Gb.
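  The arithmetic above can be verified directly (a quick sketch in Python;
  the constant names are only illustrative):

```python
# Quick check of the signed 32-bit boundary described above.
MAX_SIGNED_32 = 0x7FFFFFFF          # 31 value bits; the top bit is the sign
TWO_GB = 2 * 1024**3                # 2Gb in bytes

assert MAX_SIGNED_32 == 2**31 - 1 == 2147483647
assert MAX_SIGNED_32 == TWO_GB - 1  # ONE less than 2Gb
```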

  Files of 2Gb or more are generally known as 'large files'. As one might
  expect, problems can start to surface once you try to use the number
  2147483648 or higher in a 32bit environment. To overcome this problem,
  recent versions of operating systems have defined new system calls which
  typically use 64-bit addressing for file sizes and offsets. Recent Oracle
  releases make use of these new interfaces but there are a number of issues
  one should be aware of before deciding to use 'large files'.

  Another "special" number is 4Gb. 0xFFFFFFFF in hexadecimal can be
  interpreted as an UNSIGNED value (4294967295 decimal) which is one less
  than 4Gb. Adding one to this value yields 0x00000000 in the low order
  4 bytes with a '1' carried over. The carried over bit is lost when using
  32bit arithmetic. Hence 4Gb is another "special" number where problems
  may occur. Such issues are also mentioned in this article.
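  The 4Gb wraparound can likewise be reproduced by masking additions to
  32 bits (again only an illustrative sketch, not Oracle code):

```python
MAX_UNSIGNED_32 = 0xFFFFFFFF        # 4294967295 decimal, one less than 4Gb
FOUR_GB = 4 * 1024**3

assert MAX_UNSIGNED_32 == FOUR_GB - 1

# Adding one in 32-bit arithmetic: the carried-over bit is lost.
wrapped = (MAX_UNSIGNED_32 + 1) & 0xFFFFFFFF
assert wrapped == 0                 # 4Gb collapses to zero in 32 bits
```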

What does this mean when using Oracle ?
  The 32bit issue affects Oracle in a number of ways. In order to use large
  files you need to have:

      1. An operating system that supports 2Gb+ files or raw devices

      2. An operating system which has an API to support I/O on 2Gb+ files

      3. A version of Oracle which uses this API

  Today most platforms support large files and have 64bit APIs for such files.
  Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs but
  the situation is very dependent on platform, operating system version and
  the Oracle version. In some cases 'large file' support is present by
  default, while in other cases a special patch may be required.
  At the time of writing there are some tools within Oracle which have not
  been updated to use the new APIs, most notably EXPORT and
  SQL*LOADER, but again the exact situation is platform and version specific.

Why use 2Gb+ Datafiles ?
  In this section we will try to summarise the advantages and disadvantages
  of using "large" files / devices for Oracle datafiles:

  Advantages of files larger than 2Gb:

      On most platforms Oracle7 supports up to 1022 datafiles.
      With files < 2Gb this limits the database size to less than 2044Gb.
      This is not an issue with Oracle8 which supports many more files.
        (Oracle8 supported 1022 files PER TABLESPACE).

      In reality the maximum database size in Oracle7 would be less than
        2044Gb due to maintaining separate data in separate tablespaces.
        Some of these may be much less than 2Gb in size. Larger files
        allow this 2044Gb limit to be exceeded.

      Larger files can mean fewer files to manage for smaller databases.

      Fewer file handle resources are required.

  Disadvantages of files larger than 2Gb:

      The unit of recovery is larger. A 2Gb file may take between 15 minutes
      and 1 hour to backup / restore depending on the backup media and
      disk speeds.  An 8Gb file may take 4 times as long.

      Parallelism of backup / recovery operations may be impacted.

      There may be platform specific limitations - Eg: Asynchronous IO
      operations may be serialised above the 2Gb mark.

      Because handling of files above 2Gb may need patches, special
      configuration etc., there is an increased risk involved compared to
      smaller files. Eg: On certain AIX releases Asynchronous IO serialises
      above 2Gb.

  Important points if using files >= 2Gb

      Check with the OS Vendor to determine if large files are supported
      and how to configure for them.

      Check with the OS Vendor what the maximum file size actually is.

      Check with Oracle support if any patches or limitations apply
      on your platform , OS version and Oracle version.

      Remember to check again if you are considering upgrading either
      Oracle or the OS in case any patches are required in the release
      you are moving to.

      Make sure any operating system limits are set correctly to allow
      access to large files for all users.

      Make sure any backup scripts can also cope with large files.

      Note that there is still a limit to the maximum file size you
      can use for datafiles above 2Gb in size. The exact limit depends
      on the DB_BLOCK_SIZE of the database and the platform. On most
      platforms (Unix, NT, VMS) the limit on file size is around
      4194303 * DB_BLOCK_SIZE (four million Oracle data blocks).

      See the details in the Alert in [NOTE:112011.1] which describes
      problems with resizing files, especially to above 2Gb in size.

  Important notes generally

      Be careful when allowing files to automatically resize. It is
      sensible to always limit the MAXSIZE for AUTOEXTEND files to less
      than 2Gb if not using 'large files', and to a sensible limit
      otherwise. Note that due to [BUG:568232] it is possible to specify
      a value of MAXSIZE larger than Oracle can cope with, which may
      result in internal errors after the resize occurs. (Errors
      typically include ORA-600 [3292])

      On many platforms Oracle datafiles have an additional header
      block at the start of the file so creating a file of 2Gb actually
      requires slightly more than 2Gb of disk space. On Unix platforms
      the additional header for datafiles is usually DB_BLOCK_SIZE bytes
      but may be larger when creating datafiles on raw devices.
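  For example, with an assumed DB_BLOCK_SIZE of 8192, the disk space needed
  for a '2Gb' datafile works out as follows (illustrative arithmetic only):

```python
DB_BLOCK_SIZE = 8192                     # assumed block size
requested = 2 * 1024**3                  # datafile created with SIZE 2048M
on_disk = requested + DB_BLOCK_SIZE      # plus one header block on Unix

assert on_disk == 2147491840             # slightly more than 2Gb
```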

  2Gb related Oracle Errors:

      These are a few of the errors which may occur when a 2Gb limit
      is present. They are not in any particular order.
        ORA-01119 Error in creating datafile xxxx
        ORA-27044 unable to write header block of file
        SVR4 Error: 22: Invalid argument
        ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
        ORA-27070 skgfdisp: async read/write failed
        ORA-02237 invalid file size
        KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
        file limit exceeded.
        Unix error 27, EFBIG

Export and 2Gb
 2Gb Export File Size
  At the time of writing most versions of export use the default file
  open API when creating an export file. This means that on many platforms
  it is impossible to export a file of 2Gb or larger to a file system file.

  There are several options available to overcome 2Gb file limits with
  export such as:

      - It is generally possible to write an export > 2Gb to a raw device.
        Obviously the raw device has to be large enough to fit the entire
        export into it.

      - By exporting to a named pipe (on Unix) one can compress, zip or
        split up the output.
        See: "Quick Reference to Exporting >2Gb on Unix" [NOTE:30528.1]

      - One can export to tape (on most platforms)
        See "Exporting to tape on Unix systems" [NOTE:30428.1]
          (This article also describes in detail how to export to
          a unix pipe, remote shell etc..)

      - Oracle8i allows you to write an export to multiple export
        files rather than to one large export file.

 Other 2Gb Export Issues
  Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
  with EXPORT on many releases of Oracle such that if you export a large table
  and specify COMPRESS=Y then it is possible for the NEXT storage clause
  of the statement in the EXPORT file to contain a size above 2Gb. This
  will cause import to fail even if IGNORE=Y is specified at import time.
  This issue is reported in [BUG:708790] and is alerted in [NOTE:62436.1]

  An export will typically report errors like this when it hits a 2Gb limit:
      . . exporting table                   BIGEXPORT
      EXP-00015: error on row 10660 of table BIGEXPORT,
            column MYCOL, datatype 96
      EXP-00002: error in writing to export file
      EXP-00002: error in writing to export file
      EXP-00000: Export terminated unsuccessfully

  There is a secondary issue reported in [BUG:185855] which indicates that
  a full database export generates a CREATE TABLESPACE command with the
  file size specified in BYTES. If the filesize is above 2Gb this may
  cause an ORA-2237 error when attempting to create the file on IMPORT.
  This issue can be worked around by creating the tablespace prior to
  importing, specifying the file size in 'M' instead of in bytes.
  [BUG:490837] indicates a similar problem.
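  The workaround works because a byte count above 2Gb no longer fits in a
  signed 32-bit word, while the same size written with an 'M' suffix is a
  small number (a quick illustrative check; the 2200Mb figure is just an
  example):

```python
size_bytes = 2200 * 1024**2     # a 2200Mb datafile expressed in BYTES
size_m = 2200                   # the same datafile expressed as 2200M

assert size_bytes > 0x7FFFFFFF  # overflows a signed 32-bit word
assert size_m < 0x7FFFFFFF      # trivially fits
```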

 Export to Tape
  The VOLSIZE parameter for export is limited to values less than 4Gb.
  On some platforms the limit may be only 2Gb.
  This is corrected in Oracle 8i. [BUG:490190] describes this problem.

SQL*Loader and 2Gb
  Typically SQL*Loader will error when it attempts to open an input
  file larger than 2Gb with an error of the form:

      SQL*Loader-500: Unable to open file (bigfile.dat)
      SVR4 Error: 79: Value too large for defined data type

  The examples in [NOTE:30528.1] can be modified for use with SQL*Loader
  and large input data files.
  Oracle 8.0.6 provides large file support for discard and log files in
  SQL*Loader but the maximum input data file size still varies between
  platforms. See [BUG:948460] for details of the input file limit.
  [BUG:749600] covers the maximum discard file size.

Oracle and other 2Gb issues
  This section lists miscellaneous 2Gb issues:

  - From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
    An extract from the 8.0.5 README file introduces these - see [NOTE:62252.1]

  - DBV (the database verification file program) may not be able to scan
    datafiles larger than 2Gb reporting "DBV-100".
    This is reported in [BUG:710888]

  - "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
    specified in 'M' or 'K' to create files larger than 2Gb otherwise the
    error "ORA-02237: invalid file size" is reported. This is documented
    in [BUG:185855].

  - Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
    Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename>  
      ORA-2187: invalid quota specification.
    This is documented in [BUG:425831].
    The workaround is to grant users UNLIMITED TABLESPACE privilege if they
    need a quota above 2Gb.

  - Tools which spool output may error if the spool file reaches 2Gb in size.
    Eg: sqlplus spool output.
  - Certain 'core' functions in Oracle tools do not support large files -
    See [BUG:749600] which is fixed in Oracle 8.0.6 and 8.1.6.  
    Note that this fix is NOT in Oracle 8.1.5 nor in any patch set.
    Even with this fix there may still be large file restrictions as not
    all code uses these 'core' functions.
    Note though that [BUG:749600] covers CORE functions - some areas of code
    may still have problems.
    Eg: CORE is not used for SQL*Loader input file I/O

  - The UTL_FILE package uses the 'core' functions mentioned above and so is
    limited by 2Gb restrictions in Oracle releases which do not contain this
    fix. <Package:UTL_FILE> is a PL/SQL package which allows file IO from
    within PL/SQL.
Port Specific Information on "Large Files"
  Below are references to information on large file support for specific
  platforms. Although every effort is made to keep the information in
  these articles up-to-date it is still advisable to carefully test any
  operation which reads or writes from / to large files:

Expert Comment

ID: 6983099
Summary File Limits for Oracle on Sun Solaris (32 bit Oracle releases)
OS Limits
    Release              Max file-system size       Max OS File size  
    <  Solaris 2.6            1Tb (UFS)                   2Gb
    >= Solaris 2.6            1Tb (40 bits)               1Tb

  PLEASE NOTE: In order to use large files on file systems the
             file system must be mounted with the 'largefiles' option

        Shell           Query/Set               Defaulted In
        -----           ---------               ------------
        ksh / sh        ulimit                  /etc/profile
        csh             limit [-h] filesize    

Oracle Limits
  IMPORTANT: There is a generic limit on the maximum Oracle datafile
             size of 4 million Oracle data blocks. The exact limit is
             4194303 * DB_BLOCK_SIZE. Some Oracle operations may try
             to exceed this, as alerted in [NOTE:112011.1].
  See [NOTE:62427.1] for generic 2Gb information.
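  A quick check of this limit for common block sizes (illustrative; note
  that an 8K block size gives just under 32Gb, matching the 32Gig ceiling
  described in the question):

```python
MAX_BLOCKS = 4194303                      # generic Oracle datafile limit

for db_block_size in (2048, 4096, 8192):  # common DB_BLOCK_SIZE values
    max_bytes = MAX_BLOCKS * db_block_size
    print(db_block_size, max_bytes)

# 4194303 * 8192 = 34359730176 bytes, i.e. just under 32Gb
assert 4194303 * 8192 == 34359730176
assert 4194303 * 8192 < 32 * 1024**3
```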

  The matrix below shows where Oracle can work with files >=2Gb in size
  on Solaris.

SOLARIS 2.6 / 2.7 / 2.8
               File            Exp/   Loader
  Oracle       System  Raw     Imp**  see **  UTL_FILE  Notes (see below)
  ========================================================================
  ?            >4Gb    >=2Gb   >=2Gb  >=2Gb   >=2Gb
  ? 32Bit      >4Gb    >=2Gb   <2Gb   <2Gb    <2Gb
  ? 64Bit      >4Gb    >=2Gb   <2Gb   ?       ?         Exp is a 32bit executable
  ?            >4Gb    >=2Gb   <2Gb#A #9      >2Gb
  ?            >4Gb    >=2Gb   <2Gb   <2Gb    <2Gb
  ?            <4Gb#6  >=2Gb   <2Gb   <2Gb    <2Gb      #6 #8
  ?            <2Gb    >=2Gb   <2Gb   <2Gb    <2Gb      #4 #8
  >= ?         <4Gb#6  >=2Gb   <2Gb   <2Gb    <2Gb      #6 #7 #8
  <  ?         <2Gb    >=2Gb   <2Gb   <2Gb    <2Gb      #4 #5 #8
  8.0.3.x      <2Gb    >=2Gb   <2Gb   <2Gb    <2Gb
  7.3.4.x      <2Gb    >=2Gb   <2Gb   <2Gb    <2Gb      #1 #2
  7.3.3.x      <2Gb    >=2Gb   <2Gb   <2Gb    <2Gb      #1 #2

SOLARIS 2.5.1      NOTE: Solaris 2.5.1 does not allow large file system files.
           File                 Exp/    Loader
  Oracle   System  Raw      Imp**   see **  UTL_FILE  Notes
  8.0.5.x  <2Gb    >=2Gb    <2Gb    <2Gb    <2Gb
  8.0.4.x  <2Gb    >=2Gb    <2Gb    <2Gb    <2Gb
  8.0.3.x  <2Gb    >=2Gb    <2Gb    <2Gb    <2Gb
  7.3.4.x  <2Gb    >=2Gb    <2Gb    <2Gb    <2Gb
  7.3.3.x  <2Gb    >=2Gb    <2Gb    <2Gb    <2Gb      #1
  ?        <2Gb    >=2Gb    <2Gb    <2Gb    <2Gb      #1

SOLARIS 2.4      NOTE: Solaris 2.4 does not allow large file system files.
           File                 Exp/
  Oracle   System  Raw      Imp     Loader  UTL_FILE  Notes
  7.3.4.x  <2Gb    >=2Gb    <2Gb    <2Gb    <2Gb
  7.3.3.x  <2Gb    >=2Gb    <2Gb    <2Gb    <2Gb      #3
  ?        <2Gb    >=2Gb    <2Gb    <2Gb    <2Gb      #3

< ?        <2Gb    <2Gb     <2Gb    <2Gb    <2Gb      Any Solaris release

This is a quick reference to commands that can be used with
  export and/or SQL*Loader on Unix.
Calculating the size of an export file.
  % mknod /tmp/exp_pipe p
  % dd if=/tmp/exp_pipe of=/dev/null bs=1024 &
  % exp file=/tmp/exp_pipe <other options>

  This will return the number of 1K blocks in the export file, in
  the following format :
      <no. of 1K blocks>+0 records in
      <no. of 1K blocks>+0 records out
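  The block count reported by dd converts to a size in bytes by multiplying
  by the 1K block size given on the dd command line (a small sketch; the
  function name and the parsing of dd's summary line are assumptions):

```python
def export_size_bytes(dd_summary_line, block_size=1024):
    """Turn a dd line such as '204800+0 records out' into a byte count."""
    blocks = int(dd_summary_line.split("+")[0])  # full blocks before the '+'
    return blocks * block_size

# 204800 blocks of 1K = 200Mb export file
assert export_size_bytes("204800+0 records out") == 209715200
```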

Exporting directly to tape - an example
  A full database export to a QIC 150 (150 Megabytes capacity)
  VOLSIZE should be < tape capacity
  % exp userid=system/manager full=y file=/dev/rmt/0m volsize=145M
  After EXP has written <VOLSIZE> bytes to the tape, it will prompt for the next volume:

      Please mount the next volume and hit <ret> when you are done.

Exporting to tape via unix named pipes
  Note: 'dd' for tape devices may need 'bs=8k' or similar depending on tape
  % mknod /tmp/exp_pipe p                  # Make the pipe
  % dd if=/tmp/exp_pipe of=<tape device> &    # Write from pipe to tape
  % exp file=/tmp/exp_pipe <other options>    # Export to the pipe  

Importing from tape via unix named pipes
  % mknod /tmp/imp_pipe p                  # Make the pipe
  % dd of=/tmp/imp_pipe if=<tape device> &    # Write from tape to pipe
  % imp file=/tmp/imp_pipe <other options>    # Import from the pipe  

SQL*Loading from tape via unix named pipes
  % mknod /tmp/ldr_pipe p                      # Make the pipe
  % dd of=/tmp/ldr_pipe if=<tape device> &        # Write from tape to pipe
  % sqlload data=/tmp/ldr_pipe <other options>    # Load from the pipe  

Creating a compressed export file
  % mknod /tmp/exp_pipe p                     # Make the pipe
  % compress < /tmp/exp_pipe > export.dmp.Z & # Background compress
  % exp file=/tmp/exp_pipe <other options>    # Export to the pipe
Reading a compressed export file
  % mknod /tmp/imp_pipe p                       # Make the pipe
  % uncompress < export.dmp.Z > /tmp/imp_pipe & # Background uncompress
  % imp file=/tmp/imp_pipe <other options>      # Import from the pipe
Exporting to a remote tape device
  % mknod /tmp/exp_pipe p                    # Make the pipe
  % dd if=/tmp/exp_pipe | rsh <hostname> dd of=<file|device> &
                                    # from pipe to remote file
  % exp file=/tmp/exp_pipe <other options>    # Export to the pipe

Export to several files
 (Note: Not all platforms support the split "-b" option used here)
 % mknod /tmp/exp_pipe p                  # Make the pipe
 % cd /fs_with_25gig_freespace                  # Make sure disk has space
 % split -b2047m < /tmp/exp_pipe &            # Split input to 2Gb chunks
 % exp user/passwd file=/tmp/exp_pipe full=y  # Export to the pipe

 This will split the export into several files called 'xaa', 'xab', 'xac'
 all of size 2047Mb. These can be fed back into import thus:

 % mknod /tmp/imp_pipe p                      # Make the pipe
 % cd /fs_with_25gig_freespace                  
 % cat xaa xab xac > /tmp/imp_pipe &            # Put files into the pipe
 % imp user/passwd file=/tmp/imp_pipe            # And import

 The 'man' page for split shows options to generate more meaningful file names.
 WARNING: Some versions of 'split' are very poor performance wise.
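 The -b2047m chunk size in the split example is deliberately chosen to stay
 just under the signed 32-bit limit (a quick check):

```python
chunk = 2047 * 1024**2              # split -b2047m
assert chunk == 2146435072
assert chunk < 0x7FFFFFFF           # safely below 2147483647
assert 2048 * 1024**2 > 0x7FFFFFFF  # a 2048m chunk would exceed it
```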

Export >2GB
 - Use the split example above.
 - Export to a NAMED PIPE and use unix 'cat' or 'dd' to copy from this to
   a large file. The 'cat' or 'dd' command must support 64 bit write and
   read offsets. Use the reverse for import.

Oracle8i introduced a new parameter (FILESIZE) which allows one to spread
exported data over multiple files.
