• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 16731

Windows / Oracle Database dump (exp/imp) with different charsets resulting in ORA-01401 error

Hello, I'd like to move a database from one Oracle DBMS to another (from 8.1.7 to 9.0.2).

I'm using exp/imp, which works fine, but during the import I get an ORA-01401 error because the source uses the WE8ISO8859P1 charset and the target uses UTF-8. The umlauts involved trigger the error: ORA-01401: inserted value too large for column

---

So what do I have to do to get those 200 tables (about 500 MB of data) converted and imported properly into the target database?

Tolomir

Asked:
Tolomir
1 Solution
 
MikeOM_DBA Commented:

Set your NLS_LANG environment variable to: WE8ISO8859P1

 
marper Commented:
Subject:  import gives ORA-01401

PURPOSE & Problem Description
------------------------------

This note is meant for users who get the following errors while carrying
out an import against Oracle databases with versions from 8.0.x (where x>=6) to 8.1.7.x.x.
 
IMP-00017: following statement failed with ORACLE error 1401:
 " ALTER TABLE "AA" MODIFY ("COL" DEFAULT USER)"
IMP-00003: ORACLE error 1401 encountered
ORA-01401: inserted value too large for column

Symptoms:
------------

The following table creation statement fails in the importing database:

SQL> create table dropit (c1 varchar2(8) default user);
create table dropit (c1 varchar2(8) default user)
                                            *
ERROR at line 1:
ORA-01401: inserted value too large for column

However, the following statement succeeds in the importing database:

SQL> create table dropit (c1 varchar2(30) default user);

Table created.

Solution and workaround:
========================

Steps:

a) Find out the table(s) and corresponding column(s) from the import.log which fail
   with these errors, e.g. "AA" is the table and "COL" the column as above.

b) Then generate the table creation scripts for these erroneous tables from the
   export dump using the SHOW=Y and ROWS=N options.

c) Modify the table creation script for the erroneous columns as below:

   change following statement(s):
 
   create table AA (... ..., col varchar2(x) default user, ....) ......;
   or
   alter table AA modify (col varchar2(x) default user)

   to

   create table AA (... ..., col varchar2(30) default user, ....) ......;
   or
   alter table AA modify (col varchar2(30) default user)

   OR change to:

   create table AA (col varchar2(8) default substr(user,1,8));
   or
   alter table AA modify (col varchar2(8) default substr(user,1,8));

d) Precreate the table(s) in the importing database with the above modified scripts.

e) Run the import again with IGNORE=Y.
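The cause behind this workaround is a simple length check: USER can return up to 30 characters, so a DEFAULT USER on a varchar2(8) column is not guaranteed to fit, while substr(user,1,8) always is. A minimal Python sketch of the same check (the column sizes and the sample username are illustrative assumptions, not values from the note):

```python
# Illustrative sketch: why DEFAULT USER fails on a small column.
# In Oracle 8.x a username (what USER returns) may be up to 30 characters.
MAX_USERNAME_LEN = 30

def fits(column_size: int, value: str) -> bool:
    """Mimic Oracle's length check: ORA-01401 is raised when the
    inserted value is longer than the column allows."""
    return len(value) <= column_size

username = "SOME_LONG_SCHEMA_OWNER_NAME"  # 27 characters, a made-up example

# DEFAULT USER on varchar2(8): the default value may not fit.
print(fits(8, username))        # False -> would raise ORA-01401

# Workaround 1: widen the column to varchar2(30).
print(fits(30, username))       # True

# Workaround 2: keep varchar2(8) but truncate, like substr(user,1,8).
print(fits(8, username[:8]))    # True
```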
   
Explanation
-----------

These are known bugs: Bug 2811170, Bug 979968, and Bug 487982.
The bug exists from 8.0.x (where x>=6) to 8.1.7.x.x.
It does not exist in Oracle9i.

As a result of this bug, the following DDL statements fail during
import on the affected database versions.

  create table AA (... ..., col varchar2(x) default user, ....) ......;
  or
  alter table AA modify (col varchar2(x) default user)
 
marper Commented:
You might also want to read this note:

IMP-00019 And ORA-01401 On Import To Database Using UTF8 Character Set

When attempting to import dump data that was created on a database
using a single-byte character set, such as WE8ISO8859P1, into a
database using a multi-byte character set, such as UTF-8,
the following errors appear:

 . importing <username> objects into <username>.
 . . importing table                "<table name>".
 IMP-00019: row rejected due to ORACLE error 1401.
 IMP-00003: ORACLE error 1401 encountered.
 ORA-01401: inserted value too large for column.
 Column 1 .          0 rows imported.
 Import terminated successfully with warnings..

The problem is caused by the character set conversion from
WE8ISO8859P1, which is a single-byte character set, to
UTF-8, which is a multi-byte character set that can use
up to 3 bytes per character.

The layout of ISO 8-bit character set is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Hex Code      Characters                   Comment
------------- ---------------------------- ------------------------------
0x00 to 0x1F  Control characters           Identical on ISO charactersets
0x20 to 0x7F  International characterset   Identical on ISO charactersets
0x80 to 0x9F  Not used.                    Identical on ISO charactersets
0xA0 to 0xFF  National characters          (ÖÄÜöäü ß ...)

ISO charactersets only differ in character codes 0xA0 to 0xFF.

Description
~~~~~~~~~~~
In UTF-8, the character codes 0x00 to 0x7F are unchanged, so each of
these characters still requires just a single byte of storage.

However, characters above 0x7F require more than a single byte for
storage. These are normally the characters that differ between the
various national character sets. For the German language, for
example, this includes characters like 'öäüß'.

So, if your tables include many national characters like the German umlaut
characters ('öäüß'), the target UTF-8 table columns may be
too small to hold the converted data.

Thus, if we assume that 20% of the character data is represented
by character values > 0x7F, the converted data will occupy
roughly 120% of its original physical storage.
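That arithmetic is easy to verify with any UTF-8 encoder. The sketch below uses a made-up sample string in which exactly 20% of the characters lie above 0x7F, and assumes Python's latin-1 codec as a stand-in for WE8ISO8859P1:

```python
# Storage comparison: single-byte charset vs UTF-8.
# "Grüne" is 5 characters, one of which ('ü') lies above 0x7F -> 20%.
text = "Grüne" * 200  # 1000 characters in total

latin1_bytes = len(text.encode("latin-1"))  # 1 byte per character
utf8_bytes = len(text.encode("utf-8"))      # 'ü' expands to 2 bytes

print(latin1_bytes)               # 1000
print(utf8_bytes)                 # 1200
print(utf8_bytes / latin1_bytes)  # 1.2 -> 120% of the original storage
```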

Example
~~~~~~~

The following example demonstrates this:

1. Displaying WE8ISO8859P1 characterset data:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> select dump('äöü',16) from dual;

DUMP('ÄÖÜ',16)
----------------------
Typ=96 Len=3: e4,f6,fc

--> The national characters are stored above 0x7F.
--> Each character requires 1 byte of storage.

2. Displaying UTF-8 characterset conversion (WE8ISO8859P1 --> UTF-8):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> select dump(convert ('äöü','UTF8','WE8ISO8859P1'),16) from dual;

DUMP(CONVERT('ÄÖÜ','UTF8','WE8
------------------------------
Typ=1 Len=6: c3,a4,c3,b6,c3,bc

--> The UTF-8 characterset requires 2 bytes for each national character.
--> This increases the storage each row consumes!



Fix:

- Analyze which tables/rows could be affected by national characters before
  running the export.
- Increase the size of the affected columns.
- Export the tables.
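The analysis step can be approximated outside the database: measure the maximum byte length of each column's data after UTF-8 conversion and size the target column to at least that many bytes. A minimal Python sketch (the sample column values are made up for illustration):

```python
def utf8_width_needed(values):
    """Maximum byte length the values need after UTF-8 conversion."""
    return max(len(v.encode("utf-8")) for v in values)

# Made-up sample column data containing German umlauts.
column_values = ["Müller", "Schäfer", "Smith", "Großmann"]

# All values fit varchar2(8) as characters, but not as UTF-8 bytes.
print(max(len(v) for v in column_values))  # 8 characters
print(utf8_width_needed(column_values))    # 9 bytes -> the column must be widened
```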

 
Tolomir (Administrator, Author) Commented:
Thank you marper for your help.

Are there any limits I have to take into account when resizing columns in an Oracle 8.1.7 database?

Many columns are varchar2(255), so I guess varchar2(300) should be fine, but can I do that without affecting database integrity (it's a live system)?


Plan B would be to leave the system as is and simply copy those 2 affected big tables:

set autocommit 1
set arraysize 5000
copy from user/password@sourceDB -
to user/password@targetDB  -
replace offendingtable using select * from offendingtable;
exit;  

in sqlplus....

I will try that tomorrow then. I hope the conversion is done automatically; any ideas?

Tolomir
 
marper Commented:
The character conversion should be OK; it's just the storage space. As the note above said, you will need about 20% more space for such fields.
To be on the safe side, just copy or export the tables affected by this character conversion and check the results.
Your plan B sounds like a good idea.
If that goes fine, you should have no problem exporting and importing the rest.
To import or export just specific tables, use the TABLES parameter (list the table names, comma-separated) in the imp/exp utilities.

Example:

exp schema_owner/schema_owner_password@YOUR_SID PARFILE=par_file.dat STATISTICS=NONE

Par_file.dat:

FULL=N
FILE=DATA.DMP
LOG=DATA.LOG
rows=Y
buffer=33554432
TABLES=table1,table2,...
 
Tolomir (Administrator, Author) Commented:
OK, resizing the tables went well, and exp/imp went fine.

Tolomir
 
monu Commented:
Nonsense answer.
