gs79

asked on

ORA-12899: value too large for column "TGT_SCHEMA"."CUSTOMER_TAB"."ZIPCODE"

When trying to load data from source to target using the following SQL, I get the following error:

insert into tgt_schema.customer_tab
select *
from src_schema.customer_tab;


ERROR at line 3:
ORA-12899: value too large for column "TGT_SCHEMA"."CUSTOMER_TAB"."ZIPCODE"
(actual: 16, maximum: 10)

I altered the column data type in the target schema, changing zipcode to varchar2(32), and re-ran the SQL above; this time the query ran fine.
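For reference, the change was along these lines (a sketch only; the exact DDL isn't shown in the thread, and the table/column names are taken from the error message):

alter table tgt_schema.customer_tab modify (zipcode varchar2(32));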

Now I have two questions:

1. How do I find the record in the source that exceeded varchar2(10)?
2. How did the source contain this value when the source column is also varchar2(10)?

Please advise, and let me know if you need any other information.

Thanks
Sean Stuber

select * from "TGT_SCHEMA"."CUSTOMER_TAB"."ZIPCODE" where length(zipcode) > 10
gs79

ASKER

Thanks sdstuber

I think you meant src_schema instead of tgt_schema:

select * from "SRC_SCHEMA"."CUSTOMER_TAB" where length(zipcode) > 10

My other question is how this record could exist in the source, since we have the same constraint on the source table too. Please let me know.

Thanks..
gs79

ASKER

Also, I didn't see any record in the source where the length exceeds 10:

select count(*)
from "SRC_SCHEMA"."CUSTOMER_TAB" where length(zipcode) > 10

returned 0.

This is strange..
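In hindsight (see the resolution below), length() counts characters while lengthb() counts bytes, so a value that expands past 10 bytes can still pass a length() filter. A byte-based check (most telling on the target side, where the expansion happens) would look something like:

select zipcode, length(zipcode) len_chars, lengthb(zipcode) len_bytes
from tgt_schema.customer_tab
where lengthb(zipcode) > 10;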

Thanks
SOLUTION
Sean Stuber
gs79

ASKER

This is the column definition in source and target after a describe. I think we can rule out 2, since I ensured that there is no trigger.

The source and target were both defined as:

 VARCHAR2 (10 Byte)

I tried your above query on TGT_SCHEMA as well and no rows were returned:

select count(*)
from "TGT_SCHEMA"."CUSTOMER_TAB" where length(zipcode) > 10

Then is it option 3? With the column defined as VARCHAR2(10 byte) in both places, don't the source and target have the same definition?

I am still not able to find the culprit record.

Please advise.

Thanks
ASKER CERTIFIED SOLUTION
PortletPaul
gs79

ASKER

Thanks PortletPaul..

I was able to find this out using lengthb.

This is how it looks in source vs target (after I modified the target definition from varchar2(10 byte) to varchar2(32)):

select zipcode, lengthb(zipcode) length_in_bytes, length(zipcode) length_in_char
from "SRC_SCHEMA"."CUSTOMER_TAB" where lengthb(zipcode) >= 10

Source:

ZIPCODE      LENGTH_IN_BYTES      LENGTH_IN_CHAR
ïïïïïïïï     8                    8

Target:

ZIPCODE      LENGTH_IN_BYTES      LENGTH_IN_CHAR
ïïïïïïïï     16                   8

The same 8-character zipcode takes 16 bytes in the target. The only difference is that the source is 10g and the target is 11g.
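To pin down exactly which bytes are stored, Oracle's dump() function can show the byte values and the character set of a column; a sketch (not something run in the thread):

select zipcode, dump(zipcode, 1016) byte_dump
from tgt_schema.customer_tab
where lengthb(zipcode) > length(zipcode);

Format 1016 prints the bytes in hex along with the character set name, and the lengthb > length filter keeps only rows containing multi-byte characters.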

The character set in the target is UTF8; I have not been able to find that information for the source yet.

Could the difference in character sets be causing this problem? We have modeled the target tables on the source tables; if the character sets differ, we probably have to change the definitions.

Please advise.

Thanks,
>>Could the difference in character sets be causing this problem
probably; it certainly looks like there's a difference

what is certain is that extended characters of UTF8 require more than 1 byte
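a quick way to see this from dual (assuming a UTF8 database character set, where an accented character such as ï occupies 2 bytes):

select length('ï') len_chars, lengthb('ï') len_bytes from dual;

expected output: len_chars = 1, len_bytes = 2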

the bigger question is: Should a zipcode accept UTF8 extended characters?
If the length in bytes is 16, then the field can't be defined as varchar2(10 byte)
if the data is accepted at the source as varchar2(10) but expands due to UTF8 at the target, then (and this may be an idiot suggestion giving rise to mirth) could (should?) NLS_LANG be changed in the target at session level during the import?
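if the column really should hold 10 characters no matter how many bytes each one takes, character-length semantics are worth a look; a sketch only, not something tried in this thread:

alter table tgt_schema.customer_tab modify (zipcode varchar2(10 char));

with varchar2(10 char) the limit is counted in characters rather than bytes, so the same 8-character zipcode fits even when UTF8 doubles its byte length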
oh sorry, I was looking at the output above backwards: 8 on target, 16 on source.

Nevermind

yes, changing character sets could cause the problem seen here
this might help identify inconsistencies; run it in both source & target:

select
  d.parameter
, d.value database_value
, i.value instance_value
, s.value session_value
from nls_database_parameters d
left join nls_instance_parameters i on d.parameter = i.parameter
left join nls_session_parameters s on d.parameter = s.parameter;
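and to pull out just the character set values that matter here, a direct check against the same view (assuming it is accessible in both databases):

select parameter, value
from nls_database_parameters
where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');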