ORA-12899: value too large for column "TGT_SCHEMA"."CUSTOMER_TAB"."ZIPCODE"

When trying to load data from source to target using the following SQL, I get this error:

insert into tgt_schema.customer_tab
select *
from src_schema.customer_tab;


ERROR at line 3:
ORA-12899: value too large for column "TGT_SCHEMA"."CUSTOMER_TAB"."ZIPCODE"
(actual: 16, maximum: 10)

I altered the column data type in the target schema, changing the length of zipcode to varchar2(32), executed the SQL above again, and the query ran fine.

Now I have two questions:

1. How do I find the record in the source that exceeded varchar2(10)?
2. How did the source contain this value when the source column is also defined as varchar2(10)?

Please advise, and let me know if you need any other information.

Thanks
gs79 asked:
 
PortletPaul (freelancer) commented:
Try lengthb:

select * from "SRC_SCHEMA"."CUSTOMER_TAB" where lengthb(zipcode) >= 16;

I've used 16 because of the error text: actual: 16, maximum: 10.
If 16 produces no results, drop the number back a bit.
 
sdstuber commented:
select * from "TGT_SCHEMA"."CUSTOMER_TAB"."ZIPCODE" where length(zipcode) > 10
 
gs79 (Author) commented:
Thanks sdstuber

I think you meant src_schema instead of tgt_schema:
select * from "SRC_SCHEMA"."CUSTOMER_TAB" where length(zipcode) > 10;

My other question is: how did this record exist in the source when we have the same constraint on the source table too? Please let me know.

Thanks..
 
gs79 (Author) commented:
Also, I didn't see any record in the source where length > 10:

select count(*)
from "SRC_SCHEMA"."CUSTOMER_TAB" where length(zipcode) > 10;

returned 0

This is strange..

Thanks
 
sdstuber commented:
Actually, I did mean the target schema.

The only way it could exist is one of the following:

1 - You're mistaken about the source being constrained to 10 characters.

2 - There is a trigger on the target table, and the error arises from larger values being inserted than you intended.

3 - Characters vs. bytes: if the source column is defined as 10 characters, where the characters are multi-byte, and the target column is defined as 10 bytes, then an 8-character value in the source can be too big for the target (see the sketch below).
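A quick way to test option 3 is to compare character length against byte length; any row where the two differ contains multi-byte characters. A minimal sketch, reusing the schema and table names from this thread:

select zipcode,
       length(zipcode)  length_in_char,   -- length in characters
       lengthb(zipcode) length_in_bytes   -- length in bytes
from   src_schema.customer_tab
where  lengthb(zipcode) > length(zipcode);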
 
gs79 (Author) commented:
This is the column definition in source and target after running DESCRIBE. I think we can rule out option 2, since I verified there is no trigger.

The source and target were both defined as:

VARCHAR2(10 BYTE)

I tried your query above on TGT_SCHEMA as well, and no rows were returned:

select count(*)
from "TGT_SCHEMA"."CUSTOMER_TAB" where length(zipcode) > 10;

Then is it option 3? With the column defined as VARCHAR2(10 BYTE) in both places, don't the source and target have the same definition?
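For what it's worth, DESCRIBE does not always make the length semantics obvious; the data dictionary does. A sketch using the standard ALL_TAB_COLUMNS view (run it on both databases):

select owner, table_name, column_name,
       data_length,   -- maximum length in bytes
       char_length,   -- maximum length in characters
       char_used      -- 'B' = byte semantics, 'C' = character semantics
from   all_tab_columns
where  table_name  = 'CUSTOMER_TAB'
and    column_name = 'ZIPCODE';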

I am still not able to find the culprit record.

Please advise..

Thanks
 
gs79 (Author) commented:
Thanks, PortletPaul..

I was able to find it using lengthb..

This is how it looks in source vs. target:

select zipcode, lengthb(zipcode) length_in_bytes, length(zipcode) length_in_char
from "SRC_SCHEMA"."CUSTOMER_TAB" where lengthb(zipcode) >= 10;

Source:

ZIPCODE      LENGTH_IN_BYTES      LENGTH_IN_CHAR
ïïïïïïïï              8                   8

Target (after I modified the definition from VARCHAR2(10 BYTE) to VARCHAR2(32 BYTE)):

ZIPCODE      LENGTH_IN_BYTES      LENGTH_IN_CHAR
ïïïïïïïï             16                   8

The same 8-character zipcode takes 16 bytes in the target. The only difference is that the source is 10g and the target is 11g.

The character set in the target is UTF8; I have not been able to find that information for the source yet.
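For reference, the database character set can be read straight from the data dictionary; a minimal sketch, assuming you can query NLS_DATABASE_PARAMETERS on the source:

select value
from   nls_database_parameters
where  parameter = 'NLS_CHARACTERSET';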

Could the difference in character sets be causing this problem? We have modeled the target tables the same as the source tables; if the character sets differ, we probably have to change the definitions.

Please advise..

Thanks,
 
PortletPaul (freelancer) commented:
>> Will it be the difference in characterset that is causing this problem
Probably; it certainly looks like there's a difference.

What is certain is that extended characters in UTF8 require more than one byte.

The bigger question is: should a zipcode accept UTF8 extended characters?
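If the answer is that only plain ASCII zip codes should be allowed, the offending rows can be flagged directly. A sketch: ASCIISTR rewrites any non-ASCII character as a \XXXX escape, so rows where the result differs from the original contain extended characters.

select zipcode, asciistr(zipcode) as escaped
from   src_schema.customer_tab
where  zipcode <> asciistr(zipcode);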
 
sdstuber commented:
If the length in bytes is 16, then the field can't be defined as varchar2(10 byte).
 
PortletPaul (freelancer) commented:
If the data is accepted at the source as varchar2(10) but expands due to UTF8 at the target, then (and this may be an idiot suggestion giving rise to mirth) could, or should, the NLS_LANG be changed in the target at session level during the import?
 
sdstuber commented:
Oh sorry, I was looking at the output above backwards: 8 bytes in the source, 16 in the target.

Never mind.

Yes, changing character sets could cause the problem seen here.
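One common remedy, assuming the intent is for the column to hold 10 characters regardless of how many bytes each character needs in the target character set, is to switch the target column to character-length semantics. A sketch, not a definitive fix; check it against your actual length requirements:

-- with CHAR semantics, 10 means 10 characters, not 10 bytes
alter table tgt_schema.customer_tab modify (zipcode varchar2(10 char));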
 
PortletPaul (freelancer) commented:
This might help identify inconsistencies; run it in both source and target:

select
  d.parameter parameter
, d.value value
, i.value instance_value
, s.value session_value
from nls_database_parameters d
left join nls_instance_parameters i on d.parameter = i.parameter
left join nls_session_parameters s on d.parameter = s.parameter;