We have some string data which gets inserted into another table as VARCHAR2(24). This is RPAD'ed to 24 characters and concatenated with another field, defined as VARCHAR2(5), and an attempt is then made to insert the result into a final table column defined as VARCHAR2(29). All is well until multi-byte characters such as '£' enter the equation, whereupon the statement fails with "ORA-01401: inserted value too large for column".
I'd quite like to leave the program alone and ask our DBA to set up the NLS (?) environment to treat £'s as single-byte characters. I've tried setting NLS_LANG on the command line in UNIX but this seems to have no effect.
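As a rough sketch of the failing scenario described above (the table and column names here are invented for illustration; they are not from the actual program):
-- Hypothetical tables mirroring the question's description
CREATE TABLE src_data  (txt VARCHAR2(24));
CREATE TABLE final_tab (combined VARCHAR2(29));

INSERT INTO src_data VALUES ('£ STERLING');

-- RPAD pads to 24 characters, but VARCHAR2(29) defaults to 29 bytes:
-- on a UTF8 database the '£' occupies 2 bytes, so the padded string is
-- 25 bytes and the concatenation 30 bytes, one too many.
INSERT INTO final_tab
SELECT RPAD(txt, 24) || 'ABCDE' FROM src_data;
-- ORA-01401: inserted value too large for column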
pratikroyCommented:
Try inserting a '£' into a VARCHAR2(1) field as a test:
SQL> create table tab_char (fld1 varchar2(1));
Table created.
SQL> insert into tab_char values ('£');
1 row created.
SQL> select * from tab_char;
F
-
£
What problem do you get when you repeat the above test? Do you get ORA-01401: inserted value too large for column?
skuriakoseCommented:
Try setting the NLS_CHARACTERSET to some multibyte charset (e.g. UTF8). By default, NLS_CHARACTERSET is single-byte and NLS_NCHAR_CHARACTERSET is multibyte. Find out using:
select * from V$nls_parameters;
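Note that most rows in v$nls_parameters reflect the current session, whereas the character sets are fixed database-wide; they can also be read straight from the NLS_DATABASE_PARAMETERS dictionary view:
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');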
CuthbertDibbleGrubAuthor Commented:
pratikroy: indeed I do get the ORACLE error when inserting with both your and my examples. You must have a different NLS setup from mine.
Skuriakose, I will interrogate my V$NLS_PARAMETERS values tomorrow - a useful view - many thanks; hopefully it will lead to a solution!
CuthbertDibbleGrubAuthor Commented:
Here are our V$NLS_PARAMETERS values:
NLS_LANGUAGE: AMERICAN
NLS_TERRITORY: AMERICA
NLS_CURRENCY: $
NLS_ISO_CURRENCY: AMERICA
NLS_NUMERIC_CHARACTERS: .,
NLS_CALENDAR: GREGORIAN
NLS_DATE_FORMAT: DD-MON-YY
NLS_DATE_LANGUAGE: AMERICAN
NLS_CHARACTERSET: UTF8
NLS_SORT: BINARY
NLS_TIME_FORMAT: HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT: DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT: HH.MI.SSXFF AM TZH:TZM
NLS_TIMESTAMP_TZ_FORMAT: DD-MON-RR HH.MI.SSXFF AM TZH:TZM
NLS_DUAL_CURRENCY: $
NLS_NCHAR_CHARACTERSET: UTF8
NLS_COMP: BINARY
We've had problems changing these as it affects Pro*COBOL / Micro Focus COBOL (which will hopefully be surplus to requirements soon!).
If you create your table specifying lengths in characters instead of bytes (the default), you can get around some of these issues (there are other problems when dealing with larger columns), and it shouldn't affect any of your legacy apps the way changing the NLS parameters would.
-- The BYTE parameter is not needed, as it is the default, but included for clarity...
CREATE TABLE t1 (byte_varchar VARCHAR2(1 BYTE), char_varchar VARCHAR2(1 CHAR));
-- insert the euro sign into the byte-sized column... This one will fail with your error
INSERT INTO t1 VALUES ('€', NULL);
-- insert the euro sign into the char-sized column... This one will succeed
INSERT INTO t1 VALUES (NULL, '€');
The above will only work in 9i, though. Going by your parameter list, you don't have NLS_LENGTH_SEMANTICS, which would indicate that you're not on 9i yet, are you?
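For when 9i does arrive, a small sketch of that parameter in action (shown session-level here; it can also be set in the init file):
-- 9i onwards: make CHAR the default length semantics for this session
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
-- A plain VARCHAR2(1) now behaves like VARCHAR2(1 CHAR)...
CREATE TABLE t2 (c VARCHAR2(1));
-- ...so the 2-byte '£' fits, even on a UTF8 database
INSERT INTO t2 VALUES ('£');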
CuthbertDibbleGrubAuthor Commented:
Thanks for the great info to aid me on my chosen path. I'm going for an NLS solution, as I'm not going to change all the potentially affected scripts and tables once 9i arrives. I'll use the provided parameters as a guide and test this out on our replica server before taking the plunge! Points awarded as I'm convinced that I'll succeed eventually, and am adamant that our UK-based RDBMS should be rightfully configured to use GBP by default, instead of its out-of-the-box state!
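If the GBP default is part of the goal, NLS_TERRITORY is the lever; a small sketch, assuming a session-level change is acceptable:
ALTER SESSION SET NLS_TERRITORY = 'UNITED KINGDOM';
-- 'L' in the format mask picks up the territory's local currency symbol
SELECT TO_CHAR(1234.56, 'L9,999.99') AS amount FROM dual;
-- £1,234.56 (with leading padding)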
pratikroy, the reason it all works fine for you is that you are using WE8ISO8859P1 as your character set, whereas CuthbertDibbleGrub has UTF8. The reason I used the euro character is that I have recently had issues with it (on a UTF8 database it is 3 bytes), but I ran all of the tests on both a UTF8 and a WE8ISO8859P1 database.
To find the length in bytes of a particular character, run: SELECT LENGTHB('£') FROM dual;
On a UTF8 database, you will get
LENGTHB('£')
------------
2
And on a WE8ISO8859P1 database, you will get:
LENGTHB('£')
------------
1
So, it will work fine for you on a WE8ISO8859P1 database, but not on a UTF8...
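Putting LENGTH (characters) next to LENGTHB (bytes) makes the distinction explicit, and also shows the 3-byte euro mentioned earlier; on a UTF8 database:
SELECT LENGTH('£') chars, LENGTHB('£') pound_bytes, LENGTHB('€') euro_bytes
  FROM dual;

     CHARS POUND_BYTES EURO_BYTES
---------- ----------- ----------
         1           2          3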