• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1031

Concat operation: How to optimize

I want to know how to optimize a concat operation for the CLOB datatype.

We have a package which loops through 100,000 records (or more), does a lot of validations, and concatenates the result with another CLOB element.

Inside the CURSOR loop, we concatenate the previous elements with the current element.

When I executed the package without the concatenation, it completed within a minute, but with the concat operation it took 30 minutes.

This concatenated string (CLOB) cannot be converted into a temp table or an associative array, as it is returned as a CLOB to the front end.

Is there any way to optimize the concatenation that builds the CLOB?
Asked by: ajexpert
2 Solutions
 
slightwv (Netminder) Commented:
Have you tried dbms_lob.writeappend (typed in from memory, so I might be a little off on the name)?
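Something along these lines, roughly (a sketch only, not tested; the table and column names are placeholders):

DECLARE
   lv_col_sids   CLOB;
BEGIN
   -- build the result in a temporary CLOB and append to it in place,
   -- instead of lv_col_sids := lv_col_sids || ..., which re-copies the CLOB on every pass
   dbms_lob.createtemporary(lv_col_sids, TRUE);

   FOR c1_rec IN (SELECT t1_sid FROM your_table)   -- placeholder table/column
   LOOP
      dbms_lob.writeappend(lv_col_sids,
                           LENGTH(',' || c1_rec.t1_sid),
                           ',' || c1_rec.t1_sid);
   END LOOP;

   -- strip the leading comma added by the first append
   lv_col_sids := LTRIM(lv_col_sids, ',');
END;
/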

Can you provide a test case where we can play with different scenarios?
 
ajexpert (Author) Commented:
One scenario (mocked up) that we use is shown below. If I remove the concatenation operation, it performs much faster.
DECLARE
   lv_col_sids   CLOB;
   lv_sid        NUMBER;

   CURSOR C1
   IS
      SELECT   t1_sid, t2_name, t3_value FROM tableT;
BEGIN
   FOR C1_REC IN C1                                 -- approx 100,000  or more
   LOOP
      -- lot of checks -- IF ... ELSE

      -- REF CURSOR's ---
      -- open ref cursor
      lv_col_sids := lv_col_sids || ',' || c1_rec.t1_sid;  -- ** THIS CONCATENATION IS THE BOTTLENECK
      -- close ref cursor
      
   END LOOP;


   lv_col_sids := LTRIM (lv_col_sids, ',');
END;


 
htonkov Commented:
I agree with slightwv, but you might still try to modify the front end to accept row-piped CLOBs to ease the Oracle load; see the sketch below.
It depends, of course, on what exactly you are doing with the returned CLOB... and on the average size of the returned CLOB...
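Something like a pipelined function, for example, could return the ids row by row instead of one big CLOB (a rough sketch only; the type, function, table, and column names are all placeholders):

CREATE OR REPLACE TYPE t_sid_tab AS TABLE OF VARCHAR2(100);
/

CREATE OR REPLACE FUNCTION get_sids RETURN t_sid_tab PIPELINED IS
BEGIN
   FOR c1_rec IN (SELECT t1_sid FROM your_table)   -- placeholder table/column
   LOOP
      -- the validations / IF ... ELSE checks from the original loop would go here
      PIPE ROW (TO_CHAR(c1_rec.t1_sid));
   END LOOP;
   RETURN;
END;
/

-- the front end would then fetch rows instead of one CLOB, e.g.:
-- SELECT column_value FROM TABLE(get_sids);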

Regards
 
slightwv (Netminder) Commented:
I have good news, I hope: I have two options that CRUSH your concat on my little XE database. I only used 10,000 rows, but it should scale to your 100,000+ rows.

In my testing, DBMS_LOB wins, though only slightly over the XML trick.

Hopefully your testing will show the same results.
drop table tab1 purge;
create table tab1(t1_sid char(10));

begin
	for i in 1..10000 loop
		insert into tab1 values('1234567890');
	end loop;
end;
/
commit;

DECLARE
	lv_col_sids	CLOB;
	start_time	timestamp;
	end_time	timestamp;

	CURSOR C1
	IS
	  SELECT t1_sid FROM tab1;
BEGIN
	-- Original string concat
	start_time := systimestamp;
	dbms_lob.createtemporary(lv_col_sids,true);
	dbms_lob.open(lv_col_sids, dbms_lob.lob_readwrite);
	FOR C1_REC IN C1 -- approx 100,000  or more
	LOOP
	  lv_col_sids := lv_col_sids || ',' || c1_rec.t1_sid;  -- ** THIS CONCATENATION IS THE BOTTLENECK
	END LOOP;
	lv_col_sids := LTRIM (lv_col_sids, ',');
	end_time := systimestamp;
	dbms_output.put_line('Time: ' || to_char(end_time-start_time));
	dbms_lob.freetemporary(lv_col_sids);


	-- DBMS_LOB.WRITEAPPEND
	start_time := systimestamp;
	dbms_lob.createtemporary(lv_col_sids,true);
	dbms_lob.open(lv_col_sids, dbms_lob.lob_readwrite);
	FOR C1_REC IN C1 -- approx 100,000  or more
	LOOP
	  -- include the ',' separator so the output matches the concat version above
	  dbms_lob.writeappend(lv_col_sids, LENGTH(',' || c1_rec.t1_sid), ',' || c1_rec.t1_sid);
	END LOOP;
	lv_col_sids := LTRIM (lv_col_sids, ',');
	end_time := systimestamp;
	dbms_output.put_line('Time: ' || to_char(end_time-start_time));
	dbms_lob.freetemporary(lv_col_sids);

	-- XMLAGG trick
	start_time := systimestamp;
	SELECT
       	RTRIM(
           	EXTRACT(XMLAGG(XMLELEMENT("s", t1_sid || ',')), '/s/text()').getclobval(),
           	','
       	) into lv_col_sids
	FROM tab1;
	end_time := systimestamp;
	dbms_output.put_line('Time: ' || to_char(end_time-start_time));
	dbms_lob.freetemporary(lv_col_sids);
END;
/


 
ajexpert (Author) Commented:
@slightwv:

I tried the XMLAGG trick; it is much faster than the string concat operation. However, I hit another roadblock when I try to save the CLOB into a table.

This is the error I get when I try to save into a table having a CLOB column:
ORA-22993: specified input amount is greater than actual source amount
-- table for storing clob data
create table tmp_store_clob(str_clob  CLOB);
--------------------------------------------------

DECLARE
lv_col_sids CLOB;
BEGIN
SELECT
       	RTRIM(
           	EXTRACT(XMLAGG(XMLELEMENT("s", t1_sid || ',')), '/s/text()').getclobval(),
           	','
       	) into lv_col_sids
	FROM tab1;

  -- this gives error ORA-22993
  INSERT INTO tmp_store_clob(lv_col_sids);

  END;


 
slightwv (Netminder) Commented:
As posted, the insert causes a syntax error.  I assume this is because you posted a scrubbed version of the code?


The code below runs for me as-is and produces:

SQL>select length(str_clob) from tmp_store_clob;

LENGTH(STR_CLOB)
----------------
          109999

drop table tab1 purge;
create table tab1(t1_sid char(10));

begin
	for i in 1..10000 loop
		insert into tab1 values('1234567890');
	end loop;
end;
/
commit;

-- table for storing clob data
drop table tmp_store_clob purge;
create table tmp_store_clob(str_clob  CLOB);
--------------------------------------------------

DECLARE
lv_col_sids CLOB;
BEGIN
SELECT
       	RTRIM(
           	EXTRACT(XMLAGG(XMLELEMENT("s", t1_sid || ',')), '/s/text()').getclobval(),
           	','
       	) into lv_col_sids
	FROM tab1;

  -- this gives error ORA-22993
  INSERT INTO tmp_store_clob values(lv_col_sids);

END;
/


select length(str_clob) from tmp_store_clob;

LENGTH(STR_CLOB)
----------------
          109999


 
ajexpert (Author) Commented:
I checked the length of lv_col_sids built from the tab1 table; after the XMLAGG it is 396988.

Please check the example; it is a mockup and an excerpt of my PL/SQL.



drop table tab1 purge;

create table tab1(t1_sid NUMBER);


begin
    for i in 1..30000 loop
        insert into tab1 values(123456789012345);
    end loop;
end;
/
commit;




-- table for storing clob data
drop table tmp_store_clob purge;

create table tmp_store_clob(str_clob  CLOB);



DECLARE
lv_col_sids CLOB;
BEGIN

EXECUTE IMMEDIATE ' TRUNCATE TABLE tmp_store_clob';

SELECT
           RTRIM(
               EXTRACT(XMLAGG(XMLELEMENT("s", t1_sid || ',')), '/s/text()').getclobval(),
               ','
           ) into lv_col_sids
    FROM tab1;

  -- this gives error ORA-22993
  INSERT INTO tmp_store_clob values(lv_col_sids);

END;
/


 
slightwv (Netminder) Commented:
Cool. Not only can I reproduce that error, but whenever I muck with it I get a wonderful ORA-600 dump.

I'm thinking it might be an XML bug in 10.2.0.3. I can run what you posted in 11.2.0.2.

Go with the writeappend code; it was slightly faster anyway. I ran the code below using 10.2.0.3.

I'll see if I can work around the XML code. I'm curious now....
drop table tab1 purge;

create table tab1(t1_sid NUMBER);

begin
    for i in 1..30000 loop
        insert into tab1 values(123456789012345);
    end loop;
end;
/
commit;



-- table for storing clob data
drop table tmp_store_clob purge;

create table tmp_store_clob(str_clob  CLOB);


TRUNCATE TABLE tmp_store_clob;

DECLARE
lv_col_sids CLOB;
BEGIN

	dbms_lob.createtemporary(lv_col_sids,true);
	dbms_lob.open(lv_col_sids, dbms_lob.lob_readwrite);
	FOR C1_REC IN (select t1_sid from tab1) -- approx 100,000  or more
	LOOP
	  -- include the ',' separator so the output is comma-separated like the original concat;
	  -- the LTRIM below then strips the leading comma added by the first append
	  dbms_lob.writeappend(lv_col_sids, LENGTH(',' || c1_rec.t1_sid), ',' || c1_rec.t1_sid);
	END LOOP;
	lv_col_sids := LTRIM (lv_col_sids, ',');

	INSERT INTO tmp_store_clob values(lv_col_sids);

	dbms_lob.freetemporary(lv_col_sids);

END;
/


select length(str_clob) from tmp_store_clob;


 
slightwv (Netminder) Commented:
Found the problem. It is the RTRIM call. There is an undocumented bug in at least 10.2.0.3 that confuses the CLOB when you call RTRIM.

Any workaround I can find would be more involved than just using the writeappend code.

The XML code runs fine in 11.2.0.2, so it has been fixed somewhere. I'm not going to file the bug report since I really have no need to have it fixed.

If you want to use the XML trick and need the bug fixed, just upload your test case above and tell them it works if you remove the RTRIM.
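For reference, one direction that might work around it (a rough sketch only, not tested here against the 10.2.0.3 bug) is to drop the RTRIM from the SQL and trim the trailing comma with DBMS_LOB.TRIM on the PL/SQL side:

DECLARE
   lv_col_sids CLOB;
BEGIN
   -- build the CLOB without RTRIM; each element contributes a trailing comma
   SELECT EXTRACT(XMLAGG(XMLELEMENT("s", t1_sid || ',')), '/s/text()').getclobval()
     INTO lv_col_sids
     FROM tab1;

   -- remove the trailing comma in PL/SQL instead of RTRIM in SQL
   IF lv_col_sids IS NOT NULL AND dbms_lob.getlength(lv_col_sids) > 0 THEN
      dbms_lob.trim(lv_col_sids, dbms_lob.getlength(lv_col_sids) - 1);
   END IF;

   INSERT INTO tmp_store_clob VALUES (lv_col_sids);
END;
/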

Another neat 'feature' on my test database: once the insert fails, it is the TRUNCATE TABLE that causes the dump. I had to drop and recreate the table to clear it up.

If you need anything else on this, just ask.
 
ajexpert (Author) Commented:
Thanks slightwv. I really appreciate your efforts.

I am looking into this, as well as other issues that have gained higher priority today; I will update you within this week.

Thanks a lot once again, slightwv.
 
slightwv (Netminder) Commented:
No problem.  Take your time.  You know how to get me if you need me!
 
ajexpert (Author) Commented:
Thanks Slightwv!!
