
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 354

Slow response of Oracle query using functions

Hi experts, I have the following query:
SELECT to_char(xa_time_cnv.utc_to_loc(UTCTIME),'DD-MM-YYYY HH24:MI:SS'),
       ROUND(VALOR_INST,2),
       ge_pkt_conv_funcs.f_convert_tlq(TLQ_INST),
       ROUND(VALOR_PROM,2),
       ge_pkt_conv_funcs.f_convert_tlq(TLQ_PROM),
       ROUND(VALOR_MAX,2),
       ge_pkt_conv_funcs.f_convert_tlq(TLQ_MAX),
       to_char(xa_time_cnv.utc_to_loc(UTCTIME_MAX),'DD-MM-YYYY HH24:MI:SS'),
       ROUND(VALOR_MIN,2),
       ge_pkt_conv_funcs.f_convert_tlq(TLQ_MIN),
       to_char(xa_time_cnv.utc_to_loc(UTCTIME_MIN),'DD-MM-YYYY HH24:MI:SS')
  FROM a_5min_033
 WHERE utctime >= xa_time_cnv.loc_to_utc(TO_DATE('27/06/2014 00:00:00','DD-MM-YYYY HH24:MI:SS'),null)
   AND utctime <= xa_time_cnv.loc_to_utc(TO_DATE('28/06/2014 00:00:00','DD-MM-YYYY HH24:MI:SS') + 8/24,null)
   AND POINTNUMBER = 330000
 ORDER BY utctime;


The first time it runs, it returns the records after:
Elapsed: 00:00:30.87


Then, on a second run (query blocks in cache):
Elapsed: 00:00:01.17


I tried adding a GROUP BY, but it made no difference:
GROUP BY utctime, valor_inst, tlq_inst, valor_prom, tlq_prom, valor_max, tlq_max, utctime_max, valor_min, tlq_min, utctime_min


a_5min_033 has approximately 3 million rows.

I would like to improve the first-run response time.

Could you advise me on how to get better performance?

In the attachments, I include the scripts that create the table and its indexes.
Thank you in advance.
Regards.
indexes.sql
INSERTS.sql
table.sql
Asked by: carlino70
3 Solutions
 
slightwv (Netminder) Commented:
The second is faster because the blocks are cached in memory from the first run.

It took 30 seconds for the first run.  What is the tuning goal?  Remember, 'faster' is not a goal.  If you are after 1 second for any run, then stop now; that likely isn't possible.

What does the explain plan show when executed against your database?

What does it show if you just query the columns without the calls to ge_pkt_conv_funcs.f_convert_tlq?
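
For reference, that stripped-down comparison would look like this (the f_convert_tlq calls replaced by the raw TLQ columns, everything else unchanged):

SELECT to_char(xa_time_cnv.utc_to_loc(UTCTIME),'DD-MM-YYYY HH24:MI:SS'),
       ROUND(VALOR_INST,2),
       TLQ_INST,
       ROUND(VALOR_PROM,2),
       TLQ_PROM,
       ROUND(VALOR_MAX,2),
       TLQ_MAX,
       to_char(xa_time_cnv.utc_to_loc(UTCTIME_MAX),'DD-MM-YYYY HH24:MI:SS'),
       ROUND(VALOR_MIN,2),
       TLQ_MIN,
       to_char(xa_time_cnv.utc_to_loc(UTCTIME_MIN),'DD-MM-YYYY HH24:MI:SS')
  FROM a_5min_033
 WHERE utctime >= xa_time_cnv.loc_to_utc(TO_DATE('27/06/2014 00:00:00','DD-MM-YYYY HH24:MI:SS'),null)
   AND utctime <= xa_time_cnv.loc_to_utc(TO_DATE('28/06/2014 00:00:00','DD-MM-YYYY HH24:MI:SS') + 8/24,null)
   AND POINTNUMBER = 330000
 ORDER BY utctime;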
 
carlino70 (Author) Commented:
The execution plan says:
Plan
SELECT STATEMENT  ALL_ROWSCost: 62  Bytes: 2,744  Cardinality: 49  				
	4 SORT GROUP BY  Cost: 62  Bytes: 2,744  Cardinality: 49  			
		3 FILTER  		
			2 TABLE ACCESS BY INDEX ROWID TABLE XAJTDB.A_5MIN_033 Cost: 61  Bytes: 2,744  Cardinality: 49  	
				1 INDEX RANGE SCAN INDEX (UNIQUE) XAJTDB.A_5MIN_033_PK Cost: 3  Cardinality: 89  



"ge_pkt_conv_funcs.f_convert_tlq"
"xa_time_cnv.utc_to_loc"
Without those functions, the response is much better.

So, you don't see any possible improvement in the query syntax itself?
 
slightwv (Netminder) Commented:
Based on that execution plan, I don't see where I can improve the select.

Since it runs better without the function, I would look at tuning the function.
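
As an aside, a general Oracle technique (not specific to these functions): if the TLQ columns repeat a lot, wrapping each call in a scalar subquery lets Oracle cache the function result per distinct input instead of calling it once per row. A sketch against the posted query, under the assumption that f_convert_tlq always returns the same output for the same input:

-- Each (SELECT ... FROM dual) is a scalar subquery; Oracle caches its result
-- per distinct TLQ value for the duration of the query.
SELECT to_char(xa_time_cnv.utc_to_loc(UTCTIME),'DD-MM-YYYY HH24:MI:SS'),
       ROUND(VALOR_INST,2),
       (SELECT ge_pkt_conv_funcs.f_convert_tlq(TLQ_INST) FROM dual),
       ROUND(VALOR_PROM,2),
       (SELECT ge_pkt_conv_funcs.f_convert_tlq(TLQ_PROM) FROM dual),
       ROUND(VALOR_MAX,2),
       (SELECT ge_pkt_conv_funcs.f_convert_tlq(TLQ_MAX) FROM dual),
       to_char(xa_time_cnv.utc_to_loc(UTCTIME_MAX),'DD-MM-YYYY HH24:MI:SS'),
       ROUND(VALOR_MIN,2),
       (SELECT ge_pkt_conv_funcs.f_convert_tlq(TLQ_MIN) FROM dual),
       to_char(xa_time_cnv.utc_to_loc(UTCTIME_MIN),'DD-MM-YYYY HH24:MI:SS')
  FROM a_5min_033
 WHERE utctime >= xa_time_cnv.loc_to_utc(TO_DATE('27/06/2014 00:00:00','DD-MM-YYYY HH24:MI:SS'),null)
   AND utctime <= xa_time_cnv.loc_to_utc(TO_DATE('28/06/2014 00:00:00','DD-MM-YYYY HH24:MI:SS') + 8/24,null)
   AND POINTNUMBER = 330000
 ORDER BY utctime;

This only helps if the TLQ columns have relatively few distinct values and the function is stateless for a given input; if it depends on session state, the rewrite is not safe.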
 
carlino70 (Author) Commented:
The functions are in one package. I extracted only their lines.
function-time.sql
function-tlq.sql
 
slightwv (Netminder) Commented:
What are you wanting me to do with those functions?

If you want me to walk through them and 'tune' them I'm afraid that isn't going to work.

You would need to hire a consultant to do that.

I have no idea what f_convert_tlq is even doing.

I can offer advice on where to look:
1:  look at the selects/cursors inside the functions.  Make sure each one of them is using an index if there is a large amount of data in the tables.

2: look into removing some of the cursors where possible.

For example, in utc_to_loc you only appear to select a single row.  A simple select into with a no_data_found exception might be better (see the sketch at the end of this comment).

I'm guessing Oracle date/timezone conversions won't work for you?
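
A minimal sketch of that select-into pattern, purely for illustration: the real utc_to_loc is in the attached function-time.sql, and the table and column names used here (timetran, utcstart, utcend, offset) are assumptions, not the actual definitions.

-- Sketch only: replaces an explicit cursor open/fetch/close with a
-- single-row SELECT INTO plus a NO_DATA_FOUND handler.
CREATE OR REPLACE FUNCTION utc_to_loc_sketch (p_utc IN DATE) RETURN DATE IS
  l_offset NUMBER;  -- offset in minutes (assumed)
BEGIN
  SELECT t.offset
    INTO l_offset
    FROM timetran t
   WHERE p_utc >= t.utcstart
     AND p_utc <  t.utcend;

  RETURN p_utc + (l_offset / (24 * 60));
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN p_utc;  -- or raise, depending on the required behaviour
END utc_to_loc_sketch;
/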
 
flow01 Commented:
1. Check whether "xa_time_cnv.loc_to_utc" is only executed twice.
    If it is executed more often, consider using the WITH option:
    with timerange
    as
    (
    select xa_time_cnv.loc_to_utc(TO_DATE('27/06/2014 00:00:00','DD-MM-YYYY HH24:MI:SS'),null) from_date,
           xa_time_cnv.loc_to_utc(TO_DATE('28/06/2014 00:00:00','DD-MM-YYYY HH24:MI:SS') + 8/24,null) to_date
      from dual
    )
    select ..
      FROM a_5min_033, timerange
     WHERE utctime >= timerange.from_date
       AND utctime <= timerange.to_date
       AND POINTNUMBER = 330000
     ORDER BY utctime;
2. If timetran contains many records, a function-based index on utcstart + (offset / (24 * 60)) might improve the
          WHERE local_datetime BETWEEN (utcstart + (offset / (24 * 60)))
    predicate by letting it use that index.
3. If you are on 11g and the tables tlq, xajsdb.tlq_xui and xajsdb.pov don't change often,
    consider using the /*+ RESULT_CACHE */ hint
    on the queries in the tlq function: as long as nothing changes, repeated executions will be served from the result cache.
4. If you are on 11g: depending on the number of different values of TLQ_INST and the other columns, consider using the PL/SQL function cache (see the sketch after this list).
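
Rough sketches of points 2 and 4, for illustration only: the timetran column names, the tlq table structure and the f_convert_tlq signature shown here are guesses, since the real definitions are in the attached scripts.

-- Point 2: function-based index so the BETWEEN predicate on the computed
-- local time can use an index (expression and column names assumed).
CREATE INDEX timetran_loc_idx
    ON timetran (utcstart + (offset / (24 * 60)));

-- Point 4 (11g+): PL/SQL function result cache, so repeated calls with the
-- same TLQ value are answered from cache instead of re-running the lookups.
-- Parameter type, table and columns are placeholders, not the real code.
CREATE OR REPLACE FUNCTION f_convert_tlq_cached (p_tlq IN NUMBER)
  RETURN VARCHAR2
  RESULT_CACHE RELIES_ON (tlq)   -- RELIES_ON is optional from 11gR2 onwards
IS
  l_txt VARCHAR2(100);
BEGIN
  SELECT t.description
    INTO l_txt
    FROM tlq t
   WHERE t.tlq_value = p_tlq;
  RETURN l_txt;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN NULL;
END f_convert_tlq_cached;
/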
 
Alexander Eßer [Alex140181], Software Developer, Commented:
2. If timetran contains many records, a function-based index on utcstart + (offset / (24 * 60)) might improve the
          WHERE local_datetime BETWEEN (utcstart + (offset / (24 * 60)))
    predicate by letting it use that index.
You should avoid the use of FBIs whenever possible. In some situations they might "cost" a lot if you, or a new/other developer, don't know how to use them properly...

3. If you are on 11g and the tables tlq, xajsdb.tlq_xui and xajsdb.pov don't change often,
    consider using the /*+ RESULT_CACHE */ hint
Depending on your configuration (e.g. check v$parameter, as in the query below), you can and should omit this hint. If it is set up correctly, your CBO should always know how to dig through your rows properly ;-)
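
A quick way to check that configuration (standard dynamic performance view, no assumptions beyond having the privilege to read it):

-- Shows whether the server result cache is enabled and how it is sized;
-- RESULT_CACHE_MODE = FORCE means results are cached even without the hint.
SELECT name, value
  FROM v$parameter
 WHERE name LIKE 'result_cache%';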
 
carlino70 (Author) Commented:
I am trying code like the block below, but I have a problem with the conversion of "UTCTIME" after "DECLARE TYPE"; it fails with the error:

ora-01722 invalid number
ORA-06512: at line 31

Is that conversion possible when I use TYPE?
Thanks.

SET SERVEROUTPUT ON
SET TIMING ON
exec dbms_output.enable(1000000);
alter session set nls_date_format ='dd/mm/yyyy hh24:mi:ss';
DECLARE

  TYPE matriz_utctime is table of a_5min_033.utctime%TYPE;
  TYPE matriz_valor_inst is table of a_5min_033.valor_inst%TYPE;
  TYPE matriz_tlq_inst is table of a_5min_033.tlq_inst%TYPE;
  TYPE matriz_valor_prom is table of a_5min_033.valor_prom%TYPE;
  TYPE matriz_tlq_prom is table of a_5min_033.tlq_prom%TYPE;
  TYPE matriz_valor_max is table of a_5min_033.valor_max%TYPE;
  TYPE matriz_tlq_max is table of a_5min_033.tlq_max%TYPE;
  TYPE matriz_utctime_max is table of a_5min_033.utctime_max%TYPE;
  TYPE matriz_valor_min is table of a_5min_033.valor_min%TYPE;
  TYPE matriz_tlq_min is table of a_5min_033.tlq_min%TYPE;
  TYPE matriz_utctime_min is table of a_5min_033.utctime_min%TYPE;
    
  
  l_utctime matriz_utctime;
  l_valor_inst matriz_valor_inst;
  l_tlq_inst matriz_tlq_inst;
  l_valor_prom matriz_valor_prom;
  l_tlq_prom matriz_tlq_prom;
  l_valor_max matriz_valor_max;
  l_tlq_max matriz_tlq_max;
  l_utctime_max matriz_utctime_max;
  l_valor_min matriz_valor_min;
  l_tlq_min matriz_tlq_min;
  l_utctime_min matriz_utctime_min;
  
  
  
BEGIN
SELECT to_char(xa_time_cnv.utc_to_loc(UTCTIME),'DD-MM-YYYY HH24:MI:SS'),
       ROUND(VALOR_INST,2),
       ge_pkt_conv_funcs.f_convert_tlq(TLQ_INST),
       ROUND(VALOR_PROM,2),
       ge_pkt_conv_funcs.f_convert_tlq(TLQ_PROM),
       ROUND(VALOR_MAX,2),
       ge_pkt_conv_funcs.f_convert_tlq(TLQ_MAX),
       to_char(xa_time_cnv.utc_to_loc(UTCTIME_MAX),'DD-MM-YYYY HH24:MI:SS'),
       ROUND(VALOR_MIN,2),
       ge_pkt_conv_funcs.f_convert_tlq(TLQ_MIN),
       to_char(xa_time_cnv.utc_to_loc(UTCTIME_MIN),'DD-MM-YYYY HH24:MI:SS')
  BULK COLLECT INTO l_utctime, l_valor_inst, l_tlq_inst, l_valor_prom, l_tlq_prom,
       l_valor_max, l_tlq_max, l_utctime_max, 
       l_valor_min, l_tlq_min, l_utctime_min 
       FROM a_5min_033
  WHERE UTCTIME >= TO_DATE('21/06/2014 00:00:00','DD-MM-YYYY HH24:MI:SS') 
     AND   UTCTIME <= TO_DATE('22/06/2014 00:00:00','DD-MM-YYYY HH24:MI:SS') + 8/24 
     AND   POINTNUMBER =  330011
     ORDER BY UTCTIME;

  FOR i IN l_utctime.FIRST .. l_utctime.LAST LOOP
    dbms_output.put_line(l_utctime(i)    || ' , ' ||l_valor_inst(i)||' , '||
                            l_valor_prom(i) || ' , ' ||l_tlq_prom(i) || ' , '||
                            l_valor_max(i) || ' , ' ||l_tlq_max(i) || ' , '||l_utctime_max(i)|| ' , '||
                            l_valor_min(i) || ' , ' ||l_tlq_min(i) || ' , '||l_utctime_min(i) );
  END LOOP;

END;
/


 
slightwv (Netminder) Commented:
What data type is a_5min_033.utctime%TYPE ?

You are converting it to a VARCHAR2 with:
to_char(xa_time_cnv.utc_to_loc(UTCTIME),'DD-MM-YYYY HH24:MI:SS'),

Double check the rest.
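
In other words, one possible fix, as a sketch only (the actual column types are in the attached table.sql): since the to_char(...) expressions return VARCHAR2, the collections that receive them cannot be anchored to the table's utctime columns, e.g.:

  -- VARCHAR2 collections sized for the 'DD-MM-YYYY HH24:MI:SS' strings the
  -- query actually returns, instead of a_5min_033.utctime%TYPE anchors.
  TYPE matriz_utctime     IS TABLE OF VARCHAR2(19);
  TYPE matriz_utctime_max IS TABLE OF VARCHAR2(19);
  TYPE matriz_utctime_min IS TABLE OF VARCHAR2(19);

Alternatively, keep the %TYPE anchors and move the to_char()/utc_to_loc() formatting out of the select list into the dbms_output loop. The same type check applies to the columns fed through f_convert_tlq.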
 
carlino70 (Author) Commented:
Thank you
