Solved

Issue on slow SQL

Posted on 2014-02-18
313 Views
Last Modified: 2014-02-25
Hello Experts,

I have the SQL below, which was identified as a time-consuming statement.

Here is my SQL and its explain plan:

explain plan for 
SELECT STORED_VALUE,
                SUBSTR(M_JSON,1,LENGTH(M_JSON)-1) ||','|| MS_SRA_VISIT_SETUP_HOOK_PKG.FN_DOWNLOAD(:1)||'}' AS M_JSON
              FROM
                (WITH AUD_TAB AS
                (SELECT AUD_TPA_ID,
                  XMLAGG(XMLELEMENT("AUD",
                      XMLELEMENT("name",AUDITOR), 
                      XMLELEMENT("asgn",DECODE(ACTION_TYPE,'SYNC',1,'SUBMIT',2,'UPLOAD',3,'X',4,'Z',5,0)), 
                      XMLELEMENT("lead",IS_LEAD),
                      XMLELEMENT("offline",DECODE(OFFLINE_STATUS,'Y',1,0))
                     )
                  ) AS AUDITOR_INFO
                FROM
                  (SELECT A.AUD_TPA_ID,
                     A.AUDITOR,
                     (SELECT DECODE(VISIT_SUBMIT_FLAG,NULL,'Z',VISIT_SUBMIT_FLAG) 
                          FROM MS_SRA_ASSIGNMENT_HIERARCHIES H 
                          WHERE STAGE != 'OSOL'
                            AND A.VISIT_ID = H.VISIT_ID
                            AND H.USER_ID = A.AUDITOR) AS ACTION_TYPE,
                     (SELECT OFFLINE_STATUS 
                            FROM SI_EVENT_ASSIGNMENTS ASG,
                                MS_SRA_ASSIGNMENT_HIERARCHIES H 
                                    WHERE H.USER_ID = A.AUDITOR
                                    AND H.STAGE != 'OSOL'
                                    AND A.VISIT_ID = H.VISIT_ID
                                    AND ASG.USER_ID = H.USER_ID
                                    AND ASG.PROCESS_INSTANCE_ID IN (H.PID,H.PVPID)) AS OFFLINE_STATUS,
                     A.IS_LEAD
                   FROM MS_SRA_VISIT_SETUP_AUD A
                   WHERE A.VISIT_ID  = :1
                   ORDER BY IS_LEAD DESC
                  )
                GROUP BY AUD_TPA_ID
                )
              SELECT 1 AS STORED_VALUE,
                MS_SRA_JSON_UTILS.XML2JSON(
                    XMLELEMENT ("tpaCo", XMLAGG(
                              XMLELEMENT("tpaName", 
                                    XMLELEMENT("name",T.AUDIT_COMPANY), 
                                    XMLELEMENT("aud",AUD.AUDITOR_INFO)
                              )
                              )
                         ),'tpaName,AUD') AS m_json
              FROM MS_SRA_VISIT_SETUP_TPA T,
                AUD_TAB AUD
              where T.VISIT_ID    =:1
              AND T.AUDIT_COMPANY = AUD.AUD_TPA_ID);



Plan hash value: 2428265258
 
-----------------------------------------------------------------------------------------
| Id  | Operation                          | Name                          | Cost (%CPU)|
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                               |     4  (50)|
|*  1 |  TABLE ACCESS BY INDEX ROWID       | MS_SRA_ASSIGNMENT_HIERARCHIES |     2   (0)|
|*  2 |   INDEX RANGE SCAN                 | MS_SRA_ASSIGNMENT_HIERAR_INX1 |     1   (0)|
|   3 |   NESTED LOOPS                     |                               |            |
|   4 |    NESTED LOOPS                    |                               |     3   (0)|
|*  5 |     TABLE ACCESS BY INDEX ROWID    | MS_SRA_ASSIGNMENT_HIERARCHIES |     2   (0)|
|*  6 |      INDEX RANGE SCAN              | MS_SRA_ASSIGNMENT_HIERAR_INX1 |     1   (0)|
|*  7 |     INDEX RANGE SCAN               | SI_EVENT_ASSIGNMENTS_N2       |     1   (0)|
|*  8 |    TABLE ACCESS BY INDEX ROWID     | SI_EVENT_ASSIGNMENTS          |     1   (0)|
|   9 |  VIEW                              |                               |     4  (50)|
|  10 |   SORT AGGREGATE                   |                               |            |
|  11 |    NESTED LOOPS                    |                               |     4  (50)|
|  12 |     VIEW                           |                               |     3  (67)|
|  13 |      SORT GROUP BY                 |                               |     3  (67)|
|  14 |       VIEW                         |                               |     2  (50)|
|  15 |        SORT ORDER BY               |                               |     2  (50)|
|  16 |         TABLE ACCESS BY INDEX ROWID| MS_SRA_VISIT_SETUP_AUD        |     1   (0)|
|* 17 |          INDEX RANGE SCAN          | MS_SRA_VISIT_SETUP_AUD_IDX    |     1   (0)|
|* 18 |     INDEX UNIQUE SCAN              | MS_SRA_VISIT_SETUP_TPA_UK     |     1   (0)|
-----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("H"."USER_ID"=:B1 AND "STAGE"<>'OSOL')
   2 - access("H"."VISIT_ID"=:B1)
   5 - filter("H"."USER_ID"=:B1 AND "H"."STAGE"<>'OSOL')
   6 - access("H"."VISIT_ID"=:B1)
   7 - access("ASG"."USER_ID"=:B1)
       filter("ASG"."USER_ID"="H"."USER_ID")
   8 - filter("ASG"."PROCESS_INSTANCE_ID"="H"."PID" OR 
              "ASG"."PROCESS_INSTANCE_ID"="H"."PVPID" AND "H"."PVPID" IS NOT NULL)
  17 - access("A"."VISIT_ID"=:1)
  18 - access("T"."AUDIT_COMPANY"="AUD"."AUD_TPA_ID" AND "T"."VISIT_ID"=:1)




How can I tune this?
Question by:Swadhin Ray
17 Comments
 
LVL 22

Accepted Solution

by:
Ivo Stoykov earned 100 total points
ID: 39866844
First of all, tuning a SQL statement means a lot of testing and experimenting. AFAIK there is no magic way to do it, and I doubt you will get much valuable advice on the query alone.

In general it seems strange to me to use XML functions just to extract and concatenate some values. That is not the fastest way; avoid it if you can.

HTH

Ivo Stoykov
 
LVL 142

Assisted Solution

by:Guy Hengel [angelIII / a3]
Guy Hengel [angelIII / a3] earned 200 total points
ID: 39866850
there are indeed more questions to be asked ...
how many rows are in the tables, and how many are returned by the query?
how long does the query really take?

if the sql was identified as "time consuming", it may be because it is executed many times, even though each run is short.
in which case you might need some advanced "tuning" methods: materialized views, denormalized data, different sql constructs, etc. ...

and there I concur with the above: there may very well be no simple magic for your sql
 
LVL 16

Author Comment

by:Swadhin Ray
ID: 39866968
All the tables have around 600 records each.
The output is in JSON format and returns 264 records.

 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 39867045
how much time does the query take?
how many times is it run?
 
LVL 16

Author Comment

by:Swadhin Ray
ID: 39867058
it takes 90 seconds to run, and this SQL is executed multiple times because it is associated with a report on the application login page.
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 39867234
90 seconds for tables with 600 rows each?
this is "not possible", unless your server is completely mistuned, undersized, or overloaded somehow.

however, if one of the "tables" is actually a view which returns 600 records, and running the view alone already takes 89 seconds, that could explain a lot.
can you try to split the query into its parts and see how much time each one takes?
or create a script for us to reproduce this, including the table creations, indexes, views and INSERT statements for the relevant data ...
 
LVL 23

Expert Comment

by:David
ID: 39867398
Confirm and reconfirm your table statistics are current.

There are any number of methods to tune, some built-in, some after-market.  My question is, what is the goal you want to reach -- when will you know to stop tuning?
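For instance, a statistics refresh on one of the tables in the query could look like this (the schema owner below is a placeholder; adjust to your environment):

```sql
-- Refresh optimizer statistics for one of the tables in the query
-- (the owner name 'APP_OWNER' is a placeholder)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',
    tabname => 'MS_SRA_VISIT_SETUP_AUD',
    cascade => TRUE  -- also refresh statistics on the table's indexes
  );
END;
/
```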
 
LVL 16

Author Comment

by:Swadhin Ray
ID: 39878794
As I see it, the cost and time are due to the way this part of the SQL is written:

 (SELECT AUD_TPA_ID,
                  XMLAGG(XMLELEMENT("AUD",
                      XMLELEMENT("name",AUDITOR),
                      XMLELEMENT("asgn",DECODE(ACTION_TYPE,'SYNC',1,'SUBMIT',2,'UPLOAD',3,'X',4,'Z',5,0)),
                      XMLELEMENT("lead",IS_LEAD),
                      XMLELEMENT("offline",DECODE(OFFLINE_STATUS,'Y',1,0))
                     )

My goal is to change this code in such a way that it gives the same result in a more optimized way.
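One direction to experiment with, purely as an untested sketch: the two scalar subqueries in the inner select run once per row, so replacing them with outer joins may help. Note that the NULL handling differs slightly (the NVL below yields 'Z' even when no hierarchy row matches, whereas the original returns NULL in that case), so the results must be compared before adopting it:

```sql
-- Untested sketch: scalar subqueries replaced by outer joins.
-- Verify the results match the original, especially when no H/ASG row exists.
SELECT A.AUD_TPA_ID,
       A.AUDITOR,
       NVL(H.VISIT_SUBMIT_FLAG, 'Z') AS ACTION_TYPE,
       ASG.OFFLINE_STATUS,
       A.IS_LEAD
FROM   MS_SRA_VISIT_SETUP_AUD A
LEFT   JOIN MS_SRA_ASSIGNMENT_HIERARCHIES H
       ON  H.VISIT_ID = A.VISIT_ID
       AND H.USER_ID  = A.AUDITOR
       AND H.STAGE   != 'OSOL'
LEFT   JOIN SI_EVENT_ASSIGNMENTS ASG
       ON  ASG.USER_ID = H.USER_ID
       AND ASG.PROCESS_INSTANCE_ID IN (H.PID, H.PVPID)
WHERE  A.VISIT_ID = :1
ORDER  BY A.IS_LEAD DESC;
```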
 
LVL 76

Expert Comment

by:slightwv (䄆 Netminder)
ID: 39879967
My guess would be the "WITH AUD_TAB AS" subquery, not the select you quoted by itself.

Sorry, but it is next to impossible to take a SQL statement and deconstruct it into the base tables and expected results.

I tend to agree with the comment above that 90 seconds doesn't seem possible for 600 rows and a cost of 4.

Can you show a different execution plan?

explain plan for
SELECT ...

then:
select * from table(dbms_xplan.display);
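In addition to the estimated plan, actual row counts and timings per step can be captured, assuming you can rerun the statement in the same session:

```sql
-- 1) Run the statement once with runtime statistics collection enabled
SELECT /*+ GATHER_PLAN_STATISTICS */ ...rest of the original query... ;

-- 2) Immediately afterwards, in the same session, compare actual vs. estimated rows
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

The E-Rows vs. A-Rows columns in that output usually show exactly which step the optimizer misestimated.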
 
LVL 16

Author Comment

by:Swadhin Ray
ID: 39881618
This issue is resolved. The performance problem was due to a transition table which had a billion records. After truncating that table the SQL runs much better, with no issues.

Whenever the SQL was executed there was an insert into a _T table, which was causing the issue.

I want to close this question: suggest whether I should split the points or just close it.
 
LVL 76

Expert Comment

by:slightwv (䄆 Netminder)
ID: 39882429
Was the billion-row table reflected in the plan you posted? Personally I don't see how, but I have to ask.
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 39882482
so, an audit mechanism slowing down the query? difficult to tackle "remotely" without knowing the config.
 
LVL 16

Author Comment

by:Swadhin Ray
ID: 39885215
@slightwv: I think you took it in a different direction. What I meant is that our application works with something like a parent table and a temp table.

If we run a select, for example for a report, it indirectly inserts into a <<table>>_t table; the application then picks the data from that <<table>>_t table and displays it on the screen. The "_t" table should have been cleared after the display, but what was happening is that the records in the "_t" table were never deleted. Because of that, the main SQL was flagged as a huge CPU-consuming statement by an external tool, and it was also reported in the AWR report.

It was reported because the application keeps everything in one transaction until the "_t" table is cleared. Even I was surprised, and when I had no clue left, I had one way to get a solution, i.e. Experts Exchange, where I can talk directly to experts like you and the others.

After noting all your points, I ran the test on the specific report only and generated a trace using the tool, which showed another insert into the "_t" table, which was holding a billion records.

The way the application works is fine for other functional areas, but not in this case. So I scheduled the delete query, and after that everything went well.

@Guy Hengel [angelIII / a3]:

There was also an issue with the configuration of the application server and the DB server. The report SQLs were using a SQL connection agent, which was changed to a non-SQL channel and made a scheduled one. (This was done on the product application that we have.)

Hope I have cleared up the doubts on this question.
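For reference, a scheduled cleanup like the one described can be created with DBMS_SCHEDULER; the job name, table name, retention window, and schedule below are all made-up placeholders:

```sql
-- Hypothetical nightly purge job; all names and the schedule are placeholders
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_REPORT_T_TABLE',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DELETE FROM report_data_t WHERE created_on < SYSDATE - 1; COMMIT; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE
  );
END;
/
```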
 
LVL 76

Assisted Solution

by:slightwv (䄆 Netminder)
slightwv (䄆 Netminder) earned 200 total points
ID: 39885522
>>display in the screen but the " _t " table should have deleted after display but what was happening is the "_t" table

This is the EXACT reason GLOBAL TEMPORARY TABLES (GTT) were created in Oracle.  There is no need to manually delete from them.  Once the session ends, the data is automatically removed.  Only the session that inserts into the table can see the data.

I STRONGLY suggest you change the table to a GTT.
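As a sketch, the manually-purged "_t" table could be declared like this (the table and column names are placeholders, since the real structure was not posted):

```sql
-- Hypothetical GTT; rows disappear automatically at session end,
-- and each session sees only its own rows
CREATE GLOBAL TEMPORARY TABLE report_data_t (
  report_id  NUMBER,
  payload    CLOB,
  created_on DATE DEFAULT SYSDATE
) ON COMMIT PRESERVE ROWS;  -- use ON COMMIT DELETE ROWS to clear per transaction
```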

Back to point:
Even after your explanation I don't see how the execution plan you posted went against a billion row table thus causing the performance problem.
 
LVL 16

Author Comment

by:Swadhin Ray
ID: 39885584
The execution plan was based on the SQL I provided, but the actual time was spent inserting into the "_t" table, and the deletion was not happening. There is no global temporary table. The design itself is not good, that I know.

As I said in my earlier comment (ID: 39866968), the underlying tables have few records, but the "_t" table had a billion records. Inserting into this table and then selecting only the latest records was what took the time.

Sorry for the inconvenience; my question itself was wrong. I should not have trusted the external tool. Instead, I should have checked the ADDM report, where the insert was identified; but the SQL I mentioned was also reported in the AWR report, and that is what I based the question on.

Hope I have cleared up the doubts on the question I asked and the loopholes in the design.
 
LVL 76

Expert Comment

by:slightwv (䄆 Netminder)
ID: 39885703
I have no problem if you just delete this question.
 
LVL 16

Author Closing Comment

by:Swadhin Ray
ID: 39885991
Thanks a lot for your valuable inputs: the global temporary table, removing the WITH clause, and also changing the XMLELEMENT logic.
