Solved

Issue on slow SQL

Posted on 2014-02-18
307 Views
Last Modified: 2014-02-25
Hello Experts,

I have the SQL below, which was identified as a time-consuming statement.

Here is my SQL and its explain plan:

explain plan for 
SELECT STORED_VALUE,
                SUBSTR(M_JSON,1,LENGTH(M_JSON)-1) ||','|| MS_SRA_VISIT_SETUP_HOOK_PKG.FN_DOWNLOAD(:1)||'}' AS M_JSON
              FROM
                (WITH AUD_TAB AS
                (SELECT AUD_TPA_ID,
                  XMLAGG(XMLELEMENT("AUD",
                      XMLELEMENT("name",AUDITOR), 
                      XMLELEMENT("asgn",DECODE(ACTION_TYPE,'SYNC',1,'SUBMIT',2,'UPLOAD',3,'X',4,'Z',5,0)), 
                      XMLELEMENT("lead",IS_LEAD),
                      XMLELEMENT("offline",DECODE(OFFLINE_STATUS,'Y',1,0))
                     )
                  ) AS AUDITOR_INFO
                FROM
                  (SELECT A.AUD_TPA_ID,
                     A.AUDITOR,
                     (SELECT DECODE(VISIT_SUBMIT_FLAG,NULL,'Z',VISIT_SUBMIT_FLAG) 
                          FROM MS_SRA_ASSIGNMENT_HIERARCHIES H 
                          WHERE STAGE != 'OSOL'
                            AND A.VISIT_ID = H.VISIT_ID
                            AND H.USER_ID = A.AUDITOR) AS ACTION_TYPE,
                     (SELECT OFFLINE_STATUS 
                            FROM SI_EVENT_ASSIGNMENTS ASG,
                                MS_SRA_ASSIGNMENT_HIERARCHIES H 
                                    WHERE H.USER_ID = A.AUDITOR
                                    AND H.STAGE != 'OSOL'
                                    AND A.VISIT_ID = H.VISIT_ID
                                    AND ASG.USER_ID = H.USER_ID
                                    AND ASG.PROCESS_INSTANCE_ID IN (H.PID,H.PVPID)) AS OFFLINE_STATUS,
                     A.IS_LEAD
                   FROM MS_SRA_VISIT_SETUP_AUD A
                   WHERE A.VISIT_ID  = :1
                   ORDER BY IS_LEAD DESC
                  )
                GROUP BY AUD_TPA_ID
                )
              SELECT 1 AS STORED_VALUE,
                MS_SRA_JSON_UTILS.XML2JSON(
                    XMLELEMENT ("tpaCo", XMLAGG(
                              XMLELEMENT("tpaName", 
                                    XMLELEMENT("name",T.AUDIT_COMPANY), 
                                    XMLELEMENT("aud",AUD.AUDITOR_INFO)
                              )
                              )
                         ),'tpaName,AUD') AS m_json
              FROM MS_SRA_VISIT_SETUP_TPA T,
                AUD_TAB AUD
              where T.VISIT_ID    =:1
              AND T.AUDIT_COMPANY = AUD.AUD_TPA_ID);



Plan hash value: 2428265258
 
-----------------------------------------------------------------------------------------
| Id  | Operation                          | Name                          | Cost (%CPU)|
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                               |     4  (50)|
|*  1 |  TABLE ACCESS BY INDEX ROWID       | MS_SRA_ASSIGNMENT_HIERARCHIES |     2   (0)|
|*  2 |   INDEX RANGE SCAN                 | MS_SRA_ASSIGNMENT_HIERAR_INX1 |     1   (0)|
|   3 |   NESTED LOOPS                     |                               |            |
|   4 |    NESTED LOOPS                    |                               |     3   (0)|
|*  5 |     TABLE ACCESS BY INDEX ROWID    | MS_SRA_ASSIGNMENT_HIERARCHIES |     2   (0)|
|*  6 |      INDEX RANGE SCAN              | MS_SRA_ASSIGNMENT_HIERAR_INX1 |     1   (0)|
|*  7 |     INDEX RANGE SCAN               | SI_EVENT_ASSIGNMENTS_N2       |     1   (0)|
|*  8 |    TABLE ACCESS BY INDEX ROWID     | SI_EVENT_ASSIGNMENTS          |     1   (0)|
|   9 |  VIEW                              |                               |     4  (50)|
|  10 |   SORT AGGREGATE                   |                               |            |
|  11 |    NESTED LOOPS                    |                               |     4  (50)|
|  12 |     VIEW                           |                               |     3  (67)|
|  13 |      SORT GROUP BY                 |                               |     3  (67)|
|  14 |       VIEW                         |                               |     2  (50)|
|  15 |        SORT ORDER BY               |                               |     2  (50)|
|  16 |         TABLE ACCESS BY INDEX ROWID| MS_SRA_VISIT_SETUP_AUD        |     1   (0)|
|* 17 |          INDEX RANGE SCAN          | MS_SRA_VISIT_SETUP_AUD_IDX    |     1   (0)|
|* 18 |     INDEX UNIQUE SCAN              | MS_SRA_VISIT_SETUP_TPA_UK     |     1   (0)|
-----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("H"."USER_ID"=:B1 AND "STAGE"<>'OSOL')
   2 - access("H"."VISIT_ID"=:B1)
   5 - filter("H"."USER_ID"=:B1 AND "H"."STAGE"<>'OSOL')
   6 - access("H"."VISIT_ID"=:B1)
   7 - access("ASG"."USER_ID"=:B1)
       filter("ASG"."USER_ID"="H"."USER_ID")
   8 - filter("ASG"."PROCESS_INSTANCE_ID"="H"."PID" OR 
              "ASG"."PROCESS_INSTANCE_ID"="H"."PVPID" AND "H"."PVPID" IS NOT NULL)
  17 - access("A"."VISIT_ID"=:1)
  18 - access("T"."AUDIT_COMPANY"="AUD"."AUD_TPA_ID" AND "T"."VISIT_ID"=:1)




How can I tune this ?
Question by:Swadhin Ray

17 Comments
 
LVL 22

Accepted Solution

by: Ivo Stoykov earned 100 total points
First of all, tuning a SQL statement means a lot of testing and experimenting. AFAIK there is no magic way to do it, and I doubt you will receive much valuable advice from the query text alone.

In general, it seems strange to me to use XML functions to extract and concatenate strings. This is not the fastest approach; avoid it if you can.
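For example, one hedged alternative (assuming Oracle 11gR2 or later, which this question does not confirm) is to build the JSON fragment directly with string aggregation instead of XMLAGG/XMLELEMENT plus an XML-to-JSON conversion. Table and column names are taken from the posted query; this is a sketch, not a drop-in replacement:

```sql
-- Sketch only: aggregate the auditor rows straight into a JSON array
-- with LISTAGG, skipping the XMLAGG/XML2JSON round-trip entirely.
-- LISTAGG returns VARCHAR2 here, so per-group output is capped at
-- 4000 bytes; verify that is enough for your data.
SELECT aud_tpa_id,
       '[' || LISTAGG('{"name":"' || auditor || '","lead":"' || is_lead || '"}', ',')
              WITHIN GROUP (ORDER BY is_lead DESC) || ']' AS auditor_json
FROM   ms_sra_visit_setup_aud
WHERE  visit_id = :1
GROUP  BY aud_tpa_id;
```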

HTH

Ivo Stoykov
 
LVL 142

Assisted Solution

by: Guy Hengel [angelIII / a3] earned 200 total points
there are indeed more questions to be asked ...
how many rows are in the tables, and how many are returned by the queries?
how much time does the query really take?

if the SQL was identified as "time consuming", it may be because it is executed many times, even though each single execution is short.
in that case you might need some advanced "tuning" methods: materialized views, denormalized data, different SQL constructs, etc etc ...

and there I concur with the above: there is very likely no simple magic fix for this SQL.
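The materialized-view idea above could look roughly like this; the view and column names are illustrative stand-ins, not taken from the application:

```sql
-- Sketch: precompute the per-visit auditor data once, refresh it on a
-- schedule, and let the report query read the precomputed rows instead
-- of paying the aggregation cost on every login.
CREATE MATERIALIZED VIEW mv_visit_auditors
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT a.visit_id, a.aud_tpa_id, a.auditor, a.is_lead
FROM   ms_sra_visit_setup_aud a;

-- Refresh when the base data changes (e.g. from a scheduler job):
EXEC DBMS_MVIEW.REFRESH('MV_VISIT_AUDITORS', 'C');
```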
 
LVL 16

Author Comment

by:Swadhin Ray
All the tables have about 600 records each.
The output is in JSON format and returns 264 records.
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
How much time does the query take?
How many times is it run?
 
LVL 16

Author Comment

by:Swadhin Ray
It takes 90 seconds to run, and this SQL is executed multiple times because it is associated with a report on the application login page.
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
90 seconds for tables having 600 rows each?
That is "not possible", unless your server is completely mistuned, undersized, or otherwise overloaded.

However, if one of the "tables" is actually a view which returns 600 records, and running the view already takes 89 seconds, that could explain a lot.
Can you try to split the query into its parts and see how much time each takes?
Or create a script for us to reproduce this, including the table creations, indexes, views, and INSERT statements for the relevant data ...
 
LVL 23

Expert Comment

by:David
Confirm and reconfirm that your table statistics are current.

There are any number of methods to tune, some built-in, some after-market.  My question is: what is the goal you want to reach -- when will you know to stop tuning?
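A quick way to check (and, if needed, refresh) the statistics on the tables from the posted query, using standard DBMS_STATS calls:

```sql
-- When were these tables last analyzed, and at what row counts?
SELECT table_name, num_rows, last_analyzed
FROM   user_tables
WHERE  table_name IN ('MS_SRA_VISIT_SETUP_AUD',
                      'MS_SRA_ASSIGNMENT_HIERARCHIES',
                      'SI_EVENT_ASSIGNMENTS',
                      'MS_SRA_VISIT_SETUP_TPA');

-- Refresh statistics for one table (repeat per table as needed):
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'MS_SRA_VISIT_SETUP_AUD');
```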
 
LVL 16

Author Comment

by:Swadhin Ray
As I see it, the cost and time are due to the way this part of the SQL is written, i.e. as below:

 (SELECT AUD_TPA_ID,
                  XMLAGG(XMLELEMENT("AUD",
                      XMLELEMENT("name",AUDITOR),
                      XMLELEMENT("asgn",DECODE(ACTION_TYPE,'SYNC',1,'SUBMIT',2,'UPLOAD',3,'X',4,'Z',5,0)),
                      XMLELEMENT("lead",IS_LEAD),
                      XMLELEMENT("offline",DECODE(OFFLINE_STATUS,'Y',1,0))
                     )

My goal is to rewrite this code so that it gives the same result in a more optimized way.
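One direction worth testing, as a sketch: fold the two scalar subqueries of the inner SELECT into outer joins, so they run once per result set rather than once per row. The join conditions are copied from the posted query; verify that MS_SRA_ASSIGNMENT_HIERARCHIES returns at most one row per (visit_id, user_id), otherwise the joins will multiply rows:

```sql
-- Hypothetical rewrite of the inner SELECT: scalar subqueries become
-- LEFT JOINs. Same columns out; NVL reproduces the DECODE(...,NULL,'Z')
-- behavior when no hierarchy row matches. Must be verified against data.
SELECT a.aud_tpa_id,
       a.auditor,
       NVL(h.visit_submit_flag, 'Z') AS action_type,
       asg.offline_status,
       a.is_lead
FROM   ms_sra_visit_setup_aud a
LEFT   JOIN ms_sra_assignment_hierarchies h
       ON  h.visit_id = a.visit_id
       AND h.user_id  = a.auditor
       AND h.stage   != 'OSOL'
LEFT   JOIN si_event_assignments asg
       ON  asg.user_id = h.user_id
       AND asg.process_instance_id IN (h.pid, h.pvpid)
WHERE  a.visit_id = :1
ORDER  BY a.is_lead DESC;
```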
 
LVL 76

Expert Comment

by:slightwv (䄆 Netminder)
My guess would be the "WITH AUD_TAB AS" block and its select, not the outer select itself.

Sorry, but it is next to impossible to take a SQL statement and deconstruct it into the base tables and expected results.

I tend to agree with the above comment that 90 seconds doesn't seem possible for 600 rows and a cost of 4.

Can you show a different execution plan?

explain plan for
SELECT ...

then:
select * from table(dbms_xplan.display);
 
LVL 16

Author Comment

by:Swadhin Ray
This issue is resolved. The performance problem was recorded due to a transition table which was holding a billion records. After truncating that table, the SQL is much faster and has no issues.

Whenever the SQL was executed there was an insert into a _T table, which was causing the issue.

I want to close this question: suggest whether I should split the points or just close it.
 
LVL 76

Expert Comment

by:slightwv (䄆 Netminder)
Was the billion-row table reflected in the plan you posted?  Personally I don't see how,
but I have to ask.
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
So, an audit table slowing down the query? Difficult to tackle "remotely" without knowing the configuration.
 
LVL 16

Author Comment

by:Swadhin Ray
@slightwv: I think you took it in a different direction. What I meant was that our application works with something like a parent table and a temp table.

If we run a select, for example for a report, it indirectly inserts into a <<table>>_t table, and the application then picks the data from that <<table>>_t table and displays it on the screen. The "_t" table should have been cleared after display, but what was happening is that the "_t" table records were not being deleted. Because of that, the main SQL was tracked as a huge CPU-consuming statement by an external tool, and the same SQL was also reported in the AWR report.

This was reported because the application keeps everything in one transaction until the "_t" table is cleared. Even I was surprised, and when I didn't have any clue, I had one way to get a solution, i.e. Experts Exchange, which gives me a direct line to experts like you and others.

After noting all your points, I ran the test on the specific report only and generated the trace using the tool, which showed another insert into the "_t" table that was holding a billion records.

The way the application works is good for other functional areas, but not in this case. So I had to schedule the delete query, and after that everything went well.

@Guy Hengel [angelIII / a3]:

There was also an issue with the configuration of the application server against the DB server. The agent for the report SQLs was using a SQL connection agent, which was changed to a non-SQL channel and made into a scheduled one. (This was done on the product application we have.)

Hope I cleared the doubt on this question.
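The scheduled delete described above could be set up with DBMS_SCHEDULER roughly like this; the job, table, and column names are illustrative placeholders, since the real "_t" table name was not posted:

```sql
-- Sketch: nightly purge of stale rows from the staging "_t" table.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_REPORT_T',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DELETE FROM report_t WHERE created_on < SYSDATE - 1; COMMIT; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/
```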
 
LVL 76

Assisted Solution

by:slightwv (䄆 Netminder)
slightwv (䄆 Netminder) earned 200 total points
>>display in the screen but the " _t " table should have deleted after display but what was happening is the "_t" table

This is the EXACT reason GLOBAL TEMPORARY TABLES (GTT) were created in Oracle.  There is no need to manually delete from them.  Once the session ends, the data is automatically removed.  Only the session that inserts into the table can see the data.

I STRONGLY suggest you change the table to a GTT.
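A sketch of that change, with illustrative names standing in for the real "_t" table:

```sql
-- Global temporary table: rows are private to the session and are
-- discarded automatically, so no manual DELETE job is needed.
CREATE GLOBAL TEMPORARY TABLE report_stage_t (
  visit_id  NUMBER,
  payload   CLOB
) ON COMMIT PRESERVE ROWS;  -- session-scoped; use DELETE ROWS for transaction scope
```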

Back to point:
Even after your explanation I don't see how the execution plan you posted went against a billion row table thus causing the performance problem.
 
LVL 16

Author Comment

by:Swadhin Ray
The execution plan was based on the SQL I provided, but the actual time was taken while inserting into the _t table.
And the deletion was not happening. There is no global temporary table.
The design itself is not good; that I know.

If you see my earlier comment, the underlying tables have few records:
ID: 39866968

But the _t table was holding a billion records. Inserting into this table and selecting only the latest records was taking the time.

Sorry for the inconvenience; my question was, in turn, wrong. I should not have trusted the external tool. Rather, I should have checked the ADDM report, where the insert was identified; but the SQL I mentioned was also reported in the AWR report, based on which I asked the question.

Hope I cleared the doubt on the question I asked and the loopholes in the design.
 
LVL 76

Expert Comment

by:slightwv (䄆 Netminder)
I have no problem if you just delete this question.
 
LVL 16

Author Closing Comment

by:Swadhin Ray
Thanks a lot for your valuable inputs: the global temporary table, removing the WITH clause, and changing the logic of the XML elements.
