Solved

Part 1: Complex SQL Tuning Migraine... it runs for 7 hrs but I have only 4.5 hrs for it... what's a man to do?

Posted on 2012-03-09
566 Views
Last Modified: 2012-03-23
Hi all, I hope all is well in your homes. I have a set of SQLs that is giving me a migraine because the performance is driving me up the wall. The SQL comes from Siebel-generated code, indexed to death I think, and runs from 12 am with a mandate to be done by 5 am in a 24/7 business, and I have nary a sophisticated or clever tool to help with this work. Lately, this code has been running until 7 am, and it is causing our business partners to miss their commitments to other partners... not a good place to be as a DB2 DBA or programmer!!
I need your collective tuning expertise to solve this one. I have attached one set of this SQL. The set runs like a WLM-managed batch: each statement must complete to finish a unit of work (UOW) and produce a usable output for the next.

One of my thoughts is to look at ways to combine the three SQLs into one or two, because they are almost identical.

Another thought: across these SQLs there are over 60 indexes, with index scans on most of them, terrible code made possible by a code generator. I would like to see whether I can use multi-dimensional clustering (MDC) to reduce the scans, and the heavy load that excessive indexing imposes on the CPU and run time, to say the least.
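For anyone unfamiliar with MDC, a minimal sketch of the idea (my assumption: DB2 for LUW; the table and column names below are illustrative, not from the actual Siebel schema) is to rebuild one heavily scanned table clustered on a couple of low-cardinality columns:

-- Hypothetical sketch only: names invented for illustration.
-- MDC must be declared at CREATE TABLE time via ORGANIZE BY DIMENSIONS.
CREATE TABLE S_SRV_REQ_MDC (
    ROW_ID        VARCHAR(15) NOT NULL,
    SR_STAT_ID    VARCHAR(30),
    CREATED       DATE,
    -- coarsen the date into month buckets so the dimension stays low-cardinality
    CREATED_MONTH INTEGER GENERATED ALWAYS AS (YEAR(CREATED) * 100 + MONTH(CREATED))
)
ORGANIZE BY DIMENSIONS (SR_STAT_ID, CREATED_MONTH);

Each distinct combination of dimension values gets at least one extent, so dimensions should be low-cardinality (status codes, month buckets); clustering on something like ROW_ID would explode storage.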

The third thought I have is whether an MQT can help in this case. Can anyone give his/her considered thought and opinion?
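For reference, the basic shape of a deferred-refresh MQT in DB2 LUW looks like this. This is a sketch only: the inner query would be whichever join block you want to pre-compute, S_SRV_REQ is my guess at the driving table (it is not named in the attached SQL), and ATTRIB_01 is an invented column:

-- Hypothetical: pre-join two tables into a table the optimizer
-- can route matching queries to.
CREATE TABLE SRV_REQ_MQT AS (
    SELECT a.ROW_ID, b.ATTRIB_01
    FROM   S_SRV_REQ a
           LEFT OUTER JOIN S_SRV_REQ1_FNX b ON a.ROW_ID = b.PAR_ROW_ID
)
DATA INITIALLY DEFERRED
REFRESH DEFERRED
ENABLE QUERY OPTIMIZATION;

REFRESH TABLE SRV_REQ_MQT;

REFRESH DEFERRED is needed here because outer joins are not allowed in REFRESH IMMEDIATE MQTs; the trade-off is that the session must tolerate stale data (SET CURRENT REFRESH AGE ANY) before the optimizer will route queries to it.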

Finally, any other idea that anyone has for consideration will be highly appreciated.

Also, to make it worth the effort and to show my heightened concern, I am willing to make each of my questions a separate but related question so that people can focus on the areas where they believe they can offer the most effective advice. Is that okay with everyone?
Thanks everyone.
WLM-SET1-SQL.sql
Question by:Enyinnaya
4 Comments
 
LVL 37

Accepted Solution

by:
momi_sabag earned 167 total points
Hi,
these are pretty nasty SQL statements. I did not quite understand whether you can change them, or whether you have to make them run faster without actually changing the code.

As for the MQTs: I believe they can provide a great improvement, assuming the data in those tables is fairly static and does not change often; otherwise you will have to keep rebuilding the MQTs. But still, even if you need to rebuild them once a day, if you can use a single MQT to knock off this part of the query:


LEFT OUTER JOIN S_SRV_REQ2_FNX c ON a.row_id=c.par_row_id
LEFT OUTER JOIN S_SRV_REQ1_FNX b ON a.row_id=b.par_row_id
LEFT OUTER JOIN S_ASSET d ON a.asset_id=d.row_id
LEFT OUTER JOIN S_FN_ACCNT_AST e ON (a.asset_id=e.FN_ACCNT_ID and e.RELATION_TYPE_CD = 'Primary')
LEFT OUTER JOIN s_asset f ON e.ASSET_ID=f.row_id
LEFT OUTER JOIN s_contact g ON g.row_id=a.cst_con_id
LEFT OUTER JOIN s_asset_con h ON (h.CONTACT_ID=g.row_id and h.asset_id=a.asset_id
and h.RELATION_TYPE_CD='Obligor' )
LEFT OUTER JOIN s_assetcon_addr i ON (i.asset_con_id=h.row_id
                             and i.use_type_cd='Primary')
LEFT OUTER JOIN s_addr_per j ON j.row_id=i.addr_id
LEFT OUTER JOIN S_FN_ACCNT1_FNX k ON k.row_id=d.row_id
LEFT OUTER JOIN s_user l ON l.row_id = a.last_upd_by
LEFT OUTER JOIN S_FN_ACCNT2_FNX m ON m.row_id=d.row_id
LEFT OUTER JOIN S_FN_ACCNT_FNX n ON n.row_id=d.row_id
LEFT OUTER JOIN S_CONTACT o ON a.created_by=o.PAR_ROW_ID
LEFT OUTER JOIN S_SRV_REQ3_FNX p ON a.row_id=p.row_id
LEFT OUTER JOIN S_PROD_INT Q ON d.PROD_ID=Q.ROW_ID
LEFT OUTER JOIN s_lang LANG     ON g.PREF_LANG_ID = LANG.lang_cd
LEFT OUTER JOIN S_POSTN OP      ON d.PR_POSTN_ID = OP.PAR_ROW_ID
LEFT OUTER JOIN S_USER OU       ON  OU.PAR_ROW_ID = OP.PR_EMP_ID
LEFT OUTER JOIN S_CONTACT OWNER ON OWNER.ROW_ID = OU.PAR_ROW_ID
LEFT OUTER JOIN S_PARTY PRTY    ON PRTY.ROW_ID = OP.PAR_ROW_ID
LEFT OUTER JOIN S_POSTN MGRP    ON MGRP.ROW_ID = PRTY.PAR_PARTY_ID
LEFT OUTER JOIN S_USER MU       ON MU.PAR_ROW_ID = MGRP.PR_EMP_ID
LEFT OUTER JOIN S_CONTACT MGR   ON MGR.ROW_ID = MU.PAR_ROW_ID
LEFT OUTER JOIN S_ASSET_X AX    ON AX.PAR_ROW_ID = d.ROW_ID
LEFT OUTER JOIN S_ASSET_CON PRCON ON (PRCON.CONTACT_ID = d.PR_CON_ID
and d.ROW_ID = PRCON.ASSET_ID)
LEFT OUTER JOIN s_assetcon_addr GSA ON (GSA.asset_con_id=PRCON.row_id
and GSA.use_type_cd='Primary')
LEFT OUTER JOIN s_addr_per GS ON GS.row_id=GSA.addr_id


so that you end up with a single join against the MQT, that might be very good.
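A sketch of what that could look like, taking just the asset-side chain from the join list above (assuming DB2 LUW; I am inventing the projected columns, since the SELECT list is not shown in the post):

CREATE TABLE ASSET_DENORM_MQT AS (
    SELECT d.ROW_ID AS ASSET_ID,
           f.ROW_ID AS PRIMARY_ASSET_ID,
           Q.ROW_ID AS PROD_ID
    FROM   S_ASSET d
           LEFT OUTER JOIN S_FN_ACCNT_AST e
                ON d.ROW_ID = e.FN_ACCNT_ID AND e.RELATION_TYPE_CD = 'Primary'
           LEFT OUTER JOIN S_ASSET f ON e.ASSET_ID = f.ROW_ID
           LEFT OUTER JOIN S_PROD_INT Q ON d.PROD_ID = Q.ROW_ID
)
DATA INITIALLY DEFERRED
REFRESH DEFERRED;

REFRESH TABLE ASSET_DENORM_MQT;
-- Deferred MQTs are only considered by the optimizer when the
-- session allows stale data:
SET CURRENT REFRESH AGE ANY;

The main query would then do one LEFT OUTER JOIN against ASSET_DENORM_MQT on a.asset_id instead of the four joins it replaces. Since the job runs once a day starting at midnight, a REFRESH TABLE just before it could fit the window.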
 
 
LVL 8

Assisted Solution

by:mustaccio
mustaccio earned 166 total points
Would you care to post the explain plans for these statements? Did you have a chance to compare the insert performance vs. the selects alone? Does monitoring show any particular bottleneck in the system?
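In case it helps, the usual way to capture those plans on DB2 LUW is below (a sketch from memory: MYDB is a made-up database name, and the path to EXPLAIN.DDL varies by install, typically under the instance's sqllib/misc directory):

-- One-time setup: create the explain tables in your schema
db2 -tf sqllib/misc/EXPLAIN.DDL
-- Populate them for one of the statements
db2 "EXPLAIN PLAN FOR <paste one of the SELECTs here>"
-- Format the most recent plan into a readable report
db2exfmt -d MYDB -1 -o plan.txt

The plan.txt report shows the operator tree, estimated cardinalities, and which of those 60+ indexes are actually being used.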
 
LVL 61

Assisted Solution

by:gheist
gheist earned 167 total points
Backing http:#37704035.
But you can also replace the joins with a simple WHERE ... AND ... AND ... and expect the built-in optimizer to sort out what is right; explicit joins can sort of kill the internal optimizer.
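To make that concrete, with one caveat worth flagging: a WHERE-style rewrite is only equivalent for inner joins, and the posted statements use LEFT OUTER JOIN, so rows without a match would disappear under this form. For joins that are effectively inner it would look like this (S_SRV_REQ is my guess at the driving table, which is not named in the post; the join columns come from the posted join list):

SELECT a.ROW_ID, d.ROW_ID
FROM   S_SRV_REQ a, S_ASSET d, S_PROD_INT Q
WHERE  a.ASSET_ID = d.ROW_ID
AND    d.PROD_ID  = Q.ROW_ID;

Worth checking first whether the outer joins are really needed; if the business logic guarantees a match, converting them to inner joins (in either syntax) gives the optimizer far more freedom to reorder.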
