qbjgqbjg (United States) asked:
Using MAX to select a record

I am creating an employee report that allows the user to select a date. The report needs to provide the position information, the salary, and the home dept (along with some other data) for all active employees on that date. The historical home dept, position data, and salary are stored in three different tables, each with a change date. So I created a SQL command and based the report on that command. It took a little over two hours to run. I then created a stored procedure, which takes about 30 minutes to run. Our DBA thinks that is still too long. I am attaching a copy of the stored procedure. I would welcome any suggestions on how this might be made to run faster. Thanks.

Attachment: usp-COO-EmployeeHistory.txt
Kent Fichtner (United States):

We had the same issue with a report that we built. The way I handled it was to make a stored procedure out of the query I run for Crystal. The SP inserts the query's results into a new table, and then the Crystal report just does a SELECT * FROM that table, which runs really fast. It works great for us because the report isn't run too often, so we just refresh the table with new information once a day in the morning.

Another way I sped things up was to split the WHERE clause with the "Selection Formulas" area in Crystal. That way SQL Server does part of the work and Crystal does the rest (we have Crystal and SQL Server on two different servers).

Hope that helps.
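A minimal sketch of the nightly staging-table approach Kent describes. All names here (ReportStaging, usp_RefreshReportStaging, vEmployeeHistory, the columns) are illustrative, not from the attached procedure:

```sql
-- Staging table holding the report's result set, refreshed once
-- a day; the Crystal report only ever reads from it.
CREATE TABLE dbo.ReportStaging
(
    Appoint_ID INT           NOT NULL,
    HomeDept   VARCHAR(20)   NULL,
    Position   VARCHAR(50)   NULL,
    Salary     DECIMAL(12,2) NULL
);
GO

CREATE PROCEDURE dbo.usp_RefreshReportStaging
AS
BEGIN
    SET NOCOUNT ON;

    -- Throw away yesterday's snapshot and rebuild it.
    TRUNCATE TABLE dbo.ReportStaging;

    INSERT INTO dbo.ReportStaging (Appoint_ID, HomeDept, Position, Salary)
    SELECT Appoint_ID, HomeDept, Position, Salary
    FROM   dbo.vEmployeeHistory;   -- stands in for the slow report query
END;
GO

-- The Crystal report then only needs:
-- SELECT * FROM dbo.ReportStaging;
```

The trade-off is freshness: the report shows data as of the last refresh, which is fine if it is scheduled ahead of when users run the report.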
Surendra Nath:
1) From an initial look, change the table variable to a permanent table (@EHistTable to __EHistTable); this might improve the performance.

2) If appointment_id on the above table is unique, make it the primary key; or, if a combination of columns is unique, create a clustered index on it after the insert into the EHistTable is done.
   2a) Just in case appointment_id is not unique, create a non-clustered index on it instead; this will definitely boost the performance.
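A hedged sketch of the indexing options above; only the table name and appointment_id come from this thread, and chg_date is an assumed column:

```sql
-- 2) If appointment_id is unique, make it the primary key
--    (which creates a clustered index by default):
ALTER TABLE dbo.__EHistTable
    ADD CONSTRAINT PK_EHistTable PRIMARY KEY CLUSTERED (appointment_id);

-- ...or, if only a combination of columns is unique, cluster on
-- that combination after the inserts are done:
-- CREATE CLUSTERED INDEX CIX_EHistTable
--     ON dbo.__EHistTable (appointment_id, chg_date);

-- 2a) If appointment_id is not unique, a non-clustered index
--     on it still helps the joins:
-- CREATE NONCLUSTERED INDEX IX_EHistTable_ApptID
--     ON dbo.__EHistTable (appointment_id);
```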

Apart from these, I can see there are many outer joins. Outer joins can slow your query down a lot, so try to avoid them where possible.

If you can provide your table schemas, the indexes, and the number of rows in each table, then we can suggest more improvements.

Do these first and let us know how it ran.

By the way, run it three times and take the average of the 2nd and 3rd runs. The first run of a stored procedure can take much longer because the query plan is being built at that time; subsequent runs benefit from the cached plan.
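One way to time those runs uses standard SQL Server session options; the procedure name and @Date parameter below are assumed from the attachment's filename, not confirmed:

```sql
-- Report elapsed/CPU time and I/O per statement for each run.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

EXEC dbo.usp_COO_EmployeeHistory @Date = '2013-06-30';  -- run 1: plan compiled
EXEC dbo.usp_COO_EmployeeHistory @Date = '2013-06-30';  -- run 2
EXEC dbo.usp_COO_EmployeeHistory @Date = '2013-06-30';  -- run 3

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```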

I hope this info helps you out.
didnthaveaname:

Have you checked the execution plan for the underlying query to see where the most expensive parts are?  There may be some supporting indexes that could be added to some of the underlying tables to speed up the query. Additionally, if you are seeing the largest performance hits coming from the join operations with the table variable, you could consider a temp table with some indexing.
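A minimal sketch of the temp-table alternative mentioned above; the column list is illustrative, since @EHistTable's real definition is in the attached procedure:

```sql
-- Unlike a table variable, a temp table can be indexed after it
-- is created and populated:
CREATE TABLE #EHistTable
(
    appointment_id INT           NOT NULL,
    home_dept      VARCHAR(20)   NULL,
    position_cd    VARCHAR(20)   NULL,
    salary         DECIMAL(12,2) NULL
);

-- ...populate #EHistTable here, then index it before the big joins:
CREATE NONCLUSTERED INDEX IX_EHist_ApptID
    ON #EHistTable (appointment_id);
```

A temp table also gives the optimizer real statistics to work with, which a table variable does not.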
qbjgqbjg:


Thanks for the suggestions; I will work on it.

"apart from these, I can see there are many outer joins, outer joins will slow your query a lot sometimes, so, try to avoid them."

Where do you see outer joins, and how would I avoid them? Do you mean the LEFT OUTER JOINs?
Shaun Kline (United States):
(Solution text available to members only.)
qbjgqbjg:

So for this section:

UPDATE @EHistTable

                                where ESHSHSTD.SHST_appoint_id = E.APPOINT_ID and
                                ESHSHSTD.SHST_CHG_DATE <= @Date)  

What would it look like to move the subquery to the FROM clause?
Surendra Nath:

Yes, I mean the LEFT OUTER JOINs.

@Shaun_Kline, I don't think so. If the subquery were in the SELECT clause then what you said would be correct, but in the WHERE clause it won't be. And anyway, his subquery is a correlated one, so the performance impact is fairly minimal (although avoiding it entirely would still boost performance, it is better here than a non-correlated one).
qbjgqbjg:

I moved the subqueries to the FROM and I am running it.

Moving the subquery to the FROM helped a lot. It now runs in 13 minutes.
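For later readers, the rewrite being described is roughly this shape. Only @EHistTable, ESHSHSTD, the SHST_ columns, APPOINT_ID, and @Date appear in the thread; the chg_date target column is illustrative:

```sql
-- Before: correlated subquery, re-evaluated for every row of E.
UPDATE E
SET    E.chg_date = (SELECT MAX(ESHSHSTD.SHST_CHG_DATE)
                     FROM   ESHSHSTD
                     WHERE  ESHSHSTD.SHST_appoint_id = E.APPOINT_ID
                     AND    ESHSHSTD.SHST_CHG_DATE <= @Date)
FROM   @EHistTable E;

-- After: the same lookup as a derived table in the FROM,
-- aggregated once per appointment and then joined.
UPDATE E
SET    E.chg_date = S.last_chg
FROM   @EHistTable E
JOIN  (SELECT SHST_appoint_id,
              MAX(SHST_CHG_DATE) AS last_chg
       FROM   ESHSHSTD
       WHERE  SHST_CHG_DATE <= @Date
       GROUP BY SHST_appoint_id) AS S
      ON S.SHST_appoint_id = E.APPOINT_ID;
```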
If you continue with the table-based approach (temp or permanent), I think you should put indexes on that table to help all the subsequent joins.

Also, you are consistently using LEFT OUTER JOINs and then following them with WHERE clauses on the left-joined table. This produces the effect of an inner join, so you might as well use inner joins, e.g.

UPDATE @EHistTable
FROM   @EHistTable E

An INNER JOIN here would produce the same result.
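A generic illustration of that equivalence, using the table and column names quoted earlier in the thread purely as an example:

```sql
-- The WHERE filter on the left-joined table's column rejects the
-- NULL-extended rows (NULL <= @Date is never true), so this...
SELECT E.APPOINT_ID, H.SHST_CHG_DATE
FROM   @EHistTable E
LEFT OUTER JOIN ESHSHSTD H
       ON  H.SHST_appoint_id = E.APPOINT_ID
WHERE  H.SHST_CHG_DATE <= @Date;

-- ...returns the same rows as this INNER JOIN, which gives the
-- optimizer more freedom to reorder the joins:
SELECT E.APPOINT_ID, H.SHST_CHG_DATE
FROM   @EHistTable E
INNER JOIN ESHSHSTD H
       ON  H.SHST_appoint_id = E.APPOINT_ID
       AND H.SHST_CHG_DATE <= @Date;
```

If the filter must apply only when a matching history row exists, keep the LEFT JOIN and move the date condition into the ON clause instead of the WHERE.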