eviglotti

asked on

Slow SQL query in Microsoft SQL Server (no joins)

Hello,

I have two tables in two different databases on Microsoft SQL Server 2012; both tables have about 5 million rows. They do not have the same schema at all, but both databases are on the same server instance. This query:

select sum(RecordCount)
 from TableAinDatabase1

Returns 5,717,342 records and takes 5 seconds

But this query:

select sum(RecordCount)
 from TableBinDatabase2

Returns 5,160,207 records and takes 1.5 minutes.

I am not a DBA, but since there are no joins (and thus I don't think indexes are an issue), I am lost on how something could be THAT much faster/slower, and I'm not sure where to look next.

Thanks!
arnold

What are the specs on the system?
Is TableAinDatabase1 queried regularly, while the second table is just sitting there?
When a database is active, its pages are kept in memory; when you query the second table, the system may have to push the first out of the buffer cache and then read the second from disk.
Run the same query on the second table twice in a row, and then query the first again. What are the results?

Look at the query/execution plan for each query and see what it tells you.
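For example (a sketch, using the table names from the question; run each statement in its own database), you can capture the actual plan and the I/O counts in one batch:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SET STATISTICS XML ON;   -- also returns the actual execution plan for each statement

SELECT SUM(RecordCount) FROM TableAinDatabase1;
SELECT SUM(RecordCount) FROM TableBinDatabase2;

SET STATISTICS XML OFF;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

The "logical reads" figure in the Messages tab tells you how many 8 KB pages each query touches, which is usually the whole story for a full-scan SUM like this.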
Troubleshooting starts with analyzing the slow query. If the difference between the estimated and actual number of rows in the query execution plan is higher than 10%, the statistics are stale. How much performance is affected depends on the query and the execution plan; the same stale statistics can have a different effect on two different queries.
For more reasons why a query may run slowly, see: http://www.sqlmvp.org/are-you-experiencing-slow-running-queries-in-sql-server/
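If you want to check how stale the statistics actually are, something along these lines should work (a sketch; sys.dm_db_stats_properties requires SQL Server 2012 SP1 or later, and the table name is taken from the question - run it in each database against the relevant table):

SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('TableAinDatabase1');

A large modification_counter or an old last_updated date on the slow table would point to stale statistics.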
I believe the statistics on the second table are out of date. Can you update the statistics and then try again?

Also, I agree with the other experts. If an object is frequently accessed, SQL Server will still have its pages in the buffer pool and will therefore return results much faster. When capturing performance statistics, it is preferable to start from a cold buffer/clean cache. One way to achieve this would be to run the following before you run the query:

(DISCLAIMER: Please run these statements on your Dev/QA/Staging environments only - do not use them in production.)

DBCC DROPCLEANBUFFERS;
DBCC FREEPROCCACHE;
GO

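To see whether the fast table is simply already sitting in the buffer pool while the slow one is not, you can also count cached pages per table with the standard DMVs (a sketch; run it in each database):

SELECT o.name AS table_name,
       COUNT(*) AS cached_pages,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
     ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
     OR (au.type = 2 AND au.container_id = p.partition_id)
JOIN sys.objects AS o ON p.object_id = o.object_id
WHERE bd.database_id = DB_ID()
  AND o.is_ms_shipped = 0
GROUP BY o.name
ORDER BY cached_pages DESC;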

Returns 5,717,342 records and takes 5 seconds
(...)
 Returns 5,160,207 records and takes 1.5 minutes.
Each of those queries can return only one row. Is the 5M figure the value returned in that single row?
Are both really tables, or is one of them by chance a view?
If both are tables, do they have a clustered index?
Can you please post your actual queries?

Also, there is a high possibility that the indexes need rebuilding and the statistics need updating. If these are not maintained regularly, the SQL engine will give you a suboptimal plan.

Try this:

ALTER INDEX ALL ON [yourtablename] REBUILD ;
UPDATE STATISTICS [yourtablename];

Note: first try rebuilding the indexes only. If that does not work, then update the statistics as well.
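Before rebuilding, it may also be worth confirming what indexes each table actually has and how fragmented they are (a sketch; substitute the table name and run in the relevant database):

SELECT i.name AS index_name,
       i.type_desc,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.indexes AS i
JOIN sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('TableBinDatabase2'), NULL, NULL, 'LIMITED') AS ps
     ON ps.object_id = i.object_id
    AND ps.index_id = i.index_id;

This also answers the clustered-index question above: type_desc shows HEAP, CLUSTERED, or NONCLUSTERED for each index.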
ASKER CERTIFIED SOLUTION
Scott Pletcher
(This solution is only available to Experts Exchange members.)
eviglotti

ASKER

Thanks everyone on these items. A few things:

1. It is true that TableAinDatabase1 has been accessed more over the years than TableBinDatabase2, but neither is used regularly. They are both star-schema reporting tables and are only used a couple of times a day. TableAinDatabase1 has been around for 10+ years, whereas the slower one was created just a few months ago. I have also swapped the order in which I run the two queries, and I still see the same results.

2. Unfortunately, I'm not sure that updating statistics is going to help us, as I have a maintenance plan that updates statistics on both tables in these two databases, so they both have that in common.

3. Both are actual tables, not views, and both have 5 million rows, not just a value of 5 million in the sum. The RecordCount field is basically 1 for all rows, though sometimes it is 0 depending on the type of record. So we really are looking at about 5 million rows in both tables. Both have a clustered index on their single primary key.

4. As for indexes, interestingly, the indexes on the fast table have been there forever, though I rebuild them once a month or so. The indexes on the slow table are dropped and recreated daily: we drop the indexes, load the data, and recreate the indexes.

5. Neither table has an index that contains RecordCount. No process is modifying the tables during the time of the query.

However, there is no question that the size of the table makes a huge difference. I thought they had a similar number of columns, but I was wrong; the slow one has twice as many. I'm not sure why twice as many columns means such a disproportionate slowdown, as it just doesn't seem in proportion. Maybe past a certain row width the data has to be split across more pages or something.

We will play with the table size and go from there.

Thanks!
This is definitely it. I misled the group in that I thought the tables were about equally wide, but clearly they are not. Thank you, we will investigate accordingly.
Access to the wider table will be slower. SQL Server stores data in 8 KB pages; the wider the row, the more pages are required to store a given number of rows. The cost of scanning the table is directly proportional to the number of pages that must be read, i.e. inversely proportional to the density of rows per page.
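A quick way to see that in numbers (a sketch; run in each database with the respective table name) is to compare page and row counts, which gives you the rows-per-page density directly:

SELECT OBJECT_NAME(ps.object_id) AS table_name,
       ps.in_row_data_page_count,
       ps.row_count,
       ps.row_count / NULLIF(ps.in_row_data_page_count, 0) AS rows_per_page
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('TableBinDatabase2')
  AND ps.index_id IN (0, 1);   -- heap or clustered index only

If the wide table comes out at far fewer rows per page, the 5-second vs. 1.5-minute gap is largely the extra pages being read.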