In SQL Server, what would cause a query's duration to increase when I/O and CPU time are cut by 50%?
I have three queries that I've worked to optimize, and my efforts have cut CPU usage and I/O counts by half. (I verified these values via Profiler and SET STATISTICS output in Query Analyzer.) When I execute the "optimized" queries after running DBCC DROPCLEANBUFFERS, the duration is roughly double the original duration. When I execute the queries without dropping the buffers, the duration is roughly half the original value.
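For reference, this is roughly the harness I use for each cold-cache measurement (the SELECT is just a stand-in for my actual queries):

    -- Flush dirty pages, then clear the buffer cache so the run starts cold
    -- (dev/test server only, never production).
    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;

    -- Report read counts and CPU/elapsed time for the statement below.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Stand-in for one of the three queries being compared.
    SELECT COUNT(*) FROM dbo.Orders;

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;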
Here are some values returned by Profiler after dropping the buffers:
CPU: 6969 (old), 3250 (new)
Reads: 537075 (old), 191632 (new)
Duration: 13235 (old), 36990 (new)
When I look at physical I/O via SET STATISTICS IO ON, the number of physical reads also drops significantly. So how can duration go up when CPU usage goes down, logical I/O goes down, and physical I/O goes down? What can I do to prove/fix the problem?