Where is the bottleneck?

Hi, I hope I can explain my issue and get some ideas:

I load query results into a dataset, then loop through the rows and perform calculations on each one.

This all works fine when the result set is small, say 500 records or fewer. Once the result set grows to around 1,500 rows, the load on the database server spikes and the operation takes a minute instead of 5 seconds or less.

Where do I need to split the operations? Is 1,500 rows simply too big a result set to process in one pass?

Thanks.
kapes13 Asked:
Tony303 Commented:
Have a look at the estimated execution plan. The operators with the highest cost percentages are the problem areas. Is there a table scan in the plan? That is traditionally the biggest bottleneck.
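For example, here is a minimal sketch of capturing the plan and checking for scans (SQL Server assumed; dbo.Orders and its columns are hypothetical stand-ins for your own tables):

-- Capture the estimated plan without executing the query:
SET SHOWPLAN_XML ON;
GO
SELECT OrderID, Amount FROM dbo.Orders WHERE CustomerID = 42;
GO
SET SHOWPLAN_XML OFF;
GO

-- If the plan shows a table scan on the filter column, a covering
-- index will usually turn it into a seek:
CREATE INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID) INCLUDE (OrderID, Amount);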
kapes13 (Author) Commented:
Yup, I am looking over those timings right now. The first band of processing can handle substantially more records than the second batch (where the count goes from about 400 to 1,500). I am still trying to see where the lapse is, since both bands run the same logic, but you are on the right path for sure, and I know I am too; it's just a question of where I want to draw the final line.
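For reference, a minimal sketch of how the two bands can be timed with SET STATISTICS TIME (SQL Server assumed; the queries are hypothetical stand-ins for the real ones):

SET STATISTICS TIME ON;

-- First band (~400 rows) versus second band (~1,500 rows):
SELECT TOP (400)  OrderID, Amount FROM dbo.Orders ORDER BY OrderID;
SELECT TOP (1500) OrderID, Amount FROM dbo.Orders ORDER BY OrderID;

SET STATISTICS TIME OFF;

-- The messages pane reports CPU and elapsed time per statement; a jump
-- far out of proportion to the row count points at the plan or memory
-- pressure rather than the rows themselves.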
kapes13 (Author) Commented:
Hopefully the CPU times are accurate and I did not miss some memory issue that is keeping the record set from being processed where it should be. We are swapping out some of the processing routines and letting the smaller record sets do the bulk of the processing, and we will see how that pans out. Thanks all.
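A minimal sketch of that kind of split, assuming SQL Server and keyset (seek-based) batching; dbo.Orders, OrderID, and Amount are hypothetical names:

DECLARE @LastID int = 0, @BatchSize int = 500;
DECLARE @Batch TABLE (OrderID int PRIMARY KEY, Amount money);

WHILE 1 = 1
BEGIN
    DELETE FROM @Batch;

    -- Pull the next chunk, seeking past the last key already handled:
    INSERT INTO @Batch (OrderID, Amount)
    SELECT TOP (@BatchSize) OrderID, Amount
    FROM dbo.Orders
    WHERE OrderID > @LastID
    ORDER BY OrderID;

    IF @@ROWCOUNT = 0 BREAK;

    -- Run the per-row calculations against this chunk here,
    -- then advance the key past it:
    SELECT @LastID = MAX(OrderID) FROM @Batch;
END

Seeking on the key keeps each chunk cheap no matter how deep into the result set the loop gets, which OFFSET-style paging cannot guarantee.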