Recently migrated from Windows 2003 / IIS 6 to Windows 2008 (32-bit) / IIS 7, going from a single server to a two-server web farm, plus a much bigger/faster database server. Prior to the upgrade, the single-server environment was bottlenecked by database capacity: we'd hit a certain transaction threshold and the sites would bog down, with the database pegged at 95%. So that was the issue... or so I thought. In the new environment, the database cruises along at 20% under the same load. However, I still occasionally see site performance bog down (page loads over 30 seconds, etc.) on pages that normally load in 1 second or less. When this happens, the web server is barely using the CPU (avg 15%, spikes to 40 or 50%) and only 2 GB of the 4 GB of available memory. The load I'm talking about is roughly 100k page views (ASP.NET database-driven pages with lots of images) a day... yes, a day. Traffic is spread fairly evenly with no major peak hour, just slow hours in the early AM.
We're running ASP.NET 2.0 apps that have a legacy dependency on 1.1 modules, so we need to run our app pools in Classic mode (the IIS 6.0 equivalent). Using perfmon, I can see that once active requests climb above 30 or so, they begin to rise rapidly and the request queue count goes from 0 to 40 or 50. During this time, pages are served very slowly and, of course, some time out.
I can consistently recreate the situation by taking one of the servers out of the load balancer and throwing all requests at one server. Within a couple of minutes the active request count goes way up and the site performs badly. Again, no bottleneck in network, database, or memory. It looks like a thread pool issue. I've made some minor tweaks to the app pool configuration, but for the most part it made no difference.
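For reference, the kind of tuning I've been experimenting with is along these lines, a machine.config fragment following the widely cited Microsoft KB 821268 guidance for ASP.NET thread pool contention. The specific values below are illustrative (the per-CPU recommendations computed for a 2-CPU box), not something I've confirmed is right for our hardware:

```xml
<!-- Illustrative machine.config fragment per KB 821268 guidance.
     maxWorkerThreads / maxIoThreads are per-CPU values; the
     minFreeThreads / maxconnection values are 88*N, 76*N and 12*N
     computed here for N = 2 CPUs. -->
<system.web>
  <!-- autoConfig must be false for the explicit values to take effect -->
  <processModel autoConfig="false"
                maxWorkerThreads="100"
                maxIoThreads="100" />
  <httpRuntime minFreeThreads="176"
               minLocalRequestFreeThreads="152" />
</system.web>
<system.net>
  <connectionManagement>
    <!-- outbound connections per remote host: 12 * N CPUs -->
    <add address="*" maxconnection="24" />
  </connectionManagement>
</system.net>
```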
My question is: how can I get IIS to utilize the CPU resources available (the web server barely breaks a sweat)? Also, which performance counters would most likely reveal where the bottleneck is? I'm having failed request tracing installed on the servers to see if that reveals anything. The pages that end up sitting in the worker process queue are the same pages that normally get served up in 1 second.
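For context, these are the perfmon counter paths I've been watching so far (I'm assuming these are the relevant ones for a Classic-mode / .NET 2.0 setup; exact names may vary slightly by machine):

```text
\ASP.NET\Requests Queued
\ASP.NET\Requests Current
\ASP.NET\Request Wait Time
\ASP.NET Applications(__Total__)\Requests Executing
\ASP.NET Applications(__Total__)\Requests/Sec
\.NET CLR LocksAndThreads(w3wp)\Contention Rate / sec
\Process(w3wp)\Thread Count
```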
Thanks for any help.