4ncid

asked on

IIS Worker Process Uses 100% CPU

We have 12 web sites on Windows 2008 R2 Standard 64-bit with 2GB RAM and 2 processors. It runs in VMware at a hosting company and has been active about 2 months. The ASP.NET code targets Framework 4.0. No code changes or Windows Updates have been applied since 2/23/15.

On 3/5/15, one web site (X.com) started slowing down or stopping completely for users. There may be up to 25 or 50 users hitting it simultaneously. The Task Manager Performance tab showed CPU usage pegged at 100%, and the Processes tab showed the w3wp.exe IIS Worker Process for X.com as the culprit.

Each episode would last 15 to 120 seconds, sometimes stopping on its own and sometimes only after I killed the process. It recurred at random times, getting more frequent as the day progressed. Resetting IIS and rebooting the server provided only temporary relief. All other web sites functioned normally, including one with twice the user load.

The only cure was to move X.com to a similar VM used for development. No incidents since.

So - what the heck???  Thanks in advance.....
ASKER CERTIFIED SOLUTION
Dan McFadden

4ncid

ASKER

Thank you, Dan, for the reply:
1. Not yet - working on that.
2. No errors for the IP bound to this site, even at times of 100% CPU. Lots of entries like "Timer_ConnectionIdle".
3. The Event Log did have errors about this app being unable to write to a text file because it was locked. The app wrote info lines to a text file using a .NET StreamWriter (the pattern is sketched after this list). I disabled this logging midday; those Event Log error messages stopped, but the CPU problem continued. No other clues in the Event Logs.
4. Yes. Database activity was 25% below normal.
5. Not sure how to check this. I didn't notice excessive memory usage at the time.
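For context, the logging pattern was roughly this (a simplified sketch, not the production code; the path and names are illustrative). Every request could open the same file at once, which explains the lock errors; serializing writers inside the app pool avoids them:

using System;
using System.IO;

class AppLogger
{
    // One lock per app domain; enough when a single worker process
    // (app pool) owns the file.
    static readonly object _sync = new object();
    const string LogPath = @"C:\logs\app.log";  // illustrative path

    public static void Write(string message)
    {
        // Without the lock, two concurrent requests opening the file
        // throw IOException ("the process cannot access the file...").
        lock (_sync)
        {
            using (var sw = new StreamWriter(LogPath, true))  // true = append
            {
                sw.WriteLine("{0:u} {1}", DateTime.UtcNow, message);
            }
        }
    }
}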
Hopefully you have more than the default fields enabled in the HTTP logging feature. Otherwise, I recommend enabling all W3C fields. In your case there are fields that may be of help, for example time-taken, cs-bytes and sc-bytes.

Basically these fields are defined as:

time-taken = how long the request took to be executed, in milliseconds
cs-bytes = bytes sent by the client -or- bytes received by the server (inbound data)
sc-bytes = bytes sent by the server (outbound data)

Of course the field "cs-uri-stem" is important to see what URL the client request hit.  The "cs-uri-query" may also be of use if the app uses query strings instead of posting back the data.
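To show what you can do with those fields once they're enabled, here is a rough C# sketch that scans a W3C log for slow requests. The log path and the 5-second threshold are only examples, and the column positions are read from the log's #Fields header rather than hard-coded:

using System;
using System.IO;

class SlowRequestScanner
{
    static void Main(string[] args)
    {
        // Illustrative path and threshold; point this at one of your
        // W3C logs (u_ex*.log) and tune the cutoff in milliseconds.
        string logPath = args.Length > 0 ? args[0]
            : @"C:\inetpub\logs\LogFiles\W3SVC1\u_ex150305.log";
        int thresholdMs = 5000;

        string[] fields = null;
        int iTime = -1, iUri = -1, iScBytes = -1;

        foreach (string line in File.ReadLines(logPath))
        {
            if (line.StartsWith("#Fields:"))
            {
                // The #Fields directive gives the column order for this log.
                fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                iTime = Array.IndexOf(fields, "time-taken");
                iUri = Array.IndexOf(fields, "cs-uri-stem");
                iScBytes = Array.IndexOf(fields, "sc-bytes");
                continue;
            }
            if (fields == null || line.StartsWith("#"))
                continue;

            string[] cols = line.Split(' ');
            int ms;
            if (iTime >= 0 && iTime < cols.Length
                && int.TryParse(cols[iTime], out ms) && ms >= thresholdMs)
            {
                Console.WriteLine("{0,8} ms  {1}  sc-bytes={2}",
                    ms,
                    iUri >= 0 && iUri < cols.Length ? cols[iUri] : "?",
                    iScBytes >= 0 && iScBytes < cols.Length ? cols[iScBytes] : "?");
            }
        }
    }
}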

In the HTTPERR logs, "Timer_ConnectionIdle" entries can be ignored.

Use Task Manager to see how much RAM a process is consuming. I would make the following columns visible under the Processes tab:

1. Image Name
2. PID - process ID
3. User Name
4. CPU - [current CPU usage in percent]
5. CPU Time - [total time the process has spent on the CPU]
6. Working Set (Memory) - [memory used by a process but can be released if needed]
7. Memory (Private Working Set) - [memory reserved exclusively for the process]
8. Handles
9. Threads

#7 is a good estimate of how much RAM a process is using. Measuring the real amount of RAM a process uses is a bit more complicated, but Private Working Set is useful for quickly troubleshooting issues.
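If you'd rather watch this programmatically than in Task Manager, here's a quick sketch using System.Diagnostics. One caveat: PrivateMemorySize64 reports private bytes, which tracks close to, but is not exactly, Task Manager's Private Working Set:

using System;
using System.Diagnostics;

class WorkerProcessMemory
{
    static void Main()
    {
        // w3wp is the IIS worker process; expect one per active app pool.
        foreach (Process p in Process.GetProcessesByName("w3wp"))
        {
            Console.WriteLine(
                "PID {0}: WorkingSet={1:N0} KB  PrivateBytes={2:N0} KB  Handles={3}  Threads={4}",
                p.Id,
                p.WorkingSet64 / 1024,
                p.PrivateMemorySize64 / 1024,
                p.HandleCount,
                p.Threads.Count);
        }
    }
}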

Dan
4ncid

ASKER

The web site in question was moved to a different virtual machine with twice the resources (4GB instead of 2GB RAM, 2 processors instead of 1). It still had at least 2 similar events yesterday where both CPUs stayed around 85% for several minutes with the same IIS worker process. Working Set memory was about 610K; Private Working Set about 350K.

I was running the MS Debug Diagnostics 1.2 tool, which produced and analyzed dump files. The All Functions area lists "System.Threading.WaitHandle.WaitMultiple(System.Threading.WaitHandle[], Int32, Boolean, Boolean)".

Top Thread:  2144
General Info: Type = Unknown Operation
Entry Point = clr!Thread::IntermediateThreadProc
No actual files are named
4ncid

ASKER

We tracked the problem to a query that was returning about 14,000 rows with 8 fields. All field results were concatenated into a string and then stored in a varchar(max) field.

Using StringBuilder instead of concatenation stopped the CPU drain during this step, but it still stalled trying to store the results in SQL. We decided storing these results was not critical and are now skipping this operation.
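The change was essentially the difference between these two methods (a simplified sketch; the real code reads from our query results, and the delimiter and names here are made up):

using System.Data;
using System.Text;

static class ResultBuilder
{
    // Before: repeated concatenation. Each += allocates a new string and
    // copies everything so far, so ~14,000 rows x 8 fields turns into an
    // O(n^2) copy loop with heavy GC pressure - the CPU spike we saw.
    public static string ConcatSlow(DataTable table)
    {
        string result = "";
        foreach (DataRow row in table.Rows)
            for (int i = 0; i < table.Columns.Count; i++)
                result += row[i] + "|";
        return result;
    }

    // After: StringBuilder appends into a growable buffer, roughly O(n).
    public static string ConcatFast(DataTable table)
    {
        var sb = new StringBuilder();
        foreach (DataRow row in table.Rows)
            for (int i = 0; i < table.Columns.Count; i++)
                sb.Append(row[i]).Append('|');
        return sb.ToString();
    }
}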
For future reference: if you had HTTP logging enabled in IIS with all available fields, you might have been able to find the culprit page by looking for requests with high "time-taken" values. That is a good indicator of pages that take a long time to respond to an HTTP request, and of possible pain points in the application in terms of response time and server resources consumed.

In the HTTP logs, the page calling the query would most likely also have had larger values for "sc-bytes", indicating that after the server received the HTTP request, it returned a large amount of data to the client.

Quick HTTP (W3C) log field definitions:
1. time-taken = how much time it took to deliver the page to the client [it's a start-to-stop timer]
2. sc-bytes = server-to-client bytes: bytes sent by the server to the client after processing the HTTP request
3. cs-bytes = client-to-server bytes: bytes received by the server for a specific HTTP request

All three are worth checking occasionally to ensure that your application is operating within expected ranges.

For reasons/situations like this, I always recommend logging in multiple places in multi-tier applications.

Dan
4ncid

ASKER

Thanks Dan. What do you use for analyzing logs? I have been using WebLogExpert, which is good for checking traffic but not so much for finding errors as you suggested.
For quick and simple reports, I use Log Parser with Log Parser Studio as the GUI.

If I really want to dig through the logs, I use a script I wrote that takes the entries from the log files and puts them in a SQL Server DB. Then I can search through them using T-SQL, which lets me create custom queries specific to the apps I need to monitor.

You could also use Log Parser for custom or complex analysis, but I've been using my scripts for some time now... it's a control thing.
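For what it's worth, here is a stripped-down C# sketch of that import idea (the table, columns and connection string are made up; my actual script handles more fields and edge cases):

using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;

class IisLogImporter
{
    static void Main(string[] args)
    {
        // Assumes a table like dbo.IisLog(LogDate, UriStem, TimeTakenMs)
        // already exists - the names here are illustrative.
        string logPath = args[0];
        string connStr = "Server=.;Database=Logs;Integrated Security=true";

        var dt = new DataTable();
        dt.Columns.Add("LogDate", typeof(string));
        dt.Columns.Add("UriStem", typeof(string));
        dt.Columns.Add("TimeTakenMs", typeof(int));

        string[] fields = null;
        int iDate = -1, iUri = -1, iTime = -1;
        foreach (string line in File.ReadLines(logPath))
        {
            if (line.StartsWith("#Fields:"))
            {
                // Column order comes from the log's #Fields header.
                fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                iDate = Array.IndexOf(fields, "date");
                iUri = Array.IndexOf(fields, "cs-uri-stem");
                iTime = Array.IndexOf(fields, "time-taken");
                continue;
            }
            if (fields == null || line.StartsWith("#")) continue;

            string[] cols = line.Split(' ');
            int ms;
            if (iDate >= 0 && iUri >= 0 && iTime >= 0
                && iDate < cols.Length && iUri < cols.Length && iTime < cols.Length
                && int.TryParse(cols[iTime], out ms))
            {
                dt.Rows.Add(cols[iDate], cols[iUri], ms);
            }
        }

        using (var bulk = new SqlBulkCopy(connStr))
        {
            bulk.DestinationTableName = "dbo.IisLog";
            bulk.WriteToServer(dt);
            // From here it's plain T-SQL, e.g. top pages by average TimeTakenMs.
        }
    }
}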

Dan