Solved

Windows 2008 R2 file server system process maxing CPU during working hours

Posted on 2013-02-06
4
727 Views
Last Modified: 2016-11-23
Greetings Experts,

I have Windows Server 2008 R2 running on an IBM x3650 M3 with 20GB of RAM and two quad-core Xeons in the 2.93GHz range, I believe. The data-bearing array has 4 500GB 15K SAS drives in RAID5. The application is unusual in that the server hosts 3.2 million small files. The folder structure is quite deep and complex (paths are sometimes 60-80 characters deep, and some of the folders contain as many as 40 thousand files).

The server is used during business hours for editing primarily AutoCAD files, JPEGs, PDFs, and DWFx files (AutoCAD's web-viewable image format). Typically around 40 users are working with the data at any one time. The content is presented to clients in real time via an IIS server.

The issue I am having is that the large folders are extremely slow to display in the clients' (as in client workstations) Explorer windows. I mean really SLOW: 15 to 30 seconds to display. When the system is really dragging, the network, disk, and memory look fine but CPU is at 70-90%. I waited until the users were gone this evening and ran some tests. I found that the simple act of accessing a folder with 40 thousand files would take the server's CPU up to 20% until the list was built (10 seconds maybe). With 40 people hammering the system, plus the IIS server, plus some automated process machines, it's no wonder that the system is slow. By myself I took it to 20% with a 3-year-old laptop.
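To make the symptom concrete, here is a rough, hypothetical illustration (not the original server environment): listing a big directory is cheap when you only read names, but a detailed Explorer view also fetches metadata for every entry, and that per-file work is what adds up over 40 thousand files.

```python
# Illustration only: compare a names-only directory listing with a listing
# that also stats every entry (roughly what a detailed Explorer view does
# to show sizes, dates, and attributes). File count is an arbitrary choice.
import os
import tempfile
import time

def make_test_dir(n_files):
    """Create a scratch directory holding n_files empty files."""
    root = tempfile.mkdtemp(prefix="bigdir_")
    for i in range(n_files):
        open(os.path.join(root, "file_%05d.dat" % i), "w").close()
    return root

def list_names_only(path):
    """One readdir pass: names only, no per-file metadata."""
    return os.listdir(path)

def list_with_metadata(path):
    """Names plus a stat() call per entry -- one extra metadata
    round-trip for every file in the folder."""
    results = []
    for name in os.listdir(path):
        st = os.stat(os.path.join(path, name))
        results.append((name, st.st_size))
    return results

root = make_test_dir(5000)

t0 = time.perf_counter()
names = list_names_only(root)
t1 = time.perf_counter()
detailed = list_with_metadata(root)
t2 = time.perf_counter()

print("names only:    %d entries in %.3fs" % (len(names), t1 - t0))
print("with metadata: %d entries in %.3fs" % (len(detailed), t2 - t1))
```

The metadata pass is the one that scales badly with folder size, which fits the observation that simply opening a 40-thousand-file folder burns CPU until the list is built.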

Prior to the current server, that role was performed by an IBM N3600 filer with similarly poor response times and high CPU loads. Thinking it was overloaded (it was also serving VMs on the SAN side), we moved to a conventional server as an experiment, with slight improvement if any. Prior to the filer we ran this function on a Dell 1950 blade, fibre-attached to a 9-disk RAID5 on an EMC CX3-20. No problems there, but we were running Win 2003/64 and XP SP3. I really think the problem lies in the interaction between Win 7/64 and 2008 R2/64.

Any thoughts?
Question by:Stach1953
4 Comments
 
LVL 47

Accepted Solution

by:
dlethe earned 500 total points
ID: 38862365
Just because CPU usage is high doesn't mean you are CPU bound.  The RAID takes care of itself.  Odds are extremely likely you have an I/O issue.  Look at queue depth in perfmon.  If it is > 2 then your apps are starved for data and your I/O subsystem is the  bottleneck.

4 disks in a RAID5 config with this many users and lots of small files is, frankly, a horrible configuration for you. Go RAID10. By going RAID10 every bit of data is in two places, so on reads you will get at least twice the performance, maybe 3X compared to what you are doing, because the reads will no longer be waiting on writes. Then on writes, you'll probably get 50% or even greater improvement.

(Yes, you lose a little capacity. So buy two HDDs, put them in a RAID1 if you need more capacity).

Also, due to the expected usage, your write cache will be saturated, so it is going to be of little value to you. That means every write will involve extra reads from the disk drives (the RAID5 read-modify-write penalty). This is a rare situation where RAID5 is wrong for you on every possible metric.
 

Author Comment

by:Stach1953
ID: 38862543
Thanks for the input. The drive issue makes sense. There are other issues with a 4-disk R5 besides the number of spindles.

I can't find "queue depth" in perfmon, only queue length. I assume that is what I am looking for. If so, are we talking about logical or physical disk queue length? If we're looking at queue length and it exceeds 2, I have an older 8-disk SAS R10 DAS box I can try and see what happens.
 
LVL 47

Expert Comment

by:dlethe
ID: 38862603
You'll likely be much better off with an 8-disk R10, even if the drives are 7200 RPM, than with a 4-disk 15K RPM RAID5. Block size is important, but at this point it is surely going to be an improvement even if it isn't optimal. I would try that first since you already have the gear.
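A back-of-the-envelope sketch of why that trade can work. The per-disk IOPS figures (roughly 175 for a 15K SAS drive, 125 for 7.2K) and the 70/30 read/write mix are assumptions for illustration, not measurements from this server; the write penalties are the standard ones (4 back-end I/Os per write for RAID5, 2 for RAID10).

```python
# Rough host-visible IOPS comparison for the two layouts discussed above.
# All input numbers are assumed, not measured.
def effective_iops(disks, per_disk_iops, read_frac, write_penalty):
    """Host-visible random IOPS once the write penalty is charged."""
    raw = disks * per_disk_iops
    write_frac = 1.0 - read_frac
    return raw / (read_frac + write_frac * write_penalty)

r5 = effective_iops(disks=4, per_disk_iops=175, read_frac=0.7, write_penalty=4)
r10 = effective_iops(disks=8, per_disk_iops=125, read_frac=0.7, write_penalty=2)

print("4-disk 15K RAID5  : ~%d IOPS" % r5)
print("8-disk 7.2K RAID10: ~%d IOPS" % r10)
```

Under these assumptions the slower 8-disk mirror set comes out well ahead of the 4-disk 15K RAID5, because the extra spindles and the halved write penalty more than offset the slower rotation.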
 

Author Comment

by:Stach1953
ID: 38865134
I checked the disk queue length this morning after the system load picked up. The average hovers around 0.1, with a rare spike as high as 3. Most spikes are 1 or less. Even with the disk queue length and % Disk Time flatlined, the CPU is still at 90%.
