

Windows 2008 R2 file server system process maxing CPU during working hours

Posted on 2013-02-06
Medium Priority
Last Modified: 2016-11-23
Greetings Experts,

I have Windows Server 2008 R2 running on an IBM 3650 M3 with 20 GB of RAM and two quad-core Xeons in the 2.93 GHz range, I believe. The data-bearing array has four 500 GB 15K SAS drives in RAID 5. The application is unusual in that the server hosts 3.2 million small files. The folder structure is quite deep and complex (paths are sometimes 60-80 characters deep, and some of the folders contain as many as 40 thousand files).

The server is used during business hours for editing, primarily AutoCAD files, JPEGs, PDFs, and DWFx (AutoCAD web-format images). Typically around 40 users are working with the data at any one time. The content is presented to clients via an IIS server in real time.

The issue I am having is that the large folders are extremely slow to display in the clients' (as in client workstations) Explorer window. I mean really SLOW: 15 to 30 seconds to display. When the system is really dragging, the network, disk, and memory counters look fine, but CPU is at 70-90%. I waited until the users were gone this evening and ran some tests. I found that the simple act of accessing a folder with 40 thousand files would take the server's CPU up to 20% until the list was built (10 seconds, maybe). With 40 people hammering the system, plus the IIS server, plus some automated process machines, it's no wonder the system is slow. By myself, I took it to 20% with a 3-year-old laptop.
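To put a number on that enumeration cost, here is a small, hypothetical Python sketch (my own illustration, not from the thread) that builds a directory full of small files and times how long a single listing takes; the file count is scaled down from the 40,000-file folders described above so it runs quickly:

```python
import os
import tempfile
import time

def make_files(path, count):
    # Create `count` empty files to stand in for the many small
    # drawing files described in the question.
    for i in range(count):
        open(os.path.join(path, f"file_{i:05d}.dwg"), "w").close()

def time_enumeration(path):
    # os.scandir returns directory entries without a separate stat()
    # call per file, so this measures raw listing cost.
    start = time.perf_counter()
    entries = list(os.scandir(path))
    elapsed = time.perf_counter() - start
    return len(entries), elapsed

with tempfile.TemporaryDirectory() as d:
    make_files(d, 5000)
    count, secs = time_enumeration(d)
    print(f"enumerated {count} entries in {secs:.3f}s")
```

Running something like this on the file server itself, against one of the real 40,000-file folders, would separate raw enumeration time from whatever Explorer adds on top (icon extraction, column metadata, and so on).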

Prior to the current server, that role was performed by an IBM N3600 filer with similarly poor response times and high CPU load. Thinking it was overloaded (it was also hosting VMs on the SAN side), we moved to a conventional server as an experiment, with slight improvement if any. Before the filer, we ran this function on a Dell 1950 blade, fibre-attached to a 9-disk RAID 5 on an EMC CX3-20. No problems there, but we were running Windows 2003/64 and XP SP3. I really think the problem lies in the interaction between Windows 7/64 and 2008 R2/64.

Any thoughts?
Question by:Stach1953

Accepted Solution

David earned 2000 total points
ID: 38862365
Just because CPU usage is high doesn't mean you are CPU-bound. The RAID takes care of itself. Odds are extremely likely you have an I/O issue. Look at the disk queue length in Perfmon: if it is > 2, then your apps are starved for data and your I/O subsystem is the bottleneck.

Four disks in a RAID 5 config with this many users and this many files is, frankly, a horrible configuration for you. Go RAID 10. With RAID 10, every bit of data is in two places, so on reads you will get at least twice the performance, maybe 3x compared to what you have now, because the reads will no longer be waiting on writes. Then on writes, you'll probably get a 50% or even greater improvement.
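The read/write claims above can be sanity-checked with the standard RAID write-penalty arithmetic. This is my own back-of-the-envelope sketch, not from the thread; the per-spindle IOPS figure and the 70% read mix are assumptions:

```python
SPINDLE_IOPS = 175  # assumed rough figure for one 15K RPM SAS drive

def raid5_iops(disks, read_frac):
    # RAID 5 small-write penalty: each logical write costs 4 physical
    # I/Os (read data, read parity, write data, write parity).
    raw = disks * SPINDLE_IOPS
    return raw / (read_frac + 4 * (1 - read_frac))

def raid10_iops(disks, read_frac):
    # RAID 10: each logical write costs 2 physical I/Os, one per mirror.
    raw = disks * SPINDLE_IOPS
    return raw / (read_frac + 2 * (1 - read_frac))

r5 = raid5_iops(4, 0.7)    # current 4-disk RAID 5, 70% reads assumed
r10 = raid10_iops(8, 0.7)  # proposed 8-disk RAID 10, same mix
print(f"RAID 5: {r5:.0f} IOPS, RAID 10: {r10:.0f} IOPS")
```

Under these assumptions, the 8-disk RAID 10 delivers well over twice the effective random IOPS of the 4-disk RAID 5, which is consistent with the improvement the expert is describing.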

(Yes, you lose a little capacity, so buy two more HDDs and put them in a RAID 1 if you need the space back.)

Also, due to the expected usage, your write cache will be saturated, so it is going to be of little value to you. That means every write will involve reads on ALL of the disk drives. This is a rare situation where RAID 5 is wrong for you on every possible metric.

Author Comment

ID: 38862543
Thanks for the input. The drive issue makes sense; there are other issues with a 4-disk RAID 5 besides the number of spindles.

I can't find queue depth in Perfmon, only queue length; I assume that is what I am looking for. If so, are we talking about the logical-disk or physical-disk queue length? If the queue length exceeds 2, I have an older 8-disk SAS RAID 10 DAS box I can try to see what happens.
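If it helps to collect the counter outside the Perfmon GUI, the built-in typeperf utility can sample it from the command line. A sketch, assuming the standard PhysicalDisk counter object (verify the instance name on your box; `_Total` aggregates all disks):

```shell
:: Sample the physical-disk queue length every 5 seconds.
:: "Avg. Disk Queue Length" is the smoothed value the expert is
:: referring to; "Current Disk Queue Length" shows raw spikes.
typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
         "\PhysicalDisk(_Total)\Current Disk Queue Length" -si 5
```

The LogicalDisk object exposes the same counters per volume; for judging whether the RAID set itself is saturated, the PhysicalDisk counters are the ones to watch.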

Expert Comment

ID: 38862603
You'll likely be much better off with an 8-disk RAID 10, even if the drives are 7200 RPM, than with a 4-disk 15K RPM RAID 5. Block size is important, but at this point it is surely going to be an improvement even if it isn't optimal. I would try that first, since you already have the gear.

Author Comment

ID: 38865134
I checked the disk queue length this morning after the system load picked up. The average hovers around 0.1, with a rare spike as high as 3; most spikes are 1 or less. Even with the disk queue length and % Disk Time flatlined, the CPU is still at 90%.
