Solved

Is there an official paper from Microsoft on how often to do defrag and SFC on SBS 2003?

Posted on 2010-11-15
14
Medium Priority
449 Views
Last Modified: 2012-05-10
Having loads of problems with Lacerte on SBS 2003 R2, and Lacerte is asking if we've defragged or run SFC at all, or regularly. I said no to both.

Am I wrong? I asked them for their best-practices recommendation and they said it's a Microsoft issue and MS says monthly?!

We run ShadowProtect for backup, so a monthly defrag would cause huge backups every month. I thought the need for defrag had gone away?

And SFC? I've never heard of running that routinely.

As for checking the hard disks (it's a RAID array), I posted that question here:

http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/SBS_Small_Business_Server/Q_26616643.html
Question by:ThisIsAToughOne
14 Comments
 
LVL 88

Accepted Solution

by:
rindi earned 400 total points
ID: 34144117
I can't think of a reason for Microsoft to publish a "best practice" for that, as it depends largely on what the server is used for and how. On servers where files are often moved around or deleted, a regular defrag makes sense, while on servers without much file movement it isn't necessary.

An SFC, on the other hand, is normally only needed to fix problems with corrupt system files. If the server's hardware is healthy and there is no malware active on it, such files shouldn't become corrupt...
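If you ever do need it, the command itself is simple; note that on SBS 2003 it may prompt for the installation media, since protected files are restored from the i386 source:

   rem verify and repair protected system files
   sfc /scannow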
 
LVL 28

Assisted Solution

by:burrcm
burrcm earned 200 total points
ID: 34144125
A 2003 file server will require periodic defragging; it is not automatic. SFC? Windows File Protection, on the other hand, is automatic, so SFC should only be required if you have installed something ugly that has overwritten files it shouldn't have and the system is clearly having problems.

Chris B
 
LVL 62

Assisted Solution

by:gheist
gheist earned 600 total points
ID: 34144201
Microsoft's home-user guidelines recommend running defrag (scheduled at boot), cleanmgr and chkdsk weekly.
It probably goes better if scheduled manually (at boot) after Patch Tuesday's reboot.
SFC, if run at all, should be done before installing huge service packs.
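If you want to automate it, here's a minimal sketch using the built-in scheduler (the task names, day and times are my own assumptions; on 2003 the /st argument takes HH:MM:SS):

   rem weekly defrag of the system volume, Sundays at 03:00, run as SYSTEM
   schtasks /create /ru SYSTEM /tn "Weekly Defrag" /tr "defrag.exe C:" /sc weekly /d SUN /st 03:00:00
   rem read-only disk check an hour later
   schtasks /create /ru SYSTEM /tn "Weekly Chkdsk" /tr "chkdsk.exe C:" /sc weekly /d SUN /st 04:00:00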
 
LVL 18

Assisted Solution

by:BigSchmuh
BigSchmuh earned 800 total points
ID: 34144236
Sorry, but having "loads of problems" on a RAID 5 array with applications issuing a lot of random writes is just normal behavior, to my knowledge.

Checking the file system regularly is just nonsense, unless you are suffering from:
- power outages that you are not aware of
- people shutting down the server by unplugging it or turning off its PSU

Defragmenting regularly (every six months should be enough) is good practice on HDDs, but not on SSDs.

Can you switch/reinstall to a RAID 10 array or two RAID 1 arrays?
 
LVL 32

Expert Comment

by:Robberbaron (robr)
ID: 34144749
On a RAID 5, it's my understanding that the files aren't actually moved into contiguous pieces on the disk, as the pieces are physically split anyway.
Defragging does help slightly with the logical storage and can consolidate directory entries, speeding up that part of access.
 
LVL 88

Expert Comment

by:rindi
ID: 34145034
Defrag acts on the file system and has nothing to do with single disks or a RAID array, so that is irrelevant; a defrag is still useful.
 
LVL 18

Assisted Solution

by:BigSchmuh
BigSchmuh earned 800 total points
ID: 34145041
Maybe I should clarify some RAID facts.

All RAID levels are defined using a stripe size, which is the base unit of storage on a single drive. Each drive involved in a RAID array stores chunks of exactly this stripe size (it can, of course, store more than one).
The stripe size on each drive is also sometimes called a "block".
Adding up the stripes of every drive participating in the array gives the "full stripe size", which usually does not count the parity blocks.

Most RAID controllers let you set the stripe size to a wide range of power-of-2 values (2K, 4K, 8K, ...64K...256K...up to a few MB depending on the card); 64K or 256K are the usual defaults.

Parity RAID (5/6/50/60) arrays use parity blocks and suffer two write penalties:
- when they receive a write IO of less than a full stripe size, they first have to read the old parity blocks, then compute the new parity blocks and write both the new data and parity blocks
- when they receive a write IO of less than a stripe size, they first have to read both the old data and parity blocks, then compute the new parity blocks and write both the new data and parity blocks
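To make that concrete (a rough sketch, assuming a small random write that fits inside a single stripe and gets no cache hits): the controller must read the old data block, read the old parity block, compute the new parity, then write the new data block and the new parity block. That is 4 physical IOs for 1 logical write, i.e. a 4x small-write penalty, where a RAID 1/10 mirror would issue only 2 writes for the same logical write.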

Those write penalties can be reduced to a minimum by a large, battery-backed write-back cache...but you can't expect that cache to aggregate all the random writes into sequential ones. In a multi-process context with both random and sequential writes, the write-back cache MAY successfully send the sequential writes interleaved with the random ones; this is what the good hardware controller brands are expected to deliver.

There are IO patterns where parity RAID performs well:
- sequential IO, like that involved in backups
- large writes with no later updates, like archives or write-once-read-many usage

Tuning a parity RAID array is a mandatory step (commands sketched at the end of this comment):
- align the partition on a stripe boundary
- make the client IO size equal to the stripe size (e.g. NTFS uses a 4KB cluster size by default and would seriously benefit from a 64KB cluster size on an array with a 64KB stripe size)

For the OS, databases and most application data (where the write pattern is mostly random), mirror-based arrays (1/10/1E) are the way to go.
Logs, archives and backups can be stored on parity RAID arrays.
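For what those two tuning steps look like in practice, a minimal sketch on Windows Server 2003 SP1 or later, assuming a 64KB stripe and that the new volume becomes D: (the disk number and sizes are examples only):

   diskpart
     select disk 1
     create partition primary align=64
     assign letter=D
     exit
   format D: /FS:NTFS /A:64K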
 

Author Comment

by:ThisIsAToughOne
ID: 34145100
thanks guys, but does Microsoft have a best-practices recommendation for defrag? I think it was Win NT (remember that!?) that didn't even come with a defrag app.

I have an LOB tech-support rep saying we should have been defragging more often (we're not doing it at all), making me look bad in front of the client. When I asked him how often they recommend, he said it's an OS issue and to leave that up to Microsoft, which he thinks says monthly.

So the client's on the phone with us about the LOB app not installing reliably. The LOB tech gets things working so he looks like the hero, making me look to the client like I'm slacking. I'd like some ammo showing that Microsoft doesn't have a recommendation, or at least certainly not monthly. My money is on the LOB app and its bloat, but I won't get the LOB tech to acknowledge that!
 

Author Comment

by:ThisIsAToughOne
ID: 34145117
gheist - sorry I missed your comments - do you have a URL you can link for that weekly claim?

And again, an imaging app (ShadowProtect) gets beat up if you run defrag often: it'll see that lots of the hard drive changed, and each incremental will be huge.
 
LVL 18

Assisted Solution

by:BigSchmuh
BigSchmuh earned 800 total points
ID: 34145150
http://www.microsoft.com/athome/moredone/maintenance.mspx
Preventive Maintenance Activity ==> Recommended Frequency
Clean up the hard disk of temporary files ==> Weekly
Rearrange (defragment) the hard disk ==> Monthly
Check the hard disk for errors ==> Weekly
 
LVL 62

Assisted Solution

by:gheist
gheist earned 600 total points
ID: 34145214
http://www.microsoft.com/athome/setup/optimize.aspx

I would consider all those steps a way to keep the system from aging.
I would recommend running it all after Patch Tuesday, say on the 20th of the month.
If you need commands, ask here.
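For cleanmgr, for example, one approach (the profile number 1 is arbitrary) is to record the cleanup categories once and then schedule the silent run:

   rem pick the cleanup categories once, interactively
   cleanmgr /sageset:1
   rem replay the saved profile silently (this line can go in a scheduled task)
   cleanmgr /sagerun:1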
 
LVL 18

Assisted Solution

by:BigSchmuh
BigSchmuh earned 800 total points
ID: 34145223
Also, there is "Chapter 7: Operating Your Windows Server 2003 Environment":
  http://technet.microsoft.com/en-us/library/bb496971.aspx
It lists:
Weekly Maintenance
   Running Disk Defragmenter
Monthly Maintenance
   Maintaining File System Integrity
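Before committing to a schedule, both tools have non-destructive modes you can sanity-check with (drive letter is an assumption):

   rem report fragmentation without moving anything
   defrag C: -a -v
   rem check the file system read-only (no /f, so nothing is changed)
   chkdsk C: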
 
LVL 88

Assisted Solution

by:rindi
rindi earned 400 total points
ID: 34145440
As I mentioned earlier, there isn't much point in giving a recommendation, as it depends on the server's role and how it is used; it would always be different depending on the situation.

ShadowProtect absolutely won't be influenced by a defrag. Defrag doesn't change the files or their attributes; all you need to make sure of is that you don't run a full backup or a full virus scan while defragging, as the disks would just thrash, slowing things down. An incremental backup (which is probably what you run mostly) isn't an issue.

Another thing: if you run defrag regularly, the following defrags will be faster, as there is less to do.
 
LVL 62

Assisted Solution

by:gheist
gheist earned 600 total points
ID: 34145549
PageDefrag (pagedfrg.exe) from Sysinternals also needs to be run once in a while if you don't use a commercial defragmenter...
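If I remember its switches right (worth double-checking on the Sysinternals page), it works at boot time:

   rem defragment the pagefile and registry hives once, at the next reboot
   pagedfrg -o
   rem or enable it at every boot
   pagedfrg -e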
