• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 458

Is there an official paper from Microsoft on how often to do defrag and SFC on SBS 2003?

Having loads of problems with Lacerte on SBS 2003 R2, and Lacerte support is asking whether we defrag or run SFC at all, or regularly.  I said no to both.

Am I wrong?  I asked them for their best-practices recommendation and they said it's a Microsoft issue and MS says monthly?!

We run ShadowProtect for backup, so a monthly defrag would cause huge backups.  I thought the need to defrag had gone away?

And SFC?  I've never heard of running that routinely.

As for checking the hard disks (it's a RAID array), I posted that question here:

http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/SBS_Small_Business_Server/Q_26616643.html
 
rindiCommented:
I can't think of a reason for Microsoft to publish a "best practice" for that, as it depends largely on what the server is used for and how. On servers where files are often moved around or deleted, a defrag should be done regularly; on servers without much file movement, it isn't necessary.

An SFC, on the other hand, is normally only needed to fix corrupt system files; if the server's hardware is healthy and there is no malware active on it, such files shouldn't become corrupt...
 
burrcmCommented:
A 2003 file server will require periodic defragmentation; it is not automatic. SFC? Windows File Protection, on the other hand, is automatic, so SFC should only be required if you have installed something ugly that overwrote files it shouldn't have and the system is clearly having problems.

Chris B
 
gheistCommented:
Microsoft's home-user guidelines recommend running defrag (scheduled at boot), cleanmgr, and chkdsk weekly.
It probably goes more smoothly if scheduled manually (at boot) after Patch Tuesday's reboot.
SFC should be run, if at all, before installing large service packs.
 
BigSchmuhCommented:
Sorry about that, but having "loads of problems" on a RAID 5 array with applications issuing a lot of random writes is, to my knowledge, just normal behavior.

Checking the file system regularly is just nonsense, unless you are suffering from:
- power outages you are not aware of
- people shutting down the server by unplugging the power strip or turning off its PSU

Defragmenting regularly (every six months should be enough) is good practice on HDDs, but not on SSDs.

Can you switch/reinstall to a RAID 10 or 2x RAID 1 array?
 
Robberbaron (robr)Commented:
On a RAID 5, my understanding is that the files aren't actually moved into contiguous pieces on the disk, since the pieces are physically split across the drives anyway.
Defragging does help slightly with the logical layout and can consolidate directory entries, speeding up that part of access.
 
rindiCommented:
The defrag acts on the file system and has nothing to do with single disks or a RAID array, so that is irrelevant; a defrag is still useful.
 
BigSchmuhCommented:
Maybe I should clarify some RAID facts.

All RAID levels are defined using a stripe size, which is the base unit of storage on a single drive. Each drive involved in a RAID array has to store this exact stripe size; of course, it can store more than one stripe. This per-drive stripe is also sometimes called a "block".
Adding up the stripes of every drive participating in the array gives the "full stripe size", which usually does not count the parity blocks.

Most RAID controllers can set the stripe size across a large range of power-of-two values (2K, 4K, 8K, ...64K...256K...up to some MB depending on the card); 64K or 256K are the usual defaults.

Parity RAID (5/6/50/60) arrays use parity blocks and suffer two write penalties:
-when they receive a write IO of less than a full stripe size, they first have to read the old parity blocks, then compute the new parity blocks and write both the new data and parity blocks
-when they receive a write IO of less than a single stripe size, they first have to read both the old data and parity blocks, then compute the new parity blocks and write both the new data and parity blocks

These write penalties can be reduced to a minimum using a large battery-backed write-back cache, but you can't expect this cache to aggregate all random writes into sequential ones. In a multi-process context with both random and sequential writes, the write-back cache MAY successfully send the sequential writes interleaved with the random ones; this is what the good hardware controller brands are expected to deliver. A back-of-the-envelope example of the small-write penalty follows.
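As a rough illustration of that small-write penalty, a minimal Python sketch; the drive count and stripe size are assumed values for the example, not figures from this thread:

# Back-of-the-envelope RAID 5 write penalty (read-modify-write).
# Assumed values: 5-drive RAID 5 with a 64 KB per-drive stripe.

def raid5_write_ios(io_kb, stripe_kb=64, drives=5):
    """Rough count of physical disk IOs for one logical write."""
    # One full stripe holds (drives - 1) data blocks plus one parity block.
    full_stripe_kb = stripe_kb * (drives - 1)
    if io_kb >= full_stripe_kb:
        # Full-stripe write: parity is computed from the new data alone,
        # so every drive takes exactly one write and nothing is read back.
        return drives
    # Sub-stripe write: read old data + old parity,
    # then write new data + new parity = 4 physical IOs.
    return 4

for size_kb in (4, 64, 256):
    print(f"{size_kb:>4} KB write -> {raid5_write_ios(size_kb)} physical IOs")

So a 4 KB random write costs four physical IOs, while a full-stripe 256 KB write costs only one IO per drive, which is why the sequential patterns listed below do well on parity RAID.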

There are IO pattern usages where parity RAID performs well:
-sequential IO, like that involved in backups
-large writes with no later updates, like those involved in archives or write-once-read-many usage

Tuning a parity RAID array is a mandatory step:
-align the partition on a stripe boundary (a quick check is sketched after this comment)
-match the stripe size to the client IO size (e.g. NTFS uses a 4KB default cluster size and would seriously benefit from a 64KB cluster size on a 64KB-stripe array)

For the OS, database usages, and most application data (where the IO write pattern is mostly random), mirror-based arrays (1/10/1E) are the way to go.
Logs, archives, and backups can be stored on parity RAID arrays.
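To make the alignment step concrete, a minimal sketch; the offset below is only an example value, not a reading from this server (on Windows the real figure can be read with "wmic partition get Name,StartingOffset"):

# Check whether a partition starts on a stripe boundary.
STRIPE_BYTES = 64 * 1024    # assumed controller stripe size
partition_offset = 32256    # hypothetical: classic pre-Vista default (63 sectors * 512 B)

if partition_offset % STRIPE_BYTES == 0:
    print("Partition is aligned on a stripe boundary.")
else:
    print(f"Misaligned: {partition_offset} is not a multiple of {STRIPE_BYTES}; "
          "small IOs can straddle two stripes and pay the write penalty twice.")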
 
ThisIsAToughOneAuthor Commented:
Thanks guys, but does Microsoft have a best-practices recommendation for defrag?  I think it was Windows NT (remember that!?) that didn't even come with a defrag app.

I have an LOB tech-support rep saying we should have been defragging more often (we're not doing it at all) and making me look bad in front of the client.  When I asked him how often they recommend, he said it's an OS issue and to leave that up to Microsoft, which he thinks says monthly.

So the client is on the phone with us about the LOB app not installing reliably.  The LOB tech gets things working so he looks like the hero, making me look to the client like I'm slacking.  I'd like some ammo showing that Microsoft doesn't have a recommendation, or at least certainly not a monthly one.  My money is on the LOB app and its bloat.  But I won't get the LOB tech to acknowledge that!
 
ThisIsAToughOneAuthor Commented:
gheist - sorry I missed your comments - do you have a URL you can link that weekly claim to?

And again, an imaging app like ShadowProtect gets beaten up if you run defrag often - it will see large parts of the hard drive as changed in each incremental, and the backup will be huge.  A rough estimate of the effect is sketched below.
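As a back-of-the-envelope for that concern, a sketch with made-up numbers; none of these figures are measurements from this server:

# Estimate how a defrag could inflate a sector-based incremental image.
used_gb = 200          # data on the volume (assumption)
moved_fraction = 0.30  # share of sectors the defrag relocates (assumption)
normal_incr_gb = 2     # typical nightly incremental before a defrag (assumption)

post_defrag_gb = normal_incr_gb + used_gb * moved_fraction
print(f"Normal incremental:       {normal_incr_gb} GB")
print(f"Incremental after defrag: ~{post_defrag_gb:.0f} GB")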
 
BigSchmuhCommented:
http://www.microsoft.com/athome/moredone/maintenance.mspx
Preventive Maintenance Activity ==> Recommended Frequency
Clean up the hard disk of temporary files ==> Weekly
Rearrange (defragment) the hard disk ==> Monthly
Check the hard disk for errors ==> Weekly
 
gheistCommented:
http://www.microsoft.com/athome/setup/optimize.aspx

I would consider all of those steps a way to keep the system from aging.
I would recommend running them all after Patch Tuesday, say on the 20th of the month.
If you need commands, ask here - a scripted sketch follows.
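For what it's worth, a minimal Python sketch of scripting that routine, assuming a Windows box with the stock tools on PATH and an elevated prompt; the cleanmgr profile number is arbitrary and must have been saved once beforehand with "cleanmgr /sageset:1":

# Run the routine maintenance steps discussed in this thread.
import subprocess

tasks = [
    ["cleanmgr", "/sagerun:1"],   # clean up temporary files
    ["defrag", "C:", "-v"],       # defragment the system volume (verbose)
    ["chkntfs", "C:"],            # report whether C: is flagged dirty
    # ["sfc", "/scannow"],        # system file check - only when needed
]

for cmd in tasks:
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=False)  # keep going even if one step fails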
 
BigSchmuhCommented:
Also, there is "Chapter 7: Operating Your Windows Server 2003 Environment":
  http://technet.microsoft.com/en-us/library/bb496971.aspx
Weekly Maintenance
   Running Disk Defragmenter
Monthly Maintenance
   Maintaining File System Integrity
 
rindiCommented:
As I mentioned earlier, there isn't much point in giving a blanket recommendation, as it depends on the server's role and how it is used; the right frequency will always differ with the situation.

ShadowProtect absolutely won't be influenced by a defrag. The defrag doesn't change a file or its attributes; all you need to make sure of is that you don't run a full backup while defragging, or a full virus scan at the same time, as the disks will just be thrashing, slowing things down. An incremental backup (which is probably what you run mostly) isn't an issue.

Another thing: if you run a defrag regularly, the next defrags will be faster, as less has to be done.
 
gheistCommented:
PageDefrag from Sysinternals also needs to be run once in a while if you don't use a commercial defragmenter...
