  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 5705

Exchange 2010 database Availability Groups and Offline Defrag

Having recently run an offline defrag of our store and found it took 7 hours for a 50GB database on our Exchange 2003 server, I was wondering how this has changed in Exchange 2010.
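For context, the defrag in question was the standard ESEUTIL one, run something like this (the database path here is made up, and the store has to be dismounted first):

```bat
REM Check how much recoverable white space the database holds
eseutil /ms "D:\Exchsrvr\MDBDATA\priv1.edb"

REM Offline defrag - rebuilds the database into a new file, so you
REM need free disk space roughly the size of the database itself
eseutil /d "D:\Exchsrvr\MDBDATA\priv1.edb"
```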

Given that Single Instance Storage is going away and that archiving into the same database will make databases huge, running an offline defrag (or indeed any ESEUTIL repair) is going to have a significant impact on uptime.

My first question is: if I need to run an offline defrag on a database that is part of a database availability group, will it take all copies offline and replicate the changes once the defrag has completed, or will I be able to run the defrag on one copy at a time?

My second question is: are offline defrags still necessary, or is there now some magic tool that does the equivalent without taking the database offline to recover precious disk space?

My final question is: if corruption occurs in a database, will it be replicated to the other copies? I suspect not, if all that is being replicated is the log files, but it would be nice to get a definitive answer so I know what I am potentially letting myself in for with Exchange 2010.

Thanks in advance for any help given.
Fester7572 asked:
1 Solution
 
MesthaCommented:
Why did you run an offline defrag?
They are not something you must do, and with Exchange 2007 and higher they are a complete waste of time.
Therefore there is no "need" to run an offline defrag now or at any point in the future.

The whole point of Exchange 2010 DAG is so that you don't have to worry about disk space. Storage is cheap. If you haven't got enough, throw in some more. The data is spread out over multiple disks on multiple servers so that you aren't exposed. If you have treated a DAG in the same way as you did with storage on previous versions then that was the mistake.

If you are gaining enough white space to even consider an offline defrag, then simply create another database, move the data to the new database and drop the original.

If you are tight on storage on a product that isn't even three months old then it sounds like you didn't specify the server correctly. That may well be harsh, but that is how I see it.

Simon.
 
Fester7572Author Commented:
Sorry Simon, you misunderstand me. I am running Exchange 2003 standard. I did say that we are currently on Exchange 2003 right at the start.

Our database had grown to 70GB with 20GB of white space. Rather than wait for it to keel over, I took preventative action to buy me some time until we migrate to 2010 (still waiting for final budget approval!!). When I'd run a defrag previously it had taken 3 hours for the same size. Based on that I'd scheduled 5 hours of downtime to complete the task, and it still over-ran, leading to angry users. I'd like to find out what I'm letting myself in for with the new version when we go across, given that the database is likely to seriously increase in size when things go kaka.

As for the spec, we will be moving to a 3TB SAN so I hope that leaves you feeling a little less harsh.

As there doesn't seem to be much real world knowledge posted yet for such a new product and using ESEUTIL I thought I'd ask the question to see what knowledge is out there.

Given that you can have multiple databases of a huge size even on Standard (five databases, each recommended not to exceed 200GB, as far as I know), I understand that offline defrags are less necessary than before, but not quite irrelevant.

I guess what I'm really trying to find out is: do DAGs remove the risk of database corruption, making ESEUTIL unnecessary, or will there still be those times when you get really, really unlucky and have to do something drastic that could take hours and seriously inconvenience users? Does a larger database = longer downtime?
 
MesthaCommented:
If you got an offline defrag done in 7 hours on a 50GB database, you did well.
I would have said it could take anywhere between 12 and 50 hours, as the guideline for the process is 1-4GB per hour depending on the hardware.

Your question says Exchange 2010; I missed that the references were to the old defrag on Exchange 2003. Alas, when you are posting at as high a rate as I am, you can only skim read. Many Exchange administrators treat offline defrag as something that they have to do, which is not the case.

The points I made about offline defrag apply equally to Exchange 2003 as well.

Offline defrags are completely irrelevant. I haven't done an offline defrag on any site that I manage since the release of Exchange 2003 SP2. They are a waste of time, not risk free and completely unnecessary. There are NO good reasons for running an offline defrag on Exchange 2007 or later.

I don't intend to do one ever on Exchange 2007 or later.

An offline defrag basically creates a new database. With multiple databases available to you in Exchange 2007 and higher, the same process can be achieved with no downtime simply by moving all the mailboxes to a new database file.
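The "move instead of defrag" approach looks roughly like this in the Exchange 2010 Management Shell - a sketch only, with made-up database and server names:

```powershell
# Create and mount a fresh database (names and paths are hypothetical)
New-MailboxDatabase -Name "DB2" -Server "EXCH01" -EdbFilePath "E:\DB2\DB2.edb"
Mount-Database -Identity "DB2"

# Move every mailbox across; moves are online in Exchange 2010,
# so users stay connected while this runs
Get-Mailbox -Database "DB1" -ResultSize Unlimited |
    New-MoveRequest -TargetDatabase "DB2"

# Check progress, and once the old database is empty, drop it
Get-MoveRequest | Get-MoveRequestStatistics
Remove-MailboxDatabase -Identity "DB1"
```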

Most database corruption is not the fault of Exchange; it is poor hardware: a suspect disk, RAID controller, etc. No solution is going to deal with corruption, and no solution can insulate against it. DAG is a live solution and the changes are replicated immediately, so it is impossible to say that it will protect you from database corruption.

Oh, and there is no Single Instance Storage in Exchange 2010. It's gone.

Simon.


 
Fester7572Author Commented:
Sorry it's been a while on this, Simon.
Before I award the points there is just one thing I need to clear up.
Have I understood you correctly in that if one copy of the database is corrupted in the DAG then all copies will be corrupted?
Whether corruption is the fault of the hardware or Exchange is kind of irrelevant to what I want to know. I want to know how to deal with it if I get unlucky and it occurs.
I guess I was kind of hoping that it is not the whole DAG that goes but just one copy, and that by taking the corrupted one offline and deleting the database (or a similar procedure) it would then just replicate everything back and therefore fix the corruption (after the cause is fixed, naturally, e.g. a hardware fault).
Thanks very much for your time on this.
Darren
 
MesthaCommented:
I can't answer the question because there are so many variables. It depends on how the corruption has occurred.
 
During day-to-day operations the database itself isn't replicated; it is the transaction logs. The other servers then build a database from those logs. Therefore it is possible that in some scenarios corruption may be replicated, but in others it may not.
If you discover a database has become corrupted and choose to replace it, then that will resolve the immediate issue, but if the source has some unidentified corruption, then that may well be replicated across.
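When it is a single passive copy in the DAG that has gone bad, the usual approach is to discard that copy and reseed it from a healthy one - roughly like this in the Management Shell (server and database names are made up):

```powershell
# See which copies are healthy and which are failed
Get-MailboxDatabaseCopyStatus -Identity "DB1"

# Suspend the bad copy on EXCH02, then reseed it from scratch;
# -DeleteExistingFiles throws away the corrupted files first
Suspend-MailboxDatabaseCopy -Identity "DB1\EXCH02"
Update-MailboxDatabaseCopy -Identity "DB1\EXCH02" -DeleteExistingFiles

# Replication resumes automatically once the seed completes
```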

As I wrote above, there is no way to guarantee zero database corruption.
It's like the health of the human body - no one can guarantee that they are 100% healthy.

Simon.
 
Fester7572Author Commented:
Thanks for your efforts with this.
