tjie (United States of America) asked:

Exchange 2007: LOGS vs Storage Group

Hi,

1) This relates to Exchange 2007 in a cluster environment:
- OS: Windows Server 2003
- Node 1: EXCH07_1
- Node 2: EXCH07_2
2) There is a volume called "LOGS" (the R: drive).
3) There are six storage groups: Storage Group 1 through Storage Group 6.
4) Per our consultant:
- The "LOGS" volume (R: drive) should have more than 75 GB free.
- If free space on the R: drive reaches 0 (zero), the Exchange server will fail.
- If free space on the R: drive falls below 75 GB, we must back up the largest storage group immediately. Say Storage Group 6 is the largest at that time: we right-click Storage Group 6 and select "Run Backup Now".
5) My questions:
i) What is the "LOGS" volume (R: drive)?
ii) Why, as the consultant says, will Exchange fail if free space on the R: drive reaches zero?
iii) Why does backing up the largest storage group free up space on the R: drive?
6) Thank you

tjie
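The consultant's thresholds can be expressed as a simple free-space check. A minimal sketch in Python, where the drive letter and threshold values are the hypothetical figures quoted above, not anything read from Exchange itself:

```python
import shutil

# Hypothetical values taken from the consultant's advice above.
LOG_DRIVE = "R:\\"
WARN_FREE_GB = 75   # below this, run a full backup now to truncate logs
FAIL_FREE_GB = 0    # at zero free space, Exchange will fail

def check_log_volume(path: str = LOG_DRIVE) -> str:
    """Return an action hint based on free space on the log volume."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    if free_gb <= FAIL_FREE_GB:
        return "CRITICAL: no free space, Exchange will fail"
    if free_gb < WARN_FREE_GB:
        return "WARNING: run a full backup now to truncate logs"
    return "OK"
```

In practice a scheduled task would run a check like this and alert well before the warning threshold is crossed.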
SOLUTION by bpinning (Australia) [members only]
ASKER CERTIFIED SOLUTION by tigermatt (United Kingdom) [members only]
tjie (ASKER):
LN41 & other experts:
I want to clarify a few points about the answer from LN41.

""" * Measure the log file usage per day - watch how much the log file grows each day and make sure your log file drive is large enough to handle at least 7 days worth of logs."""

My understanding is this: say today I note 120 GB free on the R: drive; tomorrow, 115 GB; the day after, 112 GB. So the logs consume around 4 GB per day, or roughly 28 GB (call it 30 GB) over 7 days. In the worst case, about 90 GB would still be free after 7 days, which is fine because it stays above the 75 GB threshold. Is that understanding correct?
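That arithmetic can be checked with a short sketch, using the free-space figures quoted in the post:

```python
# Free space on R: observed over three consecutive days (figures from the post).
readings_gb = [120, 115, 112]

# Average daily log growth = total drop in free space / number of days elapsed.
daily_growth = (readings_gb[0] - readings_gb[-1]) / (len(readings_gb) - 1)
seven_day_growth = daily_growth * 7              # projected growth over a week
free_after_week = readings_gb[0] - seven_day_growth

print(daily_growth, seven_day_growth, free_after_week)  # 4.0 28.0 92.0
```

So the projection is about 92 GB free after a week (the post rounds the weekly growth up to 30 GB, giving roughly 90 GB), comfortably above the 75 GB threshold, provided the growth rate stays steady.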


""" * Monitor your backups daily to make sure they ran OK. A full backup should truncate the logs when complete. There are other methods but this is simplest and most common. """
- We do a full backup every week (we are using Backup Exec).
- I don't understand your statement that "A full backup should truncate the logs when complete." My understanding was that a full backup behaves the same as a partial backup (incremental or differential), doesn't it? And isn't "truncate" similar to "defragmentation"? Why would a full backup clear the white space in the databases?


Thanks

It's not quite the same as defragmentation.

Defragmentation concerns the physical database files. Online defrag, which runs on a daily basis by default, identifies database pages no longer in use and recovers them for future use. Those database pages are marked as white space and overwritten later. Offline defrag shrinks the database by copying to a new database file but leaving the whitespace pages behind.
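A toy sketch of that difference, assuming a simple list of pages with None standing in for white space (purely illustrative, not the real ESE page format):

```python
# Toy model of database pages; None marks white space (unused pages).
pages = ["mail1", None, "mail2", None, None, "mail3"]

# Online defrag recovers white space so new writes reuse it in place:
# the file does not shrink, but it stops growing while white space remains.
def write_page(pages, data):
    for i, page in enumerate(pages):
        if page is None:
            pages[i] = data          # reuse white space, file size unchanged
            return
    pages.append(data)               # no white space left: file grows

# Offline defrag copies used pages to a new file, leaving white space behind.
def offline_defrag(pages):
    return [p for p in pages if p is not None]   # smaller file, same data

write_page(pages, "mail4")           # lands in the first white-space slot
compacted = offline_defrag(pages)    # compacted copy without white space
```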

Online defrag will normally look after itself, and from Exchange 2007 onwards there is never a reason to do an offline defrag.

Transaction logs are separate files - they maintain a log stream of EVERYTHING which goes into and out of the databases. Change anything and it hits the logs. If you lose the RAID array holding your database files, you can restore from the last backup and "replay" the log files into the database, bringing you right back to the point when the array failed. When a full backup runs, it removes the transaction log stream because it is no longer needed - the changes the logs were tracking have now been backed up. As I mentioned before, this architecture gives you a real prospect of disaster recovery.

Consider for a moment the following:

* You run a backup once a week on a Sunday night. The backup is properly purging unrequired log files.

* One week, on a Saturday, the array holding your databases fails. You've now lost the EDB file containing all the email in that database.

* You restore last Sunday's backup, which leaves you 6 days out of date. However, with the transaction logging array still available, you replay the logs into the recovered database. The logs have tracked ALL changes to the database since the last backup, so you recover right up to the point of the RAID array failure.
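The backup-and-replay cycle above can be sketched as a toy write-ahead model (a dict standing in for the EDB file and a list for the log stream; this illustrates the idea only, not the actual ESE format):

```python
import copy

database = {}        # stands in for the EDB file
log = []             # transaction log stream: every change is recorded here
backup = None        # last full backup

def apply_change(key, value):
    log.append((key, value))   # the change hits the log...
    database[key] = value      # ...and the database

def full_backup():
    global backup
    backup = copy.deepcopy(database)
    log.clear()                # backup complete: logs are truncated

def recover_after_disk_loss():
    # Restore the last backup, then replay logged changes made since it.
    restored = copy.deepcopy(backup)
    for key, value in log:
        restored[key] = value
    return restored

apply_change("inbox", "msg1")
full_backup()                      # Sunday night: logs truncated
apply_change("inbox", "msg2")      # the week's changes stay in the log
recovered = recover_after_disk_loss()   # back to the point of failure
```

Note how truncation is safe precisely because the backup already contains everything the cleared log entries described.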

In your scenario you have a cluster, so this mode of disaster recovery matters less, because a failover event will occur instead. Nevertheless, you should still pay attention to the logs and follow best practices with regard to them.

-Matt

I didn't notice you closed this while I was typing. Thanks for the points!