mac_g (Saudi Arabia) asked:

Ideal log file size on a server and its effect on performance

What is the ideal log file size to maintain on servers for optimal performance?
Please explain the effect of log file size on performance.

dpearson:

The smaller the log files, the faster the performance, because writing a log file takes some CPU time and some file system I/O time.
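
To get a feel for that overhead, here is a minimal sketch in Python that times per-line flushed writes against buffered writes (the file names are arbitrary, and the numbers will vary with your hardware and file system):

import time

LINE = "2024-01-01 12:00:00 INFO something happened in the request handler\n"
N = 100_000

# Write N lines, flushing after every line (worst case for I/O overhead).
start = time.perf_counter()
with open("flushed.log", "w") as f:
    for _ in range(N):
        f.write(LINE)
        f.flush()
flushed = time.perf_counter() - start

# Same N lines, letting Python's buffered I/O batch the writes.
start = time.perf_counter()
with open("buffered.log", "w") as f:
    for _ in range(N):
        f.write(LINE)
buffered = time.perf_counter() - start

print(f"flush per line: {flushed:.2f}s, buffered: {buffered:.2f}s")

On most systems the buffered version is dramatically faster, which is why how often the log is flushed matters as much as how much you log.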

As a result, most people log very little on a production server.

That being said - I think they're wrong :)  Where I work, we log mountains of data - over 1 GB/hour/server.

The reason is that when we have a problem with a production server we almost never have to reproduce the problem in order to determine what went wrong.  We already have the details in our logs, so we can generally figure out what happened and fix it without further details from an end user.  But that does mean our servers need more storage.

So while logging small amounts will make your server run fast, the real question should be "how much logging do you need in order to get the information you need to solve problems?"  And for that, the answer will vary with your business.

The analogy I like to use with our engineering team is the launch of the space shuttle.  Running a complex software system is like launching a space vehicle.  And if the shuttle has a problem and blows up, you don't want to have to launch another one to figure out what went wrong.  You need tons and tons of data (logs).

Doug
There is no one answer to this question; there are too many variables for a good answer. You have to examine several factors (and this isn't going to be a comprehensive list):
1. What do you *need* to capture in your logs?
2. What would be *nice* to capture in your logs?
3. What is the format of your logs? Plain text, evtx, etc.?
4. What is the underlying file system?
5. With NTFS, your block size matters, and your "normal" write size matters.
6. With ext4 or several other Linux file systems, the block size is much less important, or is only a single sector.
7. How frequently is the log written to? Does the logging subsystem write every line, or does it group them and write them in chunks?
8. Is the log automatically compressed or left as plain text?
9. What's the encoding of the log file (if it is plain text)?
10. What's the language of the log file? (i.e., does it need a double-byte character set?)
11. What is the underlying physical disk structure? If you are dealing with SSDs, then disk fragmentation is literally meaningless, but if it is spindle disks, then fragmentation matters a lot.
12. What are the log management options? (flushing, rollovers, number of physical files maintained, etc.; see the sketch after this list)
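
For point 12, here is a minimal sketch of rollover settings using Python's standard logging module (the size and count values are arbitrary examples, not recommendations):

import logging
from logging.handlers import RotatingFileHandler

# Roll the log over at 10 MB and keep 5 old files (app.log.1 .. app.log.5);
# the oldest file is deleted automatically, capping total disk use at ~60 MB.
handler = RotatingFileHandler("app.log", maxBytes=10 * 1024 * 1024, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Server started")

Equivalent knobs exist in logrotate on Linux and in the Event Log settings on Windows.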

Generally, all of the OSes can write plain text files extraordinarily fast, and Windows evt/evtx logs are extraordinarily space efficient and exceedingly fast.

Coralon
This seems like a question that would be asked on a homework assignment or a test because it's so open-ended.

In most cases, you let the system determine the default.  The devs and distro creators have already preset standard logs and sizes. If you don't know what you're doing, just leave it alone.  You only ever need to worry about it if you run out of disk space or you need to add additional logging.
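
If you do reach the point of worrying about disk space, a quick check is easy to script. A minimal sketch in Python (the /var/log path is just an example; point it at whatever volume holds your logs):

import shutil

# Report free space on the volume that holds the logs.
usage = shutil.disk_usage("/var/log")
pct_free = usage.free / usage.total * 100
print(f"{usage.free / 1e9:.1f} GB free of {usage.total / 1e9:.1f} GB ({pct_free:.0f}% free)")
if pct_free < 10:
    print("Warning: less than 10% free - consider tightening log rotation.")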