• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1688

Log4j memory leak?

Hi all,
We are using log4j version 1.2.8.
Our configuration is 10 rolling log files for each of debug, error, and trace, with a maximum file size of 20 MB each.
The problem is that memory consumption grows in step with the logs. I have a system with 1 GB of memory; on starting the application the memory consumption is 100 MB, and when all 30 files (debug, error, and trace) were full, I checked the memory and ~700 MB was used.
I deleted 27 files (debug.log.x, error.log.x, and trace.log.x) and the memory decreased back to 100 MB.
How come? Any ideas? We wrap log4j in our project; is there anything we should take care of when wrapping it?
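The setup described above might look roughly like this in log4j 1.2 properties syntax. This is a sketch only: the appender name, file path, and layout are assumptions, and only the debug appender is shown.

```properties
# Assumed appender name and path -- one size-based rolling appender per level
log4j.appender.DEBUG_FILE=org.apache.log4j.RollingFileAppender
log4j.appender.DEBUG_FILE.File=logs/debug.log
log4j.appender.DEBUG_FILE.MaxFileSize=20MB
log4j.appender.DEBUG_FILE.MaxBackupIndex=10
log4j.appender.DEBUG_FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.DEBUG_FILE.layout.ConversionPattern=%d %-5p %c - %m%n
# ...ERROR_FILE and TRACE_FILE would be configured the same way
```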
Asked by: dannysh
1 Solution
 
CEHJ commented:
That seems all quite natural...
 
mmuruganandam commented:
log4j has a class called FileAppender. You can write your own appender by extending the FileAppender class.

That gives you some control over how and when the files are closed, among other things.
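As a self-contained illustration of that idea (this is not the log4j API; a real custom appender would extend org.apache.log4j.FileAppender), here is a hypothetical appender that opens and closes the file on every write, so no stream or buffer is retained in memory between log calls:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical illustration only -- not the log4j FileAppender API.
// Each append opens the file, writes one line, and closes it again.
public class CloseOnWriteAppender {
    private final Path file;

    public CloseOnWriteAppender(Path file) {
        this.file = file;
    }

    public void append(String line) throws IOException {
        // try-with-resources flushes and closes the writer immediately,
        // so nothing is kept buffered between log calls
        try (PrintWriter out = new PrintWriter(new FileWriter(file.toFile(), true))) {
            out.println(line);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".log");
        CloseOnWriteAppender app = new CloseOnWriteAppender(tmp);
        app.append("first");
        app.append("second");
        System.out.println(Files.readAllLines(tmp)); // [first, second]
    }
}
```

The trade-off is throughput: opening and closing the file on every event is slow, which is presumably why log4j keeps the stream open instead.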
 
dannysh (Author) commented:
>> That seems all quite natural
What do you mean? Should it be like that?

>> You have something called FileAppender. Write your own FileAppender by extending the FileAppender

How do they implement it? Do they also keep all the logs in memory? Any idea where I can find an implementation of it?
 
CEHJ commented:
If you want to limit the size to which your files grow (and it sounds like you might), you don't need to subclass FileAppender to do this: the maximum file size is a configurable property that you can set in your config file, e.g.:

log4j.appender.A1.MaxFileSize=300KB
 
mmuruganandam commented:
What kind of rolling log is yours?

Is it a time-based or a size-based rollover?
 
CEHJ commented:
>> What do you mean? Should it be like that?

What I mean is that it doesn't surprise me. I'd guess that, to avoid a bad performance hit from continually opening and closing files, they've replaced a simple append to the log file by normal I/O with buffered writes. As the files grow in size, the buffer may well grow too. (Just a guess ;-))
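For what it's worth, log4j 1.2's FileAppender does expose its buffering behaviour as configurable properties, so the buffering guess can be tested from the config file alone (the appender name A1 is an assumption):

```properties
# Flush the writer after every log event (this is the default)
log4j.appender.A1.ImmediateFlush=true
# Disable buffered I/O entirely, so no write buffer is held at all
log4j.appender.A1.BufferedIO=false
```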