
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 304

cp'ing or ftp'ing a 6Mb file eats 6Mb of free memory

under SunOS 5.6
(following numbers taken from top):

my total memory is 2048Mb
current free mem is 1920Mb

Now, when I copy or ftp a 6Mb file
to this box, free mem drops 6Mb and
seems to stay there until the file
is deleted. Once it is deleted, the 6Mb
returns to the free memory pool.

Now why is this happening?
The file is copied to disk, so why is 6Mb
of memory being tied up by this?

1 Solution
This is happening because you are copying the file to a swap or tmpfs location (probably /tmp).
It is a general misconception that this is a good place to put all and sundry.

From what you have said, the swap space looks large enough (if not oversized), so there should be no urgent need to increase it. If you do need to increase it later, you can use the swapadd command to add more.

swap -l will tell you the amount in use as well as the amount available.

prtconf | more will tell you (at the top) the amount of physical RAM.

I hope this answers your question.
gre (Author) commented:
No, I am not copying it to swap (/tmp).
I created a dir right off root and copied it there.

This is an Ultra 2 with 2 Gig of RAM, and it is doing nothing right now.

I found that UNIX, in general, tries to keep files in buffered memory so that if a file is requested again, it will not have to be read from disk. This sounds reasonable, but I would expect that at some point it would eventually write it to disk. I found that over 10 hours later it *still* had not freed the memory.

Thx for the answer, but I don't really feel it explains my problem.

You are right about Unix using memory for read caching, and you should be able to configure the kernel for the min and max amount of memory that can be used for this, e.g. 10-50% (up to 50% will be used IF the memory isn't needed by processes).

If nothing else needs that memory (processes or other more recent data caches) and the file hasn't changed, there's no reason to clear it from memory (as it's still current).
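A quick, hedged way to see this read caching for yourself. The sketch below uses only portable commands; the path and file size are made up for illustration, and /var/tmp is used rather than /tmp since /tmp is tmpfs on Solaris:

```shell
# Create a ~64Mb scratch file on disk (path is hypothetical).
dd if=/dev/zero of=/var/tmp/cachetest bs=1024k count=64 2>/dev/null

time cat /var/tmp/cachetest > /dev/null   # first read: comes from disk
time cat /var/tmp/cachetest > /dev/null   # second read: usually much faster,
                                          # served from the in-memory cache
rm /var/tmp/cachetest
```

If the data stayed cached, the second read's elapsed time typically drops sharply; that difference is the read cache at work.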


gre (Author) commented:
So do you know how to configure this?

Or do you know a way to *force* it not to
cache, or to clear the cache so that the file is written to disk immediately?

So what you are saying is that if memory is not needed by other processes, it will leave a 6Mb file in cache until *higher* priority
processes need it. Interesting. I tried to leave my system in *limbo* to see if the file was eventually written to disk, but after 10 hours it still wasn't.
It must be because no other processes are asking for memory.

I've tried it on several machines with the same result. I think I'll try it on a machine that has low memory available and see what happens.

I'll have to check on configuring the limits under Solaris, but it shouldn't be a problem - the file WILL have been written to disk immediately (df will show that the available disk space has shrunk). "sync" flushes the write buffers, but writes normally take place as soon as the disks can service the request.
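That claim is easy to check for yourself. A minimal sketch (the filename is made up; /var/tmp is used because it is disk-backed, unlike tmpfs /tmp):

```shell
df -k /var/tmp                      # note the available space before
dd if=/dev/zero of=/var/tmp/6mbfile bs=1024 count=6144 2>/dev/null
sync                                # flush any pending write buffers
df -k /var/tmp                      # available space has shrunk by ~6Mb:
                                    # the file is on disk, cached or not
rm /var/tmp/6mbfile
```

The second df shows the space consumed immediately, even while the file's pages are still sitting in the read cache.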

To prove that data has been flushed from memory, you'd have to do lots of reads of other data (up to 2Gb, to be sure the 6Mb file has been superseded), then re-read the file. Monitoring the cache hit % with sar would indicate whether it was finding the data in memory or on disk.
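A scaled-down sketch of that experiment (sizes are illustrative, not the full 2Gb, and the paths are made up; on Solaris you'd watch buffer-cache hit rates with `sar -b` while the final read runs):

```shell
dd if=/dev/zero of=/var/tmp/small bs=1024k count=6 2>/dev/null
cat /var/tmp/small > /dev/null      # prime the cache with the 6Mb file

dd if=/dev/zero of=/var/tmp/big bs=1024k count=256 2>/dev/null
cat /var/tmp/big > /dev/null        # newer data now competes for cache space

time cat /var/tmp/small > /dev/null # slower again if the file was evicted
rm /var/tmp/small /var/tmp/big
```

Whether the last read actually hits disk depends on how much free memory the box has; on a mostly idle 2Gb machine you'd need far more competing reads before the 6Mb file is pushed out.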

If the system is using up to 50% of memory for READ cache, then free memory will have dropped to about 896Mb at this stage and will stay at that level, but you STILL won't know exactly what data is being cached until you try to access it!

You say that "over 10 hours later it *still* had not freed the memory", but the file existed on disk immediately after the FTP (and grew incrementally during the transfer), right? Was the box essentially idle during that time, such that there was little or no demand for memory from other processes?

With 2 gig of memory and little else running that needs memory, there's little reason for the OS to purge the cache, as there's lots free. The data is already "safe on disk" and there's no competition for memory, so why should it? You'll find that the filesystem cache "self-sizes" depending on how much free memory there is. Lots of cached data obviously improves performance if that data is used repeatedly, and it doesn't really cost anything to drop it in the cache while it's being read.
gre (Author) commented:
Yes, it makes sense now, but it makes it hard to debug when you think you have a memory leak, because you see memory going down for no apparent reason. Yes, the box was *idle*.

I guess that is what Purify is for!

It totally makes sense though.

I appreciate the discussion. I'd like
to give you each 75 points; if there is
an easy way to do that, please let me know.
It's a little work to split points, but it's not difficult. Reduce the points for this question to 75 and pick one person's comment as the answer and grade it. Then post a new question for the other expert, worth 75 points and titled something like "Points for some-name".

And yeah, finding memory leaks is what Purify is for. Other than really gross leaks, it's difficult to draw any conclusions from what the system's free memory shows. BTW: the memory-checking tools in Sun's compilers aren't bad either.

