

cp'ing or ftp'ing a 6Mb file eats 6Mb of free memory

Posted on 2000-04-05
Medium Priority
Last Modified: 2011-08-18
under SunOS 5.6
(the following numbers are taken from top):

my total free memory is 2048Mb
current free mem is 1920Mb

Now, when I cp or ftp a 6 Mb file
to this box, free memory drops by 6 Mb and
seems to stay there until the file
is deleted. Once it is deleted, the 6 Mb
returns to the free memory pool.

Now, why is this happening??
The file is copied to disk, so why is 6 Mb
of memory being tied up by this??

Question by:gre

Expert Comment

ID: 2688244
This is happening because you are copying the file to the swap or tmpfs location (probably /tmp). It is a general misconception that this is a good place to put all and sundry.

From what you have said, the swap space looks large enough (if not oversized), so there should be no urgent need to increase it. However, if you need to increase it later, you can use swap -a to add more.

swap -l will tell you the amount in use, as well as the amount available.

prtconf | more will also tell you (at the top) the amount of physical RAM.

I hope this answers your question.

Author Comment

ID: 2690786
No, I am not copying it to swap (/tmp).
I created a directory right off root and copied it there.

This is an Ultra 2 with 2 GB of RAM, and it is doing nothing right now.

I found that Unix, in general, tries to keep files in buffered memory so that if a file is requested again, it will not have to be read from disk. This sounds reasonable, but I would expect that at some point it would eventually write it to disk. I found that over 10 hours later it *still* had not freed the memory.

Thanks for the answer, but I don't really feel it explains my problem.

LVL 21

Expert Comment

ID: 2690863
You are right about Unix using memory for read caching, and you should be able to configure the kernel for the minimum and maximum amount of memory that can be used for this, e.g. 10-50% (up to 50% will be used IF the memory isn't needed by processes).

If nothing else needs that memory (processes or other, more recent data caches) and the file hasn't changed, there's no reason to clear it from memory (as it's still current).
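A quick way to see this read cache at work is to read the same file twice (a sketch only; the /var/tmp path and the 6 Mb size here are arbitrary examples, not from the question above):

```shell
# Sketch: observe the filesystem read cache by timing two reads of
# the same file. The path and size are arbitrary examples.
dd if=/dev/zero of=/var/tmp/cachetest bs=1024 count=6144 2>/dev/null
time cat /var/tmp/cachetest > /dev/null   # first read comes from disk
time cat /var/tmp/cachetest > /dev/null   # repeat read is served from memory
rm /var/tmp/cachetest
```

The second read should normally complete noticeably faster, since the data is still sitting in the cache.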


Author Comment

ID: 2690897
So do you know how to configure this?

Or do you know a way to *force* it
not to cache, or to clear the cache, so that the file is written to disk immediately?

So what you are saying is that if the memory is not needed by other processes, it will leave a 6 Mb file in cache until a *higher* priority
process needs it. Interesting. I tried to leave my system in *limbo* to see if the file was eventually written to disk, but after 10 hours it still wasn't.
It must be because no other processes are asking for memory.

I've tried it on several machines with the same result. I think I'll try it on a machine that has little memory available and see what happens.

LVL 21

Expert Comment

ID: 2691115
I'll have to check on configuring the limits under Solaris, but it shouldn't be a problem. The file WILL have been written to disk immediately (df will show that the available disk space has shrunk). "sync" flushes the write buffers, but writes normally take place as soon as the disks can service the request.

To prove that data has been flushed from memory, you'd have to do lots of reads of other data (up to 2 GB, to be sure the 6 Mb file has been superseded), then re-read the file. Monitoring the cache hit % with sar would indicate whether it was finding the data in memory or on disk.

If the system is using up to 50% of memory for READ cache, then free memory will have dropped to about 896 Mb at this stage and will stay at that level, but you STILL won't know exactly what data is being cached until you try to access it!
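To convince yourself the file really is on disk regardless of what the free-memory figure shows, compare df output before and after the copy and a sync (a sketch; the /var/tmp path and file size are arbitrary examples):

```shell
# Sketch: verify the file lands on disk even while it stays cached.
# Path and size are arbitrary examples.
df -k /var/tmp                                        # note available space
dd if=/dev/zero of=/var/tmp/ftptest bs=1024 count=6144 2>/dev/null
sync                                                  # flush pending write buffers
df -k /var/tmp                                        # available space has dropped ~6 Mb
rm /var/tmp/ftptest
```

On Solaris you could additionally watch sar -b while re-reading the file: a high read-cache hit percentage means the data is coming from memory, not disk.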

LVL 40

Expert Comment

ID: 2691773
You say that "over 10 hours later it *still* had not freed the memory", but the file existed on disk immediately after the FTP (and grew incrementally during the transfer), right? Was the box essentially idle during that time, such that there was little or no demand for memory from other processes?

With 2 GB of memory and little else running that needs memory, there's little reason for the OS to purge the cache, as there's lots free. The data is already "safe on disk" and there's no competition for memory, so why should it? You'll find that the filesystem cache "self-sizes", depending on how much free memory there is. Having lots of data cached obviously improves performance if that data is used repeatedly, and it doesn't really cost anything to drop it in the cache while it's being read.
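If you truly need reads and writes to bypass the cache (as asked earlier in the thread), Solaris 2.6 UFS supports a forcedirectio mount option. A config sketch only; the device and mount point below are hypothetical examples:

```shell
# Solaris UFS direct I/O: I/O on this mount bypasses the filesystem
# cache entirely. Device and mount point are hypothetical examples.
mount -F ufs -o forcedirectio /dev/dsk/c0t1d0s0 /data
```

Note that direct I/O usually hurts throughput for ordinary workloads, since every read then has to come from disk; it's mainly useful for applications (like databases) that do their own caching.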

Author Comment

ID: 2691800
Yes, it makes sense now, but it makes things hard to debug when you think you have a memory leak, because you see memory going down for no apparent reason. Yes, the box was *idle*.

I guess that is what Purify is for!

It totally makes sense though.

I appreciate the discussion. I'd like
to give you each 75 points; if there is
an easy way to do that, please let me know.
LVL 40

Accepted Solution

jlevie earned 450 total points
ID: 2691850
It's a little work to split points, but it's not difficult. Reduce the points for this question by 75, pick one person's comment as an answer, and grade it. Then post a new question for the other expert worth 75 points, titled something like "Points for some-name".

And yeah, finding memory leaks is what Purify is for. Other than really gross leaks, it's difficult to draw any conclusions from what the system's free memory shows. BTW: the memory-checking tools in Sun's compilers aren't bad either.

