m_bizon

asked on

Using NT without a swap file

It only seems logical to me that if you have a system with a large amount of RAM, you should not use a swap file. Swap files are very inefficient, but an inexpensive way of adding to the available system memory. Take this scenario:
You have a server with 128MB of RAM and a 128MB max swap file size. It runs fine and has enough memory to handle all its current tasks. If you take that same server and put 256 or even 512MB of RAM in it, it would make sense to remove the swap file, as it is a serious bottleneck in the performance of the system.

Here is the problem. If you turn the swap file off (there is no option to turn it off in the settings, but you can set its maximum size to 0), you get an error every time the system starts. The system creates a swap file anyway, and you very shortly get another error saying that your system has run out of virtual memory.

So here is the question. Is there any way, through registry changes or the like, to make the system not use virtual memory? If not, then Windows NT is complete garbage. Why should I have to swap memory to disk when I have more than enough memory for what I am doing? The original purpose of virtual memory was to expand the usable RAM in an OS by using cheap disk space. The developers of OSes have lost focus by integrating a mandatory bottleneck into system performance.

I am sorry for the long post, but this problem really irritates me. When installing any OS, it should detect the amount of RAM you have and give you the option of disabling virtual memory when you have over a certain amount of RAM.
ASKER CERTIFIED SOLUTION
jaywallen

Expressions

The page file is used for recovery as well; that's why a dump file is created (if the option is selected) when you get a stop error. There are more efficient ways of setting the page file up than just putting it on the boot partition, though.
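
For reference, the dump behaviour lives under the CrashControl key; the values below are the usual ones (shown as a sketch, worth verifying on your own system):

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl
        CrashDumpEnabled (REG_DWORD)     = 1
        DumpFile         (REG_EXPAND_SZ) = %SystemRoot%\MEMORY.DMP

Writing a full memory dump also requires a page file on the boot partition at least as large as physical RAM, which is one more reason NT resists running without one.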
>windows NT is complete garbage.   Why should I have to swap memory to disk when I have more than enough memory for what I am doing.

I did some work on high-end IBM Netfinity servers that had 4 gigs of memory. 4 gigs is the maximum addressable memory space for a 32-bit architecture. Even then, NT creates a swap file.

Standard M$ answer: all these problems will be fixed in the next release. So just wait for Windows 2000 Datacenter; it will be released any year... er, I mean day now.
I hear what you are saying. You want to force NT to use RAM instead of disk to optimize speed. Someone at work has a machine with 2GB of RAM, and I'm sure the page file is still used. I suggest using a page file on C: with a 2MB minimum and a 2MB maximum. NT needs that for debugging or something like that(?).
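
If you want to set that up without going through Control Panel, the page file list lives in the registry as a multi-string where each entry is "path initial-MB maximum-MB". A sketch of the min 2 / max 2 setup (verify the value on your own box before trusting it):

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
        PagingFiles (REG_MULTI_SZ) = C:\pagefile.sys 2 2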
I may be completely out of my gourd here, so please correct me if I am wrong. I was under the impression that the swap file was not even used unless the RAM was completely filled up, at which point it swapped out the oldest non-accessed portions of RAM to the swap file.

You are using it in the most effective way. If you had 2 gig it might not even swap at all. This is because NT needs to know that the swap is there in case it does need it; if it is not there and NT goes to swap to it, you're SOL. Why on earth would you not want a swap file? It does you no harm if you're never maxing out your RAM in the first place. I don't think that it swaps stuff out of RAM just to get it out; it does it to create room for the most accessed parts of RAM.

What happens when you fill up that 2 gig of RAM and there is no swap file? Does it dump your data to a bit bucket? Then when it needs it again, is the program written to know to relocate it on the hard drive, search for it again, load it into RAM again, and then likewise dump some other, say, monitor driver that you have in RAM to the bit bucket?

You see, you have to have it. The only way you would not need a swap file is if you had more memory than you had disk space, at which point you would still need some extra to hold calculations.

Given you had a 1 gig drive, dump the whole drive into RAM, then leave 1 gig of RAM for processing. Yeah, this would work, and it has been done: it is called a RAM drive. Broadcast.com does this.

Anyway, please, as always, shoot me down if I am wrong. It happens all the time.


NeckusMaxamus
I may be wrong also about the page file. It may stem from some of my frustrations with NT. :)

I was basing my assumption on Task Manager. In the Performance tab there is a Physical Memory section. At times there will be more available memory than is in the file cache. Why? My guess is NT likes page files. Please prove me wrong.
SysExpert
I agree with Jaywallen. Default settings are there from the days when 16 MB was a lot. If you change the registry to NOT page out the kernel or device drivers, then you can use a minimal or zero-size swap file (with a little luck).
I hope this helps.
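
For anyone searching later, the value usually cited for keeping the kernel and drivers resident is the one below; set it back to 0 to restore the default behaviour:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
        DisablePagingExecutive (REG_DWORD) = 1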
No one has mentioned memory leaks. NT is full of them; if MS fixes them (he he) it won't swap out. Until then it needs a swap file, because it doesn't handle memory very well, period.
m_bizon

ASKER

A comment for NeckusMaxamus....

You are completely off base. If you use the NT diagnostics program, it will show you how much memory is paged at any given time. We have a system running with 1GB of RAM; after a reboot, with no extra services loaded, we can see 10MB of memory paged to disk. It seems that NT will use virtual memory no matter how much RAM you have and no matter how much memory you are using. The engineers at Microsoft were so concerned with making the virtual memory system as efficient as possible that they lost focus and forgot why there is virtual memory to begin with: to give you more usable memory without having to buy very expensive RAM. NT was designed when 70ns RAM was $50 a meg and it was rare for a system to have 32MB of it.

As well, all systems have a finite amount of memory, virtual or not. The settings for your page file(s) have a minimum and a maximum size. If you use all of that, you are in the same situation as if you were not using virtual memory and you used it all. There is no advantage to having the memory that you run out of being on a disk. When an application requests memory, the system checks to see if there is any left; if not, it returns a "memory unavailable" code to the application, even if virtual memory is being used and even if you have 400GB of free hard drive space. When you reach the maximum setting on the total page file size, you run out of memory.

Again, I apologise for the rant. I can't believe that an OS designed in this day and age would have such an obvious and major flaw. I have looked at the way 2000 handles it and it doesn't look hopeful. I will have to experiment, though.
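
If anyone wants to watch these numbers from code instead of the diagnostics tool, GlobalMemoryStatus is the Win32 call that reports them. A minimal sketch (the labels in the printfs are mine):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUS ms;
        ms.dwLength = sizeof(ms);
        GlobalMemoryStatus(&ms);        /* fills the struct, returns nothing */

        printf("memory load:   %lu%%\n", ms.dwMemoryLoad);
        printf("physical:      %lu KB free of %lu KB\n",
               ms.dwAvailPhys / 1024, ms.dwTotalPhys / 1024);
        printf("page file:     %lu KB free of %lu KB\n",
               ms.dwAvailPageFile / 1024, ms.dwTotalPageFile / 1024);
        printf("address space: %lu KB free of %lu KB\n",
               ms.dwAvailVirtual / 1024, ms.dwTotalVirtual / 1024);
        return 0;
    }

One caveat: the dwTotalPageFile/dwAvailPageFile figures actually track the commit limit (RAM plus page file together), not the page file alone.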
Older versions of Novell NetWare (3.x and 4.x) didn't need a swap file, but NetWare 5.x has a 2MB swap file, which is usually enough.
If I remember well, Linux also has a swap partition...
And these OSes know how to do file sharing.
It seems that MS doesn't, or forgot how, because Win 3.x did run without a swap file.
Anyway, an MS "OS" needs infinite MB of RAM and the same + 12 MB of swap :-)
I think that one of the problems here is that we're equating swap space with a page file. These are two different things. Swap space is used in systems that require physical RAM to be available for any process. Once enough processes are running that there is no more available RAM, entire processes are swapped out to the swap area on disk to make room for newer processes. Think back to the days of 512K of RAM costing $10K and 300 meg of disk costing $20K. Unix and Unix-like systems (Linux) use swap space.

NT does not use swap space. It uses a page file. NT is based on the design of "virtual memory" machines. Virtual memory machines do not swap out entire processes to make room in RAM. They page out "least recently used" pages of RAM. If physical RAM is filled by all the processes running on a machine, then memory pages will be paged out, oldest pages first.
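
To make "least recently used" concrete, here is a toy C sketch of LRU victim selection over a handful of physical frames. It only illustrates the policy; it is not NT's actual code, and the page-reference string is made up:

    /* Toy LRU page replacement: the frame touched longest ago is evicted. */
    #include <stdio.h>

    #define FRAMES 4

    int main(void)
    {
        int  frame_page[FRAMES];    /* which page occupies each frame */
        long last_used[FRAMES];     /* "time" of last access per frame */
        long clock = 0;
        int  refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };  /* pages touched */
        int  nrefs = sizeof refs / sizeof refs[0];
        int  i, f;

        for (f = 0; f < FRAMES; f++) { frame_page[f] = -1; last_used[f] = -1; }

        for (i = 0; i < nrefs; i++) {
            int page = refs[i], hit = -1, victim = 0;
            clock++;
            for (f = 0; f < FRAMES; f++)
                if (frame_page[f] == page) { hit = f; break; }
            if (hit >= 0) {
                last_used[hit] = clock;     /* resident: just refresh its age */
            } else {
                for (f = 1; f < FRAMES; f++)            /* find oldest frame */
                    if (last_used[f] < last_used[victim]) victim = f;
                if (frame_page[victim] != -1)
                    printf("page %d -> pagefile (LRU victim)\n", frame_page[victim]);
                frame_page[victim] = page;              /* bring new page in */
                last_used[victim] = clock;
            }
        }
        return 0;
    }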

Virtual memory machines also allow user and program address space to be much larger than physical RAM. NT, I believe, allows for 4 gig (I might be wrong on 4 gig, but it's some large amount) of address space per process. Since a user could not actually have 4 gig of physical RAM available, some amount of the page file is reserved. Therefore, you cannot run the machine without a page file.

A quick-and-dirty explanation of "virtual memory" can be found at www.whatis.com.
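
One way to see why big address spaces still need page-file backing: VirtualAlloc distinguishes reserving address space (which costs nothing) from committing pages (which must be charged against RAM plus page file). A minimal sketch, with an arbitrary 64MB size:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Reserving only claims address space: no RAM, no page file. */
        void *reserved = VirtualAlloc(NULL, 64 * 1024 * 1024,
                                      MEM_RESERVE, PAGE_NOACCESS);

        /* Committing must be backed by RAM or page file; with too
           little of either, large commits fail even on an idle box. */
        void *committed = VirtualAlloc(NULL, 64 * 1024 * 1024,
                                       MEM_COMMIT, PAGE_READWRITE);

        printf("reserve 64MB: %s\n", reserved  ? "ok" : "failed");
        printf("commit  64MB: %s\n", committed ? "ok" : "failed");

        if (committed) VirtualFree(committed, 0, MEM_RELEASE);
        if (reserved)  VirtualFree(reserved,  0, MEM_RELEASE);
        return 0;
    }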
Question for Jaywallen:

Does the registry key that you mention in your original post also apply to Windows 2000?  Or is there a similar key that can be used?

Dave
Hi, Moonshadow!

Same registry entry, same function, AFAIK. The only one of my personal PCs I'm currently using W2K on is my PIII 500MHz notebook with only 128MB RAM. I just haven't bothered with it, figuring that it probably wasn't worthwhile unless I had at least 256MB. Since notebooks have slow hard drives, using this setting might have a very favorable effect on performance. Maybe I'll give it a try. I've only used it on graphics workstations and servers before.

Regards,
Jim
The answer is that there is no good answer.

Following are some Knowledge Base articles if you want more details on NT memory management:

http://support.microsoft.com/support/kb/articles/Q126/4/02.asp

http://support.microsoft.com/support/kb/articles/Q171/7/93.ASP

http://support.microsoft.com/support/kb/articles/Q184/4/19.ASP
m_bizon

ASKER

I am glad this question stirred up such discussion. Obviously Microsoft needs to look at this issue, especially as memory prices continue to fall and systems become capable of handling more and more of it.

Thank you, Jaywallen. That seems to be the most reasonable solution.
You're welcome, m_bizon.  I hope that preventing kernel and driver paging will at least provide some performance boost for you.

As capito pointed out, there is no "good" answer to this one. And I'm not sure that we'll see much of a change in the basic design philosophy employed in developing these operating systems until there's a radical change in the supporting hardware. I think the public's greed for more and more "features" (and the software vendors' desire to capture and keep market share) will keep demands on the OS higher than can be safely met by a "live entirely in RAM" system. It just seems to be our nature to always want more than can be accommodated without compromise in design. But some of the wilder mass storage options looming on the horizon may be as fast as the RAM we use now.

Thank you for your consideration.

Regards,
Jim
Yeah, thanks guys, this was a good one. Hell, I think I looked up more information on this than I have since I started coming here, and even changed a couple of servers with what I found. All in all, a really good thread this time around.

Have a good one guys,


NeckusMaxamus