Visual3DMaya (Romania) asked:
Isn't it better to have as much swap as possible?

ASKER CERTIFIED SOLUTION
callrs
war1:
Greetings, Visual3DMaya !

Not necessarily.  If you are space limited, having a big swap file takes away performance from other functions of the system.

Best wishes!
Visual3DMaya (Asker):
Greetings :) How is that, war1?
Visual3DMaya, if you allocate a large amount of space to the swap file, and your system needs the space to store program and data files, then the system has to hunt for open space. Program and data files may be fragmented.
SOLUTION
Hi callrs,
*Concerns about "performance" are moot when a Windows system is using two or three times its total physical memory.
>>>why, it computes more addresses?

Also, if a large number of pages need to be moved in or out of the page file, chances are quite good that other hard-disk activity is taking place at the same time, further reducing performance.
>>>didn't get it here, what is the drawback.

  The thing about performance for systems using more than one and a half times the size of physical memory is there for clear reasons. When Windows is actually USING that amount of swap memory, the OS is constantly accessing your HD.

   If, for example, you are loading a huge picture in Photoshop, Photoshop will be loading, processing (decompressing), and saving the processed result into memory. So at the same time you are reading from the HD, you'll be writing back to it in the form of swap. It is actually much worse than that, since many other things will be happening in the background.

   The deal is this: if you don't have enough physical memory, Windows will use "HD memory". And the more frequently it needs to use the "HD memory", the slower your computer will seem to run. As simple as that.

   But this indeed has nothing to do with the size of the swap; it only has to do with how much memory the running applications are demanding.
war1,
Program and data files may be fragmented
>>>no problem, I never defragment; I leave them alone.
That happens when space shrinks :), that's a problem indeed, but I take care to leave more than the necessary space.

I have another dedicated partition for the swap file, on a second hard drive, and made it permanent, the same min and max. I don't know what happens if I extend the swap very much, whether system performance decreases. For example, if I make a 100GB swap.

Right now I have 4GB and don't know whether to leave it or make it smaller.
Any kind of test?
 And if your computer demands more of the swap file... the more optimized your swap file is, the faster it will get the paging job done.

 That's why the recommendations are to have it on a different HD, or at least on a different partition. The major benefit of having it on a different partition is that it will be stored as one big contiguous file, so the seek time is much faster.
SOLUTION

"For example, if I make a 100GB swap. Right now I have 4GB and don't know whether to leave it or make it smaller. Any kind of test?"


  Windows will never use 100GB of swap if you have only 2GB of memory. Any application that would suck up that amount of page memory would take a few hours :o) to do anything useful. As I've said, if you are using more than one and a half times the physical memory, it will already be too slow to be usable.


What do you think, markps_1, about what callrs recommends:
-Set a minimum swap file size so windows doesn't have to keep resizing it.
-Set no maximum size.

Now i have it permanent.
if you are using more than one and a half times the physical memory, it will already be too slow to be usable.
>>>why, markps_1? Won't it use just as much as it needs from, let's say, 100GB?
 It is fine to have a 100GB file size... some people say that it will increase allocation overhead, but that is probably the only drawback, if any.

 As I've said before, a bigger page file doesn't mean anything by itself. You are just allocating that file size on your hard drive... what Windows actually uses is what matters...

   But if Windows is actually using 100GB of paging, that is a lot of data to move :o) so you are probably running 100's of applications at once, which means the OS will be frozen doing paging :o)
What 100 applications, markps_1?
I presume that I use it in a normal way.
The question would be: does that have a bad influence on system performance?
Because you recommended one and a half times the size of the physical memory, is that an optimal or a minimum?
callrs:

Hi Visual3DMaya. The words you quote are not mine, but are from Wiki, as I indicated "Now an official word from ..."

But I will attempt to explain the Wiki's words:
Re: *Concerns about "performance" are moot when a Windows system is using two or three times its total physical memory.
>>>why, it computes more addresses?

-If Windows is USING 2 to 3 times the RAM, then there are other issues to deal with; swap file performance issues are the least of your worries if you reach such a point. Like if you have a cut & a broken arm, the cut is minor in comparison. Or if you are in deep water & can't touch ground, any deeper water won't matter... So if Windows needs so much swap file, then look at issues such as: what's hogging all the available RAM (adware/spyware? excess unneeded startup tasks? etc.)? How much RAM do you have?

Re: Also, if a large number of pages need to be moved in or out of the page file, chances are quite good that other hard-disk activity is taking place at the same time, further reducing performance.
>>>didn't get it here, what is the drawback.

Wiki basically says that a fixed-size swap file has these disadvantages: Windows may be unable to meet memory-allocation requests, it offers no performance boost if Windows is already using too much of the page file, and a fixed, unfragmented file offers little advantage over the recommended re-sizable file that may get fragmented.

A fragmented swap file is not much of a burden, since "Windows does not read from or write to the page file in sequential order for long periods of time" and "if a large number of pages need to be moved in or out of the page file, chances are quite good that other hard-disk activity is taking place at the same time, further reducing performance". Those last three words are just putting things in perspective: you will already have such a degradation in performance with or without a fixed-size unfragmented swap file.

So they talk about the drawbacks of having a fixed-size swap. That's what that Wiki entry is about. Then they conclude their argument with 'A large "maximum" will incur no performance penalty'.
 Let me rephrase...

  You use memory when you have programs running (OS and OS services included). If you have 1 application running using 100MB, it will probably be using only physical memory... If you have 10 applications using 100MB each, you'll probably be using part of the swap... Every time it uses paging you'll notice a performance decrease in your overall applications. By the time paging reaches more than a couple of times the amount of memory you have, your overall performance is already really bad... For LOADED applications to use 100GB of swap, your computer would be doing nothing but swap and be completely frozen. That's why I've said you'd need to be running 100s of applications to reach that point...

  Windows will only use the swap when it NEEDS memory.

  As callrs just mentioned, there is no definitive conclusion regarding the size. But there is no advantage in using huge page files, since once your computer is using a couple of times the amount of physical memory it will be frozen anyway.
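A minimal sketch of the point above, with purely illustrative numbers (a real memory manager works in 4 KB pages and keeps caches, so this is only the rough idea):

```python
# Toy model: swap usage is driven by how much memory the running
# applications demand beyond physical RAM, not by the swap file's size.
# All figures here are illustrative, not real measurements.

def swap_needed_mb(demand_mb: int, ram_mb: int) -> int:
    """Demand that does not fit in physical RAM spills into swap."""
    return max(0, demand_mb - ram_mb)

RAM_MB = 512  # e.g. the asker's current machine

# One 100 MB application fits entirely in RAM:
print(swap_needed_mb(1 * 100, RAM_MB))   # -> 0

# Ten such applications spill most of their demand into swap:
print(swap_needed_mb(10 * 100, RAM_MB))  # -> 488
```

The model makes the thread's point visible: enlarging the swap file changes neither of these numbers; only the demand of the running applications does.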

 "I have another dedicated partition for the swap file, on a second hard drive, and made it permanent, the same min and max. I don't know what happens if I extend the swap very much, whether system performance decreases. For example, if I make a 100GB swap."

  The whole point of making it permanent is that it will prevent fragmenting... if it has its own partition it will already be unfragmented... so simply set the minimum and a considerably large maximum (2.5x your physical memory) and you'll be just fine.

 
Wikipedia's description is incorrect.

 At the top it says that fragmentation is not an issue... but at the bottom of the page it says:

"In the Linux and *BSD operating systems, it is common to use a whole partition of a HDD for swapping. Though it is still possible to use a file for this, it is recommended to use a separate partition, because this excludes chances of fragmentation, which would reduce performance. "

 Fragmentation IS an issue, and it is one of the major optimizations you can make on your swap file.


 
Re: "wikipedia's description is incorrect."

That Wiki entry relates to the fixed-swap-file-size advice that some give. Wiki says don't set a fixed size out of worry about performance/fragmentation issues, since those issues are minor compared to having no maximum (or a large maximum) size set. You have to read the fragmentation remark in the context of "fixed-swap-file-size" only. Outside of that context, it's a different ballgame.
 

Re: "Right now i have 4Gb and dono if let it so or to make it smaller."

I say, leave it at 4GB minimum, and set no upper limit. Windows will ONLY allocate the memory it needs to allocate, so worrying about "why, it computes more addresses?" seems a moot point: if it doesn't need the space, I doubt Windows will worry about computing the addresses it doesn't need.
If it comes to the point that Windows needs more than 4GB, then I believe it will tell you that it's increasing the size (my Win2K computer tells me. If not, there are some free utilities that can monitor changes in files on your system).

If you find that Windows is frequently resizing the file, then go ahead and increase the minimum size to 5GB. (What programs are you running that use so much memory? lol. My 512MB swap file on a 768MB-RAM computer hasn't seen the swap file increase yet.)

Recommendation: Since you have the swap file on a separate drive, why not use some of that space to store rarely accessed files? Partition the drive, leaving the first part (I would use no more than 8GB, but the size is up to you) for the swap file, and the rest to hold my data. The first part of the drive has the fastest access to data, so it's ideal for a swap file.
Oh, you may see an apparent contradiction in my stating "...tells me..." (that the swap file is increasing) and "...hasn't seen ...increase".

No contradiction, just a description of swap files on separate computers, not the same computer.
"hold my data"--> I meant "hold any data"

Also, I would set the minimum size to no more than 1 Gig; no max. (The auto-max then will be the available space on the partition). Then see if it EVER increases - if it doesn't, then you're fine. If it does increase though, then look at what's hogging all the physical RAM and possibly increase the minimum to 1.5Gig. Etc.

I hope you have at least 512Meg of RAM...
"Isn't it better to have as much swap as possible?"

ABSOLUTELY NOT.

Go into the Performance tab, and set the swap file to system managed -- then CLICK APPLY -- you must do this, then reboot.

Too much swap file space is worse than too little; it causes the system to lose data and thrash.  Even MS says this.

Set it to system managed for top performance.  This is tried and true, and MS says the same thing.
 Did anyone have the curiosity to read these? They all say the same thing: a permanent, not-too-big swap is the way to go. Oh my.

http://www.pcmag.com/article2/0,1759,887795,00.asp 
http://www.pcguide.com/opt/opt/osSwapLocation-c.html
http://www.adriansrojakpot.com/Speed_Demonz/Swapfile_Optimization/Swapfile_Optimization_01.htm
For LOADED applications to use 100GB of swap, your computer would be doing nothing but swap and be completely frozen.
But there is no advantage in using huge page files, since once your computer is using a couple of times the amount of physical memory it will be frozen anyway.
>>>markps_1, it's very hard for me to understand your logic.
Let's take again the 100GB swap.
Is it better to use a smaller swap file and freeze instantly (when it is overloaded) instead of using 100GB and have a chance to keep it working?
Let's focus on the swap, not on the memory load.
What I need to know, as I already said above, is:
does system performance decrease once the swap is set unjustifiably big? (presume I have a lot of hard disk space and the swap is made permanent)
>> does system performance decrease once the swap is set unjustifiably big? (presume I have a lot of hard disk space and the swap is made permanent) >>

No. System performance is not decreased if the swap file is set big.
 There is no gain in using a swap file that is too big. And there is a chance that you'll see decreased performance due to the resources spent administering the large swap file (as I've mentioned before); seek time is also an issue with a large file.

   Ok... think again about your huge 100GB swap.

   Your hard drive is slow... and when your computer uses paging it grabs data in chunks... If an application is running, code and data are being read and written constantly. (Unless the application is not doing anything, like an open Word document in the background; even a dormant application is still consuming memory.)

  So if the user is actually using the 4GB of paging spent on running applications, it means that at some point 4GB of data has been transferred from memory to the hard drive at run time... and is being read and re-written all the time...

  If you consider that it takes more than 1 minute to transfer 4GB (on an 80 MB/s hard drive + seek time)... and it's not simply transferred, it is being read and written constantly... If you have more RAM you do less processing in "HD memory" and more in real memory. If you don't have at least 1GB of memory your computer will be frozen (this is a loose example that motivates the 1.5x to 2.5x recommended swap size).

  Regarding your 100GB of swap size: it is about 20 to 30 minutes just to move that amount of data... and that movement happens over and over. As I've said, data is not simply stored; this is a live process, so code is read and executed from "HD memory", or data is being processed in "HD memory"... So your computer will be long frozen.

  THAT'S why a paging file that's many times bigger than your memory doesn't help your computer. With that much paging your computer is already frozen... so the size of the paging file won't make any difference.
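The back-of-envelope timing behind those figures can be checked directly (sequential throughput only; real paging adds seek time and constant re-reading, so these are lower bounds):

```python
# Rough transfer-time arithmetic for the "1 minute" and "20 to 30 minutes"
# figures: time = size / sequential throughput, ignoring seeks entirely.

def transfer_seconds(size_gb: float, mb_per_s: float) -> float:
    """Seconds to stream size_gb once at the given throughput."""
    return size_gb * 1024 / mb_per_s

# 4 GB at 80 MB/s: about 51 s, i.e. "more than 1 minute" once seeks count.
print(round(transfer_seconds(4, 80)))         # -> 51

# 100 GB at 80 MB/s: about 21 minutes, matching "20 to 30 minutes".
print(round(transfer_seconds(100, 80) / 60))  # -> 21
```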
I have been curious about this issue also, and I believe that I get a marginal improvement in performance with the method suggested by markps_1 (i.e. permanent and not too large) over all other permutations.

I have also tried the registry setting here but I notice absolutely no improvement or degradation in performance:
http://www.winguides.com/registry/display.php/244/

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
DWord Value Name: "ClearPageFileAtShutdown"
Enabled = 1

I am not asking another question here inside someone else's, but thought I would mention it in case this issue comes up and also allow other experts to advise Visual3DMaya of their observations regarding the clearing of the Page File with every shutdown.
If this digresses too much from the Virtual memory question, just ignore it ;-)
callrs, I have 512MB but I intend to build a new OPTERON with 4GB so I can use Maya or something else.
My question is just informative, as I will need much more memory than 1.5x RAM using Maya alone.
Possibly I will need even a 100GB swap. The question now is: will it be worse for Windows when not in Maya?

Recommendation: Since you have the swap file on a separate drive, why not use some of that space to store rarely accessed files?
>>>system files?

The first part of the drive has fastest access to data, so it's ideal for a swap file.
>>>it's a good idea, but I think this is problematic as Windows doesn't like more than 1 primary partition; I had this issue formerly. I have another partition for data; the system has its own.

markps_1, Wikipedia's description is incorrect at one point, indeed; same logic as yours :)

Did anyone have the curiosity to read these
>>>2 of them so far, and none explains why a big swap is not OK. People are chary regarding space, when things can be done worse but without such waste. The first thing they think of is space, and only after that quality.
After what war1 said: "System performance is not decreased if the swap file is set big," I tend to believe him, until I read the contrary said by MS.

 So if the user is actually using the 4GB of paging spent on running applications, it means that at some point 4GB of data has been transferred from memory to the hard drive at run time... and is being read and re-written all the time...
>>>I presumed that Windows knows to use the RAM first, and to copy to swap only if it has no choice. I'm also not sure it does that, especially Windows XP, which does everything at its own will.
...
don't confuse paged memory (residing in RAM) with virtual memory (on the HDD)

Hello scrathcyboy,
Too much swap file space is worse than too little; it causes the system to lose data and thrash. Even MS says this.
Please find for me the link where MS says that and you've got a lot of points!
Do not confuse using much swap with setting it at a high value.

BillDL, it's on topic; it's OK. Thanks for the info even if I won't use it.
"Possibly i will need even 100Gb swap. The q is now, will be worse for windows when not in Maya?"
So why not start with a 1GB swap file? If Windows resizes it, then set it to 2GB. If Windows resizes it again, set it higher, etc. This method will give you an idea of exactly how much swap file Maya really needs. So I recommend this route.

But if you really want to just make it 100GB, my answer is: no, it won't be worse for Windows when not in Maya. As I said, Windows will only do the allocation calculations for the amount it needs. If more space is there than it actually needs to allocate, it simply won't use it; it basically ignores it. It's like having a big castle but needing only a few rooms: the other rooms don't hold you back; instead, they are encouraging in case you ever need more rooms for parties, guests, etc.

"A large "maximum" will incur no performance penalty." (Wikipedia)


-------
P.s. In case the Wiki entry about fragmentation still seems flawed, please read above about the context & perspective in which the fragmentation logic is used. The context is that fixed-swap-file-size logic would cause much more trouble than fragmentation/performance issues if the swap file were not large enough to meet allocation requests and there's too much swapping going on. But if you have a large enough swap file, or set it to auto-expand, then you can worry about tuning it even further by having an unfragmented file.
On any build I do, I try to have a 2GB partition for the swap file, sized dependent on RAM:

512MB  = 1000MB min/max swapfile
1GB +  = 1500MB min/max swapfile
etc etc

The fact that the swapfile runs in its own space helps.
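That rule of thumb can be sketched as a simple lookup (the thresholds and values are this poster's own, not an official recommendation, and other replies in the thread question the rule's direction):

```python
# Sketch of the posted min/max rule of thumb; the numbers are the
# poster's, roughly 1.5-2x RAM, not a Microsoft guideline.

def suggested_swap_mb(ram_mb: int) -> int:
    if ram_mb <= 512:
        return 1000   # "512mb = 1000 min/max swapfile"
    return 1500       # "1gb + = 1500 min/max swapfile"

print(suggested_swap_mb(512))   # -> 1000
print(suggested_swap_mb(1024))  # -> 1500
```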
Seems illogical. Think about it: if you have MORE RAM, then you need LESS swap space. If you have LESS RAM, then you would need MORE swap space. But you have it reversed.
And it's OK to set a max if Windows won't need more than the max, but the problem comes when Windows wants to allocate more than the max; that's one reason setting no max is recommended.
---
The swap file size&usage can be monitored, to get a better idea of what size you need:
XP/2k: see http://support.microsoft.com/?kbid=555223     RAM, Virtual Memory, Pagefile and all that stuff
"Performance Monitor (Start, Administrative Tools, Performance) is the principal tool for monitoring system performance and identifying what the bottleneck really is.  ...

Paging File, %pagefile in use - this is a measure of how much of the pagefile is actually being used.
          This is the counter to use to determine if the pagefile is an appropriate size.  If this counter gets to 100, the pagefile is completely full and things will stop working.  Depending on the volatility of your workload, you probably want the pagefile large enough so that it is normally no more than 50 - 75% used.  If a lot of the pagefile is in use, having more than one on different physical disks, may improve performance. "

98: Start, Programs, Accessories, System Tools, System Monitor, Add item, Memory Manager, Swapfile size. See http://www.bootdisk.com/swapfile.htm for more info.
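The quoted 50-75% guideline could be wrapped in a quick check like this (the function and its wording are illustrative, not part of any Microsoft tool; the percentage would come from the "% pagefile in use" counter described above):

```python
# Interpreting the "Paging File, % pagefile in use" counter per the
# quoted MS guidance: normally no more than 50-75% used; 100% is full.

def pagefile_advice(pct_in_use: float) -> str:
    if pct_in_use >= 100:
        return "full: things will stop working"
    if pct_in_use > 75:
        return "enlarge the pagefile (or add one on another physical disk)"
    return "size is appropriate"

for pct in (40, 90, 100):
    print(pct, pagefile_advice(pct))
```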
If you have more RAM it means you can handle enough data/code processing in RAM and can deal with more swap space. But there is a limit to swap space, due to speed constraints when the swap space is too large. The delay time increases geometrically with the size. It isn't too hard to understand that.

We have "load of memory" = variable 1 and "size of swap" = variable 2.
To compute 2, 1 must be given a known, certain value.
Hence 2 might be huge, with no performance decrease.
Below (1) are excerpts from Microsoft's Dec. 2004 writeup about the page file. You said you are building an OPTERON, and I assume with a 64-bit operating system? The 64-bit editions of Windows XP and Windows Server 2003 increase the 4GB/process to 16 terabytes (see support.microsoft.com/default.aspx?scid=kb;en-us;294418).

So in answer to your original question, the following lines in the writeup likely apply to 64-bit OS as well as to 32-bit for which it was written (after all, 64-bit was on the horizon when it was written): "The operating system only assigns RAM page frames to virtual memory pages that are in use... having a large pagefile ... does not cause a problem and eliminates the need to fuss over how large to make it."


(1) http://support.microsoft.com/?kbid=555223     RAM, Virtual Memory, Pagefile and all that stuff:
--Quote
All processes (e.g. application executables) running under 32 bit Windows gets virtual memory addresses (a Virtual Address Space) going from 0 to 4,294,967,295 (2^32 - 1; 4 GB), no matter how much RAM is actually installed on the computer.

In the default Windows OS configuration, 2 GB of this virtual address space are designated for each process’ private use and the other 2 GB are shared between all processes and the operating system.  Normally, applications (e.g. Notepad, Word, Excel, Acrobat Reader) use only a small fraction of the 2GB of private address space.  The operating system only assigns RAM page frames to virtual memory pages that are in use.

...There can be a large number of processes each with its own 2 GB of private virtual address space. ...

...A frequently asked question is how big should I make the pagefile? ...On server systems, a common objective is to have enough RAM so that there is never a shortage and the pagefile is essentially, not used.  On these systems, having a really large pagefile may serve no useful purpose.  On the other hand, disk space is usually plentiful, so having a large pagefile (e.g. 1.5 times the installed RAM) does not cause a problem and eliminates the need to fuss over how large to make it.  

Performance, Architectural Limits and RAM

...as load (number of users, amount of work being done) increases, performance ... will decrease, but in a non linear fashion.  Any increase in load (demand) beyond a certain point will result in a dramatic decrease in performance.  This means that some resource is in critically short supply and has become a bottleneck.
 
At some point, the resource in critical short supply can not be increased.  This means an architectural limit has been reached.  Some commonly reported architectural limits in Windows include: ...
--End Quote

---------------------64-Bit vs 32-Bit:

Think about it. 64-bit operating system. 16 terabytes vs 4 gigabytes of virtual memory space per process: 1TB = 1000GB, so 16000GB/4GB = 4000!!! That's 4000 TIMES MORE ADDRESS SPACE. So in 64-bit, even a 100GB address space is just PEANUTS -- it's a mere 100/16000 = 1/160th, or less than 1% (0.625%), of capacity per process! It's no load at all to deal with such address space. The load is the actual physical disk access, which is still much slower than accessing actual RAM; but even then, hard disk access has been on the rise, with greater speeds & larger disk caches.

So performance-wise, you need not worry about the OS being bottlenecked by the address space -- but by the actual USE (not allocation or size) of the page file!
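The arithmetic above checks out (using the post's decimal units, 1 TB = 1000 GB):

```python
# Verifying the 64-bit vs 32-bit address-space figures quoted above.

PER_PROCESS_64BIT_GB = 16 * 1000  # 16 TB per process, decimal units
PER_PROCESS_32BIT_GB = 4          # 4 GB per process

print(PER_PROCESS_64BIT_GB / PER_PROCESS_32BIT_GB)  # -> 4000.0 times more
print(100 / PER_PROCESS_64BIT_GB * 100)             # share of a 100 GB
                                                    #    swap: 0.625 (%)
```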


Is Maya a 64-bit application? If not, then it can't address more than 4GB anyway. Any 64-bit app can benefit from the larger virtual-memory space ("having a large pagefile ... does not cause a problem"), while any 32-bit app running on a 64-bit OS will only have access to 4GB anyway - the OS won't assign it any more than that!

Addressing the last byte of memory, I doubt, is any more strenuous than addressing the 1st byte: e.g. each address in 32-bit space is held as a 32-bit integer (mapped to a 36-bit address space - see "Physical Address Extension" (PAE)), no matter if the number is simply 1 or if it's 4GB.

The following talks about performance between 32-bit & 64-bit apps running on a 64-bit OS:

(2) http://www.pcstats.com/articleview.cfm?articleID=1665     AMD Athlon64 - 64-bit vs. 32-bit Head On Comparison - PCStats.com
--Quote
Memory addresses are run through the processor just like any other value, meaning they are stored in the registers. The largest integer number a 32-bit register can hold is around -2.1 to +2.1 billion. This translates to a maximum of 4GB of physical memory.

Various workarounds have been invented for the server market to transcend this limitation, but all sacrifice performance. 64-bit registers can effectively address up to 16 terabytes of physical memory ...

...Compatibility mode ... The beta version of 64-bit XP supports 32-bit executables through the use of this mode. When running in this mode, each 32-bit application is still limited to 4GB of memory, but it can have all of that 4GB to itself with no overhead for the operating system (assuming that there is more than 4GB of memory installed).

...As the results from this benchmark show, running the same program in 64-bit mode is not necessarily going to net you a performance increase if the program itself does not take advantage of the benefits that 64-bit operations or the Athlon 64's extra registers offer. Indeed, the additional complexities of 64-bit memory operations may even slow things down ... While not big enough to be an issue, it proves what we were already seeing from the previous benchmark results: 32-bit code runs slightly slower in a 64-bit environment than it does in its native 32-bit habitat.

...The boost in 64-bit mode ... When dealing with numbers too large to store in its registers, the processor must split them up and store them in individual cache or system memory locations. This takes precious time. Obviously, doubling the space available in the registers and increasing the number of registers available will result in less data needing to be shuffled off to the cache or system memory for storage, increasing performance.
--End Quote
So, amount of RAM aside, performance is not an issue in having a large swap file (other than the disk space that's needlessly tied down, and the performance-related issues for other data access caused by such a tie-down).

Instead of "better have as much swap as possible" I say "better have as much swap as may be NEEDED." Why use more space for the swap than Windows will actually allocate? It's space that could be used for other stuff... So try to find the largest size that Windows will need (by starting small & increasing it if Windows allocates more, or maybe starting big & monitoring how much is actually allocated).

Want more info? Google for:   "large swap file"  performance
(including those quotes)
It brings up some interesting info such as http://www.computing.net/windowsme/wwwboard/forum/19090.html
>>Hello scrathcyboy,
>>Too much swap file space is worse than too little; it causes the system to lose data and thrash. Even MS says this.
>>Please find for me the link where MS says that and you've got a lot of points!

The link above (...?kbid=555223) counters his comment. Considering what MS says there, I doubt there's an MS article saying that too much would be a problem. The best I think scrathcyboy can come up with is what MS says in the same link, that having too much is pointless, i.e. that "having a really large pagefile may serve no useful purpose."
Yes, 64-bit; there are already a lot of games and apps. Maya, I forgot.
2^32 addresses = 4G (all 32 address lines used for addressing)
2^44 addresses = 16T (only 44 of the 64 lines used for addressing; the rest are for instructions too)
Hence a 64-bit processor is more powerful than we can see now.
In addition, the Windows code is just translated, not yet optimized for 64-bit.

Relevant benchmarks at http://www.pcstats.com/articleview.cfm?articleID=1665
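The powers of two mentioned above check out in binary units (a sketch; the "lines of data" in the post are really address bits):

```python
# 2^32 bytes and 2^44 bytes expressed in binary units.

GiB = 2 ** 30
TiB = 2 ** 40

print(2 ** 32 // GiB)  # -> 4   (a 32-bit address space is 4 GiB)
print(2 ** 44 // TiB)  # -> 16  (44 address bits give 16 TiB)
```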
Why use more space for the swap than Windows will actually allocate?
>>>because you never know what or how many apps you'll run, or what type of editing you'll do.
To monitor the swap, I don't think so :)
Any more info, the same :)
Thanks to all.