Intersection asked:
Copy many files without maxing out Kernel Paged Memory Pool?

Hello!

Question:
As in http://support.microsoft.com/kb/312362/ I am copying many files and running up my Paged Pool memory:
"there is a large probability that there are a very large number of files that are open on the server"
I start getting an "Insufficient system resources exist to complete the requested service." error message, and then the computer crashes - sometimes destroying hard drives. :(
Even if I close the program before a crash, the Kernel Paged Memory value stays high.
(Note: I monitor this in Task Manager > Performance > Kernel Memory > Paged, and in VB.NET with System.Management.ManagementObjectSearcher("Select * From Win32_PerfRawData_PerfOS_Memory"), requesting "PoolPagedBytes".)
If I start the program again without a reboot, I get the errors immediately - I guess because the paged memory is still maxed out.

How, in code, can I close the "open files" that the memory system has open? That is, how can I reduce the level of the Paged Kernel Memory?
Is this possible?
Or, how can I copy many files without running up the Paged Kernel Mem in the first place?

I've tried both of these calls to perform the copy, but the paged memory still climbs to 148,916 KB:
System.IO.File.Copy(strFileSource, strFileDest, bOverwriteFile)
My.Computer.FileSystem.CopyFile(strFileSource, strFileDest, bOverwriteFile)

I have many files in some of the directories (e.g., 19,400) - I don't know if this could have an impact at all.
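For reference, a stream-based copy like the sketch below is on our list to try - my assumption is that System.IO.FileOptions.SequentialScan / .WriteThrough pass the Win32 FILE_FLAG_SEQUENTIAL_SCAN / FILE_FLAG_WRITE_THROUGH hints and might keep the cache manager from holding so many pages (untested):

Private Sub CopyFileSequential(ByVal strFileSource As String, ByVal strFileDest As String)
    ' Read sequentially with a cache hint; write through the cache.
    Using fsIn As New System.IO.FileStream(strFileSource, System.IO.FileMode.Open, _
            System.IO.FileAccess.Read, System.IO.FileShare.Read, 65536, _
            System.IO.FileOptions.SequentialScan)
        Using fsOut As New System.IO.FileStream(strFileDest, System.IO.FileMode.Create, _
                System.IO.FileAccess.Write, System.IO.FileShare.None, 65536, _
                System.IO.FileOptions.WriteThrough)
            Dim buffer(65535) As Byte                  ' 64 KB chunks
            Dim bytesRead As Integer = fsIn.Read(buffer, 0, buffer.Length)
            Do While bytesRead > 0
                fsOut.Write(buffer, 0, bytesRead)
                bytesRead = fsIn.Read(buffer, 0, buffer.Length)
            Loop
        End Using
    End Using
End Sub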

Thank you for any guidance, advice, or assistance!
-Topher


Background:
I need to copy many (100,000s to millions) .jpg files from one drive to another. Off-the-shelf backup programs have failed (I assume because they try to create a massive list before starting), and other special requirements led us to write custom software.

The software works as long as the number of files is not too great, but on operations lasting several hours the computer sometimes crashes - sometimes hosing the source and/or destination disk. I've started debugging and have been getting the "Insufficient system resources exist to complete the requested service." error message.

This seems to match up with http://support.microsoft.com/kb/312362/ (or http://support.microsoft.com/kb/304101).
That article recommends setting the registry values PoolUsageMaximum=60 and PagedPoolSize=-1.

But I would like to fix the problem in code, so that it is easy for customers to install - and so I'm not modifying users' computers.
Bob Learned:
How are you copying files now?
Intersection (Asker):
Copied from original question:

"Ive tried both of these calls to perform the copy but the paged memory is still going up to 148916(KB).
System.IO.File.Copy(strFileSource, strFileDest, bOverwriteFile)
My.Computer.FileSystem.CopyFile(strFileSource, strFileDest, bOverwriteFile)"
Just another comment - the memory usage by the application itself is just fine. I monitor it in Task Manager and it hovers around 10 MB over the hours of copying.
The Learned one,

Is that what you were asking about - or did you mean some other aspect of the copy?

-Christopher
Nope, that would be it. Does it matter how long the process takes? What happens if you Sleep the thread for a little while, to let things cool off? It might be helpful to see how the memory usage is growing - like using Performance Monitor.
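Something along these lines, maybe (colFiles and BuildDestPath are just stand-ins for however you enumerate and map the files):

' Rough throttle sketch - no guarantee it releases kernel memory; it just
' gives the system a breather every so often.
Dim intCopied As Integer = 0
For Each strFileSource As String In colFiles          ' stand-in collection
    System.IO.File.Copy(strFileSource, BuildDestPath(strFileSource), True)
    intCopied += 1
    If intCopied Mod 500 = 0 Then
        System.Threading.Thread.Sleep(2000)           ' pause 2 s every 500 files
    End If
Next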
re: sleep...

Sorry, forgot to put that in the question. Yeah, we've tried lots of Sleeps and Application.DoEvents - it helps for smaller image collections, but it doesn't prevent system crashes with larger image sets and it doesn't fix the problems with the kernel memory.

It seems like something is going on so that the kernel memory, or some related aspect of system memory, doesn't get released while the program is running, and adding Sleeps or DoEvents doesn't seem to impact this.
How about having a controlling application that breaks the file sets into chunks, and then passes off the work to separate processes that exit once they are done?
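Something like this, roughly - "CopyWorker.exe" and GetFileCount are hypothetical stand-ins for your own worker program and file enumeration:

' Controller sketch: hand each slice of the file list to a short-lived
' worker process, on the theory that whatever kernel resources it pins
' are released when it exits.
Const CHUNK As Integer = 5000
Dim intTotal As Integer = GetFileCount()   ' hypothetical
Dim intStart As Integer = 0
Do While intStart < intTotal
    Dim p As System.Diagnostics.Process = System.Diagnostics.Process.Start( _
        "CopyWorker.exe", String.Format("{0} {1}", intStart, CHUNK))
    p.WaitForExit()
    intStart += CHUNK
Loop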
Yeah, that is a good idea, and it might be what we have to implement in the end. Also, as I might have mentioned before, having processes finish and return doesn't seem to avoid the problem. We've thought of writing two completely separate programs that hand off work to each other, but in our tests, after we get an initial kernel error, even closing and restarting the program doesn't seem to fix the problem - only a system restart does. That's why this is such a weird bug.

In the end, since our system is completely dependent on being able to do robust file copies on huge image sets, I think we really need to at least understand why doing it the current way leads to system instability before we can look for workarounds.
What type of operating system is this running on?
XP Pro
What are the physical specs for the machine:  RAM, hard drive, etc.
AMD Sempron 1.8 GHz, 448 MB RAM
Windows XP SP2
60 GB HD copying to a 160 GB HD.

I'm no longer certain it's "Paged Kernel Memory".
The only thing for sure is this error:
"Insufficient system resources exist to complete the requested service."
That does seem to point to NonPaged, Paged, or PTE kernel memory.

http://forum.sysinternals.com/forum_posts.asp?TID=14554&KW=paged+limit+symbols&PID=69326#69326

I'm attempting to use Process Explorer to determine the limits of paged and non-paged memory on the system.
Kernel memory is Paged + Non-Paged, and is finite.
ASKER CERTIFIED SOLUTION (posted by Intersection; full text available to Experts Exchange members only)
Cool--learned something new today!!

http://blogs.technet.com/clint_huffman/archive/2008/04/07/free-system-page-table-entries-ptes.aspx

Possible Cause: Use of the /3GB switch
The system is vanilla Windows XP - no /3GB switch, /USERVA, or /PAE.

I'm not really finding anything on the web about how to avoid creating too many System PTEs in the first place - or how to reduce them. Again, I don't want to change the configuration on customers' PCs; I just want the copy program to work! :)

If there is no way to prevent or release the PTEs, the best option I can see is to monitor them in code:

' Requires a project reference to System.Management.dll
Private Function GetFreeSystemPTEs() As UInteger
    Dim lngReturn As UInteger = 0
    Dim o As New System.Management.ManagementObjectSearcher("Select * From Win32_PerfRawData_PerfOS_Memory")
    For Each o2 As System.Management.ManagementBaseObject In o.Get()
        lngReturn += CUInt(o2("FreeSystemPageTableEntries").ToString())
    Next
    Return lngReturn
End Function

and stop the program when they get too low, instructing the user to reboot the machine. Yuck.
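i.e., something like this inside the copy loop (the threshold is illustrative - I haven't pinned down the real limit yet):

' Illustrative check - the 5000 threshold is a guess at this point.
If GetFreeSystemPTEs() < 5000 Then
    System.Windows.Forms.MessageBox.Show("Free system page table entries are nearly exhausted. " & _
        "Please reboot the machine and run the copy again.")
    System.Windows.Forms.Application.Exit()
End If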

-Christopher
SOLUTION (full text available to Experts Exchange members only)
Thank you, Learned One. I'll try that.
We got an email recently asking for an update on this question, so I thought I'd post a follow-up...

We finally solved the problem in our copy program by monitoring the System PTEs and, if they drop below a threshold (~4,500), instructing the user to reboot the machine and start again. We stored the progress to a text file so that the program could pick up where it left off.
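For anyone implementing the same thing, the checkpointing was roughly like the sketch below (the file name and format are illustrative, not our exact code):

' Illustrative resume checkpoint: record the last file copied, read it back
' on startup, and skip everything up to and including it.
Private Sub SaveProgress(ByVal strLastFileCopied As String)
    System.IO.File.WriteAllText("copyprogress.txt", strLastFileCopied)
End Sub

Private Function LoadProgress() As String
    If System.IO.File.Exists("copyprogress.txt") Then
        Return System.IO.File.ReadAllText("copyprogress.txt")
    End If
    Return String.Empty  ' no checkpoint yet - start from the beginning
End Function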

Having said that, it turns out that upgrading to XP Service Pack 3 fixes the problem, so it must have been an undocumented bug in Windows somewhere.

A few other things we implemented to improve system stability...

(1) Turn off indexing on your hard drive - it is much better if you do this BEFORE the drive is full, since if you change it after the fact, it has to modify every file on the drive individually.
  - Note that you can right-click an individual drive and change its indexing, but you can also turn it off for the whole system, which is a good idea if you have a lot of disks with many images.
(2) Turn off write-caching... This is important: if the massive copy you are doing fails and crashes the system for some reason, you can end up with a trashed MFT on the drive, which is bad news (either the drive is totally dead, or you might be lucky and be able to recover your files for $1,000-2,000 through a professional drive-recovery service - although all our files managed to crash their software too, until we moved to Service Pack 3).
(3) Increase the size of the Master File Table (MFT). This is something you set in the registry; do a quick Google search for more info. Basically, the system pre-allocates some space on the drive for keeping a list of where all the files are, and if you have too many files the MFT gets too big. I believe changing the MFT size in the registry only applies to future drives that the system formats, not existing drives.

(4) Don't fill your drives... Since the MFT space is listed on the drive as available space, as the drive gets close to capacity the MFT starts to get fragmented, which is of course another big problem if you are trying to manage millions of files. So, to be really safe, you'll want to figure out how much space the MFT takes up and not fill the drive past that (I believe you don't want to go over about 75% full).
 
(5) Disable the "Last Access Time" record for files (details below).

Note that these are all the things we identified as potential issues; we didn't do any super-rigorous testing other than to change all of them based on our research. I know for sure that if you have write-caching on and you crash the system during a big copy, it can destroy the drives - we had that happen on probably 4 drives before we identified the issue. Most external drives had write-caching off by default, but you need to check anyway.
 
Also, as I mentioned, XP Service Pack 3 seemed to fix the copy problems, so I'm guessing it was a bug deep in Windows, although no one seems to know about it. Even the people at OnTrack that we paid to recover one of our dead drives hadn't run into the problem, and, as I said, their software crashed too. That same week, SP3 came out; we realized that it fixed the problem, and then the OnTrack software worked fine.
 
---
Some more info from TweakXP on how to disable last-access-time updates and make the MFT larger:
http://www.tweakxp.com/article37043.aspx

"Increase XP NTFS performance" - posted 5/5/2004 by a TweakXP member
Last access time stamps
XP automatically updates the date and time stamp with information about the last time you accessed a file. Not only does it mark the file, but it also updates the directory the file is located in, as well as any directories above it. If you have a large hard drive with many subdirectories, this updating can slow down your system.

To disable the updating, start the Registry Editor by selecting Run from the Start menu, typing regedit in the Open text box, and clicking OK. When the Registry Editor window opens, navigate through the left pane until you get to

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Filesystem

In the right pane, look for the value named NtfsDisableLastAccessUpdate. If the value exists, it's probably set to 0. To change the value, double-click it. You'll then see the Edit DWORD Value screen. Enter 1 in the Value Data field and click OK.

If the value doesn't exist, you'll need to add it. Select New | DWORD Value from the Edit menu. The new value will appear in the right pane, prompting you for a value name. Type NtfsDisableLastAccessUpdate and press [Enter]. Double-click the new value. You'll then see the Edit DWORD Value screen. Enter 1 in the Value Data field and click OK. When you're done, close Regedit. Your registry changes will be saved automatically. Reboot your workstation.
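(If you would rather make the change from code than through regedit, here is a rough VB.NET sketch - it needs administrator rights, and the setting still requires a reboot to take effect:)

' Sketch: write a DWORD value under the NTFS filesystem key.
Private Sub SetFilesystemDword(ByVal strValueName As String, ByVal intValue As Integer)
    Using key As Microsoft.Win32.RegistryKey = Microsoft.Win32.Registry.LocalMachine.OpenSubKey( _
            "SYSTEM\CurrentControlSet\Control\FileSystem", True)
        key.SetValue(strValueName, intValue, Microsoft.Win32.RegistryValueKind.DWord)
    End Using
End Sub

' Usage:
' SetFilesystemDword("NtfsDisableLastAccessUpdate", 1)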

The Master File Table
The Master File Table (MFT) keeps track of files on disks. This file logs all the files that are stored on a given disk, including an entry for the MFT itself. It works like an index of everything on the hard disk in much the same way that a phone book stores phone numbers.

NTFS keeps a section of each disk just for the MFT. This allows the MFT to grow as the contents of a disk change without becoming overly fragmented, because Windows NT didn't provide for defragmentation of the MFT. The Disk Defragmenter in Windows 2000 and Windows XP will defragment the MFT only if there's enough space on the hard drive to locate all of the MFT segments together in one location.

As the MFT file grows, it can become fragmented. Fortunately, you can control the initial size of the MFT by making a change in the registry. Making the MFT file larger prevents it from fragmenting, but does so at the cost of storage space: every kilobyte that NTFS uses for the MFT is a kilobyte less for data storage.
 
To control the initial size of the MFT zone, start the Registry Editor by selecting Run from the Start menu, typing regedit in the Open text box, and clicking OK. When the Registry Editor window opens, navigate through the left pane until you get to

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Filesystem
In the right pane, look for the value named NtfsMftZoneReservation. If the value doesn't exist, you'll need to add it. Select New | DWORD Value from the Edit menu. The new value will appear in the right pane, prompting you for a value name. Type NtfsMftZoneReservation and press [Enter]. Double-click the new value. You'll then see the Edit DWORD Value screen.

The default value for this key is 1, which is good for a drive that will contain relatively few large files. Other options include:

2 - Medium file allocation
3 - Larger file allocation
4 - Maximum file allocation

To change the value, double-click it. When the Edit DWORD Value screen appears, enter the value you want and click OK. Unfortunately, Microsoft doesn't give any clear guidelines as to what distinguishes the Medium, Larger, and Maximum levels. Suffice it to say, if you plan to store lots of files on your workstation, you may want to consider a value of 3 or 4 instead of the default value of 1.
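The same SetFilesystemDword helper sketched in the last-access section above would work for this value too - illustrative only:

' Per the note earlier, this may only affect drives the system formats
' afterwards, and it needs a reboot.
SetFilesystemDword("NtfsMftZoneReservation", 3)  ' 3 = larger file allocation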