How important are page faults nowadays?
It used to be that page faults were the dominant factor determining performance (and were completely ignored in all those analyses of algorithms). But now computers typically have more than 2 GB of RAM, so the entire program and its data can fit in RAM.
Does this mean I needn't worry about where data is placed in memory like I used to?
For example, here's a test I did back in the mid-1980s:
Below are two versions of a FORTRAN nested DO loop that sets every element of an array A to the value X. Both programs achieve exactly the same result; the only difference between them is the order in which the array elements are referenced.
The first version loops with the first index I changing most often, while the second loops with the last index K changing most often.
First version (I changing most often):

          DIMENSION A(100, 100, 100)
          X = 1.0
          DO 10, K=1, 100
          DO 10, J=1, 100
          DO 10, I=1, 100
    10    A(I,J,K) = X

Second version (K changing most often):

          DIMENSION A(100, 100, 100)
          X = 1.0
          DO 10, I=1, 100
          DO 10, J=1, 100
          DO 10, K=1, 100
    10    A(I,J,K) = X
While the two versions appear nearly identical, a test run revealed a startling difference: the first took 45 seconds to run using 30 seconds of CPU time, while the second took 1 1/2 hours to run using 17 minutes of CPU time! [This was in the 1980s.] FORTRAN stores arrays in column-major order, so the first version walks through memory sequentially, while the second strides across pages on nearly every reference and spent most of its time paging.
Ever since then I've been careful about how data is placed in memory and the order in which it's accessed.
Now with C# I don't have as much control over where data is placed in memory, and I'm wondering whether I still need to be concerned about that.
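To make the question concrete, here's the kind of quick test I have in mind in C# (a minimal sketch; the 4000x4000 size and the Stopwatch timing are just for illustration). C# rectangular arrays are laid out row-major, with the last index varying fastest in memory, so the two loops below differ only in traversal order, much like the FORTRAN pair above:

    using System;
    using System.Diagnostics;

    class TraversalOrderTest
    {
        static void Main()
        {
            const int N = 4000;              // ~128 MB of doubles
            var a = new double[N, N];        // stored row-major: a[i, j] and
                                             // a[i, j+1] are adjacent in memory
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < N; i++)      // last index fastest: sequential,
                for (int j = 0; j < N; j++)  // cache- and page-friendly access
                    a[i, j] = 1.0;
            Console.WriteLine($"row-major order:    {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            for (int j = 0; j < N; j++)      // first index fastest: each write
                for (int i = 0; i < N; i++)  // lands ~32 KB from the previous one
                    a[i, j] = 1.0;
            Console.WriteLine($"column-major order: {sw.ElapsedMilliseconds} ms");
        }
    }

I'd expect the second loop to be slower on a modern machine because of cache misses rather than page faults, but that's exactly the kind of effect I'm asking about: is a gap between these two timings still something I should plan around?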