Wondering which PCIe slots to use on dual Intel E5-based servers, and whether any operating system knows how to load-balance I/O so that PCIe access stays local to the CPU that owns the slot.
It's well known that modern OSs are NUMA-aware and will try to allocate RAM and schedule threads so that memory access stays local rather than crossing the QPI (or HyperTransport on AMD) links, since modern CPUs have integrated memory controllers.
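As a quick sanity check of the NUMA layout itself, something like this minimal Python sketch (assuming Linux and the usual /sys/devices/system/node sysfs layout; paths may differ elsewhere) shows which cores and how much memory hang off each socket:

```python
#!/usr/bin/env python3
"""Minimal sketch: list NUMA nodes with their CPUs and memory (Linux sysfs assumed)."""
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    name = os.path.basename(node)
    # CPUs belonging to this node, e.g. "0-7,16-23"
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    # Pull the MemTotal line out of the per-node meminfo
    with open(os.path.join(node, "meminfo")) as f:
        total_kb = next(line.split()[-2] for line in f if "MemTotal" in line)
    print(f"{name}: CPUs {cpus}, MemTotal {total_kb} kB")
```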
What about PCIe cards, though, now that the PCIe controllers are built into the CPUs rather than into the chipset? Say, for example, I have an HP DL380 Gen8 or Dell R720 with two CPUs, and I put dual-port NICs in slots 1 and 4 and team them together for 4×1Gb. Would the OS be clever enough to know that slot 1 hangs off CPU1 and slot 4 off CPU2 and direct outbound packets to the local NIC, or would half of the traffic cross the QPI link more or less at random? Is there any benefit to spreading NICs and FC HBAs across both CPUs' built-in PCIe controllers?
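For what it's worth, here's a rough Python sketch of how I'd at least inspect the locality from inside the OS on Linux; it just reads the standard sysfs attributes (numa_node and local_cpulist under each NIC's PCI device), so it's a diagnostic under those assumptions, not an answer to whether the teaming driver actually uses the information:

```python
#!/usr/bin/env python3
"""Sketch: report which NUMA node each physical NIC is attached to (Linux sysfs assumed)."""
import glob
import os

def read(path, default="?"):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return default

for dev in sorted(glob.glob("/sys/class/net/*")):
    iface = os.path.basename(dev)
    pci_dev = os.path.join(dev, "device")  # absent for virtual interfaces (lo, bonds, VLANs)
    if not os.path.isdir(pci_dev):
        continue
    node = read(os.path.join(pci_dev, "numa_node"))      # -1 means the kernel doesn't know
    cpus = read(os.path.join(pci_dev, "local_cpulist"))  # CPUs local to that PCIe root complex
    print(f"{iface}: NUMA node {node}, local CPUs {cpus}")
```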
I'm going to sleep on it and see if anyone knows for sure.
Not sure if there is a heat/turbo gain from spreading the cards over both CPUs either. If the processors and airflow were identical, the extra heat from the PCIe controller portion of the die might slow that CPU down slightly, but that's getting into the 1-2% speed improvements you can get by picking the fastest example of a particular chip part number.