What CPU affinity should I set for certain applications to get the best performance?

hi guys

I've only recently learned that it's possible to restrict certain services to one core, two cores, or all cores on a server or PC.

The question is: to maximise the performance of a system and reduce the number of crashes or freezes, would it be right to say that restricting certain services to a limited number of cores will have a positive impact? I know that some applications require ALL cores. But how does one know which ones to restrict to only a few cores and which ones to leave with access to the maximum number?

Thanks for helping
Yashy
YashyAsked:
Qlemo (Batchelor, Developer and EE Topic Advisor) commented:
If you do not know exactly what you are doing, leave it be. You should not assign cores to applications in general, because the OS is much better suited to decide.

Only if you want to make sure some processes share the same cores, leaving other cores free for other processes, might you be inclined to set process affinity. But those are exceptions, not the common case. For example, if you have certain processes that you know are badly designed and run at high CPU load all the time, you can restrict them to a particular core alongside similar processes.
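As a sketch of what that restriction looks like in practice (my assumption: a Linux box, where Python's `os.sched_setaffinity` wraps the `sched_setaffinity(2)` system call; on Windows you would use Task Manager or `SetProcessAffinityMask` instead):

```python
import os

def pin_to_cores(pid, cores):
    """Restrict a process to the given set of core indices.

    pid=0 means the calling process. Linux-only: os.sched_setaffinity
    wraps the sched_setaffinity(2) system call.
    """
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)  # report the affinity actually in effect

if __name__ == "__main__":
    original = os.sched_getaffinity(0)       # remember the full core set
    print("before:", sorted(original))
    print("after: ", sorted(pin_to_cores(0, {0})))  # confine ourselves to core 0
    os.sched_setaffinity(0, original)        # restore, so nothing else is affected
```

Child processes started after the call inherit the restricted mask, which is how you keep a group of related "badly behaved" processes on the same core.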

With Hyper-Threading on some processors, the OS also knows that using the first (physical) core of a CPU socket and leaving the second (virtual) core idle can improve performance, because it allows automatic overclocking. And modern processors have even more optimisations than that.

So my recommendation is to not mess with processor/core affinity at all. You'll probably just make performance worse.
2
 
McKnife commented:
I agree. Also, please note that crashes cannot be reduced by assigning fewer cores to an application. There is a small risk for any application that it could max out all cores for weeks because it has run into an error, but that will rarely happen. Even if it does, you should still be able to shut it down; a freeze is normally not to be expected, and in my opinion the risk of one would not justify assigning fewer cores. If your applications tend to run out of control and consume all CPU power, you should set up monitoring so that you are notified quickly (Nagios, perfmon and the like offer task execution and/or mail sending in such a case).
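A minimal sketch of that kind of CPU watchdog, assuming Linux (it reads `utime`/`stime` from `/proc/<pid>/stat`; the interval and threshold are illustrative, and the alert action is left as a print where a real setup would send mail or trigger Nagios):

```python
import os
import time

CLK_TCK = os.sysconf("SC_CLK_TCK")  # clock ticks per second

def cpu_seconds(pid):
    """Total user+system CPU time of a process, from /proc/<pid>/stat (Linux)."""
    with open(f"/proc/{pid}/stat") as f:
        # Split after the parenthesised command name; utime and stime are
        # overall fields 14 and 15, i.e. indices 11 and 12 of the remainder.
        fields = f.read().rsplit(")", 1)[1].split()
    return (int(fields[11]) + int(fields[12])) / CLK_TCK

def cpu_fraction(pid, interval=1.0):
    """Fraction of one core's worth of CPU the process used over `interval`."""
    before = cpu_seconds(pid)
    time.sleep(interval)
    return (cpu_seconds(pid) - before) / interval

if __name__ == "__main__":
    # Watch ourselves as a demo; a real watchdog would loop over suspect
    # pids and alert only when the value stays high across several samples.
    frac = cpu_fraction(os.getpid(), 0.5)
    if frac > 0.9:
        print(f"ALERT: pid {os.getpid()} is pegging a core ({frac:.2f})")
    else:
        print(f"cpu use: {frac:.2f}")
```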
1
 
Bill Bach (President) commented:
Don't mess with it!  (There's an echo in here.)

To directly answer your question, though: messing with processor affinity CAN be beneficial if you have substantial knowledge of the underlying application and hardware environment and believe you can configure the system better by hand than the operating system can. For example, in older versions of the OS it was sometimes helpful to pin the NIC drivers to a single core or pair of cores. Further, if you had built an extensive multi-threaded application that used interprocess communication and were trying to obtain the maximum possible performance, you could force specific threads onto specific CPUs so that each had its own compute resource (i.e. CPU core) while still sharing a common code or data cache at a higher level. I played with this for a while on an old ray-tracing project I had written: I forced the parent process onto its own core, then let it spawn child threads that shared the same CPU cache. Because the code was compute-intensive, I was able to eke out a gain of a couple of percentage points. In the end, though, the overall gain wasn't worth the time I spent on it, and as soon as I upgraded the machine it was running on, the configuration had to be scrapped anyway.

In other words, don't mess with it.
0