IT researcher - 2 years ago

Question

Run or set a process in low priority

I am trying to run a process at low priority, but there is no "Low" option in ProcessPriorityClass. In Task Manager, however, I can manually set a process's priority to Low. How can this be done? Below is the code I used to set a process to below-normal priority.

Dim s As New Process()
s.StartInfo.FileName = "D:\myapp.exe"
s.Start() ' the priority class can only be set once the process has started
s.PriorityClass = ProcessPriorityClass.BelowNormal

Answer

There is no "Low" value, though there is ProcessPriorityClass.Idle, which for some reason the default Task Manager calls "Low". Most other task managers, such as Process Hacker or Process Explorer, call it "Idle". (If you're playing about with processes, you should probably get a decent Task Manager replacement like Process Hacker, rather than the built-in app aimed at less technical users.)

Setting a process to ProcessPriorityClass.Idle works fine for me.

A note of caution, though: it's generally a bad idea to change the priority class of a process, especially to lower it to Idle, and especially from outside the process (at least from inside the process you can raise the priority for certain tasks, then set it back again).

Setting such a low priority can cause nasty deadlocks if the process obtains any resource in a non-sharable manner. The process won't run while any higher-priority process needs a time slice (especially brutal on machines with few cores, where fewer simultaneous time slices are available), so if a higher-priority process also needs that resource, it will wait forever to get it, because the lower-priority process never gets a chance to run and release it.

Eventually, Windows (though not some of the earlier versions) will fix that by temporarily boosting all the threads in the lower priority process that have been waiting to run to be higher than ThreadPriority.Highest threads in ProcessPriorityClass.High processes, and then gradually let them fall down to a lower priority, at which point the problem happens again.

This is probably the opposite to what you want to have happen.

And because it's especially brutal on machines with few cores, if your development rig is beefy you can have the situation where everything works fine on your machine, and then users with less beefy machines find that everything grinds to a halt for them.

By default, only some interrupts and the "System Idle Process" (which is a special case) run at Idle, and there's a good reason for that.

Still, if you are sure you know what you are doing (or indeed, if you're experimenting), then s.PriorityClass = ProcessPriorityClass.Idle is what you want.

Edit: A bit more about priorities:

A given thread has a priority, relative to other threads in the process, and a given process has a priority relative to other processes on the system.

The priority of a thread relative to all other threads on the system, depends on both of these, as per the following table:

       Thread: | Idle | Lowest | Below  | Normal | Above  | Highest | Time-Critical
               |      |        | Normal |        | Normal |         | 
Idle Process   |  1   |   2    |   3    |   4    |   5    |    6    |     15
Below-Normal   |  1   |   4    |   5    |   6    |   7    |    8    |     15
Normal Process |  1   |   6    |   7    |   8    |   9    |   10    |     15
Above-Normal   |  1   |   8    |   9    |  10    |  11    |   12    |     15
High Process   |  1   |  11    |  12    |  13    |  14    |   15    |     15
Realtime       | 16   |  22    |  23    |  24    |  25    |   26    |     31
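The table can be captured as a simple lookup. The following Python sketch (a hand-built mapping transcribed from the table above, not anything queried from Windows) lets you check a combination quickly:

```python
# Base (system-wide) priority of a thread, as a function of its process's
# priority class and its own relative thread priority. Values are transcribed
# from the table above; Time-Critical saturates at 15 (31 for Realtime).
BASE_PRIORITY = {
    # class:        (Idle, Lowest, BelowNormal, Normal, AboveNormal, Highest, TimeCritical)
    "Idle":         (1,  2,  3,  4,  5,  6,  15),
    "BelowNormal":  (1,  4,  5,  6,  7,  8,  15),
    "Normal":       (1,  6,  7,  8,  9,  10, 15),
    "AboveNormal":  (1,  8,  9,  10, 11, 12, 15),
    "High":         (1,  11, 12, 13, 14, 15, 15),
    "Realtime":     (16, 22, 23, 24, 25, 26, 31),
}

THREAD_LEVELS = ("Idle", "Lowest", "BelowNormal", "Normal",
                 "AboveNormal", "Highest", "TimeCritical")

def base_priority(process_class: str, thread_priority: str) -> int:
    """System-wide base priority for a thread (higher runs first)."""
    return BASE_PRIORITY[process_class][THREAD_LEVELS.index(thread_priority)]

print(base_priority("Normal", "Normal"))  # 8
print(base_priority("Idle", "Highest"))   # 6
```

Note how every non-Realtime class bottoms out at 1 for an Idle thread and tops out at 15 for a Time-Critical one, so the Realtime band (16-31) is never overlapped by ordinary processes.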

Now, any given computer will have X cores, where common values today are 1, 2, 4, 8 or 16. There can only be 1 thread per core running at a time.

If there are more threads that want to run than there are cores, then the scheduling happens as follows:

  1. Start with the highest priority threads that exist. Share out cores between them.
  2. If there were more threads than cores, round-robin the cores between them, so all threads of that priority get an equal share.
  3. If there are cores left over, do the same with the next highest priority threads, and so on.

So, if there were 4 cores, and we had 3 threads priority 8 (e.g. normal threads in normal processes), 2 priority 10 (e.g. above-normal threads in normal processes) and 3 priority 6 (highest-priority thread in an idle process) and they were all ready to run then:

  1. The 2 priority 10 threads would always get a timeslice on a core.
  2. The remaining 2 cores would be time-shared between the 3 normal/normal threads.
  3. The 3 priority 6 threads will not get a chance to run.
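As a sanity check on that arithmetic, here is a small Python simulation of the strict-priority round-robin described above (a toy model of the rules as stated, not the real Windows scheduler): at each time slice it hands cores to the highest-priority ready threads first, round-robining fairly within a priority level.

```python
from collections import defaultdict
from itertools import cycle

def simulate(threads, cores, ticks):
    """threads: dict of name -> priority. Returns the number of time
    slices each thread got. At each tick, cores go to the highest-priority
    ready threads; within one priority level, a rotor shares cores fairly."""
    by_prio = defaultdict(list)
    for name, prio in threads.items():
        by_prio[prio].append(name)
    rotors = {p: cycle(names) for p, names in by_prio.items()}
    ran = defaultdict(int)
    for _ in range(ticks):
        free = cores
        for prio in sorted(by_prio, reverse=True):  # highest priority first
            take = min(free, len(by_prio[prio]))
            for _ in range(take):
                ran[next(rotors[prio])] += 1
            free -= take
            if free == 0:
                break  # no cores left for lower-priority threads
    return dict(ran)

# 4 cores; 3 threads at priority 8, 2 at 10, 3 at 6, over 30 time slices:
counts = simulate(
    {"n1": 8, "n2": 8, "n3": 8, "a1": 10, "a2": 10, "i1": 6, "i2": 6, "i3": 6},
    cores=4, ticks=30)
print(counts)  # a1/a2 run every slice; n1..n3 share 2 cores; i1..i3 never run
```

Running it reproduces the outcome above: the two priority-10 threads get all 30 slices each, the three priority-8 threads share the remaining two cores (20 slices each), and the priority-6 threads never appear in the results at all.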

This is bad for those 3 threads, but it should be good for the system as a whole, because those 3 threads should only be for things that are of such low importance that we're totally okay with them not running.

These numbers are also boosted in the following ways:

  1. With foreground boost on (normal for desktop, not normal for server) if a normal process owns the foreground window, it is boosted above normal processes that do not.
  2. When a window receives mouse, keyboard or timer input, or a message from another window, its process is given a temporary boost.
  3. If a thread is waiting on something, and that thing has become ready, it gets a boost.
  4. If a thread has been ready for a long time without running, it may randomly get a large boost.

The first three of these should be reasonably common sense, in that it's obvious why one might want those processes or threads to be boosted.

The fourth introduces a mild problem to fix a severe one: if thread A needs a resource that low-priority thread B holds, and there is only one core available, then they will deadlock, because thread B won't get a time slice, so it won't release the resource, so thread A will keep trying to get it, so it won't finish, so thread B won't get a time slice...

So the OS boosts thread B to be temporarily super-high priority, and thread A doesn't get a look in for a bit, and between that and the initial slow-down from the deadlock the system as a whole is much slower than it should be (one of the practical advantages of having a multi-core system isn't so much that it lets lots of busy processes work together better, but that it makes this scenario much less likely to happen).
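That inversion-then-boost dance can be sketched with the same kind of toy model (again, a simulation of the behaviour described above, not real scheduler code): one core, a low-priority thread holding a lock a high-priority thread needs, and nothing progressing until the starved thread gets its temporary boost.

```python
def run_one_core(steps, boost_after):
    """Toy single-core model of priority inversion. 'low' (priority 4) holds
    a lock that 'high' (priority 13) needs. Only the highest-priority ready
    thread runs; after `boost_after` starved slices, 'low' is temporarily
    boosted above 'high' (the anti-starvation boost described above)."""
    prio = {"low": 4, "high": 13}
    starved = 0
    lock_held_by_low = True
    log = []
    for _ in range(steps):
        runner = max(prio, key=prio.get)  # one core: highest priority wins
        log.append(runner)
        if runner == "high":
            if not lock_held_by_low:
                log.append("high acquired the lock")
                break
            starved += 1              # 'low' didn't run; it is being starved
            if starved >= boost_after:
                prio["low"] = 15      # temporary anti-starvation boost
        else:
            lock_held_by_low = False  # 'low' finally runs, releases the lock
            prio["low"] = 4           # the boost decays back down
    return log

print(run_one_core(steps=10, boost_after=3))
```

The log shows "high" burning slices to no effect until the boost lets "low" run once and release the lock; with a second core, "low" would have run immediately and the whole episode would never happen, which is the point made above about multi-core machines.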

The consequence of all of this is that 99% of the time, the best priority for a thread is normal, and the best priority for a process is normal.

Idle/Low should be reserved for processes of such insignificance that we really don't care if they never get a chance to do anything. Screen-savers are a good example, because if a screen-saver never gets a chance to run, it probably shouldn't; nobody spends a lot of money on a state-of-the-art rig just to watch flying toasters (though during the 1990s one might have wondered).

Examples of non-normal priority done well:

The problem with "don't do that" advice is that it's always incomplete. There are plenty of things where "don't do that" is generally the best advice (messing with the GC would be another example), but one can't really grok why "don't do that" is good advice without understanding the exceptions; indeed, it's not really good advice unless you cover the exceptions, it's just dogma. So it's worth considering good cases of high and low priority, and what it is about them that removes the problems involved.

Realtime processing of live media.

If you are processing live music or video and need professional output (you are actually recording or transmitting it for others, rather than just watching it on your own machine), then it makes sense to do this in threads running at the very highest priorities, perhaps with the process set to Realtime. This penalises the entire system as a whole, but right then the most important thing on that machine is the process doing the media processing; given the choice between the media stream suffering a glitch and something causing serious problems to the system, you'd rather have a serious problem happen to the system and deal with it later. All normal concepts of "playing nicely with other processes" are off the table, and indeed on *nix systems this sort of work is often done with a real-time kernel optimised for predictable timing at the cost of worse overall concurrent performance (though all the other details above differ too; I've only described the Windows way here).

Finaliser thread in .NET

The finaliser thread in .NET runs at high priority. Most of the time this thread has nothing to do, so it is not active. When it does have something to do (the finalisation queue is not empty), then it's vitally important that it does it no matter how many other threads are running. Some important notes:

  1. Any well-written finaliser should be fast to execute, so the total time spent processing all finalisers should be short. (Finalisation may even be abandoned in some circumstances if it takes too long.)
  2. Any well-written finaliser should not interfere with other threads, and hence processing all finalisers should not interfere with other threads.

These two facts are important in minimising the downside of having a higher-than-normal priority thread, since they mean it doesn't get into priority inversion problems, and for most of the time isn't running and hence isn't competing with other threads.

The System Idle Process

Having a special process running at idle serves two goals. The first is that since there is always a process running at idle that has a thread per core, there is no need for the scheduler to have any code to deal with the case of there being no thread to run, because there is always such a thread, and the logic described above will run one of those threads if there are no others.

The second is that these threads can call into whatever power-saving or underclocking abilities the core has, because if they are running for any length of time, then by definition the CPU is not needed and should be put into a low-power state.

Importantly, this process never obtains any non-shareable resource another process might want, so it can never cause priority inversion problems.

Here we can see that the very purpose of the process means that we pretty much never want it to run, if there is any other thread whatsoever that has something to do.

(It also gives us a good measure for when something should be at low priority; if it shouldn't be competing with a thread that exists just to put the CPU into a low-power state, then it shouldn't be low priority).
