Life and code.
  • Parallel Processing and the User Experience

    Posted on November 28th, 2005 Brian No comments

    Have you ever pegged your CPU? It could happen any number of ways. You could be encoding some video, or performing a big compilation. It could be a rogue process, sent spinning into an infinite loop by some obscure bug. Maybe you were viewing a PDF, and Acrobat hosed itself and sat in the background eating CPU for no reason.

    It happens to everybody, and most people first notice the problem because the computer suddenly becomes unresponsive. An Alt-Tab might take ten seconds to switch tasks, moving a window around the screen becomes jerky and difficult to control, or your music player starts skipping. If you’re technically savvy, you probably know how to bring up the task manager and change the process’s priority, or perhaps kill it entirely.
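
    On a Unix-like system you can do the same thing from code instead of a task manager. A minimal sketch using only the standard library (these calls are Unix-only; on Windows the task manager route is the practical one):

```python
import os

# Demote the current process (pid 0 means "this process") to a lower
# scheduling priority. On Unix, a higher nice value means a smaller
# share of a contended CPU; unprivileged users can only raise it.
os.setpriority(os.PRIO_PROCESS, 0, 10)
assert os.getpriority(os.PRIO_PROCESS, 0) == 10

# A truly runaway process would instead be killed outright, e.g.:
#   os.kill(runaway_pid, signal.SIGTERM)   # signal module
```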

    Pre-emptive multitasking was supposed to save us from this fate. The idea is that the kernel can interrupt a long-running process and give CPU time to shorter processes that (hopefully) will get done with their work quickly. Using some algorithm and various priority queues, it should provide the perfect balance between long-running CPU-hogs and short-running screen updates. It should be obvious from the experiences of millions of users around the world that scheduling algorithms just aren’t up to the task of ensuring a consistent user experience.
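
    To see why plain time-slicing can hurt responsiveness, here is a toy round-robin model (my own sketch, not any real kernel’s algorithm): a UI task that needs a single tick of CPU lands at the back of a queue of CPU-bound hogs, each of which burns a full quantum before giving up the CPU.

```python
from collections import deque

def ui_wait(n_hogs, quantum=10):
    """Ticks a 1-tick UI task waits under plain round-robin when it
    is queued behind n_hogs CPU-bound tasks."""
    queue = deque([f"hog{i}" for i in range(n_hogs)] + ["ui"])
    ticks = 0
    while True:
        task = queue.popleft()
        if task == "ui":
            return ticks          # the UI task finally gets the CPU
        ticks += quantum          # a hog burns its whole quantum...
        queue.append(task)        # ...and rejoins the back of the line

print(ui_wait(0), ui_wait(1), ui_wait(4))   # 0 10 40
```

    The UI task’s latency grows linearly with the number of hogs ahead of it, which is exactly the jerky-window, skipping-music experience.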

    I don’t have this problem. As I type this, I am compiling several Gentoo packages in a virtual machine, and it has totally pegged one of my so-called CPUs. However, my system remains quite responsive, and I can happily switch from one task to another with no perceived delay. The heavy whirring of my laptop’s fans is the only clue to the increased workload.

    How is this possible? The solution is to have some spare CPU time always available for super-high-priority user-experience tasks. Ordinarily, that would mean another CPU – but there is another option available. Intel’s P4 has a feature called hyper-threading that, put simply, allows one CPU to present itself as two CPUs, although the second virtual CPU really has only a fraction of the processing power of the first. That’s exactly what we want!

    The knowledgeable reader will know that if my hyper-threaded CPU looks just like a second CPU, then there is no possible way on current operating systems to ensure that that extra fraction of processing power will be used only for high-priority user-experience tasks. Of course that is true – but we get some help from the fact that most programs written today can’t take advantage of a second CPU. Thus, because the long-running process pegs only one of the virtual CPUs, the second virtual CPU is still available for whatever else needs doing. The astute reader will also note that if I kick off another CPU-hungry process, I have screwed myself. Fortunately, that rarely happens. The practical result is a system that remains responsive even while its primary computing capacity is completely used.
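
    On Linux you can even make this arrangement explicit instead of accidental: pin a process to one logical CPU, and the scheduler has to leave the other one free for everything else. A minimal sketch using the standard library (these affinity calls are Linux-only, hence the guard):

```python
import os

# Pin this process (pid 0 = self) to logical CPU 0, leaving the other
# logical CPU(s) of a hyper-threaded chip free for interactive work.
if hasattr(os, "sched_setaffinity"):      # Linux-only API
    os.sched_setaffinity(0, {0})
    assert os.sched_getaffinity(0) == {0}
```

    The same idea from a shell would be something like `taskset -c 0 make`, confining the big compilation to one virtual CPU.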

    As we move towards a future where multiple CPUs and dual cores become the norm, the need for a marginal technology like hyper-threading decreases for a while. As multi-processor machines become more and more common, more and more programs will be written to take advantage of multiple processors. We might very well come again upon the scenario of a long-running computationally-intensive program pegging all of the CPUs on the system, and we’re back where we started in the single-threaded era.

    We need systems with dedicated, marginal processing capability for high-priority user-experience tasks. It must be a cooperation between the operating system and the CPU to provide this CPU time only to operations that would dramatically affect the user experience. While it might be possible to enumerate all of the various types of operations that might be considered appropriate for such CPU time, such as all paint messages or some such, a better approach is to allow programmers to set a USER_EXPERIENCE flag on a thread, and allow that thread access to the extra resources. A good scheduler could even look for threads that aren’t marked appropriately and give them time based on heuristics applied to a thread’s observed behavior.
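
    To make the proposal concrete, here is a toy dispatch rule for it. To be clear, no real kernel exposes anything like this; USER_EXPERIENCE, Thread, and pick_next are all names I invented for illustration:

```python
# Hypothetical sketch of the proposed flag -- every name here is
# invented; this is an illustration of the idea, not a real API.
USER_EXPERIENCE = 0x1

class Thread:
    def __init__(self, name, flags=0):
        self.name = name
        self.flags = flags

def pick_next(ready, reserved_free):
    """Dispatch rule: the reserved slice serves only flagged threads;
    everything else competes for the main CPU as usual."""
    if reserved_free:
        for t in ready:
            if t.flags & USER_EXPERIENCE:
                return t, "reserved"
    return ready[0], "main"

ready = [Thread("compiler"), Thread("paint", USER_EXPERIENCE)]
thread, cpu = pick_next(ready, reserved_free=True)
# thread.name == "paint", cpu == "reserved": the screen update jumps
# the queue onto the reserved slice while the compiler keeps the main CPU.
```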

    Access by a thread should be carefully monitored, and the USER_EXPERIENCE flag could be revoked by the kernel if it detected abuse, possibly with a penalty to future scheduling. However, abuse is unlikely: because the dedicated resources are dinky compared to the rest of the system, it won’t be worth a programmer’s time to figure out how to sneak in and use the extra 10% of the system’s resources, especially if the kernel penalizes processes caught red-handed.
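
    The policing could be as simple as per-thread accounting on the reserved slice. Again a hypothetical sketch – the 10% budget and the penalty value are invented numbers, just to make the revocation idea concrete:

```python
# Hypothetical policing sketch for the proposed flag -- names and
# numbers are invented for illustration, not drawn from any real kernel.
class UXBudget:
    def __init__(self, budget=0.10, penalty=5):
        self.budget = budget      # fraction of reserved slice allowed
        self.penalty = penalty    # scheduling penalty for offenders
        self.trusted = {}         # thread name -> flag still held?
        self.penalized = {}       # thread name -> penalty applied

    def grant(self, name):
        self.trusted[name] = True

    def charge(self, name, used):
        """Account one scheduling period; revoke the flag on abuse."""
        if self.trusted.get(name) and used > self.budget:
            self.trusted[name] = False
            self.penalized[name] = self.penalty

budget = UXBudget()
budget.grant("paint")
budget.charge("paint", 0.02)   # well-behaved: keeps the flag
budget.charge("paint", 0.40)   # caught red-handed: flag revoked
```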

    And why should we care? Because user experience is the single most important factor for a human’s satisfaction – and if we aren’t satisfied with our machines, then why do we bother with them?
