Core Scheduler Changes

Besides the additions discussed above, some changes to the existing methods are required in the core
scheduler on SMP systems. While numerous small details change all over the place, the most important
differences as compared to uniprocessor systems are the following:
❑ When a new process is started with the exec system call, a good opportunity for the scheduler
to move the task across CPUs arises. Naturally, it has not been running yet, so there cannot
be any negative effects on the CPU cache by moving the task to another CPU. sched_exec
is the hook function invoked by the exec system call, and the code flow diagram is shown in
Figure 2-28.
sched_balance_self picks the CPU that is currently least loaded (and on which the process
is also allowed to run). If this is not the current CPU, sched_migrate_task forwards a
corresponding migration request to the migration thread. A condensed sketch of this path is
shown after this list.
❑ The scheduling granularity of the completely fair scheduler scales with the number of CPUs.
The more processors present in the system, the larger the granularities that can be employed.
Both sysctl_sched_min_granularity and sysctl_sched_latency are multiplied by the correction
factor 1 + log2(nr_cpus), where nr_cpus represents the number of available CPUs. However,
neither value may exceed 200 ms. sysctl_sched_wakeup_granularity is also multiplied by this
factor, but is not bounded from above. The corresponding scaling logic is sketched at the
end of this list.
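To make the exec-time balancing path more concrete, the following is a condensed sketch of
how sched_exec ties sched_balance_self and sched_migrate_task together. It is modeled on the
2.6-era kernel/sched.c, but simplified; it is not a verbatim copy of the kernel source, and
details vary between kernel versions.

/*
 * Condensed sketch of the exec-time balancing hook (modeled on the
 * 2.6-era kernel/sched.c; simplified, not the verbatim kernel code).
 */
void sched_exec(void)
{
        int new_cpu, this_cpu = get_cpu();  /* disable preemption, note current CPU */

        /* Pick the least loaded CPU among those the task is allowed to run on. */
        new_cpu = sched_balance_self(this_cpu, SD_BALANCE_EXEC);
        put_cpu();

        /* If a better CPU was found, hand the task to the migration thread. */
        if (new_cpu != this_cpu)
                sched_migrate_task(current, new_cpu);
}

Because the task has not executed any user code yet, migrating it at this point costs
essentially nothing in terms of cache affinity, which is why exec is such a convenient
balancing opportunity.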
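The granularity scaling described in the last point boils down to a small initialization
helper run at boot time. The following sketch mirrors the logic of sched_init_granularity
from that kernel generation (values are in nanoseconds); exact variable names and the upper
limit may differ in other kernel versions.

/*
 * Sketch of the granularity scaling performed at boot (modeled on
 * sched_init_granularity; simplified, values in nanoseconds).
 */
static void sched_init_granularity(void)
{
        /* Correction factor: 1 + log2(number of online CPUs). */
        unsigned int factor = 1 + ilog2(num_online_cpus());
        const unsigned long limit = 200000000;  /* 200 ms upper bound */

        sysctl_sched_min_granularity *= factor;
        if (sysctl_sched_min_granularity > limit)
                sysctl_sched_min_granularity = limit;

        sysctl_sched_latency *= factor;
        if (sysctl_sched_latency > limit)
                sysctl_sched_latency = limit;

        /* The wakeup granularity is scaled by the same factor, but not capped. */
        sysctl_sched_wakeup_granularity *= factor;
}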