Memory management implementation covers many areas:
❑ Management of physical pages in memory.
❑ The buddy system to allocate memory in large chunks.
❑ The slab, slub, and slob allocators to allocate smaller chunks of memory.
❑ The vmalloc mechanism to allocate non-contiguous blocks of memory.
❑ The address space of processes.
As we know, the virtual address space of the processor is in general divided into two parts by
the Linux kernel. The lower and larger part is available to user processes, and the upper part is
reserved for the kernel. Whereas the lower part is modified during a context switch (between two
user processes), the kernel part of the virtual address space always remains the same.
The available physical memory is mapped into the address space of the kernel. Accesses with virtual
addresses whose offset to the start of the kernel area does not exceed the size of the available RAM
are therefore automatically associated with physical page frames. This is practical because memory
allocations in the kernel area always land in physical RAM when this scheme is adopted. However,
there is one problem. The virtual address space portion of the kernel is necessarily smaller than the
maximum theoretical address space of the CPU. If there is more physical RAM than can be mapped
into the kernel address space, the kernel must resort to the highmem method to manage this
"superfluous" memory.
The use of highmem pages is problematic only for the kernel itself. The kernel
must first invoke the kmap and kunmap functions discussed below to map the
highmem pages into its virtual address space before it can use them — this is not
necessary with normal memory pages. However, for userspace processes, it makes
absolutely no difference if the pages are highmem or normal pages because they are
always accessed via page tables and never directly.
There are two types of machine that manage physical memory in different ways:
1. UMA machines (uniform memory access) organize available memory in a contiguous fashion
(possibly with small gaps). Each processor (in a symmetric multiprocessor system) is able to
access each memory area equally quickly.
2. NUMA machines (non-uniform memory access) are always multiprocessor machines. Local
RAM is available to each CPU of the system to support particularly fast access. The processors
are linked via a bus to support access to the local RAM of other CPUs — this is naturally
slower than accessing local RAM.