Memory management is the act of managing computer memory. Its essential requirement is to provide ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when no longer needed.
Memory management usually occurs at two distinct levels:
Virtual memory management
The operating system kernel manages the pages of physical memory and maps them into the virtual address spaces of processes at their request.
The following methods/algorithms fall into the category of virtual memory management:
Paging: The act of writing the contents of unused pages to disc in order to reuse the RAM page for other purposes (page out). When the data is needed again, the hardware raises a page fault for the unmapped memory address, which the kernel resolves by reading the page back from disc. Even though paging is entirely transparent to user space processes, it can severely slow a system down.
CoW (Copy on Write): Good implementations of the fork() system call do not copy the entire data of the process. Instead, only the page tables are copied, and all memory pages are marked as read-only. Whenever an access violation occurs because one of the forked processes tries to write to a memory page, the kernel copies the contents of just that page.
Memory mapped files: Memory pages do not need to be backed by the swap area; they may also be backed by files. Such memory mapped files may usually be paged out without writing anything to disc, because memory mapping is usually used as a means of input rather than output. Code sections from executables and libraries, for instance, are usually memory mapped, which avoids duplication of data in memory when several processes use the same executable/libraries. Memory mapping is also available to user space processes via the mmap() system call.
Dynamic memory management
This second level of memory management is local to every process, and is usually implemented by the standard C library (malloc() and free()). Within this layer, malloc() relies on system calls such as mmap() to request memory pages from the kernel. Large allocations are usually satisfied directly by an mmap() call. For small allocations, malloc() implementations keep track of mapped, unused memory, subdividing it as necessary to match the requests from the user code, and requesting new memory pages from the kernel whenever no suitable free memory block is found.
Common goals for dynamic memory allocators:
No unnecessary memory fragmentation
Low amount of memory used for bookkeeping purposes
Low allocation/deallocation latencies
This includes optimizations of the search for suitable free blocks, and methods to reduce the number of system calls.
Another method of memory management within a process is the function call stack. This is usually managed by manipulating a single, dedicated register in the CPU. On entry, every function changes the value of this stack pointer register to allocate stack memory for its local variables, and resets the stack pointer before it returns to its caller. This method of memory allocation is extremely fast; however, the lifetime of the objects on the stack is limited to the function that allocated them.
Stack memory may either be provided as a block of fixed size, or may be dynamically increased by the kernel whenever it notices an access violation just below the stack memory region.
Even though the kernel is not responsible for dynamic memory management on behalf of user processes, it still needs such management for itself; in Linux, this is provided by the kmalloc() function. Kernel threads also need their own stacks. In contrast to the user space dynamic memory managers, the kernel facilities cannot rely on the virtual memory manager.
Common to virtual memory management and dynamic memory management is the necessity to know when a block of memory may be deallocated. There are three approaches to handle this:
Explicit management: The user has to take care that every allocation call is matched by a deallocation call.
Reference counting: Some entity keeps track of how many times every memory block is referenced. The reference count may reside in the memory block itself, or somewhere outside of it.
Garbage collection: Used in high level scripting languages and in Java. Memory is only allocated, never explicitly deallocated; deallocation is the job of the garbage collector, which scans all used memory for references to memory blocks and automatically deallocates blocks that are no longer referenced.
Several methods have been devised to increase the effectiveness of the different memory management facilities, as the quality of the memory managers involved can have a significant effect on overall system performance.