There are several memory management approaches that can address the fragmentation problem on MMU-less embedded systems. Each algorithm is classified according to the way it finds a free block of the most appropriate size. Five categories are analyzed in depth in the works of M. Masmano et al. [1], Sun et al. [2] and P. R. Wilson [10]: Sequential Fit, Segregated Free Lists, Buddy Systems, Indexed Fit and Bitmap Fit.
A simple but not always feasible solution is to format the available system memory into uniform, fixed-size blocks. With this approach, the system cannot efficiently serve requests of diverse sizes, which makes it unsuitable for complex and demanding applications.
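To illustrate the idea, the following is a minimal sketch of such a fixed-size block pool in C. It assumes a statically sized pool with a single block size; the names pool_init, pool_alloc and pool_free are illustrative, not taken from any particular system.

```c
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE  64   /* every block has the same size (assumed)      */
#define BLOCK_COUNT 128  /* pool capacity, chosen arbitrarily for the sketch */

/* Each free block stores the pointer to the next free block in place. */
typedef union block {
    union block *next;
    uint8_t payload[BLOCK_SIZE];
} block_t;

static block_t pool[BLOCK_COUNT];
static block_t *free_list;

/* Chain all blocks into a free list once at startup. */
void pool_init(void)
{
    for (size_t i = 0; i < BLOCK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

/* Any request up to BLOCK_SIZE consumes a whole block; larger requests fail. */
void *pool_alloc(size_t size)
{
    if (size > BLOCK_SIZE || free_list == NULL)
        return NULL;
    block_t *b = free_list;
    free_list = b->next;
    return b->payload;
}

void pool_free(void *p)
{
    block_t *b = (block_t *)p;
    b->next = free_list;
    free_list = b;
}
```

The sketch makes the limitation visible: a 4-byte request and a 60-byte request both occupy a full 64-byte block, while a 65-byte request cannot be served at all.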
The buddy system attempts to satisfy a request by recursively splitting the available memory into two equally sized blocks, balancing efficiency against overall performance. After extended sequences of allocations and deallocations, however, this method tends to leave parts of the memory fragmented. In the buddy system, memory is broken down into power-of-two sized, naturally aligned blocks [17]. This approach greatly reduces external fragmentation and helps in allocating larger contiguous blocks of memory aligned to their size. On the other hand, the buddy allocator suffers from increased internal fragmentation and is not well suited for general kernel allocations; that purpose is better addressed by the slab allocator.
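The power-of-two rounding and the natural alignment are what make buddy merging cheap. The following sketch, with hypothetical helper names and an assumed minimum and maximum block order, shows how a request is rounded up to an order and how a block's buddy is located as an offset from the start of the managed pool.

```c
#include <stddef.h>
#include <stdint.h>

#define MIN_ORDER 5   /* smallest block: 2^5 = 32 bytes (assumed)  */
#define MAX_ORDER 20  /* largest block: 2^20 bytes (assumed)       */

/* Round a request up to the smallest power-of-two order that fits it. */
static unsigned size_to_order(size_t size)
{
    unsigned order = MIN_ORDER;
    while (((size_t)1 << order) < size && order < MAX_ORDER)
        order++;
    return order;
}

/*
 * Because every block is naturally aligned to its own size, the buddy of
 * a block is found by flipping the single bit that corresponds to its
 * order; 'offset' is the block's offset from the start of the pool.
 */
static uintptr_t buddy_of(uintptr_t offset, unsigned order)
{
    return offset ^ ((uintptr_t)1 << order);
}
```

For example, a 100-byte request is rounded up to order 7 (128 bytes), so 28 bytes are lost to internal fragmentation, which is exactly the cost the text above refers to.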
The slab allocator creates blocks of different sizes and matches each allocation request to the closest one. The majority of memory allocation requests in the kernel are for small, frequently used data structures. The basic idea behind the slab allocator is that commonly used objects are pre-allocated in contiguous areas of physical memory called slabs. Whenever an object is to be allocated, the slab allocator returns the first available item from a suitable slab corresponding to the object type. Because the sizes of the requested and the allocated block match, the slab allocator significantly reduces internal fragmentation [6]. A further advantage of this setup is that most allocations can be served without holding a global spinlock. CAMA [6], a research follow-up of the slab allocator, achieves better performance by splitting and merging blocks accordingly.
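A minimal sketch of this idea, not the actual kernel implementation, is shown below: one slab is carved into equally sized objects whose free entries are chained through their own storage, so allocation and release are constant-time list operations. The names slab_cache, slab_cache_init, slab_alloc and slab_free are illustrative, and the object size is assumed to be at least the size of a pointer.

```c
#include <stddef.h>
#include <stdint.h>

/* One cache manages objects of a single size inside one slab. */
struct slab_cache {
    size_t obj_size;   /* size of one object                    */
    void  *free_head;  /* first free object inside the slab     */
};

/* Carve 'mem' (mem_size bytes) into objects and build the free list. */
void slab_cache_init(struct slab_cache *c, void *mem,
                     size_t mem_size, size_t obj_size)
{
    c->obj_size  = obj_size;
    c->free_head = NULL;

    uint8_t *p = mem;
    for (size_t off = 0; off + obj_size <= mem_size; off += obj_size) {
        *(void **)(p + off) = c->free_head;   /* link into free list */
        c->free_head = p + off;
    }
}

/* Allocation pops the first free object; no searching is needed. */
void *slab_alloc(struct slab_cache *c)
{
    void *obj = c->free_head;
    if (obj)
        c->free_head = *(void **)obj;
    return obj;
}

/* Freeing pushes the object back onto the cache's free list. */
void slab_free(struct slab_cache *c, void *obj)
{
    *(void **)obj = c->free_head;
    c->free_head = obj;
}
```

Because the returned object is always exactly the size the cache was created for, the request and the allocated block match by construction, which is where the reduction of internal fragmentation comes from.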
Two-Level Segregated Fit (TLSF) [1] seeks to implement a good-fit policy in order to fulfill the most important real-time requirements. The basic segregated fit mechanism uses an array of free lists, with each list holding free blocks within a size class. In order to speed up access to the free blocks and to manage a large set of segregated lists, the array of lists is organized as a two-level array. The first level divides free blocks into classes that are a power of two apart (16, 32, 64, 128, etc.), and the second level sub-divides each first-level class linearly, where the number of divisions is a user-configurable parameter. Each array of lists has an associated bitmap used to mark which lists are empty and which ones contain free blocks.
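The two-level indexing reduces to a pair of bit operations on the block size. The sketch below, a simplified version of the mapping rather than the reference TLSF code, assumes 16 second-level divisions and sizes of at least 16 bytes (real TLSF handles smaller sizes with a special case that is omitted here).

```c
#include <stddef.h>

#define SL_INDEX_LOG2  4                      /* 2^4 = 16 second-level lists (assumed) */
#define SL_INDEX_COUNT (1 << SL_INDEX_LOG2)

/* Index of the most significant set bit (size is assumed to be non-zero). */
static unsigned fls_sz(size_t x)
{
    unsigned bit = 0;
    while (x >>= 1)
        bit++;
    return bit;
}

/*
 * Map a block size to its (first-level, second-level) list indices.
 * The first level selects the power-of-two class; the second level
 * linearly subdivides that class into SL_INDEX_COUNT ranges.
 */
static void tlsf_mapping(size_t size, unsigned *fl, unsigned *sl)
{
    *fl = fls_sz(size);
    *sl = (unsigned)((size >> (*fl - SL_INDEX_LOG2)) - SL_INDEX_COUNT);
}
```

For example, a 160-byte block maps to first-level class [128, 256) (fl = 7) and second-level index 4, i.e. the sub-range [160, 168); checking the associated bitmaps then tells the allocator in constant time whether that list, or the next larger non-empty one, holds a free block.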