Block Allocator
Applications use block allocators when they need to allocate a large number of small objects of the same size. Typical examples include small scalar values such as timestamp/value pairs representing sensor data, the abstract syntax tree nodes built by parsers, and the pages of an in-memory database. Block allocators handle such objects very efficiently, with minimal overhead, and eliminate fragmentation by design.
The idea behind a block allocator is very simple. The allocator is given a quantity of memory, which we’ll call the “large block,” divides it into equal-size pieces, and organizes them in a linked list of free elements. To serve a request for memory, the allocator returns a pointer to one element and removes it from the list. When there are no more elements in the “free list,” a new large block is obtained from the memory pool using some other allocator (for example, a list allocator). The block allocator again divides the new large block into equal-size elements, puts them into a new linked list, and serves subsequent allocation requests from this newly created list.
When an object is freed, it is placed back into its original “free list.” Since all objects in a given list are the same size, the block allocator never needs to “remember” each element’s size, or to coalesce smaller chunks into larger ones. Skipping the aggregation of adjacent free blocks saves both bookkeeping overhead and CPU cycles.
A block allocator eliminates fragmentation because all blocks in a list are of equal size: any freed block exactly fits a subsequent request from the same list, so freed blocks are not, and need not be, joined with adjacent free blocks.
There is also less overhead, making this type of allocator more efficient: each block has a fixed, known size, so there is no need to carry additional meta-data (specifically, the size of the free block) in the linked list. This frees up 4 bytes of memory per block on a 32-bit system.
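To make the mechanics concrete, here is a minimal block-allocator sketch in C. The element count per large block, the use of plain malloc() as the “other allocator” that supplies each large block, and all identifiers are illustrative assumptions. The essential trick is that each free element stores the “next” pointer inside itself, so the free list costs no extra memory.

```c
/* A minimal block-allocator sketch.  ELEMS_PER_BLOCK and the use of plain
   malloc() to obtain each large block are illustrative assumptions.
   elem_size must be at least sizeof(void *) so a free element can hold
   the "next" link inside itself. */
#include <stdlib.h>

#define ELEMS_PER_BLOCK 256       /* elements carved from one large block */

typedef struct block_alloc {
    size_t elem_size;             /* the fixed element size for this list */
    void  *free_list;             /* head of the linked list of free elements */
} block_alloc;

/* Obtain a new large block, divide it into equal-size elements, and chain
   them onto the free list. */
static int grow(block_alloc *a) {
    char *blk = malloc(a->elem_size * ELEMS_PER_BLOCK);
    if (blk == NULL) return -1;
    for (int i = 0; i < ELEMS_PER_BLOCK; i++) {
        void **elem = (void **)(blk + i * a->elem_size);
        *elem = a->free_list;     /* free element stores the "next" pointer */
        a->free_list = elem;
    }
    return 0;
}

/* Allocation: pop the first element off the free list, growing the pool
   when the list runs dry. */
void *block_alloc_get(block_alloc *a) {
    if (a->free_list == NULL && grow(a) != 0) return NULL;
    void *elem = a->free_list;
    a->free_list = *(void **)elem;
    return elem;
}

/* Freeing: push the element back onto the list.  There is no size lookup
   and no coalescing with neighbors. */
void block_alloc_put(block_alloc *a, void *elem) {
    *(void **)elem = a->free_list;
    a->free_list = elem;
}
```

Note that both block_alloc_get() and block_alloc_put() are a couple of pointer operations each, which is where the algorithm’s speed comes from.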
The most basic block allocators satisfy allocation requests only for objects that fit into their pre-determined element size, making such algorithms useful only when the allocation pattern is known in advance (for example, when the application always allocates 16-byte objects). In practice, many memory managers, including database memory managers, need to satisfy several allocation patterns. So, to take advantage of the simplicity of the block allocator algorithm while at the same time dealing with this variability, a block allocator can be combined with some other technique into a hybrid memory manager.
For example, a block allocator can maintain multiple lists of different-sized elements, choosing the list suited to a particular allocation request. Meanwhile, the large blocks themselves, and objects that exceed the element size of every list, are allocated using another general-purpose allocator (for example, a page allocator or a list allocator). In such an implementation, the number of allocations processed by the block algorithm is typically many times higher than the number made by the general-purpose allocator, and this can greatly enhance performance.
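A sketch of that hybrid arrangement, building on the block_alloc type above; the four size classes and the use of malloc() as the general-purpose fallback are illustrative assumptions:

```c
/* Hybrid sketch: one block allocator per size class, with malloc() as an
   illustrative general-purpose fallback for oversized requests. */
#include <stdlib.h>

#define NUM_CLASSES 4

static block_alloc classes[NUM_CLASSES] = {
    { 16, NULL }, { 32, NULL }, { 64, NULL }, { 128, NULL }
};

void *hybrid_alloc(size_t n) {
    /* Small request: route it to the first size class that fits. */
    for (int i = 0; i < NUM_CLASSES; i++)
        if (n <= classes[i].elem_size)
            return block_alloc_get(&classes[i]);
    /* Oversized request: fall back to the general-purpose allocator. */
    return malloc(n);
}
```

A real implementation also needs a way to route each freed pointer back to its owning list (for instance, via a small header on each large block); that bookkeeping is omitted here for brevity.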
Stack Allocator
Memory allocators would be simpler to design, and perform better, if they only had to allocate objects, and not free them. The stack allocator is designed to follow this simple strategy.
Stack-based allocators fit when memory requirements can be divided into a first phase in which objects are allocated, and a second phase in which they are de-allocated. In order to use the stack allocator, the number of allocations and the total amount of required memory should be (A) known in advance, and (B) limited.
Stack-based allocators use a concept similar to the application’s call stack. When a function is called, memory space for its arguments and local variables is pushed onto the stack (by advancing the stack pointer). When the function returns, the arguments and local variables are no longer needed and so the stack pointer is rewound to its starting point before the function call (i.e. the stack is “popped”). With the stack-based allocator, for each allocation a pointer is advanced. When the memory is no longer required, the pointer is simply rewound to the starting point.
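Here is a minimal stack-allocator sketch along those lines; the pointer-size alignment rule and all identifiers are illustrative assumptions:

```c
/* A minimal stack-allocator sketch over a caller-supplied arena. */
#include <stddef.h>

typedef struct stack_alloc {
    char  *base;    /* start of the arena this allocator manages */
    size_t size;    /* total arena size in bytes */
    size_t top;     /* the "stack pointer": offset of the next free byte */
} stack_alloc;

/* Allocation: round the current offset up for alignment, then simply
   advance the pointer. */
void *stack_alloc_get(stack_alloc *a, size_t n) {
    size_t aligned = (a->top + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
    if (aligned + n > a->size) return NULL;      /* arena exhausted */
    a->top = aligned + n;
    return a->base + aligned;
}

/* Record the current position so it can be restored later. */
size_t stack_alloc_mark(const stack_alloc *a) { return a->top; }

/* De-allocation: rewind the pointer in one move, releasing every object
   allocated since the mark at once. */
void stack_alloc_release(stack_alloc *a, size_t mark) { a->top = mark; }
```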
One useful example of the stack allocator model is XML parsing, in which many short-lived objects can be released all at once. Another example is SQL statement execution, in which all memory is released upon the statement’s commit or rollback. The process of evaluating and executing a SQL statement involves prodigious amounts of memory allocation:
The parser builds a parse tree of tokens. The optimizer is invoked next, which involves more allocations to hold candidate execution plan steps (the optimizer’s job is to assemble a sequence of steps that is less costly than any other). Finally, the execution plan must be carried out, which can involve still more allocations to hold the result set. In all, executing a SQL statement can involve thousands of individual allocations. When the application program releases the SQL statement handle, all of that memory can be released at once. This fits the two-phase pattern in which objects are first allocated, and then de-allocated.
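As an illustration, here is how the stack-allocator sketch above might serve that two-phase pattern; the arena and request sizes are arbitrary stand-ins for a real statement’s thousands of allocations:

```c
/* Illustrative two-phase use of the stack allocator sketched above. */
static char arena[64 * 1024];

void execute_statement(void) {
    stack_alloc a = { arena, sizeof arena, 0 };
    size_t mark = stack_alloc_mark(&a);          /* phase 1: allocate */
    void *tree_node  = stack_alloc_get(&a, 24);  /* parse-tree nodes,  */
    void *plan_step  = stack_alloc_get(&a, 48);  /* plan steps, result */
    void *result_row = stack_alloc_get(&a, 64);  /* rows, and so on    */
    (void)tree_node; (void)plan_step; (void)result_row;
    stack_alloc_release(&a, mark);               /* phase 2: one rewind
                                                    releases everything */
}
```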
An important by-product of the stack approach is improved safety: because memory is allocated and de-allocated in two phases (not in random order), the application doesn’t have to track individual allocations and it is impossible to accidentally introduce a memory leak through improper de-allocation.
Another benefit is much improved performance. During allocation, there is no chain of pointers to walk. And de-allocation performance benefits even more: though there might have been thousands of allocations during the first phase, de-allocating the memory does not require calling the equivalent of free thousands of times. Rather, the stack pointer is rewound in one move.
The stack allocator imposes virtually no overhead. There is a single stack pointer (4 bytes in a 32-bit system) in place of a chain of pointers in a list allocator or block allocator.
A final benefit is improved resiliency. Whereas list and block allocators intersperse chains of pointers in the heap, the stack allocator has just the stack pointer. So the chance of overwriting that memory and compromising the integrity of the allocator’s meta-data is greatly reduced.
Bitmap Allocator
The bitmap allocation algorithm is a little more complex. It acts on the memory pool by allocating objects in multiples of a pre-defined unit called the allocation quantum; the quantum might be a word or a double word. A bitmap is a vector of bit-flags, with each bit corresponding to one quantum of memory. A bit value of 0 indicates a free quantum, while 1 indicates an allocated quantum.
The bitmap allocator is similar to the block allocator in that it always allocates memory in multiples of a pre-specified size, but it preserves the list allocator’s flexibility to satisfy allocation requests of arbitrary size: it does so by finding a run of adjacent free bits and allocating them together.
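Here is a minimal bitmap-allocator sketch of that idea; the quantum size, pool size, first-fit search, and identifiers are illustrative assumptions:

```c
/* A minimal bitmap-allocator sketch.  The bitmap lives outside the managed
   pool, one bit per quantum (0 = free, 1 = allocated). */
#include <stddef.h>

#define QUANTUM 8                           /* bytes per allocation quantum */
#define NQUANTA 1024                        /* quanta in the managed pool */

static char          pool[QUANTUM * NQUANTA];
static unsigned char bitmap[NQUANTA / 8];   /* zero-initialized: all free */

static int  get_bit(int i)   { return (bitmap[i / 8] >> (i % 8)) & 1; }
static void set_bit(int i)   { bitmap[i / 8] |=  (unsigned char)(1 << (i % 8)); }
static void clear_bit(int i) { bitmap[i / 8] &= (unsigned char)~(1 << (i % 8)); }

/* First fit: scan for a run of adjacent free quanta long enough to hold
   n bytes, then mark the whole run allocated. */
void *bitmap_alloc(size_t n) {
    int need = (int)((n + QUANTUM - 1) / QUANTUM);  /* round up to quanta */
    if (need == 0) need = 1;
    int run = 0;
    for (int i = 0; i < NQUANTA; i++) {
        run = get_bit(i) ? 0 : run + 1;     /* count consecutive free bits */
        if (run == need) {
            int start = i - need + 1;
            for (int j = start; j <= i; j++) set_bit(j);
            return pool + (size_t)start * QUANTUM;
        }
    }
    return NULL;                            /* no free run is long enough */
}

/* Freeing clears the run's bits; the caller passes the size back in. */
void bitmap_free(void *p, size_t n) {
    int start = (int)(((char *)p - pool) / QUANTUM);
    int need  = (int)((n + QUANTUM - 1) / QUANTUM);
    if (need == 0) need = 1;
    for (int j = start; j < start + need; j++) clear_bit(j);
}
```

Because bitmap_free() must know how many quanta to clear, this sketch asks the caller to pass the size back; a real implementation would typically record each run’s length itself (for example, in a header or a parallel structure).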
Advantages of bitmap allocators include the following:
1. They can allocate groups of objects sequentially, thus maintaining locality of reference: objects allocated sequentially are placed close to one another in storage, on the same physical “page.”
2. Fragmentation is reduced. When objects are allocated by a quantum of comparable size, small unused holes in the memory pool are avoided.
3. A performance advantage lies in the fact that the bitmaps themselves are not interleaved with the main storage, which improves the locality of searching.
4. Keeping the bitmap separate from the heap also improves safety: memory overwrites (e.g. allocating 20 bytes and writing 24 bytes through the returned pointer) will not compromise the integrity of the allocator’s meta-data.
Bitmap allocators are commonly used in garbage collectors, database systems and file system disk block managers, where enforcing locality of reference and reducing fragmentation are imperative.