Dynamic Memory Allocation: Justifiably Taboo?


Risks of Dynamic Memory Management

Defragmentation strategies used by the general purpose allocation mechanisms that underlie malloc() and free() can cause non-deterministic behavior and/or impose CPU overhead that is too high for embedded systems. Application developers tend to use dynamic memory allocation liberally, without considering such effects. A further drawback is that standard allocators place no limit on their consumption of physical or virtual memory: their de facto limit is the total amount of system memory, which introduces a risk of running out of memory entirely.
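
Because the heap can be exhausted at any time, every allocation should be checked before use. A minimal sketch (the make_label() helper is hypothetical):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: duplicates a name into freshly allocated
 * memory. The allocation is checked, because a general purpose
 * allocator's only hard limit is total system memory. */
char *make_label(const char *name)
{
    char *label = malloc(strlen(name) + 1);
    if (label == NULL) {               /* the heap can be exhausted */
        fprintf(stderr, "out of memory\n");
        return NULL;                   /* caller must handle failure */
    }
    strcpy(label, name);
    return label;
}
```

The caller, in turn, must check the return value and eventually free() the label.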

One of the greatest risks in dynamic memory allocation is failing to diligently relinquish (free) allocated memory once it is no longer needed. The result is a memory leak that, no matter how much system memory is available, will cause the system to become progressively slower and eventually stop (or crash) altogether. Dynamically allocated memory must therefore be carefully released as soon as it is no longer needed or goes out of scope.
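
Leaks typically hide on early-return paths. A minimal sketch, contrasting a leaking function with its corrected version (both helpers, parse_bad() and parse_good(), are hypothetical):

```c
#include <stdlib.h>
#include <string.h>

/* Leaking version: the second early return loses the only pointer
 * to buf, so the 64 bytes can never be reclaimed. */
int parse_bad(const char *msg)
{
    char *buf = malloc(64);
    if (buf == NULL)
        return -1;
    if (strlen(msg) >= 64)
        return -1;            /* LEAK: buf is never freed on this path */
    strcpy(buf, msg);
    /* ... use buf ... */
    free(buf);
    return 0;
}

/* Corrected version: every exit path releases the allocation. */
int parse_good(const char *msg)
{
    char *buf = malloc(64);
    if (buf == NULL)
        return -1;
    if (strlen(msg) >= 64) {
        free(buf);            /* release before the early return */
        return -1;
    }
    strcpy(buf, msg);
    /* ... use buf ... */
    free(buf);
    return 0;
}
```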

Memory fragmentation is the phenomenon that occurs when a task allocates and de-allocates memory in random order. For the sake of simplicity, assume that the “heap” is 100 bytes, and memory is allocated in the following order:

A. 10 bytes

B. 46 bytes

C. 13 bytes

D. 15 bytes

This leaves 16 bytes of the original 100 free (diagram 2).

Subsequently, the 13-byte allocation (C) is released (diagram 3).

Now an allocation request is made for 21 bytes. In aggregate, 29 bytes of memory are free, yet the request cannot be satisfied: due to fragmentation, there is no contiguous free hole of 21 bytes. This is the essence of fragmentation. The longer a system runs, the more fragmented the heap becomes, owing to the randomness with which memory is allocated and freed.
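
The scenario above can be modeled directly. A toy sketch (hole offsets follow the allocation order described in the text; largest_hole() is a hypothetical helper):

```c
/* Toy model of the 100-byte heap: A=10 occupies [0,10), B=46 occupies
 * [10,56), C=13 occupied [56,69), D=15 occupies [69,84). After C is
 * freed, two non-adjacent holes remain: C's old slot and the tail. */
struct hole { int offset, size; };

static const struct hole holes[2] = { { 56, 13 }, { 84, 16 } };

/* Returns the largest single hole; stores the aggregate in *total. */
int largest_hole(int *total)
{
    int largest = 0;
    *total = 0;
    for (int i = 0; i < 2; i++) {
        *total += holes[i].size;
        if (holes[i].size > largest)
            largest = holes[i].size;
    }
    return largest;
}
```

The aggregate free space is 29 bytes, but the largest contiguous hole is only 16 bytes, so the 21-byte request fails.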

List Allocators: A Closer Look

List allocators used by malloc() and free() are, by necessity, general purpose allocators that aren’t optimized for any particular memory allocation pattern. Compared to allocators that take specific allocation needs into account, these general purpose mechanisms are more likely to introduce unpredictability, high performance cost, and excessive resource (particularly memory) consumption. To understand why, let’s examine the commonly used list allocation algorithms: first-fit, next-fit, best-fit and quick-fit.

The first-fit algorithm always walks the chain of pointers from the beginning, and therefore attempts to allocate memory from the “front” of the heap first, leaving large free holes at the back. This limits fragmentation, but at the expense of unpredictable performance: the longer the system runs, the longer the chain becomes, and the allocator typically walks a greater distance before finding a free hole that is large enough.
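
A first-fit search can be sketched as follows (the free_block node type is a simplification; real allocators keep this metadata in block headers inside the heap itself):

```c
#include <stddef.h>

/* Simplified free-list node. */
struct free_block {
    size_t size;
    struct free_block *next;
};

/* First fit: always start at the head of the chain and take the first
 * hole that is large enough. The front of the heap is carved up first,
 * and search time grows as the chain gets longer. */
struct free_block *first_fit(struct free_block *head, size_t want)
{
    for (struct free_block *b = head; b != NULL; b = b->next)
        if (b->size >= want)
            return b;
    return NULL;    /* no single hole is large enough */
}
```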

The next-fit algorithm attempts to smooth the performance at the expense of greater fragmentation. This algorithm will walk the chain from wherever it last left off, so it will allocate memory from more-or-less random locations in the heap.
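
A next-fit sketch over the same simplified node type; the static rover pointer remembers where the previous search ended:

```c
#include <stddef.h>

/* Simplified free-list node. */
struct free_block {
    size_t size;
    struct free_block *next;
};

/* Next fit: resume scanning at the "rover" -- wherever the previous
 * search stopped -- wrapping around once, instead of restarting at
 * the head. Allocations thus land at roughly random heap positions. */
static struct free_block *rover;

struct free_block *next_fit(struct free_block *head, size_t want)
{
    if (head == NULL)
        return NULL;
    struct free_block *start = (rover != NULL) ? rover : head;
    struct free_block *b = start;
    do {
        if (b->size >= want) {
            rover = (b->next != NULL) ? b->next : head; /* remember spot */
            return b;
        }
        b = (b->next != NULL) ? b->next : head;         /* wrap around */
    } while (b != start);
    return NULL;
}
```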

The best-fit algorithm attempts to minimize fragmentation, but again at the risk of unpredictable, and potentially very bad, performance. This algorithm walks the chain until it finds a free hole that is exactly the right size, or is as close as possible to the allocation request. It could find the perfect free hole immediately, or it might have to walk the entire chain.
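
A best-fit sketch, again over a simplified free-list node type:

```c
#include <stddef.h>

/* Simplified free-list node. */
struct free_block {
    size_t size;
    struct free_block *next;
};

/* Best fit: walk the chain and remember the smallest hole that still
 * satisfies the request. A perfect hole may turn up immediately, but
 * the worst case touches every node in the chain. */
struct free_block *best_fit(struct free_block *head, size_t want)
{
    struct free_block *best = NULL;
    for (struct free_block *b = head; b != NULL; b = b->next)
        if (b->size >= want && (best == NULL || b->size < best->size))
            best = b;
    return best;
}
```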

The quick-fit algorithm maintains a separate list of the most commonly requested allocation sizes, together with pointers to free holes in the heap that match those sizes. This increases the overhead (there is more metadata to manage), but it can provide better, and more consistent, performance for common allocation request sizes.
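
A quick-fit sketch with hypothetical size classes of 16, 32 and 64 bytes; uncommon sizes fall back to the general purpose allocator:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical common sizes: one singly linked list of free blocks is
 * kept per class, so those requests are satisfied in O(1). */
#define NCLASSES 3
static const size_t class_size[NCLASSES] = { 16, 32, 64 };

struct qnode { struct qnode *next; };
static struct qnode *quick_list[NCLASSES];

static int class_of(size_t want)
{
    for (int i = 0; i < NCLASSES; i++)
        if (class_size[i] == want)
            return i;
    return -1;                    /* uncommon size */
}

void *quick_alloc(size_t want)
{
    int c = class_of(want);
    if (c >= 0 && quick_list[c] != NULL) {  /* O(1) pop from class list */
        struct qnode *n = quick_list[c];
        quick_list[c] = n->next;
        return n;
    }
    return malloc(want);          /* fallback: general purpose path */
}

void quick_free(void *p, size_t size)
{
    int c = class_of(size);
    if (c >= 0) {                 /* push back onto the class list */
        struct qnode *n = p;
        n->next = quick_list[c];
        quick_list[c] = n;
    } else {
        free(p);
    }
}
```

Freed blocks of a common size are recycled directly from the class list, which is the source of quick-fit’s consistent performance for those sizes.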

Pulling it together

In summary, general purpose allocators

  • impose no limit on memory consumption
  • have unpredictable and potentially unacceptable performance
  • suffer from fragmentation
  • can impose excessive overhead

Avoiding general purpose allocators is generally a sound strategy for embedded and real-time systems. The drawbacks of these allocators are not surprising: by definition, they must satisfy a wide variety of application scenarios and allocation patterns, and a tool designed to address every scenario will inevitably fail to meet some needs precisely. As discussed above, embedded systems are held to higher standards than non-embedded applications in areas such as performance, predictability and reliability, and they need software tools and components that do their jobs precisely.
