Mono mempools are linked lists of memory chunks. When mono_mempool_alloc receives a request for memory, it uses the following allocation strategy:
1. If the request fits, the memory is carved out of the beginning of the free space of the current chunk.
2. If the request is larger than the current chunk's free space, a new chunk is created and added at the head of the linked list.
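The two cases above can be sketched as follows. This is a simplified illustration, not the actual Mono runtime code: the type names, function names, and growth policy are all assumptions made for the sake of the example.

```c
#include <stdlib.h>

/* Minimal sketch of the two-case strategy described above. */
typedef struct Chunk {
    struct Chunk *next;   /* older chunks, kept only so they can be freed */
    size_t size;          /* capacity of data[] */
    size_t used;          /* bytes already handed out */
    char data[];
} Chunk;

typedef struct {
    Chunk *head;          /* the "current chunk": the only one allocated from */
} Pool;

static Chunk *chunk_new(size_t size, Chunk *next) {
    Chunk *c = malloc(sizeof *c + size);
    c->next = next;
    c->size = size;
    c->used = 0;
    return c;
}

Pool *pool_new(size_t initial_size) {
    Pool *p = malloc(sizeof *p);
    p->head = chunk_new(initial_size, NULL);
    return p;
}

void *pool_alloc(Pool *p, size_t size) {
    Chunk *c = p->head;
    if (c->size - c->used < size) {
        /* Case 2: the request does not fit. A fresh chunk becomes the new
         * head; whatever free space remained in the old head is stranded. */
        size_t new_size = size > c->size ? size : c->size;  /* illustrative growth policy */
        c = chunk_new(new_size, c);
        p->head = c;
    }
    /* Case 1: hand out the next `size` bytes of the current chunk. */
    void *ret = c->data + c->used;
    c->used += size;
    return ret;
}

void pool_destroy(Pool *p) {
    for (Chunk *c = p->head; c != NULL; ) {
        Chunk *next = c->next;
        free(c);
        c = next;
    }
    free(p);
}
```

Note that `pool_alloc` never walks past the head: once a chunk is displaced from the head position, its free tail is unreachable until `pool_destroy`.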
A consequence of this strategy is that when case 2 is hit, the free space in the previous current chunk becomes unusable: it is unused, but because that chunk will never again be the head of the linked list, it will never be allocated from. The memory is reclaimed when the mempool itself is destroyed, but until then it is completely wasted.
Consider the following innocuous set of operations:
- Create a mempool with initial pool size 2048
- Allocate an entity of size 2049
- Allocate an entity of size 1024
The first allocation will lead to a new chunk of size 3072 being allocated, wasting all of the free space in the initial chunk. The second allocation will also lead to a new chunk being allocated, since only 3072 - 2049 = 1023 bytes remain free, wasting that roughly 1 KB at the end of the previous chunk.
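The arithmetic behind that second wasted tail, using only the sizes stated in the example:

```c
#include <stddef.h>

/* Sizes taken from the example above. */
enum {
    SECOND_CHUNK_SIZE = 3072,   /* chunk created for the first allocation */
    FIRST_ALLOC       = 2049,
    SECOND_ALLOC      = 1024
};

/* Free space left in the 3072-byte chunk after the 2049-byte entity. */
size_t free_tail(void) {
    return SECOND_CHUNK_SIZE - FIRST_ALLOC;   /* 1023 bytes */
}

/* The 1024-byte request does not fit in that tail, so a third chunk is
 * created and the 1023-byte tail is stranded. */
int second_alloc_fits(void) {
    return SECOND_ALLOC <= free_tail();
}
```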
This might not actually be a problem: we could view the memory wastage as an intentional tradeoff of memory efficiency for time efficiency. There is also a mitigating detail: allocations of >= 4096 bytes take a separate path in which the "current chunk" does not change, so you can never waste more than 4095 bytes per allocation this way. The worrying part is that we are not tracking how much memory is lost. We should at least keep enough statistics to tell how much of a problem this is.
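One way to keep such a statistic is to count the stranded bytes at the moment case 2 retires the current chunk. The sketch below is an assumption about how this could look, not the real Mono implementation; in particular, the growth policy (round the request up to a 1024-byte multiple, which happens to reproduce the 3072-byte chunk from the example) is invented for illustration.

```c
#include <stdlib.h>

/* A chunk plus a per-pool counter of bytes lost to retired chunks.
 * All names here are illustrative, not the Mono runtime API. */
typedef struct StatChunk {
    struct StatChunk *next;
    size_t size;
    size_t used;
    char data[];
} StatChunk;

typedef struct {
    StatChunk *head;
    size_t wasted;        /* bytes left unreachable in retired chunks */
} StatPool;

static StatChunk *stat_chunk_new(size_t size, StatChunk *next) {
    StatChunk *c = malloc(sizeof *c + size);
    c->next = next;
    c->size = size;
    c->used = 0;
    return c;
}

StatPool *stat_pool_new(size_t initial_size) {
    StatPool *p = malloc(sizeof *p);
    p->head = stat_chunk_new(initial_size, NULL);
    p->wasted = 0;
    return p;
}

void *stat_pool_alloc(StatPool *p, size_t size) {
    StatChunk *c = p->head;
    if (c->size - c->used < size) {
        /* Record the free tail that is about to become unreachable. */
        p->wasted += c->size - c->used;
        size_t rounded = (size + 1023) & ~(size_t)1023;  /* assumed growth policy */
        c = stat_chunk_new(rounded, c);
        p->head = c;
    }
    void *ret = c->data + c->used;
    c->used += size;
    return ret;
}
```

Reporting `wasted` alongside the pool's total size would be enough to tell whether the wastage is a real problem in practice.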