Bug 35136 - Mempool allocation strategy can lead to "leaked" allocated-but-unusable memory
Summary: Mempool allocation strategy can lead to "leaked" allocated-but-unusable memory
Status: NEW
Alias: None
Product: Runtime
Classification: Mono
Component: General
Version: unspecified
Hardware: All
OS: All
Priority: Normal
Severity: normal
Target Milestone: ---
Assignee: Bugzilla
URL:
Depends on:
Blocks:
 
Reported: 2015-10-21 11:39 UTC by Andi McClure
Modified: 2015-10-21 12:01 UTC
CC: 2 users

Tags:
Is this bug a regression?: ---
Last known good build:

Notice (2018-05-24): bugzilla.xamarin.com is now in read-only mode.


Description Andi McClure 2015-10-21 11:39:52 UTC
Mono mempools are linked lists of memory chunks. When mono_mempool_alloc receives a request for memory, it follows this strategy for allocating:
1. If possible, the memory is assigned from the beginning of the free space of the current chunk.
2. If the request size is larger than the current chunk's free space, a new chunk is created and added to the head of the linked list.
A consequence of this strategy is that when case 2 is hit, the free space remaining in the previous current chunk becomes unusable: the chunk will never be the head of the linked list again, so it will never be allocated from. The memory is reclaimed when the mempool itself is destroyed, but until then it is totally wasted.

Consider the following innocuous set of operations:
- Create a mempool with initial pool size 2048
- Allocate an entity of size 2049
- Allocate an entity of size 1024

The first allocation will lead to a new chunk of size 3072 being allocated, wasting all 2048 bytes of free space in the initial chunk. The second allocation will also lead to a new chunk being allocated, because only 3072 - 2049 = 1023 bytes remain free, wasting that roughly 1k at the end of the previous new chunk.

This might not actually be a problem: the wastage could be viewed as an intentional tradeoff of memory efficiency for time efficiency, and because a separate path is used for allocations of >= 4096 bytes (the "current chunk" does not change in that case), no single allocation can waste more than 4095 bytes this way. What is worrying, however, is that we are not tracking how much memory is lost this way. We should at least keep enough statistics to tell how much of a problem this is.