double allocations:

Jason Evans jasone at canonware.com
Mon Sep 23 18:13:06 PDT 2013


On Sep 23, 2013, at 9:59 AM, bill <bill at cs.uml.edu> wrote:
> I've noticed that an allocation request for a large chunk of memory (128GB) results in two calls to pages_map() (in src/chunk_mmap.c), consuming 2x the VM I requested.  In a 64 bit world this is not a big problem, but I've recoded pages_map() to force allocation from an mmap'd ssd (instead of swap anonymous mmap), and it's forcing me to run out of backing store.  The issue I would like to understand is why pages_map() is called twice with separate requests for the single 128GB jemalloc() that I'm doing in my application.  The first allocation is followed by a call to pages_unmap(), but with an unmap size of 0 bytes, leaving it fully mapped, while the second allocation (which is slightly larger than 128GB) is trimmed to exactly 128GB by 2 subsequent pages_unmap() calls.  This behavior seems very strange to me, and any explanation would be appreciated.

It sounds like the first time pages_map() is called, it returns a result that isn't adequately aligned.  The second time, extra space is allocated so that the result can be trimmed to alignment boundaries.
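In outline, that over-allocate-and-trim path looks roughly like the following.  This is a minimal sketch, not the actual chunk_alloc_mmap() code in src/chunk_mmap.c; the function name and the assumption that alignment is a power of two and a multiple of the page size are mine.

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

static void *
map_aligned(size_t size, size_t alignment)
{
	/* Over-allocate so an aligned region of "size" bytes must fit inside. */
	size_t alloc_size = size + alignment;
	void *addr = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE,
	    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return (NULL);

	uintptr_t base = (uintptr_t)addr;
	uintptr_t aligned = (base + alignment - 1) & ~((uintptr_t)alignment - 1);
	size_t lead = (size_t)(aligned - base);
	size_t trail = alloc_size - lead - size;

	/* Trim the slop at both ends, keeping only [aligned, aligned + size). */
	if (lead != 0)
		munmap(addr, lead);
	if (trail != 0)
		munmap((void *)(aligned + size), trail);
	return ((void *)aligned);
}

The leading and trailing munmap() calls here correspond to the two pages_unmap() calls you see trimming the second, slightly-larger-than-128GB mapping down to exactly 128GB.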

You say that the interposed call to pages_unmap() receives a size of 0.  Assuming the call is coming from chunk_alloc_mmap(), I see no way that can happen.  There was a bug of this nature in a roughly three-year-old version of jemalloc, but I hope you're using a more modern version.  Also of relevance: a feature like the SSD backing you added existed in the 2.x versions of jemalloc, but I removed it because no one ever claimed to have found it useful.
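For context, a file-backed pages_map() along those lines might look roughly like the sketch below.  The backing path, the global bump offset, and the function name are illustrative assumptions, not code from jemalloc 2.x or from the patch described above; size is assumed page-aligned, as it is for pages_map().

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>

static int	swap_fd = -1;	/* pre-created file on the SSD; path is a placeholder */
static off_t	swap_off = 0;	/* naive bump pointer for file offsets */

static void *
pages_map_file(size_t size)
{
	void *ret;

	if (swap_fd == -1) {
		swap_fd = open("/ssd/jemalloc.swap", O_RDWR);
		if (swap_fd == -1)
			return (NULL);
	}
	/* File-backed instead of anonymous: pages are backed by the SSD file. */
	ret = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, swap_fd,
	    swap_off);
	if (ret == MAP_FAILED)
		return (NULL);
	swap_off += size;
	return (ret);
}

With a bump-style offset like this, every pages_map() call permanently consumes file space for its full request, so any extra, about-to-be-trimmed mapping from the alignment slow path shows up directly as lost backing store, which would explain the pressure described above.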

Jason

