jemalloc hooks clarifications
bucjac at gmail.com
Tue Jan 19 09:56:00 PST 2016
> On Dec 23, 2015, at 6:20 PM, Jason Evans <jasone at canonware.com> wrote:
> On Nov 25, 2015, at 8:14 AM, Jakob Buchgraber <jakob.buchgraber at tum.de> wrote:
>> I am playing around with the memory management hooks introduced in version 4.
>> So I wrote a delegate for the default chunk hooks that additionally reports to
>> stdout what's happening.
>> The test program allocates 1GB of memory and immediately frees it.
>> It then tries to allocate 4MB and 8MB. The output is as follows:
>> ALLOC: new_addr 0, size 1073741824, alignment 2097152, zero 1, commit 1, arena_ind 0, ret 0x7f2f52a00000
>> DALLOC: chunk 0x7f2f52a00000, size 1073741824, committed 1, arena_ind 0
>> DECOMMIT: chunk 0x7f2f52a00000, size 1073741824, offset 0, length 1073741824, arena_ind 0
>> PURGE: chunk 0x7f2f52a00000, size 1073741824, offset 0, length 1073741824, arena_ind 0
>> ALLOC: new_addr 0, size 4194304, alignment 2097152, zero 1, commit 1, arena_ind 0, ret 0x7f2f52a00000
>> ALLOC: new_addr 0, size 8388608, alignment 2097152, zero 1, commit 1, arena_ind 0, ret 0x7f2f52e00000
>> Given that the 1GB has not been deallocated, only purged, I would have expected
>> the last two ALLOCations not to happen, and the virtual memory from the
>> earlier 1GB allocation to be reused instead?
> It looks to me like the first ALLOC gets 2^30 bytes at 0x7f2f52a00000, and the DALLOC/DECOMMIT/PURGE logging indicates that during free() the memory is madvise()d away, but the virtual memory is cached for future use. Then the ALLOCs of 2^22 and 2^23 bytes use the lowest contiguous parts of the cached virtual memory (0x7f2f52a00000 == 0x7f2f52a00000 for the 2^30 and 2^22 allocations). If I understand correctly, this exactly matches your expectations.
Thanks, you are correct. I was confused by the ALLOC call, as I (wrongly) assumed that fetching a chunk of cached virtual memory does not invoke the chunk allocation hook.
I think there might be an issue with this approach though: https://github.com/jemalloc/jemalloc/issues/307
>> Also, on an unrelated note, is it generally safe to trigger purging for arena A
>> from within an allocation chunk hook of arena B, with A != B?
>> The reason I am asking this question is that I would generally want to
>> run with purging disabled on all arenas, but if some threshold of committed
>> memory is surpassed I would like to enable purging for some arenas.
>> Does this sound feasible?
> Currently this will probably work, but isn't in general safe. I have some long term plans to allocate internal metadata from the auto arenas (maybe just arena 0, maybe any auto arena, depending on how things work out), so that it is possible to do low overhead full arena reset without losing critical metadata (https://github.com/jemalloc/jemalloc/issues/146). These changes would create the potential for deadlock in what you're proposing.
It’s deadlocking right now as well, as I am accessing stats from within the chunk hooks to determine which arenas to purge. I had to replace the malloc mutexes with recursive mutexes to make it work. Seems fine so far.
Basically, I am running with lots of main memory (> 1TB). Most of the time the program uses only a fraction of the available memory, but some queries will require almost all of it in some random arena. So even if I leave purging on and set lg_dirty_mult to, say, 3, some arenas might end up caching tens of GB of physical memory while others run out, and the program will crash.

Ideally, I would want BSD's MADV_FREE on Linux; that patch never got merged, though. So what I am doing is adding some logic that tracks the amount of committed physical memory, and if some threshold is reached, I query the jemalloc stats and dynamically adjust the purging ratio. Does that make sense?