Memory usage regression
Jason Evans
jasone at canonware.com
Wed Oct 24 15:06:25 PDT 2012
On Oct 24, 2012, at 12:36 PM, Mike Hommey wrote:
> With that replayed workload, I can see two main things:
> - The amount of resident memory used by jemalloc 3 is greater than that
> of mozjemalloc after freeing big parts of what was allocated (in
> Firefox, after closing all tabs, waiting for things to settle, and
> forcing GC).
> This is most likely due to different allocation patterns leading to
> some kind of fragmentation after freeing part of the allocated memory.
> See http://i.imgur.com/fQKi4.png for a graphical representation of
> what happens to the RSS value at the different checkpoints during the
> Firefox workload.
This difference may be inherent, due to something like size class changes. Are there any configuration differences between mozjemalloc and jemalloc 3, besides tcache, that weren't removed? In particular, narenas is an important one.
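If it helps to double-check, a minimal sketch along these lines would read the effective settings from a jemalloc 3 build via mallctl(); the option names used here are the standard jemalloc ones, but exact names and value types can differ between versions, so treat this as illustrative rather than as the build's actual configuration:

    /* Sketch only: query the arena/tcache configuration via mallctl().
     * Option names and value types can vary between jemalloc versions. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        size_t narenas, sz = sizeof(narenas);
        if (mallctl("opt.narenas", &narenas, &sz, NULL, 0) == 0)
            printf("opt.narenas: %zu\n", narenas);

        bool tcache;
        sz = sizeof(tcache);
        if (mallctl("opt.tcache", &tcache, &sz, NULL, 0) == 0)
            printf("opt.tcache: %s\n", tcache ? "true" : "false");
        return 0;
    }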
> - The amount of mmap()ed memory is dangerously increasing during the
> workload. It almost (but not quite) looks like jemalloc doesn't reuse
> pages it has purged. See http://i.imgur.com/klfJv.png ; VmData is
> essentially the sum of all anonymous ranges of memory in the process.
> Such an increase in VmData means we'd eventually exhaust the 32-bit
> address space on 32-bit OSes, even though the resident memory usage
> is pretty low.
This looks pretty bad. The only legitimate potential explanation I can think of is that jemalloc now partitions dirty and clean pages (and jemalloc 3 is much less aggressive than mozjemalloc about purging), so it can be forced to allocate a new chunk for a large object even though an existing chunk would have enough room if the clean and dirty available runs were coalesced. This increases run fragmentation in general, but it tends to dramatically reduce the number of pages that are dirtied.

I'd like to see the output of malloc_stats_print() for two adjacent points along the x axis, like "After iteration 5" and its predecessor. I'd also be curious whether the VM size keeps increasing after many grow/shrink cycles; if so, it might be due to an outright bug in jemalloc.
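For reference, capturing those snapshots at each checkpoint should only take something like the sketch below; malloc_stats_print() is the standard jemalloc interface, and the checkpoint label is just illustrative:

    /* Sketch only: dump full allocator statistics at a workload checkpoint
     * so two adjacent iterations can be diffed.  A NULL write callback sends
     * the output to stderr; a NULL opts string requests the full report. */
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    static void dump_stats(const char *checkpoint) {  /* e.g. "After iteration 5" */
        fprintf(stderr, "=== %s ===\n", checkpoint);
        malloc_stats_print(NULL, NULL, NULL);
    }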
Thanks,
Jason