jasone at canonware.com
Wed Apr 4 10:31:41 PDT 2012
On Apr 3, 2012, at 9:21 AM, Mike Hommey wrote:
> On Thu, Mar 22, 2012 at 11:22:03AM -0700, Jason Evans wrote:
>> On Mar 22, 2012, at 11:03 AM, Mike Hommey wrote:
>>> In Firefox, we're tracking some of the stats provided by our fork of
>>> jemalloc. One of them is HeapCommitted: memory mapped by the heap
>>> allocator that is committed, i.e. in physical memory or paged to
>>> disk. When heap-committed is larger than heap-allocated, the
>>> difference between the two values is likely due to external
>>> fragmentation; that is, the allocator allocated a large block of
>>> memory and is unable to decommit it because a small part of that
>>> block is currently in use.
>>> It would seem like this could match stats.active, but I'm not
>>> entirely sure that is the case. In particular, we don't count
>>> madvised pages in that metric, but it would seem stats.active does,
>>> although I haven't dug deep enough yet.
>> stats.active tracks all pages with active application allocations in
>> them. It does not include dirty unused pages for which madvise() has
>> not yet been called, nor does it include pages that are entirely
>> devoted to allocator metadata.
> So essentially, what we are currently tracking as committed, which
> doesn't include metadata, would be
> stats.active + stats.arenas.<i>.pdirty for each arena
> I'm starting to think it would be convenient to have special variables
> that return the sum of the corresponding variables for all arenas.
> Something like stats.arenas.pdirty that would be the sum of all
> arenas' pdirty values.
This already exists. You can pass narenas as the value for <i>, and you get summed statistics.
> Another thing we do in that committed number is that we only count pages
> for huge allocations instead of complete chunks. If you allocate
> chunk_size + a few pages, stats.active will count 2 * chunk_size, while
> what we are after is chunk_size + a few pages. As I'm considering
> pushing this upstream, I would like to know whether you'd rather this be
> done in stats.active, or a separate variable.
jemalloc tracks usable size rather than request size for all allocations, whether small, large, or huge, and applications are entitled to use the full size reported by malloc_usable_size() (not to mention the "real size" reported by the *allocm() API). Modifying huge allocations to no longer be a multiple of the chunk size would have some unfortunate chunk management side effects. Is this causing special pain in the context of Windows?
More information about the jemalloc-discuss mailing list