Memory usage regression
mh+jemalloc at glandium.org
Sun Nov 4 23:17:51 PST 2012
On Sun, Nov 04, 2012 at 09:17:32PM -0800, Jason Evans wrote:
> On Nov 1, 2012, at 12:23 PM, Jason Evans wrote:
> > On Oct 31, 2012, at 12:00 AM, Mike Hommey wrote:
> >> It's unfortunately only slightly better.
> >> http://i.imgur.com/hN1Cj.png
> > Thanks for testing it. Too bad it didn't help.
> > I spent some time yesterday thinking about the clean vs. dirty run
> > fragmentation problem and came to realize that up to now
> > all of the dirty page purging strategies jemalloc has employed have
> > been about limiting RSS, with only indirect regard for VM size. I
> > developed a patch that actually tracks the amount of clean/dirty run
> > fragmentation, but I'm still working out how to act on the
> > information.
> I finally managed to experiment a bit with the aforementioned patch,
> and it looks reasonably good (chunk fragmentation is *way* down). I'm
> seeing a higher soft page fault rate with this patch in place, but the
> patch and the control appear to be converging as the experiments run,
> so the fragmentation reduction may have some positive performance
> effects that mitigate the cost of extra purging.
This patch works quite well. The result is still above mozjemalloc, but
the leak is plugged. Thanks.
BTW, an interesting fact, if I didn't botch my stats: at the end of
the 5 iterations, while 17MB are allocated, sucking 68MB of RSS, only
40MB worth of pages have allocated data in them. The number is similar
to what I get with mozjemalloc (mozjemalloc is actually about 100K
higher than jemalloc3 on that metric, while RSS is 6MB higher with
jemalloc3), which means (if my stats are not broken) that there is still
room for improving RSS.