High amount of private clean data in smaps

Jason Evans jasone at canonware.com
Tue Jun 25 09:26:02 PDT 2013


On Jun 25, 2013, at 6:13 AM, Thomas R Gissel wrote:
> With help from our local Linux kernel experts, we've tracked the inexplicable appearance of Private_Clean in our processes' smaps file down to a kernel bug, https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/fs/proc/task_mmu.c?id=1c2499ae87f828eabddf6483b0dfc11da1100c07, which, according to Git, was first committed in v2.6.36-rc6~63.  When we manually applied the aforementioned patch to our kernel, no memory segments in smaps showed large Private_Clean regions during our test.  Unfortunately, the fix appears to be merely an accounting change: everything previously reported as Private_Clean now correctly shows up as Private_Dirty, so we are still digging to find out why our RSS, specifically Private_Dirty, continues to grow while jemalloc's active statistic reports much lower numbers.
> 
Does the stressTest.c program reproduce the problem after the kernel fix?

Thanks,
Jason
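[Editor's note: for readers following this thread, below is a minimal sketch, not part of the original messages, of one way to compare the Private_Dirty total from /proc/self/smaps against jemalloc's stats.active and stats.allocated via mallctl(). It assumes an unprefixed jemalloc build (so mallctl() is exported directly) and a Linux /proc layout where Private_Dirty lines are reported in kB.]

/* smaps_vs_active.c - sketch: compare smaps Private_Dirty with jemalloc stats.
 * Assumes jemalloc built without a symbol prefix; link with -ljemalloc. */
#include <stdio.h>
#include <inttypes.h>
#include <jemalloc/jemalloc.h>

/* Sum every "Private_Dirty:" line (values are in kB) in /proc/self/smaps. */
static uint64_t private_dirty_kb(void)
{
	FILE *f = fopen("/proc/self/smaps", "r");
	char line[256];
	uint64_t total = 0, kb;

	if (f == NULL)
		return 0;
	while (fgets(line, sizeof(line), f) != NULL) {
		if (sscanf(line, "Private_Dirty: %" SCNu64 " kB", &kb) == 1)
			total += kb;
	}
	fclose(f);
	return total;
}

int main(void)
{
	uint64_t epoch = 1;
	size_t sz = sizeof(epoch);
	size_t active, allocated;

	/* Refresh jemalloc's cached statistics before reading them. */
	mallctl("epoch", &epoch, &sz, &epoch, sz);

	sz = sizeof(active);
	mallctl("stats.active", &active, &sz, NULL, 0);
	sz = sizeof(allocated);
	mallctl("stats.allocated", &allocated, &sz, NULL, 0);

	printf("smaps Private_Dirty:     %" PRIu64 " kB\n", private_dirty_kb());
	printf("jemalloc stats.active:   %zu kB\n", active / 1024);
	printf("jemalloc stats.allocated: %zu kB\n", allocated / 1024);
	return 0;
}

A large, growing gap between the Private_Dirty total and stats.active is the symptom being discussed in this thread; the sketch only measures it, it does not explain it.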