On Jun 25, 2013, at 6:13 AM, Thomas R Gissel wrote:

> With help from our local Linux kernel experts we've tracked down the
> inexplicable Private_Clean emergence in our processes' smaps file to a
> kernel bug,
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/fs/proc/task_mmu.c?id=1c2499ae87f828eabddf6483b0dfc11da1100c07,
> which, according to git, was first committed in v2.6.36-rc6~63. When we
> manually applied the aforementioned patch to our kernel, no memory
> segments in smaps showed large Private_Clean regions during our test.
> Unfortunately, the fix appears to be merely an accounting change:
> everything previously reported as Private_Clean now correctly shows up
> as Private_Dirty, so we are still digging to find out why our RSS,
> specifically Private_Dirty, continues to grow while jemalloc's "active"
> statistic reports much lower numbers.
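
To help narrow down where the remaining gap comes from, something along
the lines of the untested sketch below might be useful: it sums the
Private_Dirty fields from /proc/self/smaps and compares the total
against jemalloc's stats.active (refreshed via the "epoch" mallctl). It
assumes a Linux /proc filesystem and a jemalloc built with statistics
enabled; keep in mind that stats.active excludes allocator metadata and
dirty-but-unused pages, so some difference is expected even when nothing
is leaking.

#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Sum the Private_Dirty fields (in kB) reported in /proc/self/smaps. */
static size_t
smaps_private_dirty_kb(void)
{
    FILE *f = fopen("/proc/self/smaps", "r");
    char line[256];
    size_t total = 0;

    if (f == NULL)
        return (0);
    while (fgets(line, sizeof(line), f) != NULL) {
        unsigned long kb;

        if (sscanf(line, "Private_Dirty: %lu kB", &kb) == 1)
            total += kb;
    }
    fclose(f);
    return (total);
}

/* Refresh jemalloc's cached statistics, then read stats.active (bytes). */
static size_t
jemalloc_active_bytes(void)
{
    uint64_t epoch = 1;
    size_t esz = sizeof(epoch);
    size_t active = 0;
    size_t asz = sizeof(active);

    mallctl("epoch", &epoch, &esz, &epoch, sizeof(epoch));
    mallctl("stats.active", &active, &asz, NULL, 0);
    return (active);
}

int
main(void)
{
    /*
     * A real test would take these snapshots periodically while the
     * workload runs; a steadily growing gap points at memory the kernel
     * charges to the process but jemalloc does not consider active.
     */
    printf("smaps Private_Dirty: %zu kB, jemalloc stats.active: %zu kB\n",
        smaps_private_dirty_kb(), jemalloc_active_bytes() / 1024);
    return (0);
}

Built with something like "cc rss_vs_active.c -o rss_vs_active
-ljemalloc" (the file name is just for illustration), this could also be
folded into the stress test itself to log both numbers over time.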
Does the stressTest.c program reproduce the problem after the kernel fix?

Thanks,
Jason