High amount of private clean data in smaps

Thomas R Gissel gissel at us.ibm.com
Tue Jun 25 06:13:21 PDT 2013


With help from our local Linux kernel experts we've tracked down the
inexplicable Private_Clean entries in our processes' smaps file to a
kernel bug:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/fs/proc/task_mmu.c?id=1c2499ae87f828eabddf6483b0dfc11da1100c07

According to git, the fix was first committed in v2.6.36-rc6~63.  When we
manually applied the aforementioned patch to our kernel, no memory
segments in smaps showed large Private_Clean regions during our test.
Unfortunately, the fix appears to have been merely an accounting change:
everything previously reported as Private_Clean now correctly shows up as
Private_Dirty, so we are still digging to find out why our RSS,
specifically Private_Dirty, continues to grow while jemalloc's "active"
statistic reports much lower numbers.
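For anyone who wants to make the same RSS-versus-active comparison, here is a
minimal sketch of summing smaps fields per mapping.  The helper name and the
sample data are hypothetical; on a real Linux system the input would be read
from /proc/<pid>/smaps instead.

```python
import re

def sum_smaps_fields(smaps_text, fields=("Private_Clean", "Private_Dirty")):
    """Sum the given kB-denominated fields across every mapping in an
    smaps-format string.  Returns a dict of field name -> total kB."""
    totals = {f: 0 for f in fields}
    for line in smaps_text.splitlines():
        # smaps detail lines look like "Private_Dirty:     512 kB";
        # the address-range header lines do not match this pattern.
        m = re.match(r"(\w+):\s+(\d+) kB", line)
        if m and m.group(1) in totals:
            totals[m.group(1)] += int(m.group(2))
    return totals

# Fabricated two-mapping excerpt for illustration only.
sample = """\
7f0000000000-7f0000400000 rw-p 00000000 00:00 0
Size:               4096 kB
Private_Clean:      1024 kB
Private_Dirty:       512 kB
7f0000400000-7f0000800000 rw-p 00000000 00:00 0
Size:               4096 kB
Private_Clean:         0 kB
Private_Dirty:      2048 kB
"""
print(sum_smaps_fields(sample))
# -> {'Private_Clean': 1024, 'Private_Dirty': 2560}
```

Totals computed this way can then be compared against jemalloc's own
statistics to see how large the accounting gap actually is.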

Thanks,

Tom


From: Jason Evans <jasone at canonware.com>
To: Thomas R Gissel/Rochester/IBM at IBMUS
Cc: jemalloc-discuss at canonware.com
Date: 06/06/2013 01:34 AM
Subject: Re: High amount of private clean data in smaps





On Jun 5, 2013, at 9:17 PM, Thomas R Gissel <gissel at us.ibm.com> wrote:


      I too have been trying to reproduce the existence of Private_Clean
      memory segments in smaps via a simple test case with jemalloc, and
      was unable to on my laptop, a 2-core machine running a 3.8.0-23
      kernel.  I then moved my test to our production box: 96 GB memory,
      24 hardware threads, and a 2.6 kernel (detailed information below).
      Within a few minutes of execution, with a few minor adjustments, I
      was able to duplicate the results of our larger test: smaps showed
      the jemalloc segment with Private_Clean memory usage.  Note that
      I'm using the same jemalloc library whose information Kurtis
      posted earlier (96 arenas, etc.).


Interesting!  I don't see anything unusual about the test program, so I'm
guessing this is kernel-specific.  I'll run it on some 8- and 16-core
machines tomorrow with a couple of kernel versions and see what happens.

Thanks,
Jason


More information about the jemalloc-discuss mailing list