<div dir="ltr"><div><div>I am quite certain I am looking at RES and not VIRT. In the tests, VIRT remains close to jemalloc's 'mapped' statistic, but resident set size is way off 'active' reported by jemalloc.<br>
<br></div>I will check if madvise fails in the tests and get back.<br><br></div>Thanks,<br>Vandana<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Apr 23, 2013 at 11:04 AM, Jason Evans <span dir="ltr"><<a href="mailto:jasone@canonware.com" target="_blank">jasone@canonware.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div><div><div class="h5"><div>On Apr 22, 2013, at 9:18 PM, vandana shah <<a href="mailto:shah.vandana@gmail.com" target="_blank">shah.vandana@gmail.com</a>> wrote:</div>
<blockquote type="cite"><div dir="ltr"><div><div>On Mon, Apr 22, 2013 at 11:49 PM, Jason Evans <span dir="ltr"><<a href="mailto:jasone@canonware.com" target="_blank">jasone@canonware.com</a>></span> wrote:</div></div>
</div><div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>On Apr 21, 2013, at 10:01 PM, vandana shah wrote:<br>
> I have been trying to use jemalloc for my application and observed that the RSS of the process keeps on increasing.
>
> I ran the application with valgrind to confirm that there are no memory leaks.
>
> To investigate further, I collected jemalloc stats after running the test for a few days. Here is the summary for a run with narenas:1, tcache:false, lg_chunk:24:
>
> Arenas: 1
> Pointer size: 8
> Quantum size: 16
> Page size: 4096
> Min active:dirty page ratio per arena: 8:1
> Maximum thread-cached size class: 32768
> Chunk size: 16777216 (2^24)
> Allocated: 24364176040, active: 24578334720, mapped: 66739765248
> Current active ceiling: 24578621440
> chunks:  nchunks  highchunks  curchunks
>             3989        3978       3978
> huge:    nmalloc  ndalloc  allocated
>                3        2  117440512
>
> arenas[0]:
> assigned threads: 17
> dss allocation precedence: disabled
> dirty pages: 5971898:64886 active:dirty, 354265 sweeps, 18261119 madvises, 1180858954 purged
>
> While in this state, the RSS of the process was at 54 GB.
>
> Questions:
> 1) The difference between RSS and jemalloc's active statistic is huge (more than 30 GB). In my test, the difference was much smaller at the beginning (about 4 GB) and grew over time. That seems too high to be accounted for by jemalloc data structures, overhead, etc. What else, beyond active, gets counted in the process RSS?

jemalloc is reporting very low page-level external fragmentation for your app: 1.0 - allocated/active == 1.0 - 24364176040/24578334720 == 0.87%. However, virtual memory fragmentation is quite high: 1.0 - active/mapped == 63.2%.
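
(For reference, a minimal sketch of computing the same two ratios at run time through jemalloc's mallctl() interface; the stat names below are from the jemalloc 3.x series and require a build with stats enabled, as in this one. Error handling is omitted.)

#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

static void
print_fragmentation(void)
{
    uint64_t epoch = 1;
    size_t sz = sizeof(epoch);
    size_t allocated, active, mapped;

    /* Refresh the stats snapshot before reading it. */
    mallctl("epoch", &epoch, &sz, &epoch, sz);
    sz = sizeof(size_t);
    mallctl("stats.allocated", &allocated, &sz, NULL, 0);
    mallctl("stats.active", &active, &sz, NULL, 0);
    mallctl("stats.mapped", &mapped, &sz, NULL, 0);

    /* Page-level external fragmentation: 1 - allocated/active. */
    printf("external fragmentation: %.2f%%\n",
        100.0 * (1.0 - (double)allocated / (double)active));
    /* Virtual memory fragmentation: 1 - active/mapped. */
    printf("virtual memory fragmentation: %.2f%%\n",
        100.0 * (1.0 - (double)active / (double)mapped));
}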

> 2) The allocations are fairly random, sized between 8 bytes and 2 MB. Are there any known issues of fragmentation for particular allocation sizes?

If your application were to commonly allocate slightly more than one chunk, then internal fragmentation would be quite high, but at little actual cost in physical memory. However, you are using 16 MiB chunks, and the stats say that there's only a single huge (112 MiB) allocation.
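
(To make that chunk-rounding effect concrete: a hypothetical probe, assuming jemalloc is the linked allocator and lg_chunk:24, i.e. 16 MiB chunks. A request one byte over a chunk is rounded up to the next chunk multiple, so roughly twice the virtual memory is reserved, though the untouched pages cost little physical memory.)

#include <malloc.h> /* malloc_usable_size() */
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    size_t request = (16u << 20) + 1; /* one byte over a 16 MiB chunk */
    void *p = malloc(request);

    if (p == NULL)
        return (1);
    /* With chunk-multiple rounding, the usable size should be ~32 MiB. */
    printf("requested %zu, usable %zu\n", request, malloc_usable_size(p));
    free(p);
    return (0);
}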

> 3) Is there a way to tune the allocations and reduce the difference?

I can't think of a way this could happen short of a bug in jemalloc. Can you send me the complete statistics output, and provide the following?

- jemalloc version
- operating system
- compile-time jemalloc configuration flags
- run-time jemalloc option flags
- a brief description of what the application does

Hopefully that will narrow down the possible explanations.

Thanks,
Jason

Jemalloc version: 3.2.0
Operating system: Linux 2.6.32-220.7.1.el6.x86_64

Compile-time jemalloc configuration flags:
autogen: 0
experimental: 1
cc-silence: 0
debug: 0
stats: 1
prof: 0
prof-libunwind: 0
prof-libgcc: 0
prof-gcc: 0
tcache: 1
fill: 1
utrace: 0
valgrind: 0
xmalloc: 0
mremap: 0
munmap: 0
dss: 0
lazy_lock: 0
tls: 1

Run-time jemalloc configuration flags:
MALLOC_CONF=narenas:1,tcache:false,lg_chunk:24

Application description:
This is a server, written in C++, that caches and serves data from a sqlite database. The database size can be a multiple of the cache size, and data is paged in and out as necessary to keep the process RSS under control. All data and metadata are dynamically allocated, so the allocator is used quite extensively. In the test, the server starts with a healthy data/RSS ratio (say 0.84). The ratio declines over time: RSS keeps growing, so the server pages out more and more data to keep RSS under control. In this test the ratio came down to 0.42.

Okay, I've taken a close look at this, and I see no direct evidence of a bug in jemalloc. The difference between active and mapped memory is due to page run fragmentation within the chunks, but the total fragmentation-induced overhead attributable to chunk metadata and unused dirty pages appears to be only 200-300 MiB. The only way I can see for the statistics to be self-consistent and yet have such a high RSS is if the madvise() call within pages_purge() is failing. You should be able to eliminate this possibility by looking at strace output.
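
(One way to check that without rebuilding anything is to attach strace, e.g. "strace -f -e trace=madvise -p <pid>", and watch for non-zero returns. Another is a small LD_PRELOAD shim that logs failing madvise() calls; here is a sketch, assuming glibc on Linux. The file name and approach are illustrative, not part of jemalloc. Build with "gcc -shared -fPIC -o madvise_trace.so madvise_trace.c -ldl" and start the server with LD_PRELOAD=./madvise_trace.so.)

/* madvise_trace.c: log every madvise() call that fails. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>

static int (*real_madvise)(void *, size_t, int);

int
madvise(void *addr, size_t length, int advice)
{
    int ret;

    if (real_madvise == NULL)
        real_madvise = (int (*)(void *, size_t, int))dlsym(RTLD_NEXT,
            "madvise");
    ret = real_madvise(addr, length, advice);
    if (ret != 0) {
        /* stderr is unbuffered, so this should not recurse into malloc. */
        fprintf(stderr, "madvise(%p, %zu, %d) failed: errno %d\n",
            addr, length, advice, errno);
    }
    return (ret);
}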

Are you certain that you are looking at RES (resident set size, aka RSS) rather than VIRT (virtual size, aka VSIZE or VSZ)? Assuming that your application doesn't do a bunch of mmap()ing outside jemalloc, I would expect VIRT to be pretty close to jemalloc's 'mapped' statistic, and RES to be pretty close to jemalloc's 'active' statistic.
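
(For a side-by-side comparison, a quick sketch of reading the kernel's counterparts from /proc/self/status on Linux; VmSize corresponds to top's VIRT and VmRSS to top's RES. Log these next to jemalloc's 'active' and 'mapped' stats. Error handling is trimmed.)

#include <stdio.h>
#include <string.h>

static void
print_vm_sizes(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[128];

    if (f == NULL)
        return;
    while (fgets(line, sizeof(line), f) != NULL) {
        /* VmSize ~ VIRT; VmRSS ~ RES. */
        if (strncmp(line, "VmSize:", 7) == 0 ||
            strncmp(line, "VmRSS:", 6) == 0)
            fputs(line, stdout);
    }
    fclose(f);
}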

Thanks,
Jason