On May 7, 2013, at 1:16 PM, Thomas W Savage wrote:

> My team is having trouble determining how to address increasing internal fragmentation (a sizeable difference between jemalloc's "allocated" and "active" statistics) for a particular workload.
>
> We are allocating objects into three small bins (48, 320, and 896 bytes). We start with an insertion phase in which we continually allocate "entries", each made up of four allocations: two 48-byte objects, one 320-byte object, and one 896-byte object. Once we have inserted entries up to a certain threshold, we begin an eviction phase in which some threads continue insertion while another thread frees the 320- and 896-byte objects (never touching the 48-byte objects). By the end of this run, we observe significant internal fragmentation, as demonstrated in the stats below. Is there anything that can be done to mitigate this?
>
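A minimal sketch of the allocation pattern described above, to make the workload concrete. The entry layout follows the description; the threshold, the single-threaded phasing, and all names are illustrative assumptions, not details from the actual application:

    #include <stdlib.h>

    /* One "entry": 2x 48-byte, 1x 320-byte, and 1x 896-byte allocation. */
    struct entry {
        void *a48_1, *a48_2; /* never freed */
        void *a320;          /* freed during eviction */
        void *a896;          /* freed during eviction */
    };

    static void
    entry_alloc(struct entry *e)
    {
        e->a48_1 = malloc(48);
        e->a48_2 = malloc(48);
        e->a320 = malloc(320);
        e->a896 = malloc(896);
    }

    static void
    entry_evict(struct entry *e)
    {
        /* Only the 320- and 896-byte objects are freed; the 48s remain. */
        free(e->a320); e->a320 = NULL;
        free(e->a896); e->a896 = NULL;
    }

    int
    main(void)
    {
        enum { THRESHOLD = 1000000 }; /* illustrative insertion threshold */
        static struct entry entries[THRESHOLD];

        /* Insertion phase. */
        for (size_t i = 0; i < THRESHOLD; i++)
            entry_alloc(&entries[i]);

        /* Eviction phase (serialized here; concurrent in the real app). */
        for (size_t i = 0; i < THRESHOLD; i++)
            entry_evict(&entries[i]);

        return 0;
    }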
<tt><font size="1">Version: 3.3.1-0-g9ef9d9e8c271cdf14f664b871a8f98c827714784<br>
Assertions disabled<br>
Run-time option settings:<br>
opt.abort: false<br>
opt.lg_chunk: 21<br>
opt.dss: "secondary"<br>
opt.narenas: 96<br>
opt.lg_dirty_mult: 1<br>
opt.stats_print: false<br>
opt.junk: false<br>
opt.quarantine: 0<br>
opt.redzone: false<br>
opt.zero: false<br>
CPUs: 24<br>
Arenas: 96<br>
Pointer size: 8<br>
Quantum size: 16<br>
Page size: 4096<br>
Min active:dirty page ratio per arena: 2:1<br>
Chunk size: 2097152 (2ˆ21)<br>
Allocated: 7574200736, active: 8860864512, mapped: 9013559296<br>
Current active ceiling: 8963227648<br>
chunks: nchunks highchunks curchunks<br>
4553 4298 4298<br>
huge: nmalloc ndalloc allocated<br>
16 15 35651584<br>
<br>
Merged arenas stats:<br>
assigned threads: 79<br>
dss allocation precedence: N/A<br>
dirty pages: 2154593:0 active:dirty, 0 sweeps, 0 madvises, 0 purged<br>
allocated nmalloc ndalloc nrequests<br>
small: 7515054496 29540988 3552884 29540988<br>
large: 23494656 1432 0 1432<br>
total: 7538549152 29542420 3552884 29542420<br>
active: 8825212928<br>
mapped: 8973713408<br>
> bins:   bin  size  regs  pgs    allocated    nmalloc   ndalloc  newruns   reruns  curruns
>           0     8   501    1          176         22         0       11        0       11
> [1]
>           2    32   126    1        68448       2187        48       22        0       21
>           3    48    84    1    666243696   13880077         0   165272        0   165272
> [4]
>           5    80    50    1         1760         22         0       11        0       11
>           6    96    84    2         2112         22         0       11        0       11
> [7..12]
>          13   320    63    5   2221154560    8717502   1776394   125156   701794   125156
> [14..18]
>          19   896    45   10   4627583744    6941156   1776442   135776   692084   135774
> [20..27]
> large:    size  pages  nmalloc  ndalloc  nrequests  curruns
> [1]
>           8192      2       22        0         22       22
> [1]
>          16384      4     1408        0       1408     1408
> [13]
>          73728     18        1        0          1        1
> [23]
>         172032     42        1        0          1        1
> [467]
> --- End jemalloc statistics ---

The external fragmentation for the 320- and 896-byte region runs is 12% and 15%, respectively (1 - (nmalloc - ndalloc) / (curruns * regs) for each bin). First off, that doesn't strike me as terrible, depending on the details of what's going on in the application. There are two possible explanations, not mutually exclusive: (1) the application's memory usage is not at its high water mark, and (2) the eviction thread does not evict in a pattern that impacts the allocating threads proportionally to their allocation volumes. Say there are two arenas, and 75% of the evictions are of objects allocated from arena 0, but arenas 0 and 1 are utilized equally by the allocating threads. The result will be substantial arena 0 external fragmentation in the equilibrium state. You can figure out whether (2) is a factor by running with one arena, which will surely impact performance since you have thread caching disabled (see the first sketch below for one way to configure this). If fragmentation remains the same with one arena, then (1) is the entire explanation.

One possible solution that should be allocator-agnostic would be to interleave eviction with normal allocation in all threads, such that threads evict their own previous allocations at a rate proportional to their allocation rates (see the second sketch below). This changes the global eviction policy from a centralized one to a distributed one, though, so it may not be appropriate, depending on what your application does.

Jason
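For the single-arena experiment, a minimal sketch, assuming a stock jemalloc 3.x build with unprefixed symbols; the option can be set from the environment or compiled into the application via the documented malloc_conf global:

    /* Run the workload with a single arena to test explanation (2).
     * Equivalent to launching with: MALLOC_CONF=narenas:1 ./app */
    const char *malloc_conf = "narenas:1";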
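And a sketch of the distributed eviction idea, in which each thread frees its own earlier allocations as it goes, so per-thread eviction volume tracks per-thread allocation volume. The ring size and all names here are illustrative assumptions:

    #include <stdlib.h>

    #define EVICT_RING 4096 /* per-thread backlog of evictable objects */

    struct evictor {
        void *ring[EVICT_RING];
        size_t head, tail; /* head == tail means empty */
    };

    /* Record an evictable allocation (the 320s/896s, not the 48s). */
    static void
    evictor_track(struct evictor *ev, void *ptr)
    {
        ev->ring[ev->head] = ptr;
        ev->head = (ev->head + 1) % EVICT_RING;
    }

    /* Call once per allocation: when the backlog is full, free the
     * oldest of this thread's own objects, so each thread evicts in
     * proportion to its allocation rate. */
    static void
    evictor_tick(struct evictor *ev)
    {
        size_t used = (ev->head + EVICT_RING - ev->tail) % EVICT_RING;
        if (used == EVICT_RING - 1) {
            free(ev->ring[ev->tail]);
            ev->tail = (ev->tail + 1) % EVICT_RING;
        }
    }

Each thread would keep its own struct evictor (e.g., in thread-local storage) and call evictor_tick() before evictor_track() on every evictable allocation.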