<p dir="ltr">Are transparent huge pages enabled? Does disabling them help? (If so, you may be able to thus workaround, and perhaps jemalloc could be improved.)</p>
<p dir="ltr">Bradley C Kuszmaul - via snartphone</p>
<div class="gmail_quote">On Apr 29, 2014 3:47 AM, "Antony Dovgal" <<a href="mailto:antony.dovgal@gmail.com">antony.dovgal@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Here is the output of malloc_stats_print() ~20h after the process start.<br>
The dirty/active pages ratio is 0.02% and stats.allocated has been stable at 107.42G for about 8 hours already,<br>
but both maxrss from getrusage() and `top` report slow but constant growth, and the process is currently at 117G.<br>
<br>
Can somebody help me locate the problem?<br>
Jemalloc is the latest 3.6.1, and the server is SLES 11 SP2 with the 3.0.13-0.28.1-default kernel.<br>
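(For reference, a minimal call that produces a report like the one below; with NULL arguments jemalloc writes the full report to stderr:)<br>
<pre>
#include <jemalloc/jemalloc.h>

int main(void)
{
    /* NULL write_cb -> jemalloc writes the report to stderr;
     * NULL opts -> full report (general info, merged and per-arena stats). */
    malloc_stats_print(NULL, NULL, NULL);
    return 0;
}
</pre>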
<br>
<br>
Merged arenas stats:<br>
assigned threads: 1534<br>
dss allocation precedence: N/A<br>
dirty pages: 30687799:8168 active:dirty, 33408 sweeps, 601205 madvises, 2438751 purged<br>
allocated nmalloc ndalloc nrequests<br>
small: 93168540424 3004799703 1470963737 9264491673<br>
large: 22176190464 14660311 9267172 63169190<br>
total: 115344730888 3019460014 1480230909 9327660863<br>
active: 125697224704<br>
mapped: 126835752960<br>
bins: bin size regs pgs allocated nmalloc ndalloc nrequests nfills nflushes newruns reruns curruns<br>
0 8 501 1 187080 2126908 2103523 1116416777 1555126 581832 259 446 250<br>
1 16 252 1 1009393200 73690994 10603919 409774890 1195005 634185 260659 2339496 260298<br>
2 32 126 1 25238593216 1023814477 235108439 2436114995 12326052 2442614 6785629 53333719 6749049<br>
3 48 84 1 3038375616 904172945 840873453 1949316101 11110139 10331717 835619 279254946 764188<br>
4 64 63 1 690966016 53752004 42955660 639269895 1759730 1333503 247975 15673477 171852<br>
5 80 50 1 40474885680 650494671 144558600 672186803 13318752 3230015 11177482 32259383 11175827<br>
6 96 84 2 1062968448 39626341 28553753 465736871 1112411 1067446 137727 16327920 132170<br>
7 112 72 2 2240 31813 31793 60475 21899 22328 153 2 2<br>
8 128 63 2 4549588736 62704970 27161308 442268899 1941056 965362 588306 17299897 564516<br>
9 160 51 2 878880 884471 878978 10989296 547747 73180 7112 9614 646<br>
10 192 63 3 3332299200 68192386 50836661 350773752 1828422 1430862 298788 24850226 280966<br>
11 224 72 4 82880 201645 201275 1355238 120818 125326 1985 916 126<br>
12 256 63 4 4436903168 65932357 48600704 343566754 1922969 1395158 312713 23206402 298496<br>
13 320 63 5 820800 300581 298016 1021102 194469 198863 770 3320 529<br>
14 384 63 6 3776567808 38617426 28782614 186385036 1731152 1105440 230725 7560797 218083<br>
15 448 63 7 323904 264187 263464 2136968 163723 167838 6654 1232 136<br>
16 512 63 8 5546707456 19354481 8521068 221862388 1584431 831055 175294 2714070 172054<br>
17 640 51 8 1280 43868 43866 5529027 19499 20648 1299 61 1<br>
18 768 47 9 768 26068 26067 11346 5544 6621 722 52 1<br>
19 896 45 10 0 15578 15578 24313 15494 15498 15494 0 0<br>
20 1024 63 16 1235968 200289 199082 2304970 102401 106699 1035 2014 600<br>
21 1280 51 16 5251840 20599 16496 319262 14088 14157 130 2 130<br>
22 1536 42 16 1536 85 84 53 39 43 4 0 1<br>
<a href="tel:23%20%201792%20%20%2038%20%2017" value="+12317923817" target="_blank">23 1792 38 17</a> 0 93 93 67 50 54 50 0 0<br>
24 2048 65 33 2504704 330001 328778 7066010 172616 139016 1039 2291 604<br>
<a href="tel:25%20%202560%20%20%2052%20%2033" value="+12525605233" target="_blank">25 2560 52 33</a> 0 173 173 152 113 117 113 0 0<br>
26 3072 43 33 0 199 199 170 144 148 144 0 0<br>
27 3584 39 35 0 93 93 63 46 50 46 0 0<br>
<br>
<br>
<br>
On 04/28/2014 03:08 PM, Antony Dovgal wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello all,<br>
<br>
I'm currently working on a daemon that processes a lot of data and has to keep only the most recent portion of it.<br>
Unfortunately, memory is allocated and freed in small blocks and in a totally random (from the allocator's point of view) manner.<br>
I use "stats.allocated" to measure how much memory is currently in use, delete the oldest data when the memory limit is reached, and purge unused dirty pages with "arena.N.purge" from time to time.<br>
<br>
The problem is that keeping stats.allocated at a certain level doesn't keep the process from growing until it's killed by the OOM killer.<br>
I suspect that this is caused by memory fragmentation, though I have no idea how to prove it (or at least all my ideas involve complex stats and are quite inefficient).<br>
<br>
So my main questions are:<br>
Is there any way to see how much memory is currently being (under)used because of fragmentation in jemalloc?<br>
Is there a way to prevent it or force some garbage collection?<br>
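(A rough indicator that is already available is the gap between "stats.allocated", "stats.active" and "stats.mapped"; a sketch, again assuming the 3.x mallctl names:)<br>
<pre>
#include <jemalloc/jemalloc.h>
#include <stdint.h>
#include <stdio.h>

/* Print allocated vs. active vs. mapped bytes; the allocated/active gap
 * grows as partially filled runs pile up (one rough view of fragmentation). */
static void print_mem_overhead(void)
{
    uint64_t epoch = 1;
    size_t allocated, active, mapped, sz;

    sz = sizeof(epoch);
    mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch)); /* refresh stats */

    sz = sizeof(size_t);
    mallctl("stats.allocated", &allocated, &sz, NULL, 0);
    mallctl("stats.active", &active, &sz, NULL, 0);
    mallctl("stats.mapped", &mapped, &sz, NULL, 0);

    printf("allocated=%zu active=%zu mapped=%zu active-overhead=%.2f%%\n",
           allocated, active, mapped,
           100.0 * (double)(active - allocated) / (double)active);
}

int main(void)
{
    print_mem_overhead();
    return 0;
}
</pre>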
<br>
Thanks in advance.<br>
<br>
</blockquote>
<br>
<br>
-- <br>
Wbr,<br>
Antony Dovgal<br>
---<br>
<a href="http://pinba.org" target="_blank">http://pinba.org</a> - realtime profiling for PHP<br>
<br>
_______________________________________________<br>
jemalloc-discuss mailing list<br>
<a href="mailto:jemalloc-discuss@canonware.com" target="_blank">jemalloc-discuss@canonware.com</a><br>
<a href="http://www.canonware.com/mailman/listinfo/jemalloc-discuss" target="_blank">http://www.canonware.com/<u></u>mailman/listinfo/jemalloc-<u></u>discuss</a><br>
</blockquote></div>