RES memory footprint on Ubuntu 14.04
Sheppard Parker
Sheppard.Parker at maxpoint.com
Tue Jun 3 13:02:01 PDT 2014
It turns out transparent_hugepage is set to "always" on 14.04, whereas it was set to "madvise" on 12.04. Should it be set to "never" everywhere rather than "madvise"?
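For reference, here is a minimal C sketch of how I'm checking the active mode on each box; it just reads the standard sysfs knob (the bracketed entry is the mode currently in effect, e.g. "always [madvise] never"):

/* Print the system-wide transparent huge page mode. */
#include <stdio.h>

int main(void) {
    char buf[128];
    FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/enabled", "r");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    if (fgets(buf, sizeof(buf), f) != NULL)
        printf("THP mode: %s", buf);   /* active mode shown in brackets */
    fclose(f);
    return 0;
}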
Testing now... will update here when results are clear.
Thanks,
Sheppard
From: Jason Evans [mailto:jasone at canonware.com]
Sent: Tuesday, June 03, 2014 2:36 PM
To: Sheppard Parker
Cc: jemalloc-discuss at canonware.com
Subject: Re: RES memory footprint on Ubuntu 14.04
On Jun 3, 2014, at 11:41 AM, Sheppard Parker <Sheppard.Parker at maxpoint.com> wrote:
We have been running a particular jemalloc-aware application on Ubuntu Server 12.04 LTS for about a year now with no trouble (thx btw... we were having major fragmentation issues prior to trying jemalloc). Nominal RES memory footprint is between 23GB and 27GB depending on how much data is loaded at any given time of the day or day of the week.
We recently purchased some new hardware that came with Ubuntu Server 14.04 LTS and now our same app shows almost 2X the RES memory footprint when loaded with the same data. I don't think it is growing, so it may not be a big deal, but I am still concerned. Why the major difference? Exact same executable installed, no code changes, even the same hardware (Dell R620s). Only difference is the move from Ubuntu Server 12.04 (3.8.0-36-generic #52~precise1-Ubuntu SMP) to Ubuntu Server 14.04 (one machine is 3.13.0-24-generic #47-Ubuntu SMP, another is 3.13.0-27-generic #50-Ubuntu SMP).
Any ideas? Is there something enabled/disabled on Ubuntu 14.04 that I need to modify to get things working like they do on the older 12.04? FWIW, we have been using jemalloc v3.5.0.
I tried rebuilding our app (again, no code changes) with jemalloc 3.5.1 and 3.6.0. No change. It seems like jemalloc is unhappy about something on Ubuntu 14.04 (or vice versa), but what? Has anybody else encountered anything similar?
My only guess is that the newer system is using transparent huge pages, whereas the older one is not. I recently came across this related blog post:
http://dev.nuodb.com/techblog/linux-transparent-huge-pages-jemalloc-and-nuodb
In short, madvise() isn't actually purging unused dirty (4 KiB) pages if the underlying memory has been promoted to huge (2 MiB) pages. In the short term, the solution seems to be disabling transparent huge pages. In the long run I hope the Linux kernel improves its algorithms for such usage patterns, and I'm also contemplating layout strategies that coexist better with huge pages.
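If you want to experiment without flipping the system-wide setting, a per-mapping opt-out is also possible. Below is a rough sketch (plain C, not jemalloc's actual code, and untested here) of how MADV_NOHUGEPAGE can keep a region on 4 KiB pages so that a MADV_DONTNEED purge reclaims memory as expected:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4 * 1024 * 1024;            /* 4 MiB anonymous region */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Opt this mapping out of transparent huge pages so the kernel
     * will not promote it to 2 MiB pages. */
    if (madvise(p, len, MADV_NOHUGEPAGE) != 0)
        perror("madvise(MADV_NOHUGEPAGE)");

    memset(p, 1, len);                        /* touch: pages become dirty */

    /* Purge a single 4 KiB page back to the kernel, the way jemalloc
     * purges unused dirty pages on Linux. With the region promoted to
     * huge pages this may not reduce RES; with MADV_NOHUGEPAGE it should. */
    if (madvise(p, 4096, MADV_DONTNEED) != 0)
        perror("madvise(MADV_DONTNEED)");

    munmap(p, len);
    return 0;
}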
Jason