Profiling memory allocations at run time in production

Evgeniy Ivanov i at
Wed Jan 15 01:09:18 PST 2014

What settings were you using, and what was being measured, when you
got the 2% slowdown? In our (latency-related) test I got the
following:

normal jemalloc:    99% <= 87 usec (Avg: 65 usec)
inactive profiling: 99% <= 88 usec (Avg: 66 usec)
prof-libgcc:        99% <= 125 usec (Avg: 70 usec)
prof-libunwind:     99% <= 146 usec (Avg: 76 usec)

So on average the slowdown is 6% for libgcc and 15% for libunwind.
But at the tail of the distribution (99% <= X) the slowdown is 42% or
65% depending on the library, which is a huge difference. For 64 KB
allocations the numbers are dramatic: a 154% performance loss at the
99th percentile.

Am I missing something in the configuration?
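For reference, this is roughly how profiling was built and enabled in
my test; the configure flags and MALLOC_CONF options are standard
jemalloc ones, and the application name is a placeholder:

```shell
# Build jemalloc with heap profiling compiled in.
# The backtrace implementation is chosen at configure time:
./configure --enable-prof                        # libgcc backtraces (default)
./configure --enable-prof --enable-prof-libunwind  # libunwind backtraces

# Run with profiling compiled in but inactive until toggled via mallctl():
MALLOC_CONF="prof:true,prof_active:false" ./my_app
```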

On Tue, Jan 14, 2014 at 10:22 PM, Jason Evans <jasone at> wrote:
> On Dec 22, 2013, at 11:41 PM, Evgeniy Ivanov <i at> wrote:
>> I need to profile my application running in production. Is it
>> safe, performance-wise, to build jemalloc with "--enable-prof",
>> start the application with profiling disabled, and enable it for a
>> short time (probably via a mallctl() call) when I need it? I'm
>> mostly interested in stacks, i.e. opt.prof_accum. Or are there
>> better alternatives on Linux? I've tried perf, but it just counts
>> stacks and doesn't care about the amount of memory allocated. There
>> is also stap, but I haven't tried it yet.
> Yes, you can use jemalloc's heap profiling as you describe, with essentially no performance impact while heap profiling is inactive.  You may even be able to leave heap profiling active all the time with little performance impact, depending on how heavily your application uses malloc.  At Facebook we leave heap profiling active all the time for a wide variety of server applications; there are only a couple of exceptions I'm aware of for which the performance impact is unacceptable (heavy malloc use, ~2% slowdown when heap profiling is active).
> Jason


More information about the jemalloc-discuss mailing list