Excessive VM usage with jemalloc
Jason Evans
jasone at canonware.com
Sat May 4 09:44:24 PDT 2013
On Apr 28, 2013, at 9:03 PM, Abhishek Singh <abhishek at abhishek-singh.com> wrote:
> We are trying to replace glibc malloc with jemalloc because we do a lot of concurrent allocation, and in all our benchmarks jemalloc is consistently better than glibc malloc and many other allocators.
>
> Our setups typically start at 96 GB of RAM and go up from there. We have observed that with jemalloc the virtual memory usage of our process rises to around 75 GB. The resident memory stays low, so that in itself is not a problem, but when we try to fork a process from there, the fork fails because the kernel assumes there is not enough memory to duplicate the VM space. Perhaps a vfork would be better, but we can't use that for now.
>
> So we have modified jemalloc so that all huge allocations unmap their memory when they are freed; non-huge allocations and deallocations are unchanged. This seems to help us. I have attached the patch, which is against jemalloc-3.3.1. Please review and suggest if there is a better way to handle this.
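The failure mode described above can be illustrated outside jemalloc. Below is a minimal sketch (the 64 GiB size is arbitrary, not taken from the report): it creates a large, mostly untouched anonymous mapping so virtual size grows while resident size stays small, then calls fork(); depending on the kernel's overcommit settings (vm.overcommit_memory / vm.overcommit_ratio), either the mmap() or the fork() can fail with ENOMEM.

    /* Illustration only: a large, mostly untouched anonymous mapping inflates
     * virtual size while resident size stays small; whether mmap() or fork()
     * then fails with ENOMEM depends on the kernel's overcommit settings. */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        size_t len = 64UL * 1024 * 1024 * 1024;  /* arbitrary large reservation */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");  /* strict overcommit may refuse the mapping itself */
            return 1;
        }

        pid_t pid = fork();
        if (pid == -1) {
            /* The kernel declined to duplicate the (mostly unused) VM space. */
            fprintf(stderr, "fork: %s\n", strerror(errno));
        } else if (pid == 0) {
            _exit(0);
        } else {
            waitpid(pid, NULL, 0);
            printf("fork succeeded\n");
        }
        munmap(p, len);
        return 0;
    }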
Does --enable-munmap give you similar results? Ideally, munmap() would always be enabled, but Linux has some unfortunate VM map fragmentation issues (it doesn't consistently reuse holes left by munmap()). I don't think there's much benefit to using munmap() only selectively, because avoiding the VM map fragmentation issue is an all-or-nothing proposition.
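For reference, --enable-munmap is a configure-time option; one way to confirm whether a given libjemalloc build has it is to query it at run time. The sketch below assumes the 3.x mallctl namespace, where configure-time options are exposed under "config.*", and an unprefixed API.

    /* Sketch: report whether the linked jemalloc was built with
     * --enable-munmap. Assumes jemalloc 3.x and an unprefixed API
     * (i.e. mallctl rather than je_mallctl). */
    #include <stdbool.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        bool enabled;
        size_t sz = sizeof(enabled);

        if (mallctl("config.munmap", &enabled, &sz, NULL, 0) != 0) {
            fprintf(stderr, "mallctl(\"config.munmap\") failed\n");
            return 1;
        }
        printf("config.munmap: %s\n", enabled ? "true" : "false");
        return 0;
    }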
At times I've considered adding pre-fork() code that unmaps everything possible, and purges unused dirty pages. The problem with this is that it's expensive, and it doesn't pay off if the process does exec() right after fork() -- the common case.
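An application-side approximation of that idea (covering only the purge half, which trims unused dirty pages but does not shrink the virtual footprint) could hook fork() with pthread_atfork(). The sketch below assumes the 3.x "arenas.purge" mallctl, where a write with no arena index purges all arenas, and it carries the same cost caveat as above.

    /* Sketch of an application-side pre-fork purge, not jemalloc-internal
     * code. Assumes jemalloc 3.x, where writing to "arenas.purge" with no
     * arena index purges unused dirty pages in every arena. */
    #include <pthread.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    static void purge_before_fork(void) {
        if (mallctl("arenas.purge", NULL, NULL, NULL, 0) != 0)
            fprintf(stderr, "arenas.purge failed\n");
    }

    /* Call once at startup; the prepare handler then runs before every fork(). */
    void install_prefork_purge(void) {
        pthread_atfork(purge_before_fork, NULL, NULL);
    }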
Jason