malloc()/free() behavior differs between Debian and Redhat

I have a Linux app (written in C) that allocates a large amount of memory (~60 MB) in small chunks through malloc() and then frees it (the app continues to run afterwards). This memory is not returned to the OS but stays allocated to the process.

Now, the interesting thing is that this happens only on Red Hat Linux and its clones (Fedora, CentOS, etc.), while on Debian systems the memory is returned to the OS once all the freeing is done.

Any ideas why the two distributions differ, or which setting might control this behavior?
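
Not part of the original question, but a minimal reproduction of the pattern described above might look like the sketch below. The chunk size and count are assumptions chosen to total roughly 60 MB, and the program pauses after freeing so the resident set size can be inspected from outside (e.g. via /proc/<pid>/status or top).

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NCHUNKS    60000
#define CHUNK_SIZE 1024          /* ~60 MB total in 1 KiB chunks */

int main(void)
{
    char **chunks = malloc(NCHUNKS * sizeof(*chunks));
    if (!chunks)
        return 1;

    for (size_t i = 0; i < NCHUNKS; i++)   /* allocate in small chunks */
        chunks[i] = malloc(CHUNK_SIZE);

    for (size_t i = 0; i < NCHUNKS; i++)   /* ...and free everything */
        free(chunks[i]);
    free(chunks);

    puts("all chunks freed; inspect the process RSS now");
    pause();   /* keep the process alive so its memory usage can be checked */
    return 0;
}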


I'm not certain why the two systems behave differently (probably different malloc implementations in their respective glibc versions). However, you should be able to exert some control over the global policy for your process with a call like:

mallopt(M_TRIM_THRESHOLD, bytes)

(See this Linux Journal article for details.)
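
As a rough illustration (not from the article), lowering the trim threshold early in main() might look like this; the 128 KiB value is an arbitrary example, not a recommendation:

#include <malloc.h>
#include <stdio.h>

int main(void)
{
    /* Ask glibc to return freed memory at the top of the heap to the OS
     * once more than 128 KiB of contiguous free space accumulates there. */
    if (mallopt(M_TRIM_THRESHOLD, 128 * 1024) == 0)
        fprintf(stderr, "mallopt(M_TRIM_THRESHOLD) failed\n");

    /* ... allocate and free as the application normally would ... */
    return 0;
}

mallopt() returns a nonzero value on success and 0 on error, so the check above only reports failure rather than aborting.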

You may also be able to request an immediate release with a call like:

malloc_trim(bytes)

(See malloc.h.) I believe both of these calls can fail, so I don't think you can rely on them working 100% of the time. But my guess is that if you try them out, you'll find that they make a difference.
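
A sketch of the second approach, calling malloc_trim() right after the bulk free; passing 0 asks glibc to release as much of the free heap as possible, and the helper name here is hypothetical:

#include <malloc.h>
#include <stdio.h>

/* Hypothetical helper: try to hand freed heap memory back to the OS.
 * The argument is the slack (in bytes) to leave at the top of the heap;
 * 0 asks glibc to trim as much as it can. */
static void release_freed_memory(void)
{
    /* malloc_trim() returns 1 if memory was released, 0 otherwise. */
    if (malloc_trim(0) == 0)
        fprintf(stderr, "malloc_trim: nothing could be released\n");
}

int main(void)
{
    /* ... the bulk allocations and frees would happen here ... */
    release_freed_memory();
    return 0;
}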


Some memory allocators don't return freed memory to the OS right away. Instead, they let the CPU get on with other work and finalize the cleanup later, once the memory is actually needed elsewhere. If you want to confirm that this is what's happening, run a simple test: allocate and free memory in a loop, for more total memory than the machine has available, as sketched below.
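
A rough version of that test (block size and iteration count are arbitrary assumptions) could look like:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t block = 60UL * 1024 * 1024;   /* ~60 MB per iteration */

    /* 1000 iterations allocate ~60 GB in total, far more than typical RAM,
     * so the loop only completes if freed memory is being reused. */
    for (int i = 0; i < 1000; i++) {
        char *p = malloc(block);
        if (!p) {
            fprintf(stderr, "allocation failed at iteration %d\n", i);
            return 1;
        }
        memset(p, 0, block);   /* touch the pages so they are actually committed */
        free(p);
    }
    puts("completed: the allocator is recycling freed memory");
    return 0;
}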
