I need to shave off as much memory as possible. I am using standard C++ with the STL. The program does not do much (yet) and it still takes 960KB according to top! The executable size is only 64KB.
The code is 3000 lines long, so I am obviously not going to post it all. I believe the problem is not with my code, but with the system libraries.
A program with a single main() function (my code is compiled in but never called) uses 732KB of RAM!
Simple code:

    #include <unistd.h>  // for sleep()

    int main() {
        sleep(1000); // do nothing
        return 0;
    }
    // Uses 732KB of RAM
My code has no global variables (apart from ones in libraries that are hidden from me).
I am using the standard libraries: libstdc++ (STL) and GNU libc. I am also using a single BSD socket, libev, and the non-standard SGI STL rope class.
Is there some memory-profiler I can run?
Platform: Linux 2.6.18-32, 32-bit processor, 16MB total system RAM, no swap available
Compiler: GCC 4
Standard library: GCC's libstdc++
Compiler options: -Os (no debugging symbols)
I am not making heavy use of templates: containers and iterators, that's all. However, I am making heavy use of the SGI STL rope class.
The test environment is a basic server running Linux with 128MB RAM, Pentium III 667 MHz, CentOS 5.5, no emulation.
UPDATE: I am wondering if the libraries themselves (their code size) are causing the problem. Don't shared libraries have to be loaded into RAM?
Start stripping out functionality until the memory usage goes down. Go to the extreme first: if you can replace main with sleep(1000); and your memory use is still high, look at code and static data -- anything initialized at global scope or declared static inside a class or function, along with template instantiations of different types and debug symbols.
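As a quick, hedged example of seeing how much of that is baked into the binary itself (assuming GNU binutils is available; ./a.out stands in for your executable), the size utility reports the code and static-data sections:

    size ./a.out    # text = code, data = initialized statics, bss = zero-initialized statics

Anything large in data or bss is a fixed cost paid at startup regardless of what the program does.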
UPDATE: Removed incorrect commentary about STL allocators. It may apply to other compiler/STL versions (check the history if you want to see it) but it's not applicable to this question.
Be aware that malloc/operator new will often be stingy about giving freed memory back to the OS, which will cause your program as a whole not to shrink its apparent usage over time; that memory will get reused by future allocations throughout your program, so it's usually not a huge issue aside from keeping your "memory use" numbers at or near their high-water mark indefinitely.
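If sitting at the high-water mark is a real problem on a 16MB box, here is a minimal sketch (assuming glibc; malloc_trim() is a glibc extension and not portable) of explicitly asking the allocator to hand unused heap pages back to the kernel:

    #include <malloc.h>   // glibc extension: malloc_trim
    #include <vector>

    int main() {
        {
            // Many small allocations come from the main heap (brk), not mmap,
            // so freeing them normally just leaves the pages cached in-process.
            std::vector<std::vector<char> > chunks(1024, std::vector<char>(1024));
        }   // chunks destroyed here; the memory is free but still counted against us

        // Ask glibc to release unused pages at the top of the heap back to
        // the kernel. Returns 1 if any memory was released, 0 otherwise.
        malloc_trim(0);
        return 0;
    }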
UPDATE: I am wondering if the libraries themselves (their code size) are causing the problem. Don't shared libraries have to be loaded into RAM?
Bingo. On Mac OS X at least, top includes the size of shared libraries in the physical memory usage. Only one copy of each library is resident in memory, of course.
Check the documentation for top for a workaround, or just chuck it and use malloc_info(). Be careful to find a way to account for code, stack, and global usage, though.
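A minimal sketch of what that looks like on glibc (note: malloc_info() only appeared in relatively recent glibc versions; on older systems such as CentOS 5 the cruder mallinfo() call is the alternative), and it only covers the heap, not code, stack, or globals:

    #include <malloc.h>   // glibc: malloc_info (newer), mallinfo (older)
    #include <cstdio>

    int main() {
        // Dump the allocator's internal statistics as XML to stdout.
        malloc_info(0, stdout);

        // On older glibc without malloc_info(), mallinfo() gives rougher numbers:
        // struct mallinfo mi = mallinfo();
        // std::printf("arena: %d bytes, in use: %d bytes\n", mi.arena, mi.uordblks);
        return 0;
    }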
Get the linker to emit a link map file; you can use that to determine exactly how much statically linked code and static data space your code requires.
Stack, heap space, and shared libraries are additional to that, and are allocated at run-time.
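For example, with GCC and GNU ld (the file names here are just placeholders):

    g++ -Os -Wl,-Map=program.map main.cpp -o program

The resulting program.map lists every object file and library section that was pulled in, with its size, so you can see exactly where the statically linked bytes come from.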
If you have 16MB of RAM, does it really matter? It is likely that there is a relatively large but fixed overhead, and that your overall memory footprint will not grow linearly with the lines of code added.
Since the target is Linux, you can learn a lot about the details of memory usage, particularly the shared library components, by looking at the maps and smaps files in /proc/{pid_number}.
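As a small self-contained sketch (assuming the kernel exposes the Private_Dirty field in smaps, which 2.6.18 does), a program can report the memory that is truly its own, excluding read-only library code shared with other processes:

    #include <fstream>
    #include <iostream>
    #include <string>

    // Sum the Private_Dirty fields of /proc/self/smaps: memory attributable to
    // this process alone, not counting shared library pages.
    int main() {
        std::ifstream smaps("/proc/self/smaps");
        std::string key;
        long total_kb = 0;
        while (smaps >> key) {
            if (key == "Private_Dirty:") {
                long kb = 0;
                smaps >> kb;   // value is in kB; the "kB" unit token is skipped next pass
                total_kb += kb;
            }
        }
        std::cout << "Private_Dirty total: " << total_kb << " kB\n";
        return 0;
    }

Comparing that figure with what top reports will show how much of the 960KB is shared library code rather than memory your program actually costs the system.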