This question made me curious. Questions like this always get answers like "It's generally safe but you shouldn't assume that the OS will do this for you", which sounds like good advice to me, but I'm wondering: Are there any actively-developed (released) operating systems that don't do this?
Is this something that was fixed back in the age of dinosaurs (the 80's)?
The short answer is "none". Even a DOS program years ago would release memory on program termination (simply by virtue of the fact that nothing was managing the memory once the program stopped). I'm sure someone might cite kernel-mode code that doesn't necessarily free its memory on app exit, or some obscure embedded OS... but you can assume that app exit returns all the memory your user-mode code acquired. (Windows 3.x might have had this problem, depending on which allocator was used...)
The reason you should still free your memory is that, in large-scale software engineering, you should strive to develop components that are flexible in their use, because you never know how someone else will change the use of your code long after you've left the team.
Think of it like this. Let's say you design some class that is designed to be a singleton (only instantiated once during the app lifetime). As such, you decide not to bother with memory cleanup when your component destructs or gets finalized. That's a perfectly fine decision for that moment. Years later, after you've left for greener pastures, someone else may come along and decide that they need to use your class in multiple places such that many instances will come and go during the app lifetime. Your memory leak will become their problem.
On my team, we've often talked about making the user initiated "close" of the application just be exit() without doing any cleanup. If we ever do this, I would still enforce that the team develop classes and components that properly cleanup after themselves.
In CP/M, it wasn't a matter of freeing memory so much, since you had a static area of RAM for your program, and every program ran in the same space. So, when Program A quit, and Program B ran, B was simply loaded over on top of A.
Now, there were mechanisms to reserve memory away from the OS, but this wasn't typically heap memory in the classic sense we consider today; it was special reserved areas designed for various tasks.
For example, DOS had an exit routine called "Terminate and Stay Resident" (TSR). It "quit" the program but did not release the program's memory afterwards. Typically these programs hooked interrupt vectors (such as the keyboard interrupt) to trigger their routines. Borland SideKick was a very popular TSR back in the day, offering things like a calculator and a contact list.
Finally, since these weren't protected memory systems, your programs could abuse the system in all sorts of ways to do what you want, but that's a different discussion.
No recent unix-like operating system fails to free all process memory when a process exits, where recent probably means "since 1970 or so". I'm fairly sure that very old PC operating systems such as DOS and CP/M had this problem, and some older versions of Windows. I don't know enough about recent Windows to be sure, but I would be very surprised if any of Windows XP, Vista, or Windows 7 would have a problem freeing process memory.
As a rule of thumb, I would suggest that any operating system that doesn't use virtual memory to give processes separate address spaces is likely vulnerable to leaking memory when processes fail in major ways. Once an OS has implemented per-process virtual address spaces, it already has to keep track of all physical memory allocated to a process anyway, so freeing it reliably is straightforward.
All that said, it's often a good idea to write your programs to clean up after themselves anyway. It tends to lead to better designed subcomponents, and it also makes it easier to apply tools that look for memory leaks and the like.