
Does allocating memory and then releasing constitute a side effect in a C++ program?

Inspired by this question about whether the compiler can optimize away a call to a function without side effects. Suppose I have the following code:

delete[] new char[10];

It does nothing useful. But does it have a side effect? Is heap allocation immediately followed by a deallocation considered a side effect?


It's up to the implementation. Allocating and freeing memory isn't "observable behavior" unless the implementation decides that it's observable behavior.

In practice, your implementation probably links against a C++ runtime library of some sort, and when your TU is compiled, the compiler is forced to recognize that calls into that library may have observable effects. As far as I know that's not mandated by the standard; it's just how things normally work. If an optimizer can somehow work out that certain calls, or combinations of calls, don't in fact affect observable behavior, then it can remove them, so I believe a special case that spots your example code and removes it would conform.
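
For illustration, here's a minimal sketch (assuming a typical optimizing compiler; note that since C++14 the standard also explicitly permits omitting allocation calls that come from new-expressions):

char f() {
    char *p = new char[10]; // allocation
    p[0] = 'x';
    char c = p[0];
    delete[] p;             // matching deallocation
    return c;               // an optimizer may reduce the whole function to "return 'x';"
}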

Also, there are user-defined global new[] and delete[] to consider [I couldn't remember how they work, but I've been reminded]. Since the code might call definitions of those operators in another user-defined TU that's later linked with this one, the calls can't be optimized away at compile time. They could be removed at link time if it turns out that the operators aren't user-defined (although then the point about the runtime library applies), or are user-defined but don't have side effects (once the pair of them is inlined - this seems pretty implausible in a reasonable implementation, actually[*]).

I'm pretty sure that you aren't allowed to rely on the exception from new[] to "prove" whether or not you've run out of memory. In other words, just because new char[10] doesn't throw this time, doesn't mean it won't throw after you free the memory and try again. And just because it threw last time and you haven't freed anything since, doesn't mean it'll throw this time. So I don't see any reason on those grounds why the two calls can't be eliminated - there's no situation where the standard guarantees that new char[10] will throw, so there's no need for the implementation to find out whether it would or not. For all you know, some other process on the system freed 10 bytes just before the call to new[], and allocated it just after the call to delete[].
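
To make that concrete, here's a hypothetical "probe" written with the nothrow form; as argued above, its result proves nothing about any later allocation:

#include <new>

// Hypothetical probe: try to show that 10 bytes are available.
// Another thread or process can take the memory between the
// delete[] and any later allocation, so this guarantees nothing.
bool ten_bytes_available()
{
    char *p = new (std::nothrow) char[10];
    if (p == nullptr)
        return false;
    delete[] p;
    return true; // no guarantee the next new char[10] will succeed
}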

[*]

Or maybe not. Suppose new[] doesn't check for space (perhaps relying on guard pages) but just bumps a pointer, and delete[] normally does nothing (relying on process exit to free memory), except that in the special case where the block freed is the last block allocated it decrements the pointer. Then your code could be equivalent to:

// new[]
global_last_allocation = global_next_allocation;
global_next_allocation += 10 + sizeof(size_t);
char *tmp = global_last_allocation;
*((size_t *)tmp) = 10; // code to handle alignment requirements is omitted
tmp += sizeof(size_t);

// delete[]
tmp -= sizeof(size_t);
if (tmp == global_last_allocation) {
    global_next_allocation -= sizeof(size_t) + *((size_t*)tmp); // undo the matching increment above
}

Almost all of that could be removed (assuming nothing is volatile), leaving just global_last_allocation = global_next_allocation;. You could get rid of that too by storing the prior value of last in the block header along with the size, and restoring that prior value when the last allocation is freed. That's a pretty extreme memory allocator implementation, though: you'd need a single-threaded program and a speed-demon programmer who's confident the program never churns through more memory than was made available to begin with.
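
A sketch of that variant, following the same conventions as the snippet above (hypothetical globals declared elsewhere, alignment ignored):

// new[] - also record the previous "last allocation" in the block header
char *prev_last = global_last_allocation;
global_last_allocation = global_next_allocation;
global_next_allocation += 10 + sizeof(size_t) + sizeof(char *);
char *tmp = global_last_allocation;
*((size_t *)tmp) = 10;
*((char **)(tmp + sizeof(size_t))) = prev_last;
tmp += sizeof(size_t) + sizeof(char *);

// delete[] - restore both pointers, so no trace of the allocation remains
tmp -= sizeof(size_t) + sizeof(char *);
if (tmp == global_last_allocation) {
    global_next_allocation -= sizeof(size_t) + sizeof(char *) + *((size_t *)tmp);
    global_last_allocation = *((char **)(tmp + sizeof(size_t)));
}

With nothing volatile, both globals end up with their original values, and the whole new/delete pair is dead code.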


No. It should neither be removed by the compiler nor itself be considered a side effect. Consider the code below:

struct A {
  static int counter;
  A() { counter++; }   // observable side effect: bumps the counter
};

int A::counter = 0;    // definition needed so the program links

int main()
{
  A obj[2];            // counter = 2
  delete[] new A[3];   // counter = 2 + 3 = 5
}

Now, if the compiler removed that new/delete pair as having no side effects, the counting logic would go wrong. So even if you aren't doing anything with the objects, the compiler must assume that something useful may be happening (in the constructor). That's the reason why

A(); // construct and immediately destroy a temporary object

is not optimized away.


new[] and delete[] could ultimately result in system calls. Additionally, new[] might throw. With this in mind, I don't see how the new-delete sequence can be legitimately considered free from side effects and optimized away.

(Here, I assume no overloading of new[] and delete[] is involved.)


The compiler cannot see the implementation of new[] and delete[], so it must assume they have side effects.

If you had defined new[] and delete[] in the same translation unit, above the call, the compiler might be able to inline them and optimize the pair away entirely.
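
For example, a minimal sketch with the replacement operators visible in the same TU (a hypothetical malloc/free-based implementation):

#include <cstdlib>
#include <new>

// Replacement global operators defined in this TU, so the compiler can see their bodies.
void *operator new[](std::size_t n)
{
    void *p = std::malloc(n);
    if (p == nullptr)
        throw std::bad_alloc();
    return p;
}

void operator delete[](void *p) noexcept
{
    std::free(p);
}

int main()
{
    delete[] new char[10]; // with both bodies visible, an optimizer may fold this pair to nothing
}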


new and delete will usually result in calls to the operating system's heap manager, and this can very well have side effects. If your program has only a single thread, the code you show should not have side effects, but my observations on Windows (mostly on 32-bit platforms) show that at least large allocations and subsequent deallocations often lead to 'heap contention', even if all of the memory has been released. See also this related post on MSDN.

More complex problems may occur if multiple threads are running. Although your code releases the memory, a different thread may have allocated (or freed) memory in the meantime, and your allocation might lead to further heap fragmentation. This is all rather theoretical, but it may sometimes arise.

If your call to new fails then, depending on the compiler and settings you use, a std::bad_alloc exception will probably be thrown, and that of course has side effects.
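
A minimal sketch of that failure path (assuming the default, throwing operator new[]):

#include <iostream>
#include <new>

int main()
{
    try {
        delete[] new char[10];         // allocate and immediately release
    } catch (const std::bad_alloc &e) {
        // A failed new[] reports itself by throwing, which is observable behavior.
        std::cerr << "allocation failed: " << e.what() << '\n';
    }
}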
