unordered_map throws bad_alloc in VS10 but not in VS9, is this a bug?

While writing a post about project euler's 14th problem I ran into a difference in behaviour between VC9 and VC10.

The following code runs fine in VC9, but in VC10 std::unordered_map throws a bad_alloc exception. The strange thing is that if I recover from the exception, future allocations succeed (the size of the container continues to grow). Also, if I use boost::unordered_map, it works fine in both compilers.

Regarding the actual memory usage: I'm running on a machine with 4GB RAM (1.7 in use). The VC9 version gets up to ~810MB of memory before completing the task, and the VC10 one crashes at ~658MB.

Is this a bug in VC10? I'm running on the same machine, so what else could cause memory to consistently run out in one version and not the other when the amount of work done is identical?

<edit>

Some more information: The first time the exception takes place is when calculating 7,718,688 with a stack depth of 1 (no recursion just main->length). After that it seems to happen for each number that is added to the cache. The cache had 16,777,217 elements in it before the exception happened (according to cache.size()). The interesting thing is that even when insert fails the cache size grows by one so it appears that it doesn't supply the strong exception guarantee (in violation of §23.2.1.11).
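One way to probe the exception-guarantee observation above in isolation is a hypothetical test harness (not from the question): an allocator with a global "budget" that, once spent, makes every allocation throw bad_alloc. That lets you force an insert failure on demand and check whether a failed insert really changes the map's size. The names here (`g_alloc_budget`, `failing_allocator`) are made up for illustration, and this sketch uses the C++11 minimal-allocator interface; VC10's containers predate that and would need the full allocator boilerplate.

```cpp
#include <cstddef>
#include <functional>
#include <new>
#include <unordered_map>
#include <utility>

// Global allocation budget: -1 means unlimited, 0 means every further
// allocation throws bad_alloc.
static int g_alloc_budget = -1;

template <class T>
struct failing_allocator {
    typedef T value_type;
    failing_allocator() {}
    template <class U> failing_allocator(const failing_allocator<U>&) {}

    T* allocate(std::size_t n)
    {
        if (g_alloc_budget == 0)
            throw std::bad_alloc();
        if (g_alloc_budget > 0)
            --g_alloc_budget;
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template <class T, class U>
bool operator==(const failing_allocator<T>&, const failing_allocator<U>&)
{ return true; }
template <class T, class U>
bool operator!=(const failing_allocator<T>&, const failing_allocator<U>&)
{ return false; }

typedef std::unordered_map<long long, int,
                           std::hash<long long>, std::equal_to<long long>,
                           failing_allocator<std::pair<const long long, int> > >
    probed_cache_type;

// Returns true if a forced insert failure left the map's size unchanged,
// i.e. the container honoured the guarantee the question cites.
bool insert_fails_cleanly()
{
    probed_cache_type cache;
    for (long long i = 0; i < 100; ++i)
        cache.insert(std::make_pair(i, 0));

    std::size_t before = cache.size();
    g_alloc_budget = 0;                       // next allocation will throw
    bool threw = false;
    try {
        cache.insert(std::make_pair(1000LL, 1));
    } catch (const std::bad_alloc&) {
        threw = true;
    }
    g_alloc_budget = -1;
    return threw && cache.size() == before;
}
```

On a conforming implementation `insert_fails_cleanly()` should return true; if it returns false on a given standard library, you have reproduced the guarantee violation described above without needing to exhaust 658MB of real memory first.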

</edit>

Code follows:

#include <iostream>
#include <unordered_map>

typedef std::unordered_map<_int64, int> cache_type;

_int64 collatz(_int64 i)
{
    return (i&1)? i*3+1 : i/2;
}

int length(_int64 n, cache_type& cache)
{
    if (n == 1)
        return 1;

    cache_type::iterator found = cache.find(n);
    if (found != cache.end())
        return found->second;
    int len = length(collatz(n), cache) + 1; 
    cache.insert(std::make_pair(n, len)); // this sometimes throws
    return len;
}

int main(int argc, char** argv)
{
    const int limit = 10000000;
    cache_type cache;
    std::pair<int, int> max = std::make_pair(0, 0);
    for (int i = 2; i <= limit; ++i) {
        int len = length(i, cache);
        if (len > max.second)
            max = std::make_pair(i, len);
    }

    std::cout << "Number with longest orbit is " << max.first 
        << " with a length of " << max.second 
        << " cache size is " << cache.size() << std::endl;
}

<edit>

Also, can anyone reproduce this behaviour? At one point it disappeared (and re-appeared), so there may be something special about my configuration.

</edit>


It might be incidental, but changing the value of _SECURE_SCL causes the behaviour you describe.

i.e. compiling with:

cl /EHa /MD /D_SECURE_SCL=1 /Ox /c t1.cpp
link /LIBPATH:"c:/Program Files/Microsoft Visual Studio 10.0/VC/lib" /LIBPATH:"C:/Program Files/Microsoft SDKs/Windows/v7.0A/Lib" t1.obj

crashes, but the same commands with _SECURE_SCL=0 run to completion on my XP 32-bit machine. The MSDN page for _SECURE_SCL says it's enabled for debug builds but not release, which might be important if you're building under the IDE.


Inserting a single element could result in a large memory allocation if the map's hash table needs to be resized. The map seems to be about 0.5GB at the end of the run. (See my comment above.)

There is presumably some heuristic being used to decide how much to expand the hash table when it needs to grow, and this could conceivably be to double it each time. This would therefore use ~1.5GB for old + new data while the hash table is being copied.

It's therefore possible your program is hitting a limit on process memory size. (See comment again.) If so, it's possible that VC10 takes slightly more memory overall than VC9, and that slightly different amounts of memory get allocated on different runs or builds of the program, so that VC10 hits the limit sometimes while VC9 doesn't ever hit it.
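The rehashing behaviour described above is easy to observe directly. This sketch (the function name `count_rehashes` is made up for illustration) inserts keys and reports every time `bucket_count()` jumps; each jump is a reallocation of the whole bucket array, so the old and new arrays briefly coexist, producing the transient memory peak suggested above.

```cpp
#include <cstddef>
#include <cstdio>
#include <unordered_map>

// Insert n keys and report each time the bucket array is reallocated.
// Returns the number of rehashes observed.
int count_rehashes(std::size_t n)
{
    std::unordered_map<long long, int> m;
    std::size_t buckets = m.bucket_count();
    int rehashes = 0;
    for (std::size_t i = 0; i < n; ++i) {
        m[static_cast<long long>(i)] = 0;
        if (m.bucket_count() != buckets) {
            ++rehashes;
            std::printf("after %lu inserts: %lu -> %lu buckets\n",
                        (unsigned long)(i + 1), (unsigned long)buckets,
                        (unsigned long)m.bucket_count());
            buckets = m.bucket_count();
        }
    }
    return rehashes;
}
```

The exact growth factor is implementation-defined, but watching the printed bucket counts shows whether your library roughly doubles the table each time, which is what makes the transient old-plus-new footprint so large near the end of the run.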


Does _int64 have alignment requirements that the map may not be honoring in allocation?

Try using long long int instead and see if the behavior changes.


1 - Check the event log to see if there are any events about a process going over its allowed quota.

2 - If you are on a 32-bit OS, try booting with the /3GB switch to give user space 3GB.

3 - Look to see if you have different allocators available.

4 - Diff unordered_map and its inlined files between 9.0 and 10.0, on the off-chance that an artificial size limiter was added ("security features" :-). It would most probably be in a macro with different values for x86 and x64 builds.

5 - Try putting a light wrapper around the allocator and just print the size of each allocation. That will also tell you whether it's really the allocator that's throwing, or something before it.

6 - If it is the allocator throwing, look at the actual WinNT API calls it makes (and diff with 9.0 again).

7 - Try pre-allocating a huge block (say 1GB).
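Suggestion 7 can be applied to the question's cache without touching the allocator at all: ask the map for enough buckets up front so the table never reallocates mid-run. The helper name `make_preallocated_cache` is made up for illustration; the question's cache ended up with ~16.7M entries, so a real run would want on the order of 2^25 buckets, while 2^20 is used here just to keep the sketch cheap.

```cpp
#include <unordered_map>

typedef std::unordered_map<long long, int> cache_type;

// Build a cache whose bucket array is allocated once, up front, so the
// big mid-run reallocation (the likely source of the bad_alloc) never
// happens. Scale the bucket count to the expected final element count.
cache_type make_preallocated_cache()
{
    cache_type cache;
    cache.rehash(1u << 20); // one allocation now, no rehashing later
    return cache;
}
```

If the pre-allocated version completes where the default one throws, that strongly suggests the failure is the transient old-plus-new bucket array peak during a rehash rather than the steady-state footprint.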


You're blowing the stack in the deeply recursive call to length().
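If deep recursion is the concern, `length()` can be rewritten iteratively: walk the orbit forward, recording every value not yet in the cache, then assign lengths while unwinding the recorded path. This is a sketch of that transformation (not the question's original code; it also substitutes portable `long long` for the Microsoft-specific `_int64`):

```cpp
#include <unordered_map>
#include <utility>
#include <vector>

typedef std::unordered_map<long long, int> cache_type;

long long collatz(long long i)
{
    return (i & 1) ? i * 3 + 1 : i / 2;
}

// Iterative equivalent of the recursive length(): stack depth stays
// constant no matter how long the orbit is.
int length_iterative(long long n, cache_type& cache)
{
    std::vector<long long> path;
    while (n != 1) {
        cache_type::iterator found = cache.find(n);
        if (found != cache.end())
            break;                 // rest of the orbit is already cached
        path.push_back(n);
        n = collatz(n);
    }
    // n is now either 1 (orbit length 1) or a cached value.
    int len = (n == 1) ? 1 : cache[n];
    // Unwind: each recorded value is one step longer than its successor.
    for (std::vector<long long>::reverse_iterator it = path.rbegin();
         it != path.rend(); ++it)
        cache.insert(std::make_pair(*it, ++len));
    return len;
}
```

The heap-allocated `path` vector replaces the call stack, so a 10,000-step orbit costs ~80KB of heap instead of thousands of stack frames; whether that explains the original bad_alloc is a separate question, since the exception reported was bad_alloc rather than a stack overflow.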
