memory pools: will they improve cache usage for structs larger than the cache line size?

As I understand it, memory pools should improve cache performance for objects commonly accessed together, if the objects are smaller than the cache line size - because then adjacent objects will likely be fetched into the cache at the same time.

But what about objects larger than the cache line size? Is there any benefit to pooling such data into the same region of memory?

(Assuming that allocation/deallocation times are insignificant, it's access I'm worried about...)

Thanks!


One important reason for using pools is that they make for a much simpler allocation scheme than a general-purpose allocator. Since all objects have the same size, there's no fragmentation, and you just need to maintain a free list. For a new allocation, you try to pop off the top of the free list, or if the list is empty you increment the high watermark, done. (You can implement the free list in O(1) space inside the pool memory itself.)
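To make that concrete, here is a minimal sketch of such a pool (illustrative names; not from the original post). The free list is threaded through the unused slots themselves, so the bookkeeping takes O(1) extra space, and allocation is either a pop from the free list or a bump of the high watermark:

#include <cstddef>

template <typename T, std::size_t Capacity>
class FixedPool {
    union Slot {
        Slot* next;                                   // link while the slot sits on the free list
        alignas(T) unsigned char storage[sizeof(T)];  // raw storage while the slot is in use
    };

    Slot        slots_[Capacity];      // one contiguous region for all objects
    Slot*       free_list_ = nullptr;  // slots that have been returned
    std::size_t high_water_ = 0;       // number of slots ever handed out

public:
    void* allocate() {
        if (free_list_ != nullptr) {   // reuse a returned slot: pop the free list
            Slot* s = free_list_;
            free_list_ = s->next;
            return s->storage;
        }
        if (high_water_ < Capacity)    // otherwise bump the high watermark
            return slots_[high_water_++].storage;
        return nullptr;                // pool exhausted
    }

    void deallocate(void* p) {
        Slot* s = static_cast<Slot*>(p);  // storage is the union's first member, same address
        s->next = free_list_;             // push the slot back onto the free list
        free_list_ = s;
    }
};

The caller would construct objects in the returned memory with placement new and call the destructor explicitly before handing the slot back. Since every live object comes from one contiguous array, objects allocated close together in time also tend to end up close together in memory.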

However, the use of pools is highly situational, and whether there's any benefit depends very much on your actual code path and allocation requirements. The modern standard allocator is already very good with many short-lived fixed-size allocations, so you really need to profile and check.


Memory pooling makes sense if your app uses a huge amount of memory and starts to swap. Then, if the objects lie adjacent to each other, they will be paged in and out together.


My previous project was an embedded app with a built-in web server on the ARM SAM9x platform. It had only a 64K heap and no console, display, or filesystem, so there was no way to printf() an error to stderr or log it to a file. It had to run 24/7 and must not stop with an "out of memory" error; once started, it should never stop. Out of memory is not a recoverable error there, it is a complete failure of the system.

So I decided not to use new. I used object arrays instead: ring buffers, fixed-size pools, etc. - and it just works. Java (and C#, etc.) leads us astray: these modern languages tell us that memory is a big ocean anyone can dip from. That's true if you have plenty of it, but the cost is high, just as you brought up in this post.

Try it! Use new (and malloc(), of course) as little as possible. A nice side effect: you don't have to use delete (or free()), so there are no memory-leak problems.
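For illustration, a statically allocated ring buffer along those lines might look like this (a sketch only; the element type, capacity, and names are made up):

#include <cstddef>

// A fixed-capacity ring buffer whose storage is part of the object itself,
// so it can live in static memory for the whole lifetime of the program.
template <typename T, std::size_t N>
class RingBuffer {
    T           buf_[N];      // fixed storage: never new'd, never freed
    std::size_t head_ = 0;    // index of the next slot to write
    std::size_t tail_ = 0;    // index of the next slot to read
    std::size_t count_ = 0;   // number of elements currently stored

public:
    bool push(const T& v) {
        if (count_ == N) return false;   // full: the caller decides what to drop
        buf_[head_] = v;
        head_ = (head_ + 1) % N;
        ++count_;
        return true;
    }

    bool pop(T& out) {
        if (count_ == 0) return false;   // empty
        out = buf_[tail_];
        tail_ = (tail_ + 1) % N;
        --count_;
        return true;
    }
};

// Example: a global queue of pending messages, sized at compile time.
// struct Message { /* ... */ };
// static RingBuffer<Message, 16> g_messages;

Because the capacity is fixed at compile time, running out of space shows up as a push() that returns false, which the caller can handle, rather than as an unrecoverable out-of-memory failure at runtime.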
