Storing data effectively

Maybe I'm asking a stupid question as always, but somehow I can't google how I should store variables so that it's efficient. Our C++ teacher mentioned in passing that the size of the stored data type can affect the speed of storing it (something like searching for the closest sufficiently large contiguous block of memory), and I would like to find out more about it. Could you please give me some directions?


In general, for numeric variables (such as loop counters) you should use "int" and let the compiler choose the most efficient size for the task. If you have a particular need for a specific size (e.g. a 16-bit field in a packet header being received off a network, or similar), then use a typedef that gives you that specific size on your specific platform; otherwise, just use int.
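
For example, a minimal sketch using the fixed-width typedefs from <cstdint> (the struct and field names are just illustrative):

    #include <cstdint>

    // Plain int for ordinary counters: the compiler picks a natural,
    // efficient size for the target platform.
    int count_nonzero(const int* values, int n) {
        int count = 0;
        for (int i = 0; i < n; ++i) {
            if (values[i] != 0) ++count;
        }
        return count;
    }

    // Fixed-width types only where the exact size matters, e.g. a wire format.
    struct PacketHeader {
        std::uint16_t length;     // exactly 16 bits on every platform that provides it
        std::uint16_t checksum;
    };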

That said, it sounds like your teacher may have been talking about dynamic memory allocators (i.e. the code behind "malloc" and "free"). If you request to allocate 64 bytes, say, the allocator is responsible for providing you with a block of at least that size, and keeping track of it so it can be returned to available storage when it is freed. There's a lot of info about these, for example on Wikipedia here: http://en.wikipedia.org/wiki/Dynamic_memory_allocation
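
A minimal illustration of the request the allocator has to satisfy (a sketch, not how you would normally manage memory in C++):

    #include <cstdlib>

    int main() {
        // Ask the allocator for at least 64 bytes; it must find a suitable free
        // block and remember it so free() can return it to available storage.
        void* block = std::malloc(64);
        if (block == nullptr) return 1;   // allocation can fail

        // ... use the block ...

        std::free(block);   // hand the block back to the allocator
        return 0;
    }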


Do you mean persistent storage or allocation in memory?

In memory, the data structure you define, the way you allocate memory for it (on the heap or on the stack), the compiler you use, and the C++ standard together determine how memory is allocated; see the sketch below.
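
A minimal sketch of the stack/heap distinction (the names are illustrative):

    #include <vector>

    void example() {
        int on_stack[16];              // automatic storage: lives on the stack and is
                                       // released when the function returns

        int* on_heap = new int[16];    // dynamic storage: the allocator finds a block...
        delete[] on_heap;              // ...and you must return it yourself

        std::vector<int> managed(16);  // idiomatic C++: heap storage managed for you
    }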

Persistent storage is an altogether different story.


How to store variables is seldom up to you. But the type, and thereby the size, of a variable is often up to you.

He might be referring to things like "if you need to store a small integer, like a street address, you should probably not use a long, but rather a short". These choices tend to require quite a bit of domain knowledge, and it's easy to optimize yourself into a corner (think of the Y2K problem, for instance).


One way is to choose appropriate variable types. There is no need to store a value from 1 to 10 in an int64. The idea is to match each variable with the type that best fits its range of possible values. This cuts down on memory used and, in larger data structures, on processing time as well.
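
For instance, a quick sizeof comparison (the struct and field names are illustrative, and exact sizes can vary with padding rules on your platform):

    #include <cstdint>
    #include <iostream>

    struct Wide {
        std::int64_t rating;        // a value from 1 to 10 stored in 8 bytes
        std::int64_t day_of_week;   // a value from 1 to 7 stored in 8 bytes
    };

    struct Narrow {
        std::uint8_t rating;        // 1 to 10 fits in a single byte
        std::uint8_t day_of_week;   // so does 1 to 7
    };

    int main() {
        std::cout << "Wide:   " << sizeof(Wide)   << " bytes\n";  // typically 16
        std::cout << "Narrow: " << sizeof(Narrow) << " bytes\n";  // typically 2
        // An array of a million Narrow records is far smaller, which also
        // means more of it fits in cache at once.
    }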


Speed of data writing and retrieval will be influenced by the performance of your local storage mechanism. On modern CPUs there are registers, two levels of cache (L1 and L2), DRAM and sometimes disk (via swap). If your access patterns and data size make effective use of the L1 cache (i.e. small and locally coherent), then once the data is in L1 it only needs to be loaded into the registers to be accessed by the CPU. If the required data is in the L2 cache, it must first be loaded into L1 before being loaded into a register for processing. The same goes for DRAM to L2 to L1 to registers. Registers are faster than L1, L1 is faster than L2, and DRAM is just slow.

Herb Sutter gave a talk that addresses these issues several years ago at NWCPP:

http://video.google.com/videoplay?docid=-4714369049736584770#

From a programming perspective, if your data fits inside a cache line and needs to be repeatedly accessed or written to, you will get higher performance because there are fewer cache misses (each of which forces a fetch from a slower, more distant level of the hierarchy). This is true for all levels of "cache", be they registers, L1, L2, DRAM, disk or a far-off server.
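
A common way to see this effect is traversal order over a two-dimensional array; a sketch (the exact timings depend entirely on your machine):

    #include <cstddef>
    #include <vector>

    // A rows x cols matrix stored contiguously in row-major order.
    // Walking row by row touches consecutive elements, so each cache line
    // fetched from memory is fully used before the next one is needed.
    long long sum_row_major(const std::vector<int>& m, std::size_t rows, std::size_t cols) {
        long long sum = 0;
        for (std::size_t i = 0; i < rows; ++i)
            for (std::size_t j = 0; j < cols; ++j)
                sum += m[i * cols + j];
        return sum;
    }

    // Walking column by column strides `cols` elements between accesses, so for
    // large matrices almost every access misses the cache and is much slower,
    // even though both functions do exactly the same amount of arithmetic.
    long long sum_col_major(const std::vector<int>& m, std::size_t rows, std::size_t cols) {
        long long sum = 0;
        for (std::size_t j = 0; j < cols; ++j)
            for (std::size_t i = 0; i < rows; ++i)
                sum += m[i * cols + j];
        return sum;
    }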


What your teacher probably meant is that when you allocate an object on the heap (with new), the whole process tends to get slower as the object gets bigger. I have written a little program to measure this effect.

Here are the results I get (compiled in release mode with VS2008 and optimizations turned off):

Cost of allocating 1 chars: 0.104 microseconds
Cost of allocating 4 chars: 0.1 microseconds
Cost of allocating 16 chars: 0.104 microseconds
Cost of allocating 64 chars: 0.128 microseconds
Cost of allocating 256 chars: 0.192 microseconds
Cost of allocating 1024 chars: 0.416 microseconds
Cost of allocating 4096 chars: 1.28 microseconds
Cost of allocating 16384 chars: 2.56016 microseconds
Cost of allocating 65536 chars: 3.0722 microseconds
Cost of allocating 262144 chars: 3.58423 microseconds

So your teacher was right: allocating huge objects can be significantly slower than allocating objects of normal size (but still very fast in absolute terms; we are talking about a few microseconds in the worst case).
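
The original measurement program isn't shown here, but a minimal sketch of such a benchmark (using std::chrono instead of a platform-specific timer, and timing an allocate/free pair) might look like this:

    #include <chrono>
    #include <cstddef>
    #include <iostream>

    int main() {
        const int iterations = 100000;
        for (std::size_t size = 1; size <= 262144; size *= 4) {
            auto start = std::chrono::steady_clock::now();
            for (int i = 0; i < iterations; ++i) {
                char* p = new char[size];       // allocate a block of `size` bytes
                p[0] = static_cast<char>(i);    // touch it so the allocation isn't optimized away
                delete[] p;                     // and release it again
            }
            auto stop = std::chrono::steady_clock::now();
            auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
            std::cout << "Cost of allocating " << size << " chars: "
                      << (ns / 1000.0) / iterations << " microseconds\n";
        }
    }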
