This is part of cachegrind's output. This part of the code has been executed 1,224 times. elmg1 is a 16 x 20 array of unsigned long. My machine's L1 cache is 32KB, with 64B cache lines, 8-way set associative.
                                             Ir       I1mr   ILmr  Dr      D1mr  DLmr  Dw      D1mw  DLmw
    for (i = 0; i < 20; i++)                 78,336   2,448  2     50,184  0     0     1,224   0     0
    {
        telm01 = elmg1[i];                   146,880  0      0     73,440  0     0     24,480  0     0
        telm31 = (telm01 << 3) ^ val1;       97,920   0      0     48,960  0     0     24,480  0     0
        telm21 = (telm01 << 2) ^ (val1 >> 1);146,880  1,224  1     48,960  0     0     24,480  0     0
        telm11 = (telm01 << 1) ^ (val1 >> 2);146,880  0      0     48,960  0     0     24,480  0     0
    }
A. The reason I have put it here is that on the 3rd line inside the for loop I see a number of I1 misses (and one L2 miss as well). It is somewhat confusing, and I cannot guess the reason why.
B. I am trying to optimize a portion of code for time; the above is just a small snippet. I think memory accesses are costing me a lot in my program. For example, elmg1 above is a 16 x 20 array of unsigned longs. Whenever I use it in the code there are always some misses, and variables like this appear all over my program. Any suggestions?
C. I need to allocate (and sometimes initialize) these unsigned longs. Can you suggest which I should prefer: calloc, or array declaration followed by explicit initialization? And will there be any difference in the way the cache handles them?
Thanks.
Have you tried to unroll the loop?
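For instance, a 2-way unroll might look like the sketch below (hypothetical: it assumes the telm* values are actually consumed inside each iteration, which the snippet doesn't show, and it relies on the trip count of 20 being even):

    for (i = 0; i < 20; i += 2) {
        unsigned long t0 = elmg1[i];
        unsigned long t1 = elmg1[i + 1];

        /* iteration i */
        telm31 = (t0 << 3) ^ val1;
        telm21 = (t0 << 2) ^ (val1 >> 1);
        telm11 = (t0 << 1) ^ (val1 >> 2);
        /* ... use telm11/telm21/telm31 here ... */

        /* iteration i + 1 */
        telm31 = (t1 << 3) ^ val1;
        telm21 = (t1 << 2) ^ (val1 >> 1);
        telm11 = (t1 << 1) ^ (val1 >> 2);
        /* ... use telm11/telm21/telm31 here ... */
    }

Unrolling trades a little extra code size (which can itself cost I1 misses) for fewer branches, so measure rather than assume it helps.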
- I wouldn't worry about L1 misses right now. Also, one L2 miss out of 1,224 runs is fine; the CPU has to load the values into the cache at some point.
- What percentage of the program's total L2 misses does this code account for?
- Use calloc(). If the array size is always the same and you use constants for the size, the compiler can optimize the zeroing of the array. Also, the only thing that affects cache line usage is alignment, not how the array was initialized (see the sketch below).
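To illustrate the two options for that 16 x 20 array (just a sketch; ROWS, COLS and the function are made up for the example):

    #include <stdlib.h>

    #define ROWS 16
    #define COLS 20

    void example(void)
    {
        /* Option 1: heap allocation, zeroed by calloc */
        unsigned long (*g1)[COLS] = calloc(ROWS, sizeof *g1);
        if (g1 == NULL)
            return;

        /* Option 2: local array zeroed by its initializer; with constant
           sizes the compiler can turn this into an efficient memset */
        unsigned long g2[ROWS][COLS] = {0};

        /* ... use g1 and g2 ... */
        (void)g2;

        free(g1);
    }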
Edit: the numbers were hard to read that way and I read them wrong the first time.
Let's make sure I am reading the numbers right for line 5:
Ir 146,880
I1mr 1,224
ILmr 1
Dr 48,960
D1mr 0
DLmr 0
Dw 24,480
D1mw 0
DLmw 0
The L1 cache is split into two 32KByte caches, one for code (I1) and one for data (D1). IL and DL refer to the L2 (or L3) cache, which is shared by both instructions and data.
The large number of I1mr events means instruction misses, not data misses; the loop code is being ejected from the I1 instruction cache.
The I1 misses at lines 1 and 5 total 3,672, which is 3 times 1,224, so each time the loop is entered you get 3 I1 cache misses. With 64-byte cache lines, that means your loop code is roughly 128-192 bytes long and spans 3 cache lines. The I1 misses at line 5, then, are because that is where the loop code crosses into the last cache line.
I would recommend using KCachegrind for viewing the results from cachegrind.
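For example (the program name here is just a placeholder):

    valgrind --tool=cachegrind ./your_program    # writes cachegrind.out.<pid>
    kcachegrind cachegrind.out.<pid>             # browse the annotated results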
Edit: More about cache lines.
That loop doesn't look like it is being called 1,224 times all by itself, so there must be other code running in between that pushes this loop out of the I1 cache.
Your 32KByte I1 cache is divided into 512 cache lines (64 bytes each). The "8-way set associative" part means that each memory address maps to only 8 of those 512 cache lines. If the whole program you are profiling were one continuous 32KByte block of code, it would all fit into the I1 cache and nothing would be ejected. That is most likely not the case, so there will be more than 8 64-byte blocks of code contending for the same 8 cache lines. Say your whole program has 1MByte of code (including libraries); then each group of 8 cache lines will have about 32 (1MByte / 32KByte) pieces of code contending for it.
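As a rough sketch of that mapping (assuming the usual power-of-two indexing: 32 KB / 64 B lines / 8 ways = 64 sets, with the set picked from the address bits just above the line offset):

    #include <stdio.h>
    #include <stdint.h>

    enum { LINE_SIZE = 64, NUM_SETS = 64 };   /* 32 KB, 8-way, 64 B lines */

    /* Which of the 64 sets (each holding 8 ways) an address falls into. */
    static unsigned cache_set(uintptr_t addr)
    {
        return (unsigned)((addr / LINE_SIZE) % NUM_SETS);
    }

    int main(void)
    {
        uintptr_t a = 0x400000;                   /* hypothetical code address */
        printf("set %u\n", cache_set(a));
        printf("set %u\n", cache_set(a + 4096));  /* 4 KB apart: same set */
        return 0;
    }

So any two pieces of code whose addresses differ by a multiple of 4 KB (64 sets x 64 bytes) compete for the same 8 ways; once more than 8 such blocks are hot at the same time, something gets evicted.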
Read this lwn.net article for all the gory details about CPU caches
The compiler can't always detect which functions of the program will be hotspots (called many, many times) and which will be cold spots (e.g. error-handling code, which almost never runs). GCC has hot/cold function attributes that let you mark functions accordingly; this allows the compiler to group the hot functions together in one block of memory for better cache usage (i.e. cold code will not push hot code out of the caches).
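A minimal sketch of those attributes (the function names are made up; the attributes themselves are standard GCC extensions):

    /* Called constantly: optimize aggressively and group with other hot code. */
    __attribute__((hot))  void update_elements(unsigned long *elm, unsigned long val);

    /* Almost never runs: GCC moves it into a separate "unlikely" text section
       so it does not share cache lines with the hot path. */
    __attribute__((cold)) void report_error(const char *msg);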
Anyway, those I1 misses are really not worth the time to worry about.