I am trying to optimize a function using SSE2, and I'm wondering whether I can prepare the data for my assembly code better than this. My source data is a bunch of unsigned chars from pSrcData. I copy it to an array of floats, as my calculation needs to happen in float.
unsigned char *pSrcData = GetSourceDataPointer();
__declspec(align(16)) float vVectX[4];
vVectX[0] = (float)pSrcData[0];
vVectX[1] = (float)pSrcData[2];
vVectX[2] = (float)pSrcData[4];
vVectX[3] = (float)pSrcData[6];
__asm
{
movaps xmm0, [vVectX]
[...] // do some floating point calculations on float vectors using addps, mulps, etc
}
Is there a quicker way for me to cast every other byte of pSrcData to a float and store it into vVectX?
Thanks!
(1) AND with a mask to zero out the odd bytes (PAND)
(2) Unpack from 16 bits to 32 bits (PUNPCKLWD with a zero vector)
(3) Convert 32-bit ints to floats (CVTDQ2PS)
Three instructions.
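The three steps above can be sketched with SSE2 intrinsics; this is a minimal illustration, and the helper name even_bytes_to_ps is mine, not from the answer:

```c
#include <emmintrin.h> // SSE2 intrinsics
#include <stdint.h>

// Convert the even-indexed bytes pSrc[0], pSrc[2], pSrc[4], pSrc[6]
// to four floats. Assumes at least 8 readable bytes at pSrc.
static inline __m128 even_bytes_to_ps(const uint8_t *pSrc)
{
    __m128i v = _mm_loadl_epi64((const __m128i *)pSrc); // load 8 bytes into low half
    v = _mm_and_si128(v, _mm_set1_epi16(0x00FF));       // PAND: zero the odd bytes
    v = _mm_unpacklo_epi16(v, _mm_setzero_si128());     // PUNPCKLWD: widen 16 -> 32 bits
    return _mm_cvtepi32_ps(v);                          // CVTDQ2PS: int -> float
}
```

The result can be stored straight into the aligned vVectX array from the question with _mm_store_ps, replacing the four scalar casts.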
Super old thread, I realise, but I was searching for code to do this myself. This is my solution, which I think is simpler:
#include <immintrin.h>
#include <stdint.h>
#ifdef __AVX2__ // _mm256_cvtepu8_epi32 requires AVX2, not just AVX
// Modified from http://stackoverflow.com/questions/16031149/speedup-a-short-to-float-cast
// Convert unsigned 8-bit integers to floats. length must be a multiple of 8,
// and dest must be 32-byte aligned (or use _mm256_storeu_ps below).
int avxu8tof32(const uint8_t *src, float *dest, int length) {
int i;
for (i=0; i<length; i+= 8) {
// Load eight 8-bit ints into the low half of a 128-bit register
__m128i v = _mm_loadl_epi64 ((__m128i const*)(src+i));
// Convert to 32-bit integers
__m256i v32 = _mm256_cvtepu8_epi32(v);
// Convert to float
__m256 vf = _mm256_cvtepi32_ps (v32);
// Store
_mm256_store_ps(dest + i,vf);
}
return(0);
}
#endif
However, benchmarking shows it to be no faster than simply looping over the array in C with compiler optimisation enabled. Maybe the approach would be more useful as the initial stage of a chain of AVX computations.