I'm currently developing my own augmented reality app. I'm trying to write my own AR engine, since all the frameworks I've seen so far only work with GPS data. The app is going to be used indoors; I get my position data from another system.
What I have so far is:
float[] vector = { 2, 2, 1, 0 };
float transformed[] = new float[4];
float[] R = new float[16];
float[] I = new float[16];
float[] r = new float[16];
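//S scales to half the screen size, B biases to the screen centre (800x480)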
float[] S = { 400f, 1, 1, 1, 1, -240f, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 };
float[] B = { 1, 1, 1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 400f, 240f, 1, 1 };
float[] temp1 = new float[16];
float[] temp2 = new float[16];
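//hardcoded perspective projection (frustum) matrix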
float[] frustumM = {1.5f,0,0,0,0,-1.5f,0,0,0,0,1.16f,1,0,0,-3.24f,0};
//Rotation matrix to get the transformation from device to world coordinates
SensorManager.getRotationMatrix(R, I, accelerometerValues, geomagneticMatrix);
SensorManager.remapCoordinateSystem(R, SensorManager.AXIS_X, SensorManager.AXIS_Z, r);
//invert to get the transformation from world to camera coordinates
Matrix.invertM(R, 0, r, 0);
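//combined transform: temp1 = B * S * frustumM * R (project, then scale, then bias to screen)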
Matrix.multiplyMM(temp1, 0, frustumM, 0, R, 0);
Matrix.multiplyMM(temp2, 0, S, 0, temp1, 0);
Matrix.multiplyMM(temp1, 0, B, 0, temp2, 0);
Matrix.multiplyMV(transformed, 0, temp1, 0, vector, 0);
I know it's ugly code, but I'm just trying to get the object "vector" painted correctly, with my own position being (0,0,0) for now. My screen size (800x480) is hardcoded in the matrices S and B.
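For reference, accelerometerValues and geomagneticMatrix are filled in my SensorEventListener, roughly like this (simplified sketch; the real one also triggers the recalculation above):

private final float[] accelerometerValues = new float[3];
private final float[] geomagneticMatrix = new float[3];

@Override
public void onSensorChanged(SensorEvent event) {
    //copy the raw readings so getRotationMatrix always sees the latest values
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        System.arraycopy(event.values, 0, accelerometerValues, 0, 3);
    } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        System.arraycopy(event.values, 0, geomagneticMatrix, 0, 3);
    }
}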
The result should be stored in "transformed" and should be of the form transformed = {x, y, z, w}. For the math I've used this link: http://www.inf.fu-berlin.de/lehre/WS06/19605_Computergrafik/doku/rossbach_siewert/camera.html
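My understanding (an assumption based on the linked page, not something I'm sure about) is that I then still have to do the perspective divide by w to get actual pixel coordinates:

//perspective divide: homogeneous {x, y, z, w} -> screen coordinates
//guard against w == 0 for points in the camera plane
if (Math.abs(transformed[3]) > 1e-6f) {
    float screenX = transformed[0] / transformed[3];
    float screenY = transformed[1] / transformed[3];
    //draw the object at (screenX, screenY) if it lies inside 800x480
}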
Sometimes my graphic gets painted, but it jumps around and is not at the correct position. I've logged the orientation values from SensorManager.getOrientation and they seem OK and stable.
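The logging looks roughly like this (using the remapped rotation matrix r; the angles come back in radians):

float[] orientation = new float[3];
SensorManager.getOrientation(r, orientation);
Log.d("AR", "azimuth=" + orientation[0]
        + " pitch=" + orientation[1]
        + " roll=" + orientation[2]);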
So I think I'm doing something wrong with the math, but I couldn't find better sources on the math for transforming my data. Could anyone help me, please? Thanks in advance.
martin