I'm writing my own shader with OpenGL, and I'm stumped as to why it won't compile. Could anyone have a look at it?
What I'm passing in per vertex is two floats, each packed from four bytes, in this format:
Float 1:
Byte 1: Position X
Byte 2: Position Y
Byte 3: Position Z
Byte 4: Texture Coordinate X
Float 2:
Byte 1: Color R
Byte 2: Color G
Byte 3: Color B
Byte 4: Texture Coordinate Y
And this is my shader:
in vec2 Data;
varying vec3 Color;
varying vec2 TextureCoords;
uniform mat4 projection_mat;
uniform mat4 view_mat;
uniform mat4 world_mat;
void main()
{
vec4 dataPosition = UnpackValues(Data.x);
vec4 dataColor = UnpackValues(Data.y);
vec4 position = dataPosition * vec4(1.0, 1.0, 1.0, 0.0);
Color = dataColor.xyz;
TextureCoords = vec2(dataPosition.w, dataColor.w)
gl_Position = projection_mat * view_mat * world_mat * position;
}
vec4 UnpackValues(float value)
{
return vec4(value % 255, (value >> 8) % 255, (value >> 16) % 255, value >> 24);
}
If you need any more information, I'd be happy to comply.
You need to declare UnpackValues before you call it. GLSL is like C and C++; names must have a declaration before they can be used.
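For the declaration-order error specifically, a forward declaration (prototype) above main is enough, just as in C; a minimal sketch:

vec4 UnpackValues(float value); // prototype: the name is now declared

void main()
{
    vec4 dataPosition = UnpackValues(Data.x);
    // ... rest of main as before ...
}

vec4 UnpackValues(float value)
{
    // the definition can stay below main once the prototype exists
    return vec4(value); // placeholder body; the bit-shifting version won't compile anyway
}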
BTW: What you're trying to do will not work. Floats are floats; unless you're working with GLSL 4.00 (and since you're still using old terms like "varying", I'm guessing not), you cannot extract bits out of a float. Indeed, the right-shift operator is only defined for integer types, and the same goes for %; attempting to use either on a float fails with a compile error.
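(For reference, GLSL 4.00 can do this kind of unpacking with built-ins; a sketch, assuming that version is available:)

#version 400
vec4 UnpackValues(float value)
{
    // floatBitsToUint reinterprets the float's bit pattern as a uint;
    // unpackUnorm4x8 splits that 32-bit word into four bytes, each
    // normalized to [0, 1] (not 0-255 as your version expects).
    return unpackUnorm4x8(floatBitsToUint(value));
}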
GLSL is not C or C++ (ironically).
If you want to pack your data, use OpenGL to pack it for you. Send two vec4 attributes that contain normalized unsigned bytes:
glVertexAttribPointer(X, 4, GL_UNSIGNED_BYTE, GL_TRUE, 8, *);
glVertexAttribPointer(Y, 4, GL_UNSIGNED_BYTE, GL_TRUE, 8, * + 4);
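Filled in, that setup might look like the following; the attribute locations and the 0/4 byte offsets are my assumptions, based on your 8-byte-per-vertex layout:

// Hypothetical setup: one VBO bound, 8 bytes per vertex, with
// posAttrib and colorAttrib being the locations of the two vec4 inputs.
glVertexAttribPointer(posAttrib,   4, GL_UNSIGNED_BYTE, GL_TRUE, 8, (void *)0);
glVertexAttribPointer(colorAttrib, 4, GL_UNSIGNED_BYTE, GL_TRUE, 8, (void *)4);
glEnableVertexAttribArray(posAttrib);
glEnableVertexAttribArray(colorAttrib);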
Your vertex shader would take two vec4 values as inputs. Since you are not using glVertexAttribIPointer, OpenGL knows that you're passing values that are to be interpreted as floats.
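Put together, the reworked vertex shader could look something like this sketch (the attribute names here are my invention):

in vec4 DataPos;    // xyz = position, w = texture coordinate x
in vec4 DataColor;  // xyz = color,    w = texture coordinate y

varying vec3 Color;
varying vec2 TextureCoords;

uniform mat4 projection_mat;
uniform mat4 view_mat;
uniform mat4 world_mat;

void main()
{
    Color = DataColor.xyz;
    TextureCoords = vec2(DataPos.w, DataColor.w);
    // Note: GL_TRUE normalizes the bytes to [0, 1], so positions will
    // need rescaling if they aren't meant to live in that range.
    gl_Position = projection_mat * view_mat * world_mat * vec4(DataPos.xyz, 1.0);
}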