Is it possible to disable texture colors and use only white as the color? The texture would still be sampled, so I can't use glDisable(GL_TEXTURE_2D), because I still want to render the alpha channel.
All I can think of right now is to make a new texture where all the color data is white, keeping the alpha as it is.
I need to do this without shaders, so is this even possible?
Edit: To clarify, I want to use both versions of the texture: all-white and normal colors.
Edit: APPARENTLY THIS IS NOT POSSIBLE
I'm still not entirely sure I understand correctly what you want, but I'll give it a shot. What I had in mind is that when you call glTexImage2D, you specify the format of the texels you're loading and you specify the "internal format" of the texture you're creating from those texels. In a typical case, you specify (roughly) the same format for both -- e.g., you'll typically use GL_RGBA for both.
There is a reason for specifying both, though: the format parameter (near the end of the parameter list) specifies the format of the texels you're loading from, while the internalformat parameter (near the beginning of the list) specifies the format of the actual texture you create from those texels.
If you want to load some texels but only actually use their alpha channel, you can specify GL_ALPHA as the internal format, and that's all you'll get -- when you map that texture onto a surface, it will affect only the alpha, not the color. This not only avoids making an extra copy of your texels, but (at least usually) also reduces the memory consumed by the texture itself, since it stores only an alpha channel, not the three color channels.
Edit: Okay, thinking about it a bit more, there's a way to do what (I think) you want using only the one texture. First, set the blend function to use just the alpha channel; then, when you want the texture's color copied, call glTexEnvf with GL_REPLACE, and when you only want its alpha channel, call it with GL_BLEND. For example, let's create a green texture and draw it (twice) over a blue quad, once with GL_REPLACE and once with GL_BLEND. For simplicity, we'll use a solid green texture, with alpha increasing linearly from top (0) to bottom (1):
static GLubyte Image[128][128][4];

for (int i = 0; i < 128; i++)
    for (int j = 0; j < 128; j++) {
        Image[i][j][0] = 0;                  /* red   */
        Image[i][j][1] = 255;                /* green */
        Image[i][j][2] = 0;                  /* blue  */
        Image[i][j][3] = (GLubyte)(i * 2);   /* alpha: i alone only reaches 127,
                                                so double it to span ~0..255 */
    }
I'll skip over most of creating and binding the texture, setting the parameters, etc., and get directly to drawing a couple of quads with the texture:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

glBegin(GL_QUADS);
    glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
    glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
    glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f( 0.0f, -1.0f, 0.0f);
    glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
glEnd();
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_BLEND);

glBegin(GL_QUADS);
    glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(0.0f,  1.0f, 0.0f);
    glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f(1.0f,  1.0f, 0.0f);
    glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f(1.0f, -1.0f, 0.0f);
    glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(0.0f, -1.0f, 0.0f);
glEnd();
The result: on the left, where it's drawn with GL_REPLACE, we get the green of the texture; on the right, where it's drawn with GL_BLEND (and glBlendFunc set to use only the alpha channel), we get the blue quad, but with its alpha taken from the texture -- and we use exactly the same texture for both.
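If it helps to see why the green disappears on the right, here is a sketch of the arithmetic as I read the fixed-function equations (not code the driver runs): the GL_BLEND texture environment computes C = Cf * (1 - Cs) + Cenv * Cs, with the environment color Cenv defaulting to black and A = Af * As, and the framebuffer blend then applies C = Csrc * A + Cdst * (1 - A):

    #include <assert.h>
    #include <math.h>

    /* GL_BLEND texture environment (env color defaults to black):
       C = Cf*(1-Cs) + Cenv*Cs,  A = Af*As */
    static void texenv_blend(const float cf[4], const float cs[4],
                             const float cenv[3], float out[4])
    {
        for (int i = 0; i < 3; i++)
            out[i] = cf[i] * (1.0f - cs[i]) + cenv[i] * cs[i];
        out[3] = cf[3] * cs[3];
    }

    /* glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
       C = Csrc*Asrc + Cdst*(1-Asrc) */
    static void fb_blend(const float src[4], const float dst[3], float out[3])
    {
        for (int i = 0; i < 3; i++)
            out[i] = src[i] * src[3] + dst[i] * (1.0f - src[3]);
    }

    int main(void)
    {
        const float quad[4]  = {0, 0, 1, 1};     /* blue, from glColor4f    */
        const float tex[4]   = {0, 1, 0, 0.5f};  /* green texel, alpha 0.5  */
        const float black[3] = {0, 0, 0};        /* default env color       */
        const float bg[3]    = {0, 0, 0};        /* cleared framebuffer     */
        float frag[4], pixel[3];

        texenv_blend(quad, tex, black, frag);
        /* The texture's green cancels out; the fragment stays blue... */
        assert(frag[0] == 0.0f && frag[1] == 0.0f && frag[2] == 1.0f);
        /* ...and its alpha comes from the texture. */
        assert(fabsf(frag[3] - 0.5f) < 1e-6f);

        fb_blend(frag, bg, pixel);
        assert(fabsf(pixel[2] - 0.5f) < 1e-6f);  /* half-strength blue */
        return 0;
    }

So the texture contributes only its alpha on the GL_BLEND pass, which is exactly the effect seen in the right-hand quad.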
Edit 2: If you decide you really do need a texture that's all white, I'd just create a 1x1-pixel white texture and set GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_REPEAT. This still uses an extra texture, but that texture is tiny: both the time to load it and the memory it consumes are truly minuscule -- the pixel data is basically noise compared to everything else you're loading. I haven't tried to test it, but you might be better off with something like an 8x8 or 16x16 block of pixels instead. That's still small enough that it hardly matters, but those are the block sizes used by JPEG and MPEG respectively, and I can see how the card and (especially) the driver might be optimized for them. It might help, and it won't hurt (enough to care about, anyway).
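For reference, the whole "white texture" is four bytes of data, and GL_REPEAT just takes the fractional part of each texture coordinate, so every coordinate lands on that one white texel. A sketch (plain C; the actual GL calls, which need a live context, are shown in the comment):

    #include <assert.h>
    #include <math.h>

    int main(void)
    {
        /* The entire texture: one white RGBA texel.
           Upload would look like:
             glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0,
                          GL_RGBA, GL_UNSIGNED_BYTE, white);
             glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
             glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); */
        static const unsigned char white[4] = {255, 255, 255, 255};

        /* GL_REPEAT keeps only the fractional part of a coordinate,
           so any (s, t) maps onto the single texel above. */
        float s = 7.25f;
        float wrapped = s - floorf(s);
        assert(fabsf(wrapped - 0.25f) < 1e-6f);
        assert(white[0] == 255 && white[3] == 255);
        return 0;
    }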
What about changing all the texture colors (except alpha) to white after they are loaded and before they are used in OpenGL? If you have them as bitmaps in memory at some point, that should be easy, and you won't need separate texture files.
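A sketch of that whitening pass, assuming the bitmap is a flat RGBA byte array in memory (the layout is an assumption -- adjust the stride for whatever your image loader produces):

    #include <assert.h>
    #include <stddef.h>

    /* Set R, G and B of every texel to 255, leaving alpha untouched. */
    static void whiten_rgba(unsigned char *texels, size_t texel_count)
    {
        for (size_t i = 0; i < texel_count; i++) {
            texels[i * 4 + 0] = 255;
            texels[i * 4 + 1] = 255;
            texels[i * 4 + 2] = 255;
            /* texels[i * 4 + 3] (alpha) is left as loaded */
        }
    }

    int main(void)
    {
        unsigned char img[2 * 4] = {10, 20, 30, 40, 50, 60, 70, 80};
        whiten_rgba(img, 2);
        assert(img[0] == 255 && img[1] == 255 && img[2] == 255);
        assert(img[3] == 40);                  /* alpha preserved */
        assert(img[7] == 80);
        return 0;
    }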