Run multiple OpenGL shaders on a CVImageBuffer

Relating to my previous two questions, I've spent a week attempting to figure out how to run multiple shaders against a Core Video buffer. I know what I need to do, but frankly I can't get the code to work (pasted below is the original, non-ping-pong version).

Lacking the eureka moment, I'm now totally stuck :). The code to compile and link the shaders is not shown, for brevity. The whole thing renders successfully to a GL-compatible layer, but one shader overwrites the other, so the vital ping-pong step is missing. There's a UIToolbar underneath which will eventually have a button per shader and a button to run all shaders.

Thanks,

Simon

-(void) DrawFrame:(CVImageBufferRef)cameraframe;
{
    int bufferHeight = CVPixelBufferGetHeight(cameraframe);
    int bufferWidth = CVPixelBufferGetWidth(cameraframe);

    // Create a new texture from the camera frame data, display that using the shaders
    glGenTextures(1, &videoFrameTexture);
    glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Using BGRA extension to pull in video frame data directly
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraframe));

    static const GLfloat squareVertices[] = {
        -1.0f, -1.0f,
        1.0f, -1.0f,
        -1.0f,  1.0f,
        1.0f,  1.0f,
    };

    static const GLfloat textureVertices[] = {
        1.0f, 1.0f,
        1.0f, 0.0f,
        0.0f,  1.0f,
        0.0f,  0.0f,
    };

    [self setDisplayFramebuffer];

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, videoFrameTexture);

    // Update uniform values
    glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);   

    // Update attribute values.
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
    glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);


    glUseProgram(greyscaleProgram);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    [self presentFramebuffer];

    // Obviously here is where the ping pong starts (assuming correct mods 
    // to the framebuffer setup method below
    glUseProgram(program);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    [self presentFramebuffer];

    glDeleteTextures(1, &videoFrameTexture);
}

- (void)setDisplayFramebuffer;
{
    if (context)
    {
        [EAGLContext setCurrentContext:context];

        if (!viewFramebuffer)
        {
            [self createFramebuffers];
        }

        glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);



        glViewport(0, 0, backingWidth, backingHeight);

    }
}

- (BOOL)presentFramebuffer;
{
    BOOL success = FALSE;

    if (context)
    {
        [EAGLContext setCurrentContext:context];

        glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);

        success = [context presentRenderbuffer:GL_RENDERBUFFER];
    }

    return success;
}

- (BOOL)createFramebuffers
{   
    glEnable(GL_TEXTURE_2D);
    glDisable(GL_DEPTH_TEST);

    // Onscreen framebuffer object
    glGenFramebuffers(1, &viewFramebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);

    // Render buffer for final output
    glGenRenderbuffers(1, &viewRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);

    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];

    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
    NSLog(@"Backing width: %d, height: %d", backingWidth, backingHeight);

    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, viewRenderbuffer);


    if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) 
    {
        NSLog(@"Failure with framebuffer generation");
        return NO;
    }

    return YES;
}

Edit: Clarified what is missing


To do ping-pong rendering, you need to do the following:

  • Create two textures with the same size and format (matching your video frame).
  • Create two framebuffers with the same configuration, and attach one texture to each framebuffer.

Let's call the Framebuffers A and B, and their attached textures texA and texB.
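
As a rough illustration, here is a minimal sketch of how one such texture-backed framebuffer could be created with OpenGL ES 2.0. The method name and parameters are placeholders of my own, not part of the original code; you would call it twice, once for A and once for B, with the video frame's dimensions:

- (BOOL)createOffscreenFramebuffer:(GLuint *)framebuffer texture:(GLuint *)texture width:(int)width height:(int)height
{
    // Texture that will receive the output of a shader pass.
    glGenTextures(1, texture);
    glBindTexture(GL_TEXTURE_2D, *texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // Allocate storage only (NULL data); the shader pass fills it in.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // Offscreen framebuffer with the texture as its colour attachment.
    glGenFramebuffers(1, framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, *framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, *texture, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        NSLog(@"Failure with offscreen framebuffer generation");
        return NO;
    }

    return YES;
}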

To render:

  • Use the first shader with glUseProgram.
  • Bind Framebuffer A.
  • Bind the camera-frame texture and set its sampler uniform (as in the existing code).
  • Render a quad.

Now you have the result of the shader execution in texA. To do the ping-pong:

  • Use the second shader with glUseProgram.
  • Bind Framebuffer B.
  • Bind texA and set up the texture units for your shader.
  • Render a quad.
  • Use the next shader in your chain with glUseProgram.
  • Bind Framebuffer A.
  • Bind texB and set up the texture units.
  • Render a quad.

Now the result is back in texA and you can repeat the process as many times as you need. For the final pass, render to your on-screen framebuffer and present it, as the existing code already does. Hope this helps!
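
Mapped onto the code in the question, one frame might then look roughly like the sketch below. The names fboA and texA come from the setup sketch above (so they are assumptions, not from the original code), the vertex attributes are assumed to be set up exactly as in DrawFrame, and uniforms[UNIFORM_VIDEOFRAME] is assumed to hold a sampler location valid for whichever program is currently bound:

// Pass 1: greyscale shader reads the camera texture, writes into texA.
glBindFramebuffer(GL_FRAMEBUFFER, fboA);
glViewport(0, 0, bufferWidth, bufferHeight);
glUseProgram(greyscaleProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Any intermediate passes would alternate here: read texA, write to fboB,
// then read texB, write to fboA, and so on.

// Final pass: second shader reads texA, writes to the on-screen framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
glViewport(0, 0, backingWidth, backingHeight);
glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texA);
glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Present once, after the final pass only.
[self presentFramebuffer];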
