
I am writing an OpenGL ES app for iOS, and I need to take an in-app screenshot of the rendered scene. Everything works fine when I am not using multi-sampling, but when I turn multi-sampling on, glReadPixels does not return the correct data (the scene itself is drawn correctly - graphics quality is much better with multi-sampling).

I have already checked a bunch of similar questions on SO and elsewhere, but none of them solve my problem, since I am already doing what they propose:

  1. I am taking the screenshot after the buffers are resolved, but before the render buffer is presented.
  2. glReadPixels does not return an error.
  3. I even tried setting kEAGLDrawablePropertyRetainedBacking to YES and taking the screenshot after the buffer is presented - that does not work either.
  4. I am using the OpenGL ES 1.x rendering API (context initialised with kEAGLRenderingAPIOpenGLES1).

Basically I am out of ideas about what could be wrong. Posting a question on SO is my last resort.

This is the relevant source code:

Creating the framebuffers

- (BOOL)createFramebuffer
{

    glGenFramebuffersOES(1, &viewFramebuffer);
    glGenRenderbuffersOES(1, &viewRenderbuffer);

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(CAEAGLLayer*)self.layer];
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);

    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    // Multisample support

    glGenFramebuffersOES(1, &sampleFramebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, sampleFramebuffer);

    glGenRenderbuffersOES(1, &sampleColorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleColorRenderbuffer);
    glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, backingWidth, backingHeight);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, sampleColorRenderbuffer);

    glGenRenderbuffersOES(1, &sampleDepthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleDepthRenderbuffer);
    glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, sampleDepthRenderbuffer);

    // End of multisample support

    if(glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
        return NO;
    }

    return YES;
}
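
Note that when the completeness check at the end of createFramebuffer runs, sampleFramebuffer is still the currently bound framebuffer, so only the multisample FBO is validated. A variant that checks both framebuffers explicitly might look like this (just a sketch):

    // Validate each FBO separately; glCheckFramebufferStatusOES only
    // inspects the framebuffer currently bound to GL_FRAMEBUFFER_OES.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
        NSLog(@"view framebuffer incomplete %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
        return NO;
    }

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, sampleFramebuffer);
    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
        NSLog(@"sample framebuffer incomplete %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
        return NO;
    }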

Resolving the buffers and taking the snapshot

    glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
    glResolveMultisampleFramebufferAPPLE();
    [self checkGlError];

    //glFinish();

    if (capture)
        captureImage = [self snapshot:self];    

    const GLenum discards[]  = {GL_COLOR_ATTACHMENT0_OES,GL_DEPTH_ATTACHMENT_OES};
    glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE,2,discards);

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);    

    [context presentRenderbuffer:GL_RENDERBUFFER_OES];    
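
(checkGlError is not part of the listing; a minimal sketch of such a helper, assuming it simply drains and logs glGetError, would be:)

- (void)checkGlError
{
    // Drain and log all pending OpenGL ES errors; glGetError returns
    // GL_NO_ERROR once the error flags are clear.
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR) {
        NSLog(@"GL error: 0x%04x", err);
    }
}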

Snapshot method (basically copied from the Apple docs)

- (UIImage*)snapshot:(UIView*)eaglview
{

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note: here the color renderbuffer object is named viewRenderbuffer.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);


    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    [self checkGlError];
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    [self checkGlError];

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}

1 Answer


You resolve the multisample buffers via glResolveMultisampleFramebufferAPPLE by binding viewFramebuffer as the draw framebuffer and sampleFramebuffer as the read framebuffer, as usual. But did you also remember to bind viewFramebuffer as the read framebuffer (glBindFramebuffer(GL_READ_FRAMEBUFFER, viewFramebuffer)) before calling glReadPixels? glReadPixels always reads from the currently bound read framebuffer, and if you don't change this binding after the multisample resolve, that will still be the multisample framebuffer and not the default framebuffer.
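
Concretely, applied to the resolve-and-capture code from the question, the fix is one extra bind before taking the snapshot (a sketch, reusing the names from the question):

    glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
    glResolveMultisampleFramebufferAPPLE();

    // glReadPixels always reads from the framebuffer currently bound
    // as the read framebuffer - still sampleFramebuffer at this point.
    // Rebind the resolved framebuffer for reading before the snapshot:
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, viewFramebuffer);

    if (capture)
        captureImage = [self snapshot:self];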

I also find your glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer) calls a bit bothersome, since they don't really do anything meaningful; the currently bound renderbuffer is only relevant for functions that operate on renderbuffers (in practice only glRenderbufferStorage), although it may be that ES does something meaningful with it and the binding is required for [context presentRenderbuffer:GL_RENDERBUFFER_OES] to work. Nevertheless, perhaps you assumed this binding also controls the buffer glReadPixels reads from, but that is not the case - it always reads from the framebuffer currently bound to GL_READ_FRAMEBUFFER.
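
If you want the snapshot method itself to be independent of whatever was bound before it, you can bind the read framebuffer there as well (again just a sketch using the question's names):

    // In snapshot:, before reading back. glReadPixels consults the read
    // *framebuffer* binding, not the renderbuffer binding:
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, viewFramebuffer);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);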

Answered 2013-06-05T16:31:52.983