
I have pieced together how I would accomplish both of these tasks; I'm just not sure how to put them together. The first block of code captures an image, but all I get is a sample buffer that never ends up converted to a UIImage.

- (void) captureStillImage
{
    AVCaptureConnection *stillImageConnection = [[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo];

    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                         completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {

                                                             if (imageDataSampleBuffer != NULL) {
                                                                 NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

                                                                 UIImage *captureImage = [[UIImage alloc] initWithData:imageData];


                                                             }

                                                             if ([[self delegate] respondsToSelector:@selector(captureManagerStillImageCaptured:)]) {
                                                                 [[self delegate] captureManagerStillImageCaptured:self];
                                                             }
                                                         }];
}
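One way to put the two pieces together is to hand the freshly created UIImage to the delegate instead of letting it go out of scope at the end of the completion handler. A minimal sketch of that handler body, assuming a delegate method like captureManager:didCaptureStillImage: (hypothetical; it would have to be added to the delegate protocol):

if (imageDataSampleBuffer != NULL) {
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    UIImage *captureImage = [[UIImage alloc] initWithData:imageData];

    // Pass the image along instead of discarding it (hypothetical delegate selector)
    if ([[self delegate] respondsToSelector:@selector(captureManager:didCaptureStillImage:)]) {
        [[self delegate] captureManager:self didCaptureStillImage:captureImage];
    }
}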

Here is an example from Apple that takes an image buffer and converts it to a UIImage. How do I combine these two methods so that they work together?

-(UIImage*) getUIImageFromBuffer:(CMSampleBufferRef) imageSampleBuffer{

    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);

    if (imageBuffer == NULL) {
        NSLog(@"No buffer");
        return nil;
    }

    // Lock the base address of the pixel buffer
    if((CVPixelBufferLockBaseAddress(imageBuffer, 0))==kCVReturnSuccess){
        NSLog(@"Buffer locked successfully");
    }

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    NSLog(@"bytes per row %zu",bytesPerRow );
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    NSLog(@"width %zu",width);

    size_t height = CVPixelBufferGetHeight(imageBuffer);
    NSLog(@"height %zu",height);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;

}
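If the second routine is meant to be used with the still image output, the output has to be configured to deliver uncompressed BGRA pixel buffers; with the JPEG settings implied by jpegStillImageNSDataRepresentation:, CMSampleBufferGetImageBuffer would return NULL and the routine above has nothing to work with. A minimal sketch of wiring the two together under that assumption, with both methods living in the same capture manager class and error handling omitted:

// Configure the output for uncompressed BGRA frames so the sample buffer
// carries a CVPixelBufferRef that getUIImageFromBuffer: can read.
NSDictionary *outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[[self stillImageOutput] setOutputSettings:outputSettings];

AVCaptureConnection *connection = [[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo];
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:connection
                                                     completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                                                         if (imageDataSampleBuffer != NULL) {
                                                             UIImage *captureImage = [self getUIImageFromBuffer:imageDataSampleBuffer];
                                                             // ... hand captureImage to the delegate or store it
                                                         }
                                                     }];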

1 Answer


The first block of code does exactly what you need and is an acceptable way of doing it. What are you trying to do with the second block?
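Since the first block leaves the output producing JPEG data (that is what jpegStillImageNSDataRepresentation: expects), captureImage is already a usable UIImage; the only remaining step is getting it onto the main thread for display or storage. A minimal sketch, where imageView is a hypothetical property standing in for whatever the UI actually uses:

// Inside the completion handler, after captureImage has been created:
dispatch_async(dispatch_get_main_queue(), ^{
    // UIKit calls must run on the main thread; 'imageView' is a hypothetical outlet
    [[self imageView] setImage:captureImage];
});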

answered 2013-11-07T00:27:49.823