I have a program that views the camera input in real time and gets the color value of the middle pixel. I use the captureOutput: method to grab the CMSampleBuffer from the AVCaptureSession output (which happens to be read as a CVPixelBuffer), and then use the following code to get the pixel's rgb values:
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
unsigned char* pixel = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);
NSLog(@"Middle pixel: %hhu", pixel[((width*height)*4)/2]);
// Assuming a 32BGRA layout: blue at offset 0, green at 1, red at 2
int red = pixel[(((width*height)*4)/2)+2];
int green = pixel[(((width*height)*4)/2)+1];
int blue = pixel[((width*height)*4)/2];
UIColor *color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:1.0f];
// Unlock the base address when done reading from the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
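For context, the byte offsets above assume the frames arrive as 32BGRA. My capture setup looks roughly like this (a minimal sketch; session, videoOutput, and queue are placeholder names, not the ones in my project):

// Sketch of the video output configuration this code assumes;
// variable names here are placeholders, not from my actual project
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
dispatch_queue_t queue = dispatch_queue_create("camera.frames", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:queue];
[session addOutput:videoOutput];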
I thought the formula ((width*height)*4)/2 would give me the middle pixel, but it gives me the top middle pixel of the image. What formula do I need to access the pixel in the middle of the screen? I'm a bit stuck because I don't really know the internal structure of these pixel buffers.
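From what I've read since, the pixels seem to be stored row by row, and each row occupies bytesPerRow bytes, which can be larger than width * 4 because of row padding. If that's right, I would guess the middle pixel should be addressed per row rather than through width * height. Here is a sketch of what I think the indexing looks like, assuming a 32BGRA buffer (untested on my end):

// Guess at row-major indexing: row y starts at y * bytesPerRow, and the
// pixel at column x starts x * 4 bytes into that row (4 bytes per BGRA pixel)
size_t middleIndex = (height / 2) * bytesPerRow + (width / 2) * 4;
int midBlue  = pixel[middleIndex];
int midGreen = pixel[middleIndex + 1];
int midRed   = pixel[middleIndex + 2];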
Eventually I want to grab the 4 middle pixels and average them for a more accurate color reading, but for now I just want to understand how this stuff works.
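To make that future step concrete, here is the kind of helper I have in mind: it averages the 2x2 block of pixels around the center. This is just a sketch under the same 32BGRA, row-major assumptions; AverageCenterColor is a name I made up, and the buffer's base address is assumed to be locked already.

// Hypothetical helper: average the 2x2 block of pixels around the center
// of a 32BGRA buffer. Assumes width and height are at least 2 and the
// pixel buffer's base address is already locked.
static UIColor *AverageCenterColor(unsigned char *pixel,
                                   size_t width, size_t height,
                                   size_t bytesPerRow) {
    int redSum = 0, greenSum = 0, blueSum = 0;
    for (size_t row = height / 2 - 1; row <= height / 2; row++) {
        for (size_t col = width / 2 - 1; col <= width / 2; col++) {
            size_t i = row * bytesPerRow + col * 4;
            blueSum  += pixel[i];     // B
            greenSum += pixel[i + 1]; // G
            redSum   += pixel[i + 2]; // R
        }
    }
    return [UIColor colorWithRed:(redSum / 4) / 255.0f
                           green:(greenSum / 4) / 255.0f
                            blue:(blueSum / 4) / 255.0f
                           alpha:1.0f];
}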