I'm modifying Apple's SquareCam sample face-detection app so that, instead of drawing a red square around each face, it crops the face before writing to the camera roll. I use the same CGRect for the crop that is used to draw the red square, yet the behavior differs. In portrait mode, if the face is horizontally centered on screen, the crop comes out as expected (in the same position as the red square). If the face is off to the left or right, the crop always seems to be taken from the middle of the screen rather than from where the red square was.
Here is Apple's original code:
- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features
                                       inCGImage:(CGImageRef)backgroundImage
                                 withOrientation:(UIDeviceOrientation)orientation
                                     frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(backgroundImage), CGImageGetHeight(backgroundImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, backgroundImage);
    CGFloat rotationDegrees = 0.;
    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }
    UIImage *rotatedSquareImage = [square imageRotatedByDegrees:rotationDegrees];
    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));
        CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
    }
    returnImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    return returnImage;
}
And my replacement:
- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features
                                       inCGImage:(CGImageRef)backgroundImage
                                 withOrientation:(UIDeviceOrientation)orientation
                                     frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;
    // I'm only taking pics with one face. This is just for testing
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        returnImage = CGImageCreateWithImageInRect(backgroundImage, faceRect);
    }
    return returnImage;
}
Update:
Based on Wains's input, I tried to make my code more like the original, but the result is the same:
- (NSArray*)extractFaceImages:(NSArray *)features
                  fromCGImage:(CGImageRef)sourceImage
              withOrientation:(UIDeviceOrientation)orientation
                  frontFacing:(BOOL)isFrontFacing
{
    NSMutableArray *faceImages = [[[NSMutableArray alloc] initWithCapacity:1] autorelease];
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(sourceImage), CGImageGetHeight(sourceImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, sourceImage);
    CGFloat rotationDegrees = 0.;
    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }
    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));
        CGImageRef flattenedImage = CGBitmapContextCreateImage(bitmapContext);
        CGImageRef croppedImage = CGImageCreateWithImageInRect(flattenedImage, faceRect);
        CGImageRelease(flattenedImage); // release the full-frame copy
        UIImage *clippedFace = [UIImage imageWithCGImage:croppedImage];
        CGImageRelease(croppedImage); // clippedFace retains its own reference
        [faceImages addObject:clippedFace];
    }
    CGContextRelease(bitmapContext);
    return faceImages;
}
I took three photos and logged faceRect for each:
Photo taken with the face near the left edge of the device. The captured image completely misses the face, off to the right: faceRect={{972, 43.0312}, {673.312, 673.312}}
Photo taken with the face in the middle of the device. The captured image is fine: faceRect={{1060.59, 536.625}, {668.25, 668.25}}
Photo taken with the face near the right edge of the device. The captured image completely misses the face, off to the left: faceRect={{982.125, 999.844}, {804.938, 804.938}}
So it looks as if x and y are swapped. I'm holding the device in portrait, but faceRect appears to be expressed in landscape orientation. However, I can't figure out which part of Apple's original code is responsible for this; the orientation code in that method only seems to affect the red-square overlay image itself.