
EDIT: I found this code helpful for handling front-camera images: http://blog.logichigh.com/2008/06/05/uiimage-fix/

I'm hoping someone who has had a similar problem can help; I haven't found a solution yet. (It may look a bit long, but most of it is just helper code.)

I'm using the iOS face detector on images taken from the camera (front and back) as well as on images from the gallery. (I'm using UIImagePicker both for capturing images with the camera and for picking from the gallery - not AVFoundation for taking pictures as in the SquareCam demo.)

I was getting really messed-up detection coordinates (when there were any), so I wrote a short debug method to get the face bounds, plus a utility that draws a square over them, because I wanted to check which orientation the detector was working in:

    #define RECTBOX(R)   [NSValue valueWithCGRect:R]
    - (NSArray *)detectFaces:(UIImage *)inputimage
    {
        self.detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy]];
        NSNumber *orientation = [NSNumber numberWithInt:[inputimage imageOrientation]]; // i also saw code where they add +1 to the orientation
        NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

        CIImage *ciimage = [CIImage imageWithCGImage:inputimage.CGImage options:imageOptions];

        // try like this first
        //    NSArray *features = [self.detector featuresInImage:ciimage options:imageOptions];
        // if that doesn't work, fall back to trying all orientations
        NSArray *features;

        int exif;
        // iOS face detector, trying all of the orientations
        for (exif = 1; exif <= 8; exif++)
        {
            NSNumber *orientation = [NSNumber numberWithInt:exif];
            NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

            NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];
            features = [self.detector featuresInImage:ciimage options:imageOptions];
            NSTimeInterval duration = [NSDate timeIntervalSinceReferenceDate] - start;
            NSLog(@"faceDetection: face detection total runtime is %f s", duration);

            if (features.count > 0)
            {
                NSString *str = [NSString stringWithFormat:@"found faces using exif %d", exif];
                [faceDetection log:str];
                break;
            }
        }
        if (features.count > 0)
        {
            [faceDetection log:@"-I- Found faces with ios face detector"];
            NSMutableArray *returnArray = [NSMutableArray array];
            for (CIFaceFeature *feature in features)
            {
                // flip y: CIFaceFeature bounds use a bottom-left origin, UIKit uses top-left
                CGRect rect = feature.bounds;
                CGRect r = CGRectMake(rect.origin.x, inputimage.size.height - rect.origin.y - rect.size.height, rect.size.width, rect.size.height);
                [returnArray addObject:RECTBOX(r)];
            }
            return returnArray;
        } else {
            // no faces from iOS face detector. try OpenCV detector
            return nil;
        }
    }
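The flip inside the loop above converts Core Image's bottom-left-origin bounds into UIKit's top-left origin. Stripped of the Objective-C wrapping, the arithmetic is just this (a minimal C sketch; the struct name is mine, not UIKit's):

```c
#include <assert.h>

/* Core Image rects have a bottom-left origin; UIKit uses top-left.
   Flip the y coordinate within the image height. */
typedef struct { double x, y, w, h; } rect_t;

static rect_t flip_to_uikit(rect_t r, double image_height) {
    rect_t out = r;
    out.y = image_height - r.y - r.h;
    return out;
}
```

For example, a 40-pixel-tall face at y=20 inside a 200-pixel-high image ends up at y=140 in UIKit coordinates.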

![face detection results][1]

After trying loads of different pictures, I noticed that the face detector's orientation is not consistent with the camera image's properties. I took a bunch of pictures with the front camera where the UIImage orientation was 3 (querying imageOrientation), but the face detector found no faces for that setting. When running through all eight EXIF possibilities, the face detector would eventually pick up the faces, but each time for a different orientation.

[1]: http://i.stack.imgur.com/D7bkZ.jpg
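The answer below boils down to a fixed table from UIImageOrientation to the EXIF value that CIDetectorImageOrientation expects (it is not a simple "+1"). As a plain-C sketch, with the UIImageOrientation raw values assumed from UIKit's header (Up=0, Down=1, Left=2, Right=3, UpMirrored=4, DownMirrored=5, LeftMirrored=6, RightMirrored=7):

```c
#include <assert.h>

/* Assumed UIImageOrientation raw values (from UIKit's header). */
enum { UIUp = 0, UIDown, UILeft, UIRight,
       UIUpMirrored, UIDownMirrored, UILeftMirrored, UIRightMirrored };

/* Map a UIImageOrientation to the EXIF/TIFF orientation (1-8)
   that CIDetectorImageOrientation expects. */
static int exif_from_ui_orientation(int ui) {
    switch (ui) {
        case UIUp:            return 1;
        case UIDown:          return 3;
        case UILeft:          return 8;
        case UIRight:         return 6;
        case UIUpMirrored:    return 2;
        case UIDownMirrored:  return 4;
        case UILeftMirrored:  return 5;
        case UIRightMirrored: return 7;
        default:              return 1; /* treat unknown as "Up" */
    }
}
```

So an image reporting orientation 3 (Right) needs EXIF 6, not 3 or 4, which is consistent with the brute-force loop landing on a value different from imageOrientation.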

How can I resolve this? Is there a mistake in my code?

Another problem I have (closely connected to the face detector): when the face detector does pick up faces, but for the "wrong" orientation (this happens mostly with the front camera), the UIImage used initially displays correctly in a UIImageView, but when I draw a square overlay (I'm using OpenCV in my app, so I decided to convert the UIImage to a cv::Mat and draw the overlay with OpenCV), the whole image is rotated 90 degrees (only the cv::Mat image, not the UIImage that was displayed initially).

The only explanation I can think of is that the face detector is messing with some buffer (context?) that the UIImage-to-cv::Mat conversion is using. How can I separate those buffers?

The code for converting a UIImage to a cv::Mat is (from the "famous" UIImage category someone made):

-(cv::Mat)CVMat
{

    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
                                                    cols, // Width of bitmap
                                                    rows, // Height of bitmap
                                                    8, // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace, // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}
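A likely explanation for the 90° rotation, rather than a shared buffer: `CGContextDrawImage` in `-CVMat` above copies the raw `CGImage` pixels and ignores the UIImage's `imageOrientation`, while UIImageView applies the orientation at display time. For Left/Right-oriented images the raw bitmap's width and height are therefore transposed relative to `self.size`. A small C sketch of that check (UIImageOrientation raw values assumed as Up=0, Down=1, Left=2, Right=3, UpMirrored=4, DownMirrored=5, LeftMirrored=6, RightMirrored=7):

```c
#include <assert.h>
#include <stdbool.h>

/* Assumed UIImageOrientation raw values (from UIKit's header). */
enum { UIUp = 0, UIDown, UILeft, UIRight,
       UIUpMirrored, UIDownMirrored, UILeftMirrored, UIRightMirrored };

/* For these orientations the underlying CGImage's pixel width/height
   are transposed relative to UIImage.size. */
static bool raw_dimensions_transposed(int ui) {
    return ui == UILeft || ui == UIRight ||
           ui == UILeftMirrored || ui == UIRightMirrored;
}
```

When this returns true, drawing the raw CGImage into a context sized from `self.size` produces the rotated-looking mat described in the question.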

- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1)
    {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else
    {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // Width
                                            cvMat.rows,                                     // Height
                                            8,                                              // Bits per component
                                            8 * cvMat.elemSize(),                           // Bits per pixel
                                            cvMat.step[0],                                  // Bytes per row
                                            colorSpace,                                     // Colorspace
                                            kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // Bitmap info flags
                                            provider,                                       // CGDataProviderRef
                                            NULL,                                           // Decode
                                            false,                                          // Should interpolate
                                            kCGRenderingIntentDefault);                     // Intent   

    self = [self initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return self;
}

-(cv::Mat)CVRgbMat
{
    cv::Mat tmpimage = self.CVMat;
    cv::Mat image;
    cvtColor(tmpimage, image, cv::COLOR_BGRA2BGR);
    return image;
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)img editingInfo:(NSDictionary *)editInfo {
    self.prevImage = img;
    NSArray *arr = [[faceDetection sharedFaceDetector] detectFaces:img];
    UIImage *annotated = img;
    for (id r in arr)
    {
        CGRect rect = RECTUNBOX(r);
        // draw onto the previously annotated image so all squares accumulate
        // (drawing on the original each time would keep only the last square)
        annotated = [utils drawSquareOnImage:annotated square:rect];
    }
    self.previewView.image = annotated;
    [self.imgPicker dismissModalViewControllerAnimated:YES];
}

2 Answers


I don't think it's a good idea to rotate the whole batch of image pixels just to match the CIFaceFeature. As you can imagine, redrawing in the rotated orientation is very heavy. I had the same problem, and I solved it by converting the coordinate system of the CIFaceFeature with respect to the UIImageOrientation. I extended the CIFaceFeature class with some conversion methods to get the correct point locations and bounds with respect to the UIImage and its UIImageView (or the CALayer of a UIView). The complete implementation is posted here: https://gist.github.com/laoyang/5747004. You can use it directly.

Here is the most basic conversion for a point from a CIFaceFeature; the returned CGPoint is converted based on the image's orientation:

- (CGPoint) pointForImage:(UIImage*) image fromPoint:(CGPoint) originalPoint {

    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;

    CGPoint convertedPoint;

    switch (image.imageOrientation) {
        case UIImageOrientationUp:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDown:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeft:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        case UIImageOrientationRight:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationUpMirrored:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDownMirrored:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeftMirrored:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationRightMirrored:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        default:
            break;
    }
    return convertedPoint;
}
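A quick way to sanity-check the switch above is to reproduce the four unmirrored cases in plain C and exercise them with known points (UIImageOrientation raw values assumed as Up=0, Down=1, Left=2, Right=3; the mirrored cases follow the same pattern):

```c
#include <assert.h>

typedef struct { double x, y; } pt_t;

/* Assumed UIImageOrientation raw values (from UIKit's header). */
enum { UIUp = 0, UIDown, UILeft, UIRight };

/* Plain-C version of pointForImage:fromPoint: for the unmirrored cases. */
static pt_t convert_point(int ui, pt_t p, double w, double h) {
    pt_t out = p;
    switch (ui) {
        case UIUp:    out.x = p.x;     out.y = h - p.y; break;
        case UIDown:  out.x = w - p.x; out.y = p.y;     break;
        case UILeft:  out.x = w - p.y; out.y = h - p.x; break;
        case UIRight: out.x = p.y;     out.y = p.x;     break;
    }
    return out;
}
```

For a 100x200 image, a feature point at (10, 20) maps to (10, 180) for Up and to (20, 10) for Right, which matches the Objective-C switch above.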

And here are the category methods based on the above conversion:

// Get converted features with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image;
- (CGPoint) rightEyePositionForImage:(UIImage *)image;
- (CGPoint) mouthPositionForImage:(UIImage *)image;
- (CGRect) boundsForImage:(UIImage *)image;

// Get normalized features (0-1) with respect to the imageOrientation property
- (CGPoint) normalizedLeftEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedRightEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedMouthPositionForImage:(UIImage *)image;
- (CGRect) normalizedBoundsForImage:(UIImage *)image;

// Get feature location inside of a given UIView size with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) rightEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) mouthPositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGRect) boundsForImage:(UIImage *)image inView:(CGSize)viewSize;

(One more thing to note: you need to specify the correct EXIF orientation, derived from the UIImage's orientation, when extracting the face features. Pretty confusing... here is what I did:

int exifOrientation;
switch (self.image.imageOrientation) {
    case UIImageOrientationUp:
        exifOrientation = 1;
        break;
    case UIImageOrientationDown:
        exifOrientation = 3;
        break;
    case UIImageOrientationLeft:
        exifOrientation = 8;
        break;
    case UIImageOrientationRight:
        exifOrientation = 6;
        break;
    case UIImageOrientationUpMirrored:
        exifOrientation = 2;
        break;
    case UIImageOrientationDownMirrored:
        exifOrientation = 4;
        break;
    case UIImageOrientationLeftMirrored:
        exifOrientation = 5;
        break;
    case UIImageOrientationRightMirrored:
        exifOrientation = 7;
        break;
    default:
        break;
}

NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];

NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:self.image.CGImage]
                                          options:@{CIDetectorImageOrientation:[NSNumber numberWithInt:exifOrientation]}];

)

Answered 2013-06-10T07:39:53.000

iOS 10 and Swift 3

You can check Apple's sample code, which shows how to detect faces as well as barcode and QR code values:

https://developer.apple.com/library/content/samplecode/AVCamBarcode/Introduction/Intro.html

Answered 2016-12-20T11:42:08.197