I have a problem. I am using two images: one downloaded from the internet, and one captured with the iPhone camera. I use CIDetector to detect faces in both images. It works perfectly on the image downloaded from the internet, but on the camera image it either fails to detect the face or detects it in the wrong place.

I have checked many images; the result is always the same.


3 Answers


I tried the code above. It detects faces in images captured by the iPhone, but it cannot detect faces in images downloaded from the internet. Here is my code:

    NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];

    CIImage *ciImage = [CIImage imageWithCGImage:[facePicture CGImage]];
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
    NSArray *features = [detector featuresInImage:ciImage options:imageOptions];

When it detects a face, I draw it with this code:

    for (CIFaceFeature *feature in features) {
        // Highlight the detected face with a translucent fill and a white stroke
        CGRect faceRect = [feature bounds];

        CGContextSetRGBFillColor(context, 0.0f, 0.0f, 0.0f, 0.5f);
        CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextSetLineWidth(context, 2.0f * scale);
        CGContextAddRect(context, faceRect);
        CGContextDrawPath(context, kCGPathFillStroke);
        CGContextDrawImage(context, faceRect, [imgDraw CGImage]);
    }

But the rectangle is not at the correct position; it is shifted some distance to the right.

answered 2012-11-09T06:55:09.050

Try this:

    NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];

    CIImage *ciImage = [CIImage imageWithCGImage:[image CGImage]];
    NSNumber *orientation = [NSNumber numberWithInt:[image imageOrientation] + 1];
    NSDictionary *fOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];
    NSArray *features = [detector featuresInImage:ciImage options:fOptions];
    for (CIFaceFeature *f in features) {
        NSLog(@"left eye found: %@", (f.hasLeftEyePosition ? @"YES" : @"NO"));
        NSLog(@"right eye found: %@", (f.hasRightEyePosition ? @"YES" : @"NO"));
        NSLog(@"mouth found: %@", (f.hasMouthPosition ? @"YES" : @"NO"));

        if (f.hasLeftEyePosition)
            NSLog(@"left eye position x = %f, y = %f", f.leftEyePosition.x, f.leftEyePosition.y);
        if (f.hasRightEyePosition)
            NSLog(@"right eye position x = %f, y = %f", f.rightEyePosition.x, f.rightEyePosition.y);
        if (f.hasMouthPosition)
            NSLog(@"mouth position x = %f, y = %f", f.mouthPosition.x, f.mouthPosition.y);
    }

If you always use the front camera in portrait orientation, add this:

    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
    NSArray *features = [detector featuresInImage:ciImage options:imageOptions];

For more information:

Sample: https://github.com/beetlebugorg/PictureMe

iOS face detection issue

Face detection issue using CIDetector

https://stackoverflow.com/questions/4332868/detect-face-in-iphone?rq=1

answered 2012-11-09T04:24:11.067

I had the same problem. You can redraw the image before running detection: drawing it into a new graphics context bakes the EXIF orientation into the pixel data, so the detector sees an upright image.

    CGSize size = CGSizeMake(cameraCaptureImage.size.width, cameraCaptureImage.size.height);
    UIGraphicsBeginImageContext(size);
    [cameraCaptureImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    cameraCaptureImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
answered 2016-01-17T14:50:09.260