I'm trying to build a face recognition app that identifies a person as soon as it detects their face. I've finished the face detection part, but I can't find a way to compare the detected face against the photos stored in the app's photo album.

Here is the face detection code:

-(void)markFaces:(UIImageView *)facePicture
{
    // draw a CI image with the previously loaded face detection picture
    CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];

    // create a face detector
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    // create an array containing all the detected faces from the detector
    NSArray* features = [detector featuresInImage:image];

    for(CIFaceFeature* faceFeature in features)
    {
        // get the width of the face
        CGFloat faceWidth = faceFeature.bounds.size.width;

        // create a UIView using the bounds of the face
        UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];

        // add a border around the newly created UIView
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];

        // add the new view to create a box around the face
        [self.view addSubview:faceView];

        if(faceFeature.hasLeftEyePosition)
        {
            // create a UIView with a size based on the width of the face
            UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x-faceWidth*0.15, faceFeature.leftEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            // set the position of the leftEyeView based on the face
            [leftEyeView setCenter:faceFeature.leftEyePosition];
            // round the corners
            leftEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the view to the window
            [self.view addSubview:leftEyeView];
        }

        if(faceFeature.hasRightEyePosition)
        {
            // create a UIView with a size based on the width of the face
            UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x-faceWidth*0.15, faceFeature.rightEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            // set the position of the rightEyeView based on the face
            [rightEyeView setCenter:faceFeature.rightEyePosition];
            // round the corners
            rightEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the new view to the window
            [self.view addSubview:rightEyeView];
        }

        if(faceFeature.hasMouthPosition)
        {
            // create a UIView with a size based on the width of the face
            UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];
            // change the background color for the mouth to green
            [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
            // set the position of the mouthView based on the face
            [mouth setCenter:faceFeature.mouthPosition];
            // round the corners
            mouth.layer.cornerRadius = faceWidth*0.2;
            // add the new view to the window
            [self.view addSubview:mouth];
        }
    }
}



-(void)faceDetector
{
    // Load the picture for face detection
    UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"testpicture.png"]];

    // Draw the face detection image
    [self.view addSubview:image];

    // Execute the markFaces: method in the background
    [self performSelectorInBackground:@selector(markFaces:) withObject:image];

    // flip image on y-axis to match coordinate system used by Core Image
    [image setTransform:CGAffineTransformMakeScale(1, -1)];

    // flip the entire window to make everything right side up
    [self.view setTransform:CGAffineTransformMakeScale(1, -1)];
}
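
The two setTransform: calls above compensate for Core Image using a bottom-left origin while UIKit uses a top-left origin. An alternative that avoids flipping the views is to convert each detected rect before laying out the overlays; a minimal sketch, assuming the overlays sit directly on top of an unscaled image (the helper name is illustrative):

// Hypothetical helper: converts a rect from Core Image coordinates
// (origin at the bottom-left) to UIKit coordinates (origin at the top-left).
static CGRect FaceRectInUIKitCoordinates(CGRect ciRect, CGFloat imageHeight)
{
    CGAffineTransform flip = CGAffineTransformMakeScale(1, -1);
    flip = CGAffineTransformTranslate(flip, 0, -imageHeight);
    return CGRectApplyAffineTransform(ciRect, flip);
}

With a helper like this, faceFeature.bounds could be converted using the image's height and used as the frame of faceView directly, without transforming the image view or self.view.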

1 Answer


From the documentation:

Core Image can analyze and find human faces in an image. It performs face detection, not recognition. Face detection is the identification of rectangles that contain human face features, whereas face recognition is the identification of specific human faces (John, Mary, and so on). After Core Image detects a face, it can provide information about face features, such as the positions of the eyes and mouth. It can also track the position of an identified face in a video.
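
The tracking mentioned in the last sentence is a detector-creation option; a minimal sketch, assuming you already extract a CIImage called frameImage from each video frame:

// Create the detector once, with face tracking turned on for video.
CIDetector *videoDetector =
    [CIDetector detectorOfType:CIDetectorTypeFace
                       context:nil
                       options:@{CIDetectorAccuracy : CIDetectorAccuracyLow,
                                 CIDetectorTracking : @YES}];

// Run it on every frame; the same physical face keeps the same trackingID.
for (CIFaceFeature *face in [videoDetector featuresInImage:frameImage]) {
    if (face.hasTrackingID) {
        NSLog(@"face %d at %@", face.trackingID, NSStringFromCGRect(face.bounds));
    }
}

Note that this still only says "the same face as in the previous frame", not who the face belongs to.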

Unfortunately, Apple does not yet provide an API for recognizing faces. You may want to look into third-party libraries.
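
Whichever third-party recognizer you pick (OpenCV and the various cloud face APIs are the usual candidates), it will typically want just the face region rather than the whole photo, so a common first step is to crop each detected face out with the same Core Image calls used in the question. A minimal sketch; the method name cropFacesFromImage: is only illustrative:

// Hypothetical helper: returns one UIImage per detected face.
- (NSArray<UIImage *> *)cropFacesFromImage:(UIImage *)source
{
    CIImage *ciImage = [CIImage imageWithCGImage:source.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    CIContext *context = [CIContext contextWithOptions:nil];
    NSMutableArray<UIImage *> *faces = [NSMutableArray array];

    for (CIFaceFeature *feature in [detector featuresInImage:ciImage]) {
        // Crop the face rectangle and render it back out as a UIImage.
        CIImage *faceImage = [ciImage imageByCroppingToRect:feature.bounds];
        CGImageRef cgFace = [context createCGImage:faceImage fromRect:faceImage.extent];
        if (cgFace) {
            [faces addObject:[UIImage imageWithCGImage:cgFace]];
            CGImageRelease(cgFace);
        }
    }
    return faces;
}

Core Image's job ends there; the actual "is this the same person?" comparison against faces cropped from the album photos has to come from the external library.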

Answered 2016-01-08T13:57:46.750