
I found this link that places an image over facial feature points. So it seems I need to detect the eyes and place an image there?

To keep it simple: I need to place an image over a person's eyes. How can I do that? Any hints would be much appreciated!


2 Answers

for ( CIFaceFeature *ff in features ) {
    // find the correct position for the square layer within the previewLayer
    // the feature box originates in the bottom left of the video frame.
    // (Bottom right if mirroring is turned on)
    CGRect faceRect = [ff bounds];

    // flip preview width and height
    CGFloat temp = faceRect.size.width;
    faceRect.size.width = faceRect.size.height;
    faceRect.size.height = temp;
    temp = faceRect.origin.x;
    faceRect.origin.x = faceRect.origin.y;
    faceRect.origin.y = temp;

    // scale coordinates so they fit in the preview box, which may be scaled
    // (previewBox, clap -- the clean aperture rect -- and isMirrored come from the surrounding method)
    CGFloat widthScaleBy = previewBox.size.width / clap.size.height;
    CGFloat heightScaleBy = previewBox.size.height / clap.size.width;
    faceRect.size.width *= widthScaleBy;
    faceRect.size.height *= heightScaleBy;
    faceRect.origin.x *= widthScaleBy;
    faceRect.origin.y *= heightScaleBy;

    if ( isMirrored )
        faceRect = CGRectOffset(faceRect, previewBox.origin.x + previewBox.size.width - faceRect.size.width - (faceRect.origin.x * 2), previewBox.origin.y);
    else
        faceRect = CGRectOffset(faceRect, previewBox.origin.x, previewBox.origin.y);
}

This gives you the correct face rectangle, but you still have to fine-tune the image position for the eyes.
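For the eye positions you can push the eye point through the same swap/scale as the face rect above. A minimal sketch, assuming the same previewBox, clap and isMirrored values the loop already uses (they come from the surrounding method and are not shown here):

CGPoint eyePoint = ff.leftEyePosition;   // likewise ff.rightEyePosition

// swap x/y, matching the width/height swap done for the face rect
CGPoint p = CGPointMake(eyePoint.y, eyePoint.x);

// scale into the preview box
p.x *= previewBox.size.width  / clap.size.height;
p.y *= previewBox.size.height / clap.size.width;

// offset into the preview box, flipping horizontally when the preview is mirrored
if (isMirrored)
    p.x = previewBox.origin.x + previewBox.size.width - p.x;
else
    p.x += previewBox.origin.x;
p.y += previewBox.origin.y;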

The following will help you get each feature's position:

-(void)markFaces:(CIImage *)image
{
    // draw a CI image with the previously loaded face detection picture
    @autoreleasepool {
        // note: the accuracy value goes under the CIDetectorAccuracy key, not the other way round
        CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                  context:nil
                                                  options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                      forKey:CIDetectorAccuracy]];

        // create an array containing all the detected faces from the detector
        NSArray *features = [detector featuresInImage:image];

        NSLog(@"The Address Of CIImage In: %p %s", image, __FUNCTION__);
        NSLog(@"Array Count %lu", (unsigned long)[features count]);

        if ([features count] == 0)
        {
            // No face is present
        }
        else
        {
            for (CIFaceFeature *faceFeature in features)
            {
                if (faceFeature.hasMouthPosition)
                {
                    // Your code based on the mouth position
                }

                if (faceFeature.hasLeftEyePosition)
                {
                    // Write your code. Note: the points are in Core Image's
                    // mirrored/flipped coordinate space, so you need to take care of that.
                }

                if (faceFeature.hasRightEyePosition)
                {
                    // Write your code. Note: the points are in Core Image's
                    // mirrored/flipped coordinate space, so you need to take care of that.
                }
            }
        }
    }
}
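To actually place an image over an eye you can, for example, drop a UIImageView at the converted eye point. A minimal sketch for the hasLeftEyePosition branch, where eye_overlay, photoView and the 40x40 size are placeholders, and which assumes the CIImage and the view displaying it share the same pixel dimensions:

CGPoint eye = faceFeature.leftEyePosition;

// Core Image's origin is the bottom-left corner, UIKit's is the top-left,
// so flip the y value against the image height.
CGPoint eyeInView = CGPointMake(eye.x, image.extent.size.height - eye.y);

// Centre a (placeholder) overlay image on the eye.
UIImageView *overlay = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"eye_overlay"]];
overlay.frame = CGRectMake(0, 0, 40, 40);   // arbitrary overlay size
overlay.center = eyeInView;
[photoView addSubview:overlay];             // photoView: the view showing the photo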
Answered 2013-04-02T06:33:04.597

Apple provides this functionality natively in iOS 5 and later.

See this Apple documentation:

http://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/CoreImaging/ci_detect_faces/ci_detect_faces.html

In that example they do exactly what you are asking for: face and eye detection.
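As a rough sketch of the idea that document walks through (the variable names here are illustrative, not copied from it, and image / features are assumed to be the CIImage and the detector's result): the features come back in Core Image's flipped coordinate space, so one way to map them into UIKit coordinates is an affine transform:

// Flip Core Image's bottom-left-origin coordinates into UIKit's top-left-origin ones.
CGAffineTransform flip = CGAffineTransformMakeScale(1, -1);
flip = CGAffineTransformTranslate(flip, 0, -image.extent.size.height);

for (CIFaceFeature *face in features) {
    CGRect faceRect  = CGRectApplyAffineTransform(face.bounds, flip);
    CGPoint leftEye  = CGPointApplyAffineTransform(face.leftEyePosition, flip);
    CGPoint rightEye = CGPointApplyAffineTransform(face.rightEyePosition, flip);
    // position your overlay views using faceRect / leftEye / rightEye
}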

Hope this helps.

*Also check these: link1, link2. They cover overlaying an image on the detected region, which I think is what you are looking for.

Answered 2013-04-02T07:10:13.757