I am capturing an image from the camera (using UIImagePickerController) and saving it to the documents directory.
Then, in a different view controller, I load that image and extract the face region using the CIDetector and CIFaceFeature APIs.
The problem is that although the image loads correctly, no faces are detected in it at all. If I store the same image in the main bundle instead, the faces are detected fine.
I don't know where the problem is; I have tried everything. It might be an issue with the UIImage itself, or with the format in which the image is saved to the documents directory or delivered by the camera.
Please help. I would really appreciate it.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];

    // Save the picked image as a JPEG in the documents directory.
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *path = [documentsDirectory stringByAppendingPathComponent:@"SampleImage.jpg"];
    NSData *data = UIImageJPEGRepresentation(image, 0);
    [data writeToFile:path atomically:YES];

    [picker dismissViewControllerAnimated:YES completion:nil];

    FCVC *fcvc = [[FCVC alloc] initWithImage:image];
    [self.navigationController pushViewController:fcvc animated:YES];
}
In FCVC's viewDidLoad, I call the following method, passing that image:
- (void)markFaces:(UIImage *)pic
{
    CIImage *image = [CIImage imageWithCGImage:pic.CGImage];
    CGImageRef masterFaceImage = NULL;

    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    // Get an array containing all the faces the detector found.
    NSArray *features = [detector featuresInImage:image];

    // Crop out each detected face; only the last one found is kept.
    for (CIFaceFeature *faceFeature in features)
    {
        if (masterFaceImage) CGImageRelease(masterFaceImage);
        masterFaceImage = CGImageCreateWithImageInRect(pic.CGImage, faceFeature.bounds);
    }

    if (masterFaceImage)
    {
        self.masterExtractedFace = [UIImage imageWithCGImage:masterFaceImage];
        CGImageRelease(masterFaceImage);
    }
}
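One thing I suspect, for what it's worth: camera images carry an EXIF orientation, and after the JPEG round-trip the detector may be scanning the pixels sideways, while bundle images are usually already upright. If that is the cause, forwarding the orientation to featuresInImage:options: might help. A sketch of what I mean (exifOrientation is a helper I wrote for this, mapping UIImageOrientation to the EXIF values 1-8 that CIDetectorImageOrientation expects):

```objectivec
// Helper: map UIImageOrientation to the EXIF orientation value CIDetector expects.
static NSNumber *exifOrientation(UIImageOrientation o) {
    switch (o) {
        case UIImageOrientationUp:            return @1;
        case UIImageOrientationDown:          return @3;
        case UIImageOrientationLeft:          return @8;
        case UIImageOrientationRight:         return @6;
        case UIImageOrientationUpMirrored:    return @2;
        case UIImageOrientationDownMirrored:  return @4;
        case UIImageOrientationLeftMirrored:  return @5;
        case UIImageOrientationRightMirrored: return @7;
    }
    return @1;
}

// Then, inside markFaces:, tell the detector how the pixels are oriented:
NSArray *features = [detector featuresInImage:image
                                      options:@{CIDetectorImageOrientation : exifOrientation(pic.imageOrientation)}];
```

I have not confirmed this is my bug, but it would explain why the same picture works from the bundle.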
Thanks in advance.