
I'm doing some tests with the CIPixellate filter and it works, but the resulting images come out in different sizes. I suppose that makes sense, since I'm changing the input scale, but it's not what I expected — I thought the filter would scale within the image's rect.

Am I misunderstanding or misusing the filter, or do I just need to crop the output image to the size I want?

Also, the inputCenter parameter isn't clear to me from reading the headers or from trial and error. Can anyone explain what that parameter does?

NSMutableArray * tmpImages = [[NSMutableArray alloc] init];
for (int i = 0; i < 10; i++) {
    double scale = i * 4.0;
    UIImage* tmpImg = [self applyCIPixelateFilter:self.faceImage withScale:scale];
    printf("tmpImg    width: %f height: %f\n",  tmpImg.size.width, tmpImg.size.height);
    [tmpImages addObject:tmpImg];
}

tmpImg    width: 480.000000 height: 640.000000
tmpImg    width: 484.000000 height: 644.000000
tmpImg    width: 488.000000 height: 648.000000
tmpImg    width: 492.000000 height: 652.000000
tmpImg    width: 496.000000 height: 656.000000
tmpImg    width: 500.000000 height: 660.000000
tmpImg    width: 504.000000 height: 664.000000
tmpImg    width: 508.000000 height: 668.000000
tmpImg    width: 512.000000 height: 672.000000
tmpImg    width: 516.000000 height: 676.000000

- (UIImage *)applyCIPixelateFilter:(UIImage*)fromImage withScale:(double)scale
{
    /*
     Makes an image blocky by mapping the image to colored squares whose color is defined by the replaced pixels.
     Parameters

     inputImage: A CIImage object whose display name is Image.

     inputCenter: A CIVector object whose attribute type is CIAttributeTypePosition and whose display name is Center.
     Default value: [150 150]

     inputScale: An NSNumber object whose attribute type is CIAttributeTypeDistance and whose display name is Scale.
     Default value: 8.00
     */
    CIContext *context = [CIContext contextWithOptions:nil];
    CIFilter *filter= [CIFilter filterWithName:@"CIPixellate"];
    CIImage *inputImage = [[CIImage alloc] initWithImage:fromImage];
    CIVector *vector = [CIVector vectorWithX:fromImage.size.width /2.0f Y:fromImage.size.height /2.0f];
    [filter setDefaults];
    [filter setValue:vector forKey:@"inputCenter"];
    [filter setValue:[NSNumber numberWithDouble:scale] forKey:@"inputScale"];
    [filter setValue:inputImage forKey:@"inputImage"];

    CGImageRef cgiimage = [context createCGImage:filter.outputImage fromRect:filter.outputImage.extent];
    UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:1.0f orientation:fromImage.imageOrientation];

    CGImageRelease(cgiimage);

    return newImage;
}

3 Answers


As mentioned in the Apple Core Image Programming Guide,

By default, a blur filter also softens the edges of an image by blurring image pixels together with the transparent pixels that (in the filter's image processing space) surround the image.

That is why your output image size varies with your scale.

As for inputCenter, as Joshua Sullivan mentioned in a comment on this post about CIFilter, "it adjusts the offset of the pixel grid from the source image." So if the inputCenter coordinates are not a multiple of your CIPixellate inputScale, the pixel squares are offset slightly (mostly visible at large inputScale values).
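Given that edge behavior, one way to keep the output the same size for every scale is to render only the input's extent instead of the filter's (larger) output extent. A minimal sketch, assuming the same filter setup as in the question:

```objectivec
// Sketch: crop the render back to the input's extent so the
// output size no longer grows with inputScale.
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:fromImage];
CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
[filter setDefaults];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:@(scale) forKey:kCIInputScaleKey];

// Render from inputImage.extent, not filter.outputImage.extent,
// discarding the edge squares that enlarge the output.
CGImageRef cgImage = [context createCGImage:filter.outputImage
                                   fromRect:inputImage.extent];
UIImage *newImage = [UIImage imageWithCGImage:cgImage
                                        scale:fromImage.scale
                                  orientation:fromImage.imageOrientation];
CGImageRelease(cgImage);
```

With this change, every image in the question's loop would come out at the original 480 × 640 points.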

Answered 2019-04-24T09:10:49.933

The problem is only with the scale.

Just do this:

let result = UIImage(cgImage: cgimgresult!, scale: (originalImageView.image?.scale)!, orientation: (originalImageView.image?.imageOrientation)!)
originalImageView.image = result
Answered 2017-09-03T18:45:43.057

Sometimes inputScale doesn't divide your image evenly, and that's when I found I was getting different-sized output images.

For example, with inputScale = 0 or 1, the output image size is exact.

I found that the way the extra space around the image is centered varies "opaquely" with inputCenter — that is, I didn't take the time to figure out exactly what was going on (I was setting it from the tap location in a view).

My solution to the differing sizes was to re-render the image into the bounds of the input image's size. I was doing this for an Apple Watch, using a black background.

CIFilter *pixelateFilter = [CIFilter filterWithName:@"CIPixellate"];
[pixelateFilter setDefaults];
[pixelateFilter setValue:[CIImage imageWithCGImage:editImage.CGImage] forKey:kCIInputImageKey];
[pixelateFilter setValue:@(amount) forKey:@"inputScale"];
[pixelateFilter setValue:vector forKey:@"inputCenter"];
CIImage *result = [pixelateFilter valueForKey:kCIOutputImageKey];
CIContext *context = [CIContext contextWithOptions:nil];
CGRect extent = [result extent];
CGImageRef cgImage = [context createCGImage:result fromRect:extent];

UIGraphicsBeginImageContextWithOptions(editImage.size, YES, [editImage scale]);
CGContextRef ref = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ref, 0, editImage.size.height);
CGContextScaleCTM(ref, 1.0, -1.0);

CGContextSetFillColorWithColor(ref, backgroundFillColor.CGColor);
CGRect drawRect = (CGRect){{0,0},editImage.size};
CGContextFillRect(ref, drawRect);
CGContextDrawImage(ref, drawRect, cgImage);
UIImage* filledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *returnImage = filledImage;

CGImageRelease(cgImage);

If you're going to stick with your implementation, I'd suggest at least changing how you extract the UIImage so that it uses the original image's scale — not to be confused with the CIFilter scale.

UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:fromImage.scale orientation:fromImage.imageOrientation];
Answered 2015-04-22T03:37:52.243