
I am adding a UIImage element using Brad Larson's GPUImage framework. I've managed to add the image, but the main problem is that the image gets stretched to the video's aspect ratio. Here is my code:

    GPUImageView *filterView = (GPUImageView *)self.view;
    videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    transformFilter=[[GPUImageTransformFilter alloc]init];
    CGAffineTransform t=CGAffineTransformMakeScale(0.5, 0.5);
    [(GPUImageTransformFilter *)filter setAffineTransform:t];
    [videoCamera addTarget:transformFilter];

    filter = [[GPUImageOverlayBlendFilter alloc] init];
    [videoCamera addTarget:filter];
    inputImage = [UIImage imageNamed:@"eye.png"];
    sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
    [sourcePicture forceProcessingAtSize:CGSizeMake(50, 50)];
    [sourcePicture processImage];
    [sourcePicture addTarget:filter];
    [sourcePicture addTarget:transformFilter];


    [filter addTarget:filterView];
    [videoCamera startCameraCapture];

I tried using a transform filter before blending the image, but it wasn't scaled. I want the image to appear at the center. How can I do this? Thanks.


1 Answer


You're on the right track; just a few things were out of place.

The following code loads the overlay image and applies a transform that keeps it at its actual size. By default, it will be centered over the video.

    GPUImageView *filterView = (GPUImageView *)self.view;
    videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

    filter = [[GPUImageOverlayBlendFilter alloc] init];
    transformFilter = [[GPUImageTransformFilter alloc] init];

    [videoCamera addTarget:filter];
    [transformFilter addTarget:filter];

    // set up the overlay image
    inputImage = [UIImage imageNamed:@"eye.png"];
    sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];

    // determine the scaling needed to keep the image at its actual size
    // (the portrait 640x480 preset yields a 480-wide by 640-tall frame)
    CGFloat tx = inputImage.size.width / 480.0;
    CGFloat ty = inputImage.size.height / 640.0;

    // apply the transform to the transform filter
    CGAffineTransform t = CGAffineTransformMakeScale(tx, ty);
    [(GPUImageTransformFilter *)transformFilter setAffineTransform:t];

    [sourcePicture addTarget:filter];
    [sourcePicture addTarget:transformFilter];
    [sourcePicture processImage];

    [filter addTarget:filterView];
    [videoCamera startCameraCapture];
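The scale factors just express the overlay's pixel dimensions as a fraction of the portrait video frame (480 wide by 640 tall for this preset), since the transform filter works in normalized coordinates where the full frame has size 1×1. A minimal sketch of the arithmetic in C, assuming a hypothetical 100×80 overlay image (substitute eye.png's real dimensions):

    #include <stdio.h>

    int main(void) {
        /* Portrait orientation of the 640x480 preset: 480 wide, 640 tall. */
        const float videoWidth  = 480.0f;
        const float videoHeight = 640.0f;

        /* Hypothetical overlay size; use the actual image's size in practice. */
        const float imageWidth  = 100.0f;
        const float imageHeight = 80.0f;

        /* Scaling by image/video keeps the overlay at its native pixel size
           instead of being stretched to fill the whole frame. */
        float tx = imageWidth  / videoWidth;
        float ty = imageHeight / videoHeight;

        printf("tx = %.4f, ty = %.4f\n", tx, ty);  /* tx = 0.2083, ty = 0.1250 */
        return 0;
    }

Without this transform, the blend filter samples both inputs over the same normalized quad, which is exactly why the overlay was being stretched to the video's aspect ratio.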
Answered 2013-09-12T16:24:03.813