
I am trying to create a PIP (picture-in-picture) effect camera in iOS using Swift 4. For this I have taken the following controls:

View
FrontCameraView (ImageView)
BackCameraView (ImageView)
MaskedCameraView (ImageView)

FrontCameraView gets the blurred image, Frame Image gets the PIP frame, and maskedCameraView gets the masked image.

Now, when I capture the image:

This is how I apply the live effect on the custom camera:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    connection.videoOrientation = orientation
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)

    // Convert the incoming sample buffer into a UIImage
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvImageBuffer: pixelBuffer!)
    let cgImage: CGImage = context.createCGImage(cameraImage, from: cameraImage.extent)!
    globalImage = UIImage(cgImage: cgImage)
    let croppedImage = HelperClass.shared().imageByScalingAndCropping(forSize: globalImage, CGSize(width: 200, height: 200))

    // Update the three image views that make up the live PIP effect
    DispatchQueue.main.async {
        self.backCameraView?.image = HelperClass.shared().blur(withCoreImage: croppedImage, andView: self.view)
        self.frontCameraView?.image = frameArray[self.tagValue]
        let maskedImage = self.maskImage(image: croppedImage!, mask: maskedArray[self.tagValue])
        let maskedCroppedImage = HelperClass.shared().imageByScalingAndCropping(forSize: maskedImage, CGSize(width: 200, height: 200))
        self.maskedCameraView.image = maskedCroppedImage
    }
}
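
For context, the capture session itself is configured elsewhere in the view controller, roughly along the lines of the sketch below (the property names captureSession, videoOutput and photoOutput are the ones referenced in my code; the exact setup details are simplified and partly assumed here):

// Simplified sketch of the assumed session setup inside the camera view controller.
let captureSession = AVCaptureSession()
let videoOutput = AVCaptureVideoDataOutput()
var photoOutput: AVCaptureStillImageOutput?

func configureSession() {
    captureSession.sessionPreset = .photo

    // Camera input (back camera assumed)
    guard let camera = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: camera),
          captureSession.canAddInput(input) else { return }
    captureSession.addInput(input)

    // The video data output drives captureOutput(_:didOutput:from:) for the live PIP preview
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)
    if captureSession.canAddOutput(videoOutput) {
        captureSession.addOutput(videoOutput)
    }

    // The still image output is what takePhoto(_:) below tries to capture from
    photoOutput = AVCaptureStillImageOutput()
    if let photoOutput = photoOutput, captureSession.canAddOutput(photoOutput) {
        captureSession.addOutput(photoOutput)
    }

    captureSession.startRunning()
}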

// Code to capture the image
@IBAction func takePhoto(_ sender: UIButton) {

    if captureSession.canAddOutput(videoOutput) {
        captureSession.addOutput(videoOutput)
    }
    let videoConnection = videoOutput.connection(with: AVMediaType.video)
    if videoConnection != nil {
        photoOutput?.captureStillImageAsynchronously(from: videoConnection!, completionHandler: { (sampleBuffer, error) in
            if sampleBuffer != nil {
                // Convert the still image sample buffer into a UIImage
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer!)
                let dataProvider = CGDataProvider(data: imageData! as CFData)
                let cgImageRef = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.relativeColorimetric)
                let image = UIImage(cgImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.right)

                let sourceImage = HelperClass.shared().imageByScalingAndCropping(forSize: image, CGSize(width: 200, height: 200))

                // Push the preview screen with the captured image
                let storyBoard: UIStoryboard = UIStoryboard(name: "Main", bundle: nil)
                let framePreviewVC = storyBoard.instantiateViewController(withIdentifier: "FramePreviewController") as! FramePreviewController
                framePreviewVC.frameImage = sourceImage!
                self.navigationController?.pushViewController(framePreviewVC, animated: true)
            }
        })
    }
}

Problem:- I can achieve the picture-in-picture effect with this code, but when I try to capture an image using videoOutput (AVCaptureVideoDataOutput), it fails to capture the image. If I do the same thing with photoOutput (AVCaptureStillImageOutput), it does not let me apply the PIP effect on the live camera. Please help me resolve this issue; I have been stuck on it for a week. Any kind of guidance in this direction will be greatly appreciated. Thanks in advance!
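
For what it's worth, the only fallback I can think of is to skip the still image output entirely and just reuse the most recent frame that captureOutput has already converted (the globalImage property above). A rough sketch of that idea is below (it assumes globalImage is an optional UIImage kept up to date by captureOutput), but it would only give me the preview-sized frame rather than a proper full-resolution photo:

// Fallback idea (sketch only): reuse the latest frame delivered to captureOutput
// instead of capturing from the still image output.
@IBAction func takePhoto(_ sender: UIButton) {
    // Assumes globalImage is an optional UIImage property updated in captureOutput
    guard let latestFrame = globalImage else { return }

    let sourceImage = HelperClass.shared().imageByScalingAndCropping(forSize: latestFrame, CGSize(width: 200, height: 200))

    let storyBoard = UIStoryboard(name: "Main", bundle: nil)
    let framePreviewVC = storyBoard.instantiateViewController(withIdentifier: "FramePreviewController") as! FramePreviewController
    framePreviewVC.frameImage = sourceImage!
    navigationController?.pushViewController(framePreviewVC, animated: true)
}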
