
I am using macOS Big Sur version 11.2.3 and Xcode version 12.4. I want to obtain the outer square of a sudoku image that has perspective distortion. I proceed as follows:

  1. Perform a rectangle detection request. This provides the corner points of the outer rectangle.

  2. Perform a perspective correction. This yields a perfectly square rectangle.

  3. Now I want to crop the image at the sudoku's outer frame.

  4. Perform a second rectangle detection request on the perspective-corrected image to obtain the rectangle for the crop operation.
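Step 3 would then look roughly like this (a sketch, not my working code, since the second detection is exactly what fails; it assumes a `VNRectangleObservation` named `observation` is available). The bounding box Vision returns is normalized with a lower-left origin, so it has to be mapped into pixel space, and `CGImage.cropping(to:)` uses a top-left origin, so the Y axis is flipped:

```swift
import Vision
import CoreGraphics

// Hypothetical helper: crop a CGImage to a detected rectangle's bounding box.
// `observation.boundingBox` is normalized (0...1, lower-left origin), so it is
// first converted to pixel coordinates with VNImageRectForNormalizedRect.
func crop(_ cgImage: CGImage, to observation: VNRectangleObservation) -> CGImage? {
    let width = cgImage.width
    let height = cgImage.height
    var rect = VNImageRectForNormalizedRect(observation.boundingBox, width, height)
    // CGImage.cropping(to:) expects a top-left origin, so flip the Y coordinate.
    rect.origin.y = CGFloat(height) - rect.origin.y - rect.height
    return cgImage.cropping(to: rect)
}
```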

Surprisingly, the result of this second rectangle detection is an empty array.

I have a suspicion about what the cause might be.

Printing the properties of the original CGImage gives:

    Original image:
    <CGImage 0x7f92e4415560> (IP)
        <<CGColorSpace 0x6000035faf40> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; sRGB IEC61966-2.1)>
            width = 2448, height = 3264, bpc = 8, bpp = 32, row bytes = 9792
            kCGImageAlphaNoneSkipLast | 0 (default byte order)  | kCGImagePixelFormatPacked
            is mask? No, has masking color? No, has soft mask? No, has matte? No, should interpolate? Yes
    2021-04-06 19:15:04.445374+0200 StackExchangeHilfe[1959:100561] Metal API Validation Enabled

Printing the properties of the perspective-corrected CGImage gives:

    Corrected image:
    <CGImage 0x7f92f451f180> (DP)
        <<CGColorSpace 0x6000035fae80> (kCGColorSpaceDeviceRGB)>
            width = 2073, height = 2194, bpc = 8, bpp = 32, row bytes = 8320
            kCGImageAlphaPremultipliedLast | 0 (default byte order)  | kCGImagePixelFormatPacked
            is mask? No, has masking color? No, has soft mask? No, has matte? No, should interpolate? Yes

The bitmapInfo differs:

Original image: kCGImageAlphaNoneSkipLast

Corrected image: kCGImageAlphaPremultipliedLast

No other CIFilter changes the bitmapInfo.

I tried to change this value, but it is a read-only property. Then again, maybe my suspicion is completely wrong.
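One way to test this suspicion might be to redraw the corrected CGImage into a bitmap context configured with the original image's alpha info, and run the second detection on the result. This is only a sketch of the idea (the function name is mine, and I have not verified it fixes the detection), since `CGImage.bitmapInfo` itself cannot be set:

```swift
import CoreGraphics

// Hypothetical workaround: re-render a CGImage through a CGContext whose
// alpha info matches the original image (kCGImageAlphaNoneSkipLast).
func redrawWithoutAlpha(_ image: CGImage) -> CGImage? {
    guard let context = CGContext(
        data: nil,
        width: image.width,
        height: image.height,
        bitsPerComponent: 8,
        bytesPerRow: 0,  // let Core Graphics choose the row stride
        space: CGColorSpace(name: CGColorSpace.sRGB)!,
        bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue
    ) else { return nil }
    context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    return context.makeImage()
}
```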

In any case, can anyone help? Thanks in advance.

    import UIKit
    import Vision

    class ViewController: UIViewController {
        @IBOutlet weak var origImageView: UIImageView!
        @IBOutlet weak var correctedImageView: UIImageView!
        
        let imageName = "sudoku"
        var origImage: UIImage!
        
        override func viewDidLoad() {
            super.viewDidLoad()
            origImage = UIImage(named: imageName)
            origImageView.image = origImage
            let correctedImage = performOperationsWithUIImage(origImage)
            correctedImageView.image = correctedImage
        }
        
        func performOperationsWithUIImage(_ image: UIImage) -> UIImage? {
            let cgImage = image.cgImage!
            print("Original image:")
            print("\(String(describing: cgImage))")
            
            // Create rectangle detect request
            let rectDetectRequest = VNDetectRectanglesRequest()
            // Customize & configure the request to detect only certain rectangles.
            rectDetectRequest.maximumObservations = 8 // Vision currently supports up to 16.
            rectDetectRequest.minimumAspectRatio = 0.8 // height / width
            rectDetectRequest.quadratureTolerance = 30
            rectDetectRequest.minimumSize = 0.5
            rectDetectRequest.minimumConfidence = 0.6
            
            // Create a request handler.
            let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
            // Send the requests to the request handler.
            do {
                try imageRequestHandler.perform([rectDetectRequest])
            } catch {
                print("Failed to perform first image request: \(error)")
                return nil
            }
            guard let results = rectDetectRequest.results as? [VNRectangleObservation]
            else { return nil }
            print("\nFirst rectangle request result:")
            print("\(results.count) rectangle(s) detected:")
            print("\(String(describing: results))")
            
            // Perform perspective correction
            let width = cgImage.width
            let height = cgImage.height
            guard let filter = CIFilter(name:"CIPerspectiveCorrection")  else { return nil }
            
            filter.setValue(CIImage(image: image), forKey: "inputImage")
            // Bail out if no rectangle was detected instead of force-unwrapping.
            guard let outerRect = results.first else { return nil }
            filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(outerRect.topLeft, width, height)), forKey: "inputTopLeft")
            filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(outerRect.topRight, width, height)), forKey: "inputTopRight")
            filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(outerRect.bottomLeft, width, height)), forKey: "inputBottomLeft")
            filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(outerRect.bottomRight, width, height)), forKey: "inputBottomRight")
            
            guard
                let outputCIImage = filter.outputImage,
                let outputCGImage = CIContext(options: nil).createCGImage(outputCIImage, from: outputCIImage.extent)  else {return nil}
            
            print("\nCorrected image:")
            print("\(String(describing: outputCGImage))")
            
            // Perform another rectangle detection
            let newImageRequestHandler = VNImageRequestHandler(cgImage: outputCGImage, orientation: .up, options: [:])
            // Send the requests to the request handler.
            do {
                try newImageRequestHandler.perform([rectDetectRequest])
            } catch {
                print("Failed to perform second image request: \(error)")
                return nil
            }
            guard let newResults = rectDetectRequest.results as? [VNRectangleObservation]
            else { return nil }
            print("\nSecond rectangle request result:")
            print("\(newResults.count) rectangle(s) detected:")
            print("\(String(describing: newResults))")

            return UIImage(cgImage: outputCGImage)
        }
    }

