I am trying to train an MLModel for image classification. I built an app that creates the images I use as training data (the same capture path will later be used to get predictions). I get a CVPixelBuffer from an AVCaptureSession, convert it to a UIImage, and save it as a JPEG in the documents directory. Later I label the images and train the MLModel with Create ML in a playground. Since I collected thousands of images, the accuracy reported in the playground is 100%.
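For reference, the capture-and-save path looks roughly like this. This is a minimal sketch, not my exact code: the class name TrainingFrameSaver, the JPEG quality, and the file-naming scheme are assumptions for illustration.

import AVFoundation
import CoreImage
import UIKit

// Hypothetical sample-buffer delegate that saves each camera frame as a JPEG for labelling later.
final class TrainingFrameSaver: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let ciContext = CIContext()   // reused across frames

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // The CVPixelBuffer delivered by the AVCaptureSession.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // CVPixelBuffer -> CGImage -> UIImage via Core Image.
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
        let image = UIImage(cgImage: cgImage)

        // Write the frame to the documents directory as a JPEG.
        guard let jpegData = image.jpegData(compressionQuality: 0.9) else { return }
        let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        let fileURL = documentsURL.appendingPathComponent(UUID().uuidString + ".jpg")
        try? jpegData.write(to: fileURL)
    }
}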
But when I integrate this model into my app and feed it the same way, the results are terrible. I get the CVPixelBuffer, convert it to a UIImage (and crop it), then convert the cropped image back to a CVPixelBuffer and feed that to the model. I have to convert the UIImage to a CVPixelBuffer because the Core ML model only accepts CVPixelBuffer input. I use this method (defined in a UIImage extension) to convert the UIImage to a CVPixelBuffer:
// Called as an extension method on UIImage, so `self` is the source image.
func pixelBuffer(width: Int, height: Int) -> CVPixelBuffer? {
    var maybePixelBuffer: CVPixelBuffer?

    // Create an empty 32ARGB pixel buffer that Core Graphics can draw into.
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_32ARGB,
                                     attrs as CFDictionary,
                                     &maybePixelBuffer)
    guard status == kCVReturnSuccess, let pixelBuffer = maybePixelBuffer else {
        return nil
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer)

    // Wrap the buffer's memory in a CGContext so the image can be rendered into it.
    guard let context = CGContext(data: pixelData,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    else {
        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        return nil
    }

    // Flip the coordinate system (Core Graphics draws bottom-up) and render the image.
    UIGraphicsPushContext(context)
    context.translateBy(x: 0, y: CGFloat(height))
    context.scaleBy(x: 1, y: -1)
    self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
    UIGraphicsPopContext()

    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    return pixelBuffer
}
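For completeness, this is roughly how the converted buffer is fed to the model. This is a sketch only: the class name MyImageClassifier, the 299x299 input size, and the prediction(image:) / classLabel names are assumptions based on a typical Xcode-generated Create ML image classifier interface, not my actual model.

import CoreML
import UIKit

// Hypothetical inference path using the pixelBuffer(width:height:) extension above.
func classify(_ croppedImage: UIImage) -> String? {
    // Resize/convert the cropped UIImage into the CVPixelBuffer the model expects.
    guard let buffer = croppedImage.pixelBuffer(width: 299, height: 299) else { return nil }
    guard let model = try? MyImageClassifier(configuration: MLModelConfiguration()) else { return nil }
    guard let output = try? model.prediction(image: buffer) else { return nil }
    return output.classLabel   // top predicted label
}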
I suspect my results are poor because the Core ML model does not like the converted CVPixelBuffer.
Does anyone have any suggestions?