
I have an ARSCNView that renders virtual objects. The virtual objects are drawn onto the user's face. The session runs with the following configuration:

let configuration = ARFaceTrackingConfiguration()
configuration.worldAlignment = .gravityAndHeading

sceneView.session.run(configuration)

This ARSCNView is part of a video call. If we send back the pixel buffer like this,

public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    videoSource.sendBuffer(frame.capturedImage, timestamp: frame.timestamp)
}

the virtual objects do not show up for my caller.

One thing I tried was, instead of relying on the ARSessionDelegate callback, sending frames with a DispatchSourceTimer:

private let timer = DispatchSource.makeTimerSource()

func startCaptureView() {
    // Fire every 0.1 seconds
    timer.schedule(deadline: .now(), repeating: .milliseconds(100))
    timer.setEventHandler { [weak self] in
        // Snapshot the scene view and grab the underlying CGImage
        guard let sceneImage: CGImage = self?.sceneView.snapshot().cgImage else {
            return
        }

        self?.videoSourceQueue.async { [weak self] in
            // Convert the snapshot into a CVPixelBuffer and send it
            if let buffer: CVPixelBuffer = ImageProcessor.pixelBuffer(forImage: sceneImage) {
                self?.videoSource.sendBuffer(buffer, timestamp: Double(mach_absolute_time()))
            }
        }
    }

    timer.resume()
}

The caller receives the data slowly, the video experience is choppy, and the image is not sized correctly.

Any suggestions on how to send the virtual object content along with the captured frames?

Reference: https://medium.com/agora-io/augmented-reality-video-conference-6845c001aec0


1 Answer


The reason the virtual objects do not appear is that ARKit only provides the raw image: frame.capturedImage is the image captured by the camera, without any of the SceneKit rendering. To pass along the rendered video, you need to implement an off-screen SCNRenderer and hand the resulting pixel buffers to Agora's SDK.
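
To make the idea concrete, here is a minimal sketch of an off-screen renderer, assuming Metal and a fixed 720x1280 BGRA output; the class name OffscreenSceneRenderer and renderFrame(atTime:) are hypothetical, not part of any SDK. It renders an SCNScene into a CVPixelBuffer that could be handed to a video source:

import SceneKit
import Metal
import CoreVideo

// Hypothetical sketch: draws a SceneKit scene into a CVPixelBuffer.
final class OffscreenSceneRenderer {
    private let device = MTLCreateSystemDefaultDevice()!
    private let commandQueue: MTLCommandQueue
    private let renderer: SCNRenderer
    private var textureCache: CVMetalTextureCache?
    private let size = CGSize(width: 720, height: 1280)   // assumed output size

    init(scene: SCNScene, pointOfView: SCNNode?) {
        commandQueue = device.makeCommandQueue()!
        renderer = SCNRenderer(device: device, options: nil)
        renderer.scene = scene
        renderer.pointOfView = pointOfView
        CVMetalTextureCacheCreate(nil, nil, device, nil, &textureCache)
    }

    // Renders the scene at `time` and returns a BGRA pixel buffer.
    func renderFrame(atTime time: TimeInterval) -> CVPixelBuffer? {
        guard let cache = textureCache else { return nil }

        // Create a Metal-compatible pixel buffer to render into.
        var pixelBuffer: CVPixelBuffer?
        let attrs = [kCVPixelBufferMetalCompatibilityKey: true] as CFDictionary
        CVPixelBufferCreate(nil, Int(size.width), Int(size.height),
                            kCVPixelFormatType_32BGRA, attrs, &pixelBuffer)
        guard let buffer = pixelBuffer else { return nil }

        // Wrap the pixel buffer in a Metal texture so SceneKit can draw into it.
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(nil, cache, buffer, nil, .bgra8Unorm,
                                                  Int(size.width), Int(size.height), 0, &cvTexture)
        guard let cvTex = cvTexture,
              let texture = CVMetalTextureGetTexture(cvTex) else { return nil }

        let pass = MTLRenderPassDescriptor()
        pass.colorAttachments[0].texture = texture
        pass.colorAttachments[0].loadAction = .clear
        pass.colorAttachments[0].storeAction = .store

        guard let commandBuffer = commandQueue.makeCommandBuffer() else { return nil }
        renderer.render(atTime: time,
                        viewport: CGRect(origin: .zero, size: size),
                        commandBuffer: commandBuffer,
                        passDescriptor: pass)
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()   // block until the GPU finishes writing
        return buffer
    }
}

Note this only covers the SceneKit rendering; compositing the camera image behind the virtual content, handling orientation, and pacing frames are all additional work, which is why I recommend the libraries below.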

I would suggest taking a look at the open-source framework AgoraARKit. I wrote the framework; it implements the Agora.io Video SDK and uses ARVideoKit as a dependency. ARVideoKit is a popular library that implements an off-screen renderer and provides the rendered pixel buffers.
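
For a sense of how ARVideoKit surfaces those buffers: its RecordAR renderer can report every rendered frame through the RenderARDelegate protocol, roughly as sketched below (written from memory of ARVideoKit's API, so verify the property and method names against its docs; videoSource stands for the custom Agora video source from the question):

import ARKit
import ARVideoKit

class BroadcastViewController: UIViewController, RenderARDelegate {

    var sceneView: ARSCNView!
    var videoSource: ARVideoSource!   // the custom Agora video source from the question

    var arvkRenderer: RecordAR?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Back the off-screen renderer with the existing ARSCNView
        arvkRenderer = RecordAR(ARSceneKit: sceneView)
        arvkRenderer?.renderAR = self                   // receive every rendered frame
        arvkRenderer?.onlyRenderWhileRecording = false
    }

    // RenderARDelegate: `buffer` is the fully rendered frame (camera image
    // plus virtual content); `rawBuffer` is the camera-only frame.
    func frame(didRender buffer: CVPixelBuffer, with time: CMTime, using rawBuffer: CVPixelBuffer) {
        videoSource.sendBuffer(buffer, timestamp: time.seconds)
    }
}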

The library implements WorldTracking by default. If you want to extend the ARBroadcaster class to implement face tracking, you can use the code below:

import ARKit

class FaceBroadcaster : ARBroadcaster {

    // Dictionary of face nodes, keyed by anchor identifier
    var faceNodes: [UUID: SCNNode] = [:]

    override func viewDidLoad() {
        super.viewDidLoad() 
    }

    override func setARConfiguration() {
        print("setARConfiguration")
        // Configure the ARKit session for face tracking
        let configuration = ARFaceTrackingConfiguration()
        configuration.isLightEstimationEnabled = true
        // Run the config to start the ARSession and prepare the off-screen renderer
        self.sceneView.session.run(configuration)
        self.arvkRenderer?.prepare(configuration)
    }

    // anchor detection
    override func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        super.renderer(renderer, didAdd: node, for: anchor)
        guard let sceneView = renderer as? ARSCNView, anchor is ARFaceAnchor else { return }
        /*
         Write depth but not color and render before other objects.
         This causes the geometry to occlude other SceneKit content
         while showing the camera view beneath, creating the illusion
         that real-world faces are obscuring virtual 3D objects.
         */
        let faceGeometry = ARSCNFaceGeometry(device: sceneView.device!)!
        faceGeometry.firstMaterial!.colorBufferWriteMask = []
        let occlusionNode = SCNNode(geometry: faceGeometry)
        occlusionNode.renderingOrder = -1

        let contentNode = SCNNode()
        contentNode.addChildNode(occlusionNode)
        node.addChildNode(contentNode)
        faceNodes[anchor.identifier] = node
    }
}
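
Since ARBroadcaster is a UIViewController subclass (as the viewDidLoad override above suggests), the face tracker can then be launched like any other view controller. A minimal, hypothetical launch; check AgoraARKit's README for the current channel-setup API:

// Present the face-tracking broadcaster (channel configuration omitted;
// see AgoraARKit's README for joining a channel).
let broadcaster = FaceBroadcaster()
present(broadcaster, animated: true)
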
answered Apr 20, 2020 at 17:37