While learning Reality Composer, I found that it can anchor content to an image. This means that if I have a picture in real life and a copy of it in Reality Composer, I can build an entire scene on top of that picture. I'd like to know: how does the actual anchoring happen?
I've used SIFT keypoint matching before, which could also be used in this situation, but I can't find out how it works in Reality Composer.
The principle of operation is simple: a Reality Composer scene element called AnchorEntity, contained in the .rcproject file of a RealityKit app, conforms to the HasAnchoring protocol. When the app's image-detection algorithm sees an image through the rear camera, it compares it against the images stored in the app's AR reference-image group. If the captured image matches a reference image, the app creates an image-based AnchorEntity (similar to ARImageAnchor in ARKit) that tethers its corresponding 3D model. The invisible anchor appears at the center of the picture.
AnchorEntity(.image(group: "ARResourceGroup", name: "imageBasedAnchor"))
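Here is a minimal sketch of how such an anchor might be used in code, assuming an asset-catalog AR resource group named "ARResourceGroup" containing a reference image named "imageBasedAnchor" (both names are illustrative, as is the cyan box used as placeholder content):

```swift
import UIKit
import RealityKit

class ViewController: UIViewController {
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Create an image-based anchor from a reference image stored in
        // the asset catalog's AR resource group (names are assumptions).
        let imageAnchor = AnchorEntity(.image(group: "ARResourceGroup",
                                              name: "imageBasedAnchor"))

        // Attach a simple model so something appears on top of the picture.
        let box = ModelEntity(mesh: .generateBox(size: 0.05),
                              materials: [SimpleMaterial(color: .cyan,
                                                         isMetallic: false)])
        imageAnchor.addChild(box)

        // Add the anchor to the scene; RealityKit tracks it automatically.
        arView.scene.addAnchor(imageAnchor)
    }
}
```

Once the anchor is in the scene, RealityKit keeps the box glued to the physical picture without any delegate code on your side.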
When you use image-based anchors in a RealityKit app, you're using RealityKit's analog of ARImageTrackingConfiguration, which is less processor-intensive than ARWorldTrackingConfiguration.
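For comparison, this is roughly what the equivalent manual setup looks like when you run image tracking yourself on the ARView's underlying ARSession; "ARResourceGroup" is again an assumed resource-group name:

```swift
import ARKit
import RealityKit

// Run image tracking manually on the ARView's underlying ARSession.
// This is the configuration RealityKit's image anchors correspond to.
func runImageTracking(on arView: ARView) {
    guard let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "ARResourceGroup", bundle: nil) else { return }

    let config = ARImageTrackingConfiguration()
    config.trackingImages = referenceImages
    config.maximumNumberOfTrackedImages = 1

    arView.session.run(config)
}
```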
The difference between AnchorEntity(.image) and ARImageAnchor is that RealityKit automatically tracks all of its anchors, while ARKit relies on the renderer(...) or session(...) delegate methods for updates, as sketched below.
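To illustrate that difference, here is a sketch of the ARKit side, where you observe anchor updates yourself through ARSessionDelegate methods (the class name is hypothetical):

```swift
import ARKit

// In pure ARKit you observe anchor updates yourself via the session
// delegate; RealityKit's AnchorEntity does this tracking for you.
class SessionDelegate: NSObject, ARSessionDelegate {

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors where anchor is ARImageAnchor {
            // A reference image was detected; place content here.
            print("Image anchor added:", anchor.identifier)
        }
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors {
            // Called as the tracked image's pose is refreshed each frame.
            print("Image anchor updated:", imageAnchor.transform)
        }
    }
}
```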