
How to mirror or clone the WebXR 'immersive-xr' view from an HMD (such as a VIVE or Oculus) to the browser's WebGL canvas in the same browser

There are answers about copying the pixels to a texture2D and then applying it as a render texture, or re-rendering the scene with an adjusted viewTransform. Those work well if you want to render a different view, such as a remote camera or a third-person spectator view, but both waste resources if all you want is to mirror the current HMD view on the desktop.

Self-answered below, because there was no solid answer out there when I ran into this problem, and I'd like to save future developers some time. (Especially if they aren't fully versed in both WebGL2 and WebXR.)

Note that I'm not using any existing framework for this project, for "reasons". If you are, not much should change; you'll just need to perform these steps at the appropriate points in your library's render pipeline.


1 Answer


It turns out the answer was delightfully simple and barely dented my fps.

  1. Attach the canvas to the DOM and style it to the desired size. (Mine is fluid, so its parent container's CSS is width: 100% with height: auto.)
  2. When initializing the glContext, be sure to specify antialias: false. This matters if your spectator and HMD views are different resolutions. {xrCompatible: true, webgl2: true, antialias: false}
  3. Create a frameBuffer that will store the rendered HMD view: spectateBuffer.
  4. Draw the immersive-xr layer as usual in your xrSession.requestAnimationFrame(OnXRFrame); callback.
  5. Just before exiting your OnXRFrame method, add a call that draws the spectator view. I personally use a bool showCanvas so I can toggle the spectator mirror on and off as needed:
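As a minimal sketch of step 2's context setup (the canvas id below is an assumption, and I use the standard getContext form; the original's webgl2: true flag is presumably consumed by the author's own init helper):

```javascript
// Context attributes for step 2. antialias: false matters because
// blitFramebuffer cannot read from a multisampled (antialiased) framebuffer,
// and the spectator canvas is a different resolution than the HMD layer.
const contextAttributes = { xrCompatible: true, antialias: false };

// In the browser (the canvas id 'xr-canvas' is an assumption):
//   const canvas = document.getElementById('xr-canvas');
//   const _glContext = canvas.getContext('webgl2', contextAttributes);
```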
//a quick reference I like to use for enums and types
const GL = WebGL2RenderingContext;

//Create a buffer for my spectate view so that I can just re-use it at will.
let spectateBuffer = _glContext.createFramebuffer();

//Called each frame, as per usual
function OnXRFrame(timestamp, xrFrame){
    //Bind my spectate framebuffer to the webGL2 readbuffer
    _glContext.bindFramebuffer(GL.READ_FRAMEBUFFER, spectateBuffer);

    //...Get my pose, update my scene objects
    //...Oh my, a bunch of stuff happens here
    //...finally gl.drawElements(GL.TRIANGLES...

    //render spectator canvas
    if(showCanvas){
        DrawSpectator();
    }

    //Request next animation callback
    xrFrame.session.requestAnimationFrame(OnXRFrame);
}

//A tad more verbose than needed, to illustrate what's going on.
//You don't need to declare the src and dest x/y's as their own variables
function DrawSpectator(){
    //Set the DRAW_FRAMEBUFFER to null; this tells the renderer to draw to the canvas.
    _glContext.bindFramebuffer(GL.DRAW_FRAMEBUFFER, null);

    //Store last HMD canvas view size (Mine was 0.89:1 aspect, 2296x2552)
    let bufferWidth = _glContext.canvas.width;
    let bufferHeight = _glContext.canvas.height;

    //Set canvas view size for the spectator view (Mine was 2:1 aspect, 1280x640)
    _glContext.canvas.width = _glContext.canvas.clientWidth;
    _glContext.canvas.height = _glContext.canvas.clientWidth / 2;

    //Define the bounds of the source buffer you want to use
    let srcX0 = 0;
    let srcY0 = bufferHeight * 0.25;    //I crop off the bottom 25% of the HMD's view
    let srcX1 = bufferWidth;
    let srcY1 = bufferHeight - (bufferHeight * 0.25);   //I crop off the top 25% of the HMD's view

    //Define the bounds of the output buffer
    let dstY0 = 0;
    let dstX0 = 0;
    let dstY1 = _glContext.canvas.height;
    let dstX1 = _glContext.canvas.width;

    //Blit the source buffer to the output buffer
    _glContext.blitFramebuffer(
        srcX0, srcY0, srcX1, srcY1,
        dstX0, dstY0, dstX1, dstY1,
        GL.COLOR_BUFFER_BIT, GL.NEAREST);
}
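The rectangle arithmetic in DrawSpectator can be isolated into a small pure helper (the function name and the cropFraction parameter are my own, not from the original):

```javascript
// Hypothetical helper: compute the blitFramebuffer rectangles, cropping an
// equal fraction off the top and bottom of the source (HMD) view.
function spectateRects(bufferWidth, bufferHeight, canvasWidth, canvasHeight, cropFraction) {
    return {
        src: {
            x0: 0,
            y0: bufferHeight * cropFraction,
            x1: bufferWidth,
            y1: bufferHeight - bufferHeight * cropFraction
        },
        dst: { x0: 0, y0: 0, x1: canvasWidth, y1: canvasHeight }
    };
}

// With the sizes from the comments above (2296x2552 HMD view, 1280x640 canvas,
// 25% crop), source rows 638..1914 are stretched across the whole canvas.
const rects = spectateRects(2296, 2552, 1280, 640, 0.25);
```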

Note: I only show one of my HMD eye views as the spectator view. To show both, you'd need to store one spectator framebuffer per eye and blit them side by side.
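A sketch of that side-by-side layout (the helper name and the per-eye blit shown in the comments are assumptions, not the original code):

```javascript
// Hypothetical helper: split the canvas into left/right destination
// rectangles, one per eye, for side-by-side mirroring.
function eyeDestRects(canvasWidth, canvasHeight) {
    const halfW = canvasWidth / 2;
    return [
        { x0: 0,     y0: 0, x1: halfW,       y1: canvasHeight }, // left eye
        { x0: halfW, y0: 0, x1: canvasWidth, y1: canvasHeight }  // right eye
    ];
}

// For each eye: bind that eye's framebuffer as the READ_FRAMEBUFFER,
// then blit into its rectangle, e.g.:
//   _glContext.bindFramebuffer(GL.READ_FRAMEBUFFER, leftEyeBuffer);
//   _glContext.blitFramebuffer(srcX0, srcY0, srcX1, srcY1,
//       r.x0, r.y0, r.x1, r.y1, GL.COLOR_BUFFER_BIT, GL.NEAREST);
const [leftRect, rightRect] = eyeDestRects(1280, 640);
```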

I hope this saves future Googlers some pain.

Answered 2021-05-11T12:46:45.093