
I want to switch to using OSVR (via the OSVR plugin), but I don't know how it works.

In OpenVR I have one framebuffer per eye; whatever I write into that buffer shows up in the headset. I'm using an HTC Vive.
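
Roughly, this is what I do with OpenVR today (sketch only; leftTex / rightTex are placeholders for the GL textures I render each eye into):

// Sketch of my current OpenVR path: render each eye into its own GL texture,
// then hand both textures to the compositor.
#include <openvr.h>
#include <cstdint>

void submitEyes(unsigned int leftTex, unsigned int rightTex) {
    vr::Texture_t left  = {reinterpret_cast<void*>(static_cast<std::uintptr_t>(leftTex)),
                           vr::TextureType_OpenGL, vr::ColorSpace_Gamma};
    vr::Texture_t right = {reinterpret_cast<void*>(static_cast<std::uintptr_t>(rightTex)),
                           vr::TextureType_OpenGL, vr::ColorSpace_Gamma};
    vr::VRCompositor()->Submit(vr::Eye_Left, &left);
    vr::VRCompositor()->Submit(vr::Eye_Right, &right);
}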

But now I don't know where those buffers are or how to change what each eye of the HMD shows. I have the OSVR server and the OSVR Vive plugin installed correctly, but even this simple example doesn't work, and I don't see anything in VR:

#include <osvr/ClientKit/ClientKit.h>
#include <osvr/ClientKit/Display.h>
#include "SDL2Helpers.h"
#include "OpenGLCube.h"

#include <SDL.h>
#include <SDL_opengl.h>

#include <iostream>

static auto const WIDTH = 1920;
static auto const HEIGHT = 1080;

// Forward declarations of rendering functions defined below.
void render(osvr::clientkit::DisplayConfig &disp);
void renderScene();

int main(int argc, char *argv[]) {
    namespace SDL = osvr::SDL2;

    // Open SDL
    SDL::Lib lib;

    // Use OpenGL 2.1
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);

    // Create a window
    auto window = SDL::createWindow("OSVR", SDL_WINDOWPOS_UNDEFINED,
                                    SDL_WINDOWPOS_UNDEFINED, WIDTH, HEIGHT,
                                    SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
    if (!window) {
        std::cerr << "Could not create window: " << SDL_GetError() << std::endl;
        return -1;
    }

    // Create an OpenGL context and make it current.
    SDL::GLContext glctx(window.get());

    // Turn on V-SYNC
    SDL_GL_SetSwapInterval(1);

    // Start OSVR and get OSVR display config
    osvr::clientkit::ClientContext ctx("com.osvr.example.SDLOpenGL");
    osvr::clientkit::DisplayConfig display(ctx);
    if (!display.valid()) {
        std::cerr << "\nCould not get display config (server probably not "
                     "running or not behaving), exiting."
                  << std::endl;
        return -1;
    }

    std::cout << "Waiting for the display to fully start up, including "
                 "receiving initial pose update..."
              << std::endl;
    while (!display.checkStartup()) {
        ctx.update();
    }
    std::cout << "OK, display startup status is good!" << std::endl;

    // Event handler
    SDL_Event e;
#ifndef __ANDROID__ // Don't want to pop up the on-screen keyboard
    SDL::TextInput textinput;
#endif
    bool quit = false;
    while (!quit) {
        // Handle all queued events
        while (SDL_PollEvent(&e)) {
            switch (e.type) {
            case SDL_QUIT:
                // Handle some system-wide quit event
                quit = true;
                break;
            case SDL_KEYDOWN:
                if (SDL_SCANCODE_ESCAPE == e.key.keysym.scancode) {
                    // Handle pressing ESC
                    quit = true;
                }
                break;
            }
        }

        // Update OSVR
        ctx.update();

        // Render
        render(display);

        // Swap buffers
        SDL_GL_SwapWindow(window.get());
    }

    return 0;
}

/// @brief A simple dummy "draw" function - note that drawing occurs in "room
/// space" by default. (that is, in this example, the modelview matrix when this
/// function is called is initialized such that it transforms from world space
/// to view space)
void renderScene() { draw_cube(1.0); }

/// @brief The "wrapper" for rendering to a device described by OSVR.
///
/// This function will set up viewport, initialize view and projection matrices
/// to current values, then call `renderScene()` as needed (e.g. once for each
/// eye, for a simple HMD.)
void render(osvr::clientkit::DisplayConfig &disp) {

    // Clear the screen to black and clear depth
    glClearColor(0, 0, 0, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /// For each viewer, eye combination...
    disp.forEachEye([](osvr::clientkit::Eye eye) {

        /// Try retrieving the view matrix (based on eye pose) from OSVR
        double viewMat[OSVR_MATRIX_SIZE];
        eye.getViewMatrix(OSVR_MATRIX_COLMAJOR | OSVR_MATRIX_COLVECTORS,
                          viewMat);
        /// Initialize the ModelView transform with the view matrix we
        /// received
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glMultMatrixd(viewMat);

        /// For each display surface seen by the given eye of the given
        /// viewer...
        eye.forEachSurface([](osvr::clientkit::Surface surface) {
            auto viewport = surface.getRelativeViewport();
            glViewport(static_cast<GLint>(viewport.left),
                       static_cast<GLint>(viewport.bottom),
                       static_cast<GLsizei>(viewport.width),
                       static_cast<GLsizei>(viewport.height));

            /// Set the OpenGL projection matrix based on the one we
            /// computed.
            double zNear = 0.1;
            double zFar = 100;
            double projMat[OSVR_MATRIX_SIZE];
            surface.getProjectionMatrix(
                zNear, zFar, OSVR_MATRIX_COLMAJOR | OSVR_MATRIX_COLVECTORS |
                                 OSVR_MATRIX_SIGNEDZ | OSVR_MATRIX_RHINPUT,
                projMat);

            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glMultMatrixd(projMat);

            /// Set the matrix mode to ModelView, so render code doesn't
            /// mess with the projection matrix on accident.
            glMatrixMode(GL_MODELVIEW);

            /// Call out to render our scene.
            renderScene();
        });
    });

    /// Successfully completed a frame render.
} 

Does anyone know how this works?


1 Answer


Instead of the display config API, use osvrRenderManager to get the render info and present frames to the HMD. The display config API is a lower-level API; it does not handle things like window placement for extended-mode or direct-mode rendering on the Vive, or adjusting the projection for a render-target scale factor. That is normally handled by the RenderManager API.

https://github.com/sensics/osvr-rendermanager
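
A minimal sketch of what that path looks like, modeled loosely on the OpenGL examples that ship with the osvr-rendermanager repository. The header paths, callback signatures, and the OSVR_*_to_OpenGL matrix helpers are written from memory and may differ between RenderManager versions, so treat this as an outline rather than a drop-in replacement; OpenGLCube.h / draw_cube are the same helpers used in the question's sample.

// Rough outline of the RenderManager render loop (callback style). RenderManager
// owns the per-eye render targets and the output window (extended or direct mode),
// so the application never touches the eye framebuffers directly.
#include <osvr/ClientKit/Context.h>
#include <osvr/RenderKit/RenderManager.h>
#include <osvr/RenderKit/RenderKitGraphicsTransforms.h> // OSVR_*_to_OpenGL helpers (location may vary)

#include "OpenGLCube.h" // draw_cube(), from the sample in the question

#include <SDL.h>
#include <SDL_opengl.h>

#include <iostream>

// RenderManager calls this once per frame to let us clear/prepare the display.
void setupDisplay(void*, osvr::renderkit::GraphicsLibrary, osvr::renderkit::RenderBuffer) {
    glClearColor(0, 0, 0, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}

// Called once per eye/surface with the viewport, eye pose, and projection to use.
void renderView(void*, osvr::renderkit::GraphicsLibrary,
                osvr::renderkit::RenderBuffer,
                osvr::renderkit::OSVR_ViewportDescription viewport,
                OSVR_PoseState pose,
                osvr::renderkit::OSVR_ProjectionMatrix projection,
                OSVR_TimeValue /*deadline*/) {
    glViewport(static_cast<GLint>(viewport.left),
               static_cast<GLint>(viewport.lower),
               static_cast<GLsizei>(viewport.width),
               static_cast<GLsizei>(viewport.height));

    // Convert the OSVR projection and eye pose into fixed-function GL matrices.
    GLdouble projGL[16], viewGL[16];
    osvr::renderkit::OSVR_Projection_to_OpenGL(projGL, projection);
    osvr::renderkit::OSVR_PoseState_to_OpenGL(viewGL, pose);

    glMatrixMode(GL_PROJECTION);
    glLoadMatrixd(projGL);
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixd(viewGL);

    draw_cube(1.0);
}

int main(int argc, char* argv[]) {
    osvr::clientkit::ClientContext ctx("com.osvr.example.RenderManagerSketch");

    // Ask for an OpenGL RenderManager; the OSVR server's display/RenderManager
    // config decides direct vs. extended mode, window placement, scaling, etc.
    osvr::renderkit::RenderManager* render =
        osvr::renderkit::createRenderManager(ctx.get(), "OpenGL");
    if ((render == nullptr) || !render->doingOkay()) {
        std::cerr << "Could not create RenderManager" << std::endl;
        return -1;
    }

    render->SetDisplayCallback(setupDisplay);
    render->AddRenderCallback("/", renderView);

    // Opens the output window / direct-mode display and the GL context.
    osvr::renderkit::RenderManager::OpenResults ret = render->OpenDisplay();
    if (ret.status == osvr::renderkit::RenderManager::OpenStatus::FAILURE) {
        std::cerr << "Could not open display" << std::endl;
        return -2;
    }

    // Per frame: update tracking, then let RenderManager invoke the callbacks for
    // each eye and present the result (with distortion correction) to the HMD.
    bool quit = false; // hook up real input handling to set this
    while (!quit) {
        ctx.update();
        if (!render->Render()) {
            std::cerr << "Render() failed, exiting" << std::endl;
            quit = true;
        }
    }

    delete render;
    return 0;
}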

Answered 2017-07-12T17:42:56.643