
Question

My main goal is to get the model-space coordinates of a touch on the device, so that I can check what was touched. I am working with a large model; a lot of geometry has to be drawn, and all of it has to be touchable as well.

To achieve this, I know of two possible approaches. On the one hand, we could do a ray cast and intersect the camera's pointing vector with the model, which we would then have to keep somewhere in memory. On the other hand, and this is what I want to do, we can do it the old-fashioned way:

function gluUnProject(winx, winy, winz: TGLdouble;
                      const modelMatrix: TGLMatrixd4;
                      const projMatrix: TGLMatrixd4;
                      const viewport: TGLVectori4;
                      objx, objy, objz: PGLdouble): TGLint;

and transform the screen coordinates back into model coordinates. Am I correct up to this point? Do you know of other ways to do touch handling in an OpenGL application? As you can see, the function takes winz as a parameter, which is the depth of the fragment at that screen coordinate; this information normally comes from the depth buffer. I already know that OpenGL ES 2.0 does not provide access to the depth buffer it uses internally, the way "normal" OpenGL does. So how can I get that information?

Apple offers two possibilities: either create an offscreen framebuffer with a depth attachment, or render the depth information into a texture. Sadly for me, the manual does not show a way to read that information back on iOS. I figured I would have to use glReadPixels to read it back. I implemented everything I could find, but no matter how I set it up, I never got a correct depth result from either the offscreen framebuffer or the texture. I expected to get a GL_FLOAT with the z value.

z:28550323

r:72 g:235 b:191 [3]:1 <-- always this

Code

gluUnProject

As we know, the GLU library is not available on iOS, so I looked up the code and implemented the following method based on this source: link. The GLKVector2 screen input variable holds the X, Y coordinates on screen, read from a UITapGestureRecognizer.

-(GLKVector4)unprojectScreenPoint:(GLKVector2)screen {

//get active viewport
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
NSLog(@"viewport [0]:%d [1]:%d [2]:%d [3]:%d", viewport[0], viewport[1], viewport[2], viewport[3]);

//get matrices; unprojecting needs the inverse of projection * modelview
//(GLKMatrix4Multiply(a, b) computes a * b, so the projection goes first)
GLKMatrix4 projectionModelViewMatrix = GLKMatrix4Multiply(_projectionMatrix, _modelViewMatrix);
projectionModelViewMatrix = GLKMatrix4Invert(projectionModelViewMatrix, NULL);

//in ios Y is inverse
screen.v[1] = viewport[3]-screen.v[1];
NSLog(@"screen: [0]:%.2f [1]:%.2f", screen.v[0], screen.v[1]);

//read from the depth component of the last rendered offscreen framebuffer
//(ES 2.0 does not accept GL_DEPTH_COMPONENT as a glReadPixels format, so this fails)
/*
GLubyte z;
glBindFramebuffer(GL_FRAMEBUFFER, _depthFramebuffer);
glReadPixels(screen.v[0], screen.v[1], 1, 1, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, &z);
NSLog(@"z:%d", z);
*/

//read from the last rendered depth texture
//(note: glReadPixels reads from the currently bound framebuffer, not the bound texture)
Byte rgb[4];
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glReadPixels(screen.v[0], screen.v[1], 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgb);
glBindTexture(GL_TEXTURE_2D, 0);
NSLog(@"r:%d g:%d b:%d [3]:%d", rgb[0], rgb[1], rgb[2], rgb[3]);

//winz is hard-coded to 1 (far plane) until the depth read works
GLKVector4 in = GLKVector4Make(screen.v[0], screen.v[1], 1, 1.0);

/* Map x and y from window coordinates */
in.v[0] = (in.v[0] - viewport[0]) / viewport[2];
in.v[1] = (in.v[1] - viewport[1]) / viewport[3];

/* Map to range -1 to 1 */
in.v[0] = in.v[0] * 2.0 - 1.0;
in.v[1] = in.v[1] * 2.0 - 1.0;
in.v[2] = in.v[2] * 2.0 - 1.0;

GLKVector4 out = GLKMatrix4MultiplyVector4(projectionModelViewMatrix, in);
if(out.v[3]==0.0) {
    NSLog(@"out.v[3]==0.0");
    return GLKVector4Make(0.0, 0.0, 0.0, 0.0);
}
out.v[0] /= out.v[3];
out.v[1] /= out.v[3];
out.v[2] /= out.v[3];

return out;

}

It tries to read from either the depth renderbuffer or the depth texture, both of which are generated while drawing. I know this code is very inefficient, but first it has to work before I clean it up.

I have tried rendering only into the extra framebuffer (commented out here), only into the texture, and both together, without success.

-(void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    
    glUseProgram(_program);
    
    //http://stackoverflow.com/questions/10761902/ios-glkit-and-back-to-default-framebuffer
    GLint defaultFBO;
    glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &defaultFBO);
    GLint defaultRBO;
    glGetIntegerv(GL_RENDERBUFFER_BINDING_OES, &defaultRBO);
    //GLint defaultDepthRenderBuffer;
    //glGetIntegerv(GL_Depth_B, &defaultRBO);
    
    GLuint width, height;
    //width = height = 512;
    width = self.view.frame.size.width;
    height = self.view.frame.size.height;
    
    
    GLuint framebuffer;
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    
    /* method offscreen framebuffer
    GLuint depthRenderbuffer;
    glGenRenderbuffers(1, &depthRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
    */
    
    //method render to texture
    glActiveTexture(GL_TEXTURE1);
    //https://github.com/rmaz/Shadow-Mapping        
    //http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html
    GLuint depthTexture;
    glGenTextures(1, &depthTexture);
    glBindTexture(GL_TEXTURE_2D, depthTexture);
    
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // we do not want to wrap, this will cause incorrect shadows to be rendered
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // set up the depth compare function to check the shadow depth
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_EXT, GL_LEQUAL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_EXT, GL_COMPARE_REF_TO_TEXTURE_EXT);
    
    //glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8_OES,  width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);
    
    
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER) ;
    if(status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"failed to make complete framebuffer object %x", status);
    }
    
    GLenum glError = glGetError();
    if(GL_NO_ERROR != glError) {
        NSLog(@"Offscreen OpenGL Error: %d", glError);
    }
    
    glClear(GL_DEPTH_BUFFER_BIT);
    //glCullFace(GL_FRONT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glUniform1i(_uniforms.renderMode, 1);
    
    //        
    //Drawing calls
    //

    _depthTexture = depthTexture;
    //_depthFramebuffer = depthRenderbuffer;
    
    // Revert to the default framebuffer for now
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);
    glBindRenderbuffer(GL_RENDERBUFFER, defaultRBO);
    
    
    // Render normally
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glClearColor(0.316f, 0.50f, 0.86f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    //glCullFace(GL_BACK);
    
    glUniform1i(_uniforms.renderMode, 0);
    [self update];

    //        
    //Drawing calls
    //        
}


The z value could come back as a different type. Is it simply a float that I have to put back into a float data type?

Thanks for your support!

Edit 1

I am now also getting RGBA values out of the texture I render into. To do that, I activate the separate framebuffer while drawing and attach the texture to it as a color attachment instead of a depth attachment. I edited the code above. Now I get the following values:

screen: [0]:604.00 [1]:348.00
r:102 g:102 b:102 [3]:255

screen: [0]:330.00 [1]:566.00
r:73 g:48 b:32 [3]:255

screen: [0]:330.00 [1]:156.00
r:182 g:182 b:182 [3]:255

screen: [0]:266.00 [1]:790.00
r:80 g:127 b:219 [3]:255

screen: [0]:548.00 [1]:748.00
r:80 g:127 b:219 [3]:255

As you can see, RGBA values are being read. The good news is that when I touch the sky, where there is no model, the values are always the same, while touching the model gives varying values. So I assume the texture is correct. But how do I now reassemble the real value from these 4 bytes, so that I can pass it on to gluUnProject? I cannot simply cast it to a float.

