
I am drawing some coastline data on a sphere using the vispy interface to OpenGL ES 2.0. I use the latitude and longitude values to compute the 3D coordinates of the data on the sphere and plot those coordinates. I can plot the data successfully, but I only want to see the data points on the side of the sphere that faces the viewport.

I have tried two quite different methods to create this effect, and both lead to the same problem. First, I computed the dot product of the viewing direction and the data position and drew only the points where the result was negative (i.e. only the points facing the viewport); second, I simply drew a plane through the center of the sphere, perpendicular to the viewing direction.

In both cases I observe the same thing: the masking plane appears to be displaced slightly away from the viewport, sitting behind the center of the sphere. In other words, you can see the data wrap slightly around the back of the sphere before it is hidden by the plane.

I have checked that the points I am drawing really do lie on the unit sphere, and I am confident that everything is correct as far as the 3D world is concerned. What I am less confident about, as a relative beginner in 3D graphics, is whether I have misunderstood something about the projection matrix. I have done some reading, but my understanding is that the projection should not change the ordering of points along the "Z direction" (the direction the viewport is facing).
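As a quick sanity check (a sketch only, assuming vispy's row-vector matrix convention), the depth ordering does appear to survive the projection:

import numpy as np
from vispy.util.transforms import perspective

proj = perspective(45.0, 1.0, 2.0, 10.0)

# a few points along the viewing axis, between the near and far planes
z_view = np.linspace(-2.5, -9.5, 8)
pts = np.column_stack([np.zeros((8, 2)), z_view, np.ones(8)])

clip = pts.dot(proj)                # row-vector convention, as used by vispy's transforms
ndc_z = clip[:, 2] / clip[:, 3]     # depth after the perspective divide
print(np.all(np.diff(ndc_z) > 0))   # True: more distant points keep larger depth values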

I am sure this is not a depth-testing problem, because my first method had depth testing disabled and did the masking in the vertex shader (by setting the fragment colour alpha to 0.0). Beyond that, I cannot find any other explanation for the problem.
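Roughly, the masking in that first approach looked like the following sketch (the uniform name u_view_dir just stands in for whatever carries the viewing direction):

vertex_masked = """
// Uniforms
uniform mat4 u_model;
uniform mat4 u_view;
uniform mat4 u_projection;
uniform vec3 u_view_dir;   // world-space viewing direction

attribute vec3 a_position;
varying float v_alpha;

void main (void)
{
    vec4 world = u_model*vec4(a_position, 1.0);
    // keep the point only when it faces the viewport
    v_alpha = dot(u_view_dir, world.xyz) < 0.0 ? 1.0 : 0.0;
    gl_Position = u_projection*u_view*world;
}
"""

fragment_masked = """
// Uniforms
uniform vec3 u_color;
varying float v_alpha;

void main()
{
    gl_FragColor = vec4(u_color, v_alpha);
}
"""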

Here is the code for the plane method:

import numpy as np
import cartopy
from vispy import app
from vispy import gloo
import time
from vispy.util.transforms import perspective, translate, rotate

xpts = []
ypts = []

#getting coastlines data

for string in cartopy.feature.NaturalEarthFeature('physical', 'coastline', '10m').geometries():
    for line in string:
        points = list(line.coords)
        for point in points:
            xpts.append(point[0])
            ypts.append(point[1])

coasts = np.array(list(zip(xpts, ypts)), dtype=np.float32)

theta = (np.pi/180)*np.array(xpts, dtype=np.float32)   # longitude in radians
phi = (np.pi/180)*np.array(ypts, dtype=np.float32)     # latitude in radians

# longitude/latitude to 3D coordinates on the unit sphere
x3d = np.cos(phi)*np.cos(theta)
y3d = np.sin(theta)*np.cos(phi)
z3d = np.sin(phi)



vertex = """
// Uniforms
uniform mat4 u_model;
uniform mat4 u_view;
uniform mat4 u_projection;
uniform vec3 u_color;

attribute vec3 a_position;
void main (void)
{
    gl_Position = u_projection*u_view*u_model*vec4(a_position, 1.0);
}
"""

fragment = """
// Uniforms
uniform vec3 u_color;

void main()
{
    gl_FragColor = vec4(u_color, 1.0);
}
"""

class Canvas(app.Canvas):
    def __init__(self):
        app.Canvas.__init__(self, keys='interactive')

        gloo.set_state(clear_color = 'red', depth_test=True, blend=True, blend_func=('src_alpha', 'one_minus_src_alpha'))

        self.x = 0

        self.plane = 5*np.array([(0.,-1., -1.,1), (0, -1., +1.,1), (0, +1., -1.,1), (0, +1., +1.,1)], dtype=np.float32)

        self._timer = app.Timer(connect=self.on_timer, start=True)
        self.program = gloo.Program(vertex, fragment)

        self.view = np.dot(rotate(-90, (1, 0, 0)), np.dot(translate((-3, 0, 0)), rotate(-90.0, (0.0,1.0,0.0))))
        self.model = np.eye(4, dtype=np.float32)
        self.projection = perspective(45.0, self.size[0]/float(self.size[1]), 2.0, 10.0)

        self.program['u_projection'] = self.projection
        self.program['u_view'] = self.view
        self.program['u_model'] = self.model
        self.program['u_color'] = np.array([0.0, 0.0, 0.0], dtype=np.float32)

        self.program2 = gloo.Program(vertex, fragment)

        self.program2['u_projection'] = self.projection
        self.program2['u_view'] = self.view
        self.program2['u_model'] = self.model
        self.program2['u_color'] = np.array([1.0, 1.0, 1.0], dtype=np.float32)

        self.program2['a_position'] =  self.plane[:,:3].astype(np.float32)

    def on_timer(self, event):
        self.x += 0.05
        self.model = rotate(self.x, (0.0,0.0,1.0))
        pointys = np.concatenate((x3d,y3d,z3d)).reshape((3, -1)).T
        self.program['a_position'] = pointys
        self.program['u_model'] = self.model

        self.update()


    def on_resize(self, event):
        gloo.set_viewport(0, 0, *event.size)
        self.projection = perspective(45.0, event.size[0]/float(event.size[1]), 2.0, 10.0)
        self.program['u_projection'] = self.projection
        self.program2['u_projection'] = self.projection


    def on_draw(self, event):
        gloo.clear((1,1,1,1))
        self.program2.draw('triangle_strip')
        self.program.draw('points')

Canvas().show()
app.run()

1 Answer


The way I understand your description, what you're seeing is a result of the perspective projection. I used all of my MS Paint skills to create this very elaborate diagram of the situation viewed from the side:

[Diagram: sphere with perspective, viewed from the side]

The outline of the sphere is drawn in black. The red line indicates a plane through the center of the sphere.

The blue lines show two lines of sight from the viewpoint, which is at the bottom of the diagram. If you picture the result after applying the projection, what shows up as the front facing part of the sphere in the rendered image is everything below the green line. The parts of the sphere above the green line form the back facing part of the sphere in the resulting rendering.

Or in other words, the green line shows the plane that corresponds to the outline of the sphere in the resulting rendering.

As you can see from this, the plane through the center of the sphere is indeed some distance behind the section of the sphere that shows up as the front facing part of the sphere in the rendered image. This is just in the nature of a perspective projection. The distance between the red plane and the green plane will decrease with a smaller viewing angle (i.e. a weaker perspective), and the two are the same when using a parallel projection.
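To put a rough number on it for the setup in the question (unit sphere, camera roughly 3 units from the center): for a sphere of radius r seen from a distance d, the tangent (silhouette) points lie on a plane r*r/d in front of the center, which here is about 0.33. A small sketch of that right-triangle geometry:

import numpy as np

# Right triangle: eye E, sphere center C, tangent point T, right angle at T.
# With |CT| = r and |EC| = d, the foot of T on the line EC lies r*r/d from C toward E.
def silhouette_offset(r, d):
    return r * r / d

r, d = 1.0, 3.0                        # unit sphere, eye about 3 units from the center
print(silhouette_offset(r, d))         # ~0.333: the green plane is this far in front of the red one
print(np.degrees(np.arcsin(r / d)))    # ~19.5: half the apparent angular size of the sphere, in degrees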

answered 2015-09-12T05:48:35.650