I am trying to create my own SSAO shader in forward rendering (not in post-processing) with GLSL. I am running into some issues, but I really cannot figure out what is wrong with my code.
It is created with the Babylon JS engine as a BABYLON.ShaderMaterial and set in a BABYLON.RenderTargetTexture, and it is mainly inspired by this renowned SSAO tutorial: http://john-chapman-graphics.blogspot.fr/2013/01/ssao-tutorial.html
For performance reasons, I have to do all the calculation without projecting and unprojecting in screen space; I would rather use the view ray method described in the tutorial above.
Before explaining the whole thing, please note that Babylon JS uses a left-handed coordinate system, which may have quite an influence on my code.
Here are my classic steps:
- First, I calculate the positions of my four camera far plane corners in my JS code. They might be constants every time, as they are calculated in view space position.
// Calculating 4 corners manually in view space
var tan = Math.tan;
var atan = Math.atan;
var ratio = SSAOSize.x / SSAOSize.y;
var far = scene.activeCamera.maxZ;
var fovy = scene.activeCamera.fov;
var fovx = 2 * atan(tan(fovy/2) * ratio);
var xFarPlane = far * tan(fovx/2);
var yFarPlane = far * tan(fovy/2);
var topLeft = new BABYLON.Vector3(-xFarPlane, yFarPlane, far);
var topRight = new BABYLON.Vector3( xFarPlane, yFarPlane, far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, far);
var bottomLeft = new BABYLON.Vector3(-xFarPlane, -yFarPlane, far);
var farCornersVec = [topLeft, topRight, bottomRight, bottomLeft];
var farCorners = [];
for (var i = 0; i < 4; i++) {
    var vecTemp = farCornersVec[i];
    farCorners.push(vecTemp.x, vecTemp.y, vecTemp.z);
}
These corner positions are sent to the vertex shader, which is why the vector coordinates are serialized in the farCorners[] array.
In my vertex shader, the signs of position.x and position.y let the shader know which corner to use at each pass.
These corners are then interpolated in my fragment shader to calculate a view ray, i.e. a vector from the camera to the far plane (its .z component is therefore equal to the far plane distance from the camera).
The fragment shader follows the instructions of the John Chapman tutorial (see the commented code below).
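For reference, the geometric relation I rely on is just similar triangles; here is a minimal sketch (the helper name is mine, for illustration only, it does not appear in the code below):
// Sketch only: the fragment lies on the camera ray towards the interpolated far-plane corner,
// and the linear depth stored in the depth map is viewSpaceDepth / far, so:
vec3 positionFromCorner(vec3 cornerPositionVS, float linearDepth01) {
    // cornerPositionVS.z == far, hence (cornerPositionVS * linearDepth01).z == viewSpaceDepth
    return cornerPositionVS * linearDepth01; // view-space position of the fragment
}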
I get my depth buffer as a BABYLON.RenderTargetTexture with the DepthRenderer.getDepthMap() method. A depth texture lookup actually returns (according to Babylon JS's depth shaders):
(gl_FragCoord.z / gl_FragCoord.w) / far
with:
- gl_FragCoord.z: the non-linear depth
- gl_FragCoord.w = 1/Wc, where Wc is the clip-space vertex position (i.e. gl_Position.w in the vertex shader)
- far: the positive distance from the camera to the far plane.
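In other words, on the writing side the depth material stores something like this (this is just my reading of it, not copied from Babylon's source; pack() is the usual RGBA encoding of a [0,1] float, symmetric to the unpack() shown further down):
precision highp float;
uniform float far;

vec4 pack(float depth) {
    // Standard packing of a [0,1] float into the 4 RGBA channels
    const vec4 bitShift = vec4(255.0 * 255.0 * 255.0, 255.0 * 255.0, 255.0, 1.0);
    const vec4 bitMask = vec4(0.0, 1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0);
    vec4 res = fract(depth * bitShift);
    res -= res.xxyz * bitMask;
    return res;
}

void main(void) {
    float depth = (gl_FragCoord.z / gl_FragCoord.w) / far; // roughly the view-space depth, normalized by far
    gl_FragColor = pack(depth);
}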
The kernel samples are arranged in a hemisphere with random floats in [0,1], most of them distributed close to the origin with a linear interpolation.
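This is roughly how each sample is built (written in GLSL syntax for readability; in my code the kernel is generated on the JS side and uploaded as the kernelSamples uniform array, so buildKernelSample and its rnd parameter are only for illustration):
// Illustration only: builds the i-th hemisphere sample from three random values in [0,1].
vec3 buildKernelSample(int i, int nbSamples, vec3 rnd) {
    // Random direction in the z >= 0 hemisphere (tangent space, z along the normal)
    vec3 dir = normalize(vec3(rnd.x * 2.0 - 1.0, rnd.y * 2.0 - 1.0, rnd.z));
    // Interpolated scale so that most samples end up close to the origin
    float scale = float(i) / float(nbSamples);
    scale = mix(0.1, 1.0, scale * scale);
    return dir * scale;
}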
As I don't have a normal texture, I calculate the normals from the current depth buffer value with getNormalFromDepthValue():
vec3 getNormalFromDepthValue(float depth) {
    vec2 offsetX = vec2(texelSize.x, 0.0);
    vec2 offsetY = vec2(0.0, texelSize.y);
    // texelSize = size of a texel = (1/SSAOSize.x, 1/SSAOSize.y)
    float depthOffsetX = getDepth(depthTexture, vUV + offsetX); // Horizontal neighbour
    float depthOffsetY = getDepth(depthTexture, vUV + offsetY); // Vertical neighbour
    vec3 pX = vec3(offsetX, depthOffsetX - depth);
    vec3 pY = vec3(offsetY, depthOffsetY - depth);
    vec3 normal = cross(pY, pX);
    normal.z = -normal.z; // We want normal.z positive
    return normalize(normal); // [-1,1]
}
Finally, my getDepth() function allows me to get the depth value at the current UV as a 32-bit float:
float getDepth(sampler2D tex, vec2 texcoord) {
    return unpack(texture2D(tex, texcoord));
    // unpack() retrieves the depth value from the 4 components of the vector given by texture2D()
}
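For completeness, the unpack() I use is the usual RGBA-to-float decoding (reproduced here from memory, so take it as an assumption rather than an exact copy):
float unpack(vec4 color) {
    // Decodes a [0,1] float packed into the 4 RGBA channels
    const vec4 bitShift = vec4(1.0 / (255.0 * 255.0 * 255.0), 1.0 / (255.0 * 255.0), 1.0 / 255.0, 1.0);
    return dot(color, bitShift);
}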
Here are my vertex and fragment shader codes (without the function declarations):
// ---------------------------- Vertex Shader ----------------------------
precision highp float;

uniform float fov;
uniform float far;
uniform vec3 farCorners[4];

attribute vec3 position; // 3D position of each vertex (4) of the quad in object space
attribute vec2 uv; // UV of each vertex (4) of the quad

varying vec3 vPosition;
varying vec2 vUV;
varying vec3 vCornerPositionVS;

void main(void) {
    vPosition = position;
    vUV = uv;

    // Map current vertex with associated frustum corner position in view space:
    // 0: top left, 1: top right, 2: bottom right, 3: bottom left
    // This frustum corner position will be interpolated so that the pixel shader always has a ray from camera->far-clip plane.
    vCornerPositionVS = vec3(0.0);
    if (position.x > 0.0) {
        if (position.y <= 0.0) { // top left
            vCornerPositionVS = farCorners[0];
        }
        else if (position.y > 0.0) { // top right
            vCornerPositionVS = farCorners[1];
        }
    }
    else if (position.x <= 0.0) {
        if (position.y > 0.0) { // bottom right
            vCornerPositionVS = farCorners[2];
        }
        else if (position.y <= 0.0) { // bottom left
            vCornerPositionVS = farCorners[3];
        }
    }

    gl_Position = vec4(position * 2.0, 1.0); // 2D position of each vertex
}
// ---------------------------- Fragment Shader ----------------------------
precision highp float;

uniform mat4 projection; // Projection matrix
uniform float radius; // Scaling factor for sample position, by default = 1.7
uniform float depthBias; // 1e-5
uniform vec2 noiseScale; // (SSAOSize.x / noiseSize, SSAOSize.y / noiseSize), with noiseSize = 4

varying vec3 vCornerPositionVS; // vCornerPositionVS is the interpolated position calculated from the 4 far corners

void main() {
    // Get linear depth in [0,1] with texture2D(depthBufferTexture, vUV)
    float fragDepth = getDepth(depthBufferTexture, vUV);
    float occlusion = 0.0;

    if (fragDepth < 1.0) {
        // Retrieve fragment's view space normal
        vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]

        // Random rotation: rvec.xyz are the components of the generated random vector
        vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0; // [-1,1]
        rvec.z = 0.0; // Random rotation around Z axis

        // Get view ray, from camera to far plane, scaled by 1/far so that viewRayVS.z == 1.0
        vec3 viewRayVS = vCornerPositionVS / far;

        // Current fragment's view space position
        vec3 fragPositionVS = viewRayVS * fragDepth;

        // Creation of TBN matrix
        vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
        vec3 bitangent = cross(normal, tangent);
        mat3 tbn = mat3(tangent, bitangent, normal);

        for (int i = 0; i < NB_SAMPLES; i++) {
            // Get sample kernel position, from tangent space to view space
            vec3 samplePosition = tbn * kernelSamples[i];

            // Add VS kernel offset sample to fragment's VS position
            samplePosition = samplePosition * radius + fragPositionVS;

            // Project sample position from view space to screen space:
            vec4 offset = vec4(samplePosition, 1.0);
            offset = projection * offset; // From view space to clip space
            offset.xy /= offset.w; // Perspective division
            offset.xy = offset.xy * 0.5 + 0.5; // [-1,1] -> [0,1]

            // Get current sample depth:
            float sampleDepth = getDepth(depthTexture, offset.xy);

            float rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0;
            // Reminder: fragDepth == fragPositionVS.z

            // Range check and accumulate if fragment contributes to occlusion:
            occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;
        }
    }

    // Inversion
    float ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
    ambientOcclusion = pow(ambientOcclusion, power);
    gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
}
A horizontal and a vertical Gaussian shader blur then clear up the noise generated by the random texture.
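The blur is the usual separable pass; it looks something like this (a minimal sketch of one 1D pass, uniform names assumed, not my exact blur code):
precision highp float;

uniform sampler2D ssaoTexture; // output of the SSAO pass above
uniform vec2 blurDirection; // (texelSize.x, 0.0) for the horizontal pass, (0.0, texelSize.y) for the vertical one

varying vec2 vUV;

void main(void) {
    // 9-tap Gaussian, applied once horizontally and once vertically
    float weights[5];
    weights[0] = 0.227027; weights[1] = 0.194594; weights[2] = 0.121621;
    weights[3] = 0.054054; weights[4] = 0.016216;

    float result = texture2D(ssaoTexture, vUV).r * weights[0];
    for (int i = 1; i < 5; i++) {
        result += texture2D(ssaoTexture, vUV + blurDirection * float(i)).r * weights[i];
        result += texture2D(ssaoTexture, vUV - blurDirection * float(i)).r * weights[i];
    }
    gl_FragColor = vec4(vec3(result), 1.0);
}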
My parameters are:
NB_SAMPLES = 16
radius = 1.7
depthBias = 1e-5
power = 1.0
Here is the result:
The result has artifacts on its edges, and the close-range shadowing is not very strong... Would anyone see something wrong or weird in my code?
Thanks a lot!