What is the correct way to convert an unsigned int texture to normalized float and back again?
As a test, I am currently trying to render an unsigned int texture into a standard RGB context. The following works, but it feels wrong.
Relevant draw code:
ShaderPropertySetter.SetUniform(gl, "uTexture_us2", 0);
ShaderPropertySetter.SetUniform(gl, "maxIntensity_u", MaxIntensity);
ShaderPropertySetter.SetUniform(gl, "minIntensity_u", MinIntensity);
ShaderPropertySetter.SetUniformMat4(gl, "uModelMatrix_m4", modelMatrix);
canvas.Bind(gl); // this binds the vertex buffer
gl.BindTexture(OpenGL.GL_TEXTURE_2D, texture);
gl.DrawArrays(OpenGL.GL_QUADS, 0, 4);
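One thing worth noting here: the sampler uniform is set to texture unit 0, but the code never selects that unit explicitly. Unit 0 happens to be the default active unit, so this works, but being explicit is safer once more textures are involved. A minimal sketch, assuming SharpGL's standard ActiveTexture wrapper:
gl.ActiveTexture(OpenGL.GL_TEXTURE0); // make unit 0 current before binding
gl.BindTexture(OpenGL.GL_TEXTURE_2D, texture); // the integer texture now lives on unit 0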
Texture creation:
public static void FillTextureDataWithUintBuffer(OpenGL gl, uint[] buffer, int width, int height)
{
    unsafe
    {
        fixed (uint* dataPtr = buffer)
        {
            IntPtr pixels = new IntPtr(dataPtr);
            const int GL_R32UI = 0x8236; // GL_R32UI is not currently defined in SharpGL
            gl.TexImage2D(OpenGL.GL_TEXTURE_2D,
                          0,                      // mip level
                          GL_R32UI,               // internal format: one 32-bit unsigned int channel
                          width,
                          height,
                          0,                      // border, must be 0
                          OpenGL.GL_RED_INTEGER,  // integer pixel format, not plain GL_RED
                          OpenGL.GL_UNSIGNED_INT, // source data type
                          pixels);
        }
    }
    OpenGLTesting.CheckForFailure(gl);
}
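A detail that commonly breaks integer textures: unnormalized integer formats such as GL_R32UI cannot be linearly filtered, and the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so sampling an integer texture returns nothing useful until both filters are set to GL_NEAREST. A minimal sketch of the setup that would follow the upload above, assuming SharpGL's TexParameter overload accepts the constants directly:
// Integer textures require nearest filtering; linear filtering is undefined for them,
// and the mipmapping default leaves the texture incomplete when only level 0 exists.
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MIN_FILTER, OpenGL.GL_NEAREST);
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MAG_FILTER, OpenGL.GL_NEAREST);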
Current GLSL code:
---UPDATE--- (fixed the silly mistake that commenters kindly pointed out)
#version 150 core
in vec2 pass_texCord;
out vec4 out_Color; // #version 150 core has no gl_FragColor; an explicit output is required
uniform usampler2D uTexture_us2;
uniform uint maxIntensity_u;
uniform uint minIntensity_u;
float linearNormalize(float value, in float maxValue, in float minValue)
{
    // normalized = (x - min(x)) / (max(x) - min(x))
    return (value - minValue) / (maxValue - minValue);
}
void main(void)
{
    uvec4 value = texture(uTexture_us2, pass_texCord);
    float valuef = float(value.r);
    float maxf = float(maxIntensity_u);
    float minf = float(minIntensity_u);
    float normalized = linearNormalize(valuef, maxf, minf);
    out_Color = vec4(normalized, normalized, normalized, 1.0);
}
So I am not very happy with the current state of the GLSL code (especially since it does not work :p), because of the conversion to float, which seems to defeat the point.
Reason:
I am working on a compositor where some textures are stored as single-channel unsigned ints and others as three-channel floats; when one is blended with the other, I want to convert the "blendee", as sketched below.
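For the "back again" direction, the symmetric operation would be a fragment shader that renders into a GL_R32UI color attachment and denormalizes with the same range. A minimal sketch under those assumptions (the float source sampler uTexture_s2 is hypothetical, and none of this is taken from the actual compositor):
#version 150 core
in vec2 pass_texCord;
out uint out_Value; // integer output, for a framebuffer with a GL_R32UI color attachment
uniform sampler2D uTexture_s2; // hypothetical float source texture
uniform uint maxIntensity_u;
uniform uint minIntensity_u;
void main(void)
{
    // denormalize: x = normalized * (max - min) + min
    float normalized = texture(uTexture_s2, pass_texCord).r;
    float range = float(maxIntensity_u) - float(minIntensity_u);
    out_Value = uint(normalized * range + float(minIntensity_u));
}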
Note: I am using SharpGL