
I wrote a volume rendering program that converts a set of 2D images into a 3D volume the user can rotate. I need to compute a normal (for lighting) at every point in the 3D texture by taking the gradient in each direction around that point.

Computing the normals requires six extra texture accesses in the fragment shader. The program is much faster without those accesses, so I am trying to precompute the gradient in each direction (x, y, z) as bytes and store it in the G, B, and A channels of the original texture. My bytes appear to hold the correct values when I test on the CPU, but by the time they reach the shader they look wrong. It is hard to tell from inside the shader why it fails, but I believe it is because some of the gradient values are negative. However, when I specify the texture type as GL_BYTE (instead of GL_UNSIGNED_BYTE) it is still wrong, and that also ruins the look of the original texture. Since I can only inspect the data by rendering it as colour, I cannot tell exactly what is going wrong. What is the correct way to put negative values into a texture, and how can I tell that a value is negative when I read it in the fragment shader?

The following code shows how I compute the gradients from the byte array (byte[] all) and then turn it into the ByteBuffer (ByteBuffer bb) that is read in as a 3D texture. The function toLoc(x,y,z,w,h,l) simply returns (x+w*(y+z*h))*4; it converts a 3D subscript to a 1D index. The images are greyscale, so I discard the G, B, and A channels and keep only the original value in the R channel; the remaining channels (G, B, A) store the gradients.

    int pixelDiffxy=5;
    int pixelDiffz=1;

    int count=0;  
    Float r=0f;
    byte t=r.byteValue();//zero byte, used to clear out-of-bounds gradient channels

    for(int i=0;i<w;i++){
        for(int j=0;j<h;j++){
            for(int k=0;k<l;k++){
                count+=4;
                if(i<pixelDiffxy || i>=w-pixelDiffxy || j<pixelDiffxy || j>=h-pixelDiffxy || k<pixelDiffz || k>=l-pixelDiffz){
                    //set these all to zero since they are out of bounds
                    all[toLoc(i,j,k,w,h,l)+1]=t;//green=0
                    all[toLoc(i,j,k,w,h,l)+2]=t;//blue=0
                    all[toLoc(i,j,k,w,h,l)+3]=t;//alpha=0
                }
                else{

                    int ri=(int)all[toLoc(i,j,k,w,h,l)+0] & 0xff;

                    //find the values on the sides of this pixel in each direction (use red channel)
                    int xgrad1=(all[toLoc(i-pixelDiffxy,j,k,w,h,l)])& 0xff;
                    int xgrad2=(all[toLoc(i+pixelDiffxy,j,k,w,h,l)])& 0xff;

                    int ygrad1=(all[toLoc(i,j-pixelDiffxy,k,w,h,l)])& 0xff;
                    int ygrad2=(all[toLoc(i,j+pixelDiffxy,k,w,h,l)])& 0xff;

                    int zgrad1=(all[toLoc(i,j,k-pixelDiffz,w,h,l)])& 0xff;
                    int zgrad2=(all[toLoc(i,j,k+pixelDiffz,w,h,l)])& 0xff;


                    //find the difference between the values on each side and divide by the distance between them
                    int xgrad=(xgrad1-xgrad2)/(2*pixelDiffxy);
                    int ygrad=(ygrad1-ygrad2)/(2*pixelDiffxy);
                    int zgrad=(zgrad1-zgrad2)/(2*pixelDiffz);

                    Vec3f grad=new Vec3f(xgrad,ygrad,zgrad);

                    Integer xg=(int) (grad.x);
                    Integer yg=(int) (grad.y);
                    Integer zg=(int) (grad.z);

                    //System.out.println("gs are: "+xg +", "+yg+", "+zg);

                    byte gby= (byte) (xg.byteValue());//green channel
                    byte bby= (byte) (yg.byteValue());//blue channel
                    byte aby= (byte) (zg.byteValue());//alpha channel

                    //System.out.println("gba is: "+(int)gby +", "+(int)bby+", "+(int)aby);
                    all[toLoc(i,j,k,w,h,l)+1]=gby;//green
                    all[toLoc(i,j,k,w,h,l)+2]=bby;//blue
                    all[toLoc(i,j,k,w,h,l)+3]=aby;//alpha
                }
            }
        }
    }

    ByteBuffer bb=ByteBuffer.wrap(all);
    final GL gl = drawable.getGL();
    final GL2 gl2 = gl.getGL2();
    final int[] bindLocation = new int[1];
    gl.glGenTextures(1, bindLocation, 0);
    gl2.glBindTexture(GL2.GL_TEXTURE_3D, bindLocation[0]);
    gl2.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1);//-byte alignment
    gl2.glTexParameteri(GL2.GL_TEXTURE_3D, GL.GL_TEXTURE_WRAP_S, GL2.GL_CLAMP);
    gl2.glTexParameteri(GL2.GL_TEXTURE_3D, GL.GL_TEXTURE_WRAP_T, GL2.GL_CLAMP);
    gl2.glTexParameteri(GL2.GL_TEXTURE_3D, GL2.GL_TEXTURE_WRAP_R, GL2.GL_CLAMP);
    gl2.glTexParameteri(GL2.GL_TEXTURE_3D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
    gl2.glTexParameteri(GL2.GL_TEXTURE_3D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
    gl2.glTexEnvf(GL2.GL_TEXTURE_ENV, GL2.GL_TEXTURE_ENV_MODE, GL.GL_REPLACE);
    gl2.glTexImage3D( GL2.GL_TEXTURE_3D, 0,GL.GL_RGBA,
            w, h, l, 0,
            GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bb );//GL_UNSIGNED_BYTE
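
For reference, toLoc just expands the formula above into a 1D offset; a minimal version looks like this:

    // Convert a 3D subscript (x, y, z) in a w-by-h-by-l volume into a
    // 1D byte offset, with 4 bytes (RGBA) per voxel.
    static int toLoc(int x, int y, int z, int w, int h, int l) {
        return (x + w * (y + z * h)) * 4;
    }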

Is there a better way to get a lot of signed data into a shader?


1 Answer

gl2.glTexImage3D( GL2.GL_TEXTURE_3D, 0,GL.GL_RGBA,
        w, h, l, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bb );

Well, there are two ways to do this, depending on how much work you want to do in the shader and what OpenGL version you want to limit yourself to.

The version that requires more shader work also requires a bit more out of your code. See, what you want to do is have your shader take unsigned bytes, then reinterpret them as signed bytes.

The way this would typically be done is to pass unsigned normalized bytes (as you're doing), which produces floating-point values in the [0, 1] range, then simply expand that range in the shader by multiplying by 2 and subtracting 1, yielding numbers in the [-1, 1] range. This means that your uploading code needs to take its [-128, 127] signed bytes and convert them into [0, 255] unsigned bytes by adding 128 to them.

I have no idea how to do this in Java, which does not appear to have an unsigned byte type at all. You can't just pass a 2's complement byte and expect it to work in the shader; that's not going to happen. The byte value -1, for example, would map to the floating-point value 1, which isn't helpful.
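
That said, the bias itself can be applied with Java's signed bytes: adding 128 and casting back to byte gives a two's-complement bit pattern whose unsigned interpretation is exactly the biased value, which is what GL_UNSIGNED_BYTE will see. A minimal sketch, assuming the gradient component arrives as an int:

    // Clamp a signed gradient component to [-128, 127], then bias it into
    // [0, 255]. The cast keeps only the low 8 bits, so the stored byte's
    // unsigned interpretation is (gradient + 128); a zero gradient maps to 128.
    static byte packSignedGradient(int gradient) {
        int clamped = Math.max(-128, Math.min(127, gradient));
        return (byte) (clamped + 128);
    }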

If you can manage to convert the data properly as I described above, then your shader access would have to unpack from the [0, 1] range to the [-1, 1] range.
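
In GLSL that unpacking could look something like this (the sampler and varying names are just placeholders, not from your shader):

    uniform sampler3D volumeTexture; // the RGBA volume uploaded above
    varying vec3 texCoord;           // interpolated 3D texture coordinate

    void main() {
        vec4 texel = texture3D(volumeTexture, texCoord);
        float value = texel.r;                 // original greyscale value
        vec3 gradient = texel.gba * 2.0 - 1.0; // expand [0, 1] back to [-1, 1]
        // ... use normalize(gradient) as the normal for lighting ...
        gl_FragColor = vec4(vec3(value), 1.0);
    }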

If you have access to GL 3.x, then you can do this quite easily, with no shader changes:

gl2.glTexImage3D( GL2.GL_TEXTURE_3D, 0,GL.GL_RGBA8_SNORM,
        w, h, l, 0, GL.GL_RGBA, GL.GL_BYTE, bb );

The _SNORM in the image format means that it is a signed, normalized format. So your bytes on the range [-128, 127] will be mapped to floats on the range [-1, 1]. Exactly what you want.
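
That way the shader can use the sampled .gba components directly as a [-1, 1] gradient, with no remapping step in the fragment shader.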

Answered 2013-03-18T03:18:15.457