
I have this DDS file. I wrote a simple DDS reader to read the DDS header and print its details based on the MSDN specification. It says this is an RGB DDS with a bit depth of 32 bits per pixel and the alpha ignored, i.e. the pixel format is X8R8G8B8 (or A8R8G8B8). To verify this, I also opened the file in a hex editor: it shows the first 4 bytes (i.e. from the start of the data) as BB GG RR 00 (substitute the correct hex colour values of the first pixel for these). I have read that OpenGL's texture copy functions work on bytes (at least conceptually), so from its viewpoint this data is B8G8R8A8. Please correct me if my understanding is wrong here.
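For reference, a minimal sketch of such a header check (texture.dds is a hypothetical file name; the offsets follow MSDN's DDS_HEADER layout, where DDS_PIXELFORMAT starts 72 bytes into the header):

    #include <stdio.h>
    #include <stdint.h>

    /* A minimal slice of the DDS_PIXELFORMAT block from the MSDN layout. */
    struct dds_pixelformat {
        uint32_t size, flags, fourcc, rgb_bit_count;
        uint32_t r_mask, g_mask, b_mask, a_mask;
    };

    int main(void)
    {
        FILE *f = fopen("texture.dds", "rb");  /* hypothetical file name */
        if (!f) return 1;
        uint32_t magic = 0;
        fread(&magic, sizeof magic, 1, f);
        if (magic != 0x20534444u) { fclose(f); return 1; }  /* "DDS " */
        fseek(f, 4 + 72, SEEK_SET);  /* skip magic, seek to DDS_PIXELFORMAT */
        struct dds_pixelformat pf;
        fread(&pf, sizeof pf, 1, f);
        printf("bits per pixel: %u\n", (unsigned)pf.rgb_bit_count);
        printf("masks: R=%08X G=%08X B=%08X A=%08X\n",
               (unsigned)pf.r_mask, (unsigned)pf.g_mask,
               (unsigned)pf.b_mask, (unsigned)pf.a_mask);
        fclose(f);
        return 0;
    }

For X8R8G8B8 this reports 32 bits per pixel and masks R=0x00FF0000, G=0x0000FF00, B=0x000000FF, A=0; on a little-endian machine those masks correspond exactly to the BB GG RR 00 byte order seen in the hex editor.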

Now I pass this data to glTexImage2D with internal format RGBA8, external format BGRA and type UNSIGNED_BYTE. This leads to a blue tinge in the rendered output. In my fragment shader, just to verify, I did a swizzle to swap R and B, and it rendered correctly.

I reverted the shader code and then replaced the type UNSIGNED_BYTE with UNSIGNED_INT_8_8_8_8_REV (based on this suggestion); it still renders with a blue tinge. Now, changing the external format to RGBA with either type (UNSIGNED_BYTE or UNSIGNED_INT_8_8_8_8_REV), it renders correctly! The calls I tried are sketched below.
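For concreteness, a sketch of the variants described above (assuming a current GL context and a bound GL_TEXTURE_2D; data, width and height are whatever GLI reported for the image):

    #include <GL/glew.h>   /* or any loader providing the GL 3.3 entry points */

    void upload_variants(GLsizei width, GLsizei height, const void *data)
    {
        /* BGRA + UNSIGNED_BYTE: renders with a blue tinge */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, data);

        /* BGRA + UNSIGNED_INT_8_8_8_8_REV: still a blue tinge */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, data);

        /* RGBA with either type: renders correctly */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, data);
    }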

  • Since OpenGL doesn't support ARGB, giving BGRA is understandable. But how come RGBA is working correctly here? This seems wrong.
  • Why does the type have no effect on the ordering of the channels?
  • Does GL_UNPACK_ALIGNMENT have a bearing on this? I left it at the default (4). If I read the manual right, this should have no effect on how the client memory is read.

Details

  • OpenGL version 3.3
  • Intel HD Graphics supporting up to OpenGL 4.0
  • Using GLI to load the DDS file and obtain the data pointer

1 Answer


I finally found the answers myself! Posting it here so that it may help someone in future.

Since OpenGL doesn't support ARGB, giving BGRA is understandable. But how come RGBA is working correctly here? This seems wrong.

By inspecting the memory pointed to by the void* data that GLI returns when asked for a pointer to the image's binary data, it can be seen that GLI had already reordered the bytes when transferring the data from the file to client memory. The memory window shows, from lower to higher address, data in the form RR GG BB AA. This explains why passing GL_RGBA works. However, the error on GLI's part is that when the external format is queried, it returns GL_BGRA instead of GL_RGBA. A bug has been raised to address this.
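A quick way to see this for yourself (a sketch; data being whatever pointer GLI hands back for the image):

    #include <stdio.h>

    /* Dump the first pixel byte by byte, in increasing address order. */
    void dump_first_pixel(const unsigned char *data)
    {
        printf("%02X %02X %02X %02X\n", data[0], data[1], data[2], data[3]);
        /* Prints RR GG BB AA here, not the BB GG RR 00 stored in the file,
           confirming that GLI reordered the bytes on load. */
    }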

Why does the type have no effect on the ordering of the channels?

Actually, it does have an effect. The machine I'm trying this experiment on is an Intel x86_64 little-endian machine, and the OpenGL Wiki clearly states that client pixel data is always in client byte ordering. When GL_UNSIGNED_BYTE is passed, each component is read as a separate byte, in increasing memory order, so endianness plays no role: RR GG BB AA in RAM maps directly onto the components of GL_RGBA. When a packed type such as GL_UNSIGNED_INT_8_8_8_8_REV is passed, the whole pixel is instead read as a single unsigned int; on a little-endian machine the bytes RR GG BB AA end up in the register as 0xAABBGGRR, and the _REV suffix instructs OpenGL to assign the components in reverse order, from the least significant byte upwards (R, G, B, A), so it again matches GL_RGBA and renders correctly. However, if the type is passed as GL_UNSIGNED_INT_8_8_8_8 then the rendering is screwed up, since that layout expects R in the most significant byte, which on this machine actually holds AA.
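A small self-contained demonstration of that 32-bit read:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        /* Bytes as they sit in client memory, lowest address first: R G B A */
        unsigned char mem[4] = { 0x11 /*R*/, 0x22 /*G*/, 0x33 /*B*/, 0x44 /*A*/ };
        uint32_t pixel;
        memcpy(&pixel, mem, sizeof pixel);  /* read the pixel as one 32-bit uint */
        /* On a little-endian machine this prints 0x44332211, i.e. A B G R:
           the register order is the reverse of the memory order. */
        printf("0x%08X\n", (unsigned)pixel);
        return 0;
    }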

Does GL_UNPACK_ALIGNMENT have a bearing on this? I left it at the default (4). If I read the manual right, this should have no effect on how the client memory is read.

It does have a bearing on the unpacking of texture data from client memory to server memory; however, that is to account for the padding bytes at the end of each row of an image, so that the stride (pitch) is computed correctly. It has no bearing on this specific issue, since the pitch flag here is 0, i.e. there are no padding bytes in the DDS file in question.
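For completeness, a hypothetical case where the alignment would matter: a tightly packed 3-bytes-per-pixel image, whose rows need not end on a 4-byte boundary:

    #include <GL/glew.h>

    /* With the default GL_UNPACK_ALIGNMENT of 4, OpenGL would assume each row
       is padded up to a multiple of 4 bytes; for tightly packed RGB data the
       alignment must be dropped to 1 before uploading. */
    void upload_tight_rgb(GLsizei width, GLsizei height, const void *data)
    {
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, data);
    }

The DDS data here is 4 bytes per pixel, so every row is already a multiple of 4 bytes and the default alignment is harmless.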

Related material: https://www.opengl.org/wiki/Pixel_Transfer#Pixel_type

answered 2014-11-29T10:10:48.903