I want to use OpenCV cv::Mat data as an OpenGL texture. I'm developing a Qt 4.8 application that extends QGLWidget (going through QImage is something I don't really need). But something is wrong...
First the problem in screenshots, then the code I'm using.
If I don't resize the cv::Mat (grabbed from a video), everything is fine. If I scale it to half its size (scaleFactor = 2), everything is fine. If the scale factor is 2.8 or 2.9, everything is fine. But at certain scale factors it goes wrong.
Here are the screenshots, with a nice red background so you can see the OpenGL quad:

Scale factor = 2

Scale factor = 2.8

Scale factor = 3

Scale factor = 3.2
And now the code of the paint method. I found the code for copying cv::Mat data into a GL texture in this nice blog post.
void VideoViewer::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glClearColor(1.0, 0.0, 0.0, 1.0);
    glEnable(GL_BLEND);
    // Use a simple blendfunc for drawing the background
    glBlendFunc(GL_ONE, GL_ZERO);

    if (!cvFrame.empty()) {
        glEnable(GL_TEXTURE_2D);
        GLuint tex = matToTexture(cvFrame);
        glBindTexture(GL_TEXTURE_2D, tex);

        glBegin(GL_QUADS);
        glTexCoord2f(1, 1); glVertex2f(0, cvFrame.size().height);
        glTexCoord2f(1, 0); glVertex2f(0, 0);
        glTexCoord2f(0, 0); glVertex2f(cvFrame.size().width, 0);
        glTexCoord2f(0, 1); glVertex2f(cvFrame.size().width, cvFrame.size().height);
        glEnd();

        glDeleteTextures(1, &tex);
        glDisable(GL_TEXTURE_2D);
        glFlush();
    }
}
GLuint VideoViewer::matToTexture(cv::Mat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter)
{
    // http://r3dux.org/2012/01/how-to-convert-an-opencv-cvmat-to-an-opengl-texture/

    // Generate a number for our textureID's unique handle
    GLuint textureID;
    glGenTextures(1, &textureID);

    // Bind to our texture handle
    glBindTexture(GL_TEXTURE_2D, textureID);

    // Catch silly-mistake texture interpolation method for magnification
    if (magFilter == GL_LINEAR_MIPMAP_LINEAR ||
        magFilter == GL_LINEAR_MIPMAP_NEAREST ||
        magFilter == GL_NEAREST_MIPMAP_LINEAR ||
        magFilter == GL_NEAREST_MIPMAP_NEAREST)
    {
        std::cout << "VideoViewer::matToTexture > "
                  << "You can't use MIPMAPs for magnification - setting filter to GL_LINEAR"
                  << std::endl;
        magFilter = GL_LINEAR;
    }

    // Set texture interpolation methods for minification and magnification
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);

    // Set texture clamping method
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapFilter);

    // Set incoming texture format to:
    // GL_BGR       for CV_CAP_OPENNI_BGR_IMAGE,
    // GL_LUMINANCE for CV_CAP_OPENNI_DISPARITY_MAP,
    // Work out other mappings as required (there's a list in comments in main())
    GLenum inputColourFormat = GL_BGR;
    if (mat.channels() == 1)
    {
        inputColourFormat = GL_LUMINANCE;
    }

    // Create the texture
    glTexImage2D(GL_TEXTURE_2D,     // Type of texture
                 0,                 // Pyramid level (for mip-mapping) - 0 is the top level
                 GL_RGB,            // Internal colour format to convert to
                 mat.cols,          // Image width  i.e. 640 for Kinect in standard mode
                 mat.rows,          // Image height i.e. 480 for Kinect in standard mode
                 0,                 // Border width in pixels (can either be 1 or 0)
                 inputColourFormat, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
                 GL_UNSIGNED_BYTE,  // Image data type
                 mat.ptr());        // The actual image data itself

    return textureID;
}
And here is how the cv::Mat is loaded and scaled:
void VideoViewer::retriveScaledFrame()
{
    video >> cvFrame;
    cv::Size s = cv::Size(cvFrame.size().width / scaleFactor, cvFrame.size().height / scaleFactor);
    cv::resize(cvFrame, cvFrame, s);
}
Sometimes the image renders correctly, sometimes not. Why? For sure there is some problematic mismatch between how OpenCV and OpenGL store pixels. But how can I solve it? And why does it work for some sizes and not for others?