
I'm trying to implement Anil Jain's fingerprint image enhancement method. As a first step, I ran into some difficulties while extracting the orientation image, even though I am strictly following the steps described in Section 2.4 of that paper.

So, this is the input image:

[input fingerprint image]

And this is after normalization using exactly the same method as in that paper:

[normalized image]

I'm expecting to see something like this (an example from the internet):

[expected orientation field, example from the internet]

However, this is what I got for displaying obtained orientation matrix:

[my orientation matrix output]

Obviously this is wrong, and it even gives non-zero values at points that are zero in the original input image.

This is the code I wrote:

cv::Mat orientation(cv::Mat inputImage)
{
    cv::Mat orientationMat = cv::Mat::zeros(inputImage.size(), CV_8UC1);

    // compute gradients at each pixel
    cv::Mat grad_x, grad_y;
    cv::Sobel(inputImage, grad_x, CV_16SC1, 1, 0, 3, 1, 0, cv::BORDER_DEFAULT);
    cv::Sobel(inputImage, grad_y, CV_16SC1, 0, 1, 3, 1, 0, cv::BORDER_DEFAULT);

    cv::Mat Vx, Vy, theta, lowPassX, lowPassY;
    cv::Mat lowPassX2, lowPassY2;

    Vx = cv::Mat::zeros(inputImage.size(), inputImage.type());
    Vx.copyTo(Vy);
    Vx.copyTo(theta);
    Vx.copyTo(lowPassX);
    Vx.copyTo(lowPassY);
    Vx.copyTo(lowPassX2);
    Vx.copyTo(lowPassY2);

    // estimate the local orientation of each block
    int blockSize = 16;

    for(int i = blockSize/2; i < inputImage.rows - blockSize/2; i+=blockSize)
    {    
        for(int j = blockSize / 2; j < inputImage.cols - blockSize/2; j+= blockSize)
        {
            float sum1 = 0.0;
            float sum2 = 0.0;

            for ( int u = i - blockSize/2; u < i + blockSize/2; u++)
            {
                for( int v = j - blockSize/2; v < j+blockSize/2; v++)
                {
                    sum1 += grad_x.at<float>(u,v) * grad_y.at<float>(u,v);
                    sum2 += (grad_x.at<float>(u,v)*grad_x.at<float>(u,v)) * (grad_y.at<float>(u,v)*grad_y.at<float>(u,v));
                }
            }

            Vx.at<float>(i,j) = sum1;
            Vy.at<float>(i,j) = sum2;

            double calc = 0.0;

            if(sum1 != 0 && sum2 != 0)
            {
                calc = 0.5 * atan(Vy.at<float>(i,j) / Vx.at<float>(i,j));
            }

            theta.at<float>(i,j) = calc;

            // Perform low-pass filtering
            float angle = 2 * calc;
            lowPassX.at<float>(i,j) = cos(angle * pi / 180);
            lowPassY.at<float>(i,j) = sin(angle * pi / 180);

            float sum3 = 0.0;
            float sum4 = 0.0;

            for(int u = -lowPassSize / 2; u < lowPassSize / 2; u++)
            {
               for(int v = -lowPassSize / 2; v < lowPassSize / 2; v++)
               {
                  sum3 += inputImage.at<float>(u,v) * lowPassX.at<float>(i - u*lowPassSize, j - v * lowPassSize);
                  sum4 += inputImage.at<float>(u, v) * lowPassY.at<float>(i - u*lowPassSize, j - v * lowPassSize);
               }
            }
            lowPassX2.at<float>(i,j) = sum3;
            lowPassY2.at<float>(i,j) = sum4;

            float calc2 = 0.0;

            if(sum3 != 0 && sum4 != 0)
            {
                calc2 = 0.5 * atan(lowPassY2.at<float>(i, j) / lowPassX2.at<float>(i, j)) * 180 / pi;
            }

            orientationMat.at<float>(i,j) = calc2;
        }
    }

    return orientationMat;
}

I've already searched a lot on the web, but almost everything I found is in Matlab. There are very few examples using OpenCV, and they didn't help me either. I sincerely hope someone can go through my code and point out any errors. Thank you in advance.

Update

Here are the steps that I followed according to the paper:

  1. Obtain normalized image G.
  2. Divide G into blocks of size w x w (16x16).
  3. Compute the x and y gradients at each pixel (i,j).
  4. Estimate the local orientation of each block centered at pixel (i,j) using the equations:

Vx(i,j) = sum over the block of 2*dx(u,v)*dy(u,v)
Vy(i,j) = sum over the block of (dx(u,v)^2 - dy(u,v)^2)
theta(i,j) = 0.5*arctan(Vx(i,j)/Vy(i,j))

  5. Perform low-pass filtering to remove noise. For that, convert the orientation image into a continuous vector field defined as:

Phi_x(i,j) = cos(2*theta(i,j))
Phi_y(i,j) = sin(2*theta(i,j))

and filter it with the low-pass kernel:

Phi'_x(i,j) = sum over u,v of W(u,v) * Phi_x(i - u*w, j - v*w)
Phi'_y(i,j) = sum over u,v of W(u,v) * Phi_y(i - u*w, j - v*w)

where W is a two-dimensional low-pass filter and w(Phi) x w(Phi) is its size, which equals 5.

  6. Finally, compute the local ridge orientation at (i,j) using:

O(i,j) = 0.5*arctan(Phi'_y(i,j)/Phi'_x(i,j))
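For reference, this is how I currently understand steps 5 and 6 in code. This is only my reading of the paper, so the misunderstanding may well be in here; I take W to be a uniform 5x5 kernel, which the paper leaves open:

// My reading of steps 5-6 (may itself be wrong -- it's what I'm asking about).
// theta holds one angle per block center, spaced w = 16 pixels apart.
int w = 16, lowPassSize = 5;
for(int i = w/2; i < G.rows - w/2; i += w)
{
    for(int j = w/2; j < G.cols - w/2; j += w)
    {
        float phiX = 0.0f, phiY = 0.0f;
        for(int u = -lowPassSize/2; u <= lowPassSize/2; u++)
        {
            for(int v = -lowPassSize/2; v <= lowPassSize/2; v++)
            {
                int bi = i - u*w;   // neighbouring block center
                int bj = j - v*w;
                if(bi < w/2 || bi >= G.rows - w/2 || bj < w/2 || bj >= G.cols - w/2)
                    continue;       // skip neighbours outside the image
                float W = 1.0f / (lowPassSize*lowPassSize); // uniform kernel, my assumption
                phiX += W * cos(2 * theta.at<float>(bi, bj));
                phiY += W * sin(2 * theta.at<float>(bi, bj));
            }
        }
        orientationMat.at<float>(i,j) = 0.5f * atan2(phiY, phiX);
    }
}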

Update2

This is the output of orientationMat after changing the Mat type to CV_16SC1 in the Sobel operation, as Micka suggested:

[orientationMat output after the change]


3 Answers


Maybe I'm too late with this answer, but someone may still read it later and solve the same problem.

I've been working with the same algorithm. The method you posted worked for me for a while... but the paper has some writing errors that (I guess) were introduced when it was edited. After struggling a lot with the equations, I found the error by looking at other similar works.

This is what worked for me...

Vy(i,j) = 2*dx(u,v)*dy(u,v)
Vx(i,j) = dx(u,v)^2 - dy(u,v)^2
O(i,j) = 0.5*arctan(Vy(i,j)/Vx(i,j))

(Sorry, I can't post images, so I wrote out the modified equations. Remember that "u" and "v" are the positions of the summation inside a BlockSize by BlockSize window.)
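In OpenCV terms, a minimal sketch of that corrected block estimate (my own code, assuming dx and dy are CV_32F Sobel derivatives of the normalized image):

// dx, dy: CV_32F Sobel derivatives of the normalized image.
float Vx = 0.0f, Vy = 0.0f;
for(int u = i - blockSize/2; u < i + blockSize/2; u++)
{
    for(int v = j - blockSize/2; v < j + blockSize/2; v++)
    {
        float gx = dx.at<float>(u,v);
        float gy = dy.at<float>(u,v);
        Vy += 2.0f * gx * gy;       // Vy(i,j) = sum of 2*dx*dy
        Vx += gx*gx - gy*gy;        // Vx(i,j) = sum of dx^2 - dy^2
    }
}
float O = 0.5f * atan2(Vy, Vx);     // atan2 also handles the Vx == 0 case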

First and most important (obviously) are the equations. I've seen this expression written very differently across works, and in every one of them they discuss the same algorithm by Hong et al. The key is finding the least mean square of the gradients (Vx and Vy), the first three equations, for which I provided the corrected formulas above. Then you can compute the angle theta for the non-overlapping windows (a 16x16 size is recommended in the paper). After that, the algorithm says you must compute the magnitude of the doubled angle in the "x" and "y" directions (Phi_x and Phi_y):

Phi_x(i,j) = V(i,j) * cos(2*O(i,j))
Phi_y(i,j) = V(i,j) * sin(2*O(i,j))

The magnitude is just:

V = sqrt(Vx(i,j)^2 + Vy(i,j)^2)

Note that the related work doesn't mention that you have to use the gradient magnitude, but it made sense (to me) to do so. After all these corrections, you can apply the low-pass filter to Phi_x and Phi_y; I used a simple 5x5 mask to average this magnitude (similar to OpenCV's medianBlur()).
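A minimal sketch of that stage, assuming Vx, Vy and O have been stored per pixel into full-size CV_32F mats:

// Magnitude-weighted double-angle vector field, then a 5x5 averaging mask.
cv::Mat V, PhiX(O.size(), CV_32F), PhiY(O.size(), CV_32F);
cv::magnitude(Vx, Vy, V);                  // V = sqrt(Vx^2 + Vy^2)
for(int r = 0; r < O.rows; r++)
{
    for(int c = 0; c < O.cols; c++)
    {
        PhiX.at<float>(r,c) = V.at<float>(r,c) * cos(2 * O.at<float>(r,c));
        PhiY.at<float>(r,c) = V.at<float>(r,c) * sin(2 * O.at<float>(r,c));
    }
}
cv::blur(PhiX, PhiX, cv::Size(5,5));       // simple 5x5 averaging, as described
cv::blur(PhiY, PhiY, cv::Size(5,5));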

The last thing is to compute the new angle, that is, the average over the 25 neighbors in the O(i,j) image. For that you just compute:

O'(i,j) = 0.5*arctan(Phi_y/Phi_x)

And there we are... All of this was just to compute, over BlockSize by BlockSize non-overlapping windows, the angle between the normal vector and the ridge direction (O'(i,j)). What does that mean? It means the angle we just computed is perpendicular to the ridges; put simply, we just computed the ridge angle plus 90 degrees... To get the angle we need, we only have to subtract 90° from the obtained angle.
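In code, with the smoothed Phi fields from above:

// Smoothed orientation, then rotate by 90 degrees to follow the ridge.
float Oprime = 0.5f * atan2(PhiY.at<float>(i,j), PhiX.at<float>(i,j));
float ridgeAngle = Oprime - (float)CV_PI/2;   // O'(i,j) - 90 degrees, in radians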

To draw the lines we need an initial point (X0, Y0) and an end point (X1, Y1). For that, imagine a circle centered at (X0, Y0) with radius "r":

x0 = i + blocksize/2
y0 = j + blocksize/2
r = blocksize/2

Note that we add i and j to the first coordinates because the window is moving, and we start drawing the line from the center of each non-overlapping window, so we can't just use the center of a single window. Then, to compute the end coordinates for drawing the line, we just need to use a right triangle, so...

X1 = r*cos(O'(i,j)-90°) + X0
Y1 = r*sin(O'(i,j)-90°) + Y0
X2 = X0 - r*cos(O'(i,j)-90°)
Y2 = Y0 - r*sin(O'(i,j)-90°)

Then just use the OpenCV line function, where the initial point is (X0, Y0) and the end point is (X1, Y1). Besides that, I drew the 16x16 windows and computed the opposite points of X1 and Y1 (X2 and Y2) to draw a line across the whole window.
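Putting it together with OpenCV's line() (a sketch; "result" is just my name for the image being drawn on):

// One orientation segment per 16x16 window; ridgeAngle is O'(i,j) - 90 deg, in radians.
// Note: x runs along the columns (j) and y along the rows (i) in OpenCV.
float x0 = j + blockSize/2.0f;
float y0 = i + blockSize/2.0f;
float r  = blockSize/2.0f;
cv::Point p1(cvRound(x0 + r*cos(ridgeAngle)), cvRound(y0 + r*sin(ridgeAngle)));
cv::Point p2(cvRound(x0 - r*cos(ridgeAngle)), cvRound(y0 - r*sin(ridgeAngle)));
cv::line(result, p1, p2, cv::Scalar(255), 1);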

Hope this helps someone.

My results...

[Original image]

[Points drawn from the result obtained from O'(i,j)]

Answered 2017-03-10T22:35:09.750

Main function:

Mat mat = imread("nwmPa.png",0);
mat.convertTo(mat, CV_32F, 1.0/255, 0);
Normalize(mat);
int blockSize = 6;
int height = mat.rows;
int width = mat.cols;
Mat orientationMap;
orientation(mat, orientationMap, blockSize);

Normalization:

// Pixel-wise normalization (Hong et al.), desired mean and variance both 100.0/255
void Normalize(Mat & image)
{
    Scalar mean, dev;
    meanStdDev(image, mean, dev);
    double M = mean.val[0];
    double D = dev.val[0];

    for(int i(0) ; i<image.rows ; i++)
    {
        for(int j(0) ; j<image.cols ; j++)
        {
            if(image.at<float>(i,j) > M)
                image.at<float>(i,j) = 100.0/255 + sqrt( 100.0/255*pow(image.at<float>(i,j)-M,2)/D );
            else
                image.at<float>(i,j) = 100.0/255 - sqrt( 100.0/255*pow(image.at<float>(i,j)-M,2)/D );
        }
    }
}

Orientation map:

void orientation(const Mat &inputImage, Mat &orientationMap, int blockSize)
{
    Mat fprintWithDirectionsSmoo = inputImage.clone();
    Mat tmp(inputImage.size(), inputImage.type());
    Mat coherence(inputImage.size(), inputImage.type());
    orientationMap = tmp.clone();

    // Gradients x and y
    Mat grad_x, grad_y;
//    Sobel(inputImage, grad_x, CV_32F, 1, 0, 3, 1, 0, BORDER_DEFAULT);
//    Sobel(inputImage, grad_y, CV_32F, 0, 1, 3, 1, 0, BORDER_DEFAULT);
    Scharr(inputImage, grad_x, CV_32F, 1, 0, 1, 0);
    Scharr(inputImage, grad_y, CV_32F, 0, 1, 1, 0);

    // Vector field
    Mat Fx(inputImage.size(), inputImage.type()),
        Fy(inputImage.size(), inputImage.type()),
        Fx_gauss,
        Fy_gauss;
    Mat smoothed(inputImage.size(), inputImage.type());

    // Local orientation for each block
    int width = inputImage.cols;
    int height = inputImage.rows;
    int blockH;
    int blockW;

    //select block
    for(int i = 0; i < height; i+=blockSize)
    {
        for(int j = 0; j < width; j+=blockSize)
        {
            float Gsx = 0.0;
            float Gsy = 0.0;
            float Gxx = 0.0;
            float Gyy = 0.0;

            // clamp the block to the image bounds
            blockH = ((height-i)<blockSize)?(height-i):blockSize;
            blockW = ((width-j)<blockSize)?(width-j):blockSize;

            // accumulate over the W x W block
            for ( int u = i ; u < i + blockH; u++)
            {
                for( int v = j ; v < j + blockW ; v++)
                {
                    Gsx += (grad_x.at<float>(u,v)*grad_x.at<float>(u,v)) - (grad_y.at<float>(u,v)*grad_y.at<float>(u,v));
                    Gsy += 2*grad_x.at<float>(u,v) * grad_y.at<float>(u,v);
                    Gxx += grad_x.at<float>(u,v)*grad_x.at<float>(u,v);
                    Gyy += grad_y.at<float>(u,v)*grad_y.at<float>(u,v);
                }
            }

            float coh = sqrt(pow(Gsx,2) + pow(Gsy,2)) / (Gxx + Gyy);
            //smoothed
            float fi =  0.5*fastAtan2(Gsy, Gsx)*CV_PI/180;

            Fx.at<float>(i,j) = cos(2*fi);
            Fy.at<float>(i,j) = sin(2*fi);

            //fill blocks
            for ( int u = i ; u < i + blockH; u++)
            {
                for( int v = j ; v < j + blockW ; v++)
                {
                    orientationMap.at<float>(u,v) = fi;
                    Fx.at<float>(u,v) =  Fx.at<float>(i,j);
                    Fy.at<float>(u,v) =  Fy.at<float>(i,j);
                    coherence.at<float>(u,v) = (coh<0.85)?1:0;
                }
            }

        }
    } ///for

    GaussConvolveWithStep(Fx, Fx_gauss, 5, blockSize);
    GaussConvolveWithStep(Fy, Fy_gauss, 5, blockSize);

    for(int m = 0; m < height; m++)
    {
        for(int n = 0; n < width; n++)
        {
            smoothed.at<float>(m,n) = 0.5*fastAtan2(Fy_gauss.at<float>(m,n), Fx_gauss.at<float>(m,n))*CV_PI/180;
            if((m%blockSize)==0 && (n%blockSize)==0){
                int x = n;
                int y = m;
                int ln = sqrt(2*pow(blockSize,2))/2;
                float dx = ln*cos( smoothed.at<float>(m,n) - CV_PI/2);
                float dy = ln*sin( smoothed.at<float>(m,n) - CV_PI/2);
                arrowedLine(fprintWithDirectionsSmoo, Point(x, y+blockH), Point(x + dx, y + blockW + dy), Scalar::all(255), 1, CV_AA, 0, 0.06*blockSize);
//                qDebug () << Fx_gauss.at<float>(m,n) << Fy_gauss.at<float>(m,n) << smoothed.at<float>(m,n);
//                imshow("Orientation", fprintWithDirectionsSmoo);
//                waitKey(0);
            }
        }
    }///for2

    normalize(orientationMap, orientationMap,0,1,NORM_MINMAX);
    imshow("Orientation field", orientationMap);
    orientationMap = smoothed.clone();

    normalize(smoothed, smoothed, 0, 1, NORM_MINMAX);
    imshow("Smoothed orientation field", smoothed);

    imshow("Coherence", coherence);
    imshow("Orientation", fprintWithDirectionsSmoo);

}

Seems like nothing has been forgotten :)
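One helper that isn't shown above is GaussConvolveWithStep. A minimal sketch of one possible implementation, under my assumption that it Gaussian-smooths the per-block values and expands them back to pixel resolution:

// Hypothetical implementation of the missing helper: smooth a field that is
// constant inside each step x step cell with a kSize x kSize Gaussian
// applied at block resolution. (My guess at the intended behavior.)
void GaussConvolveWithStep(const Mat &src, Mat &dst, int kSize, int step)
{
    // one sample per block
    Mat small(src.rows/step, src.cols/step, src.type());
    for(int i = 0; i < small.rows; i++)
        for(int j = 0; j < small.cols; j++)
            small.at<float>(i,j) = src.at<float>(i*step, j*step);

    // smooth at block resolution
    GaussianBlur(small, small, Size(kSize, kSize), 0);

    // expand back to pixel resolution
    dst.create(src.size(), src.type());
    for(int i = 0; i < src.rows; i++)
        for(int j = 0; j < src.cols; j++)
            dst.at<float>(i,j) = small.at<float>(std::min(i/step, small.rows-1),
                                                 std::min(j/step, small.cols-1));
}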

Answered 2016-05-29T20:40:09.397

I read through your code carefully and found that you made a mistake when computing sum3 and sum4:

sum3 += inputImage.at<float>(u,v) * lowPassX.at<float>(i - u*lowPassSize, j - v * lowPassSize);
sum4 += inputImage.at<float>(u, v) * lowPassY.at<float>(i - u*lowPassSize, j - v * lowPassSize);

Instead of inputImage, you should use the low-pass filter.
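A sketch of the corrected lines, assuming a uniform 5x5 kernel for W and samples spaced blockSize apart as in the paper (bounds checks omitted):

// Weight the vector field by the 2-D low-pass kernel W, not by the input
// image. A uniform average is the simplest choice for W (my assumption).
float W = 1.0f / (lowPassSize * lowPassSize);
sum3 += W * lowPassX.at<float>(i - u*blockSize, j - v*blockSize);
sum4 += W * lowPassY.at<float>(i - u*blockSize, j - v*blockSize);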

Answered 2019-07-17T05:59:42.437