
I have a small bug in some OpenCV code that attempts to do the following:

  1. Stitch two webcam images together in real time.

  2. Perform background subtraction and extract the usual vector of contours from the resulting foreground Mat (a rough sketch of this step appears after this list).

  3. From the contours vector, calculate points that represent where each object is in the image. It could just be the centroid, but I'm using the midpoint of the bottom edge of each bounding box. This is done in the function "lowestPoints".

  4. Transfer these points into a new coordinate system. For now this amounts to just scaling them up or down to the new image's size, but in the future it could be any mapping. This is done in the function "tf".
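
For reference, step 2 is roughly equivalent to the sketch below (the MOG2 subtractor, the shadow threshold, the 5x5 cleanup kernel, and the helper name "extractContours" are illustrative choices, not necessarily what my code uses):

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

// Sketch of step 2: subtract the background from the stitched frame,
// clean up the mask, and extract the external contours.
// The subtractor is created once, before the main loop (OpenCV 3+ factory API):
//     Ptr<BackgroundSubtractor> subtractor = createBackgroundSubtractorMOG2();
vector<vector<Point>> extractContours(const Mat& pano, Ptr<BackgroundSubtractor>& subtractor)
{
    Mat fgMask;
    subtractor->apply(pano, fgMask);   // foreground mask for the current frame

    // MOG2 marks shadow pixels as gray (127); keep only definite foreground,
    // then open the mask to remove speckle noise
    threshold(fgMask, fgMask, 200, 255, THRESH_BINARY);
    morphologyEx(fgMask, fgMask, MORPH_OPEN,
                 getStructuringElement(MORPH_RECT, Size(5, 5)));

    vector<vector<Point>> contours;
    findContours(fgMask, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    return contours;
}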

I can confirm from testing and seeing the results that steps 1-3 work just fine. For some reason, step 4 doesn't work correctly at all: the calculated coordinates always have a zero x component, and the y component isn't scaled at all. Whether I plot the points or cout them, x is always zero, so everything lands on the left-hand edge of the new image. The relevant code is below:

//from transfunc.cpp

#include <opencv2/opencv.hpp>
#include <vector>
#include "defines.h"
#include "protos.h"

vector<Point> lowestPoints(const vector<vector<Point>>& contours)
{
    // Returns the midpoint of the bottom edge of each contour's bounding box
    vector<Point> lowPoints;

    for (const vector<Point>& contour : contours)
    {
        Rect box = boundingRect(contour);
        lowPoints.push_back(Point(box.x + box.width / 2, box.y + box.height));
    }

    return lowPoints;
}

vector<Point> tf(const vector<Point>& in, const vector<Point>& grid, const Mat& camImg, const Mat& layoutImg)
{
    // grid is unused for now; it's reserved for future, more general mappings
    int a, b;
    Point temp;
    vector<Point> out;

    std::cout << "Points set in to TF..." << std::endl << std::endl;
    printPoints(in);

    a = layoutImg.cols / camImg.cols;
    b = layoutImg.rows / camImg.rows;

    for (unsigned int i = 0; i < in.size(); i++)
    {
        temp.x = in[i].x * a;
        temp.y = in[i].y * b;
        out.push_back(temp);
    }

    std::cout << "Points calculated in TF..." << std::endl << std::endl;
    printPoints(out);

    return out;
}

// from main.cpp

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
#include "defines.h"
#include "protos.h"

int main()
{
    //variable declarations and initializations

    while(true) //main webcam feed loop
    {
        // First time through calculate necessary camera parameters for stitching
        // Grab frames from videoCaptures
        // Stitch the frames together (stored in Mat pano)
        // Background subtraction + calculate vector<vector<Point>> contours

        if (contours.size() > 0)
        {
            drawBoundingBoxes(contours, pano);
            lowPoints = lowestPoints(contours);
            objPoints = tf(lowPoints, grid, pano, topDown);
            // Draw everything

            std::cout << "Printing 'Lowest' Points in main..." << std::endl;
            printPoints(lowPoints);
            std::cout << "Printing 'Object' Points in main..." << std::endl;
            printPoints(objPoints);
        }

        // Show images

    }

    return 0;
}

The output for a sample frame looks as follows:

Points sent in to TF...

Point 0: (509, 340) Point 1: (477, 261)

Points calculated in TF...

Point 0: (0, 340) Point 1: (0, 261)

Printing 'Lowest' Points in main...

Point 0: (509, 340) Point 1: (477, 261)

Printing 'Object' Points in main...

Point 0: (0, 340) Point 1: (0, 261)

Why is the x-coordinate always zero, and why is the y-coordinate not being scaled by b?

Thanks,

-Tony


1 Answer


My guess is that you have an integer division problem. By default, C++ assumes that when you apply the division operator to two integers, you actually want integer division. In your case you almost certainly don't: layoutImg.cols / camImg.cols truncates to 0 whenever the layout image is narrower than the camera image, and your rows ratio evidently truncates to 1, which is exactly why x always comes out as 0 and y is left unscaled. Try this:

double a = static_cast<double>(layoutImg.cols) / camImg.cols;
double b = static_cast<double>(layoutImg.rows) / camImg.rows;
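
Note that a and b are now doubles, so when you store the scaled values back into an integer Point you should round explicitly rather than truncate. A minimal corrected version of tf might look like this (same signature as yours; cvRound is OpenCV's round-to-nearest helper):

vector<Point> tf(const vector<Point>& in, const vector<Point>& grid, const Mat& camImg, const Mat& layoutImg)
{
    vector<Point> out;

    // Compute the scale factors in floating point so ratios below 1.0 survive
    double a = static_cast<double>(layoutImg.cols) / camImg.cols;
    double b = static_cast<double>(layoutImg.rows) / camImg.rows;

    for (const Point& p : in)
        out.push_back(Point(cvRound(p.x * a), cvRound(p.y * b)));  // round back to pixel coordinates

    return out;
}
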
Answered 2013-07-08T15:48:33.637