I am looking for an implementation of human motion tracking. The papers I have been reading also discuss detecting multiple moving objects extracted from successive video frames using frame differencing and the Lucas-Kanade optical flow method (my rough understanding of that optical flow step is sketched just below).
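For context, this is only a sketch I pieced together from the OpenCV 2.3 C++ API, not code from the papers; the video path, the number of tracked corners, and the printing are my own placeholder choices:

// Sketch: track sparse feature points with pyramidal Lucas-Kanade (OpenCV 2.3 C++ API).
// The video path and the number of tracked corners are placeholders.
#include <opencv2/opencv.hpp>
#include <vector>
#include <cstdio>

int main()
{
    cv::VideoCapture cap("E:\\highway.avi");          // placeholder test clip
    if (!cap.isOpened()) return -1;

    cv::Mat prevFrame, prevGray;
    cap >> prevFrame;
    if (prevFrame.empty()) return -1;
    cv::cvtColor(prevFrame, prevGray, CV_BGR2GRAY);

    // Pick corner points on the first frame to track.
    std::vector<cv::Point2f> prevPts;
    cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 5);

    cv::Mat frame, gray;
    for (;;)
    {
        cap >> frame;
        if (frame.empty()) break;
        cv::cvtColor(frame, gray, CV_BGR2GRAY);

        // Pyramidal Lucas-Kanade: estimate where each point moved in this frame.
        std::vector<cv::Point2f> nextPts;
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);

        // Keep only the points that were tracked successfully; their per-frame
        // positions are the motion trajectories.
        std::vector<cv::Point2f> kept;
        for (size_t i = 0; i < nextPts.size(); ++i)
        {
            if (!status[i]) continue;
            kept.push_back(nextPts[i]);
            std::printf("%d %.1f %.1f\n", (int)i, nextPts[i].x, nextPts[i].y);
        }
        if (kept.empty()) break;                       // lost all points

        gray.copyTo(prevGray);
        prevPts = kept;
    }
    return 0;
}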
In the scientific papers I found, connected-component filtering apparently has to be applied to get continuous motion tracking, but I do not understand how that process works (my guess at it is sketched below). All I actually need are the skeletonized trajectories and the coordinates of the human gait motion.
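My rough (possibly wrong) guess at what "connected component filtering" means in practice is something like the function below; the minimum-area threshold of 500 pixels is an arbitrary value I made up:

// Sketch: keep only connected foreground blobs large enough to be a person,
// and return the centroid of each surviving blob (one coordinate per blob per frame).
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point2f> filterComponents(const cv::Mat& binaryMask, double minArea = 500.0)
{
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat work = binaryMask.clone();               // findContours modifies its input
    cv::findContours(work, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2f> centroids;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        if (cv::contourArea(contours[i]) < minArea)
            continue;                                // discard small/noisy components
        cv::Moments m = cv::moments(contours[i]);
        centroids.push_back(cv::Point2f(float(m.m10 / m.m00), float(m.m01 / m.m00)));
    }
    return centroids;
}

Is that roughly what the papers mean, with the centroids collected over frames forming the track?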
I am working in OpenCV and C++, but for my case the OpenCV documentation on object detection is not enough. I come from a medical background, and I need this as part of a pediatrics project.
I found the motion-detection code below and tried to run it (I do not know whether it actually detects and tracks motion). However, it returns the errors listed below, which confuses me because they look trivial, and other commenters mentioned that they were able to run this code. I cannot get rid of these errors, nor do I understand why they occur. I am using OpenCV 2.3; the errors are:
- cannot open source file "stdafx.h"
- warning C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
- error C2086: 'CvSize imgSize' : redefinition
- error C2065: 'temp' : undeclared identifier
- error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
- error C2365: 'cvReleaseImage' : redefinition; previous definition was 'function' (c:\opencv2.3\opencv\build\include\opencv2\core\core_c.h(87) : see declaration of 'cvReleaseImage')
- error C2065: 'difference' : undeclared identifier
- error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
- error C2365: 'cvReleaseImage' : redefinition; previous definition was 'function' (c:\opencv2.3\opencv\build\include\opencv2\core\core_c.h(87) : see declaration of 'cvReleaseImage')
- error C2065: 'greyImage' : undeclared identifier
- error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
- error C2365: 'cvReleaseImage' : redefinition; previous definition was 'function' (c:\opencv2.3\opencv\build\include\opencv2\core\core_c.h(87) : see declaration of 'cvReleaseImage')
- error C2065: 'movingAverage' : undeclared identifier
- error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
- error C2365: 'cvReleaseImage' : redefinition; previous definition was 'function' (c:\opencv2.3\opencv\build\include\opencv2\core\core_c.h(87) : see declaration of 'cvReleaseImage')
- error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
- error C2365: 'cvDestroyWindow' : redefinition; previous definition was 'function' (c:\opencv2.3\opencv\build\include\opencv2\highgui\highgui_c.h(136) : see declaration of 'cvDestroyWindow')
- error C2440: 'initializing' : cannot convert from 'const char [10]' to 'int' (there is no context in which this conversion is possible)
- error C2065: 'input' : undeclared identifier
- error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
- error C2365: 'cvReleaseCapture' : redefinition; previous definition was 'function' (c:\opencv2.3\opencv\build\include\opencv2\highgui\highgui_c.h(311) : see declaration of 'cvReleaseCapture')
- error C2065: 'outputMovie' : undeclared identifier
- error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
- error C2365: 'cvReleaseVideoWriter' : redefinition; previous definition was 'function' (c:\opencv2.3\opencv\build\include\opencv2\highgui\highgui_c.h(436) : see declaration of 'cvReleaseVideoWriter')
- error C2059: syntax error : 'return'
- ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
The code:
// MotionDetection.cpp : Defines the entry point for the console application.
//
// Contourold.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "iostream"
#include "stdlib.h"
// OpenCV includes.
#include "cv.h"
#include "highgui.h"
#pragma comment(lib,"cv.lib")
#pragma comment(lib,"cxcore.lib")
#pragma comment(lib,"highgui.lib")
using namespace std;
int main(int argc, char* argv[])
{
//Create a new window.
cvNamedWindow("My Window", CV_WINDOW_AUTOSIZE);
//Create a new movie capture object.
CvCapture *input;
//Assign the movie to capture.
//inputMovie = cvCaptureFromAVI("vinoth.avi");
char *fileName = "E:\\highway.avi";
//char *fileName = "D:\\Profile\\AVI\\cardriving.wmv";
input = cvCaptureFromFile(fileName);
//if (!input)
//cout << "Can't open file" << fileName < ;
//Size of the image.
CvSize imgSize;
IplImage* frame = cvQueryFrame(input);
CvSize imgSize = cvGetSize(frame);
//Images to use in the program.
IplImage* greyImage = cvCreateImage( imgSize, IPL_DEPTH_8U, 1);
IplImage* colourImage;
IplImage* movingAverage = cvCreateImage( imgSize, IPL_DEPTH_32F, 3);
IplImage* difference;
IplImage* temp;
IplImage* motionHistory = cvCreateImage( imgSize, IPL_DEPTH_8U, 3);
//Rectangle to use to put around the people.
CvRect bndRect = cvRect(0,0,0,0);
//Points for the edges of the rectangle.
CvPoint pt1, pt2;
//Create a font object.
CvFont font;
//Create video to output to.
char* outFilename = argc==2 ? argv[1] : "E:\\outputMovie.avi";
CvVideoWriter* outputMovie = cvCreateVideoWriter(outFilename,
CV_FOURCC('F', 'L', 'V', 'I'), 29.97, cvSize(720, 576));
//Capture the movie frame by frame.
int prevX = 0;
int numPeople = 0;
//Buffer to save the number of people when converting the integer
//to a string.
char wow[65];
//The midpoint X position of the rectangle surrounding the moving objects.
int avgX = 0;
//Indicates whether this is the first time in the loop of frames.
bool first = true;
//Indicates the contour which was closest to the left boundary before the object
//entered the region between the buildings.
int closestToLeft = 0;
//Same as above, but for the right.
int closestToRight = 320;
//Keep processing frames...
for(;;)
{
//Get a frame from the input video.
colourImage = cvQueryFrame(input);
//If there are no more frames, jump out of the for.
if( !colourImage )
{
break;
}
//If this is the first time, initialize the images.
if(first)
{
difference = cvCloneImage(colourImage);
temp = cvCloneImage(colourImage);
cvConvertScale(colourImage, movingAverage, 1.0, 0.0);
first = false;
}
//else, make a running average of the motion.
else
{
cvRunningAvg(colourImage, movingAverage, 0.020, NULL);
}
//Convert the scale of the moving average.
cvConvertScale(movingAverage,temp, 1.0, 0.0);
//Minus the current frame from the moving average.
cvAbsDiff(colourImage,temp,difference);
//Convert the image to grayscale.
cvCvtColor(difference,greyImage,CV_RGB2GRAY);
//Convert the image to black and white.
cvThreshold(greyImage, greyImage, 70, 255, CV_THRESH_BINARY);
//Dilate and erode to get people blobs
cvDilate(greyImage, greyImage, 0, 18);
cvErode(greyImage, greyImage, 0, 10);
//Find the contours of the moving images in the frame.
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contour = 0;
cvFindContours( greyImage, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
//Process each moving contour in the current frame...
for( ; contour != 0; contour = contour->h_next )
{
//Get a bounding rectangle around the moving object.
bndRect = cvBoundingRect(contour, 0);
pt1.x = bndRect.x;
pt1.y = bndRect.y;
pt2.x = bndRect.x + bndRect.width;
pt2.y = bndRect.y + bndRect.height;
//Get an average X position of the moving contour.
avgX = (pt1.x + pt2.x) / 2;
//If the contour is within the edges of the building...
if(avgX > 90 && avgX < 250)
{
//If the the previous contour was within 2 of the left boundary...
if(closestToLeft >= 88 && closestToLeft <= 90)
{
//If the current X position is greater than the previous...
if(avgX > prevX)
{
//Increase the number of people.
numPeople++;
//Reset the closest object to the left indicator.
closestToLeft = 0;
}
}
//else if the previous contour was within 2 of the right boundary...
else if(closestToRight >= 250 && closestToRight <= 252)
{
//If the current X position is less than the previous...
if(avgX < prevX)
{
//Increase the number of people.
numPeople++;
//Reset the closest object to the right counter.
closestToRight = 320;
}
}
//Draw the bounding rectangle around the moving object.
cvRectangle(colourImage, pt1, pt2, CV_RGB(255,0,0), 1);
}
//If the current object is closer to the left boundary but still not across
//it, then change the closest to the left counter to this value.
if(avgX > closestToLeft && avgX <= 90)
{
closestToLeft = avgX;
}
//If the current object is closer to the right boundary but still not across
//it, then change the closest to the right counter to this value.
if(avgX < closestToRight && avgX >= 250)
{
closestToRight = avgX;
}
//Save the current X value to use as the previous in the next iteration.
prevX = avgX;
}
//Save the current X value to use as the previous in the next iteration.
prevX = avgX;
}
//Write the number of people counted at the top of the output frame.
cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, 0.8, 0.8, 0, 2);
cvPutText(colourImage, _itoa(numPeople, wow, 10), cvPoint(60, 200), &font, cvScalar(0, 0, 300));
//Show the frame.
cvShowImage("My Window", colourImage);
//Wait for the user to see it.
cvWaitKey(10);
//Write the frame to the output movie.
cvWriteFrame(outputMovie, colourImage);
}
// Destroy the image, movies, and window.
cvReleaseImage(&temp);
cvReleaseImage(&difference);
cvReleaseImage(&greyImage);
cvReleaseImage(&movingAverage);
cvDestroyWindow("My Window");
cvReleaseCapture(&input);
cvReleaseVideoWriter(&outputMovie);
return 0;
}
- Please help me resolve these errors and the underlying problem.
- How do I perform (human) motion tracking that could return trajectory coordinates through a skeletonization approach? (My current guess at the skeletonization step is sketched after this list.)
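To clarify what I mean by the second question: my current guess (it may be completely off) is that the binary foreground mask of the walking person would be thinned to a skeleton, and the skeleton or blob centroid would then be recorded per frame as the trajectory coordinate. A minimal sketch of a morphological skeletonization step, assuming an 8-bit single-channel binary mask as input (this is my own attempt, not the method from the papers):

// Sketch: morphological skeletonization of a binary mask (CV_8UC1, values 0/255).
#include <opencv2/opencv.hpp>

cv::Mat morphologicalSkeleton(const cv::Mat& binaryMask)
{
    cv::Mat img = binaryMask.clone();
    cv::Mat skel = cv::Mat::zeros(img.size(), CV_8UC1);
    cv::Mat element = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));
    cv::Mat eroded, opened, temp;

    // Repeatedly erode the mask and keep the pixels that an opening would remove;
    // their union approximates the medial axis (skeleton) of the blob.
    for (;;)
    {
        cv::erode(img, eroded, element);
        cv::dilate(eroded, opened, element);     // opening = erosion followed by dilation
        cv::subtract(img, opened, temp);
        cv::bitwise_or(skel, temp, skel);
        eroded.copyTo(img);
        if (cv::countNonZero(img) == 0)
            break;
    }
    return skel;
}

Is this the right direction for extracting gait trajectories, or do the papers mean a different kind of skeletonization?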