
Below is some source code I downloaded from somewhere. It detects red objects and displays their center coordinates.

a = imaqhwinfo;
% getCameraInfo is a helper function that ships with the original tutorial
% (it is not a MATLAB built-in); it returns the adaptor name, device ID and format.
[camera_name, camera_id, format] = getCameraInfo(a);


% Capture the video frames using the videoinput function
% Replace the format and adaptor name with the ones for your installed camera.
vid = videoinput(camera_name, camera_id, format);

% Set the properties of the video object
set(vid, 'FramesPerTrigger', Inf);
set(vid, 'ReturnedColorspace', 'rgb');
vid.FrameGrabInterval = 1;

% Start the video acquisition here
start(vid)

% Set up a loop that stops after 100 frames have been acquired
while(vid.FramesAcquired<=100)

% Get the snapshot of the current frame
data = getsnapshot(vid);

% Now to track red objects in real time we subtract the grayscale image
% from the red channel; regions that are strongly red stand out in the result.
diff_im = imsubtract(data(:,:,1), rgb2gray(data));
%Use a median filter to filter out noise
diff_im = medfilt2(diff_im, [3 3]);
% Convert the resulting grayscale image into a binary image.
diff_im = im2bw(diff_im,0.17);

% Remove all connected components that contain fewer than 300 pixels
diff_im = bwareaopen(diff_im,300);

% Label all the connected components in the image.
bw = bwlabel(diff_im, 8);

% Here we do the image blob analysis.
% We get a set of properties for each labeled region.
stats = regionprops(bw, 'BoundingBox', 'Centroid');

% Display the image
imshow(data)

hold on

% Loop over the detected objects and draw a bounding box, centroid marker and label for each.
for object = 1:length(stats)
    bb = stats(object).BoundingBox;
    bc = stats(object).Centroid;
    rectangle('Position',bb,'EdgeColor','r','LineWidth',2)
    plot(bc(1),bc(2), '-m+')
    % Use a name other than 'a' so the imaqhwinfo result is not overwritten.
    txt = text(bc(1)+15, bc(2), sprintf('X: %d  Y: %d', round(bc(1)), round(bc(2))));
    set(txt, 'FontName', 'Arial', 'FontWeight', 'bold', 'FontSize', 12, 'Color', 'yellow');
end

hold off
end
% The acquisition loop ends here.

% Stop the video acquisition.
stop(vid);

% Flush all the image data stored in the memory buffer.
flushdata(vid);

% Clear all variables
% clear all
disp('That was all about image tracking. Guess that was pretty easy :)')

The problem is that I want to detect the pupil of an eye, so I need to detect black in the image, but I do not know how to modify the code so that it detects black instead. Any ideas? Please help me, thank you all.


1 Answer

diff_im = imsubtract(data(:,:,1), rgb2gray(data));

is where the algorithm extracts the red component of the color data, so this is where you have to make a change.

Instead of extracting the red component (as pointed out in the code comments), you can simply keep using the grayscale image:

diff_im = rgb2gray(data);

But I think that would end up finding white objects. To work around this you could change the blob analysis (a sketch of that option follows a bit further down), or simply invert the input. I think it would look like this:

diff_im = imcomplement(rgb2gray(data));

I cannot test it here because I do not have access to the Image Processing Toolbox. Can you test it yourself?
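For completeness, here is a rough sketch of the other option mentioned above: keeping the grayscale pipeline from the question and changing the blob analysis so that the dark pixels become the foreground. This is untested for the same reason, and the 0.17 threshold is simply carried over from the original code:

% Untested sketch: same pipeline as in the question, but the binary mask is
% inverted so that the dark pixels (the pupil) become the labeled blobs.
  diff_im = rgb2gray(data);
  diff_im = medfilt2(diff_im, [3 3]);
  diff_im = im2bw(diff_im, 0.17);      % pupil pixels come out as 'false' here
  diff_im = ~diff_im;                  % invert the mask: pupil becomes 'true'
  diff_im = bwareaopen(diff_im, 300);  % drop small dark specks
  bw = bwlabel(diff_im, 8);
  stats = regionprops(bw, 'BoundingBox', 'Centroid');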

Test with Octave using the image package

The image I used for testing can be found here.

% Get the snapshot of the current frame
  data = imread('child-eye1-560x372.jpg');

% Instead of isolating the red channel, keep the grayscale image directly;
% the dark pupil then shows up as the darkest region of the frame.
  diff_im = rgb2gray(data);
  imwrite(diff_im,'diff_im.jpg');
%Use a median filter to filter out noise
  diff_im = medfilt2(diff_im, [3 3]);
  imwrite(diff_im,'diff_im_filt1.jpg');
% Convert the resulting grayscale image into a binary image.
  diff_im = im2bw(diff_im,0.17);
  imwrite(diff_im,'diff_im_filt2.jpg');

These are only the filtering steps; the blob analysis functions are not available in Octave. The resulting images are:

(Images: child-eye1-560x372.jpg, diff_im.jpg, diff_im_filt1.jpg, diff_im_filt2.jpg)

If I lower the im2bw threshold to 0.07, the result is even better: diff_im_filt2b.jpg

As you can see, this part of the process seems fine. The last image is binary, so the big blob should not be too hard to find. As before, I cannot test this myself...
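One more idea, which I have not tested either: instead of hand-tuning the im2bw threshold between 0.17 and 0.07, Otsu's method via graythresh can pick a level automatically. Whether that level actually separates the pupil from the rest of the eye depends on the image, so treat this as an assumption to verify:

% Untested idea: derive the im2bw threshold with Otsu's method instead of
% hard-coding 0.17 or 0.07; graythresh returns a level in the range [0, 1].
  gray = rgb2gray(data);
  level = graythresh(gray);
  diff_im = im2bw(gray, level);
  diff_im = ~diff_im;                  % keep the dark pixels as foreground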

Perhaps the problem is not the algorithm but the data you feed it: if the picture contains many small black blobs, the algorithm will find them and include them in the result.
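If that happens, a simple countermeasure (again only a sketch, not something I have run) is to request the 'Area' property from regionprops and keep only the largest blob as the pupil candidate:

% Untested sketch: of all the dark blobs that survive bwareaopen, keep only
% the largest one and treat it as the pupil.
  stats = regionprops(bw, 'Area', 'BoundingBox', 'Centroid');
  if ~isempty(stats)
      [~, idx] = max([stats.Area]);
      pupil = stats(idx);
      fprintf('Pupil center: X = %d, Y = %d\n', ...
              round(pupil.Centroid(1)), round(pupil.Centroid(2)));
  end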

Answered on 2012-05-08T22:03:30.920