
I am working on image detection. I have to process each of the red, green, and blue components to get an edge map (a binary image in black and white) and then combine them into a single output image. After I extract each red, green, and blue component and apply a threshold to get the binary image, it does not show a binary image. Instead, it shows me a grayscale image. Can anyone help me? Here is my code so far.

BufferedImage buff_red;
int[] process_red;
int counter;
int width = 256;
int height = 256;

private void processActionPerformed(java.awt.event.ActionEvent evt) {
    width = inputimage.getWidth(null);
    height = inputimage.getHeight(null);

    // copy the input image into an RGB buffer
    buff_red = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    Graphics r = buff_red.getGraphics();
    r.drawImage(inputimage, 0, 0, null);
    r.dispose();

    // get the red element as a packed grayscale pixel array
    process_red = new int[width * height];
    counter = 0;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            int clr = buff_red.getRGB(j, i);
            int red = (clr & 0x00ff0000) >> 16;
            red = (0xFF << 24) | (red << 16) | (red << 8) | red;
            process_red[counter] = red;
            counter++;
        }
    }

    // set threshold value for red element
    int threshold = 100;
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            int bin = (buff_red.getRGB(x, y) & 0x000000ff); // low (blue) byte of the pixel
            if (bin < threshold)
                bin = 0;
            else
                bin = 255;
            buff_red.setRGB(x, y, 0xff000000 | bin << 16 | bin << 8 | bin);
        }
    }

Update:

The initialization of buff_red is done before the "get the red element" step (the first loop), i.e.:

buff_red = new BufferedImage(width,height,BufferedImage.TYPE_INT_RGB);
Graphics r = buff_red.getGraphics();
r.drawImage(inputimage, 0, 0, null);

Should I build a buffered image from process_red and use that in the thresholding step, so that I get the edge map?


1 Answer


One thing that looks suspicious from your code is that your "get red element step" (first loop) writes to process_red, but your thresholding step (second loop) reads from buf_red, which doesn't seem to be initialized anywhere. Is this a typo, or a bug in your code?
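
If the intent is to threshold the values you just extracted, a minimal sketch (assuming process_red holds one packed grayscale pixel per position, row by row, as in your first loop; binary_red is just an example name) would read from that array instead of going back to the original image:

// Sketch only: build a binary image from the extracted red-channel values
// instead of re-reading the original pixels.
BufferedImage binary_red = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
int threshold = 100;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int gray = process_red[y * width + x] & 0xff; // low byte holds the red value
        int bin = (gray < threshold) ? 0 : 255;
        binary_red.setRGB(x, y, 0xff000000 | (bin << 16) | (bin << 8) | bin);
    }
}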

You mention edge detection, but I can't see anything that looks like edge detection in the code that you posted. All you seem to be doing is extracting the red (green, blue) channels, thresholding them, and then combining them.
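
For reference, an edge detector has to look at neighbouring pixels, not just threshold each one in isolation. A rough sketch of a Sobel-style edge map on one channel might look like the following (sobelEdges and its channel argument are hypothetical, assuming the channel is stored as width*height gray values in the range 0-255):

// Hypothetical helper, not part of the posted code: Sobel edge magnitude
// for one 8-bit channel stored as width*height gray values (0-255).
static int[] sobelEdges(int[] channel, int width, int height) {
    int[] edges = new int[width * height];
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            // 3x3 neighbourhood around (x, y)
            int tl = channel[(y - 1) * width + (x - 1)];
            int tc = channel[(y - 1) * width + x];
            int tr = channel[(y - 1) * width + (x + 1)];
            int ml = channel[y * width + (x - 1)];
            int mr = channel[y * width + (x + 1)];
            int bl = channel[(y + 1) * width + (x - 1)];
            int bc = channel[(y + 1) * width + x];
            int br = channel[(y + 1) * width + (x + 1)];
            int gx = (tr + 2 * mr + br) - (tl + 2 * ml + bl); // horizontal gradient
            int gy = (bl + 2 * bc + br) - (tl + 2 * tc + tr); // vertical gradient
            edges[y * width + x] = Math.min(255, Math.abs(gx) + Math.abs(gy));
        }
    }
    return edges;
}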

It would help if you were more analytical in your approach to the problem. What is the earliest point where the problem manifests itself? Are you extracting the channels from the image correctly? Do your edge-detection images look right? Is your thresholding result giving you what you expect? You can answer all of these questions by yourself -- write out and inspect debug images.
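
A quick way to do that in Java is to dump each intermediate image to disk and look at it (the file name here is just an example):

// Write an intermediate result so each stage can be inspected.
try {
    javax.imageio.ImageIO.write(buff_red, "png", new java.io.File("debug_red.png"));
} catch (java.io.IOException e) {
    e.printStackTrace();
}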

Finally, you ideally shouldn't have to hand-code such low-level, mundane tasks (fetching a pixel, masking by 0xff, etc.) yourself, at least in Java. It's fun the first time through, but after that it's just another source of bugs and unexpected behavior. I don't currently use Java, but I'm certain it has an image-processing API that can handle such tasks for you.
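
For instance, the standard Raster API can pull out a single band without any manual shifting or masking. A sketch, assuming buff_red is your TYPE_INT_RGB copy of the input (band 0 is red):

// Extract the red band directly from the raster.
java.awt.image.Raster raster = buff_red.getRaster();
int[] redSamples = raster.getSamples(0, 0, width, height, 0, new int[width * height]);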

Answered 2011-01-08T14:46:01.220