
I am trying to perform image segmentation using scikit-learn's mean shift algorithm, and I use OpenCV to display the segmented image. My problem is the following: I use the code as given in various examples, but when I display the image after segmentation, I get a black image. I was wondering if someone could spot my mistake. Thanks a lot for the help!

Here is my code:

import numpy as np    
import cv2    
from sklearn.cluster import MeanShift, estimate_bandwidth

#Loading original image
originImg = cv2.imread('Swimming_Pool.jpg')

# Shape of original image    
originShape = originImg.shape


# Converting image into array of dimension [nb of pixels in originImage, 3]
# based on r g b intensities    
flatImg=np.reshape(originImg, [-1, 3])


# Estimate bandwidth for meanshift algorithm    
bandwidth = estimate_bandwidth(flatImg, quantile=0.1, n_samples=100)    
ms = MeanShift(bandwidth = bandwidth, bin_seeding=True)

# Performing meanshift on flatImg    
ms.fit(flatImg)

# (r,g,b) vectors corresponding to the different clusters after meanshift    
labels=ms.labels_

# Remaining colors after meanshift    
cluster_centers = ms.cluster_centers_    

# Finding and displaying the number of clusters    
labels_unique = np.unique(labels)    
n_clusters_ = len(labels_unique)    
print("number of estimated clusters : %d" % n_clusters_)    

# Displaying segmented image    
segmentedImg = np.reshape(labels, originShape[:2])    
cv2.imshow('Image',segmentedImg)    
cv2.waitKey(0)    
cv2.destroyAllWindows()

3 Answers


You can convert to another colour space (e.g., the Lab colour space, with the code below) and segment on the colour channels (discarding the intensity):

from skimage.color import rgb2lab
image = rgb2lab(image)

Then, using the code above, tune the parameters (quantile, n_samples) of the estimate_bandwidth() function, and finally plot the segmented image with matplotlib's subplot, as shown below:

import matplotlib.pyplot as plt

plt.figure()
plt.subplot(121), plt.imshow(image), plt.axis('off'), plt.title('original image', size=20)
plt.subplot(122), plt.imshow(np.reshape(labels, image.shape[:2])), plt.axis('off'), plt.title('segmented image with Meanshift', size=20)
plt.show()

The following output is obtained for the pepper image.
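As a minimal sketch of that idea (the synthetic image, the added noise, and the choice to cluster only the a/b channels are assumptions for illustration, not part of the original answer):

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import MeanShift, estimate_bandwidth

# Tiny synthetic RGB image: left half reddish, right half bluish,
# with a little noise so estimate_bandwidth() returns a non-zero value
rng = np.random.default_rng(0)
img = np.zeros((20, 20, 3))
img[:, :10] = [0.9, 0.1, 0.1]
img[:, 10:] = [0.1, 0.1, 0.9]
img = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)

lab = rgb2lab(img)
ab = lab[:, :, 1:].reshape(-1, 2)   # drop L (intensity), keep a/b (colour)

bandwidth = estimate_bandwidth(ab, quantile=0.1, n_samples=100)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
labels = ms.fit_predict(ab).reshape(img.shape[:2])
print("clusters found:", len(np.unique(labels)))
```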


Answered on 2019-10-17T00:50:46.983

For displaying the image, the correct code is:

segmentedImg = cluster_centers[np.reshape(labels, originShape[:2])]
cv2.imshow('Image', segmentedImg.astype(np.uint8))
cv2.waitKey(0)
cv2.destroyAllWindows()
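What makes this work is NumPy fancy indexing: indexing the (n_clusters, 3) cluster_centers array with a 2-D label map yields an (H, W, 3) array in which every pixel holds the colour of its cluster's centre. A toy illustration with made-up centres:

```python
import numpy as np

# Hypothetical centres for two clusters (BGR order, as OpenCV loads images)
cluster_centers = np.array([[255.0, 0.0, 0.0],
                            [0.0, 0.0, 255.0]])

# A 2x3 label map, standing in for np.reshape(ms.labels_, originShape[:2])
label_map = np.array([[0, 0, 1],
                      [1, 1, 0]])

segmented = cluster_centers[label_map]
print(segmented.shape)  # (2, 3, 3): each label replaced by its centre colour
```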

I tried your segmentation method on a random sample photo, and the segmentation looked poor, probably because your mean shift works only on colour space and loses the location information. The Python package skimage comes with a segmentation module that offers a few superpixel segmentation methods. The quickshift method is based on the same 'mode seeking' mechanism that meanshift is built on. Neither of these methods will segment out an entire object in an image; they provide extremely localized segmentations.
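For reference, a quickshift call might look like the sketch below (the random image and the parameter values are illustrative, not tuned for any particular picture):

```python
import numpy as np
from skimage.segmentation import quickshift

# Random RGB image just to show the call; quickshift expects a colour image
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))

# One superpixel label per pixel; smaller max_dist gives more, smaller segments
segments = quickshift(img, kernel_size=3, max_dist=6, ratio=0.5)
print(segments.shape)  # (32, 32)
```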

Answered on 2018-03-11T06:10:14.813

The problem is that you are trying to display the labels; you should use the label map to convert your image into superpixels.

import numpy as np    
import cv2    
from sklearn.cluster import MeanShift, estimate_bandwidth
from skimage.color import label2rgb

#Loading original image
originImg = cv2.imread('Swimming_Pool.jpg')

# Shape of original image    
originShape = originImg.shape


# Converting image into array of dimension [nb of pixels in originImage, 3]
# based on r g b intensities    
flatImg=np.reshape(originImg, [-1, 3])


# Estimate bandwidth for meanshift algorithm    
bandwidth = estimate_bandwidth(flatImg, quantile=0.1, n_samples=100)    
ms = MeanShift(bandwidth = bandwidth, bin_seeding=True)

# Performing meanshift on flatImg    
ms.fit(flatImg)

# (r,g,b) vectors corresponding to the different clusters after meanshift    
labels=ms.labels_

# Remaining colors after meanshift    
cluster_centers = ms.cluster_centers_    

# Finding and displaying the number of clusters    
labels_unique = np.unique(labels)    
n_clusters_ = len(labels_unique)    
print("number of estimated clusters : %d" % n_clusters_)    

# Displaying segmented image    
segmentedImg = np.reshape(labels, originShape[:2])

superpixels = label2rgb(segmentedImg, originImg, kind='avg')

cv2.imshow('Image',superpixels)    
cv2.waitKey(0)    
cv2.destroyAllWindows()
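A self-contained sketch of what label2rgb(..., kind='avg') does on a toy label map (the tiny image here is made up; note that recent scikit-image versions default to bg_label=0, which would paint cluster 0 as background, so bg_label=-1 is passed explicitly):

```python
import numpy as np
from skimage.color import label2rgb

# Toy 2x4 image: a dark region (label 0) and a bright region (label 1)
image = np.array([[[10, 10, 10], [20, 20, 20], [200, 200, 200], [250, 250, 250]],
                  [[30, 30, 30], [20, 20, 20], [240, 240, 240], [210, 210, 210]]],
                 dtype=np.uint8)
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])

# Each region is replaced by its average colour; bg_label=-1 keeps label 0
avg = label2rgb(labels, image, kind='avg', bg_label=-1)
print(avg.shape)  # (2, 4, 3)
```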
Answered on 2020-12-29T09:49:10.747