I have a large image in numpy array form (OpenCV returns it as a height x width x 3 array of uint8 values) and want to compute a sum of Gaussian kernels for each pixel, i.e. (there's still no LaTeX support on SO, is there?):

p(x) = sum_{i=1..N} w_i * N(x; mu_i, Sigma_i)

for N different kernels, each with a specified weight w_i, mean mu_i and diagonal covariance matrix Sigma_i.
So basically I want a function compute_densities(image, kernels) -> numpy array of floats. What's the best way to do this efficiently in Python? I'd be surprised if there weren't already a library function in scipy for this, but I had statistics at uni a long time ago, so I get a bit confused by the details of the documentation.
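For instance, if scipy's scipy.stats.multivariate_normal (which I believe newer scipy versions ship) can evaluate the pdf over a whole array of points at once, the entire computation might collapse to something like this untested sketch (the mean/covariance shapes follow my representation below):

import numpy as np
from scipy.stats import multivariate_normal

def compute_densities(image, kernels):
    x = image.astype(np.float64)        # (height, width, 3)
    result = np.zeros(image.shape[:2])
    for mean, covariance, weight in kernels:
        # pdf() treats the last axis as the components, so this evaluates
        # the full normalized density at every pixel in one call
        result += weight * multivariate_normal.pdf(x, mean=mean.ravel(), cov=covariance)
    return result

Note that scipy would include the full (2pi)^{-3/2} normalization constant, but that only scales everything uniformly, so the ratios I care about are unchanged.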
Basically I want the following, just way more efficient than naive Python (the constant factor (2pi)^{-3/2} is ignored, since it doesn't matter to me: I'm only interested in ratios between the probabilities):
import math
import numpy as np

def compute_probabilities(img, kernels):
    old_settings = np.seterr(divide='ignore')  # 1 / covariance logs a warning otherwise
    result = np.zeros((img.shape[0], img.shape[1]))
    for row_pos, row_val in enumerate(img):
        for col_pos, val in enumerate(row_val):
            prob = 0.0
            for kernel in kernels:
                mean, covariance, weight = kernel
                val_sub_mu = np.array([val]).T - mean
                # invert the diagonal covariance entrywise, leaving the zeros alone
                cov_inv = np.where(covariance != 0, 1 / covariance, 0)
                # squared Mahalanobis distance (a 1x1 matrix)
                tmp = val_sub_mu.T.dot(cov_inv).dot(val_sub_mu)
                # det, not norm: the Gaussian normalization uses sqrt(det(Sigma))
                prob += weight / np.sqrt(np.linalg.det(covariance)) * \
                    math.exp(-0.5 * tmp.item())
            result[row_pos][col_pos] = prob
    np.seterr(**old_settings)
    return result
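For comparison, the kind of fully vectorized version I'm imagining broadcasts the per-pixel arithmetic over the whole image and only loops over the N kernels. This is just a sketch (compute_probabilities_vectorized is my own name, not a library call), assuming all diagonal variances are strictly positive so the zero-division guard above isn't needed:

def compute_probabilities_vectorized(img, kernels):
    x = img.astype(np.float64)             # (height, width, 3)
    result = np.zeros(img.shape[:2])
    for mean, covariance, weight in kernels:
        diag = np.diagonal(covariance)     # the three variances
        diff = x - mean.ravel()            # broadcast subtraction over all pixels
        # squared Mahalanobis distance for a diagonal covariance
        tmp = ((diff * diff) / diag).sum(axis=2)
        result += weight / np.sqrt(np.linalg.det(covariance)) * np.exp(-0.5 * tmp)
    return result

I'd expect this to already be a big win over the per-pixel loop, but maybe there's a cleaner library way.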
Input: cv2.imread on some jpg, which gives a 3D uint8 array of shape (height, width, 3) containing the three color channels (in BGR order).
kernels is a list of namedtuple('Kernel', 'mean covariance weight') instances, where mean is a vector, covariance is a 3x3 matrix with everything but the diagonal being zero, and weight is a float with 0 < weight < 1. For simplicity I only specify the diagonals and convert them to full 3x3 matrices afterwards (the representation isn't set in stone; I don't care how it's represented, so feel free to change all of that):
from collections import namedtuple

Kernel = namedtuple('Kernel', 'mean covariance weight')

some_kernels = [
    Kernel(np.array([(73.53, 29.94, 17.76)]), np.array([(765.40, 121.44, 112.80)]), 0.0294),
    ...
]
def fixup_kernels(kernels):
    new_kernels = []
    for kernel in kernels:
        # expand the stored diagonal into a full 3x3 covariance matrix
        cov = np.diag(kernel.covariance[0])
        # store the mean as a column vector so the dot products above line up
        new_kernels.append(Kernel(kernel.mean.T, cov, kernel.weight))
    return new_kernels
some_kernels = fixup_kernels(some_kernels)
img = cv2.imread("something.jpg")
result = compute_probabilities(img, some_kernels)
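To sanity-check whatever faster implementation I end up with against the naive loop, I'd compare the two on a small random image (compute_probabilities_vectorized being my sketch from above, and some_kernels fully filled in rather than elided):

rng = np.random.default_rng(0)
small = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)  # tiny fake image
naive = compute_probabilities(small, some_kernels)
fast = compute_probabilities_vectorized(small, some_kernels)
assert np.allclose(naive, fast)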