I'm using Python to count the frequency of pixel colors in an image. The Python Imaging Library can convert an image to a list of RGB values, and from there I can easily count duplicates, ending up with a dictionary of pixel values (as strings) and frequencies, like so:
{
"255-255-255": 450,
"255-254-254": 345,
"249-250-255": 184,
"124-130-200": 3,
} [etc etc]
(Essentially it's a histogram.)
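For reference, a minimal sketch of the counting step, assuming the pixels have already been extracted (e.g. via PIL's list(img.getdata()); the sample list here is a stand-in for that):

```python
from collections import Counter

# Stand-in for list(img.getdata()) from PIL -- a few sample RGB tuples.
pixels = [(255, 255, 255), (255, 255, 255), (255, 254, 254), (124, 130, 200)]

# Count duplicates, keyed by the "R-G-B" string form shown above.
histogram = Counter("-".join(map(str, px)) for px in pixels)
```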
For large images, I'm then quantizing each channel to the nearest multiple of N, so I might then have:
[
("255-255-255", 450),
("255-255-255", 345),
("250-250-255", 184),
("125-130-200", 3),
] [etc etc]
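(The quantization step itself isn't shown above; one way to do it, assuming "quantizing to multiples of N" means rounding each 0-255 channel to the nearest multiple of N, capped at 255, is:)

```python
def quantize_channel(v, n):
    # Round a 0-255 channel value to the nearest multiple of n, capped at 255.
    return min(255, int(round(v / n)) * n)

def quantize_key(key, n):
    # e.g. "255-254-254" -> "255-255-255" with n = 5
    parts = [quantize_channel(int(p), n) for p in key.split("-")]
    return "-".join(map(str, parts))
```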
This leaves a lot of duplicate "keys" (the data is now a list of (key, count) tuples, since a dict can't hold duplicate keys). I now need to condense the list, summing the counts of all duplicates. So far I have:
def condense_adjacent(vals):
    # Merge adjacent entries that share the same key.
    # Note: vals must be a list of [key, count] lists, not tuples --
    # tuples are immutable, so vals[c][1] += ... would raise a TypeError.
    c = 0
    while c < len(vals) - 1:
        if vals[c][0] == vals[c + 1][0]:
            vals[c][1] += vals[c + 1][1]
            vals.pop(c + 1)
        else:
            c += 1
    return vals
It works fine, but surely there's a way with list comprehensions? Or some other more efficient approach? I realize PIL may be able to do this for me, but I'd like to do it by hand while I'm learning how images work. Thanks!
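(For comparison, the merge step can be done in a single O(n) pass with a dict or collections.Counter, which also drops the requirement that duplicates be adjacent -- a sketch, not necessarily the only way:)

```python
from collections import Counter

def condense_with_counter(pairs):
    # Sum the counts of every duplicate key in one pass.
    totals = Counter()
    for key, count in pairs:
        totals[key] += count
    return list(totals.items())
```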