I am using the library found at this link: https://code.google.com/p/simple-iphone-image-processing/
I added Image.h and Image.mm to my project, and I get the Canny edge detection filter by using this code:
- (IBAction)findEdges {
    ImageWrapper *greyScale = Image::createImage(_sourceImage, _sourceImage.size.width/4, _sourceImage.size.height/4);
    ImageWrapper *edges = greyScale.image->gaussianBlur().image->cannyEdgeExtract(0.4, 0.6);
    // show the results
    UIImage *newimageView = edges.image->toUIImage();
    _imageView.image = newimageView;
}
Now I notice there is a method called findLargestStructure. This is that method:
void Image::findLargestStructure(std::vector<ImagePoint> *maxPoints) {
    // process the image
    std::vector<ImagePoint> points;
    points.reserve(10000);
    for(int y=0; y<m_height; y++) {
        for(int x=0; x<m_width; x++) {
            // if we've found a point in the image then extract everything connected to it
            if((*this)[y][x]!=0) {
                extractConnectedRegion(x, y, &points);
                if(points.size()>maxPoints->size()) {
                    maxPoints->clear();
                    maxPoints->resize(points.size());
                    std::copy(points.begin(), points.end(), maxPoints->begin());
                }
                points.clear();
            }
        }
    }
}
My question is: what exactly does this method do, and how can I call/use it from Objective-C code? My end goal is to take the Canny edge detection this library provides and somehow derive a CGRect from it, using the average of the biggest structure's edge points, and then use that rect in my app.
So will the above method give me back a vector of those points or not? I am familiar with Objective-C, but I am not that great with C++.
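In case it helps, this is roughly what I am picturing (just a sketch of my guess, not working code). I am assuming findLargestStructure fills the vector I pass in, that ImagePoint has x and y members, and that a min/max bounding box is a reasonable way to turn those points into a CGRect; the * 4 is my attempt to scale back up, since the image was created at a quarter of the source size, and the method name findEdgesAndLargestStructure is just mine:

// at the top of my view controller's .mm file (Objective-C++ so C++ is allowed)
#import "Image.h"
#include <vector>

// inside the @implementation
- (IBAction)findEdgesAndLargestStructure {
    ImageWrapper *greyScale = Image::createImage(_sourceImage, _sourceImage.size.width/4, _sourceImage.size.height/4);
    ImageWrapper *edges = greyScale.image->gaussianBlur().image->cannyEdgeExtract(0.4, 0.6);

    // my guess: findLargestStructure fills this vector with the points of the
    // biggest connected structure in the edge image
    std::vector<ImagePoint> largestStructure;
    edges.image->findLargestStructure(&largestStructure);

    // assumption: ImagePoint has x and y members; build a bounding CGRect from
    // the min/max of those points, scaled back up to source-image coordinates
    if (!largestStructure.empty()) {
        int minX = largestStructure[0].x, maxX = largestStructure[0].x;
        int minY = largestStructure[0].y, maxY = largestStructure[0].y;
        for (size_t i = 1; i < largestStructure.size(); i++) {
            minX = MIN(minX, (int)largestStructure[i].x);
            maxX = MAX(maxX, (int)largestStructure[i].x);
            minY = MIN(minY, (int)largestStructure[i].y);
            maxY = MAX(maxY, (int)largestStructure[i].y);
        }
        CGRect structureRect = CGRectMake(minX * 4, minY * 4,
                                          (maxX - minX) * 4, (maxY - minY) * 4);
        NSLog(@"largest structure rect: %@", NSStringFromCGRect(structureRect));
    }

    _imageView.image = edges.image->toUIImage();
}

In particular I am not sure whether calling it on edges.image is the right place to invoke it.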
Any help/tips would be greatly appreciated!