Hi, I am using OpenCV to search an image frame for a specific pattern of dots, get its bounding box, identify it by the dot pattern, and get its orientation.
I do this by checking: if contours.size() == number of dots, go ahead and process.
So, in a black image with one such cluster, I can find the contours, get the orientation and all is well.
BUT, in an actual video frame (which is what it will eventually be), or when there are multiple clusters of dots, this obviously won't work, as there will be many more than 7 contours found.
I THINK the way to do this is to use quadtree image segmentation: split the image, and on each split, search each region for the 7 dots.
If they are found, attempt processing.
if not, split again, and so on.
Does this sound right?
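Setting the OpenCV specifics aside for a moment, here is how I understand the recursive split, sketched over dot centre points (which could come from cv::moments on each contour). The Pt and Rect2f types and the findClusters name are my own illustrative inventions, not part of any implementation:

```cpp
#include <cstddef>
#include <vector>

// Minimal stand-ins for a point and an axis-aligned region.
struct Rect2f { float x, y, w, h; };
struct Pt { float x, y; };

static bool inside(const Rect2f& r, const Pt& p) {
    return p.x >= r.x && p.x < r.x + r.w && p.y >= r.y && p.y < r.y + r.h;
}

// Recursively split `region` into four quadrants. A quadrant holding exactly
// `target` points is a candidate cluster (this is where the ID/orientation
// processing would go); one holding fewer is abandoned; one holding more is
// split again.
void findClusters(const std::vector<Pt>& pts, const Rect2f& region,
                  std::size_t target, std::vector<Rect2f>& out) {
    std::vector<Pt> local;
    for (const Pt& p : pts)
        if (inside(region, p)) local.push_back(p);
    if (local.size() == target) {       // candidate cluster: process here
        out.push_back(region);
        return;
    }
    if (local.size() < target) return;  // too few dots: give up on this region
    float hw = region.w / 2, hh = region.h / 2;
    if (hw < 1 || hh < 1) return;       // region too small to split further
    Rect2f quads[4] = {
        {region.x,      region.y,      hw, hh},
        {region.x + hw, region.y,      hw, hh},
        {region.x,      region.y + hh, hw, hh},
        {region.x + hw, region.y + hh, hw, hh},
    };
    for (const Rect2f& q : quads)
        findClusters(local, q, target, out);
}
```

One caveat with this scheme: a cluster that straddles a split boundary will never land wholly inside one quadrant, so it can be missed. Overlapping the quadrants slightly, or clustering the dot centres directly by proximity, might be more robust.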
I am looking at this C++ implementation:
but I can't figure out how to actually use it!
If someone could explain where to implement my processing code, it would be much appreciated!
Or, alternatively, a different approach :)