We're working with a GP (doctor) to build an otoscope-based system that can detect inner ear movement (ear rumbling).
We've shown this can work via basic movement detection, and are now trying to make it more robust.
We're looking to use BoofCV's KLT method to apply motion detection (while using USB OTG to keep it compatible with COTS cameras/otoscopes).
This seems to let us distinguish whole-frame movement (camera motion) from clustered movement among the KLT tracks, which gives us a good indication/probability of tympanic motion.
I wonder if anyone could suggest methods for this last part: detecting clustered KLT changes in an image? It's the clustered movement that appeals here, since there are lots of in-ear variables, e.g. lighting, cleanliness, narrowness of the aperture, etc.
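For what it's worth, here is one way I've seen this framed: estimate the global (camera) motion as the median displacement of all KLT tracks, subtract it from each track, and then spatially cluster the tracks whose residual motion is large. A compact patch of coherent residual motion is a reasonable signal of localized membrane movement rather than camera shake. The sketch below is dependency-free Java; in practice you would feed it per-track displacements pulled from BoofCV's `PointTracker` (matching tracks between frames by feature ID). The `TrackMotion` class, the thresholds, and the greedy single-link clustering are all my own illustrative choices, not BoofCV API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ClusteredMotion {

    /** A track's current pixel position and its displacement since the last frame. */
    public static class TrackMotion {
        public final double x, y;   // current pixel location
        public final double dx, dy; // displacement from previous frame
        public TrackMotion(double x, double y, double dx, double dy) {
            this.x = x; this.y = y; this.dx = dx; this.dy = dy;
        }
    }

    /** Median of an array; robust estimate of the dominant (camera) motion per axis. */
    static double median(double[] v) {
        double[] s = v.clone();
        Arrays.sort(s);
        int n = s.length;
        return n % 2 == 1 ? s[n / 2] : 0.5 * (s[n / 2 - 1] + s[n / 2]);
    }

    /**
     * Removes the global median motion, keeps tracks whose residual motion
     * exceeds minResidual, and groups them by single-link spatial clustering
     * with radius linkRadius. A large returned cluster suggests localized
     * (e.g. tympanic) movement rather than whole-frame camera motion.
     */
    public static List<List<TrackMotion>> findMotionClusters(
            List<TrackMotion> tracks, double minResidual, double linkRadius) {
        int n = tracks.size();
        double[] dxs = new double[n], dys = new double[n];
        for (int i = 0; i < n; i++) { dxs[i] = tracks.get(i).dx; dys[i] = tracks.get(i).dy; }
        double gx = median(dxs), gy = median(dys); // estimated camera motion

        // Keep only tracks whose motion differs noticeably from the global motion.
        List<TrackMotion> outliers = new ArrayList<>();
        for (TrackMotion t : tracks) {
            if (Math.hypot(t.dx - gx, t.dy - gy) > minResidual) outliers.add(t);
        }

        // Greedy single-link clustering: points within linkRadius join the same cluster.
        List<List<TrackMotion>> clusters = new ArrayList<>();
        boolean[] used = new boolean[outliers.size()];
        for (int i = 0; i < outliers.size(); i++) {
            if (used[i]) continue;
            List<TrackMotion> cluster = new ArrayList<>();
            ArrayDeque<Integer> queue = new ArrayDeque<>();
            queue.add(i); used[i] = true;
            while (!queue.isEmpty()) {
                int a = queue.poll();
                cluster.add(outliers.get(a));
                for (int b = 0; b < outliers.size(); b++) {
                    if (!used[b] && Math.hypot(outliers.get(a).x - outliers.get(b).x,
                                               outliers.get(a).y - outliers.get(b).y) <= linkRadius) {
                        used[b] = true;
                        queue.add(b);
                    }
                }
            }
            clusters.add(cluster);
        }
        return clusters;
    }

    public static void main(String[] args) {
        List<TrackMotion> tracks = new ArrayList<>();
        // Background tracks: uniform 1px rightward drift (simulated camera motion).
        for (int i = 0; i < 20; i++) tracks.add(new TrackMotion(10 * i, 5 * i, 1.0, 0.0));
        // A tight patch of tracks moving differently (simulated membrane motion).
        for (int i = 0; i < 5; i++) tracks.add(new TrackMotion(100 + i, 100 + i, 1.0, 3.0));

        List<List<TrackMotion>> clusters = findMotionClusters(tracks, 1.5, 10.0);
        System.out.println("clusters=" + clusters.size()
                + " largest=" + clusters.stream().mapToInt(List::size).max().orElse(0));
        // → clusters=1 largest=5
    }
}
```

From here you could score "tympanic motion probability" from the size and motion coherence of the largest cluster, which should be less sensitive to lighting and aperture variation than raw frame differencing.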