I see no problem feeding your captured images into a neural network after visualization is complete. Using Fiji, once the stitching is finished, you have the option to output a .txt file containing the relative position of each tile, and afterwards, if you want, perform operations on the data.
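As a rough sketch of that last step: Fiji's Grid/Collection stitching can write a TileConfiguration-style text file whose entries look something like `tile_01.tif; ; (0.0, 0.0)`. The exact layout may differ between Fiji versions, and the sample text below is my own invention, so treat this parser only as a starting point:

```python
import re

# Hypothetical sample of a registered tile configuration file; the exact
# layout can vary between Fiji versions, so this parser is only a sketch.
SAMPLE = """\
# Define the number of dimensions we are working on
dim = 2

# Define the image coordinates
tile_01.tif; ; (0.0, 0.0)
tile_02.tif; ; (812.4, -3.7)
"""

LINE_RE = re.compile(r"^(?P<name>\S+);\s*;\s*\((?P<x>[-\d.]+),\s*(?P<y>[-\d.]+)\)")

def parse_tile_positions(text):
    """Return {filename: (x, y)} from a TileConfiguration-style file."""
    positions = {}
    for line in text.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            positions[m.group("name")] = (float(m.group("x")),
                                          float(m.group("y")))
    return positions
```

Once you have that dictionary, feeding tiles and their coordinates to whatever downstream analysis you like is straightforward.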
The problem in your case, when you are registering data on the fly with Example_Video_Mosaic, is that you still have to ensure the student moves the sample accurately enough to keep the process going. If the sample is jerked out of the camera's field of view, or moved too quickly, the algorithm loses its reference and the process will fail. BoofCV does provide a way to recover with its object-tracking algorithms, but I am not very familiar with them and don't know whether they are compatible with low-quality or distorted images.
The microscope does not have to be moved automatically to provide a suitable dataset to stitch in a grid format. The highly optimised algorithms inside ImageJ do a very good job of compensating for uneven overlap, but there has to be a minimum amount of it: you could start at about 20% overlap and increase if needed.
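To put a number on that overlap: the stage step between snapshots is simply the field of view scaled by (1 − overlap). A tiny helper, where the 500 µm field of view is a made-up example value and 20% is the starting overlap suggested above:

```python
def stage_step(field_of_view_um: float, overlap: float) -> float:
    """Distance to move the slide between adjacent tiles so that
    neighbouring images share `overlap` (0..1) of their extent."""
    if not 0.0 <= overlap < 1.0:
        raise ValueError("overlap must be in [0, 1)")
    return field_of_view_um * (1.0 - overlap)

# e.g. a hypothetical 500 um field of view at the suggested 20% overlap:
# stage_step(500, 0.20) -> 400.0 um between tile positions
```

If the stitcher struggles, shrinking the step (i.e. raising the overlap) gives it more shared features to lock onto, at the cost of more tiles.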
In case your supply or budget is limited, you could always borrow some sort of dial indicator to index your microscope slide. You would still use your camera setup, but only take a snapshot when the slide is indexed at the correct position, and name each snapshot according to its position on the grid. In the worst case, if all of it has to be done manually, it would still be very accurate and provide the students with an easy-to-increment, repeatable visual reference.
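For the naming, one convention Fiji's Grid/Collection stitching understands is a sequential zero-padded index filled row by row (a pattern like `tile_{ii}.tif`). A small helper to turn grid coordinates into such names; the prefix, padding width, and extension here are my own choices, so adapt them to whatever pattern you tell the plugin to expect:

```python
def tile_filename(row: int, col: int, n_cols: int,
                  prefix: str = "tile_", ext: str = ".tif") -> str:
    """Map a (row, col) grid position to a sequential, zero-padded
    filename, filling the grid row by row starting at index 1."""
    index = row * n_cols + col + 1
    return f"{prefix}{index:02d}{ext}"

# For a 3-column grid:
# tile_filename(0, 0, 3) -> 'tile_01.tif'
# tile_filename(1, 2, 3) -> 'tile_06.tif'
```

Consistent names mean the students only have to read the dial, click, and increment a counter.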
In my honest opinion, since you say you have limited experience in programming, your best bet would be to structure your data in a grid fashion and use already highly optimized, proven software.
However, if you still wish to implement a phase-correlation algorithm on top of BoofCV, you might want to have a look at this GitHub repo and get a grasp of the process from an already working implementation.
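For intuition, the core of phase correlation is compact: Fourier-transform the two overlapping tiles, form the normalized cross-power spectrum, and the inverse transform peaks at the translation between them. Here is a pure-Python sketch using a deliberately naive O(N⁴) DFT on a tiny synthetic "tile" (real code would use an FFT library and handle non-circular shifts with windowing; this only illustrates the idea):

```python
import cmath
import random

def dft2(img, inverse=False):
    """Naive 2-D discrete Fourier transform; O(N^4), demo-sized only."""
    h, w = len(img), len(img[0])
    sign = 2j * cmath.pi * (1 if inverse else -1)
    out = [[0j] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            s = 0j
            for y in range(h):
                for x in range(w):
                    s += img[y][x] * cmath.exp(sign * (u * y / h + v * x / w))
            out[u][v] = s / (h * w) if inverse else s
    return out

def phase_correlate(a, b):
    """Return (dy, dx) such that b is (circularly) a shifted by (dy, dx)."""
    h, w = len(a), len(a[0])
    A, B = dft2(a), dft2(b)
    # Normalized cross-power spectrum: conj(A)*B / |conj(A)*B|
    cross = [[(A[u][v].conjugate() * B[u][v]) /
              (abs(A[u][v].conjugate() * B[u][v]) or 1.0)
              for v in range(w)] for u in range(h)]
    corr = dft2(cross, inverse=True)
    # The correlation surface peaks at the translation between the images
    dy, dx = max(((u, v) for u in range(h) for v in range(w)),
                 key=lambda p: corr[p[0]][p[1]].real)
    # Fold wrap-around into signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

# Demo: shift a random 8x8 "tile" by (2, 3) and recover the shift.
random.seed(0)
tile = [[random.random() for _ in range(8)] for _ in range(8)]
shifted = [[tile[(y - 2) % 8][(x - 3) % 8] for x in range(8)]
           for y in range(8)]
# phase_correlate(tile, shifted) -> (2, 3)
```

Working through a toy version like this should make the linked implementation much easier to follow.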