Dear idtracker devs & community
First of all, thanks for making such a high-quality and well-documented piece of software. I must say it worked out of the box for me, just by following the instructions on the website. I was elated to discover that the
conda install tensorflow-gpu=1.13
command also installs CUDA and cuDNN. Amazing. Keep it up!
I have a question about the feasibility of making idtrackerai faster by transferring knowledge from one video to another. I searched the documentation and the Google group for this, but could not find anything.
As I understand it, idtrackerai has at least one training step, which enables it to work out of the box on new (and very different) videos. I was wondering whether it would be possible to run the tracker in a mode that reuses the weights produced on video A to analyze video B, provided videos A and B are simply segments of the same longer video AB. I understand that the so-called "global fragments" need to be computed for every video, but perhaps the training would not have to be repeated? If so, this would make offline analysis of ever-longer videos (more than a few minutes) feasible.
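To make the idea concrete, here is a toy illustration of the warm-start intuition in plain NumPy (this is not idtrackerai's actual API, just a sketch of the general principle): a classifier trained on chunk A of the data starts out far better on chunk B than a freshly initialized one, because both chunks come from the same underlying source.

```python
import numpy as np

def logistic_loss(w, X, y):
    """Mean logistic loss for labels y in {-1, +1}."""
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def train(X, y, w0, steps=200, lr=0.5):
    """Plain gradient descent on the logistic loss, starting from w0."""
    w = w0.copy()
    for _ in range(steps):
        z = X @ w
        grad = -(y / (1 + np.exp(y * z))) @ X / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)            # one fixed "ground truth" for the whole video AB
X = rng.normal(size=(400, 5))
y = np.sign(X @ w_true)

XA, yA = X[:200], y[:200]              # "video A": first half of AB
XB, yB = X[200:], y[200:]              # "video B": second half of AB

w_A = train(XA, yA, np.zeros(5))       # weights learned on chunk A

cold = logistic_loss(np.zeros(5), XB, yB)  # fresh init: loss is log(2)
warm = logistic_loss(w_A, XB, yB)          # transferred weights: much lower loss
print(f"cold start loss on B: {cold:.3f}, warm start loss on B: {warm:.3f}")
```

The same logic is what motivates the question: if the identification network trained on A already fits B well, training on B could start from those weights (or be skipped) rather than starting from scratch.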
Thank you very much again for making idtrackerai!