Hi! I hope this email finds you well. I am a student at Fudan University, China, currently studying how to improve tracking with deep learning. I came across your paper, "idtracker.ai: tracking all individuals in small or large collectives of unmarked animals," published in Nature Methods, and found it quite interesting and thought-provoking.
However, I have a few questions regarding some of the evaluation metrics presented in the paper, and I hope you could clarify them for me.
1. I believe the "human/manual validation" mentioned in the supplementary tables refers to identity correction performed by human effort, and that the reported results are mostly evaluated after this identity correction. Is my understanding correct?
2. In Supplementary Table 5, do the columns "Accuracy prot. cascade" and "Accuracy" both come from human validation?
3. Do you have any results evaluated directly, without human correction? Supplementary Table 4 does not appear to come from human validation.
Thank you very much. I look forward to your reply.