Hello again,
I wanted to let you know that there are some typos and inconsistencies in the baseline description of bob.bio.vein for the bob.db.utfvp database. I tested the MC pipeline:
Currently, for MC, the results are listed as:

  nom   1vsall   nom
  4.5   0.9      0.4

however, they should be:

  nom   1vsall   full
  1.1   0.7      0.6
For protocol 1vsall, contrary to the protocol description, which suggests there are 1299 probes for each of the 1300 templates, in reality there are 1300 probes per template (each image is also scored against itself). The database has 325 classes with 4 images each; the run produced 5200 genuine and 1684800 impostor scores, and (1684800 + 5200) / (325 * 4) = 1300. Looking at the genuine scores alone: 5200 / 325 = 16 = 4 * 4, i.e. each of the 325 classes produces 16 genuine scores, meaning that for every class (finger) the 4 images belonging to it are scored all-vs-all.
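The arithmetic above can be checked with a few lines of Python (the class/image counts are as stated; the score totals are the ones reported by my run):

```python
# Sanity check of the 1vsall score counts: 325 classes with 4
# images each, every probe image compared against every enrolled
# template, self-comparisons included.
classes = 325
images_per_class = 4
templates = classes * images_per_class          # 1300 enrolled templates

genuine = 5200        # genuine scores from the run
impostor = 1684800    # impostor scores from the run

# Probes per template: 1300, not the 1299 the protocol
# description suggests.
probes_per_template = (genuine + impostor) // templates
print(probes_per_template)                      # -> 1300

# Each class yields 4 x 4 = 16 genuine scores (all-vs-all).
print(genuine // classes == images_per_class ** 2)  # -> True
```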
BTW -- is there any paper besides VERA, which describes these results?
I used the stable Bob versions:
bob.measure 4.2.2
bob.bio.base 4.1.1
bob.bio.vein 1.1.6
bob.db.utfvp 3.0.5
More details:
Protocol NOM
verify.py utfvp mc --protocol nom --groups dev eval --parallel 12 -T /TEMP -R /RESULTS -vv
bob bio metrics -e RESULTS/mc/nom/nonorm/scores-{dev,eval} -c eer
[Min. criterion: EER ] Threshold on Development set `RESULTS/mc/nom/nonorm/scores-dev`: 6.913305e-02
===================== ================ =================
.. Development Evaluation
===================== ================ =================
Failure to Acquire 0.0% 0.0%
False Match Rate 0.5% (244/46224) 0.2% (298/146688)
False Non Match Rate 0.5% (2/432) 2.0% (15/768)
False Accept Rate 0.5% 0.2%
False Reject Rate 0.5% 2.0%
Half Total Error Rate 0.5% 1.1%
===================== ================ =================
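As a side note, the Half Total Error Rate in the table is simply the average of FAR and FRR at the threshold fixed on the development set; plugging in the raw counts from the evaluation column above reproduces the 1.1%:

```python
# HTER = (FMR + FNMR) / 2, using the raw counts from the nom
# evaluation-set column of the table above.
fmr = 298 / 146688    # False Match Rate,     ~0.2%
fnmr = 15 / 768       # False Non Match Rate, ~2.0%
hter = (fmr + fnmr) / 2
print(f"{hter:.1%}")  # -> 1.1%
```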
Protocol FULL
verify.py utfvp mc --protocol full --groups dev --parallel 12 -T /TEMP -R /RESULTS -vv
bob bio metrics -c eer /RESULTS/mc/full/nonorm/scores-dev
[Min. criterion: EER ] Threshold on Development set `RESULTS/mc/full/nonorm/scores-dev`: 6.714574e-02
===================== ====================
.. Development
===================== ====================
Failure to Acquire 0.0%
False Match Rate 0.6% (12874/2067840)
False Non Match Rate 0.6% (36/5760)
False Accept Rate 0.6%
False Reject Rate 0.6%
Half Total Error Rate 0.6%
===================== ====================
Protocol 1vsall
verify.py utfvp mc --protocol 1vsall --parallel 12 -T /TEMP -R /RESULTS -vv
bob bio metrics -c eer /RESULTS/mc/1vsall/nonorm/scores-dev
[Min. criterion: EER ] Threshold on Development set `RESULTS/mc/1vsall/nonorm/scores-dev`: 6.691505e-02
===================== ====================
.. Development
===================== ====================
Failure to Acquire 0.0%
False Match Rate 0.7% (11340/1684800)
False Non Match Rate 0.7% (35/5200)
False Accept Rate 0.7%
False Reject Rate 0.7%
Half Total Error Rate 0.7%
===================== ====================