nn4.small2.v1 model released: LFW accuracy improved from 91.5% to 93.6%, with faster runtime.


Brandon Amos

Jan 12, 2016, 4:49:55 PM
to CMU-OpenFace, Bartosz Ludwiczuk
Hi OpenFace users,

This is a smaller announcement of the latest model, nn4.small2.v1, that improves
the LFW accuracy from 91.5% to 93.6%.
The model is smaller than nn4.v2 and improves the runtime
from 679.75 ms to 460.89 ms on an 8-core 3.70 GHz CPU
and from 21.96 ms to 13.72 ms on a Tesla K40 GPU.

Note that these embeddings aren't compatible with our other models.
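Embeddings from different models live in different spaces, so distances are only meaningful between embeddings produced by the same model. A minimal sketch of the comparison (not OpenFace's actual API; the threshold value is a hypothetical placeholder):

```python
# Minimal sketch: face embeddings are compared by squared L2 distance,
# which is only meaningful when both embeddings come from the SAME model.
def squared_l2(a, b):
    """Squared Euclidean distance between two equal-length embeddings."""
    assert len(a) == len(b), "embeddings must have the same dimension"
    return sum((x - y) ** 2 for x, y in zip(a, b))

def same_person(emb_a, emb_b, threshold=0.99):
    # threshold is a hypothetical value; tune it on a validation set.
    return squared_l2(emb_a, emb_b) < threshold
```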

I've updated the "Models and Accuracies" page at http://cmusatyalab.github.io/openface/models-and-accuracies/
with more information about the available models.
The model definition is available at
and the model is available at

This improvement comes from manually designing a smaller neural network
than FaceNet's original nn4 network, with the (naive) intuition that a smaller
model will work better with less data.
I think further exploration of model architectures will result in better
performance and accuracy.
I think the best approach to this is to randomly sample hyper-parameter
choices, train for a day while saving the best result, then repeat.
I won't start implementing this for a while,
but I'll track progress at this GitHub issue:
I've labeled it with the 'help wanted' tag in case anybody wants to contribute.
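The loop described above (sample hyper-parameters, train, keep the best) can be sketched as follows. The search space and `train_and_evaluate` are hypothetical placeholders, not OpenFace's actual training code:

```python
import random

# Hypothetical search space for illustration only; the real architecture
# choices would come from the nn4-style network definition.
SEARCH_SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),
    "embedding_dim": lambda: random.choice([64, 128, 256]),
    "num_inception_blocks": lambda: random.randint(3, 6),
}

def sample_params():
    """Draw one random configuration from the search space."""
    return {name: draw() for name, draw in SEARCH_SPACE.items()}

def random_search(train_and_evaluate, num_trials):
    """Keep the best (accuracy, params) pair over num_trials random draws."""
    best_acc, best_params = float("-inf"), None
    for _ in range(num_trials):
        params = sample_params()
        # In practice this step would train for ~a day and return LFW accuracy.
        acc = train_and_evaluate(params)
        if acc > best_acc:
            best_acc, best_params = acc, params
    return best_acc, best_params
```

Each trial is independent, so this search is trivial to parallelize across machines and to checkpoint between repeats.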

-Brandon.