I think what you are looking for is "hard negative mining". You can
use it when you have highly unbalanced classes: for example, if you
want to recognise platypuses in images, it is much easier to get
photos of non-platypuses than of platypuses.
In that case, you train on a balanced set, run the trained network over
the remaining negative examples, and add a few of the "most wrong" ones:
non-platypuses that your network is very confident are platypuses, while
keeping your original training set. Rinse and repeat if needed. This
technique is only useful if your dataset is very unbalanced.
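A minimal sketch of the mining step, in plain numpy (the scores would come from something like `model.predict` on your pool of negatives; the data here is made up for illustration):

```python
import numpy as np

def mine_hard_negatives(scores, k):
    """Return the indices of the k negatives the model is most
    confident are positives (highest predicted probability)."""
    return np.argsort(scores)[::-1][:k]

# Hypothetical predicted P(platypus) for 8 non-platypus images
scores = np.array([0.05, 0.92, 0.10, 0.71, 0.33, 0.88, 0.02, 0.40])

# Pick the three "most wrong" negatives to add back to the training set
hard_idx = mine_hard_negatives(scores, k=3)
print(hard_idx)  # -> [1 5 3]
```

You would then append those examples to the (still balanced) training set and retrain, rather than replacing the original set with them.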
If you retrain only on the worst cases, you will improve on them, but at
the cost of the good predictions: your network will be optimised for the
special cases while ignoring the common ones.
One thing that may improve your performance is to train a specialist
network using Hinton's "dark knowledge" (knowledge distillation):
http://deepdish.io/2014/10/28/hintons-dark-knowledge/
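The core trick from that post is softening the teacher's outputs with a temperature so the specialist can learn from the relative probabilities of the wrong classes. A small numpy sketch of that softening step (the logits here are invented for illustration):

```python
import numpy as np

def soft_targets(logits, T):
    """Temperature-softened softmax: higher T flattens the
    distribution, exposing the teacher's 'dark knowledge' about
    which wrong classes are almost right."""
    z = logits / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

teacher_logits = np.array([4.0, 1.0, 0.2])
print(soft_targets(teacher_logits, T=1.0))  # close to one-hot
print(soft_targets(teacher_logits, T=5.0))  # much softer distribution
```

The specialist is then trained against these soft targets (usually mixed with the true labels), so it inherits the big network's similarity structure rather than just its hard decisions.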
/David.