Hi!
I haven't used the code in the notebook verbatim but have instead
adapted it for my own needs. In particular, I implemented some
functions to resume training in case I had to abort a training run. My
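Roughly, the resume logic I added looks like the sketch below. This is not the notebook's code, just a minimal illustration of the idea; the checkpoint path and the shape of the state dict are made up, and in my real code the state of course holds the network weights and optimizer state.

```python
import os
import pickle

CHECKPOINT = "checkpoint.pkl"  # hypothetical path, not from the notebook

def save_checkpoint(state, path=CHECKPOINT):
    # Write to a temp file first, then rename atomically, so an
    # aborted run cannot leave a half-written checkpoint behind
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path=CHECKPOINT):
    # Return the saved training state, or None to start from scratch
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return None

def train(num_epochs=10):
    # Resume from the last completed epoch if a checkpoint exists
    state = load_checkpoint() or {"epoch": 0, "weights": None}
    for epoch in range(state["epoch"], num_epochs):
        # ... one epoch of training would update state["weights"] here ...
        state["epoch"] = epoch + 1
        save_checkpoint(state)
    return state
```

So if a run dies after epoch 3, the next call to `train()` simply picks up at epoch 4.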
training regimen is almost exactly as given in the notebook, except that
I shuffle the data (only once, of course) before training the model. I
think that is more interesting than using the pre-generated folds in
the onsets_ISMIR_2012/splits/ directory.
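By shuffling I mean something like the following sketch (the function name and fold count are my own, not from the notebook): permute the file list once with a fixed seed, then deal it round-robin into folds, so the split is reproducible across resumed runs.

```python
import numpy as np

def shuffled_split(filenames, n_folds=8, seed=42):
    # Shuffle exactly once, with a fixed seed, so every (resumed)
    # run of the experiment sees the same fold assignment
    rng = np.random.RandomState(seed)
    order = rng.permutation(len(filenames))
    shuffled = [filenames[i] for i in order]
    # Deal the shuffled files round-robin into n_folds disjoint folds
    return [shuffled[i::n_folds] for i in range(n_folds)]
```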
Right now I'm rerunning the experiments to see whether my score was a
fluke or the result of a bug.
Btw, I have noticed that both the recurrent and the convolutional
neural networks in madmom are quite different from those described in
your articles. How come? For instance, the CNN uses tanh activations in
the convolutional layers, but you reported a higher score using ReLU.
On Mon, 29 Apr 2019 at 11:40, Sebastian Böck <sebastian.bo...@gmail.com> wrote: