LSTM training: parameter for controlling computing resources


Zhiyong Sun

Jun 6, 2017, 12:57:02 AM
to tesseract-ocr
Hi all,

I've tried LSTM training with 4.00alpha, and I can see it's a huge improvement.

However, training speed is a problem when training from scratch, so I'd like to speed it up via a training setting.

Is there a parameter I could use to control the number of CPUs used during LSTM training?
Also, is there a plan in a future version to speed up training on large-scale data, e.g. distributed training or GPU support?
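For what it's worth, since I believe the LSTM code in Tesseract is parallelized with OpenMP, my guess is that the standard OpenMP environment variables would cap CPU usage. This is only an assumption on my part (the `lstmtraining` arguments below are placeholders, not a verified invocation):

```shell
# Assumption: lstmtraining is built with OpenMP, so the standard
# OpenMP thread-limit variable should cap how many CPUs it uses.
export OMP_THREAD_LIMIT=2

# The limit now applies to any OpenMP program started from this shell,
# for example (paths below are placeholders):
#   lstmtraining --traineddata eng.traineddata --model_output out/base ...

echo "OpenMP thread limit: $OMP_THREAD_LIMIT"
```

Setting `OMP_THREAD_LIMIT=1` would presumably force single-threaded training, which might even help on small models where thread overhead dominates.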

Thanks