Combining Keras with sklearn's GridSearchCV to find the best model parameters is common, and n_jobs>1 is typically used to parallelize the search.
When I run TensorFlow on the CPU there is no problem (it just uses the machine's cores). But when I run on the GPU with CUDA and n_jobs>1 (parallel jobs), it does not work: a CUDA error is shown.
Context:
Ubuntu 16.04 (64 bits)
Keras 2.0.8 (latest)
Tensorflow-gpu 1.3.0 (latest)
sklearn 0.19.0 (latest)
GPU: NVidia 1080 GTX, CUDA 8.0, cuDNN 6.0 (latest stable and recommended to work with Tensorflow)
Any advice?
Regards,
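A minimal sketch of the search setup described above. The estimator, parameter grid, and synthetic data here are placeholders of my own choosing (a plain sklearn LogisticRegression stands in for a KerasClassifier-wrapped model, since the search mechanics are identical); the key point is that with a GPU-backed model, n_jobs=1 keeps the candidates fitting sequentially so only one process opens a CUDA context at a time:

```python
# Hypothetical sketch: a serial GridSearchCV (n_jobs=1). With a GPU-backed
# Keras model, this avoids several worker processes fighting over one GPU.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Small synthetic dataset, just to make the example self-contained.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

param_grid = {"C": [0.1, 1.0, 10.0]}

# n_jobs=1: each candidate is fitted one after another in the parent
# process, so a single GPU context is never shared across workers.
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      param_grid, cv=3, n_jobs=1)
search.fit(X, y)
print(search.best_params_)
```

With a single GPU the serial search is usually no slower in practice, because the GPU, not the CPU worker count, is the bottleneck during model fitting.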
kadir...@gmail.com
Dec 4, 2018, 4:17:52 PM
to Keras-users
Can you run two Jupyter kernels with TensorFlow at the same time? I cannot; I always shut down the unused ones.
I don't have multiple GPUs configured,
so it seems normal that I cannot run GridSearchCV on the GPU with n_jobs>1.
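One likely reason two kernels (or two parallel search workers) collide on a single GPU: TensorFlow 1.x by default claims nearly all GPU memory when the first session is created, so the second process fails with a CUDA error. A session-config fragment like the following (for the TF 1.3 / Keras 2.0.8 versions listed above; the memory fraction value is an arbitrary example) makes each process allocate memory incrementally instead:

```python
# Session-config sketch for TensorFlow 1.x: let each process grow its GPU
# memory allocation on demand instead of grabbing it all up front.
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Alternatively, cap the fraction of GPU memory each process may take:
# config.gpu_options.per_process_gpu_memory_fraction = 0.4

# Hand the configured session to Keras before building any models.
K.set_session(tf.Session(config=config))
```

This can let two light kernels coexist on one GPU, but it does not make parallel GridSearchCV workers safe in general: if the combined memory demand exceeds the card, the later workers will still fail.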