Is it possible to assign GPU memory when cmd.sh is set to run.pl?


Sage Khan

Jul 1, 2022, 4:33:25 AM
to kaldi-help
Hello.

As of now I know that when we want to train on a local machine, we change queue.pl to run.pl in cmd.sh.

queue.pl is usually invoked with --mem 2G or whatever amount of memory we want to allocate.

How do I set GPU memory when cmd.sh uses run.pl? I have an RTX 3080 Ti with 12 GB of memory, and when I work on larger datasets I would like more of my GPU used to speed things up.
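For context, a typical cmd.sh from a Kaldi recipe looks something like the sketch below (exact variable names and options vary by recipe; this is illustrative, not from any specific egs/ directory):

```shell
# cmd.sh -- sketch of a typical Kaldi recipe configuration.

# On a cluster, jobs go through queue.pl with per-job resource requests
# (--mem here is a system-RAM request passed to the grid engine):
# export train_cmd="queue.pl --mem 2G"
# export decode_cmd="queue.pl --mem 4G"

# On a single local machine, run.pl simply forks local processes; it
# accepts resource flags for compatibility but does not enforce them:
export train_cmd="run.pl"
export decode_cmd="run.pl"
```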

Please help me out on this!

Jan Yenda Trmal

Jul 1, 2022, 10:19:28 AM
to kaldi-help
run.pl does not manage memory limits.
Also, the --mem 2G option controls system RAM, not GPU memory.
y.


Sage Khan

Jul 1, 2022, 10:52:44 AM
to kaldi-help
How can I bring my PC's GPU into use? For a local system we use run.pl, which does not give an option to use GPU memory. So what should be done?

Jan Yenda Trmal

Jul 1, 2022, 10:54:52 AM
to kaldi-help
there is no way to use GPU memory as normal system memory
y.

Sage Khan

Jul 1, 2022, 11:15:23 AM
to kaldi-help
Does Kaldi use CUDA and cuDNN in any manner (tensor cores, etc.)? Or is it entirely CPU-based when working on a local machine?

Jan Yenda Trmal

Jul 1, 2022, 11:17:12 AM
to kaldi-help
only for neural network training

Sage Khan

Jul 1, 2022, 11:42:34 AM
to kaldi-help
So how do we set GPU memory for neural network training?

Desh Raj

Jul 1, 2022, 11:48:14 AM
to kaldi...@googlegroups.com
You don't need to "set" GPU memory. There's a "--use-gpu" argument in the train.py stage (see any of the run_tdnn.sh scripts) which makes the training use the GPU. The GPU memory used then depends on your batch size. Usually we try to maximize GPU memory usage, so check nvidia-smi during training and set the batch size accordingly.
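To make that concrete, the workflow might look roughly like the sketch below. The exact option spellings differ between nnet3 scripts and script versions, so treat the flag names as illustrative rather than a definitive command line:

```shell
# Sketch: pass --use-gpu to the nnet3/chain training stage so it runs
# on the GPU (flag names illustrative; check your recipe's train.py).
steps/nnet3/chain/train.py \
  --use-gpu=wait \
  --trainer.optimization.minibatch-size=128 \
  ... # remaining recipe options unchanged

# In a second terminal, watch GPU memory utilization while training
# runs, then raise or lower the minibatch size accordingly:
watch -n 2 nvidia-smi
```

The idea is that minibatch size, not any run.pl flag, is what determines how much of the 12 GB the training actually occupies.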

Sage Khan

Jul 2, 2022, 1:06:13 AM
to kaldi-help
Thank you so much... makes sense now :)