What is the command line for GPU support? - Running examples/ssd/score_ssd_pascal.py

Eren Aktaş

Sep 21, 2021, 7:55:19 AM
to Caffe Users
Hi folks, 

My environment: 
- Ubuntu 18.04
- CUDA/cuDNN/OpenCV: all working properly

I fully compiled the Caffe framework in order to try the SSD example, and I am using a pre-trained SSD model for my experiment. In the test phase I can run score_ssd_pascal.py on my CPU without issue. However, I want to reduce the testing time by using the GPU.

My Caffe build is also ready for GPU use. However, when I run the command below,

 aktaseren@aktaseren-UX310UQK:~/caffe$ python examples/ssd/score_ssd_pascal.py

I get the response below in my shell. (I guess it is not an error, and that I might be missing a command-line parameter to enable the GPU for my experiment.)

caffe: command line brew
usage: caffe <command> <args>

commands:
  train           train or finetune a model
  test            score a model
  device_query    show GPU diagnostic information
  time            benchmark model execution time

  Flags from tools/caffe.cpp:
    -gpu (Optional; run in GPU mode on given device IDs separated by ','.Use
      '-gpu all' to run on all available GPUs. The effective training batch
      size is multiplied by the number of devices.) type: string default: ""
      currently: "0,"
    -iterations (The number of iterations to run.) type: int32 default: 50
    -level (Optional; network level.) type: int32 default: 0
    -model (The model definition protocol buffer text file.) type: string
      default: ""
    -phase (Optional; network phase (TRAIN or TEST). Only used for 'time'.)
      type: string default: ""
    -sighup_effect (Optional; action to take when a SIGHUP signal is received:
      snapshot, stop or none.) type: string default: "snapshot"
    -sigint_effect (Optional; action to take when a SIGINT signal is received:
      snapshot, stop or none.) type: string default: "stop"
    -snapshot (Optional; the snapshot solver state to resume training.)
      type: string default: ""
    -solver (The solver definition protocol buffer text file.) type: string
      default: ""
      currently: "models/VGGNet/VOC0712/SSD_300x300_score/solver.prototxt"
    -stage (Optional; network stages (not to be confused with phase), separated
      by ','.) type: string default: ""
    -weights (Optional; the pretrained weights to initialize finetuning,
      separated by ','. Cannot be set simultaneously with snapshot.)
      type: string default: ""
      currently: "models/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel"

In score_ssd_pascal.py, I have already defined which GPUs to use as follows. (The related CUDA and cuDNN are working properly. I have an NVIDIA GeForce 940MX. Is the definition below correct for the graphics card I have?)

# Solver parameters.
# Defining which GPUs to use.
gpus = "0, 1, 2, 3"
gpulist = gpus.split(",")
num_gpus = len(gpulist)
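
For context, this is roughly how I understand these lines end up on the caffe command line. The sketch below is only my illustration of the idea; the command assembly is my guess at what the script does, not code copied from score_ssd_pascal.py.

# Illustrative sketch only; solver_file and pretrain_model are placeholders that
# mirror the "currently:" paths shown in the help output above.
solver_file = "models/VGGNet/VOC0712/SSD_300x300_score/solver.prototxt"
pretrain_model = ("models/VGGNet/VOC0712/SSD_300x300/"
                  "VGG_VOC0712_SSD_300x300_iter_120000.caffemodel")

gpus = "0, 1, 2, 3"
gpulist = gpus.split(",")   # note: the spaces survive the split -> ['0', ' 1', ' 2', ' 3']
num_gpus = len(gpulist)

cmd = './build/tools/caffe train --solver="{}" --weights="{}"'.format(
    solver_file, pretrain_model)
if num_gpus > 0:
    # The whole comma-separated string is handed to caffe's -gpu flag, so every
    # device ID listed here has to be a GPU that actually exists on the machine.
    cmd += ' --gpu "{}"'.format(gpus)
print(cmd)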

Any suggestion or thought is welcome. Thank you very much in advance.



Eren Aktaş

Sep 21, 2021, 9:39:33 AM
to Caffe Users
Sorry guys, I have just spotted my own silly mistake. Since I have only one GPU, I should have defined just one device in the script, as follows.
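
That is, something like this, with the single GeForce 940MX as device 0:

# Solver parameters.
# Defining which GPUs to use: only device 0, since the machine has a single GPU.
gpus = "0"
gpulist = gpus.split(",")
num_gpus = len(gpulist)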

After updating the script accordingly, my test is now accelerated on the GPU by the Caffe framework, and it is much faster than the CPU run.

I can see that the GPU resources used to accelerate the test are assigned dynamically and automatically somehow. I guess there is some resource sharing between the CPU and the GPU that the machine has to manage. I am just wondering how I can tell whether my GPU is being used at full capacity during the test.
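
So far the only rough idea I have is to watch nvidia-smi while the test runs. A minimal polling sketch (it assumes nvidia-smi from the NVIDIA driver is on the PATH and is not specific to Caffe; stop it with Ctrl-C):

import subprocess
import time

# Print GPU utilization and memory usage once per second while the test runs.
while True:
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=utilization.gpu,memory.used,memory.total",
        "--format=csv,noheader",
    ])
    print(out.decode().strip())
    time.sleep(1)

The utilization.gpu column is what I would look at to judge whether the card is close to full capacity.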