RateML on multiple GPUs


Jeriffer Smith

14 Mar 2022, 00:20:50
to TVB Users
Dear all,

I have two quick questions about TVB.

 Q1. How do I use RateML on multiple GPUs? When I run sbatch --gres=gpu:2, my driver file occupies two GPUs but only runs on the first one. Could you give me a simple example of how to run it on two or more GPUs?

 Q2. When I use the GUI to run my simulation, I've noticed something odd: when I run the same settings twice or more, the results differ (by about 5%-10%). Is this normal?
 
 Thanks for your time!

Best regards,

Jeriffer



WOODMAN Michael

14 Mar 2022, 04:42:12
to TVB Users

hi


1. The current GPU driver doesn't support multiple GPUs, though your Slurm script could run the same Python script twice on different GPUs (cf. https://stackoverflow.com/a/17949379 for how to select a device).

2. Stochastic simulations that are seeded differently will produce slightly different results, and this can also happen for deterministic simulations when the initial conditions are chosen randomly.
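Point 1 could be sketched as a Slurm job script (a hypothetical sketch, not part of RateML itself; the `model_driver.py` name and sweep flags are taken from later in this thread, and the device pinning via `CUDA_VISIBLE_DEVICES` follows the linked Stack Overflow answer):

```shell
#!/bin/bash
#SBATCH --gres=gpu:2
# Hypothetical sketch: launch the same driver twice, pinning each copy to
# one GPU via CUDA_VISIBLE_DEVICES, so each process sees only "its" card
# as device 0.
CUDA_VISIBLE_DEVICES=0 python model_driver.py -s0 10 -s1 10 -n 800 &
CUDA_VISIBLE_DEVICES=1 python model_driver.py -s0 10 -s1 10 -n 800 &
wait  # block until both runs finish
```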



cheers,

Marmaduke


From: tvb-...@googlegroups.com <tvb-...@googlegroups.com> on behalf of Jeriffer Smith <jeriffe...@gmail.com>
Sent: Monday, March 14, 2022 5:20:50 AM
To: TVB Users
Subject: [TVB] RateML on multiple GPUs
 

Jeriffer Smith

14 Mar 2022, 07:56:23
to TVB Users
Thanks, Marmaduke! That really helps a lot!

Jeriffer Smith

14 Mar 2022, 08:11:48
to TVB Users
Dear Marmaduke,

  I'm sorry to bother you again! I have another question about RateML. When I use 'python model_driver.py -s0 10 -s1 10 -n 800' to run my simulation, does '-n 800' mean the simulation length is 800 ms, or 800 samples (so that, with a period of 2 ms, the simulation length would be 1600 ms)?
  
Best regards,
Jeriffer

Jeriffer Smith

14 Mar 2022, 10:59:04
to TVB Users
Sorry again! I have a few more questions!

  Q1. If I want to get the BOLD signal rather than the temporal average signal, how should I modify my CUDA .c file (as in the attachment)? Could you give me a simple example?

  Q2. If the TR of the BOLD is 2 s, how should I set the temporal period? (temporal period = 2000?)

  Q3. When I use 'python model_driver.py -s0 10 -s1 10 -n 800' to run my simulation, does '-n 800' mean the simulation length is 800 ms, or 800 samples (so that, with a period of 2 ms, the simulation length would be 1600 ms)?

Thanks for your time!

WOODMAN Michael

14 Mar 2022, 12:06:37
to TVB Users

hi


The BOLD is still to be integrated; I will check on the current status.


-n 800 indicates 800 ms, and the driver code determines the number of time points required. 


cheers,

Marmaduke





Jeriffer Smith

14 Mar 2022, 20:48:45
to TVB Users
Thanks, Marmaduke. Do you mean that if -n is 800, then the number of time points = 800 / temporal period?

However, when I set -n 240 and temporal period = 2, I read the output 'tavg_data' and found its shape is (240, 2, 68, 100). I guess '2' means the two states of the model, '68' the number of regions, and '100' s0 * s1, i.e. 10 * 10, so maybe '240' is the number of time points? If '240' were the simulation length, I should get 120 time points.

I'm so sorry, but I'm really puzzled by this! Thanks for your time!

Michiel van der Vlag

24 Mar 2022, 09:58:14
to TVB Users
Dear Jeriffer,


Regarding the run time: the -n option sets the number of time steps, and -dt determines the length of each step in ms, so the simulation time is n*dt. The temporal period (tavg_period), which needs to be set manually, sets the number of inner steps that the kernel integrates over before the time step is advanced. If it is set to 2, the total number of integration steps is n * (2/dt). The average over these inner steps is saved at each of the n steps and thus does not affect the size of tavg_data. And you are correct about the entries: the '2' corresponds to the number of exposures exported from the GPU to main memory, and the '100' is every possible parameter combination (s0*s1).
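The arithmetic above can be sketched as follows (a hypothetical illustration using the numbers from this thread, not actual RateML driver code; the assumed dt of 1.0 ms is an example value):

```python
# Hypothetical sketch of the timing arithmetic described above.
n = 240            # -n: number of time steps
dt = 1.0           # -dt: step length in ms (assumed example value)
tavg_period = 2    # temporal average period, set manually

sim_time_ms = n * dt                       # total simulated time: 240 ms
inner_steps = int(n * (tavg_period / dt))  # total integration steps: 480

# tavg_data shape: (time points, exposures, regions, parameter combos)
s0, s1, n_regions, n_exposures = 10, 10, 68, 2
tavg_shape = (n, n_exposures, n_regions, s0 * s1)
print(tavg_shape)  # (240, 2, 68, 100)
```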

Kind regards,
Michiel.

On Tuesday, 15 March 2022 at 01:48:45 UTC+1, Jeriffer Smith wrote:

Jeriffer Smith

24 Mar 2022, 10:37:14
to TVB Users
That's so cool! Thanks, Michiel! You've helped me a lot!