Is there a way to speed up simulation runtime?

Yafim Simanovsky

Nov 5, 2019, 12:26:41 PM
to Accelerad Users
Hi,
I am running Rhino+GH for a Daylight simulation, analyzing a grid of points (23 points total).

Given the attached cmd window screenshot and the attached rad parameters image, is there a way to speed up the runtime without compromising simulation accuracy?

Also, is there a way to make sure I'm running it properly on the GPU? Other simulations showed a reference to CUDA etc., but this one doesn't seem to show any indication of GPU use.


Thank you
Simulation_runtime_cmd.png
rad_params.JPG

Nathaniel Jones

Nov 5, 2019, 12:52:57 PM
to Accelerad Users
Hi Yafim,

In order to benefit from using the GPU, you should be using a large grid of points. The GPU is quite useful if you have thousands of sensors to run calculations on. For a very small grid of 23 points, it is unlikely that you will receive any benefit from using the GPU.

In order to run the simulation from your screenshot on the GPU, you can run the command with accelerad_rtrace instead of rtrace. There are several ways to accomplish this, described in the documentation. For Honeybee, the easiest way is to replace the original programs with the Accelerad versions.
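For illustration only (the file names below are placeholders, not from your project), a grid-based run boils down to a call like the first line, and the GPU version is the same call with the program name swapped:

  rtrace -h -I [rad parameters] scene.oct < sensors.pts > results.dat
  accelerad_rtrace -h -I [rad parameters] scene.oct < sensors.pts > results.dat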

Can you explain what you mean about speeding up the run time without compromising accuracy? The settings you show in the Honeybee screenshot are already inaccurate because the -ad setting is very low. I would recommend using more accurate settings to start out with. In general, Accelerad is faster than Radiance because it uses GPU parallelism, but it still performs calculations using the same algorithms, and therefore it does not compromise accuracy.

There are some additional problems with your simulation settings. First, you need to adjust the reflection limit (-lr) whenever you change the ambient bounces (-ab). The absolute value of -lr needs to be greater than -ab, unless you set -lr to zero. Second, the ray weight limit (-lw) should be set to the inverse of the ambient divisions (-ad) or smaller; otherwise you will not do full ambient sampling. You can look at the examples in Rendering with Radiance to find typical combinations of settings.
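As a made-up illustration of those two rules (the numbers below are arithmetic examples, not a recommendation for your scene):

  -ab 5  -lr 6          (the magnitude of -lr exceeds -ab; -lr 0 would also be allowed)
  -ad 2048  -lw 0.0004  (0.0004 is below 1/2048, which is about 0.00049)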

To make sure that you are running on the GPU, look for this line in the output:
OptiX x.x.x found display driver xxx.xx, CUDA driver x.x.x, and x GPU device(s):
This will guarantee that you are running on the GPU.

Nathaniel

Yafim Simanovsky

Nov 5, 2019, 1:01:03 PM
to Accelerad Users
Hi Nathaniel,

Thank you for the quick reply.

Regarding the rad_params, which values would you recommend for -ad, -lr, -ab, and -lw?
I've been using this guide, which was recommended on the Honeybee forum:
(-ad there for 'accurate' is 512...)

I've seen this info
OptiX x.x.x found display driver xxx.xx, CUDA driver x.x.x, and x GPU device(s):
on another simulation I ran, and I know that my PATH uses accelerad_rtrace instead of rtrace, so that shouldn't affect the runtime.

23 points isn't a lot, but I am running simulations iteratively thousands of times, so it adds up to a long total runtime. Each simulation is about 10 seconds, so even if I bring it down to 9 or 8 seconds it would help a lot to reduce total time.


Thanks

Nathaniel Jones

Nov 5, 2019, 9:05:19 PM
to Accelerad Users
Hi Yafim,

There is no single best set of Radiance parameters. They usually need to be chosen for each scene. For indoor scenes, a typical good quality setting is -ab 6 -ad 1024 -lr 8 -lw 0.001. There's nothing wrong with the values recommended on the Honeybee forum. They're just old, from the 1990s, so the column labeled 'accurate' is no longer considered to have high accuracy.

As I mentioned to you in a previous thread, Honeybee does not use your PATH variable, which is why replacing the original Radiance programs with Accelerad versions is the easier method.
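If it helps, that replacement can be sketched from a cmd window roughly as below; the Accelerad install path is an assumption, so point it at wherever your copy actually lives, and keep a backup of the originals:

  rem assumed paths -- adjust to your own install locations, and repeat for rpict, rcontrib, etc.
  copy C:\Radiance\bin\rtrace.exe C:\Radiance\bin\rtrace_original.exe
  copy "C:\Program Files\Accelerad\bin\accelerad_rtrace.exe" C:\Radiance\bin\rtrace.exe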

For only 23 points, I doubt you will even see a 1-second speedup with Accelerad, and in fact Accelerad could take even longer because of the added GPU setup that needs to happen with each run. If you have many iterations you need to run, you could combine them all into one model, and then you would experience speedup from the GPU. Otherwise, you are probably better off using CPU parallelism by running multiple instances of Radiance simultaneously.
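As a rough sketch of that last option (the point and scene file names here are placeholders), you could split the sensor points into chunks and launch one rtrace per chunk from a cmd window:

  start /b cmd /c "rtrace -h -I [rad parameters] scene.oct < points_part1.pts > part1.res"
  start /b cmd /c "rtrace -h -I [rad parameters] scene.oct < points_part2.pts > part2.res"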

Nathaniel

tree...@gmail.com

Nov 6, 2019, 6:38:00 AM
to Accelerad Users
Okay, I'll try to play with the parameters and see if accuracy improves and if simulation runtime is longer.
Thanks :)

tree...@gmail.com

Nov 7, 2019, 12:05:27 PM
to Accelerad Users
Hi,

I was trying to further see if the GPU is being used properly.

Honeybee has an option to choose how many CPU cores run the simulation. I changed the number from 1 to 4, and when I run the simulation there are 4 cmd windows instead of the previous 1, which suggests to me that the simulation isn't running on the GPU.

I have the folders:
C:\Radiance\bin
C:\Radiance\lib

in which, as per the tutorial, I've replaced the Radiance .exe's with Accelerad's and changed the names.

Do you have an idea why it's still running on the CPU?

Thanks

Nathaniel Jones

Nov 7, 2019, 6:15:49 PM
to Accelerad Users
The number of cmd windows you see has absolutely nothing to do with whether or not you are using the GPU. When you tell Honeybee to use 4 cores, it launches 4 instances of whatever simulation you are running and breaks up the work between them.

For Accelerad, you should always choose to use one core and let Accelerad parallelize the work itself. If you launch multiple instances of Accelerad at the same time, they take turns accessing the GPU, which is slower than sending the entire simulation job to a single Accelerad instance.

As I stated before, to make sure that you are running on the GPU, look for this line in the cmd window:
OptiX x.x.x found display driver xxx.xx, CUDA driver x.x.x, and x GPU device(s):
This guarantees that you are running on the GPU.

Nathaniel

tree...@gmail.com

Nov 8, 2019, 10:40:02 AM
to Accelerad Users
Do you know of a 'test' GH file where I would be able to see this line:
OptiX x.x.x found display driver xxx.xx, CUDA driver x.x.x, and x GPU device(s):

For my daylighting and radiation studies, it doesn't appear at the moment. Perhaps the lack of that line points to an issue.

Nathaniel Jones

Nov 8, 2019, 11:24:03 AM
to Accelerad Users
Based on the cmd window contents in the screenshot at the beginning of this chain, the line should appear in the file C:\ladybug\unnamed\gridBasedSimulation\error.log. If you do not see the line, then you have one of these issues:
  1. You have not successfully replaced the Radiance programs with the Accelerad programs in the folder that Honeybee uses. You can test this by opening a new cmd window and typing C:\Radiance\bin\rtrace -version to see if you are using the Accelerad version of rtrace, for example.
  2. You are running a Honeybee simulation that does not use one of the ray tracing programs (rtrace, rpict, rcontrib, rfluxmtx, or genBSDF).
  3. You are running Accelerad, but you are using the -g parameter to turn off GPU acceleration.
  4. You are running Accelerad with the GPU, but you are using the -w parameter to hide output messages.
Again based on the screenshot, if you do not see the line, then it is the first issue, and there is no test file for this because it is a setup issue.
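One quick check, assuming that log path, is to search the file from a cmd window; if the line is present, findstr will print it:

  findstr /c:"OptiX" C:\ladybug\unnamed\gridBasedSimulation\error.log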

Nathaniel

tree...@gmail.com

Nov 8, 2019, 11:49:37 AM
to Accelerad Users
I think everything is good then :)
I have this in the error.log:

rtrace: OptiX 6.5.0 found display driver 436.15, CUDA driver 10.1.0, and 1 GPU device:
rtrace: Device 0: GeForce GTX 960M with 5 multiprocessors, 1024 threads per block, 1176000 Hz, 4294967296 bytes global memory, 1048576 hardware textures, compute capability 5.0, timeout disabled, Tesla compute cluster driver disabled, PCI 0000:01:00.0.

And C:\Radiance\bin\rtrace -version tells me it's Accelerad version 0.7.

Thank you!