Hi Yafim,
To benefit from the GPU, you should be using a large grid of points. The GPU becomes worthwhile when you have thousands of sensors to run calculations on; for a very small grid of 23 points, you are unlikely to see any benefit from it.
To run the simulation from your screenshot on the GPU, run the command with accelerad_rtrace instead of rtrace. There are several ways to accomplish this, as described in the documentation. For Honeybee, the easiest way is to replace the original Radiance programs with the Accelerad versions.
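For example, the swap is just a change of program name (the octree and sensor point file names below are placeholders, not from your setup):

    rtrace [options] scene.oct < sensors.pts > results.dat             (runs on the CPU)
    accelerad_rtrace [options] scene.oct < sensors.pts > results.dat   (runs on the GPU)

The options, octree, and sensor points stay exactly the same in both cases.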
Can you explain what you mean about speeding up the run time without compromising accuracy? The settings in your Honeybee screenshot are already inaccurate because the -ad setting is very low, so I would recommend starting with more accurate settings. In general, Accelerad is faster than Radiance because it uses GPU parallelism, but it performs the calculations with the same algorithms, so it does not compromise accuracy.
There are some additional problems with your simulation settings. First, you need to adjust the reflection limit (-lr) whenever you change the ambient bounces (-ab): the absolute value of -lr must be greater than -ab, unless you set -lr to zero. Second, the ray weight limit (-lw) should be set to the inverse of the ambient divisions (-ad) or smaller; otherwise you will not do full ambient sampling. You can look at the examples in Rendering with Radiance for typical combinations of settings.
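As one illustration of those relationships (not a recommendation tuned to your model, and again with placeholder file names), a consistent set of settings could look like:

    accelerad_rtrace -h -I -ab 5 -lr 8 -ad 4096 -as 512 -lw .0002 scene.oct < sensors.pts > results.dat

Here the absolute value of -lr (8) is greater than -ab (5), and -lw .0002 is below 1/4096, the inverse of -ad.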
To make sure that you are running on the GPU, look for this line in the output:
OptiX x.x.x found display driver xxx.xx, CUDA driver x.x.x, and x GPU device(s):
If that line appears, the calculation is running on the GPU.
Nathaniel