Automating light simulation with Blender, VI-Suite and Radiance at large scale


ttsesm

Sep 18, 2020, 8:50:34 AM
to VI-Suite
Hi Ryan,

Thanks for the nice work you have done with the suite. I am quite new to LiVi, Blender and Radiance, so bear with me and my questions.

I would need your feedback regarding a project that I have in my mind.

I have a bunch (a couple of hundred) of .obj models and their corresponding .mtl files from indoor environments. These do not include any light sources, which I will have to add manually. My goal is to run Radiance-based light simulations on them and extract the results. My idea is to create a script which automates the whole procedure: loading the files, inserting light sources (I guess for this I would need to extract the dimensions of the mesh or the ceiling position), creating measuring planes (or using the existing vertices/faces of the surfaces; I have seen in one of your tutorials that this is possible), running the light simulation, and storing the results either as falsecolor images (2D) or as values attached to the vertices/faces (3D).

My question now is whether you have done anything similar in the past and thus have some experience with it, or, if not, whether you believe it is feasible with Blender and VI-Suite.

Thank you for your time.

Best regards,
Theodore

Ryan Southall

Sep 18, 2020, 8:55:17 AM
to VI-Suite
Hi Theo.
Interesting question. I personally have not done something like this. Scripting Blender allows you to do almost anything, but it may not be easy, and this does not look easy.

The best way off the top of my head is to write a function that sits in the VI-Suite Radiance geometry export routine. If doing a parametric run, this function would convert the data in an OBJ file at every step to a mesh representation that can then be inserted into the Blender object which is the subject of the analysis. The object geometry then matches a different OBJ geometry at every step of the parametric simulation.

Placing the lights in a suitable spot for each model will be tricky: you would need a routine to delete/create lights and put them in the right place, and I am not sure you can create/delete lights over the steps of an animation/parametric analysis.

You would also need a routine to create the VI-Suite Radiance materials you want from the names in your .mtl files and match them to the faces of the model.

Geometry can indeed be used as a sensing surface, but it needs to be subdivided to give suitable sensor points. This would be easier if it was done to the OBJ models before importing, as a routine in the VI-Suite to do this would have to be rather clever.
Having said all that, if I had to do this I would probably do it in pure Python and repurpose sections of the VI-Suite code that I needed.
It won't be easy but good luck with it.
Ryan

ttsesm

Sep 22, 2020, 4:26:01 AM
to VI-Suite
Thanks, Ryan, for the prompt response and the detailed feedback.
Please see my comments inline below.


On Friday, September 18, 2020 at 2:55:17 PM UTC+2, Ryan Southall wrote:
Hi Theo.
Interesting question. I personally have not done something like this. Scripting Blender allows you to do almost anything but it may not be easy and this does not look easy.

That's why I wanted to start with Blender: there is quite a lot of information on the net which in principle could support me in this task. Moreover, having the 3D view and, in general, panels where I can check on the fly whether everything is going well should make debugging easier (initially I started working directly with Python and Radiance commands, but the learning curve is really steep, and not being able to easily visualize some of the steps makes it harder).
 
The best way off the top of my head is to write a function that sits in the VI-Suite Radiance geometry export routine. If doing a parametric run this function would convert the data in an OBj file at every step to a mesh representation that can then be inserted into a Blender object which is the subject of the analysis. The object geometry then matches a different OBJ geometry at every step of the parametric simulation.

What I had in mind is to load each model in its own scene, or all of them together in multiple scenes (which I would then loop through). Is this not done automatically once you load an .obj file in Blender? I mean, from what I have seen, each object is associated with its materials, which link to the corresponding faces. Then these objects could be used as the subject for analysis, as you mention. Maybe I did not understand this part well; could you elaborate a bit more?
 
Placing of the lights in a suitable spot for each model will be tricky and you would need a routine to delete/create lights and put them in the right place and I am not sure you can create delete lights over steps of an animation/parametric analysis.

Yup, this might be tricky. However, for now at least, I am not interested in finding the perfect position for the light sources (from what I am aware, this is still a subject of study even for commercial software in the field, e.g. Relux, Dialux), but rather in having at least one artificial light source which I can position randomly in the space once I am able to extract the mesh dimensions or the position of the ceiling (usually, from what I have noticed, the ceiling is loaded as an individual object, so I could apply a search-and-find function to do the job).
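To make the ceiling idea concrete, this is roughly what I mean (a plain-Python sketch with made-up names; in Blender I would feed it the object's vertex coordinates): take the highest plane of the mesh, compute its centroid, and drop a light just below it.

```python
def ceiling_height_and_centre(vertices, tol=1e-3):
    # vertices: iterable of (x, y, z) tuples from the mesh
    zmax = max(v[2] for v in vertices)
    # vertices lying on the ceiling plane, within a small tolerance
    top = [v for v in vertices if zmax - v[2] < tol]
    cx = sum(v[0] for v in top) / len(top)
    cy = sum(v[1] for v in top) / len(top)
    return zmax, (cx, cy)

# a unit-cube room: place the light slightly below the ceiling centre
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
zmax, (cx, cy) = ceiling_height_and_centre(verts)
light_pos = (cx, cy, zmax - 0.05)
```

The 0.05 offset is arbitrary; anything that keeps the source off the ceiling surface should do.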
 
You would also need a routine to create the VI-Suite Radiance material you want from the names in your mtl files and match them to the faces of the model.

From what I have seen in your tutorials, you just pass the model (i.e. the objects as faces/vertices plus materials) to the corresponding VI-Suite nodes, which you connect, and then you run your simulations. From what I have seen, it is possible to connect node inputs/outputs through scripting, so in theory I could do something similar with the VI-Suite nodes, right?
 
Geometry can indeed be used as a sensing surface but it needs to be subdivided to give suitable sensor points. This would be easier if it was done to the OBJ models before importing as a routine in the VI-Suite to do this would have to be rather clever.

Yes, this was also my intention. My only concern is that, from what I have noticed in your tutorials, to do this you had to change the material of the object to a sensing one. Does that mean that you replace the initial material, or do you add it as an extra layer on top? Because if you replace the material, doesn't that affect the simulation? Or are you doing something in the background to preserve the initial material properties, which, as far as I know, are essential for the lighting simulation?
 
Having said all that, if I had to do this I would probably do it in pure Python and repurpose sections of the VI-Suite code that I needed.
It won't be easy but good luck with it.

Thanks; yup, I know it is not going to be straightforward, but I am willing to give it a try. Hopefully I can get some support from you and maybe some other people in here.

Ryan Southall

Sep 22, 2020, 4:32:33 AM
to VI-Suite
Hi.
If you are happy to simulate them all at once, that could well be easier, although you may run into memory problems. You could then just write a script to import all the OBJs, offset the position of each one as you import it, and work out the rest from there.
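The offsetting step is simple enough; something like the following layout helper would do (a sketch, and the spacing value is arbitrary; it just needs to exceed the footprint of the largest model):

```python
import itertools
import math

def grid_offsets(n, spacing=20.0):
    # lay n models out on a square grid so they do not overlap in one scene
    side = math.ceil(math.sqrt(n))
    cells = itertools.product(range(side), repeat=2)
    return [(x * spacing, y * spacing, 0.0)
            for x, y in itertools.islice(cells, n)]

offsets = grid_offsets(4)
```

Each imported object then gets its location shifted by the corresponding offset.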
Cheers
Ryan

ttsesm

Oct 1, 2020, 1:21:28 PM
to VI-Suite
Hi Ryan,

Regarding the geometry subdivision, does it need to be in squares (as I have seen in your tutorials), or are small triangular faces fine as well?
Thanks.

Best,
Theo

Ryan Southall

Oct 1, 2020, 1:23:43 PM
to VI-Suite
No. They can be any shape.

ttsesm

Oct 2, 2020, 7:36:18 AM
to VI-Suite
Thanks Ryan.
I have opened an issue on the github repository (https://github.com/rgsouthall/vi-suite06/issues/6), maybe we can switch the conversation there if you do not mind.

Best,
Theo

Ryan Southall

Oct 9, 2020, 3:37:47 PM
to VI-Suite
Depends on the question. If it is about usage it should go here. If there is a bug to report it should go on github.

ttsesm

Oct 9, 2020, 4:43:18 PM
to vi-s...@googlegroups.com
Ok, I see; then I will continue the discussion here and close the non-bug-related issues I have opened on GitHub.

More or less I have managed to simulate a scene; now I just need to automate the whole procedure for the bunch of scenes that I have.

So some questions that I still have are the following:

  1. When I map the materials from Blender's material tab to VI-Suite, it is not clear what type to give them, since in most cases this is not noted anywhere (sometimes the name carries some additional information that I could use for this). I was wondering whether you are aware of any way that could help with that.
  2. At the moment, when I map the materials to the corresponding Radiance materials, I do the following: Base Color --> Material Reflectance, Specular --> Specularity, Roughness --> Roughness, and, when there is a texture, I activate the Textured checkbox and specify Subsurface Color --> Material Reflectance (as I understand it, this last one is not necessary though). Should this be sufficient?
  3. I am specifying my light source as a plane mesh (in principle I would like to change this to an IES light source, but possibly later on), and so far it seems to work fine. If you think this is not proper, please let me know.
  4. Considering the complexity of my scenes, I have noticed that simulations can take time, so have you thought about adding GPU acceleration based on Accelerad? Its installation files sit alongside the existing Radiance files, so it should be quite straightforward to include: in principle you just put the corresponding binaries, e.g. accelerad_rpict, accelerad_rtrace, etc., next to their CPU counterparts and add the functionality to call these binaries instead. I might be able to put some time into that as well; it seems like a nice addition.
  5. In my simulation results I have noticed that the output on the ceiling usually has much lower values, even compared to parts of the scene that do not have direct visibility to the light source. My question is whether this has to do with the Radiance parameters (i.e. how many light bounces are configured) or with something else.
  6. Do you have any idea how to export the numeric results as an extra .ply property, similarly to how you can include the color information? (This question might be more appropriate for the Blender forums.)
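For (6), to make clearer what I am after, this is the kind of output I mean, sketched as a minimal ASCII .ply writer with one extra per-vertex property (the names here are just illustrative):

```python
def write_ply_with_property(path, verts, values, prop='illuminance'):
    # ASCII PLY where each vertex carries one extra float property
    header = ['ply', 'format ascii 1.0',
              'element vertex {}'.format(len(verts)),
              'property float x', 'property float y', 'property float z',
              'property float {}'.format(prop),
              'end_header']
    rows = ['{} {} {} {}'.format(x, y, z, v)
            for (x, y, z), v in zip(verts, values)]
    with open(path, 'w') as f:
        f.write('\n'.join(header + rows) + '\n')
```

The color route would instead pack the values into vertex colors, but a named float property keeps the raw numbers.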
That's it for now, apologies for the hassle and thank you for your time.

Best,
Theo

ttsesm

Oct 14, 2020, 3:42:42 PM
to VI-Suite
Hi Ryan,

I have created the following node structure in the node editor for the radiance simulation:

# tree
ng = bpy.data.node_groups.new('rad_sim', 'ViN')

# nodes
location_node = ng.nodes.new(type="No_Loc")
location_node.location = (0, 0)

geometry_node = ng.nodes.new(type="No_Li_Geo")
geometry_node.location = (230, 140)
geometry_node.cpoint = '1'

context_node = ng.nodes.new(type="No_Li_Con")
context_node.skyprog = '4'
context_node.location = (210, -120)

simulation_node = ng.nodes.new(type="No_Li_Sim")
simulation_node.location = (410, 0)

export_node = ng.nodes.new(type="No_CSV")
export_node.location = (610, -60)

# links
ng.links.new(location_node.outputs[0], context_node.inputs[0])
ng.links.new(geometry_node.outputs[0], simulation_node.inputs[0])
ng.links.new(context_node.outputs[0], simulation_node.inputs[1])
ng.links.new(simulation_node.outputs[0], export_node.inputs[0])

but I cannot understand how to invoke the `Export` action of the geometry and context nodes. I would guess that something like `bpy.ops.geometry_node.ligexport('INVOKE_DEFAULT')` should do the job, but it didn't.

Thanks.

Best,
Theo

Ryan Southall

Oct 15, 2020, 8:34:52 AM
to VI-Suite
To run an operator from outside the context the operator normally sits in, you have to override the context.
The ligexport operator accesses the context twice, I think: context.node and context.scene, to identify the node and the scene respectively, and you need to override the node one. You can do this by creating a dictionary with 'node' as the key and the node as the item, e.g.
override = {'node': bpy.data.node_groups[node_group_name].nodes['node_name']}
The operator can then be run with bpy.ops.node.ligexport(override, 'INVOKE_DEFAULT').

ttsesm

Oct 15, 2020, 9:01:34 AM
to vi-s...@googlegroups.com
Yes, these operators access the context twice, as you said, via context.node and context.scene, and the node one is where I get the problem:

>>> bpy.ops.node.ligexport('INVOKE_DEFAULT')
Error: Traceback (most recent call last):
  File "/home/ttsesm/blender/blender-2.83.2-linux64/2.83/scripts/addons/vi-suite06/vi_operators.py", line 587, in invoke
    node = context.node
AttributeError: 'Context' object has no attribute 'node'

location: /home/ttsesm/blender/blender-2.83.2-linux64/2.83/scripts/modules/bpy/ops.py:199

Traceback (most recent call last):
  File "<blender_console>", line 1, in <module>
  File "/home/ttsesm/blender/blender-2.83.2-linux64/2.83/scripts/modules/bpy/ops.py", line 199, in __call__
    ret = op_call(self.idname_py(), C_dict, kw, C_exec, C_undo)
RuntimeError: Error: Traceback (most recent call last):
  File "/home/ttsesm/blender/blender-2.83.2-linux64/2.83/scripts/addons/vi-suite06/vi_operators.py", line 587, in invoke
    node = context.node
AttributeError: 'Context' object has no attribute 'node'

location: /home/ttsesm/blender/blender-2.83.2-linux64/2.83/scripts/modules/bpy/ops.py:199

I've applied your solution and it seems to work.
Thanks a lot.

ttsesm

Oct 16, 2020, 6:25:54 AM
to VI-Suite
Hi Ryan,

One more question. I am now trying to save the simulation results with the VI CSV Export node. If I use the bpy.ops.node.csvexport(override, 'INVOKE_DEFAULT') command, it triggers the context.window_manager.fileselect_add(self) file browser to select a path and filename. However, I would like to avoid this and directly save the results to a given path with a specific filename from the script.

Searching around, I noticed that I can skip the invoke() function and directly call execute() via bpy.ops.node.csvexport('EXEC_DEFAULT'), but this gives me the following error:

>>> bpy.ops.node.csvexport('EXEC_DEFAULT')
Error: Traceback (most recent call last):
  File "/home/ttsesm/blender-2.83.6-linux64/2.83/scripts/addons/vi-suite06/vi_operators.py", line 2200, in execute
    node = self.node
  File "/home/ttsesm/blender-2.83.6-linux64/2.83/scripts/modules/bpy_types.py", line 713, in __getattribute__
    return super().__getattribute__(attr)
AttributeError: 'NODE_OT_CSV' object has no attribute 'node'

location: /home/ttsesm/blender-2.83.6-linux64/2.83/scripts/modules/bpy/ops.py:199

Traceback (most recent call last):
  File "<blender_console>", line 1, in <module>
  File "/home/ttsesm/blender-2.83.6-linux64/2.83/scripts/modules/bpy/ops.py", line 199, in __call__
    ret = op_call(self.idname_py(), C_dict, kw, C_exec, C_undo)
RuntimeError: Error: Traceback (most recent call last):
  File "/home/ttsesm/blender-2.83.6-linux64/2.83/scripts/addons/vi-suite06/vi_operators.py", line 2200, in execute
    node = self.node
  File "/home/ttsesm/blender-2.83.6-linux64/2.83/scripts/modules/bpy_types.py", line 713, in __getattribute__
    return super().__getattribute__(attr)
AttributeError: 'NODE_OT_CSV' object has no attribute 'node'

I have already tried the trick with the override dictionary (I could be doing it wrongly, though) and I couldn't make it work. Thus, I would appreciate it if you could give me a hand with saving the CSV file and, if possible, with providing a custom path and filename for where it should be saved.

Thanks.

Ryan Southall

Oct 16, 2020, 10:16:30 AM
to VI-Suite
I've no idea; I've never run a file export operator from a script. The Blender community may be of some help. It would be pretty easy to write some code that takes the contents of the reslists dictionary in the simulation node and parses it to a file of your choosing.
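For example, assuming reslists rows of the form [frame, category, object, metric, 'v1 v2 ...'] (the structure the CSV export code below works with), a bare-bones version could look like this:

```python
import csv

def reslists_to_csv(reslists, path):
    # one CSV column per result row: the header is 'frame object metric',
    # the values come from the space-separated string in position 4
    headers = ['{} {} {}'.format(r[0], r[2], r[3]) for r in reslists]
    columns = [r[4].split() for r in reslists]
    with open(path, 'w', newline='') as f:
        w = csv.writer(f)
        w.writerow(headers)
        for row in zip(*columns):
            w.writerow(row)
```

This sidesteps the operator and its context entirely; you just read resnode['reslists'] and call the function.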

ttsesm

Oct 16, 2020, 11:35:06 AM
to VI-Suite
I see; well, the point is that I do not want to reinvent the wheel, since you already have the functionality. I do not need to invoke the file export operator; I can directly execute the saving functionality with bpy.ops.node.csvexport('EXEC_DEFAULT'), so there is no need to go through the file browser.

What is node = self.node supposed to correspond to? It is the export node itself, isn't it? Because in principle I need to override this and the self.filepath variable. I tried to do that the way you showed me before, with override = {'node': bpy.data.node_groups['node_group_name'].nodes['VI CSV Export'], 'self.filepath': 'path_and_filename_to_save_the_results.csv'}, and then executed bpy.ops.node.csvexport(override, 'EXEC_DEFAULT'), but it fails.

In any case, you are indeed right that I can access the simulation node's reslists dictionary directly. Moreover, the code I need is in principle already there in the execute() function:

rl = resnode['reslists']
zrl = list(zip(*rl))

if len(set(zrl[0])) > 1 and node.animated:
    resstring = ''.join(['{} {},'.format(r[2], r[3]) for r in rl if r[0] == 'All']) + '\n'
    metriclist = list(zip(*[r.split() for ri, r in enumerate(zrl[4]) if zrl[0][ri] == 'All']))
else:
    resstring = ''.join(['{} {} {},'.format(r[0], r[2], r[3]) for r in rl if r[0] != 'All']) + '\n'
    metriclist = list(itertools.zip_longest(*[r.split() for ri, r in enumerate(zrl[4]) if zrl[0][ri] != 'All'], fillvalue = ''))

for ml in metriclist:
    resstring += ''.join(['{},'.format(m) for m in ml]) + '\n'
    resstring += '\n'

with open(self.filepath, 'w') as csvfile:
    csvfile.write(resstring)

where resnode is my simulation node; I just need to modify the filepath and possibly save the output row-wise instead of column-wise, as we have discussed here.

But I am still curious why I cannot override the parameters and call execute() directly.

ttsesm

Oct 19, 2020, 5:50:36 AM
to VI-Suite
Hi Ryan,

I was discussing this last issue with the Blender community, namely whether it is possible to bypass the invoke() function, and the suggestion I got was that a simple fix would be to replace node = self.node with node = context.node; then most likely I would be able to override it with
bpy.ops.node.csvexport(override, 'EXEC_DEFAULT', filename='<path_to_save_file>.csv'). I do not know whether you are willing to apply the change, but if not I will go with the alternative you suggested, creating my own function, or possibly fork and apply the change in my fork.

Thanks.

Best,
Theo

ttsesm

Oct 21, 2020, 6:15:47 AM
to vi-s...@googlegroups.com
Hi Ryan,

Some further questions that I would like to clarify.

1. What does the intensity value 0-100 correspond to for a light source?

2. Is it possible somehow to identify which face coordinates from the "reslists" correspond to a light source? I know which object is related to a light source, but the faces that are assigned as a light source depend on the material each time. Consider also that I am assigning my whole scene/objects as a "Light sensor", since I want to measure the illuminance all around. Now I want to label the extracted coordinates in the CSV file that correspond to a face whose "LiVi Radiance type" is a light, or whose Blender material is assigned an "Emission" shader.

Initially my idea was to extract the centre-point coordinates of the faces whose material relates to the "Emission" shader and then use them as queries to find them in the metriclist and label the corresponding lines, but I have noticed that the coordinates you extract seem to be slightly different (see my third question).

3. Why are the objects' face centre-point coordinates different from the extracted coordinates in the CSV file? Are you, under the hood, creating a new layer that is slightly shifted from the original surface of the object, or something similar?

Thanks.

Best,
Theo

ttsesm

Oct 21, 2020, 8:24:22 AM
to VI-Suite
Ok, for (3) I found that you are using bmesh.calc_center_bounds() instead of obj.data.polygons.center. With that, the points match, and my query approach for (2) should work. In any case, if you have an easier alternative, feel free to let me know.
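For reference, the matching itself is just a tolerance lookup over the coordinates; something like this (a hypothetical helper, not VI-Suite code):

```python
def match_points(query_pts, ref_pts, tol=1e-4):
    # for each query point, the index of the first reference point
    # within tol (Euclidean distance), or None if there is no match
    def close(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) <= tol * tol
    return [next((i for i, r in enumerate(ref_pts) if close(q, r)), None)
            for q in query_pts]
```

With the face centres computed the same way as the export, the tolerance mostly absorbs float round-off.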

ttsesm

Dec 3, 2020, 6:48:13 AM
to VI-Suite
Thinking about it a bit more, I guess you might expect that someone has already done the calculation in advance, based on the lumens and the area of the light source, and then sets the result as the light source radiance. But then the question is why it is capped at 100, since, as I see it, it can have a value of more than 100.

Thanks.

Best,
Theo

ttsesm

Dec 3, 2020, 6:48:13 AM
to VI-Suite
Hi Ryan,

Regarding the question of how the light intensity range 0-100 corresponds to the actual light source intensity, I still cannot find any relevant information. Looking at the Radiance manual, on page 43 they describe how to specify the radiance value based on the lumens and the area of the light source, but I still do not see the relation to your 0-100 intensity range. I would appreciate it if you could elaborate a bit more on that.
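For reference, my reading of that manual section: for a diffusely emitting surface, the Radiance 'light' value is lumens / (179 · π · area), with 179 lm/W being the luminous efficacy factor Radiance uses. A quick helper to check the numbers (my own sketch, not VI-Suite code):

```python
import math

def light_radiance(lumens, area_m2, efficacy=179.0):
    # radiance [W/(sr*m^2)] of a diffuse emitter with the given luminous flux
    return lumens / (efficacy * math.pi * area_m2)

val = light_radiance(1200, 0.36)  # e.g. a 1200 lm, 0.6 m x 0.6 m panel
```

By this formula, values well above 100 are easy to reach with small, bright sources, hence my question about the cap.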

Thanks.

Best,
Theo

Ryan Southall

Dec 3, 2020, 12:49:08 PM
to VI-Suite
The 100 limit is probably just an arbitrary limit I coded in. I can make it higher if realistic artificial light sources have a value greater than that.

ttsesm

Dec 4, 2020, 6:32:20 AM
to VI-Suite
Hi Ryan,

I see; thanks for the clarification. It is not really necessary to increase the value; I do not think I will need it, but I just wanted to understand how it works.

One more thing I am trying to figure out: how does the triangulate option in the LiVi Geometry node work? According to the manual it will "triangulate the mesh before Radiance export", which might be helpful for some complex geometry. However, using it on a simple scene, I do not see any visible difference, either in the exported results or in the mesh geometry. So what does it really do, if it is not a hassle to explain?

Thanks.

Ryan Southall

Dec 4, 2020, 7:02:24 AM
to VI-Suite
You won't see the effect, as it does not apply to sensor surfaces, only to the internal Radiance mesh description, which you can view with the "Text edit" node. Even if it did apply to sensor surfaces, I don't think you would see it anyway, as your meshes are already triangulated, I believe.
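Schematically, that kind of pre-export triangulation is just a fan split of each n-gon face (this is an illustration of the idea, not the actual VI-Suite code):

```python
def fan_triangulate(face):
    # split one n-gon (a list of vertex indices) into triangles around vertex 0
    return [(face[0], face[i], face[i + 1]) for i in range(1, len(face) - 1)]

tris = fan_triangulate([0, 1, 2, 3])  # a quad becomes two triangles
```

On an already-triangulated mesh it is a no-op, which is why nothing changes in that case.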

ttsesm

Dec 4, 2020, 11:03:56 AM
to VI-Suite
Thanks.

Yanxiang Wang

Aug 15, 2021, 5:31:00 AM
to VI-Suite

Did you find out how to call the operator from outside, like the 'Calculate' button in the LiVi Simulation node?

ttsesm

Aug 15, 2021, 6:27:01 AM
to VI-Suite
Hi,

The way I did it was to create the node structure programmatically; from there you can modify the different node settings. For example, I initialize the node tree with the following function:


def create_vi_suite_node_structure():
    # this might be redundant
    bpy.context.scene.use_nodes = True

    # tree
    ng = bpy.data.node_groups.new('rad_sim', 'ViN')

    # nodes
    location_node = ng.nodes.new(type="No_Loc")
    location_node.location = (0, 0)

    geometry_node = ng.nodes.new(type="No_Li_Geo")
    geometry_node.location = (230, 140)
    geometry_node.cpoint = '0'  # set to 1 for vertices
    geometry_node.offset = 0.001

    context_node = ng.nodes.new(type="No_Li_Con")
    # ng.nodes["LiVi Context"].skyprog = '4'
    context_node.skyprog = '4'
    context_node.location = (210, -120)

    simulation_node = ng.nodes.new(type="No_Li_Sim")
    simulation_node.location = (410, 70)

    export_node = ng.nodes.new(type="No_CSV")
    export_node.location = (610, 0)

    # links
    ng.links.new(location_node.outputs[0], context_node.inputs[0])
    ng.links.new(geometry_node.outputs[0], simulation_node.inputs[0])
    ng.links.new(context_node.outputs[0], simulation_node.inputs[1])
    ng.links.new(simulation_node.outputs[0], export_node.inputs[0])

    return ng


# Save the project; otherwise the VI-Suite add-on does not work
save_blend_file(filepath, filename)

# Create the node network
node_tree = create_vi_suite_node_structure()

# Override and export the corresponding nodes in order to run the simulation
override = {'node': bpy.data.node_groups[node_tree.name].nodes['LiVi Context']}
bpy.ops.node.liexport(override, 'INVOKE_DEFAULT')

override = {'node': bpy.data.node_groups[node_tree.name].nodes['LiVi Geometry']}
bpy.ops.node.ligexport(override, 'INVOKE_DEFAULT')

override = {'node': bpy.data.node_groups[node_tree.name].nodes['LiVi Simulation']}
bpy.ops.node.livicalc(override, 'INVOKE_DEFAULT')

# override = {'node': bpy.data.node_groups[node_tree.name].nodes['VI CSV Export'], 'filename': "tessstttt"}
# bpy.ops.node.csvexport(override, 'INVOKE_DEFAULT')

I hope it helps.