In my case, we are consuming 3D geometries rather than generating them. The context is geospatial / Earth-observation related.
I have a workflow that grids LiDAR-sampled data over massive regions of forested, rural, and urban areas into tiles. Each tile is then sent directly as a point cloud to a dockerized Python ML application that predicts classes.
An alternate branch of the workflow converts the tiled point clouds into 2D images using a C++-based tool wrapped in Docker; those images are then sent to a different ML application that performs similar predictions in combination with digital elevation model (DEM) data.
Predicted classes are then regrouped into JSON FeatureCollection entries used in map services for tasks like urban planning and inventory management.
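As a rough illustration of that regrouping step (names and structure are hypothetical, not my actual code), the per-tile predictions can be folded into a single GeoJSON-style FeatureCollection with plain Python:

```python
# Minimal sketch (hypothetical field names) of regrouping per-tile class
# predictions into a GeoJSON FeatureCollection for downstream map services.
import json

def regroup_predictions(tiles):
    """tiles: list of dicts, each with a polygon 'ring' and a predicted 'class'."""
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Polygon", "coordinates": [tile["ring"]]},
            "properties": {"predicted_class": tile["class"]},
        }
        for tile in tiles
    ]
    return {"type": "FeatureCollection", "features": features}

tiles = [
    {"ring": [[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], "class": "building"},
    {"ring": [[1, 0], [2, 0], [2, 1], [1, 1], [1, 0]], "class": "forest"},
]
print(json.dumps(regroup_predictions(tiles), indent=2))
```

The result is directly serializable, so each regrouped region can be handed to a map service as-is.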
CWL is used to dispatch both branches: it encapsulates each sub-step application with its own dependencies, and it takes advantage of parallelization across the tiles and chaining of the steps, since the full region is too big (memory- and time-wise) to process in a single shot.
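A minimal sketch of what that fan-out can look like in CWL (the tool files and step names here are hypothetical placeholders, not my actual workflow): `ScatterFeatureRequirement` plus `scatter` runs the classifier once per tile in parallel, and the chaining is expressed through the input/output links between steps.

```yaml
cwlVersion: v1.2
class: Workflow
requirements:
  ScatterFeatureRequirement: {}

inputs:
  lidar_region: File

outputs:
  classified_tiles:
    type: File[]
    outputSource: classify/predictions

steps:
  grid:
    run: grid_tiles.cwl        # hypothetical tool: grids the region into tiles
    in:
      region: lidar_region
    out: [tiles]

  classify:
    run: classify_tile.cwl     # hypothetical dockerized Python ML tool
    scatter: tile              # run once per tile, in parallel
    in:
      tile: grid/tiles
    out: [predictions]
```

Each `run:` target would carry its own `DockerRequirement`, which is what gives every sub-step its isolated set of dependencies.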