Thanks for chiming in! I am really no expert, but just to continue the discussion:
M: For both ShapeNet and ModelNet, point clouds are derived by sampling points from mesh surfaces, which doesn't seem very realistic to me.
C: From my understanding, sampling from 2D manifolds (i.e., mesh surfaces) is a popular approach today because it scales well. For example,
this recent CVPR'20 paper discusses this:
> "In computer vision and graphics, early attempts at applying deep learning to 3D shapes were based on dense voxel representations or multiple planar views. These methods suffer from three main drawbacks, stemming from their extrinsic nature: high computational cost of 3D convolutional filters, lack of invariance to rigid motions or non-rigid deformations, and loss of detail due to rasterisation. A more efficient way of representing 3D shapes is modeling them as surfaces (two-dimensional manifolds). In computer graphics and geometry processing, a popular type of efficient and accurate discretisation of surfaces are meshes or simplicial complexes, which can be considered as graphs with additional structure (faces)."
M: * GNNs for point clouds typically cannot be applied to other domains
* It's hard to standardize the graph representation, and it's usually much better to just provide the raw points and let users decide which graph representation works best for their use case
C: Indeed, SotA approaches for point clouds seem to treat the input as a set of points and use the k-nearest neighbors of each point to recompute the graph dynamically at each GNN layer. However, I feel the layer update equations are usually general enough to be applied to fixed graphs from other domains, too. I am thinking of
this or
this one.
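To illustrate what I mean by "dynamically compute the graph": each layer rebuilds a kNN graph in the *current* feature space and then message-passes over it, and that second step is just generic graph convolution. A rough NumPy sketch in the spirit of DGCNN's EdgeConv (names and shapes are my own, not any library's API):

```python
import numpy as np

def knn_graph(feats, k):
    """Edge index (2, N*k) of the kNN graph in feature space."""
    # Pairwise squared distances between all N points.
    d2 = np.sum((feats[:, None, :] - feats[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)                # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]        # (N, k) neighbor indices
    src = np.repeat(np.arange(len(feats)), k)
    return np.stack([src, nbrs.reshape(-1)])    # (2, N*k): [center, neighbor]

def edgeconv_layer(feats, k, weight):
    """One max-aggregation message-passing step on a freshly built kNN graph."""
    src, dst = knn_graph(feats, k)              # graph recomputed every layer
    # EdgeConv-style messages from absolute + relative features.
    msg = np.concatenate([feats[src], feats[dst] - feats[src]], axis=1) @ weight
    out = np.full((len(feats), weight.shape[1]), -np.inf)
    np.maximum.at(out, src, msg)                # max over each node's neighbors
    return out

rng = np.random.default_rng(0)
x = rng.random((8, 3))                          # 8 points in 3D
y = edgeconv_layer(x, k=3, weight=rng.random((6, 4)))
```

Note that only `knn_graph` is point-cloud-specific: if you handed `edgeconv_layer` a fixed `(src, dst)` edge index from, say, a citation network, the update itself would work unchanged, which is the generality I was gesturing at above.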