I'm replying to your old findings here. Yes, I believe the depth-map view of one camera only shows the data points from that single camera. (I can't reproduce the red parts of your image, only the blue ones.) So I think the image above is to be expected.
(For the sake of sharing and documentation)
For each image selected in the Images pane, in the lower-right corner of the Image Viewer, click View Depth Map in 3D (the arrow-over-a-box icon).
Each time this is performed, a layer is added to the 3D viewer. You can toggle the layers on and off.
My understanding is that the subsequent nodes in the workflow use the overlap between the points from these depth maps to blend or fuse them into one point set.
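Conceptually, each depth map contributes points by back-projecting every pixel through the camera's intrinsics and pose from the SfM data; where cameras overlap, the back-projected points land near each other and can be fused. Here's a minimal numpy sketch of the back-projection idea (my own illustration, not AliceVision's actual code; the function name and conventions are hypothetical, and it assumes depth measured along the optical axis with world-to-camera extrinsics R, t):

```python
import numpy as np

def backproject(depth, K, R, t):
    """Back-project a depth map (H, W) into world-space 3D points.

    depth: per-pixel depth along the optical axis, shape (H, W)
    K:     3x3 camera intrinsics
    R, t:  world-to-camera rotation (3x3) and translation (3,)
    Returns an (H, W, 3) array of world-space points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Homogeneous pixel coordinates, one column per pixel: 3 x N
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix              # camera-space rays (z = 1)
    cam_pts = rays * depth.reshape(1, -1)      # scale each ray by its depth
    # Invert the world-to-camera transform: X_world = R^T (X_cam - t)
    world_pts = R.T @ (cam_pts - t.reshape(3, 1))
    return world_pts.T.reshape(h, w, 3)
```

Running this for every selected view with that view's pose would give you exactly the layered point clouds the 3D viewer shows; the fusion step then has to reconcile the overlapping regions.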
To me, the SfM data points look good, and the depth-map points look good. What's left is the depth-map filtering and meshing. There are several options that can be fiddled with on these nodes, but I'd like to find some documentation. I'm trying to follow the source code...
The DepthMapFilter node stores its results in a folder shown in the node properties. That folder is full of files I don't know how to interpret.
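In my install those files are EXR images, and the names look like they pair a depth map with a similarity (confidence) map per view, something like `<viewId>_depthMap.exr` and `<viewId>_simMap.exr`. If that naming holds for your version too, a quick stdlib snippet can group them per view so you can see what's there (the naming convention is my reading of the folder contents, not something I've confirmed in the docs):

```python
import os
from collections import defaultdict

def group_outputs(folder):
    """Group DepthMapFilter output files by view id, assuming the
    <viewId>_depthMap.exr / <viewId>_simMap.exr naming convention."""
    views = defaultdict(dict)
    for name in os.listdir(folder):
        for kind in ("depthMap", "simMap"):
            suffix = f"_{kind}.exr"
            if name.endswith(suffix):
                view_id = name[: -len(suffix)]
                views[view_id][kind] = os.path.join(folder, name)
    return dict(views)
```

Opening one of the depth EXRs in an image tool that understands 32-bit float channels is also a decent sanity check on what the filter kept or threw away.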
The next node, Meshing, uses those files along with the SfM data.
Right now my gut says that the meshing algorithm is having trouble, possibly when it assigns weights, and that it is smoothing everything out into a brick.