Hello, I'm posting to get some general guidance about the best way to use OpenVDB for my application as well as some specific advice about a particular problem. I've spent the last few weeks getting familiar with OpenVDB, and I'm very excited about the software. Thanks for contributing this!
As background: I'm trying to accumulate a time-history of LIDAR point cloud data into a single cloud for a SLAM application. This requires a couple of core operations:
1) The ability to convert points to OpenVDB format. Easy enough via `createPointDataGrid`
2) The ability to store additional information other than point location. Also easy via additional Point `AttributeArray`s.
3) The ability to combine overlapping clouds. I thought I had this working via `tree->merge`, but it was only appending the new points. This is my "specific advice" question.
4) The ability to update individual cloud frames based on SLAM corrections. This is less easy, but I think I got something working by assigning a unique `PointGroup` to each frame and basically deleting / re-adding that cloud when it updates. I tried `movePoints`, but that seemed quite a bit less performant. The question of performance is maybe invalidated by the fact that I was doing #3 wrong.
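For reference, items 1 and 2 currently look roughly like this for me (condensed from the OpenVDB points cookbook; the `makeCloud` name and the `intensity` attribute are just placeholders for my actual code):

```cpp
#include <vector>

#include <openvdb/openvdb.h>
#include <openvdb/tools/PointIndexGrid.h>
#include <openvdb/points/PointConversion.h>
#include <openvdb/points/PointAttribute.h>

// build a PointDataGrid carrying positions plus one extra float attribute
openvdb::points::PointDataGrid::Ptr makeCloud(
    const std::vector<openvdb::Vec3R>& positions,
    const std::vector<float>& intensities)
{
    using namespace openvdb;

    points::PointAttributeVector<Vec3R> wrapper(positions);

    // pick a voxel size targeting ~8 points per voxel
    const float voxelSize = points::computeVoxelSize(wrapper, 8);
    auto transform = math::Transform::createLinearTransform(voxelSize);

    // the index grid is reused below so extra attributes land in the same order
    auto indexGrid =
        tools::createPointIndexGrid<tools::PointIndexGrid>(wrapper, *transform);
    auto grid = points::createPointDataGrid<points::NullCodec, points::PointDataGrid>(
        *indexGrid, wrapper, *transform);

    // item 2: append and populate an additional per-point attribute
    points::appendAttribute<float>(grid->tree(), "intensity");
    points::PointAttributeVector<float> intensityWrapper(intensities);
    points::populateAttribute<points::PointDataTree, tools::PointIndexTree,
                              points::PointAttributeVector<float>>(
        grid->tree(), indexGrid->tree(), "intensity", intensityWrapper);

    return grid;
}
```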
My main blocking issue is #3 - I can't seem to figure out how to actually merge trees and have the leafNodes of each tree concatenate their subset of points via native OpenVDB methods. I started to hack together a method of my own (see below). I figured I could just iterate through the leafNodes of the new cloud, find corresponding leaves of the main cloud, and literally append the former's points to the latter.
So, finally, my specific questions are as follows:
- Is there an existing method in `openvdb` that I should be using which truly combines overlapping PointDataGrids? I'm using OpenVDB 9.0.0, but I'm not wedded to that version.
- If not, what's the best way to go about this? I'm assuming my brute-force method isn't optimal - it's not terribly fast, and I have concerns about multithreaded access.
- Any suggestions about my general approach are welcome!
Thanks for your time and for this impressive software.
Sincerely,
Daniel Sahu
```
// interface for TBB based multithreaded merging
struct MergeOp
{
    // convenience types
    using Index = openvdb::Index;
    using LeafRangeT = typename openvdb::tree::LeafManager<openvdb::points::PointDataTree>::LeafRange;
    using LeafNodeType = typename openvdb::points::PointDataTree::LeafNodeType;
    using ValueType = typename LeafNodeType::ValueType;
    using AttributeSet = openvdb::points::AttributeSet;
    using AttributeArray = openvdb::points::AttributeArray;
    using PointDataGrid = openvdb::points::PointDataGrid;

    MergeOp(PointDataGrid::Ptr original, const AttributeArray::ScopedRegistryLock* lock)
        : original_(original), mLock(lock) {}

    void operator()(const LeafRangeT& range) const
    {
        // iterate through the incoming tree's leaf nodes
        for (auto incomingLeafIter = range.begin(); incomingLeafIter != range.end(); ++incomingLeafIter)
        {
            // skip empty leaves (shouldn't happen?)
            if (!incomingLeafIter || incomingLeafIter->isEmpty())
                continue;

            // check whether a corresponding leaf node exists in our original tree
            const auto origin {incomingLeafIter->origin()};
            if (auto originalLeafPtr = original_->tree().probeLeaf(origin); originalLeafPtr == nullptr)
            {
                ROS_DEBUG("corresponding leaf doesn't exist - adding...");

                // make a new leaf as a deep copy of the incoming leaf and add it
                // @TODO addLeaf() modifies the shared tree; calling it from multiple
                // threads is one of my multithreaded-access concerns
                auto newLeaf = new LeafNodeType(*incomingLeafIter);
                original_->tree().addLeaf(newLeaf);

                // mark this leaf's transient data to indicate it's updated
                if (auto leaf = original_->tree().probeLeaf(origin); leaf)
                    leaf->setTransientData(leaf_updated::LEAF_CHANGED);
                else
                    ROS_ERROR("Failed to add leaf to tree!");

                // sanity check that this actually got added
                assert(original_->tree().probeConstLeaf(origin) != nullptr);
                assert(!original_->tree().probeConstLeaf(origin)->isEmpty());

                // we don't check the following because we're copying the leaf, not moving it.
                // @TODO figure out how to move it instead
                // assert(incoming->tree().probeConstLeaf(origin) == nullptr);
            }
            else
            {
                ROS_DEBUG_STREAM("Adding " << incomingLeafIter->pointCount() << " new points ("
                                 << originalLeafPtr->pointCount() << " exist now)");

                // we expect empty nodes not to exist - otherwise their attributes might not be initialized properly.
                // @TODO if this fails it might be because all the points in this leaf were deleted, which is supported!
                //       it's still fine as long as the descriptors match.
                assert(!originalLeafPtr->isEmpty());

                // assert that the descriptors (essentially the data type associated with each point) are identical
                assert(originalLeafPtr->attributeSet().descriptor() == incomingLeafIter->attributeSet().descriptor());

                // note: much of this is adapted from the PointDelete operation in OpenVDB:
                // https://github.com/AcademySoftwareFoundation/openvdb/blob/master/openvdb/openvdb/points/PointDelete.h

                // get the sizes of both leaves, as well as the desired final leaf size
                const size_t originalSize = originalLeafPtr->getLastValue();
                const size_t incomingSize = incomingLeafIter->getLastValue();
                const size_t newSize = originalSize + incomingSize;
                assert(newSize > originalSize);

                // create a new (longer!) attribute set
                const AttributeSet& originalAttributeSet = originalLeafPtr->attributeSet();
                const AttributeSet& incomingAttributeSet = incomingLeafIter->attributeSet();
                AttributeSet* newAttributeSet = new AttributeSet(originalAttributeSet, static_cast<Index>(newSize), mLock);
                const size_t attributeSetSize = originalAttributeSet.size();

                // a redundant assert since we're checking the descriptors above, but a little paranoia never hurt anyone
                assert(attributeSetSize == incomingAttributeSet.size());

                // cache the attribute arrays for efficiency
                std::vector<AttributeArray*> newAttributeArrays;
                std::vector<const AttributeArray*> originalAttributeArrays, incomingAttributeArrays;
                for (size_t i = 0; i < attributeSetSize; i++)
                {
                    AttributeArray* newArray = newAttributeSet->get(i);
                    const AttributeArray* originalArray = originalAttributeSet.getConst(i);
                    const AttributeArray* incomingArray = incomingAttributeSet.getConst(i);

                    // replicate existing sanity checks in openvdb (probably not applicable, so only asserting)
                    assert(newArray->hasConstantStride() && originalArray->hasConstantStride() && incomingArray->hasConstantStride());
                    assert(incomingArray->stride() == originalArray->stride());

                    newAttributeArrays.push_back(newArray);
                    originalAttributeArrays.push_back(originalArray);
                    incomingAttributeArrays.push_back(incomingArray);
                }

                // initialize offsets (essentially the set of indices marking the end of each voxel within the leaf, I think)
                Index attributeIndex = 0;
                std::vector<ValueType> endOffsets;
                endOffsets.reserve(LeafNodeType::NUM_VALUES);

                // create mappings from each index in the concatenated array back to the original/incoming indices
                std::vector<std::pair<Index, Index>> originalIndexMapping, incomingIndexMapping;
                originalIndexMapping.reserve(originalSize);
                incomingIndexMapping.reserve(incomingSize);

                // now construct new attribute arrays which concatenate the data from each leaf
                for (auto originalVoxel = originalLeafPtr->cbeginValueAll(), incomingVoxel = incomingLeafIter->cbeginValueAll();
                     originalVoxel && incomingVoxel; ++originalVoxel, ++incomingVoxel)
                {
                    // @TODO perform this operation in-place on the original attribute arrays to avoid duplicate copying.

                    // find where each index in the original arrays will end up in the new array (should be a 1-to-1 mapping...)
                    for (auto iter = originalLeafPtr->beginIndexVoxel(originalVoxel.getCoord()); iter; ++iter)
                        originalIndexMapping.emplace_back(*iter, attributeIndex++);

                    // find where each index in the incoming arrays will end up in the new array (should be a fixed offset per voxel)
                    for (auto iter = incomingLeafIter->beginIndexVoxel(incomingVoxel.getCoord()); iter; ++iter)
                        incomingIndexMapping.emplace_back(*iter, attributeIndex++);

                    // mark the offset corresponding to the end of this voxel
                    endOffsets.push_back(static_cast<ValueType>(attributeIndex));
                }

                // copy data from each attribute array into the new concatenated version
                // (VectorWrapper matches the helper type used internally by PointDelete.h)
                for (size_t i = 0; i != attributeSetSize; i++)
                {
                    VectorWrapper originalIndexMappingWrapper(originalIndexMapping), incomingIndexMappingWrapper(incomingIndexMapping);
                    newAttributeArrays[i]->copyValues(*(originalAttributeArrays[i]), originalIndexMappingWrapper);
                    newAttributeArrays[i]->copyValues(*(incomingAttributeArrays[i]), incomingIndexMappingWrapper);
                }

                // actually perform the replacement of the existing data
                originalLeafPtr->replaceAttributeSet(newAttributeSet);
                originalLeafPtr->setOffsets(endOffsets);

                // make sure we didn't screw things up too badly
                originalLeafPtr->validateOffsets();

                // mark this leaf's transient data to indicate it's updated
                originalLeafPtr->setTransientData(leaf_updated::LEAF_CHANGED);
            }
        }
    }

private:
    PointDataGrid::Ptr original_; // shared pointer to the grid we're modifying
    const AttributeArray::ScopedRegistryLock* mLock;
}; // struct MergeOp

// merge points from grid `incoming` into grid `original`
void mergePoints(openvdb::points::PointDataGrid::Ptr& original, openvdb::points::PointDataGrid::Ptr& incoming)
{
    // alias commonly used types
    using ScopedRegistryLock = openvdb::points::AttributeArray::ScopedRegistryLock;
    using LeafManagerT = openvdb::tree::LeafManager<openvdb::points::PointDataTree>;

    // clear all accessors to prevent bad threading issues
    original->tree().clearAllAccessors();
    incoming->tree().clearAllAccessors();

    { // acquire the registry lock to avoid locking when appending attributes in parallel
        ScopedRegistryLock lock;
        LeafManagerT leafManager(incoming->tree());
        MergeOp mergeOp(original, &lock);
        tbb::parallel_for(leafManager.leafRange(), mergeOp);
    }

    // clear the other tree so nobody tries to use it semi-cannibalized
    incoming->clear();
}
```
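To make the bookkeeping above easier to follow (and to sanity-check it), here's the same per-voxel concatenation scheme reduced to a self-contained toy in plain C++ - `ToyLeaf` is a made-up stand-in for a leaf's end-offsets plus one flat attribute array, not an OpenVDB type:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Toy stand-in for one leaf: per-voxel end offsets into a flat value array.
struct ToyLeaf {
    std::vector<int> endOffsets;   // endOffsets[v] = one past the last point index of voxel v
    std::vector<float> values;     // flat attribute data, values.size() == endOffsets.back()
};

// Concatenate two leaves voxel by voxel via (srcIndex, dstIndex) mappings,
// mirroring how the MergeOp above fills originalIndexMapping/incomingIndexMapping.
ToyLeaf concatenate(const ToyLeaf& original, const ToyLeaf& incoming)
{
    assert(original.endOffsets.size() == incoming.endOffsets.size());

    std::vector<std::pair<int, int>> origMap, incMap;  // (srcIndex, dstIndex)
    std::vector<int> newOffsets;
    int dst = 0, origBegin = 0, incBegin = 0;
    for (std::size_t v = 0; v < original.endOffsets.size(); ++v) {
        // original's points for this voxel come first in the merged array...
        for (int i = origBegin; i < original.endOffsets[v]; ++i) origMap.emplace_back(i, dst++);
        // ...followed by incoming's points for the same voxel
        for (int i = incBegin; i < incoming.endOffsets[v]; ++i) incMap.emplace_back(i, dst++);
        origBegin = original.endOffsets[v];
        incBegin = incoming.endOffsets[v];
        newOffsets.push_back(dst);  // end offset of this voxel in the merged array
    }

    ToyLeaf merged;
    merged.endOffsets = std::move(newOffsets);
    merged.values.resize(dst);
    // scatter both source arrays into their merged positions
    for (auto [src, d] : origMap) merged.values[d] = original.values[src];
    for (auto [src, d] : incMap)  merged.values[d] = incoming.values[src];
    return merged;
}
```

For example, merging a leaf with voxel counts {2, 1} and one with {1, 2} yields end offsets {3, 6}, with each voxel holding the original's values followed by the incoming's.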