I recently started working with the OpenVDB library for a SLAM application. Basically, I am trying to follow the pipeline from the KinectFusion paper and implement the TSDF fusion step with OpenVDB. The idea is that for each incoming scan (point cloud), I shoot a ray from the camera center to each point in the new scan, find the voxels the ray intersects, and update each of those voxels from the current reading and its previous weight. So the code goes something like this:
for (size_t i = 0; i < point_cloud_global_frame.size(); i++)
{
    auto point = point_cloud_global_frame[i];
    Point3D point_index = tsdf_grid->worldToIndex(point);
    Point3D vec_point_cam = point_index - cam_index;
    double point_cam_distance_world = tsdf_grid->indexToWorld(vec_point_cam).length();
    if ((point_cam_distance_world <= 0) || (point_cam_distance_world >= kinect_cam.get_max_depth()))
        continue;
    double point_cam_distance = vec_point_cam.length();
    // If the end of the ray is within the truncation distance mu of its start,
    // there is no point in traversing it.
    if (point_cam_distance < mu)
        continue;
    vec_point_cam.normalize();
    // The ray starts at the camera position in index space (not at the index origin),
    // so that the traversal times below measure distance from the camera.
    openvdb::math::Ray<double> ray_cam_pt(cam_index, vec_point_cam);
    // Voxel traversal using OpenVDB's built-in DDA
    openvdb::math::DDA<openvdb::math::Ray<double>, 0 /*Log2Dim=0 corresponds to voxel*/> voxel_traversal(
        ray_cam_pt, 0 /*startTime*/, point_cam_distance - mu /*maxTime*/);
    // From the camera center, walk towards the point by DDA traversal.
    openvdb::Coord voxel_coord;
    float f_k_p_prev, f_k_p_new, tsdf_value, w_k_p_prev, w_k_p_new, w_k_p_new_inverse;
    // Free space in front of the surface: update towards the background value.
    bool continue_ray = true;
    while (continue_ray)
    {
        voxel_coord = voxel_traversal.voxel();
        f_k_p_prev = tsdf_accessor.getValue(voxel_coord);
        w_k_p_prev = weight_accessor.getValue(voxel_coord);
        w_k_p_new = std::min(w_k_p_prev + 1, max_weight);
        w_k_p_new_inverse = 1.f / w_k_p_new;
        f_k_p_new = ((w_k_p_prev * f_k_p_prev) + background) * w_k_p_new_inverse;
        tsdf_accessor.setValue(voxel_coord, f_k_p_new);
        weight_accessor.setValue(voxel_coord, w_k_p_new);
        active_voxels_accessor.setValue(voxel_coord, f_k_p_new);
        continue_ray = voxel_traversal.step(); // false once maxTime is reached
    }
    // TSDF band around the surface: values ramp from +background down to -background.
    double ray_dist_from_cam = voxel_traversal.time();
    while (ray_dist_from_cam < (point_cam_distance + mu)) {
        voxel_coord = voxel_traversal.voxel();
        f_k_p_prev = tsdf_accessor.getValue(voxel_coord);
        w_k_p_prev = weight_accessor.getValue(voxel_coord);
        w_k_p_new = std::min(w_k_p_prev + 1, max_weight);
        w_k_p_new_inverse = 1.f / w_k_p_new;
        // Signed distance to the surface along the ray, truncated to [-background, background]:
        // +background at point_cam_distance - mu, 0 at the surface, -background at point_cam_distance + mu.
        tsdf_value = background * static_cast<float>(
            std::max(-1.0, std::min(1.0, (point_cam_distance - ray_dist_from_cam) * mu_inv)));
        f_k_p_new = ((w_k_p_prev * f_k_p_prev) + tsdf_value) * w_k_p_new_inverse;
        tsdf_accessor.setValue(voxel_coord, f_k_p_new);
        weight_accessor.setValue(voxel_coord, w_k_p_new);
        active_voxels_accessor.setValue(voxel_coord, f_k_p_new);
        // Step first, then re-read the time, so the loop condition sees the new position.
        voxel_traversal.step();
        ray_dist_from_cam = voxel_traversal.time();
    }
    // Occupied voxels behind the surface
    for (int j = 0; j < thickness_occ; ++j) { // 'j' so as not to shadow the outer loop index
        voxel_coord = voxel_traversal.voxel();
        f_k_p_prev = tsdf_accessor.getValue(voxel_coord);
        w_k_p_prev = weight_accessor.getValue(voxel_coord);
        w_k_p_new = std::min(w_k_p_prev + 1, max_weight);
        w_k_p_new_inverse = 1.f / w_k_p_new;
        f_k_p_new = ((w_k_p_prev * f_k_p_prev) - background) * w_k_p_new_inverse;
        tsdf_accessor.setValue(voxel_coord, f_k_p_new);
        weight_accessor.setValue(voxel_coord, w_k_p_new);
        active_voxels_accessor.setValue(voxel_coord, f_k_p_new);
        voxel_traversal.step();
    }
}
The problem is that this process is fairly slow, so I want to parallelize the loop. But when I do that, for example with an OpenMP parallel for, the code crashes. After some debugging, I found that it crashes because of these lines.