Hi Alex,
First, be sure you are recording the bag file on the same machine
that is attached to the Kinect. Recording to another machine over a
wireless connection is bound to get bogged down.
A good place to start is
this
Wiki page on recording data from a Kinect, although it hasn't
been updated in a while. The trick appears to be to record the raw
data topics for the rgb and depth images, since the pointclouds can be
reconstructed on the fly during playback. Pay particular attention
to the depth_registration parameter during recording and the
publish_tf parameter during playback, as shown below.
Try these steps to record your bag file:
$ rosparam set use_sim_time false
$ roslaunch openni_launch openni.launch depth_registration:=true
$ rosbag record -O ~/catkin_ws/src/rbx1/rbx1_nav/bagfiles/data_14dec14.bag \
    camera/depth_registered/image_raw camera/depth_registered/camera_info \
    camera/rgb/image_raw camera/rgb/camera_info \
    /tf /cmd_vel /pose /scan /depthimage_to_laserscan
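To see why recording the raw image topics is so much cheaper than recording the pointclouds themselves, here is a back-of-the-envelope size comparison in Python. The 32-byte point step for an XYZRGB PointCloud2 is an assumption (the actual step depends on the field layout openni uses), but the rough ratio holds:

```python
# Rough per-frame payload sizes at VGA resolution (640x480), illustrating
# why recording raw images is much cheaper than recording point clouds.
width, height = 640, 480
raw_depth = width * height * 2    # 16-bit depth image: ~0.6 MB per frame
raw_rgb = width * height * 3      # 8-bit RGB image:    ~0.9 MB per frame
# A PointCloud2 with x, y, z, rgb fields commonly pads each point out to a
# 32-byte point step (an assumption; the actual step can vary).
pointcloud = width * height * 32  # ~9.8 MB per frame

print(raw_depth, raw_rgb, pointcloud)
```

At 30 frames per second, that is the difference between a few tens of MB/s and hundreds of MB/s, which is why recording the clouds directly tends to drop messages.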
Then, for playback: first Ctrl-C out of your openni.launch session so
that the camera driver is no longer running. Then:
$ rosparam set use_sim_time true
$ roslaunch openni_launch openni.launch load_driver:=false publish_tf:=false
This launches an openni node without the camera driver and without
publishing the camera transforms, which you have already captured in your
bag file. The openni node does the magic of converting your
recorded raw image topics to pointclouds on the fly.
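Conceptually, that on-the-fly conversion is just a pinhole back-projection of the registered depth image using the intrinsics from the recorded camera_info topic. Here is a minimal numpy sketch of the idea; the focal lengths and principal point below are typical Kinect stand-ins, not values read from a real camera_info message, and the depth image is faked as a flat 1 m plane:

```python
import numpy as np

# Assumed pinhole intrinsics (typical Kinect-class values, for illustration).
fx, fy = 525.0, 525.0   # focal lengths in pixels
cx, cy = 319.5, 239.5   # principal point

# Fake registered depth image: every pixel reads 1000 mm (1 meter).
depth = np.full((480, 640), 1000, dtype=np.uint16)

z = depth.astype(np.float32) / 1000.0  # convert mm to meters
u, v = np.meshgrid(np.arange(depth.shape[1]), np.arange(depth.shape[0]))

# Back-project each pixel (u, v, z) to a 3D point in the camera frame.
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.dstack((x, y, z)).reshape(-1, 3)  # one XYZ point per pixel

print(points.shape)  # (307200, 3)
```

Because the depth image is registered to the rgb frame, each of these points lines up with the rgb pixel at the same (u, v), which is what lets playback produce a colored cloud on /camera/depth_registered/points.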
If you want to see the pointcloud during playback, bring up RViz
with a PointCloud2 display and listen to the topic
/camera/depth_registered/points. For example, you could use:
$ rosrun rviz rviz -d `rospack find rbx1_vision`/pcl.rviz
And then change the Original Point Cloud topic to
/camera/depth_registered/points.
Now play back your file:
$ rosbag play --clock ~/catkin_ws/src/rbx1/rbx1_nav/bagfiles/data_14dec14.bag
Let us know if that works for you!
--patrick