Best Kinect 3d Scanner Software


Karleen Chura

unread,
Aug 5, 2024, 3:45:10 AM8/5/24
to forbovocer
Dear Felipe,

Unfortunately, I am still stuck and unable to convert a Kinect point cloud to a mesh inside the Grasshopper environment.

I did find tutorials on converting a point cloud to a mesh in Houdini, but how do I do the same in Grasshopper?

The point cloud generated by the Kinect contains an ID, color, and point position for each point. I believe converting it to a mesh should not be hard with nearest-point logic, but how?
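One thing worth noting is that a raw Kinect point cloud is an organized grid (one point per depth pixel), so a simpler approach than generic nearest-point searching is to connect each valid pixel to its right and lower neighbors, emitting two triangles per 2x2 cell. This is a minimal plain-Python sketch of that idea, not a Grasshopper component; the `points` layout (row-major list of (x, y, z) tuples, `None` for invalid depth) is an assumption for illustration:

```python
def grid_to_mesh(points, width, height):
    """Triangulate an organized point cloud (row-major grid).

    points: list of (x, y, z) tuples, or None for invalid pixels.
    Returns (vertices, faces) where faces index into vertices.
    """
    index = {}                             # grid position -> vertex index
    vertices = []
    for i, p in enumerate(points):
        if p is not None:
            index[i] = len(vertices)
            vertices.append(p)

    faces = []
    for row in range(height - 1):
        for col in range(width - 1):
            a = row * width + col          # top-left
            b = a + 1                      # top-right
            c = a + width                  # bottom-left
            d = c + 1                      # bottom-right
            # Only mesh cells where all four corners have valid depth
            if all(k in index for k in (a, b, c, d)):
                faces.append((index[a], index[c], index[b]))
                faces.append((index[b], index[c], index[d]))
    return vertices, faces


# Tiny 2x2 grid -> one quad -> two triangles
verts, faces = grid_to_mesh(
    [(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1)], width=2, height=2)
print(len(verts), len(faces))  # 4 vertices, 2 triangles
```

The same loop structure could be reproduced in a GhPython component, feeding the vertex and face lists into a Rhino mesh.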


Thanks a lot, dear Martin. I agree that Firefly is good at importing the Kinect point cloud into the Grasshopper environment, but converting that point cloud to a mesh is where I am stuck. I will follow the recommendations suggested by you, Felipe, and Riccardo, and post the results here. Thanks a lot for taking the time to read my thread.


Thank you, Martin, for your recommendation.

It looks really powerful for managing point cloud data compared to Grasshopper. I will dig deeper into it. Thank you again for the suggestion.

By the way, could you give me your feedback on the Trimble X7 scanner? Is it better than the Artec scanners?


My project group from the Netherlands and I are building a robot. It is built on a mobility scooter platform with a TV on top. In the future we want it to drive through crowds of people, for example at a convention, but at the moment we still have some issues with it. We are the sixth project group to work on this robot, and we only have 18 weeks to finish it, so it is quite hard to completely understand ROS. We are using Ubuntu 16.04 with ROS Kinetic.


Our robot is quite tall, about 1.5 m, which brings us to one problem: tables. When we drive the robot around, it wants to go underneath tables because it sees the legs but not the tabletop. At the moment we have a lidar sensor and multiple sonar sensors at the bottom of the base, but nothing to detect obstacles higher off the ground.


Now we had the idea to add a Kinect v1, which we already have, but we could not find the best way to integrate it. I have read about SLAM, gmapping, etc., but we don't really understand what would be best to implement. I also saw a package that takes a point cloud from the Kinect and converts it to a laser scan.


You can use a Kinect sensor to obtain a DepthImage of the surroundings, adding the additional camera frames to the robot model. Once you have built the proper tf tree with the robot model and sensors, use the Kinect driver on ROS to obtain the data through a ROS topic, and feed it to this conversion node to obtain a LaserScan that can be used in the ROS Navigation stack.
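The real ROS conversion node works directly on the depth image, but the underlying idea can be sketched in a few lines: take the 3D points in a horizontal band at table height, compute a bearing and range for each, and keep only the closest range per angular bin, exactly what a planar lidar would report. This is a toy illustration, not the ROS node; the band limits and bin count are arbitrary assumptions:

```python
import math

def cloud_to_scan(points, angle_min, angle_max, n_bins,
                  z_min=-0.1, z_max=0.1):
    """Collapse 3D points into a 2D laser scan (closest hit per bearing).

    points: iterable of (x, y, z) in the sensor frame (x forward, y left).
    Returns a list of ranges; math.inf where no point fell in a bin.
    """
    ranges = [math.inf] * n_bins
    bin_width = (angle_max - angle_min) / n_bins
    for x, y, z in points:
        if not (z_min <= z <= z_max):
            continue                      # outside the horizontal band
        angle = math.atan2(y, x)
        if not (angle_min <= angle < angle_max):
            continue
        r = math.hypot(x, y)
        b = int((angle - angle_min) / bin_width)
        ranges[b] = min(ranges[b], r)     # keep the nearest obstacle
    return ranges


# One point 2 m straight ahead and a closer one in the same direction:
# the closer obstacle wins, as a lidar would report.
scan = cloud_to_scan([(2.0, 0.0, 0.0), (1.5, 0.0, 0.0)],
                     -math.pi / 2, math.pi / 2, n_bins=9)
print(scan[4])  # middle bin: 1.5
```

For the tabletop problem specifically, the band would be set at the robot's height, so table surfaces produce "hits" the navigation stack can avoid.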


This seems like a much broader question than just setting up the Kinect. One thing you can do is tilt the lidar, but that is an implementation question if you just want to move forward. I don't know exactly which sensors you have, but here are the steps.


With the help of Sylvain, I tried to bring a physical object into the digital world using a 3D scanner. We have a Kinect here in the fablab. We installed Skanect and the Kinect SDK 1.0 to use this piece of technology.


The Kinect embeds a color camera and an IR projector. The projector casts infrared dots into the room; those IR beams hit different objects, and from them the Kinect builds a 3D representation of the scene. In other words, the Kinect sees a cloud of points, and the computer does its best to match those dots with the color image to produce a usable 3D scan.
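The step from a depth image to that cloud of points is the standard pinhole camera model: for a pixel (u, v) with depth z, the 3D coordinates are x = (u - cx) * z / fx and y = (v - cy) * z / fy. A small sketch with made-up intrinsics (the real fx, fy, cx, cy come from the device calibration):

```python
def depth_to_points(depth, width, fx, fy, cx, cy):
    """Back-project a row-major depth image into 3D camera-frame points."""
    points = []
    for i, z in enumerate(depth):
        if z <= 0:
            continue                      # 0 means "no depth measured"
        u, v = i % width, i // width
        points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points


# 2x1 "image": one invalid pixel, one pixel 1 m away at the principal point
pts = depth_to_points([0.0, 1.0], width=2, fx=500.0, fy=500.0, cx=1.0, cy=0.0)
print(pts)  # [(0.0, 0.0, 1.0)]
```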


Unfortunately, the free version of Skanect only provides low-quality exports, so we had to crop the scan of Sylvain down to just his head. But it is still pretty amazing that we can recognize him!


Fortunately, there is a free program called Meshroom made to convert a series of pictures of a model into a 3D model! Meshroom uses photogrammetry to convert pictures into a mesh. I chose a simple model I have at home: Cocott the chicken.


I tried to take several photos around Cocott to get as much detail as possible. Then I transferred these pictures to my computer (a powerful unit) and pressed Start without changing any settings.


Okay, this is my problem: I am attempting to make a device that will give me a position (or change in position, either one) like a GPS would, plus a rotation, although gyros already cover the rotation. It would be mounted on top of an Xbox Kinect connected to my computer (I am trying to make a 3D object scanner), so I need it to tell me something along the lines of "you are x, y, z away from where you started" a few times a second (obviously just the numbers). I am not using this over long distances (I would move at most five feet over a couple of seconds), but a GPS is not accurate enough for this, at least not any that I have seen. I need it to be accurate to at least within two inches.

Assuming the accelerometer and gyro are accurate, I know the math needed to get the position. I also know that the longer you use this, the more error accumulates, but as I said, I am only using it for short spans at a time.
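The math in question is double integration of acceleration, which is also exactly why the error grows: a constant sensor bias integrates into a position error that grows quadratically with time. A minimal 1D sketch using the trapezoid rule at a fixed sample period (axes, sample rate, and units are illustrative assumptions):

```python
def integrate_position(accels, dt):
    """Dead-reckon 1D position from acceleration samples (trapezoid rule)."""
    v = x = 0.0
    prev_a = accels[0]
    for a in accels[1:]:
        prev_v = v
        v += 0.5 * (prev_a + a) * dt      # velocity from acceleration
        x += 0.5 * (prev_v + v) * dt      # position from velocity
        prev_a = a
    return x


# Constant 1 m/s^2 for 1 s (11 samples at 100 ms): x = 0.5 * a * t^2 = 0.5 m
print(integrate_position([1.0] * 11, 0.1))
```

Running the same integration on a pure-bias signal (e.g. a constant 0.05 m/s^2 error) shows how quickly drift exceeds a two-inch budget, which is the core feasibility concern raised in the replies below.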


So my first question is: is this feasible? Would it be accurate enough? Or would there just be too much noise? (Maybe someone has done this before and can tell me how stupid I am for trying...)


If you move slowly enough, your successive 3D scans should be highly correlated. Perhaps some math would be enough to determine the Kinect's relative position. See this example: 3D models built with Kinect-style depth camera - YouTube
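The RGB-D mapping work mentioned later in this thread does this with full ICP-style registration, but as a toy illustration of the "some math" idea: if two scans observe the same points and the sensor only translated, the motion is just the difference of the scans' centroids. This sketch assumes known point correspondences, which real scan matching has to estimate, and ignores rotation entirely:

```python
def centroid(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def estimate_translation(scan_a, scan_b):
    """Sensor translation between two scans of the same world points.

    If the sensor moves by t, every point appears shifted by -t in the
    new scan, so t = centroid(scan_a) - centroid(scan_b).
    """
    ca, cb = centroid(scan_a), centroid(scan_b)
    return tuple(a - b for a, b in zip(ca, cb))


a = [(1.0, 0.0, 2.0), (2.0, 1.0, 2.0)]
b = [(0.5, 0.0, 2.0), (1.5, 1.0, 2.0)]   # same points, sensor moved +0.5 in x
print(estimate_translation(a, b))  # (0.5, 0.0, 0.0)
```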


If this is an object scanner, does that mean you can't rely on it being visible to a static camera? Putting a highly visible marker on it and tracking its position in 3D using video cameras seems like your best bet for accurate positions. I can't imagine any solution based on GPS or inertial sensing giving you anything like that accuracy. The only other way I can think of would be a low-powered radar-like range finder based on a reflector or transponder, but I can't think of any commercial products that would provide a solution, and it would be extraordinarily difficult to develop your own radar system (at all, let alone with this resolution).


Good thing the comments on that YouTube video give the name of the technical paper: "RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments". Google found it here: -mapping-iser-10-final.pdf


@PeterH That's a good idea, although I think it would take two or three normal cameras. I could also use another Kinect to get the position of the first one, but I think I'm going to try johnwasser's idea first because it requires fewer components.


The SR300 is integrated into various devices by third parties. For 3D scanning, the most important one is the 3D Systems Sense 2 (Review) pictured above. This version is really intended for 3D scanning and comes in a housing that makes it easy to hold in an upright position. It works with the great (and free) Sense for RealSense software.


If you want a depth sensor to make 3D scans on a Windows 10 machine, the RealSense SR300 is a great option. Because of the free, versatile software, I currently advise getting the 3D Systems Sense 2 instead of the slightly cheaper webcam-style alternatives.


As you can see, the object scans are comparable to those made with the Structure Sensor, albeit with slightly less crisp textures. For scanning people, I found that the SR300 works okay for busts, but things get a bit tricky when trying full body scans (see examples in my Full SR300 Review). The Structure Sensor is better in that field.




I do share your frustration, to be honest! There is so much potential for 3D scanning with the Structure Sensor, but Occipital seems more focused on the VR/AR applications of the device, which obviously have more mainstream potential.


Hello Nick, first of all, congrats on your site; it is top-notch and one of the most informative on 3D scanning matters!

I am looking for a 3D scanner mainly to scan small parts like holders and plastic covers, and I was looking at the SR300 as it is in my price range; I think it could work with a motorized turntable.

I was also looking at the EinScan, and it seems to have better quality, but the price is also higher.

I already had a Kinect v2 for this but sold it due to the lack of quality.

Thank you, and keep up the good work!


That said, the 3D scanner market is very dynamic at the moment, and many companies are trying to lower the entry price. One example is the Eora3D laser scanner, which is expected to start shipping sometime this year: -pixelio-bevel-smartphone-3d-scanners/#eora3d


Hi there. All I need is to process depth frames, so I am looking to buy a depth sensor that I can attach to a Raspberry Pi 3.

What I am going to do is track people's heads to see where they are going, in or out of the building. From the depth frame I will detect the heads, and from there I will count people going in and out.

Please suggest a depth sensor to buy, one with OpenNI support and SDKs for the Raspberry Pi 3 Model B.
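Whichever sensor is chosen, the counting part after head detection is simple: track each head's position across frames and count when it crosses a virtual line at the doorway. A sketch assuming detection already yields one image-row coordinate per tracked head per frame (the detection step itself is the hard part, and the line position is arbitrary here):

```python
def count_crossings(track, line_y):
    """Count crossings of a virtual line for one tracked head.

    track: successive y positions of the head (e.g. image rows).
    Returns (entries, exits): crossings going down vs. up past line_y.
    """
    entries = exits = 0
    for prev, cur in zip(track, track[1:]):
        if prev < line_y <= cur:
            entries += 1                  # crossed the line going "in"
        elif cur < line_y <= prev:
            exits += 1                    # crossed going "out"
    return entries, exits


# Head walks in past the doorway line at y=100, then back out
print(count_crossings([80, 95, 110, 120, 90], line_y=100))  # (1, 1)
```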


Using 3D Scan in Windows 10, I got the Kinect v2 to scan people just fine. I just couldn't get it to scan anything else with any kind of half-decent results. I also had to build a heavy-duty lazy Susan to spin people. After the scan, I used Meshmixer to clean it up and was able to 3D print models of people.


I did a preliminary test last night, and the Astra Mini camera does not work with RecFusion. It does work with ReconstructMe, so I know the camera itself is fine. I have questions out to both RecFusion and Orbbec, so hopefully I'll hear back from them soon and can continue my review.


Hi Nick. Congrats on the site and all your work; I think it is great!

I am thinking of buying the Structure Sensor for creating 3D scans of crashed vehicles. My idea is to make 3D scans of the crashed vehicles and their position in space, along with a scan of the terrain. The 3D models can then be used to calculate various parameters that speed up the crash investigation. Can you give me feedback on whether the Structure Sensor with an iPad Pro is suitable for such a task (outdoor scanning, large objects like vehicles and buses, terrain scanning)?

Thanks
