Kinect 3D Scan


Reginald Hanfy

Aug 5, 2024, 3:52:27 AM
to emtenrena
Dear Felipe,

Unfortunately I am still stuck and unable to convert a Kinect point cloud to a mesh inside the Grasshopper environment.

I did find Houdini tutorials on converting a point cloud to a mesh, but how do I do the same in Grasshopper?

The point cloud generated by the Kinect contains an ID, color, and position for each point. I believe converting it to a mesh should not be hard with nearest-point logic, but how?
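One thing worth noting: the Kinect delivers an organized point cloud (one point per depth pixel), so instead of searching for nearest points you can simply connect neighboring pixels into triangles. A minimal sketch in plain Python/NumPy (outside Grasshopper; in a GhPython component you would build a Rhino.Geometry.Mesh the same way, and the function name here is my own):

```python
import numpy as np

def grid_mesh(points, valid):
    """points: (H, W, 3) organized point cloud; valid: (H, W) bool mask.
    Returns (vertices, faces) by triangulating neighboring depth pixels."""
    H, W = valid.shape
    idx = -np.ones((H, W), dtype=int)          # vertex index per pixel, -1 = invalid
    idx[valid] = np.arange(valid.sum())
    verts = points[valid]
    faces = []
    for y in range(H - 1):
        for x in range(W - 1):
            a, b = idx[y, x], idx[y, x + 1]
            c, d = idx[y + 1, x], idx[y + 1, x + 1]
            if a >= 0 and b >= 0 and c >= 0:   # upper-left triangle
                faces.append((a, c, b))
            if b >= 0 and c >= 0 and d >= 0:   # lower-right triangle
                faces.append((b, c, d))
    return verts, np.array(faces)
```

In practice you would also skip triangles whose edge lengths exceed a threshold, so depth discontinuities don't get bridged by long "rubber" faces.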


Thanks a lot, Martin. I agree that Firefly's Kinect component is good at importing the point cloud into the Grasshopper environment, but converting that point cloud to a mesh is where I am stuck. I will follow the recommendations from you, Felipe, and Riccardo and post the results here. Thanks a lot for taking the time to read my thread.


Thank you, Martin, for your recommendation.

It looks really powerful for managing point cloud data compared to GH. I will dig deeper into it. Thank you again for the suggestion.

By the way, could you give me your feedback on the Trimble X7 scanner? Is it better than Artec scanners?


I would like to use my old Xbox 360 Kinect as a scanner for 3D modeling and (hopefully) printing a few busts of friends and family members; however, my efforts have failed in every direction I have taken. Has anyone had success with this, and if so, how do I fix the issues I am facing?


Any ideas? I also have a Linux Mint laptop that I would happily use for these efforts if I knew what software to capture the 3D scan with. It seems there are several driver options out there; I'm just unsure what to use beyond that. It also seems that Blender can be used for some motion capture with the Kinect; however, I am unsure how this relates to my goals.


Well, the thing is... we have these printers to print hi-res... and the Kinect stuff I messed with did a lousy job (which would have led to a lousy print)... if you guys find something or have better results, do tell!


Microsoft 3D Builder - free; requires a powerful machine to get an easy scan. Messing up the tracking typically means restarting; directional lighting conditions are not much of a problem. The slower your computer, the longer the scan takes, and it requires a high frame rate from the Kinect.


KScan3D - (kscan3d.com), used to be $300, now free. Easier to use than Microsoft 3D Builder (IMHO) and doesn't need as powerful a machine. Takes single 3D snapshots and auto-aligns them. Directional lighting is not as big a problem because the program seems to adjust brightness, but you'll still need plenty of captures. Also, messing up the alignment is a pain to fix. Quality seems low on some scans; I assume that's the bad scan data.


SFL:Standard - (scanfromlife.com), $75, no demo yet. Filters out the garbage and cleans up the sensor data. Outputs full-color PLY directly from the sensor data (so resolution is the maximum possible). Doesn't auto-align 3D snapshots; you have to use MeshLab or another external program for that. Doesn't touch color, so directional lighting is a problem (either have one light source overhead or unfocused ambient lighting). The pro is that it only needs 1 fps from the Kinect to work, so low-powered laptops can be used to take 3D snapshots. I took six 3D snapshots around a person with a dinky laptop, copied them to my more robust workstation, and was then able to create a full-color model.
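If you'd rather script that alignment step than click through MeshLab, the core of it is iterative closest point (ICP): match nearest neighbors, solve the best-fit rigid transform (the Kabsch algorithm), and repeat. A minimal NumPy sketch, with brute-force nearest neighbors, so it is only practical for small, downsampled clouds (real tools add k-d trees and outlier rejection):

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: nearest neighbors, then best-fit rigid transform."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    nn = dst[d.argmin(axis=1)]               # closest dst point for each src point
    cs, cn = src.mean(0), nn.mean(0)
    H = (src - cs).T @ (nn - cn)             # cross-covariance (Kabsch)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cn - R @ cs
    return src @ R.T + t

def align(src, dst, iters=20):
    """Iteratively move src onto dst."""
    for _ in range(iters):
        src = icp_step(src, dst)
    return src
```

ICP only converges when the snapshots already overlap and start roughly aligned, which is why walking the sensor around the subject in small steps matters.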


From what I've researched, the Kinect V2 was designed as a 3D motion sensor rather than a 3D scanner. The sensor outputs a lot of bad data, which can make scans unreliable (flat walls appear bumpy, edges have random spikes, etc.), and because of this it seems most 3D scanning software companies are not supporting it.


To use it like other 3D scanners that build the 3D model in real time, you'd need a powerful machine to average out the bad data with the good. But if you're going to spend a lot on a powerful machine, IMO you might as well pay a bit more for something that was built as a 3D scanner.
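For a static scene, though, that averaging doesn't require exotic hardware: taking the median over a handful of depth frames and rejecting spikes already suppresses most of the per-pixel noise. A rough sketch (the frame layout, millimeter units, and the 30 mm threshold are my assumptions, not anything from a specific tool):

```python
import numpy as np

def denoise_depth(frames, max_dev_mm=30.0):
    """frames: (N, H, W) depth maps in mm, where 0 means no reading.
    Per pixel: take the median over time, drop readings that deviate
    too far from it, then average the surviving inliers."""
    f = np.where(frames == 0, np.nan, frames.astype(float))
    med = np.nanmedian(f, axis=0)              # robust per-pixel estimate
    f[np.abs(f - med) > max_dev_mm] = np.nan   # reject spikes/outliers
    return np.nan_to_num(np.nanmean(f, axis=0))  # 0 where no valid data at all
```

This trades capture time for quality: a few dozen frames of a motionless subject beat one noisy frame, which suits busts far better than moving targets.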


Forgive my naivety, but I'm new to ROS, so it's all a bit strange to me. I followed the method provided in answers.ros.org/question/11363/how-to-generate-map-with-hector_slam-and-kinect/ and, after fixing a few problems with deprecated stacks, I was able to get the first launch file working. But the second launch file seems to raise the errors shown below.


Yes, I am using Fuerte... and the PDF file has /map at the top, followed by /base_stabilized and /base_frame, then a split into /camera_link and /nav; the camera_link is connected to the camera RGB and depth frames. Could it be that I have not included the right dependencies?


Sounds good. As long as everything is connected, it's fine. You may have to change the base_frame parameter of hector_mapping to base_stabilized to get it working quick and dirty, but you should really consider building a valid tf tree for your robot and adapting the nodes to it. :)


OK, that should work if you make sure that the only movements happen along the x- and y-axes and the only rotation is yaw (about the z-axis). The fake laser should always be parallel to the ground if you don't have a position sensor at hand.


Could you upload the generated frames.pdf and post your current launch files? There has to be some error in the tf configuration. You could also try replacing every appearance of base_stabilized with base_link.


Although this generates the first observation and map updates correctly, I've found that the Kinect fake laser scan moves in the opposite direction to the actual camera motion, so the map does not update properly beyond this point. When rotating the camera clockwise, the fake laser scan view moves counter-clockwise.
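A scan that rotates counter-clockwise while the camera rotates clockwise is the classic symptom of a tf transform published in the inverse direction (parent and child frames swapped), since inverting a rotation negates its yaw. A small NumPy illustration of that sign flip (the frame names in the comments are just the ones from this thread):

```python
import numpy as np

def yaw_matrix(deg):
    """2D rotation matrix for a yaw angle in degrees."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def yaw_of(R):
    """Recover the yaw angle (degrees) from a 2D rotation matrix."""
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))

# base_link actually rotates +10 degrees in the map frame...
R_map_base = yaw_matrix(10.0)
# ...but publishing the transform the wrong way round (its inverse)
# makes the laser frame appear to rotate -10 degrees instead:
R_inverted = R_map_base.T
```

In ROS terms: check the argument order of your static_transform_publisher calls and broadcaster code; swapping the parent and child frame publishes exactly this inverse.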


Photogrammetry is probably the most accurate and fastest way of 3D scanning. We have a full-body 3D scanner at Emerald 3D in Quincy, MA. It takes 0.8 seconds to take 170 simultaneous pictures and 10 minutes to stitch them all together with a computer program.


I usually make a short video while walking around an object, then take 70 frames out of it (using Blender) to upload to 123D Catch. I chose to make a video instead because photos can take more time, and it is quite hard for small children to sit still. Until now this has worked decently, but I would rather have a better solution.


I'm new to the board and having trouble viewing this picture... I'm going to guess that it's because I haven't posted enough. In addition to Mickey, I also have a working Goofy scan tag. I can upload it as soon as I figure out how.


Because of the refractive index of glass, orbiting a camera around a glass case with an object inside it will create a distorted image, because the apparent position of the object inside the case will move. The middle of the surface is at 90-degree (normal) incidence and the edge is at 45-degree incidence; these will produce different refractive shifts in the apparent location of the subject.
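The size of that shift follows from Snell's law: a ray entering a flat slab at angle theta is refracted to arcsin(sin(theta) / n) and exits parallel but laterally displaced. A quick sketch (the 5 mm thickness and n = 1.5 are assumed values for display glass, not measurements):

```python
import numpy as np

def lateral_shift_mm(theta_deg, thickness_mm=5.0, n=1.5):
    """Lateral displacement of a ray crossing a flat glass slab.
    Snell's law: sin(theta_r) = sin(theta_i) / n."""
    ti = np.radians(theta_deg)
    tr = np.arcsin(np.sin(ti) / n)
    return thickness_mm * np.sin(ti - tr) / np.cos(tr)
```

At normal incidence the shift is zero; at 45 degrees through 5 mm of glass it is roughly 1.6 mm. Since that offset varies with the incidence angle, the subject's apparent position drifts as the camera orbits, which is exactly the distortion described above.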


I totally disagree. Take a look at my in-depth technical analysis here: -scanner/technical-analysis-3d-scanned-nefertiti/

The glass reduced the maximum scanning distance, but it was still high enough to perform the scan through the glass protection, as shown in my article.


Actually, this is not true. I have done a number of scans through glass, but the second paragraph is spot-on. Yes, the refractive and reflective properties of glass can create distortions, but ironically, the way the PrimeSense technology works, as well as the way we take this into account in our software, makes this less of an issue. Plus, the lenses on the camera, as well as the protective shield on the Structure Sensor, are glass.


But I do agree that this, plus many other things in the process they allegedly used for the Nefertiti scan, is, to put it politely, very misleading, and they are now saying their Kinect was modified, something that was not mentioned in my interview with Nikolai.


The Kinect TOP will give you the depth information from the Kinect. If you are referring to creating 3D meshes by scanning objects with the Kinect (something Microsoft refers to as Kinect Fusion), this is not implemented in TouchDesigner at this time.


To clarify, the steps I listed above do the 3D scan outside of TouchDesigner, and then you can import this scan into TD as a 3D object. You can import this 3D file using the Free Thinking Environment.




Last year, a team of forensic dentists got authorization to perform a 3-D scan of the prized Tyrannosaurus rex skull at the Field Museum of Natural History in Chicago, in an effort to try to explain some strange holes in the jawbone.


Das is joined on the paper by Ramesh Raskar, a professor of media arts and sciences at MIT, who directs the Camera Culture group, and by Denise Murmann and Kenneth Cohrn, the forensic dentists who launched the project.


Das envisions that Kinect scans could prove as useful in other fields, such as archaeology and anthropology, as they could in paleontology. An archaeologist who unearths a large, fragile artifact in a remote corner of the world could scan it and immediately share the scan with colleagues around the world.


After looking at the wall wart for the Kinect, I saw that it runs off 12 V DC. Based on that, I picked up a small 12 V Tysonic battery from Jameco, cut off the wall wart, and soldered some alligator clips onto the power cord. Now the Kinect clips directly to the battery (which fits in my pocket), allowing me to hold the Kinect in one hand (using this sweet 3D-printed grip) and my laptop in the other, for completely untethered scanning.


I will post news about the progress of the development here to keep you informed. So, what do you think? Are you interested in supporting me on this? You can also post questions and suggestions, of course.
