Kinect Scanner Software


Yaima President

Aug 5, 2024, 2:25:50 AM
I would like to use my old Xbox 360 Kinect as a scanner for 3D modeling and (hopefully) printing a few busts of friends/family members; however, every approach I have tried so far has failed. Has anyone had success with this, and if so, how do I fix the issues that I am facing?

Any ideas? I also have a Linux Mint laptop that I will happily use for this if I can find out which software to capture the 3D scan with. It seems that there are several driver options out there; I am just unsure what to use beyond that. Blender can apparently be used for some motion capture with the Kinect; however, I am unsure how that relates to my goals.


Dear Felipe,

Unfortunately, I am still stuck and unable to convert the Kinect point cloud to a mesh inside the Grasshopper environment.

I did find tutorials on converting a point cloud to a mesh in Houdini, but how do I do the same in Grasshopper?

The point cloud generated by the Kinect contains an ID, color, and position for each point. I believe converting it to a mesh should not be hard with nearest-point logic, but how?
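
For reference, here is a minimal sketch of that nearest-point meshing done outside Grasshopper with the Open3D Python library (the file names, normal-estimation radius, and Poisson depth below are assumptions for illustration, not values from this thread):

# Hedged sketch: meshing a Kinect point cloud with Open3D. Assumes the points
# and per-point colors were exported from Grasshopper as plain text arrays.
import numpy as np
import open3d as o3d

points = np.loadtxt("kinect_points.xyz")   # assumed Nx3 XYZ export
colors = np.loadtxt("kinect_colors.txt")   # assumed Nx3 RGB values in 0..1

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.colors = o3d.utility.Vector3dVector(colors)

# Normals are required before surface reconstruction; this nearest-neighbour
# search is essentially the "nearest point logic" done for you.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Poisson reconstruction turns the oriented point cloud into a watertight mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("kinect_mesh.ply", mesh)

Inside Grasshopper the same two steps (estimate normals, then Poisson or ball-pivoting) would have to come from a scripting component or a meshing plug-in.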


Thanks a lot, dear Martin. I agree that Firefly's Kinect components are good at importing the point cloud into the Grasshopper environment, but converting that point cloud to a mesh is where I am stuck. I will follow the recommendations suggested by you, Felipe, and Riccardo and post the results here. Thanks a lot for taking the time to read my thread.


Thank you, Martin, for your recommendation.

It looks really powerful for managing point cloud data compared to GH. I will get deeper into it. Thank you again for the suggestion.

By the way, could you give me your feedback on the Trimble X7 scanner? Is it better than the Artec scanners?


Seems pretty clear people are interested in hacking the Leap and creating a scanner along the lines of Skanect or similar. I bought my unit with just this goal in mind. I firmly believe this implementation is possible, though I have only just come onto the Leap scene and have much work to do before I understand the SDK and APIs.


Just a bit about me: I am a seasoned fabricator with some coding skills and some CNC knowledge - looking for math nerds and code junkies to collaborate on the software end. I'll be posting some Sketchup drawings for what I have in mind in the coming days. And since this is possibly a large scale endeavor, I figure the scannable volume should be human-sized - full body.


At its heart, this machine is simply a motorized Lazy Susan. The mechanical end should be very straightforward - a few gear calculations and some trigonometry make up the math, and a stepper motor and an XL timing belt are relatively cheap and plug-and-play. I'd love a unit that spun around a body like an airport scanner, but we should stick with the simplest workable implementation before we decide to get fancy.
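
As a rough illustration of those gear calculations (every number below is an assumption for a hypothetical build, not a spec for this project):

# Hedged sketch: how many stepper steps per capture angle for a belt-driven
# turntable. All values are assumed for illustration.
MOTOR_STEPS_PER_REV = 200      # typical 1.8-degree stepper
MICROSTEPPING = 16             # assumed driver microstep setting
MOTOR_PULLEY_TEETH = 20        # assumed XL pulley on the motor shaft
PLATFORM_PULLEY_TEETH = 120    # assumed pulley ring under the Lazy Susan

gear_ratio = PLATFORM_PULLEY_TEETH / MOTOR_PULLEY_TEETH
steps_per_platform_rev = MOTOR_STEPS_PER_REV * MICROSTEPPING * gear_ratio

captures = 72                  # one capture every 5 degrees
steps_per_capture = steps_per_platform_rev / captures
print(f"{steps_per_platform_rev:.0f} steps per platform revolution, "
      f"{steps_per_capture:.1f} steps per 5-degree stop")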


We should focus on nailing down the best programming-language approach at the beginning - whatever works best to process the raw data and convert it into a usable 3-D image. Python and Blender seem appropriate, but I'd love to hear from the community. It also depends on the final use for the 3-D image - will it be a model? Used online or in a game? 3-D printed or otherwise CNC'd into existence?


Hello, the Leap Motion controller is not made to be used as a 3D scanner and works quite differently from the Microsoft Kinect. The most I have seen up till now is access to the raw image data (no depth information at all) that the Leap generates.


If you want to build a 3D scanner from scratch, the Leap Motion hardware is not the place to start. If you want to use the Leap Motion approach of reconstructing edges from edge pairs in scanlines in stereoscopic images, the Leap SDK will provide absolutely zero assistance here. You would also be restricted to scanning convex objects, or - as with the human hand - objects that can be represented as ellipses or circles arranged in a manner that avoids obstruction/occlusion.


If the Leap SDK was reasonably layered at the lower level - with interfaces to (a) controller setup and image acquisition, (b) contrast enhancement and edge detection, (c) ellipse reconstruction - then you could use a Leap (or, if multiple Leaps would be supported at the lower level, several) and a rotating base (and an IR-absorbent enclosure) to generate your own polygon slice/mesh reconstruction from silhouettes, but there has been no indication that Leap is considering - or even able to deliver - any such refactoring of the SDK.
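
For what it's worth, here is a toy sketch of that reconstruction-from-silhouettes idea (a visual hull by voxel carving). It assumes an orthographic camera and a perfectly calibrated turntable axis, which a real rig would not get for free:

# Toy sketch: carve a voxel volume using one silhouette mask per turntable angle.
# A voxel survives only if every view sees it inside the silhouette.
import numpy as np

def carve(silhouettes, angles_deg, grid=64, radius=1.0):
    """silhouettes: list of HxW boolean masks, one per turntable angle (degrees)."""
    lin = np.linspace(-radius, radius, grid)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    occupied = np.ones_like(X, dtype=bool)
    for mask, angle in zip(silhouettes, np.radians(angles_deg)):
        # Rotate the voxel grid to this view (the object spins about the Z axis),
        # then project orthographically onto the image plane.
        x_cam = X * np.cos(angle) + Y * np.sin(angle)
        h, w = mask.shape
        u = np.clip(((x_cam + radius) / (2 * radius) * (w - 1)).astype(int), 0, w - 1)
        v = np.clip(((Z + radius) / (2 * radius) * (h - 1)).astype(int), 0, h - 1)
        occupied &= mask[v, u]
    return occupied  # boolean voxel volume; mesh it afterwards with marching cubes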


As the Irish say, if you want to go there, you shouldn't start from here, but if you are really determined, a USB capture tool and OpenCV are your likely starting points. To my knowledge, nobody has reverse engineered and documented the controller setup USB protocol so far.


I don't see any reason that raw stereoscopic image sets can't be algorithmically resolved into a usable 3D image - in fact, the patent documents claim it can be done. Then again, I haven't seen the raw data - has anyone really?


"As another example, locations of points on an object's surface in a particular slice can be determined directly (e.g., using a time-of-flight camera), and the position and shape of a cross-section of the object in the slice can be approximated by fitting an ellipse or other simple closed curve to the points. Positions and cross-sections can be correlated to construct a 3-D model of the object."


That image you provided from the patent documents doesn't look like raw data to me. All that mathematics cited in the same document represents how the software resolves the camera information into an easily trackable hand-like object, if I'm reading it correctly.


So the question seems to be: is anyone willing/able to hack down past the SDK and find that raw data, so we can use it to generate usable meshes and objects? As stated before, I am not the man for that job; this post is intended to ferret out those who might be.


It is amazing that two years later the insistence that the "stereoscopic point-cloud visualization" has anything to do with the raw data has not abated at all in the forums. The Leap is a straightforward stereo camera. The Leap software attempts to reconstruct fingers from silhouette edges. If you are hoping for some other "raw data", you are reading too much into the void between the [scan]lines.


The Leap has size and cost advantages over other hardware that could theoretically do 3D scanning, but it doesn't have a color camera for object texturing, and its wide FoV makes it impractical to scan things at medium-to-long distances (unless you do monocular SLAM maybe, but then you might as well use a color camera).


If you want a full body scanner, one has been built successfully using multiple Raspberry Pis with camera modules. A spherical framework is set up with the cameras, and they capture multiple images simultaneously rather than using a rotating platform.


Then, of course, connect it to a 3D printer and have an army of Mini-Mes.


Sorry for restarting a 5-year-old thread. I've been using OpenCV's stereoscopic reconstruction functions to generate a point cloud from the raw images, and the results have been okay-ish so far, but I think the accuracy could improve with better calibration. Has anyone here had better luck with the intrinsic/extrinsic camera parameters? I've had to make do with the values I found in this paper.
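
In case it helps anyone comparing notes, this is roughly the pipeline I mean: rectify the left/right frames, compute disparity, reproject to 3D. Every calibration number below is a placeholder, not a value from that paper:

# Hedged sketch of stereoscopic reconstruction with OpenCV. The intrinsics,
# baseline, and file names are placeholders/assumptions.
import numpy as np
import cv2

K = np.array([[280.0, 0.0, 320.0],
              [0.0, 280.0, 120.0],
              [0.0, 0.0, 1.0]])          # assumed intrinsics (same for both cameras)
dist = np.zeros(5)                       # assumed: images already undistorted
R = np.eye(3)                            # assumed: cameras perfectly parallel
T = np.array([-0.04, 0.0, 0.0])          # assumed ~40 mm baseline

left = cv2.imread("leap_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("leap_right.png", cv2.IMREAD_GRAYSCALE)
size = left.shape[::-1]                  # (width, height)

R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, dist, K, dist, size, R, T)
map_l = cv2.initUndistortRectifyMap(K, dist, R1, P1, size, cv2.CV_32FC1)
map_r = cv2.initUndistortRectifyMap(K, dist, R2, P2, size, cv2.CV_32FC1)
left_r = cv2.remap(left, *map_l, cv2.INTER_LINEAR)
right_r = cv2.remap(right, *map_r, cv2.INTER_LINEAR)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = sgbm.compute(left_r, right_r).astype(np.float32) / 16.0  # SGBM output is fixed-point
points = cv2.reprojectImageTo3D(disparity, Q)                        # HxWx3 point cloud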


I was wondering if anyone here had any insight into 3D scanning. I've looked at some consumer products, which look very pricey, and at some homemade solutions too, but they seem quite time-consuming and inaccurate. I would really like to have a 3D scanner alongside my Ultimaker; it would make modifying/upgrading existing things a lot easier.


I've seen people use ReconstructMe. It's actually the only thing that I've seen being used with results. But it's limited to the resolution of the Kinect, so it can only scan things that are roughly "human sized".


Yeah, I've looked at ReconstructMe; it looks like a better solution than nothing, especially since you can pick up a Kinect for around 50 euros in the local classifieds. I think I might give it a try. Otherwise I think I have to wait until some new sort of technology comes along, or some new sort of cash flow!


ReconstructMe worked well when I had hardware it was OK with. Unfortunately, it was completely incompatible with my two laptops. KScan3D didn't perform as well, but it was more hardware-friendly -- as it's just come out, I'm sure it has been refined a bit.


The sensor I'm using is a PrimeSense Carmine 1.09 with Skanect software, and it delivers printable objects within a few minutes. Skanect is great since it has finishing tools built in, like making the scan watertight, plane cuts to remove background noise, etc. You can also export full-color scans with it and send them off to a full-color 3D printing service.


To get an even higher-resolution output from the sensor, I've put a pair of +2.5 reading glasses on top of it; the amount of detail is crazy with this. This also works for the Kinect, but it still doesn't deliver the same results as the Carmine with glasses.


The software itself works best with a high-end video card with lots of CUDA cores, but that isn't strictly needed. Since the latest version (1.3) there is also the option to use the CPU instead of the GPU to reconstruct the scan. You won't get good feedback during scanning, but it will record and reconstruct at a slower pace later on.


How does the depth sensing work? Have you read my post on photogrammetry yet? There are some overlaps between the techniques. There are two IR cameras spaced apart, so the device sees two very similar but shifted images. The processor finds interesting points (i.e. feature extraction) in the two images and matches them up; the coordinates of these matched points can then be used to calculate their 3D positions relative to the cameras (i.e. binocular disparity).
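
The relationship underneath is just triangulation: depth equals focal length times baseline divided by disparity. A tiny illustration with made-up numbers:

# Minimal illustration of binocular disparity. The focal length, baseline, and
# pixel coordinates below are assumptions, not Leap or Kinect specifications.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

# A feature matched at x=310 in the left image and x=290 in the right:
print(depth_from_disparity(f_px=580.0, baseline_m=0.075, disparity_px=310 - 290))
# -> 2.175 metres in front of the cameras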


Did you pay attention to my video, the part where I walk down a hallway? Did you notice how I picked a location with many posters on the walls? The video below is the same clip but slowed down; pay attention to those green and yellow dots - those are extracted features being tracked.
