Kinect as 3D Scanner


Markus Vervier

Nov 19, 2010, 8:24:45 AM
to make...@googlegroups.com

Hi Group,

I've been a happy MakerBot owner since August and have just been reading the list so far, so I'd like to say hi to everybody! :)

But back to the subject: wouldn't it be possible to use a Microsoft Kinect "camera" as a 3D scanner? We ordered one at work and it gives very good depth images. They could be converted to meshes without much effort, I guess. Take three or four images of an object, merge the meshes together (that's the non-trivial part), and you have a 3D reconstruction.
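The depth-image-to-point-cloud step described here can be sketched as a standard pinhole back-projection. This is a minimal sketch in NumPy; the intrinsics (fx, fy, cx, cy) are rough figures often quoted for the Kinect, not calibrated values, and meshing the points is a separate problem:

```python
import numpy as np

def depth_to_points(depth_m, fx=580.0, fy=580.0, cx=319.5, cy=239.5):
    """Back-project a depth image (in metres) to an Nx3 point cloud.

    fx, fy, cx, cy are assumed pinhole intrinsics - illustrative values,
    not a real calibration.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth_m > 0                      # 0 = no depth reading
    z = depth_m[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))

# Synthetic 480x640 depth frame: a flat wall 2 m away.
depth = np.full((480, 640), 2.0)
points = depth_to_points(depth)
print(points.shape)  # (307200, 3)
```

From a cloud like this, a mesh could be built by connecting neighbouring pixels into triangles, since the depth image already gives a regular grid.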

It is cheap (~150 € in Europe), has good resolution, and people have already hacked together Linux and Windows drivers for it.

Here are a few links:

http://kinecthacks.net/ - tons of info
http://codelaboratories.com/nui - windows driver + sdk

Markus

Cid Vilas

Nov 19, 2010, 9:40:31 AM
to make...@googlegroups.com

http://forums.reprap.org/read.php?138,66271

I proposed this on the RepRap forum. It seems people aren't sure its resolution is enough for it to work very well.

Hope they're wrong :)

> --
> You received this message because you are subscribed to the Google Groups "MakerBot Operators" group.
> To post to this group, send email to make...@googlegroups.com.
> To unsubscribe from this group, send email to makerbot+u...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/makerbot?hl=en.
>

Markus Vervier

Nov 19, 2010, 9:50:32 AM
to make...@googlegroups.com

Nice, it seems you're a few days ahead of me. ;) I would also assume that you could use more than one image to eliminate the errors... and perhaps use the normal camera image you get from the Kinect to find corresponding points.

Stan Seibert

Nov 19, 2010, 10:26:58 AM
to MakerBot Operators
Given the speed of the Kinect, one could obtain a very rich dataset by
repeatedly imaging an object on a very slowly rotating turntable of
known frequency.
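The turntable idea sketches out simply: with a known rotation rate and frame rate, each frame's point cloud can be rotated back into a common pose and merged. A minimal sketch, assuming rotation about the vertical (y) axis through the origin and ignoring calibration and noise (the helper names are made up for illustration):

```python
import numpy as np

def unrotate(points, angle_rad):
    """Rotate an Nx3 cloud about the y axis by -angle, undoing a known
    turntable rotation so every frame shares one pose."""
    c, s = np.cos(-angle_rad), np.sin(-angle_rad)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    return points @ rot.T

def merge_turntable_frames(frames, rpm, fps):
    """frames: list of Nx3 clouds captured at fps while the table spins at rpm."""
    step = 2.0 * np.pi * rpm / 60.0 / fps        # radians of rotation per frame
    merged = [unrotate(f, i * step) for i, f in enumerate(frames)]
    return np.vstack(merged)

# Demo: one point observed from four quarter-turn table positions
# (15 rpm at 1 fps = 90 degrees per frame) maps back onto itself.
p = np.array([[1.0, 0.0, 0.0]])
frames = [unrotate(p, -i * np.pi / 2) for i in range(4)]  # simulated observations
cloud = merge_turntable_frames(frames, rpm=15.0, fps=1.0)
print(np.allclose(cloud, np.tile(p, (4, 1))))  # True
```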



RyanP

Nov 19, 2010, 10:31:13 AM
to MakerBot Operators
I'd think that you could make a rotating scanner stand out of a
stepper motor and then decode the rotation out of the time-stamped data.
If resolution is really a problem, you could take multiple positions
with known offsets using a MakerBot build platform.




Linkreincarnate

Nov 19, 2010, 2:57:36 PM
to MakerBot Operators
Yeah, they did that on OpenKinect today! It exports a point cloud as an
OBJ file in Blender.
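The OBJ export is easy to replicate: a point cloud can be written as a vertex-only OBJ (`v x y z` lines), which Blender and MeshLab import directly. This is a generic sketch of the file format, not the OpenKinect tool's actual code:

```python
def write_obj_points(path, points):
    """Write an iterable of (x, y, z) tuples as a vertex-only OBJ file.

    A file containing only 'v' lines is a valid OBJ and imports as a
    point cloud (no faces).
    """
    with open(path, "w") as f:
        for x, y, z in points:
            f.write(f"v {x:.6f} {y:.6f} {z:.6f}\n")

write_obj_points("cloud.obj", [(0.0, 0.0, 2.0), (0.1, 0.0, 2.0)])
# cloud.obj now holds two 'v' lines, one per point.
```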

beverageexpert

Nov 19, 2010, 4:49:58 PM
to MakerBot Operators
There is a thread on this in the Structured Light group:
http://groups.google.com/group/structured-light

bryan

smonkey

Nov 20, 2010, 1:34:00 AM
to MakerBot Operators
Also, some lovely videos on YouTube:

http://www.youtube.com/watch?v=ke9r70QpqQg

Nice point cloud - a bit low-res for our purposes, but it could be
layered up for goodness!


James McCracken

Nov 20, 2010, 8:48:06 AM
to make...@googlegroups.com
Keep in mind the Kinect is designed for a 6-8 ft range. It'll scan
smaller objects closer, but at extremely close ranges you're likely to
get some parallax.

Other than that, yes, I think it's a great idea. I have some
background in computer vision processing... the real richness here is
that every frame has full depth data, so you can do rich interpolation
over time, as others have pointed out. The individual scans are
pretty messy, though; you have to clean them up...

There's actually a well-explored software method to do the clean-up:
the Kalman filter. Its purpose is state estimation in control systems
with noisy sensors: it takes multiple readings over time and uses the
reading history, along with a model of the sensor error, to estimate
the actual values. It is self-correcting and produces progressively
better estimates of environmental factors over time. In this case,
you could run the Kalman filter until the point cloud settles, then
rotate a turntable (or shift the object) and repeat. There are 3D
Kalman methods, part of the extended Kalman filter family, that take a
pose estimate into account and could actually work with a continuously
rotating turntable.

Kalman filters are very easy to run on image and depth data in OpenCV
(I use Emgu CV, one of the .NET wrappers for OpenCV). Tuning them is
somewhat of a black art, but you can get decent results with an
empirical approach...

I'm very dyslexic; every time I try to code up 3D point-cloud methods
I make so many sign and axis-reversal errors that it's a nightmare to
debug... I could code up the Kalman filter if someone was interested
in doing the extra legwork to turn its output into a point cloud...
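The idea is easy to sketch: run a scalar Kalman filter independently per pixel over a stack of depth frames, assuming a (nearly) static scene. A minimal NumPy sketch rather than OpenCV/Emgu, with illustrative noise figures that are not measured Kinect specs:

```python
import numpy as np

def kalman_depth(frames, meas_var=1e-4, process_var=1e-8):
    """Per-pixel scalar Kalman filter over a stack of depth frames.

    Models each pixel's true depth as nearly constant; meas_var and
    process_var are illustrative, not measured sensor characteristics.
    """
    est = frames[0].astype(float)            # initial state estimate
    var = np.full_like(est, 1.0)             # initial estimate variance
    for z in frames[1:]:
        var = var + process_var              # predict: uncertainty grows
        gain = var / (var + meas_var)        # Kalman gain
        est = est + gain * (z - est)         # update with the new reading
        var = (1.0 - gain) * var
    return est

# Readings alternating 10 cm above/below a 2 m wall settle near 2.0 m.
frames = [np.full((2, 2), 2.0 + (0.1 if i % 2 else -0.1)) for i in range(50)]
smoothed = kalman_depth(frames)
print(np.abs(smoothed - 2.0).max() < 0.01)  # True
```

With a tiny process variance this behaves like a recursive average, which is exactly the "settling" behaviour described above.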

Widget

Nov 20, 2010, 11:17:01 AM
to MakerBot Operators
I saw a great post with the Kinect on a Roomba:
http://www.wired.com/gadgetlab/2010/11/gesture-controlled-3d-mapping-robot-just-add-kinect/
OpenSLAM seems to be quite a piece of work. Maybe if we combine the
Kinect with something that can track 3D positions in space, like a
Wiimote with MotionPlus, you can make a 3D scanning wand. It would be
awesome, but I can't afford a Kinect. :(
Linkreincarnate

Nov 21, 2010, 2:01:18 PM
to MakerBot Operators
There has also been talk of combining it with the Wiimote Plus for a
wand scanner.

James McCracken

Nov 22, 2010, 8:00:44 AM
to make...@googlegroups.com
Yeah, I don't know if it's the same post, but I saw someone do it with
ROS on an iRobot Create (the name of Roomba's development platform -
basically a Roomba without any of the pesky cleaning components).

ROS is open source and free; you could conceivably build the exact
same software stack for any roving robot + camera. OpenSLAM depends
on semi-accurate navigation coordinates, so you'd need a platform with
a differential drive, wheel encoders, that sort of thing. I have a
Rovio and it works pretty well, except there's no USB port to connect
it to the Kinect (I suspect the iRobot Create build is much the same).

In terms of cost, the MakerScanner is relatively cheap, and I think
it'll have comparable accuracy to a Kinect-based scanner.

Markus Vervier

Nov 22, 2010, 9:41:54 AM
to make...@googlegroups.com

Okay, somebody did it and built a working system - he even printed something on a Cupcake!

http://www.youtube.com/watch?v=j1w7R__98_Y

Nice.

RyanP

Nov 22, 2010, 11:22:00 AM
to MakerBot Operators
Impressive. So each "frame" is over 200K vertices? That's an insane
amount of data.

I really want to see a full 3D object instead of a front view, but
that's my problem since I have no time for such a project. I think
you could do it with a compass and some string, but it's probably not
that easy. A gravity level will give you pitch, but my iPhone
doesn't give very precise compass readings for orientation.

James McCracken

Nov 23, 2010, 7:39:42 AM
to make...@googlegroups.com
That's actually one of the central problems in AI and CV right now -
figuring out your absolute bearing is a particularly tricky endeavor.
Doing it accurately requires gyros, accelerometers, GPS, and a
magnetometer-based digital compass (a magnetometer is kind of like an
accelerometer for the magnetic field - and like accelerometers, you
need x, y, and z axes).

That's one of the reasons SLAM is so cool: it figures out a fairly
good local bearing without worrying about referencing it to an
external field, and it does it visually...

Trev

Dec 13, 2010, 2:05:26 PM
to MakerBot Operators
Why not pair it with the Wiimote or PS3 Move systems for absolute
bearing? The Kinect can scan for 3D data, and the other system can
keep track of the position of the Kinect itself.


RyanP

Dec 13, 2010, 2:39:43 PM
to make...@googlegroups.com
The Wiimote works with an IR sensor, so it may be incompatible with the Kinect, but I think using a visual reference is the best bet, and it could be done as a post-process (e.g. against a fixed background marked for this purpose). Accelerometer data is notorious for errors, though I haven't looked at it lately. A 1/10-degree error doesn't matter when you are trying to move an on-screen sword, but when you start pointing a camera, that's a completely unacceptable error.

James McCracken

Dec 14, 2010, 7:57:51 AM
to make...@googlegroups.com
The accelerometer + gyro combo in the Wiimote MotionPlus is good
enough that you can use it for dead reckoning - at least a lot of
games do... but overall, I wouldn't want to have to trust it...

A turntable with a sticker around its rim works just as well. All you
need is two strips of alternating black and white with a 90-degree
phase shift. As long as the rotation per frame is less than half a
strip, you can pull out the rotation as a post-process. The algorithm
is the same one the old ball mice used to read their optical encoders.
If you want absolute positioning, you can either add a third strip
with a timing mark or use a more elaborate Gray-coding scheme - the
same way n-position rotary optical encoders work. Gray coding is like
writing the binary index of each position on the edge of the
turntable, but re-ordered so that only one bit changes at a time; if
your algorithm knows that's the maximum difference, it can decode the
position much more accurately.
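The Gray-code property described above is easy to demonstrate with the standard encode/decode formulas (generic Gray code, not tied to any particular encoder hardware):

```python
def gray_encode(n):
    """Binary position index -> Gray code (adjacent codes differ in one bit)."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Gray code -> binary index, by XOR-folding the high bits down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# A 16-position encoder ring: every step, including the wrap-around
# from position 15 back to 0, changes exactly one bit.
codes = [gray_encode(i) for i in range(16)]
for a, b in zip(codes, codes[1:] + codes[:1]):
    assert bin(a ^ b).count("1") == 1
assert [gray_decode(c) for c in codes] == list(range(16))
```

The single-bit-change guarantee is what makes the decoder robust: a frame captured mid-transition can be off by at most one position.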

I don't really want to move the Kinect around, except with the
built-in motor. There's an accelerometer inside as well - basically
you can use it to find its tilt. Panning up and down should give you
lots of samples for a particular position.
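Reading tilt from an accelerometer amounts to projecting the gravity vector. A sketch under an assumed axis convention (x right, y forward, z up - illustrative, not the Kinect's documented frame or SDK API):

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll in degrees from a static accelerometer reading.

    Assumes gravity is the only acceleration acting on the sensor.
    """
    pitch = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    roll = math.degrees(math.atan2(-ax, az))
    return pitch, roll

# Sensor pitched up 30 degrees: gravity projects as (0, sin 30, cos 30).
pitch, roll = tilt_from_accel(0.0, 0.5, math.sqrt(3) / 2)
print(round(pitch, 1))  # 30.0
```

Note this only gives pitch and roll; yaw (heading) is invisible to an accelerometer, which is why the earlier posts reach for compasses and SLAM.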
