http://forums.reprap.org/read.php?138,66271
I proposed this on reprap forum. Seems people aren't sure its resolution is enough for it to work very well.
Hope they're wrong :)
Other than that, yes, it's a great idea I think. I have some
background in computer vision processing... the real richness here is
that every frame has full depth data, so you can do rich interpolation
over time, as others have pointed out. The individual scans are
pretty messy though, you have to clean them up...
There's actually already a well-explored software method to do the
clean up: it's called the Kalman filter. Basically, its stated
purpose is state estimation from noisy sensors: it takes multiple
readings over time and uses the reading history, along with a model
of the sensor's error characteristics, to estimate the true values.
It is self-correcting over time and produces progressively better
estimates of the environment. In this case, you could run the Kalman
filter until the point cloud settles, then rotate the turntable (or
move the object) and repeat. There are 3D Kalman variants, part of
the Extended Kalman filter family, that take a pose estimate into
account and could actually work with a continuously rotating
turntable.
Kalman filters are very easy to run on image and depth data in OpenCV
(I use Emgu CV, one of the .NET wrappers for OpenCV). Tuning them is
somewhat of a black art, but you can get decent results by tuning
empirically...
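To make the settle-then-rotate idea concrete, here's a minimal scalar
Kalman filter fusing repeated depth readings of a single pixel in a
static scene. This is a sketch in plain Python, not OpenCV, and the
variance values are illustrative placeholders, not tuned for the Kinect:

```python
# Minimal per-pixel scalar Kalman filter for repeated depth readings
# of a static scene. Variances below are illustrative, not tuned.

def kalman_depth(readings, process_var=1e-4, sensor_var=4.0):
    """Fuse noisy depth readings (mm) of one pixel into one estimate."""
    x = readings[0]       # state estimate: depth at this pixel
    p = sensor_var        # variance of the estimate
    for z in readings[1:]:
        p += process_var              # predict: static scene, tiny drift
        k = p / (p + sensor_var)      # Kalman gain
        x += k * (z - x)              # correct with the new measurement
        p *= (1.0 - k)                # uncertainty shrinks each frame
    return x

# Five noisy readings of the same point settle toward ~810.8 mm:
print(round(kalman_depth([812, 808, 815, 810, 809]), 1))
```

As more frames arrive, the gain `k` falls and each new (noisy) frame
nudges the estimate less - which is exactly the "point cloud settles"
behavior described above.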
I'm very dyslexic; every time I try to code up 3D point cloud methods
I make so many sign and axis-reversal errors that it's a nightmare to
debug... I could code up the Kalman filter if someone was interested
in doing the extra legwork to turn it into a point cloud...
ROS is open source and free, so you could conceivably build the exact
same software stack for any roving robot + camera. OpenSLAM depends
on semi-accurate navigation coordinates, so you'd need a platform with
a differential drive, wheel encoders, that sort of thing. I have a
Rovio and it works pretty well, except there's no USB port to connect
it to the Kinect (I suspect the iRobot Create build is much the same).
In terms of cost, the MakerScanner is relatively cheap, and I think
it'll have comparable accuracy to a Kinect-based scanner.
That's one of the reasons for SLAM's coolness - it figures out a
fairly good local bearing without worrying about referencing it to an
external field, and it does it visually...
A turntable with a sticker around it works just as well. All you need
is two strips of alternating black and white bands with a 90-degree
phase shift. As long as the table rotates less than 1/2 strip per
frame, you can pull out the rotation in post-processing. The algorithm
is the same one old ball mice used to read their optical encoders. If
you want absolute positioning, you can either add a third strip with a
timing mark or use a more complicated Gray-coding scheme - the same
way n-position rotary optical encoders work. Gray coding is kind of
like writing the binary of each position on the edge of the turntable,
but re-ordered so that only one bit changes at a time - if your
algorithm knows that's the maximum difference per step, it can decode
the position much more reliably.
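The two-strip decoding really is only a few lines. Here's a hypothetical
Python sketch (not tied to any particular vision pipeline): each sample
is the thresholded (A, B) black/white pair read from the two strips, and
a Gray-to-binary helper covers the absolute-position variant:

```python
# Quadrature decoding for the two-strip turntable sticker. Each sample
# is (A, B), the thresholded reading of each strip. As in a ball mouse's
# encoder, at most one strip may change between consecutive samples.

# The four (A, B) states in forward rotation order -- a 2-bit Gray code.
_SEQ = [(0, 0), (0, 1), (1, 1), (1, 0)]

def decode_quadrature(samples):
    """Return the net step count; the sign gives the direction."""
    pos = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev == cur:
            continue                  # no movement between frames
        d = (_SEQ.index(cur) - _SEQ.index(prev)) % 4
        if d == 1:
            pos += 1                  # one step forward
        elif d == 3:
            pos -= 1                  # one step backward
        else:
            raise ValueError("missed a transition: rotated > 1/2 strip")
    return pos

def gray_to_binary(g):
    """Decode an n-bit Gray-coded position into a plain binary index."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

# One full forward cycle of the pattern is +4 steps:
print(decode_quadrature([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # 4
```

The `d == 2` case is the "rotated more than 1/2 strip between frames"
failure mode mentioned above: both bits flipped, so the direction is
ambiguous.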
I don't really want to move the Kinect around, except with the
built-in motor. There's an accelerometer inside as well - basically
you can use it to find out its tilt. Panning up and down should give
you lots of samples for a particular position.
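For what it's worth, reading tilt off the accelerometer is just the
angle between the measured gravity vector and the sensor's "down" axis.
A minimal sketch, assuming gravity reads along -y when the sensor is
level (axis conventions vary by device, so treat this as illustrative
rather than the Kinect's actual frame):

```python
import math

def tilt_deg(ax, ay, az):
    """Tilt angle (degrees) of a stationary 3-axis accelerometer,
    assuming gravity points along -y when the sensor is level."""
    g = math.sqrt(ax * ax + ay * ay + az * az)  # magnitude of gravity
    return math.degrees(math.acos(-ay / g))     # angle from "down"

print(round(tilt_deg(0.0, -9.81, 0.0)))  # 0   (level)
print(round(tilt_deg(0.0, 0.0, -9.81)))  # 90  (tilted 90 degrees)
```

This only works while the sensor is stationary - any motion adds its
own acceleration to the gravity vector - which fits the pan-and-hold
sampling described above.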