modify kinect for short-range


John Schulman

Oct 28, 2011, 2:36:11 PM10/28/11
to OpenNI
I'm a robotics researcher, and I've been using the kinect. I'd like to
modify the kinect to work at shorter range, to obtain higher-
resolution images of a nearby object. It seems to me like the
following modification would work: remove the casing and move the IR
sensor closer to the IR projector.

1. would this work?
2. has anyone tried this with a kinect?
3. does Primesense (or some company) plan to release a sensor that
would work at shorter range?

Thanks,
John

Murilo Saraiva de Queiroz

Oct 28, 2011, 2:39:26 PM10/28/11
to openn...@googlegroups.com
Nyko produces a set of "zoom" lenses for using Kinect in smaller rooms. Perhaps a similar optical solution could be devised?





Joshua Blake

Oct 28, 2011, 3:02:31 PM10/28/11
to John Schulman, OpenNI
If you change the configuration of the cameras and IR projector then
you'll need to do a new factory calibration, which no one in the
public knows how to do yet.

As far as I know, no one has announced plans for a shorter range sensor.


John Schulman

Oct 28, 2011, 6:53:32 PM10/28/11
to OpenNI


On Oct 28, 12:02 pm, Joshua Blake <joshbl...@gmail.com> wrote:
> If you change the configuration of the cameras and IR projector then
> you'll need to do a new factory calibration, which no one in the
> public knows how to do yet.

Why would you need a factory calibration? Based on my understanding of
the block matching and triangulation procedure, if the projector is
half as far from the sensor, then the geometry is the same, except all
lengths are divided by two.
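The scaling argument can be sketched as a quick calculation (the focal length, baseline, and disparity values below are hypothetical, chosen only for illustration, not actual Kinect calibration values):

```python
# Pinhole triangulation: depth z = f * b / d, where f is the focal
# length in pixels, b the projector-to-camera baseline, and d the
# measured disparity in pixels.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

f = 580.0   # hypothetical focal length (pixels)
b = 0.075   # hypothetical baseline (meters)
d = 20.0    # some measured disparity (pixels)

z_full = depth_from_disparity(f, b, d)      # stock geometry
z_half = depth_from_disparity(f, b / 2, d)  # baseline halved

# The same disparity now corresponds to half the depth, i.e. the
# whole working range scales down by a factor of two.
assert abs(z_half - z_full / 2) < 1e-12
```

The catch, of course, is that if the on-board firmware bakes the stock baseline into its disparity-to-depth mapping, this clean scaling only holds once that mapping is updated -- which is what the factory-calibration question comes down to.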

Arthur Dam

Oct 28, 2011, 8:42:55 PM10/28/11
to openn...@googlegroups.com
I'm not too sure about this, but RGBDemo has a calibration binary. I haven't looked into the code yet, but there might be a focal-length estimation algorithm in there (which would essentially get it sorted once you've figured out how to move the sensor)... you'll have to take a look or check with the RGBDemo mailing list.

Joshua Blake

Oct 28, 2011, 10:27:06 PM10/28/11
to openn...@googlegroups.com
On Fri, Oct 28, 2011 at 6:53 PM, John Schulman <john.d....@gmail.com> wrote:

On Oct 28, 12:02 pm, Joshua Blake <joshbl...@gmail.com> wrote:
> If you change the configuration of the cameras and IR projector then
> you'll need to do a new factory calibration, which no one in the
> public knows how to do yet.

Why would you need a factory calibration? Based on my understanding of
the block matching and triangulation procedure, if the projector is
half as far from the sensor, then the geometry is the same, except all
lengths are divided by two.
 
Because that's how it works. :)
 
1) Mike Harrison disassembled his Kinect in the early OpenKinect days and posted tons of great info on the electronics to the OpenKinect list (and some on the OpenKinect.org wiki). He noted on Nov 18 2010 in this thread:
 
Another thing I noticed was the relative positions of the illuminator and sensor are very sensitive to change, which may explain the metal frame - even a slight bend of the frame makes a noticeable difference to the depth image.
 
2) Mike later wrote on Nov 26 2010 in this thread (http://groups.google.com/group/openkinect/msg/045af6b931eb1562):
 

Slackening the screws on the illuminator, and rotating it slightly, just the amount that the screws allow within their holes, progressively narrows the depth image field of view from full to a narrow vertical strip about 10% of the normal width.

Slackening some more and panning left/right shifts the depth values without noticeably affecting the FOV or the geometry.
Panning up/down shifts the FOV left/right - not the image, just the part of the image that remains visible after rotating as above.
 
The amount by which even small movements affect the image suggests that some post-assembly calibration would be necessary. It also shows that a rigid metal mounting plate is essential in maintaining good alignment between illuminator and sensor.

3) An IRC conversation with one of the PrimeSense engineers (which unfortunately I'm not authorized to paste) confirms that to change the effective depth range of the sensor you would change the distance between the IR camera and the projector. He says if you do this with a Kinect you'll just stop seeing depth, but if you then redo the factory calibration process you'll get a different depth range.
 
Unfortunately we have not yet figured out what that calibration process involves or what software commands are required to read/write the calibration used by the embedded chip to go from the speckle pattern to depth values.
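For reference, the shape of that speckle-to-depth mapping shows up in the community-derived disparity-to-depth fit circulated by the OpenKinect project (the two constants below are the commonly quoted values for a stock Kinect; they encode the factory geometry, which is exactly what a moved projector would invalidate):

```python
# Community-derived fit (OpenKinect project) from the Kinect's raw
# 11-bit disparity value to metric depth, valid only for the stock
# factory geometry.
def raw_to_depth_m(raw_disparity):
    return 1.0 / (raw_disparity * -0.0030711016 + 3.3309495161)

# Depth grows nonlinearly with the raw value; the two constants bake
# in the factory baseline and focal length, so shortening the
# baseline without rewriting them would yield wrong (or no) depth.
```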
 
Hope that helps!
Josh

John Schulman

Oct 28, 2011, 11:09:43 PM10/28/11
to OpenNI
Thanks Josh, that's extremely informative.


barry....@primesense.com

Oct 29, 2011, 7:50:04 PM10/29/11
to OpenNI
I can't comment on the calibration procedure (sorry), but I'll throw
two cautions out there for people taking these things apart:

1) Any physical shift of the various optical components relative to
each other, or damage to the thermal connection between those
components and the metal body, is very likely to brick your sensor.
These things are much more sensitive inside than your average
consumer electronics item.

2) Eye safety testing was done with the internal laser module
intact... if you manage to expose the actual laser diode, treat it
with the same respect you would a consumer laser pointer -- the
wavelength is at the extreme end of your visual range, so the dot
will appear much dimmer than it actually is.

Regards,
Barry Gackle
Field Apps Engineer, North America
PrimeSense

Lorne Covington

Oct 30, 2011, 12:26:55 AM10/30/11
to openn...@googlegroups.com

I have the Nyko "Zoom", actually a wide-angle lens set.  It reduces the minimum depth on my Kinects from 19" to 14".  This comes at a price, namely some very strong vignetting.  It also reduces the strength of the laser by quite a bit, so it is much more sensitive to ambient IR-producing light such as regular tungsten light bulbs.  But even considering that, it does get you a wider usable field of view and somewhat closer working distance.

The depth shift I see is about 0.75, meaning that what it sees as 1 meter with the Nyko is actually 0.75 meters. So their 40% claim is, well, stretching it.

Also, since the field of view (FoV) is different, any computations to go from projected space to real-world XYZ need to be modified.  I haven't figured out the actual numbers yet (just using a rough guess for testing) but will post when I do.
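As a starting point, the projective-to-world math can be sketched like this (the stock FoV numbers below are the commonly quoted ~57 x 43 degrees, the 0.75 depth scale is the rough empirical value from my tests, and the widened FoV with the lens still has to be measured):

```python
import math

def project_to_world(px, py, depth_m, width=640, height=480,
                     fov_h_deg=57.0, fov_v_deg=43.0, depth_scale=0.75):
    """Map a depth pixel (px, py, depth) to real-world XYZ in meters."""
    z = depth_m * depth_scale                 # correct the reported depth
    # Focal lengths in pixels, derived from the field of view.
    fx = (width / 2) / math.tan(math.radians(fov_h_deg) / 2)
    fy = (height / 2) / math.tan(math.radians(fov_v_deg) / 2)
    x = (px - width / 2) * z / fx             # offset from optical axis
    y = (py - height / 2) * z / fy
    return x, y, z

# A pixel at the image center lies on the optical axis:
# project_to_world(320, 240, 1.0) == (0.0, 0.0, 0.75)
```

With the Nyko lens attached, fov_h_deg and fov_v_deg would need the measured widened values rather than the stock ones.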

Overall it works OK for my application, which is an overhead view looking down in a controlled lighting situation, as it gives me more area coverage given a fixed ceiling height.

Ciao!