High Resolution Depth Image


mankoff

Aug 25, 2011, 10:08:31 AM
to OpenKinect
Hi,

I've run other samples and re-compiled examples to access the high
resolution (1280x1024) RGB and IR images. However, I'm interested in
the depth image, for example, the one dumped by the 'record' program.
Is it possible to get this image at high res? Or is the IR image ->
depth image conversion done onboard somewhere, still unhacked, and
only returns the 640x480 image?

Thanks,

-k.

Vahag

Aug 25, 2011, 1:59:08 PM
to OpenKinect
Hello.

As far as I know, there is no way to get a depth image at a
resolution higher than 640x480.
According to Microsoft, the Kinect can generate a depth image at a
higher resolution, but the USB cable used with
the Kinect cannot carry that much data.

Vahag

Neeraj Kulkarni

Aug 25, 2011, 2:52:01 PM
to openk...@googlegroups.com
Mankoff,

Can you please elaborate on "I've run others samples and re-compiled examples to access the high resolution (1280x1024) RGB and IR images" ?

Thanks,
Neeraj
--
Best regards,
Neeraj Kulkarni
Masters student
Indian Institute of Technology Delhi
http://www.cse.iitd.ernet.in/~mcs103467/


drew.m...@gmail.com

Aug 25, 2011, 3:28:25 PM
to openk...@googlegroups.com
All right, I'll try to address each of your posts.

On Thu, Aug 25, 2011 at 11:52 AM, Neeraj Kulkarni <onlyn...@gmail.com> wrote:
> Mankoff,
> Can you please elaborate on "I've run others samples and re-compiled
> examples to access the high resolution (1280x1024) RGB and IR images" ?

We are able to stream the full-resolution RGB and IR images at around
10FPS; see examples/hiview.c. If you wanted to empirically determine
the calibration image and reimplement what the SoC does, theoretically
you could do so, but you'd hit the same limitation.

> On Thu, Aug 25, 2011 at 11:29 PM, Vahag <vahagna...@gmail.com> wrote:
>> As I know there is not any possibility to get depth image at higher
>> resolution than 640x480.
>> According to  Microsoft, Kinect can generate depth image at higher
>> resolution, but USB cable used with
>> Kinect can not process such amount of information.

I'm curious as to your source where Microsoft said that the depth
image could be obtained in higher resolution than 640x480. To my
knowledge, there is no higher-resolution depth image than 640x480.
The OpenNI drivers do not suggest otherwise. Lower resolutions at
higher framerates may exist, but I haven't gotten any of them to work
correctly, so they may have trimmed them from the firmware.

Even if you were to calculate it yourself, like in [1], you'd probably
only get up to 640x512. In addition, there aren't that many dots in
the IR pattern, so even if you were to produce a higher resolution
image, you'd probably have to do so by interpolating between the same
points, which wouldn't actually give you more data.

>> On Aug 25, 7:08 pm, mankoff <mank...@gmail.com> wrote:
>> > Hi,
>> >
>> > I've run others samples and re-compiled examples to access the high
>> > resolution (1280x1024) RGB and IR images. However, I'm interested in
>> > the depth image, for example, the one dumped by the 'record' program.
>> > Is it possible to get this image at high res? Or is the IR image ->
>> > depth image conversion done onboard somewhere, still unhacked, and
>> > only returns the 640x480 image?

The depth image is computed on the Kinect itself from the difference
between the 1280x1024 IR image and a reference image stored somewhere
in the Kinect's flash (that we do not know how to access). Since the
depth image is computed from the horizontal shift of a particular dot
pattern, I'm under the impression that it necessarily must be at a
lower resolution than the IR image it is computed from. The ROS folks
have a pretty detailed analysis of it at [1].

Hope that clears things up a bit.

-Drew

[1] - http://www.ros.org/wiki/kinect_calibration/technical

mankoff

Aug 25, 2011, 3:53:29 PM
to OpenKinect


On Aug 25, 2:52 pm, Neeraj Kulkarni <onlynee...@gmail.com> wrote:
> Mankoff,
>
> Can you please elaborate on "I've run others samples and re-compiled
> examples to access the high resolution (1280x1024) RGB and IR images" ?

See hiview.c; I modified record.c too.

-k.

mankoff

Aug 25, 2011, 3:54:28 PM
to OpenKinect
Hi Drew,

On Aug 25, 3:28 pm, "drew.m.fis...@gmail.com"
<drew.m.fis...@gmail.com> wrote:
>
> >> On Aug 25, 7:08 pm, mankoff <mank...@gmail.com> wrote:
> >> > Hi,
>
> >> > I've run others samples and re-compiled examples to access the high
> >> > resolution (1280x1024) RGB and IR images. However, I'm interested in
> >> > the depth image, for example, the one dumped by the 'record' program.
> >> > Is it possible to get this image at high res? Or is the IR image ->
> >> > depth image conversion done onboard somewhere, still unhacked, and
> >> > only returns the 640x480 image?
>
> The depth image is computed on the Kinect itself from the difference
> between the 1280x1024 IR image and a reference image stored somewhere
> in the Kinect's flash (that we do not know how to access).  Since the
> depth image is computed from the horizontal shift of a particular dot
> pattern, I'm under the impression that it necessarily must be at a
> lower resolution than the IR image it is computed from.  The ROS folks
> have a pretty detailed analysis of it at [1].
>
> [1] -http://www.ros.org/wiki/kinect_calibration/technical

Yes, I've read the calibration document. Thanks for the reply. So for
now, and maybe forever, or at least until Kinect 2.0, I'll just work
with everything at 640x480.

-k.

genbattle

Aug 25, 2011, 5:40:26 PM
to OpenKinect
Because of the way the Primesense depth processor works, it must have
an input image which is much larger than the resolution of the depth
output in order to accurately estimate depth from the infrared dot
grid. For this reason, if you ask for a depth stream of more than
640x480, it reverts to sending the raw IR information rather than the
processed depth stream.

As someone else said, the depth conversion is done onboard, and can
only return a 640x480 image. The Primesense reference design is the
same; it won't output a depth stream higher than 640x480.

Mohamed Ikbel Boulabiar

Aug 26, 2011, 4:53:59 AM
to openk...@googlegroups.com
Hi,


On Thu, Aug 25, 2011 at 11:40 PM, genbattle <gen.b...@gmail.com> wrote:
> Because of the way the Primesense depth processor works, it must have
> an input image which is much larger than the resolution of the depth
> output in order to accurately estimate depth from the infrared dot
> grid. For this reason, if you ask for a depth stream of more than
> 640x480, it reverts to sending the raw IR information rather than the
> processed depth stream.
>
> As someone else said, the depth conversion is done onboard, and can
> only return a 640x480 image. The Primesense reference design is the
> same, it won't output a depth stream higher than 640x480.

Is it possible to get only the 1280x1024 depth information (@~30Hz) without the RGB camera? (I mean, in that case, does the USB2 transfer rate become sufficient?)

Then, someone could apply other algorithms, like the one used in KinectFusion, to build a scene with their own estimates, etc.

i

drew.m...@gmail.com

Aug 26, 2011, 5:06:14 AM
to openk...@googlegroups.com
On Fri, Aug 26, 2011 at 1:53 AM, Mohamed Ikbel Boulabiar
<boul...@gmail.com> wrote:
> Hi,

>
> Is it possible to only get the 1280x1024 depth information (@~30Hz) without
> the RGB Camera ? (I mean, in that case, does the USB2 transfer rate become
> sufficient ?)

No, for two reasons:

1) There is no such thing as 1280x1024 depth information. There is
640x480@30Hz depth information. There is 1280x1024@10Hz IR
information.

2) The two isochronous streams are for different USB endpoints. The
bandwidth allotted for each endpoint is statically declared, so
avoiding the use of one does not increase the bandwidth available to
the other.

> Then, someone can apply other algorithms like the one used in KinectFusion
> to build a scene and with own estimates etc.

I would think that the KinectFusion work uses the same 640x480@30Hz
depth image that is available to everyone else. You might find the
Point Cloud Library [1] interesting.

Best,
Drew

[1] - http://pointclouds.org/

Murilo Saraiva de Queiroz

Aug 26, 2011, 10:49:52 AM
to openk...@googlegroups.com
On Fri, Aug 26, 2011 at 6:06 AM, drew.m...@gmail.com <drew.m...@gmail.com> wrote:
>> Then, someone can apply other algorithms like the one used in KinectFusion
>> to build a scene and with own estimates etc.
>
> I would think that the KinectFusion work uses the same 640x480@30Hz
> depth image that is available to everyone else.

You're right. Their high-resolution result is obtained by progressively improving the mesh, using the multiple samples captured as the Kinect is moved.

Look for "superresolution" (or "super-resolution") for more details. Here's an interesting Kinect-based example:

3D Reconstruction with Kinect
http://www.youtube.com/watch?v=YH58u_057Ac

