Kinect IR sensor resolution


Florian Echtler

Nov 17, 2010, 1:42:39 PM
to openk...@googlegroups.com
Hello again,

one thing just crossed my mind: what's the resolution of the Kinect IR
sensor? In order for it to make sense of the dot pattern, it needs
to see a "local neighborhood" around every depth pixel, correct? So the
actual image resolution should be at least on the order of full HD, or
am I missing something here?

Florian

Murilo Saraiva de Queiroz

Nov 17, 2010, 2:27:15 PM
to openk...@googlegroups.com
I wondered that, too. 

The PrimeSense documentation suggests that the raw IR image sensor in the reference design is 1600x1200: 

http://www.primesense.com/files/FMF_2.PDF

Murilo Q. 

--
Murilo Saraiva de Queiroz, MSc.
Senior Software Engineer 
http://www.vettalabs.com
http://www.tecnologiainteligente.com.br
http://www.acalantoemcasa.com.br

surfacant

Nov 17, 2010, 4:00:45 PM
to OpenKinect
From the PDF linked (http://www.primesense.com/files/FMF_2.PDF), it appears the Kinect IR sensor is 640x480 (VGA).

Murilo Saraiva de Queiroz

Nov 18, 2010, 7:49:43 AM
to openk...@googlegroups.com
The *output* depth map in the reference design is 640x480, but as Florian mentioned, in order to produce a 640x480 depth map you need a raw IR image with much higher resolution, in order to identify the pattern at each point (the projected IR dots aren't identical; their shape encodes some information). 

Since the PDF mentions a 1600x1200 resolution for the other (RGB) camera, it's possible that the IR camera captures the raw image at 1600x1200, post-processes it, and then outputs a lower-resolution depth map. 

Of course, that's mere speculation; I don't know how to confirm it. 

muriloq

Zsolt Ero

Nov 18, 2010, 9:28:39 AM
to openk...@googlegroups.com
For me it would make sense if:
1. the emitter had 640x480 resolution (I mean 640x480 small IR dots),
2. the sensor had about 2-3x that resolution (1600x1200 is about that), and
3. the depth image (640x480) contained a Z value for every single dot emitted. 

Can someone actually count the x and y resolution of the emitted image? If someone could take a really high-resolution IR photograph, it would be easy to count the dots in x and y. Or if someone had night-vision goggles...
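If someone does get such a photograph, the counting itself is easy to automate: threshold the image and count connected bright blobs. A pure-Python sketch on a made-up synthetic frame (a real 5-megapixel photo would want a proper image library and some tuning of the threshold):

```python
# Hypothetical sketch: count bright "dots" in a grayscale image by
# thresholding and flood-filling 4-connected components.

def count_dots(image, threshold):
    """Count 4-connected bright blobs in a 2D grayscale image (list of lists)."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    dots = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] > threshold and not seen[y][x]:
                dots += 1                # new blob found; flood-fill it
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and image[cy][cx] > threshold and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack.extend([(cy + 1, cx), (cy - 1, cx),
                                      (cy, cx + 1), (cy, cx - 1)])
    return dots

# Synthetic test frame: a 4x3 grid of single-pixel dots spaced 4 px apart.
frame = [[0] * 16 for _ in range(12)]
for gy in range(3):
    for gx in range(4):
        frame[2 + 4 * gy][2 + 4 * gx] = 255

print(count_dots(frame, 128))  # -> 12
```

On a real photo the same idea applies, just with a blur or morphological cleanup first to keep sensor noise from splitting dots.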

Here are the PrimeSense specs:
[attachment: primesense specs.gif]

Sebastian

Nov 18, 2010, 9:48:56 AM
to OpenKinect
You can find a lot of pictures here: http://www.futurepicture.org/


Joshua Blake

Nov 18, 2010, 10:57:23 AM
to openk...@googlegroups.com

I don't have the links (on my phone), but the iFixit teardown identified the CMOS sensors and you can look up the datasheets. Both the RGB and IR sensors list 1280x1024 at 15 fps. Since we're getting 30 fps, they are likely running at a lower resolution.

Probably not possible to switch camera modes now, but with custom firmware, yes.

It isn't necessary to detect every dot individually. Instead, it likely looks at the density of the dot field (as the IR brightness). The patent applications would detail the approach. (I haven't read them yet. :( )
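If the density idea is right, the measurement involved might look roughly like this sliding-window average (a pure-Python sketch; the window size and test image are made up, not anything from the patents):

```python
# Hypothetical sketch of the "density of the dot field" idea: instead of
# identifying each dot, score each pixel by the mean brightness of a
# window around it. Regions with more (or brighter) dots score higher.

def local_density(image, win):
    """Mean intensity in a win x win window centered on each interior pixel."""
    h, w = len(image), len(image[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            s = sum(image[yy][xx]
                    for yy in range(y - r, y + r + 1)
                    for xx in range(x - r, x + r + 1))
            out[y][x] = s / (win * win)
    return out

# Two regions with different dot densities: left half lit every 2nd pixel,
# right half lit every 4th pixel.
img = [[0] * 16 for _ in range(8)]
for y in range(0, 8, 2):
    for x in range(0, 8, 2):
        img[y][x] = 255
for y in range(0, 8, 4):
    for x in range(8, 16, 4):
        img[y][x] = 255

d = local_density(img, 5)
print(d[4][4] > d[4][12])  # denser left region scores higher -> True
```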

Daniel Reetz

Nov 18, 2010, 11:06:33 AM
to OpenKinect
Thanks, Sebastian.


Following up on the dot-counting question, I posted a bunch of 5-megapixel
shots of the sensor output, which should be just good enough to count the
dots with software, if you want. They just finished uploading.
http://www.futurepicture.org/?p=129

You might also be interested in this other post, which shows evidence
that the Kinect is not using random speckle patterns, as the patents
and papers suggest, but rather a very carefully crafted speckle.
http://www.futurepicture.org/?p=116

Also, I swear someone was discussing this, but I can't find it. As far
as I can tell, the projector output is not modulated, blinked, or
synced in any way; it's just on all the time:
http://www.futurepicture.org/?p=124

Adam Crow

Nov 18, 2010, 11:18:16 AM
to openk...@googlegroups.com
I was thinking about a solution for utilising multiple Kinects.

I have worked on 3D laser scanners in the past. To assist in creating
a scanning system that could work in bright daytime conditions
(outside the warehouse), we used a very expensive ($200) optical filter
that was centred on the IR laser diodes we used.

I would imagine that replacing the IR diode in each Kinect and applying a
different filter to each Kinect should isolate the viewed dot
patterns. Alternatively, synchronizing the beams in time would also work.

The use of multiple colour filters has helped 3D scanning achieve
better resolution in the past.

This group is amazing. Well done folks!

Adam

--
Adam Crow BEng (hons) MEngSc MIEEE
Technical Director
DC123 Pty Ltd
Suite 10, Level 2
13 Corporate Drive
HEATHERTON VIC 3202
http://dc123.com
phone: 1300 88 04 07
fax: +61 3  9923 6590
int: +61 3 8689 9798

Zsolt Ero

Nov 18, 2010, 11:47:48 AM
to openk...@googlegroups.com
> It isn't necessary to detect every dot individually. Instead, it likely looks at the density of the dot field (as the IR brightness).

The density of the dot field only varies from our external viewpoint! If
the camera is close to the emitter, then it doesn't matter whether the
object is 1 meter or 5 meters away; it always sees almost the
original pattern. I think the big thing in PrimeSense is the
processor, which essentially:
0. does a camera calibration using the checkerboard pattern (plus the
central brighter dots),
1. recognises each individual dot,
2. calculates depth information for each dot, using the same
technique as all laser range-finders, and
3. does it all in parallel for all dots, in real time.

But it's just my theory. I would be happy if someone could tell us more.
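Step 2 in the list above boils down to the usual triangulation relation Z = f * b / d. A minimal sketch, where the focal length and baseline are illustrative guesses, not measured Kinect parameters:

```python
# Hedged sketch of depth-per-dot by triangulation: once a projected dot
# is matched, its horizontal shift (disparity) between expected and
# observed image position gives depth, same as a stereo pair.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Z = f * b / d: depth in meters from disparity in pixels."""
    return focal_px * baseline_m / disparity_px

f = 580.0   # assumed focal length in pixels (illustrative)
b = 0.075   # assumed emitter-to-camera baseline in meters (illustrative)

print(round(depth_from_disparity(f, b, 43.5), 3))  # -> 1.0 (meters)
print(round(depth_from_disparity(f, b, 8.7), 3))   # -> 5.0 (meters)
```

Note the inverse relation: far objects produce small disparities, which is why depth precision degrades with distance.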

Adam Crow

Nov 18, 2010, 12:28:52 PM
to openk...@googlegroups.com
Sounds good.

I'd guess a few other things...

Intensity thresholding to find camera calibration points.
A Markov chain from each point to determine the surrounding points.

Assumptions would be that the scanned object is a concave surface
(such as most scanned body parts). Different clothing materials would
provide a challenge. Intensities in each region would be roughly in
proportion to each other.

Adam

--

ismael....@gmail.com

Nov 19, 2010, 8:13:44 AM
to OpenKinect
Hi all, I was playing with the images in order to get the points, and
here is a more or less clean image:
https://docs.google.com/leaf?id=0B1LuyyMoFbJqMzRlZDc0NTAtMjhmYi00OTdhLTk4MzgtMGUzYmE4YTU3NmY4&hl=en&authkey=CP7MjfsP

For those who are interested, these were the steps in Fiji:

1. Convert to grayscale.
2. Apply a morphological open (radius 10) and subtract it (this is known
as a white top-hat). This removes the background.
3. Threshold the image to make it binary.
4. Apply a morphological open (radius 1) to remove noise. (A median
filter joined nearby points :-( )

It is not perfect, but the majority of the points are there.
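For those without Fiji, the top-hat-plus-threshold steps could be approximated in plain Python like this (a square structuring element instead of Fiji's disk, a scaled-down radius, and a tiny synthetic frame in place of the real photo):

```python
# Rough pure-Python translation of the Fiji recipe: white top-hat
# (image minus its morphological opening) kills the smooth background
# and keeps small bright features, then a threshold makes it binary.

def _filter(img, r, op):
    """Apply op (min = erode, max = dilate) over a (2r+1)^2 neighborhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = op(vals)
    return out

def white_tophat(img, r):
    """Image minus its opening: background removed, small dots kept."""
    opened = _filter(_filter(img, r, min), r, max)   # erode, then dilate
    return [[p - o for p, o in zip(pr, orow)] for pr, orow in zip(img, opened)]

def threshold(img, t):
    return [[1 if p > t else 0 for p in row] for row in img]

# Synthetic frame: a smooth bright gradient (the "background") plus one dot.
frame = [[10 + 5 * x for x in range(9)] for _ in range(9)]
frame[4][4] += 200                       # the "dot"

binary = threshold(white_tophat(frame, 2), 100)
print(sum(map(sum, binary)))  # only the dot survives -> 1
```

The final radius-1 opening from step 4 would drop single-pixel noise the same way, via `_filter` with `min` then `max` at r=1.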

See you!


Zsolt Ero

Nov 19, 2010, 1:10:55 PM
to openk...@googlegroups.com
What channels does your grayscale conversion use?

I found some interesting things (thanks for the futurepicture images!):
The only thing I applied was levels, to make it more contrasty:
color: http://imgur.com/bNObP
blue: http://imgur.com/i5Duo
red: http://imgur.com/vLViq

1. The blue channel contains many more dots than the red channel
(which has a bit more than the green channel). Why is this happening?
And actually, why does IR light have a blue/violet color in these
images? Isn't infra-red supposed to be close to red, while ultra-violet
is close to blue/violet?

2. The 9 central dots seem to be white, while no other dot is white
in the color picture. How is it possible that the central dots are
white, while the other dots have this strange coloration? They are
emitted from the same 830 nm light source. Do you think all we see
is just noise from the digital camera's CMOS sensor?

3. I manually counted the points along the edges of the color image.
In the x direction it had about 200 points; in the y direction, about
170. So a very rough guess would be that there are 200x170 points
emitted, scanned by a 1280x960 CMOS sensor. That would roughly put
one dot in a 5x5 pixel region, which I think is enough for detecting
the points one by one.
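As a quick sanity check on that arithmetic (the 1280x960 capture resolution and the 200x170 dot count are both guesses from this thread, not confirmed specs):

```python
# Back-of-envelope: pixels available per projected dot on each axis.
dots_x, dots_y = 200, 170          # approximate manual count from the photo
sensor_x, sensor_y = 1280, 960     # assumed raw IR capture resolution

print(round(sensor_x / dots_x, 1), round(sensor_y / dots_y, 1))  # -> 6.4 5.6
```

So it comes out closer to 6 pixels per dot per axis, in the same ballpark as the 5x5 regions mentioned.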

Ismael Salvador

Nov 19, 2010, 1:58:21 PM
to OpenKinect
I think it's blueish because he used some filter to minimise visible
light. You can take IR photos with a normal camera and an old
unexposed, developed film as a filter
(http://photocritic.org/create-your-own-ir-filter/).

I do not know exactly which gray conversion Fiji applies for me, but I
suppose it's the standard Y = 0.3R + 0.59G + 0.11B.
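For reference, that standard formula applied per pixel (the weights are the classic rounded ITU-R BT.601 luma coefficients):

```python
# Standard luma conversion: weighted sum of the R, G, B channels.
def to_gray(r, g, b):
    return 0.3 * r + 0.59 * g + 0.11 * b

print(round(to_gray(255, 255, 255)))  # pure white -> 255
print(round(to_gray(0, 255, 0)))      # pure green -> 150
```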

The reason the central points are white is just that they are
saturating the sensor.

Just think of the IR image as a 1-channel image; there are colors here
only because it was taken with a color camera.
