I know zero Linux & would rather not spend ages getting it to turn on - I'd be grateful if
someone could point me to the quickest & easiest way to get a display of depth data on a
Windows PC.
https://github.com/openkinect/libfreenect
after you git clone it, take a look in the platform folder.
>Instead of ripping it apart why don't you just read this at iFixit?
>http://www.ifixit.com/Teardown/Microsoft-Kinect-Teardown/4066/1
>
Because I can't connect a scope to their pictures, or measure the optical output of the illuminator.
And I like taking things apart..
>In the Git repository, there are windows drivers and example code...
>we are doing all of this cross platform for the big 3 OS's (Linux, Mac
>OSX and Windows)
>https://github.com/openkinect/libfreenect
>after you git clone it, take a look in the platform folder.
Is there a newbie guide to using github anywhere... can't see any obvious documentation at
github.com -
I was kinda hoping someone had made some precompiled USB driver / .exe demos available as a
simple download by now...!
>Mike,
>Glad to hear there's some hardware hackers here.
>Things I want to know:
>1) laser wavelength
>2) optical power
>3) whether it's pulsed or continuous
>4) output polarization
>
>We'll start there :)
>-William
Me too, as soon as I can get it fired up, which is why I'm looking for a quick-start to get the
thing running without learning a ton of software stuff first!!
I have a monochromator to determine wavelength, a laser power meter, and should be able to
determine IR sensor resolution by looking at the data from the sensors.
>easiest one just to get a display on windows is here:
>http://groups.google.com/group/openkinect/browse_thread/thread/ab150d174e105794#
Thanks - that's exactly what I was after!
A few findings:
Wavelength is 830nm, no modulation. Camera probably has a narrowband filter, as it seems pretty
immune to light from other sources, e.g. blasting a 950nm IR remote directly at it.
Power measures about 60mW at the aperture. It's likely the patterning element introduces significant
loss, so raw laser power could easily be in the 100-200mW range - certainly an eye hazard if the
diffuser is removed!
The peltier element behind the laser is NOT for cooling, but for temperature stabilisation - this is
not uncommon in lasers to stabilise the wavelength. I suspect there may be some optical feedback to
stabilise power level.
Measuring the current into the peltier, it stabilises at around 120mA at room temp. Spraying freezer
on the backplate causes this to increase to over 500mA. Applying heat to the backplate, the current
decreases through zero, changes polarity, then increases.
At room temp, the peltier is actually heating the laser, not cooling it.
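That behaviour matches a bipolar proportional controller. Here's a toy model of it - the setpoint and gain are made-up values chosen only to roughly match the currents measured above, not figures from any datasheet:

```python
# Toy model of bipolar temperature stabilisation: drive current is
# proportional to the temperature error, and its sign flips when the
# laser crosses the setpoint. SETPOINT_C and GAIN_MA_PER_C are
# assumptions for illustration, not measured constants.
SETPOINT_C = 45.0        # assumed stabilisation temperature
GAIN_MA_PER_C = 24.0     # assumed controller gain

def tec_current_ma(laser_temp_c):
    """Positive = heating the laser, negative = cooling (polarity reversed)."""
    return GAIN_MA_PER_C * (SETPOINT_C - laser_temp_c)

tec_current_ma(40.0)   # near room temp: modest heating current
tec_current_ma(25.0)   # freezer spray: heating current climbs
tec_current_ma(60.0)   # heated backplate: sign flips to cooling
```

With these made-up numbers the model reproduces the ~120mA at room temp, a few-hundred-mA rise under freezer spray, and the polarity reversal when the backplate is heated.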
There must be something important about a stable wavelength to justify the cost of active control -
either the pattern generator is wavelength-sensitive, or it's to keep it within the camera filter's
passband.
There's a thermal fuse in contact with the illuminator housing - clearly a safety-critical
protection device as it's wired in series with the 12v supply.
I wonder if this is to protect against a damaged optical assembly causing the laser to hit the
housing wall and start melting it...
Or could be to do with peltier power if the sensing element fails, or both.
Only the IR cam has a shield over it, and also has a ferrite-loaded sleeve on the flex - my guess is
this is to minimise noise pickup on the camera's analogue section.
Based on sync and pixel data waveforms, the distance camera appears to be 1200x960
Pixel clock 48MHz, Frame period 33.33ms of which 31ms active, line period 31.8uS of which about
25.2uS active
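As a quick sanity check, those timings do line up with a 1200x960 sensor at 30 fps:

```python
# Back-of-envelope check that the measured timings are consistent with
# a 1200x960 sensor at 30 fps (all figures from the measurements above).
pixel_clock_hz = 48e6
active_line_s = 25.2e-6     # active portion of each 31.8 us line
line_period_s = 31.8e-6
active_frame_s = 31e-3      # active portion of each 33.33 ms frame
frame_period_s = 33.33e-3

pixels_per_line = pixel_clock_hz * active_line_s   # ~1210
lines_per_frame = active_frame_s / line_period_s   # ~975
frames_per_sec = 1 / frame_period_s                # ~30
```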
Florian
Probably, however if there's an air gap between the water and camera/illuminator lenses, the
refractive index at the boundaries may mess things up.
You may need to use something like a clear silicone compound to fill the gap to avoid this, making
an air-free path between the lens surfaces and the water.
Why don't you test it and find out!
http://www.wikihow.com/Make-Your-Own-Vacuum-Sealed-Storage-Bags
-----Original Message-----
From: Arthur Wolf <wolf....@gmail.com>
Sender: openk...@googlegroups.com
Date: Thu, 18 Nov 2010 15:11:10
To: OpenKinect<openk...@googlegroups.com>
Reply-To: openk...@googlegroups.com
You'd need to sync the cameras together, and switch the illuminators accordingly, but this shouldn't
be too hard, by using a phase-locked loop on the 12MHz crystal, and the frame sync signal from the
depth camera sensor.
This assumes there is no frame-to-frame processing going on.
Incidentally I did see the camera module receiving 2 bytes of data via (presumably) I2C every frame,
but didn't notice any differences in the data value - it's conceivable this may be something like
gain control for auto-exposure.
Another thing I noticed was the relative positions of the illuminator and sensor are very sensitive
to change, which may explain the metal frame - even a slight bend of the frame makes a noticeable
difference to the depth image.
It's a Class 1 device (mostly harmless) because the power is spread out over a wide area.
A gazillion watt laser product is still class 1 if the beam can't get out of the box.
I suspect it should really be class 1M (unsafe with optical aids) as it may be possible to refocus
the pattern into a small area.
>> Based on sync and pixel data waveforms, the distance camera appears to be 1200x960
>> Pixel clock 48MHz, Frame period 33.33ms of which 31ms active, line period 31.8uS of which about
>> 25.2uS active
>
>sorry, what does that mean exactly? 1200x960 pixel resolution at 30
>fps?
Yes.
But the depth resolution is less as it needs to resolve the dot pattern - depth image resolution is
widely quoted as 640x480, i.e. about 1.5 image pixels per depth pixel, which seems plausible.
For images comprising known-size dots, it could conceivably be doing centroid fitting to get
sub-pixel resolution on dot position.
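A minimal sketch of what intensity-weighted centroiding looks like, assuming a small grayscale window around a detected dot (window contents are made up for illustration - nothing here is from the Primesense chip):

```python
# Sub-pixel dot localisation by intensity-weighted centroid: the dot's
# position is the average of pixel coordinates weighted by brightness,
# so it can land between pixel centres.
def centroid(window):
    """Return (x, y) centroid of a 2-D intensity grid, in pixel units."""
    total = sx = sy = 0.0
    for y, row in enumerate(window):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    return (sx / total, sy / total)

# A dot whose brightness peak straddles pixel columns 1 and 2:
dot = [
    [0, 1, 1, 0],
    [1, 4, 4, 1],
    [0, 1, 1, 0],
]
cx, cy = centroid(dot)   # lands at x = 1.5, between the two bright columns
```

Even this crude weighting already resolves the dot centre to a fraction of a pixel, which is why the depth map can plausibly exceed one depth pixel per 1.5 image pixels.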
nothing obvious - there is an unpopulated FFC connector but this aligns with a hole on the faceplate
so probably for an optional feature.
There are tons of testpoints though, so JTAG is probably there somewhere - you'd need the
pinout of the Primesense chip to find it.
What would the JTAG let us do?
Can someone measure how long it takes for the PrimeSense chip
to start sending point cloud data after the IR source is turned off
then on? Maybe it doesn't trigger a calibration.
--
Adam Crow BEng (hons) MEngSc MIEEE
Technical Director
DC123 Pty Ltd
http://dc123.com
So I started wondering what that big Marvell chip is actually for...
Turns out it's for the audio, and has nothing to do with the depth sensor system.
I followed its connection to the USB hub, and shorted the data lines together.
The cameras still work - the only difference is the "NUI Audio" device doesn't appear.
There are tracks connecting it to the audio ADC chips.
Now what are they doing with that much processing power just for audio?
Can't find the exact part on the Marvell site but looking at their range it looks like one of their
Armada series with a clock of the order of 400MHz.
There is a SPI flash memory attached, which I suspect contains a bootloader to allow the XBOX to
load code into the SDRAM.
Do any of the current XBOX games do cool stuff with audio?
I suspect this may also account for the discrepancy between the power draw stated on the label and
that actually measured when connected to a PC - the Marvell chip isn't actually doing anything!
Running on a PC, the heatsink certainly doesn't get hot enough to justify its presence.
Eventually with a custom firmware or loaded instructions we could make the Marvell do our bidding. If there are hidden hooks in the USB protocol, we could offload computations to it.
I now have it running with only power and USB pins connected.
This means that with an external supply (+5V,+3.3V and +1.8V), the image sensor board should be
usable on its own, which would reduce power draw and size significantly for those wanting to use the
sensor on small/mobile platforms.
Unfortunately, the overall power draw is still too high to run from a single USB port, due to the
initial power draw of the peltier. Looks like the figures in the Primesense ref design didn't
include this.... There might be some ways round this.
I've updated the hardware page with pinouts etc.
http://openkinect.org/wiki/Hardware_info
Unfortunately I can't currently find the connector in stock anywhere..
One suggestion arising from this for those doing drivers and other software: don't assume that
the USB hub, audio, or motor devices are present, and don't throw an error if they aren't
found.
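Something along these lines - the probe just skips whatever isn't on the bus. The vendor/product IDs below are the commonly reported camera/audio/motor IDs (vendor 0x045E is Microsoft) and may differ between hardware revisions; `probe_devices` and its enumerator callback are hypothetical names for illustration:

```python
# Probe for each optional Kinect sub-device instead of assuming the
# full hub/audio/motor set exists. IDs are assumptions, not verified
# against every board revision.
WANTED = {
    "camera": (0x045E, 0x02AE),
    "audio":  (0x045E, 0x02AD),
    "motor":  (0x045E, 0x02B0),
}

def probe_devices(enumerate_bus, wanted):
    """Return only the wanted devices actually present.

    `enumerate_bus` yields (vendor_id, product_id) tuples; a missing
    optional device is simply omitted, never treated as an error.
    """
    on_bus = set(enumerate_bus())
    return {name: ids for name, ids in wanted.items() if ids in on_bus}

# Example: a stripped-down board exposing only the camera endpoint
present = probe_devices(lambda: [(0x045E, 0x02AE)], WANTED)
```

A driver built this way keeps working on a cut-down sensor board like the one described above, where only power and USB are connected.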
The only thing coming from the rear PCB is power.
Regards.