News about New Kinect and SDK


mankoff

Jan 9, 2012, 11:36:32 PM
to OpenKinect

https://blogs.msdn.com/b/kinectforwindows/archive/2012/01/09/kinect-for-windows-commercial-program-announced.aspx

Kyle McDonald

Jan 10, 2012, 12:08:06 AM
to openk...@googlegroups.com
very interesting:

"non-commercial deployments using Kinect for Xbox 360 that were
allowed using the beta SDK are not permitted with the newly released
software"

in other words, if you want to do something with kinect AND the soon-to-be-released non-beta SDK, you have to use the $250 kinect. anything
you've deployed with a $150 kinect and the beta SDK will stop being
legal in 2016, unless you migrate to the $250 kinect and the new SDK.
these kinds of restrictions are so arcane.

they don't even mention, or try to clarify, the status of libfreenect,
which is responsible for 95% of the "kinect effect" they keep talking
about.

unrelated to the above, can someone clarify "near mode" on this new
kinect? all i've heard is that it can do 40-50 cm, but the current
kinect already does 40-50 cm in the right lighting. what's special
about the new version?

kyle

On Mon, Jan 9, 2012 at 11:36 PM, mankoff <man...@gmail.com> wrote:
> https://blogs.msdn.com/b/kinectforwindows/archive/2012/01/09/kinect-for-windows-commercial-program-announced.aspx

Γιάννης Γράβεζας

Jan 10, 2012, 12:22:22 AM
to openk...@googlegroups.com
On Tue, Jan 10, 2012 at 7:08 AM, Kyle McDonald <ky...@kylemcdonald.net> wrote:
very interesting:

"non-commercial deployments using Kinect for Xbox 360 that were
allowed using the beta SDK are not permitted with the newly released
software"

in other words, if you want to do something with kinect AND the soon-to-be-released non-beta SDK, you have to use the $250 kinect. anything
you've deployed with a $150 kinect and the beta SDK will stop being
legal in 2016, unless you migrate to the $250 kinect and the new SDK.
these kinds of restrictions are so arcane.


Yet they make perfect sense in meetings among cocaine-sniffing execs.
 
they don't even mention, or try to clarify, the status of libfreenect,
which is responsible for 95% of the "kinect effect" they keep talking
about.


And what could they say? "Hey, there's also an alternative that's proven, cross-platform and available to everyone without any restrictions whatsoever. Fuck ours, get that." That doesn't sound so great in the above-mentioned meetings.
 
unrelated to the above, can someone clarify "near mode" on this new
kinect? all i've heard is that it can do 40-50 cm, but the current
kinect already does 40-50 cm in the right lighting. what's special
about the new version?


Probably nothing. What would make a difference would be a wider camera angle and getting rid of the power plug. What's the status on these? 
 
kyle

 
--
bliss is ignorance

mankoff

Jan 10, 2012, 12:34:59 AM
to OpenKinect

> Probably nothing. What would make a difference would be a wider camera
> angle and getting rid of the power plug. What's the status on these?
>

I think there's no power plug on the ASUS Xtion (or whatever the name is). Not
100% sure.

-k.

Kyle McDonald

Jan 10, 2012, 12:36:36 AM
to openk...@googlegroups.com
hah hah -- i don't feel as cynical as you, i think :) i'm glad
microsoft is doing what they can to get this technology into more
people's hands. it just seems like they're making some really obvious
mistakes along the way that reflect an outdated way of thinking.

but i've never been a cocaine-sniffing exec, so i don't really know
what i'm talking about.

really: i'm super curious if anyone could chime in on the genuinely
'new' features of kinect for windows. the closest thing i've seen is
from josh tweeting the amazon page:

http://www.amazon.com/exec/obidos/ASIN/B006UIS53K/34a6-20/

"has a shortened USB cable to ensure reliability across a broad range
of computers and includes a small dongle to improve coexistence with
other USB peripherals."

i also like the note:

"The sensor will only work on computers running the SDK softawre [sic]"

i wonder if that's like a standard compatibility notice, or more like a threat?

kyle

2012/1/10 Γιάννης Γράβεζας <wiz...@gmail.com>:

Joshua Blake

Jan 10, 2012, 12:55:22 AM
to openk...@googlegroups.com
On Tue, Jan 10, 2012 at 12:08 AM, Kyle McDonald <ky...@kylemcdonald.net> wrote:
they don't even mention, or try to clarify, the status of libfreenect,
which is responsible for 95% of the "kinect effect" they keep talking
about.
 
Nor do they mention OpenNI, but they've already stated that they're not going to pursue legal action for those uses, though it voids any hardware warranty.
 

unrelated to the above, can someone clarify "near mode" on this new
kinect? all i've heard is that it can do 40-50 cm, but the current
kinect already does 40-50 cm in the right lighting. what's special
about the new version?

 
 
We know that the Kinect for Xbox hardware returns data down to about 40cm and up to about 6m. The Kinect for Windows betas locked the range to 80cm to 4m since that is the range where the depth data values are actually accurate. Outside that range, the accuracy and resolution of the reported values and the image quality degrade dramatically (e.g. more holes in the depth image). In the v1 SDK, 80cm to 4m is the default mode. Near mode in the new firmware is able to report accurate depth values from 40cm to 3m. You can switch between near mode and default mode.
 
Near mode does not change the field of view or let you stand closer to the Kinect.
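 
To make the two windows concrete, here is a minimal sketch in C of what switching modes effectively changes: only the band of depth values that gets treated as reliable. This is not actual SDK code; the numbers are just the ones above, in millimetres.

    enum range_mode { MODE_DEFAULT, MODE_NEAR };

    /* Default mode trusts 80cm-4m; near mode trusts 40cm-3m. Nothing
       else (field of view, resolution) is different. */
    static int depth_is_reliable(unsigned short mm, enum range_mode mode)
    {
        unsigned short lo = (mode == MODE_NEAR) ? 400 : 800;
        unsigned short hi = (mode == MODE_NEAR) ? 3000 : 4000;
        return mm >= lo && mm <= hi;
    }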
The firmware also has better RGB and depth quality in general and much better audio quality, and the new sensor works with more PCs, whereas the Kinect for Xbox had problems with certain USB chipsets. The v1 SDK also honestly has a really well-designed API, a great install experience, and skeleton tracking that works much better than what we've seen in the beta or OpenNI. The Kinect for Windows team is also going to continue making improvements to the SDK, adding new tracking modes and features and releasing free updates.
 
The SDK investments and commercial support are primarily what the higher hardware cost goes towards. IMHO, sure, I'd rather pay less for the hardware, but honestly most of the Kinects I use are paid for by work or customers, i.e. Other People's Money.
 
In the bigger picture, the Kinect for Windows hardware is still a great deal at $249, compared to previous types of commercial 3D scanning hardware (TOF, LIDAR), as well as compared to the other competing commercially usable sensor, the Asus Xtion Pro. The Xtion Pro is currently $149 for depth only, or $199 for the Xtion Pro Live with depth and RGB (http://us.estore.asus.com/index.php?l=product_list&c=3009). Neither of those has audio beamforming, and they have more problems with certain lighting conditions than the Kinect.
 
The Kinect for Windows team is really responsive to feedback though and I'm going to point them to this thread. I'm sure they'll love to read any fair criticisms and requests. (Pretty sure cocaine is not involved though.)
 
Josh

Γιάννης Γράβεζας

Jan 10, 2012, 1:24:15 AM
to openk...@googlegroups.com
On Tue, Jan 10, 2012 at 7:55 AM, Joshua Blake <josh...@gmail.com> wrote:
On Tue, Jan 10, 2012 at 12:08 AM, Kyle McDonald <ky...@kylemcdonald.net> wrote:
they don't even mention, or try to clarify, the status of libfreenect,
which is responsible for 95% of the "kinect effect" they keep talking
about.
 
Nor do they mention OpenNI, but they've already stated that they're not going to pursue legal action for those uses, though it voids any hardware warranty.
 

No it doesn't. They would have to prove that running alternative software damaged the Kinect. Now, since they've already used concepts created by the open source community for their marketing, they've basically lost that option.
 

unrelated to the above, can someone clarify "near mode" on this new
kinect? all i've heard is that it can do 40-50 cm, but the current
kinect already does 40-50 cm in the right lighting. what's special
about the new version?

 
 
We know that the Kinect for Xbox hardware returns data down to about 40cm and up to about 6m. The Kinect for Windows betas locked the range to 80cm to 4m since that is the range where the depth data values are actually accurate. Outside that range, the accuracy and resolution of the reported values and the image quality degrade dramatically (e.g. more holes in the depth image). In the v1 SDK, 80cm to 4m is the default mode. Near mode in the new firmware is able to report accurate depth values from 40cm to 3m. You can switch between near mode and default mode.
 

This is achieved just by upgrading the firmware? Can this be done on the Xbox Kinect as well?
 
In the bigger picture, the Kinect for Windows hardware is still a great deal at $249, compared to previous types of commercial 3D scanning hardware (TOF, LIDAR), as well as compared to the other competing commercially usable sensor, the Asus Xtion Pro. The Xtion Pro is currently $149 for depth only, or $199 for the Xtion Pro Live with depth and RGB (http://us.estore.asus.com/index.php?l=product_list&c=3009). Neither of those has audio beamforming, and they have more problems with certain lighting conditions than the Kinect.

Some stability fixes and a firmware upgrade don't justify doubling the price of what's already on the market. That's OK though, it's a free market after all. But we should try to keep it decent as well.

 
The Kinect for Windows team is really responsive to feedback though and I'm going to point them to this thread. I'm sure they'll love to read any fair criticisms and requests. (Pretty sure cocaine is not involved though.)

I've got a request that I feel most of the people here would agree on. Since libfreenect is in a different league from the MS SDK (no skeletons and stuff), how about opening your driver so we can use it as well? Let's all be friends and proliferate the technology.
 
 
Josh



--
bliss is ignorance

Joshua Blake

Jan 10, 2012, 2:00:18 AM
to openk...@googlegroups.com

2012/1/10 Γιάννης Γράβεζας <wiz...@gmail.com>

This is achieved just by upgrading the firmware? Can this be done on the Xbox Kinect as well?

 

 
I don't think it will be possible to upgrade the firmware of Kinect for Xbox hardware.
 

Some stability fixes and a firmware upgrade don't justify doubling the price of what's already on the market. That's OK though, it's a free market after all. But we should try to keep it decent as well.

 
With Kinect for Xbox Microsoft also made money from game licensing and Xbox subscriptions, etc, which plays into the pricing decision. With Kinect for Windows, the sensor hardware is the only way for them to make back their R&D investment since we'll be using it for many different purposes outside of the Xbox ecosystem and many will be making money using K4W in various ways.
 
I agree, it's kind of like pulling the rug out from under us to increase the price for the different use scenario after we're used to the $150 price point, but in the bigger picture there aren't many other options that would work. If you were the business decision maker at Microsoft, what would you do to make back the investment in the hardware and SDK?
 
Also, I'm curious if you are using Kinect for commercial or non-commercial purposes?
 
I've got a request that I feel most of the people here would agree on. Since libfreenect is in a different league from the MS SDK (no skeletons and stuff), how about opening your driver so we can use it as well? Let's all be friends and proliferate the technology.
 
 
Can you elaborate? Do you mean allowing the libfreenect API to communicate with the drivers installed by the Kinect SDK, or something else?

Γιάννης Γράβεζας

Jan 10, 2012, 2:18:45 AM
to openk...@googlegroups.com


2012/1/10 Joshua Blake <josh...@gmail.com>

Some stability fixes and a firmware upgrade don't justify doubling the price of what's already on the market. That's OK though, it's a free market after all. But we should try to keep it decent as well.

 
With Kinect for Xbox Microsoft also made money from game licensing and Xbox subscriptions, etc, which plays into the pricing decision. With Kinect for Windows, the sensor hardware is the only way for them to make back their R&D investment since we'll be using it for many different purposes outside of the Xbox ecosystem and many will be making money using K4W in various ways.
 
I agree, it's kind of like pulling the rug out from under us to increase the price for the different use scenario after we're used to the $150 price point, but in the bigger picture there aren't many other options that would work. If you were the business decision maker at Microsoft, what would you do to make back the investment in the hardware and SDK?
 

I'm a developer, so my opinion is biased. But since MS didn't actually think of using the Kinect outside of the Xbox ecosystem, they had probably already figured out how to make back the investment. If the extra R&D for enabling use on other platforms was so much of a burden, they could have just asked for the help of the community, like so many other big and successful companies have done in the past with great results.
 
Also, I'm curious if you are using Kinect for commercial or non-commercial purposes?
 

That's a bit complicated. I'm building a platform that allows web developers to perform basic computer vision inside browsers. The software is free (as in speech), but I do charge for my time as a consultant and/or developer for specific installations. My main objective is the proliferation of the technology though, so I don't charge for workshops (besides expenses).
 
I've got a request that I feel most of the people here would agree on. Since libfreenect is in a different league from the MS SDK (no skeletons and stuff), how about opening your driver so we can use it as well? Let's all be friends and proliferate the technology.
 
 
Can you elaborate? Do you mean allowing the libfreenect API to communicate with the drivers installed by the Kinect SDK, or something else?


The best option would be providing assistance to the libusb project so the 1.0 version for Windows could use the K4W driver as a backend, like it does now with WinUSB (which sucks). I think this option is feasible and logical.

--
bliss is ignorance

Grant Kot

Jan 10, 2012, 8:22:47 AM
to openk...@googlegroups.com
I'm a bit disappointed by this. I have already bought 2 Kinects for Xbox and now I will need to get these new Kinects. The same goes for all the other early adopters who got the Kinect for Xbox. In addition, there don't seem to be that many hardware improvements, just new firmware. I'm afraid that this will seriously mess up the user base and drive the user count down so low it won't be worthwhile to develop using the Windows SDK. In that case, I seriously hope the new Kinect will work with the other drivers that are available, that there will be no effort from Microsoft to block them, and that you can write software that works for both devices.

2012/1/10 Γιάννης Γράβεζας <wiz...@gmail.com>

Kyle McDonald

Jan 10, 2012, 10:53:20 AM
to openk...@googlegroups.com
> In the v1 SDK, 80cm to 4m is the default mode. Near mode in the new firmware is able to report accurate depth values from 40cm to 3m. You can switch between near mode and default mode.

ah, ok -- so the windows SDK just hasn't been reporting <80cm so far.
but now it will, and they're marketing it as a 'feature' of the
windows kinect even though that range has already been exposed through
libfreenect and openni.
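
for reference, a minimal sketch of reading sub-80cm depth with plain libfreenect. the raw-to-metres formula is a widely circulated approximation rather than an official calibration, so treat its constants as assumptions:

    #include <stdio.h>
    #include <stdint.h>
    #include <libfreenect.h>

    /* print the depth at the image centre; readings below 0.8m show up fine */
    static void depth_cb(freenect_device *dev, void *data, uint32_t ts)
    {
        uint16_t raw = ((uint16_t *)data)[240 * 640 + 320]; /* centre pixel */
        if (raw < 2047) {                                   /* 2047 = no reading */
            double m = 1.0 / (raw * -0.0030711016 + 3.3309495161);
            printf("centre depth: %.2f m\n", m);
        }
    }

    int main(void)
    {
        freenect_context *ctx;
        freenect_device *dev;
        if (freenect_init(&ctx, NULL) < 0 || freenect_open_device(ctx, &dev, 0) < 0)
            return 1;
        freenect_set_depth_callback(dev, depth_cb);
        freenect_set_depth_mode(dev, freenect_find_depth_mode(
            FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
        freenect_start_depth(dev);
        while (freenect_process_events(ctx) >= 0)
            ;                                               /* ctrl-c to quit */
        return 0;
    }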

let's step back once more, and look at the options:

kinect with the official SDK
- more expensive
- has previously been artificially limited (e.g., 80 vs 40 cm)
- windows only
- small community
- unpredictable licensing (e.g., 2016 cutoff)
- has only recently caught up on skeleton accuracy
+ sound localization

kinect with openni/libfreenect
+ cheaper
+ no artificial limitations
+ cross platform
+ huge community
+ clearly licensed
+ openni now supports calibration-free detection
- "voids your warranty"

i think the best thing microsoft could do would be:

- make real changes to the kinect (rather than shortening the cable or
adding a usb hub) so they have something they can feed back into the
xbox market. anyone can write a decent api for kinect, but only
microsoft has the resources to further push the tech.
- don't try to make a separate version (kinect for windows) because
i'm just not sure that the size of the market is in any way
comparable. do we have any numbers on how many kinects out there are
not plugged in to xboxes?
- get all the awesome developers who are writing the kinect SDK to
focus their energy where there is already good cross-platform momentum
and strong communities.

there are some interviews where MS says they expected the hacker scene
to happen. i can't believe this, because they were completely
unprepared. if they really expected the 'kinect effect' from the
beginning here's what would have happened:

- kinect SDK and kinect are pre-released to select devs
- then kinect SDK and kinect are released to the world
- windows users have a head start on everyone, start making crazy demos
- people reverse the SDK and start openkinect in response to the
windows-only SDK (rather than in response to the lack of an SDK)

but that's not what happened. and now they're in a really weird position :(

kyle

2012/1/10 Grant Kot <kot...@gmail.com>:

Joshua Blake

Jan 10, 2012, 11:16:09 AM
to Kyle McDonald, openk...@googlegroups.com
Where you say unpredictable licensing (e.g., the 2016 cutoff), that isn't
accurate. The 2016 date is the extended license EULA for current
non-commercial deployments of the Kinect SDK betas using Kinect for
Xbox hardware. The previous beta EULA was only valid until June 16,
2013, which is a typical beta license restriction.

The actual v1 Kinect SDK will use the standard Microsoft support life cycle,
which has a minimum of 10 years of support and no cutoff date for
actual use.

Kyle McDonald

Jan 10, 2012, 11:49:34 AM
to Joshua Blake, openk...@googlegroups.com
aha, ok! thanks for clarifying josh. "unpredictable" was the wrong
word choice. maybe something like "archaic" or "corporate" would have
been better.

it's just the difference between something like LGPL/MIT/Apache, where
the community is the support and the license is indefinite... and an MS-style
EULA, where the support is centralized and the license is finite.
the EULA feels "unpredictable" in comparison, but that's not the primary
feature.

kyle

2012/1/10 Joshua Blake <josh...@gmail.com>:

Γιάννης Γράβεζας

Jan 10, 2012, 12:10:50 PM
to openk...@googlegroups.com


2012/1/10 Kyle McDonald <ky...@kylemcdonald.net>

aha, ok! thanks for clarifying josh. "unpredictable" was the wrong
word choice. maybe something like "archaic" or "corporate" would have
been better.

it's just the difference between something like LGPL/MIT/Apache, where
the community is the support and the license is indefinite... and an MS-style
EULA, where the support is centralized and the license is finite.
the EULA feels "unpredictable" in comparison, but that's not the primary
feature.

kyle


Archaic corporate mentality indeed. MS is the only IT giant that hasn't yet benefited from cooperating with the community. Apple did, and got Darwin and WebKit. I'm certain that at least some of the managers at MS have taken notice of these developments and are thinking about it. Now if we could only get them to quit sniffing, oh what a wonderful world we would live in :P

--
bliss is ignorance

Nink

Jan 10, 2012, 7:22:12 PM
to openk...@googlegroups.com
Grant, I have to agree with you here. I have several Kinects that I would like to use, but they became obsolete overnight.

Microsoft should have added something new to the Kinect for Windows to justify a price increase and still provided backwards support for Xbox Kinects. That something new could have been as simple as a gyroscope, so you always know the orientation of the Kinect (it would make hand scanning so much easier to calculate). I am sure this community could come up with a nice shopping list of items to add that we would gladly pay extra for.

The good news is people will still develop alternative drivers and keep buying the old kinect.  

.......

Ha Loo

Jan 11, 2012, 4:31:02 AM
to openk...@googlegroups.com
I've been asking myself the whole time I've been reading this thread why everyone is discussing such a stupid marketing "gag" as the new Kinect. There are no new features! Near mode is a nice idea but not new, and if the actual WinSDK limited the range to between 0.8 and 4m, then it is a software problem, not a hardware one! I agree with the last phrase from Nink:


    "The good news is people will still develop alternative drivers and keep buying the old kinect."

Vasilie

Lorne Covington

Jan 11, 2012, 12:14:34 PM
to openk...@googlegroups.com

For me it depends on whether the new Kinect has real performance improvements.  For instance, I'm seeing transient depth artifacts* with the Kinect that are rather troublesome for my application and that I do not see with the Xtion.  If MS fixed this in the new Kinect, and added the 60fps 320x240 option, I would use it instead of the Xtion for its better range and light tolerance.

But if I read things in the most pessimistic way (usually the correct way), then it looks like the new SDK will orphan Xbox Kinects, and the new Kinects will have additional anti-hack measures in place to make them work only with the SDK (at least for a while!).  We'll see.  If the latter is true, I would avoid them even if they were hacked again, as it would indicate where MS is going and would mean they're not a reliable source for non-SDK apps.

Ciao!

- Lorne

* Frequently when a person moves toward the Kinect, I see an outline that appears further away by about a half meter.  In normal applications this is not a big deal, but I use the Kinect (and now Xtion) off-axis, up in the corner of a screen looking down and sideways.  So say if someone is moving their hand in front of the screen, that outline looks like another hand moving in front of the screen too.  It's hard to filter out and results in false touches.

As cool as the kinect is, off-axis is really where the future lies - there are at least 10x more applications than where someone is standing or sitting back in front of a screen with the sensor positioned dead in front.  Slap a depth camera up the corner of a wall, and you can turn that whole wall into a giant touchscreen with hover-depth sensing, and that's just the start.

Γιάννης Γράβεζας

Jan 11, 2012, 3:18:49 PM
to openk...@googlegroups.com
On Wed, Jan 11, 2012 at 7:14 PM, Lorne Covington <mediado...@gmail.com> wrote:

For me it depends on whether the new Kinect has real performance improvements.  For instance, I'm seeing transient depth artifacts* with the Kinect that are rather troublesome for my application and that I do not see with the Xtion.  If MS fixed this in the new Kinect, and added the 60fps 320x240 option, I would use it instead of the Xtion for its better range and light tolerance.


Yeah, 320x240@60fps would also halve the processing requirements; +1 from me as well for this one.
 
But if I read things in the most pessimistic way (usually the correct way), then it looks like the new SDK will orphan Xbox Kinects, and the new Kinects will have additional anti-hack measures in place to make them work only with the SDK (at least for a while!).  We'll see.  If the latter is true, I would avoid them even if they were hacked again, as it would indicate where MS is going and would mean they're not a reliable source for non-SDK apps.


I don't think they'll go there. I've come to believe that they actually don't know where they're going. Come on MS, let's do this together and do it right. We're not competitors, we're free resources; use us. All we ask of you is to keep it decent, is it that hard?
 
Ciao!

- Lorne

* Frequently when a person moves toward the Kinect, I see an outline that appears further away by about a half meter.  In normal applications this is not a big deal, but I use the Kinect (and now Xtion) off-axis, up in the corner of a screen looking down and sideways.  So say if someone is moving their hand in front of the screen, that outline looks like another hand moving in front of the screen too.  It's hard to filter out and results in false touches.

I'm a bit confused here, do you mean the shadow?
 

As cool as the kinect is, off-axis is really where the future lies - there are at least 10x more applications than where someone is standing or sitting back in front of a screen with the sensor positioned dead in front.  Slap a depth camera up the corner of a wall, and you can turn that whole wall into a giant touchscreen with hover-depth sensing, and that's just the start.


Indeed this is a great approach; I've used it here http://www.youtube.com/watch?v=0diSk-YecT8 and here http://www.youtube.com/watch?v=dwYfVjoTQXQ. I'm currently writing a simple shooter where you actually throw stuff at the screen. I'll test it tomorrow at my local hackerspace and post a vid. Cheers

Yannis

--
bliss is ignorance

Lorne Covington

Jan 12, 2012, 1:33:12 AM
to openk...@googlegroups.com


On 1/11/2012 3:18 PM, Γιάννης Γράβεζας wrote:

* Frequently when a person moves toward the Kinect, I see an outline that appears further away by about a half meter.  In normal applications this is not a big deal, but I use the Kinect (and now Xtion) off-axis, up in the corner of a screen looking down and sideways.  So say if someone is moving their hand in front of the screen, that outline looks like another hand moving in front of the screen too.  It's hard to filter out and results in false touches.

I'm a bit confused here, do you mean the shadow?

No, as that would not be translated into the point cloud.  I'm talking about a fringe of pixels (depthels?) on one side of a moving object that appear to be further away than the object.  This would have little or no effect on routines that just process the depth image (user/skeleton tracking), but when turned into points and rotated they are a bogus shape in space.  But I will look now and see if they are always on the shadow side.  Could be some funky averaging going on.

So you are using the Kinect in a similar way and you have not seen this?  It is not constant, but happens often enough (several times a minute) to be a pain.  I thought it might be a flaky Kinect, but a new one does the same thing.



 
  Slap a depth camera up the corner of a wall, and you can turn that whole wall into a giant touchscreen with hover-depth sensing, and that's just the start.


Indeed this is a great approach; I've used it here http://www.youtube.com/watch?v=0diSk-YecT8 and here http://www.youtube.com/watch?v=dwYfVjoTQXQ. I'm currently writing a simple shooter where you actually throw stuff at the screen. I'll test it tomorrow at my local hackerspace and post a vid. Cheers

Yes, exactly.  But just using it to imitate a touch screen is limiting, as you can have a "hover screen" where intensity, size, etc. etc. can be taken from nearness to the screen, plus you can read things like velocity to/away from the screen (good for your shooter).  Imagine a virtual piano that could play pianissimo to forte... on a finger by finger basis just like the real thing by reading the velocity preceding the touch.  In one test I'm using this and doing sort of "air bongos" floating in 3D in front of the user.

Ciao!

- Lorne


Joshua Blake

Jan 12, 2012, 1:59:25 AM
to openk...@googlegroups.com
On Thu, Jan 12, 2012 at 1:33 AM, Lorne Covington <mediado...@gmail.com> wrote:
No, as that would not be translated into the point cloud.  I'm talking about a fringe of pixels (depthels?) on one side of a moving object that appear to be further away than the object.  This would have little or no effect on routines that just process the depth image (user/skeleton tracking), but when turned into points and rotated they are a bogus shape in space.  But I will look now and see if they are always on the shadow side.  Could be some funky averaging going on.

So you are using the Kinect in a similar way and you have not seen this?  It is not constant, but happens often enough (several times a minute) to be a pain.  I thought it might be a flaky Kinect, but a new one does the same thing.
 
I have not seen anything like this. I assume you are not just talking about the natural falling away of the side of an object, like the curve of your hand at the edge.
 
If you are rendering your point cloud using shaders and are sampling the depth map, then you will get interpolated values. Make sure you get values from the depth map using a non-interpolated method, like the myTexture.Load() method in HLSL.
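 
To illustrate the same point outside HLSL, here is a small C sketch (hypothetical helpers, not from any SDK) of why filtered sampling is wrong for depth: blending texels across an edge between a surface at 1m and one at 2m yields values around 1.5m that exist nowhere in the scene, exactly the kind of phantom fringe being described. A Load()-style fetch returns one exact texel and cannot invent depths.

    #define W 640
    #define H 480

    /* Load()-style fetch: one exact texel, no filtering. */
    static float depth_nearest(const float *map, float u, float v)
    {
        int x = (int)(u * (W - 1) + 0.5f);
        int y = (int)(v * (H - 1) + 0.5f);
        return map[y * W + x];
    }

    /* Sample()-style fetch with LINEAR filtering: blends four neighbouring
       texels, which is fine for colour but wrong for depth. */
    static float depth_bilinear(const float *map, float u, float v)
    {
        float fx = u * (W - 1), fy = v * (H - 1);
        int x0 = fx < W - 2 ? (int)fx : W - 2;
        int y0 = fy < H - 2 ? (int)fy : H - 2;
        float tx = fx - x0, ty = fy - y0;
        float a = map[y0 * W + x0],       b = map[y0 * W + x0 + 1];
        float c = map[(y0 + 1) * W + x0], d = map[(y0 + 1) * W + x0 + 1];
        /* across a 1m/2m edge this returns ~1.5m: a point in mid-air */
        return (a * (1 - tx) + b * tx) * (1 - ty) + (c * (1 - tx) + d * tx) * ty;
    }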
 
Josh

Γιάννης Γράβεζας

Jan 12, 2012, 5:11:52 AM
to openk...@googlegroups.com
On Thu, Jan 12, 2012 at 8:33 AM, Lorne Covington <mediado...@gmail.com> wrote:


On 1/11/2012 3:18 PM, Γιάννης Γράβεζας wrote:

* Frequently when a person moves toward the Kinect, I see an outline that appears further away by about a half meter.  In normal applications this is not a big deal, but I use the Kinect (and now Xtion) off-axis, up in the corner of a screen looking down and sideways.  So say if someone is moving their hand in front of the screen, that outline looks like another hand moving in front of the screen too.  It's hard to filter out and results in false touches.

I'm a bit confused here, do you mean the shadow?

No, as that would not be translated into the point cloud.  I'm talking about a fringe of pixels (depthels?) on one side of a moving object that appear to be further away than the object.  This would have little or no effect on routines that just process the depth image (user/skeleton tracking), but when turned into points and rotated they are a bogus shape in space.  But I will look now and see if they are always on the shadow side.  Could be some funky averaging going on.

So you are using the Kinect in a similar way and you have not seen this?  It is not constant, but happens often enough (several times a minute) to be a pain.  I thought it might be a flaky Kinect, but a new one does the same thing.


Now that you say so, I have indeed noticed something like that, but in a different use case. I take the first frame received from the Kinect as a reference, and on every subsequent frame I process pixels that are up to 100mm closer to the camera than the reference pixels. This way I can track hands against the wall etc. I've noticed that as I move in front of the camera but far away from the wall, an outline of a few random pixels appears, creating a ghost of me (I'll post a pic later on). It doesn't hurt my app, as I also apply a pixel-count threshold for blobs, but the issue is definitely there. It must be some kind of averaging as you say; I have to investigate it further.
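 
A minimal sketch of that reference-frame threshold in libfreenect terms, assuming a build that has FREENECT_DEPTH_MM (a depth of 0 means no reading and is skipped):

    #include <string.h>
    #include <stdint.h>
    #include <libfreenect.h>

    #define NPIX    (640 * 480)
    #define NEAR_MM 100              /* the "up to 100mm closer" window */

    static uint16_t reference[NPIX];
    static int      have_reference = 0;
    static uint8_t  mask[NPIX];      /* 1 = candidate hand/touch pixel */

    static void depth_cb(freenect_device *dev, void *data, uint32_t ts)
    {
        const uint16_t *depth = (const uint16_t *)data;
        if (!have_reference) {       /* first frame = the wall/background */
            memcpy(reference, depth, sizeof reference);
            have_reference = 1;
            return;
        }
        for (int i = 0; i < NPIX; i++)
            mask[i] = depth[i] && reference[i] &&
                      depth[i] < reference[i] &&
                      reference[i] - depth[i] <= NEAR_MM;
        /* next step: group mask into blobs and drop blobs under a
           pixel-count (or shape) threshold to reject the ghost outlines */
    }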
 
  Slap a depth camera up the corner of a wall, and you can turn that whole wall into a giant touchscreen with hover-depth sensing, and that's just the start.


Indeed this is a great approach; I've used it here http://www.youtube.com/watch?v=0diSk-YecT8 and here http://www.youtube.com/watch?v=dwYfVjoTQXQ. I'm currently writing a simple shooter where you actually throw stuff at the screen. I'll test it tomorrow at my local hackerspace and post a vid. Cheers

Yes, exactly.  But just using it to imitate a touch screen is limiting, as you can have a "hover screen" where intensity, size, etc. etc. can be taken from nearness to the screen, plus you can read things like velocity to/away from the screen (good for your shooter).  Imagine a virtual piano that could play pianissimo to forte... on a finger by finger basis just like the real thing by reading the velocity preceding the touch.  In one test I'm using this and doing sort of "air bongos" floating in 3D in front of the user.

Yeah, that's what I'm dealing with next week. A guy I know wants to make a full-size portrait of himself that you can actually stone to death (yeah, I know it's weird, he's an artist). I was thinking of using the timestamps provided by the Kinect to calculate velocities, is that what you use as well?
 
Yannis


Ciao!

- Lorne





--
bliss is ignorance

Lorne Covington

Jan 12, 2012, 12:39:31 PM
to openk...@googlegroups.com

On 1/12/2012 1:59 AM, Joshua Blake wrote:
>
> I have not seen anything like this. I assume you are not just talking
> about the natural falling away of the side of an object, like the
> curve of your hand at the edge.

Nope. If there is a person at 1m, I'll get that outline at about 1.5m,
nothing in between.


> If you are rendering your point cloud using shaders and are sampling
> the depth map, then you will get interpolated values. Make sure you
> get values from the depth map using a non-interpolated method, like
> the myTexture.Load() method in HLSL.

This is seen using per-pixel processing of the depth map in a DLL. I do
use shaders for other stuff, and did see white lines around the no-depth
areas until I switched MipFilter to POINT versus LINEAR (but that was a
constant issue, not intermittent). Is that doing effectively the same
thing as Load (I'm no HLSL hacker)?

Thanks, though!

- Lorne


Lorne Covington

Jan 12, 2012, 12:58:33 PM
to openk...@googlegroups.com


On 1/12/2012 5:11 AM, Γιάννης Γράβεζας wrote:
On Thu, Jan 12, 2012 at 8:33 AM, Lorne Covington <mediado...@gmail.com> wrote:
So you are using the Kinect in a similar way and you have not seen this?  It is not constant, but happens often enough (several times a minute) to be a pain.  I thought it might be a flaky Kinect, but a new one does the same thing.


Now that you say so, I have indeed noticed something like that, but in a different use case. I take the first frame received from the Kinect as a reference, and on every subsequent frame I process pixels that are up to 100mm closer to the camera than the reference pixels. This way I can track hands against the wall etc. I've noticed that as I move in front of the camera but far away from the wall, an outline of a few random pixels appears, creating a ghost of me (I'll post a pic later on). It doesn't hurt my app, as I also apply a pixel-count threshold for blobs, but the issue is definitely there. It must be some kind of averaging as you say; I have to investigate it further.

Good to know.  I was doing something similar with the blob size; a simple count helps but isn't good enough.  I was going to look at width/aspect ratio, as it usually shows up as a long thin line.

But now my interest is really piqued and I'm going to start breaking this down more.  I know all these buffers are supposed to be locked, but this smells a little like a sync issue.
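 
A sketch of such a width/aspect-ratio test, with illustrative guesses for the thresholds:

    typedef struct { int min_x, min_y, max_x, max_y, pixel_count; } blob_t;

    /* Reject blobs shaped like the ghost fringe: very elongated, or too
       sparse inside their own bounding box. Thresholds are guesses. */
    static int looks_like_edge_ghost(const blob_t *b)
    {
        int w = b->max_x - b->min_x + 1;
        int h = b->max_y - b->min_y + 1;
        int longer  = w > h ? w : h;
        int shorter = w > h ? h : w;
        return longer > 6 * shorter || 3 * b->pixel_count < w * h;
    }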



Yeah, that's what I'm dealing with next week. A guy I know wants to make a full-size portrait of himself that you can actually stone to death (yeah, I know it's weird, he's an artist). I was thinking of using the timestamps provided by the Kinect to calculate velocities, is that what you use as well?

Haha, cool!  I'm doing effectively the same thing: I'm looking at real time in my frame processing loop, and had actually just added the timestamp reference to make it more accurate (hopefully), but haven't done any real picky testing of it yet.
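 
For reference, a sketch of that finite-difference setup. The tick rate of the uint32_t timestamp libfreenect hands the depth callback isn't formally documented, so the conversion factor below is an explicit assumption to calibrate (wall-clock time is a safe fallback), and blob_depth_mm() is a hypothetical helper:

    #include <stdio.h>
    #include <stdint.h>

    #define TICKS_PER_SEC 2000000.0          /* assumption: calibrate this! */

    extern double blob_depth_mm(const uint16_t *depth);   /* hypothetical */

    static double   prev_z = -1.0;           /* mm */
    static uint32_t prev_ts;

    void on_depth_frame(const uint16_t *depth, uint32_t ts)
    {
        double z = blob_depth_mm(depth);
        if (prev_z >= 0.0 && ts != prev_ts) {
            double dt = (ts - prev_ts) / TICKS_PER_SEC;
            double v  = (z - prev_z) / dt;   /* mm/s; negative = approaching */
            if (v < -1500.0)                 /* arbitrary "throw" threshold */
                printf("throw detected: %.0f mm/s\n", v);
        }
        prev_z  = z;
        prev_ts = ts;
    }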

Good luck!

- Lorne



So Townsend

Jan 12, 2012, 1:04:48 PM
to openk...@googlegroups.com
The IR projector does cast a shadow like a point light source... the fact that it's appearing when you're "far enough from the wall" makes it seem like a shadow.

Sent from my iPhone




jeff kramer

Jan 12, 2012, 2:17:48 PM
to openk...@googlegroups.com
It could also be a mixed-pixel effect - a result of the hardware as well.

-Jeff

2012/1/12 So Townsend <so.to...@gmail.com>

Γιάννης Γράβεζας

Jan 12, 2012, 6:36:50 PM
to openk...@googlegroups.com


2012/1/12 So Townsend <so.to...@gmail.com>

The IR projector does cast a shadow like a point light source... the fact that it's appearing when you're "far enough from the wall" makes it seem like a shadow.

Sent from my iPhone


Not sure if I'm getting it right, but this doesn't have to do with the regular shadow. The issue is pixels that have an actual, but wrong, depth reading. They only appear on outlines, so they're probably artifacts from an averaging algorithm used in the Kinect.

On a happier note, I've just presented the particular use case we've talked about with Lorne at my local hackerspace. The audience wasn't really thrilled with the theory, but they sure enjoyed the practical application of it. So here it is, Tweet Hunt, a social shooter using real ammo. This is something Microsoft would never endorse; the mere thought of the lawsuits coming in for broken TV sets would drive their execs insane :P http://www.youtube.com/watch?v=tlG3pGAxztM
 
--
bliss is ignorance

Kyle McDonald

Jan 12, 2012, 11:29:46 PM
to openk...@googlegroups.com
that video is awesome. i really like the idea of throwing things as an
interaction technique. i'm also amazed that the kinect picks up a lot
of those balls (pieces of paper?) since i've found that very small
items are hard for the kinect to reconstruct (presumably because the
depth algorithm uses some locality information, like any other stereo
matching algorithm).

kyle

2012/1/12 Γιάννης Γράβεζας <wiz...@gmail.com>:
