"non-commercial deployments using Kinect for Xbox 360 that were
allowed using the beta SDK are not permitted with the newly released
software"
in other words, if you want to do something with kinect AND the soon
to be released non-beta SDK, you have to use the $250 kinect. anything
you've deployed with a $150 kinect and the beta SDK will stop being
legal in 2016, unless you migrate to the $250 kinect and the new SDK.
these kind of restrictions are so arcane.
they don't even mention, or try to clarify, the status of libfreenect,
which is responsible for 95% of the "kinect effect" they keep talking
about.
unrelated to the above, can someone clarify "near mode" on this new
kinect? all i've heard is that it can do 40-50 cm, but the current
kinect already does 40-50 cm in the right lighting. what's special
about the new version?
kyle
On Mon, Jan 9, 2012 at 11:36 PM, mankoff <man...@gmail.com> wrote:
> https://blogs.msdn.com/b/kinectforwindows/archive/2012/01/09/kinect-for-windows-commercial-program-announced.aspx
very interesting:
"non-commercial deployments using Kinect for Xbox 360 that were
allowed using the beta SDK are not permitted with the newly released
software"
in other words, if you want to do something with kinect AND the soon
to be released non-beta SDK, you have to use the $250 kinect. anything
you've deployed with a $150 kinect and the beta SDK will stop being
legal in 2016, unless you migrate to the $250 kinect and the new SDK.
these kinds of restrictions are so arcane.
they don't even mention, or try to clarify, the status of libfreenect,
which is responsible for 95% of the "kinect effect" they keep talking
about.
unrelated to the above, can someone clarify "near mode" on this new
kinect? all i've heard is that it can do 40-50 cm, but the current
kinect already does 40-50 cm in the right lighting. what's special
about the new version?
kyle
but i've never been a cocaine sniffing exec, so i don't really know
what i'm talking about.
really: i'm super curious if anyone could chime in on the genuinely
'new' features of kinect for windows. the closest thing i've seen is
from josh tweeting the amazon page:
http://www.amazon.com/exec/obidos/ASIN/B006UIS53K/34a6-20/
"has a shortened USB cable to ensure reliability across a broad range
of computers and includes a small dongle to improve coexistence with
other USB peripherals."
i also like the note:
"The sensor will only work on computers running the SDK softawre [sic]"
i wonder if that's like a standard compatibility notice, or more like a threat?
kyle
2012/1/10 Γιάννης Γράβεζας <wiz...@gmail.com>:
On Tue, Jan 10, 2012 at 12:08 AM, Kyle McDonald <ky...@kylemcdonald.net> wrote:
they don't even mention, or try to clarify, the status of libfreenect,
which is responsible for 95% of the "kinect effect" they keep talking
about.
Nor do they mention OpenNI, though they've already stated that they're not going to pursue legal action for those uses; it does void any hardware warranty.
unrelated to the above, can someone clarify "near mode" on this new
kinect? all i've heard is that it can do 40-50 cm, but the current
kinect already does 40-50 cm in the right lighting. what's special
about the new version?
We know that the Kinect for Xbox hardware returns data down to about 40cm and up to about 6m. The Kinect for Windows betas locked the range to 80cm to 4m since that is the range where the depth data values are actually accurate. Outside that range, the accuracy and resolution of reported values and the image quality degrade dramatically (e.g., more holes in the depth image). In the v1 SDK, 80cm to 4m is the default mode. Near mode in the new firmware is able to report accurate depth values from 40cm to 3m. You can switch between near mode and default mode.
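For anyone mapping that onto raw depth buffers, here's a minimal sketch of the two trust windows in plain C (the function and constants are illustrative, not SDK API, and the depth buffer is assumed to already be in millimeters):

#include <stdint.h>

/* Illustrative only: default mode trusts roughly 800-4000 mm, near mode
   roughly 400-3000 mm, per the description above. */
enum { DEPTH_INVALID = 0 };

static void clamp_to_mode(uint16_t *depth_mm, int n, int near_mode)
{
    const uint16_t lo = near_mode ? 400 : 800;
    const uint16_t hi = near_mode ? 3000 : 4000;
    for (int i = 0; i < n; i++) {
        if (depth_mm[i] < lo || depth_mm[i] > hi)
            depth_mm[i] = DEPTH_INVALID;  /* outside the trusted window */
    }
}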
In the bigger picture, the Kinect for Windows hardware is still a great deal at $249, compared to previous types of commercial 3d scanning hardware (TOF, LIDAR), as well as to the other competing commercially usable sensor, the Asus Xtion Pro. The Xtion Pro is currently $149 for depth only, or $199 for the Xtion Pro Live with depth and rgb (http://us.estore.asus.com/index.php?l=product_list&c=3009). Neither of those has audio beamforming, and they have more problems with certain lighting conditions than the Kinect.
The Kinect for Windows team is really responsive to feedback though and I'm going to point them to this thread. I'm sure they'll love to read any fair criticisms and requests. (Pretty sure cocaine is not involved though.)
Josh
This is achieved just by upgrading the firmware? Can this be done on the xbox kinect as well?
Some stability fixes and a firmware upgrade don't justify doubling the price of what's already on the market. That's ok though, it's a free market after all. But we should try to keep it decent as well.
I've got a request that I feel most of the people here would agree on. Since libfreenect is in a different league from the MS SDK (no skeletons and stuff), how about opening your driver so we can use it as well? Let's all be friends and proliferate the technology.
Some stability fixes and a firmware upgrade don't justify doubling the price of what's already on the market. That's ok though, it's a free market after all. But we should try to keep it decent as well.
With Kinect for Xbox, Microsoft also made money from game licensing and Xbox subscriptions, etc., which plays into the pricing decision. With Kinect for Windows, the sensor hardware is the only way for them to make back their R&D investment, since we'll be using it for many different purposes outside of the Xbox ecosystem and many will be making money using K4W in various ways.
I agree, it's kind of like pulling the rug out from under us to increase the price for the different use scenario after we're used to the $150 price point, but in the bigger picture there aren't many other options that would work. If you were the business decision maker at Microsoft, what would you do to make back the investment in the hardware and SDK?
Also, I'm curious if you are using Kinect for commercial or non-commercial purposes?
I've got a request that I feel most of the people here would agree on. Since libfreenect is in a different league from the MS SDK (no skeletons and stuff), how about opening your driver so we can use it as well? Let's all be friends and proliferate the technology.
Can you elaborate? Do you mean allowing the libfreenect API to communicate with the drivers installed by the Kinect SDK, or something else?
ah, ok -- so the windows SDK just hasn't been reporting <80cm so far.
but now it will, and they're marketing it as a 'feature' of the
windows kinect even though it's already been exposed through
libfreenect and openni.
let's step back once more, and look at the options:
kinect with the official SDK
- more expensive
- has previously been artificially limited (e.g., 80 vs 40 cm)
- windows only
- small community
- unpredictable licensing (e.g., 2016 cutoff)
- has only recently caught up on skeleton accuracy
+ sound localization
kinect with openni/libfreenect
+ cheaper
+ no artificial limitations
+ cross platform
+ huge community
+ clearly licensed
+ openni now supports calibration-free detection
- "voids your warranty"
i think the best thing microsoft could do would be:
- make real changes to the kinect (rather than shortening the cable or
adding a usb hub) so they have something they can feed back into the
xbox market. anyone can write a decent api for kinect, but only
microsoft has the resources to further push the tech.
- don't try to make a separate version (kinect for windows) because
i'm just not sure that the size of the market is in any way
comparable. do we have any numbers on how many kinects out there are
not plugged in to xboxes?
- get all the awesome developers who are writing the kinect SDK to
focus their energy where there is already good cross-platform momentum
and strong communities.
there are some interviews where MS says they expected the hacker scene
to happen. i can't believe this, because they were completely
unprepared. if they really expected the 'kinect effect' from the
beginning here's what would have happened:
- kinect SDK and kinect are pre-released to select devs
- then kinect SDK and kinect are released to the world
- windows users have a head start on everyone, start making crazy demos
- people reverse the SDK and start openkinect in response to the
windows-only SDK (rather than in response to the lack of an SDK)
but that's not what happened. and now they're in a really weird position :(
kyle
2012/1/10 Grant Kot <kot...@gmail.com>:
The actual v1 Kinect SDK will use the standard Microsoft support life cycle, which has a minimum of 10 years of support and no cutoff date for actual use.
aha, ok! thanks for clarifying josh. "unpredictable" was the wrong
word choice. maybe something like "archaic" or "corporate" would have
been better.
it's just the difference between something like LGPL/MIT/Apache where
the community is the support, and the license is indefinite... and MS
style EULA where the support is centralized and the license is finite.
the EULA feels "unpredictable" in comparison, but it's not the primary
feature.
kyle
For me it depends on whether the new Kinect has real performance improvements. For instance, I'm seeing transient depth artifacts* with the Kinect that are rather troublesome for my application, that I do not see with the Xtion. If MS fixed this in the new Kinect, and added the 60fps 320x240 option, I would use it instead of the Xtion for its better range and light tolerance.
But if I read things in the most pessimistic way (usually the correct way), then it looks like the new SDK will orphan Xbox Kinects, and the new Kinects will have additional anti-hack measures in place to make them work only with the SDK (at least for a while!). We'll see. If the latter is true I would avoid them even if they were hacked again, as it would indicate where MS is going and mean they would not be a reliable source for non-SDK apps.
Ciao!
- Lorne
* Frequently when a person moves toward the Kinect, I see an outline that appears further away by about a half meter. In normal applications this is not a big deal, but I use the Kinect (and now Xtion) off-axis, up in the corner of a screen looking down and sideways. So say if someone is moving their hand in front of the screen, that outline looks like another hand moving in front of the screen too. It's hard to filter out and results in false touches.
As cool as the kinect is, off-axis is really where the future lies - there are at least 10x more applications than where someone is standing or sitting back in front of a screen with the sensor positioned dead in front. Slap a depth camera up in the corner of a wall, and you can turn that whole wall into a giant touchscreen with hover-depth sensing, and that's just the start.
* Frequently when a person moves toward the Kinect, I see an outline that appears further away by about a half meter. In normal applications this is not a big deal, but I use the Kinect (and now Xtion) off-axis, up in the corner of a screen looking down and sideways. So say if someone is moving their hand in front of the screen, that outline looks like another hand moving in front of the screen too. It's hard to filter out and results in false touches.
I'm a bit confused here, do you mean the shadow?
Slap a depth camera up in the corner of a wall, and you can turn that whole wall into a giant touchscreen with hover-depth sensing, and that's just the start.
Indeed this is a great approach, I've used it here http://www.youtube.com/watch?v=0diSk-YecT8 and here http://www.youtube.com/watch?v=dwYfVjoTQXQ. I'm currently writing a simple shooter, where you actually throw stuff at the screen. I'll test it tomorrow at my local hackerspace and post a vid. Cheers
No, as that would not be translated into the point cloud. I'm talking about a fringe of pixels (depthels?) on one side of a moving object that appear to be further away than the object. This would have little or no effect on routines that just process the depth image (user/skeleton tracking), but when turned into points and rotated they are a bogus shape in space. But I will look now and see if they are always on the shadow side. Could be some funky averaging going on.
So you are using the Kinect in a similar way and you have not seen this? It is not constant, but happens often enough (several times a minute) to be a pain. I thought it might be a flaky Kinect, but a new one does the same thing.
Yes, exactly. But just using it to imitate a touch screen is limiting, as you can have a "hover screen" where intensity, size, etc. can be taken from nearness to the screen, plus you can read things like velocity to/away from the screen (good for your shooter). Imagine a virtual piano that could play pianissimo to forte on a finger-by-finger basis, just like the real thing, by reading the velocity preceding the touch. In one test I'm using this and doing sort of "air bongos" floating in 3D in front of the user.
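To make the "hover screen" idea concrete, here's a minimal sketch in plain C (names and thresholds are hypothetical) that maps a pixel's distance off the wall into a 0-1 touch/hover intensity, assuming depth and a per-pixel reference wall depth in millimeters:

#include <stdint.h>

/* A hand 150 mm out hovers faintly; a hand touching the wall reads full
   strength. The 150 mm sensing range is an arbitrary choice. */
static float hover_intensity(uint16_t depth_mm, uint16_t wall_mm)
{
    const int max_hover = 150;               /* sensing range in mm */
    int gap = (int)wall_mm - (int)depth_mm;  /* distance off the wall */
    if (gap <= 0) return 1.0f;               /* at (or past) the wall */
    if (gap >= max_hover) return 0.0f;       /* too far out to count */
    return 1.0f - (float)gap / max_hover;
}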
Ciao!
- Lorne
On 1/12/2012 1:59 AM, Joshua Blake wrote:
>
> I have not seen anything like this. I assume you are not just talking
> about the natural falling away of the side of an object, like the
> curve of your hand at the edge.
Nope. If there is a person at 1M, I'll get that outline at about 1.5M,
nothing in between.
> If you are rendering your point cloud using shaders and are sampling
> the depth map, then you will get interpolated values. Make sure you
> get values from the depth map using a non-interpolated method, like
> the myTexture.Load() method in HLSL.
This is seen using per-pixel processing of the depth map in a DLL. I do
use shaders for other stuff, and did see white lines around the no-depth
areas until I switched MipFilter to POINT versus LINEAR (but that was a
constant issue, not intermittent). Is that doing effectively the same
thing as Load (I'm no HLSL hacker)?
Thanks, though!
- Lorne
On Thu, Jan 12, 2012 at 8:33 AM, Lorne Covington <mediado...@gmail.com> wrote:
So you are using the Kinect in a similar way and you have not seen this? It is not constant, but happens often enough (several times a minute) to be a pain. I thought it might be a flaky Kinect, but a new one does the same thing.
Now that you say so I have indeed noticed something like that, but in a different use case. I take the first frame received from the kinect as a reference, and on every subsequent frame I process pixels that are up to 100mm closer to the camera than the reference pixels. This way I can track hands against the wall etc. I've noticed that as I move in front of the camera but far away from the wall, an outline of a few random pixels appears, creating a ghost of me (I'll post a pic later on). It doesn't hurt my app as I also apply a pixel count threshold for blobs, but the issue is definitely there. It must be some kind of averaging as you say, I have to investigate it further.
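A minimal sketch of that reference-frame trick in plain C (the 640x480 size, the names, and the 0-means-no-data convention are assumptions; depth in millimeters):

#include <stdint.h>
#include <string.h>

#define W 640
#define H 480

static uint16_t reference[W * H];  /* first frame: the bare wall */
static int have_reference = 0;

static void mark_touches(const uint16_t *depth, uint8_t *mask)
{
    if (!have_reference) {
        memcpy(reference, depth, sizeof reference);
        have_reference = 1;
    }
    for (int i = 0; i < W * H; i++) {
        int delta = (int)reference[i] - (int)depth[i];
        /* closer than the wall, but by no more than 100 mm */
        mask[i] = (depth[i] != 0 && delta > 0 && delta <= 100);
    }
}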
Yeah that's what I'm dealing with next week. A guy I know wants to make a full size portrait of himself that you can actually stone to death (yeah, I know it's weird, he's an artist). I was thinking of using the timestamps provided by the kinect to calculate velocities, is that what you use as well?
Good to know. I was doing something similar with blob size: a simple count helps but isn't good enough. I was going to look at width/aspect ratio, as it usually shows up as a long thin line.
But now my interest is really piqued and I'm going to start breaking this down more. I know all these buffers are supposed to be locked, but this smells a little like a sync issue.
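A minimal sketch of that size-plus-aspect-ratio test in plain C (the thresholds and names are hypothetical):

/* Reject a blob when it's too small or when its bounding box is a long
   thin line, which is how the ghost outline tends to show up. */
struct blob {
    int pixel_count;
    int min_x, max_x, min_y, max_y;  /* bounding box, inclusive */
};

static int blob_is_real(const struct blob *b)
{
    int w = b->max_x - b->min_x + 1;
    int h = b->max_y - b->min_y + 1;
    int longer  = w > h ? w : h;
    int shorter = w > h ? h : w;
    if (b->pixel_count < 50) return 0;   /* too few pixels */
    if (longer > 8 * shorter) return 0;  /* long thin outline */
    return 1;
}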
Yeah that's what I'm dealing with next week. A guy I know wants to make a full size portrait of himself that you can actually stone to death (yeah, I know it's weird, he's an artist). I was thinking of using the timestamps provided by the kinect to calculate velocities, is that what you use as well?
Haha, cool! I'm doing effectively the same thing: I'm looking at real time in my frame processing loop, and I had actually just added the timestamp reference to make it more accurate (hopefully), but haven't done any real picky testing of it yet.
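For what it's worth, a minimal sketch of velocity from per-frame timestamps in plain C (names are illustrative; timestamps assumed in microseconds, depth in millimeters):

#include <stdint.h>

/* Approach velocity of a tracked point between two frames;
   positive = moving toward the sensor/screen, in mm per second. */
static float approach_velocity(uint16_t depth_mm, int64_t stamp_us,
                               uint16_t prev_mm, int64_t prev_us)
{
    float dt = (stamp_us - prev_us) / 1e6f;  /* seconds between frames */
    if (dt <= 0.0f)
        return 0.0f;                         /* guard duplicate stamps */
    return ((float)prev_mm - (float)depth_mm) / dt;
}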
Good luck!
- Lorne
The ir projector does cast a shadow like a point light source... the fact that it's appearing when you're "far enough from the wall" makes it seem like a shadow.
Sent from my iPhone
kyle
2012/1/12 Γιάννης Γράβεζας <wiz...@gmail.com>: