Dear VR Geeks,
This is my first post here, but I have been appreciating the group's mentality for a while, even though I got into VR less than a year ago.
You seem to have appreciated the Oculus Rift on Kickstarter; well, it seems to me that another initiative deserves your attention: DUO (a Leap Motion killer), thanks to the nui group and an initiative by Code Laboratories. In short, it is a system for ultra-low-latency, ultra-precise 3D detection within a fairly limited cone.
Hi Arthur,
2013/4/19 Arthur Van Ceulen <vand...@gmail.com>
You seem to have appreciated the Oculus Rift on Kickstarter; well, it seems to me that another initiative deserves your attention: DUO (a Leap Motion killer)
proprietary and costs $$
On Friday, April 19, 2013 10:35:06 AM UTC-5, Jan Ciger wrote:
Hi Arthur,
2013/4/19 Arthur Van Ceulen <vand...@gmail.com>
You seem to have appreciated the Oculus Rift on Kickstarter; well, it seems to me that another initiative deserves your attention: DUO (a Leap Motion killer)
proprietary and costs $$
Looks like they are going down the same path as Oculus. They may have good intentions about using open-source software but it is not a priority. Just re-read these typical quotes from their website:
"We will use the funds to: Continue research and software development efforts, further integrate with Windows and support for Linux and OSX."
"In our second phase we will release the software for Windows and shortly after for Linux and OSX."
At least with Kickstarters you have a small chance of not getting frozen into some proprietary dead end, but if they don't do it to themselves by the end of their cash burn, the big daddy patent troll's shell companies will take care of them...
Hello,
On Fri, Apr 19, 2013 at 6:05 PM, Juan Mari <juanm...@gmail.com> wrote:
On Friday, April 19, 2013 10:35:06 AM UTC-5, Jan Ciger wrote:
Hi Arthur,
2013/4/19 Arthur Van Ceulen <vand...@gmail.com>
You seem to have appreciated the Oculus Rift on Kickstarter; well, it seems to me that another initiative deserves your attention: DUO (a Leap Motion killer)
proprietary and costs $$
Looks like they are going down the same path as Oculus. They may have good intentions about using open-source software but it is not a priority. Just re-read these typical quotes from their website:

I don't recall Oculus mentioning anything about open source in the past. They don't provide a Linux/OSS SDK at the moment, but from what I have seen so far, it is fairly well documented and adding support for e.g. a Linux game shouldn't be any more difficult than for Unity. So that's a bit unfair to say.
Replying in English as this group is international already, I hope you do not mind :)
I fail to see the point of the DUO a bit and have some doubts about the performance claims. It is nothing more than two PS3 Eye cameras stuck in a single case, mounted side-by-side, with added infrared filters and some LEDs around them. If they don't provide hardware shutter synchronization, it is going to be really tricky to get usable 3D tracking out of it. I can't see any synchronization connection between the two cameras in the photos, at least. The claimed 374 fps is maybe possible at some silly low resolution; otherwise USB 2.0 doesn't have enough bandwidth to go that fast on a single USB controller with two cameras connected. The applications they are showing there come from their drivers (SDK), touchlib and other projects that don't need stereo vision at all, for the most part.

I am sure the two cameras can be used for stereoscopic tracking, but their setup has quite a few problems and they aren't really showing anything that you couldn't do with two off-the-shelf PS3 Eyes already (and many people do - there is even a motion capture system built around PS3 Eye cameras ...). I don't see the need for their particular sensor enclosure - the cameras cost some 33 EUR on Amazon here, and you can likely find them even cheaper. If you need infrared, the LEDs can be had cheaply from eBay and there are tons of tutorials online on how to mod a PS3 Eye with a filter or even how to add a shutter sync. And for stereo tracking you will likely want a somewhat wider stereo base in order to have an actually usable working volume. If you mount the cameras side by side like they did, the usable working volume is going to be really small - likely comparable to the LEAP. Which is unlikely to be something that most people will want for VR work ...

Basically, for $110 + shipping you get a DIY kit with two cameras and nothing else. Not exactly a great deal - you can have pretty much the same thing today from Amazon and eBay. Moreover, the CodeLaboratories stuff may be open hardware, but it is not good for much without their software - which is proprietary and costs $$. The driver alone is around $60 for a single PC and camera if you want to use it outside their personal-use license (i.e. for research or commercial purposes).
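For what it's worth, here is a back-of-the-envelope check of the bandwidth side of that claim (a sketch only; the ~35 MB/s usable-payload figure and the assumption of uncompressed 8-bit mono frames are mine, not from the DUO page):

```python
# Rough check of how many uncompressed stereo frames per second fit through
# one USB 2.0 controller. Assumptions: 8-bit mono pixels, no compression,
# ~35 MB/s of usable payload bandwidth after protocol overhead.

USABLE_USB2_BANDWIDTH = 35e6  # bytes per second (assumed)

def max_stereo_fps(width, height, bytes_per_pixel=1, cameras=2):
    """Highest stereo frame rate the bus could sustain at a given resolution."""
    frame_bytes = width * height * bytes_per_pixel
    return USABLE_USB2_BANDWIDTH / (frame_bytes * cameras)

for w, h in [(640, 480), (320, 240), (160, 120)]:
    print(f"{w}x{h}: ~{max_stereo_fps(w, h):.0f} stereo fps max")

# 640x480: ~57 stereo fps max
# 320x240: ~228 stereo fps max
# 160x120: ~911 stereo fps max
```

So 187 stereo pairs per second (374 camera frames) only fits at roughly QVGA or below, which is what I mean by "silly low resolution".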
Sorry I got in the group too late to catch those discussions! So please excuse my naivete and let me know where my logic is off the mark.
For instance, if the CGC is mounted on the Rift, the data from the CGC will always be directly correct for where my hands should be represented in my field of view. It just doesn't depend on anything else.
I do not use the skeleton tracking with the depth cameras, that stuff is awful (though I hear it's better with the new Kinect SDK) but process the point cloud directly for lower latency and use adaptive sampling so regions-of-interest are as accurate as possible while still getting good framerates/latency.
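A minimal sketch of what I mean by adaptive sampling (simplified illustration, not my actual code): sample the depth frame coarsely everywhere and finely inside the hand/head regions of interest, so most of the per-frame budget goes where precision matters.

```python
import numpy as np

# Coarse grid everywhere for context, fine grid inside regions of interest.
def sample_point_cloud(depth, rois, coarse=8, fine=1):
    """depth: HxW depth image (0 = no reading); rois: (x0, y0, x1, y1) rectangles."""
    mask = np.zeros(depth.shape, dtype=bool)
    mask[::coarse, ::coarse] = True              # sparse global coverage
    for x0, y0, x1, y1 in rois:
        mask[y0:y1:fine, x0:x1:fine] = True      # dense coverage inside each ROI
    ys, xs = np.nonzero(mask & (depth > 0))      # drop invalid pixels
    return np.column_stack((xs, ys, depth[ys, xs]))   # (u, v, z) samples

# The (u, v, z) samples are then back-projected through the camera intrinsics
# to get metric 3D points for the head and hand estimates.
```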
So from the depth cameras I get world head AND hands positions. The weak spot here would be the Rift, as its orientation errors would skew the world hand positions as seen by the CGC, but that can be correlated with the head/hand positions from the depth cameras. Seems to me that the CGC vs. depth camera hand position data could be used to correct the Rift data, not add errors to it.
So I don't see it as an accumulation of errors, but rather essentially two systems, the Rift/CGC, and the depth cameras, that could be integrated, overlaying the CGC finger data onto the hand positions. I certainly do see how calibration will be a real job, though.
Arthur, I do wonder whether Codelabs are actually paying you for this promotion. There is nothing open or free about the device you are pushing (apart from the plastic). If you aren't associated with Codelabs, I think you need to step back and actually see past the hype with a critical look.
However, feel free to invest in the DUO if you think it is worth it. I don't think it is, and considering that they barely made half of their target with only 3 days left, a lot of other people think the same.
I agree, there is no point in buying their DIY kit; the point is to support an open movement around 3D interaction for everyone.
Well, that's a noble goal, but I am not sure how giving money to yet another company that has yet another solution looking for a problem helps that ...
And yes, quoting the Kickstarter page, it will be "with an open source Driver, SDK and examples".
You need to read the whole thing: "Although we believe in it being open, this application will *not be immediately released* as open source." (emphasis mine). It is about two pages later.
Codelaboratories has been promising a Linux port of their SDK for years now (not even an open-source one). Considering that they are actually selling their drivers and SDKs for the PS3 Eye, it will likely be a good while before you see anything open source, as that would directly cannibalize their current product line.
There are some open source apps like CCV and touchlib, but Codelaboratories has nothing to do with those, AFAIK.
Well, that is their current business model. They didn't say anything else so far and I guess they need to pay the bills too.
I think the business model you have described will be used only if the project isn't funded on Kickstarter.
Moreover, there is an electronic card (whose plans will be free) in the kit, and "Yes, the two PS3 Eye cameras are synchronized both in hardware and software so that every frame is captured simultaneously, resulting in up to 187 stereo frames per second or a total of 374 fps. In that particular video we used 160 stereo frames per second."
Where did you find that? The Kickstarter page doesn't mention anything of the kind - do you have some insider info?
Ehm, I hope you are serious - so basically I should donate $10 to a commercial company to develop their product so that I can feel good about being named as a supporter? LOL :)
The money will be used for open software development (and the industrialization process, in order to make the cheapest kit possible available), and it seems like the software is where most of the innovation is. For example, before the Kickstarter launch, and unlike Leap Motion today, they already had the detection algorithm for the two sides of a hand.
Oh come on. I am the last one to defend Leap, but this is ridiculous. So you don't like the closed nature of the Leap but you are drumming up support for another commercial company that is developing a straight knockoff of it, equally closed (by their own admission).
I *do hope not*. That would be a disaster, because no matter how good (or bad) the SDK is, it would limit the development to that one toolkit and device. That is like saying that the Microsoft Windows monopoly on the desktop is good for the development of computing ... I don't want to end up in a world where the only thing understood as "3D interaction" would be waving hands over that sort of gadget.
> I hope
> the duo SDK to become for 3D interaction what MT4J is for multi-touch!
As can Kinect, any two cameras, two PS3 eyes (you can even use that same software with them), etc. The capabilities that DUO is promising are not at all unique to the hardware. Stereoscopic tracking/reconstruction is a mature field, there are plenty of tools available for that even for DIY if you want.
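To make the point concrete, here is a minimal DIY triangulation sketch with OpenCV. The intrinsics, baseline and pixel coordinates are made-up placeholders (in practice they come from cv2.stereoCalibrate and from whatever blob/feature detector you use), and nothing in it is specific to the DUO hardware:

```python
import numpy as np
import cv2

# Triangulate one matched feature seen by two calibrated cameras (e.g. a pair
# of PS3 Eyes mounted side by side). All numbers below are illustrative.

K = np.array([[600.0, 0, 320],     # placeholder intrinsics (fx, fy, cx, cy)
              [0, 600.0, 240],
              [0, 0, 1]])
baseline = 0.06                    # assumed 6 cm between the camera centres

# Projection matrices P = K [R | t], with the left camera as the origin.
P_left = K @ np.hstack((np.eye(3), np.zeros((3, 1))))
P_right = K @ np.hstack((np.eye(3), np.array([[-baseline], [0.0], [0.0]])))

def triangulate(pt_left, pt_right):
    """pt_left / pt_right: (x, y) pixel coordinates of the same feature."""
    p4 = cv2.triangulatePoints(P_left, P_right,
                               np.float32(pt_left).reshape(2, 1),
                               np.float32(pt_right).reshape(2, 1))
    return (p4[:3] / p4[3]).ravel()          # homogeneous -> metric XYZ

print(triangulate((340, 250), (310, 250)))   # a point roughly 1.2 m in front
```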
I also like the DIY philosophy a lot, as the DUO could be used for 3D scanning, face recognition, or anything else, since it is open and since tweaking the PS3 Eye lenses could let us explore other uses at different ranges.
Arthur Van Ceulen
Personal e-mail: vand...@gmail.com
Postal address: 7 rue du Grand Ferré, 60200 Compiègne
Hello,
Sorry I got in the group too late to catch those discussions! So please excuse my naivete and let me know where my logic is off the mark.
For instance, if the CGC is mounted on the Rift, the data from the CGC will always be directly correct for where my hands should be represented in my field of view. It just doesn't depend on anything else.
Yes, however, that is not all that helpful if you want to actually interact with the 3D environment. For that you need the hands in the "world coordinates", not relative to your (rotating) head.
I do not use the skeleton tracking with the depth cameras, that stuff is awful (though I hear it's better with the new Kinect SDK) but process the point cloud directly for lower latency and use adaptive sampling so regions-of-interest are as accurate as possible while still getting good framerates/latency.
OK. The skeleton tracking doesn't work in this setup, because the SDKs assume a person standing up and then use model fitting to find the user and fit the skeleton. If you are sitting down, your silhouette is a lot different from what it expects, so it doesn't work well.
So from the depth cameras I get world head AND hands positions. The weak spot here would be the Rift, as its orientation errors would skew the world hand positions as seen by the CGC, but that can be correlated with the head/hand positions from the depth cameras. Seems to me that the CGC vs. depth camera hand position data could be used to correct the Rift data, not add errors to it.
So I don't see it as an accumulation of errors, but rather essentially two systems, the Rift/CGC and the depth cameras, that could be integrated, overlaying the CGC finger data onto the hand positions. I certainly do see how calibration will be a real job, though.

This could work, but in order to correlate the depth camera data with the CGC data, you need to transform the coordinate system of the CGC into the depth camera coordinate system (or vice versa) - they have to be in the same coordinate system before you can do anything. The Rift/CGC combo alone will not give you world coordinates of the hands, only coordinates relative to your head. The depth cameras give you "absolute" position in space.
To get an absolute position in space for your hands using the Rift/CGC, you will get a matrix that gives you the *local* (head-referenced) position. Then you must multiply this by the matrix that gives you position of your head in the world space. That can only come from the depth cameras (Xtion/Kinect) in your case. Or you can use something like Hydra or Gametrak.
Once you have this world matrix, you can try to do some Kalman filtering or something similar to improve the accuracy of the hand position over plain depth cameras, because they are now in the same coordinate frame.
So you will have to chain the transformations, because the only "global" reference you have is the "kinect". This is where I was thinking about error accumulation - the matrix multiplication chain required to change the coordinate systems. Each of the matrices is going to be noisy (as they come from noisy sensors), so the errors will be multiplied as well.
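In rough notation, the chain looks something like this (a sketch with made-up numbers, just to show where the matrices come from and how they multiply):

```python
import numpy as np

# World-space hand position from the Rift/CGC/depth-camera combination:
#   world_from_hand = world_from_head * head_from_hand
# where head_from_hand comes from the CGC, the rotation part of world_from_head
# from the Rift IMU, and its translation part from the depth camera (Kinect/Xtion).

def pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

R_head = np.eye(3)                          # head orientation (Rift), here identity
p_head = np.array([0.0, 1.2, 2.0])          # head position in the room (depth camera)
p_hand_local = np.array([0.1, -0.2, 0.4])   # hand relative to the head (CGC)

world_from_head = pose(R_head, p_head)
head_from_hand = pose(np.eye(3), p_hand_local)

world_from_hand = world_from_head @ head_from_hand
print(world_from_hand[:3, 3])               # hand at [0.1, 1.0, 2.4] in world space

# Each factor is noisy: a Rift orientation error of a few degrees moves the
# world-space hand by roughly (error in radians) x (head-to-hand distance),
# which is the error accumulation I am referring to. A Kalman filter (or even
# a simple inverse-variance blend) can then fuse this estimate with the hand
# position seen directly by the depth cameras, once both are in world space.
```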
Well, duh! But for representing my hands properly in my view that is all I need. Just talking about that, having my arms/hands in the view is WAY more realistic than no hands whatsoever.
On 4/22/2013 5:03 AM, Jan Ciger wrote:
Hello,
Sorry I got in the group too late to catch those discussions! So please excuse my naivete and let me know where my logic is off the mark.
For instance, if the CGC is mounted on the Rift, the data from the CGC will always be directly correct for where my hands should be represented in my field of view. It just doesn't depend on anything else.
Yes, however, that is not all that helpful if you want to actually interact with the 3D environment. For that you need the hands in the "world coordinates", not relative to your (rotating) head.
I would like to put some more information into the debate; please correct me if I'm wrong. The natural user interface (nui) group is a worldwide DIY and open-source community, and I hope that, like me, you don't doubt its members' good faith. Code Laboratories was founded in 2008 by Alexander Popovich and Christian Moore (founder of the nui group in 2006). Its initial purpose was to give the nui group industry-quality drivers for DIY projects, as the whole community was asking for them. That of course required full-time work, so they built a company and a business model in order to make a living. I suppose most of the early customers were nui group members who appreciated the initiative. Beyond that, they have continued to be administrators of the nui community and have participated in building MT4J, touchlib, CL NUI...

I would be very pleased to know what information makes you think Code Laboratories is such a "bad company". I think they share my "noble goal" rather than chasing business.
Now let me tell you a short story that happened just after Leap Motion was funded and founded. They had the money from investors, the company structure, the idea (maybe even some patents) and this easy-to-build device. And I think you will agree that this is nothing: all the cool stuff comes from the post-processing algorithms, the quality of the SDK that you give to application developers, the synchronization and other really difficult things. So then they came to the nui group forums looking for the engineers they needed, proposing that they "make history together", as if they were making a gift to the world of technology. Pitching jobs at a closed-software, business-driven company to an open-source, DIY community is very, very, very stupid.
As they already had a hand-detection algorithm made four years earlier, and the device is so easy to build, the nui group reacted by making a joke video (directly parodying the Leap Motion video) with two PS3 Eyes. Months later, Code Laboratories turned the joke into a real project, this time called DUO 3D, which is supported by the whole nui group.
Knowing this story, I think you understand why they can't release it as open source now: Leap Motion needs every piece of software engineering it can get and would take anything from it. Everyone wants this engineering to benefit the communities rather than the Leap Motion company.
I agree, we cannot really trust them. But we can trust Kickstarter and the nui group to make them keep their promises.
You are wrong; sorry, but please verify your information before posting. The co-founders of Code Laboratories are the founder and a core developer of CCV. Here is the official source. And they made it while they were working at Code Laboratories.
Sorry, but if they sell a product or a kit like the DUO 3D, they won't have the same business model, which allows them to make it completely open source. And I agree with a commercial license to keep Leap Motion from stealing it.
Moreover, there is an electronic card (whose plans will be free) in the kit, and "Yes, the two PS3 Eye cameras are synchronized both in hardware and software so that every frame is captured simultaneously, resulting in up to 187 stereo frames per second or a total of 374 fps. In that particular video we used 160 stereo frames per second."
Where did you find that? The Kickstarter page doesn't mention anything of the kind - do you have some insider info?
Look at duo3d.com, the official website... And I already said that their communication is quite bad.
The money will be used for open software development (and the industrialization process, in order to make the cheapest kit possible available), and it seems like the software is where most of the innovation is. For example, before the Kickstarter launch, and unlike Leap Motion today, they already had the detection algorithm for the two sides of a hand.
Yes, I am serious; I give money to open-source Kickstarter projects made by individuals or companies without getting anything back. And sorry, but "commercial company" is not how I think of Code Laboratories.
Oh come on. I am the last one to defend Leap, but this is ridiculous. So you don't like the closed nature of the Leap but you are drumming up support for another commercial company that is developing a straight knockoff of it, equally closed (by their own admission).
I completely disagree.
I disagree here too, as an open SDK could be linked to any device driver, and they are the best at building drivers. And sorry, but "waving hands" is what Leap Motion offers; DUO 3D can be used for object scanning, Kinect-like applications (yes, with some more DIY, but still with their sync hardware, drivers, SDK...), etc.
Yes, capabilities mostly come from software, where Code Laboratories and the nui group are experts. Who wouldn't prefer this software working directly with a device that is open to any modification and that we could build ourselves?
Maybe neither Leap Motion nor DUO 3D has any future. But it is false to say that the two companies or projects are essentially the same.
Yes, but that's just the normal VR point-of-perspective problem which is working quite nicely using the depth camera for head position tracking; the hands don't figure into that problem at all. The hands are locked to that point-of-perspective by the CGC being mounted on that head.
The hands would not modulate randomly in size any more than they would with the "real-world" application of the CGC being mounted over your monitor. So if the CGC works there, it works here, as by being mounted to the Rift it is "mounted" to my view, same as the real-world case.
Yes, but that's just the normal VR point-of-perspective problem which is working quite nicely using the depth camera for head position tracking; the hands don't figure into that problem at all. The hands are locked to that point-of-perspective by the CGC being mounted on that head.
If the depth camera is monitor-mounted, then you can of course do object-tracking of the head, but I think we are discussing a head-mounted application? If so, how do you plan to use the depth camera to resolve head position? There are some mature SLAM algorithms, but ego-motion is certainly not yet solved for the highly-precise and real-time case.
Basically, without rock-solid ego-motion tracking, you need to measure head position externally for this to definitely work.
I'm afraid I still do not see the disconnect. I am tracking the head in the world with depth cameras, which gives me the point of perspective. The rift will give me orientation for my view. The CGC on the Rift gives me view-relative hand position. I can test for hand and virtual world object intersection. Where is the problem?
I'd be worried about the propagation error between all of these transforms and multiple tracking methods, but I still think it's worth trying.
Not what I am talking about. I can test for hand position in view space, and the virtual button position in view space, to test for intersection. And these are all just transforms anyway, it does not really matter which way the transforms go.
Then it should be easy to point out my error! (;^}) Again, it does not matter which way I do the transforms to test for collisions, whether I transform my hands into world space or the world into hand space. No loss of precision either way.
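Just to spell out what I mean (a toy sketch with made-up numbers): as long as the view transform is rigid, the hand-vs-button distance is identical whether the test is done in world space or in view space.

```python
import numpy as np

# Toy check: a rigid transform (rotation + translation) preserves distances,
# so the hand/button intersection test gives the same answer in either space.
# All poses and points below are illustrative only.

def rigid(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

theta = np.radians(30)                       # some head yaw
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
view_from_world = rigid(R, np.array([0.3, -1.2, 0.5]))

button_world = np.array([0.20, 1.0, 1.50, 1.0])   # homogeneous world points
hand_world   = np.array([0.25, 1.0, 1.48, 1.0])

button_view = view_from_world @ button_world
hand_view   = view_from_world @ hand_world

print(np.linalg.norm(hand_world[:3] - button_world[:3]))   # ~0.0539
print(np.linalg.norm(hand_view[:3] - button_view[:3]))     # same, ~0.0539
```

(The separate question raised above is whether the head pose feeding that view transform is itself accurate; that is where the transform chaining and its noise come in.)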
Please bear with me. If my head is tracked, and I am using that to present the view of the virtual world button, it does not matter to me if that representation has errors in it, only that I can line my hand up with it.
I'm afraid I still do not see the disconnect. I am tracking the head in the world with depth cameras, which gives me the point of perspective. The rift will give me orientation for my view. The CGC on the Rift gives me view-relative hand position. I can test for hand and virtual world object intersection. Where is the problem?
I think this is the important concept: even if my world head position is wrong, so is what I'm seeing, and so is my hand by the same amount.
My hands will ALWAYS be rendered correctly in my view, to the accuracy of the CGC, as that is relative to the head position, no matter where that is. And since the view of the world will be rendered for that head position, errors and all, my hands and the position of the objects in my view track together ON TOP of any head/orientation errors.