3D DUO: The World's First DIY 3D Sensor


Arthur Van Ceulen

Apr 19, 2013, 10:41:08 AM
to vr-g...@googlegroups.com
Dear VR Geeks,

This is my first time writing to you, but I have appreciated your mindset for a while now, even though I set foot in VR less than a year ago.

You apparently liked the Oculus Rift on Kickstarter; well, it seems to me that another initiative deserves your attention: DUO (a Leap Motion killer), born out of the NUI Group and an initiative by Code Laboratories. In short, it is a system for ultra-low-latency, ultra-precise 3D detection within a relatively limited cone.

Open-source software with access to the point cloud, modular DIY hardware; in short, everything a good hobbyist needs to create good VR games ;-).
I don't know whether you have been able to try a Leap Motion yet; in any case, according to my professor Indira Thouvenin and her PhD students at UTC, it is apparently not that great after all: it does not live up to current expectations, and the project is so closed around its investors, its API and its hardware that all you can do is retrieve 3D hand-gesture events... The fact remains that Leap Motion will manage to build an ecosystem that lets application developers make a living, like Google's Play Store or Apple's App Store, so plenty of apps will be developed for it. So, to push Leap Motion toward innovation that actually serves users, what better than giving it a direct competitor?! Not to mention that the NUI Group's expertise completely outclasses Leap Motion's; Leap Motion came to them asking for help and was answered with an outright spoof of their video: hardware "too easy to build" and "a four-year-old idea".

In my opinion, the world of 3D interaction is ready to take off and now is the time to launch it, but unfortunately being an expert does not necessarily mean being an expert in communication, and I really think DUO needs a boost to reach its Kickstarter goal...
So if you are convinced, do not hesitate to support them directly, but above all, please spread the word everywhere you know VR enthusiasts, even if you are not interested yourself, because I believe the future of a technology is being decided today.

Thank you for reading, and long live our passions!

Arthur

PS: see greyna.eu to learn more about me, and in particular my only VR project, The Wonderland Builder.

Jan Ciger

Apr 19, 2013, 11:35:06 AM
to vr-g...@googlegroups.com
Hi Arthur, 

2013/4/19 Arthur Van Ceulen <vand...@gmail.com>

Dear VR Geeks,

This is my first time writing to you, but I have appreciated your mindset for a while now, even though I set foot in VR less than a year ago.

You apparently liked the Oculus Rift on Kickstarter; well, it seems to me that another initiative deserves your attention: DUO (a Leap Motion killer), born out of the NUI Group and an initiative by Code Laboratories. In short, it is a system for ultra-low-latency, ultra-precise 3D detection within a relatively limited cone.

Replying in English as this group is international already, I hope you do not mind :)

I somewhat fail to see the point of the DUO, and I have some doubts about the performance claims. It is nothing more than two PS3 Eye cameras stuck in a single case, mounted side by side, with added infrared filters and some LEDs around them.

If they don't provide hardware shutter synchronization, it is going to be really tricky to get usable 3D tracking out of it; I can't see any synchronization connection between the two cameras in the photos, at least. The claimed 374 fps may be possible at some silly low resolution; otherwise USB 2.0 doesn't have enough bandwidth to go that fast with two cameras connected to a single USB controller. The applications they are showing are from their drivers (SDK), touchlib and other projects that, for the most part, don't need stereo vision at all.
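
As a rough sanity check of the bandwidth claim (a back-of-the-envelope sketch assuming uncompressed 8-bit Bayer frames and a ~35 MB/s practical ceiling for USB 2.0 - both of these are my assumptions, not DUO or PS3 Eye specs):

    # Back-of-the-envelope USB 2.0 bandwidth check (Python).
    # Assumes 1 byte per pixel (raw Bayer) and no compression; real camera
    # modes and real-world USB 2.0 throughput may differ.
    def mb_per_s(width, height, fps, bytes_per_pixel=1):
        return width * height * bytes_per_pixel * fps / 1e6

    USB2_PRACTICAL = 35.0  # MB/s, rough usable throughput on one controller

    for w, h, fps in [(640, 480, 187), (320, 240, 187)]:
        two_cams = 2 * mb_per_s(w, h, fps)
        verdict = "fits" if two_cams <= USB2_PRACTICAL else "exceeds"
        print(f"{w}x{h} @ {fps} fps per camera: {two_cams:.0f} MB/s for two cameras "
              f"({verdict} ~{USB2_PRACTICAL:.0f} MB/s)")

At 640x480 the two streams would need roughly 115 MB/s, so the headline rate only works out at a much lower resolution.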

I am sure the two cameras can be used for stereoscopic tracking, but their setup has quite a few problems, and they aren't really showing anything you couldn't do with two off-the-shelf PS3 Eyes already (and many people do already - there is even a motion capture system built around PS3 Eye cameras). I don't see the need for their particular sensor enclosure: the cameras cost some 33 EUR on Amazon here, and you can likely find them even cheaper. If you need infrared, the LEDs can be had cheaply from eBay, and there are tons of tutorials online on how to mod a PS3 Eye with a filter or even how to add a shutter sync. For stereo tracking you will likely want a somewhat wider stereo base in order to get a usable working volume; if you mount the cameras side by side like they did, the usable working volume is going to be really small - likely comparable to the Leap's - which is unlikely to be something most people will want for VR work ...
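
To illustrate why the stereo base matters, here is a minimal sketch using the standard stereo relation Z = f * B / d; the focal length and baselines below are illustrative guesses, not PS3 Eye or DUO specifications:

    # Depth from disparity: Z = f * B / d.  One pixel of disparity error then
    # corresponds to roughly dZ ~ Z^2 / (f * B), so precision degrades with the
    # square of the distance and improves linearly with a wider baseline B.
    f_px = 600.0                      # focal length in pixels (assumed)
    for baseline_m in (0.04, 0.15):   # side-by-side case vs. a wider DIY mount
        for z_m in (0.3, 0.6, 1.0):
            disparity_px = f_px * baseline_m / z_m
            err_cm = 100.0 * z_m ** 2 / (f_px * baseline_m)
            print(f"B={baseline_m:.2f} m, Z={z_m:.1f} m: "
                  f"d={disparity_px:.0f} px, ~{err_cm:.1f} cm per pixel of disparity error")

With a ~4 cm base the per-pixel depth error at 1 m is already several centimetres, which is why a wider base buys you working volume.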

Basically, for $110 + shipping you get a DIY kit with two cameras and nothing else. Not exactly a great deal - you can have pretty much the same thing today from Amazon and eBay. Moreover, the CodeLaboratories stuff may be open hardware, but it is not good for much without their software, which is proprietary and costs $$. The driver alone is around $60 for a single PC and camera if you want to use it outside their personal-use license (i.e. for research or commercial purposes).

Regards,

Jan


Juan Mari

Apr 19, 2013, 12:05:20 PM
to vr-g...@googlegroups.com
On Friday, April 19, 2013 10:35:06 AM UTC-5, Jan Ciger wrote:
Hi Arthur, 

2013/4/19 Arthur Van Ceulen <vand...@gmail.com>
You apparently liked the Oculus Rift on Kickstarter; well, it seems to me that another initiative deserves your attention: DUO (a Leap Motion killer)
 
proprietary and costs $$

Looks like they are going down the same path as Oculus. They may have good intentions about using open-source software but it is not a priority. Just re-read these typical quotes from their website:

"We will use the funds to: Continue research and software development efforts, further integrate with Windows and support for Linux and OSX."

"In our second phase we will release the software for Windows and shortly after for Linux and OSX."

At least with Kickstarters you have a small chance of not getting frozen into some proprietary dead end, but if they don't do it to themselves by the end of their cash burn, the big daddy patent troll's shell companies will take care of them...
 

Jan Ciger

Apr 19, 2013, 12:16:25 PM
to vr-g...@googlegroups.com
Hello,

On Fri, Apr 19, 2013 at 6:05 PM, Juan Mari <juanm...@gmail.com> wrote:
On Friday, April 19, 2013 10:35:06 AM UTC-5, Jan Ciger wrote:
Hi Arthur, 

2013/4/19 Arthur Van Ceulen <vand...@gmail.com>


You apparently liked the Oculus Rift on Kickstarter; well, it seems to me that another initiative deserves your attention: DUO (a Leap Motion killer)
 
proprietary and costs $$

Looks like they are going down the same path as Oculus. They may have good intentions about using open-source software but it is not a priority. Just re-read these typical quotes from their website:

I don't recall Oculus mentioning anything about open source in the past. They don't provide a Linux/OSS SDK at the moment, but from what I have seen so far, it is fairly well documented, and adding support for e.g. a Linux game shouldn't be any more difficult than for Unity. So that's a bit unfair to say. 
 

"We will use the funds to: Continue research and software development efforts, further integrate with Windows and support for Linux and OSX."

Well, CodeLabs have been promising Linux drivers and a port of their SDK for the PS3 Eye for a long time now, and nothing. Fortunately, Linux has a good native driver for it, just not as full featured as the CL one. It is certainly usable, though. However, Linux support != open source. Don't read what isn't there :) (e.g. Matlab exists for Linux too, and that is hardly open source software ...)
 

"In our second phase we will release the software for Windows and shortly after for Linux and OSX."

At least with kickstarters you have a small chance not to get frozen-in some proprietary dead-end, but if they don't do it to themselves by the end of their cash-burn, the big daddy patent troll's shell companies will take care of them...

I think you are reading too much into it - there is only a mention of the thing being open hardware, nothing more. All that means is that they publish the plans/design files so anyone can build their own enclosure for the cameras; basically, those are the plans for that 3D-printed piece of plastic. The real work happens in the software, and there is no promise anywhere that it will be open source. It mostly won't be. Basically, when you buy their DUO you get a demo/time-limited version of their software, and if you want to use it for real you will likely need to pay for a full license later. Which is an OK business model, I am just not going to buy that myself.

Regards,

Jan



Juan Mari

Apr 20, 2013, 10:16:39 AM
to vr-g...@googlegroups.com


On Friday, April 19, 2013 11:16:25 AM UTC-5, Jan Ciger wrote:
Hello,

On Fri, Apr 19, 2013 at 6:05 PM, Juan Mari <juanm...@gmail.com> wrote:
On Friday, April 19, 2013 10:35:06 AM UTC-5, Jan Ciger wrote:
Hi Arthur, 

2013/4/19 Arthur Van Ceulen <vand...@gmail.com>


You apparently liked the Oculus Rift on Kickstarter; well, it seems to me that another initiative deserves your attention: DUO (a Leap Motion killer)
 
proprietary and costs $$

Looks like they are going down the same path as Oculus. They may have good intentions about using open-source software but it is not a priority. Just re-read these typical quotes from their website:

I don't recall Oculus mentioning anything about open source in the past. They don't provide Linux/OSS SDK at the moment, but from what I have seen so far, it is fairly well documented and adding support for e.g. a Linux game shouldn't be any more difficult than for Unity. So that's a bit unfair to say. 
 
Good luck to anybody porting that to OpenGL, and even more so with the tracker code, regardless of how "well documented" it is...
 

Jan Ciger

Apr 20, 2013, 6:19:32 PM
to vr-g...@googlegroups.com
On 04/20/2013 04:16 PM, Juan Mari wrote:

>
> Good luck for anybody porting that to OpenGL, and even more so with
> tracker code, regardless of how "well documented" it is...

Juan, did you actually see the SDK documentation? Or do you just have a
grudge against Oculus for whatever mysterious reason?

I see no reason why those HLSL shaders from the doc can't be ported to
OpenGL/GLSL. You can even do it in software if you don't like shaders
(e.g. OpenCV uses very similar warping routines for camera distortion
correction). And if you don't want to port, the necessary math is
described in the doc too. Any semi-competent OpenGL programmer can set
up side-by-side stereo with two viewports as well, so where is the
problem? What "good luck" are you talking about?
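
For what it's worth, here is a minimal CPU-side sketch of that kind of radial warp (plain NumPy, nearest-neighbour sampling; the polynomial coefficients are placeholders, not the Rift's actual distortion constants):

    import numpy as np

    def radial_warp(img, k=(1.0, 0.22, 0.24), center=(0.5, 0.5)):
        # Sketch of the per-eye radial ("barrel") pre-distortion a shader would
        # apply: sample the source image at a radius scaled by a polynomial in r^2.
        h, w = img.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
        u = xs / (w - 1) - center[0]
        v = ys / (h - 1) - center[1]
        r2 = u * u + v * v
        scale = k[0] + k[1] * r2 + k[2] * r2 * r2
        src_x = np.clip(((u * scale + center[0]) * (w - 1)).round().astype(int), 0, w - 1)
        src_y = np.clip(((v * scale + center[1]) * (h - 1)).round().astype(int), 0, h - 1)
        return img[src_y, src_x]

The same lookup can be expressed as a GLSL fragment shader or as a precomputed remap table; this is just the math, not anyone's official implementation.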

In fact, I am planning to do just that once my Rift is here, because I
need to use it with OpenSceneGraph. Even if Palmer's company was
providing an OpenGL SDK already, I would still have to redo the
algorithm to integrate it properly with OSG, so not having that is no
big deal.

Regarding the tracker, I will withhold my judgement until I have the Rift
in hand, but I don't expect major issues with that either.

Regards,

Jan

Julian

Apr 21, 2013, 2:50:55 AM
to vr-g...@googlegroups.com
Hi Jan

I take it the Leap's 1 metre range is why it has limited use in VR, or are there other reasons?
I notice HP plans to try them in laptops. I suppose if it helps people to accept and expect NUI, that could be good.
Regards,
Julian

Jan Ciger

Apr 21, 2013, 9:05:32 AM
to vr-g...@googlegroups.com
On 04/21/2013 08:50 AM, Julian wrote:
> Hi Jan
>
> I take it the Leap's 1 metre range is why it has limited use in VR, or
> are there other reasons?

From my quick tests, the Leap doesn't have a 1 m range. It is perhaps
approximately 50x50x50 cm (yeah, technically it is a frustum, not a cube,
but I hope you get my point). And that cannot really be improved
significantly without making the device a lot larger (the cameras are very
close together). The DUO with the PS3 Eye cameras will probably have a
bit more range, because the device is slightly larger, but not
significantly more.

It is a very close-range device; even using both hands together is going
to be a major challenge in that volume. So something like that would be
really hard to use with an HMD, when you can't see where your hands are
relative to the sensor. Using it with projector-based setups (large
walls, CAVE, etc.) is not practical either; you need more working volume
when working standing up.

> I notice HP plans to try them in laptops. I suppose if it helps people
> to accept and expect NUI, that could be good.

Possibly, who knows. I have no idea what their plans for it are.
However, if they just bundle it with the device and the only thing it
can do out of the box is emulate a mouse by waving hands in the
air, it is likely going to end up like many other gimmicks that were
bundled with laptops before - e.g. fingerprint scanners. Nobody will use it.

In my personal opinion, they have some major issues trying
to find a usable application niche for the device after that
enormous hype. Right now they have a solution (a gadget, not even
officially released yet!), but what was the problem? So they are
marketing it to PC makers who are desperate for anything that could
help them reverse falling PC sales.

Regards,

Jan

Lorne Covington

Apr 21, 2013, 1:14:55 PM
to vr-g...@googlegroups.com

I have not used the Leap, but I have experimented with the Intel/Creative Gesture Camera, a time-of-flight (as I understand it) camera that fits in between the Leap/Duo and structured-light cameras like the Primesense as far as interaction volume goes.

I am currently experimenting with it strapped to my forehead in order to track my arms and hands in front of me, as it operates out to about a meter, and the hand-tracking SDK works fine with "inverted hands" (unlike the OpenNI/Kinect SDK skeleton tracking).  The idea is that when my Rift arrives, I will mount the CGC on the front of it and then be able to add my tracked hands into the VR for interaction, since you can't see your real hands while wearing the Rift.

Merging that data with body/point-of-perspective tracking from my Xtions and Rifts should enable a pretty complete multi-person VR space.  We'll see how much the Xtions and the CGC interfere with each other, but I'd be surprised if it's much.

Ever onward!

- Lorne

P.S. - View from the CGC: [image attachment]




http://noirflux.com

Arthur Van Ceulen

Apr 21, 2013, 4:34:31 PM
to vr-g...@googlegroups.com
On Friday, April 19, 2013 at 17:35:06 UTC+2, Jan Ciger wrote:
Replying in English as this group is international already, I hope you do not mind :)
Sorry, I thought discussion here would be in French, since the last Laval Virtual posts here are in French.
I have no problem with English, if you don't mind my language level ;-)

I'll try to answer everyone's replies, but be warned that I have not understood everything... I am not as much of a specialist as you all seem to be ^^

I somewhat fail to see the point of the DUO, and I have some doubts about the performance claims. It is nothing more than two PS3 Eye cameras stuck in a single case, mounted side by side, with added infrared filters and some LEDs around them.

If they don't provide hardware shutter synchronization, it is going to be really tricky to get usable 3D tracking out of it; I can't see any synchronization connection between the two cameras in the photos, at least. The claimed 374 fps may be possible at some silly low resolution; otherwise USB 2.0 doesn't have enough bandwidth to go that fast with two cameras connected to a single USB controller. The applications they are showing are from their drivers (SDK), touchlib and other projects that, for the most part, don't need stereo vision at all.

I am sure the two cameras can be used for stereoscopic tracking, but their setup has quite a few problems, and they aren't really showing anything you couldn't do with two off-the-shelf PS3 Eyes already (and many people do already - there is even a motion capture system built around PS3 Eye cameras). I don't see the need for their particular sensor enclosure: the cameras cost some 33 EUR on Amazon here, and you can likely find them even cheaper. If you need infrared, the LEDs can be had cheaply from eBay, and there are tons of tutorials online on how to mod a PS3 Eye with a filter or even how to add a shutter sync. For stereo tracking you will likely want a somewhat wider stereo base in order to get a usable working volume; if you mount the cameras side by side like they did, the usable working volume is going to be really small - likely comparable to the Leap's - which is unlikely to be something most people will want for VR work ...

Basically, for $110 + shipping you get a DIY kit with two cameras and nothing else. Not exactly a great deal - you can have pretty much the same thing today from Amazon and eBay. Moreover, the CodeLaboratories stuff may be open hardware, but it is not good for much without their software, which is proprietary and costs $$. The driver alone is around $60 for a single PC and camera if you want to use it outside their personal-use license (i.e. for research or commercial purposes).
I agree, there is no point in buying their DIY kit; the point is to support an open movement around 3D interaction for everyone.
And yes, quoting the Kickstarter page, it will come "with an open source Driver, SDK and examples". I think the business model you described will only be used if the project isn't funded on Kickstarter.
Moreover, there is an electronic board (whose plans will be free) in the kit, and: "Yes, the two PS3Eye cameras are synchronized both in hardware and software so that every frame is captured simultaneously resulting in up to 187 stereo frames per second or total of 374 fps. In that particular video we used 160 stereo frames per second.".
And the purpose of pre-buying a kit on Kickstarter is to announce yourself as a future user and to support an emerging community, rather than to get a cheap kit. You can give $1 or $10 if you want to support the project without getting the device! The money will be used for open software development (and for the industrialization process, in order to make the cheapest kit possible available), and it seems like the software is where most of the innovation is. For example, before the Kickstarter launch, and unlike Leap Motion today, they already had a detection algorithm for both sides of a hand.


I want to support this movement against Leap Motion because in a few years, for example when I have to decide which technologies to use for the 3D interface of a VR game project (supposing standards are established by then), I don't want to be forced to use closed software technologies just because the hardware everyone has only supports such libraries. (And by "everyone" I don't mean customers, but developers or universities that could lend a hand to projects.) I hope the DUO SDK becomes for 3D interaction what MT4J is for multi-touch!

I also really like the DIY philosophy: since the DUO is open, it could be used for 3D scanning, face recognition, or anything else, and tweaking the PS3 Eye lenses could let us explore other uses at different ranges. The exact quote about this is: "Although the exact APIs that developers will have access to haven’t been nailed down yet, I can say fairly confidently that per-pixel depth data will be available and that DUO will be supported as a general-purpose stereo system. This is a benefit of the DUO approach over competitors, who cannot offer depth directly. The maximum range of the setup depends on the interocular spacing of the stereo cameras, so depending on the application (and how much DIY you want to put into it) it can be tweaked somewhat.".

Finally, I hope the DUO becomes a valid competitor to Leap Motion, so that innovation in this space evolves faster, and in the right direction. I think we can now choose which future we want for this niche, and I hope that, like me, you want a future of freedom and openness.
On the other hand, maybe Leap Motion, DUO 3D and other second-generation Kinect-like systems have no future because of the usage rather than the technology behind them. I believe the contrary; if you believe it too, please support this project!

Jan Ciger

Apr 21, 2013, 7:04:01 PM
to vr-g...@googlegroups.com
Hi Lorne,

On 04/21/2013 07:14 PM, Lorne Covington wrote:
> I am currently experimenting with using it strapped on my forehead, in
> order to track my arms and hands in front of me as it operates out to
> about a meter, and the hand tracking SDK works fine with "inverted
> hands" (unlike the OpenNI/Kinect SDK skeleton tracking). The idea being
> that when my Rift arrives, I will mount the CGC on the front of it and
> then be able to add my tracked hands into the VR for interaction, since
> you can't see your real hands while wearing the Rift.
>

This sort of thing has been discussed on the list extensively before -
how do you plan to actually establish the 3D position of the hands?
Using the Xtion/Kinect tracker to get the head position, the Rift
tracker to get the orientation, and then the camera data to get the hand
positions on top of that? It could work, but I think you will have tons
of noise and accumulated errors if you go that way.

Regards,

Jan

Jan Ciger

Apr 21, 2013, 7:45:34 PM
to vr-g...@googlegroups.com
Hello,

On 04/21/2013 10:34 PM, Arthur Van Ceulen wrote:
> I agree, there is no point at buying their DIY kit ; the point is to
> support an open movement around 3D interaction for everyone.

Well, that's a noble goal, but I am not sure how giving money to yet
another company that has yet another solution looking for a problem
helps that ...

> And yes, quoting the kickstarter page, it will be "with an open source
> Driver, SDK and examples".

You need to read the whole thing: "Although we believe in it being
open, this application will *not be immediately released* as open
source." (emphasis mine). It is about two pages later.

Codelaboratories has been promising a Linux port of their SDK for the PS3 Eye
for years now (not even an open source one). Considering that they actually
sell their drivers and SDKs for the PS3 Eye, it will likely be a good while
before you see anything open source, as that would directly cannibalize their
current product line. There are some open source apps like CCV and
touchlib, but Codelaboratories has nothing to do with those, AFAIK.

> I think the business model you have
> described will be used only if the project isn't funded on kickstarter.

Well, that is their current business model. They haven't said anything
else so far, and I guess they need to pay the bills too.

> Moreover, there is an electronic card (whose plans will be free) in the
> kit, and "Yes, the two PS3Eye cameras are synchronized both in hardware
> and software so that every frame is captured simultaneously resulting
> in up to 187 stereo frames per second or total of 374 fps. In that
> particular video we used 160 stereo frames per second.".

Where did you find that? The Kickstarter page doesn't mention anything
of the kind - do you have some insider info?

Software sync is no good (USB latencies ...); the cameras need to have a
hardware connection between them. The Kickstarter page doesn't say
anything about it, and it is obvious that the two PS3 Eye camera boards
are just connected directly to the PC using their original USB leads
(there isn't even a hub in the box). The only electronics that seem to
be in there are the LEDs plus maybe some driver for them.


> And the purpose of pre-buying a kit on kickstarter is to announce you as
> a future user and support an emerging community rather than getting a
> cheap kit. You can give 1$ or 10$ if you want to support the project
> without getting the device!

Ehm, I hope you are serious - so basically I should donate $10 to a
commercial company to develop their product so that I can feel good
about being named as a supporter? LOL :)

> The money will be used to open software
> development (and industrialization process in order to make available
> the cheapest kit possible), and it seems like the software is where
> there is most of the innovation. For example, before the kickstarter
> launch and unlike Leap Motion today, they had already the detection
> algorithm for the two sides of a hand.

Sorry, count me out. That is not an argument for me. They have absolutely
no convincing business case, IMO. Building a "community" for
something that has no clear purpose, use or application is fine
for a hobbyist or for someone like Leap Motion (they got private financing,
AFAIK), but these people are asking me to invest my money in a boondoggle.

If I compare it with the Oculus Rift, which I did invest in on
Kickstarter, it is a huge difference - that was a product with a clear
goal, a clear objective, a clear application and a future. Even if Oculus
disappears tomorrow (I hope not!), they have already delivered on many
fronts. I don't see that with the DUO.

> I want to support this movement against Leap Motion because in a few
> years, for example when I will have to decide which technologies to use
> in a VR game project (supposing standards would be established then) for
> my 3D interface, I don't want to be forced to use closed software
> technologies because the hardware everyone has only support such
> libraries. (And for "everyone" I am not speaking of customers but
> developers or universities that could give a hand to projects.)

Oh come on. I am the last one to defend Leap, but this is ridiculous. So
you don't like the closed nature of the Leap but you are drumming up
support for another commercial company that is developing a straight
knockoff of it, equally closed (by their own admission).

If you don't want to use closed software in the future, feel free to
develop your own solution or participate in improving the existing
projects (OpenCV, PCL, ROS, etc.). The DUO and Leap are hardly a
standard or something that has any meaningful chance to become
indispensable in the future.

> I hope
> the duo SDK to become for 3D interaction what MT4J is for multi-touch!

I *do hope not*. That would be a disaster, because no matter how good
(or bad) the SDK is, it would limit development to that one toolkit
and device. That is like saying that the Microsoft Windows monopoly on the
desktop is good for the development of computing ... I don't want to end up
in a world where the only thing understood as "3D interaction" is
waving hands over that sort of gadget.

>
> I also like a lot the DIY philosophy, as the duo could be used for 3D
> scanning, face recognizing, or anything else as it is open and that
> tweaking the PS3 lens could make us explore other usage at different
> ranges.

As can the Kinect, any two cameras, two PS3 Eyes (you can even use that same
software with them), etc. The capabilities that DUO is promising are not
at all unique to the hardware. Stereoscopic tracking/reconstruction is a
mature field; there are plenty of tools available for it, even for DIY,
if you want.

> The exact quote about this is "Although the exact APIs that
> developers will have access to haven’t been nailed down yet, I can say
> fairly confidently that per-pixel depth data will be available and that
> DUO will be supported as a general-purpose stereo system. This is a
> benefit of the DUO approach over competitors, who cannot offer depth
> directly.

Right, because these are simply two webcams, nothing more. So yeah, it
is a kinda obvious "feature" :) Leap could probably deliver per-pixel
data as well if they decided to adapt the software, but who knows.

> The maximum range of the setup depends on the interocular
> spacing of the stereo cameras, so depending on the application (and how
> much DIY you want to put into it) it can be tweaked somewhat.".

How do you "tweak" a molded piece of plastic? Again, take two PS3 Eyes,
put them side-by-side on your desk and you have the same thing. And you
can even "tweak the interocular distance" if you want ...

>
> Finally, I hope the duo to be a valid competitor for Leap Motion, so
> that innovation in this world evolves faster, and in the right
> direction. I think that now we can choose which future we want for this
> niche, and I hope that like me you want a future of freeness and openness.
> On the other hand, maybe Leap Motion, duo 3D and other 2d generation
> kinect-like systems have no future because of the usage and not the
> technology behind. I believe the contrary, if you believe it too, please
> support this project!

Arthur, I do wonder whether Codelabs are actually paying you for this
promotion. There is nothing open or free about the device you are
pushing (apart from the plastic). If you aren't associated with
Codelabs, I think you need to step back and actually see past the hype
with a critical look.

However, feel free to invest in the DUO if you think it is worth it. I
don't think it is, and considering that they barely made half of their
target with only 3 days left, a lot of other people think the same.

Regards,

Jan



Lorne Covington

Apr 21, 2013, 10:10:14 PM
to vr-g...@googlegroups.com
Sorry, I joined the group too late to catch those discussions! So please
excuse my naivete and let me know where my logic is off the mark.

For instance, if the CGC is mounted on the Rift, the data from the CGC
will always be directly correct for where my hands should be represented
in my field of view. It just doesn't depend on anything else.

I do not use the skeleton tracking with the depth cameras - that stuff is
awful (though I hear it's better with the new Kinect SDK) - but process
the point cloud directly for lower latency, and I use adaptive sampling so
regions of interest are as accurate as possible while still getting good
frame rates/latency.

So from the depth cameras I get world head AND hand positions. The
weak spot here would be the Rift, as its orientation errors would skew
the world hand positions as seen by the CGC, but that can be correlated
with the head/hand positions from the depth cameras. It seems to me that
the CGC vs. depth-camera hand position data could be used to correct the
Rift data, not add errors to it.

So I don't see it as an accumulation of errors, but rather essentially
two systems, the Rift/CGC, and the depth cameras, that could be
integrated, overlaying the CGC finger data onto the hand positions. I
certainly do see how calibration will be a real job, though.

So what am I missing? With my experiments using a VR wall in place of
the Rift, this certainly seems viable if the Rift is as good as everyone
claims.

Thanks!

- Lorne

--

http://noirflux.com

Jan Ciger

Apr 22, 2013, 5:03:53 AM
to vr-g...@googlegroups.com
Hello,

Sorry I got in the group too late to catch those discussions!  So please excuse my naivete and let me know where my logic is off the mark.

For instance, if the CGC is mounted on the Rift, the data from the CGC will always be directly correct for where my hands should be represented in my field of view.  It just doesn't depend on anything else.

Yes, however, that is not all that helpful if you want to actually interact with the 3D environment. For that you need the hands in the "world coordinates", not relative to your (rotating) head. That's why you need some sort of external reference. 
 

I do not use the skeleton tracking with the depth cameras, that stuff is awful (though I hear it's better with the new Kinect SDK) but process the point cloud directly for lower latency and use adaptive sampling so regions-of-interest are as accurate as possible while still getting good framerates/latency.


OK. The skeleton tracking doesn't work in this setup because the SDKs assume a person standing up and then use model fitting to find the user and fit the skeleton. If you are sitting down, your silhouette is a lot different from what it expects, so it doesn't work well. 


So from the depth cameras I get world head AND hand positions.  The weak spot here would be the Rift, as its orientation errors would skew the world hand positions as seen by the CGC, but that can be correlated with the head/hand positions from the depth cameras.  Seems to me that the CGC vs. depth camera hand position data could be used to correct the Rift data, not add errors to it.

So I don't see it as an accumulation of errors, but rather essentially two systems, the Rift/CGC, and the depth cameras, that could be integrated, overlaying the CGC finger data onto the hand positions.  I certainly do see how calibration will be a real job, though. 
 
This could work, but in order to correlate the depth camera data with the CGC data, you need to transform the coordinate system of the CGC into the depth camera coordinate system (or vice versa) - they have to be in the same coordinate system before you can do anything. The Rift/CGC combo alone will not give you world coordinates of the hands, only coordinates relative to your head. The depth cameras give you "absolute" position in space. 

To get an absolute position in space for your hands using the Rift/CGC, you get a matrix that gives you the *local* (head-referenced) position. Then you must multiply this by the matrix that gives you the position of your head in world space. That can only come from the depth cameras (Xtion/Kinect) in your case, or you can use something like a Hydra or a Gametrak. Once you have this world matrix, you can try to do some Kalman filtering or something similar to improve the accuracy of the hand position over plain depth cameras, because they are now in the same coordinate frame.

So you will have to chain the transformations, because the only "global" reference you have is the "kinect". This is where I was thinking about error accumulation - the matrix multiplication chain required to change the coordinate systems. Each of the matrices is going to be noisy (as they come from noisy sensors), so the errors will be multiplied as well. 
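
For concreteness, here is a small sketch of that transformation chain (all the 4x4 poses below are hypothetical values standing in for the respective trackers, not real calibration data):

    import numpy as np

    # world_T_hand = world_T_head (depth camera position + Rift orientation)
    #              @ head_T_cam   (fixed mounting offset of the head-worn camera)
    #              @ cam_T_hand   (hand pose reported by that camera)
    world_T_head = np.eye(4); world_T_head[:3, 3] = [0.0, 1.7, 0.0]   # head 1.7 m up
    head_T_cam   = np.eye(4); head_T_cam[:3, 3]   = [0.0, 0.05, -0.08]
    cam_T_hand   = np.eye(4); cam_T_hand[:3, 3]   = [0.1, -0.2, -0.4]

    world_T_hand = world_T_head @ head_T_cam @ cam_T_hand
    hand_world = (world_T_hand @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]
    print("hand in world coordinates:", hand_world)
    # Noise in any of the three matrices propagates through the product,
    # which is the error accumulation described above.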


Regards,

Jan

Arthur Van Ceulen

Apr 22, 2013, 8:11:07 AM
to vr-g...@googlegroups.com
Hello,

2013/4/22 Jan Ciger <jan....@gmail.com>

Arthur, I do wonder whether Codelabs are actually paying you for this promotion. There is nothing open or free about the device you are pushing (apart from the plastic). If you aren't associated with Codelabs, I think you need to step back and actually see past the hype with a critical look.

However, feel free to invest in the DUO, if you think it is worth it. I don't think it is and considering that they barely made half of their target with only 3 days left, lot of other people think the same.

Of course I am not; it is just my personality: I am either 100% for something or against it, and I never really find a balanced position. As a sort of humanist, I also know I tend to trust the unknown too quickly. I am 21, and I don't pretend to be more mature than I am. So thank you, really, for being critical and helping me form the most correct opinion.

I would like to add some more information to the debate; please correct me if I'm wrong. The Natural User Interface Group is a worldwide DIY and open-source community, and I hope that, like me, you don't doubt its members' good faith. Code Laboratories was founded in 2008 by Alexander Popovich and Christian Moore (founder of the NUI Group in 2006). The initial purpose was to give the NUI Group industry-quality drivers for DIY projects, as the whole community was asking for them. For that, of course, they needed to work on it full-time, so they built a company and a business model in order to make a living. I suppose most of the early customers were NUI Group members who were grateful for the initiative. Beyond that, they have continued to be administrators of the NUI community and participated in building MT4J, touchlib, CL NUI...
I would be very pleased to know what information makes you think Code Laboratories is such a "bad company". I think they share my "noble goal" rather than chasing business.


I agree, there is no point at buying their DIY kit ; the point is to
support an open movement around 3D interaction for everyone.

Well, that's a noble goal, but I am not sure how giving money to yet another company that has yet another solution looking for a problem helps that ...
 
Now let me tell you a short story that happened just after Leap Motion was funded and founded. They had the money from investors, the company structure, the idea (maybe even some patents) and this very easy-to-build device. And I think you will agree that this is nothing: all the cool stuff comes from the post-processing algorithms, the quality of the SDK you give to application developers, the synchronization and other such really difficult things. So they came to the NUI Group forums looking for the engineers they needed, proposing to "make history together", as if they were making a gift to the world of technology. Offering jobs at a closed-software, business-driven company to an open-source DIY community is very, very, very stupid.
Since they already had a hand-detection algorithm made four years earlier, and the device was so easy to build, the NUI Group reacted by making a joke video (directly parodying the Leap Motion video) with two PS3 Eyes. Months later, Code Laboratories turned the joke into a real project, this time called DUO 3D, which is supported by the whole NUI Group.

And yes, quoting the kickstarter page, it will be "with an open source
Driver, SDK and examples".

You need the read the whole thing: "Although we believe in it being open, this application will *not be immediately released* as open source." (emphasis mine). It is about two pages later.

Knowing this story, I think you understand why they can't release it as open source now: Leap Motion needs every piece of software engineering it can get and would steal anything from it. Everyone wants this engineering to benefit the communities rather than the Leap Motion company.

Codelaboratories is promising Linux port of their SDK for years now (not even open source one). Considering that they are actually selling their drivers and SDKs for PS3 Eye, it will likely be a good while before you see anything open source as that would directly cannibalize their current product line.

I agree, we cannot really trust them. But we can trust Kickstarter and the NUI Group to make them keep their promises.

There are some open source apps like CCV and touchlib, but Codelaboratories has nothing to do with those, AFAIK.

You are wrong; sorry, but please verify your information before posting. The co-founders of Code Laboratories are the founder and a core developer of CCV. Here is the official source. And they made it while they were working at Code Laboratories.

I think the business model you have
described will be used only if the project isn't funded on kickstarter.

Well, that is their current business model. They didn't say anything else so far and I guess they need to pay the bills too.

Sorry, but if they sell a product or a kit like the DUO 3D, they won't have the same business model, which allows them to make it completely open source. And I agree with a commercial license to prevent Leap Motion from stealing it.
 
Moreover, there is an electronic card (whose plans will be free) in the
kit, and "Yes, the two PS3Eye cameras are synchronized both in hardware
and software so that every frame is captured simultaneously resulting
in up to 187 stereo frames per second or total of 374 fps. In that
particular video we used 160 stereo frames per second.".

Where did you find that? The Kickstarter page doesn't mention anything of the kind - do you have some insider info?

Look at duo3d.com, the official website... And I already said that they are very bad at communication.

Ehm, I hope you are serious - so basically I should donate $10 to a commercial company to develop their product so that I can feel good about being named as a supporter? LOL :)


The money will be used to open software
development (and industrialization process in order to make available
the cheapest kit possible), and it seems like the software is where
there is most of the innovation. For example, before the kickstarter
launch and unlike Leap Motion today, they had already the detection
algorithm for the two sides of a hand.

Yes, I am serious: I give money to open-source Kickstarter projects made by individuals or companies without getting anything back. And sorry, but a "commercial company" is not how I think of Code Laboratories.

Oh come on. I am the last one to defend Leap, but this is ridiculous. So you don't like the closed nature of the Leap but you are drumming up support for another commercial company that is developing a straight knockoff of it, equally closed (by their own admission).

I completely disagree.

> I hope
> the duo SDK to become for 3D interaction what MT4J is for multi-touch!

I *do hope not*. That would be a disaster, because no matter how is the SDK good (or bad), it would limit the development to that one toolkit and device. That is like saying that Microsoft Windows monopoly on the desktop is good for development of computing ... I don't want to end up in the world where the only thing understood as "3D interaction" would waving hands over that sort of gadget.

I disagree here too, as an open SDK could be linked to any device driver, and they are the best at building drivers. And sorry, but "waving hands" is what Leap Motion offers; the DUO 3D can be used for object scanning, Kinect-like applications (yes, with some more DIY, but still with their sync hardware, drivers, SDK...), etc.


I also like a lot the DIY philosophy, as the duo could be used for 3D
scanning, face recognizing, or anything else as it is open and that
tweaking the PS3 lens could make us explore other usage at different
ranges.

As can Kinect, any two cameras, two PS3 eyes (you can even use that same software with them), etc. The capabilities that DUO is promising are not at all unique to the hardware. Stereoscopic tracking/reconstruction is a mature field, there are plenty of tools available for that even for DIY if you want.

Yes, the capabilities mostly come from software, where Code Laboratories and the NUI Group are experts. Who wouldn't prefer this software to work directly with a device that is open to any modification and that we could build ourselves?

Maybe neither Leap Motion nor the DUO 3D has any future. But it is false to say that the two companies or projects are roughly the same.

Regards,
Arthur







Arthur Van Ceulen
Personal e-mail: vand...@gmail.com
Postal address: 7 rue du Grand Ferré, 60200 Compiègne
Mobile phone: + 33 (0)6 51 79 98 69





Lorne Covington

Apr 22, 2013, 9:03:34 AM
to vr-g...@googlegroups.com


On 4/22/2013 5:03 AM, Jan Ciger wrote:
Hello,

Sorry I got in the group too late to catch those discussions!  So please excuse my naivete and let me know where my logic is off the mark.

For instance, if the CGC is mounted on the Rift, the data from the CGC will always be directly correct for where my hands should be represented in my field of view.  It just doesn't depend on anything else.

Yes, however, that is not all that helpful if you want to actually interact with the 3D environment. For that you need the hands in the "world coordinates", not relative to your (rotating) head.

Well, duh!  But for representing my hands properly in my view that is all I need.  Just talking about that, having my arms/hands in the view is WAY more realistic than no hands whatsoever.



 
I do not use the skeleton tracking with the depth cameras, that stuff is awful (though I hear it's better with the new Kinect SDK) but process the point cloud directly for lower latency and use adaptive sampling so regions-of-interest are as accurate as possible while still getting good framerates/latency.


OK. The skeleton tracking doesn't work in this setup, because the SDKs assume person standing up and then use model fitting to find the user and fit the skeleton. If you are sitting down, your silhouette is a lot different than what it expects, thus it isn't working well.

Yeah, that's what I said.  It sucks.




So from the depth cameras I get world head AND hands positions.  The weak spot here would be the Rift, as it's orientation errors would skew the world hand positions as seen by the CGC, but that can be correlated with the head/hand positions from the depth cameras.  Seems to me that the CGC vs. depth camera hand position data cold be used to correct the Rift data, not add errors to it.

So I don't see it as an accumulation of errors, but rather essentially two systems, the Rift/CGC, and the depth cameras, that could be integrated, overlaying the CGC finger data onto the hand positions.  I certainly do see how calibration will be a real job, though. 
 
This could work, but in order to correlate the depth camera data with the CGC data, you need to transform the coordinate system of the CGC into the depth camera coordinate system (or vice versa) - they have to be in the same coordinate system before you can do anything. The Rift/CGC combo alone will not give you world coordinates of the hands, only coordinates relative to your head. The depth cameras give you "absolute" position in space.

Well duh again!



To get an absolute position in space for your hands using the Rift/CGC, you will get a matrix that gives you the *local* (head-referenced)  position. Then you must multiply this by the matrix that gives you position of your head in the world space. That can only come from the depth cameras (Xtion/Kinect) in your case. Or you can use something like Hydra or Gametrak.

Hydra/Gametrak doesn't work if I want to walk around a space.



Once you have this world matrix, you can try to do some Kalman filtering or something similar to improve the accuracy of the hand position over plain depth cameras, because they are now in the same coordinate frame.

So you will have to chain the transformations, because the only "global" reference you have is the "kinect". This is where I was thinking about error accumulation - the matrix multiplication chain required to change the coordinate systems. Each of the matrices is going to be noisy (as they come from noisy sensors), so the errors will be multiplied as well.

Yes of course.  But I am not talking about trying to accurately represent an interaction with the "real world", where I press a real physical button as represented in the VR space so the two have to match closely, but about a virtual button shown in the Rift.  That world view moves along with the various errors, so there is NO error chain, just the error of the CGC, same as in any other application.

To be pedantic, the view I see in the Rift is all I really care about for the interaction; I just need to be able to accurately place my hand/fingers in the view being presented to me.  It is totally irrelevant if the button I see in the Rift is not exactly at the real-world position, because I have no idea about that and don't care, as I can't see the real world.  The only thing that matters is where my virtual hand appears in the local view I see in the Rift, so the only data I need is Rift -> Hand, period.

Now if two of us are sharing this VR world, then trying to play patty-cake with each other may get dodgy, but then the depth cam data could help with that.

Having my hands visible in the VR world is such a huge win, I don't see a big downside to this.  If there's another way to do this with someone roaming around a VR space, please let me know!

Cédric Syllebranque

Apr 22, 2013, 9:31:33 AM
to vr-g...@googlegroups.com, Lorne Covington
Hi Lorne. This is interesting, as I was planning to do roughly the same job, but with different hardware.
I think the question is: what good is it to see your (real) hands in the HMD if you are not using them? Maybe self-perception?
It is OK when interacting with real objects that you can see (I was planning to do it with a force feedback wheel, for example, or for virtual training).

Let me know about your progress though, because I think we will run into the same really big problem: active "see-through" calibration for a given depth range...

Eric Vaughan

Apr 22, 2013, 9:58:40 AM
to vr-g...@googlegroups.com
On Mon, Apr 22, 2013 at 7:03 AM, Lorne Covington <noir...@gmail.com> wrote:


On 4/22/2013 5:03 AM, Jan Ciger wrote:
Hello,

Sorry I got in the group too late to catch those discussions!  So please excuse my naivete and let me know where my logic is off the mark.

For instance, if the CGC is mounted on the Rift, the data from the CGC will always be directly correct for where my hands should be represented in my field of view.  It just doesn't depend on anything else.

Yes, however, that is not all that helpful if you want to actually interact with the 3D environment. For that you need the hands in the "world coordinates", not relative to your (rotating) head.

Well, duh!  But for representing my hands properly in my view that is all I need.  Just talking about that, having my arms/hands in the view is WAY more realistic than no hands whatsoever.

I hashed this out with Jan a few months ago, and 6DOF tracking is definitely required for this to work properly.  The issue is that the CGC cannot distinguish between bringing the hands closer to the head and moving the head closer to the hands.  This means that, when the head is moving towards the hands, the hands will appear to grow larger, which is correct, but the rest of the environment will stay the same size, instead of growing in scale along with the hands.

I'd still like to see somebody try it though, because without 6DOF headtracking, the user shouldn't be moving their head too much anyway.  But, I suspect that the lack of translation control becomes much more apparent and disconcerting once you add the reference point of hands that modulate randomly in size...
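
A toy illustration of that ambiguity (arbitrary numbers, just to show that the head-relative measurement is identical in both cases without 6DOF head tracking):

    import numpy as np

    hand_world = np.array([0.0, 1.4, -0.6])
    head_world = np.array([0.0, 1.7, 0.0])

    # Case A: the hand moves 20 cm toward the (static) head.
    rel_a = (hand_world + [0.0, 0.0, 0.2]) - head_world
    # Case B: the head moves 20 cm toward the (static) hand.
    rel_b = hand_world - (head_world + [0.0, 0.0, -0.2])

    print(rel_a, rel_b)  # both [ 0.  -0.3 -0.4]: a head-mounted camera cannot tell them apart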

Lorne Covington

Apr 22, 2013, 11:52:38 AM
to vr-g...@googlegroups.com


On 4/22/2013 9:58 AM, Eric Vaughan wrote:
>
> I hashed this out with Jan a few months ago, and 6DOF tracking is
> definitely required for this to work properly. The issue is that the
> CGC cannot distinguish between bringing the hands closer to the head,
> or the head moving closer to the hands. This means that, when the
> head is moving towards the hands, the hands will appear to grow
> larger, which is correct, but the rest of the environment will stay
> the same size, instead of growing in scale along with the hands.

Yes, but that's just the normal VR point-of-perspective problem, which is
working quite nicely using the depth camera for head position tracking;
the hands don't figure into that problem at all. The hands are locked
to that point of perspective because the CGC is mounted on the head.


>
> I'd still like to see somebody try it though, because without 6DOF
> headtracking, the user shouldn't be moving their head too much
> anyway. But, I suspect that the lack of translation control becomes
> much more apparent and disconcerting once you add the reference point
> of hands that modulate randomly in size...

The hands would not modulate randomly in size any more than they would
with the "real-world" application of the CGC being mounted over your
monitor. So if the CGC works there, it works here, as by being mounted
to the Rift it is "mounted" to my view, same as the real-world case.

My point is that even with errors in the head world position and
orientation tracking, my hands will still look correct in the virtual
world being presented, even if it is not a "true" real-world
representation due to depth-camera position errors and Rift orientation
errors, because that view is all I see, and my hands will always appear
correctly in that frame (to the limit of the CGC). My hands would be
subject to the same errors as the button I'm trying to press, so there is no
addition of errors.

For example, if I tilt my head down, I don't even need to know any Rift
data to properly paint my hands in the view - it is completely
unimportant whether I tilted my head down or raised my hands up. The
world behind my hands will be susceptible to Rift orientation and depth
camera position tracking error, but that's the case already, hands or
no! The important point is that my hands should always appear correct
for the world I am being presented with.

Ciao!

Jan Ciger

Apr 22, 2013, 1:18:22 PM
to vr-g...@googlegroups.com
Hi Arthur,

On Mon, Apr 22, 2013 at 2:11 PM, Arthur Van Ceulen <vand...@gmail.com> wrote:
Hello,

I would like to put some more information in the debate, please correct me if I'm wrong. The natural user interface group is a worldwide DIY and open-source community, I hope that like me you don't doubt of its members faith. Code Laboratories has been founded in 2008 by Alexander Popovich and Christian Moore (founder of the nui group in 2006). The first purpose was to give to the nui group industry quality drivers for DIY projects, as the whole community was asking for it. For that, of course they needed full-time work on it, so they have built a company and a business model in order to keep living. I suppose most of the early customers were nui group members who thanked the initiative. For the rest, they have continued to be administrators of the nui community and participated at building MT4J, touchlib, CL NUI...
I would be very pleased to know what information makes you seem to think Code Laboratories is such a "bad company". I think they share my "noble goal" rather than searching business.

I have never said that CodeLaboratories is a good or bad company. They have their business model, their products and their customers. That's fine, there is no problem with that. If they are involved in the NUI group, even better (I didn't know that). I am not going to speculate about whether their goals are noble or not, as I have no insider info. 

The issue here is twofold: first, you were promoting something as what it is not (an open source product), and second, supporting something on Kickstarter is not charity. It is an investment, and for that I expect something in return. And I do not really see the value there. That's all. I am not buying these "open source (but not really)" and "revolutionary (but almost ...)" buzzwords. I think I am a bit too old for those, having seen such "paradigm shifts" and "revolutions" fail to materialize a few times too many.


Now let me tell you a short story that happened just after Leap Motion has been funded and founded. They had the money from investors, the company structure, the idea (maybe even some patents) and this such easy to build device. And I think you will agree that this is nothing. All the cool stuff comes from the post-processing algorithms, the quality of the SDK that you give to application developers, the synchronization and such other really difficult things. So then, they came on the nui group forums seeking for the engineers they needed, purposing them to "make history together", like if they were making a gift to the world of technologies. Purposing jobs at a closed-software and business-driven company on an open-source and DIY community is very, very, very stupid.

Why is that stupid? I think that is a smart move from the company to harness the existing resources. Where were they most likely to find engineers capable of doing the job? I think among people who actually do have experience with similar stuff. 

If someone did free consulting for them, well ... that is their problem, to be honest. It is not all that nice a move from Leap if that is how it went, but we are hopefully all adults and can think for ourselves when approached by someone wanting to pick our brains.
 
Since they already had a hand-detection algorithm made four years ago, and with such an easy-to-build device, the nui group reacted by making a joke video (directly plagiarizing the Leap Motion video) with two PS3 Eyes. Months later, Code Laboratories turned the joke into a real project, this time called duo 3D, which is supported by the whole nui group.

That is what I kinda thought when I saw the project. I wasn't too impressed by the Leap either - look in the archives for the discussion we had here when the Leap hype broke out.
 
Knowing this story, I think you understand why they can't release it as open source now: Leap Motion needs every piece of software engineering it can get and would take anything from it. Everyone wants this engineering to benefit the communities rather than the Leap Motion company.

Honestly, I am not buying that. Basically, that argument means that the software will never be open source as long as a single competitor exists. The company had closed-source drivers and an SDK and whatnot even before Leap Motion/Ocuspec existed, I believe. Moreover, there is no rocket science in doing stereo tracking or depth map calculations - there is even free code to do that. The same goes for things like point clouds, surface reconstruction, SLAM, etc. Plus tons and tons of computer vision papers from at least the last 20 years. So if the engineers at Leap are worth their salt, they could easily redevelop/reuse that. I am not sure how a community will benefit from another closed source product - that is directly in contradiction with what you are saying. 
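For illustration of that "free code" point, here is a minimal sketch of a disparity (depth) map computed from an already-rectified stereo pair with OpenCV's stock block matcher (modern OpenCV-style API; the file names, disparity range and block size are placeholder values, nothing specific to DUO or Leap):

#include <opencv2/opencv.hpp>

int main()
{
    // Load an already-rectified left/right pair as grayscale.
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty())
        return 1;

    // Plain block matching; numDisparities must be a multiple of 16.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(/*numDisparities=*/64,
                                                    /*blockSize=*/15);
    cv::Mat disparity16;                      // fixed-point disparity (16*d)
    bm->compute(left, right, disparity16);

    // Scale to 8 bit just for saving/inspection.
    cv::Mat display;
    disparity16.convertTo(display, CV_8U, 255.0 / (64 * 16.0));
    cv::imwrite("disparity.png", display);

    // With a calibrated baseline b and focal length f (in pixels),
    // depth is simply Z = f * b / d for every valid disparity d.
    return 0;
}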


I agree, we cannot really trust them. But we can trust kickstarter and the nui group to make them keep their promises.

Ehm, how exactly do you intend to do that? Sue them if they don't deliver something they didn't promise? Or shame them publicly? Or how exactly?

 
You are wrong, sorry, but please verify your information before posting. The co-founders of Code Laboratories are the founder and a core developer of CCV. Here is the official source. And they made it while working at Code Laboratories.

OK, mea culpa, I wasn't aware of that. I am not too much in the multi-touch/desktop interaction field, I am more a VR/computer vision guy.
 
Sorry, but if they sell a product or a kit, like the duo 3D, they won't have the same business model, which allows them to make it completely open source. And I agree with a commercial license to prevent Leap Motion from stealing it.

So, again another contradiction. First you promote DUO as an open source solution and now it is OK to have a commercial (closed) license so that Leap doesn't steal something. That doesn't compute for me, IMO. 

And the business model - of course, we can only speculate, but it is obvious to me that the business model is going to be - free (or cheap) kits, sell the software. There isn't much else there.

 
 
Moreover, there is an electronic board (whose plans will be free) in the kit, and "Yes, the two PS3 Eye cameras are synchronized both in hardware and software so that every frame is captured simultaneously, resulting in up to 187 stereo frames per second, or a total of 374 fps. In that particular video we used 160 stereo frames per second."

Where did you find that? The Kickstarter page doesn't mention anything of the kind - do you have some insider info?

Look at duo3d.com, the official website... And I already said that they are really bad at communication.

Well, it says only "Precise Sensor Sync" (whatever that means ...) on the feature page. That's really vague and the Kickstarter image of the boards in the 3D printed prototype lacks any sort of extra electronics necessary for it (even a simple wire as it was described on the NUI forums in the past). 

Anyhow, personally I think that a product based on hacking cameras produced by Sony is pretty much doomed to fail the moment Sony decides to discontinue the current Eye (either to replace it with a newer model or whatever) or modifies the cameras. That's not the way to design a product like this. Leap at least has custom cameras or some independent manufacturer under contract to keep supplying the cameras to them.



The money will be used for open software development (and for the industrialization process, in order to make the cheapest kit possible available), and it seems like the software is where most of the innovation is. For example, before the Kickstarter launch, and unlike Leap Motion today, they already had a detection algorithm for both sides of a hand.

Yes, I am serious: I give money to Kickstarter open-source projects made by individuals or companies without getting anything back. And sorry, but "commercial company" is not how I think of Code Laboratories.

Well, for me, as an outsider, CodeLaboratories are certainly a business trying to make money. There is nothing wrong with that, IMO. However, to claim that the money will be used for open software/product when it is clearly not going to be (by their own admission and even your own words) is a bit disingenuous. That is not really an open source project by any stretch. And they fail as a commercial project for me too. Building a project where the only "innovation" you bring is to be "against someone else" is a bit weak. People buy things on their own merit, not whether or not it is to defeat someone else.
 

Oh come on. I am the last one to defend Leap, but this is ridiculous. So you don't like the closed nature of the Leap but you are drumming up support for another commercial company that is developing a straight knockoff of it, equally closed (by their own admission).

I completely disagree.

OK, but then you are completely inconsistent with yourself, or we have a very different understanding of what "open" means. 
 
I disagree here too, as an open SDK could be linked to any device driver, and they are the best at building drivers. And sorry, but "waving hands" is what Leap Motion offers; duo 3D can be used for object scanning, Kinect-like applications (yes, with some more DIY, but still with their sync hardware, drivers, SDK...), etc.

Eh, sorry. There is *no* open SDK. That is a red herring. You yourself said it will be commercial (ergo closed). And yeah duo can be used for exactly the same things that any two cameras can be. Why should I buy it? Only because it is competition to Leap??


Yes, capabilities mostly come from software, where Code Laboratories and the nui group are experts. Who wouldn't prefer this software working directly with a device that is open to any modification and that we could build ourselves?

Sure, except those same capabilities using the same software can be had using two off-the-shelf cameras. I don't need that stupid piece of plastic. If they create the software, I will perhaps buy it, but I am not buying the DUO. 
 

Maybe neither Leap Motion nor duo 3D has any future. But it is false to say that the two companies or projects are basically the same.

I am not debating the companies - I have no idea (and I honestly don't care) how they work. However, the products are pretty much the same to me. At least, the only thing you have managed to present here is that DUO is basically Leap-that-is-not-Leap, with nebulous promises of open source stuff (which it isn't). And the marketing of the device by CodeLaboratories themselves, based only on vague promises of what could be done with it if we pay them to develop software that they will then keep for sale, isn't really convincing to me. 

Sorry, 

Jan

Eric Vaughan

unread,
Apr 22, 2013, 1:19:57 PM4/22/13
to vr-g...@googlegroups.com
Yes, but that's just the normal VR point-of-perspective problem which is working quite nicely using the depth camera for head position tracking; the hands don't figure into that problem at all.  The hands are locked to that point-of-perspective by the CGC being mounted on that head.
 
If the depth camera is monitor-mounted, then you can of course do object-tracking of the head, but I think we are discussing a head-mounted application?  If so, how do you plan to use the depth camera to resolve head position?  There are some mature SLAM algorithms, but ego-motion is certainly not yet solved for the highly-precise and real-time case.

The hands would not modulate randomly in size any more than they would with the "real-world" application of the CGC being mounted over your monitor.  So if the CGC works there, it works here, as by being mounted to the Rift it is "mounted" to my view, same as the real-world case.

When the CGC is mounted to the monitor, you can also track the user's head in 6DOF.  When it is mounted to the head, you do not have the external reference.

Basically, without rock-solid ego-motion tracking, you need to measure head position externally for this to definitely work.

Lorne Covington

unread,
Apr 22, 2013, 1:33:36 PM4/22/13
to vr-g...@googlegroups.com


On 4/22/2013 1:19 PM, Eric Vaughan wrote:
Yes, but that's just the normal VR point-of-perspective problem which is working quite nicely using the depth camera for head position tracking; the hands don't figure into that problem at all.  The hands are locked to that point-of-perspective by the CGC being mounted on that head.
 
If the depth camera is monitor-mounted, then you can of course do object-tracking of the head, but I think we are discussing a head-mounted application?  If so, how do you plan to use the depth camera to resolve head position?  There are some mature SLAM algorithms, but ego-motion is certainly not yet solved for the highly-precise and real-time case.

I think you are misunderstanding me.  I was talking about monitor-mounted in relation to the CGC's target role of reading hands in front of monitors.  I am not talking about that in any VR capacity.

Take the simplest case: just the CGC mounted on the Rift, and no graphics whatsoever displayed in the Rift other than my hands via the data from the CGC. That's it - no other data is involved whatsoever. I do not need to know anything else other than how the CGC is mounted to the Rift to properly put the hand data in the graphics being displayed to me. There will be no greater error in this case than when the CGC is used to move a mouse pointer by hand over a keyboard while mounted to a monitor.
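A minimal sketch of that simplest case, assuming the CGC delivers 3D hand points in its own camera frame and that the only calibration available is the fixed CGC-to-Rift mounting offset; the cgcToEye transform and the offsets below are purely illustrative placeholders (GLM-style math):

#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Fixed transform measured once from the mounting: here the CGC is assumed
// to sit 3 cm in front of and 5 cm above the eye midpoint, unrotated.
static const glm::mat4 cgcToEye =
    glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.05f, -0.03f));

// Map every CGC-frame hand point into view space, ready to be drawn.
std::vector<glm::vec3> handPointsInViewSpace(
    const std::vector<glm::vec3>& cgcPoints)
{
    std::vector<glm::vec3> out;
    out.reserve(cgcPoints.size());
    for (const glm::vec3& p : cgcPoints)
        out.push_back(glm::vec3(cgcToEye * glm::vec4(p, 1.0f)));
    return out;
}

Because only that single fixed transform is involved, head position and orientation never enter into drawing the hands themselves - which is the point being made here.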




Basically, without rock-solid ego-motion tracking, you need to measure head position externally for this to definitely work.

Yes, that is exactly what I have been saying, that is what the depth cameras are for.

And again, unless I am missing something here (and if so please point it out!), the only thing that matters to me the user is that my hands line up with the world AS I AM SEEING IT; and as my hands are attached to my head frame, any errors by the head tracking and Rift are irrelevant, as what matters to me is how my hands line up with the error-laden world frame I am seeing, not any real-world referent.

Thanks!

- Lorne




Eric Vaughan

unread,
Apr 22, 2013, 2:41:23 PM4/22/13
to vr-g...@googlegroups.com
Nope, just sounds like I was misunderstanding your plan!

Jan Ciger

unread,
Apr 22, 2013, 5:30:56 PM4/22/13
to vr-g...@googlegroups.com
On 04/22/2013 03:03 PM, Lorne Covington wrote:
> Well, duh! But for representing my hands properly in my view that is
> all I need. Just talking about that, having my arms/hands in the view
> is WAY more realistic than no hands whatsoever.

Agreed. However just hands in the space are not likely to help you much
if you cannot do anything with them, not even manage occlusions and
collisions properly. It will feel a bit like the movie Ghost when the
guy tried to touch things and his hands were passing straight through
objects. Really disturbing ...

>
> Yeah, that's what I said. It sucks.

More like it is not the right tool for the job :)

> Hydra/Gametrak doesn't work if I want to walk around a space.

Yep. Then you likely need an optical tracker. Maybe Kinect will do,
maybe something else, depending on the size of the volume you want to
track (and your budget, of course).

> Yes of course. But I am not talking about trying to accurately
> represent an interaction with the "real world", where I press a real
> physical button as represented in the VR space so they have to match
> closely, but the virtual button one shown in the Rift. That world view
> moves along with the various errors, so there is NO error chain, just
> the error of the CGC, same as in any other application.

That isn't actually correct. You are missing one crucial detail. When
you render your scene, you will have to position the HMD in the space -
using some transformation (viewing transform in OpenGL - the HMD is your
"camera"). Any object you want to render has to be relative to that.
However, the Rift alone doesn't give you a way to establish that
position in the 3D space, only orientation. So you will either have to
fix the camera somewhere or use something else to drive it around.

Now if you want to interact with those virtual objects, you will need to
use the transform above to calculate where your hands (tracked relative
to your head) are relative to those objects which are positioned in the
world coordinate system (so that you can check for intersections, for
example). Otherwise you would have to render everything relative to your
head and thus "carry it with you" as you are navigating the environment.
That is OK for a HUD-like thing, but not for a 3D scene - I want the
geometry to stay put in the scene, not to travel with me ...

However, again, this has been all discussed before.
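As a minimal sketch of the transform chain described above (GLM-style math, names illustrative only): the depth camera supplies head position, the Rift supplies orientation, together they give the head/camera pose in world space; the view matrix is its inverse, and the same pose carries head-relative hand points into world coordinates so they can be tested against world-space objects.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

struct HeadPose {
    glm::vec3 positionWorld;   // from the depth camera (Xtion/Kinect)
    glm::quat orientation;     // from the Rift's IMU
};

// 4x4 head-to-world transform (the "camera" pose).
glm::mat4 headToWorld(const HeadPose& pose)
{
    return glm::translate(glm::mat4(1.0f), pose.positionWorld) *
           glm::mat4_cast(pose.orientation);
}

// OpenGL viewing transform: world -> head/view space.
glm::mat4 viewMatrix(const HeadPose& pose)
{
    return glm::inverse(headToWorld(pose));
}

// A hand point tracked relative to the head, expressed in world space,
// so it can be intersected with objects modeled in world coordinates.
glm::vec3 handInWorld(const HeadPose& pose, const glm::vec3& handInHead)
{
    return glm::vec3(headToWorld(pose) * glm::vec4(handInHead, 1.0f));
}

Without the position coming from the depth camera (or some other tracker), headToWorld above simply cannot be built from the Rift's orientation alone - which is the missing piece being pointed out.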

>
> To be pedantic, the view I see in the Rift is all I really care about
> for the interaction; I just need to be able to accurately place my
> hand/fingers in the view being presented to me. It is totally
> irrelevant if the button I see in the rift is not exactly in the real
> world position, because I have no idea nor care about that, as I can't
> see the real world. The only thing that matters is where my virtual
> hand appears in the local view I see it in the Rift, so the only data I
> need is Rift -> Hand, period.

I am not talking about real world at all. I am speaking about basic 3D
scene setup, where the HMD is your virtual camera, the hands are (in
your case) tracked relatively to your head and the objects are
positioned somewhere in the 3D space (in world coordinates, independent
from your head position).

> Having my hands visible in the VR world is such a huge win, I don't see
> a big downside to this. If there's another way to do this with someone
> roaming around a VR space, please let me know!

Sure, completely agreed - I was testing the Rift today at work and
everyone complained that they cannot see their bodies. So yeah, that's a
problem that needs to be addressed in any such application (it isn't a
new issue - there is a body of published papers on this too). 

However, you will need to keep the math requirements in check - your
setup could work, just not in the way you want to do it. Well, if you
don't care at all about interaction and want only the hands displayed,
then what you want to do is OK. However, if you want to interact with
something in the scene, you will need to do what I have described above.

Best,

Jan

Lorne Covington

unread,
Apr 22, 2013, 7:34:06 PM4/22/13
to vr-g...@googlegroups.com


On 4/22/2013 5:30 PM, Jan Ciger wrote:
> On 04/22/2013 03:03 PM, Lorne Covington wrote:
>> Well, duh! But for representing my hands properly in my view that is
>> all I need. Just talking about that, having my arms/hands in the view
>> is WAY more realistic than no hands whatsoever.
>
> Agreed. However just hands in the space are not likely to help you
> much if you cannot do anything with them, not even manage occlusions
> and collisions properly. It will feel a bit like the movie Ghost when
> the guy tried to touch things and his hands were passing straight
> through objects. Really disturbing ...

Not what I am talking about. I can test for hand position in view
space, and the virtual button position in view space, to test for
intersection. And these are all just transforms anyway, it does not
really matter which way the transforms go.
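For illustration, a sketch of the "other direction" of the same test, assuming a world-to-view matrix is available from whatever head tracking is in use: the world-space button is brought into view space, where the CGC hand data already lives, and a simple proximity test is done there. The names and the 3 cm touch radius are placeholders.

#include <glm/glm.hpp>

bool fingertipTouchesButton(const glm::mat4& worldToView,      // view matrix
                            const glm::vec3& buttonWorld,      // button centre
                            const glm::vec3& fingertipView,    // from the CGC
                            float touchRadius = 0.03f)         // metres
{
    // Bring the button into the same (view) frame as the fingertip.
    glm::vec3 buttonView =
        glm::vec3(worldToView * glm::vec4(buttonWorld, 1.0f));

    // Simple sphere test; a real UI would use the button's actual shape.
    return glm::distance(buttonView, fingertipView) <= touchRadius;
}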


> Now if you want to interact with those virtual objects, you will need
> to use the transform above to calculate where your hands (tracked
> relative to your head) are relative to those objects which are
> positioned in the world coordinate system (so that you can check for
> intersections, for example). Otherwise you would have to render
> everything relative to your head and thus "carry it with you" as you
> are navigating the environment. That's is OK for a HUD like thing, but
> not for a 3D scene - I want the geometry to stay put in the scene, not
> to travel with me ...
>
> However, again, this has been all discussed before.

Then it should be easy to point out my error! (;^}) Again, it does not
matter which way I do the transforms to test for collisions, whether I
transform my hands into world space or the world into hand space. No
loss of precision either way.

Please bear with me. If my head is tracked, and I am using that to
present the view of the virtual world button, it does not matter to me
if that representation has errors in it, only that I can line my hand up
with it.

Granted that if the head tracking were noisy (either Rift or depth cam),
so that the button seemed to jump around, that would be a problem. But
I have pretty stable head position tracking with the depth cameras so
that is not an issue, unless it is with the Rift.

Otherwise, as I (the user) am in the middle of these two systems, as it
were, I simply move my hand until the representation of it touches the
representation of the button. Absolute accuracy is unimportant as long
as it is not so gross as to conflict with my body's proprioception,
which it does not in my tests so far - not even close.


> However, you will need to keep the math requirements in check - your
> setup could work, just not in the way you want to do it. Well, if you
> don't care at all about interaction and want only the hands displayed,
> then what you want to do is OK. However, if you want to interact with
> something in the scene, you will need to do what I have described above.

I'm afraid I still do not see the disconnect. I am tracking the head in
the world with depth cameras, which gives me the point of perspective.
The rift will give me orientation for my view. The CGC on the Rift
gives me view-relative hand position. I can test for hand and virtual
world object intersection. Where is the problem?

I think this is the important concept: even if my world head position is
wrong, so is what I'm seeing, and so is my hand by the same amount.

My hands will ALWAYS be rendered correctly in my view, to the accuracy
of the CGC, as that is relative to the head position, no matter where
that is. And since the view of the world will be rendered for that head
position, errors and all, my hands and the position of the objects in my
view track together ON TOP of any head/orientation errors.

So I simply do not see where things can get out of whack past the
accuracy of the CGC. Again, sorry if I am being dense, but just saying
"this has been discussed before" is not helpful without stating where in
that chain the problem is. I absolutely can see where there would be
problems with hand position taken by another system not fixed to the
head, like the Hydra and Gametrak, but if the CGC on the Rift is
properly calibrated I do not.

Thanks Jan!

Eric Vaughan

unread,
Apr 22, 2013, 9:45:46 PM4/22/13
to vr-g...@googlegroups.com
I'm afraid I still do not see the disconnect.  I am tracking the head in the world with depth cameras, which gives me the point of perspective.  The rift will give me orientation for my view.  The CGC on the Rift gives me view-relative hand position.  I can test for hand and virtual world object intersection.  Where is the problem?
I'd be worried about the propagation error between all of these transforms and multiple tracking methods, but I still think it's worth trying.

Lorne Covington

unread,
Apr 22, 2013, 11:52:27 PM4/22/13
to vr-g...@googlegroups.com


On 4/22/2013 9:45 PM, Eric Vaughan wrote:

I'm afraid I still do not see the disconnect.  I am tracking the head in the world with depth cameras, which gives me the point of perspective.  The rift will give me orientation for my view.  The CGC on the Rift gives me view-relative hand position.  I can test for hand and virtual world object intersection.  Where is the problem?
I'd be worried about the propagation error between all of these transforms and multiple tracking methods, but I still think it's worth trying.

If a few double-precision matrix operations result in much error, then it's sure lucky NASA can hit Mars! (;^}) But what I'm trying to say is that, in practice, since I ONLY care about hands in relation to objects in my view, the only error that could cause a problem here is that of the CGC.  And even then, it should be small enough that the user can easily compensate and probably won't notice it (they cannot see their real hands, and there are no real-world objects I'm trying to guide them to).

I'm all ears if someone can specify where the fault is other than "it's been talked about and it won't work".  Because so far I don't see where there is any adding of various sensor errors.  I just want to reach out and touch what is in my view, and it doesn't matter if the depth cameras think I'm in the next county and the Rift thinks I'm sideways; the CGC should give me good data for guiding my hand to that sideways thing I'm looking at in the next county because it went along with me for the ride.

And believe me, I'll try it the day my first Rift gets here!  (Sound of fingers drumming on table...)

Thanks Eric!

Jan Ciger

unread,
Apr 23, 2013, 5:46:36 AM4/23/13
to vr-g...@googlegroups.com
Hi, 

On Tue, Apr 23, 2013 at 1:34 AM, Lorne Covington <noir...@gmail.com> wrote:


Not what I am talking about.  I can test for hand position in view space, and the virtual button position in view space, to test for intersection.  And these are all just transforms anyway, it does not really matter which way the transforms go.

Of course. But to bring the 3D object from world space to view space, you need to know *where* your camera is in world space. That is what I am talking about. You cannot practically model your entire scene directly in view space, unless you say that your camera is fixed and you will not be able to navigate the scene.
 
Then it should be easy to point out my error! (;^})  Again, it does not matter which way I do the transforms to test for collisions, if I transform my hands into world space or the world into hand space.  No loss of precision either way.

The error is above: you seem to ignore the fact that you cannot model your scene in view space. You can do the testing/interaction in view space, but you first need to transform the models there. And you don't have that transform unless you use the extra info from your Xtion or some other tracker (even a virtual tracker based on gamepad input used for navigation).

The loss of precision I was talking about is due to the chain of transforms - if you build the transforms using data from the Xtion and add data from the Rift and ToF camera on your head, errors/noise of each will multiply. The question is how much this will be an issue, but in my experience with Kinect (Xtion is likely similar, as it has the same sensor), Kinect is perhaps accurate to about 1cm, not really more, due to the low resolution of the depth map. Moreover, it tends to be quite noisy. However, if you do your own calculations instead of the skeleton tracking (as you said), who knows, perhaps it will be enough. I was playing with the iPi motion capture with two Kinects and they can do some amazing stuff - however there the processing is a very heavy offline calculation. 
 

Please bear with me.  If my head is tracked, and I am using that to present the view of the virtual world button, it does not matter to me if that representation has errors in it, only that I can line my hand up with it.

But your head is tracked only by the Rift, no? You said you don't want to use the Xtion for this (only to improve the accuracy later by some sort of sensor fusion). You have only rotation info from that. How do you know where in space it is, so that you can actually align that button (modeled in world space) with it?
 

I'm afraid I still do not see the disconnect.  I am tracking the head in the world with depth cameras, which gives me the point of perspective.  The rift will give me orientation for my view.  The CGC on the Rift gives me view-relative hand position.  I can test for hand and virtual world object intersection.  Where is the problem?

Ah ok, so it seems that we have misunderstood each other. If you do this, then yes, that will work and it is what I was describing, in fact.   

I was under the impression that you *don't want* to use the depth cameras to actually track the head and use them only to improve the accuracy of the CGC + Rift combination (e.g. by using some sort of interpolation/sensor fusion), not to add the missing 3DOF.  That would make you lack the position information and the setup wouldn't work. 

The discussion I was referring to was an older thread about some people who wanted to mount Leap to the Rift to track the hands in a similar way as you do with the CGC, but with no additional head tracking (only the orientation from the Rift). That would not work, obviously.
 

I think this is the important concept: even if my world head position is wrong, so is what I'm seeing, and so is my hand by the same amount.

My hands will ALWAYS be rendered correctly in my view, to the accuracy of the CGC, as that is relative to the head position, no matter where that is.  And since the view of the world will be rendered for that head position, errors and all, my hands and the position of the objects in my view track together ON TOP of any head/orientation errors.

Yes, that was not in dispute. I was concerned that you don't actually have the head position tracked relative to the world (using the depth cameras). If you have that, no problem. The only issue could be the concern about the accuracy/noise of that data if you use the Xtion/Kinect, but if you are doing your own processing then you can perhaps manage it. I was using the regular skeletal tracking for head tracking before for a large screen, standing up application (thus no problems with the skeleton). However, we had to add a strong low-pass filter (predictive Kalman in our case, but also the 1€ filter would be perfect for this) to reduce the amount of jitter we were getting, otherwise it was incredibly disturbing with the camera constantly shaking and jumping around. 
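For reference, a minimal single-axis sketch of the 1€ filter mentioned above (Casiez et al., CHI 2012), as it might be applied to each coordinate of a noisy tracked head position; the parameter values are illustrative starting points only and would need tuning per setup:

#include <cmath>

class OneEuroFilter {
public:
    OneEuroFilter(float minCutoffHz = 1.0f, float betaCoeff = 0.05f,
                  float dCutoffHz = 1.0f)
        : minCutoff(minCutoffHz), beta(betaCoeff), dCutoff(dCutoffHz) {}

    // x: new raw sample, dt: seconds since the previous sample.
    float filter(float x, float dt)
    {
        if (!initialized) {
            initialized = true;
            prevRaw = prevFiltered = x;
            return x;
        }
        // Smoothed derivative of the signal (units per second).
        float dx  = (x - prevRaw) / dt;
        float edx = lowpass(dx, prevDeriv, alpha(dCutoff, dt));
        prevDeriv = edx;

        // Fast motion -> higher cutoff -> less smoothing and less lag;
        // slow motion -> low cutoff -> strong smoothing, little jitter.
        float cutoff = minCutoff + beta * std::fabs(edx);
        float result = lowpass(x, prevFiltered, alpha(cutoff, dt));

        prevRaw = x;
        prevFiltered = result;
        return result;
    }

private:
    static float alpha(float cutoffHz, float dt)
    {
        float tau = 1.0f / (2.0f * 3.14159265f * cutoffHz);
        return 1.0f / (1.0f + tau / dt);
    }
    static float lowpass(float x, float prev, float a)
    {
        return a * x + (1.0f - a) * prev;
    }

    float minCutoff, beta, dCutoff;
    bool  initialized  = false;
    float prevRaw      = 0.0f;
    float prevFiltered = 0.0f;
    float prevDeriv    = 0.0f;
};

// Usage: one filter instance per axis, e.g. headX = fx.filter(rawX, dt);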
 
Regards,

Jan
