Kinect virtual world integration


Nink

Nov 22, 2010, 9:31:59 PM
to OpenKinect
I saw a video by Mitch Kapor a couple of years ago where he used a
ZCam to cruise around Second Life. I was wondering what we could do
in this space and what the best platform would be for integrating the
Kinect with virtual worlds.

We could do some basic gestures (forward, backward, up, down, left,
right), or we could get a little fancier with skeleton tracking and
have the avatar follow our moves. Any thoughts?

http://www.youtube.com/watch?v=2t52gkAwJq8

chuan_l

Nov 23, 2010, 3:21:55 AM
to OpenKinect


On Nov 23, 1:31 pm, Nink <nink...@gmail.com> wrote:

> We could do some basic gestures (forward, backward, up, down, left,
> right), or we could get a little fancier with skeleton tracking and
> have the avatar follow our moves. Any thoughts?

We've been thinking a lot about gestural controls and how they might
relate to movement, in particular for a non-visual game experience,
over the last eight months or so. The main takeaway so far has been
that when we added technologies such as head tracking for adaptive
binaural playback, that kind of technology ends up "fixing" or
rooting the player to a spot.

It's really difficult to overcome the problem of "movement", or
flying through space: you can see in all the Kinect games so far that
the avatar is basically stationary while the surroundings scroll
past. I'm not sure this kind of technology can even address it
properly, since it privileges a small detection area in front of the
sensor, and some thinking outside the box is due.

IMHO, simply mapping avatar "movement" to hand gestures [ such as
pointing ], a la Jaron Lanier's VR work, *is* a bad UX hack that's
been perpetuated simply because we haven't thought of a more natural
or intuitive solution. Hands are important and should ideally be kept
free for manipulating items / data / objects in front, while the user
should still be able to rotate or move their body position at the
same time [ in relation to the world ]. Nevertheless, it's inevitable
that a "vocabulary" of gestures will begin to emerge, one that
cobbles together work from many systems such as g-Speak, PrimeSense,
the Kinect games and so on; and this is something to watch.

Whether it's any good at the beginning is a different matter. What I
am certain about is that such a system of gestures should be
"stackable", so that the user, like human physiology, can perform
multiple actions at the same time; which leads to the more
interesting aspects of machine interaction, like performance and
virtuosity.



-- Chuan




Daynuv

Nov 23, 2010, 4:49:04 AM
to OpenKinect
I'm hugely interested in this area and would love to see basic
gestures mapped to animated avatar gestures (usually triggered by
function keys in the virtual world client). For instance: clapping my
hands to trigger the hand-clapping animation; waving with one hand to
trigger the waving animation; holding one hand outstretched and
making a shake-hands motion; holding both hands in the air and
pumping them from one side to the other to trigger a dance animation.
And so on.
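
A minimal sketch of that kind of mapping, assuming hand and head
positions already come from some external skeleton tracker (the
gesture names, thresholds, and key bindings below are purely
illustrative, and xdotool is used to inject the function key into
whatever client window has focus on Linux):

    # Map a recognised gesture to the function key that triggers the
    # matching avatar animation in the virtual-world client.
    import subprocess

    # Hypothetical bindings: gesture label -> animation function key.
    GESTURE_TO_KEY = {
        "clap": "F1",       # hand-clapping animation
        "wave": "F2",       # waving animation
        "handshake": "F3",  # shake-hands animation
        "dance": "F4",      # dance animation
    }

    def classify(left_hand, right_hand, head):
        """Very rough heuristics over (x, y, z) joint positions in metres."""
        lx, ly, _ = left_hand
        rx, ry, _ = right_hand
        _, hy, _ = head
        if abs(lx - rx) < 0.15 and abs(ly - ry) < 0.15:
            return "clap"                 # hands brought together
        if ly > hy or ry > hy:
            return "wave"                 # a hand raised above the head
        return None

    def trigger(gesture):
        key = GESTURE_TO_KEY.get(gesture)
        if key:
            # Send the key press to the focused window (the VW client).
            subprocess.call(["xdotool", "key", key])

    # Fabricated single frame of tracked joints, just to exercise the code.
    trigger(classify((0.05, 1.2, 2.0), (-0.05, 1.2, 2.0), (0.0, 1.7, 2.0)))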

chuan_l

Nov 23, 2010, 5:56:19 AM
to OpenKinect


On Nov 23, 8:49 pm, Daynuv <eirepren...@gmail.com> wrote:
> I'm hugely interested in this area and would love to see basic
> gestures mapped to animated avatar gestures (usually triggered by
> function keys in the virtual world client). For instance clapping my
> hands to trigger the hand-clapping animation. Waving with one hand to
> trigger the waving animation. Holding one hand outstretched and making
> a shake-hands motion. Holding both hands in the air and pumping them
> from one side to the other to trigger a dance animation. And so on.


Miming actions to "trigger" them seems a bit redundant, in that you
could instead represent these "commands" as more symbolic gestures
that are easier to detect. Better still, how about projecting a
user's RGB-depth image directly into the virtual space, so that you
enable a wider range of expression [ and as a bonus don't have to do
any detection ]. I'm just trying to think through what might work
best practically, and of course there's probably a lot to learn from
doing it with classifiers and so on. It just reminds me of that old
Borges story [ Don Quixote? ] where the protagonist's map ends up
being almost the same size as the terrain ..!
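
For what it's worth, a rough sketch of that projection idea, assuming
the libfreenect Python wrapper: grab one depth and RGB frame and turn
it into a coarse coloured point cloud a client could render directly.
The focal length and the raw-depth-to-metres conversion are ballpark
community figures, not calibrated values, and depth/RGB registration
is ignored here.

    import freenect
    import numpy as np

    def grab_point_cloud(step=8):
        depth, _ = freenect.sync_get_depth()   # 640x480 raw 11-bit depth
        rgb, _ = freenect.sync_get_video()     # 640x480x3 uint8 colour

        fx = fy = 580.0         # approximate focal length in pixels
        cx, cy = 320.0, 240.0   # principal point (image centre)

        points = []
        for v in range(0, 480, step):
            for u in range(0, 640, step):
                raw = depth[v, u]
                if raw >= 2047:                # 2047 marks "no reading"
                    continue
                # Commonly quoted raw-to-metres approximation for the Kinect.
                z = 0.1236 * np.tan(raw / 2842.5 + 1.1863)
                x = (u - cx) * z / fx
                y = (v - cy) * z / fy
                r, g, b = rgb[v, u]            # NB: not registered to depth
                points.append((x, y, z, int(r), int(g), int(b)))
        return points

    print(len(grab_point_cloud()), "points in this frame")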


Cheers,


-- Chuan

Nink

Nov 23, 2010, 9:07:35 AM
to openk...@googlegroups.com

Gestures.
Lots of virtual worlds use 3D spatial voice (Vivox, Mumble, DiamondWare, etc.). When we get the microphones working, we may be able to do something cool with voice commands, perhaps a hand or voice trigger followed by a word to issue a command, but I think hand gestures are still our best opportunity.

I like the mimic-me idea, especially when you're in a CAVE (or virtual CAVE) environment. The concept of building with something like TISCH is also interesting. If you have seen the World Builder video, this is all very possible: stretch a prim, mould a mesh in your hands like putty. Very exciting.

I think the first step is just to get some basic gestures going and map them to the keyboard (up arrow, down arrow, left arrow, right arrow, Page Up, Page Down). The two-finger peace sign seems to read clearly on the Kinect, so maybe two fingers up for the up arrow, two fingers down for the down arrow, and so on; Page Up and Page Down could be thumbs up and thumbs down. This would be useful in a range of programs as well as in virtual worlds.
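
Finger-level detection (the peace sign, thumbs up/down) is harder than it sounds from raw depth alone, so as a placeholder here is a sketch, assuming the libfreenect Python wrapper, numpy, and the xdotool CLI, that maps the position of the nearest object (usually a hand held toward the sensor) to arrow-key presses:

    import subprocess
    import time

    import freenect
    import numpy as np

    DEAD_ZONE = 60  # pixels; ignore small offsets so a centred hand does nothing

    def nearest_offset():
        """Offset of the nearest object's centroid from the frame centre."""
        depth, _ = freenect.sync_get_depth()
        valid = depth < 2047                   # 2047 means "no reading"
        if not valid.any():
            return None
        near = depth[valid].min()
        ys, xs = np.nonzero(valid & (depth < near + 50))
        return xs.mean() - 320.0, ys.mean() - 240.0

    while True:
        offset = nearest_offset()
        if offset is not None:
            dx, dy = offset
            if max(abs(dx), abs(dy)) > DEAD_ZONE:
                if abs(dx) > abs(dy):
                    key = "Right" if dx > 0 else "Left"
                else:
                    key = "Down" if dy > 0 else "Up"
                subprocess.call(["xdotool", "key", key])  # goes to focused window
        time.sleep(0.2)                                   # don't flood the client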

I like the idea of projecting a 3D model of ourselves into the virtual world; it would be a great use of the technology, and the use of Kinects in 3D video conferencing will be huge. We also need a solution for avatars, as one of the reasons virtual worlds are so successful is that people can appear as they would like to appear, not necessarily as they actually appear in real life. I.e. I am tanned, muscular and good-looking in a virtual world, and white, freckly and could lose a few pounds in the real world. You can scan your photo on various sites and create an avatar that is a 3D mesh of your real self, but very few people do this.




Sent from my BlackBerry

Dan

Nov 23, 2010, 4:28:05 PM
to OpenKinect
Mimicking is actually a great way to trigger actions. Since the
gesture and the action animation are the same, authoring is
simplified. Once triggered, you can augment the user's performance
with other inputs, like physics feedback from the virtual world or
the high-fidelity keyframe data from the mimicked animation, which
gives you expressiveness beyond just projecting the user's
performance. We're working on a system along those lines at
Activate3D. That system is proprietary (unfortunately), but we'd be
happy to share some thoughts and lessons learned if folks have
questions.
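
Since the Activate3D system itself is proprietary, the following is
only a generic illustration of the augmentation described above: a
per-joint linear blend between the live tracked pose and the keyframe
pose of the triggered animation, so the result keeps the user's
timing but gains the animator's polish. Joint names, coordinates, and
the blend weight are arbitrary.

    def blend_pose(tracked, keyframe, weight=0.6):
        """Per-joint blend: weight=0 is pure capture, weight=1 is pure keyframe.

        Positions blend linearly; joint rotations would need slerp, which is
        omitted here.
        """
        out = {}
        for joint, live in tracked.items():
            canned = keyframe.get(joint, live)
            out[joint] = tuple((1.0 - weight) * l + weight * c
                               for l, c in zip(live, canned))
        return out

    # Fabricated single frame: the user's raised hand vs. the "wave" keyframe.
    tracked = {"right_hand": (0.30, 1.60, 2.00)}
    keyframe = {"right_hand": (0.35, 1.75, 2.00)}
    print(blend_pose(tracked, keyframe))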

dba

Dave Pentecost

Nov 23, 2010, 4:55:21 PM
to openk...@googlegroups.com
I may start another thread for this, but I am watching for someone who is working with Kinect and Unity, the game engine. I see great potential there, and for my own part I would want a way to control, through gestures, a Unity-based digital dome that we are building (a 30 ft tilted fulldome planetarium). The first step might be a general, open Kinect/OSC library. If anyone is interested in this or has already started (I've seen one iPad/OSC/Processing/Kinect hack), please let me know.
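
A minimal sketch of such a bridge, assuming the libfreenect Python wrapper and the python-osc package (the /kinect address, the port, and whatever the Unity or dome side does with the messages are all placeholders): it just forwards the centroid of the nearest object as an OSC message each frame.

    import time

    import freenect
    import numpy as np
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # assumed Unity-side OSC listener

    while True:
        depth, _ = freenect.sync_get_depth()
        valid = depth < 2047                     # 2047 means "no reading"
        if valid.any():
            near = depth[valid].min()
            ys, xs = np.nonzero(valid & (depth < near + 50))
            client.send_message("/kinect/nearest",
                                [float(xs.mean()), float(ys.mean()), float(near)])
        time.sleep(1.0 / 30)                     # roughly the Kinect frame rate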


--
Director, Technology
Center for Community
http://www.girlsclub.org/building

@dpentecost
Cell 646 704 2021


Chris Rojas

Nov 23, 2010, 6:30:45 PM
to openk...@googlegroups.com, dave.pe...@gmail.com
Yeah, the Kinect/iPad/Processing/OSC thing was mine. I'm rewriting it in openFrameworks and adding some features, because Processing isn't as fast as OF. Drop me an email for the source code... I'm literally working on it right now.

-Chris Rojas
303-886-8922
Crux...@gmail.com
www.ProjectAllusion.com
www.CruxPhotography.com


Interactive Engineer
SuperTouch / Interference Inc.
611 Broadway #819
New York, NY 10012
Ch...@SuperTou.ch
www.SuperTou.ch
www.InterferenceInc.com