On Nov 23, 1:31 pm, Nink <nink...@gmail.com> wrote:
> We could do some basic gestures (forward, backward, up, down, left,
> right), or we could get a little fancier with some skeleton tracking
> and have the avatar follow our moves. Any thoughts?
We've been thinking a lot about gestural controls and how they might
relate to movement, in particular for a non-visual game experience,
over the last eight months or so. The main takeaway so far is that
when we added technologies such as head-tracking for adaptive binaural
playback, this kind of technology ended up "fixing" or rooting the
player to a single spot.
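To make that concrete, here's a minimal sketch of the head-yaw
compensation I mean. It's a stand-in, not our implementation: real
adaptive binaural playback convolves with HRTFs, and the tracker feed
here is assumed.

    import math

    def binaural_gains(source_azimuth_deg, head_yaw_deg):
        """Rough left/right gains for a world-anchored sound source,
        compensating for the player's head yaw."""
        # Angle of the source relative to where the head now points.
        relative = math.radians(source_azimuth_deg - head_yaw_deg)
        # Constant-power pan: 0 = straight ahead, +1 = hard right.
        pan = max(-1.0, min(1.0, math.sin(relative)))
        left = math.cos((pan + 1.0) * math.pi / 4.0)
        right = math.sin((pan + 1.0) * math.pi / 4.0)
        return left, right

    # A source 30 degrees to the player's right re-centres once the
    # player turns their head 30 degrees towards it.
    print(binaural_gains(30.0, 30.0))  # (0.707..., 0.707...)

Note that the tracker only ever feeds in orientation, never position,
which is exactly the rooting effect: the world pivots around a player
who stays put.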
It's really difficult to overcome the problem of "movement", of flying
through space: you can see in all the Kinect games so far that the
avatar is basically stationary while the surroundings scroll past. I'm
not sure this kind of technology can even address it properly, as it
privileges a small square of floor space for detection; some thinking
outside the box is due.
IMHO, simply mapping avatar "movement" to hand gestures [such as
pointing], a la Jaron Lanier's VR work, *is* a bad UX hack that's been
perpetuated simply because we haven't thought of a more natural or
intuitive solution. Hands are important and ideally should be kept
free for manipulating items / data / objects in front, while the user
should be able to rotate / move their body position at the same time
[in relation to the world]. Nevertheless, it's inevitable that a
"vocabulary" of gestures will begin to emerge, one that cobbles
together the work of many systems such as g-Speak, PrimeSense, the
Kinect games and so on; this is something to watch.
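For clarity, the hack I'm criticising looks roughly like this (the
joint names are illustrative, not any specific skeleton-tracking
SDK's):

    def pointing_to_velocity(shoulder, hand, speed=1.0):
        """The classic VR hack: steer the avatar wherever the hand
        points. shoulder, hand: (x, y, z) joint positions from a
        skeleton tracker."""
        dx, dy, dz = (h - s for h, s in zip(hand, shoulder))
        length = (dx * dx + dy * dy + dz * dz) ** 0.5
        if length < 1e-6:
            return (0.0, 0.0, 0.0)
        # The normalised pointing ray becomes the avatar's velocity,
        # which is exactly why the hand is no longer free to
        # manipulate anything else.
        return tuple(speed * c / length for c in (dx, dy, dz))

    # e.g. arm extended forward and slightly up -> fly forward and up
    print(pointing_to_velocity((0.0, 1.4, 0.0), (0.3, 1.6, 0.6)))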
Whether that vocabulary is any good at the beginning is a different
matter. What I am certain about is that such a system of gestures
should be "stackable", so that the user can, as human physiology
allows, perform multiple actions at the same time; this opens up more
interesting aspects of machine interaction, like performance and
virtuosity.
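Something like the sketch below, where every recognizer runs on each
frame and all matches fire together, instead of the usual
one-exclusive-gesture-at-a-time state machine. The gesture names and
thresholds are made up for illustration:

    def lean(frame):
        return "lean_left" if frame["torso_tilt"] < -0.2 else None

    def point(frame):
        return "point" if frame["arm_extension"] > 0.8 else None

    def nod(frame):
        return "nod" if frame["head_pitch_delta"] > 0.3 else None

    RECOGNIZERS = [lean, point, nod]

    def active_gestures(frame):
        """Return every gesture present in this frame, not just one."""
        found = set()
        for recognize in RECOGNIZERS:
            gesture = recognize(frame)
            if gesture is not None:
                found.add(gesture)
        return found

    # Leaning left while pointing and nodding all register at once.
    frame = {"torso_tilt": -0.5, "arm_extension": 0.9,
             "head_pitch_delta": 0.4}
    print(active_gestures(frame))  # {'lean_left', 'point', 'nod'}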
-- Chuan