Hi Nathan,
I have been exploring how we could use the Leap device in our simulations. I have read the Leap developer release notes extensively, tried the built-in functions, and written some joint-mapping algorithms to build basic custom gestures. I can now use Leap's pre-defined functions and gestures quite confidently for my own requirements, and I have started analysing how to integrate the Leap with our simulations. So,
do you have a particular simulation in mind that I should try integrating gestures into? While thinking this through, I have come up with several ways it could be implemented, which brings me to the main point of discussion:
What all functionality should the gestures provide?
Note: all gesture names written below are terms used in Leap's documentation.
Some of the ones I was thinking of were:
This is one approach to gesture simulation, but these gestures may not carry much "meaning", which is what the project aims for. In that case, we would instead define gestures separately for each simulation, rather than only for the elements common to all simulations, and serve those to the users. Some of them could be:
- A closed-fist gesture for increasing the pressure, and an open palm for decreasing it.
- The palms of two hands moving apart/coming together for increasing/decreasing the volume.
- Play/Pause remains the same as above.
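As a rough sketch of how per-simulation gestures like these could be recognised, here is a minimal classifier over the hand data Leap exposes each frame. The `grabStrength` (0 for an open palm, 1 for a closed fist) and `palmPosition` fields follow Leap's documentation; the thresholds and the action names are my own placeholder assumptions, not final choices:

```javascript
// Sketch: classify a simulation action from Leap-style hand data.
// grabStrength runs from 0 (open palm) to 1 (closed fist) in Leap's
// frame data; thresholds and action names below are placeholders.
function classifyGesture(hands) {
  if (hands.length === 1) {
    const h = hands[0];
    if (h.grabStrength > 0.9) return "increase-pressure"; // closed fist
    if (h.grabStrength < 0.1) return "decrease-pressure"; // open palm
  } else if (hands.length === 2) {
    // Distance between the two palms (in mm) decides volume up/down.
    const [a, b] = hands.map(h => h.palmPosition);
    const dist = Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
    return dist > 200 ? "increase-volume" : "decrease-volume"; // guessed threshold
  }
  return null; // no recognised gesture this frame
}
```

In practice this would be called once per frame from Leap's frame loop; smoothing over several frames would be needed so a single noisy reading doesn't trigger an action.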
Many more gestures could be implemented, but the point of this mail is to decide HOW they should be implemented. Now comes the most difficult and technical approach, but it is the one users will absolutely love (me included!) :D
This is called Train your Leap. We can give each user a personalized experience by letting them map gestures to simulation elements (such as the temperature slider, the play/pause button, etc.) however they want. See the attached diagram of the process. We intend to save the gestures a user chooses once, and reuse them every time they open the simulation on the same machine and in the same browser.
Here is how we could achieve this, a bit more technically:
It would work somewhat like cookies. When the user maps the simulation for the first time, we save a simulation-gesture-map file on their system containing that mapping; every time they change a gesture mapping, the change is written to the simulation-gesture-map file as well. This way they get a personalized experience of our simulations. The format of the file and other specifications can be decided later if we move ahead with this idea.
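To make the cookie-like persistence concrete, here is a minimal sketch of saving and loading a simulation-gesture map. Since the simulations run in the browser, it assumes the Web Storage API (`localStorage`) rather than an actual file on disk; the key scheme and the shape of the map are placeholder choices, not the final format:

```javascript
// Sketch: persist a per-simulation gesture map, cookie-style.
// `storage` is injected so this works with window.localStorage in the
// browser or an object-backed stub elsewhere. Key scheme is a placeholder.
const KEY_PREFIX = "gesture-map:";

function saveGestureMap(storage, simulationId, map) {
  storage.setItem(KEY_PREFIX + simulationId, JSON.stringify(map));
}

function loadGestureMap(storage, simulationId) {
  const raw = storage.getItem(KEY_PREFIX + simulationId);
  return raw ? JSON.parse(raw) : {}; // empty map until the user trains one
}

function updateGestureMap(storage, simulationId, element, gesture) {
  // Re-save the whole map whenever the user changes one mapping,
  // mirroring "changes are made in the simulation-gesture-map file too".
  const map = loadGestureMap(storage, simulationId);
  map[element] = gesture;
  saveGestureMap(storage, simulationId, map);
  return map;
}
```

A call like `updateGestureMap(localStorage, "gas-properties", "pressure", "closed-fist")` (both identifiers hypothetical) would record one user choice and survive page reloads in the same browser, just like the cookie analogy suggests.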
Please share your thoughts and suggestions: how much user acceptance do you expect for each of these approaches, and how could they be improved? I'm sure audiences will love the gesture additions, and taking care of users this way will help our simulations reach an even larger user base.
Eagerly waiting for your reply.
Thank You,
Aniket Gupta.
ankg.github.io