Thank you, Golan, for your questions. Below are our answers for your in-depth analysis. Let us know if you have any follow-ups, and we’re excited to see what else you create with the Sandbox!
1. Lag, latency, and responsiveness:
Gestures on Soli Sandbox are currently optimized for mobile phone experiences. As you pointed out, Soli technology can be applied to a wide range of applications and use cases, and we are super excited about its possibilities.
Unfortunately, the update rate is locked and cannot be increased at this time.
Swipe gestures are registered after the complete motion, and there is a slight computational delay as well.
This is by design. The reach gesture in its current state was designed to detect someone picking up the phone, and we are unable to disable this time-out.
2. "Non-updated data fields"
These are reserved fields and are not usable in the Sandbox at this time.
3. Accuracy
We think adding a smoothing filter (e.g., a moving average or Kalman filter) may help. The code we shared for hand tracking has a smoothing feature built in, so please take a look and let us know if that helps.
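To give a concrete idea, here is a minimal sketch of an exponential moving-average smoother for noisy tracking values. The Point2D shape, field names, and alpha value are illustrative assumptions for this example, not part of the shared hand-tracking code.

```typescript
// Minimal exponential moving-average smoother for 2D tracking samples.
// Point2D and the default alpha are illustrative assumptions.
interface Point2D {
  x: number;
  y: number;
}

class EmaSmoother {
  private last: Point2D | null = null;

  // alpha in (0, 1]: smaller values smooth more but respond more slowly.
  constructor(private alpha: number = 0.3) {}

  update(raw: Point2D): Point2D {
    // Use the raw sample as the starting point on the first update.
    const prev = this.last ?? raw;
    const next: Point2D = {
      x: this.alpha * raw.x + (1 - this.alpha) * prev.x,
      y: this.alpha * raw.y + (1 - this.alpha) * prev.y,
    };
    this.last = next;
    return next;
  }
}

// Usage: feed each raw sample through the smoother before drawing.
const smoother = new EmaSmoother(0.25);
// const smoothed = smoother.update({ x: rawX, y: rawY });
```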
General Experiences Developing with the Four Soli Gestures
"Reach": The best way to reliably trigger this event is to reach as though you are going to pick up the phone. We designed this gesture to sense the intent of a person to interact with the device to speed up the face recognition camera, so that’s why you have to reach like you’re going to pick up the phone. Your description of a hover event is interesting. That kind of gesture may be better suited for gaming interactions like the kinds in your team’s prototypes.
“Tap”: We would recommend trying a range of different speeds and scales of the gesture to practice triggering it reliably.
Reliability
Yes, your expected use case is correct. While developing some of our game experiences, we’ve altered game mechanics to fit within these parameters. Check out the Pokémon Wave Hello experience to see some of it in action.
Delay and Latency
As noted above, swipe gestures are registered only after the complete motion, and there is a slight computational delay on top of that.
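If that delay makes motion that is coupled directly to gesture events feel choppy, one common workaround is to ease the drawn position toward a target whenever a (delayed) event arrives instead of jumping immediately. Below is a minimal sketch of that idea; the onSwipe handler, step size, and easing factor are illustrative assumptions and not part of the Sandbox API.

```typescript
// Ease toward a target each animation frame so the screen stays smooth
// even though gesture events arrive late and infrequently.
let currentX = 0; // position actually drawn each frame
let targetX = 0;  // position implied by the latest gesture event

// Wire this to your own gesture event handling (omitted here).
function onSwipe(direction: "left" | "right"): void {
  // Jump the target, not the drawn position, when the delayed event fires.
  targetX += direction === "right" ? 100 : -100;
}

function frame(): void {
  // Move a fraction of the remaining distance each frame (simple easing).
  currentX += (targetX - currentX) * 0.15;
  // ...draw the scene using currentX...
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```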
There are advantages and disadvantages of any technology, but here are some of the reasons why we’re excited about using Soli for such use cases:
Understands movement from a wider angle and at different scales (from presence to gestures).
Always sensing to detect movement at low power.
Detects objects and motion through various materials for seamless interaction.
Soli is not a camera and doesn’t capture any visual images.
If you are interested in learning more about our vision for Soli, please check out our website or this article we wrote on the Google Design blog.
Raw sensor data
We were very excited by the ideas our developers came up with using the alpha kit back then, but at this time we are not exposing any of the lower-level data.
In a future state, releasing a "dev kit" would be great, but we do not have any public information about it as of now.
Again, thank you so much for your contribution, and please feel free to submit your experiments for the team to check out here.
On Sunday, August 2, 2020 at 11:24:55 PM UTC-7, Golan Levin wrote:
Hello,
I am a professor of Art at Carnegie Mellon University. My assistant and I have been experimenting for the past couple of days with Soli for the purposes of making expressive interactive artworks. It has been fun, and you can see some of our early prototypes here:
https://groups.google.com/d/msg/soli-sandbox-community/D5RWxWXtFbk/lxlswZ4XDAAJ.
We have some observations and requests about Soli development, hopefully easily addressable by someone from the Soli/ATAP team.
Thanks so much,
Golan Levin
Observations from studying the Soli data streams:
We built a "Dashboard" in order to view the signals coming from the Soli sensor.
This app is located at:
https://soli-dashboard.glitch.me/
1. Lag, latency, and responsiveness:
Soli events are received with an update rate of ~3 Hz, so it’s challenging to produce smooth animation that is coupled to Soli events. Is there any chance this could be improved to 10-30 Hz?
Swipe gestures are detected after a lag of ~300-400 milliseconds. Is this due to the slow update rate, or is this the amount of time required for the ML to decide a gesture has occurred?
The Reach event is only capable of being detected for some 3-5 seconds, after which, no matter where your hand is, it stops detecting, and you have to withdraw your hand and re-Reach. Could this be made to not shut off?