Polymer input events


Tom Wiltzius

Jul 14, 2014, 5:52:00 AM
to polym...@googlegroups.com, input-dev
I have a naive question about input events-- to what extent is the Polymer input events library meant to obviate gesture disambiguation in script?

Specifically, if I have an element that I'm interested in taps and swipes on, I can listen to both the tap and the trackstart/track/trackend event with Polymer's gesture event library. But the tap gesture fires after the touch up whether the user tracked in the middle or not.

It isn't too painful to stop listening for other events when a track begins, but in an ideal world we'd lean on the browser to do disambiguation of all gestures that the gesture library exposes -- it needs to do them anyway for browser UI (like long press and scrolling), and this provides much neater encapsulation of behavior for components (the various gesture handlers don't need to know about one another).

Do we have a path to making that happen?

Daniel Freedman

Jul 15, 2014, 2:25:42 PM
to Tom Wiltzius, polym...@googlegroups.com, input-dev
Hey Tom, I'm not sure I understand all of your question.


On Mon, Jul 14, 2014 at 2:51 AM, Tom Wiltzius <wilt...@chromium.org> wrote:
I have a naive question about input events-- to what extent is the Polymer input events library meant to obviate gesture disambiguation in script?

Specifically, if I have an element that I'm interested in taps and swipes on, I can listen to both the tap and the trackstart/track/trackend event with Polymer's gesture event library. But the tap gesture fires after the touch up whether the user tracked in the middle or not.

This is because we made tap have this behavior deliberately: a tap will always fire on the deepest DOM node that contains both the start and the end position.
You can disable this by calling "preventTap" on any gesture event, such as track.
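A minimal sketch of the pattern Daniel describes, assuming only the preventTap() method from the source; the event shape and handler wiring are illustrative stand-ins for the Polymer gesture library's events:

```javascript
// Once a "track" (drag) gesture begins, call preventTap() on the event so
// the synthetic "tap" does not fire on touch-up. The guard makes the handler
// safe to call with events that lack preventTap().
function suppressTapOnTrack(trackEvent) {
  if (typeof trackEvent.preventTap === 'function') {
    trackEvent.preventTap(); // cancels the pending tap for this touch sequence
  }
}

// Illustrative wiring: element.addEventListener('track', suppressTapOnTrack);
```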
 

It isn't too painful to stop listening for other events when a track begins, but in an ideal world we'd lean on the browser to do disambiguation of all gestures that the gesture library exposes -- it needs to do them anyway for browser UI (like long press and scrolling), and this provides much neater encapsulation of behavior for components (the various gesture handlers don't need to know about one another).

This is the part I'm confused about. Do you mean to suggest that the gesture library should be subsumed by the platform, or that there should be some sort of exposed low level hooks that a library could use to make a set of gestures?
 

Do we have a path to making that happen?


Tom Wiltzius

Jul 15, 2014, 11:03:32 PM
to Daniel Freedman, polym...@googlegroups.com, input-dev
On Wed, Jul 16, 2014 at 3:25 AM, Daniel Freedman <dfr...@google.com> wrote:
Hey Tom, I'm not sure I understand all of your question.


On Mon, Jul 14, 2014 at 2:51 AM, Tom Wiltzius <wilt...@chromium.org> wrote:
I have a naive question about input events-- to what extent is the Polymer input events library meant to obviate gesture disambiguation in script?

Specifically, if I have an element that I'm interested in taps and swipes on, I can listen to both the tap and the trackstart/track/trackend event with Polymer's gesture event library. But the tap gesture fires after the touch up whether the user tracked in the middle or not.

This is because we made tap have this behavior. A tap will always fire on the deepest part of the DOM that contains the start and the end position.
You can disable this by calling "preventTap" on any gesture event, like track.

Got it, that makes sense.
 
 

It isn't too painful to stop listening for other events when a track begins, but in an ideal world we'd lean on the browser to do disambiguation of all gestures that the gesture library exposes -- it needs to do them anyway for browser UI (like long press and scrolling), and this provides much neater encapsulation of behavior for components (the various gesture handlers don't need to know about one another).

This is the part I'm confused about. Do you mean to suggest that the gesture library should be subsumed by the platform, or that there should be some sort of exposed low level hooks that a library could use to make a set of gestures?

Either one would be fine, but in an ideal world neither the web application nor the framework (i.e. Polymer) would need to register for JS events that it doesn't actually need (from a really simple efficiency standpoint).
 
Folks on input-dev can correct my understanding if it's wrong, but there's already a first-class notion of a "tap" gesture in Chrome. I don't think we expose it, but it's there (eating up resources), so it would be really nice to take advantage of it. There's also a gesture recognizer in the browser that decides whether you're e.g. scrolling vs. tapping. It seems a real shame to have this logic run every time the user touches the screen, only to have all of it re-done (potentially with slight differences in behavior) in JS.

Zeeshan Qureshi

Jul 15, 2014, 11:19:22 PM
to Tom Wiltzius, Daniel Freedman, polym...@googlegroups.com, input-dev
Folks on input-dev can correct my understanding if it's wrong, but there's already a first-class notion of a "tap" gesture in Chrome. I don't think we expose it, but it's there (eating up resources), so it would be really nice to take advantage of it. There's also a gesture recognizer in the browser that decides whether you're e.g. scrolling vs. tapping. It seems a real shame to have this logic run every time the user touches the screen, only to have all of it re-done (potentially with slight differences in behavior) in JS.

I don't think the logic is run twice; Tim can correct me if I'm wrong. If there is a touch handler, the touch events are dispatched to Blink, where they can be consumed and preventDefault'ed. Only if they are not preventDefault'ed are they fed into the gesture recognizer to generate Gesture* events, which are then dispatched to Blink.

Any library that wants its own gesture detector would start consuming events at the touch-event level and preventDefault them.
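The library-side disambiguation being discussed can be sketched as a tiny recognizer that consumes raw touch positions and classifies the sequence as a tap or a track. The 15px slop radius is a made-up placeholder, not a browser constant:

```javascript
// Hypothetical slop radius: movement within this distance still counts as a tap.
const SLOP_PX = 15;

// Returns a per-sequence recognizer. A real library would feed it coordinates
// from touchstart/touchmove/touchend events and preventDefault() any sequence
// it decides to claim.
function createRecognizer() {
  let startX = 0, startY = 0, moved = false;
  return {
    touchstart(x, y) { startX = x; startY = y; moved = false; },
    touchmove(x, y) {
      if (Math.hypot(x - startX, y - startY) > SLOP_PX) moved = true;
    },
    // On touch-up, classify the whole sequence.
    touchend() { return moved ? 'track' : 'tap'; },
  };
}
```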

Tom Wiltzius

Jul 15, 2014, 11:25:32 PM
to Zeeshan Qureshi, Daniel Freedman, polym...@googlegroups.com, input-dev
It can't quite be that neat, though, because you can still get scrolling with touch event handlers. So a JS gesture library would intercept all touch events and at some point early in a touch stream decide that this wasn't a gesture it was interested in. At that point, the browser would have to look at the touch event stream and decide that this was worth scrolling. The conclusions are different, but the logic must be nearly the same...
 

Zeeshan Qureshi

Jul 15, 2014, 11:27:59 PM
to Tom Wiltzius, Daniel Freedman, input-dev, polym...@googlegroups.com

Yes, and with eager gesture recognition (GR) we might already be processing the event stream in parallel while waiting for the ack.

Timothy Dresser

Jul 16, 2014, 8:49:40 AM
to Zeeshan Qureshi, Tom Wiltzius, Daniel Freedman, input-dev, polym...@googlegroups.com, Rick Byers
With eager gesture detection, the browser gesture detector will run on all touches, which means that when a JS gesture detection library is running, gesture detection is happening twice.

There are two problems here:
1. JS gesture detection feels different from native gesture detection
2. Doing gesture detection twice may cause performance issues.

The first problem could be solved for Chrome by releasing a JS version of Chrome's gesture detector (emscripten anyone?). Perhaps the community would also create versions which match the behavior of other browsers' gesture detectors.

We could solve both problems by dispatching some event to JS for each gesture that the browser detects, but I think there are a lot of non-technical hurdles here. Trying to expose the notion of "gestures" through web standards would be... difficult to navigate. +rbyers, as he's more familiar with why this would be hard.

Tom, do you have any data on how much of a performance problem JS gesture detection is?

Jared Duke

Jul 16, 2014, 11:37:29 AM
to Timothy Dresser, Zeeshan Qureshi, Tom Wiltzius, Daniel Freedman, input-dev, polym...@googlegroups.com, Rick Byers
We've worked hard to keep browser-side gesture detection costs down (~60 microseconds of CPU time per event across a given touch scroll on a Nexus 4). There may be some clever tricks we can do to cut this down further, e.g., if we see that JavaScript has preventDefault'ed the initial touchstart we can early-out from detection for that sequence (modulo some async nuances). The advantages of doing eager/greedy gesture detection have so far outweighed the potentially redundant cost.

Longer term, there may be advantages to hosting the (native) gesture detector in the render process where communication with JS is cheaper and tighter.


Rick Byers

Jul 16, 2014, 12:03:25 PM
to Jared Duke, Timothy Dresser, Zeeshan Qureshi, Tom Wiltzius, Daniel Freedman, input-dev, polym...@googlegroups.com
Yeah I don't think there's really a perf argument here.  The overhead of eager GR should be trivial.

Back to Tom's original question about detecting 'tap' in a way consistent with the platform: we do effectively expose this gesture to the web page, via the "click" event. Note that click currently takes several things into account (touch adjustment, device-specific slop values, etc.) which can't be done well from JavaScript today. So we generally encourage people to use 'click' rather than do their own gesture detection for tap.

There are some scenarios where this is inconvenient, but we've got some ideas for how to fix that - it's just not clear to me what's important enough to prioritize. Daniel, what are the scenarios where your "tap" gesture is better than relying on the browser's click event?

Rick

Timothy Dresser

Jul 16, 2014, 12:34:06 PM
to Rick Byers, Jared Duke, Zeeshan Qureshi, Tom Wiltzius, Daniel Freedman, input-dev, polym...@googlegroups.com
The potential perf win here would be if we could move all gesture detection from JS to the browser, by exposing more types of gestures to the web page. If JS gesture detection is hurting the performance of a significant number of important scenarios, it might be possible to build an argument that this is the right path forward.

I doubt that the impact of JS gesture detection is that significant.

Daniel Freedman

Jul 16, 2014, 1:29:29 PM
to Rick Byers, Jared Duke, Timothy Dresser, Zeeshan Qureshi, Tom Wiltzius, input-dev, polym...@googlegroups.com
We can't use click as-is because preventDefault on touch events will prevent the click, and we need to call preventDefault to receive events on Safari.
Also, click on touch does not have the "sloppy cursor" mechanics that it has with a mouse, where the deepest common ancestor of the mousedown and mouseup targets determines where the click is sent.
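The "sloppy cursor" lookup Daniel describes (deepest common ancestor of the down and up targets) can be sketched with plain parent-linked objects standing in for DOM nodes:

```javascript
// Walk from `a` to the root collecting ancestors, then walk up from `b`;
// the first shared node is the deepest common ancestor, i.e. the node the
// synthetic tap would be dispatched to. Nodes are { parent } objects here,
// standing in for real DOM nodes.
function deepestCommonAncestor(a, b) {
  const ancestors = new Set();
  for (let n = a; n; n = n.parent) ancestors.add(n);
  for (let n = b; n; n = n.parent) {
    if (ancestors.has(n)) return n; // first shared ancestor is the deepest
  }
  return null; // disjoint trees
}
```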

Rick Byers

Jul 16, 2014, 2:59:46 PM
to Daniel Freedman, Jared Duke, Timothy Dresser, Zeeshan Qureshi, Tom Wiltzius, input-dev, polym...@googlegroups.com
On Wed, Jul 16, 2014 at 1:29 PM, Daniel Freedman <dfr...@google.com> wrote:
We can't use click as-is because preventDefault on touch events will prevent the click, and we need to call preventDefault to receive events on Safari.

I agree this is a problem on Safari (on Chrome you can just preventDefault touchmoves, or use 'touch-action: none').  I've filed https://bugs.webkit.org/show_bug.cgi?id=134987 against WebKit to get their advice on how this should be handled.  Ideally they'd just adopt one or both of the solutions we've got for this.

Also, click on touch does not have the "sloppy cursor" mechanics that it has with a mouse, where the deepest common ancestor of the mousedown and mouseup targets determines where the click is sent.

Are you saying that if I touch, drag an arbitrary distance, then lift, that should be considered a 'tap'? That's how IE generates 'click' events for touch; you could argue we should do the same. Or perhaps there should still be a distance threshold, but the event should go to the deepest common ancestor of the start and end nodes. Can you describe the scenarios where you want this? How does it interact with the touch affordance you'd apply to something to indicate it's about to be tapped?