I have a naive question about input events -- to what extent is the Polymer input events library meant to obviate gesture disambiguation in script?

Specifically, if I have an element that I'm interested in taps and swipes on, I can listen for both the tap event and the trackstart/track/trackend events with Polymer's gesture event library. But the tap gesture fires after the touch up whether or not the user tracked in the middle.
It isn't too painful to stop listening for other events when a track begins (sketched below), but in an ideal world we'd lean on the browser to do disambiguation of all the gestures that the gesture library exposes -- it needs to recognize them anyway for browser UI (like long press and scrolling), and this provides much neater encapsulation of behavior for components (the various gesture handlers don't need to know about one another).
Do we have a path to making that happen?
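For concreteness, a minimal sketch of the manual disambiguation described above, assuming the polymer-gestures event names (tap, trackstart, track); the element selector and the flag-based approach are illustrative, not the library's own mechanism:

    var el = document.querySelector('#swipeable'); // hypothetical element
    var didTrack = false;

    el.addEventListener('trackstart', function() {
      // A drag began, so the tap that fires on touch up is spurious.
      didTrack = true;
    });

    el.addEventListener('track', function(e) {
      // Handle the swipe here.
    });

    el.addEventListener('tap', function(e) {
      if (didTrack) {      // disambiguate by hand
        didTrack = false;  // reset for the next gesture
        return;
      }
      // Handle a genuine tap here.
    });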
Hey Tom, I'm not sure I understand all of your question.

On Mon, Jul 14, 2014 at 2:51 AM, Tom Wiltzius <wilt...@chromium.org> wrote:
> I have a naive question about input events -- to what extent is the Polymer input events library meant to obviate gesture disambiguation in script? Specifically, if I have an element that I'm interested in taps and swipes on, I can listen for both the tap event and the trackstart/track/trackend events with Polymer's gesture event library. But the tap gesture fires after the touch up whether or not the user tracked in the middle.

This is because we made tap have this behavior: a tap will always fire on the deepest part of the DOM that contains both the start and the end position. You can disable this by calling "preventTap" on any gesture event, like track.
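A minimal sketch of that escape hatch, assuming the gesture events in this library expose preventTap() as a method on the event object (as described above):

    el.addEventListener('track', function(e) {
      // Calling preventTap() on any gesture event, like track, disables
      // the tap that would otherwise fire on touch up.
      e.preventTap();
    });

With this in place, the flag in the earlier sketch becomes unnecessary: once a gesture handler calls preventTap(), no tap is dispatched for that pointer.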
> It isn't too painful to stop listening for other events when a track begins, but in an ideal world we'd lean on the browser to do disambiguation of all the gestures that the gesture library exposes -- it needs to recognize them anyway for browser UI (like long press and scrolling), and this provides much neater encapsulation of behavior for components (the various gesture handlers don't need to know about one another).

This is the part I'm confused about. Do you mean to suggest that the gesture library should be subsumed by the platform, or that there should be some sort of exposed low-level hooks that a library could use to build a set of gestures?
Folks on input-dev can correct my understanding if it's wrong, but there's already a first-class notion of a "tap" gesture in Chrome. I don't think we expose it, but it's there (eating up resources), so it would be really nice to take advantage of it. There's also a gesture recognizer in the browser that decides whether you're e.g. scrolling vs. tapping. It seems a real shame to have this logic run every time the user touches the screen, only to have all of it redone (potentially with slight differences in behavior) in JS.
Yes, and with eager gesture recognition we might already be processing the event stream in parallel while waiting for the ack.
We can't use click as-is because calling preventDefault on touch events will prevent the click, and we need to call preventDefault to receive events on Safari.
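A minimal illustration of that conflict, assuming a plain DOM element el: the browser suppresses the compatibility mouse events (including click) once preventDefault() is called on the touch stream.

    el.addEventListener('touchstart', function(e) {
      // Needed for correct behavior on Safari per the message above,
      // but it also cancels the synthetic mousedown/mouseup/click.
      e.preventDefault();
    });

    el.addEventListener('click', function() {
      // Never reached via touch once the handler above runs.
    });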
Also, click on touch does not have the "sloppy cursor" mechanics that it has with a mouse, where the deepest common ancestor of the mousedown and mouseup targets determines where the click is sent.
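For clarity, a sketch of the retargeting being described, under the assumption that it amounts to finding the deepest node that contains both the down target and the up target; the helper name is hypothetical:

    // Deepest node containing both the mousedown and mouseup targets --
    // the element a "sloppy" click (or a polymer-gestures tap, per the
    // earlier message) would be dispatched to.
    function deepestCommonAncestor(a, b) {
      var node = a;
      while (node && !node.contains(b)) { // Node.contains is inclusive
        node = node.parentNode;
      }
      return node;
    }

    // e.g. deepestCommonAncestor(mousedownTarget, mouseupTarget)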