On Tue, 4 Mar 2014, at 10:20, Peter Beverloo wrote:
> When I, as a developer, am faced with the decision of whether to build a
> native or a Web app, I'd start with listing the requirements of my app.
> Even when developing a Phone app, the ability to prevent accidental input
> isn't my top priority. I'd be more concerned with getting access to the
> person's contacts, keeping the device awake during the duration of the
> call, muting other audio playing on the device, maybe opening itself in
> the
> foreground if it isn't, and so on. Preventing accidental input could also
> be done by having focused UI, and requiring a more complicated gesture
> (e.g. sliding) to stop the conversation.
As much as I agree that the proximity sensor is not a top priority, it
is definitely a nice-to-have. Hardware capabilities are a serious gap
between native and Web on mobile, and having an API to access a new
piece of hardware is always a win. There are indeed a lot of features
still missing before a developer can write a phone application on top
of the Web Platform. However, the proximity sensor is one of those
features, and I do not see why we should dismiss it just because other,
more complex pieces would still be required.
It is indeed true that without a way to keep the device awake, your
call would likely be cut while you were talking to your friend over a
WebRTC connection. But it would be just as true that you might hang up
on him/her because your jaw pressed the hang-up button.
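For illustration, a minimal sketch of how a page could guard against
that with the UserProximityEvent as currently specified; the "hangup"
element is a hypothetical control in the page:

    // Disable the hang-up control while an object (e.g. the user's
    // cheek) is close to the screen; event.near is the boolean the
    // current draft exposes.
    var hangupButton = document.getElementById('hangup');
    window.addEventListener('userproximity', function (event) {
      hangupButton.disabled = event.near;
    });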
> For air gestures, the granularity of the API often won't be high
> enough to effectively create gesture recognizers, unless the gestures
> are really basic -- slowly waving your hand three times, for example.
> The direction of a wave would not be detectable, either. Furthermore,
> it's noteworthy that all of these would require the device to be
> already unlocked with the page in the foreground, which makes most of
> the named use-cases unlikely.
You indeed need more than a proximity sensor to enable complex
gestures. A proximity sensor can't differentiate a hand going left from
a hand going right, or a fist from an open hand. As far as I know,
Samsung devices have a dedicated "gesture sensor" for that kind of
thing. That said, even though the proximity sensor is pretty simple by
itself, it still allows applications to enable simple gestures: a quick
wave in front of your phone could pause the video or music you were
playing, turn the page of the book you were reading, or simply lock the
screen (see the sketch below). Even if the wave isn't needed per se, it
definitely improves the user experience.
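To make that concrete, a rough sketch of wave detection built on
UserProximityEvent alone; the 500 ms window and the <video> element are
arbitrary choices for the example, not anything the spec defines:

    // Treat a short near -> far transition as a "wave" and toggle
    // playback of a hypothetical media element.
    var video = document.querySelector('video');
    var nearSince = null;
    window.addEventListener('userproximity', function (event) {
      if (event.near) {
        nearSince = Date.now();
      } else if (nearSince !== null && Date.now() - nearSince < 500) {
        video.paused ? video.play() : video.pause();
        nearSince = null;
      }
    });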
> To give some more technical feedback which may be relevant...
>
> The specification defines two events, DeviceProximityEvent for
> indicating the distance to a nearby object, and UserProximityEvent
> for indicating the presence of a nearby object. This seems to be a
> security measure in case the user agent does not want to (or cannot)
> expose the precise distance between the device and a nearby object.
> The specification has a brief and very generic section on privacy
> considerations, but is inconclusive. Would we want to expose this to
> every website out there? Is there a need for a security mechanism?
Isn't the choice of security mechanism pretty much left open here? We
could imagine the UA prompting once the content subscribes to the
event. Another solution would be to make the proximity estimate
deliberately fuzzy. The events could also be throttled based on page
visibility: randomly delayed for background tabs (to prevent tracking),
or even not sent at all (see the sketch below).
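To illustrate the visibility-based policy, here is how it could look if
it were expressed as page-level JavaScript (in practice the UA would
enforce this internally; the 2 s bound is an arbitrary choice):

    // Hold back readings while the page is hidden, delivering them
    // with a random delay so a background page cannot use the sensor
    // as a precise side channel.
    function throttledProximity(callback) {
      window.addEventListener('userproximity', function (event) {
        if (document.hidden) {
          setTimeout(function () { callback(event.near); },
                     Math.random() * 2000);
        } else {
          callback(event.near);
        }
      });
    }
    throttledProximity(function (near) { console.log('near:', near); });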
> It would be a lot better to have a single event (UserProximityEvent?)
> with an additional property for a distance estimate when both known
> and it's safe to expose.
I don't know much about this API, but from a quick glance I agree with
that.
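For what it's worth, the merged event could look something like this
from a page's point of view; the nullable "distance" property is purely
hypothetical here, present only when the UA knows the value and
considers it safe to expose:

    window.addEventListener('userproximity', function (event) {
      if (event.distance !== null && event.distance !== undefined) {
        console.log('Object at roughly ' + event.distance + ' cm');
      } else {
        console.log(event.near ? 'Something is close' : 'All clear');
      }
    });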
> For Android, it's important to note that the platform does not make
> any guarantees about the values returned by the proximity sensor.
> Many sensors on Android seem to be based on the amount of lux
> received by a front-facing sensor, and estimate the distance based on
> the relative darkness. Others return fixed values depending on
> whether an object is far or near. The specification seems to be
> crafted around this, and defines that whatever values the platform
> happens to offer should just be forwarded to the Web application. How
> interoperable will these values be with, say, a mobile Apple or
> Windows device?
Given that the limitation comes from the hardware rather than the OS,
with the same hardware you should see the same values regardless of the
platform, right?
> If Blink were to support the Ambient Light API, a reasonable estimate
> of this functionality could be supported by a page monitoring the
> light conditions, and detecting when the amount of lux (steeply)
> decreases. When using WebRTC, a developer could approximate the
> distance of an object in front of the camera by measuring its
> velocity in relation to the device's orientation, but that can get
> complicated really quickly :-).
A bit complicated indeed :) Also quite a waste of CPU. It would be a
shame to have a sensor dedicated to detecting user proximity and yet be
required to use another sensor (ambient light or camera) to emulate
that hardware capability.
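For completeness, the ambient-light fallback you describe could be
sketched roughly like this, using the "devicelight" event from the
Ambient Light Events draft; the 0.2 drop ratio is an arbitrary
threshold for the example:

    // Watch for a steep drop in lux and treat it as "something is
    // close to the screen".
    var lastLux = null;
    window.addEventListener('devicelight', function (event) {
      if (lastLux !== null && lastLux > 0 &&
          event.value / lastLux < 0.2) {
        console.log('Probably something close to the screen');
      }
      lastLux = event.value;
    });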
On Tue, 4 Mar 2014, at 20:03, Kostiainen, Anssi wrote:
> > It would be a lot better to have a single event (UserProximityEvent?) with an additional property for a distance estimate when both known and it's safe to expose.
>
> Given an implementation of this feature is shipping in Firefox, I think
> we may not want to merge the interfaces at this time. If you feel
> strongly about this, I'd invite you to discuss this on the appropriate
> W3C mailing list (feel free to ping me off-the-list for more details).
I guess it would be fairer to point out that the spec is now in CR
because Samsung also implemented it in WebKit [1], thus it got two
compatible implementations, I guess.
[1] https://bugs.webkit.org/show_bug.cgi?id=97630
-- Mounir