
Re: Apps and Sensitive APIs (TCPSocket API specific example subthread)


Andrew Sutherland

unread,
Mar 9, 2015, 9:23:46 PM3/9/15
to dev...@lists.mozilla.org
On Mon, Mar 9, 2015, at 08:23 PM, Jonas Sicking wrote:
> One interesting question to ask here is, would we be interested in
> adopting Tizen's API for, for example, SD card access? Or Chrome-app's
> API for TCPSocket?

Apologies if you mean these generically without wanting specific
discussion. But if it's helpful, I think TCPSocket is an interesting
real situation to discuss in this scope because besides it being a
dangerous API, there is:

1) A standardization effort underway at
http://www.w3.org/2012/sysapps/tcp-udp-sockets/ (with issues tracked
in the GitHub project at https://github.com/sysapps/tcp-udp-sockets).
I've been under the impression we intend to convert our implementation
to this spec. I've certainly been providing feedback to the spec from
experiences developing our email app.

2) An open source shim at https://github.com/whiteout-io/tcp-socket that
wraps Chrome's socket API and node.js's socket API to look like
(moz)TCPSocket. It will likely be updated to the aforementioned standard,
assuming we follow through on updating our implementation to conform
to the standard.

3) A weird Google Chrome TCP API. Their new API, which supersedes their
older lower-level-styled API, is documented at
https://developer.chrome.com/apps/sockets_tcp. Integer socket
identifiers are used and, AFAICT, you can only register listener
callbacks for all sockets at once. For onReceive, for example, the
payload is {socketId, data}.
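To make that concrete, here is a rough sketch (plain JS; the registry and function names are mine, and the delivery at the bottom is simulated so the sketch runs without a Chrome runtime) of the demultiplexing that the global-listener model forces on wrapper libraries: one onReceive-style listener routing {socketId, data} events to per-socket handlers.

```javascript
// Sketch of demultiplexing Chrome's single global onReceive listener
// into per-socket handlers; only the {socketId, data} payload shape
// is Chrome's, everything else here is illustrative.
const handlers = new Map(); // socketId -> ondata callback

function registerSocket(socketId, ondata) {
  handlers.set(socketId, ondata);
}

// A real Chrome app would pass this once to
// chrome.sockets.tcp.onReceive.addListener(onReceive).
function onReceive(info) {
  const ondata = handlers.get(info.socketId);
  if (ondata) ondata(info.data);
}

// Simulated delivery:
let received = null;
registerSocket(7, (data) => { received = data; });
onReceive({ socketId: 7, data: 'payload' });
```

This is essentially what the whiteout-io shim has to do internally to present a per-socket object API on top of Chrome's integer handles.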

Given the existing standardization effort and its arguably superior API,
I think we would only want to implement support for the Chrome API as a
concerted effort to ease porting of existing Google Chrome extensions or
to create a sufficiently-compatible runtime. If we didn't have the
standardization effort ongoing, I could see adopting the Google Chrome
API and wrapping it for any higher level needs.

Andrew

Jonas Sicking

unread,
Mar 9, 2015, 9:39:26 PM3/9/15
to Andrew Sutherland, dev...@lists.mozilla.org
On Mon, Mar 9, 2015 at 6:21 PM, Andrew Sutherland
<asuth...@asutherland.org> wrote:
> On Mon, Mar 9, 2015, at 08:23 PM, Jonas Sicking wrote:
>> One interesting question to ask here is, would we be interested in
>> adopting Tizen's API for, for example, SD card access? Or Chrome-app's
>> API for TCPSocket?
>
> Apologies if you mean these generically without wanting specific
> discussion.

I actually chose that one intentionally, since I know there's a
standardization effort. So thanks for starting this thread.

> But if it's helpful, I think TCPSocket is an interesting
> real situation to discuss in this scope because besides it being a
> dangerous API, there is:
>
> 1) A standardization effort is being made at
> http://www.w3.org/2012/sysapps/tcp-udp-sockets/ (with issues tracked
> under the github project at https://github.com/sysapps/tcp-udp-sockets).
> I've been under the impression we intend to convert our implementation
> to this spec. I've certainly been providing feedback to the spec from
> experiences developing our email app.

Has Google given any indication that they are actually planning on
implementing the "standard"?

Note that a standard is not a standard because someone wrote a
document with a W3C logo on it. It's not even a standard if Google,
Mozilla and other companies worked on that document together (which
was the case here).

It's a standard once there are multiple interoperable implementations.

So, has Google given any indication that they are going to implement
the API in that document and make it available to Chrome Apps or
Chrome extensions?

If not, we'd just be trading one FirefoxOS-specific API for another. Which
could still be worth it if the new API is sufficiently superior.

/ Jonas

Fabrice Desré

unread,
Mar 9, 2015, 10:12:21 PM3/9/15
to Jonas Sicking, dev...@lists.mozilla.org
On 03/09/2015 05:23 PM, Jonas Sicking wrote:

> Let me know what you think.

I agree. Let's do it.

Fabrice
--
Fabrice Desré
b2g team
Mozilla Corporation

Antonio Manuel Amaya Calvo

unread,
Mar 10, 2015, 6:37:07 AM3/10/15
to dev...@lists.mozilla.org

On 10/03/2015 1:23, Jonas Sicking wrote:
> (Sorry to change from dev-webapi to dev-b2g, but I think dev-b2g is
> better given the size of these changes).
>
> First off, I think we should get rid of "apps" as a platform feature.
> This doesn't need to mean that we should change the UX of B2G. That is
> a separate consideration.
>
> But we should get rid of cookie jars. And accept the web for the big
> goop of content that it is :)
>
> We could add features to allow websites to indicate that they want the
> security protections that the current cookie jars support. But per the
> above, that's not a feature that we should push through FirefoxOS
> alone. If it's something that we think is important, we should push it
> as a web feature together with Firefox desktop and other browser
> vendors.

From a security point of view, I think the cookie jar model is much better than having everything on the same pile. Maybe rather than getting rid of it, or even making it an opt-in feature, we could try to hide the implementation details from the content. So we keep a cookie jar per origin (instead of per-app as we do now, which is really the same since we can only have one app per origin), and every time that origin is loaded it uses its own cookie jar.

To keep this isolated at a process level this would mean changing the way the processes work (having one process per origin instead of one process per app, for example, so every window that's loaded for https://a.origin.com lives in the same process independently of where that window was defined)... but that should be transparent to content developers.



> I think we should also keep exposing "sensitive APIs", both to gaia
> and to 3rd party developers. Converting all the email servers in the
> world to use CORS simply isn't realistic. But we should make
> improvements in how these APIs are exposed.
>
> I do think that we still want code that uses these "sensitive APIs" to
> be signed. However that doesn't mean that we have to keep using the
> same model, of app:-protocol and installable zip files that we
> currently use.
>
> There are a few things that I think would be great to accomplish for
> content that uses these sensitive APIs:
>
> * Enable the user to simply navigate to content which uses sensitive
> APIs, without the need to install it first. I.e. enable the content to
> be available through "real" linkable URLs.
> * Enable developers to sign the content without going through a
> marketplace review. I.e. enable developers to distribute an updated
> version of their content without having to go through marketplace
> review.
> * Enable Marketplace to hand out the ability to use a particular API
> to a developer, rather than to a particular version of a particular
> app.
This is nice... but I believe it's ultimately futile. It works for well-known, giant developers... I could say I trust Google, or Mozilla, or Facebook... but I don't think it'll work for Pop&Mom SW, Inc. How would Mozilla know whether to trust them, and with which APIs?

The model we have right now sucks: it's slow to deploy, slow to update, and not similar to the rest of the web at all (it's basically Apple's model on a smaller scale, in other words). But as a user I can be reasonably sure that whatever I install will use only the permissions it needs, and won't grab my data and run with it. But just giving access to random developers, who will then do whatever they want with that permission, is the same as opening the permissions to everyone, only more complicated to implement.

> * Remove technical separation between "privileged" and "certified"
> APIs. We can still decide not to grant any third party content the
> ability to use, for example, the power API, by simply not granting the
> right to use that API to any developers other than gaia developers.
> But the client-side code doesn't need to make that decision.
> * While I think signed content that can use sensitive APIs should have
> real URLs, I think it needs to never be same-origin with unsigned
> "normal" content.
> * It would be good if we can keep the process separation advantages
> that we currently have for content that can use "sensitive APIs". I.e.
> it would be nice if it required more than finding a buffer overflow
> bug in Gecko in order to gain access to use the telephony API. But
> it'd be good if we can hide this fact as much as possible from web
> developers.
This is what I was talking about before with the jars :).
> * I think we should still keep the CSP requirements that we have for
> content that uses "sensitive APIs". I.e. all JS that can use those
> APIs has to be signed by the developer.
Again, self-signature without review only means something if the signer is really trustworthy. And you can only know that if you know the signer, which isn't really doable on the wild web. Otherwise it's just security theater, and it's only good for antimalware companies (which are the ones that benefit most from Android's model :P).


> What signing format to use, and how to keep it not same-origin as
> unsigned content, is probably best done as a separate thread.
> Hopefully we can get agreement on the rest of this thread without
> solving that part.
>
> I also think we need to stop worrying so much about the fact that
> these APIs aren't standardized. It simply isn't in our power to make
> these really standardized. It requires that other vendors are actually
> willing to work together with us on aligning APIs, and I just haven't
> seen that.
>
> And more importantly, very little content is using these APIs. The web
> is far greater. Even developers that target FirefoxOS specifically are
> 90-95% of the time able to write their content without using these
> APIs.
The problem here is that we need people using those APIs (or maybe new ones?) for the platform to move out of the "cheap smartphone for people who only had feature phones before" segment. And the second problem is that bootstrapping this requires those APIs to be usable in some form for the rest of the web.


> That said, if anyone wants to make an effort and reach out to other
> vendors to get agreement on any API, feel free to give it a try.
>
> The only thing that I could see being successful in the short term
> would be to simply adopt APIs from other platforms. Cordova and
> Node.js would be prime candidates here I think. If anyone has
> suggestions for APIs that you think would be good candidates, please
> let me know.
This is actually a very nice way of making those APIs 'reusable' :).



> As mentioned above, I think there are still some APIs that I'm very
> nervous about exposing to 3rd party websites. For example, enabling
> placing phone calls through direct calls to the API, rather than by
> using <a href="tel:..."> or a WebActivity, seems like inviting malware.
>
> It's also not a terribly good way to enable users to replace the
> built-in dialer. Since the user would still have the built-in dialer.
>
> A better solution to enable replacing the dialer UI might be to use
> some form of addon system. An addon which tweaked, or completely
> replaced, the built-in dialer UI would be awesome.
Hmm... I think that while those things would be nice to have, in a geeky way, we might be losing sight of the target. Replacing the dialer is a nice configuration option, if we can do it (well, not replacing, *adding* a new dialer, because I think replacing the certified one is a no-go from a certification point of view), but the phone already *has* a dialer. What we should worry about is giving developers enough tools to do the things this phone doesn't have but other phones do. For example:
  • IM applications (be it Whatsapp, Line, Telegram, Wickr, whatever, it should be doable on the new model).
  • VoIP applications (Skype). We already have Firefox Hello, which is nice, and it's packaged, and it uses a bunch of privileged APIs and another bunch of "they're privileged but only for you" APIs.
  • Richer calendar apps with support for public and private servers (Exchange, Google,... whatever) which might or might not have a nice REST/HTTP API.
  • File browser (since we do have an SD card that can be mounted)
  • Wearable APIs (ok, reaching for the moon here but it's something big that will only get bigger)

Maybe we're going at this the wrong way... and instead of seeing what APIs we have or would like to have we should look at what applications we want our ecosystem to have, and what is needed to implement those applications. I know we learned a lot about what was missing while writing the Firefox Hello app.


> Likewise an addon which sat between the dialer and the actual phone
> hardware, and did things like block lists, or changed incoming and/or
> outgoing phone numbers would be great. Or addons which encrypt the
> voice audio when calling friends who have the same addon.
>
> Addons have been great for Firefox desktop. I think they can be as
> awesome for FirefoxOS, if not more so.
>
> Addons will definitely be Firefox(OS) specific. But no more so than
> the telephony API is, and is likely to remain for the foreseeable
> future.
>
> Again, I think exactly how these addons will work is a better
> topic for a separate thread.


> In summary:
>
> On a technical level we'd be much more like the web has traditionally
> been. I.e. no cookie jars or app silos. The user can navigate between
> any content by following normal links. This will include content that
> uses "sensitive APIs".
>
> The only content distinction we'd end up with is "signed" vs.
> "unsigned". And to the user both would look like normal web.
>
> The "signed" content will be FirefoxOS specific until we find others
> which are interested in collaborating on APIs, which isn't expected to
> be soon. But to put this in perspective, the vast majority of authors
> are able to author content without using these APIs.
>
> On a technical level, Gaia would just be normal signed content. The
> distinction between "certified" and "privileged" disappears. Though we
> can still choose on a per-API and per-developer basis which APIs we
> allow which developers to use. Though UX-wise we might still want to
> give gaia special treatment.
>
> Users can install addons which change the behavior of other content.
> This includes changing the behavior of gaia, as well as of signed
> and unsigned websites.
>
>
> Let me know what you think.
>
> / Jonas
_______________________________________________
dev-b2g mailing list
dev...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-b2g





The information contained in this transmission is privileged and confidential information intended only for the use of the individual or entity named above. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this transmission in error, do not read it. Please immediately reply to the sender that you have received this communication in error and then delete it.


Benjamin Francis

unread,
Mar 10, 2015, 8:33:22 AM3/10/15
to Jonas Sicking, dev...@lists.mozilla.org
This sounds great, I fully agree. Comments inline.

On 10 March 2015 at 00:23, Jonas Sicking <jo...@sicking.cc> wrote:
> First off, I think we should get rid of "apps" as a platform feature.

Let's unpack that a little.
  • Get rid of the current .zip packages - Yes. For the offline part we have Service Workers. For the signing part we now have three proposed alternatives, which we can discuss in a separate thread:
    • Signed hosted packages [1]
    • Signed Service Worker cache [2]
    • Signed manifests [3] (currently my preference)
  • Get rid of the app:// protocol - yes please! Giving apps real HTTP URLs on the web will be a huge step forward. It will make them web apps, which is what we always intended.
  • Get rid of cookie jars - yes, data jars are a nice idea but break some things on the web. We can experiment with this for some apps, but probably not enforce it for all content.

What I don't necessarily think we should get rid of is the app registry.

One thing that other vendors (Google/Microsoft/Apple/Intel) *are* currently showing a lot of interest in is installable hosted web apps using a standard web app manifest [4]. The manifest can be used to describe a part of the web as a web app which can break out of the browser and register to handle a defined URL scope as a standalone app on the OS (Android, Windows 10, Firefox OS etc.). The W3C spec defines an "application context" as a browsing context with a manifest applied, which can have a different display mode with no browser chrome like "standalone" or "fullscreen" for a defined URL scope.
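For reference, a minimal manifest of the kind the W3C spec describes might look like the following (the member values are illustrative; `name`, `start_url`, `scope`, and `display` are real manifest members):

```json
{
  "name": "Example App",
  "start_url": "/app/index.html",
  "scope": "/app/",
  "display": "standalone"
}
```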

I think the app registry is still useful as a place to register an app as handling a particular URL scope. The W3C spec does not define an app installation API (for use by an app store), but instead defines an HTML manifest link relation (for use by a user agent which can detect it and provide UI to offer to install the app). This allows web apps to be discovered simply by searching and browsing the web rather than distributed through a central app store. We can still use our proprietary mozApps install API to do this internally from browser chrome in the built-in system app or bookmark app (signed Firefox apps) which could provide that UI (this is actually already landed in master). I think this registry needs to be maintained in Gecko so it knows how to handle any given HTTP navigation.
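The registry role described above can be sketched roughly like this (data and function names are hypothetical; the longest-match rule mirrors the spirit of the spec's within-scope check):

```javascript
// Hypothetical app registry: installed manifests keyed by URL scope,
// consulted on a top-level navigation to decide whether an installed
// app should capture it.
const registry = [
  { name: 'Example App', scope: 'https://example.com/app/' },
  { name: 'Example Admin', scope: 'https://example.com/app/admin/' },
];

function appForNavigation(url) {
  // Longest matching scope wins, so nested scopes behave sensibly.
  const matches = registry.filter((app) => url.startsWith(app.scope));
  matches.sort((a, b) => b.scope.length - a.scope.length);
  return matches.length ? matches[0] : null;
}
```

If no entry matches, the navigation stays in the plain browser; that is the whole decision Gecko would need the registry for.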

> * Enable the user to simply navigate to content which uses sensitive
> APIs, without the need to install it first. I.e. enable the content to
> be available through "real" linkable URLs.

Agreed. Ideally web apps should work equally well inside and outside the browser.

One thing I don't think we want (though this is largely a UX issue) is for a browser window to constantly switch between different display modes as you simply browse non-installed web apps, i.e. a manifest shouldn't be applied to a browsing context simply by navigating to a URL if the app that manifest describes has not been installed/pinned. This could make for a very nauseating experience if the display mode keeps changing as you browse, and provides a vector for phishing attacks if any web content can enter a standalone display mode without an explicit user action.

My opinion is that the browser chrome provides the user with a safety net while they browse web sites and web apps (they always know where they are, and can always go back); the manifest should only be applied when the user is no longer "just browsing" but has installed/pinned the app to their device. From that point on that app should handle its own URL scope in its preferred display mode and can capture navigations to those URLs so the system can switch to an app window to handle a navigation.

Maybe a creative UX designer can re-invent browser chrome and provide a UI where the installation step is not necessary, but I haven't yet seen a design which achieves that.
 
> * Enable developers to sign the content without going through a
> marketplace review. I.e. enable developers to distribute an updated
> version of their content without having to go through marketplace
> review.

I think this is crucial. All three of the proposed alternatives to the current signed packaged app system have problems with updates if every update has to be signed by a central authority rather than the developer themselves.
 
> * While I think signed content that can use sensitive APIs should have
> real URLs, I think it needs to never be same-origin with unsigned
> "normal" content.

This makes sense from a security point of view. A challenge is going to be that for a signed hosted resource to talk to other resources on its own domain (like a server-side API), it will need to have CORS set up (otherwise something like systemXHR is needed). The obvious answer is to encourage developers to use CORS, but that can be a hard pill to swallow. How do you even configure CORS to give access to its own domain...?
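As a rough sketch of what that server-side setup amounts to (the origin value and function name are hypothetical; the header names are the real CORS response headers):

```javascript
// Minimal CORS decision a hosted API would need once its signed
// front-end is no longer same-origin with it: echo back only the one
// origin the signed content is served from. Origin is illustrative.
const ALLOWED_ORIGIN = 'https://app.example.com';

function corsHeaders(requestOrigin) {
  if (requestOrigin !== ALLOWED_ORIGIN) return {}; // no CORS grant
  return {
    'Access-Control-Allow-Origin': ALLOWED_ORIGIN,
    // 'Vary: Origin' stops shared caches from replaying the grant
    // to responses fetched by other origins.
    'Vary': 'Origin',
  };
}
```

The awkwardness Benjamin points at is that the "other" origin here is effectively your own site's signed half, so every deployment would need this boilerplate.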
 
> The only content distinction we'd end up with is "signed" vs.
> "unsigned". And to the user both would look like normal web.
>
> The "signed" content will be FirefoxOS specific until we find others
> which are interested in collaborating on APIs, which isn't expected to
> be soon. But to put this in perspective, the vast majority of authors
> are able to author content without using these APIs.

This isn't a technical issue, but I would suggest that we re-brand Firefox*-specific certified and privileged apps as "Firefox Apps", and hosted apps as simply "Web Apps", and start to migrate those apps to use the new W3C manifest format. The properties of the mozApps hosted app manifest could just be proprietary extensions of the W3C manifest until that spec matures enough to fulfil all use cases.

We also need to provide an easy migration path for existing authors of packaged apps. I would suggest this should include offering to host those apps for people on a Mozilla-run or partner-run web server. Perhaps we can talk to the Webmaker folks about that?

I would also like to see the Marketplace move away from a "you come here to download apps" model, and towards a "guide to the best of the web" model where it just provides links to web apps already on the web which you can use instantly and optionally install afterwards.

> Let me know what you think.

Fabrice Desré

unread,
Mar 10, 2015, 12:18:01 PM3/10/15
to Antonio Manuel Amaya Calvo, dev...@lists.mozilla.org
On 03/10/2015 03:35 AM, Antonio Manuel Amaya Calvo wrote:

> From a security point of view, I think the cookie jar model is much
> better than having everything on the same pile. Maybe rather than
> getting rid of that, or even making it an opt-in feature we could try
> and hide the implementation details from the content. So we keep a
> cookie jar per origin (instead of per-app as we do know, which is really
> the same since we can only have one app per origin) and every time that

Nit: we currently support multiple apps per origin - the app identifier
is the manifest url.

Antonio Manuel Amaya Calvo

unread,
Mar 10, 2015, 1:06:05 PM3/10/15
to Fabrice Desré, dev...@lists.mozilla.org
Hmm, for packaged apps we cannot, since the origin 'host' doubles as the
directory where the package is installed. And for hosted apps it used to
not work; I didn't know we had changed that. In fact, the test store we
use internally is hosted, and I remember having to install it from a fake
origin or it wouldn't let me install more apps from that origin. Good to
know that's not the case anymore :)


Andrew Sutherland

unread,
Mar 10, 2015, 4:24:25 PM3/10/15
to dev...@lists.mozilla.org
On Tue, Mar 10, 2015, at 06:35 AM, Antonio Manuel Amaya Calvo wrote:
> On 10/03/2015 1:23, Jonas Sicking wrote:
>> * Enable Marketplace to hand out the ability to use a particular API
>> to a developer, rather than to a particular version of a particular
>> app.
> This is nice... but I believe it's ultimately futile. It works for well-known, giant developers... I could say I trust Google, or Mozilla, or Facebook... but I don't think it'll work for Pop&Mom SW, Inc. How would Mozilla know whether to trust them, and with which APIs?
>
> The model we have right now sucks: it's slow to deploy, slow to update, and not similar to the rest of the web at all (it's basically Apple's model on a smaller scale, in other words). But as a user I can be reasonably sure that whatever I install will use only the permissions it needs, and won't grab my data and run with it. But just giving access to random developers, who will then do whatever they want with that permission, is the same as opening the permissions to everyone, only more complicated to implement.
 
I think it's important to recognize that there are real limitations to the marketplace review mechanism.  I have not participated in our apps marketplace, but I was a reviewer for addons.mozilla.org previously and authored extensions that went through the review process (which is documented at https://developer.mozilla.org/en-US/Add-ons/AMO/Policy/Reviews).
 
Much of that review work is about avoiding risky/ill-advised things: using eval from a chrome context, remotely loading code in a chrome context, using sync XHR which results in a nested event loop spinning, etc.  Other aspects involved making sure there was no hidden sketchy stuff (obfuscated blobs), or known sketchy stuff (third-party monetization libraries, provided to add-on authors, that were known to violate Mozilla's add-ons policy).  Most of these were automatically detected.
 
Doing thorough code review is hard.  And slow.  The review queue can be a major hassle for extension authors, especially for complicated extensions.  At some point, we are simply just trusting the developer and the reputation they've built up with the reviewers and the users.  Recognizing that fact can be an improvement for everyone: it lessens review-burden and lets the developers get security fixes and feature enhancements out to their users faster.
 
For real-world examples in the add-ons domain, I'd cite the enigmail Thunderbird extension (https://addons.mozilla.org/en-US/thunderbird/addon/enigmail/) that provides PGP support for Thunderbird and the Thunderbird Conversations extension (https://addons.mozilla.org/en-US/thunderbird/addon/gmail-conversation-view/) that provides a conversation view reminiscent of the gmail experience.  These are both deeply complicated projects in their own right that fundamentally depend on their developers' competence, intentions, and their policies/procedures for security and correctness.
 
It's also possible and acceptable for us to require specific policies as a precondition to use of this mechanism, especially for significantly more dangerous APIs and/or where self-updating is a risk.  For a web-app example, https://whiteout.io/ has an HTML/JS/CSS based PGP mail client.  If I knew that Firefox OS would only accept an updated release of the app (manifest) when it has been signed by 2 of the developers of the client using their own personal private keys that are nominally kept encrypted with a strong passphrase, that would greatly increase my trust.  (Noting that that's a pretty hard-core example.)
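That kind of policy could be sketched as a threshold check like the following (entirely hypothetical names; real cryptographic signature verification is stubbed out as a callback):

```javascript
// Sketch of a 2-of-N release policy: an update manifest is accepted
// only when at least `threshold` distinct, recognized developer keys
// have produced a valid signature over it.
function acceptUpdate(manifest, signatures, trustedKeyIds, verify, threshold = 2) {
  const signers = new Set();
  for (const sig of signatures) {
    if (trustedKeyIds.has(sig.keyId) && verify(manifest, sig)) {
      signers.add(sig.keyId); // count each key at most once
    }
  }
  return signers.size >= threshold;
}
```

The interesting property is that a single compromised developer key is no longer enough to push a malicious update.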
 
Andrew

Jonas Sicking

unread,
Mar 10, 2015, 6:56:08 PM3/10/15
to Antonio Manuel Amaya Calvo, dev...@lists.mozilla.org
On Tue, Mar 10, 2015 at 3:35 AM, Antonio Manuel Amaya Calvo <antoniomanue...@telefonica.com> wrote:

> On 10/03/2015 1:23, Jonas Sicking wrote:
>> (Sorry to change from dev-webapi to dev-b2g, but I think dev-b2g is
>> better given the size of these changes).
>>
>> First off, I think we should get rid of "apps" as a platform feature.
>> This doesn't need to mean that we should change the UX of B2G. That is
>> a separate consideration.
>>
>> But we should get rid of cookie jars. And accept the web for the big
>> goop of content that it is :)
>>
>> We could add features to allow websites to indicate that they want the
>> security protections that the current cookie jars support. But per the
>> above, that's not a feature that we should push through FirefoxOS
>> alone. If it's something that we think is important, we should push it
>> as a web feature together with Firefox desktop and other browser
>> vendors.

> From a security point of view, I think the cookie jar model is much better than having everything on the same pile. Maybe rather than getting rid of it, or even making it an opt-in feature, we could try to hide the implementation details from the content. So we keep a cookie jar per origin (instead of per-app as we do now, which is really the same since we can only have one app per origin) and every time that origin is loaded it uses its own cookie jar.

We actually tried this on desktop, but it broke the web.

The problem is that when a page from http://site-a.com loads a javascript file from http://site-b.com/scripts/library.js it expects to get site-b's cookies.

Even more importantly, if http://site-a.com opens an <iframe> to http://site-b.com/widget.html, it's expected that the iframe is loaded using site-b's cookies. While we theoretically could use a different child process to load the <iframe>, Gecko isn't able to do this yet.

What I'm hoping we can do eventually is just that, though. I.e. make each origin load in a separate process. Even if the origin is loaded inside an <iframe> inside another origin.

That way we could use process boundaries to ensure that a given origin is not able to read data that belongs to another origin.

Hopefully this is something that the platform team will start looking at once the initial process separation work is done for desktop.
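A sketch of the difference (all names here are hypothetical): the process key becomes the origin of the loaded document, not the app that embeds it.

```javascript
// Hypothetical process assignment keyed by origin: every document
// from the same origin lands in the same process, regardless of
// which page embeds it.
const processes = new Map(); // origin -> process id
let nextPid = 1;

function processFor(url) {
  const origin = new URL(url).origin; // scheme://host:port
  if (!processes.has(origin)) {
    processes.set(origin, nextPid++);
  }
  return processes.get(origin);
}
```

Under this keying, an <iframe> to https://a.origin.com embedded anywhere shares a process (and thus a jar) with every other https://a.origin.com document, which is the isolation boundary described above.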
 
> To keep this isolated at a process level this would mean changing the way the processes work (having one process per origin instead of one process per app, for example, so every window that's loaded for https://a.origin.com lives in the same process independently of where that window was defined)... but that should be transparent to content developers.
>> I think we should also keep exposing "sensitive APIs", both to gaia
>> and to 3rd party developers. Converting all the email servers in the
>> world to use CORS simply isn't realistic. But we should make
>> improvements in how these APIs are exposed.
>>
>> I do think that we still want code that uses these "sensitive APIs" to
>> be signed. However that doesn't mean that we have to keep using the
>> same model, of app:-protocol and installable zip files that we
>> currently use.
>>
>> There are a few things that I think would be great to accomplish for
>> content that uses these sensitive APIs:
>>
>> * Enable the user to simply navigate to content which uses sensitive
>> APIs, without the need to install it first. I.e. enable the content to
>> be available through "real" linkable URLs.
>> * Enable developers to sign the content without going through a
>> marketplace review. I.e. enable developers to distribute an updated
>> version of their content without having to go through marketplace
>> review.
>> * Enable Marketplace to hand out the ability to use a particular API
>> to a developer, rather than to a particular version of a particular
>> app.
> This is nice... but I believe it's ultimately futile. It works for well-known, giant developers... I could say I trust Google, or Mozilla, or Facebook... but I don't think it'll work for Pop&Mom SW, Inc. How would Mozilla know whether to trust them, and with which APIs?

This is a good point.

I think in practice we're not doing a lot of verification right now. The reviews that marketplace does are done mainly by running the app, rather than by looking at its source code.

But keeping the *ability* to review each app version sounds like a good idea. Then leave it up to the review team when they want to review the app vs. review the developer.

Possibly in instances where we can identify, and get legally binding commitments from, a legally liable person, that we can provide more access to that developer.

This definitely sounds like something that we should discuss with the marketplace team that's currently responsible for reviews.
 
 

What signing format to use, and how to keep it not same-origin as
unsigned content, is probably best done as a separate thread.
Hopefully we can get agreement on the rest of this thread without
solving that part.

I also think we need to stop worrying so much about the fact that these APIs
aren't standardized. It simply isn't in our power to make these really
standardized. It requires that other vendors are actually willing to
work together with us on aligning APIs, and I just haven't seen that.

And more importantly, very little content is using these APIs. The web
is far greater. Even developers that target FirefoxOS specifically are
90-95% of the time able to write their content without using these
APIs.
The problem here is that we need people using those APIs (or maybe new ones?) for the platform to move out of the "cheap smartphone for people who only had feature phones before" segment. And the second problem is that bootstrapping this requires those APIs to be usable in some form by the rest of the web.

I'm not sure what you are arguing here. You were pretty strongly arguing above that simply exposing these APIs to untrusted developers was a bad idea :)

Like I've said, if anyone has ideas for how we can expose a particular API to the web at large, I'm all ears.
 

That said, if anyone wants to make an effort and reach out to other
vendors to get agreement on any API, feel free to give it a try.

The only thing that I could see being successful in the short term
would be to simply adopt APIs from other platforms. Cordova and
Node.js would be prime candidates here I think. If anyone has
suggestions for APIs that you think would be good candidates, please
let me know.
This is actually a very nice way of making those APIs 'reusable' :).

As mentioned above, I think there's still some APIs that I'm very
nervous about exposing to 3rd party websites. For example, enabling
placing phone calls through direct calls to the API, rather than by
using <a href="tel:..."> or a WebActivity seems like inviting malware.
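For contrast, the declarative form referred to above keeps the user in the loop. A plain HTML sketch (the number is made up):

```html
<!-- Navigating this link hands off to the system dialer, where the user
     still has to confirm before any call is actually placed. -->
<a href="tel:+15551234567">Call support</a>
```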

It's also not a terribly good way to enable users to replace the
built-in dialer. Since the user would still have the built-in dialer.

A better solution to enable replacing the dialer UI might be to use
some form of addon system. An addon which tweaked, or completely
replaced, the built-in dialer UI would be awesome.
Hmm... I think that while those things would be nice to have, in a geeky way, we might be losing sight of the target. Replacing the dialer is a nice configuration option, if we can do it (well, not replacing, *adding* a new dialer, because I think replacing the certified one is a no-go from a certification point of view), but the phone already *has* a dialer. What we should worry about is giving developers enough tools to do things the phone doesn't have but other phones do. For example:
  • IM applications (be it Whatsapp, Line, Telegram, Wickr, whatever, it should be doable on the new model).
  • VoIP applications (Skype). We already have Firefox Hello, which is nice, and it's packaged, and it uses a bunch of privileged APIs and another bunch of "they're privileged but only for you" APIs.
  • Richer calendar apps with support for public and private servers (Exchange, Google,... whatever) which might or might not have a nice REST/HTTP API.
  • File browser (since we do have an SD card that can be mounted)
  • Wearable APIs (ok, reaching for the moon here but it's something big that will only get bigger)

Maybe we're going at this the wrong way... and instead of seeing what APIs we have or would like to have we should look at what applications we want our ecosystem to have, and what is needed to implement those applications. I know we learned a lot about what was missing while writing the Firefox Hello app.

Agreed.

We should definitely continue to find APIs that allow awesome content to be developed for FirefoxOS. But I believe that there will always be some set of APIs which we can't expose to the "untrusted" web. Or at least that such APIs will exist for a long time.

This thread was intended to discuss those APIs, old and new.

But I definitely think that we should continue to look at adding more APIs. And we should continue to look at modifying the APIs that we have as to make them meet more use cases. That's just not the topic I tried to address in this thread.

/ Jonas

Andrew Sutherland

Mar 10, 2015, 10:38:15 PM
to Jonas Sicking, dev...@lists.mozilla.org
On Mon, Mar 9, 2015, at 09:38 PM, Jonas Sicking wrote:
> Has Google given any indication that they are actually planning on
> implementing the "standard"?

I'm not in the know, but in my limited involvement with the
standardization effort, I haven't seen any indications that Google or
anyone is going to implement the resulting spec. I was only assuming we
were going to implement it because I was under the impression it was our
goal to standardize all Firefox OS-introduced APIs and you participated
in discussions for this specific API.

The Mozilla sysapps participants seem to be listed at
http://www.w3.org/2000/09/dbwg/details?group=58119&public=1&order=org#_MozillaFoundation,
maybe one of them might know? Certainly Mounir's announcement at
https://lists.w3.org/Archives/Public/public-sysapps/2014Dec/0000.html
that Google had left sysapps could imply they're not particularly
invested in the spec...

Andrew

Paul Theriault

Mar 11, 2015, 2:08:54 AM
to Jonas Sicking, dev...@lists.mozilla.org
On Tue, Mar 10, 2015 at 11:23 AM, Jonas Sicking <jo...@sicking.cc> wrote:
(Sorry to change from dev-webapi to dev-b2g, but I think dev-b2g is
better given the size of these changes).

On Wed, Feb 4, 2015 at 4:49 AM, Benjamin Francis <bfra...@mozilla.com> wrote:
> One potential answer is that:
>
>    - Privileged hosted web apps can be self-signed by developers (using a
>    certificate issued by an issuer with a CA root Authority installed in
>    Firefox OS) and users must decide whether to trust the developer. Mozilla
>    could choose to become an issuer of free certificates (as we are with Let's
>    Encrypt) but would not be the sole issuer.
>    - Certified apps become a new type of chrome which happens to use HTML5
>    as a markup language. If we want third parties to be able to build apps or
>    extensions to this chrome then they become a new type of chrome extension
>    which can only be signed by Mozilla. Firefox OS apps/extensions would by
>    definition be Firefox OS-only.

Hi All,

This matches pretty closely with my thinking as of late.

I think there are a few things that we need to realize and accept.

First off, it's really hard to "move the web". Especially with a
marketshare as small as FirefoxOS has. If we're going to stand any
chance of actually changing web developer behavior at large, we need
at the very least Firefox for Android and Firefox desktop to push for
the same changes. But realistically also other browser vendors.

In this light, I would say that some of our efforts of trying to make
the web more "appy" has been a mistake. Though one that we've learned
a lot from I think. We should make sure that this experience is used
as we work on standards like web manifests and web activities.


Second, I think we need to accept the fact that some of the APIs that
are needed to build a complete OS simply aren't safe to expose to
untrusted content authors.

As long as the web maintains a model that content can be consumed
without users having to worry about security concerns, and be
published without any need to go through reviews, there has to be
limits on what type of content can be created. And I think
there's broad agreement here that we don't want to give up that model.

I don't think statements like "being able to write an irc client is a
minimum" is really fair. We could likewise say "being able to run
any software without worrying that it's going to reconfigure your
router is a minimum", which is a statement that all other platforms
fail.

That said, I'm all ears for constructive ideas for how we can solve
shortcomings of the web platform. But I also think we should keep
this in perspective. There's a lot of content on the web. All of it
has been built without access to these APIs. Even for content that is
explicitly built for FirefoxOS and submitted to the Firefox
marketplace, only about 5-10% need these APIs.


Third, there is essentially no interest from other browser vendors to
"standardize" or even get alignment on APIs that can't be exposed to
normal webpages.

Google is about as interested in aligning their chrome apps APIs with
our APIs, as they are to listen to us about what their Android APIs
should look like. Other vendors have shown about the same amount of
interest.

There just isn't much value in anyone changing their APIs. Authors
aren't really asking for it, and the platforms are different enough
that authors wouldn't see large benefits anyway.


One interesting question to ask here is, would we be interested in
adopting Tizen's API for, for example, SD card access? Or Chrome-app's
API for TCPSocket?


So what does this mean that we should do?


First off, I think we should get rid of "apps" as a platform feature.
This doesn't need to mean that we should change the UX of B2G. That is
a separate consideration.

But we should get rid of cookie jars. And accept the web for the big
goop of content that it is :)

We could add features to allow websites to indicate that it wants the
security protections that the current cookie jars support. But per the
above, that's not a feature that we should push through FirefoxOS
alone. If it's something that we think is important, we should push it
as a web feature together with Firefox desktop and other browser
vendors.

The security engineering team has been working on a related idea: https://wiki.mozilla.org/Security/Contextual_Identity_Project/Containers
It's only partially related, and still a WIP, but definitely would like to see us more aligned with desktop.
 


I think we should also keep exposing "sensitive APIs", both to gaia
and to 3rd party developers. Converting all the email servers in the
world to use CORS simply isn't realistic. But we should make
improvements in how these APIs are exposed.

I do think that we still want code that uses these "sensitive APIs" to
be signed. However that doesn't mean that we have to keep using the
same model, of app:-protocol and installable zip files that we
currently use.

There's a few things that I think would be great to accomplish for
content that uses these sensitive APIs:

* Enable the user to simply navigate to content which uses sensitive
APIs, without the need to install it first. I.e. enable the content to
be available through "real" linkable URLs.
* Enable developers to sign the content without going through a
marketplace review. I.e. enable developers to distribute an updated
version of their content without having to go through marketplace
review.
* Enable Marketplace to hand out the ability to use a particular API
to a developer, rather than to a particular version of a particular
app.
* Remove technical separation between "privileged" and "certified"
APIs. We can still decide not to grant any third party content the
ability to use, for example, the power API, by simply not granting the
right to use that API to any developers other than gaia developers.
But the client-side code doesn't need to make that decision.
* While I think signed content that can use sensitive APIs should have
real URLs, I think it needs to never be same-origin with unsigned
"normal" content.
* It would be good if we can keep the process separation advantages
that we currently have for content that can use "sensitive APIs". I.e.
it would be nice if it required more than finding a buffer overflow
bug in Gecko in order to gain access to use the telephony API. But
it'd be good if we can hide this fact as much as possible from web
developers.
* I think we should still keep the CSP requirements that we have for
content that uses "sensitive APIs". I.e. all JS that can use those
APIs has to be signed by the developer.
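For reference, the kind of CSP meant here resembles the policy Firefox OS applied to privileged apps. Quoted from memory, so treat as approximate rather than authoritative:

```
default-src *; script-src 'self'; object-src 'none'; style-src 'self' 'unsafe-inline'
```

The effect is that only scripts shipped (and thus signed) with the app itself can run; no inline or remote JS is executed.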


What signing format to use, and how to keep it not same-origin as
unsigned content, is probably best done as a separate thread.
Hopefully we can get agreement on the rest of this thread without
solving that part.

I also think we need to stop worrying so much about the fact that these APIs
aren't standardized. It simply isn't in our power to make these really
standardized. It requires that other vendors are actually willing to
work together with us on aligning APIs, and I just haven't seen that.

And more importantly, very little content is using these APIs. The web
is far greater. Even developers that target FirefoxOS specifically are
90-95% of the time able to write their content without using these
APIs.

That said, if anyone wants to make an effort and reach out to other
vendors to get agreement on any API, feel free to give it a try.

The only thing that I could see being successful in the short term
would be to simply adopt APIs from other platforms. Cordova and
Node.js would be prime candidates here I think. If anyone has
suggestions for APIs that you think would be good candidates, please
let me know.


As mentioned above, I think there's still some APIs that I'm very
nervous about exposing to 3rd party websites. For example, enabling
placing phone calls through direct calls to the API, rather than by
using <a href="tel:..."> or a WebActivity seems like inviting malware.

It's also not a terribly good way to enable users to replace the
built-in dialer. Since the user would still have the built-in dialer.

A better solution to enable replacing the dialer UI might be to use
some form of addon system. An addon which tweaked, or completely
replaced, the built-in dialer UI would be awesome.

Likewise an addon which sat between the dialer and the actual phone
hardware, and did things like block lists, or changed incoming and/or
outgoing phone numbers would be great. Or addons which encrypt the
voice audio when calling friends who have the same addon.

Addons have been great for Firefox desktop. I think it can be as
awesome for FirefoxOS, if not more so.

Addons will definitely be Firefox(OS) specific. But no more so than
the telephony API is, and is likely to remain for the foreseeable
future.

Again, I think exactly how these addons will work is a better
topic for a separate thread.


In summary:

On a technical level we'd be much more like the web has traditionally
been. I.e. no cookie jars or app silos. The user can navigate between
any content by following normal links. This will include content that
use "sensitive APIs".

The only content distinction we'd end up with is "signed" vs.
"unsigned". And to the user both would look like normal web.

The "signed" content will be FirefoxOS specific until we find others
which are interested in collaborating on APIs, which isn't expected to
be soon. But to put this in perspective, the vast majority of authors
are able to author content without using these APIs.

On a technical level, Gaia would just be normal signed content. The
distinction between "certified" and "privileged" disappears. Though we
can still choose, on a per-API and per-developer basis, which APIs we
allow which developers to use. Though UX-wise we might still want to
give gaia special treatment.

Users can install addons which change the behavior of other content.
This includes changing the behavior of gaia, as well as of signed
and unsigned websites.

This all sounds pretty good to me. A couple thoughts:

- merging certified & privileged will place a greater reliance on the marketplace as a signing authority. Just want to call out the need to engage with marketplace/AMO and get them on board with this change (and probably b2g devoting increased resources to marketplace?).

- what's the difference between apps and add-ons in a FxOS sense? (Rhetorical question perhaps, but as a developer why would I choose one over the other, especially if apps are less powerful. Probably a topic for a separate add-ons thread...)

-Paul

 


Let me know what you think.

Jonas Sicking

Mar 11, 2015, 1:59:10 PM
to Andrew Sutherland, dev...@lists.mozilla.org
On Tue, Mar 10, 2015 at 7:36 PM, Andrew Sutherland
<asuth...@asutherland.org> wrote:
> On Mon, Mar 9, 2015, at 09:38 PM, Jonas Sicking wrote:
>> Has Google given any indication that they are actually planning on
>> implementing the "standard"?
>
> I'm not in the know, but in my limited involvement with the
> standardization effort, I haven't seen any implications that Google or
> anyone is going to implement the resulting spec. I was only assuming we
> were going to implement it because I was under the impression it was our
> goal to standardize all Firefox OS-introduced APIs and you participated
> in discussions for this specific API.

I'm not sure what your point is then? If only mozilla implements the
"standard" then it's still not really a standard. Even if it has a W3C
logo on it.

So my point is that if we're not interested in implementing the chrome API,
then why should Google be interested in implementing one that only we
support. With or without a W3C logo on it?

Or am I misunderstanding your point?

/ Jonas

Jonas Sicking

Mar 11, 2015, 2:16:36 PM
to Benjamin Francis, dev...@lists.mozilla.org
On Tue, Mar 10, 2015 at 5:32 AM, Benjamin Francis <bfra...@mozilla.com> wrote:
>
> This sounds great, I fully agree. Comments inline.
>
> On 10 March 2015 at 00:23, Jonas Sicking <jo...@sicking.cc> wrote:
>>
>> First off, I think we should get rid of "apps" as a platform feature.
>
> Let's unpack that a little.

https://www.youtube.com/watch?v=E3AjTqTwIdk

> What I don't necessarily think we should get rid of is the app registry.

I guess I don't have a strong opinion.

I do think that the registry can be simplified compared to what it is today.

What we basically need is a bookmark registry, which maintains a list
of which pages/manifests the user has bookmarked.

I'm absolutely not in a hurry to modify our registry code though. I'll
leave that up to the owners of that code.

I would imagine that exactly what we do will depend on various UX
decisions. Such as how to implement display modes.

>> * Enable the user to simply navigate to content which uses sensitive
>> APIs, without the need to install it first. I.e. enable the content to
>> be available through "real" linkable URLs.
>
> Agreed. Ideally web apps should work equally well inside and outside the browser.
>
> One thing I don't think we want (though this is largely a UX issue) is for a browser window to constantly switch between different display modes as you simply browse non-installed web apps, i.e. a manifest shouldn't be applied to a browsing context simply by navigating to a URL if the app that manifest describes has not been installed/pinned. This could make for a very nauseating experience if the display mode keeps changing as you browse, and provides a vector for phishing attacks if any web content can enter a standalone display mode without an explicit user action.
>
> My opinion is that the browser chrome provides the user with a safety net while they browse web sites and web apps (they always know where they are, and can always go back), the manifest should only be applied when the user is no longer "just browsing" but has installed/pinned the app to their device. From that point on that app should handle its own URL scope in its preferred display mode and can capture navigations to those URLs so the system can switch to an app window to handle a navigation.
>
> Maybe a creative UX designer can re-invent browser chrome and provide a UI where the installation step is not necessary, but I haven't yet seen a design which achieves that.

Indeed. I think we need to get more UX input on this stuff.

>> * While I think signed content that can use sensitive APIs should have
>> real URLs, I think it needs to never be same-origin with unsigned
>> "normal" content.
>
> This makes sense from a security point of view, a challenge is going to be that for a signed hosted resource to talk to other resources on its own domain (like a server-side API), it will need to have CORS set up (otherwise something like systemXHR is needed). The obvious answer is to encourage developers to use CORS, but that can be a hard pill to swallow. How do you even configure CORS to give access to its own domain...?

Yes, this is a good point. I don't think we'll want to force
developers to use CORS to talk to their own server.
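As a concrete illustration of the setup being discussed, a server granting a separately-originated signed page access to its API might respond with headers like these (origins are hypothetical):

```
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: GET, POST
Access-Control-Allow-Headers: Content-Type, Authorization
```

The awkward case raised above is that if signed content is never same-origin with unsigned content, even requests back to the app's own host would need headers like this.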

>> The only content distinction we'd end up with is "signed" vs.
>> "unsigned". And to the user both would look like normal web.
>>
>> The "signed" content will be FirefoxOS specific until we find others
>> which are interested in collaborating on APIs, which isn't expected to
>> be soon. But to put this in perspective, the vast majority of authors
>> are able to author content without using these APIs.
>
> This isn't a technical issue, but I would suggest that we re-brand Firefox* specific certified and privileged apps as "Firefox Apps", and hosted apps as simply "Web Apps", and start to migrate those apps to use the new W3C manifest format. The properties of the mozApps hosted app manifest could just be proprietary extensions of the W3C manifest until that spec matures enough to fulfil all use cases.

Yeah, I agree that we should make clear that signed content is Firefox-specific.

> We also need to provide an easy migration path for existing authors of packaged apps. I would suggest this should include offering to host those apps for people on a Mozilla-run or partner-run web server. Perhaps we can talk to the Webmaker folks about that?

My hope is definitely to automatically convert packaged apps on the
marketplace to whatever new formats we'll end up using.

/ Jonas

Jonas Sicking

Mar 11, 2015, 2:23:26 PM
to Paul Theriault, dev...@lists.mozilla.org
On Tue, Mar 10, 2015 at 11:08 PM, Paul Theriault <pther...@mozilla.com> wrote:

> This all sounds pretty good to me. A couple thoughts:
>
> - merging certified & privileged will place a greater reliance on the
> marketplace as a signing authority. Just want to call out the need to engage
> with marketplace/AMO and get them on board with this change (and probably
> b2g devoting increased resources to marketplace?).

Agreed. Hopefully it won't put much more strain on the marketplace. I
believe they already have lists of what permissions they ever sign
for.

> - what's the difference between apps and add-ons in a FxOS sense?
> (Rhetorical question perhaps, but as a developer why would i choose one over
> the other, especially if apps are less powerful. Probably a topic for a
> seperate add-ons thread...)

In my mind:

Apps is just "content" in the same way that web content is. It lives
inside a browser window and doesn't break out of that UX-wise (though
it can still do things like read sdcard content).

So apps is something you simply navigate to.

But addons change behavior of existing content. I.e. it might theme
your homescreen, change your contacts list UI, blocklist certain phone
numbers, add download buttons on video websites.

Hence addons is something that need to be installed.

/ Jonas

Andrew Sutherland

Mar 11, 2015, 3:02:45 PM
to Jonas Sicking, dev...@lists.mozilla.org
On Wed, Mar 11, 2015, at 01:58 PM, Jonas Sicking wrote:
> I'm not sure what your point is then? If only mozilla implements the
> "standard" then it's still not really a standard. Even if it has a W3C
> logo on it.

I'm not sure I have a point anymore. In this most recent reply I just
wanted to answer your question.

Making this thread useful, I guess I have a few questions because
although I probably should know these answers, I do not:

1) Is there an easy way to tell what Mozilla's stance on a
potential/proposed standard is? Chromium has
https://www.chromestatus.com/features and it seems quite useful since it
makes it clear who the owner is, what someone in the know thinks the
other browsers' interest/plans are, and links to bugs/standards, etc.
It seems like https://bugzil.la/1055074 might be the bug on Mozilla
having something like that.

I do see a couple resources in our docs:
* https://wiki.mozilla.org/WebAPI/ExposureGuidelines seems to suggest
that searching dev-platform for intent to implement emails works once
we've crossed a certain threshold.
* Our specific API pages seem potentially misleading in their reference
of standards but without clarification of our intent:
**
https://developer.mozilla.org/en-US/docs/Web/API/TCP_Socket_API#Standard
links to the TCPSocket spec-work
** so does https://wiki.mozilla.org/WebAPI
* https://wiki.mozilla.org/WebAPI/PlannedWork has some stuff but it
hasn't been meaningfully updated in exactly 6 months.

I suspect right now I should be just asking you or :ehsan or :overholt?

NB: I do get that what's happening now is you're proposing a change in
our behaviour related to all of this.


2) Is there an easy way to tell whether standardization efforts are
actually going to go anywhere? Like a list of implementers who are
tentatively planning to implement? From looking at raw sockets, that
fact that none of the editors are browser implementers and there hasn't
been much recent activity on the lists certainly makes some
implications.


> So my point is that if we're not interested in implementing the chrome API,
> then why should Google be interested in implementing one that only we
> support. With or without a W3C logo on it?

I do think it makes sense to converge to the Google Chrome API if we
make any changes to our implementation given that:
* it seems that the W3C effort is unlikely to yield an implemented
standard, merely a proposed spec
* we're explicitly recognizing this is something that can never be
exposed to the web directly and given that no one else cares about a
spec for this or a lot of other things, it's clear we end up in a custom
runtime environment no matter what, and then it's just a question of
minimizing pain/hassle for developers
* sufficiently capable stream APIs can be wrapped to look like anything
you want
* the Google Chrome developers seem to take the stability/deprecation of
their extension API seriously (noting caveat below)

I'm not sure what to do about the "chrome.sockets.tcp" namespace or how
to deal with the potential future API changes of what is explicitly an
extension API for a single evergreen browser. Their APIs do change.
The previous chrome.socket API https://developer.chrome.com/apps/socket
was apparently introduced in Chrome 24, added new multicast methods in
Chrome 28, was superseded by chrome.sockets.tcp
https://developer.chrome.com/apps/sockets_tcp in Chrome 33, and added
support for socket upgrade to SSL/TLS in Chrome 38. (There does not
appear to be a way to request that the socket be upgraded to TLS until
connected.) These all seem like sane changes and the refactor
intentional, so I don't think I'd assume the trend would continue
forever, but I also don't think it means we can assume the API will now
be forever stable.
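To make the shape difference concrete, here is a hedged sketch. The chrome.sockets.tcp behavior (one shared onReceive listener receiving {socketId, data} for all sockets) is as described above; the router helper itself is hypothetical, not part of any API, and is the kind of glue a shim like tcp-socket needs in order to present a per-socket, TCPSocket-style ondata callback:

```javascript
// Demultiplex Chrome's global onReceive events into per-socket callbacks.
// chrome.sockets.tcp delivers every socket's data to one shared listener;
// this helper routes each {socketId, data} event to the callback registered
// for that socketId, approximating TCPSocket's per-socket ondata style.
function makeReceiveRouter() {
  const handlers = new Map(); // socketId -> callback

  return {
    // Pass this once to chrome.sockets.tcp.onReceive.addListener(...)
    listener(info) {
      const handler = handlers.get(info.socketId);
      if (handler) handler(info.data);
    },
    register(socketId, onData) { handlers.set(socketId, onData); },
    unregister(socketId) { handlers.delete(socketId); },
  };
}
```

Events for sockets with no registered callback are simply dropped here; a real shim would presumably buffer or pause them instead.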

Certainly we can try and talk to them about it. One question would be
what do we do if they really don't want us imitating their APIs? :)

Andrew

Benjamin Francis

Mar 11, 2015, 3:40:39 PM
to Jonas Sicking, dev...@lists.mozilla.org
On 11 March 2015 at 18:15, Jonas Sicking <jo...@sicking.cc> wrote:
https://www.youtube.com/watch?v=E3AjTqTwIdk

I'm glad someone appreciates my puns :)
 
I do think that the registry can be simplified compared to what it is today.

What we basically need is a bookmark registry. Which maintains a list
of which pages/manifests that the user has bookmarked.

It's a bit more than just a bookmark registry in my opinion. A bookmark is a shortcut to a single URL which opens in a browsing context. A web app on the other hand can consist of multiple URLs. Installing an app into the app registry should register that app (identified by manifest URL) as handling a particular collection of URLs (defined by the "scope" property) using an application context (a browsing context with the manifest applied) in its preferred display mode (defined by the "display" property). You can also bookmark/pin a page (we don't have to have a registry in Gecko for that), and that bookmark could open in an application context (if it falls under the scope of an installed app) or a browsing context.

But yes, there are probably things we could get rid of.
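For reference, a minimal manifest of the kind described, sketched per the W3C manifest draft's "scope" and "display" members (all values hypothetical):

```json
{
  "name": "Example App",
  "start_url": "/app/index.html",
  "scope": "/app/",
  "display": "standalone"
}
```

Any URL under /app/ would then open in an application context with the standalone display mode, while URLs outside the scope stay in an ordinary browsing context.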

I would imagine that exactly what we do will depend on various UX
decisions. Such as how to implement display modes.

I actually already implemented display modes in bug 1088009, as per the 2.2 spec. But yes there are still open questions around app scope for example.

I think we need to get more UX input on this stuff.

Agreed, I have no idea how any of this would fit into current v3 proposals for example.

Ben

David Bruant

Mar 11, 2015, 5:34:42 PM
to Jonas Sicking, Antonio Manuel Amaya Calvo, dev...@lists.mozilla.org
On 10/03/2015 23:48, Jonas Sicking wrote:
> Even more importantly, if http://site-a.com opens an <iframe> to
> http://site-b.com/widget.html, it's expected that the iframe is loaded
> using site-b's cookies. While we theoretically could use a different
> child process to load the <iframe>, Gecko isn't able to do this yet.
>
> What I'm hoping we can do eventually, is just that though. I.e. make
> each origin load in a separate process. Even if the origin is loaded
> inside an <iframe> inside another origin.
Note that this is tedious as two contexts with originally different
origins may end up with the same origin if they set document.domain on
both sides.
However, this question is taken care of for @sandbox-ed iframes on the
standards side as well as in Firefox and Chrome. (See
https://www.w3.org/Bugs/Public/show_bug.cgi?id=23040 )
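For context on the @sandbox case: a sandboxed iframe without the allow-same-origin flag runs in a unique opaque origin, so document.domain can never merge it with its embedder. A minimal sketch (URL hypothetical):

```html
<!-- Scripts run, but in a unique origin: setting document.domain on
     both sides cannot make the frame same-origin with this page. -->
<iframe sandbox="allow-scripts" src="https://site-b.example/widget.html"></iframe>
```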

> That way we could use process boundaries to ensure that a given origin
> is not able to read data that belongs to another origin.
>
> Hopefully this is something that the platform team will start looking
> at once the initial process separation work is done for desktop.
I am looking forward to this moment.
And I believe this relates to the question at hand about vendor-specific
APIs. Mozilla has expressed the intention to implement a new API to have
<canvas> handled in a Worker (and I haven't seen interest in that from
other browsers, please correct me if I'm mistaken). However, it appears
that process-isolated iframes should lead to the same performance
benefits without needing any new API (but perhaps strongly encouraging
other vendors to improve their iframe@sandbox implementation).

David

Jonas Sicking

Mar 11, 2015, 8:15:54 PM
to Andrew Sutherland, dev...@lists.mozilla.org
On Wed, Mar 11, 2015 at 12:01 PM, Andrew Sutherland
<asuth...@asutherland.org> wrote:
> On Wed, Mar 11, 2015, at 01:58 PM, Jonas Sicking wrote:
>> I'm not sure what your point is then? If only mozilla implements the
>> "standard" then it's still not really a standard. Even if it has a W3C
>> logo on it.
>
> I'm not sure I have a point anymore. In this most recent reply I just
> wanted to answer your question.
>
> Making this thread useful, I guess I have a few questions because
> although I probably should know these answers, I do not:
>
> 1) Is there an easy way to tell what Mozilla's stance on a
> potential/proposed standard is? Chromium has
> https://www.chromestatus.com/features and it seems quite useful since it
> makes it clear who the owner is, what someone in the know thinks the
> other browsers' interest/plans are, and links to bugs/standards, etc.
> It seems like https://bugzil.la/1055074 might be the bug on Mozilla
> having something like that.

To my knowledge this does not exist. But I agree that it'd be great if
it did. I know that it's been talked about, but I don't know that
anything has happened beyond that.

This would probably fall on jst's and dougt's teams.

> I do see a couple resources in our docs:
> * https://wiki.mozilla.org/WebAPI/ExposureGuidelines seems to suggest
> that searching dev-platform for intent to implement emails works once
> we've crossed a certain threshold.

This is true.

> * Our specific API pages seem potentially misleading in their reference
> of standards but without clarification of our intent:
> **
> https://developer.mozilla.org/en-US/docs/Web/API/TCP_Socket_API#Standard
> links to the TCPSocket spec-work
> ** so does https://wiki.mozilla.org/WebAPI
> * https://wiki.mozilla.org/WebAPI/PlannedWork has some stuff but it
> hasn't been meaningfully updated in exactly 6 months.
>
> I suspect right now I should be just asking you or :ehsan or :overholt?

Yes, for the specific tcp-socket API I think that's true.

> NB: I do get that what's happening now is you're proposing a change in
> our behaviour related to all of this.

Only with regards to "sensitive APIs". But that's a small minority of
the APIs that we support.

> 2) Is there an easy way to tell whether standardization efforts are
> actually going to go anywhere? Like a list of implementers who are
> tentatively planning to implement? From looking at raw sockets, the
> fact that none of the editors are browser implementers and there hasn't
> been much recent activity on the lists certainly implies something.

No, there's no such thing.

>> So my point is that if we're not interested in implementing the Chrome
>> then why should Google be interested in implementing one that only we
>> support. With or without a W3C logo on it?
>
> I do think it makes sense to converge to the Google Chrome API if we
> make any changes to our implementation given that:
> * it seems that the W3C effort is unlikely to yield an implemented
> standard, merely a proposed spec
> * we're explicitly recognizing this is something that can never be
> exposed to the web directly and given that no one else cares about a
> spec for this or a lot of other things, it's clear we end up in a custom
> runtime environment no matter what, and then it's just a question of
> minimizing pain/hassle for developers
> * sufficiently capable stream APIs can be wrapped to look like anything
> you want
> * the Google Chrome developers seem to take the stability/deprecation of
> their extension API seriously (noting caveat below)

I agree with all of the above. What's less clear to me is what
benefits we'd get in practice from implementing the chrome API.
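
Andrew's point that a sufficiently capable stream API "can be wrapped to
look like anything you want" can be illustrated with a small sketch, in
the spirit of whiteout-io's tcp-socket shim. Everything here is
illustrative: the `makeFakeChromeSockets` object is an in-memory stand-in
for chrome.sockets.tcp (used only so the sketch is self-contained), and
the per-socket event surface is loosely modeled on the shape of
(moz)TCPSocket, not a real implementation.

```javascript
// Minimal stand-in for chrome.sockets.tcp: integer socket ids, a single
// global onReceive listener list, {socketId, data} payloads.
function makeFakeChromeSockets() {
  let nextId = 1;
  const receiveListeners = [];
  return {
    create(opts, cb) { cb({ socketId: nextId++ }); },
    connect(socketId, host, port, cb) { cb(0); }, // 0 = success
    onReceive: {
      addListener(fn) { receiveListeners.push(fn); },
      // test helper: simulate the network delivering data
      _emit(info) { receiveListeners.forEach(fn => fn(info)); }
    }
  };
}

// A TCPSocket-ish wrapper: per-socket event handlers instead of one
// global onReceive listener that callers must demultiplex by socketId.
function openTCPSocket(chromeSockets, host, port) {
  const socket = { host, port, readyState: 'connecting', ondata: null };
  chromeSockets.create({}, (createInfo) => {
    socket._id = createInfo.socketId;
    chromeSockets.connect(socket._id, host, port, (result) => {
      socket.readyState = result === 0 ? 'open' : 'closed';
    });
  });
  chromeSockets.onReceive.addListener((info) => {
    // demultiplex: only deliver data addressed to this socket
    if (info.socketId === socket._id && socket.ondata) {
      socket.ondata({ data: info.data });
    }
  });
  return socket;
}
```

The demultiplexing in the listener is the interesting part: it is the
glue a shim has to add because Chrome's API registers callbacks for all
sockets at once rather than per socket object.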

> I'm not sure what to do about the "chrome.sockets.tcp" namespace or how
> to deal with the potential future API changes of what is explicitly an
> extension API for a single evergreen browser. Their APIs do change.
> The previous chrome.socket API https://developer.chrome.com/apps/socket
> was apparently introduced in Chrome 24, added new multicast methods in
> Chrome 28, was superseded by chrome.sockets.tcp
> https://developer.chrome.com/apps/sockets_tcp in Chrome 33, and added
> support for socket upgrade to SSL/TLS in Chrome 38. (There does not
> appear to be a way to request that the socket be upgraded to TLS until
> connected.) These all seem like sane changes and the refactor
> intentional, so I don't think I'd assume the trend would continue
> forever, but I also don't think it means we can assume the API will now
> be forever stable.

Makes sense. Though this would indeed be quite a big problem, since we
couldn't update our API at the same rate without requiring authors to
start supporting multiple versions.

/ Jonas

Jonas Sicking

Mar 11, 2015, 8:20:04 PM
to David Bruant, Antonio Manuel Amaya Calvo, dev...@lists.mozilla.org
On Wed, Mar 11, 2015 at 2:34 PM, David Bruant <brua...@gmail.com> wrote:
> Le 10/03/2015 23:48, Jonas Sicking a écrit :
>>
>> Even more importantly, if http://site-a.com opens an <iframe> to
>> http://site-b.com/widget.html, it's expected that the iframe is loaded using
>> site-b's cookies. While we theoretically could use a different child process
>> to load the <iframe>, Gecko isn't able to do this yet.
>>
>> What I'm hoping we can do eventually, is just that though. I.e. make each
>> origin load in a separate process. Even if the origin is loaded inside an
>> <iframe> inside another origin.
>
> Note that this is tedious as two contexts with originally different origins
> may end up with the same origin if they set document.domain on both sides.
> However, this question is taken care of for @sandbox-ed iframes on the
> standards side as well as in Firefox and Chrome. (See
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=23040 )

Yeah. In practice it likely wouldn't be a process per origin, but
rather a process per eTLD+1. Possibly there could be some way (http
header) for a page to promise that it won't document.domain, which
would allow us to load that page in an origin-specific process.
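
A rough sketch of that process-keying idea, with the caveats stated up
front: the "document-domain: none" header checked below is purely
hypothetical (no such header exists), and the eTLD+1 computation is a
naive two-label stand-in; a real browser would consult the Public
Suffix List.

```javascript
// Naive eTLD+1: last two host labels. Wrong for suffixes like
// "co.uk"; a real implementation uses the Public Suffix List.
function naiveETLDPlus1(host) {
  const labels = host.split('.');
  return labels.slice(-2).join('.');
}

// Pick the key that decides which process a page loads in.
function processKey(origin, headers) {
  const { protocol, host } = origin;
  // If the page promises never to set document.domain (via the
  // hypothetical header), it can be isolated per-origin; otherwise
  // group by eTLD+1, since same-site pages might merge origins.
  if (headers['document-domain'] === 'none') {
    return protocol + '//' + host;
  }
  return protocol + '//' + naiveETLDPlus1(host);
}
```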

>> That way we could use process boundaries to ensure that a given origin is
>> not able to read data that belongs to another origin.
>>
>> Hopefully this is something that the platform team will start looking at
>> once the initial process separation work is done for desktop.
>
> I am looking forward to this moment.
> And I believe this relates to the question at hand about vendor-specific
> APIs. Mozilla has expressed the intention to implement a new API to have
> <canvas> handled in a Worker (and I haven't seen interest in that from
> other browsers; please correct me if I'm mistaken). However, it appears that
> process-isolated iframes should lead to the same performance benefits
> without needing any new API (but perhaps strongly encouraging other vendors
> to improve their iframe@sandbox implementation).

It's not at all obvious to me that process-isolated <iframe>s would be
a performance gain at all. The overhead of running more processes
could offset any benefit from separating work onto threads.

/ Jonas

Dave Huseby

Mar 25, 2015, 6:34:43 PM
to dev...@lists.mozilla.org

On 03/11/2015 12:40 PM, Benjamin Francis wrote:
> It's a bit more than just a bookmark registry in my opinion. A
> bookmark is a shortcut to a single URL which opens in a browsing
> context. A web app on the other hand can consist of multiple URLs.
> Installing an app into the app registry should register that app
> (identified by manifest URL) as handling a particular collection
> of URLs (defined by the "scope" property) using an application
> context (a browsing context with the manifest applied) in its
> preferred display mode (defined by the "display" property). You can
> also bookmark/pin a page (we don't have to have a registry in Gecko
> for that), and that bookmark could open in an application context
> (if it falls under the scope of an installed app) or a browsing
> context.

I asked a few non-programmer people how they define an "app" vs a "web
app". They pretty much agreed that an "app" was software "on your
phone" that runs even if the device isn't connected to the internet,
whereas a "web app" requires an internet connection.

If I apply their expectations for "apps" to Firefox OS, the only
change from regular web browsing we need to make is to allow for web
content to "run" even when not connected to the web. (Sure, the app
may "run" just enough to show an error message about no connection,
but at least it is a "branded" error message and not a browser error.
We see this behavior in existing iOS and Android apps).

With plain web browsing, the default behavior is shotgun caching of
content with least-recently-used (LRU) deletes and the minimal set of
API perms. The manifest is our primary mechanism for overriding all
of those browser behaviors. I don't think we need an app registry at
all as long as we make bookmarking "manifest aware":

* When a user loads a page that links to a manifest, we load the
manifest, follow the caching instructions, check signatures, and set
up access to sensitive APIs (i.e. prompting the user for permission if
needed).

* When a user adds a bookmark, we check for a manifest and follow the
caching instructions therein. If the page was already loaded, we just
mark the cached files so they will get skipped during the LRU
calculation.

* When a user loads a bookmark, we check for a manifest and follow the
presentation, api, and caching instructions therein. This meets the
"runs even when not connected" expectation by instructing the browser
to load from the cache first.

* When a user deletes a bookmark, we check for a manifest and unflag
the cached files so that they are once again considered during LRU.
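
The four bookmark-aware steps above boil down to an LRU cache whose
entries can be pinned. A minimal sketch of that idea follows; the names
and structure are illustrative only, not any real Gecko interface.

```javascript
// A plain LRU cache whose entries can be pinned (bookmark added) so
// eviction skips them, and unpinned (bookmark deleted) so they rejoin
// the normal LRU pool.
class PinnedLRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.entries = new Map(); // Map preserves insertion order: oldest first
    this.pinned = new Set();
  }
  get(url) {
    if (!this.entries.has(url)) return undefined;
    const value = this.entries.get(url);
    // refresh recency by re-inserting at the end
    this.entries.delete(url);
    this.entries.set(url, value);
    return value;
  }
  put(url, value) {
    if (this.entries.has(url)) this.entries.delete(url);
    this.entries.set(url, value);
    this.evict();
  }
  pin(url) { this.pinned.add(url); }                    // bookmark added
  unpin(url) { this.pinned.delete(url); this.evict(); } // bookmark deleted
  evict() {
    // walk oldest-first, deleting unpinned entries until within capacity
    for (const url of this.entries.keys()) {
      if (this.entries.size <= this.capacity) return;
      if (!this.pinned.has(url)) this.entries.delete(url);
    }
  }
}
```

Pinned entries are allowed to push the cache over capacity; eviction
simply never touches them, which matches the "mark so they get skipped
during the LRU calculation" step.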

It's a grand unified theory of (web) apps. What's nice about thinking
along these lines is it is device independent. It would work anywhere
the web does, assuming the browser is allowed to cache files locally.

--dave