
Future of packaged apps


Jonas Sicking

Jun 18, 2014, 2:28:00 PM
to dev...@lists.mozilla.org, dev-webapi, David Herman
Hi all,

Since we introduced packaged apps, there has been a lot of concern
that they are very different from the web. And rightfully so. The
current incarnation of packaged apps requires an installation step,
results in pages that don't have real URLs and, while it uses zip, is
effectively non-standard.

However, packages also solve some quite important problems. The main
problem that they solve for us is signing. While technically it's
possible to sign files without using packaging, it is more complex and
error-prone than signing a package.

It would be great if we could get rid of the need for signing. However,
for now signatures solve very important problems for us. For some APIs
they are the only way to secure what could be argued are legacy
features, like TCPSocket, UDPSocket and SystemXHR (i.e. raw HTTP).
Ideally everyone would move to using WebSocket, WebRTC and CORS, but
we're likely many years away from that.

For other APIs, like DeviceStorage, we still need signatures as an
extra layer of security, so that we're not simply relying on the user
to understand that answering "yes" to a security dialog might cause
all their pictures to be deleted and held for ransom.

Signatures also provide something that the web is in big need of: an
additional security layer, so that simply hacking a webserver isn't the
only thing needed by an attacker who wants to steal all your
application data, as well as any personal data that the user has
granted the application access to.

However if we can enable developers to sign their own applications,
rather than having to have them signed by the marketplace, then that
would still mean that developers could roll out updates as quickly as
web developers do today. I.e. no need to wait for review from a
marketplace.


Additionally packages bring other advantages. The W3C TAG is currently
working on creating a standardized packaging format for the web. They
are doing this for at least a couple of reasons.

First of all, packages currently provide some performance benefits
since they reduce the number of HTTP roundtrips needed. People today do
all sorts of crazy hacks to inline resources into each other: inlining
CSS and JS into HTML files, spriting image files together into larger
images, base64-encoding and inlining images into HTML and JS, etc. All
of this to reduce the number of HTTP requests, while working around the
fact that the web currently doesn't enable bundling multiple files into
one in a simple, uniform way.
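
As a concrete illustration of one of those hacks (illustrative snippet,
payload truncated): instead of referencing an image file and paying for
an extra request, the image bytes are base64-encoded straight into the
markup as a data: URI.

    <!-- instead of <img src="icon.png">, which costs its own request: -->
    <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...">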

Second, having to juggle multiple files can be a big annoyance. Anyone
who has created an HTML slide deck knows how annoying it is to then
send that slide deck to someone. You are forced to either do inlining
hacks like the ones described above, or to zip all dependencies into a
single file and then ask the person you are sending the deck to to
unpack them. Compare this to sending a Keynote or PowerPoint file to
anyone, who can then simply double-click the file to watch it.

The same problem occurs if you are writing a JS library which has
dependencies on other libraries, or which for other reasons is spread
out over multiple JS files. Or which has dependencies on JSON files or
image files. Right now you have to either compile the files into a
single file, which is hard to do in a generic way, or you have to
manage multiple separate files and pay the performance cost that's
involved.

Hopefully SPDY and HTTP/2 will solve many of the performance issues
mentioned here. They should dramatically lower the cost of making
multiple requests to the server, though it will take time for them to
gain adoption. And they still leave us with the extra cost that's
involved in managing multiple files. And it's not clear that they will
have 100% overlap with the performance issues that packages help
with (though really, they should help with most).


So in short, packages still are useful for:
* Additional security through signatures
* Managing dependent resources
* Possibly some performance issues


So the question is, how can we make sure that packages have all the
benefits that normal webpages have today? I propose that we do the
following:

1. Create a new package format. The format should support streaming
and support adding metadata, like headers, to individual resources. It
likely also needs to support metadata for the package itself.
2. Add CSP syntax similar to "self" but which means "in the same
package" rather than "from the same origin".
3. Add a new URL syntax for referring to resources within a package.
One candidate is
http://server.com/url/to/myapp.webpackage!//images/pic.jpg. I.e. the
"!//" signals the boundary between the URL sent to the server to fetch
the package and the path within the package to get a resource. But we
need to check that this is web compatible (see the sketch after this
list).
4. Define metadata which can be added to the package for "this is a
privileged app" and/or "this package is signed"

This way, a user can simply browse to pages within a packaged app,
just like you can to pages within a hosted app. To the user there
would be absolutely no difference, other than that the URL would
contain "!//".

To the developer the main difference would be that you only upload a
single file to the server rather than multiple files. And of course
that you need to sign the package and use an appropriate CSP policy if
you want the app to use privileged APIs.


There are still lots of details here to hammer out. Like finding a
web-compatible syntax for URLs to resources within apps. And what the
performance of browsing to a signed app is, when we need to verify the
signature. And how we delegate the ability to sign apps to developers,
i.e. how we make the marketplace vouch for a developer rather than an
app.

Another problem, which isn't packaging-specific but is still
important, is how we give a browsed-to app additional permissions
over what webpages have. Currently permissions are created by the
installation code, but that won't work for browsed-to privileged apps.
But by seeing that the user has browsed to a package which is flagged
with "this is a privileged app", we can run the extra steps of
verifying signatures and loading the application manifest inside the
package, and grant the appropriate permissions requested there while
the user is using the app.

And what's the origin of a privileged app? If it gets the same origin
as the server that it was loaded from, that would mean that any
page on that server could simply <iframe> the packaged app, then
inject a <script> tag into the <iframe>, which would cause the script
to run with privileged access.
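
A minimal sketch of that attack, assuming the packaged app really is
same-origin with its host server (the privileged API name is made up):

    <!-- evil.html, hosted on the same server as myapp.webpackage -->
    <script>
      var frame = document.createElement('iframe');
      frame.src = '/url/to/myapp.webpackage!//index.html';
      frame.onload = function () {
        // Same origin, so the attacker can reach into the app's document
        // and inject a <script> that runs with the app's privileges.
        var s = frame.contentDocument.createElement('script');
        s.textContent = 'navigator.somePrivilegedAPI(/* attacker data */);';
        frame.contentDocument.body.appendChild(s);
      };
      document.body.appendChild(frame);
    </script>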


Anyhow, this is intended to be a starting point of discussion. And a
way to ease people's mind that we aren't breaking the web without
working on a plan to unbreak it as well as improve it.

But this is a complex subject and I can promise we won't be able to
solve all problems that we'd like to. Security tends to be that way.
But I think that we have a decent start of a plan for how to make
things significantly better.

Additionally we need to figure out what timeline we can do what on.
Not only will finding the proper solutions here take a while, a lot of
this stuff needs to be standardized which means that it'll take quite
some time before we can rely on cross-platform solutions.

But we should figure out what we can build for now and ship in
FirefoxOS. But this email is long enough as it is :)

/ Jonas

Ehsan Akhgari

Jun 18, 2014, 4:28:11 PM
to Jonas Sicking, dev...@lists.mozilla.org, dev-webapi, David Herman
Thanks for writing this up, Jonas!

Here are a couple of questions:

1. If we only allow access to privileged APIs to resources that are
served from the "package" source through CSP, would the concern about
injecting inline scripts from the same origin into the code running
inside the package remain?

2. How well do packages play with HTTP/2? One side effect of using
packages is that you need to download all of the code for the application
in order to verify the signature (assuming that we won't have per
resource signatures), but HTTP/2 would allow you to only download the
resources needed for the current document in a minimum number of HTTP
transactions. It's not immediately obvious to me how we can reconcile
these two models...

Cheers,
Ehsan

Paul Theriault

Jun 18, 2014, 9:21:59 PM
to Ehsan Akhgari, dev-webapi, David Herman, Jonas Sicking, dev...@lists.mozilla.org

On 19 Jun 2014, at 6:28 am, Ehsan Akhgari <ehsan....@gmail.com> wrote:

> Thanks for writing this up, Jonas!
>
> Here are a couple of questions:
>
> 1. If we only allow access to privileged APIs to resources that are served from the "package" source through CSP, would the concern about injecting inline scripts from the same origin into the code running inside the package remain?

Yeah, I was thinking that too. But then we also need to consider DOM access, I think? Consider a gallery app hosted in this manner: scripts from the same origin may be prevented from executing code in the packaged context and from using device-storage directly to read photos (for example), but they could just access the photos loaded in the DOM, couldn't they?
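
A rough sketch of that concern (hypothetical gallery markup): even if
CSP stops script injection into the packaged document, a same-origin
page framing the gallery can still read what it has already rendered.

    // attacker page, same origin as the packaged gallery app
    var gallery = document.querySelector('iframe').contentDocument;
    var photos = gallery.querySelectorAll('img');
    // These typically point at blob: URLs created from DeviceStorage
    // reads; being same-origin, the attacker can fetch and exfiltrate
    // them without ever calling DeviceStorage itself.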

>
> 2. How well do packages play with HTTP/2? One side effect of using packages is that you need to download all of the code for the application in order to verify the signature (assuming that we won't have per resource signatures), but HTTP/2 would allow you to only download the resources needed for the current document in a minimum number of HTTP transactions. It's not immediately obvious to me how we can reconcile these two models…

Karl Dubost

Jun 18, 2014, 9:32:24 PM
to Jonas Sicking, dev-webapi, dev...@lists.mozilla.org, David Herman
Jonas,

On 19 June 2014, at 03:28, Jonas Sicking <jo...@sicking.cc> wrote:
> Anyhow, this is intended to be a starting point of discussion. And a
> way to ease people's mind that we aren't breaking the web without
> working on a plan to unbreak it as well as improve it.

Very cool and interesting email. There are a lot of issues hidden in there.
Is there a breakdown of the issues on a site somewhere?

One part that would be interesting for feedback would be user testing on how people want to interact with these apps, be they users or developers. Do we have a place with "user stories" and/or somewhere we could put them?



--
Karl Dubost, Mozilla
http://www.la-grange.net/karl/moz

Jonas Sicking

Jun 18, 2014, 9:36:48 PM
to Ehsan Akhgari, dev-webapi, dev...@lists.mozilla.org, David Herman
On Thu, Jun 19, 2014 at 4:28 AM, Ehsan Akhgari <ehsan....@gmail.com> wrote:
> Thanks for writing this up, Jonas!
>
> Here are a couple of questions:
>
> 1. If we only allow access to privileged APIs to resources that are served
> from the "package" source through CSP, would the concern about injecting
> inline scripts from the same origin into the code running inside the package
> remain?

Yes, definitely. While you're right that my example of injecting a
<script> wouldn't work, as it would be prevented by the CSP policy, the
attacker could call arbitrary trusted JS functions and pass arbitrary
JS values to them. That would most likely make it possible to trick
the trusted code into doing bad things. Especially given how dynamic
and untyped most JS code is.
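
A rough sketch of the difference (the app function names here are
invented for illustration): no script injection is needed, the attacker
simply calls functions that the trusted code has already defined.

    // attacker page, same origin as the package host
    var appWindow = document.querySelector('iframe').contentWindow;
    // CSP blocks injecting new <script>, but not calling existing code:
    appWindow.deleteAllPhotos();                          // hypothetical app function
    appWindow.handleMessage('{"cmd": "exportUserData"}'); // attacker-chosen input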

> 2. How well do packages play with HTTP/2? One side effect of using packages
> is that you need to download all of the code for the application in order to
> verify the signature (assuming that we won't have per resource signatures),
> but HTTP/2 would allow you to only download the resources needed for the
> current document in a minimum number of HTTP transactions. It's not
> immediately obvious to me how we can reconcile these two models...

An update would indeed result in the whole new package being downloaded.
Though there is work going on to support differential downloads for
HTTP resources in general, which would work if those resources happen
to be packaged files as well.

So HTTP/2 and packages would work just fine together in the sense that
neither is incompatible with the other. But packages do cause more
data to be downloaded during an update.

/ Jonas

Jonas Sicking

Jun 18, 2014, 9:38:35 PM
to Karl Dubost, dev-webapi, dev...@lists.mozilla.org, David Herman
This is just getting started, so no. Wanna help put something like
that together? I think this would mostly feed into the packaging
discussion that is happening at the W3C TAG.

/ Jonas

David Bruant

Jun 19, 2014, 5:06:56 AM
to Jonas Sicking, dev...@lists.mozilla.org, dev-webapi, David Herman
On 18/06/2014 20:28, Jonas Sicking wrote:
> It would be great if we could get rid of the need for signing. However
> for now they solve very important problems for us. For some APIs they
> are the only ways to secure what could be argued are legacy features,
> like TCPSocket, UDPSocket and System XHR (I.e. raw HTTP). Ideally
> everyone would move to using WebSocket, WebRTC and CORS, but we're
> likely many years away from that.
I'm not sure I understand this paragraph. How is WebSocket easier to
secure than SystemXHR, for instance?

> Second, having to juggle multiple files can be a big annoyance. Anyone
> who has created a HTML slide deck knows how annoying it is to then
> send that slide deck to someone. You are forced to either do inlining
> hacks like the ones described above, or to zip all dependencies into a
> single file and then ask the person you are sending the deck to to
> unpack them. Compare this to sending a keynote or powerpoint file to
> anyone who can then simply double-click the file to watch it.
+1. But this will require a form of OS cooperation that is hard to
achieve, no?

> So the question is, how can we make sure that packages have all the
> benefits that normal webpages have today. I propose that we do the
> following:
>
> 1. Create a new package format.
Have the existing packaging formats been investigated? Chrome extensions
and apps have a package format ending in .crx (if only we knew some
people who work on Chrome apps :-p). Maybe it already fits?
Likewise for Mozilla Jetpack extensions (.cfx). They're another way to
package a bunch of resources together.
I think Tizen has its thing as well (based on the W3C Widget spec IIRC).

> Not only will finding the proper solutions here take a while, a lot of
> this stuff needs to be standardized which means that it'll take quite
> some time before we can rely on cross-platform solutions.
Have discussions already been started with other browser vendors or
mobile-web-app-platform vendors?

David

Jonas Sicking

Jun 19, 2014, 6:05:09 AM
to David Bruant, dev-webapi, dev...@lists.mozilla.org, David Herman
On Thu, Jun 19, 2014 at 5:06 PM, David Bruant <brua...@gmail.com> wrote:
> On 18/06/2014 20:28, Jonas Sicking wrote:
>
>> It would be great if we could get rid of the need for signing. However
>> for now they solve very important problems for us. For some APIs they
>> are the only ways to secure what could be argued are legacy features,
>> like TCPSocket, UDPSocket and System XHR (I.e. raw HTTP). Ideally
>> everyone would move to using WebSocket, WebRTC and CORS, but we're
>> likely many years away from that.
>
> I'm not sure I understand this paragraph. How is WebSocket easier to secure
> than SystemXHR, for instance?

Because the WebSocket protocol was designed with server security in
mind. The protocol contains steps where the client says "this is
origin X, are you ok with me connecting to you" and where the server
can accept the connection or not.

HTTP on the other hand does not. Many HTTP servers that live inside
corporate firewalls use the assumption that if a client can connect to
the server, then it must be a trusted client. They were built with
this assumption because this has been true in browsers since day one.
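
For illustration, the relevant part of the WebSocket opening handshake
(RFC 6455). The Origin header is what gives the server the chance to
refuse connections from web content; plain HTTP or raw TCP traffic from
SystemXHR or TCPSocket carries no equivalent opt-in step.

    GET /chat HTTP/1.1
    Host: internal.corp.example
    Upgrade: websocket
    Connection: Upgrade
    Origin: http://some-web-app.example
    Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
    Sec-WebSocket-Version: 13

    HTTP/1.1 403 Forbidden    (the server can reject based on Origin,
                               or reply 101 Switching Protocols to accept)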

>> Second, having to juggle multiple files can be a big annoyance. Anyone
>> who has created a HTML slide deck knows how annoying it is to then
>> send that slide deck to someone. You are forced to either do inlining
>> hacks like the ones described above, or to zip all dependencies into a
>> single file and then ask the person you are sending the deck to to
>> unpack them. Compare this to sending a keynote or powerpoint file to
>> anyone who can then simply double-click the file to watch it.
>
> +1. But this will require a form of OS cooperation that is hard to achieve,
> no?

Not more than we have for single-file HTML files today. We can simply
make Firefox register a .webpackage extension with the OS, which causes
any such files to be opened in the browser. Other browsers can and
should do the same.

>> So the question is, how can we make sure that packages have all the
>> benefits that normal webpages have today. I propose that we do the
>> following:
>>
>> 1. Create a new package format.
>
> Have the existing packaging formats been investigated?

Yes. Some of them.

> Chrome extensions and
> apps have a package format ending in .crx (if only we knew some people who
> work on Chrome apps :-p). Maybe it already fits?
> Likewise for Mozilla Jetpack extensions (.cfx). They're another way to
> package a bunch of resources together.
> I think Tizen has its thing as well (based on the W3C Widget spec IIRC).

IIRC none of these fit, since they are all based on zip, which neither
supports the type of metadata that we need nor supports streaming as
well as we need.

But certainly, if we find an existing package format that fits the
needs, then there's no reason not to reuse it.

>> Not only will finding the proper solutions here take a while, a lot of
>> this stuff needs to be standardized which means that it'll take quite
>> some time before we can rely on cross-platform solutions.
>
> Has discussion already been engaged with other browser vendors or
> mobile-web-app-platform vendors?

There have been lots of discussions about web-app platforms. And lots
of standards written. And none have been particularly successful.

There are some discussions currently around focusing on simply adding
a packaging format to the web, but otherwise not changing anything.
I.e. no new security models, no new "app" formats, no new extensions.
I have high hopes that this will succeed.

Once that's done, we will hopefully have almost everything that we need
to support browsable packaged apps. An "app" and a "website" are really
not that different after all, so if websites can use packages, then
apps should be able to use them as well.

Standardizing additional security models, like
privileged/signed/certified apps, on top of this is likely going to be
dramatically more controversial. But maybe it will be easier once all
other pieces are in place.

/ Jonas

Ehsan Akhgari

Jun 19, 2014, 11:24:12 AM
to Jonas Sicking, dev-webapi, dev...@lists.mozilla.org, David Herman
On 2014-06-18, 9:36 PM, Jonas Sicking wrote:
> On Thu, Jun 19, 2014 at 4:28 AM, Ehsan Akhgari <ehsan....@gmail.com> wrote:
>> Thanks for writing this up, Jonas!
>>
>> Here are a couple of questions:
>>
>> 1. If we only allow access to privileged APIs to resources that are served
>> from the "package" source through CSP, would the concern about injecting
>> inline scripts from the same origin into the code running inside the package
>> remain?
>
> Yes, definitely. While you're right that my example to inject a
> <script> wouldn't work as it would be prevented by the CSP policy, the
> attacker could call arbitrary trusted JS functions and pass arbitrary
> JS values to it. That will most likely result in being able to trick
> the trusted code to do bad things. Especially given how dynamic and
> untyped most JS code is.

Ah yes. This, plus what Paul mentioned, are probably going to mean that
we'd need to figure out a different origin for these packaged apps... :/

>> 2. How well do packages play with HTTP/2? One side effect of using packages
>> is that you need to download all of the code for the application in order to
>> verify the signature (assuming that we won't have per resource signatures),
>> but HTTP/2 would allow you to only download the resources needed for the
>> current document in a minimum number of HTTP transactions. It's not
>> immediately obvious to me how we can reconcile these two models...
>
> We would indeed result in the whole new package being downloaded.
> Though there is work going on to support differential downloads for
> HTTP resources in general which would work if those resources happen
> to be a packaged file as well.
>
> So HTTP/2 and packages would work just fine together in the sense that
> neither is incompatible with the other. But packages do cause more
> data to be downloaded during an update.

Makes sense.

Cheers,
Ehsan

Anne van Kesteren

Jun 26, 2014, 8:17:59 AM
to Jonas Sicking, dev-webapi, dev...@lists.mozilla.org, David Herman
On Wed, Jun 18, 2014 at 8:28 PM, Jonas Sicking <jo...@sicking.cc> wrote:
> However if we can enable developers to sign their own applications,
> rather than having to have them signed by the marketplace, then that
> would still mean that developers could roll out updates as quickly as
> web developers do today. I.e. no need to wait for review from a
> marketplace.

Could you elaborate on this? I thought part of the point of allowing
certain features to be used was that we could inspect the code and
make sure nothing malicious was going on. Do we actually secure things
in a different way?


> Additionally packages bring other advantages. The W3C TAG is currently
> working on creating a standardized packaging format for the web. They
> are doing this for at least a couple of reasons.

Note that what the TAG is working on now does not have the new URL
scheme. It only works for subresources of an HTML document. That
seemed somewhat disappointing to me, but nobody else cared much.


PS: That was an amazing email. I'm sorry I missed it initially. Andrew
had to point it out to me. It captures the entire picture really well.
Thanks!


--
http://annevankesteren.nl/

Dave Herman

Jun 26, 2014, 12:12:16 PM
to Anne van Kesteren, Jonas Sicking, dev-webapi, dev...@lists.mozilla.org
A new URL scheme wouldn't work at all; relative addressing and different
existing schemes (e.g. http vs https) would be broken. Or did you mean a
new URL separator? I favored that approach at first, but Jeni's
<link>-based proposal has a couple of key benefits over doing it through
URLs:

1. It's hard to find a separator that won't break existing content.
Whatever it is, it would certainly be ugly, but I'm not even sure what
it would be.

2. Using the <link> tag makes degradation much easier; it's just a tag
browsers ignore, and then the server gets requests for normal paths of
individual assets like "/foo/bar/baz.png" instead of "/foo.pkg". With the
URL approach servers have to handle requests from old browsers with the
funky separator like "/foo.pkg!//bar/baz.png".

3. Most critical: the <link> tag is a localized change to your HTML when
you decide to package up a directory, whereas the URL approach requires
changing lots of references all over your HTML.

Dave


Sent with AquaMail for Android
http://www.aqua-mail.com

David Herman

Jun 26, 2014, 3:15:54 PM
to Anne van Kesteren, Jonas Sicking, dev-webapi, dev...@lists.mozilla.org
Actually maybe I don't find my own points here all that compelling. :) Points #1 and #2 aren't death blows, and point #3 could possibly be treated as orthogonal; e.g. if we had a URL approach to packaging we could *also* have a <link> tag that effectively rewrites URLs to package URLs. (Or this could even be polyfilled with service worker.) I'm still open to the URL approach, or frankly any approach that works. Jonas is working on spelling out all the requirements in this space as clearly as possible, and I'll keep working with him to help however I can. We can reach out to people who care most about this, including the TAG and Dimitri and the web components crew.

Dave

Anne van Kesteren

Aug 5, 2014, 7:12:17 AM
to Jonas Sicking, dev-webapi, dev...@lists.mozilla.org, David Herman
On Thu, Jun 26, 2014 at 2:17 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
> On Wed, Jun 18, 2014 at 8:28 PM, Jonas Sicking <jo...@sicking.cc> wrote:
>> However if we can enable developers to sign their own applications,
>> rather than having to have them signed by the marketplace, then that
>> would still mean that developers could roll out updates as quickly as
>> web developers do today. I.e. no need to wait for review from a
>> marketplace.
>
> Could you elaborate on this? I thought part of the point of allowing
> certain features to be used was that we could inspect the code and
> make sure nothing malicious was going on. Do we actually secure things
> in a different way?

Still interested in this.


>> Additionally packages bring other advantages. The W3C TAG is currently
>> working on creating a standardized packaging format for the web. They
>> are doing this for at least a couple of reasons.
>
> Note that what the TAG is working on now does not have the new URL
> scheme. It only works for subresources of an HTML document.

See http://w3ctag.github.io/packaging-on-the-web/ for this.


--
http://annevankesteren.nl/

Jonas Sicking

Aug 6, 2014, 2:49:24 AM
to Anne van Kesteren, dev-webapi, dev...@lists.mozilla.org, David Herman
On Tue, Aug 5, 2014 at 4:12 AM, Anne van Kesteren <ann...@annevk.nl> wrote:
> On Thu, Jun 26, 2014 at 2:17 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
>> On Wed, Jun 18, 2014 at 8:28 PM, Jonas Sicking <jo...@sicking.cc> wrote:
>>> However if we can enable developers to sign their own applications,
>>> rather than having to have them signed by the marketplace, then that
>>> would still mean that developers could roll out updates as quickly as
>>> web developers do today. I.e. no need to wait for review from a
>>> marketplace.
>>
>> Could you elaborate on this? I thought part of the point of allowing
>> certain features to be used was that we could inspect the code and
>> make sure nothing malicious was going on. Do we actually secure things
>> in a different way?
>
> Still interested in this.

*If* we enable developer signing, the idea would be that we somehow
verify that a developer is a "good guy", rather than doing the current
verification that the app is a "good app".

This could be done by, for example, requiring the developer to sign
some form of contract, making sure they know what the UX/privacy/other
requirements are for the various APIs, and making it clear that we'll
revoke access if those requirements aren't met.

So very fluffy ideas. I'm always looking for better solutions if you have ideas.

/ Jonas

Tobie Langel

Aug 6, 2014, 4:07:56 AM
to Jonas Sicking, dev-webapi, dev...@lists.mozilla.org, David Herman
Has only exposing a predefined and limited set of APIs at runtime been
considered? This would allow developers to ship self-signed updates as
long as they kept within the bounds of the API surface they initially
defined. They could go beyond those bounds, of course, but APIs would
just throw or noop. Changing these permissions would need extra
approval by Mozilla (or other third parties trusted by the user) and
end-users would be alerted to the new capability requirements of the
app on update.

Thoughts?

--tobie

Jonas Sicking

Aug 6, 2014, 12:15:50 PM
to Tobie Langel, dev-webapi, David Herman, dev...@lists.mozilla.org
We would definitely do this. Any time we enable a developer to do
self-signing we would also include a list of APIs that they could get
access to.

The developer would always be able to enumerate a shorter list in their
app manifest, but if they wanted more than what Mozilla originally
granted, they would have to go to Mozilla and ask again.
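
A minimal sketch of what such a manifest-declared list might look like,
reusing the permission syntax from the existing Firefox OS app manifest
(the app and grant details are made up; the approval flow is the open
question above):

    {
      "name": "Example Gallery",
      "permissions": {
        "device-storage:pictures": {
          "description": "Needed to show and edit the user's photos",
          "access": "readwrite"
        },
        "tcp-socket": {
          "description": "Needed to sync with the desktop client"
        }
      }
    }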

/ Jonas

Lars Knudsen

Aug 20, 2014, 10:02:05 AM
to mozilla-d...@lists.mozilla.org
Hi Jonas,

it sounds like you have put a lot of good thought into this.

Maybe this is not 100% on the same topic but could be related:

* What about translations - could it make sense to start considering a way to include those outside/with the package somehow (outside the signing) - but still related. This way, translators can deliver packages post signing. (could be a non-issue.. was just thinking). The same goes for some other resources like static images, etc.

* Could it make sense to look at how Qt/QML addresses bundled resources (afaik - also works from packaged QML2/WK2/WebView apps in a nice way)?

* If I have a lot of content in my app, would it be possible to define multiple sources for updates/huge payloads via approved CDNs or such?

br
Lars

Jonas Sicking

Aug 20, 2014, 8:26:30 PM
to Lars Knudsen, mozilla-d...@lists.mozilla.org
On Wed, Aug 20, 2014 at 7:02 AM, Lars Knudsen <lar...@gmail.com> wrote:
> Maybe this is not 100% on the same topic but could be related:
>
> * What about translations - could it make sense to start considering a way to include those outside/with the package somehow (outside the signing) - but still related. This way, translators can deliver packages post signing. (could be a non-issue.. was just thinking). The same goes for some other resources like static images, etc.

I'd love to find a good solution for translations for the web in
general. It's not clear to me what a good translation system for
traditional websites is, and so it's not clear if we need to do
anything special for packaged apps or not.

> * Could it makes sense to look at how Qt/QML addresses bundled resources (afaik - also works from packaged QML2/WK2/WebView apps in a nice way).

Bundling resources isn't the hard part. The hard part is figuring out
how that interacts with web technologies such as URLs, headers,
streaming downloads and CSP.

> * If I have a lot of content in my app, would it be possible to define multiple sources for updates/huge payloads via approved CDNs or such.

You can already download resources from a CDN. This will become easier
once ServiceWorkers ship, but until then you can simply use things like
XMLHttpRequest or <img src="http://yourcdn.com/path/to/image.jpg">.

/ Jonas

Ben Francis

Sep 10, 2014, 3:16:42 PM
to Jonas Sicking, dev-webapi, dev...@lists.mozilla.org, David Herman
On Wed, Jun 18, 2014 at 7:28 PM, Jonas Sicking <jo...@sicking.cc> wrote:

> The W3C TAG is currently
> working on creating a standardized packaging format for the web.
>

I believe this is http://w3ctag.github.io/packaging-on-the-web/


> 1. Create a new package format. The format should support streaming
> and support adding metadata, like headers, to individual resources. It
> likely also needs to support metadata for the package itself.
>

This seems to be addressed by
http://w3ctag.github.io/packaging-on-the-web/#streamable-package-format


> 2. Add CSP syntax similar to "self" but which means "in the same
> package" rather than "from the same origin".
>

No mention of this yet in the Editor's Draft.


> 3. Add a new URL syntax for referring to resources within a package.
> One candidate is
> http://server.com/url/to/myapp.webpackage!//images/pic.jpg. I.e. the
> "!//" signals boundary between the URL sent to the server to fetch the
> package, and the path within the package to get a resource. But we
> need to check that this is web compatible.
>

As you know, this is currently explicitly listed as a "rejected approach"
here https://github.com/w3ctag/packaging-on-the-web#specialising-urls


> 4. Define metadata which can be added to the package for "this is a
> privileged app" and/or "this package is signed"
>

Not surprisingly, no mention of this yet.

In the W3C proposal a package link relation (e.g. <link rel="package"
href="/app.package" scope="/">) points to a file which is just a packaged
version of a collection of resources which would otherwise be individually
requested from their respective URLs.

As I understand it the idea is that a user agent which supports packages
can notice when it comes across a reference to a file which falls under the
scope of a package, and wait to download and process the resources
contained in the package before attempting to individually download that
resource, thereby cutting down the number of HTTP requests. It could then
use the contents of the package to populate a local cache so that resources
can be fetched from the cache rather than individually requested over the
network.

A user agent which does not support packages would ignore the package link
relation and just go ahead and download the individual resources from their
respective URLs. The package is just a progressive enhancement to help the
user agent get resources more quickly. For backwards compatibility all of
the resources would still have to be individually retrievable from the
server at their own URLs.
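
A small sketch of that behaviour, based on the example in the draft
(attribute names may still change):

    <link rel="package" href="/app.package" scope="/img/">
    <img src="/img/logo.png">

A package-aware browser fetches /app.package once and satisfies the
request for /img/logo.png from its contents; an older browser ignores
the <link> and simply requests /img/logo.png from the server as usual.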

One of the examples given (
http://w3ctag.github.io/packaging-on-the-web/#example-scenario) allows for
an individual resource whose cached copy has expired to be updated
independently of the rest of the resources in the package.

Unfortunately this approach wouldn't allow for a security model based on
the signing of packages, or a separate origin for files retrieved from
inside a package, because the package is just used to populate a cache
of resources. The user agent is free either to load the resources from a
cache or to load them from the server at any time; either way they will
always be considered as having come from the same origin.

It seems that the W3C proposal is incompatible with arguably the main use
case of packaged apps in Firefox OS, which is the cryptographic signing of
source code by a trusted party.

My question is: should we again raise this contentious issue with those
responsible for the W3C proposal, and risk derailing the standardisation
effort of packages on the basis that they don't meet this basic use case,
when standardised packages may have some value in their own right?

Jonas Sicking

Sep 10, 2014, 6:55:58 PM
to Ben Francis, dev-webapi, dev...@lists.mozilla.org, David Herman
On Wed, Sep 10, 2014 at 12:16 PM, Ben Francis <bfra...@mozilla.com> wrote:
> It seems that the W3C proposal is incompatible with arguably the main use
> case of packaged apps in Firefox OS, which is the cryptographic signing of
> source code by a trusted party.

There are quite a few use cases that the W3C proposal is incompatible
with. I'm currently working with various people to try to make the
"!//" approach a reality, but since it's a change in how URLs work, i.e.
a change in one of the fundamental building blocks of the web, it's a
change that's taking quite some time and quite some convincing.

So, so far I wouldn't count out "!//" as a contender for the final
standardized solution. But of course it's also not certain that it
will work.

I'd rather not bring this proposal to the W3C or IETF yet without
working through it with various stakeholders, to make sure that we iron
out more issues and more thoroughly evaluate alternatives.

/ Jonas