
Did W3C EME just criminalize privacy?


Mike Perry

May 19, 2014, 7:43:19 AM
to dev-p...@lists.mozilla.org
I just saw
https://hacks.mozilla.org/2014/05/reconciling-mozillas-mission-and-w3c-eme/
and I'm a bit concerned.

Obviously, it will be simple enough for Tor Browser and other Free/Libre
Firefox derivatives to disable this DRM mechanism, but I'm worried about
the long term effects of giving the web a persistent device identifier
(which that blog post mentions, but I can't find direct reference to in
the EME draft spec).

It seems to me that a device identifier will quickly be abused by more
than just streaming media sites. What will prevent banking sites,
government sites, and even sites that are simply hostile to privacy from
requiring the receipt of a device id before allowing access to their
content? I've already encountered sites that require me to view a
full-page captive advertisement prior to viewing their content. It does
not seem too much of a stretch for this type of captive advertisement to
use EME to obtain a device identifier as part of this process, too.

Worse: if this does happen, and a Firefox addon, Tor Browser, or other
Firefox derivative decides to alter the behavior of this device
identifier to bring it fully under user control, will we be violating
the DMCA by creating a 'circumvention device'?

Have these issues been considered?


--
Mike Perry

Mike Perry

May 20, 2014, 8:22:40 AM
to dev-p...@lists.mozilla.org
Henri Sivonen:
> On Monday, May 19, 2014 2:43:19 PM UTC+3, Mike Perry wrote:
> > I just saw
> > https://hacks.mozilla.org/2014/05/reconciling-mozillas-mission-and-w3c-eme/
> > and I'm a bit concerned.
>
> Note that a FAQ has now been appended to that post.

Unfortunately. The FAQ still does not answer most of my questions.

> > Obviously, it will be simple enough for Tor Browser and other Free/Libre
> > Firefox derivatives to disable this DRM mechanism,
>
> Many derivatives don't disable the NPAPI, though. Does Tor Browser? If
> not, why not?

We do in fact hard-block (via C++ patch) all NPAPI plugins from loading
into the Tor Browser address space, except for Adobe Flash. We do allow
the Flash plugin to load into the address space, but disable it by
default through Mozilla's XPCOM plugin service APIs, and warn the user
and ask for confirmation when they attempt to enable it. We also warn
them again if they leave it enabled across a browser restart. Even when
the plugin is enabled, Flash objects still default to click-to-play.

Some members of the Tor community are not satisfied even with this
compromise, as Flash has proven to be an utter disaster for browser
security, and has an ignorant (if not actively malicious) attitude
towards privacy issues and browser proxy settings. The fact that Adobe
is still involved in this DRM scheme does not exactly inspire confidence
in us for this reason, either.

> Considering that the CDM will be sandboxed but NPAPI plug-ins aren't,
> it would be more rational for Tor Browser to support downloading the
> CDM than to support NPAPI plug-ins. After all, if the sandbox doesn't
> have bugs and the networking goes over Tor, the CDM should be no worse
> for privacy than cookies or IndexedDB (see below).

It is still not clear to me after reading the FAQ that this is the case.

As the FAQ also mentions, unless the CDM host sandbox binary is
reproducible/deterministic, it is difficult to know for sure that it is
providing whatever privacy properties the spec and source code claims.
It will also be impossible to produce hardened builds of it with ASAN or
similar mechanisms.

Will the CDM host source code be compiled by Mozilla, or Adobe?

> > but I'm worried about the long term effects of giving the web a
> > persistent device identifier (which that blog post mentions,
>
> The post mentions it specifically to explain what we are doing about
> it. To make what we are doing clearer, we are:
> 1) Making Mozilla code gather the device-identifying raw data instead
> of letting the CDM have that level of system access.
> 2) Hashing the Mozilla-code-gathered device-identifying information
> together with a per-origin browser-generated secret and letting the
> CDM see the hash.
> 3) Allowing the user to clear the per-origin browser-generated secret
> to have the browser generate a new one. (Doing this will introduce
> latency to your next use of the CDM with the origin for which you
> cleared the browser-generated per-origin secret.)

Is there a bugzilla ticket and/or spec document that describes the
implementation of this hashing mechanism?

From what you've said, it sounds like the exact same per-origin secret
will be re-generated after it is cleared, unless there is a
randomized/salted input step to the hashing process that you did not
mention?

If the hash that is produced *is* salted, I am now wondering why the CDM
host would need to gather device-identifying information at all?

Also, what does "origin" mean in this context? Is a third party iframe
considered to have the URL bar domain as its "origin", or is the
"origin" here the one used by the same-origin policy (ie the iframe
source url domain)?

> > but I can't find direct reference to in the EME draft spec).
>
> EME doesn't specify DRM. It specifies an API for talking to a DRM
> component (that it calls a CDM). It just happens that node locking
> (making the user unable to migrate DRM keys from one device to another
> on their own as opposed to re-requesting keys from the DRM server) is
> a feature that Hollywood-approved DRMs tend to have.

Node-locking as implemented by an extractable, persistent, unsalted
identifier is a non-starter for us, and it also effectively criminalizes
privacy for stock Firefox users, via the DMCA.

> > It seems to me that a device identifier will quickly be abused by more
> > than just streaming media sites. What will prevent banking sites,
> > government sites, and even sites that are simply hostile to privacy from
> > requiring the receipt of a device id before allowing access to their
> > content?
>
> The CDM will be sandboxed and the ID the sandboxing host exposes to
> the CDM will be
> 1) not reversible to permanent device-identifying info (see "hash" above)
> 2) compartmentalized per-site and resettable, so no worse as a
> tracking identifier than the site setting a cookie or storing some
> data in IndexedDB or localStorage.
>
> > Have these issues been considered?
>
> They have. In fact, we considered this such an important point that
> addressing it was part of the initial announcement. Search for "By
> contrast, in Firefox the sandbox prohibits the CDM from fingerprinting
> the user’s device." in the very post you linked to!

If the same identifier is re-generated for a given origin, it doesn't
much matter. Again, the post is still not clear on this and other
important privacy points, even with the FAQ addition.


--
Mike Perry

Henri Sivonen

May 20, 2014, 9:05:10 AM
to dev-p...@lists.mozilla.org
On Tue, May 20, 2014 at 3:22 PM, Mike Perry <mike...@torproject.org> wrote:
> We do in fact hard-block (via C++ patch) all NPAPI plugins from loading
> into the Tor Browser address space, except for Adobe Flash.

Oh. Interesting.

> We do allow
> the Flash plugin to load into the address space, but disable it by
> default through Mozilla's XPCOM plugin service APIs, and warn the user
> and ask for confirmation when they attempt to enable it. We also warn
> them again if they leave it enabled across a browser restart. Even when
> the plugin is enabled, Flash objects still default to click-to-play.

Then you have a way to run Adobe Access DRM without a
Mozilla-controlled sandbox.

> Some members of the Tor community are not satisfied even with this
> compromise, as Flash has proven to be an utter disaster for browser
> security, and has an ignorant (if not actively malicious) attitude
> towards privacy issues and browser proxy settings.

The CDM not having network access on its own (i.e. its communications
with the outside world will be mediated by Firefox) will be an
improvement then.

> As the FAQ also mentions, unless the CDM host sandbox binary is
> reproducible/deterministic, it is difficult to know for sure that it is
> providing whatever privacy properties the spec and source code claims.

Correct, except if you get close enough to the same build environment
and parameters as Mozilla, you might be able to convince *yourself*
that the Mozilla-provided executable was built from the disclosed
source even if you fall short of convincing the *CDM* that your build
was.

> Will the CDM host source code be compiled by Mozilla, or Adobe?

By Mozilla.

> Is there a bugzilla ticket and/or spec document that describes the
> implementation of this hashing mechanism?

Not yet.

> From what you've said, it sounds like the exact same per-origin secret
> will be re-generated after it is cleared, unless there is a
> randomized/salted input step to the hashing process that you did not
> mention?

The plan is to generate the secret randomly and remember (in the
browser) which origin has which secret associated with it. That is,
the secret will work as a per-origin salt to the hash of the
device-identifying info.

> If the hash that is produced *is* salted, I am now wondering why the CDM
> host would need to gather device-identifying information at all?

The salt comes from the browser, which is assumed to be controlled by
the user. It is the User Agent, after all. The user is the adversary
in the DRM threat model. Therefore, the anti-cloneability of the
identifier cannot depend on browser-provided data (i.e. it has to have
some CDM host-gathered device-specific data hashed into it).

> Also, what does "origin" mean in this context? Is a third party iframe
> considered to have the URL bar domain as its "origin", or is the
> "origin" here the one used by the same-origin policy (ie the iframe
> source url domain)?

To be decided.

>> > but I can't find direct reference to in the EME draft spec).
>>
>> EME doesn't specify DRM. It specifies an API for talking to a DRM
>> component (that it calls a CDM). It just happens that node locking
>> (making the user unable to migrate DRM keys from one device to another
>> on their own as opposed to re-requesting keys from the DRM server) is
>> a feature that Hollywood-approved DRMs tend to have.
>
> Node-locking as implemented by an extractable, persistent, unsalted
> identifier is a non-starter for us, and it also effectively criminalizes
> privacy for stock Firefox users, via the DMCA.

Salted and semi-persistent (persists until the user asks the salt to
be forgotten).

--
Henri Sivonen
hsiv...@hsivonen.fi
https://hsivonen.fi/

Mike Perry

May 20, 2014, 10:33:15 AM
to dev-p...@lists.mozilla.org
Henri Sivonen:
> > We do allow the Flash plugin to load into the address space, but
> > disable it by default through Mozilla's XPCOM plugin service APIs, and
> > warn the user and ask for confirmation when they attempt to enable it.
> > We also warn them again if they leave it enabled across a browser
> > restart. Even when the plugin is enabled, Flash objects still default
> > to click-to-play.
>
> Then you have a way to run Adobe Access DRM without a Mozilla-controlled
> sandbox.

Is this sandbox architecture described anywhere? Is it just OS-level
sandboxing, or are you also running Adobe's code in some kind of
NaCl/asm.js/bytecode VM as well?

> > As the FAQ also mentions, unless the CDM host sandbox binary is
> > reproducible/deterministic, it is difficult to know for sure that it is
> > providing whatever privacy properties the spec and source code claims.
>
> Correct, except if you get close enough to the same build environment
> and parameters as Mozilla, you might be able to convince *yourself*
> that the Mozilla-provided executable was built from the disclosed
> source even if you fall short of convincing the *CDM* that your build
> was.

Hrmm. In theory, yes. But in practice, if you do not publish your exact
build environment config and build machine setup scripts, as well as
your Profile Guided Optimization files, even such manual verification
will be extremely costly and tedious, and will be unlikely to actually
happen with any frequency.

In fact, if there is any component of Firefox that should have
reproducible builds as a hard requirement, this seems like candidate 0.

> > Will the CDM host source code be compiled by Mozilla, or Adobe?
>
> By Mozilla.

Do you have a plan for producing AddressSanitizer+UBSanitizer and/or
assert-enabled builds that are capable of using a live Adobe $EVILBLOB?

We are considering providing AddressSanitizer-enabled Tor Browser
builds, both as a hardening option and to help us sniff out bugs. Would
we be able to get the CDM host component in an ASan+UBSan+assert-enabled
form for use with these builds?

> > From what you've said, it sounds like the exact same per-origin secret
> > will be re-generated after it is cleared, unless there is a
> > randomized/salted input step to the hashing process that you did not
> > mention?
>
> The plan is to generate the secret randomly and remember (in the
> browser) which origin has which secret associated with it. That is,
> the secret will work as a per-origin salt to the hash of the
> device-identifying info.
>
> > If the hash that is produced *is* salted, I am now wondering why the CDM
> > host would need to gather device-identifying information at all?
>
> The salt comes from the browser, which is assumed to be controlled by
> the user. It is the User Agent, after all. The user is the adversary
> in the DRM threat model. Therefore, the anti-cloneability of the
> identifier cannot depend on browser-provided data (i.e. it has to have
> some CDM host-gathered device-specific data hashed into it).

Ok, this makes sense then. Or at least, it makes as much sense as DRM
can possibly make.

> > Also, what does "origin" mean in this context? Is a third party iframe
> > considered to have the URL bar domain as its "origin", or is the
> > "origin" here the one used by the same-origin policy (ie the iframe
> > source url domain)?
>
> To be decided.

For the record, in Tor Browser we are also trying to demonstrate that it
is possible to provide the same third party tracking protections as "Do
Not Track" through technology, rather than policy.

In other words, we have jailed/double-keyed/disabled third party
cookies, cache, DOM storage, HTTP Auth, and TLS Session state to the URL
bar domain, to eliminate third party tracking across different url bar
sites.

In any world in which we allow this thing to run, we would want the
ability to patch Tor Browser to additionally/alternatively salt the
identifier with the URL bar domain, rather than just the iframe domain.

To be completely clear, the salt is handed to the CDM host by browser
code that we can modify, if we disagree with your decision on the iframe
scoping of this salt?

> >> > but I can't find direct reference to in the EME draft spec).
> >>
> >> EME doesn't specify DRM. It specifies an API for talking to a DRM
> >> component (that it calls a CDM). It just happens that node locking
> >> (making the user unable to migrate DRM keys from one device to another
> >> on their own as opposed to re-requesting keys from the DRM server) is
> >> a feature that Hollywood-approved DRMs tend to have.
> >
> > Node-locking as implemented by an extractable, persistent, unsalted
> > identifier is a non-starter for us, and it also effectively criminalizes
> > privacy for stock Firefox users, via the DMCA.
>
> Salted and semi-persistent (persists until the user asks the salt to
> be forgotten).

Alright. This seems like it might actually be the most reasonable way to
go, modulo CDM host build issues.


--
Mike Perry

Henri Sivonen

May 21, 2014, 1:01:26 PM
to dev-p...@lists.mozilla.org
On Tue, May 20, 2014 at 5:33 PM, Mike Perry <mike...@torproject.org> wrote:
> Is this sandbox architecture described anywhere?

Not really apart from the Hacks post, this thread and the thread on
the governance list.

> Is it just OS-level sandboxing

Yes.

>, or are you also running Adobe's code in some kind of
> NaCl/asm.js/bytecode VM as well?

No.

>> > As the FAQ also mentions, unless the CDM host sandbox binary is
>> > reproducible/deterministic, it is difficult to know for sure that it is
>> > providing whatever privacy properties the spec and source code claims.
>>
>> Correct, except if you get close enough to the same build environment
>> and parameters as Mozilla, you might be able to convince *yourself*
>> that the Mozilla-provided executable was built from the disclosed
>> source even if you fall short of convincing the *CDM* that your build
>> was.
>
> Hrmm. In theory, yes. But in practice, if you do not publish your exact
> build environment config and build machine setup scripts, as well as
> your Profile Guided Optimization files, even such manual verification
> will be extremely costly and tedious, and will be unlikely to actually
> happen with any frequency.

I don't see a reason (other than people being busy) for us not to
document our build environment. With proprietary systems, providing
outright VM images probably wouldn't work, I imagine.

> In fact, if there is any component of Firefox that should have
> reproducible builds as a hard requirement, this seems like candidate 0.

My understanding is that the actual first focus is/will be OpenH264 so
that we could both not sandbox it and not have to tell users to trust
a non-Mozilla entity.

>> > Will the CDM host source code be compiled by Mozilla, or Adobe?
>>
>> By Mozilla.
>
> Do you have a plan for producing AddressSanitizer+UBSanitizer and/or
> assert-enabled builds that are capable of using a live Adobe $EVILBLOB?

I'll put this on my list of things to ask Adobe about.

> For the record, in Tor Browser we are also trying to demonstrate that it
> is possible to provide the same third party tracking protections as "Do
> Not Track" through technology, rather than policy.
>
> In other words, we have jailed/double-keyed/disabled third party
> cookies, cache, DOM storage, HTTP Auth, and TLS Session state to the URL
> bar domain, to eliminate third party tracking across different url bar
> sites.

Cool.

> To be completely clear, the salt is handed to the CRM host by browser
> code that we can modify, if we disagree with your decision on the iframe
> scoping of this salt?

As currently planned, you should be able to do that. Each new salt
results in some server load-causing initialization, so the main
concern I see is unhappiness over making the system more chatty in a
way that translates into server load (and user-perceived latency, but
considering that Tor itself adds latency to buy privacy, I expect you
to be OK with added user-perceived latency to buy privacy).

Mike O'Neill

May 21, 2014, 3:57:51 PM
to Henri Sivonen, dev-p...@lists.mozilla.org

Can the CDM communicate back to base itself? If it can, then it can send the salt, along with any other data it has collected. Can it collect web history data in any way? If so, there should be some contingency on DNT. A salt-delete API (no read access) would make that available to open-source privacy extensions such as EFF's PrivacyBadger.

Mike
baycloud.com

Mike Perry

May 22, 2014, 6:51:17 AM
to dev-p...@lists.mozilla.org
Henri Sivonen:
> On Tue, May 20, 2014 at 5:33 PM, Mike Perry <mike...@torproject.org> wrote:
> > Is this sandbox architecture described anywhere?
>
> Not really apart from the Hacks post, this thread and the thread on
> the governance list.
>
> > Is it just OS-level sandboxing
>
> Yes.
>
> >, or are you also running Adobe's code in some kind of
> > NaCl/asm.js/bytecode VM as well?
>
> No.

Hrm.. What is the nature of the barrier between the $EVILBLOB and the
CDM host, then?

Can $EVILBLOB decide to alter the address space of the CDM host at will?

If so, what prevents it from going rogue and eliminating/ignoring your
salt entirely, or from being exploited and using the same
device-information gathering APIs that are allowed in the CDM host to
gather sufficient information about the device to break out of the
sandbox?

> > For the record, in Tor Browser we are also trying to demonstrate that it
> > is possible to provide the same third party tracking protections as "Do
> > Not Track" through technology, rather than policy.
> >
> > In other words, we have jailed/double-keyed/disabled third party
> > cookies, cache, DOM storage, HTTP Auth, and TLS Session state to the URL
> > bar domain, to eliminate third party tracking across different url bar
> > sites.
>
> Cool.
>
> > To be completely clear, the salt is handed to the CDM host by browser
> > code that we can modify, if we disagree with your decision on the iframe
> > scoping of this salt?
>
> As currently planned, you should be able to do that. Each new salt
> results in some server load-causing initialization, so the main
> concern I see is unhappiness over making the system more chatty in a
> way that translates into server load (and user-perceived latency, but
> considering that Tor itself adds latency to buy privacy, I expect you
> to be OK with added user-perceived latency to buy privacy).

Well, adding a full round trip over Tor will actually be quite
expensive, and I suspect will ultimately result in the
generation+transmission of the third-party trackable identifier *and*
the jailed identifier, which will also still be a tracking issue.

Is there a reason why we couldn't simply change the single salt that is
currently passed in to be based on url bar host + iframe url host,
instead of just iframe host, and avoid the extra round trip?
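[Editor's note: the double-keying proposed here can be sketched as follows. A hypothetical Python illustration, not actual Tor Browser or Firefox code; all names are invented. The salt store is keyed by the (URL bar domain, iframe domain) pair rather than by the iframe domain alone.]

```python
# Hypothetical sketch of double-keying the salt to the pair
# (URL bar domain, embedded iframe domain), as proposed above.
import hashlib
import secrets

salts = {}  # (urlbar_domain, iframe_domain) -> random salt

def double_keyed_id(urlbar_domain: str, iframe_domain: str,
                    device_info: bytes) -> str:
    key = (urlbar_domain, iframe_domain)
    salt = salts.setdefault(key, secrets.token_bytes(32))
    return hashlib.sha256(salt + device_info).hexdigest()

dev = b"device-data"
# The same embedded video host gets a different identifier on each
# top-level site, so it cannot link a user's visits across them.
on_site_a = double_keyed_id("site-a.example", "video.example", dev)
on_site_b = double_keyed_id("site-b.example", "video.example", dev)
assert on_site_a != on_site_b
```

The trade-off Henri raises below follows directly from this shape: each distinct key triggers its own CDM initialization against the license server.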


--
Mike Perry

Benjamin Smedberg

May 22, 2014, 1:21:48 PM
to Mike Perry, dev-p...@lists.mozilla.org
On 5/22/2014 6:51 AM, Mike Perry wrote:
>
> Hrm.. What is the nature of the barrier between the $EVILBLOB and the
> CDM host, then?

There are two processes, the Firefox process and the Adobe-EME-plugin
process. Both processes run Mozilla binaries. Let's presume for the
moment that those are called firefox.exe and plugin-container.exe.

When the user requests DRM activation, firefox.exe will set up launching
plugin-container.exe in a sandbox. This sandbox does not have access to
most OS APIs, including any networking or filesystem APIs. The only data
that the sandbox has is whatever firefox.exe gives it access to via
known pipes.

plugin-container.exe then loads the Adobe DRM DLL and feeds it the data
as requested by firefox.exe and sends the information back to firefox
over the pipes.

The Adobe DLL is free to poke around and check for instance that
plugin-container.exe is a binary that it expects before proceeding. But
it can't go to the network or the filesystem or store any persistent
identifiers, because it doesn't have access to those OS calls.

--BDS

Mike Perry

May 22, 2014, 2:11:16 PM
to Benjamin Smedberg, dev-p...@lists.mozilla.org
Benjamin Smedberg:
I am confused by trying to reconcile this description with Henri's
earlier statement that the CDM Host is a Firefox-provided, authenticated
executable that extracts device-identifying information, and only allows
$EVILBLOB to obtain a salted, hashed derivative of this information.

Based on the combination of your and Henri's statements, it sounds like
the CDM Host (plugin-container.exe) process is still allowed a large
degree of access to OS APIs/interfaces in order to extract
device-identifying information. Unless these privileges are dropped by
plugin-container.exe after extracting this information but *before*
executing any $EVILBLOB code, an exploited $EVILBLOB will still be able
to reach these APIs, which it may use to break out of the sandbox, or at
least to directly obtain device-identifying information for its own
purposes.

In other words, $EVILBLOB does not truly have least privilege under
this model.


Is Adobe/Hollywood against letting $EVILBLOB run in NaCl/asm.js or
similar restricted VM? Or is this just a significant engineering effort?


--
Mike Perry

Benjamin Smedberg

May 22, 2014, 2:20:55 PM
to Mike Perry, dev-p...@lists.mozilla.org
On 5/22/2014 2:11 PM, Mike Perry wrote:
>
> Based on the combination of your and Henri's statements, it sounds like
> the CDM Host (plugin-container.exe) process is still allowed a large
> degree of access to OS APIs/interfaces in order to extract
> device-identifying information.

The device identification and salting is performed by Firefox and passed
to the plugin-container.

--BDS

Henri Sivonen

May 23, 2014, 6:38:32 AM
to dev-p...@lists.mozilla.org
On Thu, May 22, 2014 at 1:51 PM, Mike Perry <mike...@torproject.org> wrote:
> Hrm.. What is the nature of the barrier between the $EVILBLOB and the
> CDM host, then?

C++ linkage across a shared library dynamic linking boundary.

> Can $EVILBLOB decide to alter the address space of the CRM host at will?

Yes.

> If so, what prevents it from going rogue and eliminating/ignoring your
> salt entirely, or from being exploited and using the same
> device-information gathering APIs that are allowed in the CDM host to
> gather sufficient information about the device to break out of the
> sandbox?

The process having zeroed the inputs (the origin-associated salt from
the browser and the device-unique information the CDM host has gathered)
to the cryptographic hash function before passing control to CDM code,
and the process having requested (also before control is passed to CDM
code) that the kernel not service most system calls from the process.

>> > For the record, in Tor Browser we are also trying to demonstrate that it
>> > is possible to provide the same third party tracking protections as "Do
>> > Not Track" through technology, rather than policy.
>> >
>> > In other words, we have jailed/double-keyed/disabled third party
>> > cookies, cache, DOM storage, HTTP Auth, and TLS Session state to the URL
>> > bar domain, to eliminate third party tracking across different url bar
>> > sites.
>>
>> Cool.
>>
>> > To be completely clear, the salt is handed to the CDM host by browser
>> > code that we can modify, if we disagree with your decision on the iframe
>> > scoping of this salt?
>>
>> As currently planned, you should be able to do that. Each new salt
>> results in some server load-causing initialization, so the main
>> concern I see is unhappiness over making the system more chatty in a
>> way that translates into server load (and user-perceived latency, but
>> considering that Tor itself adds latency to buy privacy, I expect you
>> to be OK with added user-perceived latency to buy privacy).
>
> Well, adding a full round trip over Tor will actually be quite
> expensive, and I suspect will ultimately result in the
> generation+transmission of the third-party trackable identifier *and*
> the the jailed identifier, which will also still be a tracking issue.
>
> Is there a reason why we couldn't simply change the single salt that is
> currently passed in to be based on url bar host + iframe url host,
> instead of just iframe host, and avoid the extra round trip?

If two sites embed a DRMed video from a video hosting service, the
server load-causing initialization would happen separately for the two
embedding sites if the salt was embedder-specific rather than
embeddee-specific only. Also, the post-initialization state of
the CDM would have to be stored twice on the user's disk (not written
by the CDM directly of course but in a way mediated by Firefox).

Note that we might end up making the salt specific to the
embedder/embeddee combination despite those disadvantages.

On Thu, May 22, 2014 at 8:21 PM, Benjamin Smedberg
<benj...@smedbergs.us> wrote:
> When the user requests DRM activation, firefox.exe will set up launching
> plugin-container.exe in a sandbox.

No, the CDM host process has to gather the device-unique data first
and only subsequently ask the kernel to take away its ability to do
that again. Hence, the CDM host process can't be launched without the
privileges needed to gather the device-unique information. Instead, it
has to make the transition after gathering the device-unique
information.

> The Adobe DLL is free to poke around

Poke around within the address space of the process it is in, that is.
That process will contain the code of the CDM host executable and (by
the time the CDM can do any poking) the code of the CDM
.so/.dylib/.dll.

On Thu, May 22, 2014 at 9:11 PM, Mike Perry <mike...@torproject.org> wrote:
> Based on the combination of your and Henri's statements, it sounds like
> the CDM Host (plugin-container.exe) process is still allowed a large
> degree of access to OS APIs/interfaces in order to extract
> device-identifying information. Unless these privileges are subsequently
> dropped by plugin-container.exe after extracting this information but
> *before* executing any $EVILBLOB code, then if $EVILBLOB is exploited,
> it will still access these APIs, which it may be able to use to break
> out of the sandbox, or at least to directly obtain device-identifying
> information for its own purposes.

For simplicity, let's say the browser consists of one executable and
one process. (That the browser is becoming multi-process itself isn't
relevant here.)

So there are two processes: browser and CDM host. There are two
executables to go with those processes: browser and CDM host.
Additionally, there is a shared library, the CDM, that gets loaded in
the CDM host process.

So roughly this is what we are planning on implementing:
1) The browser process spawns the CDM host process that runs the code
of the CDM host executable.
2) These two processes set up whatever IPC is going to be needed.
3) The browser passes the salt to the CDM host over IPC.
4) The CDM host gathers device-unique information.
5) The CDM host passes the data obtained in steps #3 and #4 to a
cryptographic hash function.
6) The CDM host zeros the memory that held the data obtained in steps
#3 and #4. (The output of the hash function is kept.)
7) The CDM host maps the code of the CDM .so/.dylib/.dll into its
address space. (This will require some trickery to inhibit the
execution of static initializers at this point.)
8) The CDM host asks the kernel to stop servicing system calls (apart
from using the already established IPC facilities, terminating the
process and obtaining more memory).
9) The CDM host calls into CDM code passing the hash to it.
10) (This step goes beyond Mozilla's responsibility to implement, but
I'm including it for clarity.) The CDM pokes around to convince itself
that the address space belongs to an executable that matches the
Mozilla-disclosed source and, therefore, has performed steps #4 and #5
in the manner expected.
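[Editor's note: steps 3 through 6 above can be rendered as a short sketch. This is illustrative Python only, not Mozilla's implementation; the mmap trickery of step 7 and the syscall lockdown of step 8 (e.g. seccomp) have no faithful Python equivalent and are omitted.]

```python
# Steps 3-6 in miniature: combine the browser-provided salt with the
# device-unique data in a cryptographic hash, then zero both inputs
# before any untrusted (CDM) code could run in this address space.
import hashlib

salt = bytearray(b"browser-provided-per-origin-salt")   # step 3 (via IPC)
device_info = bytearray(b"device-unique-data")          # step 4

# Step 5: feed both inputs to a cryptographic hash function.
node_id = hashlib.sha256(bytes(salt) + bytes(device_info)).hexdigest()

# Step 6: zero the input buffers; only the derived hash is kept.
for buf in (salt, device_info):
    for i in range(len(buf)):
        buf[i] = 0

assert set(salt) == {0} and set(device_info) == {0}
assert len(node_id) == 64  # SHA-256 hex digest is all that survives
```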

On Thu, May 22, 2014 at 9:20 PM, Benjamin Smedberg
<benj...@smedbergs.us> wrote:
> The device identification and salting is performed by Firefox and passed to
> the plugin-container.

As I explained earlier in this thread, the CDM won't trust Firefox to
gather the device-unique information.

That the device-unique information is gathered by neither the CDM nor
by the browser is the crux of this solution. The CDM Tivoizes the CDM
host from within, which makes it OK from the perspective of the DRM
threat model for the CDM host to be responsible for gathering the
device-unique information. The code for hashing and zeroing the
device-unique information will be built by Mozilla, which makes this OK
from the perspective of the user-privacy threat model, which treats
unauditable code as a potential threat to the user's privacy.
