
PSA: Major preference service architecture changes inbound


Kris Maglione

Jul 13, 2018, 4:37:22 PM
to Firefox Dev, dev-pl...@lists.mozilla.org
tl;dr: A major change to the architecture of the preference service has just
landed, so please be on the lookout for regressions.

We've been working for the last few weeks on rearchitecting the preference
service to work better in our current and future multi-process configurations,
and those changes have just landed in bug 1471025.

Our preference database tends to be very large, even without any user values.
It also needs to be available in every process. Until now, that's meant
completely separate copies of the hash table, name strings, and value strings in
each process, along with separate initialization in each content process, and
a lot of IPC overhead to keep the databases in sync.

After bug 1471025, the database is split into two sections: a snapshot of the
initial state of the database, which is stored in a read-only shared memory
region and shared by all processes, and a dynamic hash table of changes on top
of that snapshot, of which each process has its own. This approach
significantly decreases memory, IPC, and content process initialization
overhead. It also decreases the complexity of certain cross-process
synchronization logic.
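
For readers who think better in code, the split described above can be sketched roughly like this (illustrative types and names only, not the actual Gecko data structures):

```cpp
#include <map>
#include <optional>
#include <string>

// Rough sketch of the two-tier design: a read-only snapshot shared by all
// processes, plus a per-process table of changes layered on top of it.
struct PrefDatabase {
  // Lives in a read-only shared memory region; identical in every process.
  const std::map<std::string, std::string>* sharedSnapshot;

  // Per-process changes; nullopt records that a snapshot value was cleared.
  std::map<std::string, std::optional<std::string>> localChanges;

  std::optional<std::string> Get(const std::string& name) const {
    // A local change always shadows the snapshot...
    if (auto it = localChanges.find(name); it != localChanges.end()) {
      return it->second;
    }
    // ...otherwise fall through to the shared, immutable snapshot.
    if (auto it = sharedSnapshot->find(name); it != sharedSnapshot->end()) {
      return it->second;
    }
    return std::nullopt;
  }
};
```

Only the change table ever mutates, so the bulk of the database (names, values, table structure) exists once in the shared region rather than once per process.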

But it adds complexity in other areas, and represents one of the largest
changes to the workings of the preference service since its creation. So
please be on the lookout for regressions that look related to preference
handling. If you spot any, please file bugs blocking https://bugzil.la/1471025.

Thanks,
Kris

Jonathan Kew

Jul 17, 2018, 9:07:03 AM
to Kris Maglione, Firefox Dev, dev-pl...@lists.mozilla.org
On 13/07/2018 21:37, Kris Maglione wrote:
> tl;dr: A major change to the architecture of the preference service has just
> landed, so please be on the lookout for regressions.
>
> We've been working for the last few weeks on rearchitecting the
> preference service to work better in our current and future
> multi-process configurations, and those changes have just landed in bug
> 1471025.

Looks like a great step forward!

While we're thinking about the prefs service, is there any possibility
we could enable off-main-thread access to preferences?

I am aware that in simple cases, this can be achieved via the
StaticPrefsList; by defining a VARCACHE_PREF there, I can read its value
from other threads. But this doesn't help in my use case, where I need
another thread to be able to query an extensible set of pref names that
are not fully known at compile time.

Currently, it looks like to do this, I'll have to iterate over the
relevant prefs branch(es) ahead of time (on the main thread) and copy
all the entries to some other place that is then available to my worker
threads. For my use case, at least, the other threads only need read
access; modifying prefs could still be limited to the main thread.
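
As a rough sketch of that workaround (with a std::map standing in for the real prefs branch API; names here are illustrative):

```cpp
#include <map>
#include <string>

// Snapshot every pref under a branch prefix into a plain map on the main
// thread; the returned copy can then be handed to worker threads, which
// only ever read it. (std::map is a stand-in for the real branch API.)
std::map<std::string, std::string>
SnapshotBranch(const std::map<std::string, std::string>& prefs,
               const std::string& prefix) {
  std::map<std::string, std::string> copy;
  // Ordered map: start at the first key >= prefix and stop once keys no
  // longer share the prefix.
  for (auto it = prefs.lower_bound(prefix);
       it != prefs.end() &&
       it->first.compare(0, prefix.size(), prefix) == 0;
       ++it) {
    copy.insert(*it);
  }
  return copy;
}
```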

Possible? Or would the overhead of locking be too crippling?

JK

Kris Maglione

Jul 17, 2018, 11:57:43 AM
to Jonathan Kew, dev-pl...@lists.mozilla.org, Firefox Dev
On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>On 13/07/2018 21:37, Kris Maglione wrote:
>>tl;dr: A major change to the architecture of the preference service has
>>just landed, so please be on the lookout for regressions.
>>
>>We've been working for the last few weeks on rearchitecting the
>>preference service to work better in our current and future
>>multi-process configurations, and those changes have just landed in
>>bug 1471025.
>
>Looks like a great step forward!
>
>While we're thinking about the prefs service, is there any possibility
>we could enable off-main-thread access to preferences?

I think the chances of that are pretty close to 0, but I'll
defer to Nick.

We definitely can't afford the locking overhead—preference
look-ups already show up in profiles without it. And even the
current limited exception that we grant Stylo while it has the
main thread blocked causes problems (bug 1474789), since it
makes it impossible to update statistics for those reads, or
switch to Robin Hood hashing (which would make our hash tables
much smaller and more efficient, but requires read operations to
be able to move entries).

>I am aware that in simple cases, this can be achieved via the
>StaticPrefsList; by defining a VARCACHE_PREF there, I can read its
>value from other threads. But this doesn't help in my use case, where
>I need another thread to be able to query an extensible set of pref
>names that are not fully known at compile time.
>
>Currently, it looks like to do this, I'll have to iterate over the
>relevant prefs branch(es) ahead of time (on the main thread) and copy
>all the entries to some other place that is then available to my
>worker threads. For my use case, at least, the other threads only need
>read access; modifying prefs could still be limited to the main
>thread.

That's probably your best option, yeah. Although I will say that
those kinds of extensible preference sets aren't great for
performance or memory usage, so switching to some other model
might be better.

>Possible? Or would the overhead of locking be too crippling?

The latter, I'm afraid.

Jeff Gilbert

Jul 17, 2018, 6:50:13 PM
to Kris Maglione, dev-platform, Firefox Dev, Jonathan Kew
We should totally be able to afford the very low cost of a
rarely-contended lock. What's going on that causes uncached pref reads
to show up so hot in profiles? Do we have a list of problematic pref
keys?
> _______________________________________________
> dev-platform mailing list
> dev-pl...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

Nicholas Nethercote

Jul 18, 2018, 12:23:56 AM
to Kris Maglione, dev-platform, Firefox Dev, Jonathan Kew
On Wed, Jul 18, 2018 at 1:57 AM, Kris Maglione <kmag...@mozilla.com> wrote:

>
>> While we're thinking about the prefs service, is there any possibility we
>> could enable off-main-thread access to preferences?
>
> I think the chances of that are pretty close to 0, but I'll defer to Nick.
>

I agree, for the reasons that Kris mentioned.



> I need another thread to be able to query an extensible set of pref names
> that are not fully known at compile time.
>

This is a good example of how prefs is a far more general mechanism than I
would like, leading to all manner of use and abuse. "All I want is a
key-value store, with fast multi-threaded access, where the keys aren't
known ahead of time."

Nick

Myk Melez

Jul 19, 2018, 3:05:11 PM
to Nicholas Nethercote, Kris Maglione, dev-platform, Firefox Dev, Jonathan Kew
Nicholas Nethercote wrote on 2018-07-17 21:23:
> This is a good example of how prefs is a far more general mechanism than I
> would like, leading to all manner of use and abuse. "All I want is a
> key-value store, with fast multi-threaded access, where the keys aren't
> known ahead of time."
Agreed, the prefs service has been overloaded for too long due to the
lack of alternatives for storing key-value data.

I've been investigating the Lightning Memory-Mapped Database (LMDB)
storage engine for such use cases. It supports multi-threaded (and
multi-process) access and is optimized for fast reads of arbitrary keys.

It could conceivably handle this subset of prefs usage, along with a
variety of other KV storage use cases that currently use JSONFile or
other bespoke storage engines/formats.

Follow along in bug 1445451
(https://bugzilla.mozilla.org/show_bug.cgi?id=1445451) for status
updates on validating, landing, and using LMDB (via the rkv Rust crate).

-myk

Kris Maglione

Jul 19, 2018, 5:20:03 PM
to Jeff Gilbert, dev-platform, Firefox Dev, Jonathan Kew
On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>We should totally be able to afford the very low cost of a
>rarely-contended lock. What's going on that causes uncached pref reads
>to show up so hot in profiles? Do we have a list of problematic pref
>keys?

So, at the moment, we read about 10,000 preferences at startup in debug
builds. That number is probably slightly lower in non-debug builds, but we
don't collect stats there. We're working on reducing that number (which is why
we collect statistics in the first place), but for now, it's still quite high.


As for the cost of locks... On my machine, in a tight loop, the cost of
entering and exiting a MutexAutoLock is about 37ns. This is pretty close to
ideal circumstances, on a single core of a very fast CPU, with very fast RAM,
everything cached, and no contention. If we could extrapolate that to normal
usage, it would be about a third of a ms of additional overhead for startup.
I've fought hard enough for 1ms startup time improvements, but *shrug*, if it
were that simple, it might be acceptable.
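
That measurement is easy to reproduce; a minimal version of the microbenchmark (with std::mutex standing in for MutexAutoLock, so exact numbers will differ) looks something like:

```cpp
#include <chrono>
#include <mutex>

// Time an uncontended lock/unlock pair in a tight loop. This reproduces the
// best-case scenario described above (one thread, everything cached); a
// contended lock would cost considerably more.
double MeasureLockCostNs(int iterations) {
  std::mutex m;
  auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i < iterations; ++i) {
    std::lock_guard<std::mutex> lock(m);  // acquire + release, never contended
  }
  auto t1 = std::chrono::steady_clock::now();
  auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0);
  return double(ns.count()) / iterations;
}
```

At ~37 ns per pair, the ~10,000 pref reads at startup would cost about 370,000 ns, i.e. roughly a third of a millisecond, which is where the figure above comes from.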

But I have no reason to think the lock would be rarely contended. We read
preferences *a lot*, and if we allowed access from background threads, I have
no doubt that we would start reading them a lot from background threads in
addition to reading them a lot from the main thread.

And that would mean, in addition to lock contention, cache contention and
potentially even NUMA issues. Those last two apply to atomic var caches too,
but at least they generally apply only to the specific var caches being
accessed off-thread, rather than pref look-ups in general.


Maybe we could get away with it at first, as long as off-thread usage remains
low. But long term, I think it would be a performance foot-gun. And,
paradoxically, the less foot-gunny it is, the less useful it probably is, too.
If we're only using it off-thread in a few places, and don't have to worry
about contention, why are we bothering with locking and off-thread access in
the first place?

>On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione <kmag...@mozilla.com> wrote:
>> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>>>
>>> On 13/07/2018 21:37, Kris Maglione wrote:
>>>>
>>>> tl;dr: A major change to the architecture of the preference service has just
>>>> landed, so please be on the lookout for regressions.
>>>>
>>>> We've been working for the last few weeks on rearchitecting the
>>>> preference service to work better in our current and future multi-process
>>>> configurations, and those changes have just landed in bug 1471025.
>>>
>>>
>>> Looks like a great step forward!
>>>
>>> While we're thinking about the prefs service, is there any possibility we
>>> could enable off-main-thread access to preferences?
>>
>>
>> I think the chances of that are pretty close to 0, but I'll defer to Nick.
>>
>> We definitely can't afford the locking overhead—preference look-ups already
>> show up in profiles without it. And even the current limited exception that
>> we grant Stylo while it has the main thread blocked causes problems (bug
>> 1474789), since it makes it impossible to update statistics for those reads,
>> or switch to Robin Hood hashing (which would make our hash tables much
>> smaller and more efficient, but requires read operations to be able to move
>> entries).
>>
>>> I am aware that in simple cases, this can be achieved via the
>>> StaticPrefsList; by defining a VARCACHE_PREF there, I can read its value
>>> from other threads. But this doesn't help in my use case, where I need
>>> another thread to be able to query an extensible set of pref names that are
>>> not fully known at compile time.
>>>
>>> Currently, it looks like to do this, I'll have to iterate over the
>>> relevant prefs branch(es) ahead of time (on the main thread) and copy all
>>> the entries to some other place that is then available to my worker threads.
>>> For my use case, at least, the other threads only need read access;
>>> modifying prefs could still be limited to the main thread.
>>
>>
>> That's probably your best option, yeah. Although I will say that those kinds
>> of extensible preference sets aren't great for performance or memory usage,
>> so switching to some other model might be better.
>>
>>> Possible? Or would the overhead of locking be too crippling?
>>
>>
>> The latter, I'm afraid.
>>

--
Kris Maglione
Senior Firefox Add-ons Engineer
Mozilla Corporation

On two occasions I have been asked, "Pray, Mr. Babbage, if you put
into the machine wrong figures, will the right answers come out?" I am
not able rightly to apprehend the kind of confusion of ideas that
could provoke such a question.
--Charles Babbage

Justin Dolske

Jul 19, 2018, 5:39:03 PM
to dev-platform, Firefox Dev
I know we've had code that, instead of reading a pref directly, checks the
pref once in an init() and uses pref observers to watch for any changes to
it. (i.e., basically mirrors the pref into some module-local variable, at
which point you can roll your own locking or whatever to make it
threadsafe). Is that a pattern that would work here, if people really want
OMT access but we're not ready to bake support for that into the pref
service? [Perhaps with some simple helper glue / boilerplate to make it
easier.]
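
The pattern described here might be sketched as follows (illustrative names and wiring, not actual Gecko APIs; the mirrored value is an atomic so any thread can read it safely):

```cpp
#include <atomic>

// Mirror a pref into a module-local atomic. The pref service itself is only
// ever touched on the main thread; other threads read the mirror.
static std::atomic<bool> sFeatureEnabled{false};

// Main thread only: called once at init(), and again from a pref observer
// whenever the pref changes.
void OnFeaturePrefChanged(bool aNewValue) {
  sFeatureEnabled.store(aNewValue, std::memory_order_relaxed);
}

// Safe to call from any thread.
bool IsFeatureEnabled() {
  return sFeatureEnabled.load(std::memory_order_relaxed);
}
```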

Justin

On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione <kmag...@mozilla.com>
> _______________________________________________
> firefox-dev mailing list
> firef...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev
>

Daniel Veditz

Jul 19, 2018, 5:39:33 PM
to Nicholas Nethercote, Kris Maglione, dev-platform, Firefox Dev, Jonathan Kew
On Tue, Jul 17, 2018 at 9:23 PM, Nicholas Nethercote <n.neth...@gmail.com> wrote:

> This is a good example of how prefs is a far more general mechanism than I
> would like, leading to all manner of use and abuse. "All I want is a
> key-value store, with fast multi-threaded access, where the keys aren't
> known ahead of time."
>

​Prefs might be a terrible way to implement that functionality, but it's
been used that way as long as we've had prefs in Mozilla so there seems to
be a need for it. Early offenders: printer setup, mail accounts, external
protocol handlers (and I believe content-type handlers, but those moved out
long ago). Possibly inspired by the way the Windows registry was used.

-Dan Veditz

Kris Maglione

Jul 19, 2018, 6:03:12 PM
to Justin Dolske, dev-platform, Firefox Dev
On Thu, Jul 19, 2018 at 02:37:07PM -0700, Justin Dolske wrote:
>I know we've had code that, instead of reading a pref directly, checks the
>pref once in an init() and uses pref observers to watch for any changes to
>it. (i.e., basically mirrors the pref into some module-local variable, at
>which point you can roll your own locking or whatever to make it
>threadsafe). Is that a pattern that would work here, if people really want
>OMT access but we're not ready to bake support for that into the pref
>service? [Perhaps with some simple helper glue / boilerplate to make it
>easier.]

We already have helper glue for this. For C++, we have VarCache prefs, and for
JS, we have XPCOMUtils.defineLazyPreferenceGetter. In general, it's probably
better to use those rather than hand-rolled observers when possible, since I
have optimizations planned for both.

If a man does not keep pace with his companions, perhaps it is because
he hears a different drummer. Let him step to the music which he
hears, however measured or far away.
--Henry David Thoreau

Jeff Gilbert

Jul 19, 2018, 10:17:25 PM
to Kris Maglione, dev-platform, Firefox Dev, Jonathan Kew
Using a classic read/write exclusive lock, we would only ever contend
on read+write or write+write, which are /rare/.

It's really, really nice when we can have dead-simple threadsafe APIs,
instead of requiring people to jump through hoops or roll their own
(fragile) dispatch code. IIRC most new APIs added to the web are
supposed to be available in Workers, so the need for reading prefs
off-main-thread is only set to grow.

I don't see how this can mutate into a foot-gun in ways that aren't
already the case today without off-main-thread access.

Anyway, I appreciate the work that's been done and is ongoing here. As
you burn down the pref accesses in start-up, please consider
unblocking this feature request. (Personally I'd just eat the 400us in
exchange for this simplifying architectural win)

Kris Maglione

Jul 19, 2018, 10:25:16 PM
to Jeff Gilbert, dev-platform, Firefox Dev, Jonathan Kew
On Thu, Jul 19, 2018 at 07:17:13PM -0700, Jeff Gilbert wrote:
>Using a classic read/write exclusive lock, we would only ever contend
>on read+write or write+write, which are /rare/.

That might be true if we gave up on the idea of switching to Robin Hood
hashing. But if we don't, then every lookup is a potential write, which means
every lookup requires a write lock.

We also don't really have any good APIs for rwlocks at the moment. Which,
admittedly, is a solvable problem.

You can't trust code that you did not totally create yourself.
--Ken Thompson

Nicholas Nethercote

Jul 19, 2018, 10:53:35 PM
to Daniel Veditz, Kris Maglione, dev-platform, Firefox Dev, Jonathan Kew
On Fri, Jul 20, 2018 at 7:37 AM, Daniel Veditz <dve...@mozilla.com> wrote:

> ​Prefs might be a terrible way to implement that functionality, but it's
> been used that way as long as we've had prefs in Mozilla so there seems to
> be a need for it. Early offenders: printer setup, mail accounts, external
> protocol handlers (and I believe content-type handlers, but those moved out
> long ago). Possibly inspired by the way the Windows registry was used.
>

The Windows registry is a very good comparison, alas.

Nick

Jonathan Kew

Jul 20, 2018, 5:00:36 AM
to Jeff Gilbert, Kris Maglione, dev-platform, Firefox Dev
On 20/07/2018 03:17, Jeff Gilbert wrote:
> Using a classic read/write exclusive lock, we would only ever contend
> on read+write or write+write, which are /rare/.

Indeed, that's what I envisage we'd want. The -vast- majority of prefs
access only involves reading values. We should be able to do that from
any thread without a second thought about either safety or contention.

>
> It's really, really nice when we can have dead-simple threadsafe APIs,
> instead of requiring people to jump through hoops or roll their own
> dispatch code. (fragile) IIRC most new APIs added to the web are
> supposed to be available in Workers, so the need for reading prefs
> off-main-thread is only set to grow.
>
> I don't see how this can mutate into a foot-gun in ways that aren't
> already the case today without off-main-thread access.
>
> Anyway, I appreciate the work that's been done and is ongoing here. As
> you burn down the pref accesses in start-up, please consider
> unblocking this feature request. (Personally I'd just eat the 400us in
> exchange for this simplifying architectural win)

+1 to that. Our need for OMT access to prefs is only likely to grow,
IMO, and we should just fix it once, so that any code (regardless of
which thread(s) it may eventually run on) can trivially read prefs.

Even if that means we can't adopt Robin Hood hashing, I think the
trade-off would be well worthwhile.

JK

Ehsan Akhgari

Jul 20, 2018, 11:54:17 AM
to Jonathan Kew, Jeff Gilbert, Kris Maglione, dev-platform, Firefox Dev
On Fri, Jul 20, 2018 at 5:00 AM, Jonathan Kew <jfkt...@gmail.com> wrote:

> On 20/07/2018 03:17, Jeff Gilbert wrote:
>
>> Using a classic read/write exclusive lock, we would only ever contend
>> on read+write or write+write, which are /rare/.
>>
>
> Indeed, that's what I envisage we'd want. The -vast- majority of prefs
> access only involves reading values. We should be able to do that from any
> thread without a second thought about either safety or contention.
>
>
>> It's really, really nice when we can have dead-simple threadsafe APIs,
>> instead of requiring people to jump through hoops or roll their own
>> dispatch code. (fragile) IIRC most new APIs added to the web are
>> supposed to be available in Workers, so the need for reading prefs
>> off-main-thread is only set to grow.
>>
>> I don't see how this can mutate into a foot-gun in ways that aren't
>> already the case today without off-main-thread access.
>>
>> Anyway, I appreciate the work that's been done and is ongoing here. As
>> you burn down the pref accesses in start-up, please consider
>> unblocking this feature request. (Personally I'd just eat the 400us in
>> exchange for this simplifying architectural win)
>>
>
> +1 to that. Our need for OMT access to prefs is only likely to grow, IMO,
> and we should just fix it once, so that any code (regardless of which
> thread(s) it may eventually run on) can trivially read prefs.
>
> Even if that means we can't adopt Robin Hood hashing, I think the
> trade-off would be well worthwhile.
>

While it's true that the need for reading more prefs from workers will
continue to grow, given the number of prefs we have I think it's safe to
say that the set of prefs that we need to access from threads beside the
main thread will be a minority of the entire set of prefs. So one way to
have our cake and eat it too would be to separate out the prefs that are
meant to be accessible through a worker thread and expose them through an
alternate thread-safe API which may be a bit more costly to call on the
main thread, and keep the rest of the main-thread-only prefs on the existing
non-thread-safe APIs. This won't be as elegant as having one set of APIs
to work with, of course.

(FWIW pref accesses sometimes also show up in profiles when code ends up
reading prefs in a loop, e.g. when invoked from web page JS, so the startup
scenario discussed so far is only one case to think about here, there are
many more also.)



--
Ehsan

Jean-Yves Avenard

Jul 20, 2018, 11:58:05 AM
to Kris Maglione, dev-pl...@lists.mozilla.org, Firefox Dev
Hi

I believe that this change may be the cause of https://bugzilla.mozilla.org/show_bug.cgi?id=1477254

That is, the pref value set in all.js no longer overrides the default value set in StaticPrefs. The problem occurs mostly with e10s on; when e10s is disabled, I see the problem only about 5% of the time.

JY

> On 13 Jul 2018, at 10:37 pm, Kris Maglione <kmag...@mozilla.com> wrote:
>
> tl;dr: A major change to the architecture of the preference service has just landed, so please be on the lookout for regressions.
>
> We've been working for the last few weeks on rearchitecting the preference service to work better in our current and future multi-process configurations, and those changes have just landed in bug 1471025.
>
> Our preference database tends to be very large, even without any user values. It also needs to be available in every process. Until now, that's meant completely separate copies of the hash table, name strings, and value strings in each process, along with separate initialization in each content process, and a lot of IPC overhead to keep the databases in sync.
>
> After bug 1471025, the database is split into two sections: a snapshot of the initial state of the database, which is stored in a read-only shared memory region and shared by all processes, and a dynamic hash table of changes on top of that snapshot, of which each process has its own. This approach significantly decreases memory, IPC, and content process initialization overhead. It also decreases the complexity of certain cross-process synchronization logic.
>
> But it adds complexity in other areas, and represents one of the largest changes to the workings of the preference service since its creation. So please be on the lookout for regressions that look related to preference handling. If you spot any, please file bugs blocking https://bugzil.la/1471025.
>
> Thanks,
> Kris

Eric Rahm

Jul 20, 2018, 2:37:09 PM
to Kris Maglione, Jeff Gilbert, dev-platform, Firefox Dev, Jonathan Kew
We *could* special case prefs with an appropriate data structure that works
in a thread-safe manner; as far as RWLocks go, we do have one in tree [1].
This has gone off the rails a bit from Kris' original announcement, which
I'll reiterate: Watch out for prefs related bustage.

Jeff, would you mind filing a bug for further discussion of off-main-thread
access as a future improvement?

[1] https://searchfox.org/mozilla-central/source/xpcom/threads/RWLock.h

On Thu, Jul 19, 2018 at 7:25 PM, Kris Maglione <kmag...@mozilla.com> wrote:

> On Thu, Jul 19, 2018 at 07:17:13PM -0700, Jeff Gilbert wrote:
>
>> Using a classic read/write exclusive lock, we would only ever contend
>> on read+write or write+write, which are /rare/.
>>
>
> That might be true if we gave up on the idea of switching to Robin Hood
> hashing. But if we don't, then every lookup is a potential write, which
> means every lookup requires a write lock.
>
> We also don't really have any good APIs for rwlocks at the moment. Which,
> admittedly, is a solvable problem.
>
>
>>>>>> On 13/07/2018 21:37, Kris Maglione wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> tl;dr: A major change to the architecture of the preference service has just
>>>>>>> landed, so please be on the lookout for regressions.
>>>>>>>
>>>>>>> We've been working for the last few weeks on rearchitecting the
>>>>>>> preference service to work better in our current and future
>>>>>>> multi-process
>>>>>>> configurations, and those changes have just landed in bug 1471025.
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
> You can't trust code that you did not totally create yourself.
> --Ken Thompson
>
>

Kris Maglione

Jul 20, 2018, 4:13:28 PM
to Jonathan Kew, Jeff Gilbert, dev-platform, Firefox Dev
On Fri, Jul 20, 2018 at 10:00:22AM +0100, Jonathan Kew wrote:
>+1 to that. Our need for OMT access to prefs is only likely to grow,
>IMO, and we should just fix it once, so that any code (regardless of
>which thread(s) it may eventually run on) can trivially read prefs.
>
>Even if that means we can't adopt Robin Hood hashing, I think the
>trade-off would be well worthwhile.

This is exactly the kind of performance footgun I'm talking about. The marginal
cost of making something threadsafe may be low, but those costs pile up. The
cost of locking in hot code often adds up enough on its own. Throwing away an
important optimization on top of that adds up even faster. Getting into the
habit, and then continuing the pattern elsewhere starts adding up exponentially.

Threads are great. Threadsafe code is useful. But data that's owned by a single
thread is still the best when you can manage it, and it should be the default
option whenever we can reasonably manage it. We already pay the price of being
overzealous about thread safety in other areas (for instance, the fact that all
of our strings require atomic refcounting, even though DOM strings are generally
only used by a single thread). I think the trend needs to be in the opposite
direction.

Rust and JS make this much easier than C++ does, unfortunately. But we're
getting better at it in C++, and moving more and more code to Rust.

Kris Maglione

Jul 20, 2018, 4:17:34 PM
to Ehsan Akhgari, Jeff Gilbert, dev-platform, Firefox Dev, Jonathan Kew
On Fri, Jul 20, 2018 at 11:53:22AM -0400, Ehsan Akhgari wrote:
>On Fri, Jul 20, 2018 at 5:00 AM, Jonathan Kew <jfkt...@gmail.com> wrote:
>> +1 to that. Our need for OMT access to prefs is only likely to grow, IMO,
>> and we should just fix it once, so that any code (regardless of which
>> thread(s) it may eventually run on) can trivially read prefs.
>>
>> Even if that means we can't adopt Robin Hood hashing, I think the
>> trade-off would be well worthwhile.
>>
>
>While it's true that the need for reading more prefs from workers will
>continue to grow, given the number of prefs we have I think it's safe to
>say that the set of prefs that we need to access from threads beside the
>main thread will be a minority of the entire set of prefs. So one way to
>have our cake and eat it too would be to separate out the prefs that are
>meant to be accessible through a worker thread and expose them through an
>alternate thread-safe API which may be a bit more costly to call on the
>main thread, and keep the rest of the main-thread-only prefs on the existing
>non-thread-safe APIs. This won't be as elegant as having one set of APIs
>to work with, of course.

This is what atomic var caches are for. They can't currently be used
for string preferences, but that's a problem that could be solved with an
rwlock. They're also a bit difficult to use for preferences which aren't known
at compile time, but we've generally been trying to move away from using the
preference service for such things.

For the sorts of preferences that are generally needed by Worker threads,
though, they should mostly just work as-is.

Johann Hofmann

Jul 20, 2018, 5:48:49 PM
to Kris Maglione, Ehsan Akhgari, dev-platform, jgil...@mozilla.com, Firefox Dev, jfkt...@gmail.com
Since I have seen several people point out in this thread that there's
*probably* code that excessively accesses prefs:

You can easily assert the names and number of different prefs that are read
during whatever scenario you'd like to perform by adding to
this test (or writing your own version of it):
https://searchfox.org/mozilla-central/source/browser/base/content/test/performance/browser_preferences_usage.js

It currently only runs a few basic things like opening a bunch of tabs and
windows and I'd be happy to mentor/review additions that cover whatever
code you are maintaining (even if you don't discover anything it's good to
prevent regressions creeping up).

Cheers,

Johann

On Fri, Jul 20, 2018 at 10:17 PM Kris Maglione <kmag...@mozilla.com>

Jeff Gilbert

Jul 20, 2018, 8:43:04 PM
to Eric Rahm, Kris Maglione, dev-platform, Firefox Dev, Jonathan Kew
I have filed bug 1477436: "Preferences::Get* is not threadsafe/is main
thread only"
https://bugzilla.mozilla.org/show_bug.cgi?id=1477436

On Fri, Jul 20, 2018 at 11:36 AM, Eric Rahm <er...@mozilla.com> wrote:
> We *could* special case prefs with an appropriate data structure that works
> in a thread-safe manner; as far as RWLocks go, we do have one in tree [1].
> This has gone off the rails a bit from Kris' original announcement, which
> I'll reiterate: Watch out for prefs related bustage.
>
> Jeff, would you mind filing a bug for further discussion of off-main-thread
> access as a future improvement?
>
> [1] https://searchfox.org/mozilla-central/source/xpcom/threads/RWLock.h
>
> On Thu, Jul 19, 2018 at 7:25 PM, Kris Maglione <kmag...@mozilla.com>
> wrote:
>>
>> On Thu, Jul 19, 2018 at 07:17:13PM -0700, Jeff Gilbert wrote:
>>>
>>> Using a classic read/write exclusive lock, we would only ever contend
>>> on read+write or write+write, which are /rare/.
>>
>>
>> That might be true if we gave up on the idea of switching to Robin Hood
>> hashing. But if we don't, then every lookup is a potential write, which
>> means every lookup requires a write lock.
>>
>> We also don't really have any good APIs for rwlocks at the moment. Which,
>> admittedly, is a solvable problem.
>>
>>

Jonathan Kew

Jul 21, 2018, 5:36:55 AM
to Kris Maglione, Jeff Gilbert, dev-platform, Firefox Dev
On 20/07/2018 03:25, Kris Maglione wrote:
> On Thu, Jul 19, 2018 at 07:17:13PM -0700, Jeff Gilbert wrote:
>> Using a classic read/write exclusive lock, we would only ever contend
>> on read+write or write+write, which are /rare/.
>
> That might be true if we gave up on the idea of switching to Robin Hood
> hashing. But if we don't, then every lookup is a potential write, which
> means every lookup requires a write lock.

I'm a bit puzzled by this statement, actually.... my understanding of
Robin Hood hashing (based on looking at a few articles; I don't claim
anything like in-depth study) is that it may move existing entries
during *insertion*, and potentially during *deletion* (if using a
backward-shift deletion approach, rather than tombstones), but *lookup*
operations wouldn't modify anything.

So we should be able to use Robin Hood hashing with an RWLock that
allows reading from multiple threads without contention; only writing
(which for prefs is pretty rare, I believe) would block other threads.

Or is there a more advanced version of the Robin Hood design, where
*lookup* also rearranges entries?
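
For reference, a minimal sketch of classic Robin Hood insertion and lookup (no deletion or resizing; purely illustrative) shows that only Insert() moves entries, while Lookup() merely probes:

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// dist is an entry's distance from its ideal bucket; dist < 0 marks an
// empty slot.
struct Entry {
  std::string key;
  int value = 0;
  int dist = -1;
};

struct RobinHoodMap {
  std::vector<Entry> slots = std::vector<Entry>(64);

  size_t Bucket(const std::string& aKey) const {
    return std::hash<std::string>{}(aKey) % slots.size();
  }

  // Insertion may displace ("rob") existing entries that sit closer to
  // their ideal bucket than the incoming entry does.
  void Insert(std::string aKey, int aValue) {
    size_t i = Bucket(aKey);
    int dist = 0;
    while (true) {
      Entry& e = slots[i];
      if (e.dist < 0) {
        e = Entry{std::move(aKey), aValue, dist};
        return;
      }
      if (e.key == aKey) {
        e.value = aValue;
        return;
      }
      if (e.dist < dist) {  // existing entry is "richer": swap and move on
        std::swap(e.key, aKey);
        std::swap(e.value, aValue);
        std::swap(e.dist, dist);
      }
      i = (i + 1) % slots.size();
      ++dist;
    }
  }

  // Lookup only probes; it never rearranges the table.
  const int* Lookup(const std::string& aKey) const {
    size_t i = Bucket(aKey);
    int dist = 0;
    while (true) {
      const Entry& e = slots[i];
      if (e.dist < 0 || e.dist < dist) {
        return nullptr;  // a present key would have been placed by now
      }
      if (e.key == aKey) {
        return &e.value;
      }
      i = (i + 1) % slots.size();
      ++dist;
    }
  }
};
```

(Kris's earlier point about wanting to update access statistics on reads is a separate reason a lookup might write; variants of Robin Hood that rearrange on read also exist, which may be what was meant.)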

JK