
l20n and react


Joe Walker

Mar 29, 2016, 5:34:13 PM
to tools...@lists.mozilla.org
Hi,
Has anyone worked on a react component for l20n, or put any thought into
how it might work?
(Context: Firefox DevTools are moving to use React, and we are thinking
about how we can do l10n better at the same time.)
Thanks,
Joe.

Staś Małolepszy

Mar 30, 2016, 4:23:35 AM
to Joe Walker, tools...@lists.mozilla.org
Great timing :) I'm just now in the process of writing a summary of my
research into React. I'll post something later today.

Staś Małolepszy

Mar 31, 2016, 7:10:26 AM
to Joe Walker, tools...@lists.mozilla.org
Here are my thoughts and rough notes on integrating L20n with React.

I spent some time learning React (and other popular frameworks), went
through a few tutorials and wrote some code. One of the most helpful
experiences was the meet.js Summit conference which I attended two weeks
ago, where I spoke to many people about this and got a first round of
feedback.

I would now like to write down these thoughts and the conclusions from
those conversations in a structured manner and open them up to a larger
discussion. I'd love to get more feedback on these ideas. Please
don't be shy about chiming in, especially if you've already written a few
React apps. And thanks in advance for your patience; this is a long
email.


How L20n works
==============

First, a brief reminder of how L20n currently works.

The developer puts the information about the available languages and
links to resources in <head>. As soon as it's ready, L20n starts the
language negotiation process using this information and subsequently
starts downloading the relevant resources in the user's most preferred
language. Translations are only available via an async API.

DOM nodes with localizable messages are marked up with data-l10n-id
attributes. These attributes are observed by L20n's mutation observer:
each time a new node with the attribute is inserted or the attribute
changes, L20n will retranslate the node in question.

Translations can contain some HTML markup and, importantly, are objects:
a single translation unit is responsible for localizing both the text
value of a DOM node and its textual attributes. Translations
are parsed into an inert DocumentFragment and superimposed (or
overlayed) on the source node in the DOM. Original child nodes in the
source DOM are kept in the translated result; their identity doesn't
change and event listeners are preserved. Here's an example:

source:

<p data-l10n-id="foo"><button onclick="…"></button></p>

translation:

foo = <button>Send</button> your message <em>now</em>.

result:

<p data-l10n-id="foo">
  <button onclick="…">Send</button> your message <em>now</em>.
</p>

Translations accept arguments with external data which is passed by the
developer. For instance, a translation "Hello, { $name }" can take
a "name" external argument and will format the whole message.
Importantly, these external arguments can also be used to configure
other parts of the translation, or to select a branch (e.g. in case of
plurals).

complete = { $num ->
    [0]      Pending…
   *[other]  { NUMBER($num, style: "percent") } complete
    [1]      Done!
}

This means that it's important to feed these external arguments into
L20n and format them there, instead of having L20n output translations
with unresolved {…} expressions and resolving them in the consumer code.
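
To make that last point concrete, here's a minimal sketch of feeding
the arguments in from the developer's side. It assumes an async,
Promise-returning formatValue(id, args) method along the lines of what
l20n.js exposes on document.l10n; the exact method name may differ.

document.l10n.formatValue('complete', { num: 0.7 }).then(translated => {
  // e.g. "70% complete", with the number formatted by L20n for the
  // user's locale; the consumer never sees an unresolved {…} expression.
  console.log(translated);
});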


How React works
===============

React structures the UI into components, and the general consensus is
that you should be smart about separating your components into stateful
and stateless ones. The former kind keeps the logic; the
latter is the presentation layer. Other than the state,
you can pass so-called props to components to configure them (props can
be textual data, child elements, callbacks, etc.). Props are
immutable.

React components have a well-defined lifecycle with self-explanatory
method names like componentWillUpdate, render, componentDidMount and
componentDidUpdate. The render method is called to produce the virtual
DOM representation of the component. The virtual DOM operates on React
components and not the actual DOM elements—the encapsulation is
enforced during the reconciliation process up until the very last
moment in which the actual rendering occurs.
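
As a quick, generic illustration (not specific to L20n), a stateless
component and a stateful container that renders it might look like this:

import React from 'react';

// Stateless: a plain function from props to virtual DOM.
const Greeting = ({ name }) => <h1>Hello, {name}!</h1>;

// Stateful: keeps the logic and passes data down as props.
class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = { name: 'World' };
  }

  render() {
    return <Greeting name={this.state.name} />;
  }
}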


How to integrate L20n into React
================================

There are multiple ways we could integrate L20n into React. Some seem
to be more idiomatic for React, while others enforce a stricter
separation of concerns and let L20n do its thing independently of
React.

In the following examples I'll be using variations of the following
React snippet:

http://stasm.github.io/l20n-react-experiments/base/

The source code can be found at:

https://github.com/stasm/l20n-react-experiments/


1. L20n Components
------------------

Since React uses components, an obvious first try is to provide a set
of special-purpose components whose job is to display translated
messages.

render() {
  return (
    <h1>
      <Translation id="hello" name={this.props.name} />
    </h1>
  );
}

This allows us to seamlessly take advantage of the "component"
abstraction and pass developer-provided arguments as props/attributes
and even put children inside of the <Translation/>!

There is, however, a problem with this approach. Components need to
produce an element tree. We'd need to enclose the translation in an
outer HTML element, e.g. a <span>, which would result in needless
nesting of redundant elements. One possible solution would be to do
something like this instead:

render() {
  return (
    <TranslatedElement elem="h1" id="hello" name=… />
  );
}

(This is in fact what Yahoo's react-intl does. It also sometimes wraps
translation components in <span>.)

Or even:

render() {
  return (
    <TranslatedH1 id="hello" name=… />
  );
}

An implementation of this approach (without args and overlays) lives at:

http://stasm.github.io/l20n-react-experiments/components/

This approach extends quite well to attributes, because we'd encapsulate
the logic of translating the <h1> together with its attributes inside
the <TranslatedH1> component. For the same reason it would be
possible to pass children into the component. This way one could
define the source (default) text, which could then be extracted into
a localization resource by tools. It would also be possible to define
interactive elements as children and run the (virtual) DOM overlay
logic:

render() {
  return (
    <TranslatedP id="hello" name=…>
      <button onClick=…></button>
    </TranslatedP>
  );
}

Each instance of the <Translated*> components could store the contents
of the translations, or the storage could be centralized via React's
context (see 2.C. below).
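
For illustration only, a naive <Translation> component in this spirit
could be sketched as follows. It assumes a hypothetical async
getTranslation(id, args) helper wrapping L20n's API, ignores overlays,
and exhibits the extra-<span> problem described above.

import React from 'react';
// Hypothetical helper wrapping L20n's async formatting API.
import { getTranslation } from './l10n';

class Translation extends React.Component {
  constructor(props) {
    super(props);
    this.state = { value: '' };
  }

  componentDidMount() {
    // Fetch the translation asynchronously, then trigger a re-render.
    getTranslation(this.props.id, this.props)
      .then(value => this.setState({ value }));
  }

  render() {
    return <span>{this.state.value}</span>;
  }
}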


2. Translation passed as props and/or children
----------------------------------------------

Another React idiom is to use variables stored in the state and
expand them into an element's attributes or contents. It can be the
current component's state, or it can be stored in a higher-order
component which transparently wraps around another one, or it could be
a global store passed in as a context to all child components.

render() {
  return <p title={…}> {…} </p>;
}

What goes into the {…} depends on the exact solution. It could be
this.state.translations.title, this.props.getTranslation('title') or
this.context.getTranslation('title'). My preference goes to functions,
because then we can pass this.state as the l10n args.

The challenge with this approach is that we need a way to get the list
of translations that we'll use to populate the state. Keep in mind
that the L20n API is async. In order to make a sync request inside of
the {…} we need to first prepare the data store accordingly. There are
a few possible solutions here.

A. Save translation ids as they're being requested.

In this scenario we collect the translation ids requested
during the first render of the component. When the render is complete
and the component is mounted, we retrieve the translations
asynchronously and save the result to the component's state, thus
triggering a re-render. This feels rather hacky and I don't think it's
well-written React code.

An implementation of this approach (without args and overlays) lives at:

http://stasm.github.io/l20n-react-experiments/mutable-state/
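
A rough sketch of the idea, with a hypothetical fetchTranslations(ids)
helper resolving to a map of id → translated string:

import React from 'react';
// Hypothetical async helper resolving to a map of id -> translated string.
import { fetchTranslations } from './l10n';

class LocalizedHello extends React.Component {
  constructor(props) {
    super(props);
    this.state = { translations: {} };
    this.requested = new Set();
  }

  gettext(id) {
    // During the first render we only record the id; an empty string is
    // shown until the translations arrive and the component re-renders.
    this.requested.add(id);
    return this.state.translations[id] || '';
  }

  componentDidMount() {
    fetchTranslations(Array.from(this.requested))
      .then(translations => this.setState({ translations }));
  }

  render() {
    return <h1>{this.gettext('hello')}</h1>;
  }
}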


B. Declare which ids will be needed to render the component.

Here we declare which translation ids will be needed. This
can be done with a property stored on the component or by using
data-l10n-id attributes in the render() method. In the case of a
configurable property, declaring args is hard because we don't have them
up front. This might be a showstopper for this approach. And if we use
data-l10n-id and data-l10n-args, we need to walk the resulting DOM
tree anyway, so perhaps another approach which also walks the tree
would be better.

Two implementations of this approach (one for a property and another
for data-l10n-id; both without args and overlays) live at:

http://stasm.github.io/l20n-react-experiments/declarative-property/
http://stasm.github.io/l20n-react-experiments/declarative-state/
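
A rough sketch of the property variant; the translationIds property and
the withTranslations higher-order component are made-up names for
illustration:

import React from 'react';
// Hypothetical HOC that reads Hello.translationIds, fetches the strings
// asynchronously and passes them down as a `translations` prop.
import { withTranslations } from './l10n';

const Hello = ({ translations }) => <h1>{translations.hello}</h1>;

// Declared up front; as noted above, the args can't easily be declared here.
Hello.translationIds = ['hello'];

export default withTranslations(Hello);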


C. Use a global store à la Redux Provider.

Lastly, we can use React's contexts to create a state store which
is globally available to all child components. Again, I'm still not
sure how to pass arguments into this store at the right time.

ReactDOM.render(
  <TranslationProvider src="/path/to/{locale}/resource">
    <App />
  </TranslationProvider>,
  document.getElementById("container")
);

An implementation of this approach (without overlays) lives at:

http://stasm.github.io/l20n-react-experiments/context/
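
A sketch of what such a provider could look like, loosely modeled on
Redux's <Provider>; the getTranslation function passed through the
context is an assumption, not an existing L20n API:

import React from 'react';

class TranslationProvider extends React.Component {
  getChildContext() {
    // Expose a lookup function to all descendants via the context.
    return { getTranslation: this.props.getTranslation };
  }

  render() {
    return React.Children.only(this.props.children);
  }
}

TranslationProvider.childContextTypes = {
  getTranslation: React.PropTypes.func
};

// A consumer reading from the context:
const Hello = (props, context) => <h1>{context.getTranslation('hello')}</h1>;
Hello.contextTypes = { getTranslation: React.PropTypes.func };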


* * *

Reading more about the context made me realize that even when we store
the state (translations) in individual components (like in A. and B.),
we still probably want a central place to store the current language
chosen by or negotiated on the user's behalf. In fact, all of the
above solutions would likely benefit from a central translation store
or "provider". I implemented an example at:

http://stasm.github.io/l20n-react-experiments/components-context/


3. Virtual DOM manipulation
---------------------------

In this scenario we could make L20n hook into the render() method of
components which need to be localized. The method returns
a tree of React elements, each of which has props for attributes and
props.children for, well, child elements. We could traverse this
virtual DOM similarly to how we traverse the regular DOM in L20n's HTML
bindings and apply the translation logic where needed. We could even
re-implement the whole DOM overlay mechanism to operate on React's
virtual DOM.

(No example implementation here yet!)
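
To give a rough idea of the shape such a traversal could take (purely an
illustrative sketch with a hypothetical synchronous getTranslation; it
doesn't implement the overlay logic):

import React from 'react';

// Recursively walk a virtual DOM tree and replace the children of any
// element carrying data-l10n-id with a translated text value.
function translateTree(element, getTranslation) {
  if (!React.isValidElement(element)) {
    return element;
  }

  const id = element.props['data-l10n-id'];
  if (id) {
    return React.cloneElement(element, {}, getTranslation(id));
  }

  const children = React.Children.map(element.props.children,
    child => translateTree(child, getTranslation));
  return React.cloneElement(element, {}, children);
}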


4. Real DOM manipulation via lifecycle methods
----------------------------------------------

What if we wanted to re-use more of the existing L20n code? We could
apply translations to the real DOM once React has rendered it. Two
lifecycle methods are perfect for this: componentDidMount() and
componentDidUpdate(). I've been having some trouble getting the latter
to work with higher-order components, but I think the general idea is
sound.

In this scenario we keep most of the current L20n intact and only use
React as a mechanism to monitor changes to the DOM. A mutation
observer, really! Indeed, in this approach we could remove L20n's
internal mutation observer (if it's not needed by other pieces of the
UI, like web components with Shadow DOM) and only rely on React to
notify us about re-renders.

An implementation lives at:

http://stasm.github.io/l20n-react-experiments/componentDidUpdate/
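
For illustration, a higher-order component along these lines might look
like this; it assumes the document.l10n.translateFragment API from
l20n.js's HTML bindings:

import React from 'react';
import ReactDOM from 'react-dom';

// Wrap a component and re-apply L20n's DOM translation after every render.
function withDOMLocalization(Wrapped) {
  return class extends React.Component {
    componentDidMount() {
      document.l10n.translateFragment(ReactDOM.findDOMNode(this));
    }

    componentDidUpdate() {
      document.l10n.translateFragment(ReactDOM.findDOMNode(this));
    }

    render() {
      return <Wrapped {...this.props} />;
    }
  };
}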


5. Real DOM manipulation via mutation observer
----------------------------------------------

Lastly, as it turns out, not changing anything in L20n is also a viable
option for us to consider! When a component with data-l10n-id is
rendered or re-rendered, l20n's mutation observer picks up the change
and translates the DOM node. This means that L20n is completely
separate from React.

An implementation lives at:

http://stasm.github.io/l20n-react-experiments/mutation/
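
In this mode a component simply emits the data-l10n-id / data-l10n-args
markup in render() and L20n does the rest; a minimal sketch (the
JSON-serialized data-l10n-args attribute follows l20n.js's convention):

import React from 'react';

// The mutation observer picks this node up after React renders it and
// fills in the translated text.
const Hello = ({ name }) => (
  <h1 data-l10n-id="hello"
      data-l10n-args={JSON.stringify({ name })}>
  </h1>
);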



Conclusions
===========

The above examples are still WIP. I'd like to better understand how
changes to the state should be propagated to translations which need to
be formatted anew and inserted into the DOM. I'll continue researching
this topic.

Judging from my initial explorations, I like the component abstraction
as presented in #1, although understandably this approach would require
a lot of changes in L20n. Namely, we'd need a whole new set of React DOM
bindings. I also like the <TranslationProvider /> idea, as it
translates well into L20n's concept of lightweight per-view or
per-component contexts. Lastly, option #5 is also interesting: we
don't really have to do anything to support it, and it might be a great
first step towards testing L20n in real React applications and learning
in more detail what we can do to make this experience easier.

Let's discuss.

—Staś

Axel Hecht

Mar 31, 2016, 8:21:25 AM
to Staś Małolepszy, Joe Walker
Hi stas,

thanks for taking the deep dive into the various pools here.

I've got a general ecosystem question to start with: is supporting
3rd-level components a requirement for us? That's probably one thing I
failed to understand in my React.Friday: does everybody that uses React
just implement standalone monoliths, or is there actually a culture
of component-sharing across projects?

Other thoughts:

For Pontoon support, either bare-metal HTML or global state seems most
promising? Note, "Pontoon" might not technically mean "Pontoon" here.
I'm entertaining the thought that for Firefox UI pontooning, we need
something that's closer to the dedicated privileged APIs we use for UI
tours today than to the owning-dom-window references that Pontoon uses
for in-place l10n right now.

Maybe even more so bare-metal HTML, as that makes it easier to relate
DOM click events to actual string IDs?

Component-local state, OTOH, seems to be the hardest to maintain with live
l10n updates.

Axel, for now.

Staś Małolepszy

Mar 31, 2016, 9:23:26 AM
to Axel Hecht, Joe Walker, mozilla-t...@lists.mozilla.org
On Thu, Mar 31, 2016 at 2:20 PM, Axel Hecht <l1...@mozilla.com> wrote:
> Hi stas,
>
> thanks for taking the deep dive into the various pools here.
>
> I've got a general ecosystem question to start with, is supporting 3rd-level
> components a requirement for us? That's probably one thing I failed to
> understand in my React.Friday, if everybody that uses react just implements
> standalone monoliths, or if there's actually a culture of component-sharing
> across projects?

What do you mean by 3rd-level components?

As far as I understand it, there's a strong pattern of having two
types of components. Stateful components don't render anything per
se, only their children. They provide the logic. Stateless
components (which as of React 0.14 can be implemented simply as
functions returning the virtual DOM) provide the content and the
presentation. I think we would mostly like to localize the stateless
ones.

A lot of addons implement their logic as stateful components. So
instead of something like:

Foo.method(componentInstance, arg1, arg2);

you end up doing:

<FooMethod param1={arg1} param2={arg2}>
  <Component />
</FooMethod>

It's preferred to split your app into multiple components, and have
separate containers for logic. Supposedly Facebook uses 15,000
components across all of their web properties. I think stateful
components are meant to be modular and easily re-used when the same
logic is needed someplace else. Stateless ones can be re-used when
you just need the same-looking UI.

> Other thoughts:
>
> For pontoon support, either bare-metal-html or global state seem most
> promising? Note, "pontoon" might not mean "pontoon" here technically. I'm
> entertaining the thought that for Firefox UI pontooning, we need something
> that's closer to dedicated privileged APIs like we use for UI tours today,
> than owning-dom-window references that pontoon uses for in-place l10n right
> now.
>
> Maybe even more so bare-metal-html, as that's making it easier to relate DOM
> click events to actual string IDs?

Yeah, I think bare-metal or global state indeed seem to be the best
candidates. Right now I'm leaning towards bare-metal, as that keeps
L20n framework-agnostic and better mimics what the platform could someday
do for us.

My next step is to drop l20n.js into Activity Streams and see how it
performs and if bare-metal is fast enough.

> Component-local state OTH seems to be the hardest to maintain with live l10n
> updates.

Do you mean scenarios when the user changes the language? If so, then
yes, it's hard.

data-l10n-id is a surprisingly robust and foolproof solution to many problems :)

Thanks for your thoughts!
-stas

>
> Axel, for now.
>
>
> On 31/03/16 13:10, Staś Małolepszy wrote:
>>

Axel Hecht

Mar 31, 2016, 10:15:30 AM
to Staś Małolepszy, Joe Walker
On 31/03/16 15:22, Staś Małolepszy wrote:
> On Thu, Mar 31, 2016 at 2:20 PM, Axel Hecht <l1...@mozilla.com> wrote:
>> Hi stas,
>>
>> thanks for taking the deep dive into the various pools here.
>>
>> I've got a general ecosystem question to start with, is supporting 3rd-level
>> components a requirement for us? That's probably one thing I failed to
>> understand in my React.Friday, if everybody that uses react just implements
>> standalone monoliths, or if there's actually a culture of component-sharing
>> across projects?
>
> What do you mean by 3rd-level components?

Err, yeah, what did I say there?

What I meant was 3rd party components. I.e., code that we can't just
modify to use l20n.

....
Also live updates to a localization, kinto or otherwise.

> data-l10n-id a surprising robust and foolproof solution to many problems :)

:-)

Axel

Joe Walker

Apr 1, 2016, 1:17:34 PM
to tools...@lists.mozilla.org
(Repost - message bounced because it was too long)

Thanks for this Stas, it's a really interesting exploration of the options.

In devtools, we're using Redux, and I'm curious about using the context
because that makes re-rendering on new l10n data difficult. Do you know how
this would work?

The low-fi thing for us to do with Redux is to include the l10n object in
the global state and have a FETCHED_L10N_DATA action which makes the l10n
object render different text. Then the l10n object would either be passed
down the tree, or injected with the connect([mapStateToProps]...) function
[1].

So a lightweight component could simply look like this:

const Component = ({ l10n, ...otherProps }) => (
  <span {...otherProps}>{l10n('hello-world')}</span>
);

When the strings file is received, we'd need to dispatch the
FETCHED_L10N_DATA, but that's fairly simple.
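
A sketch of what that could look like; the FETCHED_L10N_DATA action comes
from the paragraph above, while the reducer and payload shape are
assumptions:

// Keep an l10n lookup function in the Redux store; fall back to the id.
function l10nReducer(state = (id => id), action) {
  switch (action.type) {
    case 'FETCHED_L10N_DATA':
      // action.messages: a map of id -> translated string
      return id => action.messages[id] || id;
    default:
      return state;
  }
}

// Injected into components with connect():
const mapStateToProps = state => ({ l10n: state.l10n });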

I appreciate that you weren't really looking at Redux, but does this
approach make sense? Did I miss anything?

Thanks
Joe.

On Thu, Mar 31, 2016 at 12:10 PM Staś Małolepszy <st...@mozilla.com> wrote:

> Here are my thoughts and rough notes on integrating L20n with React.
> <snip>
>

jwa...@mozilla.com

Apr 5, 2016, 7:10:46 AM
to mozilla-t...@lists.mozilla.org

Something else: we can't ship a product that needs to download an l10n pack on first run. So we're going to have to bundle the strings with the product, which means the lookup can always be synchronous.

Maybe the impedance mismatch for devtools is too high?

Joe.

Axel Hecht

Apr 5, 2016, 7:53:21 AM
to jwa...@mozilla.com
On 05/04/16 13:10, jwa...@mozilla.com wrote:
> Something else, we can't ship a product that needs to download an l10n pack on first run. So we're going to have to bundle the strings with the product, which means the lookup can always be synchronous.
Actually, that's one of the lessons from Gecko that we brought into the
design. String look-ups must be fallible (see also the Hello bustage we
just had on beta). Now, if a string fails, we need to look up a
fallback, which means that during the call we'd have to do blocking
main-thread sync IO to get and parse the English strings.
Or we'd need to take the perf hit of always loading and parsing at least
twice as many strings in localized builds as we do in en-US.

That's why all l10n getters are async in the next world of l10n.

Axel

jwa...@mozilla.com

Apr 6, 2016, 8:17:55 AM
to mozilla-t...@lists.mozilla.org

On Tuesday, April 5, 2016 at 12:53:21 PM UTC+1, Axel Hecht wrote:
> On 05/04/16 13:10, Joe Walker wrote:
> > Something else, we can't ship a product that needs to download an l10n
> > pack on first run. So we're going to have to bundle the strings with
> > the product, which means the lookup can always be synchronous.
>
> Actually, that's one of the lessons from gecko that we brought into the
> design. String look-ups must be fallible (see also the hello bustage we
> just had on beta). Now, if a string fails, we need to look up a
> fallback, which means that during the call, we'd have to do blocking
> mainthread sync IO to get and parse the English strings.
> Or we'd need to take the perf hit to always load and parse at least
> twice as many strings in localized builds as we do in en-US.
>
> That's why all l10n getters are async in the next world of l10n.

I'm not quite following you here.

Is there a reason we couldn't just look up the fallback strings when we look up the localized strings? Then the IO wouldn't need to happen on the main thread.

Thanks,

Joe.

Axel Hecht

Apr 7, 2016, 7:37:02 AM
to jwa...@mozilla.com
Then you'd have to do the loading and parsing of twice the number of
files at startup, half of which you won't need in most cases.

Axel
> Thanks,
>
> Joe.
>

Joe Walker

Apr 7, 2016, 7:45:36 AM
to Axel Hecht, mozilla-t...@lists.mozilla.org
On Thu, Apr 7, 2016 at 12:36 PM Axel Hecht <l1...@mozilla.com> wrote:

> On 06/04/16 14:17, jwa...@mozilla.com wrote:
> Then you'd have to do the loading and parsing of twice the amount of
> files on startup, half of which you don't want to need in most of the
> cases.
>

Right, obviously. But that seems like a small cost compared with the
massive cost (both to development and to live usage) of making every string
lookup asynchronous.

Do we have numbers on how long it takes to load a string file?

Joe.

Axel Hecht

Apr 8, 2016, 8:15:05 AM
to Joe Walker, mozilla-t...@lists.mozilla.org
We actually have rich experience from converting Gaia apps and their
developers to these APIs, and it turned out that once you get into it,
things are much nicer. A lot of the Gaia devs were much happier to use
the L20n APIs compared to the old sync l10n.js ones.

The key here is to use the API in the ways in which it's strong:

Just add HTML, and let the library localize it. This is what the
experiment that stas did around "just use l20n" did. Just pass the data
to the HTML, and the l20n library will figure out what to do, and when.

That's a lot easier than manually looking up each string, and then
marshalling it through a bunch of DOM calls.
> Do we have numbers on how long it takes to load a string file?
We did a bunch of experiments within gaia, but I don't have the numbers
myself.

Axel


Joe Walker

Apr 8, 2016, 9:18:52 AM
to Axel Hecht, mozilla-t...@lists.mozilla.org
Axel Hecht wrote:

> ...
> > Right, obviously. But that seems like a small cost compared with the
> > massive cost (both to development and to live usage) of making every
> string
> > lookup asynchronous.
> We actually have a rich experience from converting gaia apps and their
> developers to these APIs, and it turned out that once you get into it,
> things are much nicer. A lot of the gaia devs were much happier to use
> the l20n apis compared to the old sync l10n.js ones.
>
> The key here is to use the API in the ways it's strong:
>
> Just add html, and let the library localize it. This is what the
> experiment that stas did around "just use l20n" did. Just pass the data
> to the html, and the l20n library will figure out what to do, and when.
>
> That's a lot easier than manually looking up each string, and then
> marshalling it through a bunch of DOM calls.
>

So the render() call in React is synchronous. There is no option to resolve
a promise.
The only thing you can do is some form of re-render at a later time.

The examples seem to mostly cause a re-render by calling setState one way
or another when the string is available.
The trouble is this doesn't address the lifecycle of a React application.
When something else changes and you need to re-render for a different
reason, you need to start all over again with an async lookup ...

Presumably, string formatting is synchronous with l20n? I think that's the
place to start looking. Could you give me a pointer to a format function?

Thanks,
Joe.

Axel Hecht

Apr 8, 2016, 9:39:43 AM
to Joe Walker
I suggest looking at
https://github.com/mozilla/activity-streams/pull/429/files.

I'm afraid that somewhere in this thread, we lost you on one of the
tangents we took. Seems we lost you on one that we don't like either.

What stas did on Activity Stream, and on
https://github.com/stasm/l20n-react-experiments/tree/gh-pages/mutation
with just using l20n and data-l10n-id on the React/virtual DOM side, is
effective and pretty straightforward for devs and tools.

Axel


Joe Walker

Apr 8, 2016, 10:17:50 AM
to Axel Hecht, mozilla-t...@lists.mozilla.org
OK. Thanks.

My understanding is that this should work just fine for a single call to
render(), but you're then mutating the DOM outside of React, so the next
time render() is called React will overwrite your localized strings with
the unlocalized ones.

Joe.


On Fri, Apr 8, 2016 at 2:39 PM Axel Hecht <l1...@mozilla.com> wrote:

> On 08/04/16 15:18, Joe Walker wrote:

Axel Hecht

Apr 8, 2016, 11:32:06 AM
to Joe Walker
On 08/04/16 16:17, Joe Walker wrote:
> OK. Thanks.
>
> My understanding it that this should work just fine for a single call to
> render(), but you're then mutating the DOM outside of React, so the next
> time render() is called React will overwrite your localized strings with
> the unlocalized ones.
Interesting question, I'll defer that to stas. He'll be back next week.

Axel

Matej

Apr 8, 2016, 4:10:08 PM
to Axel Hecht, Joe Walker, mozilla-t...@lists.mozilla.org
I am late to the discussion, but I use this react-mixin for my l20n
components

npm install --save react-mixin@2

import ReactDOM from 'react-dom';

export default {
  componentDidMount() {
    document.l10n.translateFragment(ReactDOM.findDOMNode(this));
  },

  componentDidUpdate() {
    document.l10n.translateFragment(ReactDOM.findDOMNode(this));
  }
};

Keep it simple, stupid.

--
Matej

On Fri, Apr 8, 2016 at 5:31 PM, Axel Hecht <l1...@mozilla.com> wrote:

> On 08/04/16 16:17, Joe Walker wrote:
>
>> OK. Thanks.
>>
>> My understanding it that this should work just fine for a single call to
>> render(), but you're then mutating the DOM outside of React, so the next
>> time render() is called React will overwrite your localized strings with
>> the unlocalized ones.
>>
> Interesting question, I'll defer that to stas. He'll be back next week.
>
> Axel
>
>>

Joe Walker

Apr 9, 2016, 5:24:42 AM
to Matej, Axel Hecht, mozilla-t...@lists.mozilla.org
This is nice, and I agree it's very simple.
The downsides are:
- It's a potential performance problem, because every render becomes 2
reflows.
- It means we can't use lightweight components or ES6 classes, because
mixins aren't supported there.

Joe.



On Fri, Apr 8, 2016 at 9:10 PM Matej <gen...@console.army> wrote:

> I am late to the discussion, but I use this react-mixin for my l20n
> components
>
> npm install --save react-mixin@2
>
> import ReactDOM from 'react-dom';
>
> export default {
>
> componentDidMount() {
> document.l10n.translateFragment(ReactDOM.findDOMNode(this));
> }
>
> componentDidUpdate() {
> document.l10n.translateFragment(ReactDOM.findDOMNode(this));
> }
> }
>
> Keep it simple, stupid.
>
> --
> Matej
>
> On Fri, Apr 8, 2016 at 5:31 PM, Axel Hecht <l1...@mozilla.com> wrote:
>
>> On 08/04/16 16:17, Joe Walker wrote:
>>
>>> OK. Thanks.
>>>
>>> My understanding it that this should work just fine for a single call to
>>> render(), but you're then mutating the DOM outside of React, so the next
>>> time render() is called React will overwrite your localized strings with
>>> the unlocalized ones.
>>>
>> Interesting question, I'll defer that to stas. He'll be back next week.
>>
>> Axel
>>
>>>

Axel Hecht

Apr 11, 2016, 3:48:04 PM
to Joe Walker
On 08/04/16 16:17, Joe Walker wrote:
> OK. Thanks.
>
> My understanding it that this should work just fine for a single call to
> render(), but you're then mutating the DOM outside of React, so the next
> time render() is called React will overwrite your localized strings with
> the unlocalized ones.
stas is not really back, but closer, so we talked about this today a bit
(stas, gandalf and I).

We've put up http://jsbin.com/losumuz/13/edit?js,output as a test
environment, and tested this against react 0.13 and 0.15.

If you inspect the "Hello" element, you can edit that in-place in
devtools. If you then change the input box, that triggers setState, and
render(), but it doesn't undo the edit you did to the text node.

gandalf also mentioned that mutation observers run in the same "micro
task", and thus layout is only going to be bothered once by those DOM
modifications.

Our analysis leads us to:

React and l20n.js-with-mutation-observers are two orthogonal libraries.

There are some useful patterns in that scenario for dealing with data that's
needed by localized strings;
https://github.com/mozilla/activity-streams/pull/429/files#diff-c6fa2132721dc285a614d8adb24477a9R84
seems to be a good helper that we may want to formalize.

Keeping l10n close to the html metal is the best way for us to empower
l10n-related asks from the organization:

- being able to just click on a UI element and find out how to change
its localization (pontoon-like workflows for Firefox)
- being able to push updates to UI l10n live - kinto-like updates to
localizations.

Thus, our recommendation is to use l20n.js in this mode.

I'd not do so this week, as there are a few loose ends as to which file
format to use. But from an architecture point of view, that's what we
think is going to be successful and empowering.

Axel

Joe Walker

Apr 12, 2016, 9:31:24 AM
to Axel Hecht, mozilla-t...@lists.mozilla.org
Axel Hecht wrote:

> ...
> Our analysis leads us to:
> React and l20n.js-with-mutation-observers are two orthogonal libraries.
>

I think this gets to the heart of the matter. React and l20n both change
the DOM, and both do it without taking account of each other.

The best we can hope for is a double update where React updates the DOM
without translated strings, and then l20n comes along and fixes things. [1]
The worst case is that this update doesn't happen properly, and we're left
with things out of sync and a broken website either through a react error
or a translation error. [2]

My hunch is that working with the synchronous react render cycle (rather
than against it) is going to be much less painful long term. We can get
that either by loading 2 sets of strings at startup, or by using a redux
action to indicate a string update.

Joe.

[1]: Reflecting on the best case scenario: Many people put lots of effort
into using Immutable and/or shouldComponentUpdate to avoid unnecessary
renders, so I'm not keen on an architectural decision to double (or more?)
the cost of any render that uses translated strings.
[2]: Thanks for the JSBin demo. I should have been clearer. React is
making changes to the DOM without thinking of l20n at all. The changes
could be sweeping (e.g. for a tab dialog), or rapid (e.g. an animation) or
both together. My concern is that interleaving of events from both react
and l20n is going to leave things untranslated or otherwise out of sync.
Maybe we can prove there is no possibility that this will go wrong, but [1]
is still a big issue.

Axel Hecht

Apr 13, 2016, 9:38:14 AM
to Joe Walker
On 12/04/16 15:31, Joe Walker wrote:
> Axel Hecht wrote:
>
>> ...
>> Our analysis leads us to:
>> React and l20n.js-with-mutation-observers are two orthogonal libraries.
>>
> I think this gets to the heart of the matter. React and l20n both change
> the DOM, and both do it without taking account of each other.
I used the word "orthogonal" in a stronger sense: what one
does doesn't destroy what the other does, and vice versa.
>
> The best we can hope for is a double update where React updates the DOM
> without translated strings, and then l20n comes along and fixes things. [1]
Sorry, but our experiment shows that that doesn't happen. The JSX
generates the DOM elements, and l20n.js adds text nodes and attributes.
Only in cases where we expect retranslation to be required does React
unset our DOM mods.
Gandalf refers to the mutation observer spec: they're defined not to
trigger layout in between, so there's also no double layout.
> The worst case is that this update doesn't happen properly, and we're left
> with things out of sync and a broken website either through a react error
> or a translation error. [2]
Same as above.
>
> My hunch is that working with the synchronous react render cycle (rather
> than against it) is going to be much less painful long term. We can get
> that either by loading 2 sets of strings at startup, or by using a redux
> action to indicate a string update.
A hunch doesn't advance the conversation here. We're confident that we
addressed your concerns, and if there are open items, I'd expect to see
a demo that shows them.

Axel

rfo...@gmail.com

Apr 13, 2016, 4:40:44 PM
to mozilla-t...@lists.mozilla.org
Posting my feedback here as I was asked by Gandalf to do so.

I work on [browser.html](https://github.com/browserhtml/browserhtml), which is built with an [immediate mode rendering](https://en.wikipedia.org/wiki/Immediate_mode_(computer_graphics)) architecture similar to React. In fact our renderers (which we refer to as drivers) are swappable, and React is one of the options. One of the reasons we settled on a swappable-driver architecture was that React was not fast enough in our experience, and swappable drivers allowed us to try alternatives (which, as it turned out, performed better). Trying different drivers is still on the plate, and what I mean by that is that a driver may actually render into canvas or WebGL or something even wilder.

The points I'm trying to make are:

1. Hooking localization in at the DOM layer is not a great option for us, because it operates on the same thing (the DOM) as the driver, which can cause problems, as the driver assumes full control of the target.
2. We are trying hard to achieve native performance, which is already challenging; adding extra DOM mutations and observers is going to make things worse.
3. We invested a lot of effort into making drivers swappable such that no changes are required other than plugging in a different driver. Doing localization at the DOM layer conflicts with this if we want to use a non-DOM driver.
4. Our architecture allows us to do deterministic session record/replay (similar to [rr](http://rr-project.org)). Doing any IO outside of the app code means deterministic replay isn't possible.

Given the above context, I think that providing a JS API to fetch a localization bundle and then do localizations as simply as `bundle.localize(....):string` would be the ideal solution. That would work regardless of the rendering method and without conflicting with any of the other tooling mentioned above.
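
For concreteness, a sketch of the kind of API being asked for here; every
name in it (fetchL10nBundle, bundle.localize) is hypothetical:

// Fetch a bundle once (the async IO stays in app code), then localize
// synchronously wherever the driver needs a string.
fetchL10nBundle('/locales/{locale}/app.l20n', navigator.languages)
  .then(bundle => {
    const title = bundle.localize('window-title', { user: 'Ada' });
    // `title` is a plain string; hand it to whichever driver is in use.
    console.log(title);
  });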

Joe Walker

Jun 12, 2017, 10:26:32 AM
to marc....@gmail.com, mozilla-t...@lists.mozilla.org
Hi Marc,
At a very quick glance this looks like a nice API. I'll pass the message on
to the people that would be using it.
Thanks,
Joe.


On Fri, Jun 9, 2017 at 11:28 AM <marc....@gmail.com> wrote:

> On Tuesday, March 29, 2016 at 11:34:13 PM UTC+2, Joe Walker wrote:
> Please take a look at the react-l20n-u npm package I created:
> https://www.npmjs.com/package/react-l20n-u
>
> I'm very interested to hear your feedback about my choices of syntax and
> implementation and what would be your recommendations.
>
> Thanks,
> Marc Selman

Staś Małolepszy

Jun 12, 2017, 10:54:06 AM
to Joe Walker, marc....@gmail.com, mozilla-t...@lists.mozilla.org
Hi Joe, Marc --

Marc's message was stuck in the moderation queue of the mailing list
and I only found out a few days ago. Sorry about that. The message
itself is fairly old and I'd like to point out a number of more recent
developments:

In January 2017 we started Project Fluent, which builds on the core of
L20n with the goal of creating a set of small, unopinionated
localization libraries which can be easily ported and plugged into
larger codebases and projects. You can find out about Project Fluent
at http://projectfluent.io and in the project announcement:
https://groups.google.com/forum/#!topic/mozilla.tools.l10n/NPmsJD4IGjQ

Consequently, I then created fluent-react, which exposes Fluent's API
and file syntax to React apps:

https://www.npmjs.com/package/fluent-react

The README has the scoop and the summary of the API and there are also
a number of example apps showing different features and use-cases:
https://github.com/projectfluent/fluent.js/tree/master/fluent-react/examples

I already got some feedback from the Test Pilot team and I'm hoping
that we'll be able to use fluent-react for their websites in the near
future. I'd love to get more feedback from people working on
Devtools! :)

Thanks,
Staś

On Mon, Jun 12, 2017 at 4:26 PM, Joe Walker <jwa...@mozilla.com> wrote:
> Hi Marc,
> At a very quick glance this looks like a nice API. I'll pass the message on
> to the people that would be using it.
> Thanks,
> Joe.
>
>
> On Fri, Jun 9, 2017 at 11:28 AM <marc....@gmail.com> wrote:
>
>> On Tuesday, March 29, 2016 at 11:34:13 PM UTC+2, Joe Walker wrote: