
Path to start using l20n file format in Gaia and l20n format for v3


Zibi Braniecki

Apr 17, 2015, 5:13:45 PM
to mozilla-t...@lists.mozilla.org
Hi all,

Last week we met with Axel and Stas and started plotting the plan to put l20n file format in Gaia.

We know we're close, but it's also a major change that requires massive updates to the ecosystem.

The l20n file format offers so much more than what we have right now with .properties and DTD that it will require updates to the underlying data models of most consumers of the format, like Pootle, Pontoon, compare-locales, etc.

============

Right now, we're ready to land the l20n file format parser in Gaia [0]. This will enable a subset of the l20n format that matches the current feature set of .properties.

This means that, on the one hand, we will be able to start using l20n; on the other, it will be a very, very limited l20n.

The major change that this will introduce is that we will start recognizing multi-variant strings. In other words, instead of:

foo = {[ plural(n) ]}
foo[zero] = Foo
foo[other] = Foo 2

being three separate entities (which is how the majority of our toolchain treats it today), the string will be stored as a single entity with two variants and an index to select between them.
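
As an illustration, a single multi-variant entity could be modeled roughly like this (a sketch in Python; the field names are hypothetical, not Gaia's actual data model):

```python
# Hypothetical single-entity model: one id, an index expression that
# selects a variant, and the variants themselves.
entity = {
    "id": "foo",
    "index": {"fn": "plural", "args": ["n"]},
    "variants": {
        "zero": "Foo",
        "other": "Foo 2",
    },
}

def resolve(entity, variant_key):
    """Pick the requested variant, falling back to 'other'."""
    variants = entity["variants"]
    return variants.get(variant_key, variants.get("other"))

print(resolve(entity, "zero"))   # prints: Foo
```

The point is that tools see one entity with an index, not three unrelated keys.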

=============

In order for that to work we need to update compare-locales to recognize the concept of multi-variant strings. Because of that, we created a new compare-locales implementation [1] that supports this new data model paradigm.

We're close to landing this on l10n.mozilla.org for Gaia.

=============

We also need Pootle support. I'm working on a basic conversion between po and l20n [2], which should work since we are starting with only the features that .properties already has.

============

Once we have Pootle and l10n.mozilla.org support, we will land [0] and start experimenting with using it in some Gaia apps. Unfortunately, this will not expose people to the benefits of the new format, so I expect a "Why change if there's no benefit?" reaction :(

==== What's next? ====

In order to really benefit from the new format, we will need a few extensions to the subset. For v3, I'd like to get:

1) Support triple-quoted strings.

This will make DOM Overlays easier.

2) Support Global vs. Variable vs. Identifier (instead of the current IdOrVar)

This will remove the overlap between variable references and entity references:

<name "Firefox">
<foo "Hi, my name is {{ name }}">

<div data-l10n-id='foo' data-l10n-args='{"name": "John"}' />

In the current model we don't specify what should happen. In proper l20n it should be:

<name "Firefox">
<foo "Hi, my name is {{ $name }}, and the app name is {{ name }}">

<div data-l10n-id='foo' data-l10n-args='{"name": "John"}' />


3) Support CallExpression in placeables.

This will enable

<shortTimeFormat "%I:%M %p">

<foo "{{ @icu.formatDateTime($d, shortTimeFormat) }}">

mozL10n.setAttributes(node, 'foo', {d: new Date().getTime()});

instead of the current:

var f = new navigator.mozL10n.DateTimeFormat();
return f.localeFormat(d, _('shortDateFormat'));

removing the need for DateTimeFormat, mozL10n.get and moving date formatting from developer to localizer.

Note: I'm OK with not supporting PropertyExpressions just yet, so "icu.formatDateTime" is a single name here (same as "cldr.plural" or "gaia.formFactor") for now.

4) Support for @gaia.formFactor

In order to remove per-form-factor resources and make langpacks work across device types, I believe we should land it early:

<foo[@gaia.formFactor] {
  phone: "Phone name",
  tv: "TV name",
}>

5) Default hash value

This will enable forward compatibility:

<foo[@gaia.formFactor] {
  phone: "Phone name",
  tv: "TV name",
  *other: "name"
}>

=======================

Those five changes will require updates to the parser, serializer, and resolver, plus a solution for the localization tools (Pootle/Pontoon).
compare-locales.js should not require any changes (except for an updated l20n parser).

Let me know what you think!
zb.

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1027684
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1037052
[2] https://github.com/zbraniecki/translate/tree/l20n

Ricardo Palomares Martínez

Apr 18, 2015, 5:43:46 AM
to tools...@lists.mozilla.org
On 17/04/15 at 23:13, Zibi Braniecki wrote:
> Hi all,
>
> Last week we met with Axel and Stas and started plotting the plan to put l20n file format in Gaia.


I guess that this just affects Gaia, not the toolkit or B2G strings.

Also, is there any rough schedule to migrate the remaining products
(Fx, Tb, Sm, shared components) to L20n?

TIA

--
Proyecto NAVE
Mozilla Localization Project, es-ES Team
http://www.proyectonave.es/
Diaspora: rick...@diasp.eu

Matjaz Horvat

Apr 19, 2015, 7:32:55 AM
to Zibi Braniecki, mozilla-t...@lists.mozilla.org
Hi,

I like that plan. I've started working on L20n support in Pontoon.

On Fri, Apr 17, 2015 at 11:13 PM, Zibi Braniecki <zbigniew....@gmail.com> wrote:

> We also need Pootle support. I'm working on basic conversion between po
> and l20n [2], which should work since we only start with the features that
> .properties already have.


I've successfully parsed the following .l20n file and imported it using
your parser:

--
<everyday "Every day">

<close "Close"
  accesskey: "C"
>

<nSpinnerSeconds[@cldr.plural(n)] {
  zero: "zero seconds",
  one: "one second",
  two: "{{ n }} seconds",
  few: "{{ n }} seconds",
  many: "{{ n }} seconds",
  other: "{{ n }} seconds"
}>
--

Does the file cover the entire l20n subset we want to support initially?

Now we only need the serializer, which will probably come as part of your
po2l20n effort.

-Matjaž

Zibi Braniecki

Apr 22, 2015, 8:21:36 PM
to mozilla-t...@lists.mozilla.org
An update on the parser front.

I've ended up developing two JS parser versions:

1) A compatibility parser, which produces exactly the same AST as the current .properties parser. This one will land in the current l10n.js:

https://github.com/zbraniecki/l20n.js/blob/1027684-add-subset-of-l20n-format/src/lib/format/l20n/parser.js

2) A v3 parser, intended to land together with the v3 API and a new revision of l10n.js:

https://github.com/zbraniecki/l20n.js/blob/v3-features/src/lib/format/l20n/parser.js

The latter one supports:
- default hash values
- simplified CallExpressions
- separation between variables, globals and identifiers

That means it covers 4 out of the 5 scenarios we are planning for v3. The only one I intentionally left out is triple-quoted strings: we don't need them now, they may impact performance and add a bit of complexity, and they don't affect the AST at all. So I decided to leave them out for now.

The latter also uses a different AST than what we currently have, so it will be incompatible with v2 of our resolver.

I want to use the v3 version of the parser as a reference point for python ports and any work based on L20n, and the v2 compatibility one exclusively for the runtime.

I will update my port and will work with Stas to get the v3 version merged into his v3 API work.

I believe that v3 should only support the l20n file format, and if needed we may write a .properties parser that produces the v3 AST for compatibility reasons.

If you are working on any code that needs an l20n file format parser, it should be based on the v3 version. I'll update this thread with the python port when it's ready.

zb.

Zibi Braniecki

Apr 27, 2015, 4:54:13 PM
to mozilla-t...@lists.mozilla.org
Next update!

I marked the c-l.js bug as FIXED, because at this point c-l.js is fully operational [0].

I filed a new bug, with instructions, to start using c-l.js in Elmo [1]. The next step is for Axel to test this in his environment.

I updated JS parser in my v3-features branch to support everything I want to support for now, and added lots of tests.

Lastly, I branched the python-l20n parser as 1.0.x to match l20n.js 1.0.x and updated the master python-l20n parser to be a perfect port of the JS parser.

At this point, both parsers produce exactly the same AST. I believe there's some wiggle room in the exact shape of the AST, and once we start working with it we may want to fine-tune it (since I made some changes), but overall it should be good enough to start developing against in your code.

The next step is to integrate the parser into Stas's v3 branch and get the Pootle code to work with the python parser.

zb.

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1037052
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1158976
[2] https://github.com/zbraniecki/l20n.js/blob/v3-features/src/lib/format/l20n/parser.js

Zibi Braniecki

Apr 30, 2015, 8:35:48 PM
to mozilla-t...@lists.mozilla.org
Next update!

I got both the python [0] and js [1] serializers to work! I can't say they're complete, and I don't have tests yet, but from my manual testing they seem usable.

I also added ./tools/serialize.js|py to both repositories.

So now I have:
- two parsers that produce the same JSON AST
- serializers that can take that AST and reproduce L20n

Which means that we should be able to interact freely between js and python, and also read/write L20n for tools purposes.
Axel, I also removed the unescape dependency from the JS parser, so you should be able to use it in Aisle.

Working on that brought up three topics that I have so far left unresolved:

1) Source notation. Currently, neither parser stores any information about the position of syntax nodes in the source. I believe it would be worth figuring out how we want to handle that. The first idea that comes to mind is to add a kvp on the node object, like 'source': {'start': 49, 'end': 102, 'string': '...'}, for an editor to use.

2) String notations. When a string is used, it may be surrounded by ", ' or (in the future) """ or '''. Once we parse it, we don't store this information, so on serialization we cannot reuse it.

We could guess (for example: multiline uses triple-quotes; a single line uses " unless it contains " and no ', in which case it uses '), but we could also somehow store it on the string.
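
The guessing heuristic could be sketched like this (a hypothetical helper, not part of the actual serializer):

```python
def choose_quote(s):
    """Guess a quote style for a string value, per the heuristic above:
    multiline strings get triple-quotes; a string containing " but no '
    gets single quotes; everything else gets double quotes."""
    if "\n" in s:
        return '"""'
    if '"' in s and "'" not in s:
        return "'"
    return '"'

print(choose_quote('say "hi"'))  # prints: '
```

This recovers a plausible quote style, but not necessarily the one the source used, which is the problem described above.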

3) Unescaping.

Right now we do something very naive - we unescape unicode and strip the backslash in front of any other character, treating the following char as non-semantic.

It works well enough: you can do <foo "hey \" ho"> or <foo "hey \{{ var }} ho"> and it will all be stored as a simple string.

But with serialization, problems arise.

First, unicode \uXXXX will be turned into a unicode char by the parser, so the serializer will have no way to figure out which form was used in the source and will serialize it as a literal unicode char.

Second, there is sometimes no way to know which escape form was used. For example:

<foo "hey \{{ var }}"> and <foo "hey {\{ var }}"> will produce the same AST. During serialization we can identify that, since the AST node is a simple string "hey {{ var }}" and not a complex string, we should escape the {{ to remove its syntactic meaning, but we have no way to know which char was originally escaped.

Third, all other chars are simply unescaped, so <foo "hey \n"> will be turned into "hey n" and <foo "hey \l"> will be turned into "hey l".

That means that when serializing, we will just write them back without a backslash.

We could limit backslash use and raise errors in the parser if \ precedes an unknown char, and then have rules in the serializer to backslash a backslash, backslash {{, and backslash the string-closing mark, but for chars like "\n" we will hit the same problem as with unicode:

<foo "hey
ho"> and <foo "hey \n ho"> will produce the same AST. What should we serialize them into?
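
A minimal sketch of that ambiguity, assuming \n were treated as an escaped newline (which the parser above does not currently do): two different source forms collapse into the same value, so the serializer cannot know which one to write back.

```python
def unescape(source):
    """Toy unescaper: a backslash strips syntactic meaning from the
    next char, and \\n (an assumption, not current behavior) maps
    to a newline."""
    out, i = [], 0
    while i < len(source):
        if source[i] == "\\" and i + 1 < len(source):
            nxt = source[i + 1]
            out.append("\n" if nxt == "n" else nxt)
            i += 2
        else:
            out.append(source[i])
            i += 1
    return "".join(out)

# A literal newline and an escaped \n produce the same value.
assert unescape("hey\nho") == unescape("hey\\nho")
```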

Would love to get your feedback!
zb.

[0] https://github.com/l20n/python-l20n/blob/master/lib/l20n/format/serializer.py
[1] https://github.com/zbraniecki/l20n.js/blob/v3-features/src/lib/format/l20n/serializer.js

Matjaz Horvat

May 1, 2015, 5:44:40 AM
to Zibi Braniecki, mozilla-t...@lists.mozilla.org
Zibi,

I'm trying to parse and then serialize back the following entity with your
parser and serializer:

<nSpinnerSeconds[@cldr.plural(n)] {
  zero: "zero seconds",
  one: "one second",
  two: "{{ n }} seconds",
  few: "{{ n }} seconds",
  many: "{{ n }} seconds",
  other: "{{ n }} seconds"
}>

The parser gives me this:

{
  '$v': {
    'many': [{
      't': 'id',
      'v': 'n'
    }, ' seconds'],
    'two': [{
      't': 'id',
      'v': 'n'
    }, ' seconds'],
    'one': 'one second',
    'few': [{
      't': 'id',
      'v': 'n'
    }, ' seconds'],
    'zero': 'zero seconds',
    'other': [{
      't': 'id',
      'v': 'n'
    }, ' seconds']
  },
  '$x': [{
    'a': [{
      't': 'id',
      'v': 'n'
    }],
    't': 'call',
    'v': {
      't': 'glob',
      'v': 'cldr.plural'
    }
  }],
  '$i': 'nSpinnerSeconds'
}

Is this expected? Instead, I was hoping for something more l10n-tool
friendly:

{
  "$v": {
    "zero": "zero seconds",
    "one": "one second",
    "two": "{{ n }} seconds",
    "few": "{{ n }} seconds",
    "many": "{{ n }} seconds",
    "other": "{{ n }} seconds"
  },
  "$i": "nSpinnerSeconds"
}

-Matjaž

Zibi Braniecki

May 1, 2015, 11:07:23 AM
to mozilla-t...@lists.mozilla.org
Hi Matjaž,

Yes, it very much is. The L20n grammar [0] defines strings as complex structures which can contain so-called expanders.

In the runtime we optimize this by separating simple and complex strings. A simple string is just a string:

<foo "hey world!">

A complex string is an Array|List of strings and expressions:

<foo "one {{ n }} two {{ $m }}">

The latter will produce:
[
  "one ",
  {t: "id", v: "n"},
  " two ",
  {t: "var", v: "m"}
]

It's actually not just an optimization. Because of the way expanders work, the parser cannot find the end of the string without parsing the expanders. Look at this:

<foo "Hello {{ @global("foo") }}">

Without lexing the string, I would have to assume that the end of the string is at global(". In the old days (huh!) we tried the approach of double parsing, where such an entity would require escaped quotes and would be parsed twice: first to find the whole string (and remove backslashes), then to parse the expanders in it [1]:

<foo "Hello {{ @global(\"foo\") }}">

This unfortunately didn't work [2] (I'd like to point out that arguments for double-parsing and against double-parsing were raised, with the full power of reason, by Stas over the span of two days. Stas is amazing).

===============

The bottom line is that I believe the parser should store the parsed complex string. What we may want to do is:

a) Store the source of the string as well

which would require turning the Array|List of a complex string into an object with value and source: {'v': [...], 's': "..."}.

b) Let you serialize and parse strings easily

which would keep the AST as it is, but give you access to Serializer.serializeString() and Parser.parseString(), which you'd use whenever a value is an Array|List.
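
For option (b), a serializeString could look roughly like this (a sketch based on the {t, v} node shape shown earlier in the thread, not the actual implementation):

```python
def serialize_string(value):
    """Turn a parsed string value back into l20n source. Simple strings
    are quoted as-is; complex strings (lists of text chunks and {t, v}
    expression nodes) have their expanders re-expanded."""
    if isinstance(value, str):
        return '"%s"' % value
    parts = []
    for chunk in value:
        if isinstance(chunk, str):
            parts.append(chunk)
        elif chunk["t"] == "id":
            parts.append("{{ %s }}" % chunk["v"])
        elif chunk["t"] == "var":
            parts.append("{{ $%s }}" % chunk["v"])
    return '"%s"' % "".join(parts)

value = ["one ", {"t": "id", "v": "n"}, " two ", {"t": "var", "v": "m"}]
print(serialize_string(value))  # prints: "one {{ n }} two {{ $m }}"
```

A tool would call this whenever an entity value is a list rather than a plain string.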

How does it sound to you?

zb.


[0] http://l20n.github.io/spec/grammar.html
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=918655#c11 , https://bugzilla.mozilla.org/show_bug.cgi?id=918655#c16
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=918655

Matjaz Horvat

May 1, 2015, 11:47:56 AM
to Zibi Braniecki, mozilla-t...@lists.mozilla.org
Speaking with my l10n-tool-developer hat on, I prefer option A, which makes
source strings available directly.

-Matjaž

Zibi Braniecki

May 1, 2015, 3:25:07 PM
to mozilla-t...@lists.mozilla.org
On Friday, May 1, 2015 at 11:47:56 AM UTC-4, Matjaz Horvat wrote:
> Speaking with my l10n-tool-developer hat on, I prefer option A, which makes
> source strings available directly.


Yeah, roger that.

On the other hand, we would prefer not to inflate the AST if possible, because it costs runtime memory and/or storage. So we either diverge the ASTs or go with b).

zb.

Axel Hecht

May 1, 2015, 3:47:57 PM
to mozilla-t...@lists.mozilla.org
On 5/1/15 2:35 AM, Zibi Braniecki wrote:
> Next update!
>
> I got both python [0] and js [1] serializers to work! I can't say they are complete, and I don't have tests yet, but from my hand testing they seem usable.
>
> I also added ./tools/serialize.js|py to both repositories.
>
> So now I have:
> - two parsers that produce the same JSON AST
> - serializers that can take that AST and reproduce L20n
>
> Which means that we should be able to freely interact between js and python and also read/write L20n for tools purposes.
> Axel, I also removed unescape dependency from JS Parser, so you should be able to use it in Aisle.
>
> Working on that brought three topics that I so far left unresolved:
>
> 1) Source notation. Currently, neither parser stores any information about the position of syntax nodes in the source. I believe it would be worth figuring out how we want to handle that. The first idea that comes to mind is to add a kvp on the node object, like 'source': {'start': 49, 'end': 102, 'string': '...'}, for an editor to use.
Maybe look at what treehugger does via setAnnotation?
https://github.com/ajaxorg/treehugger/blob/master/lib/treehugger/tree.js
>
> 2) String notations. When a string is used, it may be surrounded by ", ' or (in the future) """ or '''. Once we parse it, we don't store this information, so on serialization we cannot reuse it.
>
> We could guess (for example: multiline uses triple-quotes, single line uses " unless it has " inside it, and no ' in which case it uses '), but we could also somehow store it on the string
>
> 3) Unescaping.
>
> Right now we do something very naive - we unescape unicode and strip the backslash in front of any other character, treating the following char as non-semantic.
>
> It works well enough: you can do <foo "hey \" ho"> or <foo "hey \{{ var }} ho"> and it will all be stored as a simple string.
>
> But with serialization, problems arise.
>
> First, unicode \uXXXX will be turned into a unicode char by parser so the serializer will have no way to figure out what form of unicode has been used and will serialize it as a unicode char.
>
> Second, there is no way to sometimes know what unescape form has been used. Like:
>
> <foo "hey \{{ var }}"> and <foo "hey {\{ var }}"> will produce the same AST. During serialization we can identify that, since the AST node is a simple string "hey {{ var }}" and not a complex string, we should escape the {{ to remove its syntactic meaning, but we have no way to know which char was originally escaped.
>
> Third, all other chars are simply unescaped, so <foo "hey \n"> will be turned into "hey n" and <foo "hey \l"> will be turned into "hey l".
>
> That means that when serializing we will just write it back without a backslash.
>
> We can limit the backslash use, and raise errors in parser if \ precedes an unknown char, and then have rules in the serializer, to backslash a backslash, backslash {{ and backslash string closing mark, but for chars like "\n" we will hit the same problem as with unicode:
>
> <foo "hey
> ho"> and <foo "hey \n ho"> will produce the same AST. What should we serialize them into?
I'm generally on the "be an editor" front.

One algorithm for the serializer could be to minimize the textual diff
between the existing content in the file and the serialized output. In
particular for unchanged entities, that'd result in no change in the
text on disk.
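
That idea could be sketched as follows (the data shapes here are hypothetical; the real serializer would work on the AST):

```python
def serialize_minimal_diff(original_spans, values, serialize):
    """Re-serialize only entities that changed; copy the original
    source text verbatim for everything else, so unchanged entities
    produce no diff on disk.
    original_spans: {id: original source text}
    values: {id: (new value, changed?)}"""
    out = []
    for eid, (value, changed) in values.items():
        out.append(serialize(eid, value) if changed else original_spans[eid])
    return "\n".join(out)

spans = {"close": '<close "Close">', "open": '<open "Open">'}
values = {"close": ("Close", False), "open": ("Opened", True)}
print(serialize_minimal_diff(spans, values,
                             lambda eid, val: '<%s "%s">' % (eid, val)))
```

Only the "open" entity is rewritten; "close" keeps its original bytes.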

Yeah, my editor-writing self doesn't believe in parsing and serializing,
I'm sorry.

Axel

Zibi Braniecki

May 1, 2015, 6:28:55 PM
to mozilla-t...@lists.mozilla.org
Ok, I added parseString/serializeString for now. Once we start adding source annotations for editors, we will have to figure out how to do this (either fork the parser or extend it), and then we will start providing you an AST more suited for your needs (including a source for the complexString).

For now, just pass the value to serializer.serializeString and it will give you a string.

zb.

Zibi Braniecki

May 1, 2015, 9:08:00 PM
to mozilla-t...@lists.mozilla.org
On Friday, May 1, 2015 at 3:47:57 PM UTC-4, Axel Hecht wrote:
> Maybe look at what treehugger does via setAnnotation?
> https://github.com/ajaxorg/treehugger/blob/master/lib/treehugger/tree.js

Interesting. So you think it would be better to give you col/line instead of pos? I like to think of the source as one long string, and to avoid making any piece white-space driven, so my idea would rather be to give you pos.start and pos.end.

Would that work?

> I'm generally on the "be an editor" front.
>
> One algorithm for the serializer could be to minimize the textual diff
> between the existing content in the file and the serialized output. In
> particular for unchanged entities, that'd result in no change in the
> text on disk.

Absolutely agree; that's my goal as well. So in the process of building the AST, I'm trying to identify which rules should be 1:1 between parser and serializer (if the parser does X, the serializer reverts it), and which require AST annotations to be written back (string quotes, etc.).

I'm not sure what to do with escaping, because in order to restore an escaped unicode char I need to know that it was escaped. That means either not unescaping it when parsing, or annotating it in the AST somehow.

> Yeah, my editor-writing self doesn't believe in parsing and serializing,

That's ok. Let's try to work around it and get an AST your editor-writing self wants to work with :)

zb.

Axel Hecht

May 3, 2015, 4:01:17 AM
to mozilla-t...@lists.mozilla.org
On 5/2/15 3:07 AM, Zibi Braniecki wrote:
> On Friday, May 1, 2015 at 3:47:57 PM UTC-4, Axel Hecht wrote:
>> Maybe look at what treehugger does via setAnnotation?
>> https://github.com/ajaxorg/treehugger/blob/master/lib/treehugger/tree.js
> Interesting. So you think it would be better to give you col/line instead of pos? I like to think of the source as a long string, and avoid building any piece as white-space driven, so my idea would be rather to give you pos.start and pos.end.
>
> Would that work?
In my experience, machines want the global offset, humans don't. Text
editors are "both", at least ace splits the doc into lines.

Which is a lengthy way of saying "depends on the use case", and the
optimizations for it.

Axel

Zibi Braniecki

May 3, 2015, 10:07:38 PM
to mozilla-t...@lists.mozilla.org
On Sunday, May 3, 2015 at 4:01:17 AM UTC-4, Axel Hecht wrote:
> On 5/2/15 3:07 AM, Zibi Braniecki wrote:
> > On Friday, May 1, 2015 at 3:47:57 PM UTC-4, Axel Hecht wrote:
> >> Maybe look at what treehugger does via setAnnotation?
> >> https://github.com/ajaxorg/treehugger/blob/master/lib/treehugger/tree.js
> > Interesting. So you think it would be better to give you col/line instead of pos? I like to think of the source as a long string, and avoid building any piece as white-space driven, so my idea would be rather to give you pos.start and pos.end.
> >
> > Would that work?
> In my experience, machines want the global offset, humans don't. Text
> editors are "both", at least ace splits the doc into lines.
>
> Which is a lengthy way of saying "depends on the use case", and the
> optimizations for it.

True. My thinking is that the editor will format the source string into rows and cols, so it should be able to calculate them. The parser doesn't store rows/cols; it stores positions in the source string, so those are easier for it to annotate.
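
The conversion is cheap to do on the editor side; a minimal sketch (a hypothetical helper, 1-based line/column):

```python
def offset_to_position(source, offset):
    """Convert a flat string offset, as stored by the parser, into an
    editor-friendly (line, column) pair, both 1-based."""
    line = source.count("\n", 0, offset) + 1
    last_newline = source.rfind("\n", 0, offset)
    column = offset - last_newline  # rfind returns -1 on line 1
    return line, column

src = '<foo "Hello">\n<bar "World">'
print(offset_to_position(src, 14))  # prints: (2, 1)
```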

zb.

Zibi Braniecki

May 3, 2015, 10:09:24 PM
to mozilla-t...@lists.mozilla.org
And another update,

Tim brought up a use case for PropertyExpression, so I added it to the parser/serializer for both python and JS.

For now it only handles computed property expressions, so that this works:

<byteUnit {
  B: 'Bytes',
  KB: 'KB',
}>

<preInstalledStatus "{{ $size }} {{ byteUnit[$unit] }}">

I don't want to enable more than I have to in order to make all major Gaia use cases work, so I didn't add attribute expressions or non-computed property expressions.
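
For illustration, resolving a computed property expression like byteUnit[$unit] boils down to a hash lookup keyed by a runtime argument (a sketch; the names are hypothetical):

```python
def resolve_computed_property(entities, args, entity_id, arg_name):
    """Resolve byteUnit[$unit]-style expressions: look up the entity's
    hash, indexed by the value of a runtime argument."""
    return entities[entity_id][args[arg_name]]

entities = {"byteUnit": {"B": "Bytes", "KB": "KB"}}
print(resolve_computed_property(entities, {"unit": "B"}, "byteUnit", "unit"))
# prints: Bytes
```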

zb.

Francesco Lodolo [:flod]

May 4, 2015, 1:50:49 AM
to tools...@lists.mozilla.org
On 04/05/15 at 04:09, Zibi Braniecki wrote:
> <byteUnit {
> B: 'Bytes',
> KB: 'KB',
> }>
I suppose you mean "B", not "Bytes", unless the syntax could integrate
plural forms too?

Francesco

Staś Małolepszy

May 4, 2015, 7:59:22 AM
to Zibi Braniecki, mozilla-t...@lists.mozilla.org
Is this something that we already use in Gaia? I agree it would be very
helpful to have for many use-cases, for instance here:

https://github.com/mozilla-b2g/gaia/blob/master/apps/keyboard/js/settings/layout_item_view.js#L263-L291
https://github.com/mozilla-b2g/gaia/blob/master/apps/keyboard/locales/keyboard.en-US.properties#L79

Should we track this as a feature to add to l10n.js on current master?
That might be great to introduce indeed. It will require some work in the
Resolver, too.

-stas


On Mon, May 4, 2015 at 4:09 AM, Zibi Braniecki <zbigniew....@gmail.com> wrote:

> And another update,
>
> Tim brought a case for PropertyExpression, so I added it to
> parser/serializer for python/JS.
>
> For now it's just handling computed property expression, so that this
> works:
>
> <byteUnit {
> B: 'Bytes',
> KB: 'KB',
> }>
>
> <preInstalledStatus "{{ $size }} {{ byteUnit[$unit] }}">
>
> I don't want to enable more than I have to to make all major Gaia use
> cases work, so I didn't add attribute expression or non-computed property
> expression.
>
> zb.

Zibi Braniecki

May 4, 2015, 9:14:51 AM
to mozilla-t...@lists.mozilla.org
On Monday, May 4, 2015 at 7:59:22 AM UTC-4, Staś Małolepszy wrote:
> Is this something that we already use in Gaia? I agree it would be very
> helpful to have for many use-cases, for instance here:
>
> https://github.com/mozilla-b2g/gaia/blob/master/apps/keyboard/js/settings/layout_item_view.js#L263-L291
> https://github.com/mozilla-b2g/gaia/blob/master/apps/keyboard/locales/keyboard.en-US.properties#L79
>
> Should we track this as a feature to add to l10n.js on current master?
> That might be great to introduce indeed. It will require some work in the
> Resolver, too.

So, that depends.

On one hand, I can see us doing that, but on the other I think I'd like to focus on migrating people to the new API as soon as possible.

My idea was to land an l20n parser that matches the current AST perfectly ASAP and encourage people to start using the l20n file format with the current l10n.js, then introduce the new features with the new API, basically encouraging people to migrate to the new l10n.js to use them.

I wouldn't be opposed to backporting some new features to the current l10n.js, especially if we are some time away from being able to start using the new API, but I'd like to land it without them first, to minimize the risk of regressions during migration.

zb.

Zibi Braniecki

May 22, 2015, 2:16:05 PM
to mozilla-t...@lists.mozilla.org
Update on the L20n format front.

1) We landed the subset parser on master (bmo 1027684)
2) We landed an example l20n demo app [0]
3) We have patches for elmo (bmo 1158976) going through review
4) Dwayne and I are working on Pootle support and we are close [1]
5) Gsvelto started experimenting with l20n format for Gaia purposes (bmo 1165332 and bmo 1104667)

It's exciting to see those experiments, as they give us a good perspective on how people imagine using l20n and what pieces are needed.

I hope that in a week or two we will be able to flip the switch and start using the subset of the l20n format in production.

Then, in order to be able to use more of it, we will need to put more work into the Pootle converter.

zb.

[0] https://github.com/mozilla-b2g/gaia/tree/master/dev_apps/l20n-app
[1] https://github.com/dwaynebailey/translate/tree/l20n-db