[net] Allow data: URL redirects in manual redirect mode
Helmut Januschka
Let's make clear these are redirects to data URLs, not redirects from them.
Done
deferred_redirect_url_ = std::make_unique<GURL>(redirect_info.new_url);
response_head->response_type = mojom::FetchResponseType::kOpaqueRedirect;
response_head->timing_allow_passed = !timing_allow_failed_flag_;
Helmut Januschka
I'm a bit concerned that we don't seem to have any protection against following these redirects. It looks like we'd just return ERR_UNKNOWN_URL_SCHEME (see https://source.chromium.org/chromium/chromium/src/+/main:net/url_request/url_request_job_factory.cc;l=112), but I really don't understand the implications of letting callers do that, or the lack of protections against it.
It seems to me like we really should be closing the pipe to prevent following the redirects, if manual mode means the consumer is expected to manually make a new request to follow the redirect (It's not clear if it means that, from glancing at the spec, unfortunately - that's the problem with purely prescriptive specs. They're basically code without comments).
I'm going to defer to Adam here, as CORS isn't really my area.
Tried adding protection to close the pipe after sending the opaque-redirect response:
```cpp
if (request_.redirect_mode == mojom::RedirectMode::kManual) {
HandleComplete(URLLoaderCompletionStatus(net::ERR_FAILED));
return;
}
```
This broke a large number of tests. The issue is that OnReceiveRedirect is the mechanism to deliver the opaque-redirect response to the client.
It's not actually following the redirect. The client receives the redirect info wrapped in an opaque response, and the flow completes normally; the redirect is never followed.
I'm happy to help figure out how best to do that, if we decide to go in that direction (again, deferring to Adam on that decision).
We'd probably want to either not treat it as a redirect, but rather as a final response (e.g., instead of a 2xx response with body, we'd return a 3xx response with empty body), or call OnReceiveRedirect, but also have this class destroy itself, so there's no object to actually follow the redirect. First approach would probably break more things, though if not, it would be what I prefer. Second approach likely more practical, and shouldn't break consumers unless they continue to monitor the Mojo pipe even after they've received their last message over it...unless they're using manual mode, but actually using the URLLoader to follow the redirect, anyways.
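For concreteness, a rough sketch of that second approach, assuming `CorsURLLoader`'s existing `forwarding_client_` and `HandleComplete()` members are the right hooks (this is a sketch, not the CL's actual code):
```cpp
// Sketch only: hand the client the opaque-redirect response, then complete
// right away so there is no live URLLoader left to follow the redirect with.
forwarding_client_->OnReceiveRedirect(redirect_info, std::move(response_head));
HandleComplete(URLLoaderCompletionStatus(net::OK));
```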
Anyhow, let's hold off on trying either of those until we hear back from Adam.
bool is_fetch_redirect_mode_manual_ = false;
This should be called `treat_all_redirects_as_safe` to clearly express what it does at this layer, and to avoid making reference to concepts from a higher layer. Same with the setter and getter.
I've spent some time looking at this today. The key thing is that `RedirectMode::kManual` is not by itself a sufficient indicator that we should avoid the check for HTTP(S). All navigations use `RedirectMode::kManual` so skipping the check could open a security hole.
The difficulty we have comes from layering and historical differences from the Fetch Standard. //net checks whether the redirect is safe to follow before returning the response. The Fetch Standard checks whether the redirect is safe to follow at the time when it follows it.
I think the difference is visible in two places:
1. `await fetch("/redirect-to-data-scheme", { redirect: "manual" })` should return a Response object with a type of `"opaqueredirect"` rather than throwing an exception.
2. It should be possible for a Service Worker to cache an `"opaqueredirect"` Response object and then later return that cached Response for a navigation. In this case, IIUC, the network error should happen when the navigation is performed whereas Chrome wouldn't let the `fetch()` return a Response to begin with.
The status quo in Chrome is that //net is performing that filtering on responses itself, so we cannot assume that clients of the URLLoader interface are doing it correctly. Maybe the right way forward is to have a new flag on `network::ResourceRequest` to indicate that the caller expects unfiltered redirects and will filter them itself.
As far as I can tell, it would be safe to censor all non-HTTP(S) URLs to just `data:`, which should limit the security risk if we forget to filter somewhere.
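A rough sketch of what that censoring could look like at the point where the redirect is reported to the client (the `allow_unsafe_redirect_schemes` name is the one used later in this thread; the placement and exact condition are assumptions, not the CL's actual diff):
```cpp
// Sketch only: when the caller has opted in, don't fail on a non-HTTP(S)
// redirect target; report it, but censor the URL so the only observable fact
// is that the redirect was unsafe.
net::RedirectInfo censored = redirect_info;
if (request_.allow_unsafe_redirect_schemes &&
    !censored.new_url.SchemeIsHTTPOrHTTPS()) {
  censored.new_url = GURL("data:,");
}
```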
| Commit-Queue | +1 |
This should be called `treat_all_redirects_as_safe` to clearly express what it does at this layer, and to avoid making reference to concepts from a higher layer. Same with the setter and getter.
Done
awesome! Really appreciate it, Adam! Could you take a look at whether the recent PS is how you meant it? (WPTs are still passing 🤗)
This seems safe to me. lgtm but get feedback from the other reviewers too.
!redirect_info.new_url.SchemeIs("data")) {
I think we should do the rewrite even if the scheme *is* data, just in case the contents of the data URL is something malicious. If everything works as expected, it shouldn't make any difference to any observable outcome.
{
Maybe this would be easier to read as a data-driven test? Something like
```cpp
struct TestCase {
std::string_view location_header;
std::string_view expected_new_url;
};
static constexpr TestCase test_cases[] = {
{"https://other.example.com/bar.png", "https://other.example.com/bar.png"},
{"data:text/html,hello", "data:"},
...
};
```
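The table could then drive the assertions with a small loop; `RunCase()` below is a hypothetical helper (not part of the existing fixture) that starts a manual-redirect load whose server answers with the given Location header and returns the URL reported to the client:
```cpp
for (const auto& test_case : test_cases) {
  SCOPED_TRACE(test_case.location_header);
  // RunCase() is hypothetical; see the note above.
  EXPECT_EQ(GURL(test_case.expected_new_url),
            RunCase(test_case.location_header));
}
```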
I'll take another look tomorrow, sorry for not getting to it today. This CL is potentially scary, since it affects security-related protections, so I want to be very careful here.
I'm sorry for disappearing here, Helmut. Had some unplanned personal stuff come up (nothing serious), and there was a holiday, so I have been out for most of the last week.
"If we forget to filter them somewhere" - this field is set by the renderer, which may be compromised, so we need to consider that in the security model.
I'm a bit confused by how these concerns tie in to the comment "This seems safe to me."
I really don't understand the security model here well enough to be comfortable signing off on this CL, unfortunately, and I don't see a good path forward to learning enough information to change that.
dang, had replies in draft!
@mmenke - no worries about timing, I am doing this in my spare time and expect everything to be async by default!
deferred_redirect_url_ = std::make_unique<GURL>(redirect_info.new_url);
response_head->response_type = mojom::FetchResponseType::kOpaqueRedirect;
response_head->timing_allow_passed = !timing_allow_failed_flag_;
Done
I think we should do the rewrite even if the scheme *is* data, just in case the contents of the data URL is something malicious. If everything works as expected, it shouldn't make any difference to any observable outcome.
Done. Now all non-HTTP(S) URLs are censored to `data:,`, including data: URLs themselves.
deferred_redirect_url_ = std::make_unique<GURL>(redirect_info.new_url);
response_head->response_type = mojom::FetchResponseType::kOpaqueRedirect;
response_head->timing_allow_passed = !timing_allow_failed_flag_;
My understanding of the security model here:
1. //net checks the redirect to protect the caller from itself. By the model of the Fetch Standard, it is doing this too early. The Fetch Standard says it is not an error to *receive* a bad redirect; it is only an error to *follow* it. However, we have an extremely large number of callers who have always had our existing behaviour and would blindly follow a bad redirect if we gave them one.
2. This CL adds a flag to permit a caller to opt into a new behaviour, where the caller is responsible for deciding whether to follow the redirect or not.
3. The Fetch Standard requires an "opaque-redirect" type response to a request with a redirect mode of "manual". It doesn't distinguish between valid and invalid redirects here, because validity is only checked when a redirect is followed.
The Fetch Standard does not let us expose the redirect location to JavaScript. This is the security/privacy boundary in the platform. However it is exposed to the render process as the "internal response", which is the same behaviour as all other opaque responses in Chromium. It is not, and is not intended to be, a protection against a hacked render process getting access to the data.
As an additional protection against bugs, this CL censors the bad redirect target to "data:", since the only feature of it that is detectable from JavaScript is the fact that it is bad.
Did that help make it clearer?
By the way, Helmut: Good job on tests.
Ok, so this is not about protecting callers from data they aren't allowed to access.
There's still the issue of proxying requests (e.g., maybe the extensions URLLoader proxy machinery explodes on a redirect to a JavaScript URL, and now we're giving untrusted renderer processes the ability to make that URLLoader see those URLs).
Replacing them with just "data:," seems to partially address that (though they could theoretically still explode on empty data URLs, or could even manually parse the location header, if they really wanted, and then explode on that).
It does make sense that they're allowed to access the response itself, as we have plenty of other permission checks for that.
// Test that in manual redirect mode with allow_unsafe_redirect_schemes,
nit: Surround variable names that appear in comments with backticks (`)
NotifyLoaderClientOnReceiveRedirect(
CreateRedirectInfo(302, "GET", file_redirect));
Can this actually happen? URLRequest should fail in this case, right?
By the way, Helmut: Good job on tests.
thank you 🤗
d.request_status() == OK);
Helmut Januschka
Is this needed?
Done
Happy to add additional safeguards if you think of specific cases that need protection.
// Test that in manual redirect mode with allow_unsafe_redirect_schemes,
nit: Surround variable names that appear in comments with backticks (`)
Done
NotifyLoaderClientOnReceiveRedirect(
CreateRedirectInfo(302, "GET", file_redirect));
Can this actually happen? URLRequest should fail in this case, right?
Correct, `URLRequest` would reject a redirect to `file://` before reaching `CorsURLLoader` in practice. This test exercises the `CorsURLLoader` code path in isolation to verify it doesn't incorrectly apply censoring when `allow_unsafe_redirect_schemes` is not set.
I can remove it if you think it doesn't bring value. WDYT?
- Step 9: "If request's redirect mode is 'manual', then set response to an opaque-redirect filtered response whose internal response is actualResponse."
Please wrap these lines too.
I looked at the extension APIs a bit. In the old `webRequest` API, the `onBeforeRedirect` event is passed a `redirectUrl` property. The new behaviour will be visible in that `onBeforeRedirect` will be called even for an invalid URL when `fetch()` has been used with manual redirects, and `data:,` will be passed as the `redirectUrl` to the handler.
Given that the `webRequest` API is deprecated, I'm not sure we need to care about this.
With the modern `declarativeNetRequest` API the change is almost undetectable. `declarativeNetRequest` doesn't permit interception of `data:,` URLs, so the extension cannot influence the outcome.
`Content-Security-Policy` is an outlier in the web platform in that it has some visibility of redirects. If a site has a Content-Security-Policy that bans `data:` URLs, and has set up reporting, and the page attempts to do a fetch for something with an invalid redirect, and they have redirects set to manual, then a CSP report will be generated where none would have previously. This seems like an extreme edge case.
As far as I can tell, DevTools doesn't expose the redirect URL in the UI. Only the Location: header is visible, and we're not changing that. So the impact on DevTools is minimal.
In conclusion, I don't think we need to take any special action regarding extensions.
I can remove it if you think it doesn't bring value. WDYT?
I think it is okay to keep if you add a comment mentioning that URLRequest shouldn't actually do this in practice. It has some value in clarifying the behaviour.
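One possible wording for that comment (just a suggestion):
```cpp
// Note: in practice URLRequest rejects redirects to non-HTTP(S) schemes such
// as file:// before they ever reach CorsURLLoader, so this shouldn't happen
// outside of tests. The test only pins down that CorsURLLoader does not
// censor the redirect URL when `allow_unsafe_redirect_schemes` is not set.
```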
url_request.set_treat_all_redirects_as_safe(
I think since this is a behaviour change, we do need to put it behind a feature. Please add something like `kDeferCheckingOfUnsafeRedirects` to services/network/public/cpp/features.h. I think the changes are simple enough that we only need to check the feature here and not anywhere else.
Do you have permissions to create a chromestatus.com entry? I think we can get away with an FYI to blink-dev@ for a change this minor, but it's good to have a chromestatus entry so the behaviour change is documented somewhere.
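A minimal sketch of the requested wiring, using the suggested name (the real declaration in services/network/public/cpp/features.h also carries the usual COMPONENT_EXPORT annotation, and the argument already passed at this call site is elided):
```cpp
// features.h (sketch)
BASE_DECLARE_FEATURE(kDeferCheckingOfUnsafeRedirects);

// features.cc (sketch); off by default.
BASE_FEATURE(kDeferCheckingOfUnsafeRedirects,
             "DeferCheckingOfUnsafeRedirects",
             base::FEATURE_DISABLED_BY_DEFAULT);

// At the call site quoted above, fold the feature check into the existing
// condition, e.g.:
//   base::FeatureList::IsEnabled(features::kDeferCheckingOfUnsafeRedirects) &&
//   <existing condition>
```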
Thanks for the investigation, Adam! Going to mark this as resolved. I'll sign off once your other comments are addressed.
- Step 9: "If request's redirect mode is 'manual', then set response to an opaque-redirect filtered response whose internal response is actualResponse."
Please wrap these lines too.
Done
I think it is okay to keep if you add a comment mentioning that URLRequest shouldn't actually do this in practice. It has some value in clarifying the behaviour.
Done
I think since this is a behaviour change, we do need to put it behind a feature. Please add something like `kDeferCheckingOfUnsafeRedirects` to services/network/public/cpp/features.h.
Added the feature flag, and set it to off-by-default.
I can file chromestatus entries. Do you think this should be an Intent to Ship (and then switch it to on-by-default)?
Or should it be on-by-default, with just a PSA email to blink-dev@? (And we'd have the feature flag for kill-switching.)
| Code-Review | +1 |
lgtm!
I can file chromestatus entries. Do you think this should be an Intent to Ship (and then switch it to on-by-default)? Or should it be on-by-default, with just a PSA email to blink-dev@?
I think a PSA is probably acceptable. If it was me I'd probably do an "Intent to Ship" anyway, just because by the time you've created the chromestatus entry it's not that much more work. But I will let you decide which you'd rather do.
Done: https://chromestatus.com/feature/5098938002702336 - I'll wait till all is green and then land the CL.
From googleclient/chrome/chromium_gwsq/ipc/config.gwsq:
IPC: mk...@chromium.org
📎 It looks like you’re making a possibly security-sensitive change! 📎 IPC security review isn’t a rubberstamp, so your friendly security reviewer will need a fair amount of context to review your CL effectively. Please review your CL description and code comments to make sure they provide context for someone unfamiliar with your project/area. Pay special attention to where data comes from and which processes it flows between (and their privilege levels). Feel free to point your security reviewer at design docs, bugs, or other links if you can’t reasonably make a self-contained CL description. (Also see https://cbea.ms/git-commit/).
IPC reviewer(s): mk...@chromium.org
Reviewer source(s):
mk...@chromium.org is from context(googleclient/chrome/chromium_gwsq/ipc/config.gwsq)
// See https://fetch.spec.whatwg.org/#http-redirect-fetch step 9.
This looks reasonable from a mojo perspective, but I wonder how necessary this attribute is. If the goal is to support this for all non-navigational requests whose redirect-mode is `manual`, you could look at the request's `destination` rather than passing another boolean along (e.g. `fetch()` will result in a request whose destination is `kEmpty`). Did you consider that approach?
| Code-Review | +1 |
This approach looks good, so I'll stamp the CL (not that you need it anymore, since you dropped the mojo bits :) ), but I do have a suggestion for improvement that's inlined below (tl;dr: check `mode` rather than `destination`).
I'd also suggest that it would be helpful to ensure we have a negative test for the `fetch()` behavior. We might already have one somewhere, but I would have expected to see an update to some `-expected.txt` somewhere.
redirect target, making the safety check unnecessary.
This reference seems out of date. I think you now want to refer to step 6.2 of https://fetch.spec.whatwg.org/#http-fetch?
That spec text also suggests a better option than I gave you earlier: the request's `mode` should be [`kNavigate`](https://source.chromium.org/chromium/chromium/src/+/main:services/network/public/mojom/fetch_api.mojom;drc=d60984dee3a5102a2f7b42f0f6ac329b900bee7b;l=17). That should be more exhaustive than comparing against the request destination used for `fetch()`.
if (request_.destination == mojom::RequestDestination::kEmpty &&
See the suggestion above to compare against the request's `mode` rather than the destination. I think we'd otherwise need to invert this check to check whether the destination wasn't one of the navigational types (document, iframe, frame, embed, object? Maybe others I'm forgetting?). Checking the mode seems simpler and more in line with the specification.
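A sketch of the suggested condition (enum and field names as referenced in this thread; the rest of the existing check is elided):
```cpp
// Sketch only: per https://fetch.spec.whatwg.org/#http-fetch step 6.2, any
// non-navigation request with a manual redirect mode qualifies, so check the
// request's mode instead of enumerating navigational destinations.
if (request_.mode != mojom::RequestMode::kNavigate &&
    request_.redirect_mode == mojom::RedirectMode::kManual) {
  // ... defer the unsafe-scheme check and report the (censored) redirect ...
}
```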
| Code-Review | +1 |
virtual test suites lgtm, didn't look at the rest
I'd also suggest that it would be helpful to ensure we have a negative test for the `fetch()` behavior.
`ManualRedirectWithoutFlagDoesNotCensor` is the negative test
This reference seems out of date. I think you now want to refer to step 6.2 of https://fetch.spec.whatwg.org/#http-fetch?
Done