Re: A few questions about feature policy wpt tests


Luna Lu

unread,
Jun 7, 2017, 1:58:18 PM6/7/17
to Philip Jägenstedt, Rick Byers, platform-predictability
cc'd predictability 

On Wed, Jun 7, 2017 at 1:56 PM, Luna Lu <lun...@google.com> wrote:
Thanks Philip,

I am going to write a CL with a small framework that can be used to test any (existing or in-progress) feature against feature policy in external/wpt/feature-policy.
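A rough sketch of what such a shared helper could look like (illustration only -- the helper names, file layout, and postMessage format below are assumptions, not the actual CL; testharness.js is assumed to be loaded by the including test):

```javascript
// Load |srcUrl| in an <iframe>, optionally with an allow attribute
// (container policy), and resolve with whatever the child document posts
// back, e.g. {feature: 'payment', enabled: true}.
function probeFeatureInFrame(srcUrl, allowAttribute) {
  return new Promise(resolve => {
    const iframe = document.createElement('iframe');
    if (allowAttribute !== undefined)
      iframe.setAttribute('allow', allowAttribute);
    iframe.src = srcUrl;
    window.addEventListener('message', function onMessage(e) {
      if (e.source !== iframe.contentWindow)
        return;
      window.removeEventListener('message', onMessage);
      resolve(e.data);
    });
    document.body.appendChild(iframe);
  });
}

// Example driver: most features default to 'self', so expect enabled
// same-origin and disabled cross-origin when no allow attribute is set.
function runDefaultAllowlistTests(featureName, sameOriginUrl, crossOriginUrl) {
  promise_test(async () => {
    const result = await probeFeatureInFrame(sameOriginUrl);
    assert_true(result.enabled);
  }, featureName + ' is enabled in a same-origin frame by default');

  promise_test(async () => {
    const result = await probeFeatureInFrame(crossOriginUrl);
    assert_false(result.enabled);
  }, featureName + ' is disabled in a cross-origin frame by default');
}
```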


Luna Lu | Software Engineer | lun...@google.com | +1 519 513 5751

On Wed, Jun 7, 2017 at 1:38 PM, Luna Lu <lun...@google.com> wrote:
Another thing: when a feature is disabled, I check its error message, error name, etc. in the current feature policy layout tests. But given that we don't yet have a unified way of signalling that a feature is disabled, that part can't (and shouldn't) be tested in wpt.

Ian, we are planning to support feature policy errors / warnings / exceptions in the future, aren't we?



Luna Lu | Software Engineer | lun...@google.com | +1 519 513 5751

On Wed, Jun 7, 2017 at 6:12 AM, Philip Jägenstedt <foo...@chromium.org> wrote:
On Tue, Jun 6, 2017 at 8:51 PM Luna Lu <lun...@google.com> wrote:
Hi Rick and Philip,

I am adding feature policy tests to wpt and I have two questions:

1. Does it make sense to move tests from LayoutTests/*/feature-policy to LayoutTests/external/wpt/? There might be a few complications, as the "vibrate" tests require a user gesture, so we would need to write manual tests for that in wpt. To me, the layout tests version and the wpt version should be almost identical, so I am not sure it is good practice to keep both. However, we can test features behind flags in LayoutTests but not in wpt, I suppose?

Things in LayoutTests/external/wpt behave just like LayoutTests/* for most purposes, and they'll run with experimental features enabled. You can also do virtual test suites for wpt and all the rest.

Anything that's covered by specs can be moved into wpt, and then you can delete the copy in LayoutTests entirely and depend on just the wpt coverage. For the tests that require a user gesture, see LayoutTests/external/wpt_automation/fullscreen/auto-click.js for how I did that for Fullscreen. It's not perfect, but if you can use the same mechanism for all tests and don't need to write a separate automation script for each test, I think it's still probably worth the effort.

Eventually such manual tests will be converted to use WebDriver to do the clicking.
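Once that happens, a user-gesture test could look roughly like this sketch (assuming wpt's testdriver.js and testdriver-vendor.js are also included by the test file; the vibrate() assertion is just an example of a gesture-gated feature):

```javascript
promise_test(async () => {
  const button = document.createElement('button');
  button.textContent = 'vibrate';
  document.body.appendChild(button);

  const vibrated = new Promise(resolve => {
    // navigator.vibrate() returns false when the call is not allowed.
    button.addEventListener('click', () => resolve(navigator.vibrate(100)),
                            {once: true});
  });

  // test_driver.click() generates a real user gesture via WebDriver.
  await test_driver.click(button);
  assert_true(await vibrated);
}, 'vibrate() succeeds when triggered by a user gesture');
```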
 
2. Do we want to create a feature-policy directory under wpt and keep the feature policy tests for all features there? Currently the feature policy tests for webusb are under webusb/. But I think it makes the most sense to keep all feature policy tests in one place.

You should have a directory for feature policy, I think. Where to put the tests depends on what spec you are testing. If, say, the Fullscreen spec were to say "if feature policy such-and-such, throw an exception", then that test should be under fullscreen/.

If there are tests where it's not clear which spec actually requires the behavior being tested, that might be a problem to resolve on the spec side first. When it's really the interaction of two different normative requirements in two different specs, you can put the tests in either; that will happen once in a while.
 
3. In the future, we should always enforce that every feature added to feature policy must have some wpt tests for it. I will probably write a wiki page on which tests (header policy, container policy, cross-origin, same-origin, relocation, etc.) are required. Does that sound reasonable?

Obviously +10k on using wpt wherever possible. If most feature policy tests follow a similar structure, then it'd be reasonable either to just document it as you say, or possibly to create a small framework for testing feature policy and document how to use that.
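For illustration, the two main ways such tests would set a policy, matching the header/container dimensions listed above, are roughly as follows (the exact syntax was still in flux at the time, and the probe URL is hypothetical):

```javascript
// Container policy: restrict a feature for one child frame via the iframe
// "allow" attribute (value syntax per the current draft; may change).
const crossOriginProbeUrl =
    'https://example.com/feature-policy/resources/probe.html';  // hypothetical
const frame = document.createElement('iframe');
frame.setAttribute('allow', "payment 'self'");
frame.src = crossOriginProbeUrl;
document.body.appendChild(frame);

// Header policy: declared on the HTTP response for the test document itself,
// e.g. via a wpt *.headers file next to the test:
//   Feature-Policy: payment 'self'
// (Header name and serialization were still being worked out at the time.)
```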
 
Thanks for answering my questions.

Regards,
Luna

Luna Lu | Software Engineer | lun...@google.com | +1 519 513 5751



Philip Jägenstedt

unread,
Jun 7, 2017, 3:56:36 PM6/7/17
to Luna Lu, Rick Byers, platform-predictability
Luna, do you mean that the tests check the exception messages or something like that? That is indeed something that isn't standardized and can't be asserted in wpt. We don't really have a good solution for this yet; for now we just have to write separate tests. I think that perhaps we could arrange things so that one can have an -expected-log.txt or something like that, so that one doesn't need to duplicate the tests just to additionally test exception messages. Maybe the tests would then t.log(message) or something.

If you have ideas for changes to testharness.js and surrounding infrastructure that'd help here, let's talk!


Ian Clelland

unread,
Jun 7, 2017, 4:02:05 PM6/7/17
to Philip Jägenstedt, Luna Lu, Rick Byers, platform-predictability
On Wed, Jun 7, 2017 at 1:38 PM, Luna Lu <lun...@google.com> wrote:
Another thing: when a feature is disabled, I check its error message, error name, etc. in the current feature policy layout tests. But given that we don't yet have a unified way of signalling that a feature is disabled, that part can't (and shouldn't) be tested in wpt.

For any given feature, the behaviour when disabled via feature policy should be specified -- whether it fails with an error code, or throws an exception, or rejects a promise, or whatever -- I don't think we want to leave that up to implementers to decide differently for each browser.

In general, if an API throws an exception, we should be able to standardize on the type of the exception, though not the message itself.
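As a hedged illustration: if a spec settles on, say, a "SecurityError" DOMException for a disallowed feature, a wpt test can assert the type without touching the message. The SecurityError choice for Payment Request below is an assumption, and assert_throws_dom is the current testharness.js helper:

```javascript
// Run inside a frame where "payment" is disallowed.
test(() => {
  assert_throws_dom('SecurityError', () => {
    new PaymentRequest(
        [{supportedMethods: 'basic-card'}],
        {total: {label: 'total', amount: {currency: 'USD', value: '1.00'}}});
  });
}, 'PaymentRequest constructor throws when "payment" is disallowed');
```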


Ian, we are planning to support feature policy errors / warnings / exceptions in the future, aren't we?

We've talked about logging to the console when policies are violated, or when a site does something nonsensical or contradictory within a policy, but I don't know if that will be part of a standard. The more likely standard solution for testing will be the JS introspection API, I think.
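If that introspection API lands roughly as discussed, tests could query the policy directly. The sketch below is speculative: document.featurePolicy and allowsFeature() are assumed names for the proposal, not a shipped interface.

```javascript
test(() => {
  assert_true(document.featurePolicy.allowsFeature('payment'));
}, '"payment" is reported as allowed in the top-level document');

test(() => {
  // Hypothetical cross-origin frame already present in the test page.
  const frame = document.querySelector('iframe#cross-origin');
  assert_false(frame.featurePolicy.allowsFeature('payment'));
}, '"payment" is reported as disallowed in a cross-origin frame by default');
```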





Philip Jägenstedt

unread,
Jun 7, 2017, 4:08:35 PM6/7/17
to Ian Clelland, Luna Lu, Rick Byers, platform-predictability
On Wed, Jun 7, 2017 at 10:02 PM Ian Clelland <icle...@chromium.org> wrote:
For any given feature, the behaviour when disabled via feature policy should be specified -- whether it fails with an error code, or throws an exception, or rejects a promise, or whatever -- I don't think we want to leave that up to implementers to decide differently for each browser.

In general, if an API throws an exception, we should be able to standardize on the type of the exception, though not the message itself.

Yeah, that's my view as well, and I thought that Luna was talking about testing the message. If it's just about throwing a TypeError (or FooError) at all, then that should be unproblematic to test in wpt.

Ian, we are planning to support feature policy errors / warnings / exceptions in the future, aren't we?

We've talked about logging to the console when policies are violated, or when a site does something nonsensical or contradictory within a policy, but I don't know if that will be part of a standard. The more likely standard solution for testing will be the JS introspection API, I think.

That all sounds like it might tie into the reporting API, yeah.
 


Luna Lu

unread,
Jun 7, 2017, 4:55:05 PM6/7/17
to Philip Jägenstedt, Ian Clelland, Rick Byers, platform-predictability
On Wed, Jun 7, 2017 at 4:08 PM, Philip Jägenstedt <foo...@chromium.org> wrote:
Yeah, that's my view as well, and I thought that Luna was talking about testing the message. If it's just about throwing a TypeError (or FooError) at all, then that should be unproblematic to test in wpt.

​An example is https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/http/tests/feature-policy/payment-enabledforself.php
When the "payment" is enabled, it is easy to verify. But when a "feature" is disabled, besides checking "data.enabled", a few other things are currently checked in our layout test.

I agree that we should specify in the spec what happens when a feature is disabled by feature policy. But where should this part of the spec live, under feature policy, or under each feature? I suspect it is the latter​.

Alternatively in the wpt, we only need to check if feature is enabled or disabled, and leaving the rest to a separate test. That way, it is probably even easier to write a generalized framework to test for all features. WDYT?
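A sketch of the child-frame side of such a check (the probe file and the {feature, enabled} message shape are assumptions for illustration): the frame only reports whether the feature works, leaving error details to feature-specific tests.

```javascript
function reportFeatureEnabled(featureName, enabled) {
  window.parent.postMessage({feature: featureName, enabled: enabled}, '*');
}

// Example probe for "payment": the feature counts as enabled if the
// constructor doesn't throw; which exception it throws when disabled is
// left to the payment-specific tests.
try {
  new PaymentRequest(
      [{supportedMethods: 'basic-card'}],
      {total: {label: 'total', amount: {currency: 'USD', value: '1.00'}}});
  reportFeatureEnabled('payment', true);
} catch (e) {
  reportFeatureEnabled('payment', false);
}
```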


Philip Jägenstedt

unread,
Jun 22, 2017, 4:50:34 AM6/22/17
to Luna Lu, Ian Clelland, Rick Byers, platform-predictability
On Wed, Jun 7, 2017 at 10:55 PM Luna Lu <lun...@chromium.org> wrote:
​An example is https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/http/tests/feature-policy/payment-enabledforself.php
When the "payment" is enabled, it is easy to verify. But when a "feature" is disabled, besides checking "data.enabled", a few other things are currently checked in our layout test.

I agree that we should specify in the spec what happens when a feature is disabled by feature policy. But where should this part of the spec live, under feature policy, or under each feature? I suspect it is the latter​.

Alternatively in the wpt, we only need to check if feature is enabled or disabled, and leaving the rest to a separate test. That way, it is probably even easier to write a generalized framework to test for all features. WDYT?

I assume this has resolved itself by now, but in https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/http/tests/feature-policy/payment-enabledforself.php I don't see any use of internals or similar; would it be problematic to upstream this as-is?

You're right that ideally each feature (spec) should integrate with feature policy and say precisely what happens at a specific point in some algorithm. As an example, for Fullscreen I believe it should be a new error condition in https://fullscreen.spec.whatwg.org/#dom-element-requestfullscreen, which could then be tested.
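A sketch of what the corresponding wpt test might then look like (assuming the spec reuses requestFullscreen()'s existing TypeError rejection path for the new error condition):

```javascript
// Run inside a frame where "fullscreen" is disallowed by feature policy.
promise_test(t => {
  const element = document.createElement('div');
  document.body.appendChild(element);
  return promise_rejects_js(t, TypeError, element.requestFullscreen());
}, 'requestFullscreen() rejects when "fullscreen" is disallowed by feature policy');
```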