
A question on the dev process


David Bruant

Nov 21, 2012, 4:53:22 PM
to dev-w...@lists.mozilla.org
Hi,

I am subscribed to bugmail for a bunch of components, and for the web APIs I
notice a lot of "develop tests for X" bugs. I'm somewhat surprised by these
emails, because I thought tests were developed as part of writing the code
for a feature.

Is there a particular reason for separating feature and tests?

David

Justin Lebar

Nov 22, 2012, 12:09:34 AM
to David Bruant, dev-w...@lists.mozilla.org
> Is there a particular reason for separating feature and tests?

Sometimes we don't have time. Which is another way of saying we
sometimes have higher-priority tasks to attend to. A lot of the time
these "add tests for X" bugs don't even get filed, because there's no
realistic expectation that tests will ever be written.

We can debate at a theoretical level whether it's appropriate to check
in testable code changes without tests, but as a practical matter, I
don't think you're going to get very far legislating this sort of
thing as policy. Indeed, we've had a policy in B2G for some months
now of "all code changes must be accompanied by tests" -- yes, "all
changes", as I recall the policy -- but I've never once seen that
policy invoked to r- a patch.

If you want to develop tests for some of these features, I'm sure that
would be welcomed by the respective owners and peers.

-Justin

David Bruant

Nov 22, 2012, 4:49:45 AM
to Justin Lebar, dev-w...@lists.mozilla.org
On 22/11/2012 06:09, Justin Lebar wrote:
>> Is there a particular reason for separating feature and tests?
> Sometimes we don't have time. Which is another way of saying we
> sometimes have higher-priority tasks to attend to. A lot of the time
> these "add tests for X" bugs don't even get filed, because there's no
> realistic expectation that tests will ever be written.
Do you mean literally "ever"? Or rather "within the B2G V1 timeframe"?
"ever" would be worrisome.

> We can debate at a theoretical level whether it's appropriate to check
> in testable code changes without tests, but as a practical matter, I
> don't think you're going to get very far legislating this sort of
> thing as policy. Indeed, we've had a policy in B2G for some months
> now of "all code changes must be accompanied by tests" -- yes, "all
> changes", as I recall the policy -- but I've never once seen that
> policy invoked to r- a patch.
>
> If you want to develop tests for some of these features, I'm sure that
> would be welcomed by the respective owners and peers.
I can imagine :-)
I'm currently writing docs, for which there is already more than enough
work, but in the process I'm writing code examples. Maybe I'll find bugs;
in that case I'll file them, and hopefully it won't be too hard to turn
the code samples into tests. If you have tips for making my code more
easily transformable into tests, I'm open to that.
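For instance, here is roughly how I've been trying to structure the examples so they can become tests later (a sketch: `pickImage` and the fake factory are my own inventions, not an existing API; only the `{name, data}` options shape and the `onsuccess`/`onerror` callbacks mirror the WebActivities style):

```javascript
// Sketch: write each doc example as a small function that takes its
// dependency as an argument, so a test can swap in a fake implementation.

// Doc example: "pick an image" via an activity-like API.
function pickImage(createActivity) {
  return new Promise((resolve, reject) => {
    const activity = createActivity({
      name: "pick",
      data: { type: "image/jpeg" },
    });
    activity.onsuccess = () => resolve(activity.result);
    activity.onerror = () => reject(activity.error);
  });
}

// In the docs, createActivity would be (opts) => new MozActivity(opts).
// In a test, we inject a stub instead:
function fakeActivityFactory(result) {
  return (opts) => {
    const activity = { options: opts };
    // Fire the success callback asynchronously, like the real API would.
    setTimeout(() => {
      activity.result = result;
      activity.onsuccess();
    }, 0);
    return activity;
  };
}

// Minimal "test": run the example against the stub and check the result.
pickImage(fakeActivityFactory({ blob: "fake-image" })).then((result) => {
  console.log(result.blob); // prints "fake-image"
});
```

That way the same snippet serves both the docs page and, with a different factory, an automated test.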

I wish to point out that without tests, it's a bit harder to write
documentation, since I don't see the API being exercised. Specifically,
I'm currently working on documenting WebActivities, and the commits are
not really helpful for understanding how it works from a developer's
perspective. I feel I'm going to have to spend a lot of time poking
around and annoying WebAPI devs to get my work done :-(


Just a thought: not having time to write tests may be a symptom of an
over-ambitious product or schedule. I'm not sure saying so now makes much
of a difference, since decisions have probably already been made. I'm
afraid it'll bite later.

David

jsmith....@gmail.com

Nov 24, 2012, 1:10:11 AM
to mozilla-d...@lists.mozilla.org, dev-w...@lists.mozilla.org, Justin Lebar
On Thursday, November 22, 2012 1:49:54 AM UTC-8, David Bruant wrote:
> On 22/11/2012 06:09, Justin Lebar wrote:
> >> Is there a particular reason for separating feature and tests?
> > Sometimes we don't have time. Which is another way of saying we
> > sometimes have higher-priority tasks to attend to. A lot of the time
> > these "add tests for X" bugs don't even get filed, because there's no
> > realistic expectation that tests will ever be written.
> Do you mean literally "ever"? Or rather "within the B2G V1 timeframe"?
> "ever" would be worrisome.
>
> > We can debate at a theoretical level whether it's appropriate to check
> > in testable code changes without tests, but as a practical matter, I
> > don't think you're going to get very far legislating this sort of
> > thing as policy. Indeed, we've had a policy in B2G for some months
> > now of "all code changes must be accompanied by tests" -- yes, "all
> > changes", as I recall the policy -- but I've never once seen that
> > policy invoked to r- a patch.

Hmm, I just walked out of a discussion on this with the WebRTC developers, QA, and the automation development team. Let me bring that perspective in here.

From what I took away from that discussion, I don't expect enforcing tests on check-in to work in practice, although reviewers are encouraged to at least mention when it's worthwhile to land a test with a patch (and that piece I have seen happen, when the effort to write the test isn't too large and its value seems clear). To get to that stage, though, there usually needs to be prior work to build out an automation framework for the area (which can vary in difficulty depending on the feature, WebAPI, etc.). Without such a framework, I wouldn't expect tests to land with check-ins, since writing one would take far more effort. It's still important to flag which areas need a framework built out: we have at least four people from QA & Automation Dev who would like to know when a needed framework doesn't yet exist. They can probably help you out if you raise this.

The goal of the B2G platform automation story right now, as I understand it, is to get the equivalent of "smoke test" coverage at the platform level, so some of these tests are being looked into. We just need to make sure awareness of the gaps is effectively raised.
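To give a flavor of what "smoke test" coverage means at that level, a sketch (the tiny harness and `fakeNavigator` are stand-ins I made up so this can run anywhere; `navigator.mozSms` is the real B2G API it gestures at):

```javascript
// Sketch: smoke-level coverage is just "does the API exist, and does a
// trivial call not blow up", repeated across every WebAPI.
const fakeNavigator = {
  mozSms: { send: (number, text) => ({ number, text }) },
};

// Run one check and report PASS/FAIL instead of throwing.
function smokeTest(name, check) {
  try {
    check();
    return `PASS ${name}`;
  } catch (e) {
    return `FAIL ${name}: ${e.message}`;
  }
}

const report = [
  smokeTest("mozSms exists", () => {
    if (!fakeNavigator.mozSms) throw new Error("missing");
  }),
  smokeTest("mozSms.send is callable", () => {
    fakeNavigator.mozSms.send("555", "hi");
  }),
];

report.forEach((line) => console.log(line));
```

Even that thin layer catches the "API object never got exposed" class of regression early.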

> > If you want to develop tests for some of these features, I'm sure that
> > would be welcomed by the respective owners and peers.
> I can imagine :-)
> I'm currently writing doc for which there is already largely enough
> work, but in the process of writing doc, I'm writing code examples.
> Maybe I'll find bugs. In that case, I'll try to submit bugs and
> hopefully it won't be too hard to turn the code samples into tests. If
> you have tips to make my code more easily transformable into tests, I'm
> open to that.
>
> I wish to point out that without tests it's a bit harder to write
> documentation since I don't see the API being exercised. Specifically,
> I'm currently working on documenting WebActivities and the commits are
> not really helpful to understand how it works from a developer
> perspective. I feel I'm going to have to spend a lot of time poking
> around and annoying WebAPI devs to get my work done :-(

For WebAPI docs, I'd fully expect you'll have to talk to the dev lead of a particular WebAPI to understand it well enough to document it effectively. There's nothing wrong with asking questions here to get good docs put up.

> Just a thought, not having time to write tests may be the symptom of a
> too-ambitious product or delay. I'm not sure saying that now makes much
> of a difference since decisions probably have already been made. I'm
> afraid it'll bite later.

It already has bitten us, quite hard. There have been many nasty bugs that would never have happened if sunny-day automation had been in place. That's why there are some people I know on the QA + Automation Dev side whose job right now is to help get automation in place and reduce the nasty regressions we're seeing.


> David


Vicamo Yang

Nov 26, 2012, 10:28:16 PM
to mozilla.d...@googlegroups.com, dev-w...@lists.mozilla.org
Sometimes it's not the situation you think. For example, some "develop tests for SMS blahblah" bugs are filed not because we have no test cases for SMS, but just to add more cases. Sometimes it's because we simply can't have test cases for that feature yet. For example, SMS delivery reports are not supported in the emulator, so we can't have test cases for `SmsMessage.deliveryStatus`. For MMS, there are many extra service elements involved in the MMS protocol, so we're still working on an automation test framework for it. And for STK, the foundation has landed in the emulator, but more console commands are to come.
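One pattern that may help in the meantime (a sketch, not the actual B2G harness; `emulatorSupports` and the capability table are hypothetical): record an explicit skip when the emulator lacks a feature, so the gap stays visible in test output rather than silently untested.

```javascript
// Sketch: a hypothetical capability table for the emulator.
const capabilities = { smsDeliveryReport: false, mms: false };

function emulatorSupports(feature) {
  return Boolean(capabilities[feature]);
}

// Run a test case, or record a SKIP when the emulator lacks the feature.
function testCase(name, feature, body) {
  if (!emulatorSupports(feature)) {
    return { name, status: "SKIP", reason: `emulator lacks ${feature}` };
  }
  try {
    body();
    return { name, status: "PASS" };
  } catch (e) {
    return { name, status: "FAIL", reason: e.message };
  }
}

const result = testCase(
  "SmsMessage.deliveryStatus is reported",
  "smsDeliveryReport",
  () => {
    throw new Error("not reachable until the emulator supports it");
  }
);

console.log(`${result.status}: ${result.name}`); // prints "SKIP: SmsMessage.deliveryStatus is reported"
```

When the emulator gains the capability, flipping the table entry turns the skip into a real pass-or-fail run.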

David Bruant

Nov 27, 2012, 3:12:35 AM
to Vicamo Yang, dev-w...@lists.mozilla.org, mozilla.d...@googlegroups.com
On 27/11/2012 04:28, Vicamo Yang wrote:
> On Thursday, November 22, 2012 5:53:30 AM UTC+8, David Bruant wrote:
> Sometimes it's not the situation you think. For example, some "develop tests for SMS blahblah" bugs are filed not because we have no test cases for SMS, but just to add more cases. Sometimes it's because we simply can't have test cases for that feature yet. For example, SMS delivery reports are not supported in the emulator, so we can't have test cases for `SmsMessage.deliveryStatus`.
I could understand "XXX is not supported in the emulator, so we *don't
want to test that yet*", but "we can't" seems quite wrong. There are a
lot of people who are big fans of TDD (Test-Driven Development) and who
would tell you to write failing tests first and the code later. If
support for the feature is planned eventually, it wouldn't be absurd to
write the test first, even knowing it fails (for a good reason, in the
short term).
Writing all the failing tests before the code could be an excellent
reminder of what is expected to be done for milestone X (setting aside
Justin's point about having the time to write tests).
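A minimal sketch of the kind of thing I mean (plain JavaScript with a hypothetical `todo()` helper, loosely modeled on the mochitest idea of marking known failures; none of this is an existing B2G API):

```javascript
// Sketch: a tiny "todo" helper that records a known failure instead of
// failing the run, so the missing feature stays visible in test output.
const results = [];

function todo(condition, message) {
  if (condition) {
    // The feature started working: flag it so the test can be promoted
    // from todo() to a real assertion.
    results.push({ status: "UNEXPECTED-PASS", message });
  } else {
    results.push({ status: "TODO", message });
  }
}

// Hypothetical stand-in for a received message in today's emulator,
// where delivery reports are not implemented yet.
const message = { body: "hello", deliveryStatus: undefined };

// Written test-first: this documents what milestone X should deliver.
todo(message.deliveryStatus === "success",
     "deliveryStatus should be reported once the emulator supports it");

for (const r of results) {
  console.log(`${r.status}: ${r.message}`);
}
```

The test suite then doubles as a checklist: every remaining `TODO` line is a feature still owed for the milestone.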

David