Hey, this is fantastic! I'm very excited that we now have automatic verification for our changes! Big thanks to everyone who helped make it happen!
I've only had a brief look at the verification messages and I have a first question. It's very useful to be notified that a change does not merge cleanly. In this case a snippet with the conflict is posted. This output is just a short snippet and it ends with '...'. As it is now it is not very useful, since in most cases it only shows a conflict in some import statements, and there is no link that would let one navigate to the full report. Would it be possible to insert such a link, or alternatively just provide a list of files that have conflicts? Hey, wouldn't it be cool if we could show conflict icons for those files in ChangeScreen2, allowing one to click on them to see the details?
I haven't found an example of a failing build yet. I'm curious what the message looks like in this case. Will there be a link to the build log?
By the way, the voter has a very cool name :-)
On Sat, Sep 14, 2013 at 3:46 PM, Conley Owens <cc...@android.com> wrote:
>> Would it be possible to insert such a link or alternatively just provide a
>> list of files that have conflicts?
>
> I haven't gotten around to it, but I'd like to put the diff on Google Cloud
> Storage and provide links to that. Looking forward to the distant future, I
> think it would be nice if this were just a feature of Gerrit proper. Gerrit
> already reports that a patchset cannot merge in the info box at the top of a
> change. It seems reasonable that there'd be a link from there with
> information.

Having the conflicts displayed in Gerrit is a really good idea. It
might also be good to email the change owner once we know there is a
conflict, with a short summary of the conflict diff (longer than what
Deckard/Diffy Cuckoo is posting, but maybe not the entire thing), so the
owner can rebase the change.

>> Will there be a link to the build log?
>
> There will be a snippet of build output, like with the diff. Eventually,
> I'd like to upload build logs to Google Cloud Storage and provide links to
> that.

The builder project should have Google Cloud Storage quota, so this is
just a matter of technical effort to get the logs there.
Good idea. +1
/B
From: repo-d...@googlegroups.com [mailto:repo-d...@googlegroups.com] On Behalf Of Edwin Kempin
Sent: Tuesday, September 17, 2013 3:23 PM
To: Pursehouse, David (Sony Mobile)
Cc: Deniz Türkoglu; Repo and Gerrit Discussion
Subject: Re: Diffy Cuckoo - automatic compile testing of changes
2013/9/17 David Pursehouse <david.pu...@sonymobile.com>
> On 09/17/2013 03:40 PM, Deniz Türkoglu wrote:
>> Isn't verified serving exactly that purpose, in theory?
>
> In theory, yes, if the automated unit testing is good enough to prove that the change does what it is supposed to without breaking any other functionality.
> This might be possible with things like the REST API, but not for UI changes, documentation, etc.
> I just wonder if we're going to have changes submitted because the maintainer code reviewed it and relied on the automated verification.
> If we don't have a separate label, how about making it so that the change can only be submitted if also verified by a human?
I also think that, at the moment, a positive vote from the automatic verification is required but not sufficient for submitting a change.
This is because we don't have good test coverage yet, we do not require that all new functionality is covered by tests, and some changes may not be testable.
So having a human verification is still beneficial. We could also think about increasing the range of the Verified label:
+2 Verified
+1 Build and tests succeed
0
-1 Fails
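To make that concrete, the extended range could be expressed roughly like this in the project's project.config; this is only a sketch, and the MaxWithBlock function and the value descriptions shown here are assumptions rather than a decided configuration:

    [label "Verified"]
        function = MaxWithBlock
        value = -1 Fails
        value =  0 No score
        value = +1 Build and tests succeed
        value = +2 Verified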
On 09/17/2013 05:13 PM, Deniz Türkoglu wrote:
> In that case I misunderstand the label. Verified means(?) it builds,
> which Diffy does now. Are we after built & tests passed?

To me, verified means that the change works, i.e. does what it is intended to do. In some cases this can be automated, but in many cases it can't.
I don't set the verified label until I have actually tested the change myself and am satisfied that it does work, even if there are tests that have passed.
That's the definition my company took, based upon the documentation and the usage by the default Jenkins Gerrit trigger plugin.
Verified is only set by the build machine and indicates that the build completes, doesn't generate violations (FindBugs etc.), and unit tests pass.
It is the code reviewer's responsibility to determine whether the unit tests are sufficient and, if not, either:
1. code review -1 with a request for better unit tests,
2. get the developer to explain what manual verification they have done and ensure it is to their satisfaction before they give the code review +2, or
3. do their own manual verification before they give the code review +2.
Thomas
From: repo-d...@googlegroups.com [mailto:repo-d...@googlegroups.com]
On Behalf Of Deniz Türkoglu
Sent: 17 September 2013 09:37
To: Edwin Kempin
Cc: David Pursehouse; Repo and Gerrit Discussion
Subject: Re: Diffy Cuckoo - automatic compile testing of changes
"compiles, passes basic unit tests"
Once we clarify the meaning, I think it's easier to solve this. If there is a consensus on the understanding, the label idea looks good to me.
On Tue, Sep 17, 2013 at 10:26 AM, Edwin Kempin <edwin....@gmail.com> wrote:
2013/9/17 David Pursehouse <david.pu...@sonymobile.com>
> On 09/17/2013 05:13 PM, Deniz Türkoglu wrote:
>> In that case I misunderstand the label. Verified means(?) it builds,
>> which Diffy does now. Are we after built & tests passed?
>
> To me, verified means that the change works, i.e. does what it is intended to do. In some cases this can be automated, but in many cases it can't.
> I don't set the verified label until I have actually tested the change myself and am satisfied that it does work, even if there are tests that have passed.

+1, this is also my understanding and I use it accordingly.
Now that (almost) everyone is using verification, can we solve this
mystery, not by using the labels but by adding core functionality?
We can have a row under each patchset, such as "builds", which shows the (n)
builds that this commit has triggered with n statuses; for example, we
currently need built / tests and maybe static analysis.
Comments?
On Tue, Sep 17, 2013 at 1:44 AM, Thomas Swindells (tswindel) <tswi...@cisco.com> wrote:
> That's the definition my company took, based upon the documentation and the usage by the default Jenkins Gerrit trigger plugin.
> Verified is only set by the build machine and indicates that the build completes, doesn't generate violations (FindBugs etc.), and unit tests pass.
> It is the code reviewer's responsibility to determine whether the unit tests are sufficient and, if not, either:
> 1. code review -1 with a request for better unit tests,
> 2. get the developer to explain what manual verification they have done and ensure it is to their satisfaction before they give the code review +2, or
> 3. do their own manual verification before they give the code review +2.

My vote (no pun intended) is still for another label. What you've described sounds like an indirect way of encoding another bit of information (does this change do what it claims to do); why not just explicitly encode that bit of information in a label?
If we think collectively that three labels is too many and that Verified _should_ be enough, then let's consider that an impetus to improve test coverage :)
2013/9/17 Dave Borowitz <dbor...@google.com>
> My vote (no pun intended) is still for another label. What you've described sounds like an indirect way of encoding another bit of information (does this change do what it claims to do); why not just explicitly encode that bit of information in a label?

Adding another label is also fine for me.

> If we think collectively that three labels is too many and that Verified _should_ be enough, then let's consider that an impetus to improve test coverage :)

+1, improving test coverage sounds like a good idea :-)
> Diffy Cuckoo is now running automatic compile testing[*] of changes
> uploaded to gerrit-review.

Has the problem with Diffy running the acceptance tests been resolved?
There was a change merged yesterday that fixed a problem that was preventing some of the cases from passing, but I don't know if that's the same problem that Shawn was referring to before.
> like the tests are finding the wrong name for the local host or something.

That might be the issue... there's a bug in IcedTea's implementation of
java.net.Inet4AddressImpl.getLocalHostName that causes a buffer
overflow on an fqdn that's greater than 64 characters long (which is
the case in my environment). I had to edit the fqdn to a bogus value
so that any of the tests would run, but that may have introduced
another problem.
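A quick way to check which name the JVM resolves for the local host (presumably the lookup path the tests depend on; the class name below is just made up for the example):

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class LocalHostNameCheck {
        public static void main(String[] args) {
            try {
                // getLocalHost() is backed by the native
                // Inet4AddressImpl.getLocalHostName mentioned above; on an
                // affected JDK with an fqdn longer than 64 characters the
                // returned name may be truncated or bogus.
                InetAddress local = InetAddress.getLocalHost();
                System.out.println("host name: " + local.getHostName());
                System.out.println("canonical: " + local.getCanonicalHostName());
            } catch (UnknownHostException e) {
                System.err.println("local host lookup failed: " + e.getMessage());
            }
        }
    }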
On Thu, Sep 26, 2013 at 9:13 AM, Shawn Pearce <s...@google.com> wrote:
> I haven't investigated the issues yet. Conley said the tests would
> start and then hang; he gave up after 2 hours.
> is there any way for you all to tell which builds failed due to missing artifacts and resolve them?
The report does show that new artifacts were required, correct?
Handling this would be tricky. I build in an environment that does
not have a network connection, so that the arbitrary code that gets
run/built does not have access to the network. I'd basically have to
parse out the file that contains the listed artifacts and download the
artifacts in a previous step (one that has a network connection), but
that's kind of a hacky solution.
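A rough sketch of what that pre-fetch step could look like, assuming a hypothetical artifacts.txt with one download URL per line (the real artifact list file and cache layout would differ):

    import java.io.InputStream;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class PrefetchArtifacts {
        public static void main(String[] args) throws Exception {
            // Hypothetical input: one artifact URL per line, fetched on a
            // machine that does have network access, before the offline build.
            Path list = Paths.get("artifacts.txt");
            Path cache = Files.createDirectories(Paths.get("artifact-cache"));
            for (String url : Files.readAllLines(list, StandardCharsets.UTF_8)) {
                String name = url.substring(url.lastIndexOf('/') + 1);
                Path target = cache.resolve(name);
                if (Files.exists(target)) {
                    continue; // already fetched on a previous run
                }
                try (InputStream in = new URL(url).openStream()) {
                    Files.copy(in, target);
                }
                System.out.println("fetched " + name);
            }
        }
    }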
Diffy Cuckoo is now running automatic compile testing[*] of changes
uploaded to gerrit-review.
It is based on Deckard, a verification tool Conley Owens developed for Android.
Thanks Conley!
[*] There is a bug preventing acceptance tests from running in this
environment. I plan to debug it next week when I can find time. Until
then it's only doing `buck build :withdocs`.
It turns out that Diffy is using Java 6[1]. Since we are considering switching to Java 7 and using its syntax and features, can we upgrade Diffy too?
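For what it's worth, the Java 7 syntax and features in question are things like the diamond operator, strings in switch, and try-with-resources; a small illustrative example (class and method names are made up):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    public class Java7Features {
        // Reads a file and returns its non-blank lines, using Java 7 syntax
        // that a Java 6 compiler would reject.
        public static List<String> nonBlankLines(String file) throws IOException {
            List<String> lines = new ArrayList<>(); // diamond operator
            // try-with-resources closes the reader automatically
            try (BufferedReader r =
                    Files.newBufferedReader(Paths.get(file), StandardCharsets.UTF_8)) {
                String line;
                while ((line = r.readLine()) != null) {
                    switch (line.trim()) { // strings in switch
                        case "":
                            break; // skip blank lines
                        default:
                            lines.add(line);
                    }
                }
            }
            return lines;
        }
    }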