Logging from Cucumber


Arve Knudsen

Jul 5, 2010, 7:30:14 AM
to cu...@googlegroups.com
Hi

Are there any logging facilities in Cucumber? Specifically, I should like to debug my steps by logging their execution.

Thanks,
Arve

aslak hellesoy

Jul 5, 2010, 7:48:57 AM
to cu...@googlegroups.com
On Mon, Jul 5, 2010 at 1:30 PM, Arve Knudsen <arve.k...@gmail.com> wrote:
> Hi
> Are there any logging facilities in Cucumber?

from anywhere in your code: puts(msg)
from your ruby stepdefs: announce(msg) or puts(msg)

If you need more fancy logging than that you're doing it wrong IMO. I
use puts and just remove them when I have solved my problem (before I
commit the code).
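
For instance, a minimal sketch inside a Ruby step definition (the step text and the message are hypothetical):

    # features/step_definitions/example_steps.rb -- hypothetical example
    When /^I log in as "([^"]*)"$/ do |user|
      announce("logging in as #{user}")  # shown by Cucumber's formatters
      puts("logging in as #{user}")      # plain stdout
      # ... the actual step logic ...
    end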

> Specifically, I should like to
> debug my steps by logging their execution.

http://technicalpickles.com/posts/debugging-cucumber/

HTH,
Aslak

> Thanks,
> Arve

Arve Knudsen

Jul 5, 2010, 10:43:30 AM
to cu...@googlegroups.com
On Mon, Jul 5, 2010 at 1:48 PM, aslak hellesoy <aslak.h...@gmail.com> wrote:
> On Mon, Jul 5, 2010 at 1:30 PM, Arve Knudsen <arve.k...@gmail.com> wrote:
>> Hi
>> Are there any logging facilities in Cucumber?
>
> from anywhere in your code: puts(msg)
> from your ruby stepdefs: announce(msg) or puts(msg)
>
> If you need more fancy logging than that you're doing it wrong IMO. I
> use puts and just remove them when I have solved my problem (before I
> commit the code).

I can't immediately see why dedicated logging facilities are a bad idea. While I am inexperienced with Cucumber, I have been writing xUnit-style tests for some time, and I find debug logs very valuable in validating my test logic. Especially as I refactor my test library, like any other library, to accommodate a wider range of tests, I want to see that test scenarios are still exercised correctly. So far, I miss this assurance when using Cucumber.

If I use announce, for instance, I can't control the log output, can I? That is, direct the log to a file or specify log verbosity.
 
>> Specifically, I should like to
>> debug my steps by logging their execution.
>
> http://technicalpickles.com/posts/debugging-cucumber/

Thanks, looks useful.

Arve

aslak hellesoy

Jul 5, 2010, 10:54:21 AM
to cu...@googlegroups.com
On Mon, Jul 5, 2010 at 4:43 PM, Arve Knudsen <arve.k...@gmail.com> wrote:
> On Mon, Jul 5, 2010 at 1:48 PM, aslak hellesoy <aslak.h...@gmail.com>
> wrote:
>>
>> On Mon, Jul 5, 2010 at 1:30 PM, Arve Knudsen <arve.k...@gmail.com>
>> wrote:
>> > Hi
>> > Are there any logging facilities in Cucumber?
>>
>> from anywhere in your code: puts(msg)
>> from your ruby stepdefs: announce(msg) or puts(msg)
>>
>> If you need more fancy logging than that you're doing it wrong IMO. I
>> use puts and just remove them when I have solved my problem (before I
>> commit the code).
>
> I can't immediately see why dedicated logging facilities are a bad idea.

Let me elaborate on why I think it's a bad idea:

1) Feature creep. You can use any of the many available logging
frameworks for Ruby. Cucumber doesn't have to set one up for you.
2) Encouraging what I think is a dubious practice in BDD/TDD. Logging
is a technique people use when they don't know what's going on. A
better approach is to drop down to more focused unit testing. That
automates the validation of your logic and fails when things don't
work right. Logging requires manual eye-balling to verify whether
something is working right, which slows you down. It also pollutes
your code.
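
For instance, instead of logging an intermediate value and eyeballing it, a focused test fails on its own when the logic breaks. A minimal, self-contained sketch (the Cart class is hypothetical, and minitest is just one option):

    require 'minitest/autorun'

    # Hypothetical class under test.
    class Cart
      def initialize
        @prices = []
      end

      def add(price)
        @prices << price
      end

      def total(tax:)
        (@prices.sum * (1 + tax)).round
      end
    end

    class CartTest < Minitest::Test
      def test_total_includes_tax
        cart = Cart.new
        cart.add(100)
        # Fails loudly if the logic breaks -- no manual eyeballing needed.
        assert_equal 110, cart.total(tax: 0.10)
      end
    end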

Aslak

Ben Mabey

Jul 5, 2010, 11:14:58 AM
to cu...@googlegroups.com
aslak hellesoy wrote:
> On Mon, Jul 5, 2010 at 4:43 PM, Arve Knudsen <arve.k...@gmail.com> wrote:
>
>> On Mon, Jul 5, 2010 at 1:48 PM, aslak hellesoy <aslak.h...@gmail.com>
>> wrote:
>>
>>> On Mon, Jul 5, 2010 at 1:30 PM, Arve Knudsen <arve.k...@gmail.com>
>>> wrote:
>>>
>>>> Hi
>>>> Are there any logging facilities in Cucumber?
>>>>
>>> from anywhere in your code: puts(msg)
>>> from your ruby stepdefs: announce(msg) or puts(msg)
>>>
>>> If you need more fancy logging than that you're doing it wrong IMO. I
>>> use puts and just remove them when I have solved my problem (before I
>>> commit the code).
>>>
>> I can't immediately see why dedicated logging facilities are a bad idea.
>>
>
> Let me elaborate on why I think it's a bad idea:
>
> 1) Feature creep. You can use any of the many available logging
> frameworks for Ruby. Cucumber doesn't have to set one up for you.
>
+1. The standard Ruby 'logger' library should work great for you. Just
create an instance in your env.rb file and then use it in your steps.
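
A minimal sketch, assuming the conventional features/support/env.rb location (the constant name and the log file path are just placeholders):

    # features/support/env.rb
    require 'logger'

    # One shared logger for all step definitions.
    CUKE_LOG = Logger.new('cucumber.log')  # or $stdout
    CUKE_LOG.level = Logger::INFO          # controls verbosity

    # Then, from any step definition:
    # CUKE_LOG.debug("exercising the login scenario")

That also addresses directing the log to a file and specifying verbosity: the standard Logger handles both.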

Arve Knudsen

Jul 5, 2010, 12:28:01 PM
to cu...@googlegroups.com
On Mon, Jul 5, 2010 at 4:54 PM, aslak hellesoy <aslak.h...@gmail.com> wrote:
> On Mon, Jul 5, 2010 at 4:43 PM, Arve Knudsen <arve.k...@gmail.com> wrote:
>> On Mon, Jul 5, 2010 at 1:48 PM, aslak hellesoy <aslak.h...@gmail.com>
>> wrote:
>>>
>>> On Mon, Jul 5, 2010 at 1:30 PM, Arve Knudsen <arve.k...@gmail.com>
>>> wrote:
>>>> Hi
>>>> Are there any logging facilities in Cucumber?
>>>
>>> from anywhere in your code: puts(msg)
>>> from your ruby stepdefs: announce(msg) or puts(msg)
>>>
>>> If you need more fancy logging than that you're doing it wrong IMO. I
>>> use puts and just remove them when I have solved my problem (before I
>>> commit the code).
>>
>> I can't immediately see why dedicated logging facilities are a bad idea.
>
> Let me elaborate on why I think it's a bad idea:
>
> 1) Feature creep. You can use any of the many available logging
> frameworks for Ruby. Cucumber doesn't have to set one up for you.

OK, I'll look into this.
 
> 2) Encouraging what I think is a dubious practice in BDD/TDD. Logging
> is a technique people use when they don't know what's going on. A
> better approach is to drop down to more focused unit testing. That
> automates the validation of your logic and fails when things don't
> work right. Logging requires manual eye-balling to verify whether
> something is working right, which slows you down. It also pollutes
> your code.
 
If I were to replace logging with unit testing, it would have to be unit testing of my test library (and even then I would probably miss the manual assurance). For me, logging scenario-based tests works well in practice, as the log can typically be fairly coarse-grained and still show that the scenario is implemented. Deducing that a scenario is implemented from reading the test library code, on the other hand, gets increasingly impractical as the set of test cases grows (and the library grows increasingly generic).

I grew especially dependent on test logging when testing a Twisted application, since with the asynchronous execution of tests (i.e., the Trial framework), it was quite easy for checks never to get executed (they were registered as asynchronous callbacks). From the log I could see directly whether tests were incomplete by accident.

Unlike running the automated tests, I don't verify the test logic itself very often, but I like to revise the scenarios from time to time and, in conjunction with that, "eyeball" the corresponding logs. It may not be recommended practice, but it works for me and doesn't require much time.

Arve

Robert Hanson

Jul 5, 2010, 1:05:42 PM
to cu...@googlegroups.com

So the question is NOT “are my scenarios right”, or “do they succeed or fail”; the question is: “did my scenarios run or not run”?

Seems to me that a software tool is in order: have Cucumber output the names of the scenarios that ran to stdout; you can redirect that to a file (or maybe there is a more elegant way of doing this?).

Then write a tool, in Ruby or whatever, that scans the scenario files and pulls out the names of all the scenarios. Then the software can compare the two lists. This will work if you’re not using tags; scenario outlines present a challenge, but should be workable. A sketch of that comparison follows below.
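
Hypothetically, something like this (a rough sketch, not a finished tool; file names are placeholders, and it assumes plain Scenario: lines with no tags or outlines):

    # compare_scenarios.rb -- hypothetical sketch
    # Scenario names declared in the feature files...
    declared = Dir.glob('features/**/*.feature').flat_map do |path|
      File.readlines(path).grep(/^\s*Scenario:/) { |l| l.sub(/^\s*Scenario:\s*/, '').strip }
    end

    # ...versus the names that appear in a captured run (e.g. `cucumber > run.log`).
    ran = File.readlines('run.log').grep(/^\s*Scenario:/) { |l| l.sub(/^\s*Scenario:\s*/, '').strip }

    puts 'Declared but never run:'
    puts(declared - ran)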

 

Or is there more to this question?

 



Arve Knudsen

Jul 5, 2010, 1:25:16 PM
to cu...@googlegroups.com
On Mon, Jul 5, 2010 at 7:05 PM, Robert Hanson <Robert...@calabrio.com> wrote:

> So the question is NOT “are my scenarios right”, or “do they succeed or fail”; the question is: “did my scenarios run or not run”?

It's not quite that simple; the question is more like "were my scenarios driven and verified correctly?". It's basically regression testing of the test logic.

> Seems to me that a software tool is in order: have Cucumber output the names of the scenarios that ran to stdout; you can redirect that to a file (or maybe there is a more elegant way of doing this?).

I can't see that this is going to do anything for my case. The interesting information is mostly what happens out of plain view (in the step definitions).
 
Arve

Robert Hanson

Jul 5, 2010, 1:34:26 PM
to cu...@googlegroups.com

Oh, OK. The question then is: am I testing what I think I’m testing, or am I getting a false positive?

 

Sort of like this, but for acceptance tests: http://thedailywtf.com/articles/unit-tested.aspx

 

Seriously, this is an area of concern for me as well.  Suppose that you have a set of acceptance tests that provide adequate coverage.  Now the product owner changes a requirement that cuts across several stories; how do you verify that ALL the scenarios that need to change have changed?   I think that type of thing can’t be automated.  You need to audit your scenarios to ensure that they’re right.

 

I’d hope that the scenarios are written to make that auditing easier to do, so that you can review the scenarios rather than the log files.  After all, the scenarios are supposed to be a communication tool between the engineers and the stakeholders.  If you have to look at the log files to see if a scenario is doing what it is supposed to, then the scenarios aren’t serving that purpose.

Arve Knudsen

Jul 5, 2010, 2:07:34 PM
to cu...@googlegroups.com
On Mon, Jul 5, 2010 at 7:34 PM, Robert Hanson <Robert...@calabrio.com> wrote:

> Oh, OK. The question then is: am I testing what I think I’m testing, or am I getting a false positive?

 

> Sort of like this, but for acceptance tests: http://thedailywtf.com/articles/unit-tested.aspx

Haha! Pretty much, yeah :)
 

> Seriously, this is an area of concern for me as well. Suppose that you have a set of acceptance tests that provide adequate coverage. Now the product owner changes a requirement that cuts across several stories; how do you verify that ALL the scenarios that need to change have changed? I think that type of thing can’t be automated. You need to audit your scenarios to ensure that they’re right.

In the Twisted project I mentioned, I dealt with this sort of thing. While I wasn't using Cucumber, or consciously doing BDD, I was maintaining a document of scenarios which were implemented by unittest-based Python tests. When requirements changed, or supposedly did, I would review my scenarios.

Sometimes there was also uncertainty as to whether my scenario definitions were sufficient, due to reports of erratic behaviour. Then I could locate the corresponding scenarios, run the associated test cases, and use the application and test logs as evidence of what I had verified (perhaps that there wasn't any bug) :) Logs were also appreciated by my semi-technical clients (who were technical, but not programmers).
 

> I’d hope that the scenarios are written to make that auditing easier to do, so that you can review the scenarios rather than the log files. After all, the scenarios are supposed to be a communication tool between the engineers and the stakeholders. If you have to look at the log files to see if a scenario is doing what it is supposed to, then the scenarios aren’t serving that purpose.


Well, the Cucumber scenarios should be documentation enough in themselves, I *would* think. However, I can imagine that some scenarios might not be specific enough, so you would have to review the step definitions in detail (and revise as necessary).

Arve

Jason Mavandi

Mar 11, 2014, 2:05:32 PM
to cu...@googlegroups.com
I know this is a few years late, but I am working at a job where we use Cucumber to run our tests. It is super annoying to run them all, so I am making something to run all the tests and give a heatmap of what is failing most. If I get it working really well I might make it into a Ruby gem; currently it is just a quick Ruby file.

Check it out if it helps. Let me know what would make it better.

Matt Wynne

Mar 12, 2014, 7:08:35 PM
to cu...@googlegroups.com
Wow, that’s a lot of code! Thanks for sharing it.

What does the output look like?



Jason Mavandi

Mar 18, 2014, 6:13:43 PM
to cu...@googlegroups.com
The terminal output shows each test case being run, with a green Passed or red Failed after each case, telling you how many cases have passed and failed along the way, and which of how many you are on.

It also creates two log files:
(1) A summary file, which prints three arrays: the list of all tests being run, then the passes, then the fails, so the tool can rerun all unpassed tests rather than rerunning everything again.
(2) A failure file. This is my favorite: it first prints a list of all failed steps, with a count of how many times each step failed, so you can fix the most problematic scenarios all at once. Then it lists every failed scenario with just the failure lines, so you can see exactly where the problem was. A sketch of that tally is below.
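
The failed-step tally could look something like this (a hypothetical sketch of the idea, not Jason's actual code; the log format is assumed):

    # Count how many times each failed step appears, most frequent first.
    # Assumes one failed step per line in failures.log.
    tally = Hash.new(0)
    File.readlines('failures.log').each { |line| tally[line.strip] += 1 }
    tally.sort_by { |_, count| -count }.each do |step, count|
      puts format('%3dx  %s', count, step)
    end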

Try it a few times and tell me what I can do to make it better. Thanks so much.