New RSpec formatter


Andrew Hodgkinson

May 15, 2020, 3:13:11 AM
to rspec
Thought some people might be interested in an open source (MIT licence) RSpec formatter we wrote at RIP Global to help us see how far along CI is on longer test suites.


The example test suite for documentation includes various deliberate failure cases so you can see how they look. A passing suite is simple:

[   22] Doggo
[   22]   #message
[01/22]     logs directly if an example-start occurs unexpectedly
[02/22]     logs directly with no example running
[03/22]     logs in context if an example is running
[   22]   example groups
[   22]     #example_group_finished
[04/22]       does not decrement below zero
[05/22]       decrements group level
[   22]     #example_group_started
[06/22]       does not output a separator below top level
[07/22]       outputs an indication of group progress

...etc
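For anyone curious how counter-prefixed output like the above can be produced, here is a minimal, hypothetical sketch of a custom RSpec formatter. This is not the gem's actual implementation; the class name and formatting details are illustrative. RSpec formatters receive an output stream in their initializer and are notified of the events they register for:

```ruby
# Minimal sketch of a progress-counting RSpec formatter. The class name
# and output format are illustrative only, not the real gem's code.
class ProgressCountFormatter
  def initialize(output)
    @output = output
    @done   = 0
  end

  # StartNotification#count gives the total number of examples.
  def start(notification)
    @total = notification.count
    @width = @total.to_s.length
  end

  def example_passed(notification)
    emit(notification.example.description)
  end

  def example_failed(notification)
    emit("FAILED - #{notification.example.description}")
  end

  private

  # Prefix each line with zero-padded "[done/total]" progress.
  def emit(text)
    @done += 1
    @output.puts format("[%0*d/%d] %s", @width, @done, @total, text)
  end
end

# Register only when RSpec is loaded, so the class stays usable standalone.
if defined?(RSpec::Core::Formatters)
  RSpec::Core::Formatters.register ProgressCountFormatter,
                                   :start, :example_passed, :example_failed
end
```

Selecting such a formatter with "rspec --format ProgressCountFormatter" (or via a ".rspec" file) would then emit one counted line per example.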

A failing suite is more verbose: failure details are printed inline as they occur, so whether you are tailing logs in CI or running suites locally you can start looking at failures straight away, without needing to halt a long suite using --fail-fast or similar.

[   10] Doggo examples
[01/10]   outer passes
[02/10]   FAILED (1) - outer fails

  1) Doggo examples outer fails
     Failure/Error: expect(true).to eql(false)

       expected: false
            got: true

       (compared using eql?)

       Diff:
       @@ -1,2 +1,2 @@
       -false
       +true

     # ./spec/example/doggo_spec.rb:29:in `block (2 levels) in <top (required)>'

[03/10]   PENDING - outer is pending with xit
[04/10]   FAILED (2) - outer is pending with a custom message

  2) Doggo examples outer is pending with a custom message FIXED
     Expected pending 'custom message' to fail. No error was raised.
     # ./spec/example/doggo_spec.rb:35

[   10]   in a context
[   10]     with a nested context
[05/10]       passes
[06/10]       FAILED (3) - fails

  3) Doggo examples in a context with a nested context fails
     Failure/Error: expect(true).to eql(false)

       expected: false
            got: true

       (compared using eql?)

       Diff:
       @@ -1,2 +1,2 @@
       -false
       +true

     # ./spec/example/doggo_spec.rb:12:in `block (4 levels) in <top (required)>'

[07/10]       PENDING - is pending with xit
[08/10]       FAILED (4) - is pending with a custom message

  4) Doggo examples in a context with a nested context is pending with a custom message FIXED
     Expected pending 'custom message' to fail. No error was raised.
     # ./spec/example/doggo_spec.rb:18

[   10]   test count
[09/10]     is taken to 9
[10/10]     is taken to 10, showing leading zero pad formatting
Hope someone else finds this useful :-)

-- 
Andrew Hodgkinson
Senior Developer | API Specialist

A:    Level 2, 40 Taranaki Street, Wellington 6011
NZBN: 9429030851955

Malini Gowda

May 15, 2020, 3:35:11 AM
to rs...@googlegroups.com
Hi All,

Is there any gem which generates a parallel report in RSpec when tests are executed in parallel? The report should be generated per file.

Regards,
Malini.SM


Andrew Hodgkinson

May 16, 2020, 7:10:59 PM
to rs...@googlegroups.com
On 15 May 2020, at 19:34, Malini Gowda wrote:

> Is there any gem which generates parallel report in rspec when
> executed in parallel?

Not that I know of. We investigated a few options for parallel test
execution and stuck with one for a while, but in the end all the
solutions we tried were unreliable. Ironically, given that the only
reason the gems exist is to support concurrent test execution, none of
them seemed to deal with concurrency properly; they suffered various
race conditions or general oversights. Interleaved or split logged
output and incorrect or missing 'rcov' coverage were common problems.

We chose instead to split our test suite across parallel CI runs:
currently one run executes system specs and another executes everything
else. That has been our least-worst option so far, and certainly a lot
easier than trying to remote-debug complex timing issues in complex
gems, or developing our own gem that might well have been no better
anyway. We still only get a good RCov report and complete logging when
running in series, which local development always does.
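As a concrete sketch of that kind of split, a CI configuration could define two parallel jobs divided by path. Everything here (job names, paths, CI syntax) is a hypothetical illustration, not RIP Global's actual setup; only --exclude-pattern is a standard RSpec CLI option:

```yaml
# Hypothetical CI config fragment: split the suite into two parallel jobs.
jobs:
  system_specs:
    script:
      - bundle exec rspec spec/system
  other_specs:
    script:
      # Exclude the system specs run by the other job.
      - bundle exec rspec --exclude-pattern "spec/system/**/*_spec.rb"
```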