Spec over Iterable


Dirk Toewe

Dec 11, 2018, 11:46:13 AM
to Jasmine
Hi everyone,

Is there a way to run a spec on a (large) Iterable without either
  • blowing up the number of specs, or
  • having to write an async spec with await new Promise( f => setTimeout(f) ) to keep the browser responsive?
What I have in mind is something like:

describe('isEven', () => {
  function* evenPositiveIntegers() {
    for( let i=0; i < Number.MAX_SAFE_INTEGER; i+=2 )
      yield i
  }

  forEachItemIn( evenPositiveIntegers() ).it('should return true', item => {
    expect( isEven(item) ).toBe(true)
  })
})

Something like this would be super useful, e.g. for
  • exhaustive testing with every possible input/state
  • randomized testing to uncover bugs you've never even dreamt of.
I've already checked the Jasmine docs and couldn't find a feature like that. One possible, simple implementation of such functionality would be:

export const forEachItemIn = items => ({
  it: (specText, specFn) => it(specText, async () => {
    for( const item of items )
    {
      await new Promise( done => setTimeout(done) )
      specFn(item)
    }
  })
})

This solution, however, is missing some quite awesome features:
  • Spec failures could contain the item for which they failed, for simpler debugging.
  • Config settings
    • Timeout per item (or no timeout in NodeJS)
    • Limit on the number of items for quick tests (possibly with random sampling)
  • Only use setTimeout if the spec is taking too long (crucial for performance)
The first bullet point would be especially helpful. Does anybody have ideas on how to add a better error message?
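
One rough idea, sketched here only for illustration (untested; it only helps when expectation failures are configured to throw, since a plain failed expect just records the failure instead of throwing):

export const forEachItemIn = items => ({
  it: (specText, specFn) => it(specText, async () => {
    for( const item of items )
    {
      await new Promise( done => setTimeout(done) )
      try {
        specFn(item)
      }
      catch(err) {
        // append the offending item to the error message for easier debugging
        err.message += '\n  failed for item: ' + JSON.stringify(item)
        throw err
      }
    }
  })
})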

Help would be appreciated.
Dirk



Gregg Van Hove

Dec 13, 2018, 8:28:29 PM
to jasmi...@googlegroups.com
The way that I have typically handled this type of testing in Jasmine is to write a loop that just calls Jasmine's `it` function. Something like:

for(let i = 0; i < Number.MAX_SAFE_INTEGER; i++) {
  it('is true for ' + i, function() {
  });
}

This will produce a large number of specs in your suite, but I'm curious to hear how that is impacting you or your suite. Doing it this way allows Jasmine's existing optimizations and failure messages to (I believe) provide most of the awesome features you're looking for.

Hope this helps. Thanks for using Jasmine!

- Gregg


Dirk Toewe

Dec 14, 2018, 12:41:54 AM
to jasmi...@googlegroups.com

Hi Gregg,

Thank you for your answer. I've seen and considered a similar answer on Stack Overflow (https://stackoverflow.com/questions/25419201/run-jasmine-spec-multiple-times-in-one-execution).

There is, however, a concern with that solution: as far as I understand it, Jasmine, or at least Karma-Jasmine, creates all the specs before executing them (otherwise, how would it count and randomize the specs?). So for each test to be run, a string and a function object are stored. Let's say in total we have 16 bytes per test. For the example I gave, that works out to roughly 70 petabytes of overhead ((2^53 - 1)/2 tests × 16 bytes; https://www.wolframalpha.com/input/?i=(2%5E53+-+1)*16bytes+%2F+2).

Granted, the example is a little contrived. The number of tests in my case will probably only go into the hundreds of thousands, but then again I understated the memory overhead per test above, especially when using Karma-Jasmine. Just consider how much data is logged while debugging in Karma-Jasmine's browser window.

A lot of great reporters can no longer be used either. The Karma Spec Reporter, for example, which normally gives a great, structured view of the test results, would produce an abominable wall of text.

Considering the enormous number of combinations that would be necessary to exhaustively test even the simplest of functions, and considering how time-consuming and mentally draining it can be to cover every edge case with hand-crafted examples, I feel I may not be the only one who wants to run large numbers of randomized tests.

In the meantime, I've come up with a somewhat acceptable solution, which can be found in the attachment.

P.S.: Apologies for the duplicate post. I couldn't find the initial post in my Outbox and assumed that I had accidentally discarded the e-mail instead of sending it.


Thank you for the support and for the Jasmine testing framework.

Dirk

forEachItemIn.js.txt

Gregg Van Hove

Dec 14, 2018, 8:10:29 PM
to jasmi...@googlegroups.com
You might also check out the new `withContext` helper (https://jasmine.github.io/api/edge/matchers.html#withContext) for a way to add the description you want for each expectation.
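
A minimal sketch of how that could look inside the per-item loop of the earlier forEachItemIn example (names reused from that example, not tested):

expect( isEven(item) ).withContext('item: ' + item).toBe(true)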

-Gregg

Dirk Toewe

Dec 15, 2018, 3:23:58 PM
to jasmi...@googlegroups.com

Thank you, yes, that was exactly what I was looking for.
