Running (or not running) tagged tests

Paul Butcher

Jul 8, 2010, 12:56:32 PM
to simple-b...@googlegroups.com
Some of our tests take a long time to run, so we're using ScalaTest tags to mark them as slow. I'm trying to work out how to persuade sbt to either run or not run a test depending on its tag.

I thought that this kind of thing might work, but it appears to have no effect:

> sbt tests -- -n SlowTest

Is this possible, and if so, what am I missing?

Thanks!

--
paul.butcher->msgCount++

Snetterton, Castle Combe, Cadwell Park...
Who says I have a one track mind?

http://www.paulbutcher.com/
LinkedIn: http://www.linkedin.com/in/paulbutcher
MSN: pa...@paulbutcher.com
AIM: paulrabutcher
Skype: paulrabutcher

Mark Harrah

Jul 8, 2010, 2:21:50 PM
to simple-b...@googlegroups.com
On Thursday, July 08, 2010 12:56:32 pm Paul Butcher wrote:
> Some of our tests take a long time to run, so we're using ScalaTest tags to
> mark them as slow. I'm trying to work out how to persuade sbt to either
> run or not run a test depending on its tag.
>
> I thought that this kind of thing might work, but it appears to have no
> effect:
> > sbt tests -- -n SlowTest

I don't know what 'tests' is, so I'll assume it is like test-only. If you are
running sbt from a shell, you need to escape the spaces. For example:
$ sbt 'test-only -- -n SlowTest'

I would expect that you would get an error indicating that sbt doesn't know
what to do with '--'.
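
At the interactive sbt prompt the same thing should work without any shell quoting, and -l instead of -n excludes the tagged tests rather than selecting them. A rough sketch, assuming the SlowTest tag above:

 test-only -- -n SlowTest
 test-only -- -l SlowTest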

-Mark

paulbutcher

Jul 8, 2010, 6:12:44 PM
to simple-build-tool
On Jul 8, 7:21 pm, Mark Harrah <dmhar...@gmail.com> wrote:
> > > sbt tests -- -n SlowTest
>
> I don't know what 'tests' is, so I'll assume it is like test-only.

Sorry - that was a mis-transcription. I was actually trying
"test" (without the "s").

> If you are
> running sbt from a shell, you need to escape the spaces.  For example:
>  $ sbt 'test-only -- -n SlowTest'

Ah - that was it. Thanks!

--
paul.butcher->msgCount++
Snetterton, Castle Combe, Cadwell Park...
Who says I have a one track mind?
http://www.paulbutcher.com/
LinkedIn: http://www.linkedin.com/in/paulbutcher
MSN: p...@paulbutcher.com
AIM: paulrabutcher
Skype: paulrabutcher

brett

May 7, 2012, 11:03:53 AM
to simple-b...@googlegroups.com
Hi,

I have an issue with running tagged tests with sbt. Here is some info about our test configuration:

We have a number of test tags, categorized into type and function:
Type: IntegrationTest, ComponentTest and UnitTest
Function: UtilTest, HttpTest, SerializationTest, StatusTest, etc.

Tags are defined in com.mycompany.tags.Tags.scala: object UnitTest extends Tag("com.mycompany.tags.UnitTest")

Unit tests use FlatSpec, with taggedAs (UnitTest, FunctionalTest). E.g. "Some serialization test" should "do something" taggedAs (UnitTest, SerializationTest) in {}
Component and Integration tests use FeatureSpec, and are tagged in the scenario: scenario("Accepts API ping request", IntegrationTest, StatusTest) {given() when() then()}
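
For reference, a minimal sketch of that setup (the spec class names below are made up):

import org.scalatest.{FlatSpec, FeatureSpec, Tag}

// com.mycompany.tags.Tags.scala
object UnitTest extends Tag("com.mycompany.tags.UnitTest")
object IntegrationTest extends Tag("com.mycompany.tags.IntegrationTest")
object SerializationTest extends Tag("com.mycompany.tags.SerializationTest")
object StatusTest extends Tag("com.mycompany.tags.StatusTest")

// a unit test tagged via taggedAs (hypothetical class name)
class SerializationUnitSpec extends FlatSpec {
  "Some serialization test" should "do something" taggedAs (UnitTest, SerializationTest) in {
    // ...
  }
}

// a component/integration test tagged in the scenario (hypothetical class name)
class StatusIntegrationSpec extends FeatureSpec {
  feature("API status") {
    scenario("Accepts API ping request", IntegrationTest, StatusTest) {
      // given() when() then()
    }
  }
}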

I run the test as follows, with no special configuration in my build.scala / *.sbt files:
sbt test-only -- -n UnitTest -l "IntegrationTest ComponentTest"

When the test completes, I see that all of my tests have run, including those tests tagged with IntegrationTest and ComponentTest (e.g. "ran a total of 88 tests, with 18 failures").

However, if I update the options via my build.sbt for 1 module:
testOptions in Test ++= Seq(Tests.Argument("-oDS"), Tests.Argument("-n"), Tests.Argument("UnitTest"), Tests.Argument("-l"), Tests.Argument("ComponentTest IntegrationTest" ))

and then run "sbt test-only" (with no params), it seems that all tests are loaded, with test output from all tests being displayed (with FeatureSpec, the "feature" definition is sent to output, but the individual scenarios are skipped.). However, the summary is not valid:
[error] Failed: : Total 1, Failed 1, Errors 0, Passed 0, Skipped 0
[error] Failed tests:
[error]         com.mycompany.impl.something.SomeComponentSpec

The error above is caused by an exception in the beforeAll() method, i.e. in the bootstrap before the actual tests are run. There are a number of unit tests that pass but are not reported in the summary above.

I think that for this use case, it would probably be a better idea to design the testing of the application around test class names - e.g. name endsWith "UnitSpec" for unit tests, "ComponentSpec" for component tests and "IntegrationSpec" for integration tests. How would that be configured with sbt + scalatest?

Thanks in advance.
brett

Mark Harrah

May 8, 2012, 8:57:39 AM
to simple-b...@googlegroups.com
On Mon, 7 May 2012 08:03:53 -0700 (PDT)
brett <br...@jemstep.com> wrote:

> Hi,
>
> I have an issue with running tagged tests with sbt. Here is some info about
> our test configuration:
>
> We have a number of test tags, categorized into type and function:
> Tags:
> IntegrationTest, ComponentTest and UnitTest
> UtilTest, HttpTest, SerializationTest, StatusTest, etc
>
> Tags are defined in com.mycompany.tags.Tags.scala: object UnitTest extends
> Tag("com.mycompany.tags.UnitTest")
>
> Unit tests use FlatSpec, with taggedAs (UnitTest, FunctionalTest). E.g.
> "Some serialization test" should "do something" taggedAs (UnitTest,
> SerializationTest) in {}
> Component and Integration tests use FeatureSpec, and are tagged in the
> scenario: scenario("Accepts API ping request", IntegrationTest, StatusTest)
> {given() when() then()}
>
> I run the test as follows, with no special configuration in my build.scala
> / *.sbt files:
> sbt test-only -- -n UnitTest -l "IntegrationTest ComponentTest"

If this is what you are actually running, you should get an error about -n not being a command or something similar. sbt processes a sequence of commands and not just one command at a time, so each command+arguments is passed as one argument to sbt.

sbt "test-only -- -n ..."

> When the test completes, I see that all of my tests have run, including
> those tests tagged with IntegrationTest and ComponentTest (e.g. "ran a
> total of 88 tests, with 18 failures").
>
> However, if I update the options via my build.sbt for 1 module:
> testOptions in Test ++= Seq(Tests.Argument("-oDS"), Tests.Argument("-n"),
> Tests.Argument("UnitTest"), Tests.Argument("-l"),
> Tests.Argument("ComponentTest IntegrationTest" ))

As a side note, Tests.Argument has a repeated parameter, so you can put all of those arguments in one call. It should be equivalent to passing them separately.
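
That is, roughly (a sketch equivalent to the options above):

testOptions in Test += Tests.Argument("-oDS", "-n", "UnitTest", "-l", "ComponentTest IntegrationTest")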

> and then run "sbt test-only" (with no params), it seems that all tests are
> loaded, with test output from all tests being displayed (with FeatureSpec,
> the "feature" definition is sent to output, but the individual scenarios
> are skipped.). However, the summary is not valid:
> [error] Failed: : Total 1, Failed 1, Errors 0, Passed 0, Skipped 0
> [error] Failed tests:
> [error] com.mycompany.impl.something.SomeComponentSpec
>
> the error above is caused by an exception in the "beforeAll()" method, i.e.
> in the bootstrap prior to the actual test being run. There are a number of
> unit tests that pass, but that are not reported in the summary above.
>
> I think that for this use case, it would probably be a better idea to
> design the testing of the application around test class names - e.g. name
> endsWith "UnitSpec" for unit tests, "ComponentSpec" for component tests and
> "IntegrationSpec" for integration tests. How would that be configured with
> sbt + scalatest?

This is described in the "Additional test configurations with shared sources" section on the Testing page.

https://github.com/harrah/xsbt/wiki/Testing

If you run into issues, please post a sample project or at least a build definition that shows the problem.

-Mark

Brett Cave

May 8, 2012, 9:30:26 AM
to simple-b...@googlegroups.com
On Tue, May 8, 2012 at 2:57 PM, Mark Harrah <dmha...@gmail.com> wrote:
> On Mon, 7 May 2012 08:03:53 -0700 (PDT)
> brett <br...@jemstep.com> wrote:


>
> I run the test as follows, with no special configuration in my build.scala
> / *.sbt files:
> sbt test-only -- -n UnitTest -l "IntegrationTest ComponentTest"

> If this is what you are actually running, you should get an error about -n not being a command or something similar. sbt processes a sequence of commands and not just one command at a time, so each command+arguments is passed as one argument to sbt.
 

Nope - this runs without errors. I cannot run test-only -- -n UnitTest -l "IntegrationTest ComponentTest" from within the sbt console; I can, however, pass the same parameters as arguments to my "sbt" script (as described in the sbt setup doc). The counts are not correct, though:

When I run: sbt test-only -- -n UnitTest -l "GoalTest AuthTest", I get

[error] Failed: : Total 82, Failed 17, Errors 0, Passed 62, Skipped 3
[error] Failed tests:
[error]         ....GoalComponentSpec
[error]         ....AuthSomethingUnitSpec
[error]         ....AuthProviderServiceIntegrationSpec
[error]         ....AuthComponentSpec
[error]         ....AuthAccessUnitSpec


This is because the tests above have a beforeAll() method that fails before the tagged tests run, so the suite fails; I understand why this happens.


 sbt "test-only -- -n ..."

Using this syntax in my shell, I can't specify multiple tags, either by escaping or by using single quotes.
 

>
> However, if I update the options via my build.sbt for 1 module:
> testOptions in Test ++= Seq(Tests.Argument("-oDS"), Tests.Argument("-n"),
> Tests.Argument("UnitTest"), Tests.Argument("-l"),
> Tests.Argument("ComponentTest IntegrationTest" ))

> As a side note, Tests.Argument has a repeated parameter, so you can put all of those arguments in one call. It should be equivalent to passing them separately.

Ok great. Thanks :)

> This is described in the "Additional test configurations with shared sources" section on the Testing page.
>
> https://github.com/harrah/xsbt/wiki/Testing

Eventually got my head around it; it was more Scala familiarity that I was lacking. Here is my build.scala (myproj/project/build.scala). I have had to duplicate the configuration between the root project definition and the server module to get things working when running the test targets from the root. I know I can remove the duplicated code by using functions / traits somehow, but haven't figured out how yet. I'm sure it will make a few people cringe. Suggestions welcome :)

object B extends Build {
  lazy val root = Project(id = "myproj", base = file("."))
    .aggregate(commons, model, server)
    .configs( UnitTest )
    .configs( ComponentTest )
    .configs( IntegrationTest )
    .settings( inConfig(UnitTest)(Defaults.testTasks) : _*)
    .settings( inConfig(ComponentTest)(Defaults.testTasks) : _*)
    .settings( inConfig(IntegrationTest)(Defaults.testTasks) : _*)
    .settings(
      libraryDependencies += specs,
      testOptions in UnitTest := Seq(Tests.Filter(unitTestFilter)),
      testOptions in ComponentTest := Seq(Tests.Filter(componentTestFilter)),
      testOptions in IntegrationTest := Seq(Tests.Filter(integrationTestFilter))
    )

  lazy val commons = Project(id = "myproj-commons",
                             base = file("myproj-commons"))

  lazy val model = Project(id = "myproj-model",
                           base = file("myproj-model")) dependsOn(commons)

  lazy val server = Project(id = "myproj-server",
    base = file("myproj-server"))
    .dependsOn(model,commons)
    .configs( UnitTest )
    .configs( ComponentTest )
    .configs( IntegrationTest )
    .settings( inConfig(UnitTest)(Defaults.testTasks) : _*)
    .settings( inConfig(ComponentTest)(Defaults.testTasks) : _*)
    .settings( inConfig(IntegrationTest)(Defaults.testTasks) : _*)
    .settings(
      libraryDependencies += specs,
      testOptions in UnitTest := Seq(Tests.Filter(unitTestFilter)),
      testOptions in ComponentTest := Seq(Tests.Filter(componentTestFilter)),
      testOptions in IntegrationTest := Seq(Tests.Filter(integrationTestFilter))
    )

  def unitTestFilter(name: String): Boolean = name endsWith "UnitSpec"
  def componentTestFilter(name: String): Boolean = name endsWith "ComponentSpec"
  def integrationTestFilter(name: String): Boolean = name endsWith "IntegrationSpec"

  lazy val UnitTest = config("unitTest") extend(Test)
  lazy val ComponentTest = config("componentTest") extend(Test)
  lazy val IntegrationTest = config("integrationTest") extend(Test)

  lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.9" % "test"

}
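
With that in place, each category should be runnable from the sbt prompt via its configuration, something like (exact names may depend on the sbt version):

 unitTest:test
 componentTest:test
 integrationTest:test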




> If you run into issues, please post a sample project or at least a build definition that shows the problem.

Thanks
Brett


> -Mark


Mark Harrah

May 10, 2012, 6:51:41 PM
to simple-b...@googlegroups.com
On Tue, 8 May 2012 15:30:26 +0200
Brett Cave <br...@jemstep.com> wrote:

> On Tue, May 8, 2012 at 2:57 PM, Mark Harrah <dmha...@gmail.com> wrote:
>
> > On Mon, 7 May 2012 08:03:53 -0700 (PDT)
> > brett <br...@jemstep.com> wrote:
> >
> >
> > >
> > > I run the test as follows, with no special configuration in my
> > build.scala
> > > / *.sbt files:
> > > sbt test-only -- -n UnitTest -l "IntegrationTest ComponentTest"
> >
> > If this is what you are actually running, you should get an error about -n
> > not being a command or something similar. sbt processes a sequence of
> > commands and not just one command at a time, so each command+arguments is
> > passed as one argument to sbt.
> >
>
> Nope - this runs without errors.

Ok, I see now. That command line first runs 'test-only' without arguments (for the reason mentioned above), which fails because some tests fail. Because test-only fails, the remaining "commands" aren't run. The comment about sbt processing a sequence of commands still holds.
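
In other words, the unquoted command line

 $ sbt test-only -- -n UnitTest -l "GoalTest AuthTest"

reaches sbt as the separate commands test-only, --, -n, UnitTest, -l, and GoalTest AuthTest; test-only runs all tests, some of them fail, and the rest of the sequence never runs.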

> I cannot run *test-only -- -n UnitTest -l "IntegrationTest ComponentTest"*
> from within the sbt console, I can however pass the same parameters as
> arguments to my "sbt" script (as described in the sbt setup doc).

All of this is Scala code and much of it is just constructing sequences, so you can factor that into methods. For example,

> object B extends Build {
> lazy val root = Project(id = "myproj",base = file("."))
> .aggregate(commons, model, server)
> .configs( UnitTest )
> .configs( ComponentTest )
> .configs( IntegrationTest )

The .configs calls can be combined into:

.configs( UnitTest, ComponentTest, IntegrationTest )

because configs has a repeated parameter:

def configs(cs: Configuration*): Project

> .settings( inConfig(UnitTest)(Defaults.testTasks) : _*)
> .settings( inConfig(ComponentTest)(Defaults.testTasks) : _*)
> .settings( inConfig(IntegrationTest)(Defaults.testTasks) : _*)
> .settings(
> libraryDependencies += specs,
> testOptions in UnitTest := Seq(Tests.Filter(unitTestFilter)),
> testOptions in ComponentTest := Seq(Tests.Filter(componentTestFilter)),
> testOptions in IntegrationTest := Seq(Tests.Filter(integrationTestFilter))
> )

These can be put in a method that constructs a Seq[Setting[_]]:

def testSettings: Seq[Setting[_]] =
  tests(UnitTest, unitTestFilter) ++
  tests(ComponentTest, componentTestFilter) ++
  tests(IntegrationTest, integrationTestFilter) ++ Seq(
    libraryDependencies += specs
  )

def tests(config: Configuration, filter: String => Boolean): Seq[Setting[_]] =
  inConfig(config)(Defaults.testTasks) ++ Seq(
    testOptions in config := Seq(Tests.Filter(filter))
  )

and then reused for root and server in a single .settings call:

.settings( testSettings : _*)
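
For example, the server project (and similarly root) then becomes something like:

  lazy val server = Project(id = "myproj-server", base = file("myproj-server"))
    .dependsOn(model, commons)
    .configs( UnitTest, ComponentTest, IntegrationTest )
    .settings( testSettings : _*)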

Seth Tisue

Aug 9, 2012, 4:33:23 PM
to simple-b...@googlegroups.com
On Thursday, May 10, 2012 6:51:41 PM UTC-4, Mark Harrah wrote:
These can be put in a method that constructs a Seq[Setting[_]]:
and then reused for root and server in a single .settings call:

  .settings( testSettings : _*)

So obvious!

In retrospect, that is...

You don't want to know all the crazy things I tried before I found this thread :-)

--
Seth Tisue | Northwestern University | http://tisue.net
lead developer, NetLogo: http://ccl.northwestern.edu/netlogo/


 