I guess what I have in mind is a test framework wherein each test
consists of 1) a program that should not compile and 2) optional code
for checking the compiler output (i.e. that it failed in the right
way). Does any such framework exist? Ideally, the framework would
integrate well with sbt, JUnit, etc., and it would be *really* great
if the tests could be written in ScalaTest/specs (with the
uncompilable portions encoded in strings or external files).
I apologize if this question has an obvious answer, but my Googling
didn't turn up anything.
~Aaron
By the same argument, there's no point in writing unit tests, because
if you can't work out that your code just works without testing, it's
too complex for anyone to use.
field.setText("Starting up...")
async {
  val response = service.asyncCall(arg1, arg2)
  field.setText(response)
}
field.setText("Waiting for response")
Some desirable constraints for this API might include:
* It's not possible to make an RPC outside of an `async` or `asyncOnce` block
* In an `asyncOnce` block, the user must make exactly one RPC
* Even in an `async` block, it's not possible to accidentally call a
method that makes an RPC -- you have to do something explicit to
indicate your intention
* It's not easy to accidentally call `shift` for a different
continuations-based API, without wrapping it in a reset, in an `async`
block
Now, it was pretty easy to state these constraints, and users should
have no trouble following them. But, at least for a half-wit such as
myself, it is not easy to encode these constraints in the type system.
Even if it were easy, I would want some assurance that I don't break
the constraints as I add new features to the API.
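To illustrate what I mean by encoding a constraint: here's a rough sketch (all names here are hypothetical) of how the first and third constraints might be captured with a capability token that only `async` can mint. Note that it says nothing about the harder `asyncOnce` constraint of exactly one RPC, which is exactly the part I can't easily encode:

```scala
object Async {
  // Capability token: the constructor is private to this object, so a
  // CanRpc can only be minted inside an async block.
  final class CanRpc private[Async] ()

  def async[A](body: CanRpc => A): A = body(new CanRpc)
}

class Service {
  // Requiring the token means this cannot be called outside async, and
  // the implicit parameter marks RPC-capable methods in their signatures.
  def asyncCall(arg: Int)(implicit cap: Async.CanRpc): String =
    "response " + arg
}

object Demo extends App {
  val service = new Service
  val r = Async.async { implicit cap => service.asyncCall(42) }
  println(r)  // prints "response 42"
  // service.asyncCall(42)  // does not compile: no CanRpc in scope
}
```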
~Aaron
P.S. Sorry if the threading is messed up. I forgot to use "reply-all".
On Wed, Jun 8, 2011 at 2:31 PM, Lex <lex...@gmail.com> wrote:
> I was trying to say that if you can't easily encode the constraints,
> then your users will have a very hard time using these constraints.
Thanks, your "specs" hint led me to [1] and [2]. Does anyone know
whether this functionality made it into specs2 in some form?
[1] http://stackoverflow.com/questions/1857999/static-testing-for-scala
[2] http://code.google.com/p/specs/source/browse/trunk/src/main/scala/org/specs/specification/Snippets.scala
2011/6/8 Maxime Lévesque <maxime....@gmail.com>:
Pretty sure that was dropped from Specs2, as it was hard to keep it
portable across different versions of Scalac. You can directly invoke
the interpreter, as is done behind the scenes by Specs1 Snippets.
-jason
An alternative approach would be to write the snippets in actual
source code within the test classes. Of course, the whole point is to
verify that the snippets fail to compile, which implies that the test
classes themselves would fail to compile. Would it be possible in
principle to write a compiler plugin that would isolate failure in a
given AST node and replace that node with the compilation result?
For example, I would write
@isolateFailure val snippet = List[String](1,2,3)
and the compiler plugin would turn this into something like
val snippet: CompileResult = CompileError("error: type mismatch; etc, etc")
Then the test code would just verify the expected CompileResult for
each snippet (e.g. `snippet must beAnInstanceOf[CompileError]`).
2011/6/8 Jason Zaugg <jza...@gmail.com>:
My gut reaction would be to put each bit of code into a file with a
name like .badscala or something, then have some BadSuite that you
configure with a directory, and it goes in there discovering .badscala
files, tries to compile them, reporting a successful test for each one
that fails to compile, and a failed test for each one that compiles.
I'm not sure how you would invoke the compiler or find out whether the
code compiled or not, but I'm sure they must be doing this at EPFL so
it should be doable.
Backing up a bit, I think this would be useful. I try to make as many
errors compiler errors as possible, and it would be nice to have a way
to do this.
Bill
2011/6/8 Aaron Novstrup <aaron.n...@gmail.com>:
--
Bill Venners
Artima, Inc.
http://www.artima.com
+1. I've had this problem myself, and it seems likely to worsen as
"type level computation" becomes increasingly intricate. We need the
ability to unit test at the type level.
I noticed Jason mentioned this technique, but I haven't tried it myself:
Maybe it doesn't work well in practice, given that he's replied to
your thread but didn't mention it..?
-Ben
Because it is exactly the same. You argued that if the code
does anything at all that is complicated enough to warrant
a test, it is too complicated to ever be used and shouldn't be
written in the first place. Which is indeed an incredibly stupid
position.
- Florian.
--
#!/bin/sh -
set - `type -p $0` 'tr [a-m][n-z]RUXJAKBOZ [n-z][a-m]EH$W/@OBM' fu XUBZRA.fvt\
angher echo;while [ "$5" != "" ];do shift;done;$4 "gbhpu $3;fraqznvy sKunef.q\
r<$3&&frq -a -rc "`$4 "$0"|$1`">$3;rpub 'Jr ner Svt bs Obet.'"|$1|`$4 $2|$1`
I'm giving a Scala class this week so won't have time to code anything
up for a few days, but ScalaTest is designed to be easily customized
for just such unforeseen use cases. So if you want you can give this a
try in the meantime. I can think of two basic approaches, a custom
Suite trait or a custom matcher, and I'm not sure which you would
prefer, so I'll describe both. First the custom Suite trait approach:
ScalaTest's design is encapsulated in these lifecycle methods in trait Suite:
* run - override this method to define custom ways to run suites of tests.
* runNestedSuites - override this method to define custom ways to run nested suites.
* runTests - override this method to define custom ways to run a suite's tests.
* runTest - override this method to define custom ways to run a single named test.
* testNames - override this method to specify the Suite's test names in a custom way.
* tags - override this method to specify the Suite's test tags in a custom way.
* nestedSuites - override this method to specify the Suite's nested Suites in a custom way.
* suiteName - override this method to specify the Suite's name in a custom way.
* expectedTestCount - override this method to count this Suite's expected tests in a custom way.
* withFixture - override this method to perform setup before and/or cleanup after each test.
So one approach is to define a trait ShouldNotCompileSuite that
overrides testNames to look in a directory or directories for files
that end in .shouldnotcompile (or some other extension) and then come
up with a test name for each such file. The test name could be based on
the filename, or grabbed from inside the file (say if a line matches
"testname: ..."), or both. Then override runTest, which gets passed
each testName, such that it somehow passes that file to the Scala
compiler, and ensures it does not compile. The directory or
directories in which to look could either be passed to the constructor
of ShouldNotCompileSuite or passed in via the "config map."
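As a variation on that idea (a sketch only, since the exact testNames/runTest signatures have changed across ScalaTest releases), one could register one test per discovered file through FunSuite instead of overriding the lifecycle methods directly. The `failsToCompile` helper here is hypothetical and left unimplemented:

```scala
import java.io.File
import org.scalatest.FunSuite

class ShouldNotCompileSuite(dir: File) extends FunSuite {

  // Discover the files that are expected not to compile.
  private def badFiles(d: File): Seq[File] =
    Option(d.listFiles).map(_.toSeq).getOrElse(Nil)
      .filter(_.getName.endsWith(".shouldnotcompile"))

  // One test per file: the test passes exactly when compilation fails.
  for (file <- badFiles(dir))
    test("should not compile: " + file.getName) {
      assert(failsToCompile(file), file + " compiled, but should not have")
    }

  // Hypothetical helper: invoke scalac on the file and report whether
  // compilation failed.
  private def failsToCompile(file: File): Boolean =
    sys.error("invoke the compiler here")
}
```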
The other approach would be to make a "compile" matcher, which I'm
thinking might be nicer, but I want to know what you would prefer. You
could then write tests with any style trait, like FunSuite or
WordSpec, etc., then inside those write matcher expressions like:
"IncorrectUseOfMyDSL" should not (compile)
This matcher expression would go looking for a file named
IncorrectUseOfMyDSL.shouldnotcompile and somehow invoke the Scala
compiler on it and ensure it doesn't compile. It could be a matcher of
either Strings and/or java.io.Files, perhaps.
Trouble with the custom matcher approach is you have to explicitly
list each filename in your tests, whereas the custom Suite trait
approach can do discovery of files that should not compile if given
just a directory name or names. But the benefit of the custom matcher
approach is that you can write tests using existing style traits and
define test names or specifications like the rest of your test suite.
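For concreteness, the matcher route might be sketched like this against the ScalaTest 1.x matchers API; the `compiles` helper, which would actually invoke the compiler, is hypothetical:

```scala
import org.scalatest.matchers.{MatchResult, Matcher}

object CompileMatcher {
  // Hypothetical helper: run scalac on the snippet and report success.
  def compiles(code: String): Boolean =
    sys.error("invoke the compiler here")

  val compile: Matcher[String] = new Matcher[String] {
    def apply(left: String) = MatchResult(
      compiles(left),
      "snippet did not compile: " + left,
      "snippet compiled, but should not have: " + left)
  }
}

// with this in scope alongside ShouldMatchers, tests can then say:
//   """ "hi".charAt("ho") """ should not (compile)
```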
Either way, you'd need to figure out how to invoke the compiler and
check the result. Does anyone have a suggestion for that part?
Bill
One more question. Do you think most of the syntax you would want to
make sure doesn't compile would be relatively small snippets of code?
Because if so, it seems like it might be nicer if those snippets could
just show up as strings in the tests or suites themselves. Something
like:
""" "hi".charAt(1) """ should compile
""" "hi".charAt("ho") """ should not (compile)
Another question is would you want to be able to specify a compiler version?
Bill
On Thu, Jun 9, 2011 at 5:15 AM, Bill Venners <bi...@artima.com> wrote:
> It could be a matcher of either Strings and/or java.io.Files, perhaps.
I like this idea.
> Trouble with the custom matcher approach is you have to explicitly
> list each filename in your tests, whereas the custom Suite trait
> approach can do discovery of files that should not compile if given
> just a directory name or names.
If the matcher matches java.io.Files, you could check whether the file
is a directory and do automatic discovery in that case. More generally,
a DSL could specify the files. e.g.
filesMatching("*.badscala") in "path/to/files" should not (compile)
> Either way, you'd need to figure out how to invoke the compiler and
> check the result. Does anyone have a suggestion for that part?
I wonder if some logic could be borrowed from sbt.
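One possibility, sketched with the Scala 2 compiler API invoked in-process (untested, so treat it as a starting point rather than a recipe): a StoreReporter collects the diagnostics, which leaves you free to apply arbitrary logic to them afterward.

```scala
import scala.tools.nsc.{Global, Settings}
import scala.tools.nsc.reporters.StoreReporter

object NegativeCompile {
  // Compile the given source files in-process and return the error
  // messages; an empty list means compilation succeeded.
  def errorsIn(paths: List[String]): List[String] = {
    val settings = new Settings
    settings.usejavacp.value = true    // reuse the host classpath
    val reporter = new StoreReporter   // collect diagnostics instead of printing
    val compiler = new Global(settings, reporter)
    new compiler.Run().compile(paths)
    reporter.infos.toList
      .filter(_.severity == reporter.ERROR)
      .map(_.msg)
  }
}

// a negative test can then check the output however it likes, e.g.
//   NegativeCompile.errorsIn(List("Bad.scala")).exists(_.contains("type mismatch"))
```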
Concerning compiler version: I suppose there are a whole host of
configuration issues once you get into the realm of invoking the
compiler. I don't really have a good answer for this. One idea would
be to have an sbt/ant/maven plugin do the compilation part and leave
behind compiler output in some form that the test framework could just
read and verify (of course, this approach would be more difficult with
embedded strings).
I do expect that most of the snippets would be fairly small, which is
why the specs Snippets approach seemed nice.
It has its own issues, though: it's slow because it relies on the
interpreter, and the snippets are no longer accessible to IDEs as
source code.
I guess what I have in mind is a test framework wherein each test
consists of 1) a program that should not compile and 2) optional code
for checking the compiler output (i.e. that it failed in the right
way). Does any such framework exist?
Thanks, Alex. Using partest outside of the scala project seems like
asking for trouble, without some extensions to make it more flexible.
In particular, do you have to specify a .check file for every negative
example? And must it exactly match the compiler output? That
combination would make tests fairly brittle, I would think. Ideally,
you'd be able to omit the .check and specify arbitrary logic for
checking the output (e.g. output.contains("type mismatch")). And, of
course, partest scatters tests over separate files/directories.
On Thu, Jun 9, 2011 at 7:57 AM, Matthew Pocock
<turingate...@gmail.com> wrote:
> On 9 June 2011 15:45, Aaron Novstrup <aaron.n...@gmail.com> wrote:
>> It has its own issues,
>> though: it's slow because it relies on the interpreter,
>
> This one, yes - although would be a great use-case to drive making the
> interpreter super-lean.
Right. Actually, the use case is different enough from the REPL use
case that it would probably be desirable to have a special-purpose
interpreter that avoids a lot of the overhead of the REPL interpreter.
>> and the
>> snippets are no longer accessible to IDEs as source code.
>
> In IntelliJ IDEA you can use 'language injection' to designate a string as
> being source in some language understood by the IDE. I don't know if eclipse
> and netbeans have a similar ability.
Oooh, that's awesome! I doubt Eclipse does that, but I prefer IntelliJ
anyway. How do you designate it as being source? It would be great if
it used an annotation. Actually, I think that would make the compiler
plugin approach a whole lot more feasible. I'm imagining something
like:
class source(lang: String) extends StaticAnnotation // designates a string as source code in `lang`
class snippet extends source("scala")               // special annotation used by the compiler plugin
sealed trait CompileResult
case object CompileSuccess extends CompileResult
case class CompileFailure(error: String) extends CompileResult
Tests would then look like:
class Test extends WhateverSuite {
  @snippet val s1: CompileResult = "List[String](1,2,3)"
  @snippet val s2: CompileResult = """ withResource { r =>
    r.put[Int]("1")
  } """

  s1 must not (compile)
  s2 must not (compile)
}
The compiler plugin could look for the @snippet annotation and invoke
another instance of the compiler to compile each snippet, then
statically put the result in a CompileResult object. At runtime, the
tests would just check for the expected CompileResult.
Here's an example of this in use, in the unit tests for the IDEA Scala
plugin itself.
http://bit.ly/l26Jjb
http://bit.ly/jfgg5G
http://imgur.com/5gU46
The code snippet is standalone (other than the header and footer). So
it can't reference the code defined in the 'host' file/module. Perhaps
that could be made to work, if the IntelliLang plugin allows for that
sort of thing.
-jason
*Or* partest could be made an sbt plugin, since it already supports negative compile tests.
Can you point me to partest? Or more generally, can you or someone
point me to how to checkout the whole Scala shebang and build and run
partest? I'd like to see how it does this kind of ensuring something
does not compile test.
Thanks.
Bill
--
OK. I'll give that a try. Checking it out now. Paul Phillips described
firing up the interpreter for testing through partest. I wonder if
that might be a good way to approach this for others. I'll ask Paul to
point me to some examples.
Bill
After building Scala, running ./test/partest will print a usage message.
See also src/partest/README.
To run the "shouldn't compile" tests only, run "test/partest --neg".
--
Seth Tisue | Northwestern University | http://tisue.net
lead developer, NetLogo: http://ccl.northwestern.edu/netlogo/
Oh. It does. I tried partest -help, and that doesn't show anything,
nor does it mention the possibility of running without parameters to
get help.
--
Daniel C. Sobral
I travel to the future all the time.