Full test output?

Aaron Jacobs

Nov 21, 2010, 7:21:11 AM
to Ekam
It seems that by default ekam stores test output in a log file, and
prints something like the following in the event of a failure:

full log: tmp/test_test.log

I'd prefer it to print all of a test's failure messages when that test
fails. Is there a way to make it do so? To be honest, I'm not good
enough with bash to figure out the .ekam-rule file.

Thanks,
Aaron

Kenton Varda

Nov 21, 2010, 3:45:29 PM
to Aaron Jacobs, Ekam
Wow, I didn't know anyone else was using Ekam.  :)

The reason I did it this way is that Ekam always builds and runs everything it can, so if you make a change that breaks N tests, you're going to see N full test logs, not just one.  It can become too much information to sift through at once, and can be particularly annoying in continuous build mode.

That said, if you wanted to change the behavior, you just need to edit the third-to-last line of test.ekam-rule.  Change this:
  egrep 'FAIL|ERROR|FATAL' "$TEST_LOG" >&2
to just:
  cat "$TEST_LOG" >&2

We could make this controllable via an environment variable.
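For example, that third-to-last line could become something like this (EKAM_FULL_TEST_OUTPUT is a hypothetical name, not a variable Ekam reads today):

  # Print the whole log when the (hypothetical) variable is set;
  # otherwise keep the current behavior of printing only matching lines.
  if [ -n "$EKAM_FULL_TEST_OUTPUT" ]; then
    cat "$TEST_LOG" >&2
  else
    egrep 'FAIL|ERROR|FATAL' "$TEST_LOG" >&2
  fi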

BTW, I do intend to go back and add better comments to all those ekam-rule files and everywhere else at some point, so that it's easier to understand what's going on.


Aaron Jacobs

Nov 21, 2010, 3:51:21 PM
to Kenton Varda, Ekam
On Mon, Nov 22, 2010 at 7:45 AM, Kenton Varda <temp...@gmail.com> wrote:
> We could make this controllable via an environment variable.

That would be cool. I see why the default is the way it is, but I'd
like the other behavior when I'm doing TDD on one (new) class at a
time.

Now that I think about it, the behavior of always running every test
could become pretty annoying if you add larger integration tests. Have
you thought about allowing the user to specify one target to
build/run? Or is that against the whole philosophy?

Kenton Varda

Nov 21, 2010, 4:36:45 PM
to Aaron Jacobs, Ekam
On Sun, Nov 21, 2010 at 12:51 PM, Aaron Jacobs <aaronj...@gmail.com> wrote:
> Now that I think about it, the behavior of always running every test
> could become pretty annoying if you add larger integration tests. Have
> you thought about allowing the user to specify one target to
> build/run? Or is that against the whole philosophy?

What's annoying about always running large tests?  Assuming you're running in continuous build mode (-c), Ekam will immediately kill any test it is running when it detects that the sources have changed.  So the only thing lost is some processing power that you weren't using anyway.
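For example:

  ekam -c    # continuous build mode: watch sources, rebuild and rerun affected actions as they change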

The problem with letting the user specify one particular target is that Ekam doesn't know where to find its dependencies until it actually builds those dependencies.  So, at least on the first pass, it really has to build everything.  In theory, Ekam could record the dependency graph from that first pass in order to perform more directed builds in the future, but it's tricky because any time a source file changes, the dependency graph on top of that file could change drastically.

So, instead, I'm really trying to get continuous build mode to be good enough that people don't feel the need to build individual targets.  A lot can be achieved through "optimizations".  For example, Ekam could remember which tests took a long time to run in the past, and prefer to delay them until after faster tests have completed.  Also, Ekam could prefer to run build actions that live close to the changed file before it moves on to ones further away, so that the edited package's tests are run before other packages' tests.  These two "optimizations" may conflict, but then it's just a matter of tuning.
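Roughly, you could imagine each pending action getting a score like this (the names here are purely hypothetical, just to illustrate the tuning problem):

  priority = tree_distance(changed_file, action) + K * past_runtime(action)

where actions with lower scores run first and the constant K trades locality against expected runtime.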

Aaron Jacobs

Nov 21, 2010, 5:12:30 PM
to Kenton Varda, Ekam
On Mon, Nov 22, 2010 at 8:36 AM, Kenton Varda <temp...@gmail.com> wrote:
> What's annoying about always running large tests?

The way I tend to work is messing with one class at a time, running
its unit tests over and over until I've added/modified the appropriate
tests and made them pass. I'd like to see only the unit test output,
not the integration test output, until I'm done with each individual
class (since it's likely to fail or not even compile). If I understand
correctly, changing an individual class will trigger ekam to run both
its unit test and the integration test.


> The problem with letting the user specify one particular target is that Ekam
> doesn't know where to find its dependencies until it actually builds those
> dependencies.  So, at least on the first pass, it really has to build
> everything.  In theory, Ekam could record the dependency graph from that
> first pass in order to perform more directed builds in the future, but it's
> tricky because any time a source file changes, the dependency graph on top
> of that file could change drastically.
>
> So, instead, I'm really trying to get continuous build mode to be good
> enough that people don't feel the need to build individual targets.  A lot
> can be achieved through "optimizations".  For example, Ekam could remember
> which tests took a long time to run in the past, and prefer to delay them
> until after faster tests have completed.  Also, Ekam can prefer to run build
> actions that live close to the changed file before it moves on to ones
> further away, so that the edited package's tests are run before other
> package's tests.  These two "optimizations" may conflict, but then it's just
> a matter of tuning.

Fair enough. I like the sound of this if you're planning to pursue
that sort of heuristic.
