I have not had any time yet to play around with the screenshots. I did, however, switch
from the sandbox to actual product code, and from that I have some new experiences to share.
1) Multiple tests for the same feature.
Especially in GUI applications, a single feature might have multiple tests that all test a small slice of
that feature. Consider, for example, a 'save changes yes/no/cancel' dialog. That feature probably has at least
three tests, one for each path of execution. However, from a user-manual perspective it doesn't make sense to
have multiple sections and table-of-contents entries for that.
One way to manage these would be to write all the BB comments in the first test and then to ignore the other
tests, but that feels like a step in the wrong direction, because then tests and documentation are
separated again.
What would you do? Would it be possible to include the BB documentation *output* of one test into the
documentation of another? Or would it be possible that the other tests also append to the same section in
the document, provided that they specify the same header?
2) Error handling needs some improvement ;)
I spent a lot of time today on two small problems.
One of them was caused by not having all the source code available to the (Ruby) Bumblebee collector.
This caused a NullPointerException without any information about the cause of the problem.
The other was probably caused by some unusual non-visible character (I don't know which) in one of the
source files. The result was a green bar and partially created documentation, but all method-level
documentation from that file was missing from the result. This seems potentially dangerous, because at first
glance everything appears to be OK.
Thank you in advance, and best regards
DanielW
Thanks for posting in the group. :-) I assume you're using the 0.5
release (and not the snapshot); otherwise, some of the problems you
encountered might already be resolved.
On Tue, Jun 24, 2008 at 1:13 PM, Daniel Wellner
<daniel....@sysart.fi> wrote:
>
>
> Hi again.
>
> I have not had any time yet to play around with the screenshots. I did, however, switch
> from the sandbox to actual product code, and from that I have some new experiences to share.
>
> 1) Multiple tests for the same feature.
>
> Especially in GUI applications, a single feature might have multiple tests that all test a small slice of
> that feature. Consider, for example, a 'save changes yes/no/cancel' dialog. That feature probably has at least
> three tests, one for each path of execution. However, from a user-manual perspective it doesn't make sense to
> have multiple sections and table-of-contents entries for that.
I agree. I haven't thought too much about variations yet (but now I will ;-)).
>
> One way to manage these would be to write all the BB comments in the first test and then to ignore the other
> tests, but that feels like a step in the wrong direction, because then tests and documentation are
> separated again.
This would be one option, and probably the simplest one. When you say
ignore, I guess you mean Bumblebee's "exclude content" and not JUnit's
@Ignore? (http://agical.com/bumblebee/bumblebee_doc.html#com.agical.bumblebee.acceptance.helpers.experimental.ExcludingContent)
Then you could still run the test but avoid the section output. (Note:
this is an experimental feature so far.)
>
> What would you do? Would it be possible to include the BB documentation *output* of one test into the
> documentation of another? Or would it be possible that the other tests also append to the same section in
> the document, provided that they specify the same header?
>
An alternative would be to name the sections e.g. "No variation" and
"Cancel variation". They would then become separate sections, but you
could avoid documenting anything but the no/cancel stuff, and do that
only when it is needed.
I haven't tried it, and I cannot promise that it works, but you might
be able to traverse the "node tree" in Bumblebee (Ruby) to move some
content from one node to another.
This is not documented; you'll have to browse the Ruby code to see how it works.
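If someone wants to experiment with that idea without digging into Bumblebee's internals, the merging step itself is simple. Here is a minimal sketch (in Java, since the tests are JUnit; the Node type and mergeByHeader are invented here, not Bumblebee API): nodes that declare the same header get collapsed into one, so several tests could contribute content to a single manual section.

```java
import java.util.*;

// Hypothetical sketch of merging documentation nodes that share a header.
// Bumblebee's real (Ruby) node tree is undocumented; this Node type is invented.
public class SectionMerge {
    static class Node {
        final String header;
        final List<String> content = new ArrayList<>();
        Node(String header) { this.header = header; }
    }

    // Collapse nodes with identical headers into one, preserving first-seen order.
    static List<Node> mergeByHeader(List<Node> nodes) {
        Map<String, Node> byHeader = new LinkedHashMap<>();
        for (Node n : nodes) {
            byHeader.computeIfAbsent(n.header, Node::new)
                    .content.addAll(n.content);
        }
        return new ArrayList<>(byHeader.values());
    }

    public static void main(String[] args) {
        Node yes = new Node("Save changes dialog");
        yes.content.add("Choosing Yes saves and closes.");
        Node cancel = new Node("Save changes dialog");
        cancel.content.add("Choosing Cancel returns to the editor.");
        List<Node> merged = mergeByHeader(List.of(yes, cancel));
        System.out.println(merged.size());                // 1
        System.out.println(merged.get(0).content.size()); // 2
    }
}
```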
> 2) Error handling needs some improvement ;)
>
> I spent a lot of time today on two small problems.
> One of them was caused by not having all the source code available to the (Ruby) Bumblebee collector.
> This caused a NullPointerException without any information about the cause of the problem.
Sounds like something I need to fix. I made some improvements in the
latest release, but I might have missed some cases.
>
> The other was probably caused by some unusual non-visible character (I don't know which) in one of the
> source files. The result was a green bar and partially created documentation, but all method-level
> documentation from that file was missing from the result. This seems potentially dangerous, because at first
> glance everything appears to be OK.
Since it is a wiki syntax, it is pretty sensitive to formatting,
especially whitespace. I made some improvements here as well, but some
things are outside of my control, since I use a third-party wiki parser.
Again, which version are you using?
Cheers
Daniel
>
>
> Thank you in advance, and best regards
>
> DanielW
--
__________________________
Daniel....@Gmail.com
See comments below.
On Wed, Jul 2, 2008 at 2:27 AM, Joakim Ohlrogge
<joakim....@gmail.com> wrote:
>
>
>> Especially in GUI applications, a single feature might have multiple tests that all test a small slice of
>> that feature. Consider, for example, a 'save changes yes/no/cancel' dialog. That feature probably has at least
>> three tests, one for each path of execution. However, from a user-manual perspective it doesn't make sense to
>> have multiple sections and table-of-contents entries for that.
I think this will be one of the places where people will have the
hardest time adopting UGDD (user-guide-driven development): testing
for variations. Sometimes I think it can be relevant to show the
entire variation flow, and sometimes you want a more high-level
description. Coming up with a feasible solution to that would help a lot.
One way would be to manipulate the output. With e.g. DHTML, variations
could be collapsed by default but expanded when interesting.
If the output is e.g. PDF, a reference to an appendix might make more sense.
>
> Hi,
>
> I think, as you suggested, that it may not be interesting to document
> all cases. Some cases in a GUI application are general, like "if I
> press cancel I want to abort what I am doing". Such general things may
> be possible to express with theories:
>
> @Theory
> public void cancelAbortsDialog(Dialog dialog) {
>     assumeThat(dialog, canCancel());
>     dialog.cancel();
>     assertThat(dialog.isAborted(), is(true));
> }
>
> The above leaves more than a few holes to fill in. What do dialog
> datapoints look like? The dialog is presumably some convenience
> wrapper for a GUI dialog, but what does "isAborted()" really mean? Is
> there something generic to check for in all cases?
>
> Assuming that those holes can be filled, it is interesting to think
> about what the documentation would look like. You would like to say
> something like "all dialogs with a cancel button abort whatever they
> are doing if cancel is pressed, Esc is always the shortcut for cancel,
> cancel never has focus when any dialog is initially shown", and
> so on... Some of those checks would have their place in a style guide
> for GUIs, but maybe not in end-user documentation. Some of them are
> probably generic enough to be reusable in all Swing GUI applications,
> and others in all products developed at a specific site.
This is a very interesting way to capture aspects of an application.
Perhaps even more interesting for aspects than for capturing variations?
Even though I can see it being used for both.
Perhaps the data points could be lambdas/closures that actually
perform the (App DSL) operations to get to the interesting state in
the application? (And in Java, a closure would be an implementation of
a specific interface. :-P)
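To make the closure-datapoint idea concrete, here is a rough plain-Java sketch without the JUnit Theories runner (Dialog, canCancel and the data points are all invented for illustration): each data point is a factory closure that drives the app to the interesting state, and the theory is checked against every point, skipping those that fail the assumption.

```java
import java.util.List;
import java.util.function.Supplier;

// Invented stand-in for a GUI dialog wrapper; a real App DSL type would differ.
class Dialog {
    private final boolean cancellable;
    private boolean aborted = false;
    Dialog(boolean cancellable) { this.cancellable = cancellable; }
    boolean canCancel() { return cancellable; }
    void cancel() { if (cancellable) aborted = true; }
    boolean isAborted() { return aborted; }
}

public class CancelTheory {
    // Data points as closures: each performs the (App DSL) operations
    // needed to reach the interesting state, as suggested in the thread.
    static final List<Supplier<Dialog>> DATA_POINTS = List.of(
        () -> new Dialog(true),   // e.g. the "save changes" dialog
        () -> new Dialog(false)   // e.g. a plain info box with no cancel
    );

    // The theory: every dialog that can cancel is aborted by cancel().
    static boolean cancelAbortsDialog() {
        for (Supplier<Dialog> point : DATA_POINTS) {
            Dialog dialog = point.get();
            if (!dialog.canCancel()) continue;  // assumeThat(dialog, canCancel())
            dialog.cancel();
            if (!dialog.isAborted()) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(cancelAbortsDialog()); // true
    }
}
```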
>
> I have a gut feeling that Bumblebee could spawn a child project for
> GUIs alone, and that developing some useful ways of documenting GUIs
> in Bumblebee could also help a lot when it comes to testing them and
> making them self-documenting, coherent and consistent.
>
I agree. Already from the small example GUIs I've tested BB on, some
interesting patterns have emerged, like having an Application DSL
calling a Bumblebee text-generating DSL.
Building on those patterns, and finding new ones by generally keeping
the code relatively DRY, will probably provide candidates for generic
documentation components. Those should go in a GUI documentation
project, or perhaps three: Swing, Web and SWT, and maybe a fourth with
generic commons.
> For instance, if there were some kind of meta-model where you can
> refer to GUI components in a way that makes sense to an end user
> (like Firefox has a bookmarks toolbar that is probably called
> something else as an internal component name), then that meta-model
> could be the source of datapoints for theories that should hold for
> all datapoints. The same meta-model could also be used in examples
> (normal tests) that would provide documentation for the user guide,
> help system, etc.
Perhaps this meta-model could/should be the Application DSL, with a
bunch of methods providing those data points for this specific
application?
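A tiny sketch of what such a meta-model could look like (all names here are invented): a map from the user-facing names the manual should use to internal component names, shared by the tests and the generated documentation.

```java
import java.util.Map;

// Hypothetical meta-model: user-facing names -> internal component names.
// Tests drive the GUI through the internal name, while documentation prints
// the user-facing one, so the manual matches what the end user actually sees.
public class GuiMetaModel {
    private static final Map<String, String> COMPONENTS = Map.of(
        "Bookmarks toolbar", "nav.bookmarkBarPanel",  // Firefox-style example
        "Save button",       "dlg.save.okButton"
    );

    static String internalName(String userFacingName) {
        String internal = COMPONENTS.get(userFacingName);
        if (internal == null)
            throw new IllegalArgumentException("Unknown component: " + userFacingName);
        return internal;
    }

    public static void main(String[] args) {
        System.out.println(internalName("Bookmarks toolbar")); // nav.bookmarkBarPanel
    }
}
```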
Another cool synergy of the App DSL would be to provide it as a
context for application scripting/macros.
The easiest way to do that would probably be to wrap the DSL in JRuby
(Groovy? Jython?) and provide it as a context in a scripting
dialog/text box within the application.
Since you have tested every feature with that DSL, you should be able
to perform every action within the application with it. :-)
Or at least it could be a good motivator for actually testing
everything with the DSL... ;-)
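Embedding JRuby is beyond a quick sketch, but the shape of the idea can be shown with a toy dispatcher standing in for a real script engine (all names invented): the App DSL's operations are registered by name in a context, and a scripting dialog dispatches lines into it.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch: the App DSL exposed as named commands forming a
// scripting context. A real setup would hand the DSL object to an embedded
// JRuby/Groovy engine instead of this toy line-based dispatcher.
public class ScriptingContext {
    private final Map<String, Consumer<String>> commands = new LinkedHashMap<>();
    private final StringBuilder log = new StringBuilder();

    ScriptingContext() {
        // The same operations the tests already exercise through the DSL:
        commands.put("open", arg -> log.append("opened ").append(arg).append('\n'));
        commands.put("save", arg -> log.append("saved ").append(arg).append('\n'));
    }

    // One line of "script": command name, space, argument.
    void run(String line) {
        String[] parts = line.split(" ", 2);
        commands.getOrDefault(parts[0], a -> log.append("unknown command\n"))
                .accept(parts.length > 1 ? parts[1] : "");
    }

    String log() { return log.toString(); }

    public static void main(String[] args) {
        ScriptingContext ctx = new ScriptingContext();
        ctx.run("open report.txt");
        ctx.run("save report.txt");
        System.out.print(ctx.log());
    }
}
```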
>
> I don't know yet how this would be solved (or even exactly what), but
> I have a feeling that it is possible to find something here to be
> excited about :)
Me too ;-)
>
> /J
--
__________________________
Daniel....@Gmail.com