
fortran and unit testing


John

Jan 8, 2008, 8:46:24 AM
I'm wondering what unit testing strategies people use with Fortran. I
am deciding whether to try using some kind of existing framework or just
write my unit tests directly in Fortran. I have found two unit testing
frameworks for Fortran: Fruit and Funit. Neither seems that great in my
opinion, and both would require my project to have additional
dependencies. I was thinking about adding unit tests directly in my
Fortran source code and using compiler directives to conditionally
compile and run them. Right now I have three paths I can follow.

1. Use a scripting language unit-testing framework like Funit

2. Write all unit tests in Fortran in separate files and compile
conditionally.

3. Write all unit tests in Fortran and place unit tests for a
particular routine in the same file as the routine. Compile conditionally.

Any comments concerning the pros/cons of the above methods would be
appreciated. Also, personal experiences and other ideas would be great.

Thanks,
John

Sebastian Hanigk

Jan 8, 2008, 9:25:32 AM
John <gh1...@yahoo.com> writes:

> I'm wondering what unit testing strategies people use with Fortran.

None ... for some legacy code. I worked with code which was mostly F77
with a few modules and COMMON blocks; sadly almost every routine changed
global state so it was more efficient to reimplement "interesting" or
critical code.

> 1. Use a scripting language unit-testing framework like Funit

Works quite well, but sometimes the test routines require writing some
kind of initialisation for a module's state.

> 2. Write all unit tests in Fortran in separate files and compile
> conditionally.
>
> 3. Write all unit tests in Fortran and place unit tests for a
> particular routine in the same file as the routine. Compile conditionally.

You could do something with pre- and postconditions, i.e. check an
arbitrary set of predicates on the routine's entry and another one right
before you give the RETURN statement. Wrap those inside IF blocks so
you can enable/disable the tests by means of a module PARAMETER or an
optional argument. I had the idea after reading "Numerical Recipes"
which provides an assert routine and learning a wee bit of Eiffel. Avoid
preprocessor directives like hell, they tend to obfuscate the code and
aren't needed for this purpose anyway.
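A minimal sketch of this idea, assuming a module-level PARAMETER as the
switch (the routine, predicate, and tolerance are purely illustrative):

```fortran
module sqrt_checked
  implicit none
  logical, parameter :: checks_enabled = .true.  ! flip to .false. for production
contains
  subroutine checked_sqrt(x, y)
    real, intent(in)  :: x
    real, intent(out) :: y
    ! Precondition: argument must be non-negative
    if (checks_enabled) then
      if (x < 0.0) stop 'checked_sqrt: precondition failed (x < 0)'
    end if
    y = sqrt(x)
    ! Postcondition: result squared must reproduce the input
    if (checks_enabled) then
      if (abs(y*y - x) > 1.0e-5 * max(x, 1.0)) &
        stop 'checked_sqrt: postcondition failed'
    end if
  end subroutine checked_sqrt
end module sqrt_checked
```

Because checks_enabled is a constant, the disabled branches are
statically dead and an optimizing compiler can drop them entirely,
without any preprocessor involvement.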


Sebastian

Bil Kleb

Jan 8, 2008, 9:50:28 AM
John wrote:
> I'm wondering what unit testing strategies people use with Fortran.

We use a bunch; we call it our "testing planetary system" because
we have a range of testing from small, quick feedback cycles (unit
tests) to large, long feedback cycles (regression, stress, and
larger performance tests).

> I am deciding whether to try using some kind of existing framework or just
> write my unit tests directly in Fortran.

The trouble we had with writing directly in Fortran was that
the amount of boilerplate code one has to write, especially
on the bookkeeping side, was enough to put off all but the most
test-infected developers. And even then, it was clear
that the computer was better suited to such monkey work.

On top of this overhead, the desire to automate the tests, a key
ingredient of test-first development, made us seek a "framework"
solution. FWIW, I prefer the term "testing harness".

> I have found two unit testing frameworks for Fortran: Fruit and Funit.

There is also pFUnit by the NASA Goddard folks[1], but I would
argue with their "lightweight" description.

Both Fruit and pFUnit are pure Fortran frameworks while fUnit
is written in Ruby and generates Fortran.

> Neither seem that great in my opinion

Would love to hear specific, actionable feedback along these
lines: http://rubyforge.org/tracker/?group_id=4129

> and would require my project to have additional dependencies.

With fUnit at least, the production side of your project
would have no extra dependencies, only the testing side.

We keep these concerns strictly separated[2], except where you
might need to have more public entities than encapsulation
might otherwise dictate to allow testing.

> I was thinking about just adding unit tests directly in my Fortran source
> code directly and using compiler directives to conditionally compile
> them and run them.

Sounds reasonable, but you might find that writing new program units,
custom assertions, and the test pass/fail bookkeeping becomes
wearisome and violates the DRY principle[3].

> Right now I have three paths I can follow.
>
> 1. Use a scripting language unit-testing framework like Funit

FWIW, Ruby's a full-fledged programming language, replete
with heavy-duty metaprogramming facilities.

> 2. Write all unit tests in Fortran in separate files and compile
> conditionally.

See the above discussion; my fear is that you will find the
repetitiveness puts you off testing.

> 3. Write all unit tests in Fortran and place unit tests for a
> particular routine in the same file as the routine. Compile conditionally.

We considered something like this when we were starting fUnit,
but ultimately found separating them provided more options
such as selective code-test distribution, test file location,
one-button-testing[4], and computing lines-of-test-to-code ratios.

> Any comments concerning the pros/cons of the above methods would be
> appreciated. Also, personal experiences and other ideas would be great.

We wrote fUnit to minimize the amount of effort one has to
expend to write and execute Fortran tests so that folks might
consider writing Fortran in a test-first style as was done
back in the day when scientists were scientists and computer
time was _extremely_ expensive.

Sadly, I've found most accidental programmer-scientists reluctant
to write tests, even when made as easy as possible and reminded
of Roger Bacon's independent verification principle behind the
Scientific Method or Francis Bacon's "Truth will sooner come out
from error than from confusion."

Regards,
--
Bil Kleb
http://nasarb.rubyforge.org/funit

[1] http://sourceforge.net/projects/pfunit
[2] http://en.wikipedia.org/wiki/Separation_of_concerns
[3] http://en.wikipedia.org/wiki/Don%27t_repeat_yourself
[4] http://c2.com/cgi/wiki?OneButtonTesting

Paul van Delst

Jan 8, 2008, 9:54:12 AM

In my experience it doesn't matter too much what you do - it will always be a lot of work
(no matter what the unit testing theorists tell you; although to be fair, most do admit
the workload increases by around 2x, at least initially). The principle of unit testing is
great, no question, but I've been quite underwhelmed with the reality of unit testing when
dealing with science-y type results (in my case e.g. radiative transfer in the
atmosphere). My problem is that the number of tests I have to implement for them to be
useful is large, and as I travel up the call tree the number of dependencies increases,
and the test setup overhead becomes large. That in itself is not *too* much of a problem;
it's whenever a component changes -- e.g. new algorithm, better defined constants or
thresholds, or simple reorganisation of code -- that I then have to modify all the
(statically defined) answers for the "downstream" tests.

cheers,

paulv

Bil Kleb

Jan 8, 2008, 10:41:27 AM
Paul van Delst wrote:
>
> In my experience it doesn't matter too much what you do - it will always
> be a lot of work (no matter what the unit testing theorists tell you;
> although to be fair, most do admit the workload increase by around 2x,
> at least initially.)

Heartily agree: when starting it will be slow and painful, but
the custom debugger you're building will save you from the
Pillsbury Doughboy Effect[1] later and the cost of change
doesn't go exponential when working empirically[2].

> My problem is that the number of
> tests I have to implement for them to be useful is large, and as I
> travel up the call tree the number of dependencies increases, and the
> test setup overhead becomes large. That in itself is not *too* much of a
> problem, it's whenever a component changes -- e.g. new algorithm, better
> defined constants or threshold, or simple reorganisation of code -- I
> then have to modify all the (statically defined) answers for the
> "downstream" tests.

Is this for legacy code, or "greenfield" code? Legacy code is nearly
impossible to test as it wasn't written with testing in mind.[3]

Sounds like your coupling[4], cyclomatic complexity[5], and Lakos
Average Component Dependency metric[6] are prohibitively high. Using
testing mocks and/or stubs may help.[7]
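To illustrate the stub idea in Fortran (all names here are hypothetical,
not from any framework): put the expensive dependency behind a small
module, and let the test build link a canned replacement in its place.

```fortran
! physics_stub.f90 -- compiled into the TEST build in place of the real
! physics.f90, which would compute absorption from a full line database.
module physics
  implicit none
contains
  function absorption(pressure) result(a)
    real, intent(in) :: pressure
    real :: a
    ! Canned linear response: predictable enough for the caller's
    ! logic to be tested without the real module's data files and setup.
    a = 0.5 * pressure
  end function absorption
end module physics
```

The production build compiles the real physics.f90 instead; the routine
under test is unchanged in either build, it simply sees whichever
implementation of the module was linked.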

Regards,
--
Bil Kleb
http://fun3d.larc.nasa.gov

[1] Fix bug here, another one pops out over there, ad nauseam.
[2] http://www.agilemodeling.com/essays/costOfChange.htm
[3] http://www.amazon.com/Working-Effectively-Legacy-Robert-Martin/dp/0131177052
[4] http://en.wikipedia.org/wiki/Coupling_%28computer_science%29
[5] http://en.wikipedia.org/wiki/Cyclomatic_complexity
[6] http://www.amazon.com/Large-Scale-Software-Addison-Wesley-Professional-Computing/dp/0201633620
[7] http://www.amazon.com/xUnit-Test-Patterns-Refactoring-Addison-Wesley/dp/0131495054

Paul van Delst

Jan 8, 2008, 11:13:51 AM
Bil Kleb wrote:
> Paul van Delst wrote:
>>
>> In my experience it doesn't matter too much what you do - it will
>> always be a lot of work (no matter what the unit testing theorists
>> tell you; although to be fair, most do admit the workload increase by
>> around 2x, at least initially.)
>
> Heartily agree: when starting it will be slow and painful, but
> the custom debugger you're building will save you from the
> Pillsbury Doughboy Effect[1] later and the cost of change
> doesn't go exponential when working empirically[2].
>
>> My problem is that the number of tests I have to implement for them to
>> be useful is large, and as I travel up the call tree the number of
>> dependencies increases, and the test setup overhead becomes large.
>> That in itself is not *too* much of a problem, it's whenever a
>> component changes -- e.g. new algorithm, better defined constants or
>> threshold, or simple reorganisation of code -- I then have to modify
>> all the (statically defined) answers for the "downstream" tests.
>
> Is this for legacy code, or "greenfield" code? Legacy code is nearly
> impossible to test as it wasn't written with testing in mind.[3]

It's a combination.

> Sounds like your coupling[4], cyclomatic complexity[5], and Lakos
> Average Component Dependency metric[6] is prohibitively high. Using
> testing mocks and/or stubs may help.[7]

I should come clean. My real problem is that I am weary. I'm just plain dog tired of
trying to change things that nobody seems to want changed. The red tape overhead is just
too high (I now have a dispenser on my desk...)

cheers,

paulv

p.s. Thanks very much for the references.

Bil Kleb

Jan 8, 2008, 11:25:03 AM
Paul van Delst wrote:
>
> I should come clean. My real problem is that I am weary. I'm just plain
> dog tired of trying to change things that nobody seems to want changed.
> The red tape overhead is just too high (I now have a dispenser on my
> desk...)

Funny, I'm in the same place, although the purchase order
is still winding its way through the system for my dispenser. :)

See the first section of the sample-handout.pdf available
on the right side of http://code.google.com/p/tufte-latex/

Bil Kleb

Jan 8, 2008, 11:36:53 AM
Bil Kleb wrote:
>
> See the first section of the sample-handout.pdf available
> on the right side of http://code.google.com/p/tufte-latex/

Actually, I recently wrote a whole paper on this, entitled
"Toward Scientific Numerical Modeling":

ftp://ftp.rta.nato.int/PubFullText/RTO/MP/RTO-MP-AVT-147/RTO-MP-AVT-147-P-17-Kleb.pdf

Regards,
--
Bil Kleb
http://nato-rto-latex.googlecode.com

John

Jan 9, 2008, 6:58:57 AM
>
>> 2. Write all unit tests in Fortran in separate files and compile
>> conditionally.
>>
>> 3. Write all unit tests in Fortran and place unit tests for a
>> particular routine in the same file as the routine. Compile conditionally.
>
> You could do something with pre- and postconditions, i.e. check an
> arbitrary set of predicates on the routine's entry and another one right
> before you give the RETURN statement. Wrap those inside IF blocks so
> you can enable/disable the tests by means of a module PARAMETER or an
> optional argument. I had the idea after reading "Numerical Recipes"
> which provides an assert routine and learning a wee bit of Eiffel. Avoid
> preprocessor directives like hell, they tend to obfuscate the code and
> aren't needed for this purpose anyway.
>

I would rather not have additional logic inside the program itself when
compiled for non-testing. I have read code with poor use of conditional
compilation, but I was thinking of a very simple, one-level-deep #ifdef
to include or exclude the testing code. I don't find that obfuscated at
all. At least it keeps the testing completely separated from the
normally compiled code base. If the test code is actually in the normal
code itself, I would always have the nagging feeling that, due to a bug,
the test code may be having a subtle influence on the code itself.
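A sketch of this one-level arrangement, assuming the file goes through
the C preprocessor (e.g. a .F90 extension, or gfortran's -cpp option);
the module and test routine are purely illustrative:

```fortran
module running_mean_mod
  implicit none
contains
  function running_mean(x) result(m)
    real, intent(in) :: x(:)
    real :: m
    m = sum(x) / size(x)
  end function running_mean

#ifdef UNIT_TESTS
  ! Compiled only when -DUNIT_TESTS is given;
  ! completely absent from production builds.
  subroutine test_running_mean()
    if (abs(running_mean((/1.0, 2.0, 3.0/)) - 2.0) > 1.0e-6) then
      print *, 'FAIL: running_mean'
    else
      print *, 'pass: running_mean'
    end if
  end subroutine test_running_mean
#endif
end module running_mean_mod
```

A test driver program, also guarded by the same macro, would USE the
module and call the test_* routines.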

John

John

Jan 9, 2008, 7:04:47 AM
>
> In my experience it doesn't matter too much what you do - it will always
> be a lot of work (no matter what the unit testing theorists tell you;
> although to be fair, most do admit the workload increase by around 2x,
> at least initially.) The principle of unit testing is great, no
> question, but I've been quite underwhelmed with the reality of unit
> testing when dealing with science-y type results (in my case e.g.
> radiative transfer in the atmosphere). My problem is that the number of
> tests I have to implement for them to be useful is large, and as I
> travel up the call tree the number of dependencies increases, and the


The code I am working on is a "science-y" type code (integral equation
solutions) which really can push all aspects of the computer's
performance. I agree this makes it very difficult at times to keep
different components of the code separated in the interest of code
efficiency, which in turn makes unit testing more difficult. However,
since my code is growing rapidly, I am reaching the point where I worry
that changes are having influences elsewhere that I have forgotten
about, so I really would like to have some type of unit testing, even
if it's not perfect.

> test setup overhead becomes large. That in itself is not *too* much of a
> problem, it's whenever a component changes -- e.g. new algorithm, better
> defined constants or threshold, or simple reorganisation of code -- I
> then have to modify all the (statically defined) answers for the
> "downstream" tests.

I've also encountered the problem of trying to maintain the
"downstream" tests when reorganizing code. I have yet to find a good
solution to it.

John

Arjen Markus

Jan 9, 2008, 7:03:43 AM

Hm, you might want to have a look at my Flibs project:
http://flibs.sf.net

The ftnunit module/subproject uses this approach:

- The program calls the overall unit test routine before
going on with the ordinary processing
- This (general) routine in turn calls the specific
unit test routines if so instructed by the presence of
a file. If not, it simply returns.

Advantages are:

- All the unit tests are part of the program, so compilation
and linking is not a separate step (with the chance of the
code for the actual program diverging from the test code).
- It is all in Fortran, so you deal with a single language.
- The unit tests are almost completely isolated from the
actual program, except for the one statement.

Disadvantages:

- The program is bigger.
- Not much fancy graphics available.
- The support infrastructure in general is limited.

(I welcome ideas on extending it ;))
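The calling pattern described above, reduced to a self-contained sketch
(module, trigger-file, and routine names here are illustrative, not
ftnunit's actual API):

```fortran
module mini_ftnunit
  implicit none
contains
  subroutine runtests(testproc)
    interface
      subroutine testproc()
      end subroutine testproc
    end interface
    logical :: exists
    ! The presence of a trigger file decides whether the tests run at all.
    inquire(file='do_the_tests', exist=exists)
    if (exists) then
      call testproc()
      stop        ! test run ends here; ordinary processing is skipped
    end if
  end subroutine runtests
end module mini_ftnunit

subroutine test_all()
  print *, 'running unit tests ...'
end subroutine test_all

program main
  use mini_ftnunit, only: runtests
  implicit none
  external test_all
  call runtests(test_all)           ! a no-op unless the trigger file exists
  print *, 'ordinary processing ...'
end program main
```

The single `call runtests(...)` statement is the only trace the tests
leave in the production code path.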

Regards,

Arjen

John

Jan 9, 2008, 7:16:25 AM
> Hm, you might want to have a look at my Flibs project:
> http://flibs.sf.net

Hi Arjen,

Thanks. I looked at it briefly. That really looks like the kind of
idea I was thinking of, but implemented much much better. Maybe it's
what I need. I'll try it out when I get a little more time.

>
> The ftnunit module/subproject uses this approach:
>
> - The program calls the overall unit test routine before
> going on with the ordinary processing
> - This (general) routine in turn calls the specific
> unit test routines if so instructed by the presence of
> a file. If not, it simply returns.
>
> Advantages are:
>
> - All the unit tests are part of the program, so compilation
> and linking is not a separate step (with the chance of the
> code for the actual program diverging from the test code).
> - It is all in Fortran, so you deal with a single language.
> - The unit tests are almost completely isolated from the
> actual program, except for the one statement.
>
> Disadvantages:
>
> - The program is bigger.
> - Not much fancy graphics available.
> - The support infrastructure in general is limited.
>

I still think I will use #ifdefs to include/exclude the tests. My
project is set up using the GNU autotools chain and gfortran, so adding
a flag for ./configure or a target for make can easily turn the tests
on or off.

John

Bil Kleb

Jan 9, 2008, 8:33:50 AM
Arjen Markus wrote:
>
> Hm, you might want to have a look at my Flibs project:
> http://flibs.sf.net

I am so embarrassed to have omitted this from my earlier
post. Sorry Arjen!

Can you please add it to

http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#Fortran

so searches more easily reveal it?

> - All the unit tests are part of the program, so compilation
> and linking is not a separate step (with the chance of the

> code for the actual program diverging from the test code).

We do this by setting up Continuous Integration[1] so that whenever
any code is checked into our SVN repository, tests are run.

> - It is all in Fortran, so you deal with a single language.

Just a clarification note about fUnit: you write tests in Fortran
plus a small collection of fUnit-specific keywords like assertEquals
that are essentially macros. These get expanded by Ruby into standard
Fortran along with all the other boilerplate code. So ultimately,
you wind up with pure Fortran source, but most of it is throw-away
boilerplate code generated by Ruby.

Regards,
--
Bil Kleb
http://fun3d.larc.nasa.gov

[1] http://martinfowler.com/articles/continuousIntegration.html

Sebastian Hanigk

Jan 9, 2008, 1:48:38 PM
John <gh1...@yahoo.com> writes:

> I would rather not have additional logic inside the program itself when
> compiled for non-testing. I have read code with poor use of conditional
> compilation, but I was thinking of a very simple, one-level deep #ifdef
> to include or exclude the testing code. I don't find obfuscated at all.

In theory, having a constant value for your condition (i.e. a PARAMETER)
would allow the compiler to remove the corresponding branch entirely
(dead-code elimination).
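For example (an illustrative sketch), with the guard a compile-time
constant the whole checking block is statically dead:

```fortran
module guarded
  implicit none
  logical, parameter :: run_checks = .false.
contains
  subroutine step(x)
    real, intent(inout) :: x
    x = 2.0*x + 1.0
    ! The guard is a compile-time constant, so an optimizing compiler
    ! is free to discard this entire block when run_checks is .false.
    if (run_checks) then
      if (x /= x) print *, 'step: NaN detected in result'
    end if
  end subroutine step
end module guarded
```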

> At least that keeps the testing completely separated from the normally
> compiled code base. If the test code is actually in the normal code
> itself, I would always have the nagging feeling that, due to a bug, the
> test code may be having a subtle influence on the code itself.

I find it conceptually better to write the testing code as near as
possible to the tested code, but my approach is more influenced by the
"design by contract" method than by the unit test philosophy.


Sebastian

Maik Beckmann

Jan 9, 2008, 2:32:43 PM
Arjen Markus wrote:
> Hm, you might want to have a look at my Flibs project:
> http://flibs.sf.net
>
> The ftnunit module/subproject uses this approach:
>
> - The program calls the overall unit test routine before
> going on with the ordinary processing
> - This (general) routine in turn calls the specific
> unit test routines if so instructed by the presence of
> a file. If not, it simply returns.
>
> Advantages are:
>
> - All the unit tests are part of the program, so compilation
> and linking is not a separate step (with the chance of the
> code for the actual program diverging from the test code).
> - It is all in Fortran, so you deal with a single language.
> - The unit tests are almost completely isolated from the
> actual program, except for the one statement.
>
> Disadvantages:
>
> - The program is bigger.
> - Not much fancy graphics available.
> - The support infrastructure in general is limited.
>
> (I welcome ideas on extending it ;))
>
> Regards,
>
> Arjen

Hi

What is the license your lib is released under?

-- Maik

Richard Maine

Jan 9, 2008, 3:21:40 PM
Sebastian Hanigk <han...@in.tum.de> wrote:

I don't like the idea of modifying the code being tested. One can say a
lot of things about what should happen "in theory", but the purpose of
testing is to find out what actually happens. By modifying the code
being tested, you pretty much throw out any pretense that the tests will
catch compiler bugs; it is way too easy for compiler bugs to be changed
by the additional code.

Also, you can't really do an end-to-end test of a subroutine that way.
There can be things that happen before the first executable statement
and after the last one (automatic allocation and deallocation,
evaluation of specification expressions, which can even include function
references, etc). Heck, it is possible to write a useful procedure that
has no executable statements at all, but still actually does things. (A
bit odd, but possible). That's not to speak of the possibility of
overlooking a return somewhere in the middle of the code.

If you go too far down the path of assuming that all those kinds of
things won't be problems, perhaps because your coding style avoids them,
it seems to me that this is somewhat like assuming that the code doesn't
have any bugs and doesn't need testing at all.

Sure, I'll modify the code when tracking down bugs, but I think of that
as different from testing.

--
Richard Maine | Good judgement comes from experience;
email: last name at domain . net | experience comes from bad judgement.
domain: summertriangle | -- Mark Twain

Sebastian Hanigk

Jan 9, 2008, 4:16:47 PM
nos...@see.signature (Richard Maine) writes:

> I don't like the idea of modifying the code being tested. One can say a
> lot of things about what should happen "in theory", but the purpose of
> testing is to find out what actually happens. By modifying the code
> being tested, you pretty much throw out any pretense that the tests will
> catch compiler bugs; it is way too easy for compiler bugs to be changed
> by the additional code.

You're right, one of the assumptions of my proposed testing model is the
negligibility of compiler bugs. Call me naïve :-)

Perhaps another point of view would be that my method is not testing as
the OP had in mind, it's an extended version of argument and state
checking. You can quite easily combine both approaches.

> Also, you can't really do an end-to-end test of a subroutine that way.
> There can be things that happen before the first executable statement
> and after the last one (automatic allocation and deallocation,
> evaluation of specification expressions, which can even include function
> references, etc). Heck, it is possible to write a useful procedure that
> has no executable statements at all, but still actually does things. (A
> bit odd, but possible). That's not to speak of the possibility of
> overlooking a return somewhere in the middle of the code.

Good points. It is still somewhat difficult to write good test cases for
subprograms which rely (more or less) heavily on global state. Not that
I'm encouraging or appreciating such a style, but one cannot always shy
away from legacies.

On a side note: contract-based assertions make it easier to reason about
the correctness of the code (under said assertions).


Sebastian

Paul van Delst

Jan 9, 2008, 4:39:54 PM

In this context, rather than "design by contract" maybe "coding by assumption" is a better
term -- and it still relies on testing. You assume that something is going to do what it
says it will, and write your code accordingly. If a contract (figurative or literal) is
involved, then the contract should include testing or inclusion metrics, no?

Regardless, you still need to test it once the "something" is included in your code.

Unit testing is just one method you employ in a continuum of testing tools.

cheers,

paulv

Bil Kleb

Jan 9, 2008, 9:43:51 PM
Sebastian Hanigk wrote:

> Richard Maine writes:
>> By modifying the code being tested, you pretty much throw out
>> any pretense that the tests will catch compiler bugs; it is way
>> too easy for compiler bugs to be changed by the additional code.
>
> You're right, one of the assumptions of my proposed testing model
> is the negligibility of compiler bugs. Call me naïve :-)

Testing for portability / compiler bugs is part of our "testing
planetary system".

Currently, four times a day, our ~750 KLOC of Fortran 95 is checked
out from the SVN repository and compiled with PathScale-2.5, NAG-5.1.365,
g95-0.91, PGI-7.1-3, Sun-8.3-20070731, gfortran-20070123, Intel-10.1.011,
Intel-9.1.041, LaheyX-6.20d, and LaheyX-8.00a. The entire process
typically takes less than 20 minutes and is fully automated.

If most compilers choke on some bit of code, we've found some
questionable coding on our part; if only one or two compilers
choke, we've most likely found yet another compiler bug. We've
found plenty over the years... except for Lahey.

Arjen Markus

Jan 10, 2008, 3:16:26 AM

Done so - I feel a bit ashamed not having done this before
(but it took me some time to get to grips with editing pages
on Wikipedia, as this was my first contribution ;)).

Regards,

Arjen

Arjen Markus

Jan 10, 2008, 3:17:19 AM
> What is the license your lib is released under?

BSD - hm, I should add that information on the front page
(it is in the project specifications, of course).

Regards,

Arjen

Arjen Markus

Jan 10, 2008, 4:18:37 AM
On 8 Jan, 17:36, Bil Kleb <Bil.K...@NASA.gov> wrote:
> Bil Kleb wrote:
>
> > See the first section of the sample-handout.pdf available
> > on the right side of http://code.google.com/p/tufte-latex/
>
> Actually, I just wrote a whole paper on this lately entitled,
> "Toward Scientific Numerical Modeling",
>
>    ftp://ftp.rta.nato.int/PubFullText/RTO/MP/RTO-MP-AVT-147/RTO-MP-AVT-1...
>
> Regards,
> --
> Bil Kleb
> http://nato-rto-latex.googlecode.com

Interesting - printed the article. Thanks for posting it.

Regards,

Arjen

Maik Beckmann

Jan 10, 2008, 12:44:10 PM
Arjen Markus wrote:

> On 9 Jan, 20:32, Maik Beckmann <maikbeckm...@gmx.de> wrote:
>> What is the license your lib is released under?

> BSD

Cool! I will have a look at your lib as soon as I find time.

-- Maik

Jan Vorbrüggen

Jan 11, 2008, 5:54:13 AM
> Heck, it is possible to write a useful procedure that
> has no executable statements at all, but still actually does things.

The one way I can think of is defining a derived type with a default
initialization, and then using this as the dummy argument of a
subroutine with INTENT(OUT). Add complications such as ALLOCATABLE
components as desired.

Any other way this can be done?
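Spelled out as a sketch, the construction above looks like this; the
"work" is the default initialization that the INTENT(OUT) association
triggers (type and routine names are illustrative):

```fortran
module reset_mod
  implicit none
  type :: counters
    integer :: hits   = 0
    integer :: misses = 0
  end type counters
contains
  subroutine reset(c)
    ! No executable statements, yet calling this resets its argument:
    ! INTENT(OUT) makes the dummy undefined on entry, and the type's
    ! default initialization then takes effect.
    type(counters), intent(out) :: c
  end subroutine reset
end module reset_mod
```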

Jan

Richard Maine

Jan 11, 2008, 12:16:35 PM
Jan Vorbrüggen <Jan.Vor...@not-thomson.net> wrote:

That's the one that occurred to me. You can also have things happen in
specification expressions, but I don't off-hand see how they can end up
having external effect.

Gordon Sande

Jan 11, 2008, 12:45:51 PM
On 2008-01-11 13:16:35 -0400, nos...@see.signature (Richard Maine) said:

> Jan Vorbrüggen <Jan.Vor...@not-thomson.net> wrote:
>
>>> Heck, it is possible to write a useful procedure that
>>> has no executable statements at all, but still actually does things.
>>
>> The one way I can think of is defining a derived type with a default
>> initialization, and then using this as the dummy argument of a
>> subroutine with INTENT(OUT). Add complications such as ALLOCATABLE
>> components as desired.
>>
>> Anyway else this can be done?
>
> That's the one that occurred to me. You can also have things happen in
> specification expressions, but I don't off-hand see how they can end up
> having external effect.

If you use a debugging system that does undefined-variable checking,
then using INTENT(OUT) in this way allows undefined status to be forced
for some useful situations. This is of interest if one is doing
suballocation within a block of storage: here the variables may be
defined by the rules of the Fortran standard but undefined for the
semantics of the user's application. When the undefined checking is
turned off, the effects go away. A clever optimizer might even remove
such unused code.

I have "been there, done that and bought the t-shirt" as the saying goes.
It is a trick that I picked up from this newsgroup.


Dick Hendrickson

Jan 11, 2008, 1:07:54 PM
Richard Maine wrote:
> Jan Vorbrüggen <Jan.Vor...@not-thomson.net> wrote:
>
>>> Heck, it is possible to write a useful procedure that
>>> has no executable statements at all, but still actually does things.

Technically, it's not possible to do this. The END statement is
executable. ;)

>> The one way I can think of is defining a derived type with a default
>> initialization, and then using this as the dummy argument of a
>> subroutine with INTENT(OUT). Add complications such as ALLOCATABLE
>> components as desired.
>>
>> Anyway else this can be done?
>
> That's the one that ocurred to me. You can also have things happen in
> specification expressions, but I don't off-hand see how they can end up
> having external effect.
>

Maybe not by intent, but 4.5.5.2 of f2003 says
"If a specification expression in a scoping unit references a function,
the result is finalized before execution of the first executable
statement in the scoping unit.

When a procedure is invoked, a nonpointer, nonallocatable object that is
an actual argument associated with an INTENT(OUT) dummy argument is
finalized."

I think the first paragraph above means you don't even need INTENT(OUT)
to trigger a final subroutine.

I don't think there are any restrictions on what a finalizer subroutine
can do. There are a few sissy limits on its argument, but COMMON easily
gets by those. So there's no reason why the entire program can't
be executed as a magic side effect of a call to a subroutine with no
interesting executable statements. Just one more thing to love about
OOP ;).
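A sketch of the finalization route (Fortran 2003; names are
illustrative, and compiler support for FINAL was still patchy at the
time):

```fortran
module final_demo
  implicit none
  type :: tracer
    integer :: id = 0
  contains
    final :: announce
  end type tracer
contains
  subroutine announce(t)
    type(tracer), intent(inout) :: t
    ! Runs as a side effect of finalization -- for instance when an
    ! actual argument is associated with an INTENT(OUT) dummy.
    print *, 'finalizing tracer', t%id
  end subroutine announce

  subroutine consume(t)
    ! No executable statements; the finalizer for the INTENT(OUT)
    ! actual argument fires before this routine "does" anything.
    type(tracer), intent(out) :: t
  end subroutine consume
end module final_demo
```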

Dick Hendrickson

John

Jan 11, 2008, 9:58:57 PM

> Hm, you might want to have a look at my Flibs project:
> http://flibs.sf.net
>
> The ftnunit module/subproject uses this approach:
>
> - The program calls the overall unit test routine before
> going on with the ordinary processing
> - This (general) routine in turn calls the specific
> unit test routines if so instructed by the presence of
> a file. If not, it simply returns.
>
> Advantages are:
>
> - All the unit tests are part of the program, so compilation
> and linking is not a separate step (with the chance of the
> code for the actual program diverging from the test code).
> - It is all in Fortran, so you deal with a single language.
> - The unit tests are almost completely isolated from the
> actual program, except for the one statement.
>
> Disadvantages:
>
> - The program is bigger.
> - Not much fancy graphics available.
> - The support infrastructure in general is limited.
>
> (I welcome ideas on extending it ;))
>
> Regards,
>
> Arjen


Hi Arjen,

I've been playing around with your unit testing library. I think it is
pretty good, but there are one or two things I think could improve it.

1. I am not sure why you have the "test" subroutine in a different file
than the funit module. Since you seem to have to include the funit
module everywhere you include funit_test.f90 anyway, I think it would be
better to just go ahead and place the "test" subroutine in the funit
module. You state that you need to include funit_test.f90 in each
module you are testing so that the module gets a local copy of test. I
do not understand why this is necessary.

2. Because the "runtests" subroutine stops when all the tests are
completed, you can really only test one module at a time with this
routine. If you want to test code in multiple modules, you would have
to comment and recompile for each module. For example

call runtests(test_all) !test_all in modules 1
! call runtests(test_all) !test_all in modules 2

and then

! call runtests(test_all) !test_all in modules 1
call runtests(test_all) !test_all in modules 2

I would suggest making a separate subroutine in funit that contains the
last few lines of runtests. Then you can call the tests like this

call runtests(test_all_mod1) !test_all in modules 1
call runtests(test_all_mod2) !test_all in modules 2
call runtests_final()

Actually, you will need to move the last few lines of runtests (after
call testproc) and some of the previous lines to read the tests results
from the file to runtests_final. I think you may have to change a few
lines in some other routines so that test_no doesn't get set < last_test
between the different calls to runtests.
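In outline, something like this (hypothetical sketch; the helper names are placeholders, not actual ftnunit internals):

```fortran
! Hypothetical sketch of the proposed split (helper names are placeholders,
! not actual ftnunit code):
subroutine runtests( testproc )
   interface
      subroutine testproc
      end subroutine testproc
   end interface

   if ( tests_requested() ) then   ! e.g. the trigger file is present
      call testproc                ! run one module's tests, accumulate counts
   end if
end subroutine runtests

subroutine runtests_final
   if ( tests_requested() ) then
      call report_results          ! summarize counts from all runtests calls
      stop                         ! stop only after every module was tested
   end if
end subroutine runtests_final
```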

Regards,
John


Arjen Markus

unread,
Jan 13, 2008, 8:09:55 AM1/13/08
to

Thanks for these comments and suggestions. I will have a closer look
the next
few days.

Regards,

Arjen

Paul van Delst

unread,
Jan 14, 2008, 9:04:44 AM1/14/08
to

Is this the same idea as creating test suites out of a bunch of individual test cases
(where the latter can contain more than one test)?

cheers,

paulv

Arjen Markus

unread,
Jan 14, 2008, 9:16:17 AM1/14/08
to
> paulv

Roughly speaking, yes.

Regards,

Arjen

Terence

unread,
Jan 14, 2008, 5:44:40 PM1/14/08
to
On Jan 9, 3:25 am, Bil Kleb <Bil.K...@NASA.gov> wrote:
> See the first section of the sample-handout.pdf available
> on the right side of http://code.google.com/p/tufte-latex/
>
> Regards,
> --
> Bil Kleb

Much appreciated!
There is great truth in the concept of a threat of early chaos before
a return on investment.

Great, Bill!
Felicitaciones! (as we say)
How did you get away with it?

Bil Kleb

unread,
Jan 14, 2008, 8:22:23 PM1/14/08
to
Terence wrote:

> Bil Kleb wrote:
>> See the first section of the sample-handout.pdf available
>> on the right side of http://code.google.com/p/tufte-latex/
>
> Much appreciated!

'welcome, but it is just a sample of a format?

> There is great truth in the concept of a threat of early chaos before
> a return on investment.

Now I'm going to have to go re-read that thing! Oh, the
Satir Change Model.

> Great, Bill!
> Felicitaciones! (as we say)

Thank you... I think. :)

> How did you get away with it?

What do you mean "get away with"?

Regards,
--
http://www.linkedin.com/in/bilkleb

Andrew Chen

unread,
Jan 22, 2008, 2:08:05 AM1/22/08
to
John,

(I was totally confused by this group's threads being hosted by different
forums. I tried to reply to your thread in another forum, but it seems
that didn't go anywhere. I will stick to Google Groups from now on.)

Here is some of my experience:
1.
Regarding your original 3 options: my experience is that it is best to
keep the source code and test code in different directories. It's easier
to maintain and deploy, and different tests can share the same test
data more easily this way. The shared library may cause some problems
when compiling and linking with .mod files; you will need to put all
.mod files into one directory.

In the environment I'm working in, it is beneficial to have the test
code all in FORTRAN. Some FORTRAN developers are not comfortable
writing embedded FORTRAN in another language. It may depend on the
skill levels of your developers.

2.
In a previous discussion there was talk about how to maintain the
test_xxx methods, the overall module, and the driver program.

In the past, I found it hard to maintain the driver and the test
container.

In Fruit 2.0, the improved implementation is:
1. The developer only maintains module_name_test.f90, with
subroutine names like setup/teardown/test_xxx.
2. Then a script (the fruit_processor gem) generates 2 files:
fruit_basket.f90 (which gathers all test modules in the current
directory) and fruit_driver.f90. In this way, a lot more information
can be put into the generated container and driver. All original and
generated code is f90.

See examples in this file:
http://fortranxunit.wiki.sourceforge.net/Add+Fruit+to+Your+Diet+in+20+Minutes

Also, by generating the container and driver, we can add interesting
features such as parsing specifications out of the test code. That can
produce an "executable requirement".
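For instance, a test module looks roughly like this (sketch only; the calc module and its add function are invented for illustration, and the assertion names follow FRUIT's conventions; see the wiki page above for real examples):

```fortran
! Sketch of a FRUIT-style test module (the calc module and its add function
! are invented for illustration):
module calc_test
   use fruit            ! provides assert_equals etc.
   implicit none
contains
   subroutine setup     ! called before each test by the generated driver
   end subroutine setup

   subroutine teardown  ! called after each test
   end subroutine teardown

   subroutine test_addition
      use calc, only: add
      call assert_equals( 5, add( 2, 3 ) )
   end subroutine test_addition
end module calc_test
```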


Hope it helps.
~Andrew Chen
http://fortranxunit.wiki.sourceforge.net/

John

unread,
Jan 22, 2008, 6:08:05 AM1/22/08
to
Andrew Chen wrote:
> John,
>
> (I was totally confused by this group's threads being hosted by different
> forums. I tried to reply to your thread in another forum, but it seems
> that didn't go anywhere. I will stick to Google Groups from now on.)
>
snip

>
> Hope it helps.
> ~Andrew Chen
> http://fortranxunit.wiki.sourceforge.net/

Hi Andrew,

Thanks for your comments. It may take me a day or two to digest them.
I've been playing around with FRUIT 2.0 since you posted the
announcement. It has a lot of features I like. Arjen's unit test
module also has some nice features. There are some areas of overlap
between the two frameworks, but also some nice features that are
contained in either one or the other. I'm still trying to decide which
one to use, or possibly trying to combine the best of both.

John

John

unread,
Jan 22, 2008, 6:12:31 AM1/22/08
to
I have one additional suggestion.

In your get_lun subroutine, it might be a good idea to test for the case
when the maximum number of files are open. If I am reading the code
correctly, this is not checked for and it seems like the return result
is undefined in this case.

John

Arjen Markus

unread,
Jan 22, 2008, 6:58:11 AM1/22/08
to

Hi John,

thanks for that suggestion. Yes, it is one of those tricky
things - I should probably just go on with higher numbers
until the OPEN statement sets an error (and only then
stop the program).
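One possibility along these lines (untested sketch; with f2008's NEWUNIT= specifier on OPEN a routine like this becomes unnecessary):

```fortran
! Untested sketch: return -1 when no free unit exists, instead of
! leaving the result undefined:
subroutine get_lun( lun )
   integer, intent(out) :: lun
   logical :: opened
   integer :: i

   lun = -1                 ! sentinel: no free unit found
   do i = 10, 99
      inquire( unit = i, opened = opened )
      if ( .not. opened ) then
         lun = i
         return
      end if
   end do
end subroutine get_lun
```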

As for your other suggestions:

- I will have to go and stare at the code again to see
if including the routine is the best solution.

- A set-up where you have a final routine to indicate the
end should be accompanied by an initial routine to keep
it backward compatible:

call runtests_init
call runtests(test_all_mod1)
call runtests(test_all_mod2)
...
call runtests_final

But it seems useful.

Regards,

Arjen

John

unread,
Jan 24, 2008, 6:00:54 AM1/24/08
to
Andrew Chen wrote:
> John,
>
> (I was totally confused by this group's threads being hosted by different
> forums. I tried to reply to your thread in another forum, but it seems
> that didn't go anywhere. I will stick to Google Groups from now on.)
>
> Here are some of my experience:
> 1.
> Regarding your original 3 options: my experience is that it is best to
> keep the source code and test code in different directories. It's easier
> to maintain and deploy, and different tests can share the same test
> data more easily this way. The shared library may cause some problems
> when compiling and linking with .mod files; you will need to put all
> .mod files into one directory.
>
> In the environment I'm working in, it is beneficial to have the test
> code all in FORTRAN. Some FORTRAN developers are not comfortable
> writing embedded FORTRAN in another language. It may depend on the
> skill levels of your developers.

Here is one of the main problems I have with separating the source code
and test code. When I code, I try to put every routine in a module for
the explicit interface. In addition, I always declare the whole module
as private and only mark the variables/routines I have to as public.
However, a lot of the unit tests I would want to run are on the private
routines (which in some cases may also need access to some of the
private module globals). I see no way to unit test these routines
unless the test code is included with the source inside the module in
question. If I use your suggestion, I would basically be forced to make
everything in my modules public just to be able to unit test.

Do you have any ideas on how to solve this problem with FRUIT?

(I find declaring everything as private unless it has to be public to
give me some extra peace of mind.)
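For what it's worth, what I had in mind for option 3 looks roughly like this (untested sketch; the solver module, helper routine, and UNIT_TEST macro name are just placeholders):

```fortran
! Untested sketch of option 3: tests live inside the module, guarded by a
! preprocessor macro (compile a .F90 file with -DUNIT_TEST to include them):
module solver
   implicit none
   private
   public :: solve
#ifdef UNIT_TEST
   public :: test_solver
#endif
   integer :: internal_state = 0    ! private module global
contains
   subroutine solve
      internal_state = helper( internal_state )
   end subroutine solve

   integer function helper( n )     ! private routine
      integer, intent(in) :: n
      helper = n + 1
   end function helper

#ifdef UNIT_TEST
   subroutine test_solver            ! can see private routines and globals
      if ( helper( 1 ) /= 2 ) print *, 'helper FAILED'
      if ( internal_state /= 0 ) print *, 'initial state FAILED'
   end subroutine test_solver
#endif
end module solver
```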

Thanks,
John

Bil Kleb

unread,
Jan 24, 2008, 8:41:55 PM1/24/08
to
John wrote:
>
> (I find declaring everything as private unless it has to be public to
> give me some extra peace of mind.)

FWIW, my experience is that test-driven-development more
than compensates for public access to allow white-box testing.

Regards,
--
Bil Kleb
http://nasarb.rubyforge.org/funit

Arjen Markus

unread,
Jan 25, 2008, 2:37:42 AM1/25/08
to
> Arjen

I am going to make a number of revisions to the code
to include your suggestions. (I will also expand the
library with one or two utilities, inspired by Bil Kleb's
articles.)

Regards,

Arjen

Pierre Asselin

unread,
Jan 26, 2008, 4:21:02 PM1/26/08
to
John <gh1...@yahoo.com> wrote:

> Here is one of the main problems I have with separating the source code
> and test code. When I code, I try to put every routine in a module for
> the explicit interface. In addition, I always declare the whole module
> as private and only mark the variables/routines I have to as public.
> However, a lot of the unit tests I would want to run are on the private
> routines (which in some cases may also need access to some of the
> private module globals). I see no way to unit test these routines
> unless the test code is included with the source inside the module in
> question.

You can create wide-open modules that you use only through gateway
modules, except for testing.

module wide_open_
   implicit none
   public
   integer :: a, b
end module wide_open_

module gate
   use wide_open_
   implicit none
   private
   public :: a
end module gate

program main
   use gate
   implicit none
   a = 0
   b = 0   ! error: b is not accessible through gate
end program main

Testing code would USE wide_open_ directly and manipulate its
innards at will. Production code must go through the gate module
to access only the public functionality.

This relies on programmer discipline and is not enforced by the
language. Also, it could be tedious to repeat the public part of
all interfaces in gateway modules. Still, it's probably worth
the trouble.

--
pa at panix dot com

Tom

unread,
Feb 11, 2008, 5:36:17 PM2/11/08
to


Has anyone seen a detailed comparison of the test framework
alternatives listed at http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#Fortran
?

Does anyone have experience with more than one? I'm hoping to avoid
being the first...

Thanks,

Tom

Apologies for joining this fray so late...

pschm...@gmail.com

unread,
Feb 22, 2008, 4:14:15 PM2/22/08
to
On Jan 8, 6:46 am, John <gh14...@yahoo.com> wrote:
> I'm wondering what unit testing strategies people use with Fortran. I
> am deciding whether to try using some kind of existing framework or just
> write my unit tests directly in Fortran....

I also need to add unit tests to Fortran code. I think another option
would be to wrap the Fortran code in C++ and use another testing
framework like CxxTest [http://cxxtest.sourceforge.net/]. Has anyone
tried something like this?


On Feb 11, 3:36 pm, Tom <Thomas.B.Hender...@noaa.gov> wrote:
> Has anyone seen a detailed comparison of the test framework

> alternatives listed at http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#Fortran


> ?
>
> Does anyone have experience with more than one? I'm hoping to avoid
> being the first...

I've been looking but haven't found one yet... I found a useful
comparison of C++ unit testing frameworks
[http://www.gamesfromwithin.com/articles/0412/000061.html]. However, a
comparison of the Fortran unit testing frameworks would be extremely
helpful!

AFAIK these are the Fortran unit testing options:
1. funit [http://nasarb.rubyforge.org/]
2. FORTRAN Unit Test Framework (fruit)
[http://sourceforge.net/projects/fortranxunit]
3. FLIBS - ftnunit [http://flibs.sourceforge.net/]
4. pFUnit [http://sourceforge.net/projects/pfunit]
5. Wrap Fortran code in C++ and then use CxxTest as our testing
framework.

If I can scrape together some spare time, I would like to write a
simple "a+b=c" Fortran routine and run it through each of the five
aforementioned frameworks... That is, if I can find the time to do
it...

Cheers,
Pete
