
It's time to move on!


Damian Rouson

Sep 2, 2015, 3:02:38 AM
After reading a paper that expressed a new algorithm in Fortran 90, I can't help but wonder when we will tire of expressing new algorithms in 25-year-old idioms when more recent intrinsics and statements might collapse 5-10 lines down to 1. Such compaction can introduce a great deal of clarity while offloading lower-level algorithmic decisions to the compiler and oftentimes exposing parallelism. (In the multicore/many-core era, haven't we reached the point where serial code is by definition legacy code except in the least demanding applications?)
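To make the compaction concrete, here is a hypothetical sketch (my example, not from the paper in question) of the same computation in the 1990 idiom and as a single Fortran 2008 intrinsic:

```fortran
program norm_demo
  implicit none
  real :: a(1000), s
  integer :: i
  call random_number(a)

  ! Fortran 90 idiom: explicit loop, explicit accumulator
  s = 0.0
  do i = 1, size(a)
     s = s + a(i)*a(i)
  end do
  s = sqrt(s)
  print *, s

  ! Fortran 2008 idiom: one line; the evaluation strategy
  ! (order, blocking, vectorization) is the compiler's problem
  print *, norm2(a)
end program norm_demo
```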

Considering Fortran 2008 compliance, the landscape appears to be roughly as follows:
1. Cray (full)
2. Intel (so close most people won't notice the minor missing features).
3. IBM (the only missing major feature is coarrays)
4. GNU (the only missing major features are derived type I/O and parameterized derived types)

Better yet, each of the above vendors has already implemented portions of what is expected to be in Fortran 2015. In particular, all four have implemented TS 29113 Further Interoperability of Fortran with C (ftp://ftp.nag.co.uk/sc22wg5/n1901-n1950/n1942.pdf) and Cray and GNU have implemented the collective subroutines of TS 18508 Additional Parallel Features in Fortran (http://isotc.iso.org/livelink/livelink?func=ll&objId=17288706&objAction=Open).
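As a sketch of what the TS 18508 collectives look like (assuming a compiler with coarray and collective support, e.g. Cray, or gfortran with OpenCoarrays):

```fortran
! Each image contributes its image index; after co_sum every image
! holds the total, with no explicit coarray communication written out.
program collective_demo
  implicit none
  integer :: x
  x = this_image()
  call co_sum(x)
  if (this_image() == 1) print *, 'sum over', num_images(), 'images =', x
end program collective_demo
```

Build flags vary by vendor; with gfortran this needs -fcoarray=lib plus the OpenCoarrays library, or -fcoarray=single for a one-image test run.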

I'm not even going to mention Fortran 2003 compliance anymore because... it's time to move on.

At least if you're writing code primarily to demonstrate a new algorithm, go with the features that every reader can access: the features in GNU Fortran. That gets you quite close to Fortran 2008 compliance with a goodly chunk of Fortran 2015 thrown in.

To the extent the missing features are important to you, please, please, please let your preferred vendor know. I frequently hear from vendors that they aren't getting requests for the new features. The squeaky wheel gets the grease.

Damian

Wolfgang Kilian

Sep 2, 2015, 4:39:24 AM
AFAIK, NAG is slightly further from F2008 compliance - it would be
interesting to know whether they plan a new release that puts their
compiler on that list. They have all the OO features, but no coarrays
or submodules. But for debugging new (and old) code, in particular
runtime checks, I'd be reluctant to give up nagfor.

-- Wolfgang

--
E-mail: firstnameini...@domain.de
Domain: yahoo

Arjen Markus

Sep 2, 2015, 5:07:50 AM
On Wednesday, 2 September 2015 09:02:38 UTC+2, Damian Rouson wrote:
Would it help if we advertise these new features more by means of short examples, concrete descriptions of how to do it (especially with coarrays, I'd say) and perhaps showing explicitly how the old idioms can be replaced?

Just thinking out loud here :).

Regards,

Arjen

Anton Shterenlikht

Sep 2, 2015, 5:46:35 AM
Arjen Markus <arjen.m...@gmail.com> writes:

> On Wednesday, 2 September 2015 09:02:38 UTC+2, Damian Rouson wrote:
>> After reading a paper that expressed a new algorithm in Fortran 90, I can't help but wonder when we will tire of expressing new algorithms in 25-year-old idioms [...]

Some people are skeptical about OO in general,
nevermind Fortran OO. Is this a problem?

Or are you talking about parallelism mostly?
Again, many people, particularly mathematicians,
i.e. those who design algorithms, couldn't care
less about parallelisation. Some of the optimisation
codes I use, written in the 21st century, use f77.
Is this a problem?

Anton

robin....@gmail.com

Sep 2, 2015, 7:53:27 AM
Perhaps the authors wanted the program to run on widely
available implementations.

Damian Rouson

Sep 2, 2015, 10:17:04 AM
I believe NAG is only missing one major feature (by my personal definition of "major") from Fortran 2003: derived type I/O. Thus, I imagine NAG will finish Fortran 2003 and move on to Fortran 2008 support.

Damian

Damian Rouson

Sep 2, 2015, 10:30:43 AM
On Wednesday, September 2, 2015 at 2:07:50 AM UTC-7, Arjen Markus wrote:
>
> Would it help if we advertise these new features more by means of short examples, concrete descriptions of how to do it (especially with coarrays, I'd say) and perhaps showing explicitly how the old idioms can be replaced?

Yes, I think anything and everything helps. Public demonstrations of interest in the new features will show demand. Supply and demand drive markets. I think the Fortran community is kind of quiet. Over the past two years, I've found that people in my short courses are very excited about coarrays, for example, but I don't think vendors are hearing a market demand. Attendees in a course I taught last year liked the coarray material so much that they suggested I move it to the beginning of future courses. In a course I taught last week, I went past the time allotted for coarrays and then asked the class if they wanted to move on to OOP or keep going with more coarrays. They asked for more coarrays. Yet I think most vendors would tell you they are hearing no demand for coarrays.
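For readers who haven't seen them, a minimal coarray sketch (my illustration, not course material) shows how little syntax is involved:

```fortran
program coarray_hello
  implicit none
  integer :: me[*]   ! a scalar coarray: one copy exists on every image
  integer :: i
  me = this_image()
  sync all           ! make every image's value visible to the others
  if (this_image() == 1) then
     do i = 1, num_images()
        print *, 'image', i, 'stored', me[i]   ! one-sided remote read
     end do
  end if
end program coarray_hello
```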

Just in case it's of interest, I started an open-source archive of codes from bug reports and feature requests: https://github.com/sourceryinstitute/AdHoc. Contributors are welcome, but I'm thinking these will be written as the smallest exemplars of the bug or feature: sometimes as short as two lines. The primary goal is for them to automate the running of tests of various compilers. Each time a new compiler version is released, the tests can be rerun with "make; make install; ctest". I imagine you're talking about more complete examples.

Damian

Stefano Zaghi

Sep 2, 2015, 11:57:41 AM
Dear Damian I am with you :-)

@Anton

In general it is not a problem at all if you like to write your code in a cumbersome, implicit, goto-based idiom, but the point highlighted by Damian is real. Damian's concerns are "clarity", "conciseness", and "parallelism".

You can be skeptical about OOP, but you should agree that the modern standards (2003+) have introduced more than just OOP features. In particular, there are many helpers to improve clarity and conciseness, even in a strictly serial scenario. Moreover, you think that parallelism is not important; well, many others disagree. Is this a problem?

I think that developing new algorithms in a 25-year-old idiom is simply not so smart: if you have a "caterpillar" for moving a mountain, why would you prefer to use a "spoon"?

The point is not whether OOP is better, but whether Fortran 2003+ is better than Fortran 66-90. My answer is that the new Fortran standards are better than the older ones.

Some say there is a "viscosity" or an "inertia" that prevents the use of new idioms.

@robin

The portability reason is weak: GNU gfortran supports a lot of different architectures.

My best regards.

Wolfgang Kilian

Sep 2, 2015, 12:41:42 PM
On 02.09.2015 17:57, Stefano Zaghi wrote:
> [...]
> Some say there is a "viscosity" or an "inertia" that prevents the use of new idioms.

Often, people who claim they don't need new 'features' are actually
using them all the time - but emulating them in obscure ways. Dynamic
memory had to be emulated in F77 and earlier. Abstraction (i.e., OOP)
had to be emulated in F95 and earlier. Many programs do just that, even
if their authors are not aware of the fact. Newer standards allow such
concepts to be coded directly.
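A short sketch of the dynamic-memory case: what F77 forced programmers to emulate with a large static workspace array and hand-managed offsets is a single declaration and statement in F90+:

```fortran
program dyn
  implicit none
  real, allocatable :: work(:)
  integer :: n
  n = 1000            ! size known only at run time
  allocate(work(n))   ! no static workspace, no offset bookkeeping
  work = 0.0
  print *, size(work)
end program dyn
```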

Parallel execution could not be emulated at all in F03 and earlier; it
could only be achieved by calling a library with OS access (MPI and
friends). Now it's also part of the language.

In all cases, there is no good reason not to use the new features
whenever the corresponding concepts appear. But the coder has to
realize that he is actually coding a known pattern, and that the
solution is already in the language.

Richard Maine

Sep 2, 2015, 12:58:50 PM
Stefano Zaghi <stefan...@gmail.com> wrote:

> The portability reason is weak, GNU gfortran supports a lot of different
> architectures.

I'm no longer programming professionally, but when I was, portability
was a huge issue. Portability had a lot to do with why I became
concerned about standards activity in the first place.

Your argument sounds a bit on the theoretical side.
In practice, if I write programs to be used by other people, it is
irrelevant that there is some compiler that could do the job. Users
don't always have a choice in what compilers are installed on their
system. Company IT departments often keep a pretty tight rein on that.
Plus, application users aren't necessarily familiar with installation
of things like compilers at all. I'm not talking about programmers, I'm
talking about the users of the programs. They often have to make do with
whatever is installed on their system. So it is critical for apps to be
buildable with whatever that might be.

Hobby use is another matter. And perhaps in some application areas,
users tend to be able to install compilers as desired. But for the areas
I worked in, a large fraction of the users didn't have that option.

--
Richard Maine
email: last name at domain . net
domain: summer-triangle

Stefano Zaghi

Sep 2, 2015, 3:14:39 PM
> Your argument sounds a bit on the theoretical side.
> In practice, if I write programs to be used by other people, it is
> irrelevant that there is some compiler that could do the job. Users
> don't always have a choice in what compilers are installed on their
> system.

This also sounds theoretical. I cited GNU gfortran only because it is the best free option, but consider all the other widely used compilers (Intel, IBM, Cray, NAG, etc.): almost all support a large part of the modern idioms (OOP features are the main ones missing), so your "poor" user who is forced to use a particular compiler is not automatically forced to use f77/f90-95 as well.

> I'm not talking about programmers, I'm
> talking about the users of the programs. They often have to make do with
> whatever is installed on their system. So it is critical for apps to be
> buildable with whatever that might be.

Indeed, Damian's point seems to be focused on the developer side, e.g. developing new algorithms. Nevertheless, the f2003+ standard is as portable as f90, which is why I said that portability is a weak justification. If you stay standard-compliant (and a lot of legacy f77 code is not...), your code is highly portable.

> Hobby use is another matter. And perhaps in some application areas,
> users tend to be able to install compilers as desired. But for the areas
> I worked in, a large fraction of the users didn't have that option.

No matter what your target is (hobby, business, research...), presently almost all the compilers on the market (free or closed) have good support for the f2003+ standard: users, like developers, can select the standard (66, 77, 90-95, 2003+) they prefer. So my question is: why would you really prefer f77? F2003+ offers helpers to be more clear, concise, portable, and robust (and maybe efficient, effective, cheap...) even without considering the OOP features.

Wolfgang Kilian

Sep 2, 2015, 3:57:17 PM
On 09/02/2015 09:14 PM, Stefano Zaghi wrote:
> This also sounds theoretical. I cited GNU gfortran only because it is the best free option [...] your "poor" user who is forced to use a particular compiler is not automatically forced to use f77/f90-95 as well.

Don't forget about compiler bugs. The claim that a feature is supported
doesn't mean that it works under all circumstances. It's tempting to
adopt useful features once they are available, but that often means
writing bug reports. Even if bugs are fixed immediately, many users are
not in a position to upgrade their compiler version.

> Indeed, Damian's point seems to be focused on the developer side [...] If you stay standard-compliant (and a lot of legacy f77 code is not...), your code is highly portable.
>
> [...] So my question is: why would you really prefer f77? F2003+ offers helpers to be more clear, concise, portable, and robust [...] even without considering the OOP features.

'good support' is still an overstatement. But that point is getting closer.

-- Wolfgang

Ian Harvey

Sep 2, 2015, 5:33:43 PM
On 2015-09-03 5:57 AM, Wolfgang Kilian wrote:
> Don't forget about compiler bugs. The claim that a feature is supported
> doesn't mean that it works under all circumstances. It's tempting to
> adopt useful features once they are available, but that often means
> writing bug reports. Even if bugs are fixed immediately, many users are
> not in a position to upgrade their compiler version.

This has also been very much my experience. I am sympathetic to the
view that people should consider adopting the new features of the
language, now that support from some vendors has started to mature.

But over the last few years I have written a lot of bug reports! The
cumulative amount of time spent isolating and reporting a bug, and then
investigating and [hopefully] implementing work-arounds has been
significant. Because of my particular situation I have been able to
afford that time, but for many others that would not be the case.

>> So my question is: why would you really prefer f77? F2003+ offers
>> helpers to be more clear, concise, portable, and robust [...] even
>> without considering the OOP features.

(I'd be very surprised if the editor of the Fortran 2003 standard
preferred Fortran 77.)

What people prefer doesn't really come into it - the choice of language
revision and perhaps feature subset has to primarily consider the level
of support on the target platform/Fortran processor. If the level of
*working* support isn't there, you can't use that language
revision/feature subset! Whether some other platform/Fortran processor
supports that feature is irrelevant.

> 'good support' is still an overstatement. But that point is getting
> closer.

Agreed.

The OP of this thread has a particular focus on coarrays, which is fair
enough - perhaps that feature is particularly relevant to their domain,
and it has been one of the features of Fortran 2008 that has tended to
be implemented early. But there's a lot more to Fortran 2003/8 support
than coarrays.

baf

Sep 2, 2015, 6:20:08 PM
While there are a few good reference-type books covering Fortran 2003
and beyond, they generally do not have useful practical examples of the
use of the "newer" features of the language. Textbooks have historically
been the best source of practical examples of Fortran. When Fortran 90
came out, there was a flood of new or revised textbooks covering the
material. Currently, there is only a single textbook (in the traditional
sense) covering Fortran 2003 and beyond. To "move on", we need the
"teachers" to move on, and the students they are teaching to learn the
newer idioms. The enthusiasm you are finding in your classes reflects
the interest in learning, and also the absence of alternative
educational material. Having source code examples of commonly used
algorithms and numerical methods may help educate "old timers" and new
programmers alike and provide the nudge needed to "move on".


Anton Shterenlikht

Sep 2, 2015, 7:10:49 PM
baf <b...@nowhere.com> writes:
>To "move on", we need the "teachers" to move on, and the students they
>are teaching to learn the newer idioms. The enthusiasm you are finding
>in your classes reflect the interest in learning, and also the absence
>of alternative educational material.

My one day coarray course for Bristol HPC users
was cancelled last time due to no interest.
Must be me.

Anton

rusi_pathan

Sep 2, 2015, 7:46:22 PM
A few points ...

Most people rarely install a compiler themselves and use the version that comes with their distribution. Both Red Hat and Debian (stable), which are by far the most popular distributions (especially in the enterprise), don't include the bleeding edge.

Few people have access to Crays. Even those who do often use the GNU/PGI Fortran compilers, so I am not quite sure how well the newer Cray Fortran compilers are tested. The number of people actually using F2008 features on Cray machines is probably in the single digits, but I could be wrong.

A lot of people writing parallel codes use other libraries/frameworks, which are themselves built on top of libraries such as MPI. For such users coarrays are not very useful, which might explain the lack of interest/demand.

However, with the effort going into GFortran and projects such as OpenCoarrays, I think the future is bright, but it might take some more time to catch on. I for sure will use it.

Richard Maine

Sep 2, 2015, 9:11:40 PM
Stefano Zaghi <stefan...@gmail.com> wrote:

> So, my question is: why you really prefer f77?

I am guessing that the "you" in this question was intended as an
abstract or generic "you" rather than meaning me, in specific.

For me in specific, in case you didn't make the connection from my many
prior posts, I stopped preferring f77 even before the first f90 compiler
was available. Although I considered myself a fluent and experienced f77
programmer, I decided to abandon it for a major new project that I
started circa 1990. Exact date forgotten, but there were no F90
compilers available at the time. The two things that drove me to that
decision were the lack of dynamic allocation and data structures in f77.
Of course, I could have managed without them, but I decided that would
be too painful and awkward for the new project. I toyed with Ada for a
while, but decided it wasn't what I was looking for. C certainly had the
fundamental capabilities, but I found that for me it was an inordinately
long and painful process to write and debug substantial C codes; figured
I'd get out of programming before I went down that track. Then I looked
at f90 and decided that it had what I wanted (plus the advantage of
building on my extensive f77 experience), even though there were no
compilers yet. So I took the risk. Never ended up regretting it.

I retired early in 2007 and there were no compilers yet supporting f2003
at that time, at least on platforms of interest to me. I did keep
thinking about adopting some f2003 features into my codes, because there
were several that I saw could be used to advantage, even if I had to
gradually ease them in (as rewriting whole apps from scratch was not
going to happen). But my experiments with using f2003 features in actual
compilers of the time were a miserable failure. For example, I barely
started playing with OOP stuff in the NAG compiler when I discovered
that it did not yet support allocatable scalars. That completely killed
my intended use.

Things have undoubtedly improved since then, but as noted, I'm just
doing hobby-type stuff now. Still seems to me that I recall some pretty
basic f2003 stuff not working with gfortran until quite recently.
Details forgotten, but don't I recall something like deferred-length
strings not working as derived-type components? Do recall that the
compilers installed on user systems are often several years old, so
things that were fixed just in the last year or two aren't ones I'd
count on for production use yet, if I were still doing production
things.

So yes, I still have doubts about using f2003. That doesn't mean I
prefer f90/f95; I most certainly do not. The doubts are based solely on
compiler support. And as Wolfgang noted, "support" means more than just
having something listed as a feature.

And as for allegedly preferring f77, if you really mean me, I have no
idea how you would have gotten that impression. Hasn't been true for two
and a half decades.

Richard Maine

Sep 2, 2015, 9:14:39 PM
Ian Harvey <ian_h...@bigpond.com> wrote:

> The OP of this thread has a particular focus on coarrays, which is fair
> enough - perhaps that feature is particularly relevant to their domain,
> and it has been one of the features of Fortran 2008 that has tended to
> be implemented early. But there's a lot more to Fortran 2003/8 support
> than coarrays.

Yes. I, for example, have very little interest in co-arrays. I'm aware
they are a hot subject for some people, but that doesn't generalize to
everyone.

James McCreight

Sep 3, 2015, 12:02:49 AM
I think a huge piece of the problem is simply the lack of a comprehensive, up-to-date resource on the language and, very importantly, its environment (compilers, debuggers, libraries, etc.). There's no light shining in the darkness to help busy users hook into modern Fortran and its ecosystem.

Being an R user, I think of the success of Hadley Wickham's books, which are essentially open-source, crowd-sourced, and version-controlled on github.
http://r-pkgs.had.co.nz/
http://adv-r.had.co.nz/

I think this is a really great way to focus an online community into crafting something. Better than a wiki, for example. It is also something which could evolve with future standards.

Damian, I enthusiastically nominate you to lead the effort! :) It would be nice if it could be funded somehow, at least at the top level, for individuals who spend a greater amount of time working on it than reviewers/commenters/minor contributors.

Damian Rouson

Sep 3, 2015, 12:13:48 AM
On Wednesday, September 2, 2015 at 2:46:35 AM UTC-7, Anton Shterenlikht wrote:
> Arjen Markus writes:
>
> > On Wednesday, 2 September 2015 09:02:38 UTC+2, Damian Rouson wrote:
> >> After reading a paper that expressed a new algorithm in Fortran 90, I can't help but wonder when we will tire of expressing new algorithms in 25-year-old idioms [...]
>
> Some people are skeptical about OO in general,
> nevermind Fortran OO. Is this a problem?

I don't think OO is necessary to modernize code. Even though I'm a fan of OO, I
make a conscious effort not to force it into projects where its benefits aren't apparent.

I like the idea behind the book Writing Idiomatic Python. What I'd like to see is people writing idiomatic modern Fortran. For example, use array features, intrinsic procedures, and loop concurrency to write compact logic that replaces nested, sequential loops with statements that accomplish the same thing in a single line or a few short lines while simultaneously exposing opportunities for parallelism. Such parallelism is less apparent in serial loops that wrap conditionals.
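A hypothetical example of what I mean: a serial loop wrapping a conditional, versus the array idiom that says the same thing in one line and leaves the evaluation order to the compiler:

```fortran
program idiomatic
  implicit none
  real :: u(100), f(100)
  integer :: i
  call random_number(u)

  ! Loop-wrapping-a-conditional idiom: the parallelism is hidden
  do i = 1, size(u)
     if (u(i) > 0.5) then
        f(i) = u(i)**2
     else
        f(i) = 0.0
     end if
  end do

  ! Array idiom: same result, one line, order-independent by construction
  f = merge(u**2, 0.0, u > 0.5)
  print *, maxval(f)
end program idiomatic
```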

>
> Or are you talking about parallelism mostly?

I wouldn't say mostly, but that is a big piece of it -- especially if vectorization is considered a form of parallelism. Compilers are exploiting DO CONCURRENT, for example, to vectorize code.
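A minimal DO CONCURRENT sketch: the construct asserts that the iterations are order-independent, which licenses the compiler to vectorize or parallelize:

```fortran
program saxpy_dc
  implicit none
  integer :: i
  real :: x(10000), y(10000)
  call random_number(x)
  call random_number(y)
  ! The iterations carry no dependencies, and DO CONCURRENT says so
  ! explicitly; with a plain DO the compiler must prove it for itself.
  do concurrent (i = 1:size(x))
     y(i) = y(i) + 2.0*x(i)
  end do
  print *, sum(y)
end program saxpy_dc
```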

> Again, many people, particularly mathematicians,
> i.e. those who design algorithms, couldn't care
> less about parallelisation. Some of the optimisation
> codes I use, written in 21st century, use f77.
> Is this a problem?

It is only a problem if they care about performance, and I recognize there are important applications where performance is not the major concern. At the same time, the
high-performance computing (HPC) domain is a very important constituency for Fortran.
Consider the funding various government agencies are pouring into HPC and consider that
HPC's importance is being acknowledged at the highest levels of government
(see Pres. Obama's Executive Order at https://www.whitehouse.gov/blog/2015/07/29/advancing-us-leadership-high-performance-computing).

In an era where clock rates no longer increase and even our laptops and mobile phones have multicore CPUs, increasing levels of parallelism is the most sustainable approach to increasing levels of performance. And given that Fortran is now a parallel programming language, it's arguable that any algorithm for which performance matters ought to at least be expressed in a form that exposes the parallelism (think DO CONCURRENT) even if one chooses not to go all the
way to making the algorithm explicitly parallel (via coarrays or otherwise).

Damian

Stefano Zaghi

Sep 3, 2015, 12:24:17 AM
Dear Richard, I am very sorry. I follow all your posts with great attention because I know you are a great programmer (in a recent thread I cited you alongside Dr. Fortran...). When I said "you" I meant f77 fans in general.

I see the problem of buggy features (I have faced many in GNU gfortran and Intel Fortran, spending my limited time on them), but the time is ripe for many F2003+ features on many compilers. My experience is limited (15 years) and I have access only to the GNU, Intel, and (sometimes) IBM XL Fortran compilers: these support (without major bugs) most of the features necessary to obtain more clarity, conciseness, and robustness... Forget about OOP and CAF: Damian (like me) is interested in them, but this thread is focused on why NEW algorithms are still implemented in a 25-year-old idiom, not on why Fortranners do not use OOP or CAF.

The documentation material is really a problem, but again things are fluid: I can cite "Modern Fortran Style", "Modern Fortran in Practice", "Fortran 2003 Explained", obviously "Scientific Software Design" (the OO way) by Damian et al., and a few other books whose titles I do not remember. I attend many HPC courses that teach some modern aspects of Fortran. The Fortran world is mature enough to adopt (most of) the modern idioms.

My feeling about why people do not move on is that many still think "implicit is better than explicit, goto is great, common blocks are useful..." while at the same time thinking "automatic LHS reallocation is bad, association is dangerous, modules complicate compilation...". But I could be wrong; my experience is limited to research and the aerospace industry.

Damian Rouson

Sep 3, 2015, 12:24:33 AM
On Wednesday, September 2, 2015 at 4:53:27 AM UTC-7, robin....@gmail.com wrote:
>
> Perhaps the authors wanted the program to run on widely
> available implementations.

You nailed it. This is exactly my concern. The impression that modern Fortran is not widely available is less and less true every year, and I feel we've turned a corner where it should no longer be prohibitive. GFortran is free and open-source and can be installed nearly anywhere. My original posting was about demonstration code in an article. Settling on features to which every reader has access for free seems to meet the bar of wide availability, even if there might be reasons that people would choose other compilers for their production code. Going with GFortran, one gets nearly all of Fortran 2008 minus two major features (parameterized derived types and derived type I/O) plus a sizable chunk of Fortran 2015 (the further C-interoperability features plus the parallel collectives).

In the case of production code, I've seen organizations require that a feature be supported by 4 or 5 compilers in order to use the feature. Well, I just listed 4 compilers with 2008 compliance that is either complete or nearly so. If 4 compilers isn't sufficient, then let's go with Fortran 2003 and add in Portland Group (full 2003 compliance) and NAG (only missing derived type I/O). Then we have 6 strong arguments for not settling on Fortran 90.

Damian

Richard Maine

Sep 3, 2015, 1:33:11 AM
Stefano Zaghi <stefan...@gmail.com> wrote:

> Dear Richard, I am very sorry. I follow all your posts with great
> attention because I know you are a great programmer (in a recent thread
> I cited you alongside Dr. Fortran...). When I said "you" I intended f77
> fan in general.

Ok. Then to the extent that you are talking about people moving on from
f77 to something later, I find myself in agreement with you. I'm less
convinced about whether the time has come to move on past f95 for
production codes. It might plausibly be time now, but I'd argue that if
so, it hasn't been for long. No way it was time when I retired in 2007 -
not even close in my opinion.

Do recall also that major production codes don't magically change
overnight just because new options are now reasonable. Working with
brand new codes is very different from working with old ones. Even when
one is introducing new routines to older codes, consistency of style
across the whole code can be a significant issue.

> My feeling of why people do not move on is that many still think that
> "implicit is better than explicit, goto is great, common block is
> useful..." and in the meanwhile they think "automatic lhs-reallocation
> is bad, association is dangerous, modules complicate compilation...",
> but I could be wrong, my experience is limited to research and aerospace
> industry.

That doesn't sound to me like a list of reasons why some people prefer
f77. That sounds like just repeating the fact that they prefer f77,
listing some of its differences from f90+, without getting into why they
have such a preference. My personal observation is that most people who
still prefer f77 do so out of inertia - that's what they learned and
they really aren't prepared to learn new ways. Exceptions certainly
exist, but my impression is that's the most common reason. As such, it
is naturally disappearing because the people who learned to program when
f77 was current are aging out of active roles. I first learned Fortran
well before f77, but I also learned that the computer world was
changing enough that if I didn't regularly learn new things, I'd rapidly
become irrelevant.

Wolfgang Kilian

unread,
Sep 3, 2015, 3:22:23 AM9/3/15
to
On 02.09.2015 23:33, Ian Harvey wrote:
>
> The OP of this thread has a particular focus on coarrays, which is fair
> enough - perhaps that feature is particularly relevant to their domain,
> and it has been one of the features of Fortran 2008 that has tended to
> be implemented early. But there's a lot more to Fortran 2003/8 support
> than coarrays.

I guess it is not unusual that a Fortran program spends more than 99% of
its time in less than 1% of its code. For that 1%, I'd consider
coarrays when refactoring the code. For the rest? (OK, once the
parallel speedup exceeds 100, think about it again.)

How well do coarrays work together with OO design? I remember reading
some criticism regarding that point, but didn't try myself yet. 10
years ago I had doubts about OOP, but today it has become standard for
me (in the sense of GoF). Maybe I'm still an exotic in the Fortran
world, but I consider the OOP support in the Fortran language excellent,
not just in theory but in real applications. Now, should one consider
for future projects that everything will be running multi-threaded, also
the non-critical parts that don't handle big chunks of data but require
big chunks of code? I hope coarrays turn out versatile enough to cope
with that situation.

abrsvc

unread,
Sep 3, 2015, 7:19:27 AM9/3/15
to
All this discussion about features and using "new" standards is nice, but shouldn't the emphasis be on what is best to resolve the problems at hand? If you are providing code for a serial interface, is OOP the best way to handle it or is the existing F77 code OK? I have had experience with people insisting on using features (whether Fortran or others is not important here) where straightforward code would have been both quicker and self documenting. Sometimes using new features just to use them is counterproductive. When writing code, sometimes using "the old fashion way" is easier for the future maintainer to see what the code does. Being fancy is not always the way to go. Simplicity has its value sometimes too.

Dan

Ian Harvey

unread,
Sep 3, 2015, 7:33:25 AM9/3/15
to
My understanding, supported by asking here and elsewhere, is that you
can't easily communicate the value of a polymorphic object between
images. See C617 in F2008.

(I qualify with "easily", because you could perhaps do some sort of
dynamic type specific serialization, then communicate the serialization,
then deserialize the value on the destination image, but that sort of
fiddly nonsense is not what I consider easy for a programmer. It isn't
something which strikes me as being that hard for a compiler though -
managing fiddly detail on behalf of a programmer kind of being the
reason that compilers exist.)

Is that a problem? I guess as per usual that depends on what you are
trying to do.

But I look at one of my current projects (which to be fair is probably
very atypical for Fortran) and CLASS appears all over the place.
Thumbsuck guess - 99% of the procedures have at least one dummy argument
that is polymorphic or has a polymorphic subobject. Perhaps I am coming
at coarrays with the wrong mental model, but if I use the procedure
arguments as an indication of the nature of the information flows in
that program, then I don't see opportunities for information flow
between images, despite there being independent streams of execution in
between periods of information transfer.

For other application cases I'm sure the situation is different. As I
said, I probably have a very atypical application, I don't want to
discourage anyone from learning and investigating and experimenting with
the application of coarrays to their problems at all. But I am rather
disappointed that the feature appears to have been designed as an
addition to Fortran 95, not Fortran 2003. As I recall reading somewhere
recently... it's time to move on!

Stefano Zaghi

unread,
Sep 3, 2015, 7:56:22 AM9/3/15
to
Il giorno giovedì 3 settembre 2015 13:19:27 UTC+2, abrsvc ha scritto:
> All this discussion about features and using "new" standards is nice, but shouldn't the emphasis be on what is best to resolve the problems at hand? If you are providing code for a serial interface, is OOP the best way to handle it or is the existing F77 code OK? I have had experience with people insisting on using features (whether Fortran or others is not important here) where straightforward code would have been both quicker and self documenting. Sometimes using new features just to use them is counterproductive. When writing code, sometimes using "the old fashion way" is easier for the future maintainer to see what the code does. Being fancy is not always the way to go. Simplicity has its value sometimes too.
>
> Dan

Well, this is exactly the point... F77 code is (in general) obscure and error-prone... You are talking about "the problems at hand", so it is up to you to list the cases. Damian simply points out that developing a new algorithm with 25-year-old idioms is a bad idea (in general, whatever problem you are handling) because the new idioms have added a lot of features that help you to be more clear, concise, etc...

The point is not OOP or CAF... do you like the old-fashioned procedural approach? Good; then why do you think your procedural approach is expressed better in F77/F90 than in F03+?

I see a lot of mythic F77-still-in-use-productive-(wow) codes in my research area; none of them is clear, concise, readable, well documented, modular, portable, or reusable without effort...

As you said, simplicity is very important: polluting your code with common blocks and gotos does not simplify it.

We are not "fashion-driven programmers" or "fanciness fans" to be disregarded: we simply point out that the new idioms add a lot of features (even setting aside OOP and CAF) that help you to be more clear, concise, simple, and readable... do you not agree?

As Richard Maine said, I think there is a lot of "inertia" against the new standards, and maybe for the first time the compiler vendors are "upstream" of the developers :-)

Wolfgang Kilian

unread,
Sep 3, 2015, 8:08:27 AM9/3/15
to
On 03.09.2015 13:19, abrsvc wrote:
> All this discussion about features and using "new" standards is nice, but shouldn't the emphasis be on what is best to resolve the problems at hand? If you are providing code for a serial interface, is OOP the best way to handle it or is the existing F77 code OK? I have had experience with people insisting on using features (whether Fortran or others is not important here) where straightforward code would have been both quicker and self documenting. Sometimes using new features just to use them is counterproductive. When writing code, sometimes using "the old fashion way" is easier for the future maintainer to see what the code does. Being fancy is not always the way to go. Simplicity has its value sometimes too.
>
> Dan
>

OOP is no more simple or complicated than procedure-oriented
programming. The primary distinction is whether you associate data with
procedures or procedures with data. For a few lines of code, depending
on your education and on the nature of the problem, you may find either
approach more obvious; the power and functionality are the same.

On a larger scale, where you actually design a program and algorithms,
the story is different. You do want to separate the package into
hierarchical independent elements that can be tested, modified or
replaced on an individual basis. You may associate data with procedures
and abstract over those - then you want a functional language. Great
for multi-threaded environment, but Fortran is not a functional
language. Or you may associate procedures with data. This is supported
by modern Fortran.

So it's not about being fancy but whether the problem is large or small
- whether you want a program to grow and impose structure by ad-hoc
rules, or whether you use the power and flexibility of the language
itself.

Personally, I came to the point where I associate my procedures with
data even on a small scale. I find the code to become more readable,
self-documenting and maintainable than doing it the other way round.
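To make the distinction above concrete, here is a minimal sketch of "associating procedures with data" in Fortran 2003+ via a type-bound procedure. The type, names, and operation are hypothetical, chosen only to illustrate the call-site style; they come from no particular application.

```fortran
! Hypothetical example: a point type with a type-bound procedure,
! illustrating "procedures associated with data" in Fortran 2003+.
module point_mod
  implicit none
  private
  public :: point

  type :: point
    real :: x = 0.0, y = 0.0
  contains
    procedure :: norm   ! type-bound: invoked as p%norm()
  end type
contains
  pure function norm(this) result(r)
    class(point), intent(in) :: this
    real :: r
    r = sqrt(this%x**2 + this%y**2)
  end function
end module

program demo
  use point_mod
  implicit none
  type(point) :: p
  p = point(3.0, 4.0)
  print *, p%norm()   ! the call site reads data-first: 5.0
end program
```

The same computation could of course be written as a free procedure `norm(p)`; the point is that `p%norm()` ties the operation to the data it acts on, which scales better as the number of types grows.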

Gary Scott

unread,
Sep 3, 2015, 8:23:26 AM9/3/15
to
On 9/3/2015 12:33 AM, Richard Maine wrote:
> Stefano Zaghi <stefan...@gmail.com> wrote:
>
>> Dear Richard, I am very sorry. I follow all your posts with great
snip
> That doesn't sound to me like a list of reasons why some people prefer
> f77. That sounds like just repeating the fact that they prefer f77,
> listing some of its differences from f90+, without getting into why they
> have such a preference. My personal observation is that most people who
> still prefer f77 do so out of inertia -

For me, being behind the curve has almost always been because it is like
pulling teeth to negotiate a compiler upgrade through the byzantine IT
software procurement bureaucracy...:( I've been trying for 9 months now
to get a compiler upgrade and a gino upgrade. It isn't so much as
resistance but bureaucracy (lost request more than once), approvals,
funding timing. Even though I've been using these tools since 1988 or
so and they should know that I'm going to want to upgrade at least every
5 years or so (sheesh).

snip

Wolfgang Kilian

unread,
Sep 3, 2015, 8:24:14 AM9/3/15
to
Looks like my code is also very atypical. I have yet to gather
experience with coarrays - this sounds as if you have to communicate
dynamic data via specific coindexed buffers, as with good old MPI. I
hope that such restrictions are not baked into the syntax itself, so the
constraints can simply be removed in a standard revision.

Gordon Sande

unread,
Sep 3, 2015, 10:23:05 AM9/3/15
to
Try getting a "maintenance contract" where you pay every year and get the
updates whenever they arrive. Much lower fuss even if the cash outlay might
be higher. All the other "costs" are lower, so it pays off. This is a case
where "you have more money than brains" in an institutional environment.




Richard Maine

unread,
Sep 3, 2015, 11:28:13 AM9/3/15
to
Gary Scott <garyl...@sbcglobal.net> wrote:

> On 9/3/2015 12:33 AM, Richard Maine wrote:
> > Stefano Zaghi <stefan...@gmail.com> wrote:
> >
> >> Dear Richard, I am very sorry. I follow all your posts with great
> snip
> > That doesn't sound to me like a list of reasons why some people prefer
> > f77. That sounds like just repeating the fact that they prefer f77,
> > listing some of its differences from f90+, without getting into why they
> > have such a preference. My personal observation is that most people who
> > still prefer f77 do so out of inertia -
>
> For me, being behind the curve has almost always been because it is like
> pulling teeth to negotiate a compiler upgrade through the byzantine IT
> software procurement bureaucracy...:(

Understand. Been there. I was talking more about people who prefer older
versions rather than people who are stuck with them for reasons other
than preference.

glen herrmannsfeldt

unread,
Sep 3, 2015, 11:51:13 AM9/3/15
to
Richard Maine <nos...@see.signature> wrote:

(snip on staying with Fortran 77)
(someone wrote)
>> My feeling of why people do not move on is that many still think that
>> "implicit is better than explicit, goto is great, common block is
>> useful..." and in the meanwhile they think "automatic lhs-reallocation
>> is bad, association is dangerous, modules complicate compilation...",
>> but I could be wrong, my experience is limited to research and aerospace
>> industry.

> That doesn't sound to me like a list of reasons why some people prefer
> f77. That sounds like just repeating the fact that they prefer f77,
> listing some of its differences from f90+, without getting into why they
> have such a preference. My personal observation is that most people who
> still prefer f77 do so out of inertia - that's what they learned and
> they really aren't prepared to learn new ways. Exceptions certainly
> exist, but my impression is that's the most common reason.

Seems to me that's sometimes also the reason for resistance to social change.

We have climate change deniers, who don't really understand the
science, or even the natural result of exponential growth, but
just prefer doing things the old way.

> As such, it
> is naturally disappearing because the people who learned to program when
> f77 was current are aging out of active roles. I first learned Fortran
> well before f77, but I also learned that the computer world was
> changing enough that if I didn't regularly learn new things, I'd rapidly
> become irrelevant.

In some other cases, they might not be aging out fast enough.

-- glen

steve kargl

unread,
Sep 3, 2015, 11:55:47 AM9/3/15
to
Damian Rouson wrote:

> After reading a paper that expressed a new algorithm in Fortran 90, I can't help
> but wonder when we will tire of expressing new algorithms in 25-year-old idioms
>when more recent intrinsics and statements might collapse 5-10 lines down to 1.

Perhaps the important point of the article was the new algorithm, not the
language the authors chose to demonstrate the algorithm in. Knuth wrote
a rather famous set of books that put the importance on the algorithms, not
on the expression of those algorithms in any given common programming
language. In fact, Knuth invented MIX (and now MMIX) with the sole purpose
of not tying the algorithms to an implementation.

--
steve

glen herrmannsfeldt

unread,
Sep 3, 2015, 11:59:11 AM9/3/15
to
Wolfgang Kilian <kil...@invalid.com> wrote:
> On 03.09.2015 13:19, abrsvc wrote:
>> All this discussion about features and using "new" standards is
>> nice, but shouldn't the emphasis be on what is best to resolve
>> the problems at hand?

(snip)

> OOP is no more simple or complicated than procedure-oriented
> programming. The primary distinction is whether you associate data with
> procedures or procedures with data. For a few lines of code, depending
> on your education and on the nature of the problem, you may find any
> approach more obvious, the power and functionality is the same.

Yes. This might not be so well known.

> On a larger scale, where you actually design a program and algorithms,
> the story is different. You do want to separate the package into
> hierarchical independent elements that can be tested, modified or
> replaced on an individual basis. You may associate data with procedures
> and abstract over those - then you want a functional language. Great
> for multi-threaded environment, but Fortran is not a functional
> language. Or you may associate procedures with data. This is supported
> by modern Fortran.

This distinction on scale might not be so well known. It might be
best to be object-oriented on one scale, but not another.

You don't want to create and destroy objects within the inner loop
of a matrix multiplication routine, or for that matter within the
inner loop of many algorithms.

> So it's not about being fancy but whether the problem is large or small
> - whether you want a program to grow and impose structure by ad-hoc
> rules, or whether you use the power and flexibility of the language
> itself.

> Personally, I came to the point where I associate my procedures with
> data even on a small scale. I find the code to become more readable,
> self-documenting and maintainable than doing it the other way round.

-- glen

Damian Rouson

unread,
Sep 3, 2015, 12:58:31 PM9/3/15
to
This makes perfect sense and I would of course never presume to argue with Knuth. :)
This paper also had pseudocode in the body of the paper. The Fortran program was
an attachment -- effectively an appendix.

For what it's worth, what I found fascinating is that compacting the code with newer
features made the actual Fortran so short that I believe it could be used instead of
pseudocode and would have two advantages: (1) Fortran is less ambiguous than
pseudocode because it's backed by a standard and (2) the array notation, mask-based
intrinsics, and DO CONCURRENT can combine to remove some of the unnecessary
serialization of the algorithm.

Many Fortran programs model nature and many aspects of nature are inherently parallel.
Rather than asking "Why parallelize this algorithm?", an alternative viewpoint
is "Why was this algorithm ever serialized?" When the natural aspects of what is being
modeled are parallel, the parallel algorithm might be the more natural and clear way to
express the algorithm. I should also be careful to point out that I don't necessarily
mean "parallel" in the sense of parallel computing. I mean to connote a notion of
parallelism that would include array assignments. Such an assignment might or might
not execute in parallel depending on the overall code implementation, the compiler,
the hardware, etc. But I think the algorithm is more clear with an array assignment
rather than nested loops that do the assignment element-by-element.

What I was reading was a combination of Fortran 77 idioms (e.g., element-by-element
assignment) with some Fortran 90 (e.g., array constructors). What I would replace it
with involves a Fortran 90 mask-based intrinsic and Fortran 2008 DO CONCURRENT.
I think it's perfectly reasonable to stop there because even the steps I just mentioned
demonstrate the parallelism in the way I'm using the word "parallelism." For what it's
worth, however, after I revised the algorithm in this way, it became immediately obvious
how to generate actual parallel computation. The coarray parallelization occupied less than
10% of the time I spent revising the code.
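The kind of compaction described above can be sketched generically. The following is not the paper's algorithm (which isn't shown in this thread) but a hypothetical illustration of the same progression: an element-by-element F77 loop, the equivalent F90 masked intrinsic, and an F2008 DO CONCURRENT that asserts iteration independence.

```fortran
! Hypothetical illustration (not the paper's algorithm): clipping
! negative values to zero and summing the rest, three ways.
program compaction
  implicit none
  real :: a(5) = [1.0, -2.0, 3.0, -4.0, 5.0]
  real :: b(5)
  integer :: i

  ! Fortran 77 idiom: explicit element-by-element serialization.
  do i = 1, 5
     if (a(i) < 0.0) then
        b(i) = 0.0
     else
        b(i) = a(i)
     end if
  end do

  ! Fortran 90: one masked intrinsic expresses the same intent.
  b = merge(a, 0.0, a >= 0.0)

  ! Fortran 2008: DO CONCURRENT asserts the iterations are
  ! independent, exposing the parallelism to the compiler.
  do concurrent (i = 1:5)
     b(i) = max(a(i), 0.0)
  end do

  ! Mask-based reduction (sums 1.0 + 3.0 + 5.0 = 9.0).
  print *, sum(b, mask = b > 0.0)
end program
```

Whether any of these forms actually executes in parallel depends on the compiler and hardware, as noted above; the point is that the one-line forms state the intent without imposing an ordering.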

I just realized that a somewhat simplified version of the paper's algorithm can be used
in my own work. I'll see if I can extract a short snippet from my own code to post in a
new thread for comment. I'll be very interested in people's feedback because I'm
considering using the code in courses I teach.

FortranFan

unread,
Sep 3, 2015, 3:24:49 PM9/3/15
to
On Thursday, September 3, 2015 at 12:58:31 PM UTC-4, Damian Rouson wrote:
> ..
>
> I just realized that a somewhat simplified version of the paper's algorithm can be used
> in my own work. I'll see if I can extract a short snippet from my own code to post in a
> new thread for comment. I'll be very interested in people's feedback because I'm
> considering using the code in courses I teach.


A few comments:

* the programming world in general and the scientific/technical computing domain in particular will need lots and lots of educators and communicators of Fortran like you, Dr. Rouson, if the idioms are ever going to change! Fortran has a rich legacy but it is also its (heavy) burden.

* the programming world, especially the scientific/technical/HPC computing domain, can benefit greatly from the "Recipes" series of content that is so widely available for many of the top languages, such as those in the Tiobe Index. "Programming Recipes in Fortran 2015 (or 2008)" would be very useful. Toward this, all the work you are doing - blogs/vlogs (Stanford videos), books, etc. - is invaluable.

* Re: the article that got you to initiate this thread, perhaps you can put together another article or a note based on your own findings of the new algorithm and try to get it published in the same journal, if that's possible. This can educate the readers of the possibilities with latest Fortran standard.

* Or you can perhaps write to the authors and share your findings with them directly; it might open their eyes to modern capabilities of which they may be unaware.

Separately,

* I prefer when authors in formal publications express their algorithms with flowcharts and words and phrases in normal language (English if that's the language the article is published in) rather than a specific programming or markup language.

* Any reference to the Cray compiler and its features is worse than useless for a discussion like this, given how elusive it is.

Damian Rouson

unread,
Sep 3, 2015, 3:37:06 PM9/3/15
to
On Wednesday, September 2, 2015 at 6:11:40 PM UTC-7, Richard Maine wrote:

> I retired early in 2007 and there were no compilers yet supporting f2003
> at that time, at least on platforms of interest to me.

2007 was a turning point for me. I submitted my book proposal to Cambridge
University Press around January of that year. I was planning to emulate OOP in
Fortran 95. Cambridge sent the proposal out for review and the reviewer
suggested that using Fortran 2003 would give the book more lasting value.
I spent the remainder of that year searching for a 2003 compiler. Toward the
end of the year, a chance encounter and some personal contacts led me to
Jim Xia who was on IBM's compiler test team and on the Fortran standards
committee. I benefitted tremendously from my interactions with Jim. I invited
him to write a journal article together and then invited him to co-author
the book with me.

> Things have undoubtedly improved since then, but as noted, I'm just
> doing hobby-type stuff now. Still seems to me that I recall some pretty
> basic f2003 stuff not working with gfortran until quite recently.

Yes, this is very recent. I've been feeling for a while that 2015 is a watershed
moment and, as of August 5, the feeling grew even stronger. That was the day
that Paul Richard Thomas submitted his submodules patch to the
GCC 6.0.0 trunk, making the compiler two major features away from 2008
compliance. And Paul and another gfortran developer are scoping out the
work on the two remaining features (crowdfunding, anyone?).

Considering Cray's 2008 compliance and how close Intel and IBM are to the
same, I'd place a gentleman's bet on there being four fully 2008 compliant compilers
either released or in beta within 18 months. Moreover, all four of the aforementioned
compilers already support substantial pieces of what is expected to be in Fortran
2015 so 2015 compliance will follow not long thereafter.

I'm sure I follow the compiler developments more than most Fortran programmers so
my original post was intended to be a clarion call to get the word out.


> Details forgotten, but don't I recall something like deferred-length
> strings not working as derived-type components?

Many of gfortran's problems with allocatable deferred-length character variables
have been fixed recently. Searching GCC's Bugzilla site on the words
"deferred-length derived type" turns up five outstanding bug reports, but not all
are related to your question, and two have updates from last month, so at least
there is activity.

> Do recall that the
> compilers installed on user systems are often several years old, so
> things that were fixed just in the last year or two aren't ones I'd
> count on for production use yet, if I were still doing production
> things.

Agreed. My original posting in this thread related to demonstration code in a
journal article, which of course imposes different constraints from those faced
by production code. I'm thinking that journal articles are similar to books in
that having lasting value is a high priority. In many cases, using newer language
features serves that purpose (I guess the exception would be when the newer
feature later becomes deprecated as appears to be the case with FORALL for example).

>
> So yes, I still have doubts about using f2003. That doesn't mean I
> prefer f90/f95; I most certainly do not. The doubts are based solely on
> compiler support.

If you're willing to use the absolute latest versions, I think almost all of those doubts
will be removed. I spend a lot of time teaching modern Fortran and have finally
reached the point where I rarely have to insert workarounds for missing features if I'm
using the Cray, Intel, or GNU compiler. I wouldn't write any of what I'm writing in this
thread if GNU weren't on the list. It is the free, open-source availability of modern
Fortran that I think makes 2015 a watershed moment.

Because of Intel's previous performance problems with coarrays, I wouldn't even
have included Intel in the list until they released version 16.0 last month. We really
are turning a corner here.

If anyone wants to try these features in non-production code, the virtual machine that
I offer online contains a GCC 6.0.0 build from August 25:

http://www.sourceryinstitute.org/store

I expect to update the virtual machine periodically.

Damian

FX

unread,
Sep 3, 2015, 5:41:10 PM9/3/15
to
> * Any reference to Cray compiler and its features is worse than useless
> for any discussion like this given how the elusive it is.

For HPC use, it is one of the references. I haven't used it myself, my
HPC platforms use (or used) Intel or IBM compilers, and NEC in the past.

--
FX

Terence

unread,
Sep 3, 2015, 5:52:40 PM9/3/15
to
About parallel processing:-
Humanity and most living creatures use parallel processing (with
constantly-adjusted priority output attention levels) plus a central task
manager for system awareness.

My first experience with Transputer arrays (interconnected CPUs), which used
the same parallel methods for problem solving, made me realise that this is
how REAL self-aware thinking machines could be developed.

My humble opinion is that, to REALLY move on in programming, we need some
manufacturer of highly-parallel systems (e.g. Nvidia, using Fortran-like
CUDA on its multiple-CPU graphics boards) to produce economical
parallel-processing laptops for university and home programming. Then leave
progress to the nerds.

Oh! One more observation from early works in the process control field: you
shouldn't transmit data between tasks; you should instead switch the access
rights to where the needed data is stored (one basic time unit).


James Van Buskirk

unread,
Sep 3, 2015, 7:02:18 PM9/3/15
to
"Damian Rouson" wrote in message
news:ec281193-a4f4-4b64...@googlegroups.com...

> In the case of production code, I've seen organizations require that a
> feature be supported by 4 or 5 compilers in order to use the feature.
> Well, I just listed 4 compilers with 2008 compliance that is either
> complete or nearly so. If 4 compilers isn't sufficient, then let's go
> with Fortran 2003 and add in Portland Group (full 2003 compliance)
> and NAG (only missing derived type I/O). Then we have 6 strong
> arguments for not settling on Fortran 90.

Although I really like f2003 and have some features I recommend like
C interoperability and deferred-length variables, the standards are
complicated enough that you can probably slaughter any of the
compilers in question with the right kind of f95 code.

campbel...@gmail.com

unread,
Sep 3, 2015, 10:38:31 PM9/3/15
to
I have been following this topic with interest. Coarrays sound like a new feature that I should investigate.

However, I am a Fortran 95 user. I have considered the significant complexity that has been introduced into the language with f03 and F08 to be a dramatic change to the Fortran language, which was once a simple pragmatic language and is now overly complex.

I wonder who can learn Fortran from scratch, if they were to look at the F2008 language. I learnt F66 in the early 70's then progressed through 77 to 95. Then, an overriding strategy was always to keep it simple.
F95 with a well-designed data structure and hierarchy of routines produces good solutions for numerical calculations. The use of allocatable variables in modules is well suited to 64-bit computing.
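The F95 style of hosting run-time-sized data in modules can be sketched as follows. This is a minimal, hypothetical example (the module and variable names are invented); routines elsewhere in the code reach the array through use association rather than argument lists or common blocks.

```fortran
! Minimal F95-style sketch: an allocatable work array hosted in a
! module and sized at run time, with no fixed compile-time limit.
module work_data
  implicit none
  real, allocatable :: field(:)
end module

program solver
  use work_data
  implicit none
  integer :: n
  n = 1000000              ! problem size known only at run time
  allocate (field(n))
  field = 0.0
  ! ... a hierarchy of routines operates on field via use association ...
  print *, size(field)
  deallocate (field)
end program
```

On a 64-bit system the array size is limited by available memory rather than by any static declaration, which is the suitability being claimed above.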

My latest additions have been to learn how to use Fortran and benefit from !$OMP and vector instructions. It is a challenge keeping up with these changes although the results have been very interesting but variable. On wintel architecture, the memory access bottleneck is a significant solution design issue.

As for implementing F03 and F08 in production code, there is the uncertainty of the compiler reliability, combined with the reliability of the optimisers that are available. It is much safer to know your coding approach will work using F95 than invest in new data structure approaches that add another level of complexity and the uncertainty that these approaches may not be reliably portable.

As an F95 user, I am not convinced of the benefits of the complex additions to the standard and wonder how someone with little knowledge of Fortran could become reliably proficient in the modern standard. The complexity of F2008 is an issue that needs to be addressed.

Damian Rouson

unread,
Sep 4, 2015, 12:32:38 AM9/4/15
to
On Thursday, September 3, 2015 at 7:38:31 PM UTC-7, campbel...@gmail.com wrote:

> I wonder who can learn Fortran from scratch, if they were to look at the F2008 language.

It's all about finding the right port of entry. Very few people learn Fortran as their first programming language anymore so the proper entry point depends on their prior programming experience. For someone most familiar with MATLAB, start with Fortran's array programming facilities. For someone most familiar with Java or C++, start with Fortran's OOP feature set. For someone experienced with parallel programming using MPI or OpenMP, start with coarrays. For someone most familiar with C, start with the C-interoperability features.

I taught a tutorial on modern Fortran to a group of student interns and recent hires at BP last summer. Most of them had little or no prior Fortran experience, but they all had programming experience in another modern language. Almost every modern language is object-oriented (even MATLAB) and the OOP world has a unifying set of concepts that make it relatively easy to jump from one language to the next. If you know how to construct a class hierarchy and invoke methods on objects in one language, the conceptual leap to doing so in a new language is relatively short. My impression was that it was actually easier to teach OOP in Fortran to OOP programmers from other languages than it often is to teach OOP in Fortran to seasoned Fortran programmers if they have no OOP experience.

Damian

Damian Rouson

unread,
Sep 4, 2015, 12:52:43 AM9/4/15
to
On Thursday, September 3, 2015 at 4:33:25 AM UTC-7, Ian Harvey wrote:
>
> My understanding, supported by asking here and elsewhere, is that you
> can't easily communicate the value of a polymorphic object between
> images. See C617 in F2008.

That constraint seems to have a lot of qualifiers that narrow its scope. I can
say that basic communication of a polymorphic object works in GFortran 5.2.
I just successfully compiled the following example with the OpenCoarrays "caf"
compiler wrapper and "cafrun" program launcher:

$ cat polymorpic-communication.f90
program main
  implicit none
  type foo
    integer :: bar
  end type
  class(foo), allocatable :: foobar[:]
  allocate(foo::foobar[*])
  if (num_images()<2) error stop "Not enough images"
  if (this_image()==2) foobar%bar=1
  sync all
  if (this_image()==1) then
    print *,"Before communication: foobar%bar = ",foobar%bar
    foobar%bar = foobar[2]%bar
    print *,"After communication: foobar%bar = ",foobar%bar
  end if
end program

$ caf polymorpic-communication.f90 -o poly

$ cafrun -np 2 ./poly
Before communication: foobar%bar = 0
After communication: foobar%bar = 1

The constraint appears to apply to the case when the component (bar) is allocatable. I haven't come across a need for this in my programming, but I imagine it could prove overly restrictive for some applications. I have more experience with writing derived types that have coarray components, but less experience with coarrays that are themselves of derived type.

Damian

FortranFan

unread,
Sep 4, 2015, 3:18:43 AM9/4/15
to
On Thursday, September 3, 2015 at 10:38:31 PM UTC-4, campbel...@gmail.com wrote:
> I have been following this topic with interest. Coarrays sound like a new feature that I should investigate.
>
> However, I am a Fortran 95 user. I have considered the significant complexity that has been introduced into the language with f03 and F08 to be a dramatic change to the Fortran language, which was once a simple pragmatic language and is now overly complex.
>

There is little in the newer features of Fortran (especially Fortran 2008) that is all that complex; rather, it is the newer programming paradigms behind those features, particularly OOP and parallelization, that many fail to grasp for any number of the reasons stated in this thread. That is too bad, but it has little to do with the language. The fact is that the Fortran committees for the 2003 and 2008 standard revisions have done a nice job of incorporating support for these newer aspects in a manner consistent with the rest of the language. In the process, they have made some genuinely advanced concepts amenable to Fortranners through an easy syntax. As soon as a coder begins to appreciate the paradigms, Fortran again becomes a simple, highly pragmatic language.

> I wonder who can learn Fortran from scratch, if they were to look at the F2008 language. I learnt F66 in the early 70's then progressed through 77 to 95. Then, an overriding strategy was always to keep it simple.

More features in the language don't necessarily mean more complexity. Take coarrays and DO CONCURRENT, for example; they offer major capabilities behind a simple syntax. In fact, many feel the coarray features in Fortran 2008 didn't go the full distance precisely because the standard bearers wanted to keep them simple.
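As a concrete illustration of that simplicity, a minimal DO CONCURRENT sketch (program and variable names are illustrative):

```fortran
! The programmer asserts the iterations are independent, which frees
! the compiler to vectorize or parallelize the loop as it sees fit.
program do_concurrent_demo
  implicit none
  integer :: i
  real :: x(1000), y(1000)
  x = [(real(i), i = 1, 1000)]
  do concurrent (i = 1:1000)
    y(i) = sqrt(x(i))
  end do
  print *, y(1000)
end program do_concurrent_demo
```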

> F95 with a well designed data structure and hierarchy of routines produces good solutions for numerical calculations. The use of allocatable variables in modules is well suited to 64-bit computing.
>

There were problems and gaps in the allocatable variables of Fortran 90/95, as has been discussed elsewhere on this forum, that were addressed in Fortran 2003; the 2008 revision simply made a few things even easier for the coder. Sticking to Fortran 95 makes little sense on this account.

> My latest additions have been to learn how to use Fortran and benefit from !$OMP and vector instructions. It is a challenge keeping up with these changes although the results have been very interesting but variable. On wintel architecture, the memory access bottleneck is a significant solution design issue.
>
These are non-standard aspects, not relevant to 2003 and 2008 revisions. If anything, the newer standards have made many vendor-specific aspects superfluous, a highly welcome development.

> As for implementing F03 and F08 in production code, there is the uncertainty of the compiler reliability, combined with the reliability of the optimisers that are available. It is much safer to know your coding approach will work using F95 than invest in new data structure approaches that add another level of complexity and the uncertainty that these approaches may not be reliably portable.
>

As the OP is trying to show in this thread, such concerns are now starting to be outdated and misplaced.

> As an F95 user, I am not convinced of the benefits of the complex additions to the standard and wonder how someone with little knowledge of Fortran could become reliably proficient in the modern standard. The complexity of F2008 is an issue that needs to be addressed.

For those who have started seeing the benefits of OOP, modern Fortran luckily offers decent options with good, simple syntax. For those who want to do parallelization based on an MPI- or PGAS-type approach, Fortran includes a standard way. For those who need to do mixed-language programming, a lot of madness has been eliminated by what the standard offers for interoperability with C processors. Submodules fix unresolved deficiencies from Fortran 90 and allow implementations to follow design, as it should have been from the beginning. Somebody may complain about parameterized derived types and derived-type I/O, but they are extensions of the derived-type concept and as such, I do not buy the argument that they complicate matters all that much for a coder; it is a separate matter altogether if compiler implementations struggle to get them right. The rest of the stuff in Fortran 2003 and 2008 (IEEE aspects, etc.) is largely designed to make things easier for the coder, and a decent programmer can pick it up easily.
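For instance, a minimal submodule sketch (module and procedure names are made up for illustration): the module exposes only the interface, while the implementation lives separately and can change without forcing recompilation of the module's users.

```fortran
module geometry
  implicit none
  interface
    ! Only the interface is visible to users of the module.
    module function area_of_circle(r) result(area)
      real, intent(in) :: r
      real :: area
    end function area_of_circle
  end interface
end module geometry

! The implementation can be modified and recompiled on its own.
submodule (geometry) geometry_impl
  implicit none
contains
  module function area_of_circle(r) result(area)
    real, intent(in) :: r
    real :: area
    area = 3.14159265*r*r
  end function area_of_circle
end submodule geometry_impl
```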

Bottom line: reports of increasing complexity in Fortran are greatly exaggerated, to paraphrase a quote attributed to a legendary satirist.

Arjen Markus

unread,
Sep 4, 2015, 7:01:42 AM9/4/15
to
Slightly off-topic, but some time ago I realised that Fortran offers a bunch of tools to make reading (scalar) input variables from files very easy.

Rather than using read statements like "read(10,*) value" or manipulating files with lines like "key = value" by hand, I constructed a small module that takes care of almost all the details. It reads these key-value pairs and assigns each value to the associated variable. Here is a simple example:

integer :: x
real :: y
character(len=20) :: string

!
! Provide default values
!
x = -1
y = -1.0

!
! Get the values
!
call get_values( 'keyvars.inp', [keyvar("int", x, "Integer value"),  &
                                 keyvar("real", y, "Real value"),    &
                                 keyvar("char", string, "Some text")] )

The routine get_values takes care of a lot of things (for instance, it examines the command-line arguments to see whether something other than reading from the file "keyvars.inp" is required, such as writing a template or reading from a different file).

But the really neat feature is that you can associate a _variable_ with a keyword, so that the reading procedure is fully automated.

(In fact, modulo a few details, this was possible already with Fortran 90.)
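Pending the Flibs release, a hypothetical minimal version of such a reader might look like the following; the type and routine names are guesses at the API sketched above, restricted to integer values for brevity (and using F2008's newunit= for convenience):

```fortran
module keyvars
  implicit none
  type keyvar_int
    character(len=20) :: key
    integer, pointer :: var          ! the variable associated with the key
    character(len=40) :: description
  end type keyvar_int
contains
  subroutine get_values(filename, vars)
    character(len=*), intent(in) :: filename
    type(keyvar_int), intent(in) :: vars(:)
    character(len=80) :: line, key
    integer :: unit, ios, ieq, i
    open(newunit=unit, file=filename, status='old', iostat=ios)
    if (ios /= 0) return             ! keep the caller's default values
    do
      read(unit, '(a)', iostat=ios) line
      if (ios /= 0) exit
      ieq = index(line, '=')
      if (ieq == 0) cycle            ! skip lines without "key = value"
      key = adjustl(line(1:ieq-1))
      do i = 1, size(vars)
        if (trim(key) == trim(vars(i)%key)) &
          read(line(ieq+1:), *) vars(i)%var
      end do
    end do
    close(unit)
  end subroutine get_values
end module keyvars
```

A caller would give the associated variables the TARGET attribute, e.g. "integer, target :: x", and invoke call get_values('keyvars.inp', [keyvar_int('int', x, 'Integer value')]).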

I intend to put it in my Flibs project. Just have not taken the time to do so yet :).

Regards,

Arjen

Ian Harvey

unread,
Sep 4, 2015, 7:35:24 AM9/4/15
to
On 2015-09-04 2:52 PM, Damian Rouson wrote:
> On Thursday, September 3, 2015 at 4:33:25 AM UTC-7, Ian Harvey wrote:
>>
>> My understanding, supported by asking here and elsewhere, is that you
>> can't easily communicate the value of a polymorphic object between
>> images. See C617 in F2008.
>
> That constraint seems to have a lot of qualifiers that narrow its scope. I can
> say that basic communication of a polymorphic object works in GFortran 5.2.
> I just successfully compiled the following example with the OpenCoarrays "caf"
> compiler wrapper and "cafrun" program launcher:

The example below does not communicate the value of a polymorphic
object. The object communicated is non-polymorphic - it is the integer
component. The super-object of that component is polymorphic, but that
is incidental. Similarly incidental is that the communicated thing
happens to be the entirety of the value of the polymorphic object
(because the dynamic type of the object only has that one component),
but that won't be the general case.

> $ cat polymorpic-communication.f90
> program main
> implicit none
> type foo
> integer :: bar
> end type
> class(foo), allocatable :: foobar[:]
> allocate(foo::foobar[*])
> if (num_images()<2) error stop "Not enough images"
> if (this_image()==2) foobar%bar=1
> sync all
> if (this_image()==1) then
> print *,"Before communication: foobar%bar = ",foobar%bar
> foobar%bar = foobar[2] %bar
> print *,"After communication: foobar%bar = ",foobar%bar
> end if
> end program

> The constraint appears to apply to the case when the component (bar) is allocatable. I haven't come across a need for this in my programming, but I imagine it could prove overly restrictive for some applications. I have more experience with writing derived types that have coarray components, but less experience with coarrays that are themselves of derived type.

Considering components, a polymorphic component is either a pointer or
an allocatable. Pointer components, whether polymorphic or not, can
only reference an object on the same image as the pointer (F2008 Note
7.45). Given that reasonable design, it makes no sense to transfer them
between images - hence the general statement that pointers become
undefined when communicated across images (F2008 7.2.2.3p2).

So that leaves polymorphic allocatable components. Given the wording of
that constraint, that knocks polymorphic components out completely.

(I'm not sure why polymorphic pointer components were not also
prohibited by constraint, given communication of them is pointless, but
perhaps the pointer association to something relevant on the local image
can be established by other means post the communication.)

Polymorphic components are what I had in mind with my original point
(refer
https://groups.google.com/d/msg/comp.lang.fortran/52xOEt-HTTc/7QXuFdzkiuQJ),
but to be fair my query does address polymorphic objects in general.
Perhaps I am wrong about that.

So considering polymorphic objects that are not components - i.e.
`foobar` without any further part references in your example above, and
for the sake of example, assume a definition `CLASS(foo), ALLOCATABLE ::
some_local_object`, there is a requirement on assignment statements that
prevents something along the lines of:

foobar[2] = some_local_object

This is reasonable, as it would imply remote allocation, and would break
the requirement that all coarrays have the same dynamic type.

Perhaps the alternative is permitted:

some_local_object = foobar[2]

or using Fortran 2003 sourced allocation, which is more likely to be
supported by current compilers than F2008 polymorphic assignment:

allocate(some_local_object, source=foobar[2])

but is foobar[2] a subobject of itself? If so, that violates C617.
Or perhaps the concept of a self-sub-object is absurd.

I asked one of my compilers (gfortran, current trunk), but, as is to be
reasonably expected with the implementation of new features in compilers
(or perhaps in any program - new features .eqv. new bugs), I didn't get
very far:

gfortran -fcoarray=lib 2015-09-04\ poly-coarray.f90
2015-09-04 poly-coarray.f90:15:0:

allocate(some_local_object, source=foobar[2])
1
internal compiler error: Segmentation fault

ifort 16.0.20150815 (their latest release) accepted it, but
`some_local_object` did not acquire the correct value. Again, new
features, new bugs.

My Cray machine is on the blink, so for now I'll assume this is
conforming. Discussion confirming or contradicting appreciated.

For reference, the complete program was:

program main
  implicit none
  type foo
    integer :: bar = 99
  end type
  class(foo), allocatable :: foobar[:]
  class(foo), allocatable :: some_local_object
  allocate(foo::foobar[*])
  if (num_images()<2) error stop "Not enough images"
  if (this_image()==2) foobar%bar=66
  sync all
  if (this_image()==1) then
    print *,"Before communication: foobar%bar = ",foobar%bar
    !foobar%bar = foobar[2]%bar
    !print *,"After communication: foobar%bar = ", &
    !        some_local_object%bar
    !some_local_object = foobar[2]
    allocate(some_local_object, source=foobar[2])
    print *,"After communication: some_local_object%bar = ", &
            some_local_object%bar
  end if
end program


But let me restate my problem more specifically: I want to be able to
communicate the value of a polymorphic object between images, where the
dynamic type of the object varies from image to image. Perhaps the
dynamic type of the object describes the task that the image needs to
execute, and/or, at the other end of the exchange, the dynamic type of
the object on an image is part of the description of the results of the
calculation done by that image.

I can think of ways to sort of hack around this, but they are very much
hacks. I just want to be able to write:

TYPE t
...other polymorphic components in here too...
END TYPE t
TYPE x
...
CLASS(t), ALLOCATABLE :: comp
END TYPE x

CLASS(t), ALLOCATABLE :: local_polymorphic_object
TYPE(x) :: coarray[*]
...
local_polymorphic_object = coarray[idx]%polymorphic_component


The following shows one possible hack, noting that:

- it still precludes the possibility of there being polymorphic
components inside the types that are being communicated (and I find this
seriously limiting);
- I am effectively duplicating, in my own variable, the dynamic type
information that the compiler already has;
- the in_type, out_type, and general use of select type mean the main
program has to have intimate knowledge of the extensions being
manipulated, which smashes encapsulation;
- my Cray is still on the blink, so I don't know if this is correct code.


! Module that defines some tasks, represented by extensions of
! `t`, and some results, represented by extensions of `r`.
module m
  implicit none

  type, abstract :: t
  contains
    procedure(t_execute), deferred :: execute
  end type t

  type, abstract :: r
  end type r

  abstract interface
    subroutine t_execute(in, out)
      import :: t
      import :: r
      implicit none
      class(t), intent(in) :: in
      class(r), intent(out), allocatable :: out
    end subroutine t_execute
  end interface

  type, extends(t) :: ta
    integer :: icomp
  contains
    procedure :: execute => a_execute
  end type ta

  type, extends(t) :: tb
    real :: rcomp
  contains
    procedure :: execute => b_execute
  end type tb

  type, extends(t) :: tc
    logical :: lcomp
  contains
    procedure :: execute => c_execute
  end type tc

  type t_collection
    class(t), allocatable :: item
  end type t_collection

  type, extends(r) :: ra
    integer :: icomp
  end type ra

  type, extends(r) :: rb
    real :: rcomp
  end type rb

  type, extends(r) :: rc
    logical :: lcomp
  end type rc

  type r_collection
    class(r), allocatable :: item
  end type r_collection
contains
  subroutine a_execute(in, out)
    class(ta), intent(in) :: in
    class(r), intent(out), allocatable :: out
    allocate(out, source=rb(real(in%icomp)))
  end subroutine a_execute

  subroutine b_execute(in, out)
    class(tb), intent(in) :: in
    class(r), intent(out), allocatable :: out
    allocate(out, source=rc(mod(in%rcomp, 2.0) < epsilon(0.0)))
  end subroutine b_execute

  subroutine c_execute(in, out)
    class(tc), intent(in) :: in
    class(r), intent(out), allocatable :: out
    allocate(out, source=ra(merge(1,-1,in%lcomp)))
  end subroutine c_execute
end module m

! Program to execute some tasks on multiple images, and then collect
! results back on image one.
program p3
  use m
  implicit none

  ! Type used to broadcast task data to images. Each possible
  ! task type needs to be a component.
  type :: in_type
    type(ta) :: a
    type(tb) :: b
    type(tc) :: c
  end type in_type

  ! Type used to collect results data from images. Each possible
  ! result type needs to be a component.
  type :: out_type
    type(ra) :: a
    type(rb) :: b
    type(rc) :: c
  end type out_type

  ! The type of task to be executed, then the type of result generated.
  character :: instruction[*]

  ! Used to communicate tasks to images.
  type(in_type) :: in[*]
  ! Used to communicate results from images.
  type(out_type) :: out[*]

  integer :: image  ! Image index

  ! Assign tasks.
  if (this_image() == 1) then
    block
      type(t_collection) :: tasks(num_images())

      do image = 1, num_images()
        select case (mod(image, 3))
        case (0) ; allocate(tasks(image)%item, source=ta(image))
        case (1) ; allocate(tasks(image)%item, source=tb(real(image)))
        case (2) ; allocate( tasks(image)%item, &
                             source=tc(mod(image,2) == 0) )
        end select
      end do

      ! At this point the `tasks` variable has all the task
      ! information. Now we need to communicate that with the
      ! images.
      do image = 1, num_images()
        select type (task => tasks(image)%item)
        type is (ta)
          instruction[image] = 'a'
          in[image]%a = task
        type is (tb)
          instruction[image] = 'b'
          in[image]%b = task
        type is (tc)
          instruction[image] = 'c'
          in[image]%c = task
        end select
      end do
    end block
  end if

  sync all

  ! Do task.
  block
    class(r), allocatable :: result

    select case (instruction)
    case ('a') ; call in%a%execute(result)
    case ('b') ; call in%b%execute(result)
    case ('c') ; call in%c%execute(result)
    end select

    ! Copy result back into coarray.
    select type (result)
    type is (ra)
      instruction = 'a'
      out%a = result
    type is (rb)
      instruction = 'b'
      out%b = result
    type is (rc)
      instruction = 'c'
      out%c = result
    end select
  end block

  sync all

  ! Collect results.
  if (this_image() == 1) then
    block
      type (r_collection) :: results(num_images())

      do image = 1, num_images()
        select case (instruction[image])
        case ('a')
          allocate(results(image)%item, source=out[image]%a)
        case ('b')
          allocate(results(image)%item, source=out[image]%b)
        case ('c')
          allocate(results(image)%item, source=out[image]%c)
        end select
      end do

      ! Here results has all our results. Go forth and be merry.
      !...
    end block
  end if
end program p3


In the absence of finding existing reports I will file bug reports for
gfortran and ifort as time permits.

Ian Harvey

unread,
Sep 4, 2015, 7:41:06 AM9/4/15
to
On 2015-09-04 9:35 PM, Ian Harvey wrote:
...
> I just want to be able to write:
>
> TYPE t
> ...other polymorphic components in here too...
> END TYPE t
> TYPE x
> ...
> CLASS(t), ALLOCATABLE :: comp
> END TYPE x
>
> CLASS(t), ALLOCATABLE :: local_polymorphic_object
> TYPE(x) :: coarray[*]
> ...
> local_polymorphic_object = coarray[idx]%polymorphic_component

I also want to always write correct code, but I often fail. The name of
the component in the right hand side expression in the last statement
should be `comp`, as per the type definition.



Gary Scott

unread,
Sep 4, 2015, 8:11:43 AM9/4/15
to
I've written similar processes in strict FORTRAN 77, although my first
version simply read every value as integer, real, and character, and I
chose the appropriate value internally. I didn't change it until I
needed it to be a bit faster.

fj

unread,
Sep 4, 2015, 12:49:36 PM9/4/15
to
As R. Maine explained, you cannot change the standard so easily in production codes. There are several reasons for that:
- when the development team is rather large (> 5 people), the project leader has to impose a set of programming rules which have to be stable for several years
- when the code is installed by many clients, you must often adapt your installation procedures for their OS and their compilers
- even without clients, you need to take care of your own computer system. For instance, my company has two Linux clusters of about 2000 cores each. The operating system and the associated compilers installed on such a network do not change very often (a major evolution every 5 years, no more). Presently, the GCC version is 4.4.7 (RedHat 2010).
- the people I work with are all physicists (like me). Most of them are not experts in programming. OOP and coarrays are not at all easy to learn for people who just want to develop physical or chemical models. Imposing Fortran 95+ was already a complicated challenge...

Despite these limitations, we have chosen to program in Fortran 95+, the + being composed of:
- the iso_c_binding
- the generalization of ALLOCATABLE (the famous TR15xxx)
- OpenMP for parallel computing (multi-thread shared memory model)
- MPI for parallel computing on the clusters (distributed memory)

We test our programs with 4 compilation systems (g95+gcc, gfortran+gcc, Intel (ifort+icc), NAG (nagfor+gcc)).

Damian Rouson

unread,
Sep 4, 2015, 1:33:35 PM9/4/15
to
On Friday, September 4, 2015 at 4:35:24 AM UTC-7, Ian Harvey wrote:
>
> The example below does not communicate the value of a polymorphic
> object. The object communicated is non-polymorphic - it is the integer
> component. The super-object of that component is polymorphic, but that
> is incidental. Similarly incidental is that the communicated thing
> happens to be the entirety of the value of the polymorphic object
> (because the dynamic type of the object only has that one component),
> but that won't be the general case.

Good points.

>

>
> Polymorphic components are what I had in mind with my original point
> (refer
> https://groups.google.com/d/msg/comp.lang.fortran/52xOEt-HTTc/7QXuFdzkiuQJ),

I have consistently been impressed by the committee's decisions over the years
(I only recently started participating, so this is not self-congratulation; I'm referring
to all the years before I joined). When I look at the dialogue cited above, I see
Richard was advocating holding off on coarrays to wait for more experience
with F2003 (a reasonable request), while you wanted the committee to go further
in integrating coarrays with the rest of the language, namely the OOP features (also a
reasonable request, but one requiring a lot more thought, time, and human resources).
I'd say the committee did an admirable job of coming down somewhere in between. My
guess is that Fortran will be the first widely used PGAS language. At this point, the
soonest the committee _might_ consider new features is 2017. I hope you'll put together
a proposal for new features at the appropriate time.

> So considering polymorphic objects that are not components - i.e.
> `foobar` without any further part references in your example above, and
> for the sake of example, assume a definition `CLASS(foo), ALLOCATABLE ::
> some_local_object`, there is a requirement on assignment statements that
> prevents something along the lines of:
>
> foobar[2] = some_local_object
>
> This is reasonable, as it would imply remote allocation, and would break
> the requirement that all coarrays have the same dynamic type.
>
> Perhaps the alternative is permitted:
>
> some_local_object = foobar[2]
>
> or using Fortran 2003 sourced allocation, which is more likely to be
> supported by current compilers than F2008 polymorphic assignment:
>
> allocate(some_local_object, source=foobar[2])
>
> but is a foobar[2] a subobject of itself? If so - that violates C617.
> Or perhaps that concept of self-sub-object is absurd.

I just tried these. Both the sourced allocation and the polymorphic assignment
compile but seg fault at runtime. I'll submit a bug report to Cray and store the
reported code in AdHoc (https://github.com/sourceryinstitute/AdHoc) so that it
can be tested automatically as part of a Cray bug-fix test suite. This will
be the first Cray bug in AdHoc (which is new, so not much is there yet). If
you're willing to take the time to do it first, please do.

>
> For reference, the complete program was:
>
> program main
> implicit none
> type foo
> integer :: bar = 99
> end type
> class(foo), allocatable :: foobar[:]
> class(foo), allocatable :: some_local_object
> allocate(foo::foobar[*])
> if (num_images()<2) error stop "Not enough images"
> if (this_image()==2) foobar%bar=66
> sync all
> if (this_image()==1) then
> print *,"Before communication: foobar%bar = ",foobar%bar
> !foobar%bar = foobar[2] %bar
> !print *,"After communication: foobar%bar = ", &
> ! some_local_object%bar
> ! some_local_object = foobar[2]
> allocate(some_local_object, source=foobar[2])
> print *,"After communication: some_local_object%bar = ", &
> some_local_object%bar
> end if
> end program
>

I hadn't read this far when I did the Cray test, but what I tested is functionally
equivalent to what is written above.
I agree that's not desirable and probably not consistent with the best OO
style. I have more recently found ways to eliminate much of the type
guarding that appeared in my book.

I read quickly through the additional code you posted and, although I
think I gleaned its structure, it reminds me of patterns I learned from
Ed Akin's book on emulating OOP in Fortran 90/95, which shows patterns
for emulating dynamic polymorphism before the language supported it.
I imagine similar patterns will be needed to emulate distributed polymorphism
until the language supports it.

>
>
> In the absence of finding existing reports I will file bug reports for
> gfortran and ifort as time permits.

Please feel free to add these to AdHoc also if you are so inclined.

As an aside, reading the code made me wish there were an option to post
graphics to this forum. I'm guessing some UML diagrams would be very
useful in summarizing the code's structure (in class diagrams) and behavior
(in sequence diagrams perhaps) to be understood at a glance. A picture is
worth a thousand words.

I'm going to be mostly offline for the next week due to family obligations this weekend and teaching another tutorial next week, so I might not respond to additional posts. At least I hope I don't, because otherwise either my family or my host for the next short course will get short shrift. :)

One final question: if features were added to support what you want to do, would there likely
be significant performance implications? For task management, performance is possibly of
no concern, because the task management is hopefully lightweight relative to the tasks
being managed. On the other hand, I would hope the standard would somehow make it
clear to the naive user if distributed polymorphism might have negative performance implications
for those aspects of the computation that make up a bigger portion of the runtime profile --
much like communicating allocatable components of derived-type coarrays incurs extra
overhead relative to communicating coarrays of intrinsic type or derived-type coarrays with
non-allocatable components.

I'm thinking about the early history of OOP, when it developed a reputation for hindering performance. I ultimately concluded that a lot of the problem wasn't in OOP
(which I'll define as the mechanics of writing classes). Rather, the problems were likely
due to poor Object-Oriented Design (which I'll define as deciding _which_ classes to write).
It took a while for projects to come along that showed that scientific OOP could be
high-performing (one such project is Trilinos: https://trilinos.org/).

My worry is that if the committee had gone too far down the path of more fully integrating coarrays with the OOP features of the language, some early adopters would have bumped up against
performance issues that would turn people away. It might be good to sacrifice some
flexibility in the short term so that what comes into the standard comports with one of
Fortran's chief advantages: performance. Then there can be a string of success stories like
the news Michael Siehl recently reported here, before moving on to add features that
must be used more judiciously.


Damian

michael siehl

unread,
Sep 6, 2015, 8:28:57 AM9/6/15
to
To me, Coarray Fortran brings two syntactical revolutions to the world of parallel programming. The first one, performance oriented, is a syntax that easily allows overlapping computation with (remote) communication. We discussed this recently in the OpenCoarrays forum: https://groups.google.com/forum/#!topic/opencoarrays/_MvSi21uq-M

The second syntactical revolution, the possibility of encapsulating access to the coarray syntax completely, is due to the fact that Coarray Fortran is only a minimalistic extension to the base language. (That is why I believe you can't do something similar in X10, Chapel, or other more fully featured parallel programming languages.) To understand this, let's face the fact that parallel programming is cumbersome and very scary to most programmers. To avoid coding errors in my coarray programming (which could otherwise produce hard-to-locate run-time failures), I soon started to encapsulate the required coarray syntax, and thus its functionality, into object-based wrapper objects (my setup is F90/95 object-based/ADT). (See also Modern Fortran Explained, chapter 19.1: "...coarray syntax should appear only in isolated parts of the source code.") (BTW, I do mark these object names with an ending '_CA' to still have a visual flag when PGAS memory is accessed. This is important because the compilers can't distinguish between coarrays and purely local variables; just changing a local variable into a coarray does not change an interface.)

With that, the program logic code, even that of a task pool, can be written without coarrays. In other words, the encapsulation makes it possible to do object-based parallel programming exclusively with F90/95/2003 syntax. (Well, I shouldn't say that too loudly, because my F90/95-style coarray wrapper objects do need maintenance and adjustment, but that is a merely mechanical process.) The point is that the main coding does not feel like parallel programming any more. Rather, it is just like ordinary object-based programming with the introduction of a new way for objects to interact with each other through an object ID (which is the image number on which the object is executed). I am not aware of any easier way to do parallel software development. The programming isn't any different from serial programming, at least in terms of writing the program logic code.

But now, one very thrilling question is whether all of that can also be done in a more F2003 true-OOP way using the most modern coarray feature, namely coarray components, which are explicitly intended to be used together with inheritance and polymorphism (see chapter 3.1.1 in Aleksandar Donev's paper: ftp://ftp.nag.co.uk/sc22wg5/N1701-N1750/N1702.pdf). Somebody (not me) should try this out soon. (While coarray components in F2008 can only be used in an SPMD programming style, the advent of coarray teams will certainly extend their use to MPMD-like programming as well.)

As an aside, I'd like to add the following about doing MPMD-like programming with Coarray Fortran.
Coarray Fortran adopts the SPMD programming model, but at the same time it makes it very easy to execute quite different code, or rather different object models, on distinct images, and thus allows MPMD-like programming. On the other hand, coarray correspondence requires that coarray declarations on distinct images be made within the equally named procedure. At first sight, this seems hard to arrange between completely different object models running on distinct images. Well, the solution is very simple: to establish a communication channel (coarray correspondence) between completely different object models, just declare your coarrays public in the specification part of a module (of course, I use my wrapper objects for that), and then put a USE statement wherever you need data transfer (communication) with that different object model on a distinct image.

best regards
Michael

Ron Shepard

unread,
Sep 6, 2015, 12:41:17 PM9/6/15
to
On 9/6/15 7:28 AM, michael siehl wrote:
> The first one, performance oriented, is a syntax that easily allows to overlap computation with (remote) communication. We did discuss this recently in the OpenCoarrays Forum:https://groups.google.com/forum/#!topic/opencoarrays/_MvSi21uq-M

Could you explain this a little more?

This is the opposite of my impression of CAF compared to, say, MPI.
With MPI, you can explicitly initiate a nonblocking data transfer, and
while it is executing you may continue local computations. Then you can
either query the transfer status to determine if more computations can
be done while waiting, or you can issue a blocking call that simply does
not return until the transfer has completed.
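A minimal sketch of that MPI pattern (assuming the mpi module and at least two ranks, with rank 0 sending to rank 1):

```fortran
! Start a nonblocking transfer, overlap it with local work,
! then block until the transfer has completed.
program mpi_overlap
  use mpi
  implicit none
  integer :: rank, ierr, request, status(MPI_STATUS_SIZE)
  real :: buffer(1000), work(1000)
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  if (rank == 0) then
    buffer = 1.0
    call MPI_Isend(buffer, size(buffer), MPI_REAL, 1, 0, &
                   MPI_COMM_WORLD, request, ierr)
  else if (rank == 1) then
    call MPI_Irecv(buffer, size(buffer), MPI_REAL, 0, 0, &
                   MPI_COMM_WORLD, request, ierr)
  end if
  work = 2.0   ! local computation proceeds while the transfer is in flight
  if (rank <= 1) call MPI_Wait(request, status, ierr)  ! block until complete
  call MPI_Finalize(ierr)
end program mpi_overlap
```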

When I first used CAF, which was almost 10 years ago now that I
think about it, there was no syntax at all to specify blocking or
nonblocking data transfer. Instead, one had to rely implicitly on the
compiler to recognize, as part of its optimization process, when
nonblocking data transfer could be overlapped with computation and when
a block was appropriate.

$.02 -Ron Shepard

michael siehl

unread,
Sep 6, 2015, 3:57:23 PM9/6/15
to
> Could you explain this a little more?
> ..
> ..there was no syntax at all to specify blocking or
> nonblocking data transfer. Instead, one had to implicitly rely on the
> compiler to recognize as part of its optimization process when
> nonblocking data transfer could be overlapped with computation and when
> a block was appropriate.

Sorry, I am not an expert on that. But see the last entries of the thread in the OpenCoarrays Forum, where Alessandro and Damian give advice regarding OpenCoarrays.

michael siehl

unread,
Sep 7, 2015, 6:33:39 AM9/7/15
to
>With MPI, you can explicitly initiate a nonblocking data transfer, and
>while it is executing you may continue local computations. Then you can
>either query the transfer status to determine if more computations can
>be done while waiting, or you can issue a blocking call that simply does
>not return until the transfer has completed.

Actually, this is not too different from what I currently want to implement with coarrays, except that I will implement a buffer in the PGAS memory of the worker images, managed from another designated image. The program logic code for that will reside in completely local memory. The aim is, of course, to avoid blocking as much as possible. This simply requires that certain coarray assignments allow overlap of computation and communication, namely

a[RemoteImage] = a[ThisImage]
a[RemoteImage1] = a[RemoteImage2].
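One way such an overlap might look in practice (a sketch with invented names; whether the put actually proceeds asynchronously depends on the implementation, e.g. OpenCoarrays, and completion is only guaranteed at the next image control statement):

```fortran
! Sketch: a coarray "put" issued before independent local work.
program put_overlap
  implicit none
  real :: a(1000)[*], inbox(1000)[*], b(1000)
  integer :: me, right
  me = this_image()
  right = merge(1, me + 1, me == num_images())   ! ring neighbor
  a = real(me)
  b = 0.0
  sync all
  inbox(:)[right] = a    ! put to the neighbor's inbox; the transfer
                         ! may progress in the background ...
  b = 2.0*b + 1.0        ! ... while this purely local work executes
  sync all               ! the put is guaranteed complete after this
end program put_overlap
```

Using a separate `inbox` coarray (rather than putting into the neighbor's `a`) avoids a race between each image's read of its own `a` and its neighbor's write.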

Beliavsky

unread,
Sep 8, 2015, 8:09:13 AM9/8/15
to
On Thursday, September 3, 2015 at 10:38:31 PM UTC-4, campbel...@gmail.com wrote:
> I have been following this topic with interest. Coarrays sound like a new feature that I should investigate.
>
> However, I am a Fortran 95 user. I have considered the significant complexity that has been introduced into the language with f03 and F08 to be a dramatic change to the Fortran language, which was once a simple pragmatic language and is now overly complex.
>

Currently, I am also a Fortran 95 user, partly because I am reluctant to stop using the g95 compiler. I have some sympathy for your statements about f03 and f08, but remember that some FORTRAN 77 programmers rejected later standards for the same reasons.

Fortran compilers that do not implement standards beyond Fortran 95 will become obsolete -- some would say they already are. To be confident that your code will compile and run on platforms of the future, you should use compilers that are still actively developed. (I use gfortran in addition to g95.) Once you do that, whether to use F03 and F08 features depends on how much you value compatibility with "stale" compilers such as g95. I am reading "Modern Fortran in Practice" by Arjen Markus to decide whether the new features of F2003 are worth the transition.

michael siehl

unread,
Sep 8, 2015, 3:58:35 PM9/8/15
to
> I have been following this topic with interest. Coarrays sound like a new
> feature that I should investigate.
>
> However, I am a Fortran 95 user. I have considered the significant complexity
> that has been introduced into the language with f03 and F08 to be a dramatic
> change to the Fortran language, which was once a simple pragmatic language and
> is now overly complex.


> Currently, I am also a Fortran 95 user,


Well, I personally believe that Fortran 90/95 style remains the long-term engine behind Fortran's HPC capabilities. Fortran 90/95 already allows type inclusion for an object-based programming style (see chapter 11.4 of 'Modern Fortran - Style and Usage' by Norman S. Clerman and Walter Spector for a good explanation), and that style fits perfectly with F2008 coarrays. Moreover, F90 array programming is required for SIMD parallelism, which is of ever-growing importance because of the (expected) dramatically increasing SIMD register sizes of upcoming many-core processors. F2003 adds many powerful extensions to that. I also like the F2003 OOP features, but personally think they are just a nice extension to the language, not a substitute for F90/95 high performance.

Damian Rouson

unread,
Sep 8, 2015, 6:41:35 PM9/8/15
to
I believe the proposed Fortran 2015 intrinsic EVENT type and related features will help to address this issue. Here's a related quote from Steve Lionel's Dr. Fortran column regarding the proposed features as specified in TS 18508 (http://isotc.iso.org/livelink/livelink?func=ll&objId=17288706&objAction=Open):

"Another significant part to the TS is 'events', a way for one image notifying another image that a task has been completed and that it can proceed. This part has been settled for some time now and I don't expect further changes."

Source: https://software.intel.com/en-us/blogs/2015/03/27/doctor-fortran-in-the-future-of-fortran

Presumably non-blocking communication would be one example of a task that could be handled with events.
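A minimal sketch of the events syntax proposed in TS 18508 (illustrative only; assumes at least two images): one image posts to another image's event variable, and the other image waits on its own, replacing a full barrier with pairwise notification.

```fortran
! Sketch of TS 18508 events: image 1 signals image 2 that data is
! ready, without synchronizing any other images.
program event_demo
  use, intrinsic :: iso_fortran_env, only: event_type
  implicit none
  type(event_type) :: ready[*]   ! event variables must be coarrays
  real :: x[*]
  if (this_image() == 1) then
     x = 42.0
     x[2] = x                    ! transfer the data ...
     event post (ready[2])       ! ... then notify image 2
  else if (this_image() == 2) then
     event wait (ready)          ! proceed only after image 1 posts
     ! x on this image now safely holds the value put by image 1
  end if
end program event_demo
```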

Also, there are apparently significant issues with actual progress happening with nonblocking communications in MPI. Here's another relevant quote that appears to be from a few years ago, but I've heard reports of similar behavior on current systems:

"Finally, on systems without separate software threads or dedicated hardware to process the communication stack, one has to call MPI_TEST explicitly to progress the non-blocking communication."

Source: http://www.2decomp.org/occ.html

Damian

Damian Rouson

unread,
Sep 8, 2015, 6:55:56 PM9/8/15
to
On Tuesday, September 8, 2015 at 3:58:35 PM UTC-4, michael siehl wrote:
>
> I also like the F2003 OOP features, but personally think they are just a nice extension to the language but not a substitute for F90/95 high performance.

OOP and performance are orthogonal. Although the programs that many new OO programmers write often perform poorly, the problem usually lies in the OO design (OOD), i.e., the choices of which classes to define, what data to hide in the classes, which procedures to bind to the classes, and what relationships to define between classes.

The article by Filippone and Buttari [1] is a great demonstration of judicious OOD having little or no impact on performance. The article compares the performance of their PSBLAS 3 (which is OO) with PSBLAS 2 (which was not).

In fact, OOD can even increase performance, for example, by making it easier to adapt the data structures to ever-evolving heterogeneous hardware platforms [2].

Damian

[1] Filippone, S., & Buttari, A. (2012). Object-oriented techniques for sparse matrix computations in Fortran 2003. ACM Transactions on Mathematical Software (TOMS), 38(4), 23.
[2] Barbieri, D., Cardellini, V., Filippone, S., & Rouson, D. (2012, January). Design patterns for scientific computations on sparse matrices. In Euro-Par 2011: Parallel Processing Workshops (pp. 367-376). Springer Berlin Heidelberg.

Alessandro Fanfarillo

unread,
Sep 9, 2015, 4:26:18 AM9/9/15
to
> "Finally, on systems without separate software threads or dedicated hardware to process the communication stack, one has to call MPI_TEST explicitly to progress the non-blocking communication."
>
> Source: http://www.2decomp.org/occ.html
>
> Damian

That's the point. When hardware offload cannot be used, most MPI implementations do not provide (by default) asynchronous message progress.
The reason is that asynchronous progress impacts latency (which is critical in most MPI applications).
For example, Intel MPI does not provide (by default) thread-based progress; if you want your non-blocking functions to progress you need to explicitly call MPI_Test or MPI_Iprobe inside your application.
It is possible to turn on thread-based progress using MPICH_ASYNC_PROGRESS=1. The benefits of this mode are very application dependent; in most cases performance is penalized.
For more information I suggest reading: http://htor.inf.ethz.ch/publications/img/hoefler-ib-threads.pdf

In the discussion on OpenCoarrays Forum mentioned by Michael I posted a test case that can be used to see the effect of asynchronous communication using OpenCoarrays.

Gary Scott

unread,
Sep 9, 2015, 9:14:36 AM9/9/15
to
At the risk of sounding stupid, since I'm not too familiar with MPI and
its ilk or coarrays: within the same machine or across multiport shared
memory, cross-thread, cross-core, and cross-processor communication is
quite easy via a simple shared memory block. Reserve a small section for
a communication handshaking protocol. This method has been used for real
time multiprocessor shared memory communication for 50+ years -- at ECL
memory speeds in the 70s, even.
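The same handshaking idea can be sketched with coarrays standing in for the shared memory block (all names invented; the atomic subroutines avoid a race on the flag word, though a production code would need to reason carefully about segment ordering of the payload put):

```fortran
! Sketch: a coarray "payload" block plus a reserved flag word,
! mirroring the classic shared-memory handshake.
program handshake
  use, intrinsic :: iso_fortran_env, only: atomic_int_kind
  implicit none
  integer(atomic_int_kind) :: flag[*]   ! the reserved handshake word
  integer(atomic_int_kind) :: val
  real :: payload(10)[*]                ! the shared data block
  flag = 0
  sync all
  if (this_image() == 1) then
     payload(:)[2] = 3.14               ! write into image 2's block
     call atomic_define(flag[2], 1)     ! raise the flag: data valid
  else if (this_image() == 2) then
     do                                 ! spin on the local flag word
        call atomic_ref(val, flag)
        if (val == 1) exit
     end do
     ! payload on this image is now intended to be readable
  end if
end program handshake
```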

FortranFan

unread,
Sep 9, 2015, 11:16:35 AM9/9/15
to
On Thursday, September 3, 2015 at 12:58:31 PM UTC-4, Damian Rouson wrote:
> On Thursday, September 3, 2015 at 8:55:47 AM UTC-7, steve kargl wrote:
> > Damian Rouson wrote:
> >
> > > After reading a paper that expressed a new algorithm in Fortran 90, I can't help
> > > but wonder when we will tire of expressing new algorithms in 25-year-old idioms
> > >when more recent intrinsics and statements might collapse 5-10 lines down to 1.
> >
> > Perhaps the important point of the article was the new algorithm, not the
> > language the authors chose to demonstrate the algorithm. Knuth wrote
> > a rather famous set of books that put importance on the algorithms, not
> > the expression of those algorithms in any given common programming
> > language. In fact, Knuth invented MIX (and now MMIX) with the sole purpose
> > of not tying the algorithms to an implementation.
>
> This makes perfect sense and I would of course never presume to argue with Knuth. :)
> This paper also had pseudocode in the body of the paper. The Fortran program was
> an attachment -- effectively an appendix.
>
> For what it's worth, what I found fascinating is that compacting the code with newer
> features made the actual Fortran so short that I believe it could be used instead of
> pseudocode and would have two advantages: (1) Fortran is less ambiguous than
> pseudocode because it's backed by a standard and (2) the array notation, mask-based
> intrinsics, and DO CONCURRENT can combine to remove some of the unnecessary
> serialization of the algorithm.
>
> Many Fortran programs model nature and many aspects of nature are inherently parallel.
> Rather than asking "Why parallelize this algorithm?", an alternative viewpoint
> is "Why was this algorithm ever serialized?" When the natural aspects of what is being
> modeled are parallel, the parallel algorithm might be the more natural and clear way to
> express the algorithm. I should also be careful to point out that I don't necessarily
> mean "parallel" in the sense of parallel computing. I mean to connote a notion of
> parallelism that would include array assignments. Such an assignment might or might
> not execute in parallel depending on the overall code implementation, the compiler,
> the hardware, etc. But I think the algorithm is more clear with an array assignment
> rather than nested loops that do the assignment element-by-element.
>
> What I was reading was a combination of Fortran 77 idioms (e.g., element-by-element
> assignment) with some Fortran 90 (e.g., array constructors). What I would replace it
> with involves a Fortran 90 mask-based intrinsic and Fortran 2008 DO CONCURRENT.
> I think it's perfectly reasonable to stop there because even the steps I just mentioned
> demonstrate the parallelism in the way I'm using the word "parallelism." For what it's
> worth, however, after I revised the algorithm in this way, it became immediately obvious
> how to generate actual parallel computation. The coarray parallelization occupied less than
> 10% of the time I spent revising the code.
>
> I just realized that a somewhat simplified version of the paper's algorithm can be used
> in my own work. I'll see if I can extract a short snippet from my own code to post in a
> new thread for comment. I'll be very interested in people's feedback because I'm
> considering using the code in courses I teach.
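As a concrete illustration of the compaction described in the quoted text (the arrays, condition, and update rule here are all invented for the example), compare a Fortran 77 style element-by-element loop nest with a masked DO CONCURRENT form:

```fortran
! Sketch: the same conditional update written two ways.
program compaction_demo
  implicit none
  integer, parameter :: n = 4
  real :: u(n,n), c(n,n), dt
  integer :: i, j
  dt = 0.1
  c = reshape([( real(i-8), i = 1, n*n )], [n,n])
  u = 1.0
  ! Fortran 77 idiom: element-by-element, explicitly serialized
  do j = 1, n
     do i = 1, n
        if (c(i,j) > 0.0) u(i,j) = u(i,j) + dt*c(i,j)
     end do
  end do
  ! Equivalent masked update: DO CONCURRENT asserts the iterations
  ! are independent and exposes the parallelism in the source itself
  do concurrent (j = 1:n, i = 1:n, c(i,j) > 0.0)
     u(i,j) = u(i,j) + dt*c(i,j)
  end do
end program compaction_demo
```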


Ok, the usual divergence of threads in this forum is taking root. It would be nice to get back on track in this thread and learn more about the details in the original post, i.e., what is it, idiomatically speaking per the OP, in the Fortran 90 code by the authors of the purported paper that can be expressed better using the facilities in the current standard?

First of all, did the authors really use Fortran 90? That is, are there aspects of Fortran 95, especially the HPC features added in that standard revision, that could have been used but which the authors consciously or otherwise avoided? If not, their code can be deemed Fortran 95 too, a rather important distinction, since 1) Fortran 95 is the last standard that stalwarts like Richard Maine can easily access (in spite of all the advances in gfortran, it is rather difficult for many folks to get their hands on a stable toolset since gfortran v4.9.x on their compute platform of choice), and 2) as illustrated here and elsewhere, COARRAYs and DO CONCURRENT, the only other true HPC features in the latest Fortran standard, can only be viewed as cutting (bleeding) edge, and one can be sympathetic if authors stay away from them in publications. The fact is there is a lot lacking in many aspects of these new features: finding robust implementations in a broad set of compilers, but also understanding and interpreting the standard itself on these features, and figuring out where and how best to use them. Don't get me wrong; I think it is great that the Fortran standard has added these features. It is just that it appears to me there is a long way to go before these features become fully functional, robust, and valuable.

So again, it would be nice to learn more about the idiomatic aspects from which the thread title suggests one should "move on", if the OP would care to elaborate further.

Thanks,

michael siehl

unread,
Sep 9, 2015, 6:54:25 PM9/9/15
to
> as illustrated here and elsewhere, COARRAYs and DO CONCURRENT, the only other
> of true HPC aspects in latest Fortran standard can only be viewed as cutting
> (bleeding) edge and one can be sympathetic if authors stay away from them in
> publications. The fact is there is a lot lacking in many aspects of these new
> features including finding robust implementations in a broad set of compilers
> but also in understanding and interpreting the standard itself on these
> features and in figuring out where and how best to use these features. Don't
> get me wrong; I think it is great Fortran standard has added these features,
> it is just that it appears to me there is a long way to go before these
> features become fully functional, robust, and valuable.


I use Ifort and GFortran/OpenCoarrays simultaneously with the same source code files, and therefore can tell that both are already quite robust coarray implementations. I learned the most from just trying out things with both compilers.

FortranFan

unread,
Sep 9, 2015, 9:38:19 PM9/9/15
to
On Wednesday, September 9, 2015 at 6:54:25 PM UTC-4, michael siehl wrote:
> > as illustrated here and elsewhere, COARRAYs and DO CONCURRENT, the only other
> > of true HPC aspects in latest Fortran standard can only be viewed as cutting
> > (bleeding) edge and one can be sympathetic if authors stay away from them in
> > publications. The fact is there is a lot lacking in many aspects of these new
> > features including finding robust implementations in a broad set of compilers
> > but also in understanding and interpreting the standard itself on these
> > features and in figuring out where and how best to use these features. Don't
> > get me wrong; I think it is great Fortran standard has added these features,
> > it is just that it appears to me there is a long way to go before these
> > features become fully functional, robust, and valuable.
>
>
> I use Ifort and GFortran/OpenCoarrays simultaneously with the same source code files and therefore can tell that both are already quite robust coarray implementations. I learned the most from just trying out things with both compilers.


Perhaps you can produce a primer/tutorial/blog/vlog/article/book with detailed examples illustrating the benefits, for the benefit of others.

From what I see, on my compute platform of choice, there are no reliable gfortran binaries, and I'm not about to download all the source and build it myself. With the commercial compiler, there are lots of hits and misses with whatever I've tried. Everything I do now is with OO design and OOP, and I mostly get internal compiler errors with the simplest of things I try.

Note, again, I'm not trying to criticize anyone or the standard or anything; I'm simply saying what the state is. Generally I'm quite thrilled with the direction Fortran has taken with the 2003 and 2008 standards, and I'm somewhat disappointed Fortran 2015 doesn't have as many "goodies" to offer as I would have liked.

What I'm trying to say is that the OP started off this thread with some discussion of modern Fortran idioms that folks can "move on" to; I would like to know more about what the OP had in mind. COARRAYs and DO CONCURRENT can't be all of it; there has to be more for it to be meaningful.

Wolfgang Kilian

unread,
Sep 10, 2015, 3:11:28 AM9/10/15
to
As much as I'd like to see more 'goodies' in F2015, I'm glad that this
will be a moderate revision, so compiler vendors finally get a chance to
catch up with the standard. It's 2015, and I still can't claim to have
access to a Fortran 2003 compiler.

That said, it would be interesting to know into which direction the
committee wants to go after 2015. Like everybody else, I have my
personal preferences, but they might have different intentions.


-- Wolfgang

--
E-mail: firstnameini...@domain.de
Domain: yahoo

michael siehl

unread,
Sep 10, 2015, 5:38:40 PM9/10/15
to
> Perhaps you can produce a primer/tutorial/blog/vlog/article/book
> with detailed examples and the illustrated benefits for the
> benefit of others.

Actually, my plan is to develop a team-based task pool management (i.e. teams of task pools) in pure Fortran and make the code publicly available (possibly Open Public). (Of course, I will also use that task pool management myself to develop HPC software for statistical analysis.) The code is mainly F9x object-based style, and I'll make it as self-explanatory as possible in the hope of keeping it accessible to novices too. But no CAF code will show up in the program logic code of this project. As I stated above, the required coarray functionality is encapsulated into wrapper objects, which are used (1) via F9x type inclusion on the executing image and (2) via direct references (USE) on a remote image.
My trick is to build wrapper objects that encapsulate all access to PGAS memory (coarrays) and thus avoid coarrays completely in the program logic code. The intention is also to encapsulate all other required CAF functionality, like SYNC IMAGES. With that, I want to show the possibility of doing advanced parallel software development in pure Fortran 9x/2003 serial object-based code. Therefore, the thrilling part of the parallel code will not be with coarrays, even if they are the workhorse under the hood.
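A bare-bones sketch of such a wrapper (module, type, and procedure names are all hypothetical): the coarray declaration and all image control live inside one module, and the program logic only ever calls ordinary type-bound procedures.

```fortran
! Sketch: all coarray access hidden behind a serial-looking object.
module comm_wrapper_mod
  implicit none
  private
  real, allocatable :: pgas_buf(:)[:]   ! hidden PGAS storage
  public :: comm_channel

  type :: comm_channel
  contains
     procedure, nopass :: init
     procedure, nopass :: put
     procedure, nopass :: fence
  end type comm_channel
contains
  subroutine init(n)
    integer, intent(in) :: n
    allocate(pgas_buf(n)[*])     ! collective across all images
  end subroutine init
  subroutine put(x, image)
    real,    intent(in) :: x(:)
    integer, intent(in) :: image
    pgas_buf(:)[image] = x       ! the only coarray assignment
  end subroutine put
  subroutine fence()
    sync memory                  ! encapsulated image control
  end subroutine fence
end module comm_wrapper_mod
```

The program logic then reads like serial object-based code: `call ch%init(1000)`, `call ch%put(a, 2)`, `call ch%fence()`.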

best regards
Michael
0 new messages