Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.
Dismiss

datetime-fortran release

747 views
Skip to first unread message

Milan Curcic

unread,
Sep 27, 2014, 4:09:55 PM9/27/14
to
Hi All,

I'd like to draw your attention to datetime-fortran (https://github.com/milancurcic/datetime-fortran), an open source library for time and date manipulation. The code has been in public beta since Summer 2013, and having finished the unit tests and over a dozen bug fixes, I have now labeled it as stable.

If you deal with any aspect of time or dates in your Fortran code, this library will likely be useful to you. You will need a Fortran 2003-conforming compiler. Any recent version of GNU, Intel, PG or IBM Fortran will do.

If you happen to use the code, I'm looking forward to hear about any feature requests or bug reports.

Cheers,
milan

Gary Scott

unread,
Sep 27, 2014, 5:40:20 PM9/27/14
to
Nice, I could have used that a decade ago (i reinvented the wheel and
wrote my own of course). Since I use GINO in most applications now
though, GINO also has quite a nice set of date/time routines as part of
its graphing capabilities (but you can use them for other purposes).

FortranFan

unread,
Sep 28, 2014, 1:04:49 AM9/28/14
to
Brilliant. great job.

Jos Bergervoet

unread,
Sep 28, 2014, 7:02:23 AM9/28/14
to
This may be true, but can someone explain what is the
use of it?

I don't find the code very self-explaining, but
things that stick out, like:
REAL(KIND=real_sp),PARAMETER :: one = 1e0 ! 1
certainly don't seem to be useful.

And then I see:
PROCEDURE :: addMilliseconds
PROCEDURE :: addSeconds
PROCEDURE :: addMinutes
PROCEDURE :: addHours

which appear (to me) to be in the category:
addOnePlusOne
addTwoPlusTwo
mulTwoTimesTwo
in terms of usefulness..

It looks like a lot of complexity, where just
using the Julian date as a time parameter can
solve everything in one sweep.

You would then only need two routines: to convert
to and from Julian date.

NB: I'm trying to play devil's advocate here, but
where do I miss the point?

--
Jos

dpb

unread,
Sep 28, 2014, 8:58:33 AM9/28/14
to
On 09/28/2014 6:02 AM, Jos Bergervoet wrote:
...

> And then I see:
> PROCEDURE :: addMilliseconds
> PROCEDURE :: addSeconds
> PROCEDURE :: addMinutes
> PROCEDURE :: addHours
>
...

> It looks like a lot of complexity, where just
> using the Julian date as a time parameter can
> solve everything in one sweep.
>
> You would then only need two routines: to convert
> to and from Julian date.
>
> NB: I'm trying to play devil's advocate here, but
> where do I miss the point?

I've not read the code itself, but I think I can comment on the above
from experience with the datenum functions in Matlab. Whether that was
where the OP got the inspiration for including them or not, these
functions are there and do get used.

While strictly speaking it's true one can get by w/o the "syntactic
sugar" of the help functions, they can be of much benefit for at least
two purposes...

a) simply for the ease of writing a source line that adds an integer
number of time increment(s) is quick to write and easy to read as
compared to requiring the explicit conversion or using fractional magic
numbers, and

b) when creating incremental time series, it's easy owing to floating
point roundoff to get cases where the values computed in different ways
to not compare exactly in later lookups for an exact date(s) or range(s)
of date(s). This problem arises fairly frequently in the Matlab
forum/usenet group when the user makes a sequence by incrementing a
floating point value. Computing the increment in the integer number of
the fractional days and then converting those resolves the issue and
keeps the roundoff the same. Granted one can do this explicitly as
well, but having the facility in the package encapsulates it and can
simplify the higher level code significantly. ("You can pay me now or
you can pay me later." :) )

--

robin....@gmail.com

unread,
Sep 28, 2014, 9:51:40 AM9/28/14
to
On Sunday, September 28, 2014 6:09:55 AM UTC+10, Milan Curcic wrote:
> Hi All,
>
>
>
> I'd like to draw your attention to datetime-fortran (https://github.com/milancurcic/datetime-fortran), an open source library for time and date manipulation. The code has been in public beta since Summer 2013, and having finished the unit tests and over a dozen bug fixes, I have now labeled it as stable.
>
>
>
> If you deal with any aspect of time or dates in your Fortran code, this library will likely be useful to you. You will need a Fortran 2003-conforming compiler. Any recent version of GNU, Intel, PG or IBM Fortran will do.

Might have more appeal if it were F90/95 compatible.

It's worth pointing out that such facilities were added to PL/I
20 years ago, to deal with Y2K.

Richard Maine

unread,
Sep 28, 2014, 10:01:40 AM9/28/14
to
Jos Bergervoet <jos.ber...@xs4all.nl> wrote:

> I don't find the code very self-explaining, but
> things that stick out, like:
> REAL(KIND=real_sp),PARAMETER :: one = 1e0 ! 1
> certainly don't seem to be useful.

I have something simillar to that in all my code. Not quite exactly
that, but very simillar. The only difference of substance is that my
simillar constant is working precision instead of single precision. I
find a lot of use for that. Gotta run, so little time to elaborate.
Elaboration would probably be pretty boring anyway.

--
Richard Maine
email: last name at domain . net
domain: summer-triangle

dpb

unread,
Sep 28, 2014, 10:38:47 AM9/28/14
to
On 09/28/2014 7:58 AM, dpb wrote:
> On 09/28/2014 6:02 AM, Jos Bergervoet wrote:
> ...
>
>> And then I see:
>> PROCEDURE :: addMilliseconds
>> PROCEDURE :: addSeconds
>> PROCEDURE :: addMinutes
>> PROCEDURE :: addHours
>>
> ...
>
>> It looks like a lot of complexity, where just
>> using the Julian date as a time parameter can
>> solve everything in one sweep.
...

> I've not read the code itself, but I think I can comment on the above
> from experience with the datenum functions in Matlab. Whether that was
> where the OP got the inspiration for including them or not, these
> functions are there and do get used.
...

ADDENDUM

Actually, in Matlab the above are in a single helper function
ADDTODATE(D,N,T) will add an integer quantity N to date field T of date
number D, and return the new date number R. T is a character
field indicator such as 'y[ear]', etc., ...

Perhaps it's the multiple functions in place of one that Joe sees as
superfluous, mostly??? I'd tend to agree there on the design choice of
individual interfaces, but the idea is the same, simply the implementation.

--

dpb

unread,
Sep 28, 2014, 10:43:25 AM9/28/14
to
On 09/28/2014 9:01 AM, Richard Maine wrote:
> Jos Bergervoet<jos.ber...@xs4all.nl> wrote:
>
>> I don't find the code very self-explaining, but
>> things that stick out, like:
>> REAL(KIND=real_sp),PARAMETER :: one = 1e0 ! 1
>> certainly don't seem to be useful.
>
> I have something simillar to that in all my code. Not quite exactly
> that, but very simillar. The only difference of substance is that my
> simillar constant is working precision instead of single precision. I
> find a lot of use for that. Gotta run, so little time to elaborate.
> Elaboration would probably be pretty boring anyway.

It does seem much more useful as wp instead of fixed sp, indeed, as a
generic constant. This is an area I waffle on a lot but have pretty
much gone to writing explicit constants instead of parameters when I
want the specific precision of single/double for the common integers.

Mayhaps I should look at the OPs code to see, but since the datenum
needs _must_ be at least double to have sufficient accuracy to track
mSecs, one begins to wonder why there's a SP "one" hanging around...

--



dpb

unread,
Sep 28, 2014, 11:11:34 AM9/28/14
to
On 09/28/2014 9:43 AM, dpb wrote:
...

> Mayhaps I should look at the OPs code to see, but since the datenum
> needs _must_ be at least double to have sufficient accuracy to track
> mSecs, one begins to wonder why there's a SP "one" hanging around...
...

Albeit, of course, the promotion of the integer is pretty-much
guaranteed, the SP designation makes one wonder elsewhere, perhaps,
whether there's the previous month's thread arising again...

--

Milan Curcic

unread,
Sep 28, 2014, 11:11:34 AM9/28/14
to
On the comment of the usefulness of one = 1e0: This constant remained from some past version and is not at all used in the code. Might as well remove it. But even then, given that it is not a public entity, it is pointless to discuss its usefulness from a user's point of view. Not all of datetime-fortran uses double precision as certain operations just don't have need for it.

> And then I see:
> PROCEDURE :: addMilliseconds
> PROCEDURE :: addSeconds
> PROCEDURE :: addMinutes
> PROCEDURE :: addHours

First, these are called substantially by the overloaded + and - operators for datetime and timedelta. If you look at the code for these routines, their usefulness should be self-explanatory. Without these, there will be a lot of code repetition. They themselves are used internally by the library and are not supposed to be called by the user. However, I have decided to leave them as public entities so that if for example the programmer needs only to add seconds to a very large array of datetimes, doing:

CALL arrayOfDatetimes % addSeconds(1)

Would induce less overhead and better performance than the recommended:

arrayOfDatetimes = arrayOfDatetimes + timedelta(seconds=1)

In this particular case, the first method of adding seconds is ~5 times faster on my computer.

Again, the user can add and subtract datetimes and timedeltas without calling the above addSeconds, addMinutes, addHours etc. methods, and I recommend using overloaded + and - operators instead. However the user has the add methods available as it may be useful in some cases as decribed above.

Ron Shepard

unread,
Sep 28, 2014, 12:41:52 PM9/28/14
to
On 9/28/14 9:01 AM, Richard Maine wrote:
> Jos Bergervoet <jos.ber...@xs4all.nl> wrote:
>
>> I don't find the code very self-explaining, but
>> things that stick out, like:
>> REAL(KIND=real_sp),PARAMETER :: one = 1e0 ! 1
>> certainly don't seem to be useful.
>
> I have something simillar to that in all my code. Not quite exactly
> that, but very simillar. The only difference of substance is that my
> simillar constant is working precision instead of single precision. I
> find a lot of use for that. Gotta run, so little time to elaborate.
> Elaboration would probably be pretty boring anyway.

I do the same thing, partly out of habit from my f77 programming days
when consistent precision was more difficult on the programmer to
maintain than now. Having said that however, I now typically define
constants like that as something like

REAL(KIND=real_sp),PARAMETER :: one = 1.0_real_sp

rather than relying on the correct implicit conversion, if necessary, to
occur. Yes, I know that small integer values usually get converted
correctly, but still, I think it is nice to write the code so this is
not necessary when possible.

My only general comment about the code is that I wonder why it uses all
of that object oriented stuff. I think all of that could have been done
with straightforward f90/f95, without that extra level of complexity. I
just skimmed over it, so maybe I overlooked something.

$.02 -Ron Shepard

dpb

unread,
Sep 28, 2014, 12:51:04 PM9/28/14
to
On 09/28/2014 10:11 AM, Milan Curcic wrote:
...

> First, these are called substantially by the overloaded + and -
> operators for datetime and timedelta. If you look at the code for
> these routines, their usefulness should be self-explanatory. Without
> these, there will be a lot of code repetition. They themselves are
> used internally by the library and are not supposed to be called by
> the user. However, I have decided to leave them as public entities
> so that if for example the programmer needs only to add seconds to a
> very large array of datetimes, doing:
>
> CALL arrayOfDatetimes % addSeconds(1) >
> Would induce less overhead and better performance than the recommended:
>
> arrayOfDatetimes = arrayOfDatetimes + timedelta(seconds=1)
>
> In this particular case, the first method of adding seconds is ~5
> times faster on my computer.
>
> Again, the user can add and subtract datetimes and timedeltas without
> calling the above addSeconds, addMinutes, addHours etc. methods, and
> I recommend using overloaded + and - operators instead. However the
> user has the add methods available as it may be useful in some cases
> as decribed above.

About the usage I outlined by reference to Matlab. I grok the need for
them internally fully and agree on "since they're there, might as well
make them available" philosophy. I'm somewhat surprised by such a large
performance difference, though; to what do you attribute such a large
hit with the alternate form?

Your code has one feature/advantage over the Matlab implementation,
however, of the overloaded +/- operators that are possible, but not
implemented, in the base Matlab distribution.

--

JWM

unread,
Sep 28, 2014, 2:17:16 PM9/28/14
to


On Sun, 2014-09-28 at 13:02 +0200, Jos Bergervoet wrote:
> On 9/28/2014 7:04 AM, FortranFan wrote:
> > On Saturday, September 27, 2014 4:09:55 PM UTC-4, Milan Curcic wrote:
> >> Hi All,
> >>
> >>
> >>
> >> I'd like to draw your attention to datetime-fortran (https://github.com/milancurcic/datetime-fortran), an open source library for time and date manipulation. The code has been in public beta since Summer 2013, and having finished the unit tests and over a dozen bug fixes, I have now labeled it as stable.
> >>
> >>
> >>
> >> If you deal with any aspect of time or dates in your Fortran code, this library will likely be useful to you. You will need a Fortran 2003-conforming compiler. Any recent version of GNU, Intel, PG or IBM Fortran will do.
> >>
> >>
> >>
> >> If you happen to use the code, I'm looking forward to hear about any feature requests or bug reports.
> >>
> >>
> >>
> >> Cheers,
> >>
> >> milan
> >
> > Brilliant. great job.
>
> This may be true, but can someone explain what is the
> use of it?
>
> I don't find the code very self-explaining, but
> things that stick out, like:
> REAL(KIND=real_sp),PARAMETER :: one = 1e0 ! 1
> certainly don't seem to be useful.

I've seen those a lot in the past (along with the declaration of the
parameter zero). It's good to have it in just one place, might the value
change in the future </end of sarcasm>

>
> And then I see:
> PROCEDURE :: addMilliseconds
> PROCEDURE :: addSeconds
> PROCEDURE :: addMinutes
> PROCEDURE :: addHours
>
> which appear (to me) to be in the category:
> addOnePlusOne
> addTwoPlusTwo
> mulTwoTimesTwo
> in terms of usefulness..
>
> It looks like a lot of complexity, where just
> using the Julian date as a time parameter can
> solve everything in one sweep.

The OP is simply exposing some of the internals of the implementation.
For example, if you have the date 2014-09-28T11:59:32.999 and want to
add 17 months, 13 days, 3 hours and 2 milliseconds, then the
implementation of operator(+) will have to invoke those (since adding 2
milliseconds to 999 milliseconds results in 1 second and 1 millisecond,
and so on).

And as Gary Scott said, it would have been a good one to have a decade
ago.

--
John.

Richard Maine

unread,
Sep 28, 2014, 2:21:27 PM9/28/14
to
Ron Shepard <nos...@nowhere.org> wrote:

> On 9/28/14 9:01 AM, Richard Maine wrote:
> > Jos Bergervoet <jos.ber...@xs4all.nl> wrote:
> >
> >> I don't find the code very self-explaining, but
> >> things that stick out, like:
> >> REAL(KIND=real_sp),PARAMETER :: one = 1e0 ! 1
> >> certainly don't seem to be useful.
> >
> > I have something simillar to that in all my code. Not quite exactly
> > that, but very simillar. The only difference of substance is that my
> > simillar constant is working precision instead of single precision. I
> > find a lot of use for that. Gotta run, so little time to elaborate.
> > Elaboration would probably be pretty boring anyway.
>
> I do the same thing, partly out of habit from my f77 programming days
> when consistent precision was more difficult on the programmer to
> maintain than now.

Yes. I'll grant my practice is probably mostly holdover from my f77
habits. These days it is easy enough to just tack the kind parameter
onto your constants. In f77, I recall particularly in plotting software
having lots of procedure arguments that were literal constants for
things like positions on the plot. The precision never mattered in terms
of accuracy (no way you could see the difference), but of course,
argument agreement mattered a lot. So I'd have actual arguments like
0.1*one, basically counting on the "*one" to get to the correct
precision, which was single on some machines and double on others. Yes,
the 0.1 was only accurate to single, but that did not matter.

Richard Maine

unread,
Sep 28, 2014, 2:25:38 PM9/28/14
to
JWM <jwmw...@gmail.com> wrote:

> On Sun, 2014-09-28 at 13:02 +0200, Jos Bergervoet wrote:

> > I don't find the code very self-explaining, but
> > things that stick out, like:
> > REAL(KIND=real_sp),PARAMETER :: one = 1e0 ! 1
> > certainly don't seem to be useful.
>
> I've seen those a lot in the past (along with the declaration of the
> parameter zero). It's good to have it in just one place, might the value
> change in the future </end of sarcasm>

Of course, quite seriously, although the value of one isn't likely to
change, the kind of it is (at least if it is a working kind instead of
hard-wired to single). Making it easy to change the kind is why I had
named constants like that, although as noted elsethread, that was a
bigger issue in f77 than it is today.

Milan Curcic

unread,
Sep 28, 2014, 2:47:22 PM9/28/14
to
Thanks everybody for your comments.

I have made a commit that removes the unused single precision constant that created quite a few waves. :)

Main reasons why datetime-fortran relies on Fortran 2003:

1. Lets the user extend existing types like datetime or timedelta to add custom functionality.

2. Using iso_c_binding for tm struct and calls to C strftime and strptime. Being able to call datetime % strftime(fmt) and strptime(str,fmt) is one of the more powerful features of datetime-fortran in my opinion, and allows for very easy I/O of time records of any format. Of course, calling C functions was possible with earlier Fortran specifications, but using iso_c_binding allows for standardized, implementation independent C function calls without name mangling issues.

3. Allocatable character variables - used only in datetime % strftime(fmt), but allows the function to return the character string of exact length as requested by fmt, with no need to TRIM() upon return.

4. Personal preference for software design (referring mostly to object-oriented features here). datetime-fortran is mainly a convenience library and awfully high computational performance is not a priority.

Cheers,
Milan Curcic

Milan Curcic

unread,
Sep 28, 2014, 3:03:54 PM9/28/14
to

> I'm somewhat surprised by such a large
>
> performance difference, though; to what do you attribute such a large
>
> hit with the alternate form?

The implementation of the + operator for datetime + timedelta inspects the timedelta instance components, and calls the appropriate datetime methods if the components are non-zero:



PURE ELEMENTAL FUNCTION datetime_plus_timedelta(d0,t) RESULT(d)
!======================================================================>
!
! Adds a timedelta instance to a datetime instance.
! Returns a new datetime instance. Overloads the operator +.
!
!======================================================================>

! ARGUMENTS:
TYPE(datetime), INTENT(IN) :: d0
TYPE(timedelta),INTENT(IN) :: t
TYPE(datetime) :: d

! Initialize:
d = d0

IF(t % milliseconds /= 0)CALL d % addMilliseconds(t % milliseconds)
IF(t % seconds /= 0)CALL d % addSeconds(t % seconds)
IF(t % minutes /= 0)CALL d % addMinutes(t % minutes)
IF(t % hours /= 0)CALL d % addHours(t % hours)
IF(t % days /= 0)CALL d % addDays(t % days)

ENDFUNCTION datetime_plus_timedelta



The IF checks for each component is likely what is making the datetime + timedelta operation slower than just datetime % addSeconds(). Of course, using the IF blocks is not necessary, but that would result in redundant subroutine calls in most cases. I don't think the compiler would be able to optimize addDays(0) away - but please correct me if I'm wrong.

dpb

unread,
Sep 28, 2014, 3:22:39 PM9/28/14
to
On 09/28/2014 2:03 PM, Milan Curcic wrote:

...[big snip for brevity]...

> The IF checks for each component is likely what is making the
> datetime + timedelta operation slower than just datetime %
> addSeconds(). Of course, using the IF blocks is not necessary, but
> that would result in redundant subroutine calls in most cases. I
> don't think the compiler would be able to optimize addDays(0) away -
> but please correct me if I'm wrong.

That's possible, I suppose, but 5X still seems remarkable...I don't have
a post-F95 compiler installed so likely won't download to investigate
further.

--

Jos Bergervoet

unread,
Sep 28, 2014, 5:43:20 PM9/28/14
to
On 9/28/2014 5:11 PM, Milan Curcic wrote:
> On Sunday, September 28, 2014 10:43:25 AM UTC-4, dpb wrote:
>> On 09/28/2014 9:01 AM, Richard Maine wrote:
>>> Jos Bergervoet<jos.ber...@xs4all.nl> wrote:
....
> On the comment of the usefulness of one = 1e0: This constant
> remained from some past version and is not at all used in the
> code. Might as well remove it. But even then, given that it
> is not a public entity,

You are right. The implicit "private" makes it easy
to overlook that things are private (my fault,
nevertheless) and if they are, then of course they
cannot be used (by "use" of the module, I mean) so
the usefulness question is something for the
programmer of the module to decide!

...
>> PROCEDURE :: addMilliseconds
>> PROCEDURE :: addSeconds
>> PROCEDURE :: addMinutes
>> PROCEDURE :: addHours

> ... I have decided to leave them as public entities
> so that if for example the programmer needs only to
> add seconds to a very large array of datetimes, doing:
> CALL arrayOfDatetimes % addSeconds(1)
> Would induce less overhead and better performance than
> the recommended:
>
> arrayOfDatetimes = arrayOfDatetimes + timedelta(seconds=1)
>
> In this particular case, the first method of adding
> seconds is ~5 times faster on my computer.

That could be a valid reason (if it is really not
possible for the compiler to optimize). Also you would
not have to write "arrayOfDatetimes" twice, which could
also be avoided by having a += or +:= operator:
arrayOfDatetimes += timedelta(seconds=1)
but very unfortunately Fortran lacks this operator. So
the incompleteness of Fortran in this respect and the
lack of compiler cleverness force us to make things more
complicated (at this moment). Your choice is clear now!

--
Jos

Gary Scott

unread,
Sep 28, 2014, 5:56:26 PM9/28/14
to
lol, my memory is fuzzy...it might actually have been two or three
decades ago...what's a decade among friends.

>
> --
> John.
>

FortranFan

unread,
Sep 28, 2014, 6:15:24 PM9/28/14
to
On Sunday, September 28, 2014 2:25:38 PM UTC-4, Richard Maine wrote:
I too do something very similar and have a constants module that extends the precisions module and which defines a set of commonly used constants in the computations I support: ZERO, ONE, TWO, .., ONETENTH, ONEFOURTH, ONETHIRD, etc. I mainly do it because I feel the code reads much better with lines such as

X = ZERO compared to X = 0.0_wp
or
Sigma = P**ONEFOURTH compared to Sigma = P**0.25_wp

while ensuring required precision in the calculations.
while t

robin....@gmail.com

unread,
Sep 29, 2014, 8:48:34 AM9/29/14
to
X = 0
is better.

> or
>
> Sigma = P**ONEFOURTH compared to Sigma = P**0.25_wp

Try Sigma = P/4

Gordon Sande

unread,
Sep 29, 2014, 9:27:24 AM9/29/14
to
On 2014-09-29 12:48:32 +0000, robin....@gmail.com said:

>>
>>
>>
>> X = ZERO compared to X = 0.0_wp
>
> X = 0
> is better.
>
>> or
>>
>> Sigma = P**ONEFOURTH compared to Sigma = P**0.25_wp
>
> Try Sigma = P/4

In Fortran the double star is for exponentiate.
(Time for new glasses or cleaning the old ones?)

The OP was avoiding something like

Sigma = sqrt ( sqrt ( P ) )

Ron Shepard

unread,
Sep 29, 2014, 11:47:58 AM9/29/14
to
On 9/29/14 7:48 AM, robin....@gmail.com wrote:
>> X = ZERO compared to X = 0.0_wp
> X = 0
> is better.

This may be like discussing dancing angels, but I'm curious why
programmers have these preferences. As I mentioned before, f77 had some
inherent problems that made the use of parameters and assignments like

X = ZERO

easier to use than the alternatives that were available at that time.
But now since f90, those considerations no longer apply. I have found
that I use

X = 0.0_wp

or something similar about as often as I use the parameter now. It
really is not clear to me which "looks" better.

I personally do not like assignments that require type or kind
conversion such as

X = 0

It isn't so much that I think the compiler will get it wrong, or that it
will be done inefficiently at runtime rather than at compile time, it is
more that when I see it I think the programmer was lazy or sloppy, and I
wonder where else in the code he was sloppy but maybe he was not so
lucky with the outcome.

The same thing applies to mixed-mode arithmetic, although I have seen
(and written) code that actually does look simpler this way. Long
polynomials or multinomials with integer coefficients are one example of
this.

$.02 -Ron Shepard

FortranFan

unread,
Sep 29, 2014, 11:51:17 AM9/29/14
to
On Monday, September 29, 2014 8:48:34 AM UTC-4, robin....@gmail.com wrote:

>
> X = 0
>
> is better.
>

..
>
> Try Sigma = P/4
>

Do you program in Fortran at all? If yes, who uses what you program, where, and how? Your statements always make me wonder. So here you are being unable relate to "**" operator PLUS you've missed the whole boat of discussions on this forum related to literal constants.

Ron Shepard

unread,
Sep 29, 2014, 11:56:24 AM9/29/14
to
On 9/29/14 8:27 AM, Gordon Sande wrote:
>>> Sigma = P**ONEFOURTH compared to Sigma = P**0.25_wp
>>
>> Try Sigma = P/4
>
> In Fortran the double star is for exponentiate.
> (Time for new glasses or cleaning the old ones?)
>
> The OP was avoiding something like
>
> Sigma = sqrt ( sqrt ( P ) )
>
>>> while ensuring required precision in the calculations.

I wonder which expression is more efficient? I expect the sqrt(sqrt(P))
one might be in most cases. Certainly, if SP=sqrt(P) is required anyway
in other parts of the calculation, then sqrt(SP) seems like a simple and
clear optimization that could be used by either the programmer or the
compiler itself.

$.02 -Ron Shepard

FortranFan

unread,
Sep 29, 2014, 12:06:24 PM9/29/14
to
As I implied with my statement "mainly do it because I feel the code reads much better", this is a matter of individual preference. A couple of other reasons for our team to do this include:

a) about a third of our code base started in FORTRAN 77 which had such constructs and they got carried over and

b) at times, there have been what I call regression in coding practices with statements such as "X = 0.0_r4" or "X = 0.0_real8" or "X = 0." or "X = 0" creeping in, particularly when multiple developers start to extend/modify parts of the code.

Milan Curcic

unread,
Sep 29, 2014, 12:21:46 PM9/29/14
to
Great discussion, but please let's try not to hijack the original datetime thread.

FortranFan

unread,
Sep 29, 2014, 1:06:19 PM9/29/14
to
On Sunday, September 28, 2014 12:41:52 PM UTC-4, Ron Shepard wrote:

>
> My only general comment about the code is that I wonder why it uses all
>
> of that object oriented stuff. I think all of that could have been done
>
> with straightforward f90/f95, without that extra level of complexity. I
>
> just skimmed over it, so maybe I overlooked something.
>
>
>
> $.02 -Ron Shepard

I disagree totally with above comment. I commend Milan Curcic for taking the object-oriented approach and his "class design" looks good to me.

I personally think anyone writing any new code should follow this model i.e., always try to follow the object-oriented approach, program using the OO features and design as much as possible, study OO patterns and designs from the computer science world and strive for better and better designs for one's own needs. Only go back to the old paradigm of Fortran 90/Fortran 95 based procedural programming if OO is prohibitively expensive (which I've yet to see) or there is a specific need. But I find all the OO stuff pays handsome dividends and I don't find it anymore complicated, even for small programs - in fact, it is a joy to program in.

Jos Bergervoet

unread,
Sep 29, 2014, 1:52:36 PM9/29/14
to
On 9/29/2014 7:06 PM, FortranFan wrote:
> On Sunday, September 28, 2014 12:41:52 PM UTC-4, Ron Shepard wrote:
>
>>
>> My only general comment about the code is that I wonder why it uses all
>> of that object oriented stuff. I think all of that could have been done
>> with straightforward f90/f95, without that extra level of complexity. I
>> just skimmed over it, so maybe I overlooked something.
>
> I disagree totally with above comment.

But why?

> I commend Milan Curcic for taking the object-oriented approach
> and his "class design" looks good to me.

Yes, but why would procedures instead of methods have
been worse?

> I personally think anyone writing any new code should follow this
> model

But you do not say why!

> i.e., always try to follow the object-oriented approach, program
> using the OO features and design as much as possible,

But is there a reason for it?

> study OO patterns and designs from the computer science world and
> strive for better and better designs for one's own needs. Only
> go back to the old paradigm of Fortran 90/Fortran 95 based
> procedural programming if OO is prohibitively expensive (which
> I've yet to see)

I agree that it need not (necessarily) be more expensive.
But that still is no argument *in favor* of it. (One would
almost say "qui s'excuse s'accuse", pardon my French.)

> or there is a specific need. But I find all the OO stuff pays
> handsome dividends and I don't find it anymore complicated,

Now you do it again! I immediately believe it is not too
complicated, but that is not an argument in favor of it.
(The alternative is not too complicated either, AFAIK!)

> even for small programs - in fact, it is a joy to program in.

Since it is in Fortran, that is of course self-evident!
But the alternative can *also* be done in Fortran.

Logic would say that if you write a program dealing with
objects, it's clear you might benefit from OO. Whereas a
program dealing with algorithms might better use the
procedural approach. So why should we mess up the logic?

--
Jos

Jos Bergervoet

unread,
Sep 29, 2014, 2:01:21 PM9/29/14
to
On 9/28/2014 8:21 PM, Richard Maine wrote:
> Ron Shepard<nos...@nowhere.org> wrote:
>> On 9/28/14 9:01 AM, Richard Maine wrote:
>>> Jos Bergervoet<jos.ber...@xs4all.nl> wrote:
...
>>>> REAL(KIND=real_sp),PARAMETER :: one = 1e0 ! 1
>>>> certainly don't seem to be useful.
>>>
>>> I have something simillar to that in all my code. Not quite exactly
>>> that, but very simillar. The only difference of substance is that my
>>> simillar constant is working precision instead of single precision. I
>>> find a lot of use for that. Gotta run, so little time to elaborate.
>>> Elaboration would probably be pretty boring anyway.
>>
>> I do the same thing, partly out of habit from my f77 programming days
>> when consistent precision was more difficult on the programmer to
>> maintain than now.
>
> Yes. I'll grant my practice is probably mostly holdover from my f77
> habits. These days it is easy enough to just tack the kind parameter
> onto your constants.

What I often use is the somewhat related:
real(wp), parameter :: Ima = (0d0,1d0)
and the reason is that
exp(Ima*t)
looks (to me) much clearer than
exp( (0._wp,1._wp) * t )


This advantage is not present if I compare
sqrt(x+one)
and
sqrt(x+1)
or even
sqrt(x+1._wp)


Of course instead of my "Ima" I could use "i"
but that would prevent the use of the idiomatic
do i=1,n
which of course we don't want to lose!

--
Jos

Sebastiaan Janssens

unread,
Sep 29, 2014, 3:04:05 PM9/29/14
to
On 09/29/2014 07:51 PM, Jos Bergervoet wrote:

[ snap ]

>
> Logic would say that if you write a program dealing with
> objects, it's clear you might benefit from OO. Whereas a
> program dealing with algorithms might better use the
> procedural approach. So why should we mess up the logic?
>

Is it really that black and white?

Let me give an example. As a fortran novice, I have found it extremely
useful that arithmetic operations such as addition and substraction have
been "overloaded" (may I use this term here?) in modern Fortran so they
also apply to multidimensional arrays. (Incidentally, this is indeed one
of the reasons I love the language: Fortran for me is like MATLAB on
steroids.) This seems to be an example of object orientation. In this
case, it leads to readable code for the algorithm that we aim to
implement. If this code is otherwise entirely procedural, that is fine.

More generally, I can think of numerical applications that involve the
manipulation of complex (as opposed to "simple") data structures (think
of adaptive meshes, finite elements, image processing etc.) for which
code that uses objects (in addition to procedural features) is cleaner,
more readable, re-usable and better maintainable.

Best wishes,

Sebastiaan.

FortranFan

unread,
Sep 29, 2014, 3:39:53 PM9/29/14
to
Note I'm not allowed to provide specific details, but the transition to OO approach for a bunch of legacy "library" code has been beneficial to us in many aspects. Here's what immediately comes to mind (hence in no particular order):

* thread-safe classes that don't require global and static data

* easier route to parallelization

* reusability and extensibility (one group creates a set of classes which get used as-is by many other groups and/or with some extensions by several other groups: one recent example is a class we created for a batch reactor calculation that was quickly extended by a colleague for a steady-state reactor. Or in another case, one of our heat transfer solver classes had 3 methods - adding a 4th one for a specific engineering problem was much easier and faster with no client interruptions than what it took us to add the 3rd method in what was back then our legacy library.

* classification of data and methods and making them private/public providing practical value also e.g., one team is able to work on one aspect of a big class library without "stepping on" what other teams are working on.

* easier route to training new folks on "internal content" of one's code - the OO class structure lends naturally to visually aids which help young engineers greatly - a "picture is worth a thousand words"

* easier consumption of the code one creates by the end users, many of whom are highly familiar with OO approach being trained in C++, Java, .NET, etc. Instead of procedure invocations of legacy libraries with several setup, initialization, and calculation steps with procedure calls that have long arguments/complicated data structures, etc., our users get calculations done (e.g., quadrature of some function) by instantiating a class and invoking simple getter and setter methods similar to what they would do if they were consuming objects in other computing environments and get as good or often better computational performance!

Thus, coding in and using Fortran libraries is no longer "ewww.."; we can now get young engineers to join us and the focus is on math and engineering as it should be.

We only wish Fortran 90 had all the OO features of Fortran 2003 and 2008 in which case we would not have "lost" people and libraries to C++, etc. Of course, the 2003 and 2008 standards simply make the implementation of OO paradigm easier; somebody will always claim they wrote better programs and could do everything in FORTRAN II - we are not that smart, we can do with all the help the language standard can provide.

glen herrmannsfeldt

unread,
Sep 29, 2014, 4:09:53 PM9/29/14
to
Jos Bergervoet <jos.ber...@xs4all.nl> wrote:

(snip)

> What I often use is the somewhat related:
> real(wp), parameter :: Ima = (0d0,1d0)
> and the reason is that
> exp(Ima*t)
> looks (to me) much clearer than
> exp( (0._wp,1._wp) * t )

> This advantage is not present if I compare
> sqrt(x+one)
> and
> sqrt(x+1)
> or even
> sqrt(x+1._wp)

Warning to Matlab (or Octave) programmers who are also Fortran
programmers, don't use i for a loop variable. Your complex math
won't work so well anymore.

(My Fortran, C, and Java programs often use i for a loop variable.)

-- glen

glen herrmannsfeldt

unread,
Sep 29, 2014, 4:18:12 PM9/29/14
to
Ron Shepard <nos...@nowhere.org> wrote:

(snip, someone wrote)
>> The OP was avoiding something like

>> Sigma = sqrt ( sqrt ( P ) )

>>>> while ensuring required precision in the calculations.

> I wonder which expression is more efficient? I expect the sqrt(sqrt(P))
> one might be in most cases. Certainly, if SP=sqrt(P) is required anyway
> in other parts of the calculation, then sqrt(SP) seems like a simple and
> clear optimization that could be used by either the programmer or the
> compiler itself.

The usual Newton-Raphson sqrt are pretty efficient. After generating
the starting guess with the exponent half the argument exponent,
and then with a little more to get a little better starting
point, it is two cycles for single, and four for double
(on most processors).

Computing x**0.25 is usually done as exp(0.25*log(x)),
where log and exp are significantly harder than sqrt.

Also, sqrt(sqrt(x)) should be pretty close to full precision,
where for many argument values exp(log(x)) loses precision.

-- glen

dpb

unread,
Sep 29, 2014, 5:24:44 PM9/29/14
to
On 09/29/2014 3:09 PM, glen herrmannsfeldt wrote:
...

> Warning to Matlab (or Octave) programmers who are also Fortran
> programmers, don't use i for a loop variable. Your complex math
> won't work so well anymore.
>
> (My Fortran, C, and Java programs often use i for a loop variable.)

While you _can_ screw up, it's simple enough and most experienced
Matlab'ers still do use i,j for loop indices as well since they're just
_so_ terribly ubiquitous (and complex variables being less prevalent
than real for many (most?) applications).

>> for i=1:3,end
>> x=3+i % the "oops" way if intend...
x =
6
>> x=3+1i % what ML'ers of any experience do...
x =
3.0000 + 1.0000i
>>

--

Ian Harvey

unread,
Sep 29, 2014, 8:05:18 PM9/29/14
to
Bar perhaps to the extent suggested in some of the adverbial clauses, I
agree.

There's also the very practical aspect that type bound procedures behave
themselves in the expected way in terms of the name space - they don't
get accidentally lost and they don't clutter the top level.

(It is a little unfortunate that the name space related behaviour is
tied together with polymorphism, because instances do arise where you
want one nice name space behaviour, but you don't need or want the
polymorphic capability (which does come with some overhead, albeit not
one that I generally worry about). The language is missing the ability
to specify that a type with bindings is not extensible. The types that
this library provide may fall into that category - the concept of
extending a date (that doesn't involve dinner) isn't something that
strikes me as being useful.

A related question to the OP (the provider of the library - thanks for
posting it) - is there a particular reason why the operators are
stand-alone (versus generic bindings)? My parenthetical bit isn't so
relevant here, as the procedures implementing the operators often
forward (multiple times) to the bindings anyway, so you've got the
overhead of construction of the polymorphic actual argument descriptor
and dynamic dispatch anyway (this is probably another factor explaining
the performance aspects for bindings versus operators discussed elsethread).

(Minor issues, that might be more questions of style - consider adding
implicit none to interface bodies; consider adding an only clause for
use statements for intrinsic modules.)

robin....@gmail.com

unread,
Sep 29, 2014, 10:26:37 PM9/29/14
to
On Monday, September 29, 2014 11:27:24 PM UTC+10, Gordon Sande wrote:
> On 2014-09-29 12:48:32 +0000, r.no...@gmail.com said:
>
> >> X = ZERO compared to X = 0.0_wp
>
> >
>
> > X = 0
>
> > is better.
>
> >
>
> >> or
>
> >>
>
> >> Sigma = P**ONEFOURTH compared to Sigma = P**0.25_wp
>
> >
>
> > Try Sigma = P/4
>
>
>
> In Fortran the double star is for exponentiate.

On the display (courtesy of HP and google) the font is very small,
and the characters **0 run into each other, so that I misread
it as multiplication.

> (Time for new glasses or cleaning the old ones?)

> The OP was avoiding something like
> Sigma = sqrt ( sqrt ( P ) )

which I probably would have used [viz., SQRT twice] for that
particular case, but obviously the poster was illustrating a point about
precisions.

robin....@gmail.com

unread,
Sep 29, 2014, 10:36:39 PM9/29/14
to
On Tuesday, September 30, 2014 1:47:58 AM UTC+10, Ron Shepard wrote:
> On 9/29/14 7:48 AM, r.no...@gmail.com wrote:
>
> >> X = ZERO compared to X = 0.0_wp
>
> > X = 0
>
> > is better.
>
>
>
> This may be like discussing dancing angels, but I'm curious why
>
> programmers have these preferences. As I mentioned before, f77 had some
>
> inherent problems that made the use of parameters and assignments like
>
>
>
> X = ZERO
>
>
>
> easier to use than the alternatives that were available at that time.
>
> But now since f90, those considerations no longer apply. I have found
>
> that I use
>
>
>
> X = 0.0_wp
>
>
>
> or something similar about as often as I use the parameter now. It
>
> really is not clear to me which "looks" better.
>
> I personally do not like assignments that require type or kind
> conversion such as

> X = 0

X = 0.0 , x = 0.0_wp, and X = 0 always require conversion to the appropriate
internal form.
I would be very surprised if any compiler now did a run-time conversion
of 0 to internal floating point of the same kind as X.

Assignments of simple integer constants to a fp variable are IMHO
best written as that. The KISS principle should apply, and
X = 0
is preferable to the clutter of
X = 0.0_wp
or whatever.
It isn't just in simple assignments that this clutter can obscure the
task at hand, for when simple constants are written like that throughout
an expression, the expression can become a mess.

Milan Curcic

unread,
Sep 30, 2014, 12:54:15 AM9/30/14
to

> A related question to the OP (the provider of the library - thanks for
>
> posting it) - is there a particular reason why the operators are
>
> stand-alone (versus generic bindings)? My parenthetical bit isn't so
>
> relevant here, as the procedures implementing the operators often
>
> forward (multiple times) to the bindings anyway, so you've got the
>
> overhead of construction of the polymorphic actual argument descriptor
>
> and dynamic dispatch anyway (this is probably another factor explaining
>
> the performance aspects for bindings versus operators discussed elsethread).
>
>
>
> (Minor issues, that might be more questions of style - consider adding
>
> implicit none to interface bodies; consider adding an only clause for
>
> use statements for intrinsic modules.)

Hi Ian,

Thank you very much for your comments. I implemented the operator overloading in a way that was intuitive to me and produced expected results. I am not too familiar with the concepts that you mention, but I am now motivated to go back to my Metcalf, Reid and Cohen and do some research on this. I would be very happy to make improvements to the code that you (and other Fortran experts around here) recommend.

Cheers,
milan

Ian Harvey

unread,
Sep 30, 2014, 2:01:31 AM9/30/14
to
On 2014-09-30 2:54 PM, Milan Curcic wrote:
>
>> A related question to the OP (the provider of the library - thanks for
>>
>> posting it) - is there a particular reason why the operators are
>>
>> stand-alone (versus generic bindings)?
...
> Hi Ian,
>
> Thank you very much for your comments. I implemented the operator overloading in a way that was intuitive to me and produced expected results. I am not too familiar with the concepts that you mention, but I am now motivated to go back to my Metcalf, Reid and Cohen and do some research on this. I would be very happy to make improvements to the code that you (and other Fortran experts around here) recommend.

What you've got might be fine. Whether changing it is a good idea is
possibly up for debate. But consider the operation of datetime +
timedelta. You could have something like:

TYPE :: datetime
! ...
CONTAINS
! ...
PROCEDURE, PRIVATE :: datetime_plus_timedelta
GENERIC :: OPERATOR(+) => datetime_plus_timedelta
! ...
END TYPE


!...

! remove this procedure from the current generic
! interface for OPERATOR(+)
PURE ELEMENTAL FUNCTION datetime_plus_timedelta(d0,t) RESULT(d)
!...
! ARGUMENTS:
! Need to change the /declaration-type-spec/.
CLASS(datetime), INTENT(IN) :: d0
TYPE(timedelta),INTENT(IN) :: t
TYPE(datetime) :: d

!...stuff...
END FUNCTION datetime_plus_timedelta


Then, anywhere the datetime identifier is accessible the associated
operator(+) is there too, even if there are ONLY clauses on USE
statements. Not applicable here, because the compiler would just
complain if the generic interface for + wasn't accessible, but for
defined assignment this can be critical.

But note how the declaration type spec is now polymorphic - because a
generic binding is specified via specific bindings, and the passed
argument of a specific bindings of an extensible type must be
polymorphic (and types that aren't extensible can't have bindings).
This is what I was referring to with my parenthetical bit in the
previous post. CLASS is all about polymorphism, but we don't really
care for that here.

You need to be a little mindful that you don't end up with ambiguous
specifics sitting behind a particular reference to a generic (i.e. don't
have something with the type-kind-rank signature of
datetime_plus_timedelta a specific binding for operator(+) in timedelta
or in the stand-alone generic), but I think this way makes sense -
timedeltas could be useful on their own without knowing about datetime,
so they don't have any bindings that take datetime arguments, but as
soon as you have more than one datetime you might start considering what
the difference is between them, so datetime bindings will often have
timedelta arguments.

Jos Bergervoet

unread,
Sep 30, 2014, 2:30:01 AM9/30/14
to
On 9/29/2014 11:24 PM, dpb wrote:
> On 09/29/2014 3:09 PM, glen herrmannsfeldt wrote:
> ...
> While you _can_ screw up, it's simple enough

Yes, Matlab is not really a safe programming
language. (Fortran is also not the very best
in terms of safety, of course).

> and most experienced
> Matlab'ers still do use i,j for loop indices as well since they're just
> _so_ terribly ubiquitous (and complex variables being less prevalent
> than real for many (most?) applications).

Yes, great! Matlab programmers (the experienced
ones, that is) can write loops. Wasn't it made
to use array syntax?! And about those complex
numbers: the experienced ones just know that we
do not need them. They are less prevalent! So
we should shut up.

>
> >> for i=1:3,end
> >> x=3+i % the "oops" way if intend...
> x =
> 6
> >> x=3+1i % what ML'ers of any experience do...

But how do they get:
x=exp(i*t)
with the correct result?! I think it cannot work,
that's why Mathematica chose "I" instead of "i",
probably.

--
Jos

Jos Bergervoet

unread,
Sep 30, 2014, 2:40:02 AM9/30/14
to
On 9/29/2014 9:03 PM, Sebastiaan Janssens wrote:
> On 09/29/2014 07:51 PM, Jos Bergervoet wrote:
>
> [ snap ]
>
>>
>> Logic would say that if you write a program dealing with
>> objects, it's clear you might benefit from OO. Whereas a
>> program dealing with algorithms might better use the
>> procedural approach. So why should we mess up the logic?
>>
>
> Is it really that black and white?
>
> Let me give an example. As a fortran novice, I have found it extremely
> useful that arithmetic operations such as addition and substraction have
> been "overloaded" (may I use this term here?) in modern Fortran so they
> also apply to multidimensional arrays. (Incidentally, this is indeed one
> of the reasons I love the language: Fortran for me is like MATLAB on
> steroids.) This seems to be an example of object orientation.

No. Addition is a procedure. Overloading existed decades
before the OO paradigm was invented.

Of course you can claim the success of others, but if
you stretch the meaning of OO to include everything,
then of course you have to use it for everything, but
then you do not need the term anymore! (Your meaning of
OO will then simply be synonymous to "programming")

> In this
> case, it leads to readable code for the algorithm that we aim to
> implement.

Exactly, it's algorithm-centric, not object-oriented.

> If this code is otherwise entirely procedural, that is fine.

Not in this discussion. We were (at the dismay of OP)
looking for arguments *in favor* of OO. You are (like
FortranFan) again only defending it, by saying that the
rest being procedural is fine. But we already know that
procedural is fine. (And I have nothing against OO,
but I'm surprised that no-one can give an argument in
favor of it without beating around the bushes!)

> More generally, I can think of numerical applications that involve the
> manipulation of complex (as opposed to "simple") data structures (think
> of adaptive meshes, finite elements, image processing etc.) for which
> code that uses objects (in addition to procedural features) is cleaner,
> more readable, re-usable and better maintainable.

But isn't that exactly my general statement that code
dealing with objects can benefit from OO? (These meshes
and elements are objects, I mean.)

--
Jos

glen herrmannsfeldt

unread,
Sep 30, 2014, 2:55:34 AM9/30/14
to
Jos Bergervoet <jos.ber...@xs4all.nl> wrote:

(snip)

>> >> for i=1:3,end
>> >> x=3+i % the "oops" way if intend...
>> x =
>> 6
>> >> x=3+1i % what ML'ers of any experience do...

> But how do they get:
> x=exp(i*t)
> with the correct result?! I think it cannot work,
> that's why Mathematica chose "I" instead of "i",
> probably.

The variable i starts out with the value that squared is -1, but
if you change it, then it gets a new value.

In mathematical, all the built-ins start with an upper case
letter. That is convenient in that if user variables and functions
start with a lower case letter, you never have to worry about
a conflict. So, yes, but more generally.

By the way, there is, last I know, only one Mathematica function
that you can execute without [].

-- glen

Wolfgang Kilian

unread,
Sep 30, 2014, 3:21:40 AM9/30/14
to
More about the 'why': syntax and details left aside, the OO paradigm is
about organizing and managing software. Polymorphism is just a specific
aspect.

(1) associate procedures to data structures, and group data structure
definitions together in modules. Expose a concise and well-defined
interface to the outside. This is good practice, and has become
standard (details vary) with many modern languages, including Fortran.

An alternative is to associate data to procedures or treat both on the
same footing. A valid approach when thinking in terms of algorithms.
But that's functional programming, with very little support in Fortran.

(2) separate (abstract) interface from (concrete) implementations,
a.k.a. design patterns. Well supported in modern Fortran.

These tools become valuable once programs get large, but I'd recommend a
consistent style, also in smaller pieces of software. So I fully agree
with the OP's approach.

-- Wolfgang

--
E-mail: firstnameini...@domain.de
Domain: yahoo

Sebastiaan Janssens

unread,
Sep 30, 2014, 4:58:04 AM9/30/14
to
On 09/30/2014 08:39 AM, Jos Bergervoet wrote:
> On 9/29/2014 9:03 PM, Sebastiaan Janssens wrote:
>> On 09/29/2014 07:51 PM, Jos Bergervoet wrote:
>>
>> [ snap ]
>>
>>>
>>> Logic would say that if you write a program dealing with
>>> objects, it's clear you might benefit from OO. Whereas a
>>> program dealing with algorithms might better use the
>>> procedural approach. So why should we mess up the logic?
>>>
>>
>> Is it really that black and white?
>>
>> Let me give an example. As a fortran novice, I have found it extremely
>> useful that arithmetic operations such as addition and substraction have
>> been "overloaded" (may I use this term here?) in modern Fortran so they
>> also apply to multidimensional arrays. (Incidentally, this is indeed one
>> of the reasons I love the language: Fortran for me is like MATLAB on
>> steroids.) This seems to be an example of object orientation.
>
> No. Addition is a procedure. Overloading existed decades
> before the OO paradigm was invented.
>

Interesting! Did not know that.
Your statement to which I was referring is:

>>> Logic would say that if you write a program dealing with
>>> objects, it's clear you might benefit from OO. Whereas a
>>> program dealing with algorithms might better use the
>>> procedural approach. So why should we mess up the logic?

To me, this sounded a lot like you view the situation as a dichotomy:
"Use OO when there are objects, use procedural programming when there
are algorithms." The point I was trying to make, is that I can very well
imagine situations where a mixed-paradigm approach could work very well,
namely those situations where algorithms manipulate objects.

Best wishes,

Sebastiaan.

Wolfgang Kilian

unread,
Sep 30, 2014, 5:28:46 AM9/30/14
to
Which is always true - all data entities are objects. (Syntactical
restrictions for intrinsic types understood.) The way it is implemented
in Fortran (incidentally, also in C++ and Java), object-oriented
programming *is* procedural programming, just with a higher level of
organization and abstraction. In those languages, method calls are
technically procedure/function calls. There is no dichotomy.

Things become interesting once you treat algorithms as objects, and once
objects become abstract. There was no need for this in the OP's module.

> Best wishes,
>
> Sebastiaan.

dpb

unread,
Sep 30, 2014, 9:22:16 AM9/30/14
to
On 09/30/2014 1:29 AM, Jos Bergervoet wrote:
> On 9/29/2014 11:24 PM, dpb wrote:
>> On 09/29/2014 3:09 PM, glen herrmannsfeldt wrote:
>> ...
>> While you _can_ screw up, it's simple enough
>
> Yes, Matlab is not really a safe programming
> language. (Fortran is also not the very best
> in terms of safety, of course).

I would say it is only different...

>> and most experienced
>> Matlab'ers still do use i,j for loop indices as well since they're just
>> _so_ terribly ubiquitous (and complex variables being less prevalent
>> than real for many (most?) applications).
>
> Yes, great! Matlab programmers (the experienced
> ones, that is) can write loops. Wasn't it made
> to use array syntax?! And about those complex
> numbers: the experienced ones just know that we
> do not need them. They are less prevalent! So
> we should shut up.
...

Much in Matlab can be vectorized, yes; that (along with the
interpretative nature and being packaged with the very large set of
auxiliary functions and plotting routines) is it's primary advantage in
application. But, not all algorithms are suitable to be vectorized or
to do so causes more complexity that even in Matlab the "deadahead"
looping construct can be more efficient (and with the advancements in
the JIT optimizer that TMW has made, the overhead of looping has been
reduced drastically over the years).

> But how do they get:
> x=exp(i*t)
> with the correct result?! I think it cannot work,
> that's why Mathematica chose "I" instead of "i",
> probably.

Same as showed before--write the explicit complex constant '1i' and the
interpreter gets it right...carrying on from the previous demonstration

>> exp(1i*1E3*t)
ans =
0.8834 + 0.4685i
0.8828 + 0.4698i
0.8821 + 0.4710i
...
0.8775 + 0.4796i
0.8768 + 0.4808i
>>

where t happened to be a vector of timestamps with a millisec resolution
so I amplified them to make the numerics more visible in a short format.

And, of course, one can also simply use other than i,j for loop indices
and not alias the builtin functions or after having used them for the
purpose,

>> clear i
>> i
ans =
0 + 1.0000i
>> which i
built-in (C:\ML_R2012b\toolbox\matlab\elmat\i)
>>

which restores the builtin behavior.

Also, it should be noted that other than in the context of the function
or the workspace script in which the aliasing occurs that in any other
function workspace the builtin will _not_ have been aliased.

>> i=1; % re-alias in workspace
>> showi(pi) % call function using builtin i
ans =
0 + 3.1416i
>> type showi % the function file showi.m

function y=showi(x)
y=x*i;

>>

I don't necessarily recommend the practice in Matlab to newbies, but I
use Matlab extensively and do such things myself all the time because I
understand the implications when doing so and know how to prevent the
unintended.

It would be _a_good_thing_ (tm) if TMW were to have figured out a better
way and not have allowed aliasing to be so rampant in Matlab but it's
"just the way it is"...

--

Dan Nagle

unread,
Sep 30, 2014, 9:58:15 AM9/30/14
to
Hi,

On 2014-09-29 17:51:47 +0000, Jos Bergervoet said:
>
>> I commend Milan Curcic for taking the object-oriented approach
>> and his "class design" looks good to me.
>
> Yes, but why would procedures instead of methods have
> been worse?


One reason is that if you have methods, or procedural components,
you always have the methods available any time you have the type.
No need to remember to make separate procedural names public,
and mention them in all needed only clauses.

--
Cheers!

Dan Nagle

Paul van Delst

unread,
Sep 30, 2014, 10:46:35 AM9/30/14
to
On 09/29/14 22:36, robin....@gmail.com wrote:

> Assignments of simple integer constants to a fp variable are IMHO
> best written as that. The KISS principle should apply, and
> X = 0
> is preferable to the clutter of
> X = 0.0_wp
> or whatever.
> It isn't just in simple assignments that this clutter can obscure the
> task at hand, for when simple constants are written like that throughout
> an expression, the expression can become a mess.

For integer constants, yes. Obviously not for numbers with non-integer
value.

Still, for the anal-retentive amongst us (i.e. me), I like the code to
look consistent. E.g. I prefer to have
X = 0.0_wp
Y = 0.1352_wp
rather than
X = 0
Y = 0.1352_wp

Purely personal preference on my part (say that 5 times fast).

Also, I tend not to use actual literal constants in expressions. I do it
pretty much Milan did in his module, specifying all the numbers up front
as parameters, except rather than something like his
REAL(KIND=real_dp),PARAMETER :: d2h = 24d0 ! day -> hour
REAL(KIND=real_dp),PARAMETER :: d2m = d2h*60d0 ! day -> minute
REAL(KIND=real_dp),PARAMETER :: m2h = one/60d0 ! minute -> hour
REAL(KIND=real_dp),PARAMETER :: d2s = 86400d0 ! day -> second
...etc...
I do
REAL(dp),PARAMETER :: d2h = 24.0_dp ! day -> hour
REAL(dp),PARAMETER :: d2m = d2h*60.0_dp ! day -> minute
REAL(dp),PARAMETER :: m2h = one/60.0_dp ! minute -> hour
REAL(dp),PARAMETER :: d2s = 86400.0_dp ! day -> second
Again, just for consistency.

Also, referring back to being anal, I would probably also do
REAL(dp),PARAMETER :: d2h = 24.0_dp ! day -> hour
REAL(dp),PARAMETER :: h2m = 60.0_dp ! hour -> minute ** same as m2s!
REAL(dp),PARAMETER :: d2m = d2h*h2m ! day -> minute
...etc...
REAL(dp),PARAMETER :: d2s = d2h*h2m*m2s ! day -> second

to remove all the repeated "60d0" usage in the definitions.

Anyhoo....


cheers,

paulv

Jos Bergervoet

unread,
Sep 30, 2014, 12:19:46 PM9/30/14
to
On 9/30/2014 10:57 AM, Sebastiaan Janssens wrote:
> On 09/30/2014 08:39 AM, Jos Bergervoet wrote:
>> On 9/29/2014 9:03 PM, Sebastiaan Janssens wrote:
...
>>> Let me give an example. As a fortran novice, I have found it extremely
>>> useful that arithmetic operations such as addition and substraction have
>>> been "overloaded" (may I use this term here?) in modern Fortran so they
>>> also apply to multidimensional arrays. (Incidentally, this is indeed one
>>> of the reasons I love the language: Fortran for me is like MATLAB on
>>> steroids.) This seems to be an example of object orientation.
>>
>> No. Addition is a procedure. Overloading existed decades
>> before the OO paradigm was invented.
>
> Interesting! Did not know that.

The addition and subtraction you mention became
overloaded at the moment they were first used for
anything else than integers. Maybe by the ancient
Greeks? Certainly by the people who first used
them for functions, vectors, quaternions, etc.

The OO paradigm seems to be present in the Einstein
summation convention where the tensors automatically
imply a procedure to be followed, but I'm not sure
there are much earlier examples. (Anyone?)

We can go further back to the natural languages.
Verbs are usually overloaded there, to accept
almost any data argument. Also possible, but much
less often encountered, are verbs that can only be
used when linked to a particular type of object.
So there it also seems that overloading is the
more natural to occur and therefore may have
developed earlier (but proof may in that case be
difficult to obtain..)


--
Jos

glen herrmannsfeldt

unread,
Sep 30, 2014, 4:21:39 PM9/30/14
to
Jos Bergervoet <jos.ber...@xs4all.nl> wrote:

(snip)

> The addition and subtraction you mention became
> overloaded at the moment they were first used for
> anything else than integers. Maybe by the ancient
> Greeks? Certainly by the people who first used
> them for functions, vectors, quaternions, etc.

> The OO paradigm seems to be present in the Einstein
> summation convention where the tensors automatically
> imply a procedure to be followed, but I'm not sure
> there are much earlier examples. (Anyone?)

> We can go further back to the natural languages.
> Verbs are usually overloaded there, to accept
> almost any data argument.

Always my favorite are nouns overloaded to be used as verbs.

Most often, a noun as a verb means that you add whatever
the noun describes to something. Water a plant. Butter
a dish.

But there are some nouns as verbs that have the meaning that
you subtract (remove) said item. I will let readers see which
ones they think up.

> Also possible, but much
> less often encountered, are verbs that can only be
> used when linked to a particular type of object.
> So there it also seems that overloading is the
> more natural to occur and therefore may have
> developed earlier (but proof may in that case be
> difficult to obtain..)

And sometimes old words (nouns or verbs) get reused with a new
meaning, possibly with a new qualifier. My favorite is
computer architecture, which is how to design and build computers.

Some years ago, I went to a seminar that had "computer architecture"
in the title, though it was in the mechanical engineering building.

Turned out to be about designing buildings using computers.

-- glen

Richard Maine

unread,
Sep 30, 2014, 4:43:36 PM9/30/14
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> And sometimes old words (nouns or verbs) get reused with a new
> meaning, possibly with a new qualifier.

If we want to go into English use of qualifiers, one of my pet peeves
about the Fortran standard is that it isn't very consistent about
following the usual English interpretation of qualifiers. My favorite
example is "executable construct." A reader of English might think that
"executable" was a qualifier to "construct" and thus that this referred
to some sort of construct, most likely a construct that was executable.

But alas, in Fortran standard-speak, most executable constructs are not
constructs at all. There is a Fortran definition of "construct" and a
Fortran definition of "executable construct", but neither definition
particularly builds on the other. They are just separate terms that
happen to share a simillarity in spelling. Yes, some constructs are
executable, and those are indeed executable constructs. But many things
that are executable are not constructs and yet are still executable
constructs.

I seem to recall other simillar cases, but that is one of my favorites.
There is also the fact that a target (in italic font) is not necessarily
a target. One that has generated messy debates and interpretations is
that an actual-argument is not necessarily an actual argument.

--
Richard Maine
email: last name at domain . net
domain: summer-triangle

Jos Bergervoet

unread,
Sep 30, 2014, 4:54:16 PM9/30/14
to
On 9/30/2014 4:46 PM, Paul van Delst wrote:

> ...etc...
> I do
> REAL(dp),PARAMETER :: d2h = 24.0_dp ! day -> hour

But that is confusing, it should be called h2d
instead! Because multiplying an hour by 24 clearly
makes it a day.

> REAL(dp),PARAMETER :: d2m = d2h*60.0_dp ! day -> minute

Now you messed up completely. That should be:

REAL(dp),PARAMETER :: m2d = m2h*h2d ! minute -> day

Isn't that evident?!

> ...etc...
> REAL(dp),PARAMETER :: d2s = d2h*h2m*m2s ! day -> second

That's exactly what would follow from my definitions.
Although I would actually never use them (nor yours) but
instead prefer to user type(time) with user defined
kinds (do they exist in Fortran already?)

If minute and second are just differend "kinds" of time
then you can assign them and do arithmetic without any
need for conversion. It's all implicit then.

--
Jos

Milan Curcic

unread,
Sep 30, 2014, 6:01:14 PM9/30/14
to

>
> But that is confusing, it should be called h2d
>
> instead! Because multiplying an hour by 24 clearly
>
> makes it a day.
>

The factor d2h (day to hour) (and similar conversion factors) does not act to multiply an hour by 24 to make it a day. It acts to multiply a number of days to obtain a number of hours. 1 day contains exactly 24 hours.

Example:

numberOfHours = numberOfDays * d2h

Like with your earlier post, if you read a bit more that those few lines, their meaning would be more clear.

>
>
> > REAL(dp),PARAMETER :: d2m = d2h*60.0_dp ! day -> minute
>
>
>
> Now you messed up completely. That should be:
>
>
>
> REAL(dp),PARAMETER :: m2d = m2h*h2d ! minute -> day
>
>
>
> Isn't that evident?!
>

Like before, 60d0 acts as h2m (hour to minute) to produce d2m = d2h*h2m. If this is not clear, see my comment above.

Looks to me you just got stuck on the naming of these constants. In the context that they operate, they make sense to me.

> Although I would actually never use them (nor yours) but
>
> instead prefer to user type(time) with user defined
>
> kinds (do they exist in Fortran already?)
>

What is this type(time) you speak of?

Have a great week!!

milan

Milan Curcic

unread,
Sep 30, 2014, 6:10:34 PM9/30/14
to

>
> Then, anywhere the datetime identifier is accessible the associated
>
> operator(+) is there too, even if there are ONLY clauses on USE
>
> statements. Not applicable here, because the compiler would just
>
> complain if the generic interface for + wasn't accessible, but for
>
> defined assignment this can be critical.
>
>
>
> But note how the declaration type spec is now polymorphic - because a
>
> generic binding is specified via specific bindings, and the passed
>
> argument of a specific bindings of an extensible type must be
>
> polymorphic (and types that aren't extensible can't have bindings).
>
> This is what I was referring to with my parenthetical bit in the
>
> previous post. CLASS is all about polymorphism, but we don't really
>
> care for that here.
>
>
>
> You need to be a little mindful that you don't end up with ambiguous
>
> specifics sitting behind a particular reference to a generic (i.e. don't
>
> have something with the type-kind-rank signature of
>
> datetime_plus_timedelta a specific binding for operator(+) in timedelta
>
> or in the stand-alone generic), but I think this way makes sense -
>
> timedeltas could be useful on their own without knowing about datetime,
>
> so they don't have any bindings that take datetime arguments, but as
>
> soon as you have more than one datetime you might start considering what
>
> the difference is between them, so datetime bindings will often have
>
> timedelta arguments.

Thanks, Ian! That is very useful. I will look into these soon.

Cheers,
milan

JWM

unread,
Sep 30, 2014, 6:11:33 PM9/30/14
to
On Tue, 2014-09-30 at 22:54 +0200, Jos Bergervoet wrote:
> On 9/30/2014 4:46 PM, Paul van Delst wrote:
>
> > ...etc...
> > I do
> > REAL(dp),PARAMETER :: d2h = 24.0_dp ! day -> hour
>
> But that is confusing, it should be called h2d
> instead! Because multiplying an hour by 24 clearly
> makes it a day.

Shouldn't the item to the right of the "2" express the expected result?

As in:

4 [day] x ( d2h [hour/day] ) = 4 x d2h [hour]

Or, using a unix command as an example:

file [dos] x (dos2unix [unix/dos] ) = file x dos2unix [unix]

--
John.

Richard Maine

unread,
Sep 30, 2014, 6:29:24 PM9/30/14
to
Milan Curcic <cao...@gmail.com> wrote:

Jos wrote:
> >
> > But that is confusing, it should be called h2d
> > instead! Because multiplying an hour by 24 clearly
> > makes it a day.

> The factor d2h (day to hour) (and similar conversion factors) does not act
> to multiply an hour by 24 to make it a day. It acts to multiply a number
> of days to obtain a number of hours. 1 day contains exactly 24 hours.

Apparently different people find different things confusing. I find your
version intuitive and Jos's confusing. As you say, it isn't for somehow
multiplying an hour. It is for converting a numer of days to a number of
hours. That's going to be how it is used. I'd have exactly the opposite
reaction from Jos, in that if I saw

hours = days*h2d

I'd think it was backwards and confusing.

Gordon Sande

unread,
Sep 30, 2014, 7:35:28 PM9/30/14
to
Every so often there is a suggestion that Fortran should be able to
follow units. Sometimes that is expressed by a module with various
derived types. Such suggestions always seem to be be based on the
utility of SI units in physics and either implicitly or explicitly
state that the few SI units will do the job.

This example is entirely within a single SI unit of time with the
issue being scaling. Which also goes by the english name of units.

Having been there, bought the tee shirt and even worn it out my
experience is that just and only SI is a bust. Once used a system
like that where the original author announced that it was a useless
failure as almost all the interesting things turned out to be pure
numbers in SI.

Another system I used had funny units with names like dollar, euro,
carbon, iron, manhour and interest-rate to name a few. A little less
automatic but it solved real problems. It turned out the various
analysts who got audited with the system (when the units were
retrofitted with moderate fuss) had repeated problem with interest
rates that were for other than a full year when the simulation was
for much finer time scale. It was intersting how productivity improved
once folks did not have to spend a fair bit of time repeatedly checking to
make sure that had not made a units error as they fiddled with their models.

The intesting technical issues were arrays with differing units by elenment
(never solved) and functions applied to arguements of differing types
(system had functions as macros for text expansion to solve the problem).










Jos Bergervoet

unread,
Oct 1, 2014, 2:51:47 AM10/1/14
to
On 10/1/2014 12:29 AM, Richard Maine wrote:
> Milan Curcic<cao...@gmail.com> wrote:

> hours. That's going to be how it is used. I'd have exactly the opposite
> reaction from Jos, in that if I saw
>
> hours = days*h2d
>
> I'd think it was backwards and confusing.

But I completely agree! It should be:

hours = d2h(days)

the conversion acts on the object, so is
should be on the left. And it should be
an operator precisely because the meaning
of the number d2h (or h2d) is confusing.
The name d2h, as a name *of a number*, is
not descriptive of the number 24, so you
don't know what it is!

And for me, rule number 1 (and 2 also) of
programming is to write clear code. And
I've always hated unclear names (like those
from the implicit typing days.)

--
Jos

Jos Bergervoet

unread,
Oct 1, 2014, 3:07:30 AM10/1/14
to
On 10/1/2014 1:35 AM, Gordon Sande wrote:
> On 2014-09-30 20:54:14 +0000, Jos Bergervoet said:
...
>> .. user type(time) with user defined
>> kinds (do they exist in Fortran already?)
>>
>> If minute and second are just differend "kinds" of time
>> then you can assign them and do arithmetic without any
>> need for conversion. It's all implicit then.
>
> Every so often there is a suggestion that Fortran should be able to
> follow units. Sometimes that is expressed by a module with various
> derived types. Such suggestions always seem to be be based on the
> utility of SI units in physics and either implicitly or explicitly
> state that the few SI units will do the job.
>
> This example is entirely within a single SI unit of time with the
> issue being scaling. Which also goes by the english name of units.

Of course, if we forget my suggestion to have
user-defined kinds, we could just have different
user-defined types for second, minute, etc. with
automatic (multiplicative) conversion in each
assignment or arithmetic operation. But then it
only comes down to the internal representation
being different for all of them. So how it would
help the user is not clear.

> Having been there, bought the tee shirt and even worn it out my
> experience is that just and only SI is a bust.

Personally I would use the strict rule that time
is in seconds. Then I do not need any conversion
in my code. I would still need it for interfacing
with existing code and for in and output, if those
were not using the second.

So what would be your solution then? Conversion
constants? Conversion routines? Wrappers around
all deviant functions? User-defined time types?

--
Jos

Gordon Sande

unread,
Oct 1, 2014, 9:28:35 AM10/1/14
to
Demography, whether for pension planning or health care,
would be rather awkward if done in seconds. When
health care needs times finer than days it is usually
called emergency medicine.

Finance, whether pensions or capital projects, has
time scales of decades with time measured in years,
quarters or even months. Only when front running
or high frequency trading, or whatever it is called,
does finance need seconds or finer.

> So what would be your solution then? Conversion
> constants? Conversion routines? Wrappers around
> all deviant functions? User-defined time types?

The problem you seem to be concerned with is that of notation
confounded by the issue that there are several common scales
that are in concurrent use. All annoying and confusing issues.
My issues with SI promotion is that there are many things that are
reasonably called units, and act, walk and and squawk like units,
that are in common use. Currencies, inflation rates, accident rates,
component fractions and on and on...


JWM

unread,
Oct 1, 2014, 1:10:15 PM10/1/14
to
On Wed, 2014-10-01 at 08:51 +0200, Jos Bergervoet wrote:
> On 10/1/2014 12:29 AM, Richard Maine wrote:
> > Milan Curcic<cao...@gmail.com> wrote:
>
> > hours. That's going to be how it is used. I'd have exactly the opposite
> > reaction from Jos, in that if I saw
> >
> > hours = days*h2d
> >
> > I'd think it was backwards and confusing.
>
> But I completely agree! It should be:
>
> hours = d2h(days)
>
> the conversion acts on the object, so is
> should be on the left. And it should be
> an operator precisely because the meaning
> of the number d2h (or h2d) is confusing.

So you'll have:

interface operator(.daystohours.)
module procedure d2h
end interface

contains

elemental real function d2h(days) result(hours)
real, intent(IN) :: days
hours = 24. * days
end function

All that for the whole purpose of multiplying a value by 24? Well, I
guess abstraction doesn't always mean efficiency.

--
John.


Jos Bergervoet

unread,
Oct 1, 2014, 4:03:36 PM10/1/14
to
On 10/1/2014 3:28 PM, Gordon Sande wrote:
> On 2014-10-01 07:07:28 +0000, Jos Bergervoet said:
...
>> Personally I would use the strict rule that time
>> is in seconds. Then I do not need any conversion
>> in my code. I would still need it for interfacing
>> with existing code and for in and output, if those
>> were not using the second.
>
> Demography, whether for pension planning or health care,
> would be rather awkward if done in seconds.

I do not see why. An identifier in the code could
have the same name (let's say "age") whatever the unit
you want to use. So how do you see the difference?

Suppose you need to see if someone's age is above
60 years and his child's age is less then 6 months,
then of course you need to write something like

John%age > 60*year .and. child%age < 6*month

and if you happen to know that time is represented
in months you can abbreviate it to:

John%age > 60*year .and. child%age < 6

But That Is Dangerous! If the representation is in
years you can abbreviate it in another way (left as an
exercise for the reader) which is equally dangerous,
because a change in representation will invalidate
the abbreviations. So here, as always, you should
*not* use knowledge about the internal representation.
And then I don't see why any choice will be "awkward"
as you say.

To reduce the risk of errors I would prefer a special
type which does not allow comparison with numbers. Of
course it should allow multiplication and division by
numbers. The "smart abbreviations" are then disallowed
and the unit used is invisible. Problem solved!

...
> Finance, whether pensions or capital projects, has
> time scales of decades with time measured in years,
> quarters or even months. Only when front running
> or high frequency trading, or whatever it is called,
> does finance need seconds or finer.

Yes, trading frequencies nowadays can be so high that
you need to use 4-vectors instead of time alone to
include relativistic effects! (This is left as an
exercise for the astute reader.)

--
Jos

Gordon Sande

unread,
Oct 1, 2014, 4:32:04 PM10/1/14
to
On 2014-10-01 20:03:16 +0000, Jos Bergervoet said:

> On 10/1/2014 3:28 PM, Gordon Sande wrote:
>> On 2014-10-01 07:07:28 +0000, Jos Bergervoet said:
> ...
>>> Personally I would use the strict rule that time
>>> is in seconds. Then I do not need any conversion
>>> in my code. I would still need it for interfacing
>>> with existing code and for in and output, if those
>>> were not using the second.
>>
>> Demography, whether for pension planning or health care,
>> would be rather awkward if done in seconds.
>
> I do not see why. An identifier in the code could
> have the same name (let's say "age") whatever the unit
> you want to use. So how do you see the difference?

Part of "doing" the problem consists of handling the data.
It is easier to read if it is a form that more nearly matches
how the topic is "spoken". On occassion the data will not have
cleverly formatted so an understandable native format lowers
the awkwardness. It stands a very good chance of being processed
by many diverse systems. The correspnding levels of douumentation
is likey to be rather low. Matching the "spoken" form will
lower the fuss level.

John Harper

unread,
Oct 1, 2014, 5:21:49 PM10/1/14
to
Milan Curcic wrote:

> 1 day contains exactly 24 hours.

Usually, but not always. Many of us live in places where each year has one
23-hour day and one 25-hour one.

--
John Harper

Milan Curcic

unread,
Oct 1, 2014, 5:38:06 PM10/1/14
to

>
> Usually, but not always. Many of us live in places where each year has one
>
> 23-hour day and one 25-hour one.
>


Thank you John, that is a very good point! There are programming languages with libraries that use the IANA database for DST and various other time zone quirks, so that may very well be included in some future version of datetime-fortran. Currently, handling timezones and DST is completely user's responsibility.

Cheers,
milan

michael...@compuserve.com

unread,
Oct 2, 2014, 2:46:11 AM10/2/14
to
And everyone lives in a world in which the number of seconds in the last minute of June or December can be 61: "Unlike leap days, UTC leap seconds occur simultaneously worldwide; for example, the leap second on December 31, 2005 23:59:60 UTC was December 31, 2005 18:59:60 (6:59:60 p.m.) in U.S. Eastern Standard Time and January 1, 2006 08:59:60 (a.m.) in Japan Standard Time."

Regards,

Mike Metcalf

Wolfgang Kilian

unread,
Oct 2, 2014, 3:07:26 AM10/2/14
to
SI are standard in engineering, but not in physics.

> Another system I used had funny units with names like dollar, euro,
> carbon, iron, manhour and interest-rate to name a few. A little less
> automatic but it solved real problems. It turned out the various
> analysts who got audited with the system (when the units were
> retrofitted with moderate fuss) had repeated problem with interest
> rates that were for other than a full year when the simulation was
> for much finer time scale. It was intersting how productivity improved
> once folks did not have to spend a fair bit of time repeatedly checking to
> make sure that had not made a units error as they fiddled with their
> models.
>
> The intesting technical issues were arrays with differing units by elenment
> (never solved) and functions applied to arguements of differing types
> (system had functions as macros for text expansion to solve the problem).

A sensible system of units would allow for multiplication, division, and
exponentiation of units or numbers with units, unit conversion, and it
should be possible to add meters and feet but not meters and ounces.
Conversions are not always multiplicative (Celsius and Fahrenheit?).
For any intrinsic or user-defined procedure, restrictions and allowed
combinations of arguments with units have to be defined in detail. (No
macros, please!)

This would introduce the capabilities of a respectable computer algebra
system into the Fortran compiler ...

glen herrmannsfeldt

unread,
Oct 2, 2014, 5:11:41 AM10/2/14
to
Wolfgang Kilian <kil...@invalid.com> wrote:
> On 01.10.2014 01:35, Gordon Sande wrote:

(snip)
>> Every so often there is a suggestion that Fortran should be able to
>> follow units. Sometimes that is expressed by a module with various
>> derived types. Such suggestions always seem to be be based on the
>> utility of SI units in physics and either implicitly or explicitly
>> state that the few SI units will do the job.

>> This example is entirely within a single SI unit of time with the
>> issue being scaling. Which also goes by the english name of units.

>> Having been there, bought the tee shirt and even worn it out my
>> experience is that just and only SI is a bust. Once used a system
>> like that where the original author announced that it was a useless
>> failure as almost all the interesting things turned out to be pure
>> numbers in SI.

> SI are standard in engineering, but not in physics.

There is a famous quote:

"The nice thing about standards is that you have so many to choose from."

Most of physics is done in SI units, but Electricity and Magnetism
is often done in Gaussian (CGS) units, and even then, electrostatic
based or magnetostatic based.

Nuclear and high energy physics have their own special units.

The barn: https://en.wikipedia.org/wiki/Barn_%28unit%29

1e-28 m**2, about the cross sectional area of the Uranium nucleus,
is used for nuclear reaction calculations. Even though it isn't an SI
unit, it is officially accepted for use with SI units.

And even though the Tesla is the SI magnetic flux unit, Gauss is
often used, even when everything else is SI. (As far as I know, with
no special dispensation.)

-- glen

Gordon Sande

unread,
Oct 2, 2014, 8:46:19 AM10/2/14
to
Can you explain the difference between inline expansion and macros. Or for that
matter, the whole notion of "as if".

> This would introduce the capabilities of a respectable computer algebra
> system into the Fortran compiler ...

Folks with real problems to solve would settle for complete checking. It is
only toy systems intended for occasional users who want the "do what I mean"
automatic algebra systems. The age old problem of the best is the enemy
of the good!

> -- Wolfgang


Wolfgang Kilian

unread,
Oct 2, 2014, 9:11:27 AM10/2/14
to
Well, the term 'macro' reminds me of a preprocessor which comes with an
independent set of syntax rules and is not subject to immediate
type-checking etc. If a construct is an integral part of the language,
all constraints applicable to the original expression, I would not call
it a macro. Maybe this is just my personal understanding.

>> This would introduce the capabilities of a respectable computer
>> algebra system into the Fortran compiler ...
>
> Folks with real problems to solve would settle for complete checking. It is
> only toy systems intended for occasional users who want the "do what I
> mean"
> automatic algebra systems. The age old problem of the best is the enemy
> of the good!

Units make sense as a language concept if they don't necessarily resolve
into simple real numbers. So they would be part of the type system,
and I rather would require them to be checked at compile time --
otherwise, why bother at all? Writing

length = 4 * cm

is completely sufficient, if cm is a numeric parameter.

Checking algebraic units consistency in arbitrary expressions (possibly
involving user-defined functions and operators) is a nontrivial problem.

Clive Page

unread,
Oct 2, 2014, 3:17:34 PM10/2/14
to
And all of us live in places where, up to twice a year, there is a day
of 24 hours and 1 second (or more rarely, 23 hours, 59 minutes, and 59
seconds).

I have to say that I have been somewhat bemused by this discussion. As
a research astronomer writing code for many years, I have often had
write software that handles dates and times. Astronomers have to cope
with not just UTC (often called Greenwich Mean Time in non-technical
publications), Atomic Time, Ephemeris Time, GPS Time, and Sidereal Time.
Occasionally they use scales such as UT1, UT2, Mean Solar Time,
Barycentric Time, etc. The least of the problems is dealing with human
units like hours/mins/seconds. The invariable practice of astronomers
dealing with times is to convert the human units at the earliest
opportunity to a high precision version in seconds, and then do all the
calculations in seconds. Then, only at the very end and if absolutely
essential, convert back to human units.

The same is true with dates: convert to Julian Date (some of us prefer
Modified Julian Date) and then do everything in this uniform day count,
converting back at the end if necessary.

Just because you can, in Fortran, set up a structure containing
components like hours, minutes, seconds, etc, doesn't mean that you
should. There are already perfectly good libraries handling dates and
times in sensible ways; this seems to be re-inventing a wheel somewhat
unnecessarily.

--
Clive Page

Milan Curcic

unread,
Oct 2, 2014, 4:28:55 PM10/2/14
to

>
> I have to say that I have been somewhat bemused by this discussion. As
>
> a research astronomer writing code for many years, I have often had
>
> write software that handles dates and times. Astronomers have to cope
>
> with not just UTC (often called Greenwich Mean Time in non-technical
>
> publications), Atomic Time, Ephemeris Time, GPS Time, and Sidereal Time.
>
> Occasionally they use scales such as UT1, UT2, Mean Solar Time,
>
> Barycentric Time, etc. The least of the problems is dealing with human
>
> units like hours/mins/seconds. The invariable practice of astronomers
>
> dealing with times is to convert the human units at the earliest
>
> opportunity to a high precision version in seconds, and then do all the
>
> calculations in seconds. Then, only at the very end and if absolutely
>
> essential, convert back to human units.
>


Hi Clive!

Thank you for your comment. I can see why a library like this would not be the most useful one for you, though it does sound like you do conversion to/from human time occasionally. I am a meteorologist and oceanographer, and you may be surprised how much is the human time formatting used (say, ISO 8601). Development datetime-fortran really came about from my own frustration of writing tedious integer arithmetic and string conversion. This library has already saved me a lot of time, and so it did for many more people over the past year or so.

> Just because you can, in Fortran, set up a structure containing
>
> components like hours, minutes, seconds, etc, doesn't mean that you
>
> should.

But does it mean that you should not? Keep in mind, I am not asking whether this is useful or a good idea, I'm already convinced that it is. I'm merely letting the community know that it's out there. If you are unsure how a tool would be useful to you, that probably means you don't need it.

Cheers,
milan

Clive Page

unread,
Oct 2, 2014, 5:24:45 PM10/2/14
to
On 02/10/2014 21:28, Milan Curcic wrote:

> But does it mean that you should not? Keep in mind, I am not asking whether this is useful or a good idea, I'm already convinced that it is. I'm merely letting the community know that it's out there. If you are unsure how a tool would be useful to you, that probably means you don't need it.

I take your point. Certainly it is very good that you have made this
available to the community in source form.


--
Clive Page

Ron Shepard

unread,
Oct 2, 2014, 9:02:22 PM10/2/14
to
On 10/2/14 2:17 PM, Clive Page wrote:
> And all of us live in places where, up to twice a year, there is a day
> of 24 hours and 1 second (or more rarely, 23 hours, 59 minutes, and 59
> seconds).
>
> I have to say that I have been somewhat bemused by this discussion. As
> a research astronomer writing code for many years, I have often had
> write software that handles dates and times. Astronomers have to cope
> with not just UTC (often called Greenwich Mean Time in non-technical
> publications), Atomic Time, Ephemeris Time, GPS Time, and Sidereal Time.
> Occasionally they use scales such as UT1, UT2, Mean Solar Time,
> Barycentric Time, etc. The least of the problems is dealing with human
> units like hours/mins/seconds. The invariable practice of astronomers
> dealing with times is to convert the human units at the earliest
> opportunity to a high precision version in seconds, and then do all the
> calculations in seconds. Then, only at the very end and if absolutely
> essential, convert back to human units.

I really do not understand how all of this works, so let me ask what is
probably a stupid question. If you want to know the number of seconds
between now and, say, 12-Oct-1492 at 2am, how do you do that? You have
to convert from julian to gregorian dates, which I do understand I
think, and you need to account for all of the leap seconds that have
been added recently, which I do not understand. Do you just have to keep
track of each leap second and when it was added?

Leap seconds are to account for Sidereal time, right?

$.02 -Ron Shepard

Richard Maine

unread,
Oct 2, 2014, 10:21:48 PM10/2/14
to
Ron Shepard <nos...@nowhere.org> wrote:

> think, and you need to account for all of the leap seconds that have
> been added recently, which I do not understand. Do you just have to keep
> track of each leap second and when it was added?

Yep. There are tables of that, but no algorithm other than just looking
it up in the table. They are irregular. But of a pain for times more
than 6 months in the future, as you don't know whether or not there will
be a leap second more in advance than that. I used to manage some
softwar ethat needed to know about leap seconds. Just had a table that
was manually updated whenever a new one was declared.

See the wikipedia article; it has quite a lot of detail.

William Clodius

unread,
Oct 3, 2014, 12:21:04 AM10/3/14
to
Leap seconds became useful when we developed a fixed definition of the
speed of light, and distance in terms of a fixed wavelength of light.
The resulting second at the time of that definition to the average
length of a day /(24*60*60) for that decade, but the average length of a
day in a year is not a fixed quantity. Leap seconds are intended
primarilly to deal with changes in the rotational speed of the earth due
to such things as tidal friction and changes in the moment of inertia of
the Earth from glacial ice melt, rebound of the crust, expansion and
contraction of the atmosphere and ocean due to climate, and plate
tectonics. (I suspect tidal friction dominates.) There is also the tidal
transfer of energy to the Earth's orbit that causes the orbit "radius"
to increase increasing the length of a year. As the motion of the earth
changes the mapping of sidereal time to "normal" time changes. Leap
seconds are one way to keep sidereal time consistent (to about a second)
with normal time.

While most of the the changes that drive the insertion of leap seconds
could be estimated for earlier times, they are not particularly useful
prior to high accuracy clocks developed in the mid twentieth century.

glen herrmannsfeldt

unread,
Oct 3, 2014, 10:40:02 AM10/3/14
to
William Clodius <wclo...@earthlink.net> wrote:

(snip)

> Leap seconds became useful when we developed a fixed definition of the
> speed of light, and distance in terms of a fixed wavelength of light.
> The resulting second at the time of that definition to the average
> length of a day /(24*60*60) for that decade, but the average length of a

would have been less of a problem if they didn't choose 1900 as
the year for the standard (in 1956).

So, there was already 56 years of slowing when the standard was
set, and now 114 years.

> day in a year is not a fixed quantity. Leap seconds are intended
> primarilly to deal with changes in the rotational speed of the earth due
> to such things as tidal friction and changes in the moment of inertia of
> the Earth from glacial ice melt, rebound of the crust, expansion and
> contraction of the atmosphere and ocean due to climate, and plate
> tectonics. (I suspect tidal friction dominates.) There is also the tidal
> transfer of energy to the Earth's orbit that causes the orbit "radius"
> to increase increasing the length of a year. As the motion of the earth
> changes the mapping of sidereal time to "normal" time changes. Leap
> seconds are one way to keep sidereal time consistent (to about a second)
> with normal time.

https://en.wikipedia.org/wiki/Leap_second

-- glen

Jos Bergervoet

unread,
Oct 3, 2014, 1:33:25 PM10/3/14
to
On 10/2/2014 3:11 PM, Wolfgang Kilian wrote:
> On 02.10.2014 14:46, Gordon Sande wrote:
>> On 2014-10-02 07:07:24 +0000, Wolfgang Kilian said:
>>> On 01.10.2014 01:35, Gordon Sande wrote:
>>>> On 2014-09-30 20:54:14 +0000, Jos Bergervoet said:
>>>>> On 9/30/2014 4:46 PM, Paul van Delst wrote:
>>>>>
>>>>>> ...etc...
...
> I rather would require them to be checked at compile time -- otherwise,
> why bother at all? Writing
>
> length = 4 * cm
>
> is completely sufficient, if cm is a numeric parameter.

But that will not do anything of your required compile-time
checking..

If, on the other hand, cm is *not* a numerical type
but a (user-defined) type, e.g. type(phys_length),
then at least you get the check whether the variable
length is also of that type, i.e. of the correct
physical dimension.

> Checking algebraic units consistency in arbitrary expressions (possibly
> involving user-defined functions and operators) is a nontrivial problem.

Not in the example above. What is a simple case where
it would become difficult?

--
Jos

Gordon Sande

unread,
Oct 3, 2014, 1:45:42 PM10/3/14
to
When you get into practical problems where there are "production functions"
you often find expressions that are dimensionally impure so that all one can
do is apply enough conversions to get to a pure number, apply the production
function to it and then undo the conversions to get back to whatever dimensions
are in play. If you get the scaling wrong then the production function is
nonsense. Some of the earliest examples of such production functions are the
tables of cable sizes for telegraphs and electrical distribution. They are a
curious art form. They also show up in current macro economics, Cobb-Douglas
is a name I seem to recall.




glen herrmannsfeldt

unread,
Oct 3, 2014, 4:09:57 PM10/3/14
to
Jos Bergervoet <jos.ber...@xs4all.nl> wrote:
> On 10/2/2014 3:11 PM, Wolfgang Kilian wrote:

(snip)

>> I rather would require them to be checked at compile
>> time -- otherwise, why bother at all? Writing

>> length = 4 * cm

>> is completely sufficient, if cm is a numeric parameter.

> But that will not do anything of your required compile-time
> checking..

I once used a computer system for teaching physics that would
ask questions, and then expect an answer with units.

As I understand it, the implementation assigned a numerical
value to each possible unit, maybe slightly random, and then
multiplied them appropriately. (This was mostly for first
year college physics.)

One time when I was trying it out (some years after I took
first year physics) I put in for the units for some problem
J^(1/2) kg^(-1/2) and got the right answer. Most likely,
I was the only one ever to use that unit for the answer.

Note that this method works well when there is a yes/no
(right or wrong) answer. When it is wrong, you don't say why,
just that it is wrong.

> If, on the other hand, cm is *not* a numerical type
> but a (user-defined) type, e.g. type(phys_length),
> then at least you get the check whether the variable
> length is also of that type, i.e. of the correct
> physical dimension.

-- glen

Clive Page

unread,
Oct 3, 2014, 4:40:49 PM10/3/14
to
On 03/10/2014 02:01, Ron Shepard wrote:
> I really do not understand how all of this works, so let me ask what is
> probably a stupid question. If you want to know the number of seconds
> between now and, say, 12-Oct-1492 at 2am, how do you do that? You have
> to convert from julian to gregorian dates, which I do understand I
> think, and you need to account for all of the leap seconds that have
> been added recently, which I do not understand. Do you just have to keep
> track of each leap second and when it was added?

Well first you have to know what timescale you are using now and were
using back in 1492. There have also been calendar changes in this gap
as I'm sure you know. Astronomers who investigate historical records of
eclipses, visible supernovae and so on have to cope with these
annoyances, plus the inexplicable non-existence of year zero. I'm not
an expert on matters calendrical, but others are and they write
software, sometimes in Fortran, to do this. Some procedure libraries
that astronomers use do indeed account for all leap seconds since they
were introduced around 1972, and the relevant procedures have to be
updated every time there is a new leap second. But that's only if you
use UTC; if you use something like Atomic Time, Ephemeris Time or GPS
Time, there are no leap seconds, and so these slowly diverge from UTC
(alias GMT).

> Leap seconds are to account for Sidereal time, right?

Not quite. Siderial time is the time measured by the rotation of the
Earth relative to a fixed point in space. In a year, the Earth rotates
approximately 366.25 times relative to the Universe when it appears to
rotate only 365.25 times relative to the Sun, so the length of the
siderial day is about 23 hours 56 minutes. Only those with telescopes
mounted on Earth have to bother with it (my observations have mostly
been made from space, fortunately).

Leap seconds are introduced (or removed) because the length of the day
varies (as measured by the movement of the Sun) e.g. because of melting
of polar ice, sap rising in the trees every spring (more of them in the
northern hemisphere than in the southern), and other effects such as
volcanism. Since we find it convenient to have (a) a day consisting of
24 * 60 * 60 seconds, and (b) the sun overhead (on average) at mid-day
in the centre of our time-zone, and (c) a second which does not vary in
length even by a small amount, something has to give. Interpolating
leap seconds was thought to be the least awkward solution to the problem
i.e. marginally compromise principal (a). It was then thought that leap
seconds would mainly affect astronomers and other scientists who
depended on knowing the absolute time to high precision.

Since 1972 dependence on exact time keeping has spread rather more
widely, e.g. to cell phone networks, air traffic control systems, and
indeed the Internet generally, and so many more people are affected by
leap seconds. It has been known for inexpert or perhaps not entirely
sober machine operators who have the misfortune to be on night duty at
23:59:59 on December 31st to fail to insert the leap second correctly,
and thereby mess things up for a huge number of people.

Hence many people thing leap seconds are a real nuisance, and there are
current proposals to get rid of them. I haven't got a clear idea of
what is proposed instead. It could be leap minutes or hours. In this
case the first leap minute would not be needed for decades, the first
leap hour not for centuries, by which time none of us will care, and it
will conveniently be "someone else's problem". Personally I think I
could tolerate a leap minute, but when a leap hour got near, everyone
would have inadvertently shifted themselves by the equivalent of one
time zone, which might not be so acceptable.

But I fear we may be getting off topic.

--
Clive Page

Wolfgang Kilian

unread,
Oct 6, 2014, 3:36:48 AM10/6/14
to
On 03.10.2014 19:33, Jos Bergervoet wrote:
> On 10/2/2014 3:11 PM, Wolfgang Kilian wrote:
>> On 02.10.2014 14:46, Gordon Sande wrote:
>>> On 2014-10-02 07:07:24 +0000, Wolfgang Kilian said:
>>>> On 01.10.2014 01:35, Gordon Sande wrote:
>>>>> On 2014-09-30 20:54:14 +0000, Jos Bergervoet said:
>>>>>> On 9/30/2014 4:46 PM, Paul van Delst wrote:
>>>>>>
>>>>>>> ...etc...
> ...
>> I rather would require them to be checked at compile time -- otherwise,
>> why bother at all? Writing
>>
>> length = 4 * cm
>>
>> is completely sufficient, if cm is a numeric parameter.
>
> But that will not do anything of your required compile-time
> checking..

Exactly. If there is no compile-time checking, there is no point in
introducing units as a language concept. Numerical parameters are just
fine.

> If, on the other hand, cm is *not* a numerical type
> but a (user-defined) type, e.g. type(phys_length),
> then at least you get the check whether the variable
> length is also of that type, i.e. of the correct
> physical dimension.

Yes. But see below.

>> Checking algebraic units consistency in arbitrary expressions (possibly
>> involving user-defined functions and operators) is a nontrivial problem.
>
> Not in the example above. What is a simple case where
> it would become difficult?

The compiler would need, at the very least, a rational data type, the
corresponding algebra, and new syntax. It must be able to verify that
(pseudocode)

kg^(1/3) * kg^(2/3) == kg

or

sqrt (kg) == kg^(1/2)

and such equalities are not guaranteed if the exponents are real
numbers, and wrong if they are integers. The compiler can't assume that
all real-life applications are trivial. Probably the compiler would
convert units to rational powers of basic units before evaluating
expressions. Combined with rational-number algebra, applying a set of
user-defined replacements is essentially what a computer algebra system
is doing.

Another issue is that operations can be overloaded, operators and
assignments might actually be user-defined procedures. Even in

length = 4 * cm

both '*' and '=' might be overloaded unless 'cm' is of intrinsic numeric
type. The user must be able to control the units of dummy variables of
defined procedures. This is not covered by the existing (already
complicated) rules of TKR matching.

In short, although I'm tempted to like physical units in a language for
scientific purposes, implementing it in the standard in any consistent
and useful way, for me, looks like a major task. I'd rather see other
improvements first, most notably generic-programming support.

glen herrmannsfeldt

unread,
Oct 6, 2014, 4:03:01 AM10/6/14
to
Wolfgang Kilian <kil...@invalid.com> wrote:
> On 03.10.2014 19:33, Jos Bergervoet wrote:

(snip, someone wrote)
>>> length = 4 * cm

>>> is completely sufficient, if cm is a numeric parameter.

>> But that will not do anything of your required compile-time
>> checking..

> Exactly. If there is no compile-time checking, there is no point in
> introducing units as a language concept. Numerical parameters are just
> fine.

Seems like you could have it check units, or you could have
it convert units, presumably at compile time, but maybe
also at run time.

>> If, on the other hand, cm is *not* a numerical type
>> but a (user-defined) type, e.g. type(phys_length),
>> then at least you get the check whether the variable
>> length is also of that type, i.e. of the correct
>> physical dimension.

> Yes. But see below.

I have wondered about equations and units for some time now.

Most often, I see physics equations written without units, but the
quantities used in them have units.

F=ma, m=3kg, a=10m/s**2, F=30N

The same equation works with m in slugs, a in ft/s**2, and F in lbs.

Reasonably often, though, engineering literature factors out
the units:

F(N) = m(kg) * a (m/s**2)

m=3, a=10, F=30N

that is, the variables don't have units, but the values have to
already have the right units. The appropriate unit is then applied
to the result. This is especially useful when a universal (but
unit dependent) constant is included.

In this case, one could do static unit testing.

It would be interesting to have input routines that would convert
units based on the input data unit specified.

(Then again, some may have used the FORMAT scale factor P to adjust
metric units.)

-- glen

Milan Curcic

unread,
Jan 27, 2016, 11:41:35 AM1/27/16
to
Just a bump to this topic and a thanks to Ian Harvey for his advice on implementing datetime and timedelta operators as generic instead of module procedures.

https://github.com/milancurcic/datetime-fortran

Since the original post in 2014, datetime-fortran has seen contribution from several people so it is slowly moving toward a small-community project. The code has been re-organized into a one-class-per-module structure and we are in process of documenting the code in a format that will be readable by the doc generation package FORD (https://github.com/cmacmackin/ford). Another recent update is the GNU Autotools configuration and install capability, added by Mark Carter.

If you think datetime-fortran could be useful to you, please star, fork and contribute!

Cheers,
milan
0 new messages