
random numbers


lyng...@gmail.com

Apr 28, 2014, 12:29:59 PM
GNU Fortran (g77 and gfortran) has a RAND() intrinsic function for FORTRAN 77, while Intel Fortran does not; I have read that random numbers are not part of the F77 specification. I am porting a F77 program to Intel Fortran on Linux, and read about the need to include 'mkl_vsl.f77'. But RAND is still not recognized; in fact, that name does not appear in the include file.

Has anyone solved this problem in the past?

Thanks,
Jay

Richard Maine

Apr 28, 2014, 12:37:16 PM
<lyng...@gmail.com> wrote:

> GNU Fortran (g77 and gfortran) has a RAND() intrinsic function for FORTRAN
> 77, while Intel Fortran does not; I have read that random numbers are
> not part of the F77 specification. I am porting a F77 program to Intel
> Fortran on Linux, and read about the need to include 'mkl_vsl.f77' But
> RAND is still not recognized; in fact, that name does not appear in the
> include file.

See the random_number intrinsic, added to the Fortran standard as of
f90. Any compiler from the last 2 decades or so will have it. That
certainly includes Intel Fortran.

--
Richard Maine
email: last name at domain . net
domain: summer-triangle

FortranFan

Apr 28, 2014, 4:24:01 PM
In Intel Fortran, if you use the Intel-specific IFPORT module included with the compiler, you can get the RAND (and RANDOM) functions.

For intrinsic procedures, look for RANDOM_NUMBER, a pseudorandom number generator included in the standard since Fortran 90, and the companion RANDOM_SEED subroutine.
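For reference, a minimal sketch of those two intrinsics (the fixed seed shown is just an illustrative choice):

```fortran
program rng_demo
  implicit none
  real :: x, a(5)
  integer :: n
  integer, allocatable :: seed(:)

  ! The seed size is processor dependent, so query it first.
  call random_seed(size=n)
  allocate(seed(n))
  seed = 12345                 ! arbitrary reproducible seed
  call random_seed(put=seed)

  call random_number(x)        ! scalar, uniform on [0,1)
  call random_number(a)        ! whole array filled in one call
  print *, x
  print *, a
end program rng_demo
```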

robin....@gmail.com

Apr 28, 2014, 9:57:49 PM
On Tuesday, April 29, 2014 2:29:59 AM UTC+10, lyng...@gmail.com wrote:
> GNU Fortran (g77 and gfortran) has a RAND() intrinsic function for
> FORTRAN 77, while Intel Fortran does not; I have read that random
> numbers are not part of the F77 specification. I am porting a F77
> program to Intel Fortran on Linux, and read about the need to include
> 'mkl_vsl.f77'. But RAND is still not recognized; in fact, that name
> does not appear in the include file. Has anyone solved this problem in
> the past?

Look under RANDOM_NUMBER and RANDOM_SEED, which are subroutines.

William Clodius

Apr 28, 2014, 11:21:32 PM
Note that a commonly (and rightly) discouraged style is the use of
(pseudo) random number generator functions in Fortran. The Fortran
standard allows processors (i.e., compilers) to ignore the side effects
in function invocations, and random number generators inherently rely on
side effects. If you use a random number function, the compiler can
optimize away some of the calls leading to non-random behavior. Fortran
95+ defines an intrinsic subroutine RANDOM_NUMBER that, with minor
changes in syntax, can be used as a substitute. However, the standard
makes no guarantees about its quality. Then again, I suspect your
original RAND is of minimal quality too.
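The "minor changes in syntax" amount to replacing each function reference with a subroutine call, along these lines (variable names are hypothetical):

```fortran
program replace_rand
  implicit none
  real :: y
  ! Old style relied on a function with side effects:  y = rand(0)
  ! The standard replacement is a subroutine call, which a compiler
  ! is not free to optimize away:
  call random_number(y)        ! y becomes uniform on [0,1)
  print *, y
end program replace_rand
```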

Jos Bergervoet

Apr 29, 2014, 2:48:51 AM
On 4/29/2014 5:21 AM, William Clodius wrote:
> <lyng...@gmail.com> wrote:
>
>> GNU Fortran (g77 and gfortran) has a RAND() intrinsic function for FORTRAN
>> 77, while Intel Fortran does not; I have read that random numbers are not
>> part of the F77 specification. I am porting a F77 program to Intel
>> Fortran on Linux, and read about the need to include 'mkl_vsl.f77' But
>> RAND is still not recognized; in fact, that name does not appear in the
>> include file.
>>
>> Has anyone solved this problem in the past?
>>
>> Thanks,
>> Jay
> Note that a commonly (and rightly) discouraged style is the use of
> (pseudo) random number generator functions in Fortran. The Fortran
> standard allows processors (i.e., compilers) to ignore the side effects
> in function invocations, and random number generators inherently rely on
> side effects.

But all functions except pure ones are allowed
to have side effects, I thought. So if a variable
(with the SAVE attribute) is changed, how can the
compiler be allowed to do anything other than properly
use the saved value on the next invocation?

This seems to be of interest not just for random
generators but for functions in general (pure ones
again excused) so perhaps you could point out
what exactly is the pitfall here!

--
Jos

Phillip Helbig---undress to reply

Apr 29, 2014, 3:36:53 AM
In article <535f4b53$0$2913$e4fe...@news2.news.xs4all.nl>, Jos
Bergervoet <jos.ber...@xs4all.nl> writes:

> > Note that a commonly (and rightly) discouraged style is the use of
> > (pseudo) random number generator functions in Fortran. The Fortran
> > standard allows processors (i.e., compilers) to ignore the side effects
> > in function invocations, and random number generators inherently rely on
> > side effects.

Right. It is even allowed to not evaluate a function. For example, if
it is called twice with the same arguments, the second call could be
optimized away. (Keep in mind that not even all expressions have to be
evaluated.)

> But all functions except pure ones are allowed
> to have side effects, I thought. So if a variable
> (with the save attribute) is changed, how is the
> compiler allowed to do anything else than properly
> using the saved value on the next invocation?

Allowed, yes (except pure); required to evaluate, no. There are many
things which are allowed but on which one cannot depend. For example,
before the standard said that initialization implies SAVE, many
compilers treated initialized local variables as if they had the SAVE
attribute, but one couldn't rely on it. Ditto for compilers that
initialize otherwise uninitialized variables to 0, etc.
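In other words, to be portable one writes the attribute explicitly rather than relying on the compiler's static allocation, e.g. (a hypothetical counter):

```fortran
program save_demo
  implicit none
  integer :: i
  do i = 1, 3
     call bump
  end do
contains
  subroutine bump
    ! SAVE is written explicitly; without it (or an initializer, which
    ! implies SAVE in Fortran 90+), retention of the value between
    ! calls is not something one may portably rely on.
    integer, save :: count = 0
    count = count + 1
    print *, 'call number', count
  end subroutine bump
end program save_demo
```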

Unless you are REALLY concerned about CPU time, why not use the world's
best 24-bit-mantissa random-number generator?

http://www.astro.multivax.de:8000/helbig/fortran/ranlux.html

At the highest "luxury level", this has been mathematically proven to
have all 24 bits chaotic. Like real random numbers, knowing one doesn't
tell you what the next one is. (Many random-number generators generate
the next number based on the current one. So, wherever you start, all
possible numbers (usually less than the number of possible valid bit
representations) are generated then the sequence repeats. Of course, it
always repeats in the same order. Although RANLUX does repeat, its
period is much, much, much, much longer than the number of random
numbers it generates.) Also, you can get an array of random numbers
with one call. It has also been quite well tested. It is the gold
standard. Don't settle for anything less!

Richard Maine

Apr 29, 2014, 4:04:14 AM
Oh dear! That's a hugely controversial subject with innumerable
pitfalls. It has been hashed out many times in many places, including
this newsgroup, standards committee meetings, and others. I'm afraid you
aren't going to find agreement on exactly what cases are safe and what
ones are not.

The (very) short version follows.

1. There are some cases where function side effects are explicitly
prohibited. Those cases mostly have to do with interactions with other
things in the same statement as the function reference. F2003, 7.1.8,
2nd para:

"The evaluation of a function reference shall neither affect nor be
affected by the evaluation of any other entity within the statement.
If a function reference causes definition or undefinition of an
actual argument of the function, that argument or any associated
entities shall not appear elsewhere in the same statement."

Although I hate to label anything related to function side effects as
noncontroversial, I will claim that the controversies about the
restrictions mentioned above are less vehement than those about the
other cases.

2. The other cases. Oh my. Although functions are allowed to have side
effects as long as you avoid the restrictions mentioned above, you are
not necessarily allowed to depend on those side effects. When you can
and when you cannot depend on them is hugely controversial. Opinions
range from "you can never depend on them" to "you really can always
depend on them", with plenty of positions somewhere in the middle. I
can't even really summarize the positions in any detail because a) it
would take a lot longer than I care to invest in it, and b) no matter
how hard I tried to give an impartial summary, people would disagree
with parts of what I said.

Heck, this subject is so controversial that it wouldn't surprise me to
see a meta-controversy about how much controversy there is. Yes, I
recognize the circularity of that comment. :-(

3. C interop has added some extra twist to the issue, mostly because
literally everything in C is done in functions and depends heavily on
function side effects. So the position that you can't strictly depend on
any function side effects at all would imply that C interop was not
actually useful. Unfortunately, last time I checked, it was still quite
possible to interpret the standard that way. Indeed, that's how I tend
to read it. I'll acknowledge that the standard *SHOULDN'T* make C
interop useless. In my opinion, that ought to be fixed, but then for
nearly 25 years I've thought that the issue of function side effects
needed clarification in the standard. It doesn't seem to happen though.

In the meantime, my personal recommendation is to avoid writing Fortran
functions with side effects, and to make interaction with C functions
very simple to minimize the odds of problems. By "very simple" I mean to
strongly prefer forms like

fortran_variable = c_function(arguments)

avoiding more complicated expressions involving the function reference,
and most particularly avoiding multiple function references in the same
expression.
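That "very simple" form might be sketched as follows (the C function name `step` is hypothetical; assume a C routine `double step(double)` is linked in):

```fortran
program simple_interop
  use, intrinsic :: iso_c_binding, only: c_double
  implicit none
  interface
     ! Matches a hypothetical C function: double step(double x);
     function c_step(x) bind(c, name='step') result(y)
       import :: c_double
       real(c_double), value :: x
       real(c_double) :: y
     end function c_step
  end interface
  real(c_double) :: r
  ! One function reference, alone on the right-hand side; nothing else
  ! in the statement to interact with its side effects.
  r = c_step(1.0_c_double)
  print *, r
end program simple_interop
```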

Izaak B Beekman

Apr 29, 2014, 11:11:47 AM
If I know the program is on *nix and I need truly random data, but not
much of it, I read some bytes from /dev/random then map them onto a
uniform distribution, a Gaussian, the set of integers in [0,n], etc.
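That can be done with Fortran 2003 stream I/O, along these lines (a sketch; /dev/urandom is used here to avoid blocking when entropy is low, and a 32-bit default integer is assumed):

```fortran
program dev_urandom
  implicit none
  integer :: u
  integer :: raw(10)           ! assumes 32-bit default integers
  real :: x(10)

  open(newunit=u, file='/dev/urandom', access='stream', &
       form='unformatted', action='read')
  read(u) raw
  close(u)

  ! Map each signed 32-bit integer onto [0,1):
  ! raw/2**32 lies in [-0.5, 0.5), so shift by 0.5.
  x = real(raw) / 2.0**32 + 0.5
  print *, x
end program dev_urandom
```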
--
-Zaak

Anton Shterenlikht

Apr 29, 2014, 1:02:50 PM
truly random?

I think the whole field of "truly" random numerical
analysis is not at all explored. One can have a true
random number generator (e.g. due to quantum effects,
using IdQuantique PCI cards or similar), but
whenever you talk to a numerical analyst about using
truly random numbers, they are horrified. Since results
are fundamentally not reproducible, a lot of computer
science goes out of the window. Yet a lot of experimental
results, which many computer models try to reproduce,
are not reproducible either, i.e. only reproducible
in some statistical sense. So it seems to be completely
new ground waiting to be explored.

Anton

lyng...@gmail.com

Apr 29, 2014, 2:10:28 PM
A belated thank-you for the IFPORT suggestion. The USE statement looks very interesting among all the FORTRAN-66-style and FORTRAN-77-style fixed-format statements in this application.

Jos Bergervoet

Apr 29, 2014, 3:31:17 PM
On 4/29/2014 10:04 AM, Richard Maine wrote:
> Jos Bergervoet<jos.ber...@xs4all.nl> wrote:
>> On 4/29/2014 5:21 AM, William Clodius wrote:
...
...
> 2. The other cases. Oh my. Although functions are allowed to have side
> effects as long as you avoid the restrictions mentioned above, you are
> not necessarily allowed to depend on those side effects. When you can
> and when you cannot depend on them is hugely controversial. Opinions
> range from "you can never depend on them" to "you really can always
> depend on them", with plenty of positions somewhere in the middle. I
> can't even really summarize the positions in any detail because a) it
> would take a lot longer than I care to invest in it, and b) no matter
> how hard I tried to give an impartial summary, people would disagree
> with parts of what I said.

But for a well-defined case, would the standard
not give an answer? For instance: computing the new
position on a grid of points and *as a side effect*
keeping track of how often a point is visited. It seems
clumsy to require the use of:
-> pos = new_pos(.., .., any information)
-> call increase_count(pos)
so the programmer might think it is a good idea to
absorb the second action into the function! Can
the compiler spoil this?

...
> Heck, this subject is so controversial that it wouldn't surprise me to
> see a meta-controversy about how much controversy there is.

O, but then perhaps others would deny the possibility
that such a meta-controversy can exist..

> .. Yes, I
> recognize the circularity of that comment. :-(

Recursiveness was my first thought!

--
Jos

Jos Bergervoet

Apr 29, 2014, 3:46:35 PM
On 4/29/2014 9:36 AM, Phillip Helbig---undress to reply wrote:
...
>> compiler allowed to do anything else than properly
>> using the saved value on the next invocation?
>
> Allowed, yes (except pure); required to evaluate, no. There are many
> things which are allowed but on which one cannot depend, for example
> before initialization implied SAVE, many compilers treated it as if it
> had the SAVE attribute, but one couldn't rely on it. Ditto for
> initializing non-initialized variables with 0 etc.
>
> Unless you are REALLY concerned about CPU time, why not use the world's
> best 24-bit-mantissa random-number generator?
>
> http://www.astro.multivax.de:8000/helbig/fortran/ranlux.html

I hardly use any other ones, of course! And for
those side effects I was more worried about other
functions, as explained in my other post in this
thread.

...
> It has also been quite well tested. It is the
> gold standard. Don't settle for anything less!

Still, there are always risks if you include code too
long to read entirely. It may have been corrupted
(by untrusted parties?) after it was well tested!

So in some cases I would use the opposite approach
and just keep it very simple, e.g. repeated use of:
x = 1.456 - x**2 ! chaotic sequence
It has almost all of the disadvantages you mentioned,
but I know that it is chaotic and it is short enough
to check at a glance.
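Iterating that map from a seed inside (-1.456, 1.456) stays bounded and wanders chaotically, e.g.:

```fortran
program quad_map
  implicit none
  real :: x
  integer :: i
  x = 0.3                      ! arbitrary seed inside (-1.456, 1.456)
  do i = 1, 10
     x = 1.456 - x**2          ! the chaotic map from the post above
     print *, x
  end do
end program quad_map
```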

--
Jos

Richard Maine

Apr 29, 2014, 5:59:08 PM
Jos Bergervoet <jos.ber...@xs4all.nl> wrote:

> On 4/29/2014 10:04 AM, Richard Maine wrote:
> > Jos Bergervoet<jos.ber...@xs4all.nl> wrote:
> >> On 4/29/2014 5:21 AM, William Clodius wrote:
> ...
> ...
> > 2. The other cases. Oh my. Although functions are allowed to have side
> > effects as long as you avoid the restrictions mentioned above, you are
> > not necessarily allowed to depend on those side effects.
>
> But for a well-defined case, would the standard
> not give an answer?

Define "well-defined". I know of no such definition that applies. To the
contrary, the standard explicitly says, in f2003 7.1.8.1

"If a statement contains a function reference in a part of an
expression that need not be evaluated, all entities that would
have become defined in the execution of that function reference
become undefined..."

Alas, the condition in that sentence in the standard is the subject of
much controversy. It refers to the preceding para, which says

"It is not necessary for a processor to evaluate all of the operands
of an expression, or to evaluate entirely each operand, if the value
of the expression can be determined otherwise."

That condition is intended to facilitate optimizations. The
interpretations vary widely as to what could validly be considered to be
a way to "otherwise" determine the value. An extreme interpretation is
that there is always some way that counts as otherwise. The compiler
could get the value from divine inspiration if it has the appropriate
hardware support (a soul, perhaps "the soul of a new machine" :-)). Or,
still pretty ridiculous, but back to the realm of things obviously
possible, the compiler could save the machine state, do the same
operations as would be involved in executing the function to determine
the value, and restore the machine state except for remembering the
computed result value. I recall hearing people opine that while the
latter approach was admittedly silly, it was technically allowed by the
wording in the standard. I might add that a variant of it now seems a
little bit less silly than the original. Instead of saving and restoring
state, imagine copying the needed data to some parallel process that
computed the function result, but didn't necessarily copy back any
modified private memory. Probably still a little extreme, but not so
completely ludicrous.

It sounds to me as though you have not previously seen these discussions
of function side effects. If so, let me caution you against making
assumptions about how they have to work.

This subthread started in relation to pseudo-random number generators.
That is actually one of the cases where real programs have actually hit
bugs because of this issue. Pseudo-random number generators just happen
to get used in ways that are prone to trigger the bugs. For example, if
you try to approximate a normal distribution by averaging several
uniforms using code like

x = (rand(seed)+rand(seed)+rand(seed))/3.0

don't be shocked if the compiler optimizes that to

x = rand(seed)

(Ok, 3 is a bit few for even a rough approximation, but that's all I
wanted to write out here.)

> It seems clumsy...

Sorry, but I don't find that a very convincing argument. Oh sure, I
agree that it seems clumsy. But I don't buy that as grounds for
interpreting the words of the standard. Perhaps grounds for pushing for
changes in the words of the standard, though good luck with that. I
shouldn't phrase it quite that way because the "good luck" isn't
entirely facetious. I seriously do wish the standard could be clarified
in this area and so I actually do wish good luck. I just burned out on
pushing for such clarification myself a long time ago. As a warning, if
you try to tie down side effects too much, you'll likely run into very
heated opposition from people who want to give optimizers free rein.

And before long, someone else will probably speak up, because I don't
think I can go on for this long on this particular subject without
having someone say I got it all wrong. :-( Or at least some wrong.

Richard Maine

Apr 29, 2014, 6:03:13 PM
<lyng...@gmail.com> wrote:

> A belated thank-you for the IFPORT suggestion. The USE statement looks
> very interesting among all the FORTRAN-66-style and FORTRAN-77-style
> fixed-format statements in this application.

One of the biggest strengths of Fortran 90 was the way in which it
introduced more modern features without invalidating old code.
You can thus introduce the new features gradually in old code in those
places where they benefit. You don't have to throw it all out and start
anew.

Alberto Ramos

Apr 29, 2014, 6:46:37 PM

> Allowed, yes (except pure); required to evaluate, no. There are many
> things which are allowed but on which one cannot depend, for example
> before initialization implied SAVE, many compilers treated it as if it
> had the SAVE attribute, but one couldn't rely on it. Ditto for
> initializing non-initialized variables with 0 etc.
>
> Unless you are REALLY concerned about CPU time, why not use the world's
> best 24-bit-mantissa random-number generator?
>
> http://www.astro.multivax.de:8000/helbig/fortran/ranlux.html
>
> At the highest "luxury level", this has been mathematically proven to
> have all 24 bits chaotic. Like real random numbers, knowing one doesn't
> tell you what the next one is. (Many random-number generators generate
> the next number based on the current one. So, wherever you start, all
> possible numbers (usually less than the number of possible valid bit
> representations) are generated then the sequence repeats. Of course, it
> always repeats in the same order. Although RANLUX does repeat, its
> period is much, much, much, much longer than the number of random
> numbers it generates.) Also, you can get an array of random numbers
> with one call. It has also been quite well tested. It is the gold
> standard. Don't settle for anything less!

I have to point out that there exists another RNG with strong theoretical
properties, which produces 61 chaotic bits (enough for double precision) and
is 10 times faster than RANLUX: MIXMAX (http://arxiv.org/abs/1403.5355). Among
other things it allows skipping over large portions of the stream (5^512) as
a very effective way of seeding (no collisions, and independence guaranteed,
even if your CPU produced random numbers for the entire life of the
universe at a clock frequency of 10^44 Hz).

I can give a Fortran implementation if anyone is interested, and a C
implementation is available here:

https://mixmax.hepforge.org/

This RNG is 2 times faster than the native Random_number that comes with
gfortran. By the way, I would strongly recommend against using the
intrinsic Random_Number and Random_seed for anything serious.

For anything serious, use RANLUX or MIXMAX.

A.

Jos Bergervoet

Apr 29, 2014, 6:52:53 PM
On 4/29/2014 11:59 PM, Richard Maine wrote:
> Jos Bergervoet<jos.ber...@xs4all.nl> wrote:
...
> x = (rand(seed)+rand(seed)+rand(seed))/3.0
>
> don't be shocked if the compiler optimizes that to
>
> x = rand(seed)
>
> (Ok, 3 is a bit few for even a rough approximation, but that's all I
> wanted to write out here.)
>
>> It seems clumsy...
>
> Sorry, but I don't find that a very convincing argument. Oh sure, I
> agree that it seems clumsy. But I don't buy that as grounds for
> interpreting the words of the standard.

It was meant as a ground (for some programmers) to
try building certain side effects into a function.
Of course there are also programmers who would
never destroy the beauty of a pure function for
the mundane cause of updating a counter.

> Perhaps grounds for pushing for
> changes in the words of the standard,

It seems sufficient to be aware that function
side effects are not guaranteed to work, just
like routine calls with aliased intent(out)
arguments aren't guaranteed to work. If people
use those things they actually use something
that isn't supported.

> ... As a warning, if
> you try to tie down side effects to much, you'll likely run into very
> heated opposition from people who want to give optimizers free reign.

Fine with me, I'm a pure function fan anyway.

> And before long, someone else will probably speak up, because I don't
> think I can go on for this long on this particular subject without
> having someone say I got it all wrong.

It seems very simple now:
1) The only effect required from a function call
is that the function result somehow is made
available to the calling code.
2) The function code need not be executed if
the function result can be deduced otherwise.
3) The function code need not be executed even
if this means the omission of side effects.

Functions, therefore, will never have guaranteed
side effects. They can only have potential
side effects.

--
Jos

Jos Bergervoet

Apr 29, 2014, 7:00:02 PM
On 4/30/2014 12:46 AM, Alberto Ramos wrote:
...
> I have to point that there exist another RNG with strong theoretical
> properties, that produces 61 bit caotic (enough for double precision), and
> 10 times faster than ranlux: MIXMAX (http://arxiv.org/abs/1403.5355). Among
> other things it allows to skip over large portions of the stream (5^512) as
> a very effective way of seeding (no collission and independence guaranteed,
> even if your CPU only produces random numbers during the life of the
> universe at a clock frecuency of 10^44 Hz).

Would that be the entire life of the universe? This
seems to come down to a claim about the cosmological
constant.

--
Jos

Phillip Helbig---undress to reply

Apr 29, 2014, 7:30:45 PM
In article <1lkvbxf.1f38cgt7enl3mN%nos...@see.signature>,
nos...@see.signature (Richard Maine) writes:

> Define "well-defined".

At university, my maths professor pointed out that "well defined" is not
well defined. :-|

Phillip Helbig---undress to reply

Apr 29, 2014, 7:33:44 PM
In article <ljpa4e$smp$1...@dont-email.me>, Alberto Ramos
<albert...@desy.de> writes:

> > Unless you are REALLY concerned about CPU time, why not use the world's
> > best 24-bit-mantissa random-number generator?
> >
> > http://www.astro.multivax.de:8000/helbig/fortran/ranlux.html

> I have to point that there exist another RNG with strong theoretical
> properties, that produces 61 bit caotic (enough for double precision), and
> 10 times faster than ranlux: MIXMAX (http://arxiv.org/abs/1403.5355).

OK, thanks. Quite new!

Of course, since highest-level RANLUX is chaotic, one can do what one
normally shouldn't do, namely splice pieces of single-precision
variables together to get double-precision variables.
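A sketch of that splice, assuming the generator delivers 24-bit-mantissa uniforms that are exact multiples of 2**(-24) (RANDOM_NUMBER stands in here for the RANLUX calls):

```fortran
program splice_double
  implicit none
  real :: u1, u2               ! stand-ins for two 24-bit RANLUX outputs
  double precision :: d
  call random_number(u1)
  call random_number(u2)
  ! u1 supplies the high 24 bits, u2 the next 24; if u1 <= 1 - 2**(-24)
  ! and u2 < 1, then d stays in [0,1).
  d = dble(u1) + dble(u2) * 2.0d0**(-24)
  print *, d
end program splice_double
```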

> Among
> other things it allows to skip over large portions of the stream (5^512) as
> a very effective way of seeding (no collission and independence guaranteed,
> even if your CPU only produces random numbers during the life of the
> universe at a clock frecuency of 10^44 Hz).

Interesting.

> I can give a fortran implementation if anyone is interested, and there is
> a C one here available here:
>
> https://mixmax.hepforge.org/

It would be good to have a good Fortran implementation.

glen herrmannsfeldt

Apr 29, 2014, 9:05:58 PM
Jos Bergervoet <jos.ber...@xs4all.nl> wrote:

(snip)

> Would that be the entire life of the universe? This
> seems to come down to a claim about the cosmological
> constant.

There are now atomic clocks that lose less than a second over the
(current) life of the universe. When told about this, someone
wondered how the wiring would last that long. (Or just about
any other part of it.)

-- glen

robin....@gmail.com

Apr 30, 2014, 8:14:49 AM
On Wednesday, April 30, 2014 8:46:37 AM UTC+10, Alberto Ramos wrote:
> > [Phillip Helbig's description of RANLUX snipped]
>
> This RNG is 2 times faster than the native Random_number that comes with
> gfortran. By the way, I would strongly recommend against using the
> intrinsic Random_Number and Random_seed for anything serious.

For general work, Fortran's RANDOM_NUMBER is probably adequate.

For serious work, let's not forget George Marsaglia's random number generators.

Alberto Ramos

Apr 30, 2014, 1:34:28 PM
Phillip Helbig---undress to reply wrote:

>
> OK, thanks. Quite new!

MIXMAX has been around since the 90's, but only recently Konstantin Savvidy
found a very nice way of implementing the matrix-vector multiplications in
O(N) time, making it faster than almost any other MT.

I learned of this very nice story in a talk by Fred James at a Monte
Carlo meeting. The slides are available here, and can simply be read as a
story (you would like it!):

https://indico.desy.de/getFile.py/access?contribId=1&resId=0&materialId=slides&confId=6196

>
> Of course, since highest-level RANLUX is chaotic, one can do what one
> normally shouldn't do, namely splice pieces of single-precision
> variables together to get double-precision variables.

Sure! This is what we all do! RANLUX is great, and it is a standard of
quality. It has never failed (and will never fail) a test.

>
> It would be good to have a good Fortran implementation.

Let us hope that this one is good enough ;) I can't attach files to the
newsgroup, but you can get a copy with:

git clone https://github.com/ramos/mxmx.git

or visit:

https://github.com/ramos/mxmx

The important routines are a translation into Fortran of the C
implementation, taking care that the Fortran signed integers do not
overflow, etc. I have checked that it produces the same results with
gfortran 4.6/4.8/4.9 and intel fortran 2013 in PCs and with the IBM compiler
on a BG (big endian machine).

The main program test_mxmx.f90 has many comments with examples of how to use
MIXMAX, but it is fairly simple: init, seed and get random numbers.

I am interested in opinions and experiences, so please let me know what you
think.

Happy hacking!

A.

Phillip Helbig---undress to reply

Apr 30, 2014, 4:28:53 PM
In article <72a37643-550c-4c11...@googlegroups.com>,
robin....@gmail.com writes:

> For general work, Fortran's RANDOM_NUMBER is probably adequate.
>
> For serious work, let's not forget George Marsaglia's random number
> generators.

RANLUX is based on an earlier generator by Marsaglia.

"Random number generators are like sex: even the bad ones are still
pretty good."

---G. Marsaglia

Phillip Helbig---undress to reply

Apr 30, 2014, 4:30:04 PM
In article <ljrc74$nlu$1...@dont-email.me>, Alberto Ramos
<albert...@desy.de> writes:

> MIXMAX has been around since the 90's, but only recently Konstantin Savvidy
> found a very nice way of implementing the matrix-vector multiplications in
> O(N) time, making it faster than almost any other MT.
>
> I learned of this very nice story in a talk from Fred James, at a monte
> carlo meeting. The slides are available here, and can be simply read as a
> story (you would like it!):

Interesting. I got RANLUX from Fred James.

Alberto Ramos

Apr 30, 2014, 5:09:16 PM
>
> "Random number generators are like sex: even the bad ones are still
> pretty good."
>
> ---G. Marsaglia

Great quote!

"Random numbers are more like wine: It is better to serve one
that is too good, than one that is not good enough."

-- Fred James




robin....@gmail.com

May 1, 2014, 8:45:31 AM
On Thursday, May 1, 2014 6:28:53 AM UTC+10, Phillip Helbig---undress to reply wrote:
> In article <72a37643-550c-4c...@googlegroups.com>, r.no...@gmail.com writes:
> > For general work, Fortran's RANDOM_NUMBER is probably adequate.
> >
> > For serious work, let's not forget George Marsaglia's random number
> > generators.
>
> RANLUX is based on an earlier generator by Marsaglia.

I'm thinking of those he wrote in the 2000's, some of which are
particularly fast and have extremely long periods (and, of course, pass all tests).

Steve Lionel

May 1, 2014, 9:30:27 AM
On 4/28/2014 12:29 PM, lyng...@gmail.com wrote:
> GNU Fortran (g77 and gfortran) has a RAND() intrinsic function for
> FORTRAN 77, while Intel Fortran does not; I have read that random
> numbers are not part of the F77 specification. I am porting a F77
> program to Intel Fortran on Linux, and read about the need to include
> 'mkl_vsl.f77'. But RAND is still not recognized; in fact, that name
> does not appear in the include file.

I don't know why you think Intel Fortran doesn't support RAND without
any other source changes - it does, and it is documented. While you can
add USE IFPORT, you don't have to. RAND is non-standard, so there may be
varying interfaces to it across implementations, so use of it is risky.
As suggested by others, you should use the Fortran standard
RANDOM_NUMBER intrinsic (and RANDOM_SEED if you wish to set the seed.)

You certainly don't need any MKL include file for this.
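For reference, a minimal standard-conforming sketch of RANDOM_NUMBER and RANDOM_SEED (the seed array length is processor-dependent and must be queried; the literal seed value here is purely illustrative):

```fortran
program rng_demo
  implicit none
  integer :: n
  integer, allocatable :: seed(:)
  real :: r, v(5)

  ! The size of the seed array is processor-dependent: query it first.
  call random_seed(size=n)
  allocate(seed(n))
  seed = 12345
  call random_seed(put=seed)

  call random_number(r)   ! one pseudorandom value in [0,1)
  call random_number(v)   ! fills the whole array in one call
  print *, r
  print *, v
end program rng_demo
```

Seeding with PUT gives a reproducible sequence; omit the RANDOM_SEED calls entirely to accept the processor's default state.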

--
Steve Lionel
Developer Products Division
Intel Corporation
Merrimack, NH

For email address, replace "invalid" with "com"

User communities for Intel Software Development Products
http://software.intel.com/en-us/forums/
Intel Software Development Products Support
http://software.intel.com/sites/support/
My Fortran blog
http://www.intel.com/software/drfortran

Refer to http://software.intel.com/en-us/articles/optimization-notice
for more information regarding performance and optimization choices in
Intel software products.

lyng...@gmail.com

unread,
May 1, 2014, 12:57:27 PM5/1/14
to
On Thursday, May 1, 2014 6:30:27 AM UTC-7, Steve Lionel wrote:

> I don't know why you think Intel Fortran doesn't support RAND without
> any other source changes - it does, and it is documented. While you can
> add USE IFPORT, you don't have to.

Below is a small test case demonstrating the issue.
--Jay

[braun@birch test]$ cat short.for
SUBROUTINE TEST
IMPLICIT NONE
REAL R
R = RAND(0)
END
[braun@birch test]$ ifort -c short.for
short.for(4): error #6404: This name does not have a type, and must have an explicit type. [RAND]
R = RAND(0)
----------^
compilation aborted for short.for (code 1)
[braun@birch test]$

After I add USE IFPORT:

[braun@birch test]$ cat short.for
SUBROUTINE TEST
USE IFPORT
IMPLICIT NONE
REAL R
R = RAND(0)
END
[braun@birch test]$ ifort -c short.for
[braun@birch test]$

lyng...@gmail.com

unread,
May 1, 2014, 1:04:49 PM5/1/14
to
I should have mentioned that I am using version 14.0.2 on Linux as a 30-day free trial. The legacy application is essentially FORTRAN 77 with some FORTRAN 66 usages dating from the 1970s.

j

Richard Maine

unread,
May 1, 2014, 1:12:04 PM5/1/14
to
<lyng...@gmail.com> wrote:

> On Thursday, May 1, 2014 6:30:27 AM UTC-7, Steve Lionel wrote:
>
> > I don't know why you think Intel Fortran doesn't support RAND without
> > any other source changes - it does, and it is documented. While you can
> > add USE IFPORT, you don't have to.
>
> Below is a small test case demonstrating the issue.
> --Jay
>
> [braun@birch test]$ cat short.for
> SUBROUTINE TEST
> IMPLICIT NONE
> REAL R
> R = RAND(0)
> END
> [braun@birch test]$ ifort -c short.for
> short.for(4): error #6404: This name does not have a type, and must
> have an explicit type. [RAND]

So did you try what the error message implies and add a type declaration
for RAND? One of the difficulties in using non-standard things like RAND
is that, even among compilers that support it, fine details vary.

One of those details relevant here is whether RAND is considered to be a
compiler-supplied external procedure or an intrinsic. If it is an
external procedure, then the implicit none applies and you would need a
type declaration for rand. (I don't know, but assume from the data
presented that IFPORT has such a type declaration for it.) If, on the
other hand, RAND is considered to be an intrinsic procedure, then no
type declaration is needed.
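For what it's worth, the OP's test case also compiles if RAND is simply given a type, without USE IFPORT (a sketch; whether ifort's library RAND accepts an integer argument this way is a matter for Intel's documentation):

```fortran
      SUBROUTINE TEST
      IMPLICIT NONE
      REAL R
C     With an explicit type declared, RAND is treated as an external
C     function of type REAL and resolved from the library at link time.
      REAL RAND
      R = RAND(0)
      END
```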

glen herrmannsfeldt

unread,
May 1, 2014, 1:14:28 PM5/1/14
to
lyng...@gmail.com wrote:

(snip)

> Below is a small test case demonstrating the issue.

> [braun@birch test]$ cat short.for
> SUBROUTINE TEST
> IMPLICIT NONE
> REAL R
> R = RAND(0)
> END

> short.for(4): error #6404: This name does not have a type,
> and must have an explicit type. [RAND]

You put in IMPLICIT NONE, and then didn't declare the type
for RAND?

Since it isn't an intrinsic function, how is the compiler supposed
to know its type?

It sounds like Intel nicely supplies a library with it in,
but that doesn't declare its type in the source.

-- glen

lyng...@gmail.com

unread,
May 1, 2014, 1:20:05 PM5/1/14
to
Thank you. This is a better approach than using the USE statement.

Richard Maine

unread,
May 1, 2014, 3:49:27 PM5/1/14
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> You put in IMPLICIT NONE, and then didn't declare the type
> for RAND?
>
> Since it isn't an intrinsic function, how is the compiler supposed
> to know its type?

I believe there have been compilers that had it as an intrinsic
function. It is not a standard intrinsic function, but compilers are
allowed to have compiler-specific intrinsic functions as well. See my
other post.

Note that I'm agreeing with you as to the cause. I just don't think one
can start out with "it isn't an intrinsic function" as a given. One can
deduce that from the observed behavior, or possibly look it up in the
compiler docs, but it isn't a starting point.

Thomas Koenig

unread,
May 1, 2014, 4:12:04 PM5/1/14
to
Richard Maine <nos...@see.signature> schrieb:

> I believe there have been compilers that had it as an intrinsic
> function. It is not a standard intrinsic function, but compilers are
> allowed to have compiler-specific intrinsic functions as well. See my
> other post.

And this variability is a constant source of trouble with old
programs. Some compilers required

EXTERNAL RAND

while for others this would result in a link failure (by declaring
a function RAND which overrode the intrinsic).

We had some bug report with this for gfortran (for RAND or for
another intrinsic, I forget), which we finally resolved as
invalid, IIRC.

ken.fa...@gmail.com

unread,
May 1, 2014, 4:59:02 PM5/1/14
to
On Thursday, May 1, 2014 9:57:27 AM UTC-7, lyng...@gmail.com wrote:
[...]
> Below is a small test case demonstrating the issue.
> --Jay
>
> [braun@birch test]$ cat short.for
> SUBROUTINE TEST
> IMPLICIT NONE
> REAL R
> R = RAND(0)
^^^ This is ugly!
> END
>

In my experience, RAND modifies its input argument. Acknowledging
that there are various implementations, the OP *really* needs to
read the documentation on the function. Again, my experience going
back to 1981 or so was that the argument to RAND is read/write
(though they didn't call it that back then, in F77), and is the seed to
the PRNG, and needs to be a variable in the calling program. I'd
expect either an access violation (VMS, Windows) or similar on any
system that puts literals (the "0" in this case) in a read-only
program section. As an aside, zero is a particularly bad choice for
a seed. :-(

In the bad old days of IBM mainframes and Fortran H, I saw "F66"
Monte Carlo code that overwrote literal dummy arguments, so that
on return to the caller, "5" had some other numerical value. Yikes!
(And it failed badly when compiled and run on VAX/VMS...with the
afore said access violoation...).
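If the local RAND does treat its argument as a read/write seed (interfaces vary by vendor, so check the documentation), passing a variable instead of a literal sidesteps the problem entirely (a sketch):

```fortran
      SUBROUTINE TEST
      IMPLICIT NONE
      REAL R, RAND
      INTEGER SEED
C     Use a variable, never a literal, in case RAND updates its
C     argument; a literal may live in read-only storage.
      SEED = 20140501
      R = RAND(SEED)
      END
```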

-Ken

Richard Maine

unread,
May 1, 2014, 5:22:42 PM5/1/14
to
<ken.fa...@gmail.com> wrote:

> In the bad old days of IBM mainframes and Fortran H, I saw "F66"
> Monte Carlo code that overwrote literal dummy arguments, so that
> on return to the caller, "5" had some other numerical value. Yikes!
> (And it failed badly when compiled and run on VAX/VMS...with the
> afore said access violoation...).

As part of training some of our new programmers, I used to (mis)lead them
into making that mistake. I'd have them write a subroutine to do
something mundane with a positive argument - perhaps compute its square
root (without using sqrt) or some such thing. After they got that
working, I'd change the problem description (preparation for real life
there :-)) by telling them that it had to handle negative values by
returning the square root of the absolute value. Amazing how reliably
new programmers would do that modification by writing some equivalent of

x = abs(x)

where x was the dummy argument.

Then I'd provide a test main program that called their subroutine with
literal constants and maybe did some absolutely trivial arithmetic as
well. The resulting strange results made for debugging sessions that
left a lot more lasting impression than just lecturing about the caveats
of modifying dummy arguments. This was, of course, long before
intent(in) declarations. As Ken mentioned, it was more like f66 days.

That exercise became not nearly as interesting on later machines where
trying to modify a constant gave a run-time error instead of strange
results.
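The exercise can be reproduced in a few lines (a sketch; the behavior is nonconforming, so the symptom, a crash or a silently altered constant, varies by compiler):

```fortran
      SUBROUTINE ROOT(X)
      REAL X
C     Modifying the dummy argument is the bug: it is legal only when
C     the actual argument is a variable, never when it is a constant.
      X = ABS(X)
      X = SQRT(X)
      END

      PROGRAM MAIN
C     Passing the literal -4.0 to a routine that writes to its dummy
C     argument is nonconforming; on many systems it faults at run time.
      CALL ROOT(-4.0)
      END
```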

glen herrmannsfeldt

unread,
May 1, 2014, 5:45:47 PM5/1/14
to
Richard Maine <nos...@see.signature> wrote:

(snip, I wrote)

>> Since it isn't an intrinsic function, how is the compiler supposed
>> to know its type?

> I believe there have been compilers that had it as an intrinsic
> function. It is not a standard intrinsic function, but compilers are
> allowed to have compiler-specific intrinsic functions as well. See my
> other post.

> Note that I'm agreeing with you as to the cause. I just don't think one
> can start out with "it isn't an intrinsic function" as a given. One can
> deduce that from the observed behavior, or possibly look it up in the
> compiler docs, but it isn't a starting point.

By now, I am not so sure of the definition of intrinsic function.

When I first learned about them, (whether or not the standard
used those words) I thought of them as functions simple enough
to be generated inline. Maybe not so far from the Fortran 66
definition (where SIN and SQRT were external functions).

Some compilers include large libraries of useful functions
(and maybe some subroutines) usually not as intrinsic.
Numerical libraries and graphics are some common ones.

But mostly since the compiler didn't recognize the name, it seems
it isn't intrinsic.

-- glen

Richard Maine

unread,
May 1, 2014, 7:35:52 PM5/1/14
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> By now, I am not so sure of the definition of intrinsic function.

F2003 12.1.2.1 Intrinsic procedures

"A procedure that is provided as an inherent part of the
processor is an intrinsic procedure."

Two relevant fundamental things about intrinsic procedures from
elsewhere in the standard are:

1. A standard-conforming processor may provide intrinsic procedures in
addition to the standard intrinsic procedures, but a standard-conforming
program may not reference such additional intrinsics. See f2003 1.5 for
further details on that.

2. The interface of an intrinsic procedure is always explicit in any
scoping unit where the procedure is accessible. That's why you don't
need a type declaration for an intrinsic procedure. Somewhat strangely,
you are actually allowed to declare a type for a generic intrinsic
function. Even more strangely, such a type declaration has (almost) no
effect; you can, for example, declare sqrt to be double precision, but
you'll still get the single precision one anyway if you have a single
precision argument.

Because of 2), if procedure comes bundled with the compiler, but doesn't
automatically have an explicit interface, then it can't be an intrinsic
procedure. That would just be an external procedure (or a module one)
that the compiler happens to ship with. For an intrinsic procedure, the
interface is explicit without needing any extra declaration, though you
can throw in an intrinsic declaration to make things clear to the reader
(and to throw an error message on compilers that don't have such an
intrinsic in the case of nonstandard intrinsics).
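As a concrete illustration of that last point (a sketch): an INTRINSIC statement turns a missing nonstandard intrinsic into a compile-time diagnostic rather than a silent reference to some external.

```fortran
      SUBROUTINE TEST
      IMPLICIT NONE
C     Compiles only where RAND really is an intrinsic; elsewhere the
C     compiler rejects the INTRINSIC statement, flagging the dependency.
      INTRINSIC RAND
      REAL R
      R = RAND(0)
      END
```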

> When I first learned about them, (whether or not the standard
> used those words) I thought of them as functions simple enough
> to be generated inline.

People think many things. Sometimes they are even right. Other times
not. :-(

Trying to do your own generalization from the cases you happen to see to
all cases that might happen in the future is inherently dangerous.
That's one of the reasons we have a standard - so that people can
actually know what the requirements are instead of trying to deduce them
from examples.

One of the things I rapidly jumped on as editor was the tendency of some
people to use examples as definitions. I think most of the people on the
committee knew better than that, but it is something you do see a lot
and that I consciously watched for in drafts of the standard.

Many of the early intrinsic functions were indeed simple. However, that
was never a defining property. Would have been pretty hard to define
that very precisely. Inline generation is not a concept in the scope of
the standard at all.

glen herrmannsfeldt

unread,
May 1, 2014, 8:32:57 PM5/1/14
to
Richard Maine <nos...@see.signature> wrote:

(snip, I wrote)
>> By now, I am not so sure of the definition of intrinsic function.

> F2003 12.1.2.1 Intrinsic procedures

> "A procedure that is provided as an inherent part of the
> processor is an intrinsic procedure."

> Two relevant fundamental things about intrinsic procedures from
> elsewhere in the standard are:

> 1. A standard-conforming processor may provide intrinsic procedures in
> addition to the standard intrinsic procedures, but a standard-conforming
> program may not reference such additional intrinsics. See f2003 1.5 for
> further details on that.

(snip)

>> When I first learned about them, (whether or not the standard
>> used those words) I thought of them as functions simple enough
>> to be generated inline.

> People think many things. Sometimes they are even right. Other times
> not. :-(

> Trying to do your own generalization from the cases you happen to see
> to all cases that might happen in the future is inherently dangerous.
> That's one of the reasons we have a standard - so that people can
> actually know what the requirements are instead of trying to deduce
> them from examples.

I suppose so, but sometimes it makes it easier to remember.
Since most of us don't carry around the standard and look up
everything as we write it, some kinds of generalizations help.


> One of the things I rapidly jumped on as editor was the tendency of some
> people to use examples as definitions. I think most of the people on the
> committee knew better than that, but it is something you do see a lot
> and that I consciously watched for in drafts of the standard.

> Many of the early intrinsic functions were indeed simple. However, that
> was never a defining property. Would have been pretty hard to define
> that very precisely. Inline generation is not a concept in the scope of
> the standard at all.

Maybe not so far off for Fortran 66, though. For one, using standard
Fortran 66 you can't write variable argument functions like *max*
and *min*. Even now, I don't think anyone would try writing a
function with a huge number of optional arguments to fake an
unlimited number. (Well, the continuation limit will get you.)

Also, Fortran 66 MOD and AMOD are intrinsic, but DMOD is external
(just a little more complicated). Likewise ABS, IABS, and DABS are
intrinsic while CABS is external; CABS is the more complicated one.

-- glen

Richard Maine

unread,
May 1, 2014, 9:06:45 PM5/1/14
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
[about the definition of intrinsic procedures]

> Maybe not so far off for Fortran 66, though.

Lots of things were different in f66, including some pretty basic
definitions. "Variable" didn't even mean the same thing then as it does
now; that one changed in f90. But in particular, the categorization of
procedures was different enough in f66 that I wouldn't be able to cite
it without going back and looking, which I don't care to bother with. I
do recall that it was pretty different between f66 and even f77.

But the question at hand had to do with an f95 compiler (with some f2003
features).

Richard Maine

unread,
May 1, 2014, 9:46:57 PM5/1/14
to
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> Richard Maine <nos...@see.signature> wrote:

> > Trying to do your own generalization from the cases you happen to see
> > to all cases that might happen in the future is inherently dangerous.
> > That's one of the reasons we have a standard - so that people can
> > actually know what the requirements are instead of trying to deduce
> > them from examples.
>
> I suppose so, but sometimes it makes it easier to remember.
> Since most of us don't carry around the standard and look up
> everything as we write it, some kinds of generalizations help.

At times. But it helps a lot more when they are the *RIGHT*
generalizations.

I cannot count the number of times I have heard variants of "but this
worked all the times that I tried it before" when helping people debug
their code. Even when the number of times they tried it before was
exactly one. Or on occasion, when the number of times they actually did
the same thing before was zero, but they did not realize that because
they had generalized incorrectly. (I recall a particular case of the
latter from someone who swore that his code ran on a VAX until I took it
over to a VAX and showed that it didn't; he thought that the nonstandard
VAX parameter statement was just another syntax for the same thing as
the f77 standard parameter statement).

Generalizing that "intrinsic" means "simple and inline" is a
generalization that I'd say was both incorrect and unhelpful.

There are intrinsics that are complicated and seldom (never?) done
inline. Non-intrinsic procedures can be trivially simple (to the point
of doing absolutely nothing - I regularly write stub procedures that do
literally nothing) and can be inlined by some compilers.

And back to the particular case that started this, RAND is an intrinsic
in some compilers and nonintrinsic in others. So I'd say that whatever
generalization led you to say "Since it isn't an intrinsic function..."
must not have been a very helpful one. If the OP's code compiled and ran
as is on some older compiler, that suggests that RAND probably was an
intrinsic on that compiler.

Dick Hendrickson

unread,
May 8, 2014, 9:48:24 AM5/8/14
to
On 4/29/14, 5:52 PM, Jos Bergervoet wrote:
> On 4/29/2014 11:59 PM, Richard Maine wrote:
>> Jos Bergervoet<jos.ber...@xs4all.nl> wrote:
> ...
>> x = (rand(seed)+rand(seed)+rand(seed))/3.0
>>
>> don't be shocked if the compiler optimizes that to
>>
>> x = rand(seed)
>>
>> (Ok, 3 is a bit few for even a rough approximation, but that's all I
>> wanted to write out here.)
>>
>>> It seems clumsy...
>>
>> Sorry, but I don't find that a very convincing argument. Oh sure, I
>> agree that it seems clumsy. But I don't buy that as grounds for
>> interpreting the words of the standard.
>
> It was meant as a ground (for some programmers) to
> try building certain side effects into a function.
> Of course there are also programmers who would
> never destroy the beauty of a pure function for
> the mundane cause of updating a counter.
>
>> Perhaps grounds for pushing for
>> changes in the words of the standard,
>
> It seems sufficient to be aware that function
> side effects are not guaranteed to work, just
> like routine calls with aliased intent(out)
> arguments aren't guaranteed to work. If people
> use those things they actually use something
> that isn't supported.
>
>> ... As a warning, if
> >> you try to tie down side effects too much, you'll likely run into very
> >> heated opposition from people who want to give optimizers free rein.
>
> Fine with me, I'm a pure function fan anyway.
>
> >> And before long, someone else will probably speak up, because I don't
>> think I can go on for this long on this particular subject without
>> having someone say I got it all wrong.
>
> It seems very simple now:
> 1) The only effect required from a function call
> is that the function result somehow is made
> available to the calling code.
> 2) The function code need not be executed if
> the function result can be deduced otherwise.
> 3) The function code need not be executed even
> if this means the omission of side effects.
>
> Functions, therefore, will never have guaranteed
> side effects. They can only have potential
> side effects.
>
It's somewhat worse than your last sentence says. If the function code
isn't executed, then anything that would have been side-effected becomes
undefined, which is itself a perverse side effect. So a function that has
side effects always has side effects; you just can't predict what they
are. :(
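A small sketch of the hazard (module-based, post-F90; whether NCALLS ends up 1 or 2 below is the processor's choice, which is exactly the point):

```fortran
module counter
  implicit none
  integer :: ncalls = 0
contains
  real function counted(x)
    real, intent(in) :: x
    ! Side effect: execution once per reference is NOT guaranteed.
    ncalls = ncalls + 1
    counted = x
  end function counted
end module counter

program demo
  use counter
  implicit none
  real :: y
  ! The processor may evaluate counted(2.0) once and reuse the value.
  y = counted(2.0) + counted(2.0)
  print *, y, ncalls   ! y is well-defined; ncalls is not reliable
end program demo
```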


Dick Hendrickson


abrsvc

unread,
May 8, 2014, 11:51:03 AM5/8/14
to
To add to the discussion of side effects:

The standard does not dictate the order of evaluation for statements containing functions. Specificially in reference to IF statements.

For example:
IF (I.le.0 .or. funct_b(c).eq.d(I)) THEN  ! ... do_stuff ...

One would like funct_b to be evaluated only when I is positive. In the above case, some compilers will evaluate both the function and d(I) BEFORE the I.le.0 test. A reference to d(0) can then occur, which can trigger a bounds error (if checking is enabled) or touch a piece of memory not normally allowed. Also, if I is in a COMMON block and visible to funct_b, modifying it there may affect other things.

Dan

Dick Hendrickson

unread,
May 8, 2014, 1:26:06 PM5/8/14
to
Not quite. The side effects rules don't allow funct_b to modify I in
any way because it's used in the same expression. There is an exception
that allows funct_b to modify things used in "then do_stuff".

Dick Hendrickson

Richard Maine

unread,
May 8, 2014, 4:42:08 PM5/8/14
to
abrsvc <dansabr...@yahoo.com> wrote:

> To add to the discussion of side effects:
>
> The standard does not dictate the order of evaluation for statements
> containing functions. Specificially in reference to IF statements.
>
> For example:
> IF (I.le.0) .or. (funct_b(c).eq.d(I)) then do_stuff
>
> One would like to only evaluate funct_b when I is non-zero. In the above
> case, some compilers will evaluate both the function and d(i) BEFORE I is
> evaluated. In this case, a reference to d(0) can occur. This can trigger
> a bounds error if enabled or access a piece of memory not normally
> allowed.

That's not really much related to function side effects. The evaluation
of d(I) is an issue even when no functions are in sight. You have to
separate the part to be short-circuited into a separate statement. Let's
see... off the top of my head... ah, yours is one of the nastier cases,
hard to do easily without introducing a temporary variable. Perhaps
something like

doit = I.le.0
if (.not.doit) doit = funct_b(c).eq.d(I)
if (doit) then do_stuff

Yes, I know that's a bit more long-winded but something along those
general lines is what you'd need.
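Spelled out completely (a sketch; the names FUNCT_B, C, D, and I are carried over from Dan's example, with D given an assumed length for illustration):

```fortran
      SUBROUTINE SHORTCIRCUIT(I, C, D, N)
      IMPLICIT NONE
      INTEGER I, N
      REAL C, D(N)
      REAL FUNCT_B
      EXTERNAL FUNCT_B
      LOGICAL DOIT
      DOIT = I .LE. 0
C     The second test is a separate statement, so D(I) and FUNCT_B
C     are referenced only after I has been checked.
      IF (.NOT. DOIT) DOIT = FUNCT_B(C) .EQ. D(I)
      IF (DOIT) THEN
C        ... do_stuff ...
      END IF
      END
```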

> Also, if I is in a common and available to func_b, it may also
> effect other things.

That's one of the cases that is most clear in the standard in regards to
function side effects. Func_b is not allowed to modify l. If it does so,
that's a programming error.