
A MATRIX type for Fortran


Quadibloc

Oct 26, 2021, 2:15:12 AM
I will have to admit that I'm not fully conversant
with the latest developments in Fortran.

But it is my understanding that starting around the
time of Fortran 90, constructs have been added to
the language so that programs can be written to
efficiently make use of the vector arithmetic features
of computers like the Cray I, and its successors and
imitators.

These days, except for the SX-Aurora TSUBASA from
NEC, most vector arithmetic on computers belongs to
a line of development that started from the AN/FSQ-7
for SAGE rather than the one that went through the
Cray I.

Be that as it may...

So, IIRC, one can write

A = B * C

in Fortran today and if A, B, and C are arrays having the
same dimensions (or if one of B or C is a scalar) it will
do an element-by-element multiplication, much as would
have been done by APL, without the need for a DO loop.

I remember that back in the 1970s I mentioned to someone
that I thought this sort of thing would be a nice feature to
have, even without vector hardware. The fellow replied that
I was being too slavish in following APL; matrix multiplication
would be better.

Well, I agreed that matrix multiplication was useful in
scientific and engineering computation.

But I felt that this would assign inappropriate semantics
to the construct of an array in Fortran.

Of course, one can add a built-in subroutine for matrix
multiplication.

But more recently, I thought that there might be a better
solution; and now on this newsgroup, I've thought to
present it more widely.

Fortran has a COMPLEX data type.

Why can't it have a MATRIX data type?

So there would be a MATRIX statement, having much the
same syntax as a DIMENSION statement. It could be used
to create one-dimensional vectors and matrices that were
not square.

If A, B, and C are declared as matrices (or one of B or C is
a scalar) then

A = B * C

would perform matrix multiplication as expected... if one of
B or C is a one-dimensional matrix, it would be treated as
either a row or column vector, whichever one would make
sense. (Unless that isn't practical; declaring 1 by n and n by 1
matrices instead to make it explicit is certainly an option as
well.)

Since a matrix is an a x b grid of floating-point numbers
combined into an object with specific mathematical
properties, it would be possible to create an array
of matrices, but it would *not* be possible to create a matrix
the elements of which are arrays.

For much the same reason that you can't make a
complex number the real and imaginary parts of which
are arrays of the same shape.

But, yes, it should be allowed to create a matrix the elements
of which are complex numbers. (Or quaternions, for that
matter!)

Oh, and by the way: yes, it should be possible to take a (4,4)
matrix the elements of which are REAL*16 and EQUIVALENCE
it to a (4,4) array the elements of which are REAL*16 with the
expected results of the same-numbered elements perfectly
corresponding. (This might even be a reason not to deprecate
EQUIVALENCE...)

Is this not a useful thing to include in Fortran?

Maybe someone else has even suggested it already?

John Savard

gah4

Oct 26, 2021, 6:47:28 AM
On Monday, October 25, 2021 at 11:15:12 PM UTC-7, Quadibloc wrote:
> I will have to admit that I'm not fully conversant
> with the latest developments in Fortran.
>
> But it is my understanding that starting around the
> time of Fortran 90, constructs have been added to
> the language so that programs can be written to
> efficiently make use of the vector arithmetic features
> of computers like the Cray I, and its successors and
> imitators.

Complicated question.

In the Cray days, there was much work on getting compilers to figure
out how to make good use of vector registers.

Fortran added array expressions, which were, among other things, supposed
to make things easier for vector processors. There is also FORALL, which was
added more specifically with vector processors in mind.

Except that neither works quite that well.

Both array expressions and FORALL are defined such that the whole
right hand side is evaluated before any changes are made to the left
side. Unless vector registers are big enough for the whole array, that
can mean temporary arrays.

Even worse, many array expressions, as written, are harder for compilers
to optimize well than the DO form. That is especially true when one can
leave a computation early.

And then vector processors went out of style.

> These days, except for the SX-Aurora TSUBASA from
> NEC, most vector arithmetic on computers belongs to
> a line of development that started from the AN/FSQ-7
> for SAGE rather than the one that went through the
> Cray I.

(snip)

> I remember that back in the 1970s I mentioned to someone
> that I thought this sort of thing would be a nice feature to
> have, even without vector hardware. The fellow replied that
> I was being too slavish in following APL, matrix multiplication
> would be better.

> Well, I agreed that matrix multiplication was useful in
> scientific and engineering computation.

> But I felt that this would assign inappropriate semantics
> to the construct of an array in Fortran.

> Of course, one can add a built-in subroutine for matrix
> multiplication.

Fortran does have a built-in function for matrix
multiplication, MATMUL. Are expressions easier to read
with the * operator as a matrix multiply? I don't know.

> But more recently, I thought that there might be a better
> solution; and now on this newsgroup, I've thought to
> present it more widely.

> Fortran has a COMPLEX data type.

> Why can't it have a MATRIX data type?

You can do it with a derived type, and matrix multiply
through defined operations. (That is, define * to be
the matrix multiply operator for TYPE(MATRIX).)
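
A minimal sketch of that approach (the module and type names here are
just illustrative, nothing standard):

module matrix_type
   implicit none
   type :: matrix
      real, allocatable :: a(:,:)
   end type matrix
   interface operator(*)
      module procedure matrix_mult
   end interface
contains
   function matrix_mult(x, y) result(z)
      type(matrix), intent(in) :: x, y
      type(matrix) :: z
      ! defer the real work to the intrinsic MATMUL;
      ! z%a is allocated automatically on assignment (F2003 and later)
      z%a = matmul(x%a, y%a)
   end function matrix_mult
end module matrix_type

With that, A = B * C on TYPE(MATRIX) variables is a matrix multiply,
while * on ordinary arrays keeps its element-wise meaning.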

> So there would be a MATRIX statement, having much the
> same syntax as a DIMENSION statement. It could be used
> to create one-dimensional vectors and matrices that were
> not square.

It seems a lot of work, just to get the * operator to change.

It would not be so hard to add an operator with different
syntax, that would be a matrix multiply operator.

This is completely separate from the question of vector
(or other) processors. Either the current form or one
with a specific operator is just as easy for compilers
to figure out.





FortranFan

Oct 26, 2021, 9:41:57 AM
On Tuesday, October 26, 2021 at 2:15:12 AM UTC-4, Quadibloc wrote:
> I will have to admit that I'm not fully conversant
> with the latest developments in Fortran.
> ..
>
> Why can't it have a MATRIX data type?
> ..
>
> If A, B, and C are declared as matrices (or one of B or C is
> a scalar) then
>
> A = B * C
>
> would perform matrix multiplication as expected... ..
>
> Is this not a useful thing to include in Fortran?
>
> Maybe someone else has even suggested it already?
> ..

@John Savard,

Given your first sentence, you may want to consider the following even if it is just an "academic" interest:
1) Grab a copy of "Modern Fortran Explained" that covers up to Fortran 2018, the current standard revision. With your experience it will be a quick read for you:
https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198811893.001.0001/oso-9780198811893

2) Also post and follow modern Fortran discourse at this site:
https://fortran-lang.discourse.group/

As to your immediate question, please note:

a) Fortran has long had native support for multidimensional arrays,

b) For ordinary element-wise operations, assuming SHAPE conformance, the language supports the "vector" notation you show:
C = A + B

c) Starting with Fortran 90, introduced nearly 30 years ago, the language's support for the remaining "matrix" operation you show is

A = matmul( B, C )

with the details of achieving efficient matrix multiplication left mostly to the processors; a short example follows below.
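
For concreteness, a short self-contained program combining b) and c)
(the shapes and values here are arbitrary):

program demo
   implicit none
   real :: a(2,3), b(2,3), c(2,3), d(3,2), e(2,2)
   a = 1.0
   b = 2.0
   c = a + b           ! element-wise; shapes must conform
   d = 3.0
   e = matmul(a, d)    ! true matrix product: (2,3) x (3,2) -> (2,2)
   print *, c
   print *, e
end program demo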

pehache

Oct 26, 2021, 11:08:45 AM
On 26/10/2021 at 12:47, gah4 wrote:
> On Monday, October 25, 2021 at 11:15:12 PM UTC-7, Quadibloc wrote:
>> I will have to admit that I'm not fully conversant
>> with the latest developments in Fortran.
>>
>> But it is my understanding that starting around the
>> time of Fortran 90, constructs have been added to
>> the language so that programs can be written to
>> efficiently make use of the vector arithmetic features
>> of computers like the Cray I, and its successors and
>> imitators.
>
> Complicated question.
>
> In the Cray days, there was much work on getting compilers to figure
> out how to make good use of vector registers.
>
> The Fortran added array expressions, which are, among others, supposed
> to make it easier for vector processors. There is also FORALL, which was more
> specifically added to allow for vector processors.

Wasn't FORALL rather designed for general parallelism, allowing out-of-order
execution of the loop?

>
> Except that both don't work quite that well.
>
> Both array expressions and FORALL are defined such that the whole
> right hand side is evaluated before any changes are made to the left
> side. Unless vector registers are big enough for the whole array, that
> can mean temporary arrays.

Or unless the compiler can determine that it is safe to update the LHS
during the evaluation of the RHS. But yes, this means more complexity on
the compiler side.

>
> Even worse, many array expressions, as written, are harder for compilers
> to optimize well, than the DO form. That is especially true when one can
> leave a computation early.
>
> And then vector processors went out of style.

Yes and no. Most modern CPUs have small vector registers.



--
"...sois ouvert aux idées des autres pour peu qu'elles aillent dans le
même sens que les tiennes.", ST sur fr.bio.medecine

gah4

Oct 26, 2021, 2:42:43 PM
On Tuesday, October 26, 2021 at 8:08:45 AM UTC-7, pehache wrote:

(snip, I wrote)

> > Complicated question.

> > In the Cray days, there was much work on getting compilers to figure
> > out how to make good use of vector registers.

> > The Fortran added array expressions, which are, among others, supposed
> > to make it easier for vector processors. There is also FORALL, which was more
> > specifically added to allow for vector processors.

> Wasn't FORALL rather designed for general parallelism, allowing out of
> order execution of the loop ?

It was about the time of Cray, and I thought I knew that it was related
to vector processors. Now we have DO CONCURRENT which, as far as
I know, is related to out-of-order execution.

> > Except that both don't work quite that well.

> > Both array expressions and FORALL are defined such that the whole
> > right hand side is evaluated before any changes are made to the left
> > side. Unless vector registers are big enough for the whole array, that
> > can mean temporary arrays.

> Or unless the compiler can determine that this is safe to update the LHS
> during the evaluation of the RHS. But yes, this means more complexity on
> the compiler side.

Yes, I said "can". Maybe I should have made that more obvious.

But if you had infinitely large vector registers, then you wouldn't have
to consider that. But you never have those.

> > Even worse, many array expressions, as written, are harder for compilers
> > to optimize well, than the DO form. That is especially true when one can
> > leave a computation early.

> > And then vector processors went out of style.

> Yes, and no. Most of modern CPUs have small vector registers.

OK, but for it to make sense, vector registers have to keep getting
bigger, not smaller. Well, Cray's vector registers were sized for the
processor pipeline, so that it could be kept full through a series of
vector instructions.

Before Cray, there was a series of pipelined scalar processors,
such as the IBM 360/91 and the CDC 7600, but it is much easier
to keep a pipeline full with large vector registers.

Quadibloc

Oct 26, 2021, 8:14:10 PM
On Tuesday, October 26, 2021 at 4:47:28 AM UTC-6, gah4 wrote:
> It seems a lot of work, just to get the * operator to change.

Your comment has made me think of a potential pitfall:
using operator overloading for matrix multiplication versus
element-by-element multiplication has the potential to cause
confusion, and, hence, errors in programming.

The reason I was doing it was not so much to have *access*
to matrix multiplication - as has been noted, Fortran now has a
function for that - but instead to have the language treat matrices
consistently as mathematical objects which resemble numbers,
just as it does with complex numbers.

That would be a very natural style of writing certain types of
programs which made extensive use of matrices.

John Savard

gah4

Oct 27, 2021, 2:42:20 AM
On Tuesday, October 26, 2021 at 5:14:10 PM UTC-7, Quadibloc wrote:
> On Tuesday, October 26, 2021 at 4:47:28 AM UTC-6, gah4 wrote:
> > It seems a lot of work, just to get the * operator to change.

> Your comment has made me think of a potential pitfall:
> using operator overloading for matrix multiplication versus
> element-by-element multiplication has the potential to cause
> confusion, and, hence, errors in programming.

Octave and Matlab, two languages designed around matrix
operations, use * for the matrix multiply operator,
and .* for the element-wise multiply operator.

That is slightly more obvious for an interpreted language,
and also with the matrix emphasis.

While matrix math is not rare in Fortran, it is also not the
primary use for the language.

Octave and Matlab, even more, have the / operator (multiply
by the inverse matrix), and ** (matrix power), again
with the ./ and .** element-wise operators.

> The reason I was doing it was not so much to have *access*
> to matrix multiplication - as has been noted, Fortran now has a
> function for that - but instead to have the language treat matrices
> consistently as mathematical objects which resemble numbers,
> just as it does with complex numbers.

I don't know if there was any thought to that when array expressions
were added, but I suspect not. Adding new operators doesn't seem
like such a bad idea, though.

I think I still remember, when I first learned C, how convenient it
was to have the % (modulo) operator, instead of the MOD function.
I suspect I do a lot more modulo than matrix multiplication, though.

Edmondo Giovannozzi

Oct 27, 2021, 7:15:22 AM
Python introduced the @ operator for matrix multiplication (in version 3.5, if I remember correctly), leaving * as element-by-element multiplication.

David Jones

Oct 27, 2021, 8:30:21 AM
FortranFan wrote:

>
> c) Starting with Fortran 90 introduced nearly 30 years ago, the
> language support for the outstanding "matrix" operation you show is
>
> A = matmul( B, C )
>
> with the details of achieving efficient matrix multiplication
> left mostly to the processors.

If there were to be a semi-serious attempt to make matrices more easily
used in Fortran, one might contemplate adding optional arguments to
MATMUL to allow optional transposition of the input arguments before
being used in the matrix-multiplication. This would then allow
avoidance of unnecessary construction of transposes.

Gary Scott

Oct 27, 2021, 9:36:22 AM
I've wanted the same "natural" representation for boolean/logical
operations for decades. At least you can make your own now.

a .and. b .or. 5 .shift. 8
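
For what it's worth, a minimal sketch of a user-defined .shift. on
integers (the operator name and module are made up here; SHIFTL itself
is standard since Fortran 2008):

module shift_op
   implicit none
   interface operator(.shift.)
      module procedure shift_int
   end interface
contains
   pure function shift_int(i, n) result(r)
      integer, intent(in) :: i, n
      integer :: r
      r = shiftl(i, n)      ! left shift i by n bits
   end function shift_int
end module shift_op

After USE shift_op, an expression such as 5 .shift. 8 works as written.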

Thomas Koenig

Oct 27, 2021, 1:07:59 PM
David Jones <dajh...@nowherel.com> schrieb:
Such arguments already exist, but you have to apply them to the
arguments.

If you write

A = matmul (transpose(b), c)

compilers can recognize this idiom and can

a) Use optimized loop nesting if the call to matmul is inlined

b) do something special in its own library

c) Call the relevant *GEMM routine (if the compiler does so)
and specify the TRANSA or TRANSB as needed

gfortran does a) and c) and, to a limited extent, b).
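
For reference, the *GEMM form in c) looks roughly like this in double
precision (shapes arbitrary; DGEMM is the standard BLAS routine, so a
BLAS library is needed at link time):

program gemm_demo
   implicit none
   integer, parameter :: m = 3, n = 4, k = 5
   double precision :: b(k,m), c(k,n), a(m,n)
   external dgemm
   b = 1.0d0
   c = 2.0d0
   ! 'T' asks DGEMM to use B**T internally, so no explicit transpose is built
   call dgemm('T', 'N', m, n, k, 1.0d0, b, k, c, k, 0.0d0, a, m)
   print *, a     ! same values as matmul(transpose(b), c)
end program gemm_demo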

Ron Shepard

Oct 27, 2021, 1:17:27 PM
I remember having that same concern back when I started using f90. I
stumbled across the following. If you have a subprogram with an explicit
interface with an intent(in) assumed shape matrix argument, then you can
use the loc() or c_loc() functions to look at the memory locations of
the actual and dummy arguments. What I found was that the transpose(A)
actual argument and the dummy argument memory locations are the same.
This is not required by the standard (any of them), but it was a nice
surprise to see that optimization being performed. I have looked at
several compilers over the years, and this seems to be a common
optimization.
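
A sketch of that check (LOC is a common vendor extension, not standard
Fortran, and the outcome is compiler-dependent -- which is the whole
point of the experiment):

module probe_mod
   implicit none
contains
   subroutine probe(d)
      real, intent(in) :: d(:,:)          ! assumed-shape, intent(in) dummy
      print *, 'address of dummy :', loc(d)
   end subroutine probe
end module probe_mod

program check_transpose
   use probe_mod
   implicit none
   real :: a(300,400)
   a = 0.0
   print *, 'address of actual:', loc(a)
   call probe(transpose(a))   ! if the two addresses match, no deep copy was made
end program check_transpose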

What is happening, of course, is that the compiler is creating a new
data structure for the transposed array, and the row and column
addressing values for increments and offsets for the elements are being
switched, but the actual memory is the same for the actual and the dummy
arrays. Thus it is like a "shallow transpose" is being done, similar to
a shallow copy using pointer addresses rather than a deep copy that
actually references the memory. Thus, no matter how large the array
argument is, the calling sequence takes O(1) time. It is just the
metadata that is modified in the call, not the actual array data.

At the times I have looked at this, it did not also work for other
declarations. Explicit shape dummy arguments don't work that way because
they must be contiguous, thus a deep transpose must be done, with
copy-in/copy-out. Intent(inout) dummy arguments do not work this way
because the actual argument is considered to be an expression. However,
if you think about it, the same shallow transpose operation would also
work in this case too. For this reason, I have suggested that there
should be a new shallow transpose operator that is directly available to
the programmer that works the same way as this common optimization, but
one that would not be considered as an expression. Its only task is to
manipulate the metadata for arrays. It would always be an O(1)
operation, no matter the size of the array. In addition to actual
arguments, it could also be used in expressions and on the left hand
side of expressions, the same way that the transpose operation can be
used in mathematical expressions. There is currently no way to do a
pointer assignment to the transpose of an array -- this would allow that
in a straightforward way. The programmer could do things like

B => shallow_transpose(A)
shallow_transpose(B) => A
B = expression...shallow_transpose(A)...expression
B = function(,..., shallow_transpose(A), ...)
call some_subroutine(..., shallow_transpose(A), ...)

and know that all of those transpose operations would be done with O(1)
effort. Currently, it is up to the compiler to decide which transpose
operations are shallow (and cheap) and which ones are deep (and
expensive). This would give the programmer more control over the
low-level operations that are being performed.

BTW, I do not think there is any need to introduce an intrinsic MATRIX
type in fortran. That would introduce much complication to the language,
requiring, for example, separate subprograms to be written for various
combinations of array and MATRIX arguments, new conversion routines to
be defined, new semantics for assignments, pointer assignments, actual
and dummy arguments, and mixed-mode arithmetic, and so on. Consider for
example, the nightmare of maintaining the LAPACK library that allows all
combinations of array and MATRIX arguments. Instead, I think exposing
the array metadata to the programmer, even if it is limited to 2D arrays
in some cases, would be generally more productive.

$.02 -Ron Shepard

pehache

Oct 28, 2021, 6:08:19 PM
On 26/10/2021 at 20:42, gah4 wrote:
>
>>> In the Cray days, there was much work on getting compilers to figure
>>> out how to make good use of vector registers.
>
>>> The Fortran added array expressions, which are, among others, supposed
>>> to make it easier for vector processors. There is also FORALL, which was more
>>> specifically added to allow for vector processors.
>
>> Wasn't FORALL rather designed for general parallelism, allowing out of
>> order execution of the loop ?
>
> It was about the time of Cray, and I thought I knew that it was related
> to vector processors.

Actually both parallelisation and vectorisation of loops require some
independence, so what was good for vectorisation was also good for
parallelism. FORALL is just a kind of generalized array syntax, and
basically the array syntax allows execution in any order (the price
to pay being the temporary arrays).

> Now we have DO CONCURRENT that, as far as
> I know, is related to out of order.

Yep. And it is more general than array syntax, and comes without hidden
temporary arrays if used to replace array syntax or a FORALL.

Jos Bergervoet

Oct 29, 2021, 1:50:03 PM
On 21/10/27 7:17 PM, Ron Shepard wrote:
> On 10/27/21 7:30 AM, David Jones wrote:
>> FortranFan wrote:
...
...
> ... There is currently no way to do a
> pointer assignemnt to the transpose of an array -- this would allow that
> in a straightforward way. The programmer could do things like
>
>    B => shallow_transpose(A)
>    shallow_transpose(B) => A
>    B = expression...shallow_transpose(A)...expression
>    B = function(,..., shallow_transpose(A), ...)
>    call some_subroutine(..., shallow_transpose(A), ...)
>
> and know that all of those transpose operations would be done with O(1)
> effort.

This came up some time ago in the thread "Mathematics of arrays".
The minimum "hacking" solution (dangerous!) I found in gfortran
was the following: (just to play with it, not really useful..)
To be included in a module or "contains" of main program
!

function shallow_transpose(p1) result(p2)
  real(wp), target, intent(in) :: p1(:,:)
  real(wp), pointer :: p2(:,:)

  type dummy
    real(wp), pointer :: p(:,:)
  end type
  type(dummy) :: d
  integer :: ar(22)                    ! to fetch descriptor.. yes, that big!

  d%p => p1                            ! look at pointer p1
  ar = transfer(d, ar)                 ! fetch descriptor in ar(:)

  ar(11:22) = [ar(17:22), ar(11:16)]   ! swap index information

  d = transfer(ar, d)                  ! put back in the descriptor
  p2 => d%p                            ! copy pointer from wrapper
end


--
Jos

Robin Vowels

Oct 29, 2021, 8:44:30 PM
On Tuesday, October 26, 2021 at 5:15:12 PM UTC+11, Quadibloc wrote:
> I will have to admit that I'm not fully conversant
> with the latest developments in Fortran.
>
> But it is my understanding that starting around the
> time of Fortran 90, constructs have been added to
> the language so that programs can be written to
> efficiently make use of the vector arithmetic features
> of computers like the Cray I, and its successors and
> imitators.
>
> These days, except for the SX-Aurora TSUBASA from
> NEC, most vector arithmetic on computers belongs to
> a line of development that started from the AN/FSQ-7
> for SAGE rather than the one that went through the
> Cray I.
>
> Be that as it may...
>
> So, IIRC, one can write
>
> A = B * C
>
> in Fortran today and if A, B, and C are arrays having the
> same dimensions (or if one of B or C is a scalar) it will
> do an element-by-element multiplication, much as would
> have been done by APL, without the need for a DO loop.
.
That has been available in PL/I since it was introduced in 1966.
It has been the mainstay of array operations on the Pilot ACE
from 1955 (and probably earlier) and on DEUCE from 1955,
as GIP. The specific program was LZ61B and its successor LZ61BM/1.
.
In IBM's PL/I-F (1966), the elements are accessed using a single loop
(regardless of whether the operands are vectors or matrices),
without the need to manipulate subscripts, which allows
for highly efficient array operations.
.
> I remember that back in the 1970s I mentioned to someone
> that I thought this sort of thing would be a nice feature to
> have, even without vector hardware. The fellow replied that
> I was being too slavish in following APL, matrix multiplication
> would be better.
.
In the 1970s, the facility had already been available in PL/I
for at least 4 years.
.
> Well, I agreed that matrix multiplication was useful in
> scientific and engineering computation.
>
> But I felt that this would assign inappropriate semantics
> to the construct of an array in Fortran.
>
> Of course, one can add a built-in subroutine for matrix
> multiplication.
.
That was done in 1990 with Fortran 90.
.
> But more recently, I thought that there might be a better
> solution; and now on this newsgroup, I've thought to
> present it more widely.
>
> Fortran has a COMPLEX data type.
>
> Why can't it have a MATRIX data type?
.
Why?
.
> So there would be a MATRIX statement, having much the
> same syntax as a DIMENSION statement. It could be used
> to create one-dimensional vectors and matrices that were
> not square.
.
That can be done in FORTRAN (pre 1990) and from Fortran 90.
.
> If A, B, and C are declared as matrices (or one of B or C is
> a scalar) then
>
> A = B * C
>
> would perform matrix multiplication as expected...
.
Might that not be confusing?
And in any case, Fortran post-1990 already has a built-in function
for matrix multiplication.
.
> if one of
> B or C is a one-dimensional matrix, it would be treated as
> either a row or column vector, whichever one would make
> sense. (Unless that isn't practical; declaring 1 by n and n by 1
> matrices instead to make it explicit is certainly an option as
> well.)
>
> Since a matrix is an a x b grid of floating-point numbers
.
Matrices can be composed of integers.
.
> combined into an object with specific mathematical
> properties, while it would be possible to create an array
> of matrices, it would *not* be possible to create a matrix
> the elements of which are arrays.
>
> For much the same reason that you can't make a
> complex number the real and imaginary parts of which
> are arrays of the same shape.
>
> But, yes, it should be allowed to create a matrix the elements
> of which are complex numbers.
.
That can be done in FORTRAN, and has been the case for
some 60+ years.
.
> (Or quaternions, for that
> matter!)
>
> Oh, and by the way: yes, it should be possible to take a (4,4)
> matrix the elements of which are REAL*16 and EQUIVALENCE
> it to a (4,4) array the elements of which are REAL*16
.
Why on earth would you want to do that?
.
>with the
> expected results of the same-numbered elements perfectly
> corresponding. (This might even be a reason not to deprecate
> EQUIVALENCE...)
>
> Is this not a useful thing to include in Fortran?
.
No.
.

Ron Shepard

Oct 30, 2021, 4:35:52 PM
Neat trick. This doesn't allow the function on the lhs of an expression,
but I think it is close for all of the other cases (apart from
expression/intent(inout) mismatches). Unfortunately, it does not seem to
work in gfortran 10. That is not surprising because this would be highly
nonportable code. But just from looking at what it does, that is exactly
what I have in mind for the shallow_transpose() function, but in a
standard and portable way.

$.02 -Ron Shepard

Steve Lionel

Oct 31, 2021, 10:26:00 AM
On 10/30/2021 4:35 PM, Ron Shepard wrote:
> But just from looking at what it does, that is exactly what I have in
> mind for the shallow_transpose() function, but in a standard and
> portable way.

Um, no. This is the absolute antithesis of "standard and portable". It
relies on a specific compiler's implementation of pointers, something
that can change even between versions of a compiler. It will not work
with other compilers.

--
Steve Lionel
ISO/IEC JTC1/SC22/WG5 (Fortran) Convenor
Retired Intel Fortran developer/support
Email: firstname at firstnamelastname dot com
Twitter: @DoctorFortran
LinkedIn: https://www.linkedin.com/in/stevelionel
Blog: https://stevelionel.com/drfortran
WG5: https://wg5-fortran.org

Quadibloc

Oct 31, 2021, 10:56:12 PM
On Friday, October 29, 2021 at 6:44:30 PM UTC-6, Robin Vowels wrote:

> In the 1970s, the facility had already been available in PL/I
> for at least 4 years.

I wasn't claiming originality. It was a good idea in PL/I,
and so it would be good for Fortran to have it too.

Just as later on, in Fortran 77, block-structured IF statements
were added, which PL/I also had.

John Savard

gah4

Nov 1, 2021, 12:10:03 PM
One interesting difference, though.

Fortran array assignment requires behavior "as if" the whole right-hand side
is evaluated before the left side is changed. If the compiler can't be sure,
it has to use a temporary array.

PL/I array assignment is done element by element. If an array element
changes, that change is used in later assignments. That was before
vector processors, maybe before people thought about building them.
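
An illustrative example of the difference (values arbitrary):

program as_if_demo
   implicit none
   integer :: a(5) = [1, 2, 3, 4, 5]
   a(2:5) = a(1:4)   ! RHS treated as if fully evaluated before the LHS changes
   print *, a        ! Fortran semantics: 1 1 2 3 4
end program as_if_demo

With element-by-element evaluation that reuses already-changed elements
(the PL/I rule described above), the equivalent assignment would give
1 1 1 1 1.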



Robin Vowels

Nov 1, 2021, 8:51:04 PM
On Tuesday, November 2, 2021 at 3:10:03 AM UTC+11, gah4 wrote:
> On Sunday, October 31, 2021 at 7:56:12 PM UTC-7, Quadibloc wrote:
> > On Friday, October 29, 2021 at 6:44:30 PM UTC-6, Robin Vowels wrote:
>
> > > In the 1970s, the facility had already been available in PL/I
> > > for at least 4 years.
>
> > I wasn't claiming originality. It was a good idea in PL/I,
> > and so it would be good for Fortran to have it too.
>
> > Just as later on, in Fortran 77, block-structured IF statements
> > were added, which PL/I also had.
.
Algol also had block-structured IF statements.
.
> One interesting difference, though.
>
> Fortran array assignments require the "as if" the whole right hand side
> is evaluated before the left side is changed. If the compiler can't be sure,
> it has to use a temporary array.
>
> PL/I array assignment is done element by element.

As is done in Fortran. However, in PL/I

> If an array element changes,
.
No. If an array element on the LHS of the assignment
is also used on the RHS,
.
> that change is used in later assignments. That was before
> vector processors, maybe before people thought about building them.
.
Vector processors were in operation long before PL/I (1966).
The Pilot ACE (1951) did vector arithmetic and vector moves.
So did the DEUCE (1955) which had a similar architecture to Pilot ACE.
I suspect (but do not know for sure) that ACE (c. 1955) also had these
array features.

The rule in PL/I is that an array assignment
is treated as if the loop were written out as one (or more) nested
DO loops, as required, with the subscripts being varied in the
corresponding row-major order.
.
However, that rule* doesn't stop a PL/I compiler from reducing
nested loops to a SINGLE loop (as was done in IBM's PL/I (F)
compiler and the DEUCE PL/I compiler) to produce a highly-
optimised loop.
_________
* It is necessary only for the programmer's understanding of the
order in which the computations are carried out.

Jos Bergervoet

Nov 2, 2021, 4:06:03 AM
On 21/10/31 3:25 PM, Steve Lionel wrote:
> On 10/30/2021 4:35 PM, Ron Shepard wrote:
>> But just from looking at what it does, that is exactly what I have in
>> mind for the shallow_transpose() function, but in a standard and
>> portable way.
>
> Um, no. This is the absolute antithesis of "standard and portable".

I'm not entirely sure about this "absolute", because somehow I could
perhaps have used equivalence, instead of the transfer function. But
I admit this was close to what is conceivably possible (in terms of
approximating the antithesis, I mean..)

> It
> relies on a specific compiler's implementation of pointers, something
> that can change even between versions of a compiler.

That is true, but it did not rely on possible alignment problems, which
equivalence could perhaps have introduced (with some care..) That could
also easily have provided out-of-bounds addressing, which was now not
used at all!

> It will not work
> with other compilers.

OK, it did at least satisfy that basic requirement. :^)

[But seriously, I think Ron is right that it could be portable if each
version of a compiler would do this in the way appropriate for, well..
that version of the compiler! After all that is how all code generation
is kept portable, even though "a specific compiler's implementation"
that "can change between versions" is used for almost everything.]

--
Jos

Ron Shepard

Nov 2, 2021, 11:35:31 AM
I guess my wording in the previous post must have been confusing. I
certainly was not saying that that implementation of the
shallow_transpose() function was portable. In fact, I noted that it did
not work with an older version of gfortran, so it was not even portable
between different versions of the same compiler. But that implementation
does demonstrate that such an intrinsic function could provide that
functionality in a standard way that would be portable to the
programmer. The low-level implementation of that intrinsic function
would of course depend on the underlying metadata that the compiler uses
to describe arrays.

$.02 -Ron Shepard


gah4

Nov 2, 2021, 8:42:58 PM
On Tuesday, November 2, 2021 at 8:35:31 AM UTC-7, Ron Shepard wrote:

(snip)
> I guess my wording in the previous post must have been confusing. I
> certainly was not saying that that implementation of the
> shallow_transpose() function was portable. In fact, I noted that it did
> not work with an older version of gfortran, so it was not even portable
> between different versions of the same compiler. But that implementation
> does demonstrate that such an intrinsic function could provide that
> functionality in a standard way that would be portable to the
> programmer. The low-level implementation of that intrinsic function
> would of course depend on the underlying metadata that the compiler uses
> to describe arrays.

I wasn't especially trying to follow it, but I thought I remembered
something about standardizing the descriptors, such that one could
do things like this.

Steve Lionel

Nov 4, 2021, 12:14:21 PM
On 11/2/2021 8:42 PM, gah4 wrote:
> I wasn't especially trying to follow it, but I thought I remembered
> something about standardizing the descriptors, such that one could
> do things like this.

"C descriptors" are somewhat standardized, but they're not visible to
Fortran code and the standard permits layout differences. C code can
access the descriptors through macros and structure declarations in
ISO_Fortran_binding.h. Some compilers might use C descriptors to
represent pointers in Fortran code, but there's no requirement that this
be done. Even the specific layout of a C descriptor's bounds section is
implementation-dependent.

Lynn McGuire

Nov 4, 2021, 2:39:03 PM
Hi John,

When was the last time you wrote anything in Fortran?

I write software in Fortran almost every day. Or C++. Or if I am
lucky, both. I could never use a matrix object package in our software
since the computations are so long and usually dependent upon the
previous computations. We do use some of the Harwell routines, heavily
modified.

Lynn

Quadibloc

Nov 5, 2021, 1:15:51 AM
On Thursday, November 4, 2021 at 12:39:03 PM UTC-6, Lynn McGuire wrote:

> When was the last time you wrote anything in Fortran ?

I must admit, quite a while ago.

> I write software in Fortran almost every day. Or C++. Or if I am
> lucky, both. I could never use a matrix object package in our software
> since the computations are so long and usually dependent upon the
> previous computations. We do use some of the Harwell routines, heavily
> modified.

I know that Harwell is in Britain, and has something to do with atomic
energy...

Ah, this page
http://www.chilton-computing.org.uk/acl/society/impact/hsl.htm

on a site I went to for nice pictures of the front panel of the IBM 370/195
explains what the subroutine library is.

And it's the Atomic Energy Research Establishment that was known as Harwell,
from the place it was located; it has now been renamed the Harwell Science and
Innovation Campus.

John Savard

gah4

Nov 5, 2021, 2:56:05 AM
On Thursday, November 4, 2021 at 9:14:21 AM UTC-7, Steve Lionel wrote:
> On 11/2/2021 8:42 PM, gah4 wrote:
> > I wasn't especially trying to follow it, but I thought I remembered
> > something about standardizing the descriptors, such that one could
> > do things like this.

> "C descriptors" are somewhat standardized, but they're not visible to
> Fortran code and the standard permits layout differences. C code can
> access the descriptors through macros and structure declarations in
> ISO_Fortran_binding.h. Some compilers might use C descriptors to
> represent pointers in Fortran code, but there's no requirement that this
> be done. Even the specific layout of a C descriptor's bounds section is
> implementation-dependent.

So with some strange mix of Fortran and C, it should be possible to
do what was wanted. Maybe not so convenient, though.

Edmondo Giovannozzi

Nov 5, 2021, 6:51:44 AM
I don't think that a manipulation of the descriptor should be allowed except when using pointers.
If I write something like:

B = shallowTranspose(A)

Now B and A point to the same memory, and one has aliasing without the processor really knowing. Instead one should have something like:
..., target :: A
..., pointer :: B

B => shallowTranspose(A)

Or whatever manipulation of the descriptor one wants to allow. Now the standard Fortran rules apply and the processor knows that the two arrays may share the same memory.



Steve Lionel

Nov 5, 2021, 10:40:30 AM
On 11/5/2021 2:56 AM, gah4 wrote:
>> "C descriptors" are somewhat standardized, but they're not visible to
>> Fortran code and the standard permits layout differences. C code can
>> access the descriptors through macros and structure declarations in
>> ISO_Fortran_binding.h. Some compilers might use C descriptors to
>> represent pointers in Fortran code, but there's no requirement that this
>> be done. Even the specific layout of a C descriptor's bounds section is
>> implementation-dependent.
> So with some strange mix of Fortran and C, it should be possible to
> do what was wanted. Maybe not so convenient, though.
>

You can't use the CFI_ functions to remap a Fortran pointer, but you
could create a new pointer with dimensions and bounds as desired and
pass that back to Fortran. It would have to be done as a sort of
callback, because the descriptor structure would have to be created in
the C code. It would be awkward.

Ron Shepard

Nov 5, 2021, 8:19:53 PM
Yes, pointer assignment was exactly what was suggested in the earlier
post, along with ability to use shallow_transpose() in expressions and
as an actual argument to a subprogram. In that latter case, it would be
the programmer's responsibility to avoid aliases with other arguments --
I don't think the compiler could know and check for those in all cases.

I also suggested the possibility of using the function on the left hand
side of expressions.

shallow_transpose(B) = <some expression>

That would be beyond what is currently allowed in fortran, but it would
match what is allowed in formal mathematical expressions. In terms of
low-level operations, that would require the allocation of memory to
evaluate the expression, association of that memory to the pointer, and
adjusting the pointer metadata to account for the transpose. That seems
complicated, knowing when that memory can be deallocated by the compiler
and what other things can be done with it (e.g. subsequent pointer
assignments, etc.). So maybe that isn't really practical, despite its
apparent usefulness.

In any case, in the end the shallow_transpose() would be an intrinsic
function defined as part of the fortran standard. The programmer would
not need to fiddle around with any descriptors using C interop or
anything like that.

$.02 -Ron Shepard

Jos Bergervoet

Nov 8, 2021, 5:00:04 PM
Why? The lhs is allowed to be a pointer (assuming that it
has been allocated the correct array size!) And if B has
been allocated the correct transposed array size then the
result of shallow_transpose(B) has the correct size, so
the assignment is no problem (and does indeed work nicely
in the toy-implementation given upthread).

> but it would
> match what is allowed in formal mathematical expressions. In terms of
> low-level operations, that would require the allocation of memory to
> evaluate the expression,

That is needed regardless of what the lhs is..

> association of that memory to the pointer, and

That's the programmer's responsibility, not the compiler's
(we are talking about pointers, not allocatable arrays).

> adjusting the pointer metadata to account for the transpose.

That's what the function shallow_transpose does, which (at
this point of the thread) we assume to exist of course..

> That seems
> complicated, knowing when that memory can be deallocated by the compiler
> and what other things can be done with it (e.g. subsequent pointer
> assignments, etc.).

I see no difference between the situation after:
B = <some expression>
and
shallow_transpose(B) = <some expression>

At least not for what can be done with the memory region
involved. To me it would seem to be more tricky to have
B(i:j, n:m) = <some expression>
where only a slice of the total allocated memory of B is
being used.

But why would anything have to be "deallocated by the
compiler" in these examples? Again: we aren't talking
about allocatable arrays, but just pointers, so the
burden is on the programmer.

> So maybe that isn't really practical, despite its
> apparent usefulness.
>
> In any case, in the end the shallow_transpose() would be an intrinsic
> function defined as part of the fortran standard.

I do think it might be useful, but the name is too
long (it's even worse than dot_product!)

> The programmer would
> not need to fiddle around with any descriptors using C interop or
> anything like that.

Of course. But can it be generalized? For a two-
dimensional array we could have something like:

B(:,:, iorder=[2,1]) = <some expression>

And for a higher dimensional case for instance:

B(:,:,2:3,:, iorder=[2,1,4,3]) = <some expression>

where "iorder" just specifies a change in indexing
order! (Admittedly it would be error prone because of
the added complexity to what index slices now mean..)

--
Jos

gah4

Nov 8, 2021, 10:44:03 PM
On Monday, November 8, 2021 at 2:00:04 PM UTC-8, Jos Bergervoet wrote:

(snip)

> I see no difference between the situation after:
> B = <some expression>
> and
> shallow_transpose(B) = <some expression>

Other than it isn't usual to use functions on the left side of an assignment.

PL/I has pseudo-variables, which do look like functions, and can appear
on the left side. In the case of a function that is its own inverse, though,
I don't see the need.

PL/I has the REAL and IMAG functions to extract the real and imaginary
parts of a complex value, and COMPLEX to make a complex value from
real and imaginary parts.

Then there are the REAL and IMAG pseudo-variables to change the real
or imaginary part of a complex value, and COMPLEX to extract both.

X=REAL(Z);
Y=IMAG(Z);

can be written as:

COMPLEX(X,Y) = Z;

IMAG(Z) = Y;

to change only the imaginary part. Much more obvious than:

Z = COMPLEX(REAL(Z), Y);

and even:

DO IMAG(Z) = 1 TO 100;

where it is both a function and pseudo-variable.

There is also SUBSTR(), as a function to extract a substring, and
pseudo-variable to change a substring in a string.

In any case, interpreted languages commonly have the shallow
transpose, where memory allocation is not visible to the
programmer at all.

Robin Vowels

Nov 9, 2021, 1:39:33 AM
On Tuesday, November 9, 2021 at 2:44:03 PM UTC+11, gah4 wrote:
> On Monday, November 8, 2021 at 2:00:04 PM UTC-8, Jos Bergervoet wrote:
>
> (snip)
> > I see no difference between the situation after:
> > B = <some expression>
> > and
> > shallow_transpose(B) = <some expression>
> Other than it isn't usual to use functions on the left side of an assignment.
>
> PL/I has pseudo-variables, which do look like functions,
.
Most of them do not.
Most are simple names, such as ONSOURCE, ONCHAR, etc.
One pseudo-variable that is likely to be seen is SUBSTR,
which can replace zero or more characters in an existing string. E.g.,
SUBSTR(S, 4, 7) = 'abcdefg';
which replaces 7 characters in string S, commencing with the fourth
character.
Fortran has a comparable facility.
.
> and can appear
> on the left side. In the case of a function that is its own inverse, though
> I don't see the need.
>
> PL/I has the REAL and IMAG functions to extract the real and imaginary
> parts of a complex value, and COMPLEX to make a complex value from
> real and imaginary parts.
.
COMPLEX as a pseudo-variable has long been dropped from the language.
.
> Then there are the REAL and IMAG pseudo-variables to change the real
> or imaginary part of a complex value, and COMPLEX to extract both.
>
> X=REAL(Z);
> Y=IMAG(Z);
>
> can be written as:
>
> COMPLEX(X,Y) = Z;
.
Not any more.
.