Matrix indexing start


Thierry Arnoux

Aug 28, 2020, 3:45:33 AM
to metamath

Hi all,

I recently formalized a proof of the Laplace expansion of determinants (~ mdetlap), which I think would be useful to pull into the main part of set.mm. Because the formula involves calculations based on the row and column indices of the matrix elements, I'm using matrices with integer indices (in contrast with the rest of the development on matrices, which is based on arbitrary sets).

I chose indices in ` ( 1 ... N ) `, so that the top-left matrix element is a11 (in set.mm written ` ( 1 A 1 ) ` ). Using indices starting from one seems to be the convention in mathematics; I have not yet found a reference with indices starting at zero (and neither has Norm). However, we would like to run this through the community. Most programming languages start indices at zero, with the exception of R and a few others.

In set.mm, word indices start at zero.

What's your opinion? Should matrix indices start with one or zero?

Thanks for your input!

BR,
_
Thierry


PS. I would later like to define a "literal" matrix function, which would be used like this to transform words (for any matrix size up to 8x8) into matrices:

( litMat ` <" <" A B C "> <" D E F "> <" G H I "> "> )

This would allow a bridge/conversion.

Alexander van der Vekens

Aug 28, 2020, 12:14:57 PM
to Metamath

There was a discussion in https://groups.google.com/g/metamath/c/UwTUuNPgaB0/m/NdWefzG4AgAJ about the indices for words. Currently, the indices for words start with 0, and the proposal to change this was not accepted.

For matrices, however, things are different: the indices for rows and columns usually start at 1, as Thierry explained, so I agree with Thierry. And having the planned conversion function should dispel any doubt.

Alexander

Benoit

Aug 28, 2020, 1:43:07 PM
to Metamath
I'm still in favor of words starting at 0.  As for matrix rows and columns, it would be more... wait for it... natural to start at 0, but since the literature overwhelmingly prefers to start at 1, maybe it's better to conform with it (there are a few "historical accidents" like that, for instance defining \pi as half what it should be, and since the consequences are very mild, it stays that way).
And to state the obvious: since most results using the fact that rows and columns are natural numbers only involve them in the form (-1)^{i+j}, shifting both row and column indices by 1 does not change the parity of the sum, so these statements are unaffected.
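This parity observation is easy to check mechanically; a small sketch (assuming nothing beyond integer arithmetic):

```python
# Shifting both indices by 1 adds 2 to the exponent, so the sign
# (-1)^(i+j) used in cofactor expansions is unchanged.
for i in range(1, 5):
    for j in range(1, 5):
        assert (-1) ** (i + j) == (-1) ** ((i + 1) + (j + 1))
```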

Benoit

Mario Carneiro

Aug 28, 2020, 2:20:35 PM
to metamath
My vote is also in favor of starting with 0, or even better, doing everything over generic finite sets when possible and falling back on ( 0 ..^ N ). I realize that close correspondence with papers is considered important here but mathematicians write for their own ease. In a math paper, you can write either a_0, ..., a_(n-1) or a_1, ..., a_n - which is easier? Besides the strictness of a few inequalities this is probably the only difference, so a_1, ..., a_n is preferred because it is very slightly shorter.

In formalization, we should also write for our own ease. In this case, that means better interfacing with the theory of finite words, which uses ( 0 ..^ N ). There are also other reasons to prefer 0 based indexing but in this case I think consistency with words (viewed as 1D vectors) is more important than that.

--
You received this message because you are subscribed to the Google Groups "Metamath" group.
To unsubscribe from this group and stop receiving emails from it, send an email to metamath+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/metamath/0173f6ec-7faf-41a8-816d-7becca681ba0n%40googlegroups.com.

Jim Kingdon

Aug 28, 2020, 5:40:29 PM
to Benoit, Metamath


On August 28, 2020 10:43:07 AM PDT, Benoit <benoit...@gmail.com> wrote:
>defining \pi as half what it should be

Them's fighting words: http://us.metamath.org/mpeuni/taupi.html

(And because the internet doesn't easily transmit the slightly wry smile with which I post this, "fighting words" is meant very loosely and I'm mostly just using this as an excuse to look again at taupi which I wrote a year and a half ago)

Norman Megill

Aug 28, 2020, 8:06:43 PM
to Metamath
While I'm not an expert, my guess is that the major uses of 0-based arrays are in computer science and some computer programming languages, where they can offer an advantage for some algorithms.

But mathematics is meant for humans, and humans tend to count from 1. That's why we number the proof steps on our web pages starting with 1. Having the "first" step numbered with 0 would seem unnatural to me.

As AV pointed out, we haven't been able to find a linear algebra textbook with matrices numbered from 0. On Wikipedia, there are 4 subtopics listed under https://en.wikipedia.org/wiki/Vector#Mathematics_and_physics which are:

https://en.wikipedia.org/wiki/Euclidean_vector
https://en.wikipedia.org/wiki/Row_and_column_vectors
https://en.wikipedia.org/wiki/Vector_space
https://en.wikipedia.org/wiki/Vector_field

These all show 1-based indices.

I have always wanted set.mm to be as close to the mathematical literature as possible or practical.  Apparently almost all of the linear algebra literature uses 1-based matrices and vectors, so I think that is what set.mm should do.  Even if for some purposes it would be more convenient to have matrices 0-based, 1-based indexing doesn't seem to have harmed the field.

My vote is to conform to the literature and have matrices and vectors 1-based.

On Friday, August 28, 2020 at 2:20:35 PM UTC-4 di.... gmail.com wrote:
My vote is also in favor of starting with 0, or even better, doing everything over generic finite sets when possible and falling back on ( 0 ..^ N ). I realize that close correspondence with papers is considered important here but mathematicians write for their own ease. In a math paper, you can write either a_0, ..., a_(n-1) or a_1, ..., a_n - which is easier? Besides the strictness of a few inequalities this is probably the only difference, so a_1, ..., a_n is preferred because it is very slightly shorter.

In formalization, we should also write for our own ease. In this case, that means better interfacing with the theory of finite words, which uses ( 0 ..^ N ). There are also other reasons to prefer 0 based indexing but in this case I think consistency with words (viewed as 1D vectors) is more important than that.

David A. Wheeler

Aug 29, 2020, 6:57:31 AM
to Norman Megill, Metamath
>My vote is to conform to the literature and have matrices and vectors 1-based.

Would it be reasonable to "hide" the starting index in most cases and just say "first index", "last index", and so on? The Ada programming language does this to a certain extent. I'm thinking out loud.
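As a hypothetical sketch of what hiding the index base could look like, in the spirit of Ada's 'First and 'Last array attributes (the `Vec` wrapper below is invented purely for illustration, not a set.mm construct):

```python
class Vec:
    """A vector whose clients ask for its first/last index
    instead of hardcoding 0 or 1 (illustrative only)."""

    def __init__(self, entries, first=1):
        self._entries = list(entries)
        self.first = first                     # chosen starting index
        self.last = first + len(entries) - 1   # inclusive final index

    def __getitem__(self, i):
        # translate the external index to the internal 0-based position
        return self._entries[i - self.first]

v = Vec(["a", "b", "c"], first=1)
assert v[v.first] == "a" and v[v.last] == "c"

w = Vec(["a", "b", "c"], first=0)
assert w[w.first] == "a" and w[w.last] == "c"   # same client code either way
```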


--- David A.Wheeler

fl

Aug 29, 2020, 7:33:10 AM
to Metamath
I feel as if I were attending a debate on nuclear power or the gender of the word "Covid".

-- 
FL

Thierry Arnoux

Aug 29, 2020, 8:57:49 AM
to meta...@googlegroups.com, David A. Wheeler, Norman Megill
"Hiding" the starting index would unfortunately not really solve the issue.

Even if we use a variable as a starting index, we have to choose between
the ` ... ` operator (closed integer interval) and the ` ..^ ` operator
(half-open integer interval); the former is typically used for indices
starting at 1 and the latter for indices starting at 0.

I also think the more we hide, the more complex and the less clear the
constructs will be: the best would still be a universal convention.

I'm willing to rewrite the theorems I now have with ( 1 ... N ) into
( 0 ..^ N ) if such is the choice of the community, but I don't see any
efficient and clear way to make it work flexibly for both at the same
time.
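For readers less familiar with the two set.mm interval operators, they can be modeled with Python ranges (an illustrative analogy, not set.mm syntax):

```python
def closed(m, n):
    """( m ... n ): the closed integer interval {m, m+1, ..., n}."""
    return list(range(m, n + 1))

def half_open(m, n):
    """( m ..^ n ): the half-open integer interval {m, m+1, ..., n-1}."""
    return list(range(m, n))

# Both describe the N indices of a length-N object, under the two conventions:
N = 4
assert closed(1, N) == [1, 2, 3, 4]       # 1-based, matrix style
assert half_open(0, N) == [0, 1, 2, 3]    # 0-based, word style
```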

fl

Aug 29, 2020, 9:00:41 AM
to Metamath

I'm willing to rewrite the theorems I now have with ( 1 ... N ) into
( 0 ..^ N ) if such is the choice of the community, but I don't see any
efficient and clear way to make it work flexibly for both at the same
time.


A finite totally ordered set?

Thierry Arnoux

Sep 2, 2020, 12:30:44 AM
to meta...@googlegroups.com, Norman Megill

Thanks for all feedback!

There are valid arguments on both sides, so we need Norm or Mario to make the final call.


Mario Carneiro

Sep 2, 2020, 3:15:25 AM
to metamath
I will reiterate my suggestion to use finite sets for indexing when possible, which sidesteps the question, but while I would like to defer to Norm here, I really think a lack of consistency is going to cause actual problems in this area. What is the definition of a matrix multiplied by a vector? Surely the vector has to use the same indexing as the matrix. Perhaps we could have a separate type for vectors, distinct from words, but this seems superfluous and repetitive.

I am willing to defer on decisions about conventions, but I would like to see a clear path forward for the main definitions here, and why they are not better served by a minor variation such as reindexing.

Mario

savask

Sep 2, 2020, 3:35:31 AM
to Metamath
I will reiterate my suggestion to use finite sets for indexing when possible, which sidesteps the question

Whatever the final decision on the indexing start will be, I really hope that there will still be support for matrices indexed by arbitrary finite sets. I'm not sure if this example was mentioned before, but, for example, the adjacency matrix of a graph can be indexed by the graph's vertices. It is often cumbersome to impose an arbitrary ordering on the vertices, and needless to say, many theorems do not require any ordering whatsoever (for instance, theorems about eigenvalues of subgraphs/submatrices).
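This point can be illustrated with the Leibniz formula, which needs only permutations of the index set and their signs, both of which make sense without any ordering. A hedged Python sketch (dicts keyed by index pairs stand in for A x A -> R functions; all names are illustrative):

```python
from itertools import permutations

def det(M, A):
    """Leibniz determinant of M : A x A -> R, with A an arbitrary finite
    index set. No ordering on A is used: the sign of a permutation is
    read off its cycle structure."""
    elems = list(A)  # an arbitrary enumeration, used only for iteration
    total = 0
    for image in permutations(elems):
        sigma = dict(zip(elems, image))
        # sign = (-1)^(|A| - number of cycles of sigma)
        seen, cycles = set(), 0
        for a in elems:
            if a not in seen:
                cycles += 1
                while a not in seen:
                    seen.add(a)
                    a = sigma[a]
        sign = (-1) ** (len(elems) - cycles)
        prod = 1
        for a in elems:
            prod *= M[(a, sigma[a])]
        total += sign * prod
    return total

# e.g. a matrix indexed directly by graph vertices, no numbering in sight:
V = {"u", "v"}
M = {("u", "u"): 1, ("u", "v"): 2, ("v", "u"): 3, ("v", "v"): 4}
assert det(M, V) == 1 * 4 - 2 * 3
```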

Thierry Arnoux

Sep 2, 2020, 3:49:50 AM
to meta...@googlegroups.com, Mario Carneiro

Indeed, we do have matrix-vector multiplication, ` maVecMul `, introduced by Alexander and used in his proof of Cramer's rule.
That operation considers vectors as functions of a single variable, indexed by any finite set.

These can only be identified with words (` Word X `, which can be input as ` <" A B C "> ` ) if that set is of the form ` ( 0 ..^ N ) `...

_
Thierry

Alexander van der Vekens

Sep 2, 2020, 5:17:20 AM
to Metamath
For clarification: there is already a definition of (square) matrices based on arbitrary finite sets used as index sets (see ~ df-mat, contributed by Stefan O'Rear). The theorems which I needed to prove Cramer's rule are based on this definition. If I understand Thierry correctly, he does not want to replace this definition, but to provide an additional, more specialized definition with an ordered index set, which can be used for special theorems like the Laplace expansion (I was not able to prove it for arbitrary finite sets...).

The column vectors which I used for the matrix-vector multiplication ` maVecMul ` are also indexed by arbitrary finite sets, as mentioned by Thierry, so this also covers the general case Mario is asking for.

-
Alexander

Mario Carneiro

Sep 2, 2020, 6:54:12 AM
to metamath
Note that an expression such as <" <" A B C "> <" D E F "> <" G H I "> "> has the type (0..^N) -> ((0..^N) -> R), which would have to be converted anyway to match the expected I x I -> R type used by square matrices. If we take I to be (0..^N) then this is just an uncurry, but if the official index set for matrices is instead (1...N) then the index remapping could be performed at the same time.
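The two conversions can be sketched in Python, with dicts keyed by index pairs standing in for the I x I -> R functions (names and literals here are illustrative):

```python
# A literal 0-based nested structure S : (0..^N) -> ((0..^N) -> R):
S = [["A", "B"],
     ["C", "D"]]
N = len(S)

# If the official index set is (0..^N), conversion is just an uncurry:
M0 = {(i, j): S[i][j] for i in range(N) for j in range(N)}

# If it is (1...N), the same pass can also remap the indices:
M1 = {(i + 1, j + 1): S[i][j] for i in range(N) for j in range(N)}

assert M0[(0, 0)] == "A"   # top-left entry, 0-based
assert M1[(1, 1)] == "A"   # the same entry as a11, 1-based
```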

Thierry Arnoux

Sep 2, 2020, 7:28:41 AM
to meta...@googlegroups.com
I’ve done a mock-up of this conversion (including index translation), you can find it here:
http://us2.metamath.org:88/mpeuni/lmat22e11.html

On the other hand, does anyone have an idea or suggestion about how I could have expressed the Laplace expansion without integer indices, over any finite index set?

On 2 Sep 2020, at 18:54, Mario Carneiro <di....@gmail.com> wrote:



Mario Carneiro

Sep 2, 2020, 8:47:53 AM
to metamath
On Wed, Sep 2, 2020 at 7:28 AM Thierry Arnoux <thierry...@gmx.net> wrote:
I’ve done a mock-up of this conversion (including index translation), you can find it here:

That's a whole lot of boilerplate; I would hope that most of it can be broken into a lemma specialized for proving the entries of a 2x2 matrix (instantiated 4 times) or a 3x3 matrix (instantiated 9 times). I'm thinking something like:

|- M = ( litMat ` S )
|- K e. NN0
|- L e. NN0
|- ( K + 1 ) = I
|- ( L + 1 ) = J
|- ( S ` K ) = T
|- ( ph -> ( T ` L ) = X )
$p |- ( ph -> ( I M J ) = X )

But the index translation is only a small added pain in this lemma application.
 
On the other hand, does anyone have an idea or suggestion about how I could have expressed the Laplace expansion without integer indices, over any finite index set?

The Laplace expansion of the determinant (just one level of expansion, not recursive) requires a choice of row to expand over (which is an element of the index set), and then it's an unordered finite sum. No integers needed.

Mario
 

fl

Sep 2, 2020, 11:25:33 AM
to Metamath
Note that an expression such as <" <" A B C "> <" D E F "> <" G H I "> "> has the type (0..^N) -> ((0..^N) -> R), which would have to be converted anyway to match the expected I x I -> R type used by square matrices. If we take I to be (0..^N) then this is just an uncurry, but if the official index set for matrices is instead (1...N) then the index remapping could be performed at the same time.



The best thing to do is to redefine the <" ... "> operator so that it takes ( 1 ... N ) as its set of indices, and then fix up all the proofs referring to the definition.
You should have only one definition for matrices, tuples and words, since it is all the same story. Or at most two: one with an abstract finite
set of indices and another with ( 1 ... N ).

--
FL
 

Thierry Arnoux

Sep 2, 2020, 1:02:23 PM
to meta...@googlegroups.com

On 02/09/2020 23:25, 'fl' via Metamath wrote:

The best thing to do is to redefine the <" ... "> operator so that it takes ( 1 ... N ) as its set of indices, and then fix up all the proofs referring to the definition.
You should have only one definition for matrices, tuples and words, since it is all the same story. Or at most two: one with an abstract finite
set of indices and another with ( 1 ... N ).

As Norm mentioned in the original thread about the index start for words, I'm afraid that would be a huge amount of work: there are already hundreds of theorems making use of that range.
That's why I would like to ask the same question about matrices now, before writing more theorems in whichever convention.

Of course we shall try to use arbitrary sets whenever possible... which leads me to the other point:

On the other hand, does anyone have an idea or suggestion about how I could have expressed the Laplace expansion without integer indices, over any finite index set?

The Laplace expansion of the determinant (just one level of expansion, not recursive) requires a choice of row to expand over (which is an element of the index set), and then it's an unordered finite sum. No integers needed.

Yes, but what is the factor you apply to each submatrix determinant? The textbooks all have ` -1 ^ ( i + j ) `, where i and j are integer indices. Of course, the start index does not matter, but where do those numbers come from when working with arbitrary sets?

_
Thierry

Norman Megill

Sep 2, 2020, 2:12:50 PM
to Metamath
I don't want to be the one making the decision on this, in part because I won't be doing the bulk of the work, and I don't really have special expertise in the matter.  It just seems that starting at 0 goes against virtually all of published mathematics on matrices.

I would feel better if someone could find a book on linear algebra with matrices starting at 0.  I was unable to find one.  Even Cormen et al.'s _Algorithms_ (which is computer-science oriented) starts at 1 in its chapter on matrices.

What is the connection between words and matrices in the literature, and how does it handle the 0- vs. 1-based conversion?  Is all or most of the literature on words 0-based?

BTW it seems that the main computer algebra languages start at 1 (Mathematica, Matlab and R are mentioned):
https://mathematica.stackexchange.com/questions/86189/why-do-mathematica-list-indices-start-at-1

Norm

fl

Sep 2, 2020, 2:32:05 PM
to Metamath
> I don't want to be the one making the decision on this, in part because I won't be doing the bulk of the work,

There are ~300 proofs about words. No more. It's not a big deal. If everybody takes 30 proofs, it's done in two weeks. Changing the start of the indices is a pretty mechanical job. I'd guess that for many proofs the change amounts to 5 minutes or less.

> and I don't really have special expertise in the matter. 

^^

> What is the connection between words and matrices in the literature,

Words are tuples! I don't understand this question.

If we don't change these proofs now, we will keep the burden of this peculiarity for years.

--
FL

Alexander van der Vekens

Sep 2, 2020, 2:42:40 PM
to Metamath
I do not like the idea of regarding words as vectors - these are two completely different concepts. Words have prefixes and suffixes, can be concatenated, and subwords can be extracted - what would all of these mean for vectors? On the other hand, vectors can be added and multiplied by scalars (and matrices), which is not meaningful for words. Therefore, the questions of which indices to use for words and which for vectors/matrices can/should be discussed (and decided) separately. I would prefer indices starting with 0 for words, and indices starting with 1 for vectors and matrices.

fl

Sep 2, 2020, 2:51:06 PM
to Metamath

> I do not like the idea to regard words as vectors - these are two completely different concepts

Simply write a computer program and you will soon realize that tuples can be concatenated at will.
Length is a concept relevant to both words and matrices.

There is an abstract theory of words regarding them as elements of free structures. Concatenation is
not at all an operation of its own; it's an operation similar to the others.

Pretty sure that some transformations of matrices can be analyzed in terms of concatenation or subwords.

No, definitely, it's the same concept.

--
FL

fl

Sep 2, 2020, 3:08:00 PM
to Metamath

fl

Sep 2, 2020, 3:08:44 PM
to Metamath
* Mathematica

fl

Sep 2, 2020, 3:09:07 PM
to Metamath
* was

On Wednesday, September 2, 2020 at 21:08:00 UTC+2, fl wrote:

Stanislas Polu

Sep 2, 2020, 3:52:01 PM
to meta...@googlegroups.com
Wouldn’t they be clearly differentiated in a type-theory setup? How do Coq/Lean treat those? Are they the same concept or separate ones?

-stan


fl

Sep 2, 2020, 4:02:03 PM
to Metamath
Wouldn’t they be clearly differentiated in a type theory setup? How does coq/lean treat those? Are they the same concept or separated ones?

We are speaking of lists and lists of lists in the context of FOL + ZFC. All that matters is to have a single framework
for processing lists.

--
FL

Mario Carneiro

Sep 2, 2020, 6:08:26 PM
to metamath
In Coq and Lean, the structure of inductive types, specifically the peano natural numbers starting at 0, *strongly* prefers 0-based indexing. The difference is stark enough that it's not really ever a discussion. You can do induction on nat, you can't do induction on positive nat (at least the version that is baked into the language doesn't give you this, you have to use a theorem that amounts to an index conversion under the hood). ZFC is more free in this regard because any definition is as good as any other from the point of view of the automation, but there is still friction associated with conversions.

My argument for 0-based matrices is at this point entirely one of uniformity. Changing 300 theorems with a silly index change is not an automatic job so it will be a lot of work, and the gain is small (and since I'm on Dijkstra's side in this argument, the gain is negative from my POV). Unlike Norm I don't think that rigid adherence to the textbook formulations of things is necessarily a good idea, as long as I think the author would agree that the reformulation is essentially equivalent (the "mathematician's equal"). It is easier to get this confirmation when you have actual practicing mathematicians on staff, as it were; there are a lot of mathematicians getting into lean these days and the 0-based thing hasn't caused one bit of dissent that I have heard.

Here's the definition of a matrix: https://leanprover-community.github.io/mathlib_docs/find/matrix/src . It uses arbitrary finite sets, but further down the page you will see some theorems like https://leanprover-community.github.io/mathlib_docs/find/matrix.sub_left/src that specialize to the type "fin n", which is lean's version of our (0..^n). It is defined as {i : nat | i < n}, where nat is the inductive type of nonnegative integers. The type (1...n) would be the more cumbersome {i : nat | 0 < i /\ i <= n}, which has an extra conjunct that would have to be dealt with whenever you destructure the type (and working directly from the stated definition is much more common in lean than metamath). This is the sort of extra friction I am talking about - it's not just changing numbers in a few places.
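The contrast can be sketched with subtype notation (hypothetical Lean 4 definitions; the names are ad hoc, and mathlib's actual `fin` differs in details):

```lean
-- 0-based index type: a single bound to discharge when destructuring.
def Fin0 (n : Nat) : Type := { i : Nat // i < n }

-- 1-based analogue: a conjunction, so every destructuring has to
-- handle two facts instead of one.
def Fin1 (n : Nat) : Type := { i : Nat // 0 < i ∧ i ≤ n }
```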

Mario

Norman Megill

Sep 2, 2020, 8:59:10 PM
to Metamath
Is the issue here the ability to translate to Lean more easily?

On the flip side, there has been some interest in translation from set.mm to Mathematica, which would be more difficult with 0-based matrices.

Why is there a need for words in the development of matrices? Is any of this already being done in set.mm?  Is there literature that develops matrices using words?

Norm

On Wednesday, September 2, 2020 at 6:08:26 PM UTC-4 di.... gmail.com wrote:
In Coq and Lean, the structure of inductive types, specifically the peano natural numbers starting at 0, *strongly* prefers 0-based indexing. The difference is stark enough that it's not really ever a discussion. You can do induction on nat, you can't do induction on positive nat (at least the version that is baked into the language doesn't give you this, you have to use a theorem that amounts to an index conversion under the hood). ZFC is more free in this regard because any definition is as good as any other from the point of view of the automation, but there is still friction associated with conversions.

My argument for 0-based matrices is at this point entirely one of uniformity. Changing 300 theorems with a silly index change is not an automatic job so it will be a lot of work, and the gain is small (and since I'm on Dijkstra's side in this argument, the gain is negative from my POV). Unlike Norm I don't think that rigid adherence to the textbook formulations of things is necessarily a good idea, as long as I think the author would agree that the reformulation is essentially equivalent (the "mathematician's equal"). It is easier to get this confirmation when you have actual practicing mathematicians on staff, as it were; there are a lot of mathematicians getting into lean these days and the 0-based thing hasn't caused one bit of dissent that I have heard.

Here's the definition of a matrix: https://leanprover-community.github.io/mathlib_docs/find/matrix/src . It uses arbitrary finite sets, but further down the page you will see some theorems like https://leanprover-community.github.io/mathlib_docs/find/matrix.sub_left/src that specialize to the type "fin n", which is lean's version of our (0..^n). It is defined as {i : nat | i < n}, where nat is the inductive type of nonnegative integers. The type (1...n) would be the more cumbersome {i : nat | 0 < i /\ i <= n}, which has an extra conjunct that would have to be dealt with whenever you destructure the type (and working directly from the stated definition is much more common in lean than metamath). This is the sort of extra friction I am talking about - it's not just changing numbers in a few places.

Mario

On Wed, Sep 2, 2020 at 3:52 PM 'Stanislas Polu' via Metamath wrote:
Wouldn’t they be clearly differentiated in a type theory setup? How does coq/lean treat those? Are they the same concept or separated ones?

-stan

Thierry Arnoux

Sep 2, 2020, 10:09:34 PM
to meta...@googlegroups.com, Norman Megill
Hi,

As Mario mentioned, we have to look at the way matrix multiplication with vectors works. The existing ` maVecMul ` takes a matrix and a function of a single variable, and it would be easy to use words as-is if matrices were indexed starting with 0.

About concatenation: there are interesting properties of matrices by blocks, where blocks behave very much the same way as matrix entries themselves. The big matrix can be seen as a concatenation of its blocks, and Dijkstra's argument about index manipulation applies. In that case the corresponding vectors also work by block concatenation.

As for elements of a Euclidean vector space, Scott Fenton has defined them with indices in ` ( 1 ... N ) ` (see ~ elee), and I have reused that definition in ` EEhil ` (see ~ df-ehl).

BR,
_
Thierry

Mario Carneiro

Sep 2, 2020, 10:19:18 PM
to metamath
No, I'm not proposing a translation to or from Lean. (Well, I am looking into it, but that's an entirely separate discussion.) My purpose was only to draw a parallel to another theorem-prover library driven in large part by mathematicians, to send the message that this is a minor variation that is well within mathematicians' capacity to barely notice as different. The books don't change in response to usage, because they are frozen in time, but people can adapt to the technology. I could take a poll on the Lean chat, but I think the deck is stacked in favor of 0-based for the reasons I mentioned in the last email.


David A. Wheeler

Sep 2, 2020, 11:11:57 PM
to Mario Carneiro, metamath
I think there has been a slow march from one-based indexing to zero-based indexing in computing.

Many systems designed two decades ago supported one-based indexing, such as R and Matlab/Octave. But newer systems are pretty much uniformly zero-based.

I realize that abstract mathematics is not the same as computing, but they do influence each other.

It seems that we can often hide the issue of whether matrices are zero-based or one-based. Perhaps that would be the best course.
--- David A.Wheeler

Mario Carneiro

Sep 2, 2020, 11:53:16 PM
to David A. Wheeler, metamath
Don't forget Fortran arrays, which have user-configurable index bounds (!) but by default start at 1.

fl

Sep 3, 2020, 11:36:48 AM
to Metamath
No, I'm not proposing a translation to or from lean.

Why did you stop working for the Lean team by the way?

--
FL

Mario Carneiro

Sep 3, 2020, 12:48:44 PM
to metamath
I haven't stopped anything, but my time is finite, and I have a dissertation to finish. I'm pretty active on the Lean Zulip chat (https://leanprover.zulipchat.com/) if you want to say hi or show support for metamath, since right now I'm the token representative.

Mario


Norman Megill

Sep 3, 2020, 6:38:21 PM
to Metamath
On Wednesday, September 2, 2020 at 11:11:57 PM UTC-4 David A. Wheeler wrote:
I think there has been a slow march from one-based indexing to zero-based indexing in computing.

Many systems designed two decades ago supported one-based indexing, such as R and Matlab/Octave. But newer systems are pretty much uniformly zero-based.

I realize that abstract mathematics is not the same as computing, but they do influence each other.

Every linear algebra textbook I've seen, even relatively recent ones, starts at 1 for matrices. I've asked for a counterexample, but no one has been able to provide one, suggesting that there is not a trend towards using 0-based matrices in the linear algebra literature.

An important question is, who is Metamath intended for? What is its audience?

I have no doubt that a professional mathematician can easily feel comfortable with either 0 or 1-based matrices.

My concern is for someone who wants to use set.mm as a supplement to a book they are learning from. Apparently essentially 100% of linear algebra textbooks start at 1. For someone already struggling with new concepts, the disconnect between their book and set.mm's 0-based matrices is not going to be helpful.

That's about all I want to say on this. I'll accept whatever the rest of the people here decide.

Norm
 

fl

Sep 4, 2020, 7:09:29 AM
to Metamath
 Apparently essentially 100% of linear algebra textbooks start at 1. 


Well, I think there is now a consensus on the use of matrices, tuples and words.

-- 
FL
 

Alexander van der Vekens

Sep 5, 2020, 2:35:23 AM
to Metamath
I think there is no consensus to use index sets starting with 1, especially not for tuples/words (actually, there was a consensus to start with 0 for words; see the discussion in https://groups.google.com/g/metamath/c/UwTUuNPgaB0/m/NdWefzG4AgAJ). Regarding matrices, I am also in favour of starting with 1, but Norm's argument addresses only one of many criteria discussed here.

Alexander van der Vekens

Sep 5, 2020, 2:43:40 AM
to Metamath
As proposed by FL in https://groups.google.com/g/metamath/c/n5g69qfwBmE/m/McJAgtdSAgAJ, the additional assumption that the finite index sets are totally ordered should be sufficient to express and prove the Laplace expansion. Regarding the expression ` -1 ^ ( i + j ) `, a special concept of parity would have to be defined for (finite) totally ordered sets...

Benoit

Sep 5, 2020, 3:53:21 AM
to Metamath
I do not have strong opinions on this issue and I will not be the one to do the work, but I still think "starting at 0" for matrices is more natural.  Of course, the more important thing is to have as many results as possible stated for arbitrary finite sets (or even arbitrary sets, for some results).  Actually, I find starting at 0 even more preferable after reading Mario's arguments, and the shift from the literature is not that important (I think it won't even confuse beginners, since it is such a minor variation).

As Norm said, it depends on the intended audience.  I mentioned this question a few times when discussing other topics, and said it was bound to show up again and again, since many decisions depend on it.  Of course it's better to fulfill the needs of different audiences, but that's not always possible.  You don't write a textbook the same way you write a reference treatise (admittedly, with electronic material, one might have more flexibility).  Personally, I think it's time for Norm to realize the grandeur of what he created ;-) and that it's closer to Bourbaki's Elements than to your usual textbook.

Benoît

Mario Carneiro

Sep 5, 2020, 4:30:07 AM
to metamath
Regarding the Laplace expansion more specifically:

I assume that a determinant is defined over matrices M : A x A -> R, where A is a finite (unordered) set. Suppose we wish to perform cofactor expansion along row i e. A. The j'th cofactor (where j e. A) is given by the elements of M on (A\{i}) x (A\{j}), which is not again a square matrix, so we must first compose with swap(i,j) : A\{j} -> A\{i}. This is the natural way to restore squareness in this setting, but note that it is not the usual way it is done over {1..n}, where you just delete an element and shift everything down. The difference between these two operations (if you compose one with the inverse of the other) is a cycle of length |j - i|, and the sign of this permutation is, fortuitously, (-1)^(j+i). Thus if you state cofactor expansion using swap, you only need finite sets and you don't need the sign term at all.


fl

Sep 5, 2020, 8:19:48 AM
to Metamath

I think there is no consensus to use index sets starting with 1, especially not for tuples/words (actually, there was a consensus to start with 0 for words, see discussion in https://groups.google.com/g/metamath/c/UwTUuNPgaB0/m/NdWefzG4AgAJ). 

There has never been any consensus regarding words. There was an authoritarian decision by Carneiro, which was then endorsed
by people who wanted to show him their affection or who didn't give a damn.

 
Regarding matrices, I am also in favour of starting with 1, but Norm's argument is only for one of many criteria discussed here.


Sure. :)
 
-- 
FL

fl

unread,
Sep 5, 2020, 8:26:26 AM9/5/20
to Metamath

In any case, dissociating the treatment of matrices, tuples and words is a bad idea.

-- 
FL