
May 11, 2015, 6:02:26 PM

to metamath

I'm starting to experiment with a "decimal point" operator, written as an isolated period,

in my mathbox. For example, ` 1 . 5 = 3 / 2 ` , and ` 3 . ; 1 4 = ; ; 3 1 4 / ; ; 1 0 0 ` .

Comments? I expect that "numdigits" can be written in a better way,

but I'm not sure how. This could be all wrong, though I *think* it's right.

--- David A. Wheeler

$(

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Decimal point

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

$)

$c . $. $( Decimal point. $)

$c numdigits $. $( Function, returns number of digits in an ` NN0 ` . $)

cdp $a class . $.

cnumdigits $a class numdigits $.

${

$d x y $.

$( Define the ` numdigits ` function, which returns the number

of digits (in base 10) of a natural number in ` NN0 ` .

E.g., ` ( numdigits `` ; 1 3 ) = 2 ` . We intentionally define

` ( numdigits `` 0 ) = 1 ` .

Perhaps rewrite for ~ df-mpt2 . $)

df-numdigits $a |- numdigits =

{ <. <. x , y >. , z >. | ( ( x e. NN0 /\ y e. NN ) /\

( ( x = 0 /\ y = 1 ) \/

( ( x <_ ( 10 ^ ( y - 1 ) ) ) /\ ( x <_ ( 10 ^ y ) ) ) ) ) } $.

$( Define the ` . ` (decimal point) operator.

For example, ` 1 . 5 = 3 / 2 ` , and

` 3 . ; 1 4 = ; ; 3 1 4 / ; ; 1 0 0 ` $)

df-dp $a |- . = ( x e. ZZ , y e. NN0 |->

( x + ( if ( x < 0 , -u 1 , 1 ) x. ( y / ( numdigits ` y ) ) ) ) ) $.

$}
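As a concrete cross-check, here is a small Python sketch of the intended arithmetic (the names `numdigits` and `dp` are my own stand-ins for the Metamath tokens, not part of any database). Note that df-dp as written divides by ` ( numdigits `` y ) ` itself; the stated examples seem to require dividing by ` ( 10 ^ ( numdigits `` y ) ) `, which is what the sketch does:

```python
from fractions import Fraction

def numdigits(n: int) -> int:
    """Base-10 digit count of a nonnegative integer, with numdigits(0) = 1."""
    return max(1, len(str(abs(n))))

def dp(x: int, y: int) -> Fraction:
    """Sketch of ` x . y `: attach y as the fractional digits of x.
    Divides by 10 ** numdigits(y) (not numdigits(y)) to match the examples."""
    sign = -1 if x < 0 else 1
    return x + sign * Fraction(y, 10 ** numdigits(y))

# ` 1 . 5 = 3 / 2 ` and ` 3 . ; 1 4 = ; ; 3 1 4 / ; ; 1 0 0 `
assert dp(1, 5) == Fraction(3, 2)
assert dp(3, 14) == Fraction(314, 100)
```

Even with that repair, `dp(3, 1)` gives 3.1, so a value like 3.01 is not expressible; that is the flaw raised in the next message.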


May 11, 2015, 6:05:27 PM

to metamath

A clever idea, but how do you represent 3.01?


May 11, 2015, 8:01:01 PM

to meta...@googlegroups.com, Mario Carneiro

On May 11, 2015 6:05:26 PM EDT, Mario Carneiro <di....@gmail.com> wrote:

>A clever idea, but how do you represent 3.01?

Not very well. Hmmm, I am glad I said experimental. Back to the drawing board.

--- David A. Wheeler

May 11, 2015, 9:48:39 PM

to metamath

Okay, here's a different experimental approach for the decimal point. The "." is

followed by a digit if you want tenths; otherwise use ";." to add more digits

(similar to ";"). The idea is that

` 1 . 5 = 3 / 2 ` , and ` 2 . ;. 7 ;. 1 8 = ( ; ; ; 2 7 1 8 / ; ; ; 1 0 0 0 ) `

Hopefully this has a completely different set of fundamental flaws :-).

--- David A. Wheeler

===================================================

$(

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Decimal point

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

$)

$c ;. $. $( Decimal fraction indicator for digits after decimal point $)


$c . $. $( Decimal point. $)

$( Constant used for decimal fraction constructor. $)
cdfrac $a class ;. A B $.

cdp $a class . $.

$( Define the "decimal fraction constructor",
which is used to build up "decimal fractions" in base 10.

This is intentionally similar to ~ df-dec , which is

` |- ; A B = ( ( 10 x. A ) + B ) ` .

$)

df-dfracc $a |- ;. A B = ( A + ( B / 10 ) ) $.

${

$d x y $.

$( Define the ` . ` (decimal point) operator.

For example, ` 1 . 5 = 3 / 2 ` , and

` 2 . ;. 7 ;. 1 8 = ( ; ; ; 2 7 1 8 / ; ; ; 1 0 0 0 ) ` $)

df-dp $a |- . = ( x e. ZZ , y e. NN0 |->

( x + ( if ( x < 0 , -u 1 , 1 ) x. ( y / 10 ) ) ) ) $.
$}
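A quick sanity check of this version in Python (a sketch; `dfrac` and `dp` are my stand-ins for ` ;. ` and ` . `), using exact rationals to avoid floating-point noise:

```python
from fractions import Fraction

def dfrac(a, b) -> Fraction:
    """` ;. A B = ( A + ( B / 10 ) ) ` (df-dfracc)."""
    return Fraction(a) + Fraction(b) / 10

def dp(x: int, y) -> Fraction:
    """This version's ` . ` operator: shift the fractional part right one place."""
    sign = -1 if x < 0 else 1
    return x + sign * Fraction(y) / 10

# ` 1 . 5 = 3 / 2 `
assert dp(1, 5) == Fraction(3, 2)
# ` 2 . ;. 7 ;. 1 8 ` parses as dp(2, dfrac(7, dfrac(1, 8))) = 2.718
assert dp(2, dfrac(7, dfrac(1, 8))) == Fraction(2718, 1000)
```

Because ` ;. ` nests on the right, each additional digit lands one decimal place further out, so 3.01 is now expressible as dp(3, dfrac(0, 1)).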

May 11, 2015, 9:57:23 PM

to metamath

Yes, that should work. Actually, there is no need for separate "( x . y )" and ";. x y" constructors, as they are the same (you don't have to support negative numbers because -u ( x . y ) will do the trick just as well), although it may be a bit strange and hard to read to write

;. ; ; 1 2 3 ;. 7 ;. 1 8

to mean 123.718 (as opposed to "( ; ; 1 2 3 . ;. 7 ;. 1 8 )"). Note that if you use df-ov for the decimal point operator, parentheses around the number are required.


May 12, 2015, 8:30:17 AM

to metamath

On Mon, 11 May 2015 21:57:23 -0400, Mario Carneiro <di....@gmail.com> wrote:

> Yes, that should work. Actually, there is no need for separate "( x . y )"

> and ";. x y" constructors, as they are the same (you don't have to support

> negative numbers because -u ( x . y ) will do the trick just as well),

> although it may be a bit strange and hard to read to write

>

> ;. ; ; 1 2 3 ;. 7 ;. 1 8

>

> to mean 123.718 (as opposed to "( ; ; 1 2 3 . ;. 7 ;. 1 8 )"). Note that

> if you use df-ov for the decimal point operator, parentheses around the

> number are required.

Thanks for the tips! The goal is to make this relatively easy to read.

I worry that allowing ANYTHING for the left and right operands of "." will create

its own problems; e.g., I'd like to prevent accidentally writing ` ( -u 4 . 5 ) ` if that won't work.

Using "_" instead of ";." and simplifying the operator "." helps. Fixed examples:

` ( 1 . 5 ) = ( 3 / 2 ) ` , and

` -u ( ; 3 2 . _ 7 _ 1 8 ) = -u ( ; ; ; ; 3 2 7 1 8 / ; ; ; 1 0 0 0 ) `

--- David A. Wheeler

==============================================

$(

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Decimal point

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

$)

$c . $. $( Decimal point. $)

$( Constant used for decimal fraction constructor. $)

$c _ $.

cdfrac $a class _ A B $.

$( Decimal point operator. $)

cdp $a class . $.

$( Define the "decimal fraction constructor",

which is used to build up "decimal fractions" in base 10.

This is intentionally similar to ~ df-dec . $)
df-dfracc $a |- _ A B = ( A + ( B / 10 ) ) $.

${

$d x y $.

$( Define the ` . ` (decimal point) operator.

` -u ( ; 3 2 . _ 7 _ 1 8 ) = -u ( ; ; ; ; 3 2 7 1 8 / ; ; ; 1 0 0 0 ) `

Unary minus is typically applied outside the parentheses.

$)

df-dp $a |- . = ( x e. ZZ , y e. NN0 |-> ( x + ( y / 10 ) ) ) $.

$}

May 12, 2015, 1:31:45 PM

to metamath

Oops, just noticed, you want df-dp to have domain ( NN0 X. RR ) or ( RR X. RR ) or else you can only express tenths (i.e. the standard closure for decimals is "RR" not "NN0", although it may be useful to express the stronger closure in QQ or 10-adic rationals occasionally).

Since ( A . B ) and _ A B are the same, I would give them similar names and also define "." in terms of "_":

df-dp2 $a |- _ A B = ( A + ( B / 10 ) ) $.

df-dp $a |- . = ( x e. NN0 , y e. RR |-> _ x y ) $.

dpval.1 $e |- A e. NN0 $.

dpval.2 $e |- B e. RR $.

dpval $p |- ( A . B ) = _ A B $.

dp2cl $p |- _ A B e. RR $.

dpcl $p |- ( A . B ) e. RR $.

dpfrac1 $p |- ( A . B ) = ( ; A B / 10 ) $.

dpfracp1.1 $e |- ( ; A B . C ) = ( D / E ) $.

dpfracp1 $p |- ( A . _ B C ) = ( D / ; E 0 ) $.

dpdiv1.1 $e |- ( ( B x. P ) + R ) = A $.

dpdiv1 $p |- ( A / P ) = ( B + ( R / P ) ) $.

dpdiv.1 $e |- ( D / P ) = ( C + ( R / P ) ) $.

dpdiv.2 $e |- ( ( B x. P ) + D ) = A $.

If you feel up to it, see if you can use these theorems to prove

|- ( 1 / 7 ) = ( ( 0 . _ 1 _ 4 _ 2 _ 8 _ 5 7 ) + ( ( 0 . _ 0 _ 0 _ 0 _ 0 _ 0 1 ) / 7 ) )
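The proposed identity is easy to confirm with exact rational arithmetic; in this Python sketch the two decimal terms are expanded by hand into the fractions they denote:

```python
from fractions import Fraction

# ( 0 . _ 1 _ 4 _ 2 _ 8 _ 5 7 ) denotes 0.142857 and
# ( 0 . _ 0 _ 0 _ 0 _ 0 _ 0 1 ) denotes 0.000001
lhs = Fraction(1, 7)
rhs = Fraction(142857, 10**6) + Fraction(1, 10**6) / 7
assert lhs == rhs  # since 142857 * 7 + 1 = 999999 + 1 = 10**6
```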


Mario


May 12, 2015, 6:09:55 PM

to metamath, metamath

On Tue, 12 May 2015 13:31:45 -0400, Mario Carneiro <di....@gmail.com> wrote:

> Oops, just noticed, you want df-dp to have domain ( NN0 X. RR ) or ( RR X.

> RR ) or else you can only express tenths (i.e. the standard closure for

> decimals is "RR" not "NN0", although it may be useful to express the

> stronger closure in QQ or 10-adic rationals occasionally).

You're right. Actually, I think the right-hand closure should be "QQ", not "RR".

The "." notation can't handle irrational numbers with a finite number of digits :-).

> Since ( A . B ) and _ A B are the same, I would give them similar names and

> also define "." in terms of "_":

> dp2cl $p |- _ A B e. RR $.

I can prove this:

dp2cl $p |- ( ( A e. QQ /\ B e. QQ ) -> _ A B e. QQ ) $=

( cq wcel wa cdp2 c10 cdiv co caddc df-dp2 10nn nnq ax-mp cc0 wne

cn 0re 10re 10pos ltneii qdivcl mp3an3 qaddcl sylan2 syl5eqel

mpan2 ) ACDZBCDZEABFABGHIZJIZCABKUIUHUJCDZUKCDUIGCDZULGQDUMLGMNUI

UMGOPULOGRSTUABGUBUCUGAUJUDUEUF $.

> Also, a notation is good but much more important is an algorithm for

> working with decimals - otherwise what is the point? For now, we would want

> a way to convert between decimal form and fraction form:

definitions are "wrong" (for some sense of wrong).

--- David A. Wheeler

May 12, 2015, 6:30:12 PM

to metamath

On Tue, May 12, 2015 at 6:09 PM, David A. Wheeler <dwhe...@dwheeler.com> wrote:

On Tue, 12 May 2015 13:31:45 -0400, Mario Carneiro <di....@gmail.com> wrote:

> Oops, just noticed, you want df-dp to have domain ( NN0 X. RR ) or ( RR X.

> RR ) or else you can only express tenths (i.e. the standard closure for

> decimals is "RR" not "NN0", although it may be useful to express the

> stronger closure in QQ or 10-adic rationals occasionally).

You're right. Actually, I think the right-hand closure should be "QQ", not "RR".

The "." notation can't handle irrational numbers with a finite number of digits :-).

This was what I meant by "it may be useful to express the stronger closure in QQ or 10-adic rationals occasionally". The "correct" strongest closure for the notation is what I am calling "10-adic rationals" (by analogy to dyadic rationals, not p-adic numbers), i.e. { x | E. y e. NN0 E. z e. NN0 x = ( y / ( 10 ^ z ) ) }. However, this is sharper than usually needed. Even QQ-closure is unnecessarily sharp, and I am suggesting RR-closure because we have more convenience theorems for RR than QQ. (We could use CC-closure, but then we wouldn't be able to do ordering.)
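The "10-adic rationals" have a simple concrete characterization: q = y / 10^z for some y, z in NN0 iff q is nonnegative and its reduced denominator has no prime factors besides 2 and 5. A hedged Python sketch (the name `is_tenadic` is mine):

```python
from fractions import Fraction

def is_tenadic(q: Fraction) -> bool:
    """True iff q = y / 10**z for nonnegative integers y and z, i.e. q >= 0
    and the reduced denominator of q divides a power of 10."""
    if q < 0:
        return False
    d = q.denominator
    for p in (2, 5):          # strip all factors of 2 and 5
        while d % p == 0:
            d //= p
    return d == 1

assert is_tenadic(Fraction(2718, 1000))   # 2.718 is a finite decimal
assert not is_tenadic(Fraction(1, 7))     # 1/7 is not
```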

> dp2cl $p |- _ A B e. RR $.

I can prove this:

dp2cl $p |- ( ( A e. QQ /\ B e. QQ ) -> _ A B e. QQ ) $=

( cq wcel wa cdp2 c10 cdiv co caddc df-dp2 10nn nnq ax-mp cc0 wne

cn 0re 10re 10pos ltneii qdivcl mp3an3 qaddcl sylan2 syl5eqel

mpan2 ) ACDZBCDZEABFABGHIZJIZCABKUIUHUJCDZUKCDUIGCDZULGQDUMLGMNUI

UMGOPULOGRSTUABGUBUCUGAUJUDUEUF $.

This is true, but when I talk about "default closure" I mean the closure that other theorems expect for the everyday properties. For example in the dec* theorems everything uses NN0 closure even though that is stronger than necessary for most theorems which could have used RR or CC instead; this way many of these closure proofs overlap each other in a given application resulting in shorter proofs. Your QQ-closure theorem is a useful addition to the collection, but is unlikely to be used often (unless we were doing a Diophantine approximation proof or something) if RR closure becomes the default.

By the way, you'll also notice that I use inference form for the dec* theorems, even though I have long been a proponent of theorem form or deduction form in regular theorems. The reason for this is that these theorems are exclusively for *concrete* numbers, for which all hypotheses are provable unconditionally; i.e. there is no reason to discuss theorems like ( x e. CC -> 1x.x e. CC ) because we have other notations for that sort of thing (the exception being in the dp* theorems themselves, which are effectively metatheorems over different numerical facts, each of which needs no hypotheses).


Mario

May 12, 2015, 7:22:29 PM

to metamath, metamath

On Tue, 12 May 2015 18:30:12 -0400, Mario Carneiro <di....@gmail.com> wrote:

> Even QQ-closure is unnecessarily sharp, and I am suggesting RR-closure because

> we have more convenience theorems for RR than QQ.

In this case I agree, because it's plausible to add operations that might find RR useful

(e.g., infinite repeats of various kinds).

However, I think if a function truly makes no sense outside some domain, it

*should* be sharp. Functions (in the usual mathematical sense)

should be as general as possible... but no more.

Claiming a function accepts a value when there's general agreement that it doesn't

seems (to me) to be inaccurate. Clearly in many cases

mathematicians don't specify (or don't agree) on what happens outside its

"usual range", but that's different than "we all agree this is NOT allowed".

> This is true, but when I talk about "default closure" I mean the closure

> that other theorems expect for the everyday properties.

I think it's more accurate to represent

the limitations of the operators as they really are, and if a collection of

theorems needs some utility corollaries for some additional sets (e.g., QQ),

I don't see a serious problem. Definitions are cheap.

> By the way, you'll also notice that I use inference form for the dec*

> theorems, even though I have long been a proponent of theorem form or

> deduction form in regular theorems. The reason for this is that these

> theorems are exclusively for *concrete* numbers, for which all hypotheses

> are provable unconditionally; i.e. there is no reason to discuss theorems

> like ( x e. CC -> 1x.x e. CC ) because we have other notations for that

> sort of thing (the exception being in the dp* theorems themselves, which

> are effectively metatheorems over different numerical facts, each of which

> needs no hypotheses).

One annoying problem is that you currently have to do 5 steps every time you want to prove, say,

that 2 =/= 4. That especially hits in the concrete examples.

I've added a bunch of small utility proofs of such inequalities to my mathbox.

In the longer term I think it'd be worth considering adding them to the main body, so that stuff

like 0 =/= 10 is automatically filled in (instead of something to laboriously prove),

and so that proofs involving concrete numbers are shorter and easier for

humans to follow. Yes, it's a long combinatorial list, but each proof is uncontroversial.

--- David A. Wheeler

May 12, 2015, 10:26:43 PM

to metamath

On Tue, May 12, 2015 at 7:22 PM, David A. Wheeler <dwhe...@dwheeler.com> wrote:

On Tue, 12 May 2015 18:30:12 -0400, Mario Carneiro <di....@gmail.com> wrote:

> Even QQ-closure is unnecessarily sharp, and I am suggesting RR-closure because

> we have more convenience theorems for RR than QQ.

In this case I agree, because it's plausible to add operations that might find RR useful

(e.g., infinite repeats of various kinds).

However, I think if a function truly makes no sense outside some domain, it

*should* be sharp. Functions (in the usual mathematical sense)

should be as general as possible... but no more.

Claiming a function accepts a value when there's general agreement that it doesn't

seems (to me) to be an inaccurate. Clearly in many cases

mathematicians don't specify (or don't agree) on what happens outside its

"usual range", but that's different than "we all agree this is NOT allowed".

Ah, actually I forgot for a moment that "." is a function and not a syntax construction. I agree with you that "." should have the most general domain on which it makes sense, which is in fact ( CC X. CC ) in this case. However, the utility theorems should be biased toward showing and using RR-closure for the reasons I mentioned.

But assuming that df-dp is being created for roughly the same reasons as df-dec was, then it will never be used in a way such that infinite repeats are allowed - the *only* thing which should show up are actual (concrete) numbers like 123.456, not 123.44... or 123.n5 or anything involving a variable, so that it would actually be safe (but inconvenient) to make the domain be QQ or something smaller.

> This is true, but when I talk about "default closure" I mean the closure

> that other theorems expect for the everyday properties.

It's not hard to show that QQ is a subset of RR :-).

Most of the dec* theorems are trivial. However, they have been optimized to be put together like building blocks in order to prove complicated arithmetic facts with a minimum of extra steps. Sure, applying ax-mp + qre is only two steps (one if you use sselii and reuse qssre), but if you have to prove fifty closures in a big arithmetic proof that's 50 steps you want to avoid. (Count how many times deccl gets used in 2503lem2.)

One annoying problem is that you currently have to do 5 steps every time you want to prove, say,

that 2 =/= 4. That especially hits in the concrete examples.

I've added a bunch of small utility proofs of such inequalities to my mathbox.

In the longer term I think it'd be worth considering adding them to the main body, so that stuff

like 0 =/= 10 is automatically filled in (instead of something to laboriously prove),

and so that proofs involving concrete numbers are shorter and easier for

humans to follow. Yes, it's a long combinatorial list, but each proof is uncontroversial.

I would not recommend this. Currently we have an efficient means of proving any dis-equality of numeric terms via either "x < y => y =/= x => x =/= y" or "y < x => x =/= y". The nice thing about this approach is that since inequality is transitive, you only need O(n) theorems to show that n different numbers are distinct, while using =/= directly would require O(n^2) theorems. It might make sense to make a theorem for "x < y => x =/= y", but I don't think 1 =/= 10 should be a separate theorem for the same reason that ( 1 + 3 ) = 4 is not - with the single extra theorem addcomli we can eliminate half the number of facts necessary. In short, any general theorem you can think of to shorten these mini-proofs is fine, but I would rather not add another 100 theorems just to save one step. (I don't usually count closure because it usually overlaps with something else anyway.)
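The counting argument can be made concrete: separating the numbers 0..10 pairwise takes n(n-1)/2 = 55 direct =/= facts, but only the n-1 = 10 adjacent < facts plus transitivity. A small Python sketch (an illustration of the combinatorics only, not a model of set.mm proof steps):

```python
from itertools import combinations

nums = list(range(11))  # the numbers 0 .. 10, so n = 11

# Direct approach: one =/= theorem per unordered pair -- O(n^2)
pairwise = set(combinations(nums, 2))
assert len(pairwise) == 55

# Chain approach: the n - 1 adjacent facts 0 < 1, 1 < 2, ..., 9 < 10 -- O(n)
chain = {(k, k + 1) for k in nums[:-1]}
assert len(chain) == 10

# The transitive closure of the chain recovers every pairwise < fact,
# and x < y then gives x =/= y via a theorem like ltneii.
closure = set(chain)
while True:
    derived = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
    if derived <= closure:
        break
    closure |= derived
assert closure == pairwise
```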

By the way, even ignoring the recent "out-of-domain" discussion, ltne has an unnecessary hypothesis: Either one of

( ( A e. RR /\ A < B ) -> B =/= A )

( ( B e. RR /\ A < B ) -> B =/= A )


is provable, because if B = A then one is real iff the other is. If you allow treating < as a relation, then you can eliminate both assumptions by soirri.

Mario

May 16, 2015, 9:30:32 PM

to metamath

I said:

> > One annoying problem is that you currently have to do 5 steps every time

> > you want to prove, say,

> > that 2 =/= 4. That especially hits in the concrete examples.

> >

> > I've added a bunch of small utility proofs of such inequalities to my mathbox.

> > In the longer term I think it'd be worth considering adding them to the

> > main body, so that stuff

> > like 0 =/= 10 is automatically filled in (instead of something to

> > laboriously prove),

> > and so that proofs involving concrete numbers are shorter and easier for

> > humans to follow. Yes, it's a long combinatorial list, but each proof is

> > uncontroversial.


On Tue, 12 May 2015 22:26:42 -0400, Mario Carneiro <di....@gmail.com> wrote:

> I would not recommend this. Currently we have an efficient means of proving

> any dis-equality of numeric terms via either "x < y => y =/= x => x =/= y"

> or "y < x => x =/= y".

I disagree. It's efficient if the most important measure is to

minimize the number of theorems, but I think that is an irrelevant measure.

I think the goal should be to make each proof as simple

and clear as reasonably possible, as well as making each one a little

easier to create... even if there are more theorems.

In programming there's something called the "DRY" principle, which means

"Don't repeat yourself". If you have to repeatedly prove something, you're doing it wrong.

Programs generally try to reuse components as subroutines and methods,

and the same applies here.

It's uncontroversial that 10 =/= 0 ; you shouldn't need to reprove that in every

theorem that needs it, and human readers shouldn't have to read that proof more than once.

Indeed, most human readers will see 10 =/= 0 and stop there (they're unlikely to

NEED to see the proof), so don't make them see it in any other theorem.

> The nice thing about this approach is that since

> inequality is transitive, you only need O(n) theorems to show that n

> different numbers are distinct, while using =/= directly would require

> O(n^2) theorems.

It's often a GOOD idea to use O(n^2) approaches. I stop at 10, so "n" is small.

They don't add that many, and the proofs are easy (so verifying them is really fast).

I agree that trying to do this for *all* cases is excessive. But there are many

cases where it's important to show that specific numbers are distinct.

Prove those once for small numbers, and move on, so that all the other proofs are simpler.

> By the way, even ignoring the recent "out-of-domain" discussion, ltne has

> an unnecessary hypothesis: Either one of

>

> ( ( A e. RR /\ A < B ) -> B =/= A )

> ( ( B e. RR /\ A < B ) -> B =/= A )

>

> is provable, because if B = A then one is real iff the other is. If you

> allow treating < as a relation, then you can eliminate both assumptions by soirri.

In any case, this issue is EXACTLY WHY these "ne" proofs should be separate.

If the proofs that these "small numbers are distinct" were in the database,

then you could fix up their proofs with insights like these, and all the other theorems that

used it would typically NOT require any changes at all. As it is, if you switch to soirri, then

for consistency you need to change many other proofs, and yet those proofs aren't

about number inequality at all (so they just muddy things up).

--- David A. Wheeler

May 17, 2015, 4:51:13 AM

to meta...@googlegroups.com, dwhe...@dwheeler.com, dwhe...@dwheeler.com

On 5/16/15 9:30 PM, David A. Wheeler wrote:

> I said:

>>> One annoying problem is that you currently have to do 5 steps every time

>>> you want to prove, say,

>>> that 2 =/= 4. That especially hits in the concrete examples.

>>>

>>> I've added a bunch of small utility proofs of such inequalities to my mathbox.

>>> In the longer term I think it'd be worth considering adding them to the

>>> main body, so that stuff

>>> like 0 =/= 10 is automatically filled in (instead of something to

>>> laboriously prove),

>>> and so that proofs involving concrete numbers are shorter and easier for

>>> humans to follow. Yes, it's a long combinatorial list, but each proof is

>>> uncontroversial.

>

> On Tue, 12 May 2015 22:26:42 -0400, Mario Carneiro <di....@gmail.com> wrote:

>> I would not recommend this. Currently we have an efficient means of proving

>> any dis-equality of numeric terms via either "x < y => y =/= x => x =/= y"

>> or "y < x => x =/= y".

>

> I disagree. It's efficient if the most important measure is to

> minimize the number of theorems, but I think that is an irrelevant measure.


It is a very relevant measure, or more specifically, the size of the

set.mm file is a very relevant measure. I have put a significant amount

of work (as have many others) in keeping the size reasonable, and

everyone benefits from faster run times of various activities.

You may think that adding all inequality combinations requires only a

small additional amount of space, and of course relatively speaking that

is true. But you are looking at a very tiny piece of set.mm and not the

whole picture. If we adopted the attitude of throwing in every trivial

variation of convenience theorems, set.mm would be much larger than it

is now.


> I think the goal should be to make each proof as simple

> and clear as reasonably possible,

Seriously, I don't see that it is any harder to read a proof such as

phllem, despite the occurrence of an extra ltneii step. If a

person can't follow that 2 < 8 implies 8 =/= 2, he or she can always

click on ltneii to become enlightened.


> as well as making each one a little

> easier to create... even if there are more theorems.

Almost anyone creating a proof needing a numeric inequality will search for

'|- ? =/= ?', find 2ne0, and see how 2 =/= 0 is proved immediately from

0 < 2. Then they can do the same for their missing fact of 8 =/= 7 or

whatever.

When a trivial arithmetic fact such as 2ne0 becomes heavily used, we

will add that theorem, because the net size of set.mm decreases as a

result. Until then, we don't. I have used the criteria of a theorem

"paying for itself" in this way many times in the past to reject or

accept a marginally useful theorem.

When the time came to add 2ne0, the task of proving it was trivial, as

was incorporating it into all proofs benefiting from it.


> In programming there's something called the "DRY" principle, which means

> "Don't repeat yourself". If you have to repeatedly prove something, you're doing it wrong.

> Programs generally try to reuse components as subroutines and methods,

> and the same applies here.

When a simple fact is proved repeatedly, then we do add it, as in the

case of 2ne0 (which did not exist until it became useful). It was

trivial to determine which theorems would benefit and it was trivial to

add this "subroutine" and update all proofs containing it by using a

'minimize_with' script. Without fear of introducing bugs. :)
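For readers unfamiliar with the tool, the 'minimize_with' workflow described here looks roughly like the following metamath-exe session. This is an editor's sketch for illustration; the exact prompts and option spellings vary by version:

```
$ ./metamath set.mm
MM> prove phllem
MM-PA> minimize_with 2ne0
MM-PA> save new_proof /compressed
MM-PA> exit
MM> write source set.mm
MM> exit
```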

Norm


May 17, 2015, 3:02:17 PM

to metamath

David A. Wheeler wrote:

> > ... It's efficient if the most important measure is to

> > minimize the number of theorems, but I think that is an irrelevant measure.

On Sun, 17 May 2015 01:51:13 -0700 (PDT), Norman Megill <n...@alum.mit.edu> wrote:

> It is a very relevant measure, or more specifically, the size of the

> set.mm file is a very relevant measure. I have put a significant amount

> of work (as have many others) in keeping the size reasonable, and

> everyone benefits from faster run times of various activities.

I think things have changed and make that less important.

Today's smartphones and laptops typically have 8G+ of RAM, and even more storage.

These "faster run times" differences are less than a second.

It's better to optimize for the human *readers* of the individual proofs, not for the tools.

> > I think the goal should be to make each proof as simple

> > and clear as reasonably possible,

>

> Seriously, I don't see that it is any harder to read a proof such as

> phllem, despite the occurrence of an extra ltneii step. If a

> person can't follow that 2 < 8 implies 8 =/= 2, he or she can always

> click on ltneii to become enlightened.

I took a look at phllem, and I think it's great evidence for my point.

In phllem, 20% of the steps (9 of the 45) are

steps that ONLY exist to state the ltneii *preconditions* for proving that two

numbers are distinct. E.g., to prove 8 =/= 1, we first need 1re, 8re, and 1lt8

before we can even *apply* ltneii. For 8 =/= 2, we need 2re and 2lt8

(at least we can reuse the facts for 8). And so on.

You can reduce the length of phllem by 20% by proving ahead-of-time statements like 8 =/= 2.

And I would be surprised if it's alone.

Even by the measure "it is a reused fact", the fact that 0..10 are

distinct from each other is already in use in multiple places in set.mm.

I think there's another issue, anyway: proofs should optimize for *human* readers.

Sure, a reader can follow an inline proof that 8 =/= 2.

But the human reader should not need to. Most humans know that 8 =/= 2 already,

and higher-level proofs should not be burdened with trivia like that.

Higher-level proofs should be able to build on lower-level things, without re-determining them.

--- David A. Wheeler

P.S. Here are the steps in "phllem" that exist to create preconditions for ltneii, along with the use of ltneii itself:

4 1re 7364 . . . . . . . 8 ⊢ 1 ∈ ℝ

5 8re 8037 . . . . . . . 8 ⊢ 8 ∈ ℝ

6 1lt8 8122 . . . . . . . 8 ⊢ 1 < 8

(7 4, 5, 6 ltneii 7447 . . . . . . 7 ⊢ 8 ≠ 1 )

11 2re 8029 . . . . . . . 8 ⊢ 2 ∈ ℝ

12 2lt8 8121 . . . . . . . 8 ⊢ 2 < 8

(13 11, 5, 12 ltneii 7447 . . . . . . 7 ⊢ 8 ≠ 2 )

19 5re 8034 . . . . . . . 8 ⊢ 5 ∈ ℝ

20 5lt8 8118 . . . . . . . 8 ⊢ 5 < 8

(21 19, 5, 20 ltneii 7447 . . . . . . 7 ⊢ 8 ≠ 5 )

30 6re 8035 . . . . 5 ⊢ 6 ∈ ℝ

31 6lt8 8117 . . . . 5 ⊢ 6 < 8

(32 30, 5, 31 ltneii 7447 . . . 4 ⊢ 8 ≠ 6 )


May 17, 2015, 6:00:19 PM

to meta...@googlegroups.com, dwhe...@dwheeler.com, dwhe...@dwheeler.com

On 5/17/15 3:02 PM, David A. Wheeler wrote:

> David A. Wheeler wrote:

>>> ... It's efficient if the most important measure is to

>>> minimize the number of theorems, but I think that is an irrelevant measure.

>

> On Sun, 17 May 2015 01:51:13 -0700 (PDT), Norman Megill <n...@alum.mit.edu> wrote:

>> It is a very relevant measure, or more specifically, the size of the

>> set.mm file is a very relevant measure. I have put a significant amount

>> of work (as have many others) in keeping the size reasonable, and

>> everyone benefits from faster run times of various activities.

>

> I think things have changed and make that less important.

> Today's smartphones and laptops typically have 8G+ of RAM, and even more storage.

> These "faster run times" differences are less than a second.


Memory is not the main issue, although load times increase. CPU speed

is not keeping up with the growth rate of set.mm and hasn't

significantly changed in quite a while. There is a noticeable

difference in the "crispness" of many commands over a few years ago

when the database was half its present size.


> It's better to optimize for the human *readers* of the individual proofs, not for the tools.

I cannot see how not having these inequalities proved separately will

make any difference to a human reader. Some might even prefer to see

them proved in place rather than having to visit another web page.


> I took a look at phllem, and I think it's great evidence for my point.

>

> In phllem, 20% of the steps (9 of the 45) are

> steps that ONLY exist to state the ltneii *preconditions* for proving that two

> numbers are distinct. E.G., to prove 8 =/= 1, we first need 1re, 8re, and 1lt8

> before we can even *apply* ltneii. For 8 =/= 2, we need 2re and 2lt8

> (at least we can reuse the facts for 8). And so on.

phllem is the only place in the entire database where 8 =/= 2 is used.

So it does not make sense to break it out as a separate theorem.

In any case, phllem is in an advanced topic that is not of interest to

anyone at the level of not knowing that 8 =/= 2. I chose it because

actual examples using these inequalities (with a few exceptions

such as 2ne0) are hard to find.

Not having it as a separate theorem also imposes no burden on the proof

creator, since 8 =/= 2 is proved automatically with 'improve

all/depth=1'.


> You can reduce the length of phllem by 20% by proving ahead-of-time statements like 8 =/= 2.

Since it's used only once, you are just moving that part of the proof

elsewhere while adding unnecessary overhead. In your programming

analogy, it would not make sense to create a new function that contains a

single line of code and is called once, yet adds all the overhead of

pushing and popping the stack for the function call; you've made the program

less efficient and harder to follow.


> And I would be surprised if it's alone.

It's alone, so you can be surprised. :)

> Even by the measure "it is a reused fact", the fact that 0..10 are

> distinct from each other is already in use in in multiple places in set.mm.

Only a handful from all the possibilities are used, and the ones that

are used are typically used once.

Again, once an inequality is used enough to "pay for itself" as a

standalone theorem, it will definitely be added, you can be assured of

that. But for a trivial theorem with a one-step proof (ok, 3 or 4 if

you count hypothesis matching; I mean one primary inference step), it

will require dozens of uses to achieve that, since you might be saving

only a couple of characters in the compressed proof compared to a

hundred or more characters of overhead for the standalone theorem.
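The break-even arithmetic sketched above can be made concrete. The numbers below are illustrative assumptions only (per-use savings and per-theorem overhead vary from case to case), not measurements of set.mm:

```python
# Back-of-envelope "pays for itself" estimate for a standalone theorem.
# Assumed (illustrative) figures: each use saves ~3 characters in a
# compressed proof, while the standalone theorem costs ~150 characters
# of statement, comment, and proof overhead in the database.

def break_even(overhead_chars: int, savings_per_use: int) -> int:
    """Smallest number of uses at which total savings cover the overhead."""
    uses = 0
    while uses * savings_per_use < overhead_chars:
        uses += 1
    return uses

print(break_even(150, 3))  # with these assumptions: 50 uses
```

Under these assumed figures, a trivial one-step theorem indeed needs dozens of uses before the database shrinks on net, which is the point being made.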

The theorem 2ne0 is an exception; it is used 177 times, and set.mm clearly

benefits from having it. That is why we broke it out as a separate theorem.

We will do the same for others that may arise in the future.

In the meantime, it is a waste to have a multitude of unused trivial

variants of existing theorems. They can be created as needed once they

become needed. I see nothing special about the inequalities, since the

ordering theorems imply them immediately while containing more

information.


> I think there's another issue, anyway: proofs should optimize for *human* readers.

> Sure, a reader can follow an inline proof that 8 =/= 2.

> But the human reader should not need to. Most humans know that 8 =/= 2 already,

> and higher-level proofs should not be burdened with trivia like that.

> Higher-level proofs should be able to build on lower-level things, without re-determining them.

I disagree that most humans are going to be bothered by seeing an extra

step in phllem to prove 8 =/= 2. Not even textbooks provide a list of

such inequalities.

Norm


May 17, 2015, 8:51:43 PM

to metamath

On Sun, May 17, 2015 at 3:02 PM, David A. Wheeler <dwhe...@dwheeler.com> wrote:

David A. Wheeler wrote:

> > ... It's efficient if the most important measure is to

> > minimize the number of theorems, but I think that is an irrelevant measure.

On Sun, 17 May 2015 01:51:13 -0700 (PDT), Norman Megill <n...@alum.mit.edu> wrote:

> It is a very relevant measure, or more specifically, the size of the

> set.mm file is a very relevant measure. I have put a significant amount

> of work (as have many others) in keeping the size reasonable, and

> everyone benefits from faster run times of various activities.

I think things have changed and make that less important.

Today's smartphones and laptops typically have 8G+ of RAM, and even more storage.

These "faster run times" differences are less than a second.

It's better to optimize for the human *readers* of the individual proofs, not for the tools.

Ah, but I hope that we can eventually put those 8G+ to better use (i.e. more significant theorems) in the future. :) Also, things seem to be (slowly) heading in the direction of more automation, and if we start doing fully automated proofs we will have our hands full keeping the size down. So size limitation is definitely something to keep in mind. (As a side benefit, it limits the number of "noise" theorems in main set.mm. Can you imagine if the whole database looked like the bnj* section? I am not quite sure what JBN was trying to prove, but I'm pretty sure that the 1000+ theorems there can be converted to two or three decent-sized theorems for a huge savings of size *and clarity*.)

> > I think the goal should be to make each proof as simple

> > and clear as reasonably possible,

>

> Seriously, I don't see that it is any harder to read a proof such as

> phllem, despite the occurrence of an extra ltneii step. If a

> person can't follow that 2 < 8 implies 8 =/= 2, he or she can always

> click on ltneii to become enlightened.

I took a look at phllem, and I think it's great evidence for my point.

In phllem, 20% of the steps (9 of the 45) are

steps that ONLY exist to state the ltneii *preconditions* for proving that two

numbers are distinct. E.G., to prove 8 =/= 1, we first need 1re, 8re, and 1lt8

before we can even *apply* ltneii. For 8 =/= 2, we need 2re and 2lt8

(at least we can reuse the facts for 8). And so on.

You can reduce the length of phllem by 20% by proving ahead-of-time statements like 8 =/= 2.

And I would be surprised if it's alone.

Even by the measure "it is a reused fact", the fact that 0..10 are

distinct from each other is already in use in in multiple places in set.mm.

Since we're on the topic of phllem, I guess I should note that it is actually obsolete. Compare it to the proof of phlfn which used to use phllem as a lemma, and you will note that the 13 numerical steps you highlighted have been replaced with only 3. So there is another factor to keep in mind - if you need to prove many numerical disequalities, you're doing it wrong and there is a faster approach using only inequalities.

I think there's another issue, anyway: proofs should optimize for *human* readers.

Sure, a reader can follow an inline proof that 8 =/= 2.

But the human reader should not need to. Most humans know that 8 =/= 2 already,

and higher-level proofs should not be burdened with trivial like that.

Higher-level proofs should be able to build on lower-level things, without re-determining them.

In the large scale, this is extremely difficult to do beyond roughly the level which metamath is already at. Pick any theorem and read the proof and you will see that at least 95% of the steps are "trivial". In principle we could combat this problem by making theorems which put together common combinations of theorems, such as ax-mp + syl = mpsyl, but there is a combinatorial explosion of possibilities and you won't achieve more than maybe 50% decrease of the "average theorem" before you bust that 8G limit with billions of "convenience theorems" that are too convenient for their own good.

A much more reasonable goal is to make sure that these common proofs are as short as possible, so that you can build "templates" for common proof strategies. In fact ltneii itself falls under this category, since it is an easy-to-apply version of ltne or ltnr. But you've stumbled on a very common issue in metamath, and a uniform resolution along the lines you are suggesting is unfortunately terribly impractical. (If you are focusing on disequality of numeric terms, let me emphasize that it is actually very rare to need such facts, except possibly in the case n =/= 0. In most of the current cases, it is only applied once or twice, for the very reason that if it were needed hundreds of times the proof would be adjusted to use one of the more efficient methods which avoid disequality entirely.)

Mario

May 17, 2015, 9:15:23 PM

to metamath

On Sun, 17 May 2015 15:00:18 -0700 (PDT), Norman Megill <n...@alum.mit.edu> wrote:

> ... I see nothing special about the inequalities, since the

> ordering theorems imply them immediately while containing more

> information.

Theorem ltne (and friends) currently require that both elements be in RR to prove =/=.

It appears to me that ltne and friends currently require extraneous hypotheses;

eliminating the unnecessary hypotheses would make them more general (and nicer to work with).

It should be possible to prove that ( A < B -> B =/= A ) without any other requirements.

It's probably obvious to everyone else, but I'm not sure how to prove that.

If that were proven, then proving inequality would be much nicer.

--- David A. Wheeler


May 17, 2015, 9:30:17 PM

to metamath

On Sun, May 17, 2015 at 9:15 PM, David A. Wheeler <dwhe...@dwheeler.com> wrote:

It should be possible to prove that ( A < B -> B =/= A ) without any other requirements.

It's probably obvious to everyone else, but I'm not sure how to prove that.

If that were proven, then proving inequality would be much nicer.

Eliminating both hypotheses relies on "out-of-domain" behavior to an extent, in this case ltrelxr (which says that A < B is always false when A,B are not both extended reals, which is a consequence of our definition - we can't say the same of <RR for example). If you are okay with this, then the proof goes like this: take the contrapositive, to get ( B = A -> -. A < B ); then since ( B = A -> ( A < B <-> A < A ) ) by mtbiri and breq2 this reduces to -. A < A, which is the hypothesis-free form of ltnr. To derive this, just apply soirri to ltrelxr and xrltso.
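For comparison, the same contrapositive argument goes through in any preorder once irreflexivity is available. The following is an editor's sketch in Lean 4 with Mathlib, assuming the standard `lt_irrefl` lemma; note the analogy is imperfect, since Lean's types play the role that the out-of-domain behavior of ltrelxr plays in set.mm:

```lean
import Mathlib.Order.Basic

-- Sketch of the argument above: from A < B, conclude B ≠ A.
-- Contrapositive: if B = A, rewriting turns A < B into A < A,
-- contradicting irreflexivity (the analogue of ltnr / xrltnr).
example {α : Type*} [Preorder α] {A B : α} (h : A < B) : B ≠ A :=
  fun e => lt_irrefl A (by rw [e] at h; exact h)
```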

Mario

May 18, 2015, 12:04:25 PM

to metamath

On Sun, May 17, 2015 at 9:15 PM, David A. Wheeler <dwhe...@dwheeler.com> wrote:

> > It should be possible to prove that ( A < B -> B =/= A ) without any other

> > requirements.


On Sun, 17 May 2015 21:30:16 -0400, Mario Carneiro <di....@gmail.com> wrote:

> Eliminating both hypotheses relies on "out-of-domain" behavior to an

> extent, in this case ltrelxr (which says that A < B is always false when

> A,B are not both extended reals, which is a consequence of our definition...)

The fundamental assumption is that "<" is never true when comparing equal objects.

Even if the definition of "<" is modified in the future, I would call it broken if

` A < A ` could be true :-).

So, I'd like this theorem to be in set.mm's main body, near where ltne is

currently (and probably eventually replacing it).

I credit Mario, since he's the one that described the proof; I was just a scribe :-).

This theorem eliminates the hypotheses that are unnecessary given the definition of "<":

$( Prove that ` A < B -> B =/= A ` ,

without unnecessary hypotheses that A or B are in ` RR* ` .

This is a consequence of our definition of ` < ` ,

which only has values over ` RR* ` .

Contributed by Mario Carneiro, recorded by David A. Wheeler,

17-May-2015. $)

ltne-simple $p |- ( A < B -> B =/= A ) $=

( clt wbr wceq cxr xrltso ltrelxr soirri breq2 mtbiri necon2ai )

ABCDZBABAEMAACDACFGHIBAACJKL $.

I'd also add the commuting version; with it, you can quickly prove inequality

in one step once you prove a "<" relationship. I suspect I'll get pushback on that,

but here it is :-) ...

$( Commute ~ ltne-simple ahead-of-time, so we can quickly prove

from ` A < B ` the rather obvious fact that ` A =/= B ` .

Contributed by David A. Wheeler, 17-May-2015. $)

ltne-simple-commute $p |- ( A < B -> A =/= B ) $=

( clt wbr wne ltne-simple necom sylib ) ABCDBAEABEABFBAGH $.

--- David A. Wheeler

May 18, 2015, 1:39:06 PM

to metamath

If others are in agreement, I can put these on the todo list for the next update:

xrltnr -> ltnri $p -. A < A $.

(due to ordering restrictions ltnr will be left as-is, even though ltnri generalizes it)

ltne $p ( A < B -> B =/= A ) $.

gtne $p ( A < B -> A =/= B ) $.

ltneii -> ltnei $e A < B $. $p B =/= A $.

gtnei $e A < B $. $p A =/= B $.


Also, if we are throwing in ltrelxr as "acceptable for use" we can simplify lttr and friends as discussed earlier (pause for discussion).

Mario


May 18, 2015, 4:01:49 PM

to meta...@googlegroups.com, di....@gmail.com

On 5/18/15 1:39 PM, Mario Carneiro wrote:

> If others are in agreement, I can put these on the todo list for the

> next update:

>

>

>

> xrltnr -> ltnri $p -. A < A $.

>

> (due to ordering restrictions ltnr will be left as-is, even though ltnri

> generalizes it)

>

> ltne $p ( A < B -> B =/= A ) $.

>

> gtne $p ( A < B -> A =/= B ) $.

>

> ltneii -> ltnei $e A < B $. $p B =/= A $.

>

> gtnei $e A < B $. $p A =/= B $.

>

>

> Also, if we are throwing in ltrelxr as "acceptable for use" we can

> simplify lttr and friends as discussed earlier (pause for discussion).


I thought that we had already decided not to go down this path with

out-of-domain behavior of real and complex number theorems, for several

reasons that were discussed here a few weeks ago.

1. Having the hypotheses immediately shows the intended domain of

applicability (is it RR? is it RR*? Is it om?), without having to

trace back to definitions.

2. Having the hypotheses forces its use in the intended domain, which

generally is desirable.

3. The behavior is dependent on accidental behavior of definitions

outside of their domains, so the theorems are non-portable and

"brittle".

4. Only a few theorems can have their hypotheses removed in this fashion

due to happy coincidences for our particular set-theoretical

definitions. The poor user (especially a novice learning real number

arithmetic) is going to be confused not knowing when hypotheses are

needed and when they are not. For someone who hasn't traced back the

set-theoretical foundations of the definitions, it is seemingly random

and isn't intuitive at all.

5. The consensus of opinion of people on this group seemed to be against

doing this.

This discussion is in the context of real and complex numbers, which I

consider to have a special status within set.mm, and I think it is

important to be consistent with the rest of the world. For example, we

don't allow the use of -. (/) e. CC, as handy as that would be. In

other contexts, such as ordinal numbers, I don't mind and even encourage

the removal of unnecessary hypotheses.

Norm


May 18, 2015, 4:09:36 PM

to metamath

(Note that I am aware that we have already discussed this, but thought it better not to point out DAW's apparent change of opinion. Since I have been in support of this approach from the beginning I didn't think it would be appropriate to play devil's advocate in this case. I won't make the proposed changes unless it is clear that the winds have indeed changed direction, though - I leave it to others to duke it out and let me know how it goes.)


May 18, 2015, 4:14:01 PM

to metamath

Note that it is still true, even without any "out-of-domain" tricks, that ltne has at least one unnecessary hypothesis. Any interpretation that violates ( ( A e. RR /\ A < B ) -> B =/= A ) would have to interpret *equality* differently than df-cleq, which needless to say would cause much bigger issues.

May 18, 2015, 5:45:51 PM

to metamath

On Mon, 18 May 2015 16:09:35 -0400, Mario Carneiro <di....@gmail.com> wrote:

> > > If others are in agreement, I can put these on the todo list for the

> > > next update:

> > > xrltnr -> ltnri $p -. A < A $.

> > > ltne $p ( A < B -> B =/= A ) $.

> > > gtne $p ( A < B -> A =/= B ) $.

...

> On Mon, May 18, 2015 at 4:01 PM, Norman Megill <n...@alum.mit.edu> wrote:

> > I thought that we had already decided not to go down this path with

> > out-of-domain behavior of real and complex number theorems, for several

> > reasons that were discussed here a few weeks ago.

...

> > 3. The behavior is dependent on accidental behavior of definitions

> > outside of their domains, so the theorems are non-portable and

> > "brittle".

Norm: In the end, you're the final arbiter. However, I think there is a fundamental

difference between ` ( A + B ) = ( B + A ) ` and ` ( A < B -> B =/= A ) `

in terms of maintainability and future flexibility. I think the latter construct is NOT brittle,

while I think the former construct is.

In ` ( A + B ) = ( B + A ) ` without *ANY* hypotheses, there's an implication that

` ( A + B ) ` produces a meaningful result that can ALWAYS be compared for equality,

no matter what types A and B are (including proper classes).

For many people, ` ( A + B ) ` makes no sense when applied (for example)

to proper classes, or anything else not in CC. Thus, it seems suspect

to compare their results using equality. Yes, it works today. But

it's just an accident of the current definitions that this works, and thus,

statements like ` ( A + B ) = ( B + A ) ` are "brittle".

In contrast, I think ` ( A < B -> B =/= A ) ` is fundamentally different and NOT brittle.

Once you know that ` A < B ` is true, it MUST be true that A and B

have types that are comparable using "<" (because otherwise you

could not prove that ` A < B ` !).

Thus, I think ` A < B -> ... ` does *NOT* invoke out-of-domain behavior,

but instead, when ` A < B ` is true that must mean that

A and B are *within* the expected domain of the relation.

I expect that "<" would NEVER be changed in such a way that

` A < B ` and ` A = B ` could be simultaneously true; to me

that would be a big bug in the definition.

Thus statements like ` ( A < B -> B =/= A ) ` are *NOT* brittle, because

they *REQUIRE* the operands to be within the expected domain to apply.

In general, statements of the form ` A R B -> ...` mean that the RHS only applies

when the types of A and B are appropriate for relationship R,

and thus, they aren't brittle in the same way.

Under the current definitions, there is no *real* difference.

Structuring things to "not be brittle" seems reasonable, though.

> > 4. Only a few theorems can have their hypotheses removed in this fashion

> > due to happy coincidences for our particular set-theoretical

> > definitions. The poor user (especially a novice learning real number

> > arithmetic) is going to be confused not knowing when hypotheses are

> > needed and when they are not. For someone who hasn't traced back the

> > set-theoretical foundations of the definitions, it is seemingly random

> > and isn't intuitive at all.

But I think there's a simple rule available here: anything that traces back to an

assumption that ` A R B ` is true automatically implies the type hypotheses on A and B,

and thus those hypotheses don't need to be separately stated.
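To illustrate that rule with a rough analogue outside Metamath (a Lean 4 + Mathlib sketch of my own, not anything from set.mm): in a typed system, merely stating ` a < b ` already forces a and b into an ordered type, so ` b =/= a ` follows with no separate membership hypothesis.

```lean
-- Sketch (Lean 4 with Mathlib; illustrative only, not set.mm).
-- The hypothesis `h : a < b` already carries the "typing" of a and b:
-- `<` is only available because `α` is a preorder, so no separate
-- membership hypothesis is needed to conclude `b ≠ a`.
example {α : Type} [Preorder α] {a b : α} (h : a < b) : b ≠ a :=
  h.ne'   -- Mathlib's `LT.lt.ne'`: from `a < b`, deduce `b ≠ a`
```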

On Mon, 18 May 2015 16:09:35 -0400, Mario Carneiro <di....@gmail.com> wrote:

> (Note that I am aware that we have already discussed this, but thought it

> better not to point out DAW's apparent change of opinion. Since I have been

> in support of this approach from the beginning I didn't think it would be

> appropriate to play devil's advocate in this case. I won't make the

> proposed changes unless it is clear that the winds have indeed changed

> direction, though - I leave it to others to duke it out and let me know how it goes.)

I hope that the above at least explains why I think they're different situations.

--- David A. Wheeler

May 18, 2015, 6:44:57 PM5/18/15

to meta...@googlegroups.com, di....@gmail.com

On 5/18/15 4:14 PM, Mario Carneiro wrote:

> Note that it is still true, even without any "out-of-domain" tricks,

> that ltne has at least one unnecessary hypothesis. Any interpretation

> that violates ( ( A e. RR /\ A < B ) -> B =/= A ) would have to

> interpret *equality* differently than df-cleq, which needless to say

> would cause much bigger issues.

I agree this is not an abuse of out-of-domain behavior and is in

principle acceptable. I included the redundant hypothesis to emphasize

the domain of B for the user and to make the theorem more

consistent-looking with others. I think the impact on set.mm size is

very small since few theorems have this property.

However, I'll accept the removal of B e. RR if people prefer it.

Norm


May 18, 2015, 8:40:40 PM5/18/15

to meta...@googlegroups.com, dwhe...@dwheeler.com, dwhe...@dwheeler.com

On 5/18/15 5:45 PM, David A. Wheeler wrote:

> Thus statements like ` ( A < B -> B =/= A ) ` are *NOT* brittle, because

> they *REQUIRE* the operands to be within the expected domain to apply.

It is brittle because it may fail depending on how ordered pairs are

defined to behave at proper classes.

The more important issue, though, is that the complex number axioms are

intended to be "generic" (for lack of a better word) in the sense that

we want to pretend that < is simply a predicate with no defined behavior

outside of the complex number axioms. This is the way it is often

presented in textbooks. See Eric Schmidt's

http://us.metamath.org/downloads/schmidt-cnaxioms.pdf

for a formalization of our CC axioms in first-order logic (except

the supremum axiom) that treats "<" this way.

There are several reasons to keep the development confined to

first-order-logic-like manipulations of complex axioms: (1) it makes

the development easier to follow for someone not versed in set theory,

(2) it is closer to the development of many textbooks, (3) it allows

easier theoretical studies such as independence of axioms (e.g.

Eric's), and (4) it is likely more portable to another formal system,

especially one with logic but no set theory.

Of course in actuality, < is a binary relation because that's how we

model it. But to keep the development "generic" we don't want to

exploit set-theoretical properties of the model. This is what we would

be doing with the unrestricted ( A < B -> B =/= A ), which cannot be

derived in Eric's system. That's probably a good test for what I'm

looking for.

(As you have probably noticed, for practical reasons the actual axioms

have "<RR" and not "<", but for the purpose of this discussion assume

"<" is substituted for "<RR" in the axioms.)

I should mention that for more advanced development of RR and CC, we

loosen this restriction for practical reasons. I don't know where the

cutoff is, but theorem ltne is definitely in the early section that

should be "generic".

Norm


May 18, 2015, 9:22:24 PM5/18/15

to metamath

On Mon, May 18, 2015 at 8:40 PM, Norman Megill <n...@alum.mit.edu> wrote:

I should mention that for more advanced development of RR and CC, we

loosen this restriction for practical reasons. I don't know where the

cutoff is,

I would put the cutoff around section 5.5.4 Real number intervals, because that's when we first start using functions whose values are sets, and some set theory beyond simple second-order logic becomes a requirement for continued development. (This also includes df-fz and df-uz, which require nontrivial set theory for their justification. I have toyed with the idea of turning df-uz into a syntax construction so that we don't need to invoke ax-inf2 on stuff like infinite sums, but we wouldn't get very far before the supremum axiom itself ensures the existence of infinite sets so that we don't gain much in the end.)

Mario

May 18, 2015, 10:28:32 PM5/18/15

to meta...@googlegroups.com, di....@gmail.com

On 5/18/15 9:22 PM, Mario Carneiro wrote:

>

> On Mon, May 18, 2015 at 8:40 PM, Norman Megill <n...@alum.mit.edu

> <mailto:n...@alum.mit.edu>> wrote:

>

> I should mention that for more advanced development of RR and CC, we

> loosen this restriction for practical reasons. I don't know where the

> cutoff is,

>

>

> I would put the cutoff around section 5.5.4 Real number intervals,

> because that's when we first start using functions whose values are

> sets, and some set theory beyond simple second-order logic becomes a

> requirement for continued development.

>


Sounds reasonable to me.

> (This also includes df-fz and

> df-uz, which require nontrivial set theory for their justification. I

> have toyed with the idea of turning df-uz into a syntax construction so

> that we don't need to invoke ax-inf2 on stuff like infinite sums, but we

> wouldn't get very far before the supremum axiom itself ensures the

> existence of infinite sets so that we don't gain much in the end.)

I would leave them alone for now.

Norm

