Proven: Fourier series convergence, #76


David A. Wheeler

Dec 14, 2019, 10:01:03 AM
to Metamath Mailing List
I am very pleased to announce that Fourier series convergence, Metamath 100 #76, has been proven by Glauco. This is a tremendous achievement and I congratulate him.

There are a few cleanups and such to be done before it gets merged in, but I expect it to be merged soon. If you're curious to see it, it is here:

https://github.com/metamath/set.mm/pull/1317

I may be delayed before I add it to the official metamath 100 list and such, but I will certainly do it.

Congrats!
--- David A. Wheeler

Norman Megill

Dec 14, 2019, 6:24:09 PM
to Metamath
The build has completed at http://us2.metamath.org/mpeuni/mmrecent.html.  Glauco's update adds about 1.5MB (20K lines) to set.mm, comprising over 400 new theorems (and no new definitions).  Hopefully he'll give us an outline of what it all does.

Norm

Benoit

Dec 15, 2019, 10:47:13 AM
to Metamath
I'm also hoping for an outline of the proofs (especially the "encoding" in set.mm and the difficulties encountered).  Is there a particular reason why sin and cos were used instead of exp?
Benoît

Glauco

Dec 15, 2019, 12:12:40 PM
to Metamath
About one third of the theorems are specific to the Dirichlet kernel and the Fourier series results (if there is interest, I can write an overview of the main proof).

The other theorems are (mostly) general-purpose statements that will probably be useful for other users.

As a general note, using deduction form it's "easy" to have hypotheses shorter than the main statement, and the mmj2 autocomplete algorithm takes enormous advantage of this fact. Here is Mario's description: "I consider theorems with no new variables, of any arity, but such that every hypothesis is shorter than the original theorem".
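For instance, ~ jca (mentioned later in this thread) has exactly this shape. Shown here schematically, with its proof elided and comments that are mine rather than set.mm's:

```metamath
$( Deduction joining two consequents.  No new variables appear in the
   hypotheses, and each hypothesis is shorter than the conclusion, so
   an autocompletion heuristic of the kind Mario describes can safely
   try it whenever the goal is a conjunction under a common antecedent. $)
jca.1 $e |- ( ph -> ps ) $.
jca.2 $e |- ( ph -> ch ) $.
jca $p |- ( ph -> ( ps /\ ch ) ) $= ... $.
```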

So, whenever I saw that I needed a "basic" result over and over, I took the opportunity to increase the "library". ~ eliood is such an example: when mmj2 sees a statement that matches, it essentially does the job for you (I always prove backward style; I'm not sure it would work as well proving forward). Similar theorems were also created: ~ eliocd , ~ elicod , ~ eliccd .

Here are the areas where most new theorems were developed.


ORDERING ON REAL NUMBERS - REAL AND COMPLEX NUMBERS BASIC OPERATIONS

I've tried to complete some of the lists of results we have about commutativity/associativity and comparison; see ~ mul13d  ~ subadd4b  ~ xrlttri5d as examples. And there are also some not so trivial results; see ~ upbdrech2 and ~ ssfiunibd as examples.


REAL INTERVALS

Some are simple new autocomplete-friendly library theorems (the ~ eliood family described above, as an example), some are simple snippets I had to use over and over (see ~ icogelb and similar theorems for all kinds of intervals, all kinds of bounds and all kinds of comparisons). There are many other theorems about intervals, involving difference, union, inclusion, relationships between bounds, and so on. Nothing fancy here; the only nontrivial result may be ~ evthiccabs (an application of the extreme value theorem, which I could have classified in another section).


LIMITS

There are a lot of theorems I had to write for the Fourier proof. Some are for limits of sequences, some are for infinite sums, some are for limits of functions, some are for limit points and some are for limsup. To get a taste of some of the results, you can take a look at ~ ellimciota (an explicit value for the limit, when the limit exists at a limit point), ~ climinf (a bounded monotonic nonincreasing sequence converges to the infimum of its range), ~ divlimc (and similar theorems for other binary and unary operations), ~ limclner (the limit of the function exists only if the left and the right limits are equal), ~ islpcn  ~ lptre2pt (results about limit points) and ~ limcperiod (a result for limits of periodic functions).
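For reference, the fact behind ~ climinf as described above corresponds to the standard analysis statement (written here informally in LaTeX, not in set.mm notation; the exact side conditions in set.mm may differ):

```latex
% A nonincreasing sequence bounded below converges to the infimum
% of its range:
a_{n+1} \le a_n \;\; \forall n, \qquad \exists m \; \forall n \;\; a_n \ge m
\;\Longrightarrow\;
\lim_{n\to\infty} a_n \;=\; \inf \{\, a_n : n \in \mathbb{N} \,\}
```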


TRIGONOMETRY

As the title says... you can take a look at ~ cosknegpi as an example.


CONTINUOUS FUNCTIONS

Some results here I had to use over and over; as an example, ~ cncfmptssg is a generalization of other theorems, and it allows one to remember a single label to handle several cases (a minimizer "could" then jump in later to check if there is a shorter proof applying more specific cncfmpt* theorems). Other examples of "library" results are ~ divcncf (and similar theorems for other binary, unary and nullary operations), ~ cncfdmsn and ~ cncfcompt .
Some other examples of results that could be handy are ~ cncfuni (a function is continuous if its domain is the union of sets over which the function is continuous) and ~ jumpncnp (jump discontinuity, or discontinuity of the first kind: if the left and the right limit don't match, the function is discontinuous at the point).
Some other theorems refer to continuity for periodic functions, like ~ cncfshift or ~ cncfperiod .


DERIVATIVES

Some theorems are derivatives computed for specific functions (see ~ dvsinax as an example).
Some theorems are useful to compute derivatives of generic functions in Map To form, for example ~ dvmptdiv (function-builder for derivative, quotient rule) and similar theorems for other binary, unary and nullary operations. Similar results are available for functions built with the oF operator, see ~ dvsubf and ~ dvdivf (quotient rule for everywhere-differentiable functions).
Some results are specific for periodic functions, see ~ fperdvper .
Then there are a few results that allowed me to state the Fourier series convergence using only conditions on the derivative of the function (in the literature I've always found the piecewise smooth condition stated by saying that the function and its derivative have to be piecewise continuous, but I've shown that the derivative of F being piecewise continuous implies that F itself is piecewise continuous too). The main results are ~ ioodvbdlimc1 (a real function with bounded derivative has a limit at the lower bound of an open interval) and its symmetric counterpart ~ ioodvbdlimc2 . A related result that could be useful is ~ dvbdfbdioo (a function on an open interval with bounded derivative is bounded).


INTEGRALS

Some theorems are integrals computed for specific functions (see ~ itgcoscmulx as an example).
Some theorems are of (I expect) general use, for example ~ vol0  ~ volge0 and ~ iblsplit (a function integrable on two sets is integrable on their union).
Some theorems are a bit more involved, like ~ ditgeqiooicc (a function on an open interval has the same directed integral as its extension to the closed interval; see the theorem to understand what "extension" means in this context).
Some results are for integrals of periodic functions, like ~ itgiccshift and ~ itgperiod .
The most important result of this section is probably ~ itgsubsticc (integration by u-substitution). I was not able to use ~ itgsubst "as is"; I had to extend it (no other theorems refer to ~ itgsubst; it would be helpful if Mario had the time to check whether he actually meant to state it that way, or whether it has to be slightly changed to be more usable; it may very well have been just my fault). Then I used ~ itgsubsticc to prove ~ itgsbtaddcnst (an example of integration by substitution; I guess we don't have other examples of integration by substitution in set.mm, since ~ itgsubst is not referenced elsewhere).
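The textbook identity behind u-substitution, for orientation (written here in LaTeX for a continuously differentiable g on [a,b] and an f continuous on g's image; the precise hypotheses in set.mm differ from this informal statement):

```latex
\int_a^b f(g(x))\, g'(x)\, dx \;=\; \int_{g(a)}^{g(b)} f(u)\, du
```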


Please, feel free to ask about anything.


Glauco

fl

Dec 15, 2019, 1:44:47 PM
to Metamath

> [...]

All the theorems deserve to be put in their place in the different chapters of the official part. It will make it easier to read the material.

-- 
FL

Benoit

Dec 15, 2019, 4:02:33 PM
to Metamath
Thanks Glauco for this outline.

First, I re-ask my previous question: is there a particular reason you chose sine and cosine over the exponential function?

Like FL, I think several lemmas deserve to be in the main part.  I will use your sectioning:



ORDERING ON REAL NUMBERS - REAL AND COMPLEX NUMBERS BASIC OPERATIONS

~ mul13d  ~ subadd4b  ~ xrlttri5d : yes to moving all three to the main part (just an opinion, I'm not among the persons who decide).

Actually, there are by now algebraic lemmas of this kind in several mathboxes by Thierry Arnoux, David A. Wheeler, me (BJ), and probably others:
21.3.5  Real and Complex Numbers
21.26.11  Algebra helpers
21.29.8.2  Complex numbers (supplements)
I'm not proposing to move them to the main part, just mentioning them here in case they prove useful.


REAL INTERVALS

~ eliood , ~ eliocd , ~ elicod , ~ eliccd : it looks to me that they could be strengthened by assuming only that A, B, C are extended reals.  For instance, in the proof of eliood, use elioo1 instead of elioo2, and similarly in the proof of eliccd, use elicc1 instead of elicc2.

~ evthiccabs : I'm surprised that the proof is so long, given evthicc.  Have you used MM> minimize * on your theorems?  Also, the proof has two very similar parts; I don't know if it is possible to factor them into a common lemma.  This is actually connected to Issue #1325 on GitHub.


LIMITS

ellimciota : maybe it is worth proving the lemma |- ( E! x x e. A -> ( iota x x e. A ) e. A ) ?

climinf (and infrglb, etc.) look like important statements; are there equivalents for "non-decreasing bounded-above" in set.mm?

divlimc and limclner : also important, I would move them to the main part


TRIGONOMETRY

pirp2/bj-pirp/pirp: move Jim's version to the main part, now that it is in three mathboxes.

Actually, I would move your whole Trigonometry section to the main part. (And it would be great to have analogs for sine, which can be easily deduced from yours -- but I recognize it is not a pleasant work!).

------------

There's a lot of great work in your mathbox!  I'll have to stop here for the moment.  Looking at random, I saw halffl, which I would prove using bj-flbi3 (which should then be moved to the main part).

Benoît

David A. Wheeler

Dec 15, 2019, 7:32:53 PM
to metamath, Metamath
On Sun, 15 Dec 2019 13:02:33 -0800 (PST), Benoit <benoit...@gmail.com> wrote:
> Like FL, I think several lemmas deserve to be in the main part.

I believe that all "Metamath 100" results (and thus all they depend on)
should eventually move into the main body. After all, all Metamath 100 results are
results that at least some others believe are important.

We don't have to do that *instantly* of course.
In many cases I expect there will be discussion as the dependencies
are examined, possibly generalized, and then moved into the main body
in groups over time.

I think it's important to keep working on getting more things moved from mathboxes
into the main body. Tools like mmj2 only consider items in the main body for
many automated steps, and in any case I think it's far more sensible to
readers if major results are properly organized in the main body.

--- David A. Wheeler

Alexander van der Vekens

Dec 16, 2019, 12:53:24 AM
to Metamath
Congratulations also from my side for proving another "Metamath 100" theorem!

I also had a look at the "general theorems". I agree that a lot of them could be moved to the main part. As David (and also others) mentioned, this should be done in a structured way. At the moment, these theorems are all mixed up in section "Miscellanea", so maybe the first step would be to structure this section (using appropriate subsections, corresponding to the (sub)sections of the main part).

As far as my latest "Metamath 100" theorem is concerned, I plan to move it (and all theorems which are used by its proof) to the main part, too. See issue #1314 at GitHub.

Alexander

On Monday, December 16, 2019 at 1:32:53 AM UTC+1, David A. Wheeler wrote:

Glauco

Dec 16, 2019, 3:11:02 PM
to Metamath
Hi Benoit

Il giorno domenica 15 dicembre 2019 22:02:33 UTC+1, Benoit ha scritto:

> First, I re-ask my previous question: is there a particular reason you chose sine and cosine over the exponential function?

The proofs that I followed stated the coefficients in terms of sine and cosine. Some of them then proved some intermediate results using the complex exponential form (it simplified some integral calculations), while other sources kept using sine and cosine everywhere. When I had a choice between two such alternative proofs, I followed the sine/cosine approach.

As a side note, I could be wrong, but I believe that the sine/cosine statement is more familiar and accessible to a wider audience.

> ~ eliood , ~ eliocd , ~ elicod , ~ eliccd : it looks to me that they could be strengthened by assuming only that A, B, C are extended reals.  For instance, in the proof of eliood, use elioo1 instead of elioo2, and similarly in the proof of eliccd, use elicc1 instead of elicc2.

elioo1 and elioo2 are "different" theorems, and I'm sure there are pros and cons to both. But I see that elioo2 has (roughly) ten times as many references as elioo1, and I guess that's an indication that elioo2 turns out to be handy more often than elioo1. Maybe I chose eliood in this form because I noticed it was required more often in that form. Of course, it may be useful to also introduce an elioo1d alternative that takes the elioo1 "approach".

> ~ evthiccabs : I'm surprised that the proof is so long, given evthicc.  Have you used MM>minimize *  on you theorems? 

Unfortunately I miss some of the features of metamath.exe, because I use mmj2 only (but I'm really intrigued by Thierry's plugin for Eclipse; I hope I will soon have time to take a closer look at it).
If I'm not mistaken, Norman periodically runs a "minimize" on all of set.mm, and that will help.
As a general note, given the current tools, it looks like there is a trade-off between productivity and proof optimization. I'll try to explain with an example: I always prove backward; assume I have proven a statement of the form
|- ( ps <-> ( ch /\ th ) )
and I want to prove
|- ( ph -> ps )
The "optimized" way would be to use
sylanbrc
and for some time I tried to remember all such labels, but unfortunately we don't have intellisense yet, and... it was hard.
So I decided to always go with
sylibr
that I can remember and covers a large number of "upstream" statements. But I have to pay with an additional
jca
and that will introduce an additional line.

I hope that, in the future, we will have a minimizer that can (after the fact) rewrite a non-optimal sylibr + jca pair into sylanbrc .
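Schematically, the trade-off looks like this (statement shapes of ~ jca , ~ sylibr and ~ sylanbrc quoted from memory, proofs and exact set.mm formatting elided, so take the details with a grain of salt):

```metamath
$( Route actually taken: remember only sylibr, pay one extra jca step. $)
jca.1    $e |- ( ph -> ps ) $.
jca.2    $e |- ( ph -> ch ) $.
jca      $p |- ( ph -> ( ps /\ ch ) ) $= ... $.

sylibr.1 $e |- ( ph -> ps ) $.
sylibr.2 $e |- ( ch <-> ps ) $.
sylibr   $p |- ( ph -> ch ) $= ... $.

$( Optimized route: a single application of sylanbrc does both jobs. $)
sylanbrc.1 $e |- ( ph -> ps ) $.
sylanbrc.2 $e |- ( ph -> ch ) $.
sylanbrc.3 $e |- ( th <-> ( ps /\ ch ) ) $.
sylanbrc   $p |- ( ph -> th ) $= ... $.
```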

HTH
Glauco

Benoit

Dec 16, 2019, 4:43:01 PM
to Metamath
> the proofs that I followed stated the coefficients in terms of sine and cosine. Some of them, then, proved some intermediate results using the complex exponential form (it simplified some integral calculation), some other sources instead kept using sine and cosine everywhere. When I had a choice between two alternative such proofs, I followed the sine/cosine approach.

Thanks.  Sticking to a proof you followed in a book is a good reason.  I would have thought using exp all along would shorten things a bit, but this is a detail.
 
> ~ eliood , ~ eliocd , ~ elicod , ~ eliccd : it looks to me that they could be strengthened by assuming only that A, B, C are extended reals.  For instance, in the proof of eliood, use elioo1 instead of elioo2, and similarly in the proof of eliccd, use elicc1 instead of elicc2.
 
> elioo1 and elioo2 are "different" theorems, and I'm sure there are pros and cons in both. But I see that elioo2 has (roughly) ten times the number of references than elioo1 does and I guess that's an indication that elioo2 turns out to be handy more often than elioo1. It maybe I chose eliood in such form, because I noticed it was required more often in that form. Of course, it may be useful to also introduce an elioo1d alternative that takes the elioo1 "approach".

I'm not saying that either of elioo1/2 is more general than the other, and it is not the case indeed.  What I'm saying is that you can strengthen your theorem eliood (relax the hypothesis C e. RR to C e. RR*) by using elioo1 in its proof instead of elioo2.  And similarly with eliccd.
 
> unfortunately I miss some of the features of metamath.exe , because I use mmj2 only (but I'm really intrigued by Thierry's plugin for Eclipse, I hope I will soon have time to take a closer look at it).
> If I'm not mistaken, Norman periodically runs a "minimize" on all of set.mm, and that will help.

Yes, but I think "periodically" used to be about once a year, and even less often now, and I'm not sure it looks at the mathboxes.  Anyway, minimizing is not a requirement.

I think that the objective of "minimize" is precisely what you're asking for: you do not have to remember all the labels, and can use only the basic jca, sylibr, etc.  Once the proof is done, "minimize" will optimize them, using sylanbrc, etc.

Benoît

Norman Megill

Dec 16, 2019, 6:56:50 PM
to Metamath
On Monday, December 16, 2019 at 4:43:01 PM UTC-5, Benoit wrote:
...
 
> unfortunately I miss some of the features of metamath.exe , because I use mmj2 only (but I'm really intrigued by Thierry's plugin for Eclipse, I hope I will soon have time to take a closer look at it).
> If I'm not mistaken, Norman periodically runs a "minimize" on all of set.mm, and that will help.

> Yes, but I think "periodically" used to be once a year, but a bit less now, and I'm not sure it looks at the mathboxes.  Anyway, minimizing is not a requirement.

The last global minimize run was about 2 years ago, volunteered by Dylan Houlihan who ran it on some powerful servers he had available, over a period of maybe a couple of weeks.

The last one I did myself was around 3 years ago, using a laptop with a powerful i7 processor maxed out with 8 threads.  The CPU ran quite hot (Speccy reported around 90-95C even with a new fan), and it died near the end.  I finished it on another computer.

Since then I've become somewhat afraid of maxing out the CPU for long periods, and I don't have a plan to do it again soon.  If anyone wants to volunteer (or several people each working on a range), that would of course be welcome.

We can make 'minimize' look at other (earlier) mathboxes with '/include_mathboxes', but the output should be carefully vetted due to occasional matches to overly specialized theorems that we don't want to import.


> I think that the objective of "minimize" is precisely what you're asking for: you do not have to remember all the labels, and only use the basic jca, sylibr, etc. And once the proof is done, "minimize" will optimize them, using sylanbrc, etc.

Yes, that is the intent, and it will optimize many such cases of inefficient "glue logic".

Norm

Mario Carneiro

Dec 16, 2019, 9:17:04 PM
to metamath
Wouldn't running the program with sleeps in it or with a scheduling setting ("nice" / process priority) such that it only consumes X% of the CPU resolve the "running hot" problem? Also it could be broken into smaller pieces and run multithreaded or on multiple computers to compensate. How hard would it be to break up the workload?

Mario


Alexander van der Vekens

Dec 17, 2019, 11:50:37 AM
to Metamath
I had a closer look at the theorems in sections "Miscellanea" and "Functions" in Glauco's mathbox, and I think the following could be quite useful and should be moved to the main part as soon as possible:
  • "Miscellanea":
    • neqned: ( ph -> -. A = B ) => ( ph -> A =/= B )
    • iffalsed: ( ph -> -. ch ) => iffalsed $p |- ( ph -> if ( ch , A , B ) = B )
    • iftrued:( ph -> ch ) =>( ph -> if ( ch , A , B ) = A )
    • ifeq123d.: (ph -> ( ps <-> ch ) ) & ( ph -> A = B ) & ( ph -> C = D ) => ( ph -> if ( ps , A , C ) = if ( ch , B , D ) )
    • rspan: ( ( A. x e. A ph /\ x e. A ) -> ph ) (maybe to be renamed!)
  • "Functions":
    • feq1dd:  ( ph -> F = G ) &( ph -> F : A --> B ) => ( ph -> G : A --> B ) (rename feq1d -> feq1bid; feq1dd -> feq1d)
    • fssresd: ( ph -> F : A --> B ) & ( ph -> C C_ A ) => ( ph -> ( F |` C ) : C --> B )
    • fssd: ( ph -> F : A --> B ) & ( ph -> B C_ C ) =>( ph -> F : A --> C ) 
    • resmptd: ( ph -> B C_ A ) => ( ph -> ( ( x e. A |-> C ) |` B ) = ( x e. B |-> C ) )
    • dmmptid: A = ( x e. B |-> C ) & ( ( ph /\ x e. B ) -> C e. V ) => ( ph -> dom A = B )
    • resabs1d: ( ph -> B C_ C ) => ( ph -> ( ( A |` C ) |` B ) = ( A |` B ) )
I have chosen these theorems especially because they are variants of existing theorems, but in deduction form, so they support the "deduction style" of set.mm. For the theorems in section "Miscellanea", I also took the number of usages into account, see attachment.

If nobody objects, I can move these theorems before Christmas :-).
MB_GS_Miscellanea.txt

Giovanni Mascellani

Dec 17, 2019, 11:54:04 AM
to meta...@googlegroups.com
Hi,

Il 17/12/19 00:56, Norman Megill ha scritto:
> The last one I did myself was around 3 years ago, using a laptop with a
> powerful i7 processor maxed out with 8 threads.  The CPU ran quite hot
> (Speccy reported around 90-95C even with a new fan), and it died near
> the end.  I finished it on another computer.

Wow, that's bad. Modern computers should automatically step down CPU speed when their temperature is too high, and in extreme cases maybe shut down completely, but they should never suffer permanent damage. I routinely let very long and CPU-intensive jobs run on my laptop, and it has never been damaged, even though it has a rather fast CPU.

Anyway, I have access to some decent servers on which I can schedule long-running jobs without problems (and which I don't expect to burn out!). I can do minimization or whatever database maintenance task is useful, especially if someone can prepare a script for me to run. If the script is expected to run for more than a few hours, I suggest engineering it so that it can restart from an intermediate point if it is interrupted, as I cannot guarantee that nobody will try to restart the servers while the script is running.

Giovanni.
--
Giovanni Mascellani <g.masc...@gmail.com>
Postdoc researcher - Université Libre de Bruxelles


David A. Wheeler

Dec 17, 2019, 12:46:30 PM
to Norman Megill, Metamath

>The last one I did myself was around 3 years ago, using a laptop with a
>powerful i7 processor maxed out with 8 threads.

Can metamath automatically use all the CPUs available when doing a minimize run?

> The CPU ran quite hot
>(Speccy reported around 90-95C even with a new fan), and it died near
>the end.

All systems have temperature measures that should prevent any permanent damage and automatically slow down as necessary. Also, some modern computers are designed to run absurdly hot. Yes, some systems have defects in their temperature measurement systems, but those should be relatively rare. So you should not be afraid to run a minimization.

I might be able to run minimization on a useful beefy system, but I'll have to check first.

--- David A.Wheeler

fl

Dec 17, 2019, 1:02:30 PM
to Metamath


Can metamath automatically use all the CPUs available when doing a minimize run?

You simply launch several jobs, each covering a different range of set.mm: one job per CPU.
All the processors are at work this way. But it gets hot.

 
 
So you should not be afraid to run a minimization.


It might be better to ask people to minimize the proofs they make before they send them. It is called parallel computing, after all.
But I used to do that and it is boring. Minimizing one proof can take a long time, and you need to sit
in front of your computer until it stops. A shell script to manage a batch job might help, I think.
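A minimal sketch of such a batch setup. Assumptions to note: metamath.exe accepts a command script via SUBMIT, the PROVE / MINIMIZE_WITH / SAVE NEW_PROOF / WRITE SOURCE commands behave as in the sessions shown elsewhere in this thread, and the labels thm1..thm4 and file names job*.cmd, set_job*.mm are hypothetical placeholders:

```shell
# Generate one metamath command file per job, each covering a
# different label range (one job per CPU, as suggested above).
i=0
for range in "thm1 thm2" "thm3 thm4"; do   # hypothetical label ranges
  i=$((i+1))
  out="job$i.cmd"
  echo "READ set.mm" > "$out"
  for lbl in $range; do
    {
      echo "PROVE $lbl"
      echo "MINIMIZE_WITH *"
      echo "SAVE NEW_PROOF /COMPRESSED"
      echo "EXIT"                 # leave the proof assistant
    } >> "$out"
  done
  # Each job writes its own copy so parallel jobs don't clash
  # (the results would be merged by hand afterwards).
  echo "WRITE SOURCE set_job$i.mm" >> "$out"
  echo "EXIT" >> "$out"
done
# Then launch one job per CPU, at low priority to keep the machine cool:
#   nice -n 19 ./metamath "SUBMIT job1.cmd" &
#   nice -n 19 ./metamath "SUBMIT job2.cmd" &
```

Since each command file is independent, the ranges can also be handed to different volunteers or machines, which is the "parallel computing" FL describes.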

-- 
FL 

Alexander van der Vekens

Dec 17, 2019, 1:25:24 PM
to Metamath
I use the (spare) time while the minimization is running to relax or to write/improve the comments for the theorem I just have proven...

Alexander van der Vekens

Dec 21, 2019, 1:21:48 AM
to Metamath
With my latest PR, I moved the following theorems from Glauco's mathbox into the main part of set.mm:
  • ~ neqned, ~ iffalsed, ~ iftrued, ~ rspan (renamed "rspa")
  • ~ fssresd, ~ fssd, ~ resmptd, ~ dmmptid (renamed "dmmpod"), ~ resabs1d
I didn't move ~ ifeq123d (because it is already available as ~ ifbieq12d , so I marked ~ ifeq123d as "discouraged") and ~ feq1dd, which maybe isn't as useful as I thought.

ookami

Dec 21, 2019, 4:32:09 AM
to meta...@googlegroups.com
neqned is a duplicate of neneqad. I suggest merging both into one, preferring the name, attributions and explanation of the older neneqad.

Wolf

Alexander van der Vekens

Dec 21, 2019, 5:00:21 AM
to Metamath
Oops, neneqad directly follows neqned; I didn't notice that (because I only looked for neneqd and copied neqned directly beneath it...)

Of course both should be merged, and the attributions and explanation of the older neneqad should be kept. As for the name, however, I would suggest taking the new one, "neqned": it is easier to remember, symmetric to "neneqd", shorter, and, finally, I do not understand the meaning of the "a" in "neneqad".

However, "neqned" is used 39 times, "neneqad" 81 times, but that's only a minor argument for "neneqad" (find&replace has to be done anyway).

Alexander

Glauco

Dec 21, 2019, 4:42:41 PM
to Metamath
Hi Alexander,


I didn't move ~ ifeq123d (because it is already available as ~ ifbieq12d , so I marked ~ ifeq123d as "discouraged") 

did you also replace all the references from ifeq123d to ifbieq12d (so that my theorems no longer refer to a discouraged label)?

Thanks
Glauco


 

Alexander van der Vekens

Dec 21, 2019, 4:59:31 PM
to Metamath
Hi Glauco,
no, I did not replace ifeq123d by ifbieq12d in any proof. Since the theorems are not identical (they use different class variables), this is not possible with a simple find&replace. But there is no urgency to replace ifeq123d by ifbieq12d even though ifeq123d is discouraged: the proofs using ifeq123d are still valid. I marked ifeq123d as "discouraged" so that it will/should not be used in additional proofs.

Alexander

Norman Megill

Dec 21, 2019, 6:49:23 PM
to Metamath
BTW the way to make the replacement is by using the "/may_grow" option of minimize:

MM> prove dirkeritg
MM-PA> min ifbieq12d /may_grow
Bytes refer to compressed proof size, steps to uncompressed length.
Proof of "dirkeritg" remained at 23729 steps using "ifbieq12d".
MM-PA> save new_proof /compressed
MM-PA> exit

Norm