
Jan 24, 2022, 6:38:43 PM

The source can be downloaded here:

http://people.csail.mit.edu/garland/LP/

It has a FiniteMap trait:

FiniteMap (M, D, R): trait
    % An M is a map from D's to R's.
    introduces
        {}: -> M
        update: M, D, R -> M
        apply: M, D -> R
        defined: M, D -> Bool
    asserts
        M generated by {}, update
        M partitioned by apply, defined
        forall m: M, d, d1, d2: D, r: R
            apply(update(m, d2, r), d1) ==
                if d1 = d2 then r else apply(m, d1);
            ~defined({}, d);
            defined(update(m, d2, r), d1) ==
                d1 = d2 \/ defined(m, d1)
    implies
        Array1 (update for assign, apply for __[__],
                M for A, D for I, R for E)
        converts apply, defined
            exempting forall d: D apply({}, d)

http://nms.lcs.mit.edu/Larch/handbook/FiniteMap.html

Basically a type generated from {} and update, with {} being
the empty map. Besides an apply function it also has a
defined predicate. The functions themselves are conceived as
partial. The apply might react like Dan-O-Matik's function
outside the domain, in that apply(m, d) provably has neither
the value a nor the value b, but nevertheless the underlying
simply typed language would make apply "total" on the full D.
On the other hand defined is more precise, we even have
~defined({}, d) provable. So I guess one would use defined
and not apply, in case one is interested in which subset of
D corresponds to the domain of a given function. If the type
were not inductive from {} and update, it would also allow
infinite functions and their updates.
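The trait's equations can be sketched in Python. This is only an illustration; the class and method names are mine, and raising KeyError for apply outside the domain is just one arbitrary choice, since the trait exempts apply({}, d):

```python
class FiniteMap:
    """Finite map generated from {} (empty) and update, as in the trait."""

    def __init__(self, pairs=None):
        self._pairs = dict(pairs or {})

    def update(self, d, r):
        # update(m, d2, r): a later update shadows an earlier one,
        # matching apply(update(m, d2, r), d1) == if d1 = d2 then r ...
        new = dict(self._pairs)
        new[d] = r
        return FiniteMap(new)

    def apply(self, d):
        # apply({}, d) is exempted by the trait; raising here is one
        # possible behavior, the trait leaves it unspecified.
        if d not in self._pairs:
            raise KeyError(d)
        return self._pairs[d]

    def defined(self, d):
        # ~defined({}, d);
        # defined(update(m, d2, r), d1) == d1 = d2 \/ defined(m, d1)
        return d in self._pairs


m = FiniteMap().update("a", 1).update("b", 2)
print(m.apply("a"))              # 1
print(m.defined("c"))            # False
print(FiniteMap().defined("a"))  # False, i.e. ~defined({}, d)
```

Note how defined is totally specified while apply is not: the trait pins down defined on every map, but says nothing about apply on the empty map.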


Jan 24, 2022, 8:04:13 PM

https://github.com/UniMath/UniMath

Type theory (Coquand's), univalency after category theory (...),
then I wonder how these systems handle all sorts of the usual
vagaries about vacuous quantification and so on.

"Important: files in the library starting with hProp.v will not
compile without a type_in_type patch which turns off the
universe consistency checking." -- https://github.com/UniMath/UniMath/tree/master/UniMath/Foundations

Jan 24, 2022, 8:22:35 PM

I don't know, these provers do not contribute
to foundations. Any foundational axiom set is
again just some axiom set. Most often it doesn't
matter which foundation you add to the system.

The history of Larch is interesting, since it
put a lot of faith into Knuth-Bendix completion,
not as a foundation, but as a proof search
mechanism. So it's currently a source of inspiration
for dealing with first order equality, see also:

A Guide to LP, The Larch Prover
https://www.semanticscholar.org/paper/e8cf48115302abe5658c3972a48056f282330e55


Ross A. Finlayson wrote:

> On Monday, January 24, 2022 at 3:38:43 PM UTC-8, Mostowski Collapse wrote:

>> The source can be downloaded here:

>> http://people.csail.mit.edu/garland/LP/


Jan 24, 2022, 9:02:17 PM

https://www.jstor.org/stable/4098196

I would probably note that a fixed point theorem for compactness
after an inductive set seems like the why of univalency, instead of vice versa.
I.e. there's a geometric besides an algebraic approach.

Excuse me, it's not necessarily relevant, though where there's discussion
about the de Bruijn or Curry correspondence, it's another example that there are
particulars where the idea is to find the rule.

https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence#Generalizations

https://en.wikipedia.org/wiki/Homotopy_type_theory#The_univalence_axiom

It seems more like results after quantification _subsume_ types instead
of outside axiomatizations _presuming_ types.

https://en.wikipedia.org/wiki/Subtyping#Subsumption

Jan 24, 2022, 9:13:18 PM

can be some proper subset of the unit circle
of positive measure. We conjecture that there
exist univalent functions with this property."
-- Fixed Points of Univalent Functions, Gharibyan, Schmieder

"Abstract: For a closed nowhere dense subset C of dD a bounded
univalent holomorphic function f on D is found such that C equals
the cluster set of its fixed points."
-- Fixed Points of Univalent Functions II, Schmieder

Jan 24, 2022, 9:14:12 PM

And what is a function space in "UniMath"? In Larch
it's relatively simple. It's a mixture of equational reasoning:

apply(update(m, d2, r), d1) ==
    if d1 = d2 then r else apply(m, d1);

And predicates:

~defined({}, d);
defined(update(m, d2, r), d1) ==
    d1 = d2 \/ defined(m, d1)

But Larch probably views predicates as a special case of
equations in the boolean domain, and that is where most of
the UniMath discussion happens.

But besides that it has the same primordial defect as
Dan-O-Matik's naive function spaces. Namely that apply fails
to derive anything in case ~defined can be derived.

The modern approaches to function spaces, like
for example the one found in Isabelle/HOL, do not have this
defect. Can you guess how this is solved?


Jan 24, 2022, 9:32:44 PM

Ha ha, this is a funny rant:

Isabelle functions: Always total, sometimes undefined
https://www.joachim-breitner.de/blog/732-Isabelle_functions__Always_total%2C_sometimes_undefined

Anyway the solution for function spaces in Isabelle/HOL is this here:

The datatype of "maps"; strongly resembles maps in VDM.
https://isabelle.in.tum.de/library/HOL/HOL/Map.html

You can compare with Larch, to see what changed over the years
in the understanding of function spaces. The update is
called a maplet. In ASCII you can write f[a|->b], in Isabelle/HOL
they use f(a↦b). They start with the empty map:

abbreviation
  empty :: "'a ⇀ 'b" where
  "empty ≡ λx. None"

and then can prove things like:

lemma map_upd_nonempty [simp]: "t(k↦x) ≠ empty"
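The Isabelle/HOL idea can be mimicked in Python: a map 'a ⇀ 'b is a *total* function into an option type, with None marking "outside the domain". This is only a sketch; the names empty and upd are ad hoc, not from any library:

```python
from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")
Map = Callable[[A], Optional[B]]  # 'a => Optional['b], total by construction


def empty(x):
    # empty == (lambda x. None): defined nowhere, yet total
    return None


def upd(m, k, v):
    # f(k |-> v): a total function that overrides m at k
    return lambda x: v if x == k else m(x)


m = upd(upd(empty, "a", 1), "b", 2)
print(m("a"))  # 1
print(m("c"))  # None -- "apply" and "defined" collapse into one total function

# spirit of map_upd_nonempty: t(k |-> x) differs from empty at k
assert upd(empty, "k", 0)("k") != empty("k")
```

The point is that no separate defined predicate is needed: m(d) is None exactly when d is outside the domain, so the partiality lives in the codomain, not in the function.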


Jan 24, 2022, 10:56:29 PM

Vacuity is usually trivial, but sometimes special.
There are a lot of hangnails in the usual deft formalism.
It seems like rationals have LUB defined before nested intervals
would otherwise apply.

Having at least three formalisms that establish continuity,
i.e. line continuity, field continuity, signal continuity, and then
sorting out for "function theory" how cardinals and ordinals are
trichotomous but also that there's a continuous domain defined before
the integers and another after the rationals, really makes a lot of
difference in foundations - also as it basically includes extraordinary
elements in theory - and these are among the reasonings that help distinguish
the vacuous and trivial - and universal and total.

Physics is usually at least two things, the near and the far. Here a
notion of the practical is an idea like a hologram plate, manufactured
according to a ratio with the size of a coronavirus virion, so that
there would be plain detectors of coronavirus disease virions
that on exposure would change color, all according to theories of diffraction.

https://spectrum.ieee.org/inexpensive-3d-printed-microscope-can-spot-coronavirus-in-blood

"To the best of our knowledge this is the first report

of a highly comparative time-series analysis

using digital holographic microscopy data."

https://www.bbva.com/en/history-of-holography/

It seems like with a brief amount of work, one could design a
"detects covid" device that uses no electricity or other consumables,
doesn't wear out, and is reusable after washing.

Jan 25, 2022, 6:52:13 AM

Univalency is rather a revival of rewriting systems in
a wicked way. But this is only from glossing over it.

Historically, for example when going from Larch to
Isabelle/HOL, we see rather that first order equality
and rewriting were enhanced by higher order terms and
a less termish view of propositions. Univalency
seems to make things termish again, and then lets
a horde of category theorists tackle the problems that
originally were addressed by "hackers" such as Knuth.

LoL


Jan 25, 2022, 6:58:55 AM

I just wrote an "anti-rewriting" pamphlet:

Non-Normal Manifesto

Before we begin we should assure ourselves about
our motivation to introduce first order equality into our
automated theorem prover. Usually automated theorem
provers might not only have a notion of an equality s = t,
but also a notion of a rewriting rule s -> t, and a notion
of size |r| of a term r.

A normal proof is then defined as a proof where the (subst)
rule is only applied when not |t| > |s|. With such a notion
of a (subst) rule might then come a notion of computation
by repeating this step. Ultimately we might then ask for
complete and terminating rewrite rules.

In our take on an automated theorem prover, we avoid
notions that go beyond equality s = t, and hence do
not aim at normal proofs. We are rather interested in
finding short proofs, and a short proof might be a
non-normal proof. The below shows a non-normal proof:

1. f1(c1) = c2 ∧ c3 = f2(c2) ∧ f3(c3) = c4 ⇒ f3(f2(f1(c1))) = c4
2. f3(f2(f1(c1))) = c4 (T⇒2 1)
3. ¬(f1(c1) = c2 ∧ c3 = f2(c2) ∧ f3(c3) = c4) (T⇒1 1)
4. ¬(c3 = f2(c2) ∧ f3(c3) = c4) (F∧2 3)
5. ¬f1(c1) = c2 (F∧1 3)
6. ¬f3(c3) = c4 (F∧2 4)
7. ¬c3 = f2(c2) (F∧1 4)
8. ¬f3(f2(c2)) = c4 (F= 6, 7)
9. f3(f2(c2)) = c4 (F= 2, 5)
✓ (ax 9, 8)

If we measure the size of a term by its printing length, we see
that the first application of (F=), to ~c3 = f2(c2) and ~f3(c3) = c4,
is non-normal, since f2(c2) is larger than c3. On the other hand
the second application of (F=), of ~f1(c1) = c2
to f3(f2(f1(c1))) = c4, is normal.
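The normality check from the manifesto can be spelled out in a few lines of Python. This is a hedged sketch of the size measure only (the helper names size and is_normal are mine), checking the two (F=) steps above:

```python
def size(term: str) -> int:
    # size of a term measured by its printing length, as in the manifesto
    return len(term)


def is_normal(s: str, t: str) -> bool:
    # a substitution replacing s by t is normal iff not |t| > |s|
    return not (size(t) > size(s))


# first (F=): rewrites c3 into f2(c2) inside ~f3(c3) = c4
print(is_normal("c3", "f2(c2)"))   # False: non-normal, f2(c2) is larger than c3

# second (F=): rewrites f1(c1) into c2 inside f3(f2(f1(c1))) = c4
print(is_normal("f1(c1)", "c2"))   # True: normal
```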

Have Fun!

See also:

Regular Equality Proofs for Maslovs Method

https://twitter.com/dogelogch/status/1485940674608386051

Regular Equality Proofs for Maslovs Method

https://www.facebook.com/groups/dogelog


Sep 30, 2022, 12:23:29 PM

Sep 30, 2022, 7:12:47 PM

Recently I have seen dumb rewriting as rather a plus.
There are a few interesting propositional logic
provers, which combine different approaches:

1) Quine Algorithm
2) Sequent Calculus

You can marry the two. Just take Fitting's
paper (*) here, and then change the end-phase
of the prover as follows. From:

/* Fitting */
thm([L1|Lambda], [L2|_]) :- ( L1 = L2, ! ; thm(Lambda, [L2]) ).
thm(Lambda, [~ L | Gamma]) :- thm([L | Lambda], Gamma), !.
thm(Lambda, [L | Gamma]) :- thm([~ L | Lambda], Gamma), !.

And replace it by this end-phase:

/* Fitting + fCube Simplification */
thm(_, [1 | _]) :- !.
thm(_, [0 | Gamma]) :- !, thm2(_, Gamma).
thm2(_, [L | Gamma]) :- opposite2(L, R),
   reduce(Gamma, R, Gamma2), thm2(_, Gamma2).

(*)

leanTAP revisited - September 1999

https://www.researchgate.net/publication/2240690
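The shape of that end-phase (assume the head literal false, propagate its opposite through the remaining goals, simplify) can be sketched in Python. This is an assumed reconstruction, not the actual fCube code: the post does not spell out opposite2 or reduce, so this version only handles disjunctions of literals:

```python
def opposite(lit: str) -> str:
    # opposite2(L, R): complement a literal written as "p" / "~p"
    return lit[1:] if lit.startswith("~") else "~" + lit


def reduce(goals, true_lit):
    # reduce(Gamma, R, Gamma2): substitute the now-known truth value,
    # turning settled goals into the constants "1" (true) / "0" (false)
    out = []
    for g in goals:
        if g == true_lit:
            out.append("1")
        elif g == opposite(true_lit):
            out.append("0")
        else:
            out.append(g)
    return out


def thm2(goals) -> bool:
    # prove the disjunction of the literal goals, with look-ahead:
    # succeed as soon as some goal has simplified to "1"
    if not goals:
        return False
    head, rest = goals[0], goals[1:]
    if head == "1":
        return True
    if head == "0":
        return thm2(rest)
    # assume head false on this branch, so its opposite is true
    return thm2(reduce(rest, opposite(head)))


print(thm2(["p", "~p"]))  # True: p \/ ~p is a tautology
print(thm2(["p", "q"]))   # False: p \/ q is not
```

The speed-up comes from reduce settling goals eagerly instead of leaving them for later branching, which is the unit-propagation effect described below.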

Ross A. Finlayson wrote on Friday, September 30, 2022 at 18:23:29 UTC+2:

> On Tuesday, January 25, 2022 at 3:58:55 AM UTC-8, Mostowski Collapse wrote:

> > I just wrote a "anti-rewriting" pamphlet:

Sep 30, 2022, 7:15:11 PM

What's the bang of this marriage? Well, the result is something
well known from SAT solving, namely the incorporation of what
might be termed look-ahead:

https://en.wikipedia.org/wiki/SAT_solver#CDCL_solvers

But it also goes by the name forward checking or unit
propagation, whereby you can propagate all kinds of stuff.

Some empirical results on the test case SYN007+1.014.p:

/* Fitting */
% 12,779,502 inferences, 0.813 CPU in 0.826 seconds (98% CPU, 15728618 Lips)

/* Fitting + fCube Simplification */
% 1,203,958 inferences, 0.109 CPU in 0.098 seconds (112% CPU, 11007616 Lips)


Sep 30, 2022, 7:16:20 PM

Anyway, thanks for checking whether I am asleep.

Unfortunately, busy as hell with some profane nonsense.
