
A Paradox of Equality?


Dan Christensen

unread,
Jun 3, 2023, 1:33:09 PM6/3/23
to
Using ordinary set theory, we can prove:

(1) ALL(a):[a in X & a=a] is FALSE for any set X.

Whereas

(2) ALL(a):[a in X => a=a] is TRUE for any set X.

----------------------------------------------------------------------
PROOF (1)

Lemma: From Russell's Paradox (Proof: http://dcproof.com/UniversalSet.htm )

1. ALL(a):[Set(a) => EXIST(b):~b in a]
Axiom

Let x be a set

2. Set(x)
Premise

Apply lemma

3. Set(x) => EXIST(b):~b in x
U Spec, 1

4. EXIST(b):~b in x
Detach, 3, 2

5. ~y in x
E Spec, 4

6. ~y in x | ~y=y
Arb Or, 5

7. ~[~~y in x & ~~y=y]
DeMorgan, 6

8. ~[y in x & ~~y=y]
Rem DNeg, 7

9. ~[y in x & y=y]
Rem DNeg, 8

10. ALL(x):[Set(x) => EXIST(a):~[a in x & a=a]]
Conclusion, 2

11. ALL(x):[Set(x) => ~ALL(a):~~[a in x & a=a]]
Quant, 10

12. ALL(x):[Set(x) => ~ALL(a):[a in x & a=a]]
Rem DNeg, 11

----------------------------------------------------------------------
PROOF (2)

1. Set(x)
Premise

2. y in x
Premise

3. y=y
Reflex

4. ALL(a):[a in x => a=a]
Conclusion, 2

5. ALL(x):[Set(x) => ALL(a):[a in x => a=a]]
Conclusion, 1
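
As a quick sanity check of (1) and (2), both claims can be brute-forced in Python over a small finite universe; the aside below uses an arbitrary three-element universe in which X, like any set, excludes at least one object:

universe = {1, 2, 3}
X = {1, 2}  # illustrative: X excludes the object 3

# (1) ALL(a):[a in X & a=a] -- fails at a=3
print(all((a in X) and (a == a) for a in universe))  # False

# (2) ALL(a):[a in X => a=a] -- on booleans, p <= q encodes p => q
print(all((a in X) <= (a == a) for a in universe))   # True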

Dan

Download my DC Proof 2.0 freeware at http://www.dcproof.com
Visit my Math Blog at http://www.dcproof.wordpress.com


Ross Finlayson

unread,
Jun 3, 2023, 2:02:40 PM6/3/23
to
Maybe you should read Quine; even the introduction to his book
on set theory could help you. He doesn't employ "material implication",
though.

He does help explain class/set distinction and then also gets into
when "=" means intensionality and when it means extensionality.

Then he doesn't so much "address the resolution of the paradoxes
of quantification" as allude to them.

Anyways your

1) "I'm conflicted"
2) "Whatever..."

isn't any stronger than

1) "Whatever..."


Yeah there are features of universal quantification and equals
that sort of imply a universe of logical theoretical objects
already exists, and you can only infer deductively about them,
which is why there is what's called "science" and for example
"systems of representation in reference" about "expert systems"
and in the face of "uncertainty".

Of course there are closed-deductively forms in the finite, ....

But, as Quine points out, at some point the logic
gets into the, infinite. Quine glosses over issues
of universality and extensionality when he talks
about "=" as a predicate, then also he indicates that
all the "special forms of predicates" are each their
own kinds of "ultimate classes". (In, "little theories".)




Mild Shock

unread,
Jun 3, 2023, 2:13:41 PM6/3/23
to
I guess it's rather the paradox of irrelevance.
There are so many theorems that are irrelevant,
and would never make it into a paper, since

they use either inconsistent assumptions,
or else are non-constructive and can therefore not be
applied. They are just some unicorn dreams.

Maybe post some math when you have outgrown your
juvenile phase of the 1000th version of Russell's paradox,
which is totally irrelevant and useless to mathematics.

Dan Christensen

unread,
Jun 3, 2023, 2:17:55 PM6/3/23
to
[snip]

Thanks, but I'm not asking for any "help." I'm simply pointing out why, in ordinary set theory (no classes), you should avoid constructs of the form ALL(a):[a in x & P]. They will always be false since every set must exclude some object.

> he doesn't employ "material implication",

[snip]

I see no reason to avoid material implication for logical propositions that are unambiguously either true or false in the present.

Mild Shock

unread,
Jun 3, 2023, 2:20:18 PM6/3/23
to
Also, for a moron like you it might not be evident
that you can replace a=a by TRUE. So you proved
these two theorems:

~ALL(a):[a e X]
TRUE

The first theorem doesn't have even a mosquito leg of
difference from your Universal Set theorem, at
least not in classical logic. And the second theorem

is not something that would surprise anybody.

Dan Christensen

unread,
Jun 3, 2023, 2:26:53 PM6/3/23
to
On Saturday, June 3, 2023 at 2:13:41 PM UTC-4, Mild Shock wrote:
> I guess its rather the paradox of irrelevance.


Actually, it's not a paradox at all. It's more of a cautionary tale: avoid, if possible, constructs of the form ALL(a):[a in x & P].


> There are so many theorems that are irrelevant,
> und would never make it into a paper, since
>
> they use either inconsistent assumptions,

No inconsistent assumptions here.

> or then are non-construction and can therefore not be
> applied.

Constructive proofs are not usually required. The rules of logic that I use are widely accepted by the vast majority of mathematicians. Some rules may be rejected by certain elements outside the mainstream.

[snip childish abuse]

Mild Shock

unread,
Jun 3, 2023, 2:42:39 PM6/3/23
to

Maybe you have now entered the baby phase of realizing
that these behave similarly, since we have:

~ALL(a):[a e X] <=> EXIST(a):[~a e X]

How long will this baby phase last? Multiple
months? It only makes me marvel once again
whether you wrote DC Proof by yourself.

Most likely you stole it, inherited it from someone,
or got it by some other means. After all, your
DC Proof tool has these two inference rules:
- Quantifier switch (annoyingly introduces an extra negation sign)
- Double negation removal

So you should know the quantifier duality of ALL and
EXIST sleepwalking. But you are totally clueless and
extremely slow. That DC Proof has many inference

rules that unnecessarily introduce a negation sign,
even where not necessary for classical logic, leads me
to the speculation that the real author

had some intuitionistic tool in mind, or that the present
author is an utter moron who wants to terrorize the
users of DC Proof with complete nonsense inference rules.

I am still undecided which of the stories behind DC Proof
is true. Maybe both. When DC Proof changed hands,
nobody told the owner: watch out for intuitionistic inference

rules. And since the new owner doesn't know a damn
bit of intuitionistic logic, there is now this crank tool.

Ross Finlayson

unread,
Jun 3, 2023, 2:47:41 PM6/3/23
to
You're unequipped to deal with these "paradoxes" unless you
get them out of the way up front.

Your ignorance of the surrounds does not serve you well.

Dan Christensen

unread,
Jun 3, 2023, 2:56:23 PM6/3/23
to
On Saturday, June 3, 2023 at 2:20:18 PM UTC-4, Mild Shock (aka Mr. Collapse) wrote:

[snip childish abuse]

> that you can replace a=a by TRUE.

"TRUE" is not usually used like this in textbook math proofs. Save it for your philosophy class, Mr. Collapse.

Mild Shock

unread,
Jun 3, 2023, 2:57:45 PM6/3/23
to
That Rossy Boy's herpes gets worse and worse is also
seen in your very recent posts. Although you like
writing posts with dual notions in them, you are

totally clueless when it comes to duality in logic.
You cannot recognize it sleepwalking. If a post
does not contain the word "dual" directly, you

will not recognize it. The moron you are, you
will jump on every other keyword (here Equality,
Paradox) in a post. A little entertaining, but also

very sad to see, since a certain breaking of this duality plays
a role in intuitionistic logic. Another name for
it, when quantifiers are involved, is the square of

opposition, which doesn't fully hold for intuitionistic logic.

Mild Shock

unread,
Jun 3, 2023, 2:58:39 PM6/3/23
to
I guess Rossy Boy's herpes has now jumped to you.

Mild Shock

unread,
Jun 3, 2023, 3:08:53 PM6/3/23
to
Or to put it simply, so that you understand why your tool
makes people dumb, including yourself: your tool doesn't
provide the de Morgan rules. You cannot do with one click:

~(A v B) ~~> ~A & ~B

https://en.wikipedia.org/wiki/De_Morgan%27s_laws

You need many, many clicks: one click that replaces the
logical connective and introduces a hell of new negation
signs, and a couple of other clicks to remove the double

negations. Same problem with quantifiers.

P.S.: It also wrongly labels rules as de Morgan, but it's
not de Morgan in the sense of the classical de Morgan laws.
It's something else. Here is an example:

1 ~[P | Q]
Premise

2 ~~[~P & ~Q]
DeMorgan, 1

3 ~P & ~Q
Rem DNeg, 2

Mild Shock

unread,
Jun 3, 2023, 3:12:47 PM6/3/23
to
That your tool provides intuitionistic de Morgan is explained here:

Three out of the four implications of de Morgan's laws hold in
intuitionistic logic.
https://en.wikipedia.org/wiki/De_Morgan%27s_laws#In_intuitionistic_logic

The workaround for intuitionistic logic is then to use for example:

A v B ~~> ~(~A & ~B)

This is also detailed on Wikipedia.

Dan Christensen wrote:

Dan Christensen

unread,
Jun 3, 2023, 3:14:30 PM6/3/23
to
On Saturday, June 3, 2023 at 2:42:39 PM UTC-4, Mild Shock (aka Mr. Collapse) wrote:

[snip childish abuse]

> DC proof tools has these two inference rules:
> - Quantifier switch (annoyingly introduces an extra negation sign)
> - Double negation removal
>

[snip childish abuse]

Maybe you didn't know, but both of these rules of logic are widely accepted by the vast majority of mathematicians.

Change universal to existential:

1. ALL(a):P(a)
Premise

2. ~EXIST(a):~P(a)
Quant, 1

3. ALL(a):P(a) => ~EXIST(a):~P(a)
Conclusion, 1


Change existential to universal:

1. EXIST(a):P(a)
Premise

2. ~ALL(a):~P(a)
Quant, 1

3. EXIST(a):P(a) => ~ALL(a):~P(a)
Conclusion, 1


Remove double negation
1. ~~P
Premise

2. P
Rem DNeg, 1

3. ~~P => P
Conclusion, 1


For some unknown reason, you have recently relegated yourself to the intuitionist fringes of mathematics, Mr. Collapse. Why would you do that?

Mild Shock

unread,
Jun 3, 2023, 3:20:05 PM6/3/23
to
You also need extra steps, in the form of double negation
removal, for quantifiers if they are negated. Here you see that:

1 ~EXIST(a):P(a)
Premise

2 ~~ALL(a):~P(a)
Quant, 1

3 ALL(a):~P(a)
Rem DNeg, 2

This is also detailed on Wikipedia; just check out what it
writes about quantifiers, and that this is the intuitionistic approach:

https://en.wikipedia.org/wiki/De_Morgan%27s_laws#In_intuitionistic_logic

But since you are an uneducated fool, and have never taught
yourself the slightest bit of non-classical logic, you might
not even be aware of any of these things. You have possibly

never heard of the square of opposition and how it works out
in some non-classical logics. But why does your moron tool have
non-classical inference rules? You sure you wrote it yourself?

Mild Shock wrote:

Dan Christensen

unread,
Jun 3, 2023, 3:25:26 PM6/3/23
to
On Saturday, June 3, 2023 at 3:08:53 PM UTC-4, Mild Shock (aka Mr. Collapse) wrote:

[snip childish abuse]

> Your tool doesn't
> provide de Morgan rules. You cannot do with one click:
>
> ~(A v B) ~~> ~A & ~B
>
[snip]

Wrong again, Mr. Collapse. See the De Morgan option on the Logic menu, Mr. Collapse.

OR ---> AND

1. A | B
Premise

2. ~[~A & ~B]
DeMorgan, 1

3. A | B => ~[~A & ~B]
Conclusion, 1

AND ---> OR

1. A & B
Premise

2. ~[~A | ~B]
DeMorgan, 1

3. A & B => ~[~A | ~B]
Conclusion, 1

Dan Christensen

unread,
Jun 3, 2023, 3:27:27 PM6/3/23
to
On Saturday, June 3, 2023 at 3:12:47 PM UTC-4, Mild Shock wrote:
> That your tool provides intuitionistic de Morgan is explained here:
>
> Three out of the four implications of de Morgan's laws hold in
> intuitionistic logic.

[snip]

Not used in most textbook math proofs. Must be frustrating as hell for you.

Dan Christensen

unread,
Jun 3, 2023, 3:37:11 PM6/3/23
to
On Saturday, June 3, 2023 at 2:47:41 PM UTC-4, Ross Finlayson wrote:
[snip]

> You're unequipped to deal with these "paradoxes" unless you
> get them out of the way up front.
>

As I told Mr. Collapse here, it's not a paradox at all. It's more of a cautionary tale: avoid, if possible, constructs of the form ALL(a):[a in x & P], since they will always be false regardless of the truth value of P.

Mild Shock

unread,
Jun 3, 2023, 3:41:42 PM6/3/23
to
Do you need glasses? What on earth
makes you think that what I picked here:

~(A v B) ~~> ~A & ~B
https://en.wikipedia.org/wiki/De_Morgan%27s_laws

is the same as this nonsense:

Dan Christensen wrote on Saturday, June 3, 2023 at 21:25:26 UTC+2:
> 3. A | B => ~[~A & ~B]
> Conclusion, 1
https://groups.google.com/g/sci.logic/c/8F9_EdEOXa0/m/1oKmQttZBgAJ

It's NOT THE SAME. You were supposed to
prove ~[A | B] => [~A & ~B].

What's wrong with you? Still not getting
what the CLASSICAL de Morgan laws are,
and sticking to your INTUITIONISTIC FRINGE?

(Your own words)

Dan Christensen wrote:

Dan Christensen

unread,
Jun 3, 2023, 4:02:17 PM6/3/23
to
On Saturday, June 3, 2023 at 3:41:42 PM UTC-4, Mild Shock (aka Mr. Collapse) wrote:

> Dan Christensen wrote:
> > On Saturday, June 3, 2023 at 3:12:47 PM UTC-4, Mild Shock wrote:
> >> That your tool provides intuitionistic de Morgan is explained here:
> >>
> >> Three out of the four implications of de Morgan's laws hold in
> >> intuitionistic logic.
> >
> > [snip]
> >
> > Not used in most textbook math proofs. Must be frustrating as hell for you.
> >
> >

> Do you need glasses? What on earth
> makes you think that what I picked here:
> ~(A v B) ~~> ~A & ~B
> https://en.wikipedia.org/wiki/De_Morgan%27s_laws
>
> Is the same as this nonsense:
> Dan Christensen wrote on Saturday, June 3, 2023 at 21:25:26 UTC+2:
> > 3. A | B => ~[~A & ~B]
> > Conclusion, 1
> https://groups.google.com/g/sci.logic/c/8F9_EdEOXa0/m/1oKmQttZBgAJ
>
> Its NOT THE SAME . You were supposed to
> prove ~[A | B] => [~A & ~B]
>
> Whats wrong with you?

It seems YOU are getting more desperate by the hour, Mr. Collapse!

There is nothing wrong with the De Morgan rule in DC Proof, or any other rule AFAIK. Must be frustrating as hell for you.

Mild Shock

unread,
Jun 3, 2023, 4:07:15 PM6/3/23
to
I didn't say that your inference rules
are not valid. They are clumsy nonsense.

And they don't deserve the name they have.
You can check for yourself: the de Morgan laws

are other laws, like for example:
I don't know how to care your dumbass tool.
You have to figure this out by yourself.

Good Luck!

Dan Christensen

unread,
Jun 3, 2023, 4:27:57 PM6/3/23
to
On Saturday, June 3, 2023 at 4:07:15 PM UTC-4, Mild Shock wrote:

> Dan Christensen wrote:
> > On Saturday, June 3, 2023 at 3:41:42 PM UTC-4, Mild Shock (aka Mr. Collapse) wrote:
> >
> >> Dan Christensen wrote:
> >>> On Saturday, June 3, 2023 at 3:12:47 PM UTC-4, Mild Shock wrote:
> >>>> That your tool provides intuitionistic de Morgan is explained here:
> >>>>
> >>>> Three out of the four implications of de Morgan's laws hold in
> >>>> intuitionistic logic.
> >>>
> >>> [snip]
> >>>
> >>> Not used in most textbook math proofs. Must be frustrating as hell for you.
> >>>
> >>>
> >
> >> Do you need glasses? What on earth
> >> makes you think that what I picked here:
> >> ~(A v B) ~~> ~A & ~B
> >> https://en.wikipedia.org/wiki/De_Morgan%27s_laws
> >>
> >> Is the same as this nonsense:
> >> Dan Christensen wrote on Saturday, June 3, 2023 at 21:25:26 UTC+2:
> >>> 3. A | B => ~[~A & ~B]
> >>> Conclusion, 1
> >> https://groups.google.com/g/sci.logic/c/8F9_EdEOXa0/m/1oKmQttZBgAJ
> >>
> >> Its NOT THE SAME . You were supposed to
> >> prove ~[A | B] => [~A & ~B]
> >>
> >> Whats wrong with you?
> >
> > It seems YOU are getting more desperate by the hour, Mr. Collapse!
> >
> > There is nothing wrong the De Morgan rule in DC Proof. Or any other rules AFAIK. Must be frustrating as hell for you.
> >

> I didn't say that your inference rules
> are not valid.

Well, that's progress!

> They are clumsy nonsense.
>
[snip childish abuse]

There you go again, Mr. Collapse. The rules of inference in DC Proof are designed to be easily learned and applied. Are you envious? How is that proof assistant of yours coming along? Maybe you are beginning to understand some basic principles of design for learning when you actually try to use your program? Who is your target audience anyway--philosophers or mathematicians? Beginners or experts? You really should decide before going any further.

Mild Shock

unread,
Jun 3, 2023, 4:49:06 PM6/3/23
to
It's pretty obvious that they are clumsy nonsense.
For example, you implement only two rules:

EXIST(x):A(x) ~~> ~ALL(x):~A(x)
ALL(x):A(x) ~~> ~EXIST(x):~A(x)

But the Greeks were already discussing the square of opposition,
which had 4 different formulas to start with and not only two:

https://en.wikipedia.org/wiki/Square_of_opposition

Notice something, dumbo? So it wouldn't cost you much to allow
the end user to additionally perform the following operations:

a) If the end user clicks on a negation sign ~
in front of an EXIST, this rule fires:

~EXIST(x):A(x) ~~> ALL(x):~A(x)

b) If the end user clicks on a negation sign ~
in front of an ALL, this rule fires:

~ALL(x):A(x) ~~> EXIST(x):~A(x)

Hope this Helps!

Dan Christensen wrote on Saturday, June 3, 2023 at 22:27:57 UTC+2:
> > They are clumsy nonsense.

Mild Shock

unread,
Jun 3, 2023, 4:56:34 PM6/3/23
to
The rules you implemented are not even part
of the square of opposition, nor part of de Morgan;
you implemented some replacement rules. The

balance of these rules is that they add two negation
signs; that's why they are clumsy:

/* Replacement Rules */
EXIST(x):A(x) ~~> ~ALL(x):~A(x)
ALL(x):A(x) ~~> ~EXIST(x):~A(x)

The square of opposition has these 4 rules;
the balance of these rules is that they remove
two negations, or keep the number of negations the same:

~EXIST(x):~A(x) ~~> ALL(x):A(x)
~ALL(x):~A(x) ~~> EXIST(x):A(x)
~EXIST(x):A(x) ~~> ALL(x):~A(x)
~ALL(x):A(x) ~~> EXIST(x):~A(x)
https://en.wikipedia.org/wiki/Square_of_opposition

Two of them are de Morgan. De Morgan has these
4 rules; the balance of these rules is that they
always keep the number of negations the same:

ALL(x):~A(x) ~~> ~EXIST(x):A(x)
EXIST(x):~A(x) ~~> ~ALL(x):A(x)
~EXIST(x):A(x) ~~> ALL(x):~A(x)
~ALL(x):A(x) ~~> EXIST(x):~A(x)
https://en.wikipedia.org/wiki/De_Morgan%27s_laws

So you picked the worst inference rules, those
that add negation signs, whereas for classical
logic there exist inference rules that remove

negation signs or keep the number of negation
signs constant. Well, shit happens!
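
All four of these quantifier forms can be checked mechanically over a finite domain. A small Python brute force (an illustrative aside; the three-element domain is arbitrary) runs every predicate A over the domain:

from itertools import product

domain = [0, 1, 2]
for vals in product([False, True], repeat=len(domain)):
    A = dict(zip(domain, vals))
    assert all(not A[x] for x in domain) == (not any(A[x] for x in domain))  # ALL ~A vs ~EXIST A
    assert any(not A[x] for x in domain) == (not all(A[x] for x in domain))  # EXIST ~A vs ~ALL A
    assert (not any(A[x] for x in domain)) == all(not A[x] for x in domain)  # ~EXIST A vs ALL ~A
    assert (not all(A[x] for x in domain)) == any(not A[x] for x in domain)  # ~ALL A vs EXIST ~A
print("all four equivalences hold classically")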

Mild Shock

unread,
Jun 3, 2023, 5:12:06 PM6/3/23
to
The proper de Morgan rules are interesting for
mathematics and lattice theory, since one can
find lattices that satisfy the de Morgan rules,

including involution, but do not satisfy these
rules that are part of "material implication". Namely:

quasi-boolean algebras
¬x ∨ x = 1 (law of the excluded middle), and
¬x ∧ x = 0 (law of noncontradiction)
https://en.wikipedia.org/wiki/De_Morgan_algebra

Which is quite interesting, since it nevertheless has
involution, aka double negation elimination. But in
the setting of minimal logic, double negation elimination

implies both the law of the excluded middle and the
law of noncontradiction. In minimal logic we can
also add either the law of the excluded middle or the

law of noncontradiction, whereas a quasi-boolean
algebra collapses into a boolean algebra if one of
the two is added.
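
A standard concrete witness for this (not mentioned above, only added as an aside) is the three-element Kleene chain 0 < 1/2 < 1 with meet = min, join = max and negation x |-> 1 - x: both de Morgan laws and involution hold, yet excluded middle and noncontradiction fail at 1/2. A short Python check:

from itertools import product

elems = [0.0, 0.5, 1.0]
neg = lambda x: 1 - x

assert all(neg(neg(x)) == x for x in elems)  # involution
assert all(neg(max(x, y)) == min(neg(x), neg(y)) for x, y in product(elems, repeat=2))  # ~(x v y) = ~x & ~y
assert all(neg(min(x, y)) == max(neg(x), neg(y)) for x, y in product(elems, repeat=2))  # ~(x & y) = ~x v ~y
print(max(0.5, neg(0.5)), min(0.5, neg(0.5)))  # 0.5 0.5: x v ~x = 1 and x & ~x = 0 both fail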

Mild Shock

unread,
Jun 3, 2023, 5:24:48 PM6/3/23
to

If you drop involution as a requirement, you might
end up with so-called Ockham algebras, which are
characterized by a dual lattice endomorphism.

Somehow a fitting name (a reference to Ockham's
razor?), since it doesn't require much, except duality
in the form of de Morgan laws for meet and join,

but no de Morgan law for the function f, the
endomorphism. But you see that there is
only one negation:

f(a v b) = f(a) & f(b)
f(a & b) = f(a) v f(b)

And not two negations as here in Dan-O-Matik:

a v b = g(f(a) & f(b))
a & b = g(f(a) v f(b))

This could make a dent.

Dan Christensen

unread,
Jun 3, 2023, 10:24:04 PM6/3/23
to
On Saturday, June 3, 2023 at 4:56:34 PM UTC-4, Mild Shock (Mr. Collapse) wrote:
> The rules you implemented are even not part
> of the square of opposition,

So what? No one cares. You are simply grasping at straws here, Mr. Collapse.

> nor part of de Morgan,
> you implemented some replacement rules, the
> balance of these rules is that they add two negation
> signs, thats why they are clumsy:
>
[snip]

The De Morgan rule in DC Proof gives an easy way to convert AND to OR, and OR to AND. It works. Just click on the '&' or the '|' symbol, et voila! Top that, Mr. Collapse.

Mild Shock

unread,
Jun 4, 2023, 9:46:36 AM6/4/23
to
Yes, I know that you don't care that your DC Proof
is utter nonsense. There is a lot of asymmetry in your
DC Proof which makes doing proofs a pain in the ass.

Another example which is quite unfortunate is the
fact that you have only this rule:

(A <=> B) <~~> (A => B) & (B => A)

But this rule is missing:

(A <=> B) <~~> (A & B) v (~A & ~B)

Which makes it hard to prove something with a negated
(A <=> B), like when (A <=> B) is on the left-hand side of (=>).
Try proving this simple propositional theorem:

(r <=> p) & (p <=> q) => (r <=> q)

How many lines? A hundred lines?
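
For the record, the formula itself is a tautology; a quick brute force in Python (an aside, not DC Proof input) checks all 8 rows:

from itertools import product
print(all((not ((r == p) and (p == q))) or (r == q)
          for r, p, q in product([False, True], repeat=3)))  # True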

Mild Shock

unread,
Jun 4, 2023, 9:47:34 AM6/4/23
to
Or maybe this one:

~ (p <=> q) <=> (p <=> ~q)
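
The same kind of brute force (again just an illustrative aside) confirms this one over its 4 rows:

from itertools import product
print(all((not (p == q)) == (p == (not q))
          for p, q in product([False, True], repeat=2)))  # True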

Mild Shock

unread,
Jun 4, 2023, 9:51:44 AM6/4/23
to

DC Proof is a kind of masochist version of a proof
assistant / proof writing system. That it doesn't assume
that the domain is non-empty is the smallest problem.

It's very weak in propositional logic. Extremely annoying.

Dan Christensen

unread,
Jun 4, 2023, 11:43:13 AM6/4/23
to
On Sunday, June 4, 2023 at 9:46:36 AM UTC-4, Mild Shock wrote:
> Yes I know, that you don't care that your DC Proof
> is utter nonsense. There is a lot of asymmetry in your
> DC Proof which makes it doing proofs a pain in the ass.
>

I take it your proof-assistant project isn't progressing as much as you would like, Mr. Collapse. (Hee, hee!)


> Another example which is quite unfortunate is the
> fact that you have only this rule:
>
> (A <=> B) <~~> (A => B) & (B => A)
>
> But this rule is missing:
>
> (A <=> B) <~~> (A & B) v (~A & ~B)
>

I can't recall ever having to use such an equivalence. I don't think it would be worth implementing another rule.

> Which makes it hard to prove something with negative
> (A <=> B), like when (A <=> B) is on the lefthand side of (=>).
> Try proving this simple propositional theorem:
>
> (r <=> p) & (p <=> q) => (r <=> q)
>

Straightforward proof:

Suppose...

1. [R <=> P] & [P <=> Q]
Premise

2. R <=> P
Split, 1

3. P <=> Q
Split, 1

4. [R => P] & [P => R]
Iff-And, 2

5. R => P
Split, 4

6. P => R
Split, 4

7. [P => Q] & [Q => P]
Iff-And, 3

8. P => Q
Split, 7

9. Q => P
Split, 7

Suppose...

10. R
Premise

11. P
Detach, 5, 10

12. Q
Detach, 8, 11

As Required:

13. R => Q
Conclusion, 10

Suppose...

14. Q
Premise

15. P
Detach, 9, 14

16. R
Detach, 6, 15

As Required:

17. Q => R
Conclusion, 14

18. [R => Q] & [Q => R]
Join, 13, 17

19. R <=> Q
Iff-And, 18

As Required:

20. [R <=> P] & [P <=> Q] => [R <=> Q]
Conclusion, 1

Mild Shock

unread,
Jun 4, 2023, 11:55:22 AM6/4/23
to
And what about this one:

~ (p <=> q) <=> (p <=> ~q)

We can later compare with SAT-XOR. These are
not any projects of mine. That's just what you get when you
put your nose a little deeper into propositional

logic. I can recommend it. Currently your nonsense
tool implements only intuitionistic rules, so
it's from the INTUITIONISTIC FRINGE.

Not very useful.

Dan Christensen wrote:

Dan Christensen

unread,
Jun 4, 2023, 12:33:51 PM6/4/23
to
On Sunday, June 4, 2023 at 9:47:34 AM UTC-4, Mild Shock wrote:
> Or maybe this one:
>
> ~ (p <=> q) <=> (p <=> ~q)

'=>'

Suppose...

1. ~[P <=> Q]
Premise

2. ~[[P => Q] & [Q => P]]
Iff-And, 1

3. ~~[~[P => Q] | ~[Q => P]]
DeMorgan, 2

4. ~[P => Q] | ~[Q => P]
Rem DNeg, 3

5. ~~[P & ~Q] | ~[Q => P]
Imply-And, 4

6. P & ~Q | ~[Q => P]
Rem DNeg, 5

7. P & ~Q | ~~[Q & ~P]
Imply-And, 6

Two cases:

8. P & ~Q | Q & ~P
Rem DNeg, 7

Case 1

Suppose...

9. P & ~Q
Premise

10. P
Split, 9

11. ~Q
Split, 9

Suppose...

12. P
Premise

13. ~Q
Copy, 11

As Required:

14. P => ~Q
Conclusion, 12

Suppose...

15. ~Q
Premise

16. P
Copy, 10

As Required:

17. ~Q => P
Conclusion, 15

18. [P => ~Q] & [~Q => P]
Join, 14, 17

19. P <=> ~Q
Iff-And, 18

Case 1

As Required:

20. P & ~Q => [P <=> ~Q]
Conclusion, 9

Case 2

Suppose...

21. Q & ~P
Premise

22. Q
Split, 21

23. ~P
Split, 21

Suppose...

24. P
Premise

25. ~P => ~Q
Arb Cons, 24

26. ~Q
Detach, 25, 23

As Required:

27. P => ~Q
Conclusion, 24

Suppose...

28. ~Q
Premise

29. ~Q => P
Arb Cons, 22

30. P
Detach, 29, 28

As Required:

31. ~Q => P
Conclusion, 28

32. [P => ~Q] & [~Q => P]
Join, 27, 31

33. P <=> ~Q
Iff-And, 32

As Required:

34. Q & ~P => [P <=> ~Q]
Conclusion, 21

35. [P & ~Q => [P <=> ~Q]] & [Q & ~P => [P <=> ~Q]]
Join, 20, 34

36. P <=> ~Q
Cases, 8, 35

'=>'

As Required:

37. ~[P <=> Q] => [P <=> ~Q]
Conclusion, 1

'<='

Suppose...

38. P <=> ~Q
Premise

39. [P => ~Q] & [~Q => P]
Iff-And, 38

40. P => ~Q
Split, 39

41. ~Q => P
Split, 39

Suppose to the contrary...

42. P <=> Q
Premise

43. [P => Q] & [Q => P]
Iff-And, 42

44. P => Q
Split, 43

45. Q => P
Split, 43

46. ~Q => ~P
Contra, 44

Suppose...

47. P
Premise

48. ~Q
Detach, 40, 47

49. Q
Detach, 44, 47

50. ~Q & Q
Join, 48, 49

As Required:

51. ~P
Conclusion, 47

52. ~P => ~~Q
Contra, 41

53. ~P => Q
Rem DNeg, 52

54. Q
Detach, 53, 51

55. P
Detach, 45, 54

56. ~P & P
Join, 51, 55

As Required:

57. ~[P <=> Q]
Conclusion, 42

'<='

As Required:

58. [P <=> ~Q] => ~[P <=> Q]
Conclusion, 38

59. [~[P <=> Q] => [P <=> ~Q]]
& [[P <=> ~Q] => ~[P <=> Q]]
Join, 37, 58

As Required:

60. ~[P <=> Q] <=> [P <=> ~Q]
Iff-And, 59

Dan Christensen

unread,
Jun 4, 2023, 12:40:04 PM6/4/23
to
On Sunday, June 4, 2023 at 11:55:22 AM UTC-4, Mild Shock wrote:
> And what about this one:
> ~ (p <=> q) <=> (p <=> ~q)
> We can later compare with SAT-XOR. These are
> not any projects of mine. Thats just when you
> put your nose a little deeper into propositional
>
> logic. I can recommend it. Currently your nonsense
> tool implements only intuitionistic rules

Why would you say such a silly thing, Mr. Collapse? Another desperate ploy?

Ross Finlayson

unread,
Jun 4, 2023, 1:42:06 PM6/4/23
to
Negating material implication, as it applies to direct implication:

(P=/=>Q) -> (P ^ ~Q) | (~P ^ ~Q)

If P=>Q is "false", then having it as a "false antecedent", makes what follows, "valid material implication".

It seems you can't have both "material implication" and "direct implication" together.

(~P=>Q) <=> (P=>Q)


"Material implication in material implication"
(P=>Q) => (Q ^ P ) | (Q ^ ~P) | (P=/=>Q)

"Direct implication in direct implication"
(P->Q) -> (P ^ Q) | (~Q ^ ~P)

Mild Shock

unread,
Jun 4, 2023, 3:22:55 PM6/4/23
to
Omg. 60 lines.

Currently you don't have any de Morgan for bi-implication.
You only have the one definition of bi-implication,
namely this one:

(A <=> B) <~~> (A => B) & (B => A)

You could prove it much quicker if you had de Morgan
for bi-implication. You can derive de Morgan for
bi-implication from this one:

(A <=> B) <~~> (A & B) v (~A & ~B)

The de Morgan is then:

~ (A <=> B) <~~> (~A v ~B) & (A v B)

You don't take duality in classical logic seriously, Dan-O-Matik.
Your DC Proof leans too much towards intuitionistic
logic, or, as you say, the INTUITIONISTIC FRINGE.
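
Both displayed identities can be confirmed classically with a four-row truth table; a Python sketch of that check (an aside, not DC Proof syntax):

from itertools import product
rows = list(product([False, True], repeat=2))
print(all((a == b) == ((a and b) or (not a and not b)) for a, b in rows))           # (A <=> B) vs (A & B) v (~A & ~B)
print(all((not (a == b)) == (((not a) or (not b)) and (a or b)) for a, b in rows))  # ~(A <=> B) vs (~A v ~B) & (A v B)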

Dan Christensen

unread,
Jun 4, 2023, 4:48:59 PM6/4/23
to
> Omg. 60 Lines.
>

All very straightforward, Mr. Collapse. Deal with it.

> Currently you dont have any de Morgan for bi-implicaion.
> You only have the one definition of bi-implication,
> namely this one:
> (A <=> B) <~~> (A => B) & (B => A)

Seems sufficient to me after writing tens of thousands of lines of formal proof with DC Proof. To prove A<=>B, I usually prove A=>B and B=>A.

> You could prove it much quicker if you had de Morgan
> for bi-implication. You can derive de Morgan for bi-
> implication from this one:
> (A <=> B) <~~> (A & B) v (~A & ~B)

I haven't found a need for this in any mathematical proofs.

> The de Morgan is then:
>
> ~ (A <=> B) <~~> (~A v ~B) & (A v B)
>
> You dont take duality in classical logic seriously Dan O Matik.
> Your DC Proof is too much leaning towards intuitionistic
> logic, or as you say the INTUITIONISTIC FRINGE.

You are the one complaining about my removing ~~'s, Mr. Collapse! You whined about it NOT being intuitionistic! (HA, HA, HA!!!)

Mild Shock

unread,
Jun 4, 2023, 5:38:09 PM6/4/23
to
There is a clear asymmetry in your DC Poop tool. If you
look at the Logic drop-down menu, you have two
entries for => but only one entry for <=>:

'=>' <--> '&' (Implies-And)
'=>' <--> '|' (Implies-Or)
'<=>' <--> '&' (Iff-And)

As usual, you are a blind mole. Where is (Iff-Or)?
It's missing. Your 60-line proof would be reduced to
4 lines. Just watch:

~ (A <=> B) <=> (~A v ~B) & (B v A)

Now apply your => definition:

~ (A <=> B) <=> (A => ~B) & (B v A)

Now apply another => definition:

~ (A <=> B) <=> (A => ~B) & (~B => A)

Now apply your <=> definition:

~ (A <=> B) <=> (A <=> ~B)

4 lines, and not 60 lines. OK, maybe you would
need 8 lines or 16 lines, since you would make
~ (A <=> B) a premise and so on.
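
Each of the four displayed right-hand sides is classically equivalent to ~(A <=> B), which a small brute force confirms (an illustrative aside):

from itertools import product
imp = lambda p, q: (not p) or q
for a, b in product([False, True], repeat=2):
    target = not (a == b)
    assert target == (((not a) or (not b)) and (b or a))  # (~A v ~B) & (B v A)
    assert target == (imp(a, not b) and (b or a))         # (A => ~B) & (B v A)
    assert target == (imp(a, not b) and imp(not b, a))    # (A => ~B) & (~B => A)
    assert target == (a == (not b))                       # (A <=> ~B)
print("all four rewriting steps preserve equivalence")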

Dan Christensen

unread,
Jun 4, 2023, 6:29:20 PM6/4/23
to
On Sunday, June 4, 2023 at 5:38:09 PM UTC-4, Mild Shock wrote:

> Dan Christensen wrote on Sunday, June 4, 2023 at 22:48:59 UTC+2:

> > > (A <=> B) <~~> (A & B) v (~A & ~B)

> > I haven't found a need for this in any mathematical proofs.

> > > The de Morgan is then:
> > > ~ (A <=> B) <~~> (~A v ~B) & (A v B)
> > >
> > > You dont take duality in classical logic seriously Dan O Matik.
> > > Your DC Proof is too much leaning towards intuitionistic
> > > logic, or as you say the INTUITIONISTIC FRINGE.

> > You are the one complaining about my removing ~~'s, Mr. Collapse! You whined about it NOT being intuitionistic! (HA, HA, HA!!!)

> There is a clear asymmetry in your tool. If you
> look at the Logic drop down menu. You have two
> entries for => but only one entry for <=>:
>

So what?

> '=>' <--> '&' (Implies-And)
> '=>' <--> '|' (Implies-Or)
> '<=>' <--> '&' (Iff-And)
>
>[snip childish abuse] Where is (Iff-Or)?

Do you not understand what feature creep is? Didn't you say you were a software developer? Again, I simply haven't found it to be necessary.

Mild Shock

unread,
Jun 4, 2023, 8:20:13 PM6/4/23
to
Well, in this case '=>' <--> '|' (Implies-Or) is also feature
creep. You don't need it. You can use '=>' <--> '&' and
'&' <--> '|' plus a couple of clumsy Rem DNeg.

So why do you have '=>' <--> '|' (Implies-Or)?

P.S.: Here is a demonstration of '=>' <--> '|' (Implies-Or),
not using '=>' <--> '|' (Implies-Or):

29 P => Q <=> ~P | Q
Iff-And, 28

------------------------------------ begin proof -----------------------------------------------

1 P => Q
Premise

2 ~[~P | Q]
Premise

3 ~~[~~P & ~Q]
DeMorgan, 2

4 ~~[P & ~Q]
Rem DNeg, 3

5 P & ~Q
Rem DNeg, 4

6 P
Split, 5

7 ~Q
Split, 5

8 Q
Detach, 1, 6

9 ~Q & Q
Join, 7, 8

10 ~~[~P | Q]
Conclusion, 2

11 ~P | Q
Rem DNeg, 10

12 P => Q => ~P | Q
Conclusion, 1

13 ~P | Q
Premise

14 ~[P => Q]
Premise

15 ~~[P & ~Q]
Imply-And, 14

16 P & ~Q
Rem DNeg, 15

17 ~[~~P & ~Q]
DeMorgan, 13

18 ~[P & ~Q]
Rem DNeg, 17

19 ~~[P => ~~Q]
Imply-And, 18

20 P => ~~Q
Rem DNeg, 19

21 P
Split, 16

22 ~Q
Split, 16

23 ~~Q
Detach, 20, 21

24 ~Q & ~~Q
Join, 22, 23

25 ~~[P => Q]
Conclusion, 14

26 P => Q
Rem DNeg, 25

27 ~P | Q => [P => Q]
Conclusion, 13

28 [P => Q => ~P | Q] & [~P | Q => [P => Q]]
Join, 12, 27

29 P => Q <=> ~P | Q
Iff-And, 28

------------------------------------ end proof -----------------------------------------------

Dan Christensen wrote on Monday, June 5, 2023 at 00:29:20 UTC+2:
> So what?
> > '=>' <--> '&' (Implies-And)
> > '=>' <--> '|' (Implies-Or)
> > '<=>' <--> '&' (Iff-And)

Dan Christensen

unread,
Jun 4, 2023, 9:23:38 PM6/4/23
to
On Sunday, June 4, 2023 at 8:20:13 PM UTC-4, Mild Shock wrote:
> Well in this case, '=>' <--> 'I' (Implies-Or) is also feature
> creap. You don't need it.

[snip]

On the contrary, I often use it. Based on my experience, I would almost certainly never use such a feature as you describe in a mathematical proof even if it was implemented. Maybe you can include it in your own proof assistant. I will give it a miss.

Mild Shock

unread,
Jun 5, 2023, 4:31:18 AM6/5/23
to

I guess we have a paradox of logical equality then.

LoL

Mild Shock

unread,
Jun 5, 2023, 5:55:12 AM6/5/23
to

Using ordinary propositional 2nd order logic, we can prove:

(1) ALL(a):[[a <=> X] & [a <=> a]] is FALSE for any proposition X.

Whereas

(2) ALL(a):[[a <=> X] => [a <=> a]] is TRUE for any proposition X.

Here is a proof with SWI-Prolog QSAT:

?- use_module(library(clpb)).
true.

?- taut(~(A^(~((A =:= X)*(A=:=A)))), R).
R = 0,

?- taut(~(A^(~((A =:= X)=<(A=:=A)))), R).
R = 1,

ROFL
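
A plain Python brute force over the two truth values gives the same verdicts as the clpb queries (an aside; the transcript above is quoted as posted):

for X in (False, True):
    c1 = all((a == X) and (a == a) for a in (False, True))       # ALL(a):[[a <=> X] & [a <=> a]]
    c2 = all((not (a == X)) or (a == a) for a in (False, True))  # ALL(a):[[a <=> X] => [a <=> a]]
    print(X, c1, c2)  # c1 is False for both X, c2 is True for both X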

Dan Christensen

unread,
Jun 5, 2023, 11:22:44 AM6/5/23
to
On Monday, June 5, 2023 at 5:55:12 AM UTC-4, Mild Shock wrote:
> Using ordinary propositional 2nd order logic, we can prove:
>

Who uses 2nd order logic? I don't think you will find it in any math textbooks. They do tend, however, to use a lot of set theory instead.

Mild Shock

unread,
Jun 5, 2023, 1:47:29 PM6/5/23
to
Terence Tao says the contrary:

In order to prevent this post from being totally
devoid of mathematical content, I’ll mention that
I recently came across the phenomenon of nonfirstorderisability
in mathematical logic: there are perfectly meaningful
and useful statements in mathematics which cannot be
phrased within the confines of first order logic
(combined with the language of set theory, or any
other standard mathematical theory); one must use
a more powerful language such as second order logic
instead. This phenomenon is very well known among
logicians, but I hadn’t learned about it until
very recently, and had naively assumed that first
order logic sufficed for “everyday” usage of mathematics.
https://terrytao.wordpress.com/2007/08/27/printer-friendly-css-and-nonfirstorderizability/#more-172

The Drinker Paradox is also such an instance. Not
that you would really need 2nd Order, but
set theory breaks it.

Dan Christensen wrote:

Dan Christensen

unread,
Jun 5, 2023, 3:32:43 PM6/5/23
to
On Monday, June 5, 2023 at 1:47:29 PM UTC-4, Mild Shock wrote:

> Dan Christensen wrote:
> > On Monday, June 5, 2023 at 5:55:12 AM UTC-4, Mild Shock wrote:
> >> Using ordinary propositional 2nd order logic, we can prove:
> >>
> >
> > Who uses 2nd order logic? I don't think you will find it any math textbooks. They do tend, however, to use a lot of set theory instead.
> >

> Terrence Tao says the contrary:
>
> In order to prevent this post from being totally
> devoid of mathematical content, I’ll mention that
> I recently came across the phenomenon of nonfirstorderisability
> in mathematical logic: there are perfectly meaningful
> and useful statements in mathematics which cannot be
> phrased within the confines of first order logic
> (combined with the language of set theory, or any
> other standard mathematical theory); one must use
> a more powerful language such as second order logic
> instead. This phenomenon is very well known among
> logicians, but I hadn’t learned about it until
> very recently, and had naively assumed that first
> order logic sufficed for “everyday” usage of mathematics.
> https://terrytao.wordpress.com/2007/08/27/printer-friendly-css-and-nonfirstorderizability/#more-172
>

Interesting. That Terence Tao was only recently aware of it suggests it must be exceedingly rare in math textbooks.

> The Drinker Paradox is also such an instance.

On the contrary, as we have seen here repeatedly, using ordinary set theory, we have:

EXIST(x):[x in D => Q(D)] for arbitrary set D and proposition Q regardless of its truth value.

For Smullyan's original Drinker Principle setup, where D is the set of all drinkers, and Q is the proposition ALL(y):y in D.

Although Smullyan's original informal proof makes no mention of a pub (as above), it could be better formalized as:

EXIST(x):[x in D => P=D] where D is the set of all those people drinking in a given pub and P is the set of all people in that pub.
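
For what it is worth, the pub version can be brute-forced over a small non-empty pub: for every way of assigning who drinks, some x satisfies "x drinks => everyone drinks". A Python sketch (the three-person pub is an arbitrary illustration):

from itertools import product

pub = ["a", "b", "c"]
for vals in product([False, True], repeat=len(pub)):
    drinks = dict(zip(pub, vals))
    assert any((not drinks[x]) or all(drinks[y] for y in pub) for x in pub)
print("EXIST(x):[x drinks => ALL(y):y drinks] holds in every case")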

Ross Finlayson

unread,
Jun 5, 2023, 3:39:23 PM6/5/23
to
Of course schema for higher-order can be written on down to lower-order,
except when there are "extensionality problems", and by problems I
mean invalidities or contradictions.

For example writing out "material" "implication" down out of schema
shows how its "false antecedents" and "false consequents" blow up.

There's that first-orderizability is a thing already and it's fine:
as long as you don't have quantifier ambiguities that otherwise
the "pure" logic is first-order.

Anyways you two go on clowning each other it's pretty much
bored everybody else where they just ignore it.

Especially _stupid_ things like "I fooled myself and prove you
do too", it's considered non-mathematical. (At best.)


Mild Shock

unread,
Jun 5, 2023, 3:49:52 PM6/5/23
to
Get an education, man!

Your nonsensical Generalized Drinker Paradox nonsense
doesn't work in second order logic, since in second
order logic you cannot form:

r e r
http://www.dcproof.com/UniversalSet.htm

In second order logic you can only form:

x e X

where the left-hand side is an "element" variable
and the right-hand side is a "set" variable.

Also, what makes you think Terence Tao doesn't
use second order logic in his Real Analysis books?
He also uses the lower case and upper case convention.

Dan Christensen wrote:
> On Monday, June 5, 2023 at 1:47:29 PM UTC-4, Mild Shock wrote:
>
>> Dan Christensen wrote:
>>> On Monday, June 5, 2023 at 5:55:12 AM UTC-4, Mild Shock wrote:
>>>> Using ordinary propositional 2nd order logic, we can prove:
>>>>
>>>
>>> Who uses 2nd order logic? I don't think you will find it any math textbooks. They do tend, however, to use a lot of set theory instead.
>>>
>
>> Terrence Tao says the contrary:
>>
>> In order to prevent this post from being totally
>> devoid of mathematical content, I’ll mention that
>> I recently came across the phenomenon of nonfirstorderisability
>> in mathematical logic: there are perfectly meaningful
>> and useful statements in mathematics which cannot be
>> phrased within the confines of first order logic
>> (combined with the language of set theory, or any
>> other standard mathematical theory); one must use
>> a more powerful language such as second order logic
>> instead. This phenomenon is very well known among
>> logicians, but I hadn’t learned about it until
>> very recently, and had naively assumed that first
>> order logic sufficed for “everyday” usage of mathematics.
>> https://terrytao.wordpress.com/2007/08/27/printer-friendly-css-and-nonfirstorderizability/#more-172
>>
>
> Interesting.
>
>> The Drinker Paradox is also such an instance.
>
> On the contrary, as we have seen here repeatedly, using ordinary set theory, we have:
>
> EXIST(x):[x in D => Q(D)] for arbitrary set D and proposition Q regardless of its truth value.
>
> For Smullyan's original Drinker Principle setup, where D is the set of all drinkers, and Q is the proposition ALL(y):y in D.
>
> Although Smullyan's original informal proof makes no mention of a pub (as above), it could be better formalized as:
>
> EXIST(x):[x in D => P=D] where D is the set of all those people drinking in a given pub and P is the set of all people in that pub.
>

Dan Christensen

unread,
Jun 5, 2023, 4:03:44 PM6/5/23
to
On Monday, June 5, 2023 at 3:39:23 PM UTC-4, Ross Finlayson wrote:

> Of course schema for higher-order can be written on down to lower-order,
> except when there are "extensionality problems", and by problems I
> mean invalidities or contradictions.
>
> For example writing out "material" "implication" down out of schema
> shows how its "false antecedents" and "false consequents" blow up.
>

Still waiting for even a single example of this "blowing up," Ross.

Mild Shock

unread,
Jun 5, 2023, 4:09:29 PM6/5/23
to
Most mathematicians, when they talk about "sets",
think about them in second order logic terms.
It is well known that they hardly

think of them in first order logic terms. This also means
that for mathematicians the Russell Paradox isn't
existent, because this here is not provable:

28 ALL(s):[Set(s) => EXIST(a):~a e s]
Rem DNeg, 27
http://www.dcproof.com/UniversalSet.htm

To begin with, you use in your proof:

r e r

which, for a mathematician who thinks in second
order logic terms, isn't well formed, since in second
order logic we can only form:

x e X

sometimes written as Xx, where the lower case variable
x is not the same name as the upper case variable X. They
are different variables. So you cannot provoke

the Russell Paradox in second order logic using the
native membership relation of second order logic.

Mild Shock

unread,
Jun 5, 2023, 4:14:59 PM6/5/23
to

An approximation to this thinking is the regularity
axiom in ZFC. Namely this axiom:

https://en.wikipedia.org/wiki/Axiom_of_regularity

It would make r e r false. Interestingly, despite
the regularity axiom, the Russell Paradox can still
be provoked in ZFC, which shows that even with

this "patch", second order logic thinking is still different.

Dan Christensen

unread,
Jun 5, 2023, 4:39:27 PM6/5/23
to
On Monday, June 5, 2023 at 3:49:52 PM UTC-4, Mild Shock wrote:
> Get an education man!
>
> Your nonsensical Generalized Drinker Paradox Nonsense

It works. Deal with it, Mr. Collapse. (See my previous posting here.)

> doesn't work in second order logic, since in second
> order logic you cannot form:
>
> r e r

It works in ordinary, textbook logic and set theory as we can see here:

> http://www.dcproof.com/UniversalSet.htm
>
[snip]

> Also what makes you think Terrence Tao doesn't
> use second order logic in his Real Analysis Books?

See the appendix "Variables and Quantifiers" p.320 in Tao's "Analysis I." I see no indication of quantifying over logical propositions:

"The fundamental concept of variables - those familiar symbols such as x or n which denote various quantities which are unknown, or set to some value, or assumed to obey some property....

"A variable is a symbol, such as n or x, which denotes a certain type of mathematical object - an integer, a vector, a matrix, that kind of thing."

Ross Finlayson

unread,
Jun 5, 2023, 5:09:58 PM6/5/23
to
"We repeat that the sole question confronting the pure mathematicians
(as distinct from the scientist who employs mathematics in investigating
a special subject matter) is not whether the postulates he assumes or
the conclusions he deduces from them are true, but whether the
alleged conclusions are in fact the _necessary logical consequences_
of the initial assumptions." -- Nagel and Newman, "Goedel's proof"


This whole "higher-order bit" of the Burse-bot clam-loll-troll arrived
from this.

(A => B) => (B v ~A) | ~(A => B)

False antecedent? False consequent.

It blows itself down.

Dan Christensen

unread,
Jun 5, 2023, 5:22:24 PM6/5/23
to
On Monday, June 5, 2023 at 4:09:29 PM UTC-4, Mild Shock wrote:
> Most mathematicians when they talk about "sets"
> they think about them in second order logic terms.

They don't usually think of quantifying over logical propositions or predicates.

> This is well known, that they hardly
>
> think of them in first order logic terms. This means
> also that for mathematicians the Russell Paradox isn't
> existent, because this here is not provable:
>
> 28 ALL(s):[Set(s) => EXIST(a):~a e s]
> Rem DNeg, 27
> http://www.dcproof.com/UniversalSet.htm
>
> To begin with, you use in your proof:
>
> r e r
>
> Which for a mathematician that thinks in second
> order logic terms, isn't well formed.

[snip]

In what sense is the expression "r in r" not well formed?

The non-existence of a universal set is a widely accepted fact. It is accepted because of Russell's Paradox:

1. ALL(a):[a in r <=> ~a in a]
Premise

2. r in r <=> ~r in r
U Spec, 1

3. ~EXIST(r):ALL(a):[a in r <=> ~a in a]
Conclusion, 1

Dan Christensen

unread,
Jun 5, 2023, 5:32:33 PM6/5/23
to
On Monday, June 5, 2023 at 5:09:58 PM UTC-4, Ross Finlayson wrote:
> On Monday, June 5, 2023 at 1:03:44 PM UTC-7, Dan Christensen wrote:
> > On Monday, June 5, 2023 at 3:39:23 PM UTC-4, Ross Finlayson wrote:
> >
> > > Of course schema for higher-order can be written on down to lower-order,
> > > except when there are "extensionality problems", and by problems I
> > > mean invalidities or contradictions.
> > >
> > > For example writing out "material" "implication" down out of schema
> > > shows how its "false antecedents" and "false consequents" blow up.
> > >
> > Still waiting for even a single example of this "blowing up," Ross.

> "We repeat that the sole question confronting the pure mathematicians
> (as distinct from the scientist who employs mathematics in investigating
> a special subject matter) is not whether the postulates he assumes or
> the conclusions he deduces from them are true, but whether the
> alleged conclusions are in fact the _necessary logical consequences_
> of the initial assumptions." -- Nagel and Newman, "Goedel's proof"
>
>
> This whole "higher-order bit" of the Burse-bot clam-loll-troll arrived
> from this.
>
> (A => B) => (B v ~A) | ~(A => B)
>

A tautology. No inconsistencies here, Ross. What else have you got?

Dan Christensen

unread,
Jun 5, 2023, 5:34:45 PM6/5/23
to
On Monday, June 5, 2023 at 4:14:59 PM UTC-4, Mild Shock wrote:
> An approximation to this thinking is the regularity
> axiom in ZFC. Namely this axion:
>
> https://en.wikipedia.org/wiki/Axiom_of_regularity
>
> It would make r e r false.

To be false, it must be well formed. Busted!

Mild Shock

unread,
Jun 5, 2023, 6:04:53 PM6/5/23
to

You are a little confused, Danny Boy. These are two different
words, "founded" and "formed". Do you need glasses? Do
you have eyesight problems like John Gabriel?

The axiom of regularity requires well-founded sets,
and therefore excludes a set that would satisfy r e r.

It doesn't require anything well formed.
It's not some well-formedness axiom.

Let (S,⪯) be an ordered set.
Then (S,⪯) is well-founded if and only if it satisfies the minimal condition:
Every non-empty subset of S has a minimal element.
That is, if the ordering ⪯ is a well-founded relation.
https://proofwiki.org/wiki/Definition:Well-Founded_Ordered_Set

Mild Shock

unread,
Jun 5, 2023, 6:06:40 PM6/5/23
to
In second order logic you can only state that an
element is member of a set. You can only write:

/* Well Formed, element e set */
x e X

You cannot write:

/* Not Well Formed, element e element */
x e x
/* Not Well Formed, set e element */
X e x
/* Not Well Formed, set e set */
X e X

What makes you think you can write r e r in second order
logic? What type should r have?

Mild Shock

unread,
Jun 5, 2023, 6:15:01 PM6/5/23
to
First time you hear about set theory, Danny Boy?
Why would a set that satisfies r ∈ r not be compatible
with the axiom of regularity?

Axiom of regularity
The axiom was introduced by von Neumann (1925)
https://en.wikipedia.org/wiki/Axiom_of_regularity

Because r ∈ r would imply an infinite descending
chain, which doesn't exist in a well-founded universe:

... r ∈ r ∈ r ∈ r ∈ r

The fact that a set r that satisfies r ∈ r isn't regular, aka
well-founded, is also mentioned on Wikipedia:

Elementary implications of regularity
we cannot have A ∈ A
https://en.wikipedia.org/wiki/Axiom_of_regularity#Elementary_implications_of_regularity

Mild Shock

unread,
Jun 5, 2023, 6:26:23 PM6/5/23
to
Obviously set theory is first order logic and not second order
logic, because we write stuff such as A ∈ A. Some authors also
have it that second order logic is used, but set theory belongs

to the first order fragment. But then A ∈ A would no longer
use the native membership ∈ from second order logic. The
membership x ∈ X in second order logic is usually written as:

Xx

A nice book about second order logic is:

Foundations without Foundationalism:
A Case for Second-Order Logic
Shapiro, S. (2000)
https://www.semanticscholar.org/paper/c738147397d6e7f4d993e28cd3abbb3805865278

According to Semantic Scholar, the book was cited 290 times.

Dan Christensen

unread,
Jun 5, 2023, 7:33:34 PM6/5/23
to
On Monday, June 5, 2023 at 6:04:53 PM UTC-4, Mild Shock wrote:

> Dan Christensen wrote on Monday, June 5, 2023 at 23:34:45 UTC+2:
> > On Monday, June 5, 2023 at 4:14:59 PM UTC-4, Mild Shock wrote:
> > > An approximation to this thinking is the regularity
> > > axiom in ZFC. Namely this axion:
> > >
> > > https://en.wikipedia.org/wiki/Axiom_of_regularity
> > >
> > > It would make r e r false.

> > To be false, it must be well formed. Busted!

> The axiom of regularity requires, and therefore
> excludes a set that would satisfy r e r:
>

The only requirement of a premise is that it be syntactically correct (i.e. well formed). It doesn't have to be true. It can contradict another premise. It can even be a self-contradiction, e.g. A & ~A.

But thanks for confirming that you were unable to prove that r in r is not well-formed.

Dan Christensen

unread,
Jun 5, 2023, 8:14:00 PM6/5/23
to
On Monday, June 5, 2023 at 6:06:40 PM UTC-4, Mild Shock wrote:
> In second order logic you can only state that an
> element is member of a set. You can only write:
>
> /* Well Formed, element e set */
> x e X
>
> You cannot write:
>
> /* Not Well Formed, element e element */
> x e x

Nothing wrong with that as premise/assumption. See: https://en.wikipedia.org/wiki/Well-formed_formula

Mild Shock

unread,
Jun 5, 2023, 8:26:41 PM6/5/23
to
You made an error when you translated the Drinker
Paradox from second order logic. Second order
logic is two-sorted. It is a multi-sorted logic.

It has these two sorts:

x, y, z, etc.: elements or individuals (the 1st order objects)
X, Y, Z, etc.: sets or predications (the 2nd order objects)

Using the native membership between the two, you
are only allowed to form:

x e X

i.e. state that a first order object is an element of a second order
object. But you use nonsense like this in your proof:

r e r

which is not well formed in a two-sorted logic, where
membership is only supposed to be written between an element
and a set. If you obeyed the rules of second order logic,

you could not prove the Russell Paradox. It is not a well
formed proof in second order logic, using the native
membership of second order logic.

Mild Shock

unread,
Jun 5, 2023, 8:36:26 PM6/5/23
to
What makes you think this link is relevant:

https://en.m.wikipedia.org/wiki/Well-formed_formula

It's only for first order logic, not for second order logic.
Wikipedia explicitly states:

"A key use of formulas is in propositional logic and
predicate logic such as first-order logic."

On the other hand, second order logic has its own
WFF definition. It's not so difficult to grasp, and
it is widely documented. You can recognize second

order logic by the fact that it allows quantifiers over variables
that are of the sort of second order objects, i.e. it
allows ∃X and ∀X, which are missing in first order

logic. The membership x e X is usually written as X(x),
or, without parentheses, as Wolfgang Schwarz's tree tool
would write it, Xx.

Ross Finlayson

unread,
Jun 5, 2023, 8:42:41 PM6/5/23
to
No, that is not a "tautology".

It's not an inference, either, it's not implication,
it's not a conditional, and it doesn't entail anything.

It's "the lack thereof, that eases a certain kind of data
structure's concatenation", it's a laxative, and has no
business in "truth tables".

Also absent memory and book-keeping, it's a hazard.

No offense - just that it's not type-safe.

And _nobody_, _not anybody_, needs "material implication"
for any application in logic (except its own demonstration):
its _necessity_ is absent.

Mild Shock

unread,
Jun 5, 2023, 8:47:50 PM6/5/23
to

You find a second order logic WFF definition here:

Second Order Logic - Mahesh Viswanathan Fall 2018
https://courses.engr.illinois.edu/cs498mv/fa2018/SecondOrderLogic.pdf

You can verify for yourself that only X(x) is a WFF,
but none of x(X), X(Y) or x(y) is a WFF.

Dan Christensen

unread,
Jun 5, 2023, 8:54:02 PM6/5/23
to
On Monday, June 5, 2023 at 8:26:41 PM UTC-4, Mild Shock wrote:
> You made an error when you translated the Drinker
> Paradox from second order logic.

I made no error. I used ordinary math textbook logic and set theory, not second order logic, i.e. no quantifying over logical propositions.

Second order
> logic is two sorted. It is a multi sorted logic.
>
> It has these two sorts:
>
> x, y, z, etc..: Elements or individuals (the 1st order objects)
> X, Y, Z, etc..: Sets or predications. (the 2nd order objects)
>
> Using the native membership between the two, you
> are only allowed to form:
>
> x e X
>

Use the rules of syntax of statements in ZFC to show that x in x is not a wff.

See the rules of syntax here: https://encyclopediaofmath.org/wiki/ZFC (2nd paragraph)

Mild Shock

unread,
Jun 5, 2023, 9:02:10 PM6/5/23
to

You didn't consider that math textbooks are often
based on second order logic ideas of sets.

There are many challenges in translating this
to first order. Namely:
- Second order logic is two-sorted; first order
logic is only single-sorted
- Second order logic does not need set theory, it
has a native membership; first order logic
needs a set theory
- Second order logic does not have a Russell Paradox
for its native membership; first order logic
set theory is typically bugged by the Russell Paradox

But there is nevertheless hope, and maybe light at the
end of the tunnel. You could read why Quine thinks
second order logic is first order logic in

sheep's clothing. Maybe this could help in producing
a version of the Drinker Paradox that does make so
many logical errors.

Mild Shock

unread,
Jun 5, 2023, 9:05:14 PM6/5/23
to
Corr.: Typo, forgot a "not":

… Maybe this could help in producing a version
of the Drinker Paradox that does not make
so many logical errors.

Ross Finlayson

unread,
Jun 5, 2023, 9:17:37 PM6/5/23
to
Oh, try Quine.

Didn't you say your "sets" weren't necessarily well-founded?

Oh, it makes sense if you just assert what's convenient
for whatever purposes, it makes sense that's abuse.

More gently though the reason exactly one of the ZF's axioms
is a schema is otherwise people would shoe-horn in
things like "a set containing empty is a regular set".
I.e. that one of them's an "axiom schema" is that
it's always written in "second order".

Did you know that "one of the axioms of ZF is always a schema"?
People have been knocking it around for quite a while.

Dan Christensen

unread,
Jun 5, 2023, 9:32:32 PM6/5/23
to
On Monday, June 5, 2023 at 8:42:41 PM UTC-4, Ross Finlayson wrote:
[snip]

> > > This whole "higher-order bit" of the Burse-bot clam-loll-troll arrived
> > > from this.
> > >
> > > (A => B) => (B v ~A) | ~(A => B)
> > >
> > A tautology. No inconsistencies here, Ross. What else have you got?

> No, that is not a "tautology".
>

It is. Make a truth table. Only 4 lines.
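
Reading '|' as an inclusive or, a four-row brute force (an aside, not DC Proof output) bears that out:

from itertools import product
imp = lambda p, q: (not p) or q
print(all(imp(imp(a, b), (b or (not a)) or (not imp(a, b)))
          for a, b in product([False, True], repeat=2)))  # True in all 4 rows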

> It's not an inference, either, it's not implication,

It is an implication. (A => B) is the antecedent. (B v ~A) | ~(A => B) is the consequent. ('|' = NAND in your notation?)

> it's not a conditional, and it doesn't entail anything.
>

It is a conditional. What do you suppose a tautology entails?

> It's "the lack thereof, that eases a certain kind of data
> structure's concatenation", it's a laxative, and has no
> business in "truth tables".
>
> Also absent memory and book-keeping, it's a hazard.
>
> No offense - just that it's not type-safe.
>

Use an online truth table app.

> And _nobody_, _not anybody_, needs "material implication"

Anyone who wants to do advanced mathematics needs material implication. Obviously not a concern for you, but that's OK, Ross.

Dan Christensen

unread,
Jun 5, 2023, 9:36:01 PM6/5/23
to
On Monday, June 5, 2023 at 9:02:10 PM UTC-4, Mild Shock wrote:
> You didnt consider that math text books are often
> based on second order logic ideas of sets.
>
[snip]

They usually won't quantify over propositions if that's what you mean.

Dan Christensen

unread,
Jun 5, 2023, 9:45:00 PM6/5/23
to
On Monday, June 5, 2023 at 9:17:37 PM UTC-4, Ross Finlayson wrote:
[snip]

> Oh, try Quine.
>
> Didn't you say your "sets" weren't necessarily well-founded?
>

If, for some reason, you insist on some equivalent of the Regularity Axiom, you can introduce one at the beginning of a proof using the Axiom Rule on the Logic Menu. It really isn't necessary for most mathematics.

> Oh, it makes sense if you just assert what's convenient
> for whatever purposes, it makes sense that's abuse.
>
> More gently though the reason exactly one of the ZF's axioms
> is a schema is otherwise people would shoe-horn in
> things like "a set containing empty is a regular set".
> I.e. that one of them's an "axiom schema" is that
> it's always written in "second order".
>
> Did you know that "one of the axioms of ZF is always a schema"?

As in DC Proof. It saves having to introduce another axiom every time you, say, want to construct a subset.

Ross Finlayson

unread,
Jun 5, 2023, 10:04:39 PM6/5/23
to
No they absolutely do _not_ need material implication,
unless you'd care to violate their causality.

In which case they don't need you, ....

Ross Finlayson

unread,
Jun 5, 2023, 10:15:37 PM6/5/23
to
Nope, that's just Boolean "or"; either suffices to set up "material implication"
blowing itself down.

In any application in the higher order....

You'll notice that something like "Russell and Norvig's chapter on this in their
lengthy tome 'Artificial Intelligence'" exactly and carefully _does_ separate
"computing the column q v ~p in the state compilation" from "establishing
derivation rules to compute satisfiability by rolling up entailment".
In their, ..., "example data structure for non-modal and non-temporal flag carriage".

I.e. "their rules are separate from their values of what's a sample space, what is a bitmap."

Which you don't, .... (... And, which you don't.)


A tautology basically is "equals". An implication is causal, not just a misnomer
for the coincident or a-causal: the causal makes rules, inference follows out rules,
entailment is that, and satisfiability follows out _closure_.

Dan Christensen

unread,
Jun 5, 2023, 11:39:38 PM6/5/23
to
On Monday, June 5, 2023 at 10:15:37 PM UTC-4, Ross Finlayson wrote:
> On Monday, June 5, 2023 at 6:32:32 PM UTC-7, Dan Christensen wrote:
> > On Monday, June 5, 2023 at 8:42:41 PM UTC-4, Ross Finlayson wrote:
> > [snip]
> > > > > This whole "higher-order bit" of the Burse-bot clam-loll-troll arrived
> > > > > from this.
> > > > >
> > > > > (A => B) => (B v ~A) | ~(A => B)
> > > > >
> > > > A tautology. No inconsistencies here, Ross. What else have you got?
> > > No, that is not a "tautology".
> > >
> > It is. Make a truth table. Only 4 lines.
> > > It's not an inference, either, it's not implication,
> > It is an implication. (A => B) is the antecedent. (B v ~A) | ~(A => B) is the consequent. ('|' = NAND in your notation?)
> > > it's not a conditional, and it doesn't entail anything.
> > >
> > It is a conditional. What do you suppose a tautology entails?
> > > It's "the lack thereof, that eases a certain kind of data
> > > structure's concatenation", it's a laxative, and has no
> > > business in "truth tables".
> > >
> > > Also absent memory and book-keeping, it's a hazard.
> > >
> > > No offense - just that it's not type-safe.
> > >

> > Use an online truth table app.

> > > And _nobody_, _not anybody_, needs "material implication"

> > Anyone who wants to do advanced mathematics needs material implication. Obviously not a concern for you, but that's OK, Ross.

> Nope, that's just Boolean "or", either suffices to setup "material implication"
> blowing itself down.

You used two different symbols for OR. OK. Interestingly, it's a tautology either way! And it's far from clear how this tautology somehow "blows up" material implication.


>
> In any application in the higher order....
>
> You'll notice that something like "Russell and Norvig's chapter on this in their
> lengthy tome 'Artificial Intelligence'" exactly and carefully _do_ separate
> "computing the column q v ~p in the state compilation" from "establish
> derivation rules to compute satisfiability by rolling up entailment".
> In their, ..., "example data structure for non-modal and non-temporal flag carriage".
>
> I.e. "their rules are separate from their values of what's a sample space, what is a bitmap."
>
> Which you don't, .... (... And, which you don't.)
>
>
> A tautology basically is "equals".

Huh??? It is a statement that is true in the present in every case.

> An implication's causal,

Not sure I follow you, but A implies B is NOT the same as A causes B.

Example: Scientists have now proven conclusively that smoking causes cancer. But smoking does NOT imply cancer. Many smokers never get cancer. I hope that much is obvious to you, Ross.


> not just a misnomer
> for the coincident or a-causal, the causal makes rules, inference follows out rules,
> entailment is that, and satisfiability follows out _closure_.

I have repeatedly posted here the rationale behind each line of the truth table for material implication, using what amounts to first principles for the NOT, AND and IMPLIES operators. See: https://dcproof.wordpress.com/2017/12/28/if-pigs-could-fly/
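For reference, the four lines of that table:

A  B  A => B
T  T    T
T  F    F
F  T    T
F  F    T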

Mild Shock

unread,
Jun 6, 2023, 1:38:42 AM6/6/23
to
It's not that difficult to correctly translate second order logic
to first order logic. The semantics of second order logic
is given in this paper:

Second Order Logic - Mahesh Viswanathan Fall 2018
https://courses.engr.illinois.edu/cs498mv/fa2018/SecondOrderLogic.pdf

You can now construct a function tr that maps second
order WFFs to first order WFFs. When you do a proof,
you then have to be careful that you don't hit

a first order WFF that doesn't correspond to any second order WFF.
That's basically all you have to do. Quite easy. So what does
this translation function tr look like?

tr : SOL-WFF -> FOL-WFF

Do you have any clue what the outcome will be for the
Drinker Paradox?
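Hint: one natural candidate is a Henkin-style sketch: treat the second order variables as a second first order sort, add predicates ind(x) and pred(D) for the two sorts plus an application symbol app(D,x) for "D holds of x", relativize every quantifier to its sort, and (for faithfulness) add comprehension axioms for the pred sort. On that reading the Drinker formula would come out as something like

∀D(pred(D) → ∃x(ind(x) ∧ (app(D,x) → ∀y(ind(y) → app(D,y)))))

though whether that is exactly the translation the notes have in mind is another question.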

Mild Shock

unread,
Jun 6, 2023, 1:47:50 AM6/6/23
to

The Fall 2018 notes are not perfect. He has 1-2 bloopers
in his notes. But if you are intelligent you can ultimately
translate this second order formula:

∀D∃x(Dx → ∀yDy)

into first order, and even prove it. How does it work?
Any clue, Dan-O-Matik? If you do this translation you get
what your intention was here in this nonsense:

ALL(s):[Set(s) => EXIST(x):[x e s => Q(x,s)]]
http://www.dcproof.com/STGeneralizedDrinkersThm.htm

Namely, to have a quantification over the drinkers s. Only
if you follow second order logic do you get something
which is faithful to Smullyan's proof, and which

doesn't need the Russell Paradox; after all, there is
no Russell Paradox in second order logic.
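For comparison, a relativized first order rendering that keeps the drinker inside the set would presumably need the non-emptiness of s as a hypothesis instead of the Russell Paradox, roughly along these lines (illustrative only, with D a fixed unary predicate, in the thread's notation):

ALL(s):[Set(s) & EXIST(a):a e s
=> EXIST(x):[x e s & [D(x) => ALL(y):[y e s => D(y)]]]]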

Mild Shock

unread,
Jun 6, 2023, 1:58:19 AM6/6/23
to

Modern math texts use the translation all the
time. You find it all over the texts of Terence Tao.
But you need to be able to spot the translation.

Mild Shock

unread,
Jun 6, 2023, 12:11:15 PM6/6/23
to
There is no quantification over propositions here:

/* Drinker Paradox */
∀D∃x(Dx → ∀yDy)

Where do you see a proposition quantified?
The definition of a proposition is basically a predicate of arity zero.
The Drinker Paradox does not contain any zero-arity predicate.

Come on, it's not so difficult to translate the above from second
order logic to first order logic. You did such things already in
the past. You are no novice to such tasks.

You did such things when you posted about and blogged
about the standard translation (ST) from modal logic
to first order logic. Now there is something similar from

second order logic to first order logic. Can you guess it,
or do you know it by chance? For example, what would be the
translation of the Drinker Paradox?

Hint: You don't need to guess anything, it's defined here. Similar to
how you find the standard translation (ST) for modal logic from
a definition of the „semantics“ of modal logic, you find

the translation from second order to first order via the
definition of its „semantics“. Can you read and understand the paper?

Second Order Logic - Mahesh Viswanathan Fall 2018
https://courses.engr.illinois.edu/cs498mv/fa2018/SecondOrderLogic.pdf
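For reference, the standard translation clauses for modal logic run roughly as follows (a sketch, with R the accessibility relation and P a unary predicate matching the propositional atom p):

ST_x(p) = P(x)
ST_x(~A) = ~ST_x(A)
ST_x(A & B) = ST_x(A) & ST_x(B)
ST_x(◻A) = ∀y(R(x,y) → ST_y(A))
ST_x(◇A) = ∃y(R(x,y) ∧ ST_y(A))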


Dan Christensen schrieb am Dienstag, 6. Juni 2023 um 16:35:42 UTC+2:
> On Tuesday, June 6, 2023 at 1:38:42 AM UTC-4, Mild Shock wrote:
> > Its not that difficult to correctly translate second order logic
> > to first order logic.
> [snip]
>
> I see the need to quantify over sets and functions as in DC Proof and most advanced math textbooks. (Some say, that in itself is SOL.) However, I haven't come across any pressing need to quantify over logical propositions. I think it would only lead to confusion. Thanks anyway.

Dan Christensen

unread,
Jun 6, 2023, 1:08:46 PM6/6/23
to
(Correction)

On Tuesday, June 6, 2023 at 1:38:42 AM UTC-4, Mild Shock wrote:
> Its not that difficult to correctly translate second order logic
> to first order logic.

[snip]

I see the need to quantify over sets and functions as in DC Proof and most advanced math textbooks. (Some say, that in itself is SOL.) However, I haven't come across any pressing need to quantify over logical propositions OR PREDICATES. I think it would only lead to confusion. Thanks anyway.

Mild Shock

unread,
Jun 8, 2023, 4:25:03 AM6/8/23
to
No result yet? Ok, take your time...

The Drinker Paradox doesn't need quantification over predicates
in general, only monadic predicates are needed, i.e. predicates
with arity 1. You can check for yourself that only monadic predicates occur

/* Drinker Paradox, only Monadic Second Order */
∀D∃x(Dx → ∀yDy)

in the Drinker Paradox. How would you translate the Drinker Paradox
from second order logic to first order logic, so that it is neither a
"generalization" nor a "specialization", so that it is faithful?

It's not like the example that you started in another thread, where
you started juggling with u^4. Contradicting your own claim,
you never wanted to do anything with

second order logic. What made you change your mind? Is it
that Terence Tao touched the subject? LoL
Anyway, I was reconsulting:

Second-order and Higher-order Logic
First published Thu 20 Dec, 2007 - Herbert B. Enderton
https://plato.stanford.edu/archives/win2007/entries/logic-higher-order/

Funny coincidence: the SEP article has a section "Syntax
and translation". But his translation is not the one I have in mind.
Come on, it's not that difficult. It's rather straightforward

when you start with these notes:

Second Order Logic - Mahesh Viswanathan Fall 2018
https://courses.engr.illinois.edu/cs498mv/fa2018/SecondOrderLogic.pdf

Dan Christensen

unread,
Jun 8, 2023, 11:09:44 AM6/8/23
to
On Thursday, June 8, 2023 at 4:25:03 AM UTC-4, Mild Shock wrote:
> No result yet? Ok, take your time...
>
> The Drinker Paradox doesn't need quantification over predicates
> in general, only monadic predicates are needed, i.e. predicates
> with arity 1. You can check your self, only monadic predicates
>
> /* Drinker Paradox, only Monadic Second Order */
> ∀D∃x(Dx → ∀yDy)
>
[snip]

Thanks, but no thanks. I prefer my set-theoretic formalization (see above).

Dan

Mild Shock

unread,
Jun 8, 2023, 2:09:12 PM6/8/23
to

Come on. Your formulation is nonsense. It makes use
of the Russell Paradox. There is no Russell Paradox in
second order logic. Go go, Dan-O-Matik, you can do it!

Dan Christensen

unread,
Jun 8, 2023, 2:35:15 PM6/8/23
to
On Thursday, June 8, 2023 at 2:09:12 PM UTC-4, Mild Shock wrote:

> Dan Christensen schrieb am Donnerstag, 8. Juni 2023 um 17:09:44 UTC+2:
> > On Thursday, June 8, 2023 at 4:25:03 AM UTC-4, Mild Shock wrote:
> > > No result yet? Ok, take your time...
> > >
> > > The Drinker Paradox doesn't need quantification over predicates
> > > in general, only monadic predicates are needed, i.e. predicates
> > > with arity 1. You can check your self, only monadic predicates
> > >
> > > /* Drinker Paradox, only Monadic Second Order */
> > > ∀D∃x(Dx → ∀yDy)
> > >
> > [snip]
> >
> > Thanks, but not thanks. I prefer my set-theoretic formalization (see above)
> >

> Come on. Your formulation is nonsense. It makes use
> of Russell Paradox. There is no Russell Paradox in
> second order logic.

There is Russell's Paradox in set theory. As a consequence, there is no universal set, i.e. every set excludes something. Deal with it.

Mild Shock

unread,
Jun 8, 2023, 2:36:33 PM6/8/23
to
You like your numb status quo, right?

LoL

Dan Christensen schrieb am Donnerstag, 8. Juni 2023 um 20:35:15 UTC+2:
> > Come on. Your formulation is nonsense. It makes use
> > of Russell Paradox. There is no Russell Paradox in
> > second order logic. Go go, Dan-O-Matik you can do it!

Mild Shock

unread,
Jun 9, 2023, 3:51:48 PM6/9/23
to
Ok, I wasn't aware that Ebbinghaus & Flum are also
part of the second order logic fan base.
Recommended book reading:

H.-D. Ebbinghaus et al., Mathematical Logic
© Springer Science+Business Media New York 1994
https://link.springer.com/book/10.1007/978-1-4757-2355-7

However, this approach necessitates an explicit use of set
theory to an extent not usual in ordinary mathematical practice.
The situation may encourage to consider languages with more
expressive power, which permit us to avoid this detour through
set theory. For example, in so-called second-order language
we can directly characterize the natural numbers by
means of Peano's axioms.
- IX Extensions of First-Order Logic

They see second order logic as more direct? BTW, as the cover
of the book shows, you might get an introduction to
Ehrenfeucht–Fraïssé games through it.

Have Fun!

Mild Shock

unread,
Jun 9, 2023, 4:12:34 PM6/9/23
to
What is second order Peano? Well, it's the same
induction schema as in first order Peano, only for

the language of second order. But then, because in
second order there is this comprehension schema
(with X not free in F):

∃X∀x(Xx <=> F)

we can spare ourselves making induction a schema,
and express it "directly" as:

∀X(X0 & ∀x(Xx => Xs(x)) => ∀yXy)

You get logics in between FOL and SOL Peano, the subsystems
studied in Reverse Mathematics, if you weaken the comprehension schema.
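For instance, a single first order induction instance, with the schema applied to the formula 0 + x = x, reads:

[0 + 0 = 0 & ∀x(0 + x = x => 0 + s(x) = s(x))] => ∀y(0 + y = y)

whereas the second order axiom above replaces the schematic formula by the quantified variable X.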

Mild Shock

unread,
Jun 16, 2023, 8:53:37 AM6/16/23
to

I am very intrigued by Dan Christensen, the inventor of
the Generalized Drinker Paradox:

9 ALL(s):[Set(s) => EXIST(x):[x e s => Q(x,s)]]
4 Conclusion, 2
http://www.dcproof.com/STGeneralizedDrinkersThm.htm

If we apply the same Russell Paradox fallacy of set modelling
to the problem of alive and dead we get:

/* Everybody Dies Eventually */
EXIST(x):[~x e a ∪ now]

/* Everybody Reincarnates Eventually */
EXIST(x):[~x e d ∪ now]

/* Everybody Breaks the Cycle Eventually */
EXIST(x):[~x e a ∪ d ∪ now]

The latter can be used to derive an inconsistency,
if we assume that alive and dead are opposites.

Mild Shock

unread,
Jun 16, 2023, 7:59:17 PM6/16/23
to
People are so dull here; they are interested in calculus.
But do they know why Newton was interested in calculus?
I am pretty sure John Gabriel has never seen a function

f : R->R, y = f(x), where x is the "time". Thanks to Newton we
even have a very short notation for the derivative, namely
the flyspeck notation: you just place a dot over y:

Isaac Newton's notation for differentiation (also called the
dot notation, fluxions, or sometimes, crudely, the flyspeck
notation for differentiation) places a dot over the dependent variable.
https://en.wikipedia.org/wiki/Notation_for_differentiation#Newton's_notation

to get ẏ = f'(x). Same for Rossy Boy: when he talks about
"function theory" he possibly has no clue what kind of
functions are of interest in mathematics.

Do we use mathematics to predict the future? Well, if
you have a controller, you possibly try to predict and even
influence the future. Now the biggest crank here is

Dan Christensen, who is totally clueless about modal
logic. Recommended reading could be:

Papers on time and tense
by Prior, A. N. (Arthur N.), 1914-1969
https://archive.org/details/papersontimetens0000prio_k3r9/page/n5/mode/2up

Mild Shock

unread,
Jun 16, 2023, 8:07:19 PM6/16/23
to

Here is some Unicode exploration; how does this show up?

ẏ, ÿ, etc.

More about the notation:

Newton's notation is generally used when the independent
variable denotes time. If location y is a function of t, then
ẏ denotes velocity and ÿ denotes acceleration. This notation
is popular in physics and mathematical physics. It also appears
in areas of mathematics connected with physics
such as differential equations.
https://en.wikipedia.org/wiki/Notation_for_differentiation#Newton's_notation
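A quick worked example: if the location is y(t) = t², then ẏ = 2t is the velocity and ÿ = 2 the acceleration, i.e. ẏ = dy/dt and ÿ = d²y/dt² in Leibniz notation.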

Dan Christensen

unread,
Jun 17, 2023, 12:55:26 AM6/17/23
to
On Friday, June 16, 2023 at 7:59:17 PM UTC-4, Mild Shock (aka Mr. Collapse) wrote:
> People are so dull here, they are interesting in calculus.
> But do they know why Newton was interested in calculus.
> I am pretty sure John Gabriel has never seen a function
>
> f : R->R, y = f(x), where x is the "time". Thanks to newton we
> have even a very short notation for the differential, namely
> the flyspeck notation, you just place a dot over y:
>
> Isaac Newton's notation for differentiation (also called the
> dot notation, fluxions, or sometimes, crudely, the flyspeck
> notation for differentiation) places a dot over the dependent variable.
> https://en.wikipedia.org/wiki/Notation_for_differentiation#Newton's_notation
>
> to get f'(x). Same for Rossy Boy, when he talks about
> "function theory" he has possibly no clue what kind of
> functions are of interest in mathematics?
>
> Do we use mathematics to predict the future? Well if
> you have a controller you try possibly to predict and even
> influence the future. Now the most crank here is
>
> Dan Christensen, who is totally clueless about modal
> logic.

The ever more desperate Mr. Collapse here seems to think he can predict the future using only classical propositional logic (truth tables, etc.). Very sad.

Dan

Mild Shock

unread,
Jun 21, 2023, 8:49:14 AM6/21/23
to
All my posts have only one goal: teaching Dan Christensen
to enhance his logic modelling skills, and produce more
paradox modelling in DC Poop.

So how would one, for example, go about the following
in LeanProver, Coq, DC Poop, etc.? Namely, Epictetus
reconstructs the lost Master Argument of Diodorus

Cronus as a logical contradiction between three propositions:

- every past truth must be necessary
- an impossibility does not follow from a possibility
- something is possible which neither is nor will be true

https://www.informationphilosopher.com/freedom/master_argument.html

Concerning LeanProver, if I google, I find nothing:

https://www.google.com/search?q=diodorus+%22leanprover%22

Mild Shock

unread,
Jun 21, 2023, 8:53:55 AM6/21/23
to
Or maybe the bootstrap paradox? It also deals with the future.

I bet you haven’t heard about the bootstrap paradox. It is a
paradox of time travel where things or information are brought
back in time, get recovered in the present or future, and become
the exact same thing or information that was sent back in time
in the first place. This causes the thing or information to have no real origin.

Example: The Bidet
https://www.youtube.com/watch?v=Ko8N-ogNyNE

Ross Finlayson

unread,
Feb 21, 2024, 12:27:38 AMFeb 21
to
See, these sorts of things help make it so
that LEM/PEM/TND just result in a class.

It helps a lot for tautology and as for
identity and equality, and same-ness and
difference, and the transitive, in intensionality
and extensionality, that indeed most people's
common sense about the indisputable notion
of equality, for example as identity, does
indeed have a way to make it match the
common definition of the copula and
regular and naive understanding of what "is", is.


Ross Finlayson

unread,
Feb 21, 2024, 1:02:15 PMFeb 21
to
The model-theory and proof-theory are really
equi-interpretable, the mountains the one
the map the other, foundations is properly
of a model, of a theory, its objects, for proofs.

I.e. "foundations results building a model theory anyways".

