
When does 1 + 1 = 1?


David Kaufman

Sep 21, 1996

When does 1 + 1 = 1?

How can we interpret an addition operation in the real world
where 1 + 1 = 1?

Also explain this concept in terms of the cubes below.

     1          +        1          =            1
                                                    Y
   _________           _________           _________|________
  /|       /|         /        /|         /        /|       /|
 / |      / |        /        / |        /        / |      / |
/__|_____/  |       /________/  |       /________/__|_____/  |
|  |_____|__|       |        |  |   =   |        |  |     |  |
|  /     |  /       |        |  /       |        |  /     |  /
| /      | /        |        | /        |        | /      | /
|/_______|/         |________|/         |________|/_______|/
                                                 /
                                                Z

Did I commute the cubes pictorially?

(C) by David Kaufman, Sept. 20, 1996
Founder of the Cube Club
For Ethics, Math and Science Excellence

--
da...@netcom.com

kevin

Sep 21, 1996

hmmmm.

Could it be like when two things come together, like two drops of
water or people getting married?
But that doesn't explain the Y and Z in the cubes, which are probably
there for a reason.
What's interesting is that one of the cubes is hollow and one isn't.
If the hollow cube were bigger, the solid cube could be placed inside the
hollow cube and make it look like only one cube.
.......But it would still be two cubes, taking the space of one cube.
hmmmmm....

Jesse Matonak

Sep 21, 1996

1 + 1 = 1 in any ring where 0 = 1.
In the "real world" any operation that only counts whether or not
something is there can be interpreted to say 1 + 1 = 1; for instance,
one drop of water + one drop of water = one (bigger) drop of water.
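[Editor's sketch, not from the post: "addition" that only records presence, in Python.]

    # "Addition" that only counts whether something is there (0 = absent, 1 = present):
    def add_presence(a, b):
        return max(a, b)             # saturates at 1, i.e. Boolean OR on 0/1 values

    assert add_presence(1, 1) == 1   # one drop + one drop = one (bigger) drop
    assert add_presence(1, 0) == 1
    assert add_presence(0, 0) == 0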

As far as your diagram with the cubes goes, it seems to me you just
unified the cubes along one face; one cube plus one cube gives one
rectangular solid. (Not a cube, obviously.) All you're "counting"
here is existence; many useful "invariants" of the cubes, such as
volume, are not preserved under unification.

(A little less than a year ago, I _never_ wrote like that. Ah, me...)

If I'm missing some Secret of the Universe, please let me know.

-- Jesse Matonak
je...@ghidora.planetx.org


John E. Perry III

Sep 21, 1996

> 1 + 1 = 1 in any ring where 0 = 1.

What exactly do you mean, where 0 = 1? My understanding of
rings is that 0 is the standard additive identity, and 1 is
unity, if unity is in the ring. Then you would be saying that
for a =/= 0,

a * 0 = a

which leads to logical difficulties in rings, since

a * b = a * (0 + b) = a * 0 + a * b = a + a * b

which contradicts the uniqueness of 0 in a ring.

Surely you can't mean 0 = 1 as in 0 is the multiplicative
identity, or unity.

---
John E. Perry, III J+M+J Adoro Te Devote
The world does not see the internal, hearty devotion which renders all [acts
of devotion] easy, pleasant, and grateful. Watch the bees, how they suck
from thyme a bitter juice, but as they suck it is converted into honey, for
such is their property. -St. Francis de Sales

John Cramer

Sep 21, 1996

Jesse Matonak (rake...@cats.ucsc.edu) wrote:

: 1 + 1 = 1 in any ring where 0 = 1.
: In the "real world" any operation that only counts whether or not
: something is there can be interpreted to say 1 + 1 = 1; for instance,

: one drop of water + one drop of water = one (bigger) drop of water.

That made me remember computers.....

when combining two bits (a bit can be either ON (1) or OFF (0)) you can come up
with some stuff.....

using AND...              using XOR (exclusive or)...

    1 1 = 1                   1 1 = 0
    1 0 = 0                   1 0 = 1
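[Editor's sketch, not from the post: the same tables computed with Python's bitwise operators.]

    for a, b in [(1, 1), (1, 0), (0, 1), (0, 0)]:
        print(a, b, " AND:", a & b, " OR:", a | b, " XOR:", a ^ b)
    # For 1 and 1: AND gives 1 and XOR gives 0; OR (not in the post's tables) also gives 1.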


--
Nightshade
-----BEGIN GEEK CODE BODY-----
Version 3.1
G!>CS/M/S d-- s+: a--->? C++++>$ U P? L E? W+ N++
!o K W(--) O- M-- V PS+ PE Y PGP t- 5+ X R* tv++
b+(+++) DI+ G++(+++) e*(-) h! r(%) !z+
------END GEEK CODE BODY------


Kurt Foster

Sep 21, 1996

[followup to sci.math only]

David Kaufman (da...@netcom.com) wrote:
: When does 1 + 1 = 1?

: How can we interpret an addition operation in the real world
: where 1 + 1 = 1?

:
One could call the logical (inclusive) OR "+", with "true" = "1"
and "false" = "0"; then 1+1 = 1, 1+0 = 0+1 = 1, and 0+0 = 0. [However, in the "real
world" of Mr. Computer, although "false" = 0, logical negation makes
"true" equal to an integer, all of whose bits are 1, and in the usual
convention for signed integers, that makes "true" = -1.]
One could also let "1" symbolize any non-null string, and "+" the
operation of string concatenation. Then "1" + "1" = "1". That seems
fairly close to the interpretation of concatenating physical objects.
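[Editor's sketch, not from the post: the two side remarks above, checked in Python.]

    # An integer with all bits set reads as -1 in the usual signed convention:
    assert ~0 == -1
    # Non-null strings under concatenation: "1" + "1" is again a non-null string,
    # i.e. again a "1" under this labeling:
    a, b = "drop", "drop"
    assert len(a) > 0 and len(b) > 0 and len(a + b) > 0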

Orlando Vacquez

Sep 21, 1996

David Kaufman (da...@netcom.com) wrote:
: When does 1 + 1 = 1?

: How can we interpret an addition operation in the real world
: where 1 + 1 = 1?

: Also explain this concept in terms of the cubes below.

: 1 + 1 = 1
: [cube diagram snipped]

: Did I commute the cubes pictorially?

: C by David Kaufman, Sept. 20, 1996
: Founder of the Cube Club
: For Ethics, Math and Science Excellence

: --
: da...@netcom.com

I think I understand.
 _     _     ___
|_| + |_| = |___|

but you could also say

 _     _     _  _
|_| + |_| = |_||_|

I think the second example is the one most commonly used in math. (Did that
make any sense?)

--
*=================*------------------------------------.
| Orlando Vazquez | is... !__,
*=================* !00|
! _ _ _ _ _ _ !00|
! | \/ | ___ |_|| | | \/ | ___ _ _ !00|
! | | / \ _ | | | | / \ | \| | !00|
! | |\/| || O || || |__ | |\/| || O || | !00|
! |_| |_||__|__||_||____||_| |_||__|__||_|\ _| !00|
!______________________________________________________!00|
!++++ "It's Only Work If Some One Makes You Do It" ++++!00|
!______________________________________________________!00|
! COME VISIT ME AT: !00|
! http://www.geocities.com/SiliconValley/Heights/1463/ !00|
!______________________________________________________!00|
|0000000000000000000000000000000000000000000000000000000|
`~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'

John E. Perry III

Sep 21, 1996

Reposting article removed by rogue canceller.

> 1 + 1 = 1 in any ring where 0 = 1.

What exactly do you mean, where 0 = 1? My understanding of

Orlando Vacquez

Sep 21, 1996

Reposting article removed by rogue canceller.

David Kaufman (da...@netcom.com) wrote:

Kurt Foster

Sep 21, 1996

Reposting article removed by rogue canceller.

[followup to sci.math only]

David Kaufman (da...@netcom.com) wrote:
: When does 1 + 1 = 1?

: How can we interpret an addition operation in the real world
: where 1 + 1 = 1?
:

U Lange

Sep 22, 1996

John E. Perry III (jpe...@pen.k12.va.us) wrote:
: > 1 + 1 = 1 in any ring where 0 = 1.

:
: What exactly do you mean, where 0 = 1? My understanding of
: rings is that 0 is the standard additive identity, and 1 is
: unity, if unity is in the ring. Then you would be saying that
: for a =/= 0,

(Assuming such an a=/=0 exists in the ring!)

:
: a * 0 = a


:
: which leads to logical difficulties in rings, since
:
: a * b = a * (0 + b) = a * 0 + a * b = a + a * b
:
: which contradicts the uniqueness of 0 in a ring.

Hence the assumption must have been false and the ring in which 0=1 cannot
have any other element a=/=0.

There is no "logical difficulty" at all in a ring with just one element.

: Surely you can't mean 0 = 1 as in 0 is the multiplicative
: identity, or unity.

He can, but it leaves him with only one (pretty boring) ring. ;-)

--
Ulrich Lange Dept. of Chemical Engineering
University of Alberta
la...@gpu.srv.ualberta.ca Edmonton, Alberta, T6G 2G6, Canada

Generic User u1006057

Sep 22, 1996

jpe...@pen.k12.va.us (John E. Perry III) writes:
: > 1 + 1 = 1 in any ring where 0 = 1.
:
: What exactly do you mean, where 0 = 1? My understanding of
: rings is that 0 is the standard additive identity, and 1 is
: unity, if unity is in the ring. Then you would be saying that
: for a =/= 0,

If 0 is the only element in the ring, there is no difficulty with
0 being both the additive identity and the multiplicative identity.
It is only when another element is added that 0 can no longer
equal 1.

Hmmm... actually, I'm thinking of Groups... let me check my abstract
algebra book to make sure I know what I'm talking about... ;>

Ah, ok, I'm still ok, although the usage of "1" is misleading...

2+2=2 is certainly true in the structure I'm talking about (where 0=2).
But "1" has a special meaning of a "unit", and for some reason the
ring containing just 0 is not allowed to be considered a ring with a unit.
Oh well :-)

Karl
<marx>

Bill Dubuque

Sep 22, 1996, to jpe...@pen.k12.va.us

|From: jpe...@pen.k12.va.us (John E. Perry III)
|Date: Sat, 21 Sep 1996 17:21:03 GMT

|
|> 1 + 1 = 1 in any ring where 0 = 1.
|
| ... which contradicts the uniqueness of 0 in a ring.

The zero ring, where 0 = 1, has as its carrier the singleton
set {0} and as such cannot violate any uniqueness assumptions
on its elements!

Unlike field axioms, ring axioms do not include the axiom 0 != 1
and thus {0} is a ring, but not a field.


Generic User u1006057

Sep 22, 1996

Reposting article removed by rogue canceller.

jpe...@pen.k12.va.us (John E. Perry III) writes:

U Lange

Sep 22, 1996

Reposting article removed by rogue canceller.

John E. Perry III (jpe...@pen.k12.va.us) wrote:
: > 1 + 1 = 1 in any ring where 0 = 1.
:
: What exactly do you mean, where 0 = 1? My understanding of
: rings is that 0 is the standard additive identity, and 1 is
: unity, if unity is in the ring. Then you would be saying that
: for a =/= 0,

(Assuming such an a=/=0 exists in the ring!)

Viper

Sep 23, 1996

On 21 Sep 1996, Orlando Vacquez wrote:

V~> I think I understand.

After seeing what's below, I don't think you do ...

V~>  _     _     ___
V~> |_| + |_| = |___|

What is this --- a square + a square = a rectangle ??? This is hardly
referencing the question 1 + 1 = 1 ....

V~> but you could also say
V~>  _     _     _  _
V~> |_| + |_| = |_||_|

No, you couldn't say this at all. 1 square + 1 square = 2 squares ...
once again this does not mean 1 + 1 = 1 ...

V~> I think the second example is the one most comoly used in math. (did that
V~> make any sense?)

No, it is not used in maths, and any teacher who uses this as an example
should not be allowed to teach and should have his/her credentials questioned.
And no, that did not make any sense, nor did the rest of your message.

Vincent Johns

Sep 23, 1996, to David Kaufman

(posted & emailed)

David Kaufman <da...@netcom.com> wrote:
>
> When does 1 + 1 = 1?
>
> How can we interpret an addition operation in the real world
> where 1 + 1 = 1?
>

> Also explain this concept in terms of the cubes below.
>

> 1 + 1 = 1

> [cube diagram snipped]
>
> Did I commute the cubes pictorially?


Yes; I think the cubes are cute. I wish I could do spheres
in ASCII as well as that! :-)

Anyway, though there may be many answers to the question, the one
I most often see is Boolean algebra, in which "+" maps to "OR" and
"1" maps to "TRUE". (This is the usual mapping; one could instead
use "+" for "AND" and "1" for "FALSE".) "=" maps to "EQV" or "IFF",
the equivalence relation (meaning that both sides have the same
Boolean value).

The usual mapping gives 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 1.

As far as the cubes are concerned, think of set union; to the
right of the "=" sign in the picture is represented the union of the
sets identified with the cubes on the left side. In English, one
might say "x is in the first cube OR x is in the second cube IFF
x is in the union of the two cubes", or "x is in the first or the
second cube if and only if x is in either cube", or something similar.
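[Editor's sketch, not from the post: the union reading with Python sets standing in
for the two cubes' point sets.]

    cube1 = {(x, y, z) for x in range(2) for y in range(2) for z in range(2)}
    cube2 = {(x + 1, y, z) for x in range(2) for y in range(2) for z in range(2)}

    union = cube1 | cube2
    p = (1, 0, 0)                      # a point on the shared face
    assert (p in union) == (p in cube1 or p in cube2)
    # "+" as union gives one object; "volume" (here, the point count) is not preserved:
    assert len(union) < len(cube1) + len(cube2)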

-- Vincent Johns


Eduardo Chappa L.

Sep 23, 1996

On Sat, 21 Sep 1996, John E. Perry III wrote:

> Reposting article removed by rogue canceller.
>

> > 1 + 1 = 1 in any ring where 0 = 1.
>
> What exactly do you mean, where 0 = 1? My understanding of
> rings is that 0 is the standard additive identity, and 1 is
> unity, if unity is in the ring. Then you would be saying that
> for a =/= 0,
>

> a * 0 = a
>
> which leads to logical difficulties in rings, since
>
> a * b = a * (0 + b) = a * 0 + a * b = a + a * b
>
> which contradicts the uniqueness of 0 in a ring.
>

> Surely you can't mean 0 = 1 as in 0 is the multiplicative
> identity, or unity.
>

Certainly, in the ring R = {0}, we have that a*0 = a for all a in R, so 1
is a good notation for the element 0, and so 1 = 0.

Me.

Mike McCarty

Sep 23, 1996

In article <Dy3Ev...@pen.k12.va.us>,
John E. Perry III <jpe...@pen.k12.va.us> wrote:
)> 1 + 1 = 1 in any ring where 0 = 1.
)
)What exactly do you mean, where 0 = 1? My understanding of
)rings is that 0 is the standard additive identity, and 1 is
)unity, if unity is in the ring. Then you would be saying that
)for a =/= 0,
)
)a * 0 = a
)
)which leads to logical difficulties in rings, since
)
)a * b = a * (0 + b) = a * 0 + a * b = a + a * b
)
)which contradicts the uniqueness of 0 in a ring.
)
)Surely you can't mean 0 = 1 as in 0 is the multiplicative
)identity, or unity.

You missed the fact that, in this ring, there is only one element.

Mike
--
----
char *p="char *p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}

I don't speak for DSC. <- They make me say that.

Mike McCarty

Sep 23, 1996

In article <32439B...@cris.com>, kevin <knif...@cris.com> wrote:
)hmmmm.
)
) could it be like when two things come together like two drops of
)water or people getting married?
)But that doesn't explain the y and z in the cubes which are probably
)there for a reason.
)what's interesting is that one of the cubes is hollow and one isn't.
)If the hollow cube was bigger the solid cube could be placed inside the
)hollow cube and make it look like only one cube.
).......But it would still be two cubes, taking the space of one cube.
)hmmmmm....
)
)
)David Kaufman wrote:
)>
)> When does 1 + 1 = 1?
)>
)> How can we interpret an addition operation in the real world
)> where 1 + 1 = 1?
)>
)> Also explain this concept in terms of the cubes below.
)>
)> 1 + 1 = 1
)> [cube diagram snipped]

1+1 = 1 if 1 = 0. So the field with one element has this property.

Mike

Dik T. Winter

Sep 24, 1996

I read:

> Reposting article removed by rogue canceller.
many times.

Apparently there is a rogue canceller at work in the k12 newsgroups.
Could the fights not be fought out elsewhere?
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/

Jesse Matonak

Sep 24, 1996

In article <R.Dy3E...@pen.k12.va.us>,


John E. Perry III <jpe...@pen.k12.va.us> wrote:

>Reposting article removed by rogue canceller.
>

>> 1 + 1 = 1 in any ring where 0 = 1.
>

>Surely you can't mean 0 = 1 as in 0 is the multiplicative

>identity, or unity.
>
>---
>John E. Perry, III J+M+J Adoro Te Devote

Yes.
Despite the cogent analysis presented in your original post, there is
one ring with the property 0 = 1. It is the ring (field) with one
element.

-- Jesse Matonak
rake...@cats.ucsc.edu

Iouri Sorkine

Sep 24, 1996

Dik T. Winter (d...@cwi.nl) wrote:

: Apparently there is a rogue canceller at work in the k12 newsgroups.

Please read about a rogue canceller in news.admin.net-abuse.misc

Yuri Sorkin

Dave Seaman

Sep 25, 1996

In article <52762p$p...@sun001.spd.dsccc.com>,

Mike McCarty <jmcc...@sun1307.spd.dsccc.com> wrote:
>1+1 = 1 if 1 = 0. So the field with one element has this property.

That should be "the RING with one element." Fields require at least
two elements. For one thing, every field is also an integral domain,
and 1 != 0 in an integral domain. Also, the nonzero elements of any
field form a multiplicative group, and groups are necessarily
nonempty. Hence, the ring with one element is not a field.

--
Dave Seaman dse...@purdue.edu
++++ stop the execution of Mumia Abu-Jamal ++++
++++ if you agree copy these lines to your sig ++++
++++ see http://www.xs4all.nl/~tank/spg-l/sigaction.htm ++++

Bill Dubuque

Sep 26, 1996

:From: a...@seaman.cc.purdue.edu (Dave Seaman)
:Date: 25 Sep 1996 11:14:55 -0500
:

:Mike McCarty <jmcc...@sun1307.spd.dsccc.com> wrote:
:>1+1 = 1 if 1 = 0. So the field with one element has this property.
:
:That should be "the RING with one element." Fields require at least
:two elements. For one thing, every field is also an integral domain,
:and 1 != 0 in an integral domain. Also, the nonzero elements of any
:field form a multiplicative group, and groups are necessarily
:nonempty. Hence, the ring with one element is not a field.

Whether structures with empty carriers are allowed or not
depends on how one defines the notion of structure. Much
recent work in computational logic and algebra depends on
allowing empty carriers (uninhabited sorts), especially in
the many-sorted case. Below is a sample of related literature.

-Bill

Manca, Vincenzo (I-PISA-IF); Salibra, Antonino (I-PISA-IF)
Soundness and completeness of the Birkhoff equational calculus
for many-sorted algebras with possibly empty carrier sets.
Theoret. Comput. Sci. 94 (1992), no. 1, 101--124.
MR 93f:03018 03B70 03C05 08B05 68Q65

Mahr, Bernd (D-TUB-I)
Empty carriers: the categorical burden on logic.
Categorical methods in computer science (Berlin, 1988), 50--65,
Lecture Notes in Comput. Sci., 393, Springer, Berlin, 1989.
MR 91f:03059 03C07 03B70 03G30 68Q55

Amer, Mohamed A. (ET-CAIRS)
First order logic with empty structures.
Studia Logica 48 (1989), no. 2, 169--177.
MR 91a:03013 03B10

Markusz, Zsuzsanna (3-CALG-C)
Different validity concepts in many-sorted logic.
Tanulmanyok---MTA Szamitastech. Automat. Kutato Int. Budapest
No. 192 (1986), 5--49.
MR 88i:03019 03B10 03C20

Meseguer, Jose (1-SRI-C); Goguen, Joseph A. (1-SRI-C)
Initiality, induction, and computability.
Algebraic methods in semantics (Fontainebleau, 1982), 459--541,
Cambridge Univ. Press, Cambridge-New York, 1985.
MR 88a:68073 68Q55 03B70 03D45 03D80 08A99

Mitchell, John C. (1-STF-C); Moggi, Eugenio (4-EDIN-C)
Kripke-style models for typed lambda calculus.
Second Annual IEEE Symposium on Logic in Computer Science (Ithaca, NY, 1987).
Ann. Pure Appl. Logic 51 (1991), no. 1-2, 99--124.
MR 92a:03017 03B40 03G30 68Q55

Bill Dubuque

Sep 26, 1996

:From: a...@seaman.cc.purdue.edu (Dave Seaman)
:Date: 25 Sep 1996 11:14:55 -0500
:

John E. Perry III

Sep 26, 1996

> Despite the cogent analysis presented in your original post,
> there is one ring with the property 0 = 1.

I agree with this completely. BUT--- this is the wording of
the original post:

> in any ring where 0 = 1.

That implies more than just the singleton ring, and this is
what I objected to: the use of the word "any".


---
John E. Perry, III J+M+J Adoro Te Devote

he...@dorsai.org

Sep 26, 1996

a...@seaman.cc.purdue.edu (Dave Seaman) wrote:

I am a bit confused... if 1+1 = 1 when 1 = 0, then why have a 1?
Why have a 0 if you have a 1?

BTW: what is the definition of a ring?

>In article <52762p$p...@sun001.spd.dsccc.com>,


>Mike McCarty <jmcc...@sun1307.spd.dsccc.com> wrote:
>>1+1 = 1 if 1 = 0. So the field with one element has this property.

>That should be "the RING with one element." Fields require at least
>two elements. For one thing, every field is also an integral domain,
>and 1 != 0 in an integral domain. Also, the nonzero elements of any
>field form a multiplicative group, and groups are necessarily
>nonempty. Hence, the ring with one element is not a field.

>--

>Dave Seaman dse...@purdue.edu
> ++++ stop the execution of Mumia Abu-Jamal ++++
> ++++ if you agree copy these lines to your sig ++++
>++++ see http://www.xs4all.nl/~tank/spg-l/sigaction.htm ++++

**************************************
NEW ADDRESS
he...@ingress.com
http://www.ingress.com/~herb
**************************************


ed...@netcom.com

Sep 26, 1996

In article <52ek6o$n...@kensie.dorsai.org> he...@dorsai.org writes:
=a...@seaman.cc.purdue.edu (Dave Seaman) wrote:
=
=i am a bit confused.. if 1+1=1 if 1=0 then why have a 1?
=why have a 0 if you have a 1?
=
=btw: what is the definition of a ring?
=
Well, a ring is an algebraic set with two binary operations, commonly
known as addition (+) and multiplication (*) where:
a) the set is closed with respect to both operations
b) addition is associative and commutative, has the additive
identity, 0, and every element has a (unique) additive inverse.
c) multiplication is associative and (usually, depending on which
book you take it out of) has the multiplicative identity, 1.
d) multiplication is distributive over addition:
a*(b+c) = a*b + a*c
Thus, the reason for 0 and 1 is that 0 represents the additive identity
and 1 represents the multiplicative identity. Those are the only two
elements of the ring which are identified with a symbol (i.e., the symbols
0 and 1).

Some rings may have the pathological property that 0 and 1 are the same
thing. That is, the additive and multiplicative identities are the same
element (the definition above doesn't specify that 0 and 1 be different).
If so, then the ring is pretty trivial, but is a ring, and indeed,
1 + 1 = 0
when you interpret the addition as ``the multiplicative identity added to
itself is equal to the additive identity.''

An example of this set is Z/<1>, the ring of integers modulo 1. Again,
not a very interesting ring, other than having the 1 + 1 = 0 property.
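[Editor's sketch, not from the post: the ring of integers modulo 1 in Python;
every element reduces to 0, so 0 and 1 coincide and 1 + 1 = 1 = 0.]

    class ZMod1:
        """Integers modulo 1: the zero ring, whose only element is 0."""
        def __init__(self, n):
            self.n = n % 1                       # always 0
        def __add__(self, other):
            return ZMod1(self.n + other.n)
        def __mul__(self, other):
            return ZMod1(self.n * other.n)
        def __eq__(self, other):
            return self.n == other.n

    zero, one = ZMod1(0), ZMod1(1)
    assert zero == one                           # 0 = 1 here
    assert one + one == one                      # hence 1 + 1 = 1 (= 0)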

EDEW

Dave Seaman

Sep 26, 1996

In article <52eoq0$6...@news.next.com>, <ed...@netcom.com> wrote:
>In article <52ek6o$n...@kensie.dorsai.org> he...@dorsai.org writes:
>=a...@seaman.cc.purdue.edu (Dave Seaman) wrote:
>=
>=i am a bit confused.. if 1+1=1 if 1=0 then why have a 1?
>=why have a 0 if you have a 1?
>=
>=btw: what is the definition of a ring?

I didn't write that, by the way.

>Well, a ring is an algebraic set with two binary operations, commonly
>known as addition (+) and multiplication (*) where:

[ ... skipping ... ]


> d) multiplication is distributive over addition:
> a*(b+c) = a*b + a*c

There are actually two distributive laws. You also need to specify that

(b+c)*a = b*a + c*a

since multiplication is not necessarily assumed to be commutative in a
ring.

Also, in response to a question that was raised elsewhere, it is not
quite true that there is only one ring that consists of a single
element.

What is true is that there is only one such ring UP TO ISOMORPHISM.
There are as many single-element rings as there are singleton sets, but
all such rings are fundamentally alike. By convention, we call that
single element "0" if we want to talk about the additive identity, and
"1" if we want to talk about the multiplicative identity. That doesn't
mean that all things called "0" ("1") are identical.

Ilias Kastanas

Sep 28, 1996

In article <y8zbuetn7...@martigny.ai.mit.edu>,

Bill Dubuque <w...@martigny.ai.mit.edu> wrote:
>
>Whether structures with empty carriers are allowed or not
>depends on how one defines the notion of structure. Much
>recent work in computational logic and algebra depends on
>allowing empty carriers (uninhabited sorts), especially in
>the many-sorted case. Below is a sample of related literature.

In standard first-order logic the formula Ex ~R(x) V Ex R(x)
(that is, Ax R(x) -> Ex R(x)) is provable, and it holds in every
structure. But if one allows structures to have empty universe the formula
does not hold. To put it another way, { ~Ex ~R(x), ~Ex R(x) } is
inconsistent, but it has a model, the structure with M = empty, R_M = empty.
Apparently one has to do something more than just stipulate M can be
empty; losing the Completeness theorem is no little thing!
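[Editor's sketch, not from the posts: checking the two sentences over a finite
domain in Python. Over the empty domain the universal statement holds vacuously
while the existential one fails, so Ax R(x) -> Ex R(x) is not valid there.]

    def holds_forall(domain, R):
        return all(R(x) for x in domain)

    def holds_exists(domain, R):
        return any(R(x) for x in domain)

    R = lambda x: True
    empty = []
    assert holds_forall(empty, R) is True        # Ax R(x) holds vacuously
    assert holds_exists(empty, R) is False       # Ex R(x) fails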


Ilias


ilias kastanas 08-14-90

Sep 28, 1996

In article <52jqjg$f...@news.ece.nwu.edu>,
Ian Sutherland <i...@eecs.nwu.edu> wrote:
@In article <52iekh$5...@gap.cco.caltech.edu>,
@Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:
@>In article <y8zbuetn7...@martigny.ai.mit.edu>,
@>Bill Dubuque <w...@martigny.ai.mit.edu> wrote:
@>>
@>>Whether structures with empty carriers are allowed or not
@>>depends on how one defines the notion of structure. Much
@>>recent work in computational logic and algebra depends on
@>>allowing empty carriers (uninhabited sorts), especially in
@>>the many-sorted case. Below is a sample of related literature.
@>
@>
@>
@> In standard first-order logic the formula Ex ~R(x) V Ex R(x)
@> (that is, Ax R(x) -> Ex R(x)) is provable, and it holds in every stru-
@> cture. But if one allows structures to have empty universe the formula
@> does not hold. To put it another way, { ~Ex ~R(x), ~Ex R(x) } is incon-
@> sistent, but it has a model, the structure with M = empty, R_M = empty.
@> Apparently one has to do something more than just stipulate M can be
@> empty; losing the Completeness theorem is no little thing!
@
@Did the original poster somehow suggest that you could just
@change the semantics to allow empty structures without
@changing the rules of inference?


Did I suggest that he did?

He reported that some people were allowing empty structures, and I
wondered just how they were going about it.

Ilias


Ian Sutherland

Sep 28, 1996

In article <52iekh$5...@gap.cco.caltech.edu>,

Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:
>In article <y8zbuetn7...@martigny.ai.mit.edu>,
>Bill Dubuque <w...@martigny.ai.mit.edu> wrote:
>>
>>Whether structures with empty carriers are allowed or not
>>depends on how one defines the notion of structure. Much
>>recent work in computational logic and algebra depends on
>>allowing empty carriers (uninhabited sorts), especially in
>>the many-sorted case. Below is a sample of related literature.
>
>
>
> In standard first-order logic the formula Ex ~R(x) V Ex R(x)
> (that is, Ax R(x) -> Ex R(x)) is provable, and it holds in every stru-
> cture. But if one allows structures to have empty universe the formula
> does not hold. To put it another way, { ~Ex ~R(x), ~Ex R(x) } is incon-
> sistent, but it has a model, the structure with M = empty, R_M = empty.
> Apparently one has to do something more than just stipulate M can be
> empty; losing the Completeness theorem is no little thing!

Did the original poster somehow suggest that you could just


change the semantics to allow empty structures without

changing the rules of inference?
--
Ian Sutherland
i...@ece.nwu.edu

Sans Peur

Orjan Johansen

Sep 29, 1996

In article <52iekh$5...@gap.cco.caltech.edu>,
Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:
> In standard first-order logic the formula Ex ~R(x) V Ex R(x)
> (that is, Ax R(x) -> Ex R(x)) is provable, and it holds in every stru-
> cture. But if one allows structures to have empty universe the formula
> does not hold. To put it another way, { ~Ex ~R(x), ~Ex R(x) } is incon-
> sistent, but it has a model, the structure with M = empty, R_M = empty.
> Apparently one has to do something more than just stipulate M can be
> empty; losing the Completeness theorem is no little thing!

Isn't the only reason to disallow empty universes precisely that it
makes the above formula/scheme simpler? I.e. you could easily replace
it by something like

Ex S(x) & Ax R(x) -> Ex R(x)

Greetings,
Ƙrjan.

ilias kastanas 08-14-90

Sep 29, 1996

In article <52lr34$m...@due.unit.no>,

Orjan Johansen <oer...@lie.matstat.unit.no> wrote:
@In article <52iekh$5...@gap.cco.caltech.edu>,
@Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:
@> In standard first-order logic the formula Ex ~R(x) V Ex R(x)
@> (that is, Ax R(x) -> Ex R(x)) is provable, and it holds in every stru-
@> cture. But if one allows structures to have empty universe the formula
@> does not hold. To put it another way, { ~Ex ~R(x), ~Ex R(x) } is incon-
@> sistent, but it has a model, the structure with M = empty, R_M = empty.
@> Apparently one has to do something more than just stipulate M can be
@> empty; losing the Completeness theorem is no little thing!
@
@Isn't the only reason to disallow empty universes precisely that it
@makes the above formula/scheme simpler? I.e. you could easily replace
@it by something like
@
@Ex S(x) & Ax R(x) -> Ex R(x)


This formula says something else. Besides, how does it help if
_I_ try to avoid Q = AxR(x)->ExR(x) ? As long as Q exists and is a
legitimate formula, Completeness fails. How do we outlaw Q... and which
other ones? What happens if we are carrying out some algorithm or trans-
formation and Q shows up? And who is to say theorems of Logic still hold,
now that we have modified the notion 'formula'? we'll have to check them
and see...

It is not simple at all. Even the very first question, "what are the
well-formed formulas?" is tough to answer.

One might try to weaken formal deduction, or alter the semantics. But
again: how?

Ilias

Orjan Johansen

Sep 29, 1996

In article <52ma6o$s...@nuke.csu.net>,

ilias kastanas 08-14-90 <ika...@sol.uucp> wrote:
>In article <52lr34$m...@due.unit.no>,
>Orjan Johansen <oer...@lie.matstat.unit.no> wrote:
>@
>@Isn't the only reason to disallow empty universes precisely that it
>@makes the above formula/scheme simpler? I.e. you could easily replace
>@it by something like
>@
>@Ex S(x) & Ax R(x) -> Ex R(x)
>
[snip]

> It is not simple at all. Even the very first question, "what are the
> well-formed formulas?" is tough to answer.
>
> One might try to weaken formal deduction, or alter the semantics. But
> again: how?

I'm not a logician (but I play one on the net.) I thought this was
something like when defining rings, you sometimes require them to have
units, sometimes not; and while it may be convenient to have units,
there is no fundamental difficulty with the other option.

Similarly it would seem to me that while it may be convenient to have
non-empty universes, it ought to be easy to define predicate calculus
without it. I would expect the well-formed formulas to be the same, but
some of the axioms/axiom schemes would have to be slightly modified.

The encyclopedia here contains a list of axioms for predicate calculus.
It seems to me that the only axiom which doesn't hold in empty universes
is

R(t) -> Ex R(x)

I would suggest replacing it by

Et S(t) & R(t) -> Ex R(x)

There are a couple of others which look suspicious, but I think they
hold:

Ax R(x) -> R(x|t)
Ax (R -> S(x)) -> (R -> Ax S(x))
Ax (R(x) -> S) -> (Ex R(x) -> S)

The semantics of theorems with free variables would have to be the
same as embedding the theorem in Ax Ay (...) to make this work.
I don't know if this is considered a change.

Is any other change necessary, and/or does this really create any
difficulties in proving the completeness theorem?

Greetings,
Ƙrjan.

ilias kastanas 08-14-90

Oct 1, 1996

In article <52mkgk$o...@due.unit.no>,
Orjan Johansen <oer...@lie.matstat.unit.no> wrote:

@I'm not a logician (but I play one on the net.) I thought this was
@something like when defining rings, you sometimes require them to have
@units, sometimes not; and while it may be convenient to have units,
@there is no fundamental difficulty with the other option.
@
@Similarly it would seem to me that while it may be convenient to have
@non-empty universes, it ought to be easy to define predicate calculus
@without it. I would expect the well-formed formulas to be the same, but
@some of the axioms/axiom schemes would have to be slightly modified.
@
@The encyclopedia here contains a list of axioms for predicate calculus.
@It seems to me that the only axiom which doesn't hold in empty universes
@is
@
@R(t) -> Ex R(x)
@
@I would suggest replacing it by
@
@Et S(t) & R(t) -> Ex R(x)
@
@There are a couple of others which look suspicious, but I think they
@hold:
@
@Ax R(x) -> R(x|t)


Same problem; in fact, the axiom you treated is essentially the same
as this (in contrapositive form).


@Ax (R -> S(x)) -> (R -> Ax S(x))
@Ax (R(x) -> S) -> (Ex R(x) -> S)
@
@The semantics of theorems with free variables would have to be the
@same as embedding the theorem in Ax Ay (...) to make this work.
@I don't know if this is considered a change.


It's OK. By the way, the t's above are terms, not necessarily
variables. E.g. c (constant symbol), f(x), g(c, x)...

@Is any other change necessary, and/or does this really create any
@difficulties in proving the completeness theorem?


Maybe you want the fix Ax R(x) -> (R(x/t) V Ay S(y)), similar
to yours, above. It complicates a basic axiom; one would have to re-check
everything. I'd like to see the dividend of empty structures before I
take the trouble.


Ilias

Moses Klein

Oct 1, 1996

In article <52iekh$5...@gap.cco.caltech.edu>,
Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:
>In article <y8zbuetn7...@martigny.ai.mit.edu>,
>Bill Dubuque <w...@martigny.ai.mit.edu> wrote:
>>
>>Whether structures with empty carriers are allowed or not
>>depends on how one defines the notion of structure. Much
>>recent work in computational logic and algebra depends on
>>allowing empty carriers (uninhabited sorts), especially in
>>the many-sorted case. Below is a sample of related literature.
>
> In standard first-order logic the formula Ex ~R(x) V Ex R(x)
> (that is, Ax R(x) -> Ex R(x)) is provable, and it holds in every stru-
> cture. But if one allows structures to have empty universe the formula
> does not hold. To put it another way, { ~Ex ~R(x), ~Ex R(x) } is incon-
> sistent, but it has a model, the structure with M = empty, R_M = empty.
> Apparently one has to do something more than just stipulate M can be
> empty; losing the Completeness theorem is no little thing!

Ilias, you understate the crisis. If you have an inconsistent theory with
a model, you violate not completeness, but its converse, soundness. (A
complete logic is one for which every valid statement is provable. A
sound logic is one for which every provable statement is valid.) Losing
completeness, which could reasonably be called the Fundamental Theorem of
Model Theory, would certainly be no little thing; losing soundness would
be catastrophic!

This can certainly be repaired. As others have pointed out, we can
redefine the rules of inference to make this modified first-order logic
complete and sound. (This is possible because the set of validities is
still recursively enumerable, and any r.e. set is the set of theorems of
some formal system.) There is still one purely semantic issue concerning
logic allowing the empty model: how to interpret constant symbols?

That brings me to the real objection to Bill Dubuque's statement. He was
replying to Dave Seaman's assertion that *groups*, rather than model
structures in general, cannot be empty. Group theory needs either a
constant symbol for the identity, or an axiom stating the existence of an
identity. If the latter, then that axiom is false in an empty structure.
If the former, then the empty structure can't even be a model for the
language. Any way you slice it, part of the definition of group is that
there has to be an identity element, and as a trivial consequence there
has to be an element.

In brief: it's no great pain to redefine first-order logic to allow an
empty structure, but there's still no way that structure is a group, so
Dave's statement remains true.

Moses Klein (kl...@math.wisc.edu)

ilias kastanas 08-14-90

Oct 2, 1996

In article <52r9d6$1q...@news.doit.wisc.edu>,

Moses Klein <kl...@math.wisc.edu> wrote:
@In article <52iekh$5...@gap.cco.caltech.edu>,
@Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:
@>In article <y8zbuetn7...@martigny.ai.mit.edu>,
@>Bill Dubuque <w...@martigny.ai.mit.edu> wrote:
@>>
@>>Whether structures with empty carriers are allowed or not
@>>depends on how one defines the notion of structure. Much
@>>recent work in computational logic and algebra depends on
@>>allowing empty carriers (uninhabited sorts), especially in
@>>the many-sorted case. Below is a sample of related literature.
@>
@> In standard first-order logic the formula Ex ~R(x) V Ex R(x)
@> (that is, Ax R(x) -> Ex R(x)) is provable, and it holds in every stru-
@> cture. But if one allows structures to have empty universe the formula
@> does not hold. To put it another way, { ~Ex ~R(x), ~Ex R(x) } is incon-
@> sistent, but it has a model, the structure with M = empty, R_M = empty.
@> Apparently one has to do something more than just stipulate M can be
@> empty; losing the Completeness theorem is no little thing!
@
@Ilias, you understate the crisis. If you have an inconsistent theory with
@a model, you violate not completeness, but its converse, soundness. (A
@complete logic is one for which every valid statement is provable. A
@sound logic is one for which every provable statement is valid.) Losing
@completeness, which could reasonably be called the Fundamental Theorem of
@Model Theory, would certainly be no little thing; losing soundness would
@be catastrophic!


For practical reasons I call "Completeness" the equivalence of |-
and |=; to be sure, one half of this is soundness and the other half
completeness.


@This can certainly be repaired. As others have pointed out, we can
@redefine the rules of inference to make this modified first-order logic
@complete and sound. (This is possible because the set of validities is
@still recursively enumerable, and any r.e. set is the set of theorems of
@some formal system.) There is still one purely semantic issue concerning
@logic allowing the empty model: how to interpret constant symbols?
@
@That brings me to the real objection to Bill Dubuque's statement. He was
@replying to Dave Seaman's assertion that *groups*, rather than model
@structures in general, cannot be empty. Group theory needs either a
@constant symbol for the identity, or an axiom stating the existence of an
@identity. If the latter, then that axiom is false in an empty structure.
@If the former, then the empty structure can't even be a model for the
@language. Any way you slice it, part of the definition of group is that
@there has to be an identity element, and as a trivial consequence there
@has to be an element.
@
@In brief: it's no great pain to redefine first-order logic to allow an
@empty structure, but there's still no way that structure is a group, so
@Dave's statement remains true.


The point is one could redefine 'group' to include the empty
set, but it would be a nuisance as a lot of results would have to
be reworded.

As to empty-structure f.o.l., is there a _simple_ way to come up
with valid axioms that are complete, and a _simple_ proof of
completeness? And what exactly is the advantage of having empty structures,
does anybody know?


Ilias

Moses Klein

Oct 2, 1996

In article <52sue4$f...@nuke.csu.net>,

ilias kastanas 08-14-90 <ika...@sol.uucp> wrote:
Ilias Kastanas wrote:
> The point is one could redefine 'group' to include the empty
> set, but it would be a nuisance as a lot of results would have to
> be reworded.

One *could*, but I don't see why anybody would. Even if we were working
first-order-logic-with-empty-models (let's call that L'), the natural
definition of group would still be the standard one. It's more
complicated to say "if there is at least one element in the group, then
there is an identity", than just to say "there is an identity". And the
development of group theory gains nothing and loses plenty of elegance.
Since factor groups G/H can't be defined if H is empty, you end up just
having to add "if the group is non-empty" before all your results.

> As to empty-structure f.o.l., is there a _simple_ way to come up
> with valid axioms that are complete,

Probably by requiring the hypothesis (Ex)(x=x) as a condition for either
universal specification (from (Ax)Px infer Pa) or existential
generalization (from Pa infer (Ex)Px).

> and a _simple_ proof of comple-
> teness? And what exactly is the advantage of having empty structures,
> does anybody know?

Well, in one sense the specification of the semantics is simpler this
way. A model is <S,...> where S is any set (instead of any non-empty
set), the ... (all the relations, functions and constants) are as usual.
More importantly, Bill Dubuque mentioned many-sorted logics. I'm not
familiar with what's been done in this direction, but I can imagine it
may be useful to be able to talk about all models of a many-sorted theory
without committing ourselves to all of the sorts being non-empty.

Moses Klein (kl...@math.wisc.edu)


ilias kastanas 08-14-90

Oct 2, 1996

In article <52u1uf$2v...@news.doit.wisc.edu>,
Moses Klein <kl...@math.wisc.edu> wrote:
@In article <52sue4$f...@nuke.csu.net>,
@ilias kastanas 08-14-90 <ika...@sol.uucp> wrote:
@Ilias Kastanas wrote:
@> The point is one could redefine 'group' to include the empty
@> set, but it would be a nuisance as a lot of results would have to
@> be reworded.
@
@One *could*, but I don't see why anybody would. Even if we were working
@first-order-logic-with-empty-models (let's call that L'), the natural
@definition of group would still be the standard one. It's more
@complicated to say "if there is at least one element in the group, then
@there is an identity", than just to say "there is an identity". And the
@development of group theory gains nothing and loses plenty of elegance.
@Since factor groups G/H can't be defined if H is empty, you end up just
@having to add "if the group is non-empty" before all your results.


Of course. That is why it is a nuisance.


@> As to empty-structure f.o.l., is there a _simple_ way to come up
@> with valid axioms that are complete,
@
@Probably by requiring the hypothesis (Ex)(x=x) as a condition for either
@universal specification (from (Ax)Px infer Pa) or existential
@generalization (from Pa infer (Ex)Px).


So in effect we _are_ saying "if the structure is non-empty"...

And we still need to go back and check whether this is complete.


@> and a _simple_ proof of comple-
@> teness? And what exactly is the advantage of having empty structures,
@> does anybody know?
@
@Well, in one sense the specification of the semantics is simpler this
@way. A model is <S,...> where S is any set (instead of any non-empty
@set), the ... (all the relations, functions and constants) are as usual.


Hmm, complicating the deductive system seems a high price for
this...

@More importantly, Bill Dubuque mentioned many-sorted logics. I'm not
@familiar with what's been done in this direction, but I can imagine it
@may be useful to be able to talk about all models of a many-sorted theory
@without committing ourselves to all of the sorts being non-empty.


Many-sorted logic seems a convenience rather than something essential
and is reducible to the one-sorted kind, as far as I know. Which is why
I asked; in what way would the change be useful?


Ilias

Justin Pearson

Oct 3, 1996

kl...@math.wisc.edu (Moses Klein) writes:

>
> > As to empty-structure f.o.l., is there a _simple_ way to come up

> > with valid axioms that are complete,
>

> Probably by requiring the hypothesis (Ex)(x=x) as a condition for either

> universal specification (from (Ax)Px infer Pa) or existential

> generalization (from Pa infer (Ex)Px).


If you want to know how to do it for higher-order logic, one can look
at the article by Fourman in the Handbook of Mathematical Logic, Barwise
et al.

You introduce a quantifier I, such that Ix means roughly that x is a
member of an inhabited type.

Justin Pearson
Computer Science
Royal Holloway
University of London
Egham
Surrey
TW20 0EX
U.K.
Tel: +44(0)1784 443912
Email: jus...@dcs.rhbnc.ac.uk

J.M. Plotkin

Oct 3, 1996

In article <52k3r8$d...@nuke.csu.net>, ika...@sol.uucp (ilias kastanas
08-14-90) wrote:

> In article <52jqjg$f...@news.ece.nwu.edu>,


> Ian Sutherland <i...@eecs.nwu.edu> wrote:
> @In article <52iekh$5...@gap.cco.caltech.edu>,
> @Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:
> @>In article <y8zbuetn7...@martigny.ai.mit.edu>,
> @>Bill Dubuque <w...@martigny.ai.mit.edu> wrote:
> @>>
> @>>Whether structures with empty carriers are allowed or not
> @>>depends on how one defines the notion of structure. Much
> @>>recent work in computational logic and algebra depends on
> @>>allowing empty carriers (uninhabited sorts), especially in
> @>>the many-sorted case. Below is a sample of related literature.
> @>
> @>
> @>
> @> In standard first-order logic the formula Ex ~R(x) V Ex R(x)
> @> (that is, Ax R(x) -> Ex R(x)) is provable, and it holds in every stru-
> @> cture. But if one allows structures to have empty universe the formula
> @> does not hold. To put it another way, { ~Ex ~R(x), ~Ex R(x) } is incon-
> @> sistent, but it has a model, the structure with M = empty, R_M = empty.
> @> Apparently one has to do something more than just stipulate M can be
> @> empty; losing the Completeness theorem is no little thing!
> @

> @Did the original poster somehow suggest that you could just
> @change the semantics to allow empty structures without
> @changing the rules of inference?
>

The issue of empty universes gave rise to what are called Free Logics.
See Bencivenga's article in the Handbook of Philosophical Logic v.III
for details and motivation.

Moses Klein

Oct 4, 1996

In article <52u5lk$a...@nuke.csu.net>,

ilias kastanas 08-14-90 <ika...@sol.uucp> wrote:
:In article <52u1uf$2v...@news.doit.wisc.edu>,

:Moses Klein <kl...@math.wisc.edu> wrote:
:@In article <52sue4$f...@nuke.csu.net>,
:@ilias kastanas 08-14-90 <ika...@sol.uucp> wrote:
:@> And what exactly is the advantage of having empty structures,

:@> does anybody know?
:@
:@Well, in one sense the specification of the semantics is simpler this
:@way. A model is <S,...> where S is any set (instead of any non-empty
:@set), the ... (all the relations, functions and constants) are as usual.
:
: Hmm, complicating the deductive system seems a high price for
: this...

A lot of model theory can be done without any attention to the deductive
system, so if you simplify the semantics, you come out ahead.

: Many-sorted logic seems a convenience rather than something essential


: and is reducible to the one-sorted kind, as far as I know.

Yes, and logic without empty structures allowed is reducible to logic with
empty structures allowed. The choice is one of convenience, and which is
more convenient will depend on your purpose.

Moses Klein (kl...@math.wisc.edu)

ilias kastanas 08-14-90

Oct 5, 1996

In article <533ivt$18...@news.doit.wisc.edu>,
Moses Klein <kl...@math.wisc.edu> wrote:
@In article <52u5lk$a...@nuke.csu.net>,

@ilias kastanas 08-14-90 <ika...@sol.uucp> wrote:
@:In article <52u1uf$2v...@news.doit.wisc.edu>,
@:Moses Klein <kl...@math.wisc.edu> wrote:
@:@In article <52sue4$f...@nuke.csu.net>,
@:@ilias kastanas 08-14-90 <ika...@sol.uucp> wrote:
@:@> And what exactly is the advantage of having empty structures,
@:@> does anybody know?

@:@
@:@Well, in one sense the specification of the semantics is simpler this
@:@way. A model is <S,...> where S is any set (instead of any non-empty
@:@set), the ... (all the relations, functions and constants) are as usual.
@:
@: Hmm, complicating the deductive system seems a high price for
@: this...
@
@A lot of model theory can be done without any attention to the deductive
@system, so if you simplify the semantics, you come out ahead.


It is hardly a simplification; compared to the plethora of other
structures, the empty one is a speck of sand facing a mountain. Practically,
it works as a complication, albeit a tiny one.

Also, bypassing deductions usually happens thanks to the Completeness
Theorem. I haven't seen anyone prove it for the 'new' system.


@: Many-sorted logic seems a convenience rather than something essential
@: and is reducible to the one-sorted kind, as far as I know.
@
@Yes, and logic without empty structures allowed is reducible to logic with
@empty structures allowed. The choice is one of convenience, and which is
@more convenient will depend on your purpose.


Well, to say it is reducible one would need the above proof, at the
very least. And I would still like to see some desirable effect of empty
structures, leaving aside the simplification issue. It is not a rhetorical
question.

Ilias

Orjan Johansen

Oct 5, 1996

In article <53517u$3...@nuke.csu.net>,

ilias kastanas 08-14-90 <ika...@sol.uucp> wrote:
> It is hardly a simplification; compared to the plethora of other
> structures, the empty one is a speck of sand facing a mountain. Practi-
> cally, it works as a complication, albeit a tiny one.
[snip]

> Well, to say it is reducible one would need the above proof, at the
> very least. And I would still like to see some desirable effect of empty
> structures, leaving aside the simplification issue. It is not a rhetori-
> cal question.

The thing I imagine could be important is the fact that every
substructure (as defined by a predicate) is itself a structure.

There may be only one speck of empty structure, but there is a mountain
of predicates giving it as a result, e.g. all the ones for which Ex. Px
is inconsistent with the original axioms.

An analogous case for rings with and without unit is that an ideal
in a ring is only a ring itself if you don't require unit. (This
doesn't seem to stop people from requiring units anyhow, though.)

Greetings,
Ƙrjan.

Moses Klein

Oct 5, 1996

In article <53517u$3...@nuke.csu.net>,
ilias kastanas 08-14-90 <ika...@sol.uucp> wrote:
> It is hardly a simplification; compared to the plethora of other
> structures, the empty one is a speck of sand facing a mountain.

I think this is why Bill Dubuque brought up many-sorted logic. The class
of empty structures is indeed trivial, but of any given language with at
least two individual sorts, the class of models in which at least one
sort is empty, is not.

> Also, bypassing deductions usually happens thanks to the Completeness
> Theorem. I haven't seen anyone prove it for the 'new' system.

As I pointed out earlier, the set of validities is r.e., so there is a
complete deductive system. I even know how I could construct one, based
on formulas coding states of a Turing machine designed to compute the
partial characteristic function of the set of validities (i.e. the
function that returns 1 from the Godel code of a valid sentence, and
undefined on any other input). That would be a very unnatural proof
system, but I know I can do it. I've already conjectured what a more
familiar, Hilbert-style proof system might be, but that doesn't really
matter if I'm interested in the kind of model theory that doesn't need
any attention to inference. The point is that once I know there is a
complete deductive system, I don't need to lose any sleep over what it is.

Moses Klein (kl...@math.wisc.edu)

ilias kastanas 08-14-90

Oct 5, 1996

In article <535vsq$12...@news.doit.wisc.edu>,
Moses Klein <kl...@math.wisc.edu> wrote:
@In article <53517u$3...@nuke.csu.net>,

@ilias kastanas 08-14-90 <ika...@sol.uucp> wrote:
@> It is hardly a simplification; compared to the plethora of other
@> structures, the empty one is a speck of sand facing a mountain.
@
@I think this is why Bill Dubuque brought up many-sorted logic. The class
@of empty structures is indeed trivial, but of any given language with at
@least two individual sorts, the class of models in which at least one
@sort is empty, is not.


It is still pretty thin... and anyway, what good thing happens
with sorts that can be empty? At least in the study of w-models of
analysis and the like, I don't see anything.

@> Also, bypassing deductions usually happens thanks to the Completeness
@> Theorem. I haven't seen anyone prove it for the 'new' system.
@
@As I pointed out earlier, the set of validities is r.e., so there is a
@complete deductive system. I even know how I could construct one, based
@on formulas coding states of a Turing machine designed to compute the
@partial characteristic function of the set of validities (i.e. the
@function that returns 1 from the Godel code of a valid sentence, and
@undefined on any other input). That would be a very unnatural proof
@system, but I know I can do it. I've already conjectured what a more
@familiar, Hilbert-style proof system might be, but that doesn't really
@matter if I'm interested in the kind of model theory that doesn't need
@any attention to inference. The point is that once I know there is a
@complete deductive system, I don't need to lose any sleep over what it is.

Let me explain. I understand what you said earlier, and I don't
dispute it. You can even take all validities as axioms -- they are an
r.e. set. But I do care to have a reasonable deductive system; there
are many areas where it is called for, itself and/or metatheorems about
it. And it would be rather funny if what we end up doing is to stick
Ex x=x everywhere, blocking precisely the empty structure!


Ilias

Bill Dubuque

Oct 5, 1996

Justin Pearson <jus...@dcs.rhbnc.ac.uk> wrote:

:kl...@math.wisc.edu (Moses Klein) writes:
:>
:> > As to empty-structure f.o.l., is there a _simple_ way to come up
:> > with valid axioms that are complete,
:>
:> Probably by requiring the hypothesis (Ex)(x=x) as a condition for either
:> universal specification (from (Ax)Px infer Pa) or existential
:> generalization (from Pa infer (Ex)Px).
:
:If you want to know how to do it for Higher order logic, one can look at
:an article by Fourman in the Handbook of Mathematical logic, Barwise et.al.
:
:You introduce a quantifier I, such that Ix means roughly
: x is a member of an inhabited type.

See also:

------------------------------------------------------------------------------
81a:03060 03F55 03B20 03G30
Scott, Dana
Identity and existence in intuitionistic logic. (English)
Applications of sheaves (Proc. Res. Sympos. Appl. Sheaf Theory to Logic,
Algebra and Anal., Univ. Durham, Durham, 1977), pp. 660--696,
Lecture Notes in Math., 753,
Springer, Berlin, 1979.
------------------------------------------------------------------------------
The simplest way to convey an impression of the contents and style of this
very readable article is to quote extensively from its introduction:

"Standard formulations of intuitionistic logic, whether by logicians or by
category theorists, generally do not take into account partially defined
elements. In classical logic the problem is not important, because it is
always possible to split the definition (or theorem) into cases according
as the object in question does or does not exist. In intuitionistic logic
this way is not open to us, and this circumstance complicates many
constructions, the theory of descriptions, for example. Many people I find
do not agree with me, but I should like to advocate in a mild way in this
paper what I consider a simple extension of the usual formulation of logic
allowing reference to partial elements.

"Technically the idea is to permit a wider interpretation of free
variables. All bound variables retain their usual existential import (when
we say something exists it does exist), but free variables behave in a
more 'schematic' way. Thus there will be no restrictions on the use of
modus ponens or on the rule of substitution involving free variables and
their occurrences. The laws of quantifiers require some modification,
however, to make the existential assumptions explicit. The modification is
very straightforward, and I shall argue that what has to be done is simply
what is done naturally in making a relativization of quantifiers from a
larger domain to a subdomain.

"In Section 1, I discuss the idea of allowing existence as a predicate and
the laws of quantifiers. Section 2 treats the theory of identity and the
connections with existence. Questions of strictness and extensionality of
relations and functions are discussed in Section 3 along with some
examples of first-order theories. As further examples of the use of the
system, the familiar theories of apartness and ordering in intuitionistic
logic are presented in Section 4. Section 5 goes briefly into
relativization, and Section 6 details the principles of descriptions.
Finally, Section 7 reviews the axioms for higher-order intuitionistic
logic from this general viewpoint.

"The idea of schematic free variables is not new for classical logic, and
the literature on 'free' logic (or logic without existence assumptions) is
extensive. All I have done in this essay is to make what seems to me to be
the obvious carryover to intuitionistic logic, because I think it is
necessary and convenient. For those who do not like this formulation, some
comfort can be taken from the fact that in topos theory both kinds of
systems are completely equivalent. However, in first-order logic something
is lost in not allowing partial elements, as I shall try to argue along
the way."

(For the entire collection see MR 80j:18001.)

Reviewed by Gavin Wraith
Cited in: 93i:18004 83a:03070

------------------------------------------------------------------------------
93i:18004 18B25 03G30
Chapman, Jonathan; Rowbottom, Frederick
Relative category theory and geometric morphisms. (English)
A logical approach.
Oxford Science Publications.
Oxford Logic Guides, 16.
The Clarendon Press, Oxford University Press, New York, 1992. xii+263 pp.
$75.00. ISBN 0-19-853434-5
------------------------------------------------------------------------------
This book does category theory in toposes entirely by means of internal set
theory, in the style of a book by J. L. Bell [Toposes and local set theories,
Oxford Univ. Press, New York, 1988; MR 90e:18002].

Partial operators such as composition are awkward. A recent book by the
reviewer [Elementary categories, elementary toposes, Oxford Univ. Press, New
York, 1992] uses them as abuse of notation in the internal language. This book
goes much further, extending the internal set theory $\scr L$ of a topos to a
general constructive theory $\scr L'$ of partially defined terms, related to
D. S. Scott's [in Applications of sheaves (Durham, 1977), 660--696, Lecture
Notes in Math., 753, Springer, Berlin, 1979; MR 81a:03060].

The book gives a new definition of locally small categories in a topos. In
particular, each topos $S$ has a topos $\scr S$ in it with the internal sets
of $S$ as objects, so that $\scr S$ seems to $S$ to be generated by 1. With
this apparatus developed, categories in $S$ can be studied in $\scr S$ quite
naively. The method is applied to toposes in toposes, culminating in the
relative Giraud theorem (examples such as the category of groups in a topos
are not mentioned).

Reviewed by Colin McLarty

Alan Smaill

unread,
Oct 6, 1996, 3:00:00 AM
to

ika...@sol.uucp (ilias kastanas 08-14-90) writes:

> And I would still like to see some desirable effect of empty
> structures, leaving aside the simplification issue. It is not a rhetori-
> cal question.

There is surely one obvious motivation, and that is that if you
believe that the predicate calculus captures logical validity (where I
mean validity itself, not a particular formalisation of it, a la Tarski),
then it's natural to think that no empirical assumptions about the state
of the universe of discourse should be built in. Building in, as a
_logical_ truth, that there is at least one object looks like putting
something in the wrong place.

I think Quine discussed this somewhere, maybe in "Methods of logic".

--
Alan Smaill email: A.Sm...@ed.ac.uk
LFCS, Dept. of Computer Science tel: 44-31-650-2710
University of Edinburgh
Edinburgh EH9 3JZ, UK.

Ilias Kastanas

unread,
Oct 6, 1996, 3:00:00 AM
to

In article <53609c$m...@due.unit.no>,

Orjan Johansen <oer...@lie.matstat.unit.no> wrote:
>In article <53517u$3...@nuke.csu.net>,
>ilias kastanas 08-14-90 <ika...@sol.uucp> wrote:
>> It is hardly a simplification; compared to the plethora of other
>> structures, the empty one is a speck of sand facing a mountain. Practi-
>> cally, it works as a complication, albeit a tiny one.
>[snip]
>> Well, to say it is reducible one would need the above proof, at the
>> very least. And I would still like to see some desirable effect of empty

>> structures, leaving aside the simplification issue. It is not a rhetori-
>> cal question.
>
>The thing I imagine could be important is the fact that every
>substructure (as defined by a predicate) is itself a structure.
>
>There may be only one speck of empty structure, but there is a mountain
>of predicates giving it as a result.. e.g. all the ones for which Ex. Px
>is inconsistent with the original axioms.
>
>An analogous case for rings with and without unit is that an ideal
>in a ring is only a ring itself if you don't require unit. (This
>doesn't seem to stop people from requiring units anyhow, though.)


What _is_ the advantage in having an empty structure? What would be
some results thus obtained? No one has posted anything specific.

Ilias


Mike McCarty

unread,
Oct 7, 1996, 3:00:00 AM
to

In article <fwehgo8...@boor.dcs.ed.ac.uk>,
Alan Smaill <sma...@dcs.ed.ac.uk> wrote:
)ika...@sol.uucp (ilias kastanas 08-14-90) writes:
)
)> And I would still like to see some desirable effect of empty
)> structures, leaving aside the simplification issue. It is not a rhetori-
)> cal question.
)
)There is surely one obvious motivation, and that is that if you
)believe that the predicate calculus captures logical validity (where I
)mean not a particular formalisation, a la Tarski), then it's natural
)to think that no empirical assumptions on the state of the universe of
)discourse should be built in. Building in as a _logical_ truth that
)there is at least one object looks like it's putting something in the
)wrong place.
)
)I think Quine discussed this somewhere, maybe in "Methods of logic".

Somewhere one must at least assume that the empty set "exists".

Mike
--
----
char *p="char *p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}

I don't speak for DSC. <- They make me say that.

Robert Dawson

unread,
Oct 15, 1996, 3:00:00 AM
to

In article <5398c8$m...@gap.cco.caltech.edu>, ika...@alumnae.caltech.edu
says...

> What _is_ the advantage in having an empty structure? What would be
> some results thus obtained? No one has posted anything specific.
>
> Ilias


	Typically, empty structures make the statement of most results
simpler. It's tidier for an n-element set to have 2^n subsets and not
(2^n)-1. It is simpler if {x : x in X, P(x)} defines a subset of X whether
or not P(x) ever holds. It is nicer for every subset of X, including X
itself, to have a complement. So we allow empty sets.
	There are a few places where we have to patch because of this, but
fewer than if we worked in "nonemptyset theory". The theories are, however,
identical, in the sense that by adding special cases we can translate any
result of one into a result of the other.
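
	For what it's worth, a minimal Haskell sketch of the 2^n count,
empty subset included:

import Data.List (subsequences)

-- all subsets of a 3-element set: 2^3 = 8, including [] and the set itself
main :: IO ()
main = do
  let subsets = subsequences [1, 2, 3 :: Int]
  print subsets           -- [[],[1],[2],[1,2],[3],[1,3],[2,3],[1,2,3]]
  print (length subsets)  -- 8
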
-Robert Dawson


ilias kastanas 08-14-90

unread,
Oct 15, 1996, 3:00:00 AM
to

In article <540d6m$k...@News.Dal.Ca>,
Robert Dawson <rda...@husky1.stmarys.ca> wrote:
@In article <5398c8$m...@gap.cco.caltech.edu>, ika...@alumnae.caltech.edu
@says...
@
@> What _is_ the advantage in having an empty structure? What would be
@> some results thus obtained? No one has posted anything specific.
@>
@> Ilias
@
@
@ Typically, empty structures make the statement of most results
@simpler. It's tidier for an n-element set to have 2^n subsets and not
@(2^n)-1. It is simpler if {x:x in X P(X)} defines a subset of X whether or
@not P(X) ever holds. It is nicer for every subset of X, including X itself,
@to have a complement. So we allow empty sets.
@ There are a few places where we have to patch because of this, but
@fewer than if we worked in "nonemptyset theory". The theories are, however,
@identical,in the sense that by adding special cases we can translate any
@result of one to a result of the other.
@ -Robert Dawson
@


The usefulness of the empty set is not in dispute; but the question
was about empty _structures_: when interpreting a f.o. language L in a
structure M, should one admit the M with empty universe? Incidentally,
if L has constant symbols then strictly speaking one cannot define such
an interpretation, there being no element to assign to a constant symbol.


Ilias


Libor Skarvada

unread,
Oct 16, 1996, 3:00:00 AM
to

ika...@sol.uucp (ilias kastanas 08-14-90) writes:
> [...]

> The usefulness of the empty set is not in dispute; but the question
> was about empty _structures_: when interpreting a f.o. language L in a
> structure M, should one admit the M with empty universe? Incidentally,
> if L has constant symbols then strictly speaking one cannot define such
> an interpretation, there being no element to assign to a constant symbol.
> Ilias

Yes. The usefulness of excluding structures (algebras) with empty carriers
from consideration is, in the case of _total_ structures, beyond doubt. On
the other hand, when dealing with partial structures (where the denotations
of function symbols are partial functions) as models for first-order
languages, having the empty structure appears to be advantageous.

But such "partial models" are not used very often anyway. Usually one
instead adds a bottom element to the carrier (and corresponding constants
to the language) so that one can gracefully describe popping from the empty
stack... ;-)

-- Libor

ilias kastanas 08-14-90

unread,
Oct 16, 1996, 3:00:00 AM
to

In article <libor.845453711@anxur>,
Libor Skarvada <li...@anxur.dcs.muni.cz> wrote:
@ika...@sol.uucp (ilias kastanas 08-14-90) writes:
@> [...]
@> The usefulness of the empty set is not in dispute; but the question
@> was about empty _structures_: when interpreting a f.o. language L in a
@> structure M, should one admit the M with empty universe? Incidentally,
@> if L has constant symbols then strictly speaking one cannot define such
@> an interpretation, there being no element to assign to a constant symbol.
@> Ilias
@
@Yes. The usefulness of excluding structures (algebras) with empty carriers
@from consideration, in case of _total_ structures, is undoubtable. On the
@other hand, when dealing with partial structures (where denotations of
@function symbols are partial functions) as models for 1st-order languages,
@then having the empty structure appears to be advantageous.


And partial functions are useful and necessary in recursion theory,
say. But I would really like to see a situation where partial structures
prove to be a good idea. I've been asking for something like that.


@But such "partial models" are not used very often, anyway. Usually one


Never mind that, even occasional use is fine. It seems you know
of some... could you describe it?


@rather adds a bottom element to the carrier (and corresponding constants
@to the language) so s/he can gracefully describe popping from the empty
@stack... ;-)


Oh well, in formal language theory one meets time and again "empty
stacks" with all kinds of markers in them...


Ilias

Libor Skarvada

unread,
Oct 16, 1996, 3:00:00 AM
to

ika...@sol.uucp (ilias kastanas 08-14-90) writes:

>In article <libor.845453711@anxur>,
>Libor Skarvada <li...@anxur.dcs.muni.cz> wrote:
>@Yes. The usefulness of excluding structures (algebras) with empty carriers
>@from consideration, in case of _total_ structures, is undoubtable. On the
>@other hand, when dealing with partial structures (where denotations of
>@function symbols are partial functions) as models for 1st-order languages,
>@then having the empty structure appears to be advantageous.

> And partial functions are useful and necessary in recursion theory,
> say. But I would really like to see a situation where partial structures
> prove to be a good idea. I've been asking for something like that.


>@But such "partial models" are not used very often, anyway. Usually one

> Never mind that, even occasional use is fine. It seems you know
> of some... could you describe it?

> [...]

Well, I meant the use of partial models in algebraic specifications of
programs.

In algebraic specifications one uses many-sorted theories, whose models
are many-sorted (or heterogeneous) algebras. If we deal only with total
models of an equational theory, we must introduce an "error" constant
symbol for some sorts to be able to describe (write axioms for)
operations that are intrinsically partial:

pred (zero) = error_nat
pop (emptystack) = error_stack
top (emptystack) = error_nat

etc. But then we must describe the behavior of these (and other)
operations when they are applied to the error elements. This may
necessitate introducing new error symbols... and may result in too
many "uninteresting" axioms:

pred (error_nat) = error_nat
succ (error_nat) = error_nat
iszero (error_nat) = error_bool
not (error_bool) = error_bool
isempty (error_stack) = error_bool
push (x, error_stack) = error_stack
push (error_nat, y) = error_stack
...
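
In a Haskell-like functional language this total-with-errors style looks
roughly as follows (a sketch only, with made-up names, just to show the
shape of the error-propagation clauses):

-- every sort gets an explicit error element, and every operation must
-- say what it does with it
data Nat   = Zero | Succ Nat | ErrorNat          deriving (Eq, Show)
data Stack = Empty | Push Nat Stack | ErrorStack deriving (Eq, Show)

pred' :: Nat -> Nat
pred' (Succ n) = n
pred' Zero     = ErrorNat
pred' ErrorNat = ErrorNat

pop :: Stack -> Stack
pop (Push _ s) = s
pop Empty      = ErrorStack
pop ErrorStack = ErrorStack

push :: Nat -> Stack -> Stack
push ErrorNat _   = ErrorStack
push _ ErrorStack = ErrorStack
push n s          = Push n s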


On the other hand, if we admit partial algebras as models, we can write
just equations, with the convention that an equation asserts that both
sides are defined and equal. (The requirement that a specific term t be
defined is then expressible as the equation t=t.) For example,

	zero = zero		; ("let zero be defined")
	pred(succ(x)) = x

Nothing is said about the term pred(zero), so it need not be defined
(in the initial model it is indeed undefined). Similarly for the other
operations. The result is a shorter and perhaps more readable
specification.
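
The corresponding partial version, again only a sketch, drops the error
constructors and leaves the awkward cases simply undefined (modelled here
with Maybe):

data Nat   = Zero | Succ Nat        deriving (Eq, Show)
data Stack = Empty | Push Nat Stack deriving (Eq, Show)

-- pred zero, pop emptystack and top emptystack are simply left undefined;
-- nothing further needs to be said about them
predP :: Nat -> Maybe Nat
predP (Succ n) = Just n
predP Zero     = Nothing

popP :: Stack -> Maybe Stack
popP (Push _ s) = Just s
popP Empty      = Nothing

topP :: Stack -> Maybe Nat
topP (Push n _) = Just n
topP Empty      = Nothing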

But yes, I admit that this is more or less a matter of taste, and I
believe one can also find drawbacks of this "partial approach" to
algebraic specifications.


Regards, -- Libor
