
Occam's razor and efficient concept formation in planlinguistics


Wolfgang G. Gasser

Aug 3, 2005, 11:55:40 AM
One possible criterion for judging planned languages is the number
of basic concepts (roots) and of concept-formation principles used
in them. The lower these numbers are in relation to the expressivity
of the language, the better.

Here is an example of extreme conciseness of concept formation in
the case of family relationships. We start with the basic concept
'is-parent' alias 'ip', which is a relation between two arguments of
equal status, introduced here with the prepositions 'a' and 'o':

ip a John o Jane = John is parent of Jane
= Jane is child of John

The sequence order of the three constitutive parts (ip, a John,
o Jane) does not matter.

In order to derive the nouns 'parent', 'father', 'mother', 'child',
'son', and 'daughter', apart from the basic concept 'ip' we need
the following distinctions:

1) Are we dealing with the argument introduced by 'a' (mother,
father) or with the other argument introduced by 'o' (daughter,
son)?

2) Is it necessary to indicate the sex or not? If yes, then we
must introduce an additional distinction between male (m) and
female (f).

One possible solution:

parent: ipa ( = ip + a = a-argument of ip)
mother: ipaf ( = ip + a + female )
father: ipam
child: ipo ( = ip + o = o-argument of ip)
daughter: ipof
son: ipom ( = ip + o + male )

'The daughter of John' becomes 'a John ipof'. 'The father of Jane'
becomes 'ipam o Jane'.

The interesting point is that we need no further concepts in order
to create such nouns as 'sister', 'grandfather', 'grandchild',
'aunt', 'nephew', 'cousin' and so on:

brother: ipaipom (parent -> child-male)
sister: ipaipof
grandparent: ipaipa (parent -> parent)
grandmother: ipaipaf
grandchild: ipoipo
aunt: ipaipaipof (parent -> parent -> child-female)
nephew: ipaipoipom (parent -> child -> child-male)
cousin: ipaipaipoipo (parent -> parent -> child -> child)

In Danish for instance, there are two words for grandfather: morfar
(mother -> father) and farfar (father -> father). The same holds
for grandmother (mormor and farmor). Such words are also part of
this system: ipafipam, ipamipam, ipafipaf, ipamipaf.

A half-brother on mother's side is 'ipafipom' and on father's
side: ipamipom.

In order to discriminate half-siblings and genuine siblings in
general we must use a further distinction: between 'single' (s)
and 'two' (t).

a genuine sister: ipatipof (parent-two -> child-female)
a half sister: ipasipof (parent-single -> child-female)
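
These compounds can be interpreted purely mechanically. The following
little Python sketch is only an illustration of that claim: the toy
family data and the function names are invented here and are not part
of any proposed language, and the 'single'/'two' distinction is not
modelled. It resolves a compound such as 'ipaipof' as a chain of steps
over a set of 'ip' facts:

import re

# ip(a=parent, o=child) facts; the sex of each person is known separately.
IP = {("John", "Jane"), ("John", "Jack"), ("Mary", "Jane"), ("Mary", "Jack"),
      ("Jane", "Tim")}
SEX = {"John": "m", "Mary": "f", "Jane": "f", "Jack": "m", "Tim": "m"}

def step(people, marker):
    """Apply one 'ipa', 'ipo', 'ipaf', 'ipom', ... step to a set of people."""
    direction, sex = marker[2], marker[3:] or None
    out = set()
    for x in people:
        for a, o in IP:
            if direction == "a" and o == x:   # ipa: move to the a-argument (parent)
                out.add(a)
            if direction == "o" and a == x:   # ipo: move to the o-argument (child)
                out.add(o)
    return {p for p in out if sex is None or SEX[p] == sex}

def resolve(compound, person):
    """Split e.g. 'ipaipof' into ['ipa', 'ipof'] and apply the steps in order."""
    people = {person}
    for marker in re.findall(r"ip[ao][mf]?", compound):
        people = step(people, marker)
    return people

print(resolve("ipa", "Jane"))       # parents of Jane: {'John', 'Mary'}
print(resolve("ipaipof", "Jack"))   # sister of Jack: {'Jane'}
print(resolve("ipoipo", "John"))    # grandchildren of John: {'Tim'}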

Apart from general principles and the distinction between 'male'
and 'female' and between 'single' and 'two', the concept 'ip' is
enough to enable the creation of a huge number of basic concepts
which are often confusing and very tedious to learn in natural
languages.


Cheers,
Wolfgang


Schematic concept-formation:
http://groups.google.li/group/alt.language.artificial/msg/9c8ad8d901ebebf4


Nathan Sanders

Aug 3, 2005, 2:47:30 PM
In article <dcqpbc$3sk$1...@atlas.ip-plus.net>,

"Wolfgang G. Gasser" <si...@homepage.li> wrote:

> One possible criteria to judge planned languages is the number of
> basic concepts (roots) and of concept-formation principles used
> in it. The lower these numbers are in relation to the expressivity
> of the language, the better.

Better for what? Reducing the size of dictionaries and increasing the
size of grammar descriptions?

> ip a John o Jane = John is parent of Jane
> = Jane is child of John
>
> The sequence order of the three constitutive parts (ip, a John,
> o Jane) does not matter.

Order doesn't matter in any context? How would one indicate
topicalization/focus? Are there no pragmatic issues tied to word
order at all? (I've noticed conlangers tend to gloss over---or more
frequently, completely ignore---pragmatics.)

Why can't "ip John a Jane o", or even "ip John Jane a o", be a valid
word order? Are "a" and "o" not words?

> In Danish for instance, there are two words for grandfather: morfar
> (mother -> father) and farfar (father -> father). The same is valid
> for grandmother (mormor and farmor). Such words are also part of
> this system: ipafipam, ipamipam, ipafipaf, ipamipaf.

Presumably, the order cannot be changed: "ipafipam" (paternal
grandmother) is not the same thing as "ipamipaf" (maternal
grandfather). The first set of morphemes refers to the latest
generation, and as you move rightward, morphemes refer to earlier and
earlier generations.

That is, there is an assumed, inherent order for the object morphemes
to precede the agent morphemes: ipamipaf = "the female parent of a
male parent", not *"the male parent of a female parent". Why is this
order required when adding affixes to "ip", but not when building
sentences with "ip"? It seems like such a mixed system would actually
be more confusing to a learner than easier! (Especially if the
learner's native language doesn't make a significant distinction
between morpheme and word...)

All you've done is shift the required order within sentences to a
required order within words, moving the "problem" (which isn't really
a problem anyway...) from one part of the language to another. That's
not "better"; it's different.

> Apart from general principles

What general principle tells us that agent/object markers should come
to the left of gender markers?

If there is some general principle that tells us that objects should
be closer to the root than agents (and thus, to the left, since the
root is at the left), you explicitly eschewed this general principle
at the sentence level. Why should it be required within words, but
not within sentences?

> and the distinction between 'male'
> and 'female' and between 'single' and 'two', the concept 'ip' is
> enough to enable the creation of a huge number of basic concepts
> which are often confusing and very tedious to learn in natural
> languages.

And yet, human beings do manage to learn them anyway... fluently, in
fact.

BTW, I'm not trying to pick on you or conlanging. Conlanging is fun,
and even occasionally useful (getting paid to create Klingon is
"useful"). I just think that the idea of designing a "better"
language than a typical natural language is inherently flawed, because
perceived improvements in one area invariably lead to problems in
another area---usually an area the conlanger (like most other
conlangers) hasn't even thought about, such as pragmatics, lexical
neighborhood density, or the need for redundancy.

Nathan

--
Nathan Sanders
Linguistics Program nsan...@williams.edu
Williams College http://wso.williams.edu/~nsanders
Williamstown, MA 01267

Wolfgang G. Gasser

Aug 4, 2005, 1:22:43 PM
> = Nathan Sanders in news:nsanders.DIE.SPAM-3...@news.verizon.net
>> = Wolfgang G. Gasser in news:dcqpbc$3sk$1...@atlas.ip-plus.net

>> ip a John o Jane = John is parent of Jane
>> = Jane is child of John
>>
>> The sequence order of the three constitutive parts (ip, a John,
>> o Jane) does not matter.
>
> Order doesn't matter in any context? How would one indicate
> topicalization/focus? Are there no pragmatic issues tied to word
> order at all? (I've noticed conlangers tend to gloss over---or more
> frequently, completely ignore---pragmatics.)

Topicalization/focus is a different matter. If the word order is
not constrained by the syntactic properties corresponding to the
needed semantic/relational properties, then word order can be
used for topicalization, e.g. assisted by the Japanese topic
particle 'wa': The left side of 'wa' is the topic/subject and
the right side is the predicate. The emphasis can be defined to
lie on the last parts of both sides:

a John wa o Jane ip = John: he is the FATHER of Jane
a John wa ip o Jane = John: he is the father of JANE
a John ip wa o Jane = the CHILD of John: it is Jane
ip a John wa o Jane = the child of JOHN: it is Jane
a John o Jane wa ip = the relation between JANE and John:
Jane is the daughter
ip wa o Jane a John = Concerning the parent-child-relation,
John is the father of Jane

> Why can't "ip John a Jane o", or even "ip John Jane a o", be a valid
> word order? Are "a" and "o" not words?

Without any rules, all becomes (im)possible. Here 'a' and 'o' are
(arbitrarily) defined as prepositions which by definition must
precede the corresponding arguments.
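
As an illustration of why the constituents can then be freely ordered,
here is a toy parser (my own sketch only, not a definition of the
grammar; the token lists are arbitrary) that accepts the three parts of
an 'ip' clause in any order, because each argument is identified by the
preposition bound to it:

# A clause is a relation word plus prepositional phrases 'a X' / 'o X'.
# Because each argument is tagged by its preposition, the constituents
# ('ip', 'a John', 'o Jane') may appear in any order.

RELATIONS = {"ip"}
PREPOSITIONS = {"a", "o"}

def parse(tokens):
    relation, args = None, {}
    i = 0
    while i < len(tokens):
        t = tokens[i]
        if t in RELATIONS:
            relation = t
            i += 1
        elif t in PREPOSITIONS:
            args[t] = tokens[i + 1]        # the preposition binds the next word
            i += 2
        else:
            raise ValueError("unexpected token: " + t)
    return relation, args

print(parse("ip a John o Jane".split()))   # ('ip', {'a': 'John', 'o': 'Jane'})
print(parse("o Jane a John ip".split()))   # same analysis, different order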

> > In Danish for instance, there are two words for grandfather: morfar
> > (mother -> father) and farfar (father -> father). The same is valid
> > for grandmother (mormor and farmor). Such words are also part of
> > this system: ipafipam, ipamipam, ipafipaf, ipamipaf.
>
> Presumably, the order cannot be changed: "ipafipam" (paternal
> grandmother) is not the same thing as "ipamipaf" (maternal
> grandfather).

If you describe the way to a destination to somebody, you start
with the first passage from the initial location and end up
with the last passage to the destination. It is simply more
reasonable to do it this way than the other way round. Compare:

- go to the last room of the left side of the second floor
of the third house of that street

- go to: that street -> third house -> second floor -> left
side -> last room

In a similar way, when creating the concept of 'uncle (of a
given person)', it seems more reasonable to me to start with
the given person, to go further to the parents and grandparents
and end up with the son of the latter than the other way round.

> The first set of morphemes refers to the latest
> generation, and as you move rightward, morphemes refer to earlier
> and earlier generations.

Only in the case of 'parent'. In the case of 'child' it is the
opposite. For instance 'ipof-ipom' (grandson on daughter's
side) means: first you target a female child (of somebody)
and then a male child (of that female child).

The establishment of general order principles such as e.g.

1. from general to specific
2. from center (origin) to border
(--> e.g.: from present to past)
3. from past to future

seems to me quite important for planlinguistics. In the case
we are dealing with, both 2. and 3. could be applied with
equal right; 2., however, can be defined to have higher priority
than 3. (and lower priority than 1.).

> That is, there is an assumed, inherent order for the object morphemes
> to precede the agent morphemes: ipamipaf = "the female parent of a
> male parent", not *"the male parent of a female parent". Why is this
> order required when adding affixes to "ip", but not when building
> sentences with "ip"? It seems like such a mixed system would actually
> be more confusing to a learner than easier! (Especially if the
> learner's native language doesn't make a significant distinction
> between morpheme and word...)

If we label the first argument of the relation REL(arg1, arg2)
by the preposition 'a' and the second by 'o', then the relation
remains the same, irrespective of whether we write e.g. 'a arg1
REL o arg2' or 'o arg2 a arg1 REL'. In the case of combining
relations however, the order in which they are combined may play
a decisive role.

So I don't think it is a mixed system in the sense you suggest.

> I just think that the idea of designing a "better"
> language than a typical natural language is inherently flawed, because
> perceived improvements in one area invariably lead to problems in
> another area---usually an area the conlanger (like most other
> conlangers) hasn't even thought about, such as pragmatics, lexical
> neighborhood density, or the need for redundancy.

Here I totally disagree with you. Look e.g. at the translation
of 'order' into German: http://www.dict.cc/?s=order Don't you
consider this rather chaotic? And why shouldn't it be a mere
prejudice that 'improvements in one area invariably lead to
problems in another area'?

And why do we need unavoidable redundancy? It is never a problem
to create additional redundancy, if necessary.

Moreover, I'm convinced that schematic concept formation leads
to the highest and most efficient 'lexical neighbourhood density'.


Cheers, Wolfgang


Nathan Sanders

Aug 4, 2005, 2:50:08 PM
In article <dctite$9dl$1...@atlas.ip-plus.net>,

"Wolfgang G. Gasser" <si...@homepage.li> wrote:

> > = Nathan Sanders in
> > news:nsanders.DIE.SPAM-3...@news.verizon.net
> >> = Wolfgang G. Gasser in news:dcqpbc$3sk$1...@atlas.ip-plus.net
>

> >> The sequence order of the three constitutive parts (ip, a John,
> >> o Jane) does not matter.
> >
> > Order doesn't matter in any context? How would one indicate
> > topicalization/focus? Are there no pragmatic issues tied to word
> > order at all? (I've noticed conlangers tend to gloss over---or more
> > frequently, completely ignore---pragmatics.)
>
> Topicalization/focus is a different matter.

It's a linguistic matter, so in terms of making a "better" language,
it must be considered.

> If the word order is
> not constrained by the syntactic properties corresponding to the
> needed semantic/relational properties, then word order can be
> used for topicalization,

In which case, word order *does* matter, and every word order will be
assumed by the listener to have some particular pragmatic effect. The
speaker would need to be aware of that, and pick the correct order to
convey the correct pragmatic effect.

If word order is fixed, no one on either side would worry about the
pragmatic effects of word order at all.

> > Why can't "ip John a Jane o", or even "ip John Jane a o", be a valid
> > word order? Are "a" and "o" not words?
>
> Without any rules, all becomes (im)possible. Here 'a' and 'o' are
> (arbitrarily) defined as prepositions which by definition must
> precede the corresponding arguments.

Then you have a mixed system: some portions of sentences can be freely
ordered, others cannot. The language learner will have to learn what
can and cannot be reordered in a supposedly free word order language.

What if you wanted to focus the "a" (perhaps someone thought you said
"o" instead), but put as little focus on John as possible? "a ip
John" seems the most logical way to do that, but it is outlawed
in your system, even though you want to have free word order...

> > Presumably, the order cannot be changed: "ipafipam" (paternal
> > grandmother) is not the same thing as "ipamipaf" (maternal
> > grandfather).
>

> when creating the concept of 'uncle (of a
> given person)', it seems more reasonable to me to start with
> the given person, to go further to the parents and grandparents
> and end up with the son of the latter than the other way round.

"the son of the father of my father"
"my father's father's son"

Both orders are available in English, and both orders make sense to
me. Both orders exist in languages around the world, and there are
likely many languages which only have one or the other. I don't see
that there is anything inherently "better" about one order over the
other. It depends on how you view a family: as something that evolves
over time, or as an extension of an individual (or something more
complex than either).

> > That is, there is an assumed, inherent order for the object morphemes
> > to precede the agent morphemes: ipamipaf = "the female parent of a
> > male parent", not *"the male parent of a female parent". Why is this
> > order required when adding affixes to "ip", but not when building
> > sentences with "ip"? It seems like such a mixed system would actually
> > be more confusing to a learner than easier! (Especially if the
> > learner's native language doesn't make a significant distinction
> > between morpheme and word...)
>
> If we label the first argument of the relation REL(arg1, arg2)
> by the preposition 'a' and the second by 'o', then the relation
> remains the same, irrespective of whether we write e.g. 'a arg1
> REL o arg2' or 'o arg2 a arg1 REL'. In the case of combining
> relations however, the order in which they are combined may play
> a decisive role.
>
> So I don't think it is a mixed system in the sense you suggest.

It's mixed in the sense that, at the sentence level, you propose that
arg1 and arg2 may be freely ordered, but at the word level, they may
not. For a speaker coming from a language with little or no internal
morphology, this distinction between "sentence parts" and "word parts"
will be an extra hurdle already, so why make it worse by giving them
contradictory rules? If the word parts have a strict order, then why
shouldn't the very same arguments have the same order when they are
sentence parts?

> > I just think that the idea of designing a "better"
> language than a typical natural language is inherently flawed, because
> > perceived improvements in one area invariably lead to problems in
> > another area---usually an area the conlanger (like most other
> > conlangers) hasn't even thought about, such as pragmatics, lexical
> > neighborhood density, or the need for redundancy.
>
> Here I totally disagree with you. Look e.g. at the translation
> of 'order' into German: http://www.dict.cc/?s=order Don't you
> consider this rather chaotic?

In what sense? The English side, with a single word having multiple
meanings, or the German side, with different words for different
meanings?

Both sides have benefits and drawbacks.

The benefit of having lots of homophones is that you have fewer
phonetic strings to memorize and distinguish from each other, which
allows for a smaller phoneme system and shorter words. The drawbacks
include increased ambiguity, which leads to longer sentences and
conversations.

On the other hand, if you give every meaning a completely unique
phonetic string, the benefits and drawbacks are reversed. Words need
to be longer or the phoneme system more complex (and thus, more
confusable, since the available phoneme space is bounded), but
individual words will be less ambiguous, so sentences and conversations
will be shorter.

Obviously, no natural language is at either extreme: every natural
language has both homophones and phonetically unique words. That's
because neither end of the spectrum is better than the other;
languages can change slightly in either direction, at different points
in time, in different parts of the vocabulary, but in the long term,
they float somewhere in the middle (just like a random walk on a
straight line stays near the origin).

> And why shouldn't it be a mere
> prejudice that 'improvements in one area invariably lead to
> problems in another area'?

Because it's backed up by linguistic data. What improvement do you
suppose you can make to a language (without sacrificing expressive
power of course) that would *not* lead to complication in another part
of the language?

It's exactly for this reason that languages never reach some perfect
end state. Every evolutionary improvement triggers a new complexity,
and this chain reaction is what keeps languages constantly evolving
without termination.

> And why do we need unavoidable redundancy? It is never a problem
> to create additional redundancy, if necessary.

Redundancy is good for spoken language, because humans make speech
errors, have different voices and accents, slur things together, pause
and continue somewhere else in the sentence, get drowned out by
background noise, etc.

> Moreover, I'm convinced that schematic concept formation leads
> to the highest and most efficient 'lexical neighbourhood density'.

It depends on how phonetically similar your proposed words are. If
you plan on using, say, ip, it, and ik as words, your lexical
neighborhood density will be very high (which has negative effects
like increasing processing time and confusability).

If you want to make them more distinct (ip/il/ig or ipo/itu/ika), you
either need to make use of more phonemes (an increase in phonological
complexity and articulatory difficulty needed to make the necessary
distinctions) or make the words longer (an increase in memorization,
speaking time, and number of speech errors likely to be found in a
given word).
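
To put rough numbers on this tradeoff, here is a toy calculation of my
own (not taken from the cited literature, and ignoring phonotactics):
how long must words be, on the most optimistic counting, to keep a
vocabulary distinct with a given phoneme inventory?

import math

def min_word_length(vocab_size, phonemes):
    """Shortest length n such that phonemes**n >= vocab_size."""
    return math.ceil(math.log(vocab_size, phonemes))

# Assumed, purely illustrative vocabulary size of 50,000 items.
for phonemes in (10, 20, 40):
    print(phonemes, "phonemes ->",
          min_word_length(50000, phonemes), "segments per word (minimum)")

# 10 phonemes -> 5 segments per word (minimum)
# 20 phonemes -> 4 segments per word (minimum)
# 40 phonemes -> 3 segments per word (minimum)

Real lexicons sit well above these floors, precisely because they also
avoid phonotactically illegal strings and keep some redundancy.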

Andrew Nowicki

Aug 5, 2005, 10:06:21 PM
Wolfgang G. Gasser wrote:

> One possible criteria to judge planned languages is the number of
> basic concepts (roots) and of concept-formation principles used
> in it. The lower these numbers are in relation to the expressivity
> of the language, the better.

True. Americans call it KISS = keep it simple, stupid.
European languages have been evolving toward grammatical
simplicity.

> brother: ipaipom (parent -> child-male)
> sister: ipaipof
> grandparent: ipaipa (parent -> parent)
> grandmother: ipaipaf
> grandchild: ipoipo
> aunt: ipaipaipof (parent -> parent -> child-female)
> nephew: ipaipoipom (parent -> child -> child-male)
> cousin: ipaipaipoipo (parent -> parent -> child -> child)

Several conlangs (Ygyde, Toki Pona, Kali-sise) use these constructs.
Brother and sister are compound words in Ygyde:
brother = y-je-pi = "noun masculine sibling"
sister = y-fi-pi = "noun feminine sibling"

> In order to discriminate half-siblings and genuine siblings in
> general we must use a further distincion: between 'single' (s)
> and 'two' (t).

> a genuine sister: ipatipof (parent-two -> child-female)
> a half sister: ipasipof (parent-single -> child-female)

Ygyde definitions:
half sister = ofisypi = "noun feminine part sibling"
half brother = ojesypi = "noun masculine part sibling"

> And why do we need unavoidable redundancy? It is never a problem
> to create additional redundancy, if necessary.
>

> Moreover, I'm convinced that schematic concept formation leads
> to the highest and most efficient 'lexical neighbourhood density'.

True. Nearly all words of Ygyde, Toki Pona and Kali-sise
conlangs are compound words. Redundancy in these
conlangs is produced by the fact that some compound
words do not make sense. For example, the Ygyde compound word
"o-lu-ta-by" means "noun mathematical war food".
This word does not make sense, so it does not exist.
Similar sounding compound words can be eliminated
by interchanging positions of root words within the
compound word.

Natural spoken languages are very difficult to learn
because they are much more complex than they need to
be. Esperanto and other euroclones are somewhat simpler
than natural European languages. The simplest and easiest
languages to learn are compound conlangs like Ygyde,
Toki Pona, and Kali-sise. The Lojban conlang is a step
backward in the sense that its grammar is very difficult
to learn.

Peter T. Daniels

Aug 5, 2005, 10:28:02 PM

The less redundancy they have, the less useful they would be in
real-world situations.

Please keep this conlang stuff out of sci.lang; there are enough real
languages in the world to deal with.

Oh, and did you just outlaw poetry from Ygyde, whatever it may be?
(Don't answer here.)
--
Peter T. Daniels gram...@att.net

Wolfgang G. Gasser

Aug 7, 2005, 2:32:09 PM
> = Nathan Sanders in news:nsanders.DIE.SPAM-1...@news.verizon.net
>> = Wolfgang G. Gasser in news:dctite$9dl$1...@atlas.ip-plus.net

> If word order is fixed, no one on either side would worry about the
> pragmatic effects of word order at all.

That's obvious. It was not my intention to deal with pragmatic
effects, topicalization, emphasis and the like, nor with the
creation of concrete words or of a concrete language. I've
been creating and using such words only as examples.

In my opinion it's not a further planned language we need at the
moment, but something like a dictionary of concepts, i.e. a
mapping of the semantic space of the words used in concrete
languages.

For example, if such a dictionary contains schematically
constructed concepts involving both 'to ask somebody to do
something' and 'to command', then all concepts from 'gentle
solicitation' to 'crude coercion' are part of the dictionary.

The creation of such a semantic dictionary would, as a by-product,
clear up many ordinary words (e.g. 'want', 'may', 'may not',
'should', 'must not') or less-ordinary words (e.g. 'soul', 'mind',
'ghost', 'spirit').

> For a speaker coming from a language with little or no internal
> morphology, this distinction between "sentence parts" and "word parts"
> will be an extra hurdle already,

If a person is not able or willing to understand the "word parts"
of words created for concepts such as 'daughter' and 'grandfather'
by means of schematic concept formation, then he may simply
ignore the internal structure of character strings like 'ipof'
or 'ipa-ipam'. Why should it be more difficult for such a person
to learn 'ipa-ipam' than to learn 'grand-father'?

> so why make it worse by giving them
> contradictory rules? If the word parts have a strict order, then why
> shouldn't the very same arguments have the same order when they are
> sentence parts?

In the case of '5 + 7', you can exchange the places of 5 and 7.
In the case of 57 however, 5 and 7 cannot be freely ordered,
because the meaning of a decimal number depends on the sequence
of its digits. By the way, the decimal system and similar number
systems are excellent examples showing the efficiency of schematic
concept formation.

At least from the semantic point of view, there is no clear-cut
distinction between "word parts" and "sentence parts". Rather,
there are different groupings at different levels, similar to
mathematical formulae such as: ((25 - y) - (2x + 5z) * (3z - 27)).

All such groupings (of word parts and sentence parts) must
sometimes have a strict order, for '36' and '63' or 'friend of
enemy' and 'enemy of friend' are different concepts. However,
'below 9 and above 3' is the same as 'above 3 and below 9'.
In any language some elements can be freely ordered and others
must be fixed. In planlinguistics we should postulate as few
rules as possible, but also: as many rules as needed!


Let us deal now with the concept 'give-receive' alias 'GR'. It
implies (unlike the static 'parent-child' relation) a
change of state. The relation obviously has three arguments:
giver, receiver, and exchanged object.

So one must link three different arguments in a clear way with
the relation 'GR'. Whether we use prepositions, postpositions, a
place structure system as in Lojban, or some other mechanism does
not matter.

Because there is an analogy BETWEEN 'parent' and 'child' of the
'parent-child' relation AND 'giver' and 'receiver' of the 'give-
receive' relation, the argument 'giver' should use the same
argument-marker as 'parent' and 'receiver' the same marker as
'child'. As the third argument-marker (for 'exchanged object')
let us introduce the preposition 'i'.

a-John GR o-Jane i-ice

The derivability of 'giver', 'receiver' and 'exchanged object'
obviously corresponds better to Occam's razor than the
introduction of independent semantic units. And it's obvious
that we should somehow use the argument-markers (in our case
'a', 'o' and 'i') to indicate which argument is derived from
the original relation 'GR'. Here these markers are simply used
as affixes to 'GR':

giving person = John = GRa (o-Jane i-ice)
receiving person = Jane = GRo (a-John i-ice)
exchanged object = ice = GRi (a-John o-Jane)
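
A small Python sketch (illustration only; the facts, names and helper
functions are invented here) shows how the same marker can be read as
selecting the corresponding argument slot of 'GR':

# Toy model of GR(a=giver, o=receiver, i=exchanged object)
# and of the derived argument-concepts GRa, GRo, GRi.
GR_FACTS = [{"a": "John", "o": "Jane", "i": "ice"},
            {"a": "Jane", "o": "Jack", "i": "book"}]

def derive(marker, **given):
    """GRa/GRo/GRi: return the values of one argument slot, optionally
    restricted by the other slots, e.g. derive('a', o='Jane') = givers to Jane."""
    return {fact[marker] for fact in GR_FACTS
            if all(fact[k] == v for k, v in given.items())}

print(derive("a", o="Jane", i="ice"))   # GRa (o-Jane i-ice) -> {'John'}
print(derive("o", a="John"))            # GRo (a-John)       -> {'Jane'}
print(derive("i"))                      # GRi: all exchanged objects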

If we take Occam's razor seriously then concepts such as
'sell-buy', 'lend-borrow' and probably also 'teach-learn'
must be schematized in the same way, possibly introducing
a further mechanism in order to indicate whether an argument
is especially active or passive. "He teaches her something"
normally doesn't mean "She learns something from him".

> What improvement do you
> suppose you can make to a language (without sacrificing expressive
> power of course) that would *not* lead to complication in another part
> of the language?

Do you actually believe that the abolishment of superfluous
exceptions would diminish expressive power or complicate another
part of a language?

> Redundancy is good for spoken language, because humans make speech
> errors, have different voices and accents, slur things together, pause
> and continue somewhere else in the sentence, get drowned out by
> background noise, etc.

The shorter the words, the more time you have to say the message
in an understandable way. The longer the words, the higher the
tendency to articulate improperly.

And the longer the words, the higher the probability of overlooking
orthographic errors. In short words, errors are normally overlooked
only when the words can be skipped because of the context.

If room number 325 means the 25th room of the third floor, then
the error of having written 326 will easily be recognized; in the
case of 34096243598 instead of 34095243598, however, far less
easily. The same holds for 'is' and 'ie' versus 'phonetically'
and 'phonatically' or 'phoneticelly'.

And are you sure that in the case of background noise it is easier
to understand long words such as e.g. 'phonetically' than simple
words such as e.g. 'sea'/'see'?

And if the words are short, we can, in analogy to Andrew Nowicki's
Long Ygyde, create long versions of words by simply spelling out the
words using predefined, optimally distinguishable names for the
characters.

>> Moreover, I'm convinced that schematic concept formation leads
>> to the highest and most efficient 'lexical neighbourhood density'.

I suppose I misunderstood 'lexical neighbourhood density'. I
interpreted it as the density of concepts/meanings, but now I
suppose you mean something like 'density of character strings'.

> It depends on how phonetically similar your proposed words are. If
> you plan on using, say, ip, it, and ik as words, your lexical
> neighborhood density will be very high (which has negative effects
> like increasing processing time and confusability).

I don't understand. My experience is: the shorter the words, the
easier to recognize them and to find them in a dictionary. And if
a person confuses 'ip' with 'it' and 'ik', then he also confuses
'iteration' with 'iperation', 'iterakion', 'itaretion' and so on.

Cheers,
Wolfgang


Nathan Sanders

Aug 8, 2005, 1:02:38 AM
In article <dd5k15$oi0$1...@atlas.ip-plus.net>,

"Wolfgang G. Gasser" <si...@homepage.li> wrote:

> > = Nathan Sanders in
> > news:nsanders.DIE.SPAM-1...@news.verizon.net
> >> = Wolfgang G. Gasser in news:dctite$9dl$1...@atlas.ip-plus.net
>
> > If word order is fixed, no one on either side would worry about the
> > pragmatic effects of word order at all.
>
> That's obvious. It was not my intention to deal with pragmatic
> effects, topicalization, emphasis and similar, neither with the
> creation of concrete words nor of a concrete language.

If you aren't dealing with pragmatics, topicalization, emphasis, etc.,
then you aren't creating a "language" in any useful sense for fluent
human communication. Perhaps I misunderstood your intentions behind a
"better" (your word) planned language... Who exactly is it planned to
be used by, and who is it "better" for?

> In my opinion it's not a further planned language we need at the
> moment, but something like a dictionary of concepts, which is
> something like mapping the semantic space of words used in
> concrete languages.

This is an entirely different kettle of fish from creating a language.
And it's something (to a rough approximation) that makes up a portion
of the field of semantics (there is, of course, much more to the field
beyond cataloguing meanings, and in my opinion, these other avenues of
inquiry are far more interesting; as a native speaker of a natural
human language, I can just open up a thesaurus in my language when I
want to see how far supposed synonyms can vary in their semantics and
pragmatics).

> The creation of such a semantic dictionary would, as a by-product,
> clear up many ordinary words (e.g. 'want', 'may', 'may not',
> 'should', 'must not') or less-ordinary words (e.g. 'soul', 'mind',
> 'ghost', 'spirit').

Clear them up for who? Native speakers of a language know what words
mean when they use them. Non-native learners aren't necessarily going
to be helped by having to refer to an intermediate dictionary of
universal meanings, most of which won't even be applicable to either
their native language or their target language.

> In the case of '5 + 7', you can exchange the places of 5 and 7.
> In the case of 57 however, 5 and 7 cannot be freely ordered,

Those expressions are not comparable (or at least, not relevant to the
discussion of "ip" within sentences versus "ip" within words). Your
operators are completely different semantically: "x plus y" versus "x
times 10 plus y". Of course the syntax might be different.

In the original examples you gave, the semantics of the operator
remained constant (you even used the same name for it: "ip" whether it
was a sentence-level operator or a word-level operator). If you're
now proposing that the operator is different in sentences than in
words, why give it the same name? Again, this just creates more
confusion for a language learner. If the linearly first argument of
"ip" has a required semantics within words, then why not make the same
rule apply within sentences?

> At least from the semantic point of view, there is no clear-cut
> distinction between "word parts" and "sentence parts".

Yes, precisely (indeed, some languages barely make any distinction at
all). I don't see how drawing an arbitrary line for that distinction,
and then giving each side a different set of rules, is supposed to be
"better" than, say, not having the distinction at all.

(Not that lack of a distinction is "better" either; that's the whole
point: nothing is "better" for ordinary human communication than
existing natural human languages.)

> In planlinguistics we should postulate as few
> rules as possible, but also: as many rules as needed!

Why have separate/different rules for both sentence formation and
word formation? They aren't both needed (as in the aforementioned
polysynthetic languages, as well as analytic/isolating languages, for
example). A single, consistent set of rules would appear to be
"better", at least from the standpoint of the number of rules to
memorize and the consistency of the semantics-syntax mapping.

> > What improvement do you
> > suppose you can make to a language (without sacrificing expressive
> > power of course) that would *not* lead to complication in another part
> > of the language?
>
> Do you actually believe that the abolishment of superfluous
> exceptions would diminish expressive power or complicate another
> part of a language?

Name a superfluous exception that exists naturally in anyone's native
idiolect, so I can see what you're talking about.

> > Redundancy is good for spoken language, because humans make speech
> > errors, have different voices and accents, slur things together, pause
> > and continue somewhere else in the sentence, get drowned out by
> > background noise, etc.
>
> The shorter the words, the more time you have to say the message
> in an understandable way. The longer the words, the higher the
> tendency to articulate improperly.

Of course: short words are generally easier on the speaker than long
words are.

But for the listener, the exact opposite is true! (See below.)

> If room number 325 means the 25th room of the third floor, then
> the error of having written 326 will easily be recognized, in the
> case of 34096243598 instead of 34095243598 however, far less
> easily.

This is only true because all of the numbers you have given are actual
numbers that could all be equally expected to be encountered.

In natural human languages, long lexical neighbors are less likely to
exist than short lexical neighbors (see Frauenfelder, Baayen, and
Hellwig 1993 for crosslinguistic evidence that shorter morphemes are
found in denser lexical neighborhoods than longer morphemes are).

> And are you sure that in the case of background noise it is easier
> to understand long words such as e.g. 'phonetically' than simple
> words such as eg. 'sea'/'see'?

Yes. I don't need to hear every single piece of "phonetically"
perfectly to know what word is being said, since there aren't very
many other English words that sound like it enough to be confused with
it. But I've pretty much got to hear 100% of "is" to be certain it
isn't one of dozens of possibilities.

> > It depends on how phonetically similar your proposed words are. If
> > you plan on using, say, ip, it, and ik as words, your lexical
> > neighborhood density will be very high (which has negative effects
> > like increasing processing time and confusability).
>
> I don't understand. My experience is: the shorter the words, the
> easier to recognize them

This is, generally, exactly the opposite. Shorter words are more
likely to be very similar to other existing shorter words, while
longer words are less likely to be similar to existing longer words.

If you change just a single phoneme in "is", you get numerous lexical
neighbors: ease, as, awes, Oz, ahs, owes, ooze, oohs, eyes, eye's,
ayes, it, ick, Id, if, itch, in, inn, ill, and plenty of others.
That's an enormously large lexical neighborhood density.

But for "phonetically", there's kinetically, genetically, fanatically,
and that's about all I can think of... certainly not nearly as many
neighbors as for "is".

Because "is" has more lexical neighbors, it takes longer for a
listener to process it, to ensure that it isn't one of the
phonetically similar alternatives. (This is psycholinguistic reality;
see research on lexical neighborhood density, such as Goldinger, Luce,
and Pisoni 1989, and Cluff and Luce 1990.)
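
For concreteness, here is a rough Python sketch of the neighbourhood
idea. It is only my own toy version over spellings and a made-up word
list; the cited studies work over phonemic transcriptions of full
lexicons, so the exact counts differ, but the asymmetry between short
and long words comes out the same way:

def edit1_neighbours(word, lexicon):
    """Words in lexicon that differ from `word` by one substitution,
    insertion, or deletion."""
    def within_one(a, b):
        if abs(len(a) - len(b)) > 1:
            return False
        if len(a) == len(b):                      # one substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        short, long_ = sorted((a, b), key=len)    # one insertion/deletion
        return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))
    return [w for w in lexicon if w != word and within_one(w, word)]

lexicon = ["is", "it", "in", "if", "as", "us", "his", "ice", "ooze",
           "phonetically", "genetically", "kinetically", "frenetically"]
print(edit1_neighbours("is", lexicon))
# ['it', 'in', 'if', 'as', 'us', 'his'] -- many short neighbours
print(edit1_neighbours("phonetically", lexicon))
# [] -- no spelling neighbours here, though phonemically there are some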

> and to find them in a dictionary. And if
> a person confuses 'ip' with 'it' and 'ik', then he also confuses
> 'iteration' with 'iperation', 'iterakion', 'itaretion' and so on.

Certainly, but only if those words exist. Such substitutions will
result in actual pre-existing words for small words much more often
than for large words. How many single-sound substitutions for "it"
will result in an existing English word? For "iteration"?

Brian M. Scott

Aug 8, 2005, 1:53:33 PM
On Mon, 08 Aug 2005 05:02:38 GMT, Nathan Sanders
<nsanders...@williams.edu> wrote in
alt.language.artificial,sci.lang:

[...]

> In natural human languages, long lexical neighbors are less likely to
> exist than short lexical neighbors (see Frauenfelder, Baayen, and
> Hellwig 1993 for crosslinguistic evidence that shorter morphemes are
> found in denser lexical neighborhoods than longer morphemes are).

In the same vein, and available on the web is Michael S.
Vitevitch and Eva Rodriguez, 'Neighborhood density effects in
spoken word recognition in Spanish', at
<29.237.66.221/ViteRod2005.pdf>. Andrew Wedel, 'Phonological
Alternation, Lexical Neighborhood Density and Markedness in
Processing', at
<linguistics.arizona.edu/~wedel/wedelLabPhon8.pdf>, also looks
interesting.

[...]

Brian

Wolfgang G. Gasser

Aug 14, 2005, 9:04:41 AM
> = Nathan Sanders in news:nsanders.DIE.SPAM-B...@news.verizon.net
>> = Wolfgang G. in http://groups.google.com/group/alt.language.artificial/msg/d3c9bfe7b37edcd3

"Mapping the semantic space" is far from "cataloguing meanings"
as you suggest. Its rather the CREATION of useful and obvious ¨
concepts and of concept-formation principles.

> Native speakers of a language know what words mean when they use them.

Not necessarily. We can learn to correctly use words by pure
associative linking, i.e. by practice without understanding. That
even modern science is still based far more on associative thinking
than on logical reasoning (Kant's "a priori judgements") is also
a consequence of the extremely chaotic state of natural languages.

Awareness of one's own thinking seems quite important to me.
And because our thinking depends (to different degrees) on the
languages we are using, awareness of the semantics of languages
is a prerequisite for a better understanding of ourselves. It is,
however, completely impossible to get an overview of the semantic
space of natural languages by simply dealing with dictionaries,
especially in the case of English.

> Non-native learners aren't necessarily going
> to be helped by having to refer to an intermediate dictionary of
> universal meanings, most of which won't even be applicable to either
> their native language or their target language.

I often have a concept or only a vague idea of a concept in mind and
don't know a corresponding word. So at least I would be helped by a
well-structured semantic dictionary which obviously must be linked
with ordinary dictionaries of as many languages as possible. The
concepts in the semantic dictionary may have neighbouring concepts,
but all of them are unambiguous.

> If the linearly first argument of "ip" has a required semantics within
> words, then why not make the same rule apply within sentences?

It makes sense to introduce, for word formation, special rules that
are more straightforward than the ones for sentence formation.

a John o Jane IP (John and Jane: parent-child-relation)
a Jane o Jack IP (Jane and Jack: parent-child-relation)

Jane = o Jack IPa (parent of Jack)
John = o Jane IPa (parent of Jane)
= o o Jack IPa IPa (parent of parent of Jack)
= o Jack IPaIPa (grandparent of Jack)

It's interesting that prepositions (such as the English 'of') seem
to favour this order

room 'of' [floor 'of' [house 'of' street]]

whereas postpositions (such as the Japanese 'no') favour this one:

[[street 'no' house] 'no' floor] 'no' room

If we use 'of' as a postposition and 'no' as preposition, we get

room [floor [house street 'of'] 'of'] 'of'

'no' ['no' ['no' street house] floor] room

>> At least from the semantic point of view, there is no clear-cut
>> distinction between "word parts" and "sentence parts".
>
> Yes, precisely (indeed, some languages barely make any distinction at
> all). I don't see how drawing an arbitrary line for that distinction,
> and then giving each side a different set of rules, is suposed to be
> "better" than, say, not having the disction at all.

Why does it make sense to abbreviate

(3 * X * X * X + 2 * X + 5) * ((5 * X + 7) * a + 5)

(where the Roman numeral 'X' means 'ten') to

3025 * (57a + 5) ?

Why "drawing an arbitrary line" between "word parts" such as '3025'
and "sentence parts" such as '3 * X * X * X + 2 * X + 5' and "giving
each side a different set of rules"? I think the answer is obvious.

> nothing is "better" for ordinary human communication than
> existing natural human languages.)

I know from personal experience that Esperanto is much "better"
for human communication than English. From the semantic point
of view, English could be the worst of all languages. Languages
are not equally good for human communication. E.g. Turkish is
far easier to master than Russian, and Chinese (apart from
pronunciation) far easier than Japanese. (That for a
Ukrainian, Russian may be easier than Turkish is a different
problem.)

> Name a superfluous exception that exists naturally in anyone's native
> idiolect, so I can see what you're talking about.

It seems to me that you haven't learned foreign languages for
a long time. Otherwise you should easily understand what I'm
talking about. In English you use lots of rules and exceptions
you are not aware of. What kind of "complication in another part
of the language" or of "sacrificing expressive power" would
result if <read, readed, readed> instead of <read, read, read>
were correct, or if English orthography were less irrational?

> Of course: short words are generally easier on the speaker than long
> words are.

Short words are also easier to learn. And at least when reading
in a foreign script such as Cyrillic or mirror writing, short
words are easier to recognize than long words (provided that each
character is recognizable).

>> If room number 325 means the 25th room of the third floor, then
>> the error of having written 326 will easily be recognized, in the
>> case of 34096243598 instead of 34095243598 however, far less
>> easily.
>
> This is only true because all of the numbers you have given are actual
> numbers that could all be equally expected to be encountered.

Your objection does not correspond to what I've written. I claim
that in meaningful numbers, the probability per digit of overlooking
an error is higher the longer the numbers are. You are dealing
with redundancy and presuppose that a reader already knows that
34096243598 does not exist and that the nearest existing neighbour
is 34095243598.

> In natural human languages, long lexical neighbors are less likely to
> exist than short lexical neighbors (see Frauenfelder, Baayen, and
> Hellwig 1993 for crosslinguistic evidence that shorter morphemes are
> found in denser lexical neighborhoods than longer morphemes are).

The concept "lexical neighbour" seems problematic to me. I assume
that with "long lexical neighbours" you mean 'lexical neighbours of
long words'.

Even if we assume that the error probability per character does not
increase with the length of the words, we must not ignore that the
longer a word, the higher the probability of one or more errors in
the word. If we assume for instance an error probability of 10% per
character, then we get around 10 errors both in 50 two-character words
and in 10 ten-character words.
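
A quick computation under that assumed 10% per-character rate
(illustration only, with the numbers of this toy example) makes the
comparison explicit:

p = 0.10                                    # assumed error probability per character

for n_words, length in ((50, 2), (10, 10)):
    expected_errors = n_words * length * p  # expected number of wrong characters
    word_hit = 1 - (1 - p) ** length        # probability a given word has >= 1 error
    print(n_words, "words of length", length, ":",
          round(expected_errors), "expected character errors,",
          "{:.0%}".format(word_hit), "of words contain at least one error")

# 50 words of length 2 : 10 expected character errors, 19% of words contain at least one error
# 10 words of length 10 : 10 expected character errors, 65% of words contain at least one error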

> But I've pretty much got to hear 100% of "is" to be certain it isn't
> one of dozens of possibilities.

In the case of 'order', 'apply', 'set', 'get' and similar you defend
the confusion of dozens of possibilities. So why do you consider
ambiguity a weakness if it results from unclear perception, but
not if it is already inherent in the language?

The word 'is' is normally not very important for understanding. In
the case of 'it is ice', 'it is' is rather perceived as a whole.
So there seems to be no greater problem in understanding 'ik is ice',
'it es ice' or 'it it ice' than to understand a normal seven-
character word with one error.

And if we have to create additional redundancy, e.g. for 'ice' or
'eyes', then we can also use a method which is quite frequent in
a language having lots of homonyms such as Japanese. Instead of
'it is ice' or 'it is eyes' we say 'it is foodstuff ice' or 'it is
body-part eyes'.

If irreducible redundancy is as important as you claim, then why
doesn't our number system (in written form) have such redundancy? The
answer is simply that it is easy to add redundancy if necessary,
e.g. by repeating a number, or by adding the number in word-form:
'4483 - four four eight three'.

> Shorter words are more
> likely to be very similar to other existing shorter words, while
> longer words are less likely to be similar to existing longer words.

d id kid kids
t it kit kits

One-character words differ by 100% from their one-character
neighbours. 'He' and 'we' differ by 50%, 'chance' and 'change'
by around 17%.

> If you change just a single phoneme in "is", you get numerous lexical
> neighbors: ease, as, awes, Oz, ahs, owes, ooze, oohs, eyes, eye's,
> ayes, it, ick, Id, if, itch, in, inn, ill, and plenty of others.
> That's an enormously large lexical neighborhood density.

If we replace 50% of the characters or phonemes in longer words,
then the outcome is quite similar.

> Because "is" has more lexical neighbors, it takes longer for a
> listener to process it, to ensure that it isn't one of the
> phonetically similar alternatives. (This is psycholinguistic reality;
> see research on lexical neighborhood density, such as Goldinger, Luce,
> and Pisoni 1989, and Cluff and Luce 1990.)

I'm rather sceptical. Maybe the research did not sufficiently take into
account that the processing of words strongly depends on context
and that we do not expect to hear 'is' or 'as' as isolated words.

Let us take this example: 'he is as at ease as she is'. I agree that
when hearing the words of this sentence separately, they are more
difficult to recognize than long words such as e.g. 'university'.
Maybe it would even be difficult to notice that they are English
words. Nevertheless, when listening to the sentence as a whole,
the words become easily understandable and the number of lexical
neighbours of the single words becomes rather irrelevant.


Cheers, Wolfgang


Peter T. Daniels

Aug 14, 2005, 9:55:25 AM
Wolfgang G. Gasser wrote:

> I know from personal experience that Esperanto is much "better"
> for human communication than English. From the semantic point
> of view, English could be the worst of all languages. Languages
> are not equally good for human communication. E.g. Turkish is
> far easier to master than Russian, and Chinese (apart from
> pronunciation) far easier than Japanese. (That for a
> Ukrainian, Russian may be easier than Turkish is a different
> problem.)
>
> > Name a superfluous exception that exists naturally in anyone's native
> > idiolect, so I can see what you're talking about.
>
> It seems to me that you haven't learned foreign languages for
> a long time. Otherwise you should easily understand what I'm
> talking about. In English you use lots of rules and exceptions
> you are not aware of. What kind of "complication in another part
> of the language" or of "sacrificing expressive power" would
> result if <read, readed, readed> instead of <read, read, read>
> were correct, or if English orthography were less irrational?
>
> > Of course: short words are generally easier on the speaker than long
> > words are.
>
> Short words are also easier to learn. And at least when reading
> in a foreign script such as Cyrillic or mirror writing, short
> words are easier to recognize than long words (provided that each
> character is recognizable).

Garbage like these three paragraphs is why conlang hobbyists are not
welcome at sci.lang. Extrapolating from personal experience is the very
antithesis of a scientific approach.

(And the person he accuses of not having "learned foreign languages for
a long time" -- I suppose he means "studied"? -- happens to be an
accomplished linguist.)

Paul J Kriha

Aug 14, 2005, 10:39:28 AM

Wolfgang G. Gasser <si...@homepage.li> wrote in message news:ddnfdv$7hk$1...@atlas.ip-plus.net...

> > = Nathan Sanders in news:nsanders.DIE.SPAM-B...@news.verizon.net
> >> = Wolfgang G. in http://groups.google.com/group/alt.language.artificial/msg/d3c9bfe7b37edcd3
>
> "Mapping the semantic space" is far from "cataloguing meanings"
> as you suggest. Its rather the CREATION of useful and obvious ¨
> concepts and of concept-formation principles.
>
> > Native speakers of a language know what words mean when they use them.
>
> Not necessarily. We can learn to correctly use words by pure
> associative linking, i.e. by practice without understanding. That
> even modern science is still based far more on associative thinking
> than on logical reasoning (Kant's "apriori judgements"), is also
> a consequence of the extremely chaotic state of natural languages.

A good demonstration of people employing associative memory without
logical reasoning was, for example, the time when somebody on this
group expressed agreement by exclaiming "Here, here!".
When he was asked if he meant "Hear, hear!" he said he certainly did not.
Suddenly, there were more of them coming out of their closets in
support of "here, here!" because that's the way they had understood it
since childhood and it was making "more" sense to them anyway.

They were _native speakers who knew what words meant when
they used them_. Nevertheless, they were still the wrong words. :-)

pjk

>
> Awareness of one's own thinking seems quite important to me.
> And because our thinking depends (at different degrees) on the
> languages we are using, awareness of the semantics of languages
> is a prerequisite for a better understanding of ourselves. It is
> however completely impossible to get an overview of the semantic
> space of natural languages by simply dealing with dictionaries,
> especially in the case of English.

[...]

> Cheers, Wolfgang


ranjit_...@yahoo.com

Aug 16, 2005, 12:07:10 AM

Paul J Kriha wrote:

> > Not necessarily. We can learn to correctly use words by pure
> > associative linking, i.e. by practice without understanding. That
> > even modern science is still based far more on associative thinking
> > than on logical reasoning (Kant's "apriori judgements"), is also
> > a consequence of the extremely chaotic state of natural languages.
>
> A good demonstration of people employing associative memory without
> logical reasoning was, for example, the time when somebody on this
> group expressed agreement by exclaiming "Here, here!".
> When he was asked if he meant "Hear, hear!" he said he certainly did not.
> Suddenly, there were more of them coming out of their closets with
> support of "here, here!" because that's the way they understood it since
> their childhood and it was making "more" sense to them anyway.

Chortle:-) "There there" doesn't have as much room for confusion.

> They were _Native speakers who knew what words meant when
> they used them_. Never-the-less, they were still wrong words. :-)

> > Cheers, Wolfgang

Paul J Kriha

Aug 16, 2005, 3:53:38 AM

<ranjit_...@yahoo.com> wrote in message
news:1124165230....@g43g2000cwa.googlegroups.com...

>
> Paul J Kriha wrote:
> > > Not necessarily. We can learn to correctly use words by pure
> > > associative linking, i.e. by practice without understanding. That
> > > even modern science is still based far more on associative thinking
> > > than on logical reasoning (Kant's "apriori judgements"), is also
> > > a consequence of the extremely chaotic state of natural languages.
> >
> > A good demonstration of people employing associative memory without
> > logical reasoning was, for example, the time when somebody on this
> > group expressed agreement by exclaiming "Here, here!".
> > When he was asked if he meant "Hear, hear!" he said he certainly did not.
> > Suddenly, there were more of them coming out of their closets with
> > support of "here, here!" because that's the way they understood it since
> > their childhood and it was making "more" sense to them anyway.
>
> Chortle:-) "There there" doesn't have as much room for confusion.

Here, here! :-)

Wolfgang G. Gasser

Aug 22, 2005, 8:59:07 AM
I wrote in http://groups.google.com/group/alt.language.artificial/msg/d3c9bfe7b37edcd3:

> Let us deal now with the concept 'give-receive' alias 'GR'. It
> implies (otherwise than the static 'parent-child' relation) a
> change-of-state. The relation obviously has three arguments:
> giver, receiver, and exchanged object.

> a-John GR o-Jane i-ice
[Ice is exchanged from John to Jane]

> giving person = John = GRa (o-Jane i-ice)
> receiving person = Jane = GRo (a-John i-ice)
> exchanged object = ice = GRi (a-John o-Jane)

For the exchange of objects between persons there exists in natural
languages a large number of words and constructions:

to give sb sth
to be given
giver, donor
to present, to donate
to get, to receive
to take sth from sb
to steal
thief
stolen good
to saddle sb with sth
to be saddled with sth
to take sth off sb's shoulders

Whereas 'Jane receives the dog from John' is semantically almost the
same as 'John gives the dog to Jane', it is not the same as 'Jane
takes the dog from John'. In the latter, Jane and not John is the
active person. So we must introduce the possibility of indicating
whether an argument is especially active or passive. One possibility is
to link the argument markers 'a', 'o' and 'i' with two additional
markers of activity and passivity.

Let us choose 'c' for 'active' and 'p' for 'passive'. With the
three-argument relation 'give-receive' alias GR we get
4 * 4 * 4 = 64 more or less useful combinations:

GR
GR a John
GR ac John (John gives)
GR ap John (sth is taken from John)

GR o Jane
GR a John o Jane
GR ac John o Jane (John gives to Jane)
GR ap John o Jane

GR oc Jane
GR a John oc Jane (Jane takes from John)
GR ac John oc Jane
GR ap John oc Jane

GR op Jane
GR a John op Jane (Jane receives from John)
GR ac John op Jane
GR ap John op Jane
...

GR op Jane ip dog
GR a John op Jane ip dog
GR ac John op Jane ip dog
GR ap John op Jane ip dog

'To present (to donate)' and similar are essentially only 'to give'
with positive connotation, whereas 'to steal' and similar are 'to
take from' with negative connotation. So it makes sense to introduce
two semantic units for positive and negative connotation ('POS' and
'NEG') in order to get 64 combinations each for 'POS-GR' and 'NEG-GR',
in the same way as above with 'GR'. So we reach 3 * 64 = 192
combinations.
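
These combinations can be enumerated mechanically. A short Python
sketch (the marker spellings and the omission of the concrete names
filling the argument slots are my own simplifications) confirms the
count:

from itertools import product

ARG_STATES   = [None, "", "c", "p"]          # absent, neutral, active, passive
CONNOTATIONS = ["", "POS-", "NEG-"]          # plain, positive, negative

def spell(conn, a, o, i):
    """Spell one combination, e.g. 'POS-GR ac op i' (names omitted)."""
    parts = [conn + "GR"]
    for marker, state in zip("aoi", (a, o, i)):
        if state is not None:
            parts.append(marker + state)     # e.g. 'ac', 'op', 'i'
    return " ".join(parts)

forms = [spell(c, a, o, i)
         for c, a, o, i in product(CONNOTATIONS, ARG_STATES, ARG_STATES, ARG_STATES)]

print(len(forms))          # 192 = 3 * 4 * 4 * 4
print(forms[:4])           # ['GR', 'GR i', 'GR ic', 'GR ip']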

As shown earlier, it is possible to derive concepts corresponding
to the arguments of the 'GR' concept by simply using the argument
markers as affixes to 'GR':

GRa: person who gives or from whom sth is taken
GRac: giver
GRap: person from whom sth is taken

GRo: person who receives or takes sth
GRoc: person who takes sth
GRop: receiver

GRi: exchanged object
GRic: exchanged object actively participating in exchange
GRip: exchanged object passively participating in exchange

The argument concepts (GRa, GRac, ..., GRic) of the three-argument
concept 'GR' are two-argument concepts. Because each of the two
remaining arguments can be either absent, or neutrally, actively
or passively present, we get 4 * 4 = 16 combinations for each
of the above nine two-argument argument-concepts. This results
in 9 * 4 * 4 = 144 combinations. Because each of the above nine
concepts can also be combined with 'POS' and 'NEG', we have 27
argument concepts (from 'GRa' to 'NEG-GRip') and consequently
3 * 9 * 4 * 4 = 432 combinations.

Despite a total of 192 + 432 = 624 combinations, the concept
'stolen goods' is not yet among them, because

NEG-GRi: negative-connotation -> exchanged object

does not discriminate between the loss of useful/positive goods
(e.g. an article of value or a cheque) and receiving something
detrimental/negative (e.g. hazardous waste or a fabricated
certificate of debt). Nevertheless there are two obvious ways
to create the concept of 'stolen goods':

NEG-GR-ap-i: negative connotation -> give-receive ->
a-argument-passive -> i-argument
= negative connotation -> person(s) from whom
object(s) is/are taken -> exchanged object(s)
= stolen from -> stolen object(s)
or:

NEG-GR-oc-i: negative connotation -> give-receive ->
o-argument-active -> i-argument
= negative connotation -> person(s) taking
object(s) -> exchanged object(s)
= stealing person(s) -> stolen object(s)

Analogously, 'present (gift)' must be constructed as 'POS-GR-ac-i'
or 'POS-GR-op-i'. Also the compound concepts 'NEG-GR-ac-i' and
'NEG-GR-op-i', and 'POS-GR-ap-i' and 'POS-GR-oc-i' should be clear.
Maybe someone can provide sound English translations.


Cheers, Wolfgang

