
Fundamental language structure.


blazingmuse

Feb 11, 2002, 6:23:01 AM
I've been advised to try my luck at this Newsgroup given the
relatively low volume of traffic at alt.language per se. The
conclusion is 'no, well maybe,' so far as I can tell. ):>| Anything
that sounds coherent I will regurgitate as faithfully as I may.

Sorry to occupy your valuable time, but I was wondering if I might
enlist some expertise in an ongoing debate at
humanities.philosophy.objectivism about the possible existence of a
fundamental human grammatical structure -- all about whether or not
human language significantly affects, modifies, or sculpts human
intelligence.
Can anyone tell me of a single grammatical structure that is common to
virtually all languages upon this Earth, with the exception of
trivialities such as nouns, verbs, and perhaps tenses? I'd be much
obliged. Thank you.

Peter T. Daniels

Feb 11, 2002, 8:11:14 AM

Can you list the primary colors, other than red, yellow, and blue?

You're asking two different questions: Does language affect thought? and
Are there universal structures in language?

For the former, seek out the debate surrounding the "Whorfian
Hypothesis" or the "Sapir-Whorf Hypothesis"; for the latter, seek out
the work of Joseph Greenberg and Noam Chomsky for two very different
approaches. (The obituary of Greenberg by William Croft in the latest --
December 2001 -- issue of *Language* is quite clear about what their
positions are.)
--
Peter T. Daniels gram...@att.net

michael farris

Feb 11, 2002, 7:28:10 AM

JGuy wrote:

> 1. No language forms antonyms by reversing
> the word, syllable by syllable, or
> phoneme by phoneme:
>
> bona "good" but not *nabo or *anob "bad"

We just won't consider Polish:

do - to
od - from

since that's basically chance. No language does this on a regular
basis, that's true (AFAIK)

> That was a negative property. On the positive side...on the
> positive side... I pass.

Me too

-michael farris


Wolf Kirchmeir

Feb 11, 2002, 12:08:04 PM
On 11 Feb 2002 03:23:01 -0800, blazingmuse wrote:

> all about whether or not
>human language significantly affects, modifies, or sculpts human
>intelligence.

This is IMO a non-question, like asking how long eternity is.


Best Wishes,

Wolf Kirchmeir
Blind River, Ontario

..................................................................
You can observe a lot by watching
(Yogi Berra, Phil. Em.)
..................................................................


Steve Conley

Feb 11, 2002, 8:35:38 PM
blazi...@hotmail.com (blazingmuse) wrote in message news:<c69a5fd0.02021...@posting.google.com>...

> I've been advised to try my luck at this Newsgroup given the
> relatively low volume of traffic at alt.language per se. The
> conclusion is 'no, well maybe,' so far as I can tell. ):>| Anything
> that sounds coherent I will regurgitate as faithfully as I may.

You know, until I saw your message, I didn't even know there was an
alt.language! Having read a little bit of it, I have to say that
group promises to be even funnier than this one. The response from
that one guy saying that _grammar_ was invented after "Nostratic"
split into Indo-European and Semitic (I assume he meant Afrasian, but
who cares?) was a side-splitter! I doubt that even the Nostraticists
would say something that nutty.

> Sorry to occupy your valuable time, but I was wondering if I might
> enlist some expertise in an ongoing debate at
> humanities.philosophy.objectivism about the possible existence of a
> fundamental human grammatical structure -- all about whether or not
> human language significantly affects, modifies, or sculpts human
> intelligence.
> Can anyone tell me of a single grammatical structure that is common to
> virtually all languages upon this Earth, with the exception of
> trivialities such as nouns, verbs, and perhaps tenses? I'd be much
> obliged. Thank you.

Well, I checked out the debate of which you speak, and I suspect that
most (though perhaps not all) serious linguists (and students thereof)
are going to agree with at least most of what Dave Odden is saying.
Given that languages can produce an infinite number of sentences, and
that there are an infinite number of possible grammars from which to
choose, it seems hard to believe that all of the children in a speech
community would infer identical grammars from only five years of
linguistic input without some kind of innate linguistic knowledge or
other constraint(s) on the choice of grammars.

Plus, children generally make the same speech errors (reflecting the
same mistaken generalizations about the grammar) at about the same
developmental stages, indicating that they must be pursuing identical
lines of thought. When do humans ever do that?

Also, I would hardly call nouns, verbs, and tenses "trivialities",
especially when there are such huge variations in these things
cross-linguistically. Anyhow, getting to your request, I think Dave
already mentioned these kinds of things but even if he did, they bear
repeating. All languages have duality of patterning (big things being
made up of a very limited set of little things--in this case, words
are made of a very limited set of sounds), and all languages have some
kind of phrasal structuring (sentences are made of phrases, which in
turn are made of words--you never have grammatical rules that apply
specifically to, say, the fifth word of a sentence).
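
For instance, a toy phrase-structure fragment (schematic, not any
particular theory's analysis):

  S  -> NP VP
  NP -> (Det) N
  VP -> V (NP)

  [S [NP The dog] [VP chased [NP the cat]]]

The rules mention categories like NP and VP, never positions like "the
fifth word".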

I think the duality of patterning is especially important. The human
vocal apparatus is capable of producing a wide range of very distinct
sounds, but only a small subset of them is used in language, and a
much smaller subset in any one particular language. Given all the
other variation seen in human societies and cultures, surely some
language somewhere would have a word that consists of a screech
followed by a nasal implosive fricative (snort) followed by a cough.
Yet none have any of these, whether or not it would be a violation of
table manners in that society.

Steve

dave odden

Feb 11, 2002, 11:21:06 PM
JGuy wrote:

> > (you never have grammatical rules that apply
> > specifically to, say, the fifth word of a sentence).
>

> But you have grammatical rules that apply to the second
> sentence of a discourse. Can you think of one? And in
> some New Guinea languages you have grammatical rules
> which apply to the (n-1)th sentence of a discourse.
> Do you know about them?

I can't imagine what you're referring to, by a rule that applies to the
second S in a discourse, or rules applying to the discourse-penult. Unless
you are speaking of "formulaic language".

> > I think the duality of patterning is especially important. The human
> > vocal apparatus is capable of producing a wide range of very distinct
> > sounds, but only a small subset of them is used in language

> Let's hear about those sounds which are not found in language,
> shan't we. "Tsk, tsk, tsk"? That's a click. Lots of them
> in some languages of Africa. A raspberry? That's an interlabial
> trill, found in many languages of Malakula. Any suggestions?

Everybody knows about the clicks. I would be interested in the "raspberry
phoneme", so if you can provide a reference to document that claim, it would
be very interesting. Note, b.t.w., that he does provide some other
examples, which you quoted in the next excerpt. Ladefoged had a nice talk
where he would give lots of such examples. Here's what they sound like
[....]. Unfortunately, there are no IPA symbols for them.

> > surely some
> > language somewhere would have a word that consists of a screech
> > followed by a nasal implosive fricative (snort) followed by a cough.
> > Yet none have any of these

> You chose a sequence which you think is not attested in any
> of the documented vocabularies of any known language, and
> then you dish it out with a "surely". What's the technical
> term for that sort of fallacious reasoning? Can't think of
> it. Help someone!

Yeah, it's called "statistical reasoning". If you have counterexamples, you
could bring them out. Actually, they would be worthy of publishing.


Mark Rosenfelder

Feb 12, 2002, 1:08:48 AM
In article <7bb3c36f.02021...@posting.google.com>,

Steve Conley <st...@coil.com> wrote:
>Given that languages can produce an infinite number of sentences, and
>that there are an infinite number of possible grammars from which to
>choose, it seems hard to believe that all of the children in a speech
>community would infer identical grammars from only five years of
>linguistic input without some kind of innate linguistic knowledge or
>other constraint(s) on the choice of grammars.

Unfortunately this isn't much of an argument for it. Children do not
hear an infinite number of sentences; what they do hear contains a good
deal of repetition. So you are simply exaggerating the chaos of the input
for rhetorical effect. Having no access to their minds, you have
no way of knowing whether they end up with identical grammars.
Also note that five-year-olds are by no means accomplished in their
languages; there are subtle points that aren't grasped at twice that age.

>Plus, children generally make the same speech errors (reflecting the
>same mistaken generalizations about the grammar) at about the same
>developmental stages, indicating that they must be pursuing identical
>lines of thought. When do humans ever do that?

Talk to a customer service rep for a software company, and you'll find
that users make the same errors at about the same stages. Is there an
innate knowledge of computers?

>Also, I would hardly call nouns, verbs, and tenses "trivialities",
>especially when there are such huge variations in these things
>cross-linguistically. Anyhow, getting to your request, I think Dave
>already mentioned these kinds of things but even if he did, they bear
>repeating. All languages have duality of patterning (big things being
>made up of a very limited set of little things--in this case, words
>are made of a very limited set of sounds),

This isn't so impressive if no alternative is given. What, realistically,
would be another way of organizing a communication system? Little things
being made up of a very large number of big things?

>and all languages have some
>kind of phrasal structuring (sentences are made of phrases, which in
>turn are made of words--you never have grammatical rules that apply
>specifically to, say, the fifth word of a sentence).

Indeed, sentences are built out of smaller parts. So are plants, animals,
cars, corporations, computer programs, cathedrals, and variety shows.
It's a powerful technique that almost any complex system will end up with.

I'll let Jacques respond to the bits about sounds. I'll just note that
the number of segments used in languages varies from 11 to 141. That
seems like a pretty wide variation for an inborn trait.

benlizross

Feb 12, 2002, 1:45:05 AM
JGuy wrote:

>
> dave odden wrote:
>
> > I can't imagine what you're referring to, by a rule that applies to the
> > second S in a discourse
>
> Easy. "Bill rode Mary's bike. He rode it."
>
> The other one refers to those languages of New Guinea, which require
> a forward reference in the verb to the subject of the _next_ sentence.
> I cannot tell you which languages. Never was involved in their
> study, only those of Vanuatu. But I learnt that from Donald Laycock
> (died in 1987) who was the expert in New Guinea languages.

I think what you're talking about is the final/nonfinal verb business,
where you commonly have sequences of clauses where only the last verb
has tense and subject marking; the preceding verbs are marked to
indicate (i) whether the event is prior to, simultaneous with or
subsequent to the following clause; (ii) whether the subject is same or
different. The latter aspect is known in the literature as
"switch-reference". Kalam would be one example of such a language, but
it's a typical areal feature.
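
Schematically (invented glosses, not real Kalam data), a clause chain
might run:

  go-SIM.SS   pig see-PRIOR.DS   3SG shoot-PAST
  "While going along, (he) saw a pig, and then (someone else) shot it."

where SS/DS mark the subject as same as or different from that of the
following clause.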


> > or rules applying to the discourse-penult. Unless
> > you are speaking of "formulaic language".
>

> No. As you have just read, real languages.


>
> > I would be interested in the "raspberry
> > phoneme", so if you can provide a reference to document that claim
>

> It's going to be 17 years since I left ANU and Austronesian
> linguistics. Jean-Michel Charpentier, of the CNRS, was the one
> working on Malakula island then. My closest exposure was via
> Hilaire Chalet, a native of Vao, a small islet off the northern
> coast of Malakula. Vao has a whole set of interlabials--but no
> trill. It's when I mentioned it to Charpentier that I learnt
> about those interlabial trills on mainland Malakula.

Uripiv (NE Malakula, close to Vao) has a bilabial trill. Haven't heard
of an interlabial, though.

Ross Clark

John Atkinson

Feb 12, 2002, 8:22:37 AM

"JGuy" <jg...@dev.null.nu> wrote...

> dave odden wrote:
>
> > I can't imagine what you're referring to, by a rule that applies to the
> > second S in a discourse
>
> Easy. "Bill rode Mary's bike. He rode it."
>
> The other one refers to those languages of New Guinea, which require
> a forward reference in the verb to the subject of the _next_ sentence.
> I cannot tell you which languages. Never was involved in their
> study, only those of Vanuatu. But I learnt that from Donald Laycock
> (died in 1987) who was the expert in New Guinea languages.
>

> > or rules applying to the discourse-penult. Unless
> > you are speaking of "formulaic language".
>

> No. As you have just read, real languages.
>

> > I would be interested in the "raspberry
> > phoneme", so if you can provide a reference to document that claim
>
> It's going to be 17 years since I left ANU and Austronesian
> linguistics. Jean-Michel Charpentier, of the CNRS, was the one
> working on Malakula island then. My closest exposure was via
> Hilaire Chalet, a native of Vao, a small islet off the northern
> coast of Malakula. Vao has a whole set of interlabials--but no
> trill. It's when I mentioned it to Charpentier that I learnt

> about those interlabial trills on mainland Malakula. I
> remember that Piraha, a recently discovered language of
> Amazonia,

Not that recent -- the missionaries Arlo and Di Heinrichs started work on it
in 1960. It's the only surviving dialect of Mura, for which wordlists exist
dating as far back as 1867. The Mura were a notoriously fierce tribe of 30
to 60 thousand (depending on the source), "using guerrilla tactics of ambush
to terrorize other tribes and also Portuguese invaders. They made peace
with the Portuguese in 1784 (partly out of fear of the Munduruku' -- an even
fiercer people) but still engaged in raiding and killing into the nineteenth
century." Today Piraha~ has about 100 speakers. It's an isolate.

> has an interlabial trill co-articulated with
> a dental, occurring in free alternance with... [t] if
> memory serves.

Not quite -- there are *two* funny sounds, allophones of /b/ and /g/
respectively -- see below

> One is tempted to see it as an April's
> fool. But I wonder. Piraha is also famous for having
> one of the smallest phonemic inventories, along with Rotokas:
> 7 consonants and 3 vowels, but tones (2 I think).

Our descriptions of Piraha~ are due to three different SIL missionary
couples who each gave slightly different descriptions of phonology and
considerably different ones of grammar. Aikhenvald and Dixon (in The
Amazonian Languages) don't go so far as to say they're all up themselves,
but I note that they're careful to preface most statements about it by "X
says", "Y gives", "Z states".

Anyway, according to the Heinrichs, it has eight consonants, three vowels
with optional nasalization, and 3 tones. The allophones of /b/ include a
bilabial nasal and also a bilabial trill (raspberry); /g/ has as an
allophone "a kind of double flap in which the tongue tip hits the alveolar
ridge and then (coming out of the mouth) the lower lip." The Sheldons pretty
much agree but add some more complications (tone sandhi, metathesis, and
different phonemes for male and female speakers). The Everetts at first
recognised 4 tones, and then two, and reckon /k/ is underlying /hi/, so
there are actually only 7 consonant phonemes.

I note however that on the next page of A and D there's the sentence
hi ob-a'a?a'i' kahai kai-sai (He really knows how to make arrows)
which is apparently from Everett (since it uses his representation of the
tones), but contains *both* /hi/ and /k/.

Jacques -- you'll notice that English and Piraha~ both have the same word
for the 3rd singular masculine pronoun -- which must mean something, right?

John.

J. W. Love

Feb 12, 2002, 9:15:58 AM
Jacques wrote: "And I never did any [fieldwork] in New Guinea. Did you know
that there is a language there where the first syllable of verbs is, or is not,
reduplicated depending on the number of syllables in the verb, modulo 2? The
language is known as Nasioi."

But, Jacques, the Nasioi homeland isn't in New Guinea: Nasioi is a
non-Austronesian language of southeastern Bougainville. No?

Greg Lee

Feb 12, 2002, 10:00:17 AM
Mark Rosenfelder <mark...@enteract.com> wrote:
> In article <7bb3c36f.02021...@posting.google.com>,
> Steve Conley <st...@coil.com> wrote:
...

> This isn't so impressive if no alternative is given. What, realistically,
> would be another way of organizing a communication system? Little things
> being made up of a very large number of big things?

>>and all languages have some
>>kind of phrasal structuring (sentences are made of phrases, which in
>>turn are made of words--you never have grammatical rules that apply
>>specifically to, say, the fifth word of a sentence).

Actually, phrases are made of words and phrases (including sentences). That's
important, because it is what allows phrases to have unbounded complexity,
as well as unbounded length.
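E.g. an NP can contain a clause that contains another NP: "the rat
[that ate the malt [that lay in the house [that Jack built]]]", and so
on without bound.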

> Indeed, sentences are built out of smaller parts. So are plants, animals,
> cars, corporations, computer programs, cathedrals, and variety shows.
> It's a powerful technique that almost any complex system will end up with.

But cars aren't made of other cars, which are made of other cars, etc.
(Computer procedures are recursive, but that's because computer languages
are patterned after real languages.)

> I'll let Jacques respond to the bits about sounds. I'll just note that
> the number of segments used in languages varies from 11 to 141. That
> seems like a pretty wide variation for an inborn trait.

Not at all. It's the features that are inborn. SPE gives about 17 features,
and if features are binary in the lexicon (as SPE proposes), that gives
around 2^17 possible phonemes for the various languages of the world to
choose among. The possible features are presumably a function of the
shape and controllability of the human articulatory tract and human
perceptual limitations.

--
Greg Lee <gr...@ling.lll.hawaii.edu>

dave odden

Feb 12, 2002, 10:06:46 AM
JGuy wrote:

> > I can't imagine what you're referring to, by a rule that applies to the
> > second S in a discourse

> Easy. "Bill rode Mary's bike. He rode it."

You should reconsider the rule, because it doesn't apply to the second S in
a discourse. I presume you're referring to the facts of pronominal reference
and the fact that "Bill" and "he" are coreferential. A better statement of
the rule is "when the referent can be supplied, given the speaker's
knowledge". For example, you find possible NP-pronoun coreference in:

"John said I should give him money"
"Because he spoke French, Jack ordered the wine"
"He finally quite"
<you had to be there>

I think Lees made the first attempt to formalise pronominalization;
Langacker wrote a defining article on the topic; about 20% of the total
energy expended in linguistic theory is about sorting out pronoun binding.
It isn't just "second S in a discourse".

Others here have addressed the raspberry question. I've never actually
worked on a language with a bilabial trill, but I know of them. Interesting,
though, that John Atkinson refers to a bilabial trill as "the raspberry". In
my "dialect", those are distinct things in that the raspberry involves
sticking the tongue out. Clearly, the terminology needs to be clarified. We
need an ISO standard for all possible sounds that humans can make (at least
using cranio-pulmonary anatomy).

> Thinking back on my student days, everything, everything
> I was taught about languages, was falsified by the
> experience gathered in fieldwork. And I never did any
> in New Guinea. Did you know that there is a language there
> where the first syllable of verbs is, or is not, reduplicated
> depending on the number of syllables in the verb, modulo 2?
> The language is known as Nasioi. This is the sort of thing
> that one would propose as impossible, a language universal,
> no language could reasonably do that! Well, one does.

I sympathise; there is an unfortunate and robust trend in some areas of
theoretical linguistics where people leap to hasty conclusions. Highly
specific "universals" like "you can't have a language that reduplicates
depending on word parity" are not good universals. The seeds of destruction
of that hypothesis are already there in Yidiny and Saami. As a fieldworker
and theoretician, I see this conflict often, where an uninformed person
gives a purely philosophical reason for a belief -- often "because it's
simpler" -- which happens to be empirically false, but only falsified by an
"obscure" language.

Nihilism and skepticism are the easy positions to take. The fact that some
theoreticians reach empirically invalid conclusions does not warrant
rejecting theory. Nor is the merit of fieldwork invalidated by the fact that
fieldworkers can do bad work. If you know of someone who claimed that no
language can reduplicate on the basis of the syllabic-evenness of words,
then you know someone who is probably wrong. You haven't actually given any
data on Nasioi, so I am not in possession of any specific facts that would
lead me to conclude that the claim is true. It will be interesting to see
whether I am able to locate persuasive evidence regarding this claim.

stephen michael conley

Feb 12, 2002, 10:16:59 AM
In article <3C68FC...@dev.null.nu>, JGuy <jg...@dev.null.nu> wrote:
>Steve Conley wrote:

>> (you never have grammatical rules that apply
>> specifically to, say, the fifth word of a sentence).
>

>But you have grammatical rules that apply to the second
>sentence of a discourse. Can you think of one? And in
>some New Guinea languages you have grammatical rules
>which apply to the (n-1)th sentence of a discourse.
>Do you know about them?

Does it apply to the second sentence because it's the second sentence, or
because the required discourse elements have to be introduced first, which
by nature requires a preceding sentence? There _is_ a difference, you
know. Even so, since you're dealing with sentences, how does that
disprove phrase structure?

How about a rule that applies to the third word of the fifth
sentence, but only if the onset of the second syllable of that word is
higher on the sonority scale than the coda of the first syllable of the
second word of the third sentence? Or anything else that involves
actually counting things.

I don't know about the New Guinea languages. Please provide source
citations for this.

>> I think the duality of patterning is especially important. The human
>> vocal apparatus is capable of producing a wide range of very distinct
>> sounds, but only a small subset of them is used in language

>Let's hear about those sounds which are not found in language,
>shan't we. "Tsk, tsk, tsk"? That's a click. Lots of them
>in some languages of Africa. A raspberry? That's an interlabial
>trill, found in many languages of Malakula. Any suggestions?

Yeah, yeah, clicks, big whoopee. You can't get two weeks into an
undergrad survey of linguistics course without hearing about them. Show
me a language that forms questions by singing the sentence in a minor key
and maybe you'll have an argument.

>> surely some
>> language somewhere would have a word that consists of a screech
>> followed by a nasal implosive fricative (snort) followed by a cough.
>> Yet none have any of these
>
>You chose a sequence which you think is not attested in any
>of the documented vocabularies of any known language, and
>then you dish it out with a "surely". What's the technical
>term for that sort of fallacious reasoning? Can't think of
>it. Help someone!

OK, what is fallacious about saying that, statistically, the number of
distinct sounds in the combined phonetic inventory of ~6700 languages
ought to be of the same order of magnitude as the wide range of sounds
that the human vocal apparatus can produce?

Feel free to post counterexamples. You can be the inventor of the IPA
symbol for the screech!

Steve

Mark Rosenfelder

Feb 12, 2002, 12:19:50 PM
In article <a4bam1$58f$1...@news.hawaii.edu>,
Greg Lee <gr...@ling.lll.hawaii.edu> wrote:
>Mark Rosenfelder <mark...@enteract.com> wrote:
[...]

>Actually, phrases are made of words and phrases (including sentences). That's
>important, because it is what allows phrases to have unbounded complexity,
>as well as unbounded length.

>> Indeed, sentences are built out of smaller parts. So are plants, animals,
>> cars, corporations, computer programs, cathedrals, and variety shows.
>> It's a powerful technique that almost any complex system will end up with.
>
>But cars aren't made of other cars, which are made of other cars, etc.
>(Computer procedures are recursive, but that's because computer languages
>are patterned after real languages.)

Really? How do you know that? Computer languages are like human languages
in some ways, unlike them in others. Some of the similarities may be due
to the conscious or unconscious imitation of human language, but others may
in effect be independent discoveries invented to deal with programming
problems.

>> I'll let Jacques respond to the bits about sounds. I'll just note that
>> the number of segments used in languages varies from 11 to 141. That
>> seems like a pretty wide variation for an inborn trait.
>
>Not at all. It's the features that are inborn. SPE gives about 17 features,
>and if features are binary in the lexicon (as SPE proposes), that gives
>around 2^17 possible phonemes for the various languages of the world to
>choose among. The possible features are presumably a function of the
>shape and controllability of the human articulatory tract and human
>perceptual limitations.

No doubt. Still, I don't see a strong basis for saying that the number
of phonemes in a language is "small", which seems to imply that one would
expect it to be larger. It seems to me that there could be good reasons
not to have 2^17 phonemes-- the possibility of confusion or distortion,
for instance. Once those reasons were investigated, perhaps we'd find
that the number of phonemes is just reasonable.

Greg Lee

Feb 12, 2002, 1:14:06 PM
Mark Rosenfelder <mark...@enteract.com> wrote:
> In article <a4bam1$58f$1...@news.hawaii.edu>,
> Greg Lee <gr...@ling.lll.hawaii.edu> wrote:
>>Mark Rosenfelder <mark...@enteract.com> wrote:
> [...]
>>Actually, phrases are made of words and phrases (including sentences). That's
>>important, because it is what allows phrases to have unbounded complexity,
>>as well as unbounded length.

>>> Indeed, sentences are built out of smaller parts. So are plants, animals,
>>> cars, corporations, computer programs, cathedrals, and variety shows.
>>> It's a powerful technique that almost any complex system will end up with.
>>
>>But cars aren't made of other cars, which are made of other cars, etc.
>>(Computer procedures are recursive, but that's because computer languages
>>are patterned after real languages.)

> Really? How do you know that? Computer languages are like human languages
> in some ways, unlike them in others. Some of the similarities may be due
> to the conscious or unconscious imitation of human language, but others may
> in effect be independent discoveries invented to deal with programming
> problems.

It is said that Algol-68 was not designed to be a context-free language,
but it was discovered, after it was designed, to be context-free (that is
to say, essentially, yielding only hierarchical structures). So I think the
original designers were just following their human noses.

Also, why is it that although modern computers are limited-memory Turing
machines, with ram, high-level computer languages treat them as more
limited stack-oriented machines, and that human programmers who don't
use structured (hierarchical) programming techniques tend to get lost
in bugs and complexity? Because they're human?

Other than that, I suppose I needn't point out that the non-humans here
on earth are not known to have designed any recursive computer languages.

>>> I'll let Jacques respond to the bits about sounds. I'll just note that
>>> the number of segments used in languages varies from 11 to 141. That
>>> seems like a pretty wide variation for an inborn trait.
>>
>>Not at all. It's the features that are inborn. SPE gives about 17 features,
>>and if features are binary in the lexicon (as SPE proposes), that gives
>>around 2^17 possible phonemes for the various languages of the world to
>>choose among. The possible features are presumably a function of the
>>shape and controllability of the human articulatory tract and human
>>perceptual limitations.

> No doubt. Still, I don't see a strong basis for saying that the number
> of phonemes in a language is "small", which seems to imply that one would
> expect it to be larger.

Nor I. But the number of features is small(*).

> It seems to me that there could be good reasons
> not to have 2^17 phonemes-- the possibility of confusion or distortion,
> for instance. Once those reasons were investigated, perhaps we'd find
> that the number of phonemes is just reasonable.

Yes. Things must all make sense, in the end.

(*) Comparable with the number of elemental smells, as I read someplace,
which is also 17, and with the largest-sided regular polygon with a prime
number of sides that is constructible with straight-edge and compass (17?).

--
Greg Lee <gr...@ling.lll.hawaii.edu>

Mark Rosenfelder

Feb 12, 2002, 2:03:24 PM
In article <a4bm1e$d21$1...@news.hawaii.edu>,

Greg Lee <gr...@ling.lll.hawaii.edu> wrote:
>Mark Rosenfelder <mark...@enteract.com> wrote:
>> Greg Lee <gr...@ling.lll.hawaii.edu> wrote:
>>>But cars aren't made of other cars, which are made of other cars, etc.
>>>(Computer procedures are recursive, but that's because computer languages
>>>are patterned after real languages.)
>
>> Really? How do you know that? Computer languages are like human languages
>> in some ways, unlike them in others. Some of the similarities may be due
>> to the conscious or unconscious imitation of human language, but others may
>> in effect be independent discoveries invented to deal with programming
>> problems.
>
>It is said that Algol-68 was not designed to be a context-free language,
>but it was discovered, after it was designed, to be context-free (that is
>to say, essentially, yielding only hierarchical structures). So I think the
>original designers were just following their human noses.

And what are designers doing when they create non-context-free languages?
Following their non-human noses?

(I found a proof on the Web that C is not context-free, because it requires
expressions to be typed, and variables to be defined before use. Compilers
treat C as context-free but handle the typing using a symbol table.)
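
(A standard illustration, using hypothetical identifiers: after

    typedef int foo;

the statement "foo * bar;" declares bar as a pointer to foo, but after

    int foo, bar;

the very same tokens "foo * bar;" parse as a multiplication expression.
Only a symbol table can tell the parser which reading to take.)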

>Also, why is it that although modern computers are limited-memory Turing
>machines, with ram, high-level computer languages treat them as more
>limited stack-oriented machines, and that human programmers who don't
>use structured (hierarchical) programming techniques tend to get lost
>in bugs and complexity? Because they're human?

Why are molecules made up of other molecules? Because they're human?

>Other than that, I suppose I needn't point out that the non-humans here
>on earth are not known to have designed any recursive computer languages.

Only humans have created automobiles, too. So our brains must have wheels.

Helmut Richter

Feb 12, 2002, 2:26:11 PM
On the question of which formal grammars are useful, computer
languages and natural languages show some similarities, as I describe
below. I do not speculate about why this is so: perhaps for no real reason.

In article <a4bm1e$d21$1...@news.hawaii.edu>, Greg Lee wrote:

>> Really? How do you know that? Computer languages are like human languages
>> in some ways, unlike them in others. Some of the similarities may be due
>> to the conscious or unconscious imitation of human language, but others may
>> in effect be independent discoveries invented to deal with programming
>> problems.
>
> It is said that Algol-68 was not designed to be a context-free language,
> but it was discovered, after it was designed, to be context-free (that is
> to say, essentially, yielding only hierarchical structures). So I think the
> original designers were just following their human noses.

Algol-60, not Algol-68, was the first programming language that was
defined using a context-free grammar. It was soon discovered that this
grammar did not describe *all* syntactic features of the language.
Therefore, a different type of grammar, to wit van Wijngaarden's two-level
grammar, was invented for the description of Algol-68.

A two-level grammar is in principle a context-free grammar that
produces the infinitely many rules of the context-free grammar that
describes the language.
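
(For example, the requirement that every variable be declared before
use can be stated as a rule schema whose metanotion ranges over all
possible identifiers, generating one concrete context-free rule per
identifier -- infinitely many rules, finitely described.)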

Two-level grammars have two disadvantages:

- The languages they describe are, in general, not decidable.

- Nobody understands them. The defining report of Algol-68 was not
understood by anybody except a small group of enthusiasts. The only
usable information on the language was a book with the title
"Algol-68 with fewer tears". For other computer languages, the
defining report or the ANSI standard was used as a reference for
users - inconceivable for the esoteric Algol-68 report.

Two-level grammars were soon abandoned. In fact, until today, little
is known about *useful* and *intuitive* ways to describe languages
that are not context-free. Neither two-level grammars nor
context-sensitive grammars do a good job.

Today, context-free grammars are but a tool to describe some
syntactic features of programming languages, to wit those that can
adequately be described in this way. Hence, their usability implies
nothing about the language but is tautological: they are useful for
those purposes for which they are useful, e.g. as a reference for the
users or for automatically generating parsers out of the
grammar. That's a really great deal, but it is engineering, not
philosophy.

Not being a linguist, I cannot say whether context-free grammars could
be a viable "engineering" tool in linguistics as well. I suspect that
yes, but the introduction of formal grammars into linguistics by a
"philosopher", not an "engineer", seems to have spoilt the innocence
of the tool.

> (*) Comparable with the number of elemental smells, as I read someplace,
> which is also 17, and with the largest-sided regular polygon with a prime
> number of sides that is constructible with straight-edge and compass (17?).

No. 257 and 65537 are other prime numbers with this property. It is
not known whether more of them exist. If so, they must be really big,
at least 646456994 digits long.
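
(These are the Fermat primes, primes of the form 2^(2^k)+1: 3, 5, 17,
257 and 65537 are the only ones known. By Gauss and Wantzel, a regular
polygon with a prime number p of sides is constructible with
straight-edge and compass exactly when p is a Fermat prime.)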

Helmut Richter

benlizross

Feb 12, 2002, 2:33:30 PM
JGuy wrote:

>
> J. W. Love wrote:
>
> > But, Jacques, the Nasioi homeland isn't in New Guinea: Nasioi is a
> > non-Austronesian language of southeastern Bougainville. No?
>
> You got me there. Wasn't Bougainville where they had those
> riots a very few years ago, some people wanting it to secede
> from PNG and become part of the Solomons? Or was it the other
> way around? All I remember is that, during that crash
> course I was made to attend in Brisbane, run by SIL, there
> was an exercise with Nasioi data, with those verbs that went
> (my memory is failing me a bit here) sometimes bededi... sometimes
> bedededi... and we had to work out the rules. We all failed.
> That was 31 years ago. I sort of seem to remember that Nasioi
> was a PNG language. I don't mean _mainland_ PNG necessarily,
> just PNG. There are very few non-Austronesian languages in the
> Solomons. (Scratch scratch scratch...) Bambatana if I
> remember correctly (only 24 years ago, that), Savosavo
> (have I got it right?). I am pretty sure that Nasioi
> wasn't in that lot. But it is so long ago.

Nasioi is spoken in Bougainville, not [the island of] New Guinea, but
Bougainville is (still) politically part of Papua New Guinea.

Ross Clark

Mikael Thompson

Feb 12, 2002, 4:37:18 PM
mark...@enteract.com (Mark Rosenfelder) wrote in message news:<a4birm$t4n$1...@bob.news.rcn.net>...

The number of phonemes would be much smaller; there's feature
dependency, after all. Some features are only applicable to certain
phonemes, such as voicing for consonants (not vowels in this model,
which are automatically [+voice]); or interdental and sibilant (or
whatever feature was used in SPE to distinguish various apicals),
which aren't applicable to velars or labials; or bilabial versus
labiodental, which isn't applicable to apicals and velars.

Mikael Thompson

Peter T. Daniels

Feb 12, 2002, 5:19:02 PM

That would be more a reflection of SPE inefficiency -- didn't Jakobson,
Fant & Halle use the same labels for disparate subdivisions (so that,
say, interdental and sibilant might be distinguished by the same feature
that distinguished, say, Ichlaut and Achlaut? -- I'm not claiming that
that's an actual example!! only that it's a way of economizing the list
of features.)

Greg Lee

Feb 12, 2002, 5:17:41 PM
Mark Rosenfelder <mark...@enteract.com> wrote:
...
> (I found a proof on the Web that C is not context-free, because it requires
> expressions to be typed, and variables to be defined before use. Compilers
> treat C as context-free but handle the typing using a symbol table.)

That's the same as human language. Taking account of anaphoric relationships
has to be done outside, or as some sort of auxiliary to, hierarchical
structures. (Like indices, for syntacticians, or the convention that
substitution for a variable is uniform, for logicians.)
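E.g. "John_i said he_i left" vs. "John_i said he_j left": the indices
record a coreference relation that the tree structure by itself does
not encode.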

...
--
Greg Lee <gr...@ling.lll.hawaii.edu>

stephen michael conley

Feb 12, 2002, 5:27:39 PM
In article <3C695D...@dev.null.nu>, JGuy <jg...@dev.null.nu> wrote:

[a bunch of nonsense about skeletons and birds' nests deleted]

Which of these things are rule-based forms of communication?

Steve

Greg Lee

Feb 12, 2002, 5:28:55 PM
Mikael Thompson <mith...@indiana.edu> wrote:
...

> The number of phonemes would be much smaller; there's feature
> dependency, after all.

Not in principle. Chomsky and Halle require that features represent
independently controllable aspects of articulation. (Their feature
system does not live up to this ideal 100% -- a problem discussed
by Gunnar Fant in _Speech sounds and features_.)

> Some features are only applicable to certain
> phonemes, such as voicing for consonants (not vowels in this model,

What model is that? Not SPE. Did you know that consonants can be
+- stress in the SPE system?

> which are automatically [+voice]); or interdental and sibilant (or
> whatever feature was used in SPE to distinguish various apicals),
> which aren't applicable to velars or labials; or bilabial versus
> labiodental, which isn't applicable to apicals and velars.

There may be some problems here, but bilabials are spread, as opposed
to labiodentals, and +- spread is applicable to apicals, at least.
(For velars, well, maybe with practice one could make the back
of the tongue hump up sharply.)

If the SPE feature system is not totally orthogonal, I think that
means we should look for something better.

--
Greg Lee <gr...@ling.lll.hawaii.edu>

stephen michael conley

Feb 12, 2002, 5:55:46 PM
In article <3C69FF...@dev.null.nu>, JGuy <jg...@dev.null.nu> wrote:

>stephen michael conley wrote:
>
>> Does it apply to the second sentence because it's the second sentence, or
>> because the required discourse elements have to be introduced first, which
>> by nature requires a preceding sentence? There _is_ a difference, you
>> know. Even so, since you're dealing with sentences, how does that
>> disprove _phrase structure_?
>
> [my emphasis]
>
>What I wrote was a comment about:

>
>>> (you never have grammatical rules that apply
>>> specifically to, say, the fifth word of a sentence).
>
>not about _phrase structure_ per se.

But I made that statement in the context of saying that all languages have
phrase structure. Actually, I think there might be a good argument or two
against phrase structure as an absolute truth, but you haven't made it.
Besides, your discourse rule doesn't even hold. While it can apply to the
second sentence, it does not _specifically_ apply to it just because it's
the second sentence, and there are plenty of cases where it wouldn't.

>I am quite familiar with this sort of argumentation and
>I don't fall for it.

What sort of argumentation is that? That in disputing other people's
claims we present things that actually dispute their claims? I made a
statement about phrase structure, not discourse, but what the hell, I'll
even extend my statement to discourse, too. You're still wrong, you still
haven't presented an actual grammatical rule of any language, and if you
can produce such a rule (one that applies to the nth element of a string
regardless of any other syntagmatic factors), I'll be very surprised.

>> How about a rule that applies to the third word of the fifth
>> sentence, but only if the onset of the second syllable of that word is
>> higher on the sonority scale than the coda of the first syllable of the
>> second word of the third sentence?
>

>That is perfectly dishonest, and you know it.

It was an exaggeration meant to drive home what I was looking for.
Produce a rule that applies specifically to the nth word of a sentence,
period. I'm intentionally leaving myself wide open to all kinds of
prosodic stuff here. Come on, take a shot.

>> Or anything else that involves
>> actually counting things.
>

>Singular, dual, trial, paucal, plural.

I was talking about rules that count grammatical elements, not things in
language that involve speakers counting the stuff they're talking about.
I would think that you could tell the difference.

Steve

Michael Kuettner

Feb 12, 2002, 8:10:17 PM

JGuy <jg...@dev.null.nu> wrote in message
news:3C6A05...@dev.null.nu...

> Greg Lee wrote:
>
> > (Computer procedures are recursive, but that's because computer
> > languages are patterned after real languages.)
>
> SOME procedures can be written recursively (IF the computer language
> allows procedures to call themselves--none of the early computer
> languages
> did).
>
*Cough* - here you'll have to define what you mean by "early".
Almost any chip I've ever programmed (apart from the Fairchild
7-bit CPU) had a stack pointer and recognized the "JSR" mnemonic
(Jump to SubRoutine). You don't need more for recursion.
(Yes, I'm a SW developer and "hacked" in my younger days.)

> ALL recursive procedures can be rewritten without recursion, using
> loops, and advantageously so, because recursion is typically very,
> very wasteful of resources. (see below)
>
Nope. There are some problems for which recursion works faster.
(But make sure to set the compiler switches correctly.)

<snip part about natural languages>
Indeed.

> Note. Wasteful: about 6 years ago I had a handful of
> programmers implementing a database of icons I had
> designed (I could have written it faster myself, but
> that's not the way to go about it in a corporate
> bureaucracy). At one stage, their program crashed
> after loading just 49 icons (the icons were the
> standard 32x32 pixels). "Out of memory." I was testing
> their stuff on a 486 with 32M of RAM. That made no
> sense to me. There was enough RAM there to hold
> thousands of icons. So I e-mailed them (they were in
> another part of town). The answer came back: "The
> implementation is heavily recursive." (Oh well,
> they'd just graduated in comp sci, I suppose
> they were having fun with recursion). "Don't you
> know that recursion is computationally very
> expensive? Rewrite it with loops instead."
>
Again, no.
Your example just shows insufficiently trained
"programmers". And maybe sloppy parameters
in the description of the project (been there, done that).
But it boggles my mind why anyone would write a
recursive algorithm to load icons.
As a counter-example, take a weighted tree.
(root, left, right for every node).
The more likely hits are inserted left, the less likely hits
inserted right.
Recursion works faster here.
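
A minimal C sketch (hypothetical node type; both functions do the same
search, and a decent compiler will turn the tail call into a loop
anyway, which is what the compiler switches are about):

  struct node { int key; struct node *left, *right; };

  /* recursive search */
  struct node *find_rec(struct node *t, int k)
  {
      if (t == 0 || t->key == k)
          return t;
      return find_rec(k < t->key ? t->left : t->right, k);
  }

  /* the same procedure rewritten with a loop */
  struct node *find_iter(struct node *t, int k)
  {
      while (t != 0 && t->key != k)
          t = (k < t->key) ? t->left : t->right;
      return t;
  }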

> And BTW, not only are computers (as distinct from
> computer languages) incapable of recursion, they're
> also incapable of loops (i.e. the for, while,
> repeat, statements). These are features of the
> computer languagues, which are implemented as
> conditional jumps. Go disassemble an executable
> and see.

*Cough* - branch and jump commands are different
animals.
Branch commands are conditional, while
jump commands are *not*.
(Fewer cycles needed for the JMPs.)
And - although I'll deny it - I've disassembled quite
a few progs in my younger and wilder years to
*cough* "public - domain" them.

Cheers,

Michael Kuettner

PS : Since this is sci.lang, I won't bore you with
further details.
Dear JG - you know the waters around here
better than me.
As an assembler and machine-language discussion
seems off-topic here, let's take it to email.


Greg Lee

Feb 12, 2002, 10:05:02 PM
JGuy <jg...@dev.null.nu> wrote:
> Greg Lee wrote:

>> (Computer procedures are recursive, but that's because computer languages
>> are patterned after real languages.)

> SOME procedures can be written recursively (IF the computer language
> allows procedures to call themselves--none of the early computer
> languages did).

> ALL recursive procedures can be rewritten without recursion, using
> loops, and advantageously so, because recursion is typically very,
> very wasteful of resources. (see below)

> Languages per se are not recursive. Their description under certain
> grammatical models, however, can be.

> ALL recursive procedures must have at least one exit statement
> to be functional.

> NONE of the recursive definitions I have come across dealing
> with language have exit points.
...

Even supposing all this made sense, it really isn't to the
point. We were speaking about what (if anything) makes the
structure of human languages special. I said that the distinctive
character consisted in allowing phrases within phrases within phrases,
and so on, without bound. The (or one) comparison to computer
procedures is that in computer programs, procedures can call procedures
which call procedures, and so on, without bound. This is an entirely
different matter from the sort of recursion you're talking about.
A recursive definition might be appropriate for characterizing
the structure of such computer programs, or a recursive program
might be appropriate for parsing them, but that doesn't make the
programs being characterized or parsed recursive. The _language_
in which they're expressed has a recursive structure.

See the difference between programs written in a recursive language
and recursive programs?
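
A minimal C sketch of the contrast (hypothetical function): classify()
below never calls itself, so it is not a recursive program; but its
text is a statement containing statements containing statements --
the recursive structure belongs to the language it's written in.

  int classify(int n)
  {
      if (n > 0) {
          if (n % 2 == 0)    /* an if nested inside an if */
              return 2;      /* positive and even */
          return 1;          /* positive and odd */
      }
      return 0;              /* zero or negative */
  }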

--
Greg Lee <gr...@ling.lll.hawaii.edu>

John Lawler

Feb 12, 2002, 10:52:28 PM
Mikael Thompson <mith...@indiana.edu> writes:
>mark...@enteract.com (Mark Rosenfelder) writes:
>> Greg Lee <gr...@ling.lll.hawaii.edu> writes:

>>>It's the features that are inborn. SPE gives about 17 features,
>>>and if features are binary in the lexicon (as SPE proposes), that gives
>>>around 2^17 possible phonemes for the various languages of the world to
>>>choose among. The possible features are presumably a function of the
>>>shape and controllability of the human articulatory tract and human
>>>perceptual limitations.
>>
>> No doubt. Still, I don't see a strong basis for saying that the number
>> of phonemes in a language is "small", which seems to imply that one would
>> expect it to be larger. It seems to me that there could be good reasons
>> not to have 2^17 phonemes-- the possibility of confusion or distortion,
>> for instance. Once those reasons were investigated, perhaps we'd find
>> that the number of phonemes is just reasonable.
>
>The number of phonemes would be much smaller; there's feature
>dependency, after all. Some features are only applicable to certain
>phonemes, such as voicing for consonants (not vowels in this model,
>which are automatically [+voice]); or interdental and sibilant (or
>whatever feature was used in SPE to distinguish various apicals),
>which aren't applicable to velars or labials; or bilabial versus
>labiodental, which isn't applicable to apicals and velars.

Oddly (I've also thought), in a lot of voice production or recognition
software, "phoneme" has almost exactly the wrong meaning from a
linguistic point of view -- it's often used to refer to all the variant
pronunciations that need to be synthesized or recognized. The more the
better, usually occurring in powers of 2. "a 1024-phoneme system", etc.

Which probably illustrates Greg's point, that there are *a lot* of
possible and indeed occurring (software sense) "phonemes", which we
systematize into a very small number of (linguistic sense) phonemes.
Clearly, that's inbuilt, like the acoustic and neuromuscular and
fluid-dynamic parameters of the vocal tract.

-John Lawler http://www.umich.edu/~jlawler U Michigan Linguistics Dept
-----------------------------------------------------------------------
"Language is the most massive and inclusive art we know, a - Edward Sapir
mountainous and anonymous work of unconscious generations." 'Language'

Brian M. Scott

Feb 12, 2002, 11:49:19 PM
On Wed, 13 Feb 2002 02:10:17 +0100, "Michael Kuettner"
<mik...@eunet.at> wrote:

>JGuy <jg...@dev.null.nu> wrote in message
>news:3C6A05...@dev.null.nu...
>> Greg Lee wrote:

>> > (Computer procedures are recursive, but that's because computer
>> > languages are patterned after real languages.)

>> SOME procedures can be written recursively (IF the computer language
>> allows procedures to call themselves--none of the early computer
>> languages
>> did).

>*Cough* - here you'll have to define what you mean by "early".
>Almost any chip I've ever programmed (apart from the Fairchild
>7-bit CPU) had a stack pointer and recognized the "JSR" mnemonic
>(Jump to SubRoutine). You don't need more for recursion.

But the *languages* didn't allow it. I'm quite sure that FORTRAN II
didn't, for instance. And while IBM 360 assembler had branch-and-link
instructions, you had to implement your own stack if you wanted one.
(I don't know about Jacques, but I mean *early*! First computer I
ever used was an IBM 1620.)

[...]

>*Cough* - branch and jump commands are different
>animals.
>Branch commands are conditional, while
>jump commands are *not*.

Try 360 machine language: the 'unconditional' jump is simply a
branch-on-any-condition -- if I remember correctly after 30+ years,
BC 15,addr.

[...]

Brian

Lee Sau Dan

Feb 13, 2002, 4:20:20 AM
>>>>> "Mark" == Mark Rosenfelder <mark...@enteract.com> writes:

Mark> (I found a proof on the Web that C is not context-free,
Mark> because it requires expressions to be typed, and variables
Mark> to be defined before use. Compilers treat C as context-free
Mark> but handle the typing using a symbol table.)

Hm.... your description of strongly typed languages sounds to me
like grammatical gender. The only difference is that the gender is
not defined in the dictionary (lexicon), but in the program itself.
BTW, doesn't FORTRAN determine types lexically (unless overridden with
explicit declarations)?
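(If I remember right: identifiers beginning with I through N default
to INTEGER, everything else to REAL.)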


Mark> Only humans have created automobiles, too. So our brains
Mark> must have wheels.

Haha... We created machines that can fly in the sky. So, we must be
able to fly ourselves! :P

--
Lee Sau Dan 李守敦(Big5) ~{@nJX6X~}(HZ)

E-mail: dan...@informatik.uni-freiburg.de
Home page: http://www.informatik.uni-freiburg.de/~danlee

blazingmuse

Feb 13, 2002, 4:59:58 AM
Hey Dave. Looks like we hit the jackpot. I sincerely hope you
can make more sense of these replies than I. But then, you'll
probably benefit more. Yours,
Muse.

John Atkinson

Feb 13, 2002, 6:21:28 AM

"JGuy" <jg...@dev.null.nu> wrote ...

> John Lawler wrote:
>
> > Which probably illustrates Greg's point, that there are *a lot* of
> > possible and indeed occurring (software sense) "phonemes", which we
> > systematize into a very small number of (linguistic sense) phonemes.
> > Clearly, that's inbuilt
>

> Read the description of the alveolar-flap-cum-interlabial-stop
> analyzed as a free-variant allophone of /g/ in Piraha~, posted
> here just yesterday (if you were in the same time zone as me).
> To have this strange sound freely alternating with a
> velar stop, especially when the language seems to have a
> dental (or alveolar?) stop...
>
> There is a (small) dictionary of Piraha~ there:
>
> http://orbita.starmedia.com/~i.n.d.i.o.s/piraha1.htm

Hmm. Notice how more than 60% of the words in this list start with /x/ --
a phoneme which doesn't appear at all among those listed in the references I
mentioned.

Just what is going on here?

John.

Greg Lee

Feb 13, 2002, 7:01:21 AM
JGuy <jg...@dev.null.nu> wrote:
> John Lawler wrote:

>> Which probably illustrates Greg's point, that there are *a lot* of
>> possible and indeed occurring (software sense) "phonemes", which we
>> systematize into a very small number of (linguistic sense) phonemes.

>> Clearly, that's inbuilt

> Read the description of the alveolar-flap-cum-interlabial-stop
> analyzed as a free-variant allophone of /g/ in Piraha~, posted
> here just yesterday (if you were in the same time zone as me).
> To have this strange sound freely alternating with a
> velar stop, especially when the language seems to have a
> dental (or alveolar?) stop...

It's certainly a very interesting example, but just what does
it show? There are other segments known with multiple articulations,
and I have heard of denti-labials elsewhere ... You seem to think
this challenges some linguists' conventional ideas about language,
but what such idea does it challenge?

--
Greg Lee <gr...@ling.lll.hawaii.edu>

Peter T. Daniels

Feb 13, 2002, 7:59:40 AM
JGuy wrote:

>
> Brian M. Scott wrote:
>
> > (I don't know about Jacques, but I mean *early*! First computer I
> > ever used was an IBM 1620.)
>
> IBM 370 here. Shortly followed (thank goodness!) by
> Univac 1108, then DEC KL-10. My first PC was
> a Kaypro II. (PC as in "my very own personal computer"
> not as in "IBM-PC" -- My second was an NEC APC-III)

Hey, mine was a KayPro 4'84! Unfortunately, just a few months before
Apple made their first offer to academic communities. But all the folks
that got those first Apples couldn't have files longer than 12K or so!

Greg Lee

Feb 13, 2002, 8:17:58 AM

I wrote a prime number generator program for a Bendix computer in
1959. The computer was the size of a big refrigerator with a magnetic drum,
and had only a paper tape reader for input. So long as we're reminiscing.

--
Greg Lee <gr...@ling.lll.hawaii.edu>

John Atkinson

Feb 13, 2002, 9:00:41 AM

"John Atkinson" <jo...@bigpond.com> wrote ...
>
> "JGuy" <jg...@dev.null.nu> wrote ...

> >
> > There is a (small) dictionary of Piraha~ there:
> >
> > http://orbita.starmedia.com/~i.n.d.i.o.s/piraha1.htm
>
> Hmm. Notice how more than 60% of the words in this list start with /x/ --
> a phoneme which doesn't appear at all among those listed in the references
> I mentioned.
>
> Just what is going on here?

OK, I worked it out -- on the website, /x/ denotes the glottal stop -- this
despite them saying up the top that they will use 7 for it. Taking this
into account, the "dictionary" seems to come from Everett's publication in
the Handbook of Amazonian Languages.

John.

Peter T. Daniels

Feb 13, 2002, 4:45:26 PM

I was in 3rd grade in 1959.

Couldn't get into computer class at Cornell; at Chicago I learned Comit
II with Vic Yngve and did a job for Eric Hamp (Breton dialects) and a
job for Jay Gelb (Amorite root structure) on the IBM 360. I don't know
that anything ever happened with Eric's project, but the other is in
Gelb et al.'s *Computer-aided Analysis of Amorite*.

Brian M. Scott

unread,
Feb 13, 2002, 5:12:49 PM2/13/02
to
In article <3C6ADE...@att.net>, "Peter says...

[...]

>Couldn't get into computer class at Cornell; at Chicago I learned Comit
>II with Vic Yngve and did a job for Eric Hamp (Breton dialects) and a
>job for Jay Gelb (Amorite root structure) on the IBM 360. I don't know
>that anything ever happened with Eric's project, but the other is in
>Gelb et al.'s *Computer-aided Analysis of Amorite*.

As an undergrad at Pomona I got paid to write a program to
produce a concordance of the Coptic gospel according to St
Thomas from suitably encoded IBM cards; someone at Instant
Christianity (oops -- make that the Institute for Antiquity
and Christianity) was running the project. Don't know
whether they ever used it, though.

Brian

stephen michael conley

unread,
Feb 13, 2002, 5:45:38 PM2/13/02
to
JGuy <jg...@dev.null.nu> wrote:
>Wot? You don't know about Bickerton and Calvin's
>"Lingua ex Machina -- Reconciling Darwin and Chomsky"?
>
>It was published by MIT Press, so it's as kosher, halal,
>and nihil obstat as the brand of linguistics you are
>so proud to have learnt bits of by rote. Calvin discovers
>the origin of syntax in the "multi-jointed action and
>planning" involved in throwing a ball (he was playing
>bocce at the time, if he'd been playing golf he might
>not have had this brilliant insight. If Newton's apple
>had been a pear...). "Multi-jointed." You know what a
>joint is? Not the type you stick in your maw and
>light up -- the kneebone-connected type. Your
>rote education still leaves much to be desired. But I have
>a kind heart, so I'll fill you in. Learn this by
>heart and write a 5,000-word commentary by Monday:

[snip]

What a nasty message. You seem to have a lot of trouble carrying on a
discussion of ideas without getting emotionally involved. I mean, really,
how incredibly immature can you get? But I guess that's at least part of
the reason you're "publishing" your insights on sci.lang. Anyhow, I bow
to your great wisdom. Obviously there is no way one so lowly as my poor
self can hope to argue with a mind as great as yours.

You see, I have no stake in being right or wrong. I would love to see
some evidence that I am wrong, because if I am wrong, I want to know it
now, before I waste any more time and effort on erroneous assumptions. I
have no money riding on this, no personal honor or social status at stake.
Any serious academic (as opposed to your typical Usenet kook) is quite
happy to get new information proving his/her assumptions wrong (well,
maybe not at the very first moment, but shortly thereafter!) because the
goal of any serious effort at inquiry is to get to the truth. Besides,
nobody ever got laid from winning a flame war on Usenet.

As for whether or not "my" brand of linguistics considers anything stamped
with the name of Chomsky or coming from MIT kosher, well, you've definitely
shown how ignorant you are both of my educational background (which isn't
surprising, since you know nothing about me) and of the current state of the
academic world.

The latter is especially obvious, since you're citing Bickerton as if that
should impress me. Bickerton is a flake, and all this bioprogram hogwash
is based on very erroneous assumptions. See Jeff Siegel's excellent
paper, "Substrate influence in Hawai'i Creole English" in Language in
Society 29.

Oh, and by the way... >plonk< Learn some manners if you want people to
take you seriously.

Steve

Brian M. Scott

unread,
Feb 13, 2002, 7:53:35 PM2/13/02
to
In article <a4eqai$80n$1...@news.cis.ohio-state.edu>, con...@cis.ohio-state.edu
says...

>JGuy <jg...@dev.null.nu> wrote:
>>Wot? You don't know about Bickerton and Calvin's
>>"Lingua ex Machina -- Reconciling Darwin and Chomsky"?

>>It was published by MIT Press, so it's as kosher, halal,
>>and nihil obstat as the brand of linguistics you are
>>so proud to have learnt bits of by rote.

[...]

>As for whether or not "my" brand of linguistics considers anything stamped
>with the name of Chomsky or coming from MIT kosher, well, you've definitely
>shown how ignorant you are both of my educational background (which isn't
>surprising, since you know nothing about me) and of the current state of the
>academic world.

>The latter is especially obvious, since you're citing Bickerton as if that
>should impress me.

No, he isn't. He's being sarcastic.

[...]

BMS

John Atkinson

unread,
Feb 13, 2002, 9:35:02 PM2/13/02
to
JGuy wrote:

>
> John Atkinson wrote:
>
> > Hmm. Notice how more than 60% of the words in this list start with /x/ --
>
> ... whilst only two start with /s/
>
> That /x/ is almost certainly Portuguese "x", i.e. IPA [S] -- esh.

No. Piraha~ doesn't have [S]. And SIL people in Brazil rarely know much
Portuguese anyway. My best guess is that the Everetts (or their
predecessors) decided to use /x/ for the glottal stop as a practical
orthography for their Bible translations -- reasonable enough,
especially in the days of the typewriter, and a common sort of practice
(cf. /q/ in Fijian). And the person who put together that wordlist for
the website didn't realize this, or else they had their mind in neutral
as they typed. (There are several other careless errors too.)

I notice that the same error occurs, just once, in "The Amazonian
Languages" -- "xai" for "?ai" (to be). Probably they were
transliterating Everett's stuff into IPA, and missed this one.

> > a phoneme which doesn't appear at all among those listed in the references I
> > mentioned.
>
> > Just what is going on here?
>

> Search me. Could the various informants have taken the
> mickey out of those SIL people, each making up his
> own fancy pronunciation? I mean, that allophone of /g/!

Or maybe Piraha~ is evolving at an accelerated rate, and undergoes a new
set of sound changes every decade (conveniently, in between SIL visits).

> And again, suppose they did put up an elaborate show,
> and that hoax caught on with later generations. Then
> they'd end up with a language with an alveolar-flap-
> interlabial-stop _genuinely_ freely alternating with
> a velar stop. So it could happen... could it?

John Atkinson

unread,
Feb 14, 2002, 8:24:59 PM2/14/02
to

"JGuy" <jg...@dev.null.nu> wrote...

> I am starting to wonder whether Pirahã is not
> a huge hoax. I spent the last hour or so looking
> for data on the Web. Most of the links are dead
> or inaccessible. But this one is accessible:
>
> http://www.emich.edu/~linguist/issues/9/9-1695.html
>
> Quoting a bit of it:
>
> >Sally Thomason and I have recently written a very brief paper
> >arguing that the pronouns of Piraha, an Amazonian language, were
> >borrowed from Tupi-Guarani.

The paper in question is at
www-personal.umich.edu/~thomason/papers/pronborr.pdf
It reads plausibly enough -- and I have great respect for Sally Thomason's
work in general, so would be reluctant to dis anything she put her name on.

For what it's worth, here's what Aikhenvald and Dixon have to say about
this: "Everett (1986: 280ff) [that's his chapter in Handbook of Amazonian
Languages, not his article with Thomason: JA] quotes Nimuendaju' 1948 as
stating that the pronouns in Piraha~ were borrowed from Nheengatu' or
Li'ngua Geral, the old lingua franca of the area [a Portuguese-influenced
creole whose pronouns are apparently identical to those of the Tupi-Guarani
language Tupinamba' : JA]. In fact Nimuendaju''s remarks applied to Mura
(now extinct), not Piraha~, and he simply stated that three Li'ngua Geral
pronouns were in 'regular use' by the Mura, not that they had replaced the
original Mura forms."

However, in the Thomason and Everett article, considerable effort is exerted
to show that the current Piraha~ pronouns *are* related (more or less
plausibly) to the Nheengatu' pronouns, and Nimuendaju' isn't mentioned at
all. So much for A. and D.
>
> Yes, the whole pronoun system.

And, as far as Thomason knows, nothing else in the language. ("We have no
evidence (yet) of any other borrowings in Piraha~ from Tupi'-Guarani'.")

> It's one of two things:
>
> 1. Pirahã did borrow its pronouns, and the nice
> family trees produced by comparative linguists
> ain't worth shit.
>
> Or:
>
> 2. The data published about Pirahã are an elaborate
> hoax (a bit like the Tasaday people). I mean,
> that zany phonology AND borrowing one's pronoun
> system, that adds up to a bit much, no?


Miguel Carrasquer

unread,
Feb 15, 2002, 5:13:30 PM2/15/02
to
On 12 Feb 2002 15:16:59 GMT, con...@cis.ohio-state.edu (stephen
michael conley) wrote:

>In article <3C68FC...@dev.null.nu>, JGuy <jg...@dev.null.nu> wrote:


>>Steve Conley wrote:
>
>>> (you never have grammatical rules that apply
>>> specifically to, say, the fifth word of a sentence).
>>

>>But you have grammatical rules that apply to the second
>>sentence of a discourse. Can you think of one? And in
>>some New Guinea languages you have grammatical rules
>>which apply to the (n-1)th sentence of a discourse.
>>Do you know about them?


>Does it apply to the second sentence because it's the second sentence, or
>because the required discourse elements have to be introduced first, which
>by nature requires a preceding sentence? There _is_ a difference, you
>know. Even so, since you're dealing with sentences, how does that
>disprove phrase structure?
>
>How about a rule that applies to the third word of the fifth
>sentence, but only if the onset of the second syllable of that word is
>higher on the sonority scale than the coda of the first syllable of the
>second word of the third sentence? Or anything else that involves
>actually counting things.

Do you mean something like Wackernagel's law (clitics come second in
the sentence), as generally assumed for PIE (Greek and Sanskrit at
least), and as valid for present-day Serbian/Croatian?

=======================
Miguel Carrasquer Vidal
m...@wxs.nl

Michael Kuettner

unread,
Feb 15, 2002, 5:08:58 PM2/15/02
to

JGuy <jg...@dev.null.nu> wrote in message
news:3C6A50...@dev.null.nu...

> Michael Kuettner wrote:
>
> > But it boggles my mind why anyone would write a
> > recursive algorithm to load icons.
>
> And, I expect, display them on the screen.
> (I didn't look at the code--I'm only guessing;
> even the most trivial tasks must have been
> implemented by recursion to gobble up so much
> memory with so little data -- what? 50K at the
> most).
>
Maybe they learned Turbo Pascal?
I can remember that TP + recursion was "in"
in the 1990s.

>
> Why? For fun, and out of awe at the human-hostileness
> of much code written recursively. A challenge,
> maybe.
>
Human-hostileness at its purest is undocumented
recursive code (preferably with errors).

> I remember having "fun" playing with procedures calling
> one another recursively. It worked, but I needed
> a boxful of aspirin afterwards to figure out how
> it did it. (That was a long time ago, I'd just
> discovered ALGOL and Simula and I was silly). It might
> have been clever code, but it was more a puzzle
> than anything.
I had to fix a piece of code like that <mumble> years ago;
it would have been clever code if the bugger who wrote
it had known that the stack-pointer *is not* the
stack.
I still wonder how he managed to sell that to the company -
it only worked with the two sets of test data he had prepared,
but with nothing else....
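
(By the way, what Jacques describes above -- procedures calling one
another recursively -- is mutual recursion. A minimal sketch in C,
everything invented here purely for illustration:

#include <stdio.h>

/* The textbook mutually recursive pair: each function alone is
   trivial, but control bounces between the two until n hits 0. */
static int is_odd(unsigned n);

static int is_even(unsigned n)
{
    return n == 0 ? 1 : is_odd(n - 1);
}

static int is_odd(unsigned n)
{
    return n == 0 ? 0 : is_even(n - 1);
}

int main(void)
{
    printf("7 is %s\n", is_even(7) ? "even" : "odd");
    return 0;
}

Harmless at this size; the aspirin comes in when the two functions
live pages apart and each depends on state the other mutates.)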

> Granted, some tasks are more
> easily coded using recursion, scanning a
> tree for a particular piece of data, for
> instance.
>
Yep; use the right tool for the task.
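
Take the tree-scanning case Jacques mentions -- a minimal sketch in C
(node layout and names are made up for illustration) of why recursion
is the right tool there:

#include <stddef.h>

struct node {
    int data;
    struct node *left, *right;
};

/* Scan an (unordered) binary tree for a particular piece of data.
   The call stack remembers the path down the tree for us; an
   iterative version would need an explicit stack of pending
   subtrees. Returns NULL if the data isn't there. */
struct node *find(struct node *t, int data)
{
    struct node *hit;
    if (t == NULL || t->data == data)
        return t;
    hit = find(t->left, data);
    return hit != NULL ? hit : find(t->right, data);
}

Four lines of logic; the explicit-stack version is easily three times
that, and no clearer.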

> What boggles my mind is why anyone would implement
> a grammar as (pseudo ALGOL/C here:)
>
> function sentence()
> begin
> sentence := concat(sentence(),word())
> end
>
> forgetting the exit condition too,
>
> (That was the famous infinite-sentence generator:
> S ::= <word>|S<word>
> where <word> is drawn from a predefined global lexicon
> by function word() )
>
> when this does nicely and clearly:
>
> function sentence()
> string s
> begin
> s := ""
> while true do
> s := concat(s, word())
> if had_enough() then exitloop;
> end;
> sentence := s
> end
>
*Sigh* memories ...
But shouldn't it be formulated as

function sentence()
string s
string h
begin
s := ""
while true do
h := word()
if had_enough(s,h) then exit_loop;
s := concat(s, h)
end;
sentence := s
end

(since I take it that had_enough is your grammar-check function)?
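
In real (if toy) C, that iterative generator might look like the
sketch below. The lexicon, the length cap in had_enough(), and all
the names are stand-ins invented here, purely for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Toy stand-ins for the pseudocode's lexicon and grammar check. */
static const char *lexicon[] = { "the", "dog", "bites", "a", "man" };

static const char *word(void)
{
    return lexicon[rand() % 5];
}

static int had_enough(int n)
{
    return n >= 8;              /* arbitrary sentence-length cap */
}

/* Iterative sentence generator: a plain loop, no recursion, and
   the exit condition is impossible to forget. */
static const char *sentence(void)
{
    static char s[256];         /* ample for 8 short words */
    int n;
    s[0] = '\0';
    for (n = 0; !had_enough(n); n++) {
        if (n > 0)
            strcat(s, " ");
        strcat(s, word());
    }
    return s;
}

int main(void)
{
    srand((unsigned)time(NULL));
    puts(sentence());
    return 0;
}

(Note that had_enough() here just counts words rather than inspecting
the string -- the laziest possible grammar.)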

> > *Cough* - branch and jump commands are different
> > animals.
> > Branch commands are conditional, while
> > jump commands are *not*.
>

> You're playing on words there.
>
> What's JP (jump if parity), JPE (jump if parity even),
> JNZ (jump if not zero) etc., then?
>
Not words - mnemonics.
These are branch commands; also called
BCS (branch if carry set), BCC (branch if
carry clear) and BNZ (branch if not zero).
You're right that in earlier CPUs these weren't
hard-wired (as Brian M. Scott also pointed
out) - my point was simply that when they were
hard-wired they needed extra machine-cycles
(because they checked some CPU registers)
while an unconditional JMP (Jump) didn't.

My faulty mnemnory ;-)

> >Dear JG - you know the waters around here
> >better than me.
> >As an assembler + machine - language discussion
> >seems off-topic here, let's take it to email.
>

<snip linguistics + math>
Reminds me of an apocryphal tale, where the
professor told an assistant:
"This is my theory, now bugger off and find some
statistics to support it."

Cheers,

Michael Kuettner


Michael Kuettner

unread,
Feb 15, 2002, 5:23:51 PM2/15/02
to

Brian M. Scott <b.s...@csuohio.edu> wrote in message
news:3c69ee9a....@enews.newsguy.com...

> On Wed, 13 Feb 2002 02:10:17 +0100, "Michael Kuettner"
> <mik...@eunet.at> wrote:
>
> >JGuy <jg...@dev.null.nu> wrote in message
> >news:3C6A05...@dev.null.nu...
> >> Greg Lee wrote:
>
> >> > (Computer procedures are recursive, but that's because computer
> >> >languages
> >> > are patterned after real languages.)
>
> >> SOME procedures can be written recursively (IF the computer
language
> >> allows procedures to call themselves--none of the early computer
> >> languages
> >> did).
>
> >*Cough* - here you'll have to define what you mean by "early".
> >Almost any chip I've ever programmed (apart from the Fairchild
> >7-bit CPU) had a stack-pointer and recognized the "JSR" mnemonic
> >(Jump subroutine). You don't need more for recursion.
>
> But the *languages* didn't allow it.

Yo' high-level-language-white-collar-boy want to sneer at us
'ard 'orking guys who've gotten their 'ands dirty? ;-)


> I'm quite sure that FORTRAN II
> didn't, for instance.

Yes and no.
Some of the libs were recursive internally (in assembler); and my
feeble memory tells me that it was possible to do recursion
in FORTRAN with the right compiler switches. But that
was marked as "Go there at your own peril".

> And while IBM 360 assembler had branch-and-link
> instructions, you had to implement your own stack if you wanted one.
> (I don't know about Jacques, but I mean *early*! First computer I
> ever used was an IBM 1620.)
>

I always envy people who can mention computers and IBM[1] in
one sentence while keeping an absolutely straight face ;-)

> [...]
>
> >*Cough* - branch and jump commands are different
> >animals.
> >Branch commands are conditional, while
> >jump commands are *not*.
>
> Try 360 machine language: the 'unconditional' jump is simply a
> branch-on-any-condition -- if I remember correctly after 30+ years,
> BC 16,addr.
>

See my other post to Jacques; when the instruction is hard-wired
it's a different beastie.

Cheers,

Michael Kuettner

[1] Incredibly Bad Machines

Brian M. Scott

unread,
Feb 15, 2002, 7:55:20 PM2/15/02
to
On Fri, 15 Feb 2002 23:23:51 +0100, "Michael Kuettner"
<mik...@eunet.at> wrote:

>Brian M. Scott <b.s...@csuohio.edu> wrote in message
>news:3c69ee9a....@enews.newsguy.com...
>> On Wed, 13 Feb 2002 02:10:17 +0100, "Michael Kuettner"
>> <mik...@eunet.at> wrote:

>> >JGuy <jg...@dev.null.nu> wrote in message
>> >news:3C6A05...@dev.null.nu...
>> >> Greg Lee wrote:

>> >> > (Computer procedures are recursive, but that's because computer
>> >> >languages
>> >> > are patterned after real languages.)

>> >> SOME procedures can be written recursively (IF the computer
>> >> language
>> >> allows procedures to call themselves--none of the early computer
>> >> languages
>> >> did).

>> >*Cough* - here you'll have to define what you mean by "early".
>> >Almost any chip I've ever programmed (apart from the Fairchild
>> >7-bit CPU) had a stack-pointer and recognized the "JSR" mnemonic
>> >(Jump subroutine). You don't need more for recursion.

>> But the *languages* didn't allow it.

>Yo' high-level-language-white-collar-boy want to sneer at us
>'ard 'orking guys who've gotten their 'ands dirty? ;-)

Hey, my working language back when I actually did this stuff was 360
Assembler. That was fun; PL/I was work.

>> I'm quite sure that FORTRAN II
>> didn't, for instance.

>Yes and no.
>Some of the libs had recursions in themselves (in assembler); and my
>feeble memory tells me that it was possible to do recursions
>in FORTRAN with the right compiler switches. But that
>was marked as "Go there on your own peril".

Don't *think* so, but it's been a long time, and I didn't use it for
long. I do remember writing a recursive FORTRAN II program that
managed its own stack (as an array), but that was just for the hell of
it.
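
For anyone who never had to do it, the trick is simply to make the
stack explicit. A sketch of the idea in C rather than FORTRAN II --
the factorial example and all the names are mine, for illustration
only:

#include <stdio.h>

/* Recursion faked with a hand-managed stack, the way one did it in
   languages that forbade self-calls: push the pending arguments on
   the way "down", then multiply them back in on the way "up". */
static unsigned long factorial(unsigned n)
{
    unsigned stack[64];         /* our own stack, as an array */
    int top = 0;
    unsigned long result = 1;

    while (n > 1)               /* descent: push pending work */
        stack[top++] = n--;
    while (top > 0)             /* unwind: consume the stack */
        result *= stack[--top];
    return result;
}

int main(void)
{
    printf("10! = %lu\n", factorial(10));
    return 0;
}

The array does the job the hardware stack does in a language that
supports recursion natively.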

[...]

Brian

Brian M. Scott

unread,
Feb 15, 2002, 10:45:34 PM2/15/02
to
On Sat, 16 Feb 2002 01:20:42 -0800, JGuy <jg...@dev.null.nu> wrote:

>Brian M. Scott wrote:
>> I do remember writing a recursive FORTRAN II program that
>> managed its own stack (as an array), but that was just for the hell of
>> it.

>I am sure you could write a recursive factorial
>function even on ... an IMSAI (was it?) and an
>Altair. Given that they came with 512 bytes of RAM
>(if memory serves), you'd have to be content with
>5! or so max, though. Doing silly things with
>every 12th syllable of a sentence is more fun.

My favorite was independently discovering the cheap trick of reading
machine code (as numbers) into an array so as to overrun the array,
which was carefully located so that the program would fall into the
overrun and start executing the machine code. 'How the hell did you
get a FORTRAN program to do *that*?!'

Brian
