logical incompleteness and "syntactic incongruence"

Rob Freeman

Sep 27, 2007, 1:09:01 AM
to grammatical-i...@googlegroups.com
David, Alex,

I'm branching that old "corpus syntax" thread. It was getting unwieldy.

I have to confess I didn't take the time to look back to see the
example Alex gave (on Sept. 19th?):

"...we have pairs of strings like "I am thin" and "I am an
Englishman", but "thin" and "an Englishman" are not syntactically
congruent"

So I couldn't follow his argument that "the example I gave does not
use lexical ambiguity" meant logical incompleteness and "syntactic
incongruence" were different things.

It did not matter to me too much. My main point was that there are
many sets of syntactic regularities to be found, more than can be
enumerated (because many contradict.) Whether we equate those many
inconsistent sets with logical incompleteness is a secondary matter
(though I think they can be equated.)

Looking at it now, though, and following David's analysis: your
argument, Alex (distinguishing logical incompleteness and "syntactic
incongruence"), seems to be that your example involves syntactic
incongruence, but does not involve lexical ambiguity (which is then
equated with logical incompleteness?)

But is this true? Does your example involve no (lexical) ambiguity?

My response (to your example) was:

"Yes, this does seem to be the same (as the logical incompleteness I
am talking about.)

Two words can have contexts in common, and yet not have all contexts
in common. This creates a paradox from the point of view of trying to
form a complete grammatical description. Sometimes the words are the
same, and sometimes they are different."

To which you responded that this "is quite different to the issue of
lexical ambiguity."

But is it not the same thing to say "sometimes they are the same and
sometimes they are different" as to say the two words have some kind of
ambiguity? A syntactic ambiguity perhaps, but an ambiguity
nonetheless.

To follow your argument you will then need to make a distinction
between syntactic ambiguity and lexical ambiguity. That will be
difficult, because lexical ambiguity refers to "meaning", and there is
no commonly agreed definition for meaning. I have a definition which
says lexical ambiguity can be explained in terms of the different sets
specified by syntactic ambiguity. So by my definition they are the
same. But at that point everything becomes a matter of definitions,
and so difficult to argue.

To know whether my definition of lexical ambiguity matches your
intuitive sense, we would perhaps need to look at some examples where
the syntax of "thin" and "an Englishman" is not the same, to see if
the syntactic differences between them can be associated intuitively
with what you regard as a "meaning" difference (lexical ambiguity.)

I guess what we would need to do is find a difference in syntax which
does not involve a difference in "meaning".

I'm predicting such a thing will not exist. (I suggested to Mike
Maxwell that it might, Corpora, Sept 14. But I think it would tend to
create its own meaning distinction. My point there was that meaning
need not be prior to syntax.)

Anyway, it needs to be emphasized that such an analysis could only be
intuitive, and not data-based. We have no data-based definition of
meaning, or lexical ambiguity (other than mine, which equates it to
syntactic ambiguity.)

-Rob

P.S. David. Note that in answer to your analysis, I disagree that "the
phenomenon Alex highlights is present *even if we managed to resolve
all lexical ambiguity*". I equate syntactic incongruity and ambiguity,
so by my argument there is no distinction to be made. The fact that
"thin" and "an Englishman" are syntactically incongruent predicts they
will both be ambiguous. We don't see it here, but that is the
prediction. And of course I argue syntactic incongruity and ambiguity
can both be handled (in particular lexical ambiguity can be modeled
and resolved) by ad-hoc generalization.

On 9/26/07, David Brooks <d.j.b...@cs.bham.ac.uk> wrote:
>
> alexs...@googlemail.com wrote:
> > On Sep 20, 3:23 am, "Rob Freeman" <gro...@chaoticlanguage.com> wrote:
> >> On 9/20/07, alexscl...@googlemail.com <alexscl...@googlemail.com> wrote:
> >>> I think your use of the word paradox here is misleading: this is not
> >>> a logical issue. It's a problem for naive learning algorithms, not a
> >>> paradox.
> >>>
> >>> This is just a fact about certain classes of languages -- and is quite
> >>> different to the issue of lexical ambiguity.
> >>>
> >> I think the issues are the same, but I would be interested to hear arguments
> >> distinguishing them.
> >
> > The example I gave does not use lexical ambiguity -- that should be
> > argument enough!
>
> I think there is an important distinction: the phenomenon Alex
> highlights is present *even if we managed to resolve all lexical
> ambiguity*. (I know we're in a discussion about whether or not that is
> necessary/possible, but I think it's especially important for someone
> adopting a position like Rob's - if your approach copes with this where
> others fail, then it is an advantage you should highlight.)
>
> I guess that Rob conflates the issue because from a set-overlap
> perspective, he's saying his system can handle:
>
> A _B_ (term B observed in the context of A)
>
> even where:
> [1] instances of B may be syntactically incongruent (B is ambiguous)
> [2] A may /occur with/ syntactically incongruent terms.
>
> For [2] to occur, it is possible that A is simply ambiguous, and this
> relates to my earlier discussion of contextual ambiguity; however,
> ambiguity only accounts for a subset of these observations, not
> including Alex's example.
>
> As Alex also pointed out, this is a problem for naive learning
> algorithms: if an algorithm that forms classifications successfully
> discerns between syntactically incongruent instances of B (on the basis
> of the surface form of B), then it is quite reasonable to say it
> accounts for the problem. Data-sparsity in the surface form is one
> reason why this may not be possible, and it is particularly relevant
> when considering raw text (as opposed to POS-tagged texts).
>
> In my view, the arguments distinguishing ambiguity from the phenomenon
> Alex described are important as they define capacities of learning
> algorithms, even though the approaches to resolving them might be similar.
>
> For example, van Zaanen's thesis, Section 7.3 [p.101-102] suggests that
> clustering is a viable approach to resolving the following case (from
> ATIS corpus):
>
> show me the _nonstop_ flights
> show me the _morning_ flights
>
> He argues that distributional analysis over the words in question would
> lead us to think that they are in separate classes. My earlier point
> about data-sparsity (especially hapax legomena) is relevant here: if
> these are the only occurrences of the words, how do you determine that
> they belong to different syntactic types? I can't see how a set-based
> view such as that expounded by Rob would resolve this problem, unless
> there is much more data about the words in question.
>
> D

alexs...@googlemail.com

Sep 29, 2007, 5:25:51 PM
to Grammatical Incompleteness
It's worth making a distinction between lexical ambiguity and
syntactic ambiguity.

Though I suppose if you think in terms of categorial grammar and
"rigid" CGs there is a point of similarity.

On 27 Sep, 06:09, "Rob Freeman" <gro...@chaoticlanguage.com> wrote:
> David, Alex,
>
> I'm branching that old "corpus syntax" thread. It was getting unwieldy.
>
> I have to confess I didn't take the time to look back to see the
> example Alex gave (on Sept. 19th?):
>
> "...we have pairs of strings like "I am thin" and "I am an
> Englishman", but "thin" and "an Englishman" are not syntactically
> congruent"
>
> So I couldn't follow his argument that "the example I gave does not
> use lexical ambiguity" meant logical incompleteness and "syntactic
> incongruence" were different things.
>

I have never argued that "logical incompleteness" and "syntactic
incongruence" were different things, largely because I don't think
they have been defined precisely enough in this discussion.
To me these both have very specific technical meanings that are
completely different, but if you think they are the same then maybe
define them and argue that they are equivalent.


Rob Freeman

Sep 30, 2007, 2:29:18 AM
to grammatical-i...@googlegroups.com
On 9/30/07, alexs...@googlemail.com <alexs...@googlemail.com> wrote:
>
> I have never argued that "logical incompleteness" and "syntactic
> incongruence" were different things.

I was referring to your original(?) post:

"I think from your description you are merely pointing out a flaw in
distributional modes of analysis -- completeness (in the logical
sense) is something utterly different.

Your examples above are surely just saying that observing two strings
in the same context does not imply they are syntactically congruent."

For clarity, I'm assuming the "examples" you refer to (though you did
not specify) are things like (Corpora, corpus syntax, Sept. 18):

I supported the man with a stick.
I accompanied the man with a stick.

In particular the contrast between sentences like these and sentences
like (Corpora, ad-hoc meaning, Sept. 13):

Tom supported the tomato plant with a stick.

Which "supported" does not share with "accompanied".

So "supported" = "accompanied" in some contexts, but "supported" !=
"accompanied" in other contexts.

So it is both true, and not true, that "supported" = "accompanied".

Equivalently, it is impossible to formulate a theory about these
examples which proves either that "supported" and "accompanied" will
occur in the same contexts, or that they won't.

How do I prove "I accompanied the man with a stick" is acceptable?
"Supported" occurred in this context, but there are contexts which
"accompanied" and "supported" demonstrably do not share.

How do I prove "*Tom accompanied the tomato plant with a stick" is not
acceptable? "Supported" occurred in this context, and there are
contexts which "supported" and "accompanied" demonstrably do share.
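To make that concrete, here is a rough sketch in Python. The tiny
three-sentence corpus, the helper name, and the blank-one-word context
representation are only illustrative assumptions, not a claim about any
real data:

# Toy sketch of the "supported"/"accompanied" situation described above.
# The corpus and the context representation are invented purely for
# illustration.

corpus = [
    "I supported the man with a stick",
    "I accompanied the man with a stick",
    "Tom supported the tomato plant with a stick",
]

def contexts(word, sentences):
    """All contexts of `word`: each sentence with the word blanked out."""
    ctxs = set()
    for s in sentences:
        tokens = s.split()
        for i, t in enumerate(tokens):
            if t == word:
                ctxs.add(" ".join(tokens[:i] + ["_"] + tokens[i + 1:]))
    return ctxs

a = contexts("supported", corpus)
b = contexts("accompanied", corpus)

print("shared contexts:  ", a & b)  # evidence the two words are "the same"
print("unshared contexts:", a - b)  # evidence they are "different"

On this evidence alone, generalizing from the shared contexts licenses
"I accompanied the man with a stick", while the unshared contexts warn
against "*Tom accompanied the tomato plant with a stick". Which verdict
you get depends on which subset of contexts you generalize from.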

For "logical (or formal) incompleteness", as I said earlier I like
Janiczak's definition:

A Remark Concerning Decidability of Complete Theories, Antoni
Janiczak, The Journal of Symbolic Logic, Vol. 15, No. 4 (Dec., 1950),
pp. 277-279:

"A formalized theory is called complete if for each sentence
expressible in this theory either the sentence itself or its negation
is provable."

Though I'm happy to go with the one Yorick Wilks found in Wikipedia:

"Completeness normally (see e.g. Wikipedia) means that for every
sentence S expressible in a language either S or ~S is derivable from
the associated axioms, and that all sentences so derived are true
(i.e. theorems)."
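In symbols, writing T for the theory (axioms plus rules of inference)
and L(T) for the sentences expressible in its language, both
definitions come to much the same condition; a standard rendering
would be:

T is complete  iff  \forall S \in L(T):\; (T \vdash S) \lor (T \vdash \lnot S)

with the Wikipedia wording adding soundness, i.e. T \vdash S implies S
is true.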

-Rob

alexs...@googlemail.com

Sep 30, 2007, 2:57:49 PM
to Grammatical Incompleteness
I think I have been unclear; I posted after a very good dinner. I
think the remainder of my post makes it clear that I think they are so
different that I was not sure you were claiming that they meant the
same thing. I guess a quantifier scope ambiguity is the source of the
confusion.

Two questions:

What does the equals sign in "supported" = "accompanied" mean?

I know what logical incompleteness is: could you explain what it has
to do with language?
i.e. when you say language is incomplete, what does this have to do
with logical incompleteness?

On 30 Sep, 07:29, "Rob Freeman" <gro...@chaoticlanguage.com> wrote:

Rob Freeman

Oct 1, 2007, 1:39:44 AM
to grammatical-i...@googlegroups.com
On 10/1/07, alexs...@googlemail.com <alexs...@googlemail.com> wrote:
>
> Two questions:
>
> What does the equals sign in "supported" = "accompanied" mean?

I use it to mean "syntactically the same as", "having the same set of
contexts as", "in the same syntactic class as".

> I know what logical incompleteness is: could you explain what it has
> to do with language?
> i.e. when you say language is incomplete, what does this have to do
> with logical incompleteness?

The concept of logical incompleteness is (almost?) entirely limited to
the context of languages. It was proven in the context of languages.
It is really a result about languages. (Where language is defined very
simply as a string of symbols.)

E.g. (Yorick Wilks):

alexs...@googlemail.com

Oct 1, 2007, 10:11:11 AM
to Grammatical Incompleteness
I think we are getting to the root of our confusion. Completeness
derives from a notion of proof: what does proof mean in the context of
natural language? One way of looking at Godel's first theorem is that
he showed that proof and truth are completely different: I am trying
to figure out whether you are looking at language membership as being
proof theoretic or truth based (i.e. model theoretic).


On Oct 1, 6:39 am, "Rob Freeman" <gro...@chaoticlanguage.com> wrote:

Alex Clark

Oct 1, 2007, 12:26:41 PM
to Grammatical Incompleteness
Let me make this even more explicit: I think that you think of truth
in this formal system as meaning language membership. What then is
proof? Derivation from some phrase structure grammar? Fine. But then
we have no proofs of false statements, so language is trivially
incomplete (unless the language is \Sigma^*), even if the language is
a regular language like \{ (ab)* \}, where this is clearly decidable.


On Oct 1, 3:11 pm, "alexscl...@googlemail.com"

Rob Freeman

Oct 2, 2007, 1:00:01 AM
to grammatical-i...@googlegroups.com
Alex,

We've got the same discussion going under two different threads. Let's
consolidate them here in the more appropriate thread.

On 10/2/07, Alex Clark <alexs...@googlemail.com> wrote:
>
> Let me make this even more explicit: I think that you think of truth
> in this formal system, as meaning language membership. What then is
> proof? Derivation from some phrase structure grammar? Fine. But then
> we have no proofs of false statements; so language is trivially
> incomplete.

I'm glad we agree on something.

Equally, by your assessment, unless we can show natural language is
derivable from some phrase structure grammar, we will have no proof of
true statements.

> On Oct 1, 3:11 pm, "alexscl...@googlemail.com"
> <alexscl...@googlemail.com> wrote:
> > I think we are getting to the root of our confusion. Completeness
> > derives from a notion of proof: what does proof mean in the context fo
> > natural language. One way of looking at Godel's first theorem is that
> > he showed that proof and truth are completely different: I am trying
> > to figure out whether you are looking at language membership as being
> > proof theoretic or truth based (i.e. model theoretic).

Perhaps either. I'm not sure it matters, since my hypothesis is that
natural language is incomplete, and therefore that neither truth nor
proof need apply.

And from the "data sparseness" thread:

> > > What do "truth" and "provability" refer to when you are talking about
> > > natural language?
> >
> > Exactly what they mean when you are talking about a formal language.
>
> What do "truth" and "provability" refer to when you are talking about
> a formal language.
> Consider the formal language { a^n b^n | n > 0 }. What does truth and
> provability mean in this context?

This is easy to do for a complete system (as you point out above) but
not strictly relevant, since what I am trying to argue is that it is
_not_ possible to say such things about natural language (if it were,
natural language would be complete.)

It is kind of funny that the argument you have been led into, in an
attempt to show that formal systems do not apply to natural language,
turns out to be essentially the same as my argument that natural
language is not complete. Because in each case your argument turns out
to be distinguishing natural language only from the properties of
_complete_ formal systems.

So Alex. You've asked me a lot of questions. Let me ask you one.

What would an _incomplete_ formal system look like if you were to
write it down in symbols?

-Rob

Alex Clark

Oct 2, 2007, 2:23:45 AM
to Grammatical Incompleteness

On Oct 2, 6:00 am, "Rob Freeman" <gro...@chaoticlanguage.com> wrote:

Rob,

I have been trying to understand what you mean by incompleteness.
Since you have declined to provide formal definitions, I have put some
forward for you. And we have established that it is completely vacuous,
and has no relationship to your arguments about context sets. So I think
this discussion is at an end, unless I have misinterpreted your
arguments.

Let me clarify my position: I think natural languages are natural
objects, but can and should be described formally: either using the
classical vocabulary of formal language theory or a probabilistic
language model. And I think the idea of incompleteness is irrelevant
since we are not dealing with logics, but with formal language theory.

And an example of an incomplete formal system would be Peano
Arithmetic.
Maybe you could give me an example of a complete but undecidable
system?

Alex Clark

Oct 2, 2007, 2:52:15 AM
to Grammatical Incompleteness

> > What do "truth" and "provability" refer to when you are talking about
> > a formal language.
> > Consider the formal language { a^n b^n | n > 0 }. What does truth and
> > provability mean in this context?
>
> This is easy to do for a complete system (as you point out above) but
> not strictly relevant, since what I am trying to argue is that it is
> _not_ possible to say such things about natural language (if it were,
> natural language would be complete.)
>

Could you answer my question: what do truth and provability mean in
this context?
And do you think this language is "complete" in your sense?

> It is kind of funny that the argument you have been led into in an
> attempt to show that formal systems do not apply to natural language,
> turns out to be essentially the same as my argument that natural
> language is not complete. Because in each case your argument turns out
> to be distinguishing natural language from the properties only of
> _complete_ formal systems.
>

I have to say that I do not recognise this description of our debate.

Rich Cooper

Oct 2, 2007, 2:39:36 PM
to grammatical-i...@googlegroups.com
Recently, I heard Richard Dawkins on a TV interview say that certain ideas
can be valuable in an evolutionary sense even if they are manifestly untrue
in reality. (He was discussing religious ideas then, which he believes are
false, but the principle applies to other ideas equally. I don't want to
start a religious flame war - it's the concept of useful false ideas that I
want to put forward).

"It ain't what you don't know that hurts you - its what you do know that
ain't true" - Will Rogers, circa 1930s

It seems to me that some of the ambiguity of language, and the value of that
ambiguity, lie in communicating false, but useful, ideas. Once
communicated, the ideas can be considered by others, and their replies might
be useful in helping the originator make the ideas somewhat less false.
Iterative practice at that kind of conversation could be what makes the
scientific method, among other discourses, ultimately enlightening.

When you think about all the people you know, don't most of them have ideas
you consider clearly false, or unsupported? Maybe even each of us has that
situation to deal with.

The basic belief in linguistics today is that the syntax of languages is
crudely linked to the semantics of the world. The best way we have of
representing reality at this time is first order logic. There is a lot to
be said for building in mechanisms to deal with human perceptions, but the
basic representation of the senses' information is still being formed as
first order logic in some shape.

False and true ideas, such as belief systems, have been studied before. But
I am not aware of projects that link belief systems into linguistic
problems. Does anyone know of such work?

Are there other approaches that apply the concept of ideation to
linguistics?

Your comments are solicited on how ideas can start from nonsense (pre-Galilean
theories? pre-Socratic mathematics?) and ultimately lead to progress.

Sincerely,
Rich Cooper
http://www.EnglishLogicKernel.com


Rich Cooper

Oct 2, 2007, 3:34:34 PM
to grammatical-i...@googlegroups.com
Michael Shermer has a column on "integrative science" in Scientific
American, current issue:
http://www.sciam.com/article.cfm?chanID=sa006&colID=13&articleID=FAD36DC2-E7F2-99DF-31C4971823C95F5F

This adds a little perspective on the issue of false ideas.

Sincerely,
Rich Cooper
http://www.EnglishLogicKernel.com

Rob Freeman

Oct 3, 2007, 1:23:34 AM
to grammatical-i...@googlegroups.com
On 10/2/07, Alex Clark <alexs...@googlemail.com> wrote:
>
> I think the idea of incompleteness is irrelevant
> since we are not dealing with logics, but with formal language theory.
>
> And an example of an incomplete formal system would be Peano
> Arithmetic.

This appears to be self-contradictory. You say that incompleteness is
not relevant to formal language theory, and then right away you give
an example of a formal system which is incomplete.

Are you making a distinction between a formal language and a formal system?

Several of your opinions seem non-standard.

You reject the definitions of incompleteness found by myself and
Yorick Wilks (for good measure also rejecting Janiczak's perfectly
irrelevant statements about complete but undecidable theories.)

Then you give instead an example of a complete formal language, argue
that the relevant tests (for completeness) are not met for natural
language, and claim my arguments for the incompleteness of natural
language are "vacuous" as a result (because tests for completeness are
not met!)

That said, all our disagreements seem to come down to definitions. We
don't disagree on any points of substance.

A key problem seems to be your rejection of incompleteness as a
concept relevant in the context of anything but logic.

You started out by rejecting the applicability of incompleteness to
natural language. Now you are rejecting the applicability of
incompleteness to formal languages too (despite the fact the standard
definitions specifically refer to them.)

This insistence on logic is understandable in a way, because formal
systems did start out as formal expressions of logic. But they moved
on. They turned out to be more powerful than logic. Logic was
inadequate. Proof and truth do not always co-exist. That was a shock.
But from the point of view of these newly empowered formal systems
logic was now seen only to be a limitation. If its tests are not met,
that does not mean the system ceases to exist, or becomes meaningless.

Incompleteness is defined with reference to logic, but you need to see
that it is a property of languages, the formal systems themselves, not
the logic which originally motivated them.

Reject the idea of incomplete natural languages if you must, but I
think you need to at least accept the existence of incomplete _formal_
languages.

The Peano arithmetic is a good start. But perhaps it is unnecessarily
associated with the logic which proved to be its limitation. I would
suggest the Goedel string, a Universal Turing Machine, or especially
the maximally compact (random) programs of Chaitin's Algorithmic
Information Theory, as examples of incomplete formal systems which get
under what is a mere limitation of the Peano arithmetic and show the
power those limitations revealed.

When you accept formal systems can be incomplete in a constructive
way, and not just as a limitation of logic, then we can begin to
discuss whether natural language might be an example of such an
incomplete formal system.

-Rob

Rob Freeman

Oct 3, 2007, 1:26:29 AM
to grammatical-i...@googlegroups.com
On 10/3/07, Rich Cooper <Ri...@englishlogickernel.com> wrote:
> ...

>
> False and true ideas, such as belief systems, have been studied before. But
> I am not aware of projects that link belief systems into linguistic
> problems. Does anyone know of such work?

Wittgenstein. Almost(?) all his work can be understood in these terms.

He believed all problems of philosophy were problems of language.

He also hints enticingly at limitations to logic.

> Your comments solicited on how ideas can start from nonsense (pre Galilean
> theories? pre Socratic mathematics?) and ultimately lead to progress.

Thomas Kuhn is a great reference on the subjectivity of truth. He has
some nice examples of how science sometimes goes backwards and accepts
truths it formerly rejected.

From memory, one is "action at a distance", which was rejected by
Galileo (for whom tides were a killer argument for the movement of the
earth), until Newton reintroduced it as a convenience (only to have it
rejected by Einstein again.)

It seems certain all truths are partial. Though as a truth, that is
sure to be partial too.

-Rob

Alex Clark

Oct 3, 2007, 3:57:42 AM
to Grammatical Incompleteness
I have put some comments below.

On Oct 3, 6:23 am, "Rob Freeman" <gro...@chaoticlanguage.com> wrote:


> On 10/2/07, Alex Clark <alexscl...@googlemail.com> wrote:
>
>
>
> > I think the idea of incompleteness is irrelevant
> > since we are not dealing with logics, but with formal language theory.
>
> > And an example of an incomplete formal system would be Peano
> > Arithmetic.
>
> This appears to be self-contradictory. You say that incompleteness is
> not relevant to formal language theory, and then right away you give
> an example of a formal system which is incomplete.
>
> Are you making a distinction between a formal language and a formal system?
>

Indeed I am. Incompleteness is not relevant to formal language
theory. Indeed I think it is a vacuous concept in the way you define
it.

From an earlier conversation:


"
> > What do "truth" and "provability" refer to when you are talking about
> > natural language?

> Exactly what they mean when you are talking about a formal language.

What do "truth" and "provability" refer to when you are talking about
a formal language.
Consider the formal language { a^n b^n | n > 0 }. What does truth and
provability mean in this context?
"

Do you think this language is complete? If you explain what you mean
by truth, provability and completeness with this simple example, I
would be grateful.

> Several of your opinions seem non-standard.
>
> You reject the definitions of incompleteness found by myself and
> Yorick Wilks (for good measure also rejecting Janiczak's perfectly
> irrelevant statements about complete but undecidable theories.)
>

They are standard definitions which I do not reject.

> Then you give instead an example of a complete formal language, argue
> that the relevant tests (for completeness) are not met for natural
> language, and claim my arguments for the incompleteness of natural
> language are "vacuous" as a result (because tests for completeness are
> not met!)
>

I do not recognise this description at all.

snip

> You started out by rejecting the applicability of incompleteness to
> natural language. Now you are rejecting the applicability of
> incompleteness to formal languages too (despite the fact the standard
> definitions specifically refer to them.)
>

I think you are confusing well-formedness with truth/proof. Formal
language theory does not use completeness.

Rob Freeman

Oct 3, 2007, 8:04:39 AM
to grammatical-i...@googlegroups.com
Alex,

OK, I need to be more precise.

Assume when I say formal language I mean a corresponding formal
system. Do you now agree it can be incomplete?

What would the corresponding formal language look like?

-Rob

Alex Clark

Oct 3, 2007, 10:20:23 AM
to Grammatical Incompleteness
I don't think we can proceed until I understand what you mean by proof
and truth in formal language theory.

As I asked before:

What do "truth" and "provability" refer to when you are talking about
a formal language.

Consider the formal language L = { a^n b^n | n > 0 }. What does truth


and
provability mean in this context?
"

Do you think this language is complete? If you explain what you mean
by truth, provability and completeness with this simple example, I

would be grateful. Suppose you have a CFG G that generates this
language L.

On Oct 3, 1:04 pm, "Rob Freeman" <gro...@chaoticlanguage.com> wrote:
> Alex,
>
> OK, I need to be more precise.
>
> Assume when I say formal language I mean a corresponding formal
> system. Do you now agree it can be incomplete?
>
> What would the corresponding formal language look like?
>
> -Rob
>

Rob Freeman

Oct 4, 2007, 4:55:50 AM
to grammatical-i...@googlegroups.com
On 10/3/07, Alex Clark <alexs...@googlemail.com> wrote:
>
> I don't think we can proceed until I understand what you mean by proof
> and truth in formal language theory.
>
> As I asked before:
>
> What do "truth" and "provability" refer to when you are talking about
> a formal language.
> Consider the formal language L = { a^n b^n | n > 0 }. What does truth
> and provability mean in this context?
>
> Do you think this language is complete? If you explain what you mean
> by truth, provability and completeness with this simple example, I
> would be grateful. Suppose you have a CFG G that generates this
> language L.

You are right. What I am thinking of is a formal system, not a formal language.

I have not been making the distinction and that is confusing.

And it is not enough just to consider the corresponding formal
language. That will be interesting, but not the system I am looking
for in itself.

It seems what I am really suggesting is that natural language is an
incomplete formal system of some kind.

Let's try to define it on those terms:

Let the axioms of the system be C.

Let the corresponding formal language be R.

Define proof to be that S in R can be generated from C by some rules
of inference (e.g. symbol "a" can be substituted for symbol "b", if it
is substituted in C.)

Let truth be the same (a trivial definition of truth, but I think we
can still split truth and proof.)

For completeness we have that for every sentence S expressible in R
either S or ~S is derivable from C, and that all sentences so derived
are true.

In the case of your formal language L, I think this system will be
(trivially?) complete. If "a" and "b" can be substituted in C, then it
should always be possible to substitute them, and all sentences
generated from C will be true.

But if R is random, I think this system will be incomplete, because
sometimes it will be possible to substitute "a" and "b" and sometimes
not.
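To make this concrete, here is a rough sketch in Python of one possible
reading of the definitions above, nothing more. The toy corpus, the
helper names, and the two candidate substitution rules (sharing at
least one context in C versus sharing all contexts in C) are mine,
purely for illustration:

# Sketch of the substitution system described above: axioms C, and "proof"
# of a string S as derivability from some axiom by substituting one word
# for another, under a substitution rule read off from C itself.

C = [
    "I supported the man with a stick",
    "I accompanied the man with a stick",
    "Tom supported the tomato plant with a stick",
]

def context_sets(sentences):
    """Map each word to the set of contexts (sentence with that word blanked)."""
    ctxs = {}
    for s in sentences:
        toks = s.split()
        for i, w in enumerate(toks):
            ctxs.setdefault(w, set()).add(" ".join(toks[:i] + ["_"] + toks[i + 1:]))
    return ctxs

def derivable(s, sentences, strict):
    """Is s an axiom, or obtainable from an axiom by one licensed substitution?

    strict=False: two words are substitutable if they share any context in C.
    strict=True:  two words are substitutable only if they share all contexts.
    """
    if s in sentences:
        return True
    ctxs = context_sets(sentences)
    for axiom in sentences:
        toks = axiom.split()
        for i, w in enumerate(toks):
            for other in ctxs:
                if other == w:
                    continue
                licensed = ((ctxs[w] == ctxs[other]) if strict
                            else bool(ctxs[w] & ctxs[other]))
                if licensed and " ".join(toks[:i] + [other] + toks[i + 1:]) == s:
                    return True
    return False

for s in ["I accompanied the man with a stick",
          "Tom accompanied the tomato plant with a stick"]:
    print(s)
    print("  loose rule:", derivable(s, C, strict=False),
          "| strict rule:", derivable(s, C, strict=True))

Under the loose rule the anomalous sentence comes out "provable" even
though (by hypothesis) it is not acceptable, which breaks the "all
sentences so derived are true" half of the definition. Under the strict
rule it is neither derivable nor refutable (there is no "~" to derive
its negation). Either way the completeness condition quoted earlier
fails, which is the sense in which sometimes it will be possible to
substitute and sometimes not.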

Perhaps R doesn't need to be completely random.

What would the formal language associated with an incomplete formal
system look like?

More importantly, could a formal system of this kind explain what we
see for natural language, i.e. is natural language an incomplete
formal system, generating new sentences by substitution on a random,
or at least syntactically incongruent, corpus?

-Rob

David Brooks

Oct 4, 2007, 6:59:24 AM
to grammatical-i...@googlegroups.com
Rob,

> But if R is random, I think this system will be incomplete, because
> sometimes it will be possible to substitute "a" and "b" and sometimes
> not.

I don't think that this constitutes a proof of incompleteness. Godel's
theorem states that a formal system is incomplete because you can have a
self-contradictory sentence (i.e. a sentence S is only true if ~S is
also true). This doesn't seem to be the case for "a" and "b" because
they are never in contradiction when used: an instance of "a" is either
the version that can be substituted with "b", or it is the version where
substitution is not possible, but it is never /both at the same time/.

But perhaps I am just conflating issues of formal language and formal
systems...

Cheers,
D

Alex Clark

Oct 4, 2007, 8:11:57 AM
to Grammatical Incompleteness
If you identify language membership with truth, and truth with proof,
then all languages are trivially complete --
since every string either is or is not in the language.
If you identify proof with derivation, then all languages (except
Sigma^*) are trivially incomplete, since they have some strings not in
the language. This seems close to the logic programming NLP paradigm:
parsing as deduction.
But you don't have the logical symbol "~", so you can't derive ~S.


On Oct 4, 9:55 am, "Rob Freeman" <gro...@chaoticlanguage.com> wrote:

Rich Cooper

Oct 4, 2007, 1:20:33 PM
to grammatical-i...@googlegroups.com

> On 10/3/07, Rich Cooper <Ri...@englishlogickernel.com> wrote:
> ...
>
> False and true ideas, such as belief systems, have been studied before.
But
> I am not aware of projects that link belief systems into linguistic
> problems. Does anyone know of such work?

Wittgenstein. Almost(?) all his work can be understood in these terms.

==========================================
I don't think Wittgenstein discusses the usefulness of false ideas, though.
That's what I'm trying to focus on. Specifically, why are some false ideas
more useful than others, and what can be measured about an idea that makes
it more or less useful?
=========================================

He believed all problems of philosophy were problems of language.

He also hints enticingly at limitations to logic.

> Your comments solicited on how ideas can start from nonsense (pre Galilean
> theories? pre Socratic mathematics?) and ultimately lead to progress.

Thomas Kuhn is a great reference on the subjectivity of truth. He has
some nice examples of how science sometimes goes backwards and accepts
truths it formerly rejected.

=========================================
Do you have a URL for one of his works you would recommend?
=========================================


From memory, one is "action at a distance", which was rejected by
Galileo (for whom tides were a killer argument for the movement of the
earth), until Newton reintroduced it as a convenience (only to have it
rejected by Einstein again.)

It seems certain all truths are partial. Though as a truth, that is
sure to be partial too.

-Rob

=============================
Thanks for your inputs
-Rich
=============================

Rob Freeman

Oct 5, 2007, 2:06:18 AM
to grammatical-i...@googlegroups.com
On 10/4/07, David Brooks <d.j.b...@cs.bham.ac.uk> wrote:
>
> > But if R is random, I think this system will be incomplete, because
> > sometimes it will be possible to substitute "a" and "b" and sometimes
> > not.
>
> I don't think that this constitutes a proof of incompleteness. Godel's
> theorem states that a formal system is incomplete because you can have a
> self-contradictory sentence (i.e. a sentence S is only true if ~S is
> also true). This doesn't seem to be the case for "a" and "b" because
> they are never in contradiction when used: an instance of "a" is either
> the version that can be substituted with "b", or it is the version where
> substitution is not possible, but it is never /both at the same time/.

Goedel uses a sentence. We use a corpus. It is the corpus which
contradicts itself. The corpus needs to be interpreted as a whole, as
a single string.

I think what we are really looking at is the power of a single string
to code two meanings. Goedel used this to make a string refer to
itself, and further, contradict itself. But he only needed to make it
refer to itself for the proof. The interesting property is that we can
get two meanings into the same string.

Once we have a string which can code two meanings, the sky is the
limit. Why not code more meanings into the string?

The amount of information you can squeeze into a single string is the
beauty of this kind of system.

The cost is that as you pack more information in, each symbol
individually becomes more and more ambiguous, and the string becomes
less and less regular. That ambiguity can only be resolved with
reference to the whole string (so if natural language works like this,
the corpus really is its most compact description.)
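As a crude illustration of that trade-off (a sketch only, measuring
nothing linguistic): a general-purpose compressor gives a rough upper
bound on the length of a string's shortest description, and the less
regular the string, the closer that bound stays to the length of the
string itself:

# Crude illustration: zlib-compressed size as a rough proxy for the length
# of a string's shortest description. A highly regular string compresses to
# almost nothing; a maximally irregular (random) string is already close to
# its own most compact description.

import os
import zlib

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data, 9)) / len(data)

regular = b"ab" * 5000          # very regular, highly compressible
random_ = os.urandom(10000)     # incompressible, information-dense

print("regular:", round(ratio(regular), 3))   # close to 0
print("random :", round(ratio(random_), 3))   # close to 1

A natural-language corpus sits somewhere between these two extremes; on
the view above, the interesting part is precisely the residue that
resists any shorter description.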

-Rob

Rob Freeman

Oct 5, 2007, 2:06:31 AM
to grammatical-i...@googlegroups.com
On 10/4/07, Alex Clark <alexs...@googlemail.com> wrote:
>
> If you identify language membership with truth, and truth with proof;
> then all languages are trivially complete --
> since every string either is or is not in the language.
> If you identify prove with derivation, then all languages (except
> Sigma^*) are trivially incomplete, since they have some strings not in
> the language. This seems close to the logic programming NLP paradigm:
> parsing as deduction.
> But you don't have the logical symbol "~"; so you can't derive ~S.

Am I right in thinking you would only need this symbol to have a
statement _say_ something was false? You would not need it for
something to actually _be_ false?

Note I'm not trying to make meaningful statements at the moment, only
code the complexity of natural language syntax. If I want to make that
syntax say things later, I could interpret the symbols.

In fact since I'm identifying truth with proof, I don't think I can
get falsehood anyway, only contradiction. But contradiction (and
degrees of proof) is probably enough for natural language.

If you identify language membership with truth, then falsehood is
possible. But I think a system on those lines is indeed trivial, and
reduces to an actual formal language in all important respects.

In my case there are no constraints on what strings can be in the
language, so I can't have any strings not in the language, I can only
have contradiction of proof.

-Rob

Rob Freeman

Oct 5, 2007, 2:09:50 AM
to grammatical-i...@googlegroups.com
On 10/5/07, Rich Cooper <Ri...@englishlogickernel.com> wrote:
>
>
> > On 10/3/07, Rich Cooper <Ri...@englishlogickernel.com> wrote:
> > ...
> >
> > False and true ideas, such as belief systems, have been studied before.
> > But I am not aware of projects that link belief systems into linguistic
> > problems. Does anyone know of such work?
>
> Wittgenstein. Almost(?) all his work can be understood in these terms.
> ==========================================
> I don't think Wittgenstein discusses the usefulness of false ideas, though.
> That's what I'm trying to focus on. Specifically, why are some false ideas
> more useful than others, and what can be measured about an idea that makes
> it more or less useful?
> =========================================

Defining "true" and "false" is the key issue for me.

> > Your comments solicited on how ideas can start from nonsense (pre Galilean
> > theories? pre Socratic mathematics?) and ultimately lead to progress.
>
> Thomas Kuhn is a great reference on the subjectivity of truth. He has
> some nice examples of how science sometimes goes backwards and accepts
> truths it formerly rejected.
> =========================================
> Do you have a URL for one of his works you would recommend?
> =========================================

I've only read "The Structure of Scientific Revolutions." I don't know
if that is on-line anywhere.

It is a very small book and worth reading in the original. Reading it
gave me a very different impression from the one I gained from reviews.

Most reviews of Kuhn address the implications of his ideas for what
you might call the "dignity of science." He made a number of
controversial statements, such as that "normal" science does not seek
new truth, but only to prove what is already known. That alienates a
lot of people. In particular it alienates exactly those who have power
in any science at any given moment!

He implies they are dogmatists.

Maybe so. But in fact he values dogma, both as a focus for
observation, and as a catalyst for change.

Just as your Scientific American article points out, facts are
meaningless in the absence of theory. You might as well list the
pebbles in a quarry. Science needs to limit its view. This focus
eventually makes it possible to observe the facts which will lead to
the next change.

Here are some especially relevant quotes:

pp. 15-16
"In the absence of a paradigm or some candidate for paradigm, all of
the facts that could possibly pertain to the development of a given
science are likely to seem equally relevant. As a result, early
fact-gathering is a far more nearly random activity than the one that
subsequent scientific development makes familiar. Furthermore, in the
absence of a reason for seeking some particular form of more recondite
information, early fact-gathering is usually restricted to the wealth
of data that lie ready to hand. ... though this sort of
fact-collecting has been essential to the origin of many significant
sciences, anyone who examines, for example, Pliny's encyclopedic
writings or the Baconian natural histories of the seventeenth century
will discover that it produces a morass. ... In addition, since any
description must be partial, the typical natural history often omits
from its immensely circumstantial accounts just those details that
later scientists will find sources of important illumination. ...
Moreover, since the casual fact-gatherer seldom possesses the time or
the tools to be critical, the natural histories often juxtapose
descriptions like the above with others ... that we are now quite
unable to confirm. ...

This is the situation that creates the schools characteristic of the
early stages of a science's development. No natural history can be
interpreted in the absence of at least some implicit body of
intertwined theoretical and methodological belief that permits
selection, evaluation, and criticism."

p. 48
"The pre-paradigm period, in particular, is regularly marked by
frequent and deep debates over legitimate methods, problems, and
standards of solution, though these serve rather to define schools
than to produce agreement."

p. 65
"By ensuring that the paradigm will not be too easily surrendered,
resistance guarantees that scientists will not be lightly distracted
and that the anomalies that lead to paradigm change will penetrate
existing knowledge to the core. The very fact that a significant
scientific novelty so often emerges simultaneously from several
laboratories is an index both of the strongly traditional nature of
normal science and to the completeness with which that traditional
pursuit prepares the way for its own change."

p. 109
"...since nature is too complex and varied to be explored at random,
that map is as essential as observation and experiment to science's
continuing development."

Also this is nice:

p. 18
'Francis Bacon's acute methodological dictum: "Truth emerges more
readily from error than from confusion." (Bacon, op. cit., p. 210)'

I'm reminded of Pauli's(?) famous put down (which I see your Sci. Am.
author repeats) that someone was "not even wrong", meaning their
argument was simply irrelevant (though what is "relevant" can change,
which is the more famous side of Kuhn's book.)

-Rob

Richard Cooper

Oct 5, 2007, 12:12:44 PM
to grammatical-i...@googlegroups.com, so...@west.poly.edu
Thanks Rob, Kuhn's quotes below are good examples of how theories are iteratively refined to make them more and more correct.

But I'm looking for a metric of usefulness that can be an estimate of correctness. L. A. Zadeh's fuzzy logic is an example, though John Sowa is not convinced about the epistemology of using the logical functions and, or, not over the fuzzy metrics to make a full algebra of fuzziness.

Another example is the method of rough sets, which approximates a set rather than defining it with complete accuracy. One metric for that method might be the percentage of correctly classified elements of a set which show up in the rough set.
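For instance, the standard rough-set accuracy of approximation (size of
the lower approximation over size of the upper approximation) is one
concrete version of such a metric. A minimal sketch in Python, with an
attribute table invented purely for illustration:

# Sketch of rough-set approximation: approximate a target set X by the
# equivalence classes of an indiscernibility relation over observable
# attributes. The toy attribute table is invented for illustration.

from collections import defaultdict

# objects described by a single observable attribute (a crude POS-like tag)
attributes = {
    "thin": "adj",
    "tall": "adj",
    "happy": "adj",
    "an Englishman": "np",
    "a doctor": "np",
}

# the target set we would like to recover (say, items attested after "I am _")
X = {"thin", "tall", "happy", "an Englishman"}

# indiscernibility classes: objects with identical attribute values
classes = defaultdict(set)
for obj, value in attributes.items():
    classes[value].add(obj)

lower, upper = set(), set()
for c in classes.values():
    if c <= X:      # class entirely inside X: certainly in the rough set
        lower |= c
    if c & X:       # class overlapping X: possibly in the rough set
        upper |= c

print("lower approximation:", lower)
print("upper approximation:", upper)
print("accuracy:", len(lower) / len(upper))  # 1.0 means X is exactly definable

Here the accuracy comes out at 0.6: the attributes are too coarse to
define X exactly, and the shortfall is itself a measure of how useful
the attribute set is for the classification.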

Each of these could be candidate measures of usefulness in that they provide estimates in more detail than logic, or context free grammars. But both have their limitations. I'm hoping to generalize on fuzzy logic and rough sets to find some way to estimate the usefulness of a belief or theory.

In traditional pattern recognition, functional estimates are developed that are useful, in the sense that I am using, because they offer a more reliable prediction of the class to which a pattern belongs than pure random prediction does. Most pattern recognition work relies on linear math, though some relies on layers of decision making, identification of lower level primitives, and so on.

In particular, I'm interested in applications relating to linguistics. Unsupervised learning, in particular, has much to gain from a more refined theory of how useful an interpretation of a statement can be.

Thanks,
Rich
http://www.EnglishLogicKernel.com

------Original Mail------
From: "Rob Freeman" <gro...@chaoticlanguage.com>
To: <grammatical-i...@googlegroups.com>
Sent: Fri, 5 Oct 2007 14:09:50 +0800
Subject: [grammatical-incompleteness] Re: The value of ideas in language

Rob Freeman

Oct 6, 2007, 1:15:19 AM
to grammatical-i...@googlegroups.com
On 10/6/07, Richard Cooper <ri...@englishlogickernel.com> wrote:
>
> Thanks Rob, Kuhn's quotes below are good examples of how theories
> are iteratively refined to make them more and more correct.

If you equate correctness with usefulness, and usefulness with
prediction, there might be a sense in which theories become more and
more "correct". Though the important point for me is that it seems
"correctness" itself can only ever be defined in relation to a theory
(and proof is contradictory?)

> In traditional pattern recognition, functional estimates are developed that are
> useful, in the sense that I am using, because they offer a more reliable
> prediction of the class to which a pattern belongs than pure random prediction
> does. Most pattern recognition work relies on linear math, though some relies
> on layers of decision making, identification of lower level primitives, and so on.

You may want to look at the AI work of Schmidhuber and co. which
defines intelligence in terms of prediction. This enables them to
avoid the pitfalls of logic, at least. They are hoping to build a new,
random, AI on this basis:

http://www.idsia.ch/~juergen/newai/newai.html

I believe Marcus Hutter extended this definition of intelligence to
show it is equivalent to some kind of "usefulness". See his book:

http://www.hutter1.net/

E.g. Hutter - "Most, if not all known facets of intelligence can be
formulated as goal driven or, more precisely, as maximizing some
utility function. It is, therefore, sufficient to study goal driven
AI. ... The problem is that, except for special cases, we know neither
the utility function, nor the environment in which the system will
operate, in advance. The mathematical theory, coined AIXI, is supposed
to solve these problems."

As for myself, while I believe any complete description of the world
must be statistically random, I don't think statistical randomness
itself is the way to approach it.

I prefer Chaitin's algorithmic randomness.

> In particular, I'm interested in applications relating to linguistics. Unsupervised
> learning, in particular, has much to gain from a more refined theory of how
> useful an interpretation of a statement can be.

This sounds like the premise of functional linguistics. You recall I
recommended you look at systemic functional linguistics some years
ago.

It is mostly the basis in contrast I like about functional
linguistics. Their efforts at learning have failed like all the rest.

I think it is the idea of "learning" which is the problem.

It is central to what I have been saying here that I believe
"learning" to be the wrong paradigm in relation to most collections of
observations. Truth is subjective. The correct paradigm is search. We
must search for the "truth" relative to each problem as it presents
itself.

Language, and in particular natural language syntax, is just one
example of that.

Though because uniquely in language we see thoughts coded back into
random observations (strings), which we can hypothesize are similar to
those from which they came, language may offer unique insight into the
relationship between observation and meaning. So an understanding of
the way language works is probably the key to understanding the
rest.

-Rob
