[Link Grammar] what's the word for?


Borislav Iordanov

May 4, 2010, 5:33:06 PM
to link-grammar
Hi linkgrammar/relex users,

Just curious: has anybody here run into interesting software/research
that is capable of finding the correct term corresponding to a
definition/description? As a simple example, one would say "a piece of
furniture used to eat on, typically with four legs and traditionally
made out of wood", and the program would answer "table". Not that I
need that or that I seriously intend to work on the problem, just mere
curiosity :)

Best to all,
Boris


Simon Stuart

May 5, 2010, 7:28:46 AM
to link-g...@googlegroups.com

Boris,

The only example of such a thing I have seen is in the AI of Asimov's robots, which allows them to examine items and determine whether their physical properties match a known noun in their preprogrammed vocabulary. Beyond that, I have yet to see a PC-based implementation of such an engine... though one would definitely be interesting to play with.

Regards,

Simon

------Original Mail------
From: "Borislav Iordanov" <borislav...@gmail.com>
To: "link-grammar" <link-g...@googlegroups.com>
Sent: Tue, 4 May 2010 17:33:06 -0400
Subject: [Link Grammar] what's the word for?




Josh Rowe

May 5, 2010, 10:23:34 AM
to link-g...@googlegroups.com
I believe that what you're talking about is forward-chaining inference, and OpenCog does it. Linas and/or Jared would know more specifically. As for an implementation, I believe it would be pretty simple to create specific cases that cogita-bot could handle... As I've yet to actually *use* it, however, I could be wrong.

Anyway, here's a fun link - lots of good references, concise, and edifying: http://en.wikipedia.org/wiki/Forward_chaining
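
For concreteness, a minimal forward-chaining loop looks something like
this in Python; the facts and rules are invented for illustration, and
this is not OpenCog code:

# Minimal forward-chaining sketch: repeatedly apply rules whose premises
# are all already known, until no new facts can be derived.
# The facts and rules below are invented for illustration only.

facts = {"has_legs", "has_flat_top", "used_for_eating"}

# Each rule: (set of premises, conclusion)
rules = [
    ({"has_legs", "has_flat_top"}, "is_furniture"),
    ({"is_furniture", "used_for_eating"}, "is_table"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# -> includes 'is_furniture' and 'is_table'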

Borislav Iordanov

May 5, 2010, 10:38:54 AM
to link-g...@googlegroups.com
Well, I doubt that. Forward chaining is just an inference strategy,
useful when you have lots of facts and rules. And I'm not sure that
problem is best approached with inference. It sounds to me more like
content-addressable, associative memory. My intuition is that an
attempt to solve the problem well would lead to an interesting
representation of lexical semantics, something much richer than, say,
WordNet.

Best,
Boris
--
http://www.kobrix.com - HGDB graph database, Java Scripting IDE, NLP
http://kobrix.blogspot.com - news and rants

"Frozen brains tell no tales."

-- Buckethead

Linas Vepstas

May 5, 2010, 10:57:11 AM
to link-g...@googlegroups.com
On 4 May 2010 16:33, Borislav Iordanov <borislav...@gmail.com> wrote:
> Hi linkgrammar/relex users,
>
> Just curious: has anybody here run into interesting software/research
> that is capable of finding the correct term corresponding to a
> definition/description? As a simple example, one would say "a piece of
> furniture used to eat on, typically with four legs and traditionally
> made out of wood", and the program would answer "table". Not that I
> need that or that I seriously intend to work on the problem, just mere
> curiosity :)

Well, IBM is working on an AI that they hope will play Jeopardy! on
live TV sometime soon. Presumably, it will even answer in the form of
a question: "What is a table?"

--linas

Rich Cooper

May 5, 2010, 11:53:33 AM
to link-g...@googlegroups.com
Hi Boris,

I have an app that does the reverse of that: it reads a patent claim
statement (a very formal kind of English statement), extracts the
claim's key words, and then displays a list of sentences in the
patent's specification that mention any of the key words highlighted
in the claim statement.
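
The keyword-matching core of such a claim chart can be sketched in a
few lines of Python; this is only a toy illustration with made-up claim
and specification text, not the actual app:

# Rough sketch of the keyword-matching step only (the real app does much
# more); the claim and specification sentences here are placeholders.
import re

claim = ("A piece of furniture comprising a flat horizontal surface "
         "supported by a plurality of legs")

spec_sentences = [
    "The invention provides a flat surface for placing objects.",
    "Four legs support the surface at a convenient height.",
    "The device may be manufactured from wood or metal.",
]

stopwords = {"a", "of", "by", "the", "for", "or", "and", "may", "be", "from", "at"}

def keywords(text):
    return {w for w in re.findall(r"[a-z]+", text.lower())
            if w not in stopwords and len(w) > 2}

claim_words = keywords(claim)

# For each claim keyword, list the specification sentences mentioning it.
for word in sorted(claim_words):
    hits = [s for s in spec_sentences if word in keywords(s)]
    if hits:
        print(word, "->", hits)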

In a patent, the inventor must demonstrate novelty of the idea, show that it
hasn't been made publicly available by the inventor or anyone else before,
and compare the textual descriptions to the state of the prior art. This is
usually done with a "Claim Chart", which is what my app displays. If you
are interested, you can download and run it from my website:

http://www.englishlogickernel.com/elkforpatents.html

Read the readme file, run the setup.exe installer, and you will have
full access to the program.

HTH,
-Rich

Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2

-----Original Message-----
From: link-g...@googlegroups.com [mailto:link-g...@googlegroups.com]
On Behalf Of Borislav Iordanov
Sent: Tuesday, May 04, 2010 2:33 PM
To: link-grammar
Subject: [Link Grammar] what's the word for?

Simon Stuart

May 5, 2010, 3:19:27 PM
to link-g...@googlegroups.com

I suppose the kind of engine one would need to achieve what you originally asked for depends entirely on its input. For example, if you want a robot with "eyes" (cameras) to examine a physical object and make assertions about various aspects of it, forming a description that is in turn used to retrieve the noun best associated with it, then that engine could work in more than one way. The most obvious would be to convert the individual attributes of the physical object into descriptive phrases and parse that phrase or phrase-set to retrieve an appropriate noun, but it could use an entirely different process.

Now, presuming for a moment that such a robot used the approach I described above, that engine would be contextually identical to a basic console-type application (such as the LGP command-line app) where you simply type in a descriptive phrase or phrase-set, which then retrieves an appropriate noun or set of nouns. I guess it would mostly be a matter of creating a dictionary of nouns with one or more appropriate descriptions paired to each for comparison.

Now that I think about it, I may have just wasted time producing a complicated method description where a simple one would suffice; if so, I apologise.

If there is a real need for a simple engine such as the one I over-described above, I'd be happy to lay the framework for one in my "spare time"... though writing the "dictionaries" with appropriate descriptions would be a very long and arduous task, most likely requiring a group of contributors (unless the LGP dictionaries already have paired descriptions for nouns.... I wouldn't know).

Regards,

Simon

------Original Mail------
From: "Borislav Iordanov" <borislav...@gmail.com>

To: <link-g...@googlegroups.com>
Sent: Wed, 5 May 2010 10:38:54 -0400
Subject: Re: [Link Grammar] what's the word for?

Borislav Iordanov

May 5, 2010, 3:51:20 PM
to link-g...@googlegroups.com
Hi Simon,

> If there is a real need for a simple engine such as I overly-described
> above, I'd be happy to lay the framework for one in my "spare time"...
> though writing the "dictionaries" with appropriate descriptions would be a
> very long and arduous task most likely requiring a group of contributors
> (unless the LGP dictionaries already have paired descriptions on nouns.... I
> wouldn't know).

The key difficulty, in my opinion, is that you can't offer an
exhaustive set of possible descriptions for all nouns (let alone all
words) such that a simple lookup will find the noun. As a simple
illustration of how difficult this is: assume you have such a list,
pick a word, take the longest description of it from the dictionary,
then pick any word from that longest description and replace it with
one of its own descriptions => you've produced an even longer
description, one not in your original list :) So that's the point:
designing the structure of such a dictionary could very well be the
key to solving many NLP problems, which makes it an interesting angle
to approach the problem from.
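
To make the substitution argument concrete, here is a toy illustration
in Python (the two-entry mini-dictionary is invented):

# Toy illustration of the substitution argument: expanding one word of a
# definition with that word's own definition yields a longer description
# that no finite list of definitions will contain. Entries are invented.
definitions = {
    "table": "a piece of furniture used to eat on",
    "furniture": "movable objects intended to support human activities",
}

base = definitions["table"]
expanded = base.replace("furniture", definitions["furniture"])

print(base)      # a piece of furniture used to eat on
print(expanded)  # ... much longer paraphrase of the same thing
print(expanded in definitions.values())  # False: not in the original list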

Best,

Rich Elk

May 5, 2010, 4:14:10 PM
to link-g...@googlegroups.com
Hi Simon and Boris,

If you do a Google search on the phrase "define:word", you get various
definitions that Google extracts from the web.  By iterating that search
with each "word" in your dictionary, you could get a rich set of
gloss-style definitions, each of which could be associated with that word
in a database.  So that lets you build a database of word definitions
automatically.

Then, using the LGP, you could automate parsing each of those definition
sentences.  That would let you characterize each word in the set with
descriptive sentences.  Using the *.v, *.a, *.n et cetera tags could help
you take a new sentence D, describing the thing, and match it against the
database of definitions.  Finally, the best-matching definitions (possibly
several for common words) would indicate what the word is defined to mean,
at least according to whoever gave Google the original definition page.
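
As a rough sketch of just the matching step, with a toy hand-written
database standing in for the harvested definitions, and plain word
overlap standing in for anything LGP-informed:

# Sketch of the matching step only: score a new description D against a
# small database of harvested definitions by word overlap. The database
# here is toy data; real use would add LGP tags, stemming, IDF weighting.
definitions = {
    "table": ["a piece of furniture with a flat top and legs",
              "furniture used for eating or working on"],
    "chair": ["a seat for one person, typically with a back and four legs"],
    "lamp":  ["a device that produces light"],
}

def overlap(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))   # Jaccard similarity

def best_matches(description, top_n=3):
    scored = [(max(overlap(description, d) for d in defs), word)
              for word, defs in definitions.items()]
    return sorted(scored, reverse=True)[:top_n]

print(best_matches("a piece of furniture used to eat on, "
                   "typically with four legs"))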

Even Google Desktop search could find the definitions it thinks match
yours best.  However, you may need specialized processing to set up the
pages on your desktop that Google Desktop search uses.

By the way, I suppose you can search Google Scholar for more academically
meaningful definitions too.  And of course, WordNet has lots of glosses
(though not for every word in the dictionary).

HTH,
-Rich

Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2

-----Original Message-----
From: link-g...@googlegroups.com [mailto:link-g...@googlegroups.com]
On Behalf Of Borislav Iordanov
Sent: Wednesday, May 05, 2010 12:51 PM
To: link-g...@googlegroups.com
Subject: Re: [Link Grammar] what's the word for?

Linas Vepstas

May 6, 2010, 12:38:36 AM
to link-g...@googlegroups.com
On 5 May 2010 15:14, Rich Elk <richco...@gmail.com> wrote:
> Hi Simon and Boris,
>
> If you google search the phrase "define:word", you get various definitions
> that google extracts from the web.  By iterating that search with each
> "word" in your dictionary, you could get a rich set of glossy definitions,
> each of which could be associated with that word in a database.  So that
> lets you automatically get a database of word definitions.
>
> Then, using the LGP, you could automate parsing each of those definition
> sentences.  That could let you characterize each word in the set with
> descriptive sentences.  Using the *.v, *.a, *.n et cetera tags could help
> you take a new sentence D, describing the thing, and match it against the
> database of definitions.  Finally, the best matching definitions (possibly
> several for common words) would be an indication of what the word is defined
> to mean, at least to whoever gave google the original definition page.


Please be aware that people have been trying to do this for
quite a while, with mixed success. It's not easy, and if one could
do it well, one would be well on the path to true AGI. Certainly
the above, and far more sophisticated/advanced attempts, have
been tried. Google Senseval and SemEval for academic results
from the past decade, and SIGLEX for the society that engages
in such efforts.

Also -- FWIW, the "sense dictionary" that is provided as an
extra add-on to link-grammar was generated using one of the
above algorithms. The add-on then tries to cross-correlate word
sense with syntactic structure. It's imprecise, but very fast
(much faster than the full-fledged algorithms). I've started writing
a paper describing the assorted results that came out of this.

-- Linas

Rich Cooper

May 6, 2010, 12:18:51 PM
to link-g...@googlegroups.com

Hi Linas,

Comments below;

-Rich

Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2

-----Original Message-----
From: link-g...@googlegroups.com [mailto:link-g...@googlegroups.com] On Behalf Of Linas Vepstas
Sent: Wednesday, May 05, 2010 9:39 PM
To: link-g...@googlegroups.com
Subject: Re: [Link Grammar] what's the word for?

> On 5 May 2010 15:14, Rich Elk <richco...@gmail.com> wrote:
>> Hi Simon and Boris,
>>
>> If you google search the phrase "define:word", you get various definitions
>> that google extracts from the web.  By iterating that search with each
>> "word" in your dictionary, you could get a rich set of glossy definitions,
>> each of which could be associated with that word in a database.  So that
>> lets you automatically get a database of word definitions.
>>
>> Then, using the LGP, you could automate parsing each of those definition
>> sentences.  That could let you characterize each word in the set with
>> descriptive sentences.  Using the *.v, *.a, *.n et cetera tags could help
>> you take a new sentence D, describing the thing, and match it against the
>> database of definitions.  Finally, the best matching definitions (possibly
>> several for common words) would be an indication of what the word is defined
>> to mean, at least to whoever gave google the original definition page.
>
> Please be aware that people have been trying to do this for
> quite a while, with mixed success.

RGC: do you have any references to publications about how they did it, and how well they thought it worked?

> Its not easy, and if one could
> do it well, one would be well on the path of true AGI. Certainly
> the above, and far more sophisticated/advanced attempts
> have been tried.  Google senseval and semeval for academic
> results for the past decade, and SIGLEX for the society
> that engages in such efforts.

Yes, all kinds of sentence-matching algorithms have been tried, without much success of commercial quality.

> Also -- FWIW the "sense dictionary" that is provided as an
> extra add-on to link-grammar, was generated using one of the
> above algorithms. The add-on then tries to cross-correlate word
> sense with syntactic structure. Its imprecise, but very fast.
> (much faster than the full-fledged algos)  I've started writing
> a paper describing the assorted results that came out of this.

I would love to see your paper when it's ready! I'm still working on some ideas of my own, but have had too much else going on recently to make much progress. I would really like to see your work on this.

-Rich

> -- Linas


Amit Joshi

May 6, 2010, 6:48:32 AM
to link-g...@googlegroups.com
This is a complicated kind of reverse lookup. Last time, I was trying
to remember the word 'floaters', and the only way I could recover it
was by posting a definition in my own words on a Yahoo Q&A and getting
a reply from other users within a few minutes.

Such reverse lookups are complicated because we can define the same
term in many different ways, and not necessarily in words similar to
those used in any of the dictionaries we consult.



Sent from my iPod

On 5 May 2010, at 16:53, "Rich Cooper" <ri...@englishlogickernel.com>
wrote:

Simon Stuart

May 6, 2010, 5:07:37 PM
to link-g...@googlegroups.com

Just considering ideas on this subject...

Wouldn't it be easier to store a series of "tags" rather than actual descriptions? For example, to describe a chair one could have tags such as the following...

3D, Flat Surface, Legs, Wood ~ Metal ~ Plastic, ?Arms, ?Round, ?Rectangular, ?Square

Where "?" represents a "possible" charecteristic, "~" represents an "OR" type conditional... this way you wouldn't have to store every "possible" description for a noun, but rather its most common attributes.... in this way one coud determine the most appropriate probable noun for a description based on whichever noun contains the most matching "tags" (or "keywords" if you preffer).

Again, this is just a vague idea... it may lead nowhere, but I think between everyone in the community interested in this topic, we should be able to come up with at least one practical method to elaborate on.

Regards,

Simon J Stuart

------Original Mail------
From: "Amit Joshi" <joshiami...@gmail.com>
To: "link-g...@googlegroups.com" <link-g...@googlegroups.com>
Sent: Thu, 6 May 2010 11:48:32 +0100
Subject: Re: [Link Grammar] what's the word for?



Joel Pitt

May 6, 2010, 5:14:32 PM
to link-g...@googlegroups.com
On Fri, May 7, 2010 at 9:07 AM, Simon Stuart <kra...@greycascade.com> wrote:
> Just considering ideas on this subject...
>
> Wouldn't it be easier to store a series of "tags" rather than actual
> descriptions. For example to describe a chair one could have tags such as
> the following...
>
> 3D, Flat Surface, Legs, Wood ~ Metal ~ Plastic, ?Arms, ?Round, ?Rectangular,
> ?Square
>
> Where "?" represents a "possible" charecteristic, "~" represents an "OR"
> type conditional... this way you wouldn't have to store every "possible"
> description for a noun, but rather its most common attributes.... in this
> way one coud determine the most appropriate probable noun for a description
> based on whichever noun contains the most matching "tags" (or "keywords" if
> you preffer).

This is getting close to what OpenCog does in terms of properties and
patterns associated with concepts... except OpenCog goes further and
gives the relationships imprecise probabilities that can be updated
with experience.
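
As a toy illustration of the general idea (and emphatically not
OpenCog's actual API), one could track each association as evidence
counts and derive a strength plus a confidence that grows with the
amount of evidence:

# Toy illustration only: track each noun->tag association as evidence
# counts, and derive a probability plus a crude confidence value.
from collections import defaultdict

class Association:
    def __init__(self):
        self.positive = 0   # times the tag was observed with the noun
        self.total = 0      # times the noun was observed at all

    def update(self, observed_with_tag):
        self.total += 1
        if observed_with_tag:
            self.positive += 1

    def strength(self):
        return self.positive / self.total if self.total else 0.0

    def confidence(self, k=10):
        # crude confidence: approaches 1 as evidence accumulates
        return self.total / (self.total + k)

assoc = defaultdict(Association)
for has_legs in [True, True, True, False, True]:   # invented observations
    assoc[("chair", "legs")].update(has_legs)

a = assoc[("chair", "legs")]
print(a.strength(), a.confidence())   # e.g. 0.8 0.333...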

Joel Pitt, PhD | http://ferrouswheel.me
OpenCog Developer | http://opencog.org
Board member, Humanity+ | http://humanityplus.org
+64 21 101 7308

Simon Stuart

May 6, 2010, 5:20:40 PM
to link-g...@googlegroups.com

Joel,

So OpenCog has relational linkages between the words within its dictionary, allowing tags from multiple similar (or "related") words to be considered within a given context?

I've only just started really looking at the OpenCog project, so I apologise for not already knowing its capabilities and methods.

Wouldn't it be brilliant if one could reduce the entire fundamental process of thought to a single equation (or algorithm) that could be reproduced digitally? Theoretically it must be possible, albeit incomprehensibly difficult to achieve!

Anyway, thanks for the info!

Simon J Stuart

------Original Mail------
From: "Joel Pitt" <joel...@gmail.com>
To: <link-g...@googlegroups.com>
Sent: Fri, 7 May 2010 09:14:32 +1200
Subject: Re: [Link Grammar] what's the word for?



Borislav Iordanov

May 6, 2010, 5:54:08 PM
to link-g...@googlegroups.com
Yep... I refrained from making the claim that this problem is
probably AI-complete, but it could be a fun project for OpenCog.

Simon, I believe tagging won't be enough either, even though it would
be much more successful than using definitions (which must be precise,
unambiguous, etc.). I believe it's really about deep meaning
representation. And while meaning representation is central to NLP/AI
research, I just haven't seen it attacked with this problem as a
starting point.

Best,
Boris
--
http://www.kobrix.com - HGDB graph database, Java Scripting IDE, NLP
http://kobrix.blogspot.com - news and rants

"Frozen brains tell no tales."

-- Buckethead

Linas Vepstas

May 9, 2010, 12:28:44 AM
to link-g...@googlegroups.com
On 6 May 2010 11:18, Rich Cooper <ri...@englishlogickernel.com> wrote:

> Please be aware that people have been trying to do this for
>
> quite a while, with mixed success.
>
>
>
> RGC: do you have any references of publications about how they did it, and
> how well they thought it worked?

Yes, Google Senseval and SemEval, and the other SIGLEX proceedings!

--linas