make[1]: Entering directory `/home/vagrant/link-grammar-5.0.0/link-grammar'
make[2]: Entering directory `/home/vagrant/link-grammar-5.0.0/link-grammar'
  CC analyze-linkage.lo
  CC api.lo
  CC build-disjuncts.lo
  CC constituents.lo
  CC count.lo
  CC dict-common.lo
  CC dictionary.lo
  CC read-dict.lo
  CC read-regex.lo
  CC word-file.lo
  CC disjunct-utils.lo
  CC disjuncts.lo
  CC error.lo
  CC expand.lo
  CC extract-links.lo
  CC fast-match.lo
  CC idiom.lo
  CC post-process.lo
  CC pp_knowledge.lo
  CC pp_lexer.lo
  CC pp_linkset.lo
  CC preparation.lo
  CC print.lo
  CC print-util.lo
  CC prune.lo
  CC regex-morph.lo
  CC resources.lo
  CC spellcheck-hun.lo
  CC string-set.lo
  CC tokenize.lo
  CC utilities.lo
  CC word-utils.lo
  CCLD liblink-grammar.la
make[2]: Leaving directory `/home/vagrant/link-grammar-5.0.0/link-grammar'
make[1]: Leaving directory `/home/vagrant/link-grammar-5.0.0/link-grammar'
Making all in viterbi
make[1]: Entering directory `/home/vagrant/link-grammar-5.0.0/viterbi'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/vagrant/link-grammar-5.0.0/viterbi'
Making all in bindings
make[1]: Entering directory `/home/vagrant/link-grammar-5.0.0/bindings'
Making all in java
make[2]: Entering directory `/home/vagrant/link-grammar-5.0.0/bindings/java'
make[2]: Nothing to be done for `all'.
make[2]: Leaving directory `/home/vagrant/link-grammar-5.0.0/bindings/java'
Making all in ocaml
make[2]: Entering directory `/home/vagrant/link-grammar-5.0.0/bindings/ocaml'
make[2]: Nothing to be done for `all'.
make[2]: Leaving directory `/home/vagrant/link-grammar-5.0.0/bindings/ocaml'
Making all in perl
make[2]: Entering directory `/home/vagrant/link-grammar-5.0.0/bindings/perl'
make all-am
make[3]: Entering directory `/home/vagrant/link-grammar-5.0.0/bindings/perl'
  CXX clinkgrammar_la-lg_perl_wrap.lo
../../bindings/perl/lg_perl_wrap.cc: In function 'void boot_clinkgrammar(PerlInterpreter*, CV*)':
../../bindings/perl/lg_perl_wrap.cc:4917:3: warning: unused variable 'items' [-Wunused-variable]
  CXXLD clinkgrammar.la
make[3]: Leaving directory `/home/vagrant/link-grammar-5.0.0/bindings/perl'
make[2]: Leaving directory `/home/vagrant/link-grammar-5.0.0/bindings/perl'
Making all in python
make[2]: Entering directory `/home/vagrant/link-grammar-5.0.0/bindings/python'
make all-am
make[3]: Entering directory `/home/vagrant/link-grammar-5.0.0/bindings/python'
  CXX _clinkgrammar_la-lg_python_wrap.lo
../../bindings/python/lg_python_wrap.cc: In function 'void init_clinkgrammar()':
../../bindings/python/lg_python_wrap.cc:6185:21: warning: variable 'md' set but not used [-Wunused-but-set-variable]
  CXXLD _clinkgrammar.la
make[3]: Leaving directory `/home/vagrant/link-grammar-5.0.0/bindings/python'
make[2]: Leaving directory `/home/vagrant/link-grammar-5.0.0/bindings/python'
make[2]: Entering directory `/home/vagrant/link-grammar-5.0.0/bindings'
make[2]: Nothing to be done for `all-am'.
make[2]: Leaving directory `/home/vagrant/link-grammar-5.0.0/bindings'
make[1]: Leaving directory `/home/vagrant/link-grammar-5.0.0/bindings'
Making all in link-parser
make[1]: Entering directory `/home/vagrant/link-grammar-5.0.0/link-parser'
  CC link-parser.o
  CC command-line.o
  CCLD link-parser
../link-grammar/.libs/liblink-grammar.so: undefined reference to `dictionary_create_from_db'
../link-grammar/.libs/liblink-grammar.so: undefined reference to `check_db'
collect2: ld returned 1 exit status
make[1]: *** [link-parser] Error 1
make[1]: Leaving directory `/home/vagrant/link-grammar-5.0.0/link-parser'
make: *** [all-recursive] Error 1
--
You received this message because you are subscribed to the Google Groups "link-grammar" group.
To unsubscribe from this group and stop receiving emails from it, send an email to link-grammar...@googlegroups.com.
To post to this group, send email to link-g...@googlegroups.com.
Visit this group at http://groups.google.com/group/link-grammar.
For more options, visit https://groups.google.com/d/optout.
Making all in link-grammar
make[1]: Entering directory `/home/vagrant/link-grammar-5.0.1/link-grammar'
make[2]: Entering directory `/home/vagrant/link-grammar-5.0.1/link-grammar'
  CC analyze-linkage.lo
  CC and.lo
  CC api.lo
  CC build-disjuncts.lo
  CC constituents.lo
  CC count.lo
  CC dict-common.lo
In file included from dict-common.c:23:0:
dict-sql/read-sql.h:25:9: warning: no previous prototype for 'check_db' [-Wmissing-prototypes]
dict-sql/read-sql.h:26:12: warning: no previous prototype for 'dictionary_create_from_db' [-Wmissing-prototypes]
  CC dictionary.lo
In file included from dict-file/dictionary.c:27:0:
./dict-sql/read-sql.h:25:9: warning: no previous prototype for 'check_db' [-Wmissing-prototypes]
./dict-sql/read-sql.h:26:12: warning: no previous prototype for 'dictionary_create_from_db' [-Wmissing-prototypes]
  CC read-dict.lo
  CC read-regex.lo
  CC word-file.lo
  CC read-sql.lo
  CC disjunct-utils.lo
  CC disjuncts.lo
  CC error.lo
  CC expand.lo
  CC extract-links.lo
  CC fast-match.lo
  CC fat.lo
  CC idiom.lo
  CC massage.lo
  CC post-process.lo
  CC pp_knowledge.lo
  CC pp_lexer.lo
  CC pp_linkset.lo
  CC prefix.lo
  CC preparation.lo
  CC print.lo
  CC print-util.lo
  CC prune.lo
  CC regex-morph.lo
  CC resources.lo
  CC spellcheck-aspell.lo
  CC spellcheck-hun.lo
  CC string-set.lo
  CC tokenize.lo
  CC utilities.lo
  CC word-utils.lo
  CCLD liblink-grammar.la
.libs/dictionary.o: In function `check_db':
/home/vagrant/link-grammar-5.0.1/link-grammar/./dict-sql/read-sql.h:25: multiple definition of `check_db'
.libs/dict-common.o:/home/vagrant/link-grammar-5.0.1/link-grammar/dict-sql/read-sql.h:25: first defined here
.libs/dictionary.o: In function `dictionary_create_from_db':
/home/vagrant/link-grammar-5.0.1/link-grammar/./dict-sql/read-sql.h:26: multiple definition of `dictionary_create_from_db'
.libs/dict-common.o:/home/vagrant/link-grammar-5.0.1/link-grammar/dict-sql/read-sql.h:26: first defined here
collect2: ld returned 1 exit status
make[2]: *** [liblink-grammar.la] Error 1
make[2]: Leaving directory `/home/vagrant/link-grammar-5.0.1/link-grammar'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/vagrant/link-grammar-5.0.1/link-grammar'
make: *** [all-recursive] Error 1
sudo apt-get install sqlite3 libsqlite3-dev
make clean
./configure
make
On Ubuntu, I'm getting further, but then getting this:

Making all in perl
make[2]: Entering directory `/home/Downloads/link-grammar-5.0.3/bindings/perl'
make[2]: *** No rule to make target `../../bindings/perl/lg_perl_wrap.cc', needed by `all'. Stop.
--
OK, yes, that's my fault. I'll stick that into 5.0.4 as well.
On 15 April 2014 22:19, Danny Brian <da...@brians.org> wrote:
Ahh. It's `make clean`. From bindings/perl/Makefile:

298: BUILT_SOURCES = $(top_builddir)/bindings/perl/lg_perl_wrap.cc
305: CLEANFILES = $(BUILT_SOURCES) $(dist_pkgperl_SCRIPTS)
584: clean-generic:
585: 	-test -z "$(CLEANFILES)" || rm -f $(CLEANFILES)
On Tue, Apr 15, 2014 at 9:04 PM, Linas Vepstas <linasv...@gmail.com> wrote:
On 15 April 2014 19:22, Danny Brian <da...@brians.org> wrote:
On Ubuntu, I'm getting further, but then getting this:

Making all in perl
make[2]: Entering directory `/home/Downloads/link-grammar-5.0.3/bindings/perl'
make[2]: *** No rule to make target `../../bindings/perl/lg_perl_wrap.cc', needed by `all'. Stop.

Hmm. That's confusing. That file is included in the tarball; it should be found. What directory do you build in? Do you just say `./configure; make`, or do you do something like `mkdir build; cd build; ../configure; make`? If you can still reproduce this, then send me the tail end of a `make V=1`, which spews verbose output.

-- Linas
The server use was based on having LG as a remote server in a network of readers. HTTP was abandoned in favor of what the README says: a TCP socket, but it's not responding to that.
Appended "\n" to the string. Already flushed the output. Added a wait loop and, sure enough, it returned; the wait loop was never hit. Not finished yet: it returned with an int 88 followed by a basically empty JSON string - empty links.

Ran the very same sentence in a console with echo, and it returned 2542 followed by a very large JSON string. Trying to sort out why echo gets a full parse but the socket doesn't.
In doing the deep read on how the parser works, I see that it mirrors many of my thoughts on how an anticipatory text reader should work; working with LG is being quite entertaining.
I imagine something like that: taking entire subjects found while reading and forming links which get satisfied later in the document.
Thanks, Linas, for those comments. I'd like to make a small set of comments here, and change the subject line because of that.

Long before I ever heard of AtomSpace, I created an experiment as part of some dissertation research, after defending a thesis proposal which mentioned "anticipatory story reading". I decided - for reasons which now might seem mysterious - that a wordgram (an n-gram of words) network might provide a kind of long-term memory for reading.
So, that project became OpenSherlock after a while. Each WordGram stores a record of every sentenceId in which it was detected, giving you word frequencies in context; each WordGram has edges corresponding to those sentenceIds (the cardinality of that list would serve the same purpose). Terminals - single words - form the initial graph, but after parsing, pairs, triples, etc. replace their terminals in the graph, and those pairs, triples, etc. also carry usage cardinalities.
I was once asked to compare my code's performance with some trials someone did with the OpenIE jar. They are not really comparable, because OpenIE does things which my ASR component (anticipatory story reader) does not, and vice versa, but the report is here.

More recently, I did discover AtomSpace and, as you might recall, started interpreting its code into Java.
That was an enormously useful exercise.
But, the road ahead for OpenSherlock is not yet cast in any concrete; I remain interested in things like Datalog, Prolog, and other technologies which need to be explored; the central concept in OpenSherlock is a topic map, a highly specialized knowledge graph.
The thesis behind ASR is that expectations form first when a query finds a paper, and then are continuously embellished and refined while reading. Since WordGrams are identified by the numeric identities of the words they contain, those identities make them content addressable. I have a friend experimenting with doing topic modeling not on words but on wordgram identifiers. We shall see what that produces. Just a half Euro for the day.

Cheers,
Jack

On Fri, Apr 24, 2020 at 3:02 PM Linas Vepstas <linasv...@gmail.com> wrote:

On Thu, Apr 23, 2020 at 8:14 PM Jack Park <jack...@topicquests.org> wrote:

In doing the deep read on how the parser works, I see that it mirrors many of my thoughts on how an anticipatory text reader should work; working with LG is being quite entertaining.

The parser itself is sentence-by-sentence, and is not "anticipatory" word-by-word. For a while I had an idea that one could/should do a word-by-word parser; I called this the "Viterbi parser", because that's what Viterbi does, but I soon came to the conclusion that this offers no benefits, and mostly disadvantages. We could debate this, if interested, but really, it was about what's faster, and what controls combinatorial explosion better.

I imagine something like that which takes entire subjects found while reading, forming links which get satisfied later in the document.

Well, but of course. I can sketch this in greater detail. But first, a baby-step or two. There is this idea that "words have meanings", and that a word can have multiple meanings. It turns out that you can "guess" word meanings statistically, if you have access to a dictionary (e.g. to WordNet). The canonical example is "I heard the church bells ring on Sunday": you can infer that "ring" means "sound" and not "ring on finger" because of "hear". You can infer that "church" means "the building" and not "the abstract institution" because the abstract institution doesn't have bells. It also cannot be localized to a point in time: "Sunday". Stuff like that.

So what I'd always wanted/hoped to do would be to build up a network of possible inter-relationships between different word-meanings, assign a probability weight to each link, and then crank on the network, raising or lowering the probability on each link based on evidence from the surrounding links, eventually arriving at a most-likely interpretation or meaning-assignment.

The AtomSpace, if this is not yet clear, is an apparatus for storing such networks. It allows links to be created between things, and weights to be assigned to those links, and supplies an assortment of tools for walking over that graph, updating link strengths, and other such generic operations.

Of course, the neural-net systems, starting with word2vec and getting progressively more sophisticated, already do "something like this", but they obfuscate the network, and they make any number of other fundamental flaws. I've tried to sketch out some more principled, more reasonable ways of going about doing this, but have had trouble gaining an audience. So a bit stuck, dead-in-the-water, for just right now.

-- Linas

--
cassette tapes - analog TV - film cameras - you
Hi Jack!

On Fri, Apr 24, 2020 at 7:08 PM Jack Park <jack...@topicquests.org> wrote:

Thanks, Linas, for those comments. I'd like to make a small set of comments here and change the subject because of that. Long before I ever heard of AtomSpace, I created an experiment as part of some dissertation research, after defending a thesis proposal which mentioned "anticipatory story reading". I decided - for reasons which now might seem mysterious - that a wordgram (n-gram with words) network might provide a kind of long-term memory for reading.

Not unreasonable. This is the original idea behind corpus linguistics, which has been talking about word-grams since the IBM PC era, when you actually had a practical tool that you could use. The idea is even older -- biblical concordances, hand-assembled by scholastic monks in medieval times. Bit tedious though, doing it by hand :-)

So, that project became OpenSherlock after a while. Each WordGram stores a record of the sentenceId in which it was detected, giving you word frequencies in context; each WordGram has edges corresponding to sentenceId (the cardinality of that list would serve the same purpose); terminals - single words - are the initial graph, but after parsing, pairs, triples, etc. replace their terminals in the graph, and those pairs, triples, etc. also have usage cardinalities.

Well, the insight I keep offering up is that you can gain a lot of power and insight by using parses instead of n-grams. For example, cataloging sentences which differ only in some adjective, or cataloguing sentences where a pair of words is separated by a very long modifying phrase. For example, "the dog ran in the park" and "the dog, a black cocker spaniel, ran in the park" are effectively the same sentence; that's easy to catch in a parse, but hard to catch with n-grams (unless you use sliding-window skip-grams, yadda, yadda... which then begs the question "why the complexity?").

I was once asked to compare my code's performance with some trials someone did with the OpenIE jar. They are not really comparable, because OpenIE does things which my ASR component (anticipatory story reader) does not, and vice versa, but the report is here. More recently, I did discover AtomSpace, and as you might recall, started interpreting its code into Java.

Ugh. Don't do that. Waste of time.

That was an enormously useful exercise.

Unless, of course, it's intellectually satisfying!

But, the road ahead for OpenSherlock is not yet cast in any concrete; I remain interested in things like Datalog, Prolog, and other technologies which need to be explored; the central concept in OpenSherlock is a topic map, a highly specialized knowledge graph.

Well, software design is about making choices. Datalog is interesting, but there is no "natural" way for it to store probabilities (or other numbers, or tagging information). So that's how I got to the AtomSpace -- all the other choices (including neo4j, etc.) seemed lacking, incapable, and unpowerful (and too hard to use).

Prolog is interesting, but it only does crisp logic. Anyway, the old Prolog backward/forward-chainer technology is obsolete; it's been replaced by answer-set programming (ASP), which looks just like Prolog (it is notationally the same) but uses the new fast SAT solvers instead.

Since Prolog/ASP only work for crisp logic, no one has done a probabilistic-programming version of Prolog. Now, PLN kind-of-ish tried to be that, but PLN is not done yet. A whizzy probabilistic programming system is Pyro: http://pyro.ai/examples/intro_part_i.html -- but actually, I think probabilistic programming is stupid and boring; that's a different topic. (Well, OK, probabilistic Prolog would be interesting, but... some other day.)

My own interests are about automatically discovering, via unsupervised learning, all of these network relationships... so step one is to kind of throw all pre-existing structures out the window, as the goal is to find them ab initio.
Exactly! It was very satisfying to "get inside the heads of AtomSpace developers" and figure out what is going on.
But, the road ahead for OpenSherlock is not yet cast in any concrete; I remain interested in things like Datalog, Prolog, and other technologies which need to be explored; the central concept in OpenSherlock is a topic map, a highly specialized knowledge graph.
That has been the goal of OpenSherlock, though it is cast in the shadow of Watson, so there is an interactive tell-ask component which engages with wetware.
I do revisit my notes on AtomSpace from time to time, mostly out of an interest to see if a) it can do what I want, b) I can borrow from it where useful, or c) there's a hybrid lurking between the two.
On Sat, Apr 25, 2020 at 9:36 AM Jack Park <jack...@topicquests.org> wrote:

I do revisit my notes on AtomSpace from time to time, mostly out of an interest to see if a) it can do what I want, b) I can borrow from it where useful, or c) there's a hybrid lurking between the two.

Anything you can store in Datalog, you can store in the AtomSpace. The meta-questions are: is it more compact? easier to use? faster? what's the API?
The AtomSpace does have a Postgres backend, but performance is so-so. You need SSDs, not spinning disks, to get OK performance (so clearly SATA and the disk drive itself are the bottleneck), but even then performance is disappointing. The AtomSpace works best when everything fits in RAM; loading up that RAM, and saving it back to disk, is the bottleneck. (I do incremental load/save, as data is touched.) As someone recently noted, saving smaller datasets to file as ASCII strings, and loading those back up, is 10x faster than Postgres. By "smaller" I mean gigabyte-sized.
re: "do what I want" ... well, that depends. Right now, the #1 strongest most versatile aspect of it is the graph query subsystem. You can search for arbitrarily complex graphs, and that works fast, bug-free. It's gotten to be very mature - all the bells & whistles, whiz-bang features, all the corner cases work well. Beyond that ... I don't know what you need/want.
Perhaps this thread belongs over in the opencog list rather than here;
A topic map, which is a graph, but also in which the edges can also be vertices (fully addressable topics);
I have never been a big fan of in-memory reasoning across massive graphs;
I'm much more interested in sorting out paging algorithms which could make it possible to speed up inferencing across terabytes of graph data,
getting algorithms working which reliably do the task at any cost.
One simple speed-up, for the time being, is to create a tiny raspi server farm of LG parsers.
Graph search is useful. I don't rule out SQL as well for some tasks.
<snip>
That's a useful observation. I wonder if it has anything to do with the fact that they give the appearance of offering a platform aimed at solving real-world problems, as compared to a platform for language-modeling research?

Their landing page reads rather differently from the AtomSpace landing page. They make results-oriented promises; I don't see that on the AtomSpace landing page. Their landing page appears to be the work of skilled marketing types; the AtomSpace landing page appears to be the work of, well, not skilled marketing types. Their top nav bar says Products, Solutions, Use Cases, Community, ...; AtomSpace's is a MediaWiki, one very familiar to developers, but not to business-oriented people.

What if AtomSpace ignored all the cool buzzwords and opened with problems it can solve, and ways people can start using it right out of the box without building it, tuning it, etc.?

Would you, as a scientist, be able to live with Wall-Street-oriented thinkers taking over your pet projects and packaging them up for far less-skilled consumers? I think this is an issue faced by a lot of us.
I studied Grakn carefully. The front story, for me, is one of some attraction to the apparent simplicity of the system; the back story, for me, is that Grakn appears optimized in ways which get in the way of pushing it where I believe it should be pushed - a problem of ontological commitments I cannot undo - so I just walk away.

For a really interesting case for comparison, take a look at https://opencrux.com/
On Sat, Apr 25, 2020 at 7:48 PM Linas Vepstas <linasv...@gmail.com> wrote:

<snip>

I dislike promoting competitors, but the grakn.ai system has taken some baby-steps in this general direction. I'm envious that they are far more popular/funded/supported/used than the atomspace.

--linas
While testing the parser on biomedical sentences, the phrase "type 2 diabetes mellitus" becomes problematic.
An apparent reason for that is this: the vocabulary only knows "type.v" and not "type.n" as evidenced in what the parse returns.
It's not obvious that I can simply add "type.n" to words.n.1-const or any of the other noun collections without rebuilding.
What is the process associated with augmenting the vocabulary?
Thanks in advance.

Jack
I just fixed this in the master git branch. That was... interesting. First, the fix was remarkably easy. Second, the lack of the fix caused the parser to become profoundly confused about the correct parse, generating something quite insane.

Anyway: you can either pull from git master, or you can hand-edit `data/en/4.0.dict` and insert the following:

% Numerical identifiers
% NM+ & AN+ : "Please use a number 2 pencil"
%             "He has type 2 diabetes"
number.i batch.i group.i type.i:
  NM+ & AN+;

So NM is a link to a "numerical modifier" while AN is a link to an "adjectival noun" (nouns that can be used as if they were adjectives). So, for example:

              +-----Ou------+
    +--->WV-->+    +---AN---+
    +->Wd-+-Ss+    +NMn---+ |
    |     |   |    |      | |
LEFT-WALL he has.v type.i 2 diabetes.n-u
I'm trying hard not to be an advocate of anything; rather, I'm in exploration mode.

Tell you what: I'd like to see (I have not yet found) some solid examples of AtomSpace - full-on knowledge graph implementations, whatever. Something that lets me follow through one or two complete examples which are more than toy exercises.
Installing the latest LG repo pull on Ubuntu 18.04, I get this error message
It is not at all clear to me where one controls weights in the java code.