--
You received this message because you are subscribed to the Google Groups "link-grammar" group.
To unsubscribe from this group and stop receiving emails from it, send an email to link-grammar...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/link-grammar/0211b9a9-a9d5-406b-a941-56795b4f4edfn%40googlegroups.com.
Hello again Linas,

And what a terrific response! I saved the blog post, because it was that good. I really enjoyed the historical aspect and how you traced how one thing led to another.

Getting back to some of your possible solutions to the problem: I think the "bag of disjuncts" idea is an interesting one. As you said, someone could surely just look at the frequency of words in a text and see whether any of the bigrams/trigrams provide information, but using disjuncts might give a better handle on the problem. I was inspired by Chapter Two of the book Real World Python by Lee Vaughan, although it uses the NLTK package, which of course is different from Link Grammar (some assembly is required to get it to work), and it just compares different parts of the text to determine whether they are similar. If all the boxes are checked, so to speak, then one could conclude that Shakespeare was the original author, or whatever the experimenter's original intent was.

Do you happen to remember the name of the paper that solves the multi-word synonymous-phrases problem?
Also, let me know what "concrete plans" need to be put in place to make the bag-of-disjuncts idea work. Would this require taking some of the existing Link Grammar source code and converting it to Python in some way? I really like coding in Python as a hobby; it's my language of choice, and having some theoretical problems to throw myself at keeps the inspiration fire burning, so to speak.
Here is the algorithm:
1. Parse the text with LG.
2. Count the link types across all the parses.
3. Treat the per-type link counts for a given author as a point in a vector space where each dimension is a link type.
4. Do this for different authors and literary styles and see whether they form clusters in that space.
5. Let us know if it worked ;-)
6. If it worked, you can take any text, apply the same procedure, and find the point in that space corresponding to its original author.
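A minimal sketch of steps 2-4, assuming the link-type labels have already been pulled out of the LG parses (the parsing step itself would go through the link-grammar Python bindings; the labels below are made-up illustration data, not real parse output):

```python
from collections import Counter
import math

def link_type_vector(labels, vocab):
    """Map a list of link-type labels to a normalized frequency vector over vocab."""
    counts = Counter(labels)
    total = sum(counts.values()) or 1
    return [counts[t] / total for t in vocab]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical link-type labels extracted from LG parses of two texts.
author_a = ["S", "O", "D", "S", "A", "O", "D"]
author_b = ["S", "MV", "J", "MV", "S", "J"]

# Shared vocabulary of link types defines the dimensions of the space.
vocab = sorted(set(author_a) | set(author_b))
va = link_type_vector(author_a, vocab)
vb = link_type_vector(author_b, vocab)
print(round(cosine(va, va), 3))  # same text against itself -> 1.0
print(round(cosine(va, vb), 3))  # differing styles score lower
```

With vectors like these in hand, the clustering in step 4 could be done with any off-the-shelf method (k-means, hierarchical clustering, etc.) over the cosine or Euclidean distances.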
Cheers,
-Anton
--
-Anton Kolonin
telegram/skype/facebook: akolonin
mobile/WhatsApp: +79139250058
akol...@aigents.com
https://aigents.com
https://www.youtube.com/aigents
https://www.facebook.com/aigents
https://wt.social/wt/aigents
https://medium.com/@aigents
https://steemit.com/@aigents
https://reddit.com/r/aigents
https://twitter.com/aigents
https://golos.in/@aigents
https://vk.com/aigents
https://aigents.com/en/slack.html
https://www.messenger.com/t/aigents
https://web.telegram.org/#/im?p=@AigentsBot
Sure, of course, TF-IDF style ;-)
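A quick sketch of what that TF-IDF-style weighting could look like, applied to link-type counts instead of word counts (the counts below are made-up illustration data; any link type that occurs in every document gets an IDF of zero and drops out, which is exactly the point of the weighting):

```python
import math
from collections import Counter

def tf_idf(doc_counts):
    """TF-IDF weights per link type per document.

    doc_counts: list of Counter objects, one per document,
    mapping link-type label -> raw count in that document.
    """
    n_docs = len(doc_counts)
    # Document frequency: in how many documents each link type appears.
    df = Counter()
    for counts in doc_counts:
        df.update(set(counts))
    weights = []
    for counts in doc_counts:
        total = sum(counts.values()) or 1
        weights.append({
            t: (c / total) * math.log(n_docs / df[t])
            for t, c in counts.items()
        })
    return weights

# Made-up link-type counts for three documents.
docs = [Counter({"S": 5, "O": 3, "MV": 1}),
        Counter({"S": 4, "J": 2}),
        Counter({"S": 6, "O": 1, "J": 3})]

w = tf_idf(docs)
# "S" occurs in all three documents, so log(3/3) = 0 kills its weight;
# rarer link types like "MV" keep a positive weight.
```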