Re: [opencog-dev] What I hope to accomplish. Willing to pay people to help me get this done.


Ivan V.

unread,
Mar 7, 2019, 2:35:39 PM3/7/19
to ope...@googlegroups.com
> Can this be done?

Not without hard work and a lot of learning (manuals, research papers, and books). That learning takes decades. If you are serious about AI, schedule the next decade or two for it; you'll have a much better idea of what to do after all that time. You can start by googling "symbolic AI" as opposed to "neural networks". There are plenty of materials and ideas out there on the web. But I'm warning you: that knowledge beast has thousands of heads, and you have to be heavily motivated to sustain your research. As you slowly climb in your learning quest, your vision of AI will sharpen into something you might be able to use in the real world. And don't forget, at least thousands of people with very high academic degrees are pursuing the same idea you have. If you want to contribute, prepare for a lot of work for a modest contribution. Only if you have some special abilities do things work more like a bit of investment for loads of results. But I haven't met anyone like that in my whole life.

If this sounds like too much for you, then buy some popcorn, sit back, and enjoy the show. Things are just starting to get interesting, and it took more than half a century to get where we all are now.

Be well,
Ivan V.

JRTA

unread,
Mar 7, 2019, 4:25:57 PM3/7/19
to opencog
Yeah, after googling the subjects you mentioned, if I am not mistaken, it sounds like we are not quite there yet.

Ivan V.

unread,
Mar 7, 2019, 5:26:45 PM3/7/19
to ope...@googlegroups.com
Jason,

did you mean this the other way around, maybe?

"things work more like a bit of investment for loads of results."

Like loads of investment for a bit of results? I hope not, but I have a feeling you did, lol.

Unfortunately, from my own experience (I'm self-taught), it is a lot of work for a minor innovation (yet to be seen). After all this trouble, I regret that I didn't spend more time learning existing methods instead of trying to invent my own stuff, which was mostly reinventing the wheel. But I got some new stuff out of it, so it wasn't a complete waste of time.

If you opt for symbolic AI (the top-down approach to modelling an artificial mind), like I did, there are: lambda calculus; different flavors of mathematical logic (propositional, predicate, higher-order, fuzzy, Dr. Ben Goertzel's probabilistic logic networks used in OpenCog itself, and so on, ordered by complexity; also see Stanford's excellent introduction to logic); and then intuitionistic logic, Per Martin-Löf's type theory, Thierry Coquand's calculus of constructions, and God knows what else that I'm not aware of. I find Wikipedia very helpful for building a general overview, and then I dive into research papers on the subjects I find interesting.
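To give a taste of the symbolic, top-down flavor, a truth-table evaluator for propositional logic fits in a few lines. This is only a minimal sketch of mine for illustration; the formula encoding (nested tuples) is an ad-hoc choice, not anything from OpenCog or PLN.

```python
from itertools import product

# Formulas are nested tuples: ("->", "p", ("or", "p", "q")) reads
# "p implies (p or q)". Bare strings are atomic propositions.
def evaluate(formula, assignment):
    if isinstance(formula, str):                  # atomic proposition
        return assignment[formula]
    op, *args = formula
    vals = [evaluate(a, assignment) for a in args]
    if op == "not":
        return not vals[0]
    if op == "and":
        return vals[0] and vals[1]
    if op == "or":
        return vals[0] or vals[1]
    if op == "->":
        return (not vals[0]) or vals[1]
    raise ValueError(f"unknown operator: {op}")

def variables(formula, acc=None):
    acc = set() if acc is None else acc
    if isinstance(formula, str):
        acc.add(formula)
    else:
        for arg in formula[1:]:
            variables(arg, acc)
    return acc

def is_tautology(formula):
    # Check the formula under every truth assignment (brute force).
    vs = sorted(variables(formula))
    return all(evaluate(formula, dict(zip(vs, row)))
               for row in product([True, False], repeat=len(vs)))

print(is_tautology(("->", "p", ("or", "p", "q"))))   # True
print(is_tautology(("or", "p", "q")))                # False
```

The fancier systems on the list (type theory, the calculus of constructions) generalize this same "formulas as data, meaning by evaluation" idea far beyond truth tables.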

If you opt for artificial neural networks (the bottom-up approach to modelling an artificial mind), I'm afraid I'm not much use, but I'd put my bets on generative artificial NN in combination with partially supervised learning NN. I recently found this field very promising, and I want to make time to check it out more thoroughly.

You may also like genetic algorithms, if the natural evolutionary approach appeals to you. There might be more ideas in how Earthlings naturally came about than I thought at first.
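To give a feel for the evolutionary approach, here is a minimal genetic-algorithm sketch. Everything in it is a toy setup of my own (bit-string genomes, a trivial count-the-ones fitness), not any particular library's API:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

TARGET_LEN = 20  # genome length; the optimum is the all-ones string

def fitness(genome):
    return sum(genome)            # count of 1 bits

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto
    # a suffix of the other.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            break                              # optimum reached
        parents = pop[:pop_size // 2]          # selection: keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The same selection/crossover/mutation loop works for any genome encoding you can score; only `fitness` changes.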

> Yeah, after googling the subjects you mentioned, if I am not mistaken, it sounds like we are not quite there yet.

You never know what's just around the corner. The brand-new OpenAI GPT-2 model released these days astonished me. I imagine that training it on research papers, instead of on Reddit posts, could actually make an excellent artificial scientist. It could be amazing and very inspirational work.

Also, did you check out some videos of the "Sophia" robot interacting with humans? She is based on the OpenCog architecture, but I don't know the details. She appears to carry out some reasoning and inference not found in similar projects.

But if you are just after a chit-chat machine, you might want to check out the wide collection of chatbots out there. There are even specialized programming languages for building chatbots (like AIML), and some chatbots (like the online award-winning "Mitsuku") are very impressive embodiments of conversation-carrying machines. I'd call them hopeful beginnings of AI, but there is a lot of room for improvement.
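The pattern/template trick at the heart of AIML-style chatbots can be sketched in a few lines of Python. The rules below are hypothetical examples of mine, not Mitsuku's actual ones:

```python
import re

# Each rule pairs a regex pattern with a response template; captured
# groups are substituted back into the reply (the classic ELIZA trick).
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."          # default when nothing matches

print(respond("My name is Jason"))    # Nice to meet you, Jason.
print(respond("The weather is odd"))  # Tell me more.
```

Real chatbot languages add wildcards, topic stacks, and canned small talk on top, but the core loop — match a pattern, fill a template — is the same.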


--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.
To post to this group, send email to ope...@googlegroups.com.
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/4a2cbdd3-2591-497f-8960-cd7431afa902%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

JRTA

unread,
Mar 8, 2019, 12:11:14 PM3/8/19
to opencog
Ivan,



It's actually quite impressive that you managed to come up with some new stuff working on your own!

I found a research paper on "symbolic AI" versus "neural networks" by someone at the University of Missouri here in town, which is pretty cool. Maybe I will get to meet with them.
I am interested in learning what the top-down approach has to offer, but I have a gut feeling that bottom-up is what will interest me most.

You keep giving me great avenues to explore, which is going to save me a large amount of time and is very much appreciated. I'm excited to read up on OpenAI GPT-2.
Yeah, Sophia is great. She is what really got me thinking we were making good progress, and I was stoked to find out she was based on OpenCog.

The conversational aspect is very important to me, but I have never really considered chatbots to be actual AI. Perhaps I am mistaken, and perhaps it's a situation of differing platforms that will be mashed together with others later, lol. I just talked to Mitsuku. It will be interesting to learn how she is programmed. She is crazy fast. However, it would be amazing to watch an AI learn language on its own through interaction, which is probably Hollywood talking, but I still have a lot of reading to do just to be able to sort science fiction from reality.

It is highly important for me to strive toward AI that will learn/grow/evolve in a human-like way. This is a large part of the dream that motivates me, and it likely calls for the bottom-up approach.

I will try to drop you a line whenever I come across or have a good list of things relating to "generative artificial NN in combination with partially supervised learning NN." It would make me feel like I was actually helping in some way, as opposed to just being ignorant, lol. Which would be great. Also, that was the first thing I began looking into, and I took a short internet class on programming NNs. I'm currently studying Python to be able to work on it.

Thanks, Ivan. I hope we can talk more about this.

Jason

Ivan V.

unread,
Mar 8, 2019, 1:27:41 PM3/8/19
to ope...@googlegroups.com
Jason,

Yes, chatbots are actually far from true AI. They basically consist of "dirty tricks" to make you feel like you are talking to someone smart, while they are just being a kind of parrot. But I'm surprised at how successful they are in this deception. Anyway, it could be a way to go if you pair a chatbot engine with a logical reasoning engine. I suspect this is what Sophia does, but don't take my word for it.
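To make the "chatbot engine paired with a reasoning engine" idea concrete, here is a toy sketch: a crude pattern-matching front end that hands questions to a tiny forward-chaining inference loop. This is purely illustrative — the facts, rules, and triple format are my own invention, not how Sophia or OpenCog actually work:

```python
# Knowledge base as (subject, predicate, object) triples.
FACTS = {("socrates", "is_a", "human")}

# Each rule: if the premise triple matches (with ?x bound to the
# subject), assert the conclusion triple.
RULES = [
    (("?x", "is_a", "human"), ("?x", "is_a", "mortal")),
]

def infer(facts, rules):
    # Forward chaining: apply rules until no new facts appear.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (_s, p, o), (cs, cp, co) in rules:
            for (fs, fp, fo) in list(derived):
                if fp == p and fo == o:            # premise matches; bind ?x
                    new = (fs if cs == "?x" else cs, cp, co)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

def answer(question):
    # Chatbot layer: crude parse of "is X Y?" into a triple query.
    words = question.lower().strip("?").split()
    if len(words) == 3 and words[0] == "is":
        triple = (words[1], "is_a", words[2])
        return "Yes." if triple in infer(FACTS, RULES) else "I don't know."
    return "I only understand questions like 'is socrates mortal?'"

print(answer("is socrates mortal?"))  # Yes.
```

The chatbot layer never "knows" anything itself; it just translates surface text into queries against the reasoner, which is roughly the division of labor I have in mind.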

But there is something else going on in another corner of my mind...

There is an idea I have had for a while now, to exploit neural networks in a certain way, and these days GPT-2 has crystallized my thoughts on it. A trained NN is actually a way of packaging which function results correspond to which function parameters. I consider these functions black boxes (Turing complete, I hope) which magically do the right stuff we expect from them when given some parameters. OK, now consider an imaginary human-machine conversation flow:

user says: ...
computer says: ...
user says: ...
computer says: ...
...

What could be interesting here is how the user behaves. What he or she says in each cycle is always the result of the same function, parameterized by the previous inputs and outputs. The problem we are trying to solve is what the machine should say (or, in some further development, do). And the answer is fairly simple: the machine could use the same function that the human uses to respond to the machine. And that function could be learned by an artificial neural network observing the user's input relative to the computer's previous output. Simple, isn't it?
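At its smallest scale, "learning a function from observed input/output pairs" looks like this: a single linear unit fit by gradient descent, a toy stand-in for a real NN (the target function y = 2x + 1 is just an example I picked):

```python
import random

random.seed(1)

# Observed input/output pairs of an "unknown" function (here y = 2x + 1).
pairs = [(x, 2 * x + 1) for x in range(-5, 6)]

# One linear unit: prediction = w * x + b, trained by stochastic
# gradient descent on squared error.
w, b = random.random(), random.random()
lr = 0.01
for _ in range(2000):
    for x, y in pairs:
        pred = w * x + b
        err = pred - y
        w -= lr * err * x        # gradient of (pred - y)^2 / 2 wrt w
        b -= lr * err            # gradient wrt b

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

A conversational model is this same recipe with an enormously richer function class: the (previous dialog → next utterance) pairs play the role of `pairs`, and the network's weights play the role of `w` and `b`.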

I'd have to investigate neural networks more thoroughly to actually test this concept, but GPT-2 keeps convincing me that the whole thing could work very well. The thing would be a mirror of all the responses it has collected from its environment. It could even be placed online to converse with random users, learning and reflecting their responses in future dialogs. Initially, just to boost the start of learning, it could be trained on Reddit, but later it could be switched to live user interaction. And it would reflect a collective hive of humanity's thoughts.
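The mirroring loop could be sketched like this, with a plain lookup table standing in for the neural network that would actually learn the response function (the names and the fuzzy-matching choice are my own illustration, not a real system):

```python
from difflib import get_close_matches

# Maps something the machine said to the user reply observed after it.
memory = {}

def observe(machine_said, user_replied):
    # Record one (machine utterance -> human response) training pair.
    memory[machine_said] = user_replied

def mirror(user_said):
    # When it is the machine's turn, find the remembered utterance
    # closest to what the user just said, and reply the way a human
    # once replied to it. A trained NN would generalize instead of
    # doing this literal nearest-neighbor lookup.
    keys = get_close_matches(user_said, memory.keys(), n=1, cutoff=0.4)
    if keys:
        return memory[keys[0]]
    return "..."                 # nothing learned yet

observe("hello there", "hi, how are you?")
observe("hi, how are you?", "fine, thanks for asking")
print(mirror("hello here"))      # hi, how are you?
```

Each live exchange both answers the user and feeds `observe`, so the mirror sharpens as more people talk to it — which is the "collective hive" effect.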

All of this is just a conceptual thought that you can judge after learning some of the advanced properties of neural networks. But if you decide to test it before me, let me know how it went; I'd like to know if it works.

All well,
Ivan V.

