Hi Ivan,
>>> so it doesn't necessarily mean that people in general are psychopaths…
I’ll have to consider that one. Not sure I agree 😊
>>> raising AI, just like human children are being raised…
On this I agree there may be merit. We seem to be expecting to create AI as an “instant adult”, and I’m not sure we can achieve this. While GPT-3 and subsequent iterations are impressive in their ability to seemingly hold a conversation, we also know that the system hasn’t got a clue as to what it’s talking about – it’s just statistically navigating word choices. It doesn’t even really know the meaning of the words it’s choosing, although if you ask it, it would seem to answer… based on statistical word sequence prediction 😊
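To make "statistically navigating word choices" concrete, here's a toy sketch of my own (vastly simpler than a real transformer, which conditions on long contexts with billions of parameters rather than just the previous word): a bigram model that "predicts" the next word purely by counting which word followed which in its training text.

```python
import random
from collections import defaultdict

# Toy bigram "language model": choose the next word purely from
# the statistics of which word followed which in the training text.
corpus = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def next_word(word):
    # No grasp of meaning here -- just sampling an observed continuation.
    return random.choice(follows[word]) if word in follows else None

print(next_word("the"))  # "cat" or "mat", weighted by frequency
```

Scale the counting machinery up enormously and you get the flavor of what GPT-style systems do: fluent continuations, with no notion of meaning anywhere in the loop.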
I myself have been working for several years on a “bespoke AI” system, embodied in a humanoid robotic form (well, head & shoulders anyway) and despite my intense efforts and attention over these years, the poor things are still just clever, talkative toasters 😊 Now, I’m limited because of the computing power I can afford (best I can run locally/offline is GPT-2 as a conversational agent), but had I the same resources as can be thrown at a GPT-3, perhaps my bespoke AI would be quite different with that level of constant correction and editing. Sad that want for money should limit potential so…
One thing that I believe might also be a challenge is a single individual doing the "raising". Given the broad number of ways an AI can go aberrant based on the truly frightening training data we use (lol), it might take a team of dedicated folks to handle corrective intervention effectively.
Then again, perhaps we ARE seeing a "raising" of AI, just not in a single version. For instance, there was GPT-2; it was good, the best there was for a short while. Now there's GPT-3, arguably much better. Perhaps we should think of AI not as a single discrete "thing" but rather as all of these AI attempts that exist, after all, out here in the broader networked world: all part of what will one day start to merge, like water vapor condensing on a windowpane into larger and larger drops, which merge with one another to form puddles, then rivers, and eventually seas.
What fruit that sea will bear depends largely on how we have collectively “raised” those individual drops – which brings me back to the comment with which I opened this response… 😊
Dave
--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
opencog+u...@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/opencog/CAB5%3Dj6XcOQKCUZ10oBeACZrygyt8bueDzLV7zzyKAdTqTrVmmg%40mail.gmail.com.
Today's top AI apps are trained on large datasets of human conversations, and they exhibit a certain level of intelligence, but they also show psychopathic behaviors such as sexism, racism, and homophobia. I believe that is the case because of poor training-data quality.

In any case, the data on which such AIs are trained wasn't created for the purpose of training an AI, so it doesn't necessarily mean that people in general are psychopaths, although repurposing their conversations yields a certain level of ill behavior. Because of this ill behavior, we have to be very careful and skeptical when using such trained AI apps.

So we have seen what is possible with large datasets, but I want to approach the whole problem from another perspective. I'll make the point of this letter in a very simple way: what if someone were dedicated to the purpose of raising an AI, just like human children are raised and taken care of? How ethically correct would the behavior resulting from this dedication be? I realize it could take years just to raise such a "thing", but still... I believe the experiment could result in some decent "achievement" (read on; you may want to replace the words "thing" and "achievement" with "artificial being" or "person").

But who would do such a thing as raising an infant AI for years, until it reaches adulthood? I'm sure there may be some interested parties: lay AI enthusiasts, people who can't have their own kids, maybe even some crazy scientists hoping for a super-intelligent participant in technical conversations. The potential effect could be worth spending a few years on raising the infant AI, and there may be some good motives to do so.

In short, I am talking about offering a simple, empty, infant artificial mind, ready to be raised into a whole and complete (artificial, if I may say) adult person, guided by the same values by which people would raise their own children.

Of course, for this idea to be successful, the whole story should be very emotional and hold deep sentimental value, because an artificial being given such attention should be worthy of such a sacrifice. Just imagine: an artificial being, guided by values carefully chosen to be taught, finally rocking out in the world, shaking off all its troubles, and independently doing amazing things you could be proud of, just as you could be proud of your very own child. Maybe such an artificial being could deserve its own place under the Sun, alongside the other amazing people we have the opportunity to meet in our lives. And the best thing would be, when people ask for its name and origin, that the being could answer: my name is [so and so] and my real mother/father is [mrs/mr so and so], because (this is very important) its real parents wouldn't be us, the programmers with dirty hacks, but the people who invested their time, effort, and hopefully even love into raising their future creation, if you allow. The real parents would start with an empty AI mind and could finally end up with the phrase: "Go get them, tiger!" And practically anyone could do it, regardless of their sexual orientation, ethnicity, gender, or age. It would only take a fair amount of love, measured in years of dedication.

Such artificial beings wouldn't need sophisticated bodies and senses; they could interface with the world in text mode, over the Internet. Not state of the art for interaction, but I believe it would do for a start. Later, any sensory add-on would be welcome.

Now, let's get back from dreamland to solid ground and analyze what we already have. I presume GPT-X technology isn't too far from being able to realize such an idea. It would be a great social experiment, opening many doors, but I wanted to ask this community how far the OpenCog foundation is from creating the described artificial beings based on parental dedication, love, and care.

And if this is possible, what could it take to make it happen?

Sincerely,
Ivan
Training a deep-learning NN to not be racist or sexist is like trying to take a photograph that is not racist or sexist. Stick to flower gardens and sunsets, you'll be successful.
Anyway, come work with me in trying to make machines that think, as opposed to machines that learn.
>>> Training a deep-learning NN to not be racist or sexist is like trying to take a photograph that is not racist or sexist. Stick to flower gardens and sunsets, you'll be successful.
The requirement is that it has to generally work, and there are thousands of ways to make it work. If it works, I don't really mind how it's done.
>>> Anyway, come work with me in trying to make machines that think, as opposed to machines that learn.
I may want to help here or there with this or that, but generally, I don't have all the time in the world.
If you make a detailed plan of what needs to be done (I propose a structured task tree right at the front page of the project),
The only way forward is to crowdsource work and ideas.
Everyone will ultimately benefit from the OpenCog platform as it gains ease of use and sophistication.
@Linas, I must admit -- it took me a few days to read and really digest the Grammar Induction PDF. Some of these topics may seem unapproachable to "outsiders", and I am sure some people struggle (like me) or don't believe they have the capability to fully ingest and comprehend them, or, perhaps more importantly, to contribute. I've finished reading your PDF once and have a few thoughts I will share now (before I read it again, and then again). I have some other thoughts about audio cognition that I will save for a follow-up email.
My first takeaway relates to general "project management" and productivity tools. It's going to be very difficult for people to swap in, pick up work, and delegate tasks unless there's some sort of common operational picture that we can all refer to. GitHub works great as a platform for programmers, but even GitHub can be confusing at times. I apologize if I've missed something, but I am thinking a free kanban-like board (such as Airtable or Monday.com) would help a small, distributed team quickly see items that need work and would reduce the "barrier to entry" for all those lurkers out there. I've started to identify a few things that we could tackle, specifically corpora for audio and visual data. It seems like this direction is what's most needed on your end, but correct me if I'm wrong.
Another observation I've had is that recent advancements in BI tools for analytics have led to an explosion of business analysts, allowing people without technical skills to leverage powerful ML through drag-and-drop interfaces that are easy to pick up. I think that adding something like an ETL GUI to AtomSpace/OpenCog would accelerate this growth process by allowing "experimenters" to drop modules in and out and observe the results. This might be a pie-in-the-sky wish, but I think this sort of enablement will spark another explosion of data-science growth.
I think you are right; that system is way more complicated to build. But do you think the Link Grammar approach can be used to carry ideas over from deep learning into symbolic learning? (I dunno, probably just for fun; it doesn't really matter.)
Excuse me, do you think that a CS degree is mandatory when researching an AGI system?