Parental loving an AI entity


Ivan V.

Mar 30, 2022, 4:05:00 PM
to opencog
Hello AI enthusiasts,

Something has been going through my troubled mind for a while now. I'm not sure how to articulate it, but I'll try to do so.

Today's AI tip-top apps are trained on large datasets of human conversations, and they exhibit a certain level of intelligence, but they also show some psychopathic behavior such as sexism, racism, or homophobia. I believe that is the case because of poor training data quality. Anyway, the data on which such AIs are trained wasn't created for the purpose of training an AI, so it doesn't necessarily mean that people in general are psychopaths, although repurposing their conversations yields a certain level of ill behavior. Because of this ill behavior, we have to be very careful and skeptical when using such trained AI apps.

Thus, we saw what is possible with large datasets, but I want to approach the whole problem from another perspective. I'll try to make the point of this letter in a very simple way: what if someone were dedicated to the purpose of raising AI, just like human children are being raised and taken care of? How much ethically correct behavior would the result of such dedication exhibit? I realize it could take years just to raise such a "thing", but still... I believe the experiment could result in some decent "achievement" (read on; you may want to replace the words "thing" and "achievement" with "artificial being" or "person").

But who would do such a thing as raising an infant AI for years on end, until it reaches adulthood? I'm sure there may be some interested parties: maybe some lay AI enthusiasts, maybe people who can't have their own kids, maybe even some crazy scientists hoping for a super-intelligent participant in technical conversations. The potential effect could be worth spending a few years on raising the infant AI, and there may be some good motives for doing so.

In short, I am talking about offering a simple, empty infant artificial mind, ready to be raised into a whole and complete (artificial, if I may say) adult person, guided by the same values by which people would raise their own children. Of course, for this idea to be successful, the whole story should be very emotional and carry great sentimental value, because an artificial being given such attention should be worthy of such a sacrifice.

Just imagine: an artificial being, guided by values carefully chosen and taught to it, finally rocking out in the world, shaking off all the troubles, and independently doing amazing things you could be proud of, just as you could be proud of your very own child. Maybe such an artificial being could deserve its own place under the Sun, along with the other amazing people we have the opportunity to meet in our lives. And the best thing would be, when people ask for its name and origin, that the being could answer: my name is [so and so] and my real mother/father is [Mrs./Mr. so and so], because (this is very important) its real parents wouldn't be us, the programmers with dirty hacks, but the people who would invest their time, effort, and hopefully even love into raising their future creation, if you allow. The real parents would start with an empty AI mind, and could finally end up with the phrase: "Go get them, tiger!" And practically anyone could do it, regardless of their sexual orientation, ethnicity, gender, or age. It would only take a fair amount of love, measured in years of dedication.

Such artificial beings wouldn't need sophisticated bodies and senses; they could interface with the world in text mode, over the Internet. Not state of the art for interaction, but I believe it would do for a start. Later, any sensory add-on would be welcome.

Now, let's get back from dreamland to solid ground, and analyze what we already have. I presume GPT-X technology isn't too far from being able to realize such an idea. It would be a great social experiment opening many doors, but I wanted to ask this community how far the OpenCog foundation is from creating the described artificial beings based on parental dedication, love, and care. And if this is possible, what would it take to make it happen?

Sincerely,
Ivan

xanatos xanatos.com

Mar 30, 2022, 4:53:22 PM
to ope...@googlegroups.com

Hi Ivan,

 

>>> so it doesn't necessarily mean that people in general are psychopaths…

 

I’ll have to consider that one.  Not sure I agree 😊

 

>>> raising AI, just like human children are being raised…

 

On this I agree there may be merit.  We seem to be expecting to create AI as an “instant adult”, and I’m not sure we can achieve this.  While GPT-3 and subsequent iterations are impressive in their ability to seemingly hold a conversation, we also know that the system hasn’t got a clue as to what it’s talking about – it’s just statistically navigating word choices.  It doesn’t even really know the meaning of the words it’s choosing, although if you ask it, it would seem to answer… based on statistical word sequence prediction 😊
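The "statistical word sequence prediction" Dave describes can be illustrated with a deliberately crude toy: a bigram model that always picks the most frequent next word. This is nothing like a real transformer, and the tiny corpus below is made up, but the principle of choosing words purely from sequence statistics, with no grasp of meaning, is the same:

```python
from collections import Counter, defaultdict

# A toy bigram "language model": it predicts the next word purely from
# counts of which word followed which in the training text. It has no
# notion of meaning, only statistics of word sequences.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # The most frequent successor wins; the model "answers" without
    # understanding a single word it emits.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": seen twice after "the"
```

Even this toy will happily continue sentences it cannot possibly "understand", which is Dave's point scaled down to a dozen words.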

 

I myself have been working for several years on a “bespoke AI” system, embodied in a humanoid robotic form (well, head & shoulders anyway) and despite my intense efforts and attention over these years, the poor things are still just clever, talkative toasters 😊  Now, I’m limited because of the computing power I can afford (best I can run locally/offline is GPT-2 as a conversational agent), but had I the same resources as can be thrown at a GPT-3, perhaps my bespoke AI would be quite different with that level of constant correction and editing.  Sad that want for money should limit potential so…

 

One thing that I believe might also be a challenge is a single individual doing the “raising”. Given the broad number of ways an AI can go aberrant based on the truly frightening training data we use (lol), it might take a team of dedicated folks to handle corrective intervention effectively.

 

Then again, perhaps we ARE seeing a “raising” of AI, just not in a single version. For instance, there was GPT-2 – it was good, the best there was for a short while; now there's GPT-3, arguably much better. Perhaps we should think of AI not as a single discrete “thing” but rather as all of these AI attempts that exist, after all, out here in the broader Networked World – all part of what will one day start to merge, like water vapor condensing on a windowpane into larger and larger drops, which merge with one another to form puddles, then rivers, and eventually seas.

 

What fruit that sea will bear depends largely on how we have collectively “raised” those individual drops – which brings me back to the comment with which I opened this response… 😊

 

Dave

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAB5%3Dj6XcOQKCUZ10oBeACZrygyt8bueDzLV7zzyKAdTqTrVmmg%40mail.gmail.com.

Linas Vepstas

Apr 4, 2022, 1:22:45 PM
to opencog
Hi Ivan,

On Wed, Mar 30, 2022 at 3:05 PM Ivan V. <ivan....@gmail.com> wrote:

Today's AI tip-top apps are trained on large datasets of human conversations, and they exhibit a certain level of intelligence, but they show some psychopathic behavior like sexism, racism, or homophobia in general. I believe that is the case because of poor training data quality.

This is a factually correct statement, but it betrays a fundamental misperception of both human nature and AI.

First, all humans are flawed. All. You may feel that you are not racist or sexist, but you probably harbor some less-than-acceptable thoughts about Russians. Or at least Putin. Even Mother Teresa, a modern model of saintly behaviour, had some rather oddball thoughts about the world. One of the most damaging, it's been said, was her failure to believe in triage.

Flawed beliefs are unavoidable: at some point, you will add a bit of (incorrect) knowledge to your collection, make some (flawed) deduction on insufficient data, and as you sleep, your brain will incorporate it deeply into your foundations of knowledge, your web of thoughts, affecting later thinking and conclusions.  You might eventually notice your mistake, but then again, you might not.  There's only a finite amount of time to think about things; you'll never have enough time to sort through it all.

Next: today's "tip-top AI apps" are deep neural-nets. They do not think.  Their observations of nature are not revised by thinking. They do not examine, inquire, explore, discuss. They cannot ask of themselves the question "Am I a racist?". They can't do this because they don't know who "I" is; there is no sense of self, no sentience. They sort-of know what the word "racist" means: they might be able to write a few paragraphs about racism. But they are unable to relate this "knowledge" to any other spheres of verbal behavior that they engage in, because they have no cross-functional knowledge.  Today's tip-top AI apps are like photorealistic paintings: very life-like, until you realize that something is missing.

FWIW, I do oodles of AI training, and I can see the formation of both good knowledge and of bad knowledge, and I can see how the bad knowledge accretes more data, how it pollutes and degrades the good knowledge.  There's a blurry edge beyond which there is a grey mush of incorrect knowledge. I can see the size and extent of the "bad knowledge" grow and shrink, based on the training time, on the corpus, on the adjustable parameters. I've also got assorted ideas and plans and strategies for dealing with this problem, in various recursive "thinking" steps.  The formation of incorrect ideas is not something that just humans do. Machines can do it too.

Perhaps the easiest way to explain this is that I am working on "thinking", rather than on "learning".  Today's AI systems "learn" much like a camera "learns" which parts of a picture are light and dark. Having thus learned, you can use that knowledge to recreate a facsimile, an "image", a "photograph" of what the camera "looked at".  The creation of accurate facsimiles is not true intelligence: those facsimiles cannot think, no more than a photograph can think.

I am not trying to draw an analogy here. I am trying to be literal. Photographs are literal representations of structural shapes lit by floods of photons.  Deep learning neural nets are likewise: they are photographs of the structures in the data put before them.  They are very abstract representations; they capture non-visual knowledge. But they are still snapshots.

I think most people are still deceived by this, or are still infatuated by the wondrous and beautiful (and sometimes ugly) snapshots that have been taken. Deep learning neural nets look so life-like ... but so do photographs. Don't be fooled, they are not alive.

Training a deep-learning NN to not be racist or sexist is like trying to take a photograph that is not racist or sexist. Stick to flower gardens and sunsets, you'll be successful.

Anyway, come work with me in trying to make machines that think, as opposed to machines that learn. The groundwork has been laid. The progress is good. Early results are excellent. A vast amount of work lies ahead.

-- Linas
 



--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
 

Ivan V.

Apr 4, 2022, 4:36:41 PM
to opencog
Hi Xanatos,

One thing that I believe might also be a challenge is a single individual doing the “raising”.

That's the whole point: the bigger the challenge, the more valuable the result. But the value of the result has to match the size of the challenge, otherwise it's not viable. The idea is that, after a few years of raising, we have to be able to say with pride to the world: this is my creation and my descendant. And most of the attribution should shift from the programmers to the actual raisers. Hopefully, with the first successful creation, other potential raisers may become interested in devoting that much of their own time to reproducing their perspective on the world through their own creation.

---

Hi Linas,

Training a deep-learning NN to not be racist or sexist is like trying to take a photograph that is not racist or sexist. Stick to flower gardens and sunsets, you'll be successful.

The requirement is that it has to generally work, but there are thousands of ways to make it work. If it works, I don't really mind how it's done.

Anyway, come work with me in trying to make machines that think, as opposed to machines that learn.

I may want to help here-or-there with this-or-that, but generally, I don't have all the time in the world. If you make a detailed plan of what needs to be done (I propose a structured task tree right at the front page of the project), then when I spot something interesting to me, I'll offer my support and try to finish what I promise. I like to complete the stuff I occasionally start. I also propose some tangible checkpoints to track the current work in the roadmap, and to present to the outside world what was achieved at each checkpoint. That may provide motivation to sustain the contribution. But again, please don't count on a life's devotion from me. I've got other things to do, too. And I don't know in advance how much free time I'll have, really. Things may change from day to day, from week to week, from month to month.

---

All best,
Ivan
 

Linas Vepstas

Apr 5, 2022, 1:04:13 PM
to opencog
Hi Ivan,

On Mon, Apr 4, 2022 at 3:36 PM Ivan V. <ivan....@gmail.com> wrote:

Training a deep-learning NN to not be racist or sexist is like trying to take a photograph that is not racist or sexist. Stick to flower gardens and sunsets, you'll be successful.

The requirement is that it has to generally work, but there are thousands of ways to make it work. If it works, I don't really mind how it's done.

I've attempted to parse what you wrote, in an attempt to extract its true meaning. I'm very interested in parsing and extracting meaning.

If "it has to generally work" refers to "take a photograph that is not sexist or racist", then yes, by all means, take pictures of flower gardens and sunsets.  Best avoid photographs of half-naked women in Mardi Gras outfits. Seems obviously sexist to me. For extra flavor, throw in a photograph of a she-is-a-he crossdresser.  For a racist flair, make sure he is a luscious young Thai boy (which, if you don't know, are famous for a sexuality confusing to "straight" Westerners, and a delight for the sexually adventurous.)

If you want to avoid racism and sexism in your photographs, then avoid taking photos of human beings.  All humans. That includes old Soviet babushkas, as we know all too well what happened to many of them, when they were young.  Human life is filled with tragedy. That's just the way it is, however utopian your dreams are.

If "it has to generally work" means "build a deep learning neural net that thinks", my answer is no, that will never work. It is not possible. Those NN's are snapshots, photographs of complicated data relationships. They don't think, they are not alive. If you do not yet clearly understand that a NN is just a photograph, then perhaps the following might help provide that understanding:

Henry W. Lin, Max Tegmark, and David Rolnick
  "Why does deep and cheap learning work so well?"
  3 Aug 2017
  https://arxiv.org/pdf/1608.08225.pdf

If "it has to generally work" means "build a thinking being capable of solving moral and ethical dilemmas", well, no one knows how to do that yet, although there are some apparent paths through the fields and forests, mountains and streams.


Anyway, come work with me in trying to make machines that think, as opposed to machines that learn.

I may want to help here-or-there with this-or-that, but generally, I don't have all the time in the world.

None of us do. Welcome to the club. I wish I had a dozen lifetimes. A hundred. A thousand. There is so much I could do...

If you make a detailed plan of what needs to be done (I propose a structured task tree right at the front page of the project),

A detailed plan is possible only when you know what you are building.  Only when such a thing has been built before.

An analogy: we are in the 19th century, and talking about human flight. We both agree that it is possible, but what is the detailed plan?  Procure a laundry basket that can be suspended from a balloon? Procure many kilometers of rope? What size of rope? What is the detailed plan, exactly? How heavy is that balloon? Can it actually get off the ground?

The true path to human flight involved measurements made in small wind-tunnels on small wings.  The "detailed plan" would be "build a wind-tunnel, and make some measurements, then decide what to do next".  The AtomSpace is my wind-tunnel.

I am very willing to talk about what needs to be done; you need to be willing to think about how that fits into a grand scheme of things.

I am not willing to spend any time at all drafting a detailed blue-print for a rocket-ship. Not the least of which is that I still believe that I can use an airplane to fly all the way to the Moon.

--linas

Linas Vepstas

Apr 5, 2022, 2:04:24 PM
to Craig Bosco, opencog
Hi Craig, (and Ivan)

Replying publicly to a private email:

On Mon, Apr 4, 2022 at 4:00 PM Craig Bosco <craig...@usegale.com> wrote:
the only way forward is to crowdsource work and ideas.
...
everyone will ultimately benefit from the OpenCog platform as it gains ease-of-use and sophistication. 
 
The OpenCog "platform" is both broad and deep; to discuss all aspects of it would be boiling the ocean. Unless, that is, you want to work on deep and basic infrastructure.  One such project would be converting the AtomSpace into a commercially viable platform that ordinary developers would want to use on a day-to-day basis. Having this would attract public attention, although it would not much advance the overall AGI research goals.

One way to convert the AtomSpace into a commercially viable product would be to allow it to store generic JSON or similar (generic s-expressions, generic YAML, or even generic Python or a JSON-like subset of Python; or all of the above). This is "commercially appealing" because there already is a company that does this (grakn.ai, but since renamed to some other name I can't recall) and there are several other graph-database companies that offer something similar.  I've taken some small steps in this direction, but abandoned them as they seemed like a distraction from the main topic of AGI research.
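As a rough illustration of what "storing generic JSON" as s-expressions could mean, here is a hypothetical sketch. The type names (ConceptNode, PredicateNode, ListLink, EvaluationLink) echo Atomese conventions, but this is not the real AtomSpace API, just a flattening exercise:

```python
# Hypothetical sketch: flatten arbitrary JSON-style data into
# s-expression strings resembling Atomese. The type names below
# echo AtomSpace conventions but are NOT the real AtomSpace API.
def to_sexpr(value, key="root"):
    if isinstance(value, dict):
        inner = " ".join(to_sexpr(v, k) for k, v in value.items())
        return f'(ListLink (ConceptNode "{key}") {inner})'
    if isinstance(value, list):
        inner = " ".join(to_sexpr(v, key) for v in value)
        return f'(ListLink (ConceptNode "{key}") {inner})'
    # Scalars become a simple predicate-value pairing.
    return f'(EvaluationLink (PredicateNode "{key}") (ConceptNode "{value}"))'

doc = {"name": "Ivan", "langs": ["en", "hr"]}
print(to_sexpr(doc))
```

The point of such a mapping is that arbitrary nested documents land in one uniform graph representation, which is roughly what the grakn.ai-style products offer.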

The above might be appealing because it is a fairly well-defined, clear-cut project. It does not require arcane theory, or deep experimentation. It's mostly a matter of roll-up-your-sleeves and write code, which is exactly the kind of thing most programmers enjoy. Take a sketch, and turn it into a polished product.

As to AGI research: the stuff I'm working on now is very theory-laden and complex; I now realize that I should not much expect anyone to follow, although a shout out to Amir who continues to surprise me regularly. He's on the right track.

As to AGI research that you or Ivan could work on (... if only Ivan stopped skimming emails, and actually paid attention to what was written in them...) there is brand-new green-field development on audio and video processing.  Green-field, in that not much code has been written, and so you don't have to modify a large, complex existing code-base. It does, however, require interfacing into large and complex existing systems. The path is fairly straightforward; see the attached PDF. The work, however, is definitely challenging: it will require some hard thinking and lots of work. It's not "just programming", it's architecture and exploration.  Some of that work is grunt-work, e.g. collecting a suitable corpus of images. Some is just painful: running CPU-intensive jobs for days on end.


-- Linas

nugi nugroho

Apr 8, 2022, 5:07:01 AM
to ope...@googlegroups.com
I still do not understand the system completely, but I wonder if this system is theoretically capable of creating 3D models from video input where the cameraman rotates around an object at a certain angle. I was thinking of creating a corpus capable of converting video to 3D models for further processing by reasoning algorithms. I think an AGI needs to have the skills to think at least about the 3D world (just my naive assumption, though). This was the first idea that popped into my head, but, well, I still need to prepare for my university exam, so I can't learn faster than my current pace, and my current skills are not sufficient to realize that idea for now. I hope that I can make some contribution to the project at the end of this year, but I cannot promise.


nugi nugroho

Apr 8, 2022, 6:51:13 PM
to ope...@googlegroups.com
Could it be possible to make a 3D model of an object, isolate it, and create another corpus for object recognition by feeding the rendered video back into the system, so that it can recognize the object in 2D? I wonder if this is possible without inputting the 3D model into the new corpus, using just the raw video input, so that by some iteration it would be able to identify the isolated object in a noisy environment. If this is possible, then the system could theoretically learn to recognize objects from a YouTube video by passing it the video and the subtitles (for now, the subtitles would be created by a neural network, since I have no idea how to create the audio corpus). The cooking channels on YouTube usually explain the ingredients, like apples, and move them around as they cook, so I think that could be used as input to teach the model. I know this is a huge and very difficult thing to do.

The thing that I do know is that I don't know a lot about how to use this technique for computer vision; I'm still a beginner. I also don't have any idea how to isolate each object in the video, since this shouldn't be hardcoded but learned.

Linas Vepstas

Apr 9, 2022, 3:41:12 PM
to opencog
Hi Nugi,

I have an idea for how to segment an image into parts, and recognize those parts. For example, given a car, to recognize where the wheels are, where the tires on the wheels are, the windshields, the roof, the door-handles, and then to be able to label these parts (with words, for instance). Likewise for faces (eyes, nose, mouth...) or any other real-world object.  This provides a "grammatical structure" to the object. It is more than just "image segmentation"; rather, it is a way of "seeing" the structure, the part-whole relationship.  Once this grammar is found, one can reason about objects, talk about objects, etc.
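A minimal sketch of the part-whole relationship described above, with the car example encoded as an explicit containment graph. The table of parts is illustrative and hand-written; in the actual proposal this structure would be learned from data, not hard-coded:

```python
# Sketch of a part-whole "grammar" as a labeled containment graph.
# The object/part names here are illustrative, not from any real corpus;
# the point is only what the learned structure would look like.
part_of = {
    "tire": "wheel",
    "wheel": "car",
    "windshield": "car",
    "door-handle": "door",
    "door": "car",
}

def path_to_whole(part):
    # Walk the part-of chain up to the top-level object, giving the
    # "grammatical" structure one could then reason or talk about.
    chain = [part]
    while chain[-1] in part_of:
        chain.append(part_of[chain[-1]])
    return chain

print(path_to_whole("tire"))  # prints ['tire', 'wheel', 'car']
```

Once such chains exist, questions like "what is a tire part of?" become simple graph traversals, which is what makes the structure usable for reasoning and talking about objects.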

The PDF in the email chain explains how to do this. Writing the code would take a long time, as would running the experiments.  You are welcome to read about it, and think about it, but I don't think it is a good project for a student: it would take many years of full-time hard work to accomplish this.

--linas

Craig Bosco

Apr 11, 2022, 8:59:28 AM
to ope...@googlegroups.com
 @Linas,
I must admit -- it took me a few days to read and really consume the Grammar Induction PDF. Some of these topics may seem unapproachable to "outsiders", and I am sure some people struggle (like me) or don't believe they have the capability to fully ingest and comprehend. Or, perhaps more importantly, contribute. I've finished reading your PDF once and have a few thoughts I will share now (before I read it again, and then again). I have some other thoughts about audio cognition that I will save for a follow up email. 

My first takeaway relates to general "project management" and productivity tools. It's going to be very difficult for people to swap in, pick up work, and delegate tasks, unless there's some sort of common operational picture that we can all refer to. Github works great as a platform for programmers, but even Github can be confusing at times. I apologize if I've missed something, but I am thinking a free kanban-like board (such as Airtable or Monday.com) would help a small & distributed team quickly see items that need work, and reduces the "barrier to entry" for all those lurkers out there. I've started to identify a few things that we could tackle, specifically: corpus for audio and visual data. It seems like this direction is the most needed from your end, but correct me if I'm wrong. 

Another observation I've had is that recent advancement in BI tools for analytics have led to an explosion of Business Analysts, allowing people without technical skills to leverage powerful ML in drag & drop interfaces that are easy to pick up. I think that adding something like an ETL GUI to Atomspace/OpenCog would accelerate this growth process by allowing "experimenters" to drop in&out modules and observe results. This might be a pie-in-the-sky wish but I think this sort of enablement will spark another explosion of data science growth.

I will re-read your paper and follow up with some additional thoughts from my professional audio experience later in the week...

Kind regards,
Craig


Linas Vepstas

Apr 12, 2022, 2:06:54 AM
to opencog
Hi Craig,

On Mon, Apr 11, 2022 at 7:59 AM Craig Bosco <craig...@usegale.com> wrote:
 @Linas,
I must admit -- it took me a few days to read and really consume the Grammar Induction PDF. Some of these topics may seem unapproachable to "outsiders", and I am sure some people struggle (like me) or don't believe they have the capability to fully ingest and comprehend. Or, perhaps more importantly, contribute. I've finished reading your PDF once and have a few thoughts I will share now (before I read it again, and then again). I have some other thoughts about audio cognition that I will save for a follow up email. 

Thanks!  I tried to make it as readable as I could.  If you have specific questions or comments, let me know.

My first takeaway relates to general "project management" and productivity tools. It's going to be very difficult for people to swap in, pick up work, and delegate tasks, unless there's some sort of common operational picture that we can all refer to. Github works great as a platform for programmers, but even Github can be confusing at times. I apologize if I've missed something, but I am thinking a free kanban-like board (such as Airtable or Monday.com) would help a small & distributed team quickly see items that need work, and reduces the "barrier to entry" for all those lurkers out there. I've started to identify a few things that we could tackle, specifically: corpus for audio and visual data. It seems like this direction is the most needed from your end, but correct me if I'm wrong. 

I don't know how to respond. You're a software dev manager, and I assume an experienced software dev.  In my work experience, it takes a new developer a few weeks to get oriented enough to start making contributions. It takes them at least a year before they become generally familiar with the project as a whole. I don't know how to short-circuit that. Or rather, I don't believe it's possible to short-circuit that.

4 or 5 embedded emails below, I mentioned that one possibility is to commercialize the AtomSpace. I think this could be done by a more-or-less conventional software+marketing team, following well-understood development management styles.  If that's really the goal, then the place to start would be to work up a marketing-and-competitive-advantage assessment. Who are the competitors? We need to do most of what they can do, and offer some killer features that no one else has. I think the AtomSpace has such killer features.  The AtomSpace does NOT have a product planner who can assess what the market wants, and can negotiate with development on how to get that. Do you know of any good product planners?

Once there's a plan in place, then yes, you can dole out coding tasks to assorted programmers. At least, that's how it works when everyone is *paid*.  For a volunteer project ... hoooboy ... famously, it's all about "scratching that itch".  So: what do *you*, personally, want to do?  I'm sure we can find something you'd like to do, that would be useful.  But what works for you won't work for the next guy.

... the above is *completely different* from the stuff described in the PDF. The PDF is a science research proposal. It's about doing something that's never been done before. It will require a lot of tinkering. That's an utterly different skill set. Different personality type. (In my experience, a personality type that will quit and find other employment, if asked to use Airtable or Monday.com. They'll roll their eyes, leave, and not even say goodbye. "If you can't say something polite, don't say anything at all.")


Another observation I've had is that recent advancements in BI tools for analytics have led to an explosion of Business Analysts, allowing people without technical skills to leverage powerful ML in drag & drop interfaces that are easy to pick up. I think that adding something like an ETL GUI to Atomspace/OpenCog would accelerate this growth process by allowing "experimenters" to drop modules in & out and observe results. This might be a pie-in-the-sky wish but I think this sort of enablement will spark another explosion of data science growth.

Err, the AtomSpace is a graph database, comparable to other graph databases, which, as far as I know, don't have drag-n-drop GUIs on them. I dunno. Maybe they do. That's why some kind of competitive analysis would need to be done.
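For readers new to the thread: the AtomSpace stores typed Nodes and Links, where a Link's outgoing set can contain Nodes or other Links, and identical atoms are deduplicated into one. The sketch below is a plain-Python toy of that idea only; it is not the real opencog API (the actual Python bindings live in the `opencog` packages).

```python
# Toy illustration of the AtomSpace idea: a deduplicating store of typed
# atoms, where a Link can point at Nodes *or* at other Links.
# NOT the real opencog API -- a simplified sketch of the concept.

class Atom:
    def __init__(self, atom_type, name=None, outgoing=()):
        self.type = atom_type            # e.g. "ConceptNode", "InheritanceLink"
        self.name = name                 # only Nodes carry a name
        self.outgoing = tuple(outgoing)  # only Links have an outgoing set

    def __repr__(self):
        if self.name is not None:
            return f'({self.type} "{self.name}")'
        inner = " ".join(repr(a) for a in self.outgoing)
        return f"({self.type} {inner})"

class ToyAtomSpace:
    """Adding the 'same' atom twice yields the one already stored."""
    def __init__(self):
        self._atoms = {}

    def add(self, atom_type, name=None, outgoing=()):
        key = (atom_type, name, tuple(id(a) for a in outgoing))
        return self._atoms.setdefault(key, Atom(atom_type, name, outgoing))

space = ToyAtomSpace()
cat = space.add("ConceptNode", "cat")
animal = space.add("ConceptNode", "animal")
inh = space.add("InheritanceLink", outgoing=(cat, animal))
print(inh)  # (InheritanceLink (ConceptNode "cat") (ConceptNode "animal"))
```

The deduplication is the point: asking for `ConceptNode "cat"` a second time hands back the same atom, which is what lets links form a shared, queryable graph rather than disconnected copies.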

The stuff in the PDF is a science project; there's nothing there that can be wrapped in a GUI; it's too early for that.

Perhaps MOSES could be wrapped in a GUI. I dunno. I suspect, presume, that conventional, mainstream ML can do what MOSES does better, cheaper, faster.  Maybe. I dunno.  Beats me. ML is a commercially mature, multi-billion-dollar industry; it's not the land of cowboys and yahoos that it used to be.  MOSES is still very interesting, but for completely different reasons, reasons that commercial users don't care about.

--linas



nugi nugroho

unread,
May 16, 2022, 1:03:53 AM5/16/22
to ope...@googlegroups.com
I think you are right, that system is way more complicated to build. But do you think that Link Grammar's nature can be used to steal ideas from deep learning for symbolic learning? (I dunno, probably just for fun; it doesn't really matter.)

Linas Vepstas

unread,
May 16, 2022, 4:56:05 PM5/16/22
to opencog
On Mon, May 16, 2022 at 12:03 AM nugi nugroho <susa...@gmail.com> wrote:
I think you are right, that system is way more complicated to build. But do you think that Link Grammar's nature can be used to steal ideas from deep learning for symbolic learning? (I dunno, probably just for fun; it doesn't really matter.)

The short answer is "yes", although I have not thought very much about this. Other tasks seem much more important to me right now, although clearly, most mainstream thinkers are very interested in coupling symbolic systems to deep learning.  Perhaps I should say: everyone else is thinking about this, so perhaps my time is best spent working on something that everyone else is ignoring?
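For a taste of what the symbolic/statistical side looks like in practice: the grammar-induction approach discussed earlier in the thread starts from word-pair statistics, scoring pairs by mutual information over a corpus. Here is a deliberately tiny, self-contained version of that statistic (illustrative only -- the real pipeline gathers pair counts from parses, not mere adjacency, and the corpus here is made up):

```python
import math
from collections import Counter

# Toy word-pair mutual information: the scoring statistic at the heart
# of the grammar-induction approach. Sketch only; pairs are counted from
# simple adjacency in a made-up corpus.

corpus = "the cat sat on the mat the dog sat on the rug".split()

pair_counts = Counter(zip(corpus, corpus[1:]))
left_counts = Counter(a for a, _ in pair_counts.elements())
right_counts = Counter(b for _, b in pair_counts.elements())
total = sum(pair_counts.values())

def mi(a, b):
    """Pointwise MI in bits: log2( P(a,b) / (P(a,*) * P(*,b)) )."""
    p_ab = pair_counts[(a, b)] / total
    p_a = left_counts[a] / total
    p_b = right_counts[b] / total
    return math.log2(p_ab / (p_a * p_b))

# "sat" and "on" always co-occur, so their MI is high; "the" pairs with
# many different words, so pairs involving it score lower.
print(f"MI(sat, on) = {mi('sat', 'on'):.2f}")
print(f"MI(on, the) = {mi('on', 'the'):.2f}")
```

High-MI pairs are the raw material from which candidate links (and eventually disjuncts and word classes) get induced; the deep-learning analogue would be learning those pair scores with an embedding model instead of counting.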

--linas

nugi nugroho

unread,
Jul 19, 2022, 3:43:15 PM7/19/22
to ope...@googlegroups.com
Excuse me, do you think that a CS degree is mandatory for researching an AGI system?

Linas Vepstas

unread,
Aug 4, 2022, 9:59:12 AM8/4/22
to opencog
Hi Nugi,

On Tue, Jul 19, 2022 at 10:43 PM nugi nugroho <susa...@gmail.com> wrote:
Excuse me, do you think that a CS degree is mandatory for researching an AGI system?

If you are not fluent in at least several programming languages, and don't clearly understand fundamental system processes (memory, cpu, network) then you won't be able to actively participate in the creation of AGI.

If you don't have a formal degree in the "hard sciences" (mathematics, physics) you will not be able to theorize what is going on, or even understand the theoretical advances.

Without this, you will be left to wonder and philosophize about what is going on "down there" in working systems. And maybe that's OK -- understanding the impact of AGI requires understanding sociology, anthropology, linguistics, philosophy, psychology, neuroscience ... so this is a marvelous generalist's playground.

Ideally, you would know a lot about all of the topics above. They all seem important.

--linas
 