PhD locations for AGI


Jose Ignacio Rodriguez-Labra

Apr 25, 2021, 12:49:08 PM
to opencog
I want to get serious in the development of AGI, and I think doing a Ph.D. might be best for me. I will graduate with my master's this Fall 2021. Do you guys have any recommendations for universities, research institutes, or programs in the US or even international? Thank you.

Linas Vepstas

Apr 25, 2021, 6:52:30 PM
to opencog
I don't know. Let us know if you find out. My general impression is that academia is still actively hostile to the idea of AGI.  I've never had a conversation on the topic that went well.

-- Linas

On Sun, Apr 25, 2021 at 11:49 AM Jose Ignacio Rodriguez-Labra <nachos...@gmail.com> wrote:
I want to get serious in the development of AGI, and I think doing a Ph.D. might be best for me. I will graduate with my master's this Fall 2021. Do you guys have any recommendations for universities, research institutes, or programs in the US or even international? Thank you.

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/d0efa315-7d7d-4fad-8ba3-33432981f0e5n%40googlegroups.com.


--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
 

Jose Ignacio Rodriguez-Labra

Apr 25, 2021, 11:31:02 PM
to opencog
Thank you for the response, Linas. 

I thought that academia was the place to go to work on cutting-edge ideas and push boundaries :(

To be honest, though, I do see what you're saying. I've personally spoken to a couple of experienced professors who've done research on machine learning and neural networks, but they don't seem to believe in AGI, or they think it lies in a ridiculously distant future. I can see the disbelief in their eyes.

But surely history has seen technologies that no one believed in before their development; how did those come to fruition? What does the history manual recommend I do? Is AGI development from my basement in my free time the best I can do? How did you end up in the position that allowed you to gain your expertise in AGI? Thank you.

Linas Vepstas

Apr 26, 2021, 12:19:20 AM
to opencog
Hi Jose,

Regarding finding a PhD program in which to study AGI ...

On Sun, Apr 25, 2021 at 10:31 PM Jose Ignacio Rodriguez-Labra <nachos...@gmail.com> wrote:
Thank you for the response, Linas. 

I thought that academia was the place to go to work on cutting-edge ideas and pushing boundaries :(

However to be honest I do see what you're saying. I've personally spoken to a couple of experienced professors who've done research on machine learning and neural networks, but they seem like they don't believe in AGI, or that it is in a ridiculously away future. I can see their disbelief in their eyes.

But I'm sure that in history there have been technologies that no one had believed in before their development, how did those come to fruition? What does the history manual recommend I do? Is AGI development from my basement in my free time the best I can do? How did you end up in the position that you are that allowed you to gain your expertise in AGI? Thank you.

Those are good questions. To answer them, let me offer a few stories. A long time ago, when I was in grad school, I became very interested in fractals, "negative entropy", and "chaos". I thought these would be fascinating things to study, but was warned against it. I was told that only cranks and crackpots study such things and I would never be able to get a job.  This was, of course, shortly before "chaos theory" exploded on the scene. Eventually, there were best-seller books written about it!  (In the end, I did not study chaos; I feel like I missed my chance: I was in the right place at the right time and did the wrong thing.)

Academics are a conservative bunch, because they have to write grant proposals that do not get rejected. No grant means no money, and no money means no job or tenure if you're young: you can't do research, can't pay for grad students, can't pay for lab equipment or computer time. The guys in the machine shop who drill pieces of metal for your experiment need to be paid, too. So you don't want to write grant proposals that sound crazy and are likely to be rejected. This keeps academics fairly close to the mainstream; they are risk-averse.

I did keep whining about chaos, and kept gluing pictures of fractals on the wall.  Someone suggested that I talk to Phil Anderson -- he was, by that point, a rather famous and established heretical thinker. However, he was in California, I was in New York, and this was before email. I would have had to write a letter, or call on the phone. I was easily intimidated .. and socially awkward. I didn't know how to do that.  (I should have done it.)

I was also sent to talk to Per Bak, a bit before he got famous. He was at Brookhaven; I was almost done with my PhD. I was silly and did not know what to talk about, and I did not know how to ask for a job or how to ask for advice (!!). I knew math and physics; I did not have "ordinary common-sense people skills". Alas.

The point of my stories is for you not to repeat them. There almost surely are some AGI thinkers embedded in academia, but I do not know who they are. It is much more valuable and important for you to find them; studying AGI in your basement is a formula for intellectual suicide. Standard career advice: it is not what you know, it's who you know.

Here is an idea: someone recently pointed me at a paper by Geoffrey Hinton "How to represent part-whole hierarchies in a neural network":  https://arxiv.org/abs/2102.12627 -- I have not read it yet, but it looks right up my alley.  You should raise your hand and write to him, and say something like "hey me! I want to do that!" -- or at least, ask him what you asked me -- advice on getting a PhD.  If I may be so bold, I will bcc him on this (bcc, so as not to spam a public mailing list with a private email address.)

So here's the deal: I've been working on the intersection of neural nets and symbolic AI  since -- I dunno -- 2014? Earlier?  I've been laboring in (what feels like) complete obscurity, unable to get anyone interested or excited. Asking me for help is pointless: I have no connections; I don't know of anyone, besides Ben, who is interested in this topic.

By contrast: Geoffrey Hinton has a Wikipedia article about him. He has connections both to a respectable university and to Google. That's hard to beat.  He may not personally know anyone interested in AGI, but he might be able to point you in the right direction.

My strongest advice to you is to keep asking around, through your network of people who know other people who might know, and pose them this same question.

I can give you a specific curriculum to study, but this is not the most important thing, right now. Pester me later, as the mood hits.

-- linas





Ben Goertzel

Apr 26, 2021, 6:27:47 PM
to opencog
I'd say a lot of us who were in academia and were into AGI ended up
... leaving academia ...

However there are still some hard-core AGI-ers in academic positions
supervising students, e.g. (an incomplete list): Kristinn Thorisson in
Iceland, Claes Strannegård in Sweden, Pei Wang at Temple U in
Philadelphia, Paul Rosenbloom at USC, Juergen Schmidhuber at IDSIA
.... Of course each of these folks has their own AGI approach /
predilections and may prefer students who want to work in accordance
w/ their ideas/interests; that sort of thing becomes an individual
discussion between student and potential advisor...

There are, so far as I know, no academic departments with strong AGI
"programmes" -- there are just isolated AGI-oriented profs like the
ones I listed above.

Of course there are also many other profs out there interested in AGI
but doing research on related but narrower stuff. E.g. my son's
PhD supervisor, Josef Urban, is an AGI enthusiast, but as a researcher
he focuses on ML for automated theorem proving. Could a prof of this
ilk (AGI-interested and AGI-enthusiastic but not AGI-focused)
supervise a research student doing AGI? Quite possibly; it would depend
on the case... but whether they would have scholarship $$ for a student
doing AGI is a different question...

Of course, doing a PhD on a CS / cog-sci topic related to, but not
directly constituting, AGI can still be valuable for advancing
yourself as an AGI researcher... but that becomes a broader topic...

Academia is by far the best institution humanity has yet come up with
for fostering creative research in a stable and persistent way -- but
it still sucks...

ben



--
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

Jose Ignacio Rodriguez-Labra

Apr 27, 2021, 12:32:29 AM
to opencog
Thank you so much for the thorough response.

I realize now that the difficult thing about doing a Ph.D. in AGI is finding support for such an idea. On the positive side, perhaps this is an opportunity to take initiative and become a leading expert before it takes off, similar to your story of chaos theory. Maybe this is the right time to delve into it, or maybe it isn't. Still, I feel that at the end of the day someone, or more accurately a group of people, will have to sit down and figure AGI out. The fact that this needs to happen motivates me to find a way in.

I really appreciate your introducing me to Geoffrey Hinton and asking for advice on my behalf. I'll be sure to follow up with him, look for others, and contact them directly about doing a Ph.D., and generally put a lot more focus on networking. Curriculum-wise, I was reading up on Ben's Real-World Reasoning. I'll definitely pester you later on AGI technicalities :)

Jose Ignacio Rodriguez-Labra

Apr 27, 2021, 1:07:38 AM
to opencog
Thank you a lot for the valuable list of people. I'll try to get in contact with them and see what they think.

With regard to funding, there must be some financial source in our society that funds crazy, risky new ideas, and I had imagined academia was such a source. If I looked for a venture capitalist for a start-up, I'm not sure how I could build a business model around AGI research.

Why is there so little support for AGI, anyway? I had always imagined governments racing to create a digital mind. Is it just so hard that organizations don't believe it is possible at the moment? It's not as if there aren't hundreds of other really hard things in science and engineering today that do get support. Also, why do AGI enthusiasts leave academia? Where do they go? Thank you.

Linas Vepstas

Apr 27, 2021, 2:58:45 PM
to opencog
Hi Jose,

On Tue, Apr 27, 2021 at 12:07 AM Jose Ignacio Rodriguez-Labra <nachos...@gmail.com> wrote:

Why is there so little support for AGI, anyways? I had always imagined governments racing to create a digital mind. Is it just so hard organizations don't believe it is possible at the moment?

There is very little agreement as to what AGI is, how to build it, whether it is even safe to build, or whether something almost as good might suffice; add to that questions of profitability and geo-political factors.  Let me start with a slide from IBM:
[Attached image: SIRE.png]
The above is a portion of the old IBM Watson that famously competed against Ken Jennings in the Jeopardy! match. I call this the "popsicle stick" architecture. Many, many people have envisioned and built something similar. I don't quite want to speak for Ben, but I think he was envisioning something like this in the 2004 timeframe, placing PLN below the bottom-most arrow (PLN would do "probabilistic logic" reasoning; in a similar time-frame, Pei Wang's NARS was a "non-axiomatic reasoning system"). I joined opencog circa 2008, and had built something like the above by 2010 or 2012 or so. One incarnation used a collection of "common-sense facts" from OpenCyc. What I built was nowhere near as good as what IBM built: I worked alone, while IBM threw 100+ programmers at it to code, polish, debug, and test. Nor was I the only one working on such things: Ben had a team creating a talking dog via Unity 3D graphics, and a half-dozen universities created similar systems for question-answering. Go to the Internet Archive circa 2010 and look at what MIT CSAIL was publishing at the time: you'll find similar charts. I believe that most universities with AI departments built something like the above at some point. It was all very exciting!
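The "popsicle stick" shape described above can be sketched in a few lines: independent modules (a parser, a knowledge-base lookup, a reasoner) wired in a fixed sequence, each consuming the previous module's output. This is purely illustrative; the module names, the toy knowledge base, and the scoring scheme are hypothetical stand-ins, not actual Watson or OpenCog code.

```python
# Toy "popsicle stick" question-answering pipeline: each stage is an
# independent module whose output feeds the next stage in a fixed order.
# All names and data here are hypothetical, for illustration only.

def parse(question: str) -> dict:
    # Stand-in for a linguistic-parsing module.
    return {"tokens": question.rstrip("?").split()}

def retrieve(parsed: dict, kb: dict) -> list:
    # Stand-in for a knowledge-base lookup (e.g. "common-sense facts").
    return [kb[t] for t in parsed["tokens"] if t in kb]

def reason(candidates: list) -> str:
    # Stand-in for the reasoning module (PLN, NARS, ...); here it just
    # picks the highest-confidence candidate answer.
    return max(candidates, key=lambda c: c[1])[0] if candidates else "unknown"

# Tiny hand-curated knowledge base: token -> (answer, confidence).
KB = {"sky": ("blue", 0.9), "grass": ("green", 0.8)}

def answer(question: str) -> str:
    return reason(retrieve(parse(question), KB))

print(answer("what color is the sky?"))  # -> blue
```

The fragility discussed below is already visible here: any question whose tokens miss the hand-curated KB falls straight through to "unknown".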

It was exciting, since it seemed like the right path to AGI. It seemed only a matter of engineering and additional resources: just hook up all of these modules and provide the correct reasoning system (PLN, NARS, maybe Doug Lenat's microtheories from Cyc -- that started in the 1990's, for crissake!). Ben called it the Manhattan Project of AI, envisioning similar levels of funding and activity!

That this won't work became clear to me circa 2012 or 2014. Some people figured it out much earlier; others still think the above approach is a viable path to AGI. I believe the reason it won't work is that it depends on a brittle collection of knowledge bases curated by grad students. It is incredibly difficult to make such knowledge bases complete and bug-free. Early lessons were taught by "expert systems", as early as the 1980's: in the early days of Prolog, people built "medical experts" and "legal experts" and even "petroleum bore-hole and geo-exploration experts" that captured knowledge, from hundreds of petroleum experts, about what it means when the temperature is 180 degrees Fahrenheit some 3000 meters down a borehole. Those systems worked OK-ish, but mostly were buggy and incomplete. Even though petroleum exploration is different from co-reference resolution, semantic labelling, or upper ontologies, the failure reasons are the same in each case. That's the lesson.
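To make the expert-system lesson concrete, here is a minimal sketch of the 1980's-style rule pattern: hand-written if-then rules over measurements. The rules and thresholds are invented for illustration; the point is the brittleness -- any input the rule authors did not anticipate simply falls through.

```python
# Minimal rule-based "expert system" in the classic style: a fixed list
# of hand-written (condition, conclusion) rules, tried in order.
# The rules and thresholds below are hypothetical, not real petroleum
# engineering knowledge.

def borehole_diagnosis(depth_m: float, temp_f: float) -> str:
    rules = [
        (lambda d, t: d > 2500 and t > 170, "possible geothermal anomaly"),
        (lambda d, t: d <= 2500 and t > 150, "check sensor calibration"),
    ]
    for condition, conclusion in rules:
        if condition(depth_m, temp_f):
            return conclusion
    # The brittleness in one line: outside the curated rules,
    # the system has nothing to say.
    return "no rule applies"

print(borehole_diagnosis(3000, 180))  # -> possible geothermal anomaly
```

Scaling this pattern to thousands of grad-student-curated rules is exactly what made those systems buggy and incomplete: every gap between rules is a silent failure.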

There's some geo-politics too: circa 2012(??), some of the Manhattan-Project-of-AI believers convinced Brussels bureaucrats to invest billions of euros into a University-Industrial Complex to build such a thing! Jobs for everyone! No European city left out! Money falls from the sky! I don't recall the name of the consortium; it was classic pork-barrel funding... and, as a project, it failed. As a money-spout to certain institutions, it was wildly successful.

Can fragile hand-built assemblages of expert systems and ontologies ever work? Well, if you try hard enough, sort-of. The Ken Jennings Jeopardy! match proved that, as did the early DARPA Grand Challenge competitions for self-driving cars.  It should be noted that the best FSD is now from Tesla. Although it's top-secret, it appears to be made from neural nets (i.e. it is *not* a fragile, hand-built assemblage of expert systems). That this works well is not an accident.

So is that the road to AGI? Well, back in the day (circa 2012, again) there was an idea that we could build a supercomputer to simulate a mouse brain, neuron by neuron, and we would have a thinking mouse; make the supercomputer a little larger, and ta-dah, human-level intelligence. Yeah. Uhhh. Well, that did not work out. For starters, the wiring diagram of the mouse brain was not known. And now that we almost know it, we still don't know what it does. Every supercomputer simulation produces garbage until it is compared against lab experiment, and the lab experiments are a bottleneck on this path to AGI. Nothing special here: the same remarks apply to using supercomputers for weather simulation.

People like Bengio still think that better neural nets are all we need. Hinton disagrees, and suggests unifying with symbolic AI. (So do I.)  The devil is in the details, and no one seems to agree on the details. What I'm building is almost surely not what Hinton envisions. Ben does not agree with what I'm building; he's building something rather different.  Ask 50 other AGI researchers, and none agree with each other as to the correct approach. It doesn't help that AGI is strongly infested with cranks and crank writing. In physics and astronomy, with some formal education and some intellectual effort, you can mostly tell the cranks apart from the smart ones. It's a lot harder in AGI.  How do you secure funding in this environment? How do you build consensus?

Then there is China. There are the slaughterbots: https://www.youtube.com/watch?v=9CO6M2HsoIA There's algorithmic propaganda and Facebook ... I mean, social media algorithms can be used to encourage cult-like thinking: things like QAnon, flat Earth, etc. Cult thinking is hostile to humanity: think of Jim Jones and Jonestown in Guyana. I talked about AI question-answering systems above, and about self-driving cars. There are vast amounts of money flowing into "delusional AI" -- proto-AGI systems designed to convince consumers to buy products, or vote for political parties. Sounds benign, but realize that the Great Firewall of China falls into this class. The systems that Facebook and Youtube run must be viewed as a kind of proto-AGI, running partly in silicon and partly in human brains, and they are every bit as liberating and dangerous as Jim Jones was.  (Let's be clear: Jones was a hero, was a good guy ... he did the right things for the right reasons ... until he didn't. And then people died.)  This is the problem with AGI.  It is a kind of nuclear bomb for the brain. Are you prepared to explode a bunch of human brains?  Is this really the path you want to walk down?

--linas



Ben Goertzel

May 12, 2021, 1:52:54 AM
to opencog
> In regards to funding, there must be a financial source in our society that provides funding to crazy new risky ideas. I would imagine academia is such a source. If I looked for a venture capitalist for a start-up, I'm not sure how I could build a business model around AGI research.

Nonetheless, pathetic as it is, arguably VCs have put more $$ into
AGI-oriented research than governments so far (e.g. DeepMind, OpenAI,
Vicarious)

> Why is there so little support for AGI, anyways? I had always imagined governments racing to create a digital mind. Is it just so hard organizations don't believe it is possible at the moment? As if there aren't other hundreds of really hard things to do in science and engineering today with support.

It seems there is skepticism that the goal is achievable in a
reasonable timeframe, and fear of bottomless research-dollar
sinkholes.

Personally, I suspect there is a psychological undercurrent of
disbelief that the glories of the human mind can ever be replicated in
a computer ... and also an undercurrent of Fear of the Terminator etc. ...
perhaps these psychological factors play a part, along with general
skepticism...

> Also, why do AGI enthusiasts leave academia? Where do they go? Thank you.

Some remain in academia of course. Some of us have gone into industry
and spent years/decades paying our bills doing narrow AI while
pursuing AGI R&D on the side. Some of us have
creatively/entrepreneurially found ways to fund modest AGI R&D teams
within commercial organizations focused more proximally on narrow
AI...

ben