I am a big believer in the “brain lateralization thesis”. In 1993, I began building an ambitious project based on this idea, but defined in terms of cognitive psychology and epistemology. This was in the days “before the internet”, and the project began as a listserv called “BRIDGE-L” running on the UC Santa Barbara computer system. The name of the project was “The Bridge Across Consciousness”, and it was essentially about the epistemological tension across levels of abstraction – left brain (“science and analysis”) versus right brain (“humanities and intuition”) – and how that tension can be bridged.
I did a lot with that idea. We sent some 30,000 messages through BRIDGE-L – the participants were mostly university people and graduate students – until 1999, when this thing called “the internet” emerged and UCSB kicked us off their system.
Later, I built a web site for the project at bridgeacrossconsciousness.net – but today much of the essential idea is online at
http://originresearch.com/bridge/index.cfm
It’s a simple thesis – one container for the entire range of human thinking, defined across a spectrum of “descending levels of abstraction” (“Plato vs. Aristotle”) – yet very fragmented in actuality, because these elements have evolved independently and because “the great transcendental integration” has not yet really been possible.
Regarding this theme, there is a dedicated Wikipedia page on the separation of “the two cultures”:
https://en.wikipedia.org/wiki/The_Two_Cultures
"The Two Cultures" is the first part of an influential 1959 Rede Lecture by British scientist and novelist C. P. Snow which were published in book form as The Two Cultures and the Scientific Revolution the same year.[1][2] Its thesis was that science and the humanities which represented "the intellectual life of the whole of western society" had become split into "two cultures" and that this division was a major handicap to both in solving the world's problems.
And there’s this
https://en.wikipedia.org/wiki/Consilience_(book)
Consilience: The Unity of Knowledge is a 1998 book by the biologist E. O. Wilson, in which the author discusses methods that have been used to unite the sciences and might in the future unite them with the humanities. Wilson uses the term consilience to describe the synthesis of knowledge from different specialized fields of human endeavor.
**
I wish there were more people on Ontolog who would speak up on these issues.
Bruce Schuman
Santa Barbara CA USA
bruces...@cox.net / 805-705-9174
From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of Azamat Abdoullaev
Sent: Friday, January 15, 2021 1:20 AM
To: ontolog-forum <ontolo...@googlegroups.com>
Subject: Re: [ontolog-forum] Knowledge Processing, Logic, and the Future of AI - Georg Gottlob WLD 2021
An interesting survey. A couple of points to observe:
We urgently need to “get cracking on building machines equipped with common sense, cognitive models, and powerful tools for reasoning. [...] Together [with ML] these can lead to deep understanding, itself a prerequisite for building machines that can reliably anticipate and evaluate the consequences of their actions. [...] The royal road to better AI is through AI that genuinely understands the world.”
Great! The only thing is, nobody is walking along the royal road, preferring all sorts of side roads.
As if our left hemisphere now operated without NNs.
It is misleading to divide AI into symbolic and subsymbolic. The latter also uses binary data as symbols, with statistical inference, induction, deduction, transduction and abduction/transfer learning.
ML is NOT any true AI, but a computational statistical technique for predictive analytics, with no need for "understanding the world".
What really needs to be done is merging MI and HI, as HMIL.
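To make the contrast being debated here concrete, a minimal sketch of my own (not from the talk or from any of the posts above): a "symbolic" rule that is explicit and inspectable, next to a "statistical" predictor that only generalizes from counted examples. The animals, features and rule are all invented for illustration.

# Sketch: symbolic rule vs. statistical generalization, in miniature.
# Everything here is a toy illustration; nothing is taken from Gottlob's talk.

# --- symbolic side: an explicit, inspectable rule ---
def flies_symbolic(animal, facts):
    """Rule: birds fly, unless they are known to be penguins."""
    return "bird" in facts[animal] and "penguin" not in facts[animal]

facts = {"tweety": {"bird"}, "pingu": {"bird", "penguin"}, "rex": {"dog"}}

# --- statistical side: generalize from labeled examples by majority vote ---
examples = [({"bird"}, True), ({"bird"}, True),
            ({"bird", "penguin"}, False), ({"dog"}, False)]

def flies_statistical(animal, facts):
    """Predict by the majority label among examples sharing any feature."""
    votes = [label for feats, label in examples if feats & facts[animal]]
    return sum(votes) > len(votes) / 2 if votes else False

for name in facts:
    print(name, "symbolic:", flies_symbolic(name, facts),
          "statistical:", flies_statistical(name, facts))

On this toy data the statistical predictor gets "pingu" wrong, because the penguin exception is outnumbered by the bird examples – a miniature picture of the difference between predicting from statistics and reasoning from explicit knowledge.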
On Thu, Jan 14, 2021 at 10:52 PM Marco Neumann <marco....@gmail.com> wrote:
This might be of interest to some of you. To celebrate World Logic Day 2021 Georg Gottlob presents his take on the future of AI.
The presentation has a bit of a slow start but gets better towards the end. Nothing really new here, but a sober overview of where things stand.
Formal (Symbolic) Logic needs to catch up with the progress made in ML & NN research to stay relevant.
Video
Slides
--
Marco Neumann
KONA
“My assessment about why A.I. is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are,” he told me. “And this is hubris and obviously false.”
He adds that working with A.I. at Tesla lets him say with confidence “that we’re headed toward a situation where A.I. is vastly smarter than humans and I think that time frame is less than five years from now. But that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.” https://www.nytimes.com/2020/07/25/style/elon-musk-maureen-dowd.html
So the rule of COVID-19 is to be succeeded by global MI/AI by 2025...
Azamat,
That is not going to happen within the 21st century. Musk may be smart about designing machines, but he's clueless about two things: AI and the nature of intelligence -- human, animal, and even plant.
AA> For the last two years, I have been repeating that intelligent machinery is highly likely to take over by 2025, instead of 2075, 2045, or 2030.
For an overview of the problems of understanding natural language, see the cartoon in slide 20 of http://jfsowa.com/talks/natlog.pdf . A child can understand that cartoon and answer every question on the slide.
But there is no AI system today that can understand and answer the simple questions on the slide. And the bottom line -- understand the implications and why they are humorous -- is far beyond what we can expect in the next 10 or 20 years.
For more reasonable but important goals that could be achieved in the next 5 to 10 years, see http://jfsowa.com/eswc.pdf .
And by the way, if Elon M. really believed that robots would surpass human intelligence so soon, he would scrap the idea of sending humans to Mars. It would be much, much cheaper to send robots, who would be far superior to humans in exploring the planet. They could even build some nice comfy homes for humans, if and when the robots decided to let humans make the trip.
John
Hi Marco –
I am guessing this is you: http://www.marconeumann.org/
My professional interests touch on Distributed Information Syndication and Contexts for the Semantic Web, dynamic schema evolution in structured data, information visualisation, ontology based knowledge management, reputation based ranking in collaborative online communities, and last but not least the Semantic GeoSpatial Web. Since 2005 I have had the opportunity to apply and implement my research in large-scale information management projects in international cultural heritage institutions and the private sector.
I just saw John’s comment on AI and Elon Musk, and probably agree.
What would be interesting is a visionary online conference process connecting “humanists” with technologists, in a context that is actually willing and able to look at the major problems facing the world right now and to consider how technology could or should help.
Instead of talking about cute technical gizmos and ways to make more money for big corporations, we should be considering the big picture, and asking how big-time networks – probably with AI support – could be stepping into the huge issues of world crisis management.
Obviously, politics in the United States has veered dangerously close to sheer lunacy – and is in extreme need of a massive injection of coherent and well organized rationality.
And global stability in major areas like climate change is very vulnerable.
There is a huge opportunity for the application of powerful technical ideas at the macro scale – but it seems like people with the technical competence are generally short-sighted.
So yes, “the two cultures” should be talking to each other.
In my opinion, we need a “whole systems revolution” across the entire range of human experience – semantics, governance, law, ecology/environment.
If there were a core group of competent people who were not locked into their own semi-blinding silos and inter-sector prejudices, a lot might happen.
****
What is AI? I just scanned through the Wikipedia article. There’s a lot to it.
https://en.wikipedia.org/wiki/Artificial_intelligence
Artificial intelligence was founded as an academic discipline in 1955, and in the years since has experienced several waves of optimism,[12][13] followed by disappointment and the loss of funding (known as an "AI winter"),[14][15] followed by new approaches, success and renewed funding.[13][16] After AlphaGo successfully defeated a professional Go player in 2015, artificial intelligence once again attracted widespread global attention.[17] For most of its history, AI research has been divided into sub-fields that often fail to communicate with each other.[18] These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"),[19] the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences.[22][23][24] Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).[18]
Azamat,
I'm sure that (a) Elon M. will develop good technology; (b) he has plans for developing good AI systems to support it; and (c) the AI technology will be much better than anything that can be done with current AI systems.
But I'm also sure that neither his AI technology nor anybody else's will surpass human intelligence during the 21st century. However, I also believe that human-aided AI systems will produce much better technology than anything we have today.
The critical qualifier is "human aided". For further explanation of what that means, see the talk I presented at the European Semantic Web Conference: http://jfsowa.com/talks/eswc.pdf .
That talk was scheduled for Crete, but it was virtualized.
John
Andrea,
I'm sorry. The correct URL for the European Semantic Web Conference talk is http://jfsowa.com/talks/eswc.pdf .
All my talks are in jfsowa.com/talks and all publications are in jfsowa.com/pubs.
John
Thank you, Marco.
MN> Bruce, I presume you would say that you have a foot in each of the so-called cultures. Now would you describe AI as a science? And if so, what is its field of inquiry?
Yes, I did sense that this was a serious question. But I did not feel like I could address it in a serious or competent way. I started gathering up books on Artificial Intelligence many years ago. I was excited to buy Marvin Minsky’s “The Society of Mind” the moment it came out. I had both original volumes of “Parallel Distributed Processing: Explorations in the Microstructure of Cognition” by Rumelhart and McClelland. At one point in my life, I actually had a collection of something like 200 Scientific American reprints about anything interesting or innovative regarding systems science, mathematics or computers. I was very excited when Byte Magazine announced the original “ODBC” protocols for connecting databases – and I think that was the early 1990s.
But – “Artificial Intelligence”? For me, that subject goes back to the popular analogy (in those days) between a neuron and a logic gate. Grossly over-simplified, right – and it has been moving a bit erratically ever since.
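To make that old analogy concrete – this is my own minimal sketch, not anything from the books just mentioned – a single threshold neuron with hand-picked weights behaves exactly like a logic gate:

# The neuron-as-logic-gate analogy, McCulloch-Pitts style.
# Weights and thresholds are hand-picked for illustration, not learned.

def threshold_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

def AND(a, b):
    return threshold_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return threshold_neuron([a, b], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))

A single unit like this cannot compute XOR – which, as Minsky and Papert famously showed, is part of why the analogy turned out to be grossly over-simplified.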
To address your question, I went to the Wikipedia article on AI. It makes many points that support your concern, including the idea that the subject is badly fragmented, and maybe hyped.
https://en.wikipedia.org/wiki/Artificial_intelligence
So – I am not qualified to address your question – and I am not sure it can be answered.
What I am seriously interested in, though, is the mathematical modelling and interpretation of cognitive processes.
One book I have been happy about recently is “The Big Book of Concepts”, by Gregory Murphy – which is very readable, very non-pretentious – and has a great section (Chapter 11) on the relationship between word meaning and concepts. John might find that discussion fascinating.
http://cognet.mit.edu/book/big-book-of-concepts
My interest is in understanding human cognition – and how to make it stronger in a context of human foibles and serious misinformation.
Yes, there is “a science” to this.
From Murphy, Chapter 11:
Throughout much of this book, I have waffled on the question of whether I am talking about concepts or words. The two seem to be closely related. I can talk about children learning the concept of sheep, say, but I can also talk about their learning the word sheep. I would take as evidence for a child having this concept the fact that he or she uses the word correctly. On a purely intuitive basis, then, there appears to be considerable similarity between word meanings and concepts. And in fact, much of the literature uses these two terms interchangeably. In this chapter, I will address in greater detail the relation between concepts and word meanings, and so I will have to be somewhat more careful in distinguishing the two than I have been. (Let me remind readers of the typographical convention I have been following of italicizing words but not concepts.)

To discuss the relation between these two, one ought to begin by providing definitions for these terms, so that there is no confusion at the start. By concept, I mean a nonlinguistic psychological representation of a class of entities in the world. This is your knowledge of what kinds of things there are in the world, and what properties they have. By word meaning, I mean quite generally the aspect of words that gives them significance and relates them to the world. I will argue that words gain their significance by being connected to concepts. It is important to note, however, that this is not true by definition but needs to be argued for on the basis of evidence and psychological theory. And that is the main goal of this chapter. More generally, the goal is to say how it is that word meanings are psychologically represented.
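To make Murphy’s distinction concrete – this is my own illustration, and the concepts, properties and words in it are invented, not taken from the book – one can represent a concept as a nonlinguistic bundle of properties and keep the lexicon as a separate mapping from word forms to concepts, so that polysemy (one word, several concepts) and synonymy (several words, one concept) fall out naturally:

# Sketch of the concept / word-meaning distinction.
# A Concept stands for a nonlinguistic representation of a class of entities;
# the lexicon links word forms to one or more concepts.

from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    label: str             # internal handle only, not a word of the language
    properties: frozenset  # what we know about this class of entities

FURNITURE_TABLE = Concept("furniture-table",
                          frozenset({"flat top", "legs", "holds objects"}))
DINER_GROUP = Concept("diner-group",
                      frozenset({"people", "seated together", "orders food"}))
SHEEP = Concept("sheep-animal",
                frozenset({"animal", "woolly", "grazes"}))

lexicon = {
    "table": [FURNITURE_TABLE, DINER_GROUP],  # polysemy: one word, two concepts
    "sheep": [SHEEP],
    "mouton": [SHEEP],                        # two words, one concept
}

print([c.label for c in lexicon["table"]])    # ['furniture-table', 'diner-group']

On this picture, Murphy’s claim that words gain their significance by being connected to concepts becomes a statement about the lexicon table: the word carries no properties of its own, it only points at concepts that do.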
***
Knowledge effects can be seen more specifically in conceptual combination (see chapter 12), contextual modulation, typicality shifts (Roth and Shoben 1983), determining the intended sense in polysemy, and the use of background assumptions in determining word meaning (Fillmore 1982). That is, there is no way to tell from the word table by itself whether one is talking about the piece of furniture or the collection of people eating dinner. But when embedded in a sentence such as "The table left only a 10% tip," there is only one plausible choice – only one choice compatible with our world knowledge. Although one can argue that such knowledge is not really part of the word meaning but is more general domain knowledge, it effectively becomes part of the meaning, because the knowledge is used to determine the specific interpretation of that word in that context. Indeed, it is in lacking such everyday knowledge that computerized translation and language comprehension devices have often had great difficulty, as well as their difficulty in using pragmatics and knowledge of the speaker. Thus, it can be very difficult to know where to draw the line between what is part of the word meaning per se and what is "background" knowledge. It is not clear to me that drawing this line will be theoretically useful.

In short, insofar as background knowledge and plausible reasoning are involved in determining which sense is intended for a polysemous word, for contextual modulation, and for drawing inferences in language comprehension, this provides further evidence that background knowledge is involved in the conceptual representations underlying word meaning.
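To see how far a purely dictionary-based system gets with an example like this – my own sketch, using NLTK’s implementation of the simplified Lesk algorithm, which picks the WordNet sense whose gloss overlaps most with the surrounding words and uses no world knowledge at all:

# Dictionary-based word sense disambiguation on the "table" example.
# Requires the NLTK package and its WordNet data.

import nltk
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)

sentence = "The table left only a 10% tip"
context = sentence.lower().split()        # crude tokenization is enough here
sense = lesk(context, "table", pos="n")   # returns a WordNet Synset (or None)

print(sense, "-", sense.definition() if sense else "no sense found")

Whatever noun sense of "table" this returns, the metonymic reading – the people seated at the table – is exactly the interpretation that requires the everyday background knowledge Murphy is describing: tables do not leave tips, diners do.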