We need to talk about AI - films


Gerd Leonhard

Jul 15, 2019, 4:56:17 AM
to Lifeboat Foundation Advisory Boards
Hi everyone, I'd like to share my recent short film 'We Need to Talk About AI' with you. It has reached over 450K views on YouTube (https://youtu.be/XUVS5d3-Bis), and I'd love some feedback from everyone here. You can also use the shortcut www.weneedtotalkaboutai.com. Download all my films via Vimeo, btw :) or via www.gerd.cloud
Cheers!

Gerd


Gerd Leonhard 

Futurist & Humanist, Keynote Speaker

Author of Technology vs. Humanity

Zurich, Switzerland


www.futuristgerd.com 

Youtube: www.gerdtube.com 

CEO www.thefuturesagency.com

LinkedIn

Twitter


VIDEOS 

Latest short keynote videos and Interviews: Gerd.Live

5 short keynote video excerpts

New film on AI: We Need to Talk about AI

My key memes on video: http://www.humanity.digital/ 


off...@thefuturesagency.com 



CONTEXT

Download the free PDF on 'The Ethics of Technology'

Subscribe to my podcasts: https://gerd.fm/audioonlykeynotes

Newsletter sign-up: Humanity Futures



My latest book: "Technology Vs Humanity"
Now available in 10 languages 


Visit my shop - buy some cool t-shirts :)


Tihamer Toth-Fejel

Jul 15, 2019, 12:42:11 PM
to Gerd Leonhard, Lifeboat Foundation Advisory Boards
I love the question, and have thought about it since 1984 (ref "Angels of Steel", page 8 of http://archives.nd.edu/Scholastic/VOL_0125/VOL_0125_ISSUE_0009.pdf)

Two problems with "We need to talk" (other than the nebulous title):
First, the current fad in AI is dependent on neural networks. Almost by definition, these types of systems cannot explain their decisions. HOL (Higher Order Logic) systems, OTOH, not only explain, but also prove their decisions; unfortunately, some of these proofs are long (e.g. 300 pages to prove that 1+1=2). There is a lot of technical work that needs to be done before HOL systems can work as usefully as neural networks --only without the bugs. Not to mention the limitations caused by Godel's Incompleteness. In addition, the requirements had better be set correctly. These are the technical challenges we face. Second, many of the laudable-sounding goals above are vague enough to be useless. Human flourishing, for example. What does it mean for humans to flourish? Some people would say it means limiting the population; some would say ithe opposite--we should fill the galaxy. To some, it means loving and obeying God; others would say it means being free of the obsolete and limiting rules of ancient superstitions. At it's heart, this is the philosophical problem that humans have struggled with for millennia--What is good and true? I doubt that the machines will solve this one either.  

A few nit-picks: 
The differentiators between machines and humans (at 2:36), namely emotions, spontaneity, intuition, and imagination, are inadequate and inadequately defined. Even animals have emotions, and genetic algorithms are pretty good at searching solution spaces in a way that looks like imagination. I would have said "reaching a gestalt".

Also, at 2:25 your text cleverly reconstitutes itself into the word "SOUL", but you never mention it. What the heck *is* a soul, anyway? If the soul is the differentiator between humans and machines, then you need to explain what it is. You certainly should not avoid all mention of it.


-Tee

--
You received this message because you are subscribed to the Google Groups "Lifeboat Foundation Advisory Boards" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lifeboat-advisory-...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/lifeboat-advisory-boards/f15c4588-b31a-44ad-b266-9d85c8eb8cd9%40googlegroups.com.


--
Tihamer Toth-Fejel
TihamerT...@gmail.com
Millersville, MD 21108

Rebecca Costa

Jul 15, 2019, 1:37:59 PM
to Gerd Leonhard, Tihamer Toth-Fejel, Lifeboat Foundation Advisory Boards
There is another aspect of an AI-driven world which we appear to be sidestepping: our ability to predict, with increasing precision, events which have yet to occur. Every minute, predictive analytic algorithms grow more robust, allowing those with powerful information to take advantage of events which have a high probability of occurring. Those with this data have the opportunity to take advantage of those who do not, creating the potential for a highly predatory environment. Imagine, for example, large financial institutions knowing in advance exactly when and how the price of oil will change in the next three minutes. Or when a temporary drop in temperature will cause cows to produce less milk and elevate milk prices. Or who is genetically prone to contract specific diseases that can be prevented through costly genetic therapies at birth. If we are concerned about the impact income inequality is having on society today, then we should be terrified of what will occur when certain strata of society have knowledge of future events while others become unknowing victims of manipulation.

Rebecca Costa 

antonella santuccione

Jul 15, 2019, 1:44:40 PM
to Rebecca Costa, Gerd Leonhard, Tihamer Toth-Fejel, Lifeboat Foundation Advisory Boards
A very valid point!
Thank you, Rebecca.

Antonella 

Liss - Tactile Architecture

Jul 15, 2019, 2:07:40 PM
to Tihamer Toth-Fejel, Gerd Leonhard, Lifeboat Foundation Advisory Boards
Please remove my address from your mailing list.



Sent from my phone


Architektin, AK Berlin, RIBA I / II

MaArch [UCL Bartlett], DipArch [UCL Bartlett], Bahons [UOW]

Geschäftsführung

Tactile Architecture - Office für Systemarchitektur

UnIMas - Universe by Intelligent Machines

Assistant Professor, Institut für Architektur, TU 

www.tactile-architecture.com

www.unims.berlin

+49 151 2724 5355


sent while on the road | think before you print

 



Woodrow Barfield

Jul 15, 2019, 2:53:06 PM
to Liss - Tactile Architecture, Ugo Pagallo, Tihamer Toth-Fejel, Gerd Leonhard, Lifeboat Foundation Advisory Boards
Hi All, 

To add another perspective to this discussion thread: I have just co-edited, with Ugo Pagallo, the Research Handbook on Law and AI, and am co-writing a textbook on the same topic. I just want to make the point that there are many very good scholars writing about the legal and ethical implications of AI and algorithms, from human rights violations to antitrust violations. Having been an engineering professor, I have to admit that the "legal perspective" fascinates me, and it seems to be gaining momentum, especially in the US and EU, as legislators are now interested in addressing the concerns which have been raised. Anyway, for those who want a broad understanding of developments in AI, the application of law is another interesting perspective.

Woody Barfield 

Rebecca Costa

Jul 15, 2019, 3:12:33 PM
to Ugo Pagallo, Woodrow Barfield, Tihamer Toth-Fejel, Gerd Leonhard, Lifeboat Foundation Advisory Boards
I am very interested in how the law relates to events which have not yet occurred but have an "inevitability" to them, particularly where crime is concerned. In the film "Minority Report," AI predictive algorithms became so precise that the law allowed a person to be stopped seconds before they perpetrated a crime and to be convicted of that crime as if it had actually occurred, thereby sparing the victim. My own research into AI and knowledge of its accelerating predictive capabilities got me thinking about how much the legal system is focused on events after the fact. Therefore, it gives me a great sense of optimism to know that you are looking upstream from a legal perspective to deal with the ethical issues of inevitable events which have not yet manifested.

David Walton

Jul 16, 2019, 5:19:51 PM
to Rebecca Costa, Ugo Pagallo, Woodrow Barfield, Tihamer Toth-Fejel, Gerd Leonhard, Lifeboat Foundation Advisory Boards
Gerd,

You touch on many of the interesting aspects of the AI conversation in your video. As a call to start more conversations, I think it works well. Personally, I think claims such as "AI is more dangerous than nuclear weapons" are more effective when combined with a discussion of what AI actually can and can't do. I think the danger of AI is less military and more in its impact on jobs and society, but of course it's worth talking about all angles.

If you're interested in a fictional exploration of this question, my novel THREE LAWS LETHAL, just released this month, comes at the question of AI through the self-driving car industry. It tackles issues like the Trolley Problem, military drone applications, what consciousness means and how it might evolve in a machine architecture, how an AI might see the world from a radically different perspective, how involved governments should be in regulation, and the case for open-source software in critical applications. Vernor Vinge said Three Laws Lethal "gives the reader exciting insights into the threats and the promises that are coming our way."

Best wishes,
David

three laws lethal.jpg



Gerd Leonhard

Jul 16, 2019, 5:21:48 PM
to David Walton, Rebecca Costa, Ugo Pagallo, Woodrow Barfield, Tihamer Toth-Fejel, Lifeboat Foundation Advisory Boards
Thanks, David et al., I shall have a look!

Gerd Leonhard 

Futurist & Humanist, Keynote Speaker

Author of Technology vs. Humanity

Zurich, Switzerland


www.futuristgerd.com 

Youtube: www.gerdtube.com 

CEO www.thefuturesagency.com

LinkedIn

Twitter


VIDEOS 

Latest short keynote videos and Interviews: Gerd.Live

5 short keynote video excerpts



Martin Dudziak

Jul 16, 2019, 6:21:42 PM
to Gerd Leonhard, David Walton, Rebecca Costa, Ugo Pagallo, Woodrow Barfield, Tihamer Toth-Fejel, Lifeboat Foundation Advisory Boards
Please remove me from this mailing list (and others like it!!)

Gerd Leonhard

Jul 17, 2019, 3:57:34 AM
to Tihamer Toth-Fejel, Lifeboat Foundation Advisory Boards
Greetings Tihamer and everyone else, I've just started reading 'Angels of Steel'.


Good comments on my film; thanks!

Re: "genetic algorithms are pretty good at searching solutions spaces in a way that looks like imagination. I would have said "reaching a gestalt"". very good point indeed, but then again I would concede that algorithms are and will be good at SIMULATIONS ie 'looking like imagination' which is interesting but still a far cry from actually HAVING imagination  (back to the Chinese room, maybe?)


Human flourishing is NOT a useless term, imho. Yes, it is hard to define and different for everyone, but we still need to postulate it as the overall goal and not shrink from setting it forth.

Correct on the SOUL point. I have been avoiding that as an argument, but it is in fact unavoidable.


Live long and prosper!

Cheers from Zürich 

Gerd  
 



Liselotte Lyngsø

Jul 17, 2019, 4:16:34 AM
to Gerd Leonhard, David Walton, Rebecca Costa, Ugo Pagallo, Woodrow Barfield, Tihamer Toth-Fejel, Lifeboat Foundation Advisory Boards
Dear Gerd & co
Thank you for this interesting discussion. I think the cyborg element, where we become AI-embedded, will make it difficult both for legislators and for us to stay "organic" as humans only.
Musk's Neuralink and the French Next Brain will get there shortly.
All the best
Liselotte 

This message is sent from my iPhone

Liselotte Lyngsø, Founding Partner,
Chief Futurist
L...@FutureNavigator.dk
SOHO
Flæsketorvet 68
DK-1711 København



Liselotte Lyngsø

Jul 17, 2019, 5:01:49 AM
to Gerd Leonhard, David Walton, Rebecca Costa, Ugo Pagallo, Woodrow Barfield, Tihamer Toth-Fejel, Lifeboat Foundation Advisory Boards
And sometimes videos can provoke more ethical discussion by mentioning only the benign everyday cases of AI and mind-reading.
Let's read the minds of babies and dogs 😜
This message is sent from my iPhone

Liselotte Lyngsø, Founding Partner,
SOHO
Flæsketorvet 68
DK-1711 København

Silje Bareksten

Jul 17, 2019, 5:28:30 AM
to Liselotte Lyngsø, Gerd Leonhard, David Walton, Rebecca Costa, Ugo Pagallo, Woodrow Barfield, Tihamer Toth-Fejel, Lifeboat Foundation Advisory Boards
Dear all,

Thanks for this interesting update and discussion. I completely agree with Liselotte here. In Norway, in spite of strict legislation on biotech (the law on human-medicinal use of biotechnology), some of our top neurology R&D environments are bypassing these restrictions in finding treatments for neurological disorders, for example migraine or Parkinson's. Precision instruments for fast and safe injection are now being tested in last-phase clinical trials, in combination with implants intended to reroute, distort, or completely halt the neurological pathways that cause migraines or cluster headaches. Combine that with computational modeling (virtual assistants, for example, which are to be considered mainstream now), and what Neuralink, for example, is setting out to do is not groundbreaking; it's merely the commercialization of research that has been in the works for a long time. Just slap the sticker "AI" on and suddenly it sounds sensational.

In my opinion, in addition to the "cyborg" element, another interesting issue arises now that the general population in some of our societies is getting ever more informed, programming skills are becoming more common, and access to the latest science and its applications is simpler than ever. We're in for some juicy stuff from the biohacking and cybernetics scene, independent of the big scientific environments.

Best and happy summer to all, 

Silje

Silje Bareksten 

Perry Monroe

Jul 18, 2019, 10:24:47 AM
to Lifeboat Foundation Advisory Boards
Gerd,

        I do find your perspective very interesting. Just last year I finished a paper for DARPA on AI and the international military use of such programs. As humans, we have come to embrace AI as some kind of savior that will do so many things for humanity. In speaking to members of the intelligence community, I learned that the use of AI can lead to very dangerous situations. One example is a nation with nuclear capability that has what is called a dead-hand switch in play. It was shown that a meteoroid strike could trigger this dead-hand switch, leaving a 30-minute window from launch to impact of said nuclear weapon. In speaking to both the military and scientific communities, it was the military that insisted we keep the human element in place to safeguard against this happening. The scientific community had more faith, believing this could never happen. I would rather err on the side of caution myself.

     A second aspect of AI one has to wonder about is the prospect of it overtaking humanity. As a race, mankind looks at any machine, no matter how advanced, as little more than a toaster that can be tossed away at any given desire. If AI achieves our level of self-preservation, would it not see humanity as a threat? You have to wonder if we have implanted a race memory that AI will be the downfall of our civilization. Every story I have seen or read had the machines (AI) overtaking organic life, which they saw as a threat. To the best of my knowledge, there has yet to be a story of man and AI living in harmony; you have to wonder why that is. Could it be that humanity is so egocentric that the thought that something we made could be our equal is incomprehensible? Little more than 100 years ago, everything was handmade. Now our use of technology has so improved our standard of living that today almost nothing we use is handmade. When we purchase almost any product, we are paying for our laziness, because we don't want to do a task even though we are more than capable of performing it. Will that complacency lead us to a place where, when an AI platform truly thinks for itself, we will not see it until it is too late?

Dr. Perry Monroe 


