Google April Fools for 2017 and other reflections


Paul D. Fernhout

Apr 1, 2017, 12:00:40 PM
to vir...@googlegroups.com
See:
https://venturebeat.com/2017/03/31/all-of-googles-jokes-for-april-fools-day-2017/

Two are Mars related (so far):
* Waze expands to Mars
* Google Cloud Platform also expands to Mars

More reflections on the last year or so below.

--Paul Fernhout

As I missed making an annual post here for 2016 (none about Mars -- Drop
Mic was the big news), here is a summary of that year's jokes:
https://venturebeat.com/2016/03/31/all-of-googles-jokes-for-april-fools-day-2016/

Also from 2016 (not April Fools):
"SpaceX CEO Elon Musk unveils grandiose plan to colonize Mars"
http://www.cbsnews.com/news/spacex-elon-musk-unveils-grandiose-plan-to-colonize-mars/

But at least he is delivering progress, as with a big success earlier
this week:
"SpaceX Makes Aerospace History With Successful Launch, Landing of a
Used Rocket"
https://science.slashdot.org/story/17/03/30/2344254/spacex-makes-aerospace-history-with-successful-launch-landing-of-a-used-rocket

Some Mars-related discussion sparked by that success:
"Who Will Own Mars?"
https://soylentnews.org/article.pl?sid=17/03/31/117234
"Everyone's excited about rockets to Mars, and each SpaceX launch brings
that dream closer to reality. Musk and others are putting a lot of money
and brainpower on the technical problem of getting people to Mars. Less
sensational topics, such as surviving on Mars, receive less attention —
but plenty of money and serious thought, because there's no way to get
around them.
But there's another important question which isn't getting much attention:
Who will own Mars, and how will it be governed?
Does Mars belong to the people who get there first? To the highest
bidder? To all the people of Earth?
Does Mars belong to Earth, or does Mars belong to Mars? Does it belong
to the Sun? To the Martian microbiome, if there is one? (What are the
indigenous rights of microbes, I wonder?)
Who will be in charge of Mars once the colonists arrive? If Mars turns
out to have valuable resources, who gets them? And if a Mars colony is
to govern itself, what kind of government would it have?
The Mars colonization project is driven by the ultra rich. And those
who want to stake their claim on Mars may rather the rest of us didn't
think too much about the little problem of who owns the planet next
door, and why."

Here is a post I made on the topic of "Jeff Bezos' Spaceflight Company
Blue Origin Gets Its First Paying Customer", reiterating my point that
"We need DOGS as well as CATS! [DOGS = Design of Great Settlements;
CATS = Cheap Access to Space]":
https://slashdot.org/comments.pl?sid=10350405&cid=54025073

In other real space news, India got to Mars on a shoestring budget, and
here is some irony about that:
"Scientists Sent a Rocket To Mars For Less Than It Cost To Make 'The
Martian'"
https://science.slashdot.org/story/17/03/17/2234228/scientists-sent-a-rocket-to-mars-for-less-than-it-cost-to-make-the-martian

As I said elsewhere:
http://web.archive.org/web/20080905084112/http://www.oscomak.net/wiki/
"A flow into foundations of $55 trillion is expected over the next 25
years: [Is Open Source the Answer To Giving?] And TV watching is
consuming 2,000 Wikipedias per year: [Mining the Cognitive Surplus] So
no one should seriously suggest the absence of money or time for R&D and
deployment is the problem for making either Spaceship Earth
(Sustainability) or Spaceship Mars (OpenVirgle) work for everyone, even
at the same time. It comes down to issues like ideology and imagination,
not "resources"."

And certainly the Open Manufacturing and 3D printing movement continues
to take off -- helping create the ideas and tools we will need to live
sustainably off-earth.
"Space Station's 3D Printer Makes Wrench From 'Beamed Up' Design"
http://www.space.com/28095-3d-printer-space-station-ratchet-wrench.html

I like it when we can pursue ideas that both help humanity live better
on Earth and potentially help humanity live in Space.

Here is a reading list I put together recently for those who want to
build real organizations either to support creating space habitats or
better earthly habitats (or any other innovative projects):
https://github.com/pdfernhout/High-Performance-Organizations-Reading-List

For those looking for yet more ideas to put Virgle/OpenVirgle-like
energy into, I (and many others) contributed a small section to this
newly released book:
"Visions for a World Transformed: 99 Ideas for Making the World a Better
Place — Starting Right Now -- by Philip Bowermaster and Stephen Gordon"
https://www.amazon.com/Visions-World-Transformed-Making-Starting-ebook/dp/B06XB4CT95
"How different will the future be from today? As different as we can
imagine, and possibly stranger and more wonderful than we ever HAVE
imagined. The key is turning our visions for the future into the future
itself. And that begins with articulating our visions."

Anyway, still plugging along in my spare time in the spirit of
Virgle/OpenVirgle -- along with many others here and elsewhere doing
good things. :-)

And, last but not least, some (for real) April Fools psychology:
"Why Are Some People More Gullible Than Others?"
https://soylentnews.org/article.pl?sid=17/03/31/2147248
Links to: https://phys.org/news/2017-03-people-gullible.html
" Gullibility occurs because we have evolved to deal with information
using two fundamentally different systems, according to Nobel Prize
winning psychologist Daniel Kahneman.
System 1 thinking is fast, automatic, intuitive, uncritical and
promotes accepting anecdotal and personal information as true. This was
a useful and adaptive processing strategy in our ancestral environment
of small, face-to-face groups, where trust was based on life-long
relationships. However, this kind of thinking can be dangerous in the
anonymous online world.
System 2 thinking is a much more recent human achievement; it is slow,
analytical, rational and effortful, and leads to the thorough evaluation
of incoming information.
While all humans use both intuitive and analytic thinking, system 2
thinking is the method of science, and is the best available antidote to
gullibility. So, education tends to reduce gullibility and those who
receive scientific training in critical, sceptical thinking also tend to
be less gullible and less easily manipulated.
Differences in trust can also influence gullibility. This may be
related to early childhood experiences, with the idea that trust in
infancy sets the stage for a lifelong expectation the world will be a
good and pleasant place to live.
Many factors, including mood, influence how we process incoming
information. Positive mood facilitates system 1 thinking and
gullibility, while negative mood often recruits more careful, cautious
and attentive processing."

That's one reason we need better tools for co-thinking and co-learning,
as I suggested here:
http://barcamp.org/w/page/47222818/Tools%20for%20Collective%20Sensemaking%20and%20Civic%20Engagement

And also here (unfortunately, with the new administration, Obama's
OpenPCAST initiative seems to be inaccessible, but thankfully the
Internet Archive has a copy):
https://web-beta.archive.org/web/20160825181841/http://pcast.ideascale.com/a/dtd/The-need-for-FOSS-intelligence-tools-for-sensemaking-etc/76207-8319
"While I can't guarantee success at the second option of using the
internet for abundance for all, I can guarantee that if we do nothing,
the first option of using the internet to round up dissenters (or
really, anybody who is different, like was done using IBM [tabulators]
in WWII Germany) will probably prevail. So, I feel the global public
really needs access to these sorts of sensemaking tools in an open
source way, and the way to use them is not so much to "fight back" as to
"transform and/or transcend the system". As Bucky Fuller said, you never
change things by fighting the old paradigm directly; you change things by
inventing a new way that makes the old paradigm obsolete."

Hope everyone on this list (and beyond) has a great April Fools day and
a healthy happy 2017!

--Paul Fernhout (pdfernhout.net)
"The biggest challenge of the 21st century is the irony of technologies
of abundance in the hands of those still thinking in terms of scarcity."

Paul D. Fernhout

Apr 1, 2023, 9:02:38 PM
to vir...@googlegroups.com
Hi Virgle list members,

Looks like no Google April fools jokes for 2023. See:
https://en.wikipedia.org/wiki/List_of_Google_April_Fools%27_Day_jokes
"Google canceled its 2020 April Fools' jokes for the first time due to
the COVID-19 pandemic, urging employees to contribute to relief efforts
instead. Since the cancellation in 2020, Google has not participated in
April Fools. However in 2020, April 1st was celebrated with the
anniversary of Jean Macnamara's birthday."

Hard to believe it has been fifteen years since Google's "Virgle" April
Fools' joke about colonizing Mars, which spawned this mailing list:
https://archive.google.com/virgle/index.html

And it has been five years since I sent the previous email (forwarded
below) as an update to the Virgle list.

Much has happened since then, including:
* routine flights into space by private companies,
* a declared global pandemic (and its "daily sceptic" counterpoint),
* a growing understanding of how vitamin D, iodine, zinc+quercetin,
healthy gut bacteria, good sleep, exercise, positive sociality, good
thinking patterns (e.g. "Therapy in a Nutshell"
https://www.youtube.com/channel/UCpuqYFKLkcEryEieomiAv3Q ), and other
food and lifestyle issues affect health (like explored in "Blue Zones",
or as Stephen Ilardi puts it: https://tlc.ku.edu/ “We were never
designed for the sedentary, indoor, sleep-deprived, socially-isolated,
fast-food-laden, frenetic pace of modern life.”), and
* the continued improvements in technology like 3D printing, batteries,
self-driving cars, cheaper solar panels, and more (all of which can
contribute to the Virgle/OpenVirgle vision).

And sadly it seems the world is closer to nuclear war than it has been
in decades, given the conflict physically centered around Ukraine (a
nuclear war which I hope will still be avoided if cooler heads prevail).
And even if they don't prevail, I hope our global society will learn
something good from it all and do better in the future -- inspired
perhaps by ideas in hopeful sci-fi stories which are technically within
our grasp even if they may still be socially far away.

More reflections on the last five years or so below.

--Paul Fernhout

==== More reflections on the last five years (2017-2023)

Remember Google's April Fools joke from 2009 about "CADIE", an AI who in
one day became sentient and took over and then decided to leave?

In a case of life sort-of imitating art, we saw last year:
"Google fires engineer who contended its AI technology was sentient"
https://www.cnn.com/2022/07/23/business/google-ai-engineer-fired-sentient/index.html

And now people are demanding a shutdown of all AI development amid the
amazing things that ChatGPT and similar programs can do, often creating
surprisingly high-quality text, images, and programs on request.
https://slashdot.org/story/23/03/29/2319224/pausing-ai-developments-isnt-enough-we-need-to-shut-it-all-down
"Earlier today, more than 1,100 artificial intelligence experts,
industry leaders and researchers signed a petition calling on AI
developers to stop training models more powerful than OpenAI's ChatGPT-4
for at least six months. Among those who refrained from signing it was
Eliezer Yudkowsky, a decision theorist from the U.S. and lead researcher
at the Machine Intelligence Research Institute. He's been working on
aligning Artificial General Intelligence since 2001 and is widely
regarded as a founder of the field. "This 6-month moratorium would be
better than no moratorium," writes Yudkowsky in an opinion piece for
Time Magazine. "I refrained from signing because I think the letter is
understating the seriousness of the situation and asking for too little
to solve it." Yudkowsky cranks up the rhetoric to 100, writing: "If
somebody builds a too-powerful AI, under present conditions, I expect
that every single member of the human species and all biological life on
Earth dies shortly thereafter.""

A comment I made today on all that (reprising an earlier comment made on
a story calling for increased regulation of AI):
https://slashdot.org/comments.pl?sid=22823280&cid=63417078
"[Q]uoting from an essay I wrote in 2010:
https://pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html
"Likewise, even United States three-letter agencies like the NSA and the
CIA, as well as their foreign counterparts, are becoming ironic
institutions in many ways. Despite probably having more computing power
per square foot than any other place in the world, they [as well as
civilian companies] seem not to have thought much about the implications
of all that computer power and organized information to transform the
world into a place of abundance for all. Cheap computing makes possible
just about cheap everything else, as does the ability to make better
designs through shared computing. ... There is a fundamental mismatch
between 21st century reality and 20th century security thinking. Those
"security" agencies [and civilian companies] are using those tools of
abundance, cooperation, and sharing mainly from a mindset of scarcity,
competition, and secrecy. Given the power of 21st century technology as
an amplifier (including as weapons of mass destruction), a
scarcity-based approach to using such technology ultimately is just
making us all insecure. Such powerful technologies of abundance,
designed, organized, and used from a mindset of scarcity could well
ironically doom us all whether through military robots, nukes, plagues,
propaganda, or whatever else... Or alternatively, as Bucky Fuller and
others have suggested, we could use such technologies to build a world
that is abundant and secure for all. ... The big problem is that all
these new war machines [or hyper-competitive companies] and the
surrounding infrastructure are created with the tools of abundance. The
irony is that these tools of abundance are being wielded by people still
obsessed with fighting over scarcity. So, the scarcity-based political
mindset driving the military uses the technologies of abundance to
create artificial scarcity. That is a tremendously deep irony that
remains so far unappreciated by the mainstream."
So, the issue goes much deeper than a need for "regulation".
Regulation by itself ultimately is unlikely to work long-term because
some groups will ignore regulation for short-term "competitive
advantage" as they socialize costs and risks while privatizing gains."

In general, the issue in the short term isn't that an AI like the one
depicted in, say, the 1957 movie "The Invisible Boy" will take over
(like CADIE supposedly did in 2009).
https://en.wikipedia.org/wiki/The_Invisible_Boy
"When Timmie expresses a wish to be able to play without being observed
by his parents, Robby, with the aid of the supercomputer, makes him
invisible. At first Timmie uses his invisibility to play simple pranks
on his parents and others, but the mood soon changes when it becomes
clear that the supercomputer is independent, ingenious, and evil. The
supercomputer had manipulated Timmie into altering Robby's programming
and, over many years, manipulated its creators into augmenting its
intelligence. It can control Robby electronically, and later uses
hypnosis and electronic implants to control human beings, along with
intending to take over the world using a military weapons satellite. (It
later declares its intent to destroy all life on Earth and then conquer
the entire galaxy and exterminate any life that it contains, even
bacteria.)"

I picked that 1957 example just to show how long people have had these
concerns -- but they go back further, like to R.U.R. (Rossum's Universal
Robots) in 1920, from which we get the term "robot". Then again, slaves
have been revolting for millennia, so I suppose such fears are nothing
new in slave-holding societies.

The issue in the near term is more likely what Marshall Brain wrote
about years ago in Manna and "Robotic Freedom". He suggests these sorts
of powerful tools will create a dysfunctional concentration of wealth
when used in our current hyper-competitive winner-take-all
socio-politico-economic system.
https://marshallbrain.com/manna
https://marshallbrain.com/robotic-freedom

Marshall Brain's writings (perhaps inspired in part from him seeing a
demo I gave of Self-Replicating Robots around 1987?) inspired me to make
a YouTube video in 2010:
"The Richest Man in the World: A parable about structural unemployment
and a basic income"
https://www.youtube.com/watch?v=p14bAe6AzhA

And also put together some related information on heterodox economics:
https://www.pdfernhout.net/beyond-a-jobless-recovery-knol.html
"This article explores the issue of a "Jobless Recovery" mainly from a
heterodox economic perspective. It emphasizes the implications of ideas
by Marshall Brain and others that improvements in robotics, automation,
design, and voluntary social networks are fundamentally changing the
structure of the economic landscape. It outlines towards the end four
major alternatives to mainstream economic practice (a basic income, a
gift economy, stronger local subsistence economies, and resource-based
planning). These alternatives could be used in combination to address
what, even as far back as 1964, has been described as a breaking
"income-through-jobs link". This link between jobs and income is
breaking because of the declining value of most paid human labor
relative to capital investments in automation and better design. Or, as
is now the case, the value of paid human labor like at some newspapers
or universities is also declining relative to the output of voluntary
social networks such as for digital content production (like represented
by this document). It is suggested that we will need to fundamentally
reevaluate our economic theories and practices to adjust to these new
realities emerging from exponential trends in technology and society."

One may still hope that such AI-powered tools -- if distributed widely
and used wisely -- may make some positive difference. This is the same
hope that a wide distribution of the means of production via 3D printing
and machine tools can make a social difference -- the sort of thing
Kevin Carson writes about, for example: https://kevinacarson.org/

Whether that hope makes sense long-term in an AI context -- given AI
will eventually be self-directing and "Slaughterbots"-capable -- may be
questionable, of course. But even if it were a valid hope, it is not
clear how AI will play out given the advantage big organizations have in
training such systems to directly or indirectly pursue profits by
privatizing gains and socializing costs and risks.

Or, as with CADIE, or in James P. Hogan's 1979 AI novel "The Two Faces
of Tomorrow", it is hard to predict what will happen given other risks
of unexpected behavior of AIs. I learned about some of those first-hand in
1987 when my first simulation of self-replicating robots (not much of an
AI) unexpectedly turned cannibalistic -- until I added a sense of smell
for identity to avoid the simulated robots eating their own offspring.
It was a later version of that simulation that Marshall Brain may have seen.
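
For the curious, here is a minimal sketch of that kind of kin-recognition
check, written in present-day Python with invented names; the original
1987 simulation was not structured like this, so treat it purely as an
illustration of the idea:

    class Robot:
        """Toy self-replicating robot with a heritable 'scent' as a lineage tag."""
        def __init__(self, scent, energy=10):
            self.scent = scent    # the "sense of smell" identity marker
            self.energy = energy

        def replicate(self):
            # Offspring inherit the parent's scent, so kin can be recognized later.
            self.energy -= 5
            return Robot(scent=self.scent, energy=5)

        def maybe_consume(self, other):
            # The fix: refuse to consume anything that smells like your own lineage.
            if other.scent == self.scent:
                return False
            self.energy += other.energy
            other.energy = 0
            return True

    parent = Robot(scent="lineage-A")
    child = parent.replicate()
    stranger = Robot(scent="lineage-B")
    print(parent.maybe_consume(child))     # False -- kin are spared
    print(parent.maybe_consume(stranger))  # True  -- non-kin can be consumed

The point is just that a tiny bit of inherited "identity" information is
enough to stop a replicator from treating its own offspring as raw material.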

But, as with the "Twirlip" name I use for some FOSS projects on
sensemaking and other information tools, I can't say my comment on
Slashdot is likely to make much of a difference, sigh. It is essentially
the same thing I have been saying on Slashdot and elsewhere for many
years, including in my email sig.

I first used that sig in an email to Marvin Minsky in 2010, where I
wrote "My signature below sums up the most important thing I've learned
over the past 25 years by following the road less travelled. :-)"

As a random side note, both Marvin Minsky and I studied under George A.
Miller, with Marvin studying with George at the beginning of George's
career, and me studying with George just before his mandatory
retirement at age 65. George went on to do much good work after that,
including expanding WordNet, which via Simpli helped create Google
AdSense and so Google's fortunes. WordNet was indirectly inspired in
part by my own work with George related to AI and a triplestore I called
Pointrel, which may have been indirectly inspired by Bill Kent's "Data
and Reality" book. Anyway, in a roundabout way, that makes Marvin and me
"peers" in terms of academic pedigree, for what that is worth. And it
also means Google owes some of its success to me too, indirectly. And
frankly, I'm not especially proud of that given where Google has been
going after it abandoned "don't be evil" -- like moving from organizing
all the world's information to increasingly organizing all the world's
eyeballs (even as some other companies may be far worse). Not that any
of that history matters much in practice, even if it is good for my own
ego to reflect on, to help me keep going despite difficulties, in a
Vladimir Zelenko, Gary Kildall, and/or Zach Barth sort of way.

In the past five years, I've added a lot more to the reading list I put
together around the time I sent that last email to the Virgle list --
intended to help those who want to build real organizations either to
support creating space habitats or better, healthier earthly habitats
(or any other innovative projects):
https://github.com/pdfernhout/High-Performance-Organizations-Reading-List
"Most of these books, web pages, and videos are about how to design
better organizations. Some are about how to be a more effective
individual within the organizations we currently have. The items are
divided into three broad categories -- Organization and Motivation,
Health and Wellness, and Software Development Specific."

That repository has over 1000 stars now on GitHub -- and was discussed
on Hacker News at one point.

Tangentially, Microsoft acquired GitHub since I wrote last in 2017 --
who saw that coming? And Microsoft Visual Studio Code is now a leading
FOSS development environment, another surprise. Who knows what the next
five years have in store, if even Microsoft can support Free and Open
Source software? (Maybe The Simpsons' writers know, given they
predicted a Trump presidency?)

Microsoft's FOSS efforts and now even Linux support are a bit of a
"Tadodaho" redemption story (at least in part), as I discuss in this
2010 essay regarding other matters:
https://pdfernhout.net/on-dealing-with-social-hurricanes.html
"This approximately 60 page document is a ramble about ways to ensure
the CIA (as well as other big organizations) remains (or becomes)
accountable to human needs and the needs of healthy, prosperous, joyful,
secure, educated communities. The primary suggestion is to encourage a
paradigm shift away from scarcity thinking & competition thinking
towards abundance thinking & cooperation thinking within the CIA and
other organizations. I suggest that shift could be encouraged in part by
providing publicly accessible free "intelligence" tools and other
publicly accessible free information that all people (including in the
CIA and elsewhere) can, if they want, use to better connect the dots
about global issues and see those issues from multiple perspectives, to
provide a better context for providing broad policy advice. It links
that effort to bigger efforts to transform our global society into a
place that works well for (almost) everyone that millions of people are
engaged in. A central Haudenosaunee story-related theme is the
transformation of Tadodaho through the efforts of the Peacemaker from
someone who was evil and hurtful to someone who was good and helpful. ...
Again, to follow Woodrow Wilson's points, better ideas (science)
could help with that, as could better stories and ideals (literature and
the humanities), as could some better tools that merge the two. The
Haudenosaunee (People of the Longhouse) may have not had so much fancy
technology as the Europeans (ignoring their biotech in terms of the
three sisters of corn, squash, and beans), but they certainly had, as
above, powerful stories that allowed them to build an expanding,
resilient, and sustainable civilization that was relatively peaceful and
equitable at least internally. ...
But the message of those tools has still not sunk in -- material
abundance is possible for all. People may still find reasons to compete
(over mates, over social status, etc.) but at least fighting over
*stuff* is becoming obsolete (or, similarly, we can move beyond thinking
there is not enough *stuff* to build a lot of different communities that
follow different rules within them). Except our entire military and
intelligence apparatus is configured assuming the big problem is
fighting over stuff one way or another, and becomes, in a way, a
self-fulfilling prophecy, or some kind of social knot."

I continue to plug along on FOSS sensemaking tools and so on to
hopefully someday help us transcend that knot while still having good
security for as many as possible.

More and more, though, I think Dialogue Mapping with IBIS may be "good
enough" for most general group sensemaking tasks. I prepared a
five-minute "lightning talk" for LibrePlanet 2021 on "Empowering users
through Dialogue Mapping using IBIS". That talk is a much-shortened
through Dialogue Mapping using IBIS". That talk is a much-shortened
version of a longer talk I gave in July 2019 for the Cognitive Systems
Institute Group Speaker Series.
https://media.libreplanet.org/u/libreplanet/m/lightning-talk---dialogue-mapping-with-ibis/
https://twitter.com/sumalaika/status/1153279423938007040

Using AI (whether Watson, ChatGPT, or other similar tools) to help
groups make IBIS diagrams using Dialogue Mapping in real time as they
discuss topics is essentially what I proposed in 2019 in that IBM/CSIG
talk as a next great business opportunity.
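
For anyone unfamiliar with IBIS, here is a minimal sketch in Python of
its three node types (issues, positions, and arguments); the class and
function names are illustrative and not taken from any particular
Dialogue Mapping tool:

    from dataclasses import dataclass, field
    from typing import List

    # IBIS models a discussion as a tree of issues (questions), positions
    # (candidate answers), and arguments (pros and cons of a position).

    @dataclass
    class Argument:
        text: str
        supports: bool  # True for a "pro", False for a "con"

    @dataclass
    class Position:
        text: str
        arguments: List[Argument] = field(default_factory=list)

    @dataclass
    class Issue:
        question: str
        positions: List[Position] = field(default_factory=list)

    def print_map(issue: Issue) -> None:
        # Render the map as indented text, one node per line.
        print("? " + issue.question)
        for pos in issue.positions:
            print("  * " + pos.text)
            for arg in pos.arguments:
                marker = "+" if arg.supports else "-"
                print("    " + marker + " " + arg.text)

    issue = Issue("Who will own Mars?")
    first_comers = Position("The settlers who get there first")
    first_comers.arguments.append(Argument("Rewards those taking the risks", True))
    first_comers.arguments.append(Argument("May conflict with existing space treaties", False))
    issue.positions.append(first_comers)
    print_map(issue)

A real-time Dialogue Mapping assistant (AI-aided or human) would
essentially be building and revising such a tree live as the group
talks, which is the opportunity described above.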

As happens all too often, I seem to be too far ahead of the times -- a
risk no one told me about from reading too much sci-fi as a kid, no
doubt. :-)

Maybe I will post again in 2028 or 2033 if the world as we know it is
still here?

Maybe we will become a "Midas World" -- one which has not transcended
the irony mentioned in my sig?
https://en.wikipedia.org/wiki/Midas_World

Or instead, we could do worse than move towards the society depicted in
a hopeful story from the 1950s -- the story that helped inspire
hypertext, the web, and other innovations:
"The Skills of Xanadu" by Theodore Sturgeon (1956)
https://archive.org/details/pra-BB3830.08
https://archive.org/details/galaxymagazine-1956-07/page/n117/mode/2up

That (fictional) Xanadu society is one which has transcended the irony I
mention in my sig.

It's nice to have at least one example of what a hopeful future might
look like. And there are other hopeful stories out there if you look for
them (like J.P. Hogan's 1982 "Voyage from Yesteryear" or more recently
Miki Kashtan's short stories in "Reweaving Our Human Fabric: Working
Together to Create a Nonviolent Future"). If you look around, you may
find others.

Hope everyone still on this list (and beyond) has a great April Fools
day and a healthy happy 2023!

--Paul Fernhout (pdfernhout.net)
"The biggest challenge of the 21st century is the irony of technologies
of abundance in the hands of those still thinking in terms of scarcity."
