<https://www.bbc.com/news/technology-65110030>
Key figures in artificial intelligence want training of powerful
AI systems to be suspended amid fears of a threat to humanity.
They have signed an open letter warning of potential risks, and
say the race to develop AI systems is out of control.
Twitter chief Elon Musk is among those who want training of AIs
above a certain capacity to be halted for at least six months.
Apple co-founder Steve Wozniak and some researchers at DeepMind
also signed.
OpenAI, the company behind ChatGPT, recently released GPT-4 - a
state-of-the-art technology, which has impressed observers with its
ability to do tasks such as answering questions about objects in
images.
The letter, from the Future of Life Institute and signed by the
luminaries, calls for development to be halted temporarily at that
level, warning of the risks that future, more advanced systems
might pose.
"AI systems with human-competitive intelligence can pose
profound risks to society and humanity," it says.
The Future of Life Institute is a not-for-profit organisation
which says its mission is to "steer transformative technologies away
from extreme, large-scale risks and towards benefiting life".
Mr Musk, owner of Twitter and chief executive of car company
Tesla, is listed as an external adviser to the organisation.
Advanced AIs need to be developed with care, the letter says,
but instead, "recent months have seen AI labs locked in an out-of-
control race to develop and deploy ever more powerful digital minds
that no-one - not even their creators - can understand, predict, or
reliably control".
The letter warns that AIs could flood information channels with
misinformation, and replace jobs with automation.
The letter follows a recent report from investment bank Goldman
Sachs which said that while AI was likely to increase productivity,
millions of jobs could become automated.
However, other experts told the BBC the effect of AI on the
labour market was very hard to predict.
Outsmarted and obsolete
More speculatively, the letter asks: "Should we develop non-
human minds that might eventually outnumber, outsmart, obsolete
[sic] and replace us?"
In a recent blog post quoted in the letter, OpenAI warned of the
risks if an artificial general intelligence (AGI) were developed
recklessly: "A misaligned superintelligent AGI could cause grievous
harm to the world; an autocratic regime with a decisive
superintelligence lead could do that, too.
"Co-ordination among AGI efforts to slow down at critical
junctures will likely be important," the firm wrote.
OpenAI has not publicly commented on the letter, and the BBC has
asked the firm whether it backs the call.
Mr Musk was a co-founder of OpenAI - though he resigned from the
board of the organisation some years ago and has tweeted critically
about its current direction.
Autonomous driving functions made by his car company Tesla, like
most similar systems, use AI technology.
The letter asks AI labs "to immediately pause for at least six
months the training of AI systems more powerful than GPT-4".
If such a delay cannot be enacted quickly, governments should
step in and institute a moratorium, it says.
"New and capable regulatory authorities dedicated to AI" would
also be needed.
Recently, a number of proposals for the regulation of technology
have been put forward in the US, UK and EU. However, the UK has
ruled out a dedicated regulator for AI.
--
HRM Resident