The weaponisation of information is mutating at alarming speed


Bipin Gautam

Aug 20, 2019, 12:24:37 AM
to Nepali computer security and hacking community
Source: https://www.theguardian.com/commentisfree/2019/aug/19/weaponisation-of-information-mutating-privacy

Communication has been weaponised, used to provoke, mislead and
influence the public in numerous insidious ways. Disinformation was
just the first stage of an evolving trend of using information to
subvert democracy, confuse rival states, define the narrative and
control public opinion. Using the large, unregulated, open
environments that tech companies once promised would “empower”
ordinary people, disinformation has spread rapidly across the globe.
The power that tech companies offered us has become a priceless tool
for propagandists, who were right in thinking that a confused, rapidly
globalising world is more vulnerable to the malleable beast of
disinformation than to straightforward propaganda. Whatever we do,
however many fact-checking initiatives we undertake, disinformation
shows no sign of abating. It just mutates.

While initially countries that were seasoned propagandists, such as
Russia and North Korea, were identified as the main culprits, the list
of states employing disinformation is growing. China is apparently
using disinformation to portray Hong Kong protesters as proxies of
nefarious western powers and violent rioters, potentially to prepare
the ground for more violent intervention to suppress the movement.
India has hosted constant disinformation campaigns, both ahead of the
most recent elections and during the current standoff with Pakistan
over Kashmir. Lobbying and PR firms have now professionalised
online disinformation, as the cases of Sir Lynton Crosby’s CTF
Partners in the UK and the troll farms in the Philippines indicate.

The next stage in the weaponisation of information is the increasing
effort to control information flows and therefore public opinion,
quite often using – ironically enough – the spectre of disinformation
as the excuse to do so. Internet shutdowns made headlines recently
during India’s communications blackout in Kashmir but they have
already become commonplace in Africa. Access Now has reported that
internet shutdowns more than doubled between 2016 and 2018. According
to some reports, Telegram, the app used by protesters in Hong Kong to
coordinate, was also hit by a distributed denial of service (DDoS)
attack originating from mainland China.

The control of information can take more benign forms, too, such as
the total disintegration of the White House press briefings that have
made Donald Trump’s Twitter the de facto mouthpiece for the US
executive, or the attempt by Boris Johnson to establish a direct
channel of communication with his audience through Facebook. Removing
regulated, accountable and experienced journalists from the equation
can only be deleterious to the public interest. The fourth estate is a
fundamental part of our political systems. The never-ending series of
social media privacy and political scandals proves that tech companies
are not able to play that role – and in any case, they don’t want to.

The third stage in the weaponisation of information may be even
worse. As invasive and stealthy data mining practices are becoming
commonplace, we may soon be dealing not just with disinformation or
communications blackouts, but with mass-scale surreptitious
manipulation through nudging. Prof Karen Yeung of Birmingham Law
School has used the term “hypernudges” to define adaptable,
continuously updated and pervasive algorithmically driven systems that
provide their subjects – us – with highly personalised environments
that define our range of choices by creating a tailored view of the
world.

According to IBM, 2.5 quintillion bytes of data – that is, 2.5 ×
10^18 bytes – are created every day. Data sets containing personal
information – obtained via our online engagements with people or
companies – are becoming more elaborate and expansive. While the
analysis necessary to obtain useful insights from them exceeds human
capacity, artificial intelligence systems and their algorithmic
models fare much better.

Communication mediated through hypernudging can gradually shift our
moral values, norms and priorities. YouTube's recommendations and
their alleged promotion of far-right content in Brazil, radicalising
certain users, were a form of nudging – unwitting as the tech company
claimed it was. But intentional nudging using models
built on our individual preferences and vulnerabilities will become
much more impactful in the future. While the effectiveness of
personalised propaganda such as that employed by Cambridge Analytica
may still be debatable, there is no doubt that long-term nudging can be
powerful – if not to swing a close election, maybe to increase apathy
or foment dissent and distrust towards our institutions. The
possibilities for manipulation are endless.

[snip]