Attacks on free expression grew more common around the world. In a record 55 of the 70 countries covered by Freedom on the Net, people faced legal repercussions for expressing themselves online, while people were physically assaulted or killed for their online commentary in 41 countries. The most egregious cases occurred in Myanmar and Iran, whose authoritarian regimes carried out death sentences against people convicted of online expression-related crimes. In Belarus and Nicaragua, where protections for internet freedom plummeted during the coverage period, people received draconian prison terms for online speech, a core tactic employed by longtime dictators Alyaksandr Lukashenka and Daniel Ortega in their violent campaigns to stay in power.

Generative artificial intelligence (AI) threatens to supercharge online disinformation campaigns. At least 47 governments deployed commentators to manipulate online discussions in their favor during the coverage period, double the number from a decade ago. Meanwhile, AI-based tools that can generate text, audio, and imagery have quickly grown more sophisticated, accessible, and easy to use, spurring a concerning escalation of these disinformation tactics. Over the past year, the new technology was utilized in at least 16 countries to sow doubt, smear opponents, or influence public debate.


Advances in artificial intelligence are amplifying a crisis for human rights online. While AI technology offers exciting and beneficial uses for science, education, and society at large, its uptake has also increased the scale, speed, and efficiency of digital repression. Automated systems have enabled governments to conduct more precise and subtle forms of online censorship. Purveyors of disinformation are employing AI-generated images, audio, and text, making the truth easier to distort and harder to discern. Sophisticated surveillance systems rapidly trawl social media for signs of dissent, and massive datasets are paired with facial scans to identify and track prodemocracy protesters.


Many observers have debated the existential risks posed by future AI advances, but these should not be allowed to overshadow the ways in which the cutting-edge technology is undermining internet freedom today. Democratic policymakers should establish a positive regulatory vision for the design and deployment of AI tools that is grounded in human rights standards, transparency, and accountability. Civil society experts, the drivers of so much progress for human rights in the digital age, should be given a leading role in policy development and the resources they need to keep watch over these systems. AI carries a significant potential for harm, but it can also be made to play a protective role if the democratic community learns the right lessons from the past decade of internet regulation.


Despite being one of the best performers in Freedom on the Net, Costa Rica experienced a decline in internet freedom under the new administration of President Rodrigo Chaves Robles. Self-censorship reportedly increased as his government harassed journalists, opposition politicians, and other critics. In one high-profile scandal, the health minister resigned in February 2023 after it was revealed that she had paid someone to harass journalists at three news outlets who had reported on government mismanagement.


However, much of the work is still done by humans. During the coverage period, at least 47 countries featured progovernment commentators who used deceitful or covert tactics to manipulate online information, double the number from a decade ago. An entire market of for-hire services has emerged to support state-backed content manipulation. Outsourcing in this way gives governments plausible deniability and makes attribution of influence operations more challenging. It also allows political actors to reach new and more niche audiences by drawing on private-sector innovation and expertise.


Deep learning: A subfield of machine learning that involves models learning in layers, building simpler patterns into more complex ones. This approach has enabled many recent AI advances, such as recognizing objects in images.
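
As a rough illustration, and not something drawn from the report itself, the short Python sketch below shows what "learning in layers" means in miniature: each layer transforms the output of the previous one, so later layers can combine the simple patterns detected by earlier layers into more complex ones. The layer sizes, weights, and the edges-to-parts-to-objects interpretation are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity that lets layers build new patterns rather than
    # merely rescaling their inputs.
    return np.maximum(0.0, x)

# Hypothetical dimensions: a 64-value input (e.g., a tiny image),
# two hidden layers, and 3 output classes.
W1 = rng.normal(size=(64, 32)) * 0.1  # layer 1: simple patterns (edges)
W2 = rng.normal(size=(32, 16)) * 0.1  # layer 2: combinations (shapes, parts)
W3 = rng.normal(size=(16, 3)) * 0.1   # layer 3: parts mapped to object classes

def forward(x):
    h1 = relu(x @ W1)   # simpler patterns
    h2 = relu(h1 @ W2)  # more complex patterns built from h1
    return h2 @ W3      # scores for each object class

scores = forward(rng.normal(size=(1, 64)))
print(scores.shape)  # (1, 3)

In a real deep learning system, the weights would be learned from data rather than drawn at random; the layered structure is what the definition above refers to.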


State officials have cultivated networks of private actors willing to spread false and misleading content. Rather than taking the political risk or developing the resources to engage in such activity themselves, an electoral campaign, politician, or ministry can simply hire a social media influencer or public relations firm that prioritizes lucrative contracts and political connections over ethical or legal probity.


The growing use of generative AI is likely to compound the impact that these existing networks of progovernment commentators have on information integrity and healthy public debate. During the coverage period, AI-based tools that can generate images, text, or audio were utilized in at least 16 countries to distort information on political or social issues. It takes time for governments and the private actors they employ to incorporate new technology into content manipulation, and the early dominance of English-language tools may slow adoption of generative AI technology globally. But this tally of countries is also likely an undercount. Researchers, journalists, and fact-checkers have difficulty verifying whether content is generated by AI, in part because many of the companies involved do not require labeling. Similar obstacles can impede attribution of AI-backed manipulation to a specific creator.


AI companies are already being enlisted for state-linked disinformation campaigns. In early 2023, Venezuelan state media outlets distributed videos on social media that depicted anchors from a nonexistent international English-language channel spreading progovernment messages. The videos were produced using an online AI tool created by Synthesia, in what the company said was a violation of its terms of service. The research firm Graphika has also linked the company's technology to a campaign that spread disinformation favoring the Chinese Communist Party (CCP) through the nonexistent news outlet Wolf News to audiences in the United States, though the videos in question were of poor quality and did not achieve significant reach.


Like digital repression more broadly, AI-generated disinformation campaigns disproportionately victimize and vilify segments of society that are already under threat. The overwhelming majority of nonconsensual deepfakes featuring sexual imagery target women, often with the aim of damaging their reputations and driving them out of the public sphere. As early as 2018, an online campaign employed AI-manipulated pornographic videos to discredit prominent Indian journalist and government critic Rana Ayyub. During the coverage period, Nina Jankowicz, a US expert on disinformation, was subjected to pornographic deepfakes as part of a broader campaign against her and her work. These uses of sexualized deepfakes represent a twisted evolution of a much older practice: the nonconsensual distribution of intimate images of women activists. For example, during the coverage period, a smear campaign featuring nonconsensual intimate imagery of Azerbaijani prodemocracy activists and opposition figures spread across Telegram, TikTok, Facebook, and progovernment news sites.


To track the different ways in which governments seek to dominate the digital sphere, Freedom House monitors their application of nine Key Internet Controls. The resulting data reveal trends in the expansion and diversification of these constraints on internet freedom.


While AI has allowed for more subtle and efficient forms of content removal, blunt censorship remains pervasive. Shutdowns of internet service and blocks on entire social media platforms continued to be key tactics of information control around the world. The number of countries where governments imposed outright blocking on websites that hosted political, social, and religious speech reached an unprecedented high of 41 this year. Democracies are not immune to this trend. States that have long been defenders of internet freedom imposed censorship or flirted with proposals to do so, an unhelpful response to genuine threats of foreign interference, disinformation, and harassment.


Content-removal mandates are also growing in popularity among governments with less robust technological and regulatory capacity. In Nigeria, where authorities have imposed significantly less censorship than their counterparts in Vietnam and India, a code of practice introduced in October 2022 requires companies to remove content within 48 hours of notification from a government agency. The code was introduced after then president Muhammadu Buhari imposed a seven-month block on Twitter because the company had removed a post in which he appeared to threaten violence against separatists. It is unclear to what extent the code has been enforced since the election of President Bola Tinubu in February 2023.


Governments are increasingly blocking digital platforms as a means of compelling them to comply with internet regulations. Indonesian authorities restricted access to Yahoo, the gaming platform Steam, payment processor PayPal, and several other sites in July and August 2022 in order to force compliance with Ministerial Regulation 5, which requires the removal of overly broad categories of prohibited speech under tight deadlines. In Brazil in April 2023, after Telegram failed to hand over user data related to neo-Nazi chat groups, a judge ruled that the platform had violated data retention requirements and ordered it blocked entirely. Another judge reversed the ban days later, imposing a more proportionate daily fine on the company and finding that the wholesale blocking was too broad and unreasonably restrictive.


Many of the debates surrounding AI have their roots in long-standing policy questions related to internet governance: How can regulation effectively protect people from malicious state and nonstate actors, while fostering a competitive and innovative private sector? What legal responsibilities should companies bear when they fail to prevent their products from being used in harmful ways? The lessons learned from the past decade of deliberations regarding government oversight, the need for robust global civil society engagement, and the problem of overreliance on self-regulation collectively provide a roadmap for this new era. Given the ways in which AI is already contributing to digital repression, a well-designed regulatory framework is urgently necessary to protect human rights in the digital age.
