Alok Prasanna Kumar
Is the government of India going to become a “troll monitor”?
When asked this question in the context of social media at a media summit on 17 March, the Minister for Information and Broadcasting, Smriti Irani, said:
“Instead of saying that you are a troll monitor, one can say that agencies that want to be a part of news, and factually want to give, let’s say, not only because news today is also very invested in views, it is not devoid of views – and that is a very fine line that certain journalists or media persons tend to cross. So it is now incumbent upon the consumer to have knowledge of what is pure information or what comes over as opinion, and that is, like I said, something that the ministry is considering in terms of putting it in those words which now reflect on broadcasting and advertorials – having a similar line of ethics and a code of conduct in a free society that is incumbent upon the agencies to abide to [sic].”
For those concerned with the regulation of content on social media, that word-salad of an answer is unlikely to inspire much confidence in the government’s approach to regulation.
India's track record with content regulation in the media is not encouraging. The Press Council of India and the News Broadcasters Association are hardly models of good and effective self-regulation for newspapers and television, respectively. Content regulation specific to the internet, meanwhile, has been a disaster: we witnessed the chaos and trouble caused by Section 66-A of the Information Technology Act, 2000 until the Supreme Court struck it down as unconstitutional. What is definitely not needed is another toothless body trying to tame media outlets on social media. And we certainly don't need another law trying to define what is and isn't acceptable speech online.
Between the two extremes, though, there is perhaps a third way.
As 2018 rolled in, the Federal Republic of Germany brought into effect one of the most hotly debated and paradigm-shifting laws on the governance of the internet: the Netzwerkdurchsetzungsgesetz, or the “Network Enforcement Act” as it is known in English.
What does it do? It makes social networks with at least two million users in Germany responsible for taking down illegal content posted by users, and for putting in place a mechanism through which illegal content can be reported and acted upon. Failure to put such a mechanism in place, or to meet the law's reporting obligations (for complaints received and action taken), could result in fines of anywhere between 500,000 and 50 million euros. However, the Act does not impose any new restriction on content. It simply requires social network companies to enforce existing laws restricting harmful speech (Nazi propaganda, defamation and the like) on their platforms.
The German law comes at a time when social media is no longer seen as a benign tool, let alone a force for good. Whether it is the non-stop harassment of women, the bigoted attacks on racial, caste or sexual minorities, or the all-pervasive problem of “fake news”, there is perhaps a growing understanding, even within Silicon Valley, that simply asking people to speak to each other on a public platform with no filters may not always be a good idea.
But is the Network Enforcement Act, or its approach of shifting the burden of compliance on to social networks, the right answer to the problem? To get to the heart of the issue, we need to understand “intermediary liability” in the context of the internet.
Consider a traditional newspaper, or even a website such as The Ken, which hosts content. It features content written not only by employees but also by freelancers, and sometimes moderated comments posted by readers. Before anything goes up on the website, it is seen and approved by a human.
Now consider your broadband service provider. It just provides the “pipes” which carry all your information back and forth across the world. It may be technically capable of tapping into and seeing what content is flowing through (when needed), but it is certainly not in a position to vet or examine every byte of information flowing through the cables which gird the earth.
While one might use the term “medium” or “media” to describe both the newspaper and the optic fibre cable, it's obvious they're at two ends of the spectrum as far as legal responsibility for content is concerned.
But what of a social network such as Facebook or Twitter?
Under Indian law, social networks fall within the definition of an “intermediary” under the Information Technology Act, 2000 as entities which store, receive or transmit messages on the internet and provide services in relation to such messages. Consequently, under Section 79 of the Act, intermediaries are not held legally responsible for unlawful content hosted by them. This is, however, subject to the following relevant conditions: the intermediary's role must be limited to providing access to the communication system; it must not initiate the transmission, select its receiver, or select or modify the information contained in it; it must observe due diligence and follow the guidelines prescribed by the central government; and, on receiving “actual knowledge” (or being notified by the government) that content is being used to commit an unlawful act, it must expeditiously remove or disable access to that content.
So long as these requirements are met, an intermediary is safe from prosecution.
It’s important to keep in mind that this principle of legal responsibility came about in a specific context—one that is perhaps no longer valid. It was put into the law where the internet was still nascent and a heavy burden imposed on internet service providers to monitor and sweep for illegal content might have killed the industry.
In India, the present law is a modification of an earlier provision which required “network service providers” to prove that they had no knowledge of the offence, or had exercised all due diligence to prevent it, if they wanted to avoid criminal punishment. This meant that Avnish Bajaj of Baazee.com ended up facing criminal charges for the infamous “DPS-MMS”, which was uploaded to the website by an unknown user. The charges were eventually quashed, but the fact that the burden of proving innocence lay on the intermediary was clearly problematic.
Consider, for instance, the storied social media “career” of the lawyer Prashant P Umrao, a serial fake news spreader and “one-man hate factory”. Alt News has done a stellar job documenting his many, many offending posts, which are usually outright lies spreading bigotry, particularly on Twitter. One would think Twitter would take serious action against such sustained and repeated infringement of its terms of service, let alone Indian law.
Nope.
To date, he still has a “blue tick”, which verifies that he's a genuine person of some importance on the platform.
So does that mean it’s a free-for-all, no-consequences zone?
Well… no. Let's take a moment to remind ourselves of the Tanmay Bhat puppy filter controversy from last year. Bhat had posted a picture of Prime Minister Narendra Modi with a dog-ear Snapchat filter, which resulted in a defamation case against him.
Likewise, Twitter did next to nothing when student activist and author Gurmehar Kaur repeatedly received rape threats, but it was very prompt in disabling Rose McGowan's account when she accused Hollywood producer Harvey Weinstein of sexual assault. I could multiply such examples, and add Facebook to the conversation as well. Social networks have, by and large, done a terrible job of monitoring their own platforms for hate speech, intimidation and “fake news”, and have willingly kowtowed to government diktats on free speech.
Legally, they’re justified in doing so. Nothing in the law prescribes what is “expeditious” or requires them to make public any action they might have taken on illegal content. Nothing in the law requires them to act on users’ complaints or give a fair opportunity to respond to accusations. Does the situation not call for some change?
To come back to the original comparison: are social networks more like newspapers or more like broadband service providers?
It boils down to how much control they exercise over the content on their platforms. The initial argument for safe harbour protection was that, given the sheer flow of information, it would be humanly impossible to pre-screen and filter content before it was published on the internet. With the development of machine-learning tools, that argument may not hold anymore. If social networks can examine and tailor the content in your feed based on your preferences and likes, surely it is not a near-impossible task to “read” content and decipher intent. Indeed, some platforms, such as YouTube, have already implemented such systems in the context of intellectual property claims. Could this not be extended to flagging other forms of illegal content?
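To make that intuition concrete, here is a minimal, purely illustrative sketch (in Python, using the scikit-learn library) of the kind of classifier such a system might rest on: a model trained on labelled examples that flags suspect posts for human review. The example posts, labels and threshold below are invented for illustration; real moderation systems are vastly more sophisticated and operate at an entirely different scale.

```python
# A toy content-flagging classifier: TF-IDF features feeding a
# logistic regression model. All data below is invented for
# illustration; this is a sketch of the idea, not a real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = potentially unlawful, 0 = benign.
posts = [
    "we should attack them and burn their homes",
    "this community deserves to be driven out of the country",
    "lovely weather in Delhi today",
    "congratulations on the new job!",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def flag_for_review(post: str, threshold: float = 0.5) -> bool:
    """Queue a post for human review if the model finds it suspect."""
    prob_unlawful = model.predict_proba([post])[0][1]
    return prob_unlawful >= threshold

# Posts resembling the unlawful examples get flagged for a human
# moderator; the model decides nothing on its own.
print(flag_for_review("burn their homes down"))
```

The point is not that such a toy would work in practice. It is that the same pattern-recognition machinery the networks already deploy to rank and target content could plausibly be turned towards surfacing content that warrants scrutiny.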
The harm that results from unlawful speech on the internet is not trivial. What we are left with in India, however, is the worst of both worlds: harmful speech inciting violence, defaming or intimidating someone is rarely stopped on social media, but criminal action is taken against those who dissent (usually at the behest of someone in power). Some of this is no doubt due to the dysfunctional state of India’s police forces—beholden to the ruling party and functioning as though the British Raj never left. However, the law is ripe for change.
The judgement in Shreya Singhal v Union of India in March 2015 was hailed by multiple commentators as a victory for freedom of speech. In retrospect, it is perhaps an outlier. Far from ensuring greater freedom of speech on the internet, it has only meant that the police and those in power continue to clamp down on online speech through different tools, ranging from internet shutdowns to frivolous police complaints. Another problem is that the law allows for only one kind of consequence: the criminal one. Civil remedies are out of reach, given the delays in the judicial system.
In this scenario, would it not make sense to try something different?
Given the reach and influence of social media networks (not to mention their economic power), an approach that puts the burden of enforcement on the networks themselves might make sense. This does not, of course, mean that social networks get the last word on the legality of content. A regulator would make sure that social networks do their job, rather than doing it for them (in the Indian context, this could perhaps be the Telecom Regulatory Authority of India, or a new independent regulatory authority set up for the purpose). Users would also retain the option of taking criminal or civil action against the social network itself if required.
This idea has its critics.
The principal line of attack is that this “privatises” the enforcement of free speech laws and leaves their interpretation in the hands of social networks. Critics point to the Manila Principles on Intermediary Liability (a set of principles collating best practices around the world in this respect), which suggest that takedowns, after substantive evaluation of content, should be based on judicial orders alone. While this is certainly desirable, it does not address the fact that content spreads on the internet faster than the judicial system could ever hope to act. The approach makes sense for drastic actions such as deleting content entirely or blocking a website. But it would be quite ridiculous to expect that someone who finds blatant, harmful lies about them being shared on a social network should first have to obtain an ex parte, ad interim injunction from a magistrate's court before any action is taken.
Given that failure to remove content under a law like the Network Enforcement Act could result in fines and penalties, it's possible we'll see a lot of Type-I errors, i.e., false positives: lawful content taken down out of caution. If, however, this means that Type-II errors, false negatives (actively harmful speech allowed to stay online without any restriction), are eliminated, then perhaps it is an improvement on the present situation. A perfect system would ensure that, in regulating harmful speech, neither false positives nor false negatives occur, but perhaps we should not let the perfect be the enemy of the good enough.
Actively shifting the burden of enforcement of laws on speech on to social networks might not be the ultimate way to handle issues of free speech on the internet. But the present system, where it is left entirely in the hands of a dysfunctional police machinery or toothless media regulators, is not desirable either.
If the government is serious about regulating content on the internet, perhaps it would do well to discard the tried-and-failed models it has applied to newspapers and television, as well as its own past approach to regulating content online.