
Content Moderation Case Study: GitHub Attempts To Moderate Banned Words Contained In Hosted Repositories (2015)

Summary: GitHub solidified its position as the world's foremost host of open
source software not long after its formation in 2008. Twelve years on, it
hosts 190 million repositories and serves 40 million users.

Even though its third-party content is software code, GitHub still polices
this content for violations of its terms of service. Some violations are
more overt, like possible copyright infringement, but many are a bit tougher
to track down.


A GitHub user found themselves targeted by a GitHub demand to remove certain
comments from their code. The user's code contained the word "retard" -- a
term that, while offensive in certain contexts, isn't offensive when used as
a verb to describe an intentional delay in progress or development. But
rather than inform the user of this violation, GitHub chose to remove the
entire repository, which also caused users who had forked the code to lose
access to their own repositories.
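
For illustration only -- this snippet is hypothetical and not drawn from the
affected repository -- the word shows up in ordinary engineering usage in
code like the following:

    def retard_ignition(timing_deg: float, knock_detected: bool) -> float:
        """Hypothetical engine-control helper: 'retard' here is the verb,
        meaning to deliberately delay the spark timing."""
        if knock_detected:
            # Retard the ignition timing by 4 degrees to protect the engine.
            return timing_deg - 4.0
        return timing_deg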

It wasn't until the user demanded an explanation that GitHub finally
provided one. In an email sent to the user, GitHub said the code contained
content the site viewed as "unlawful, offensive, threatening, libelous,
defamatory, pornographic, obscene, or otherwise objectionable." More
specifically, GitHub told the user to remove the words "retard" and
"retarded," restoring the repository for 24 hours to allow this change to be
made.

Decisions for GitHub:

- Is the blanket banning of certain words a wise decision, considering the
  idiosyncratic language of coding (and coders)?

- Should GitHub account for downstream repositories that may be negatively
  affected by removal of the original code when making content moderation
  decisions, and how?

- Could banned words inside code comments be moderated by only removing the
  comments, which would avoid impacting the functionality of the code? (A
  minimal sketch of this idea follows the questions below.)

Questions and policy implications to consider:

- Is context considered when moderating possible terms of service violations?

- Is it possible to police speech effectively when the content hosted isn't
  what's normally considered speech?

- Does proactive moderation of certain terms deter users from deploying code
  designed to offend?
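
On the third question above, one approach would be to scan only the comment
tokens of a file rather than the file as a whole. The sketch below is a
hypothetical illustration (the banned-word list and function name are
invented; GitHub has not published how its screening works), and it also
shows the limits of the idea: a comment-only scan still flags the innocuous
engine-timing sense of the word, which is exactly the context problem these
questions raise.

    import io
    import tokenize

    # Hypothetical ban list -- GitHub has not published the terms it screens for.
    BANNED_WORDS = {"retard", "retarded"}

    def flag_comment_violations(source: str):
        """Return (line_number, comment_text) pairs where a banned word
        appears in a Python comment, leaving identifiers and strings alone."""
        hits = []
        for tok in tokenize.generate_tokens(io.StringIO(source).readline):
            if tok.type == tokenize.COMMENT:
                text = tok.string.lower()
                if any(word in text for word in BANNED_WORDS):
                    hits.append((tok.start[0], tok.string))
        return hits

    sample = (
        "def retard_ignition(timing_deg):\n"
        "    # Retard the spark timing to stop the engine knocking.\n"
        "    return timing_deg - 4.0\n"
    )
    print(flag_comment_violations(sample))
    # -> [(2, '# Retard the spark timing to stop the engine knocking.')]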
Resolution: The user's repository was ultimately restored after the
offending terms were removed, as were the repositories that relied on the
original code GitHub had deemed too offensive to leave unaltered.
Unfortunately for GitHub, this drew attention to its less-than-consistent
approach to terms of service violations. Searches for words considered
"offensive" by GitHub turned up dozens of other potential violations -- none
of which appeared to have been targeted for removal despite containing far
more offensive terms, code, and notes.

And the original offending code was modified with a tweak that replaced the
word "retard" with the word "git" -- terms that are pretty much
interchangeable in other parts of the world. The not-so-subtle dig at GitHub
and its inability to detect nuance may have pushed the platform towards
reinstating content it had perhaps pulled too hastily.

Originally posted on the Trust & Safety Foundation website.
