August 30, 2018

Moderation in Moderation: Exploring the Ethics Around Social Media Moderation


Some content will always have to be moderated. No one, for instance, would argue against a marketplace like eBay moderating what it allows users to put up for sale, particularly where some auctions have breached laws or generally accepted social norms (e.g. human or organ trafficking). As I discussed in my last post, there’s now an opportunity for companies to use AI to reduce the amount of objectionable content that human moderators have to review and in turn protect them from the harmful effects such a task can have on them. Plus, with new laws like FOSTA-SESTA, moderation is moving from a nice-to-have to a must-have.
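To make the idea of AI-assisted review concrete, here is a minimal sketch of a confidence-based triage step, where a model scores each post and only the ambiguous middle band reaches human moderators. The classifier, thresholds, and routing labels below are hypothetical illustrations for this post, not any platform's actual pipeline.

```python
# A minimal sketch of AI-assisted triage, assuming a hypothetical classifier that
# returns the probability a post violates policy. All names, terms, and thresholds
# are illustrative only; this is not any platform's actual moderation system.

def score_content(post: str) -> float:
    """Stand-in for a trained model: a crude keyword heuristic for demonstration only."""
    flagged_terms = {"attack", "kill", "traffic"}  # hypothetical term list
    hits = sum(term in post.lower() for term in flagged_terms)
    return min(1.0, hits / 2)

def triage(post: str, remove_at: float = 0.95, approve_at: float = 0.05) -> str:
    """Route a post by model confidence so humans only review the ambiguous middle band."""
    p = score_content(post)
    if p >= remove_at:
        return "auto_remove"    # near-certain violation: removed without human exposure
    if p <= approve_at:
        return "auto_approve"   # near-certain benign: skips the review queue
    return "human_review"       # everything ambiguous still goes to a moderator

if __name__ == "__main__":
    for post in ["Lovely sunset tonight", "plans to attack and kill"]:
        print(post, "->", triage(post))
```

The point of the middle band is exactly the one this post turns to next: the clear-cut cases can be automated away, but the ambiguous ones still land on a person's desk.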

However, content moderation is not so cut and dried. Moderators will still be left to deal with content that is more ambiguous, treading the fine line between protecting free speech and keeping offensive, potentially harmful content off a platform.


The Dilemma: What happens when we don’t moderate?

In 2017, 71% of internet users used social networks, and that number is only expected to grow. With 2.2 billion users, Facebook alone has more users than the population of any single country. The power these platforms wield in deciding what content is removed or approved is therefore significant. While most users are completely unaware that their feeds are being moderated or when content is removed, many would agree that removing offensive and/or illegal content (e.g. child pornography or trafficking) is necessary.

However, what about when the content that remains is untrue or misleading and begins to go viral? While some "conspiracy theories" may be innocuous, much misinformation has the potential not only to influence elections but to shape socio-political events.

In the much-discussed case of the Rohingya Muslims of Myanmar, for instance, Facebook has been blamed for facilitating rage against this minority population (which has resulted in mass violence, abuse, and a refugee crisis) after allowing "posts that range from recirculated news articles coming from pro-government outlets, to misrepresented or faked photos and anti-Rohingya cartoons" (CNN) to remain on the site.


Nor is this an isolated case. Facebook has also been blamed for spikes of violence across the developing world over the past four years, including riots and mob executions, all linked to posts by religious and political extremists that weren't taken down.


Although Facebook does have written terms and guidelines around posting, most users have never read them and don't understand the process for removing content. That leaves human moderators with the task of determining what, within the guidelines they've been given, can or cannot be posted. The human moderators that help Facebook, Google, and Twitter have up to 30 different categories of content to monitor, with different teams serving as "experts" in each. In the documentary The Cleaners, the terrorism moderation experts, for instance, need to memorize 37 terrorist organizations, including their names and flags.


Power and Choice: What is okay to post and what is not? Who decides?

Free speech has long been a cornerstone of the tech world, so the question of how far moderation should go remains hotly debated. Historically, social platforms have allowed users to post whatever they want, but given the reach of these platforms today, that hands-off approach has had far more influence on the world than we realized, particularly in the last few years.


These platforms are therefore now being pushed to take more responsibility, or be held accountable, for what gets published. Just recently, Facebook announced that it would be removing misinformation that could cause harm. Along with Facebook, Apple, Google, and Spotify have all removed content from Alex Jones, labelled by The New York Times as "the most notorious internet conspiracy theorist," and his website Infowars. "We have a broader responsibility to not just reduce that type of content but remove it," said Tessa Lyons, a Facebook product manager. But what are the parameters for that responsibility?


Most sites leave users to report offensive or harmful posts, but even when users do, effective moderation suffers because moderators don't always have the context to know whether a post is harmful. In 2016, Facebook received worldwide criticism after removing the historic "Napalm Girl" photograph, which showed a naked nine-year-old girl running from a napalm attack that caused severe burns to her back and arms during the Vietnam War. Facebook's reason for removal lay in its Community Standards: the company released a notice stating that "any photographs of people displaying fully nude genitalia or buttocks, or fully nude female breast, will be removed," noting that in some countries, any pictures of naked children qualified as child pornography.


Among other things, the site was accused of censoring the war and abusing its unprecedented power. Forced to relent, Facebook stated that after reviewing how it had applied its Community Standards in the case, it recognized "the history and global importance of this image in documenting a particular moment in time." Ignoring context when moderating "nudity" had previously forced Facebook to defend itself and ultimately clarify this same policy regarding breastfeeding. "It is very hard to consistently make the right call on every photo that may or may not contain nudity that is reported to us," said a spokesperson at the time, "particularly when there are billions of photos and pieces of content being shared on Facebook every day."


And many of those billions of posts fall into the notoriously ambiguous danger zone that is politics. Choice and conflict are hallmarks of democracy, but what about when those choices aren't informed by fact? The 2016 U.S. presidential election saw Facebook once again embroiled in controversy, taken to task for how disinformation posted on its site may have influenced the outcome.

Both Facebook and Twitter are said to spread "fake news" much faster than the truth. A recent study from the MIT Sloan School of Management noted that many of the false news stories spread on Twitter evoked strong emotional responses. Deb Roy, a professor at MIT and co-author of the study, asks a poignant question: "How do you get a few billion people to stop for a moment and reflect before they hit the retweet or the share button, especially when they have an emotional response to what they've just seen?"

As the midterm elections approach, many tech companies are trying to find ways to curb the spread of misinformation. Facebook is now rating users' trustworthiness when they report posts (since many people report posts they simply disagree with rather than posts containing actual false news) and is shaping which news users see by promoting more "high-quality sources." The aim, it seems, is to allow users to make more informed decisions.
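To illustrate the reporter-weighting idea in the paragraph above, here is a minimal sketch that scores each user by how often their past reports were upheld and weights new reports accordingly. The class, the smoothing, and the scoring are hypothetical illustrations of the concept, not Facebook's actual system.

```python
# A minimal sketch of weighting reports by reporter reliability, assuming a
# hypothetical history of how often each user's past reports were upheld by
# reviewers. Illustrative only; not any platform's real trustworthiness score.
from collections import defaultdict

class ReporterTrust:
    """Track, per user, how often their reports were confirmed by reviewers."""

    def __init__(self):
        self.upheld = defaultdict(int)
        self.total = defaultdict(int)

    def record_outcome(self, user_id: str, was_violation: bool) -> None:
        """Update a user's history after a reviewer rules on their report."""
        self.total[user_id] += 1
        if was_violation:
            self.upheld[user_id] += 1

    def trust(self, user_id: str) -> float:
        """Laplace-smoothed fraction of a user's past reports that were upheld."""
        return (self.upheld[user_id] + 1) / (self.total[user_id] + 2)

def weighted_report_score(trust_model: ReporterTrust, reporter_ids: list[str]) -> float:
    """Sum reporter trust so many low-trust reports don't outweigh a few reliable ones."""
    return sum(trust_model.trust(uid) for uid in reporter_ids)
```

The design choice here is simply that a report from someone whose past reports were usually accurate counts for more than a report from someone who mostly flags content they dislike.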


However, this attempt to stop the spread of false information is not without detractors. For one, by taking a more hands-on approach to what people can and cannot share online, social platforms may actually strengthen their influence over our lives. Furthermore, some critics argue that there is a risk of political bias, with both sides of the aisle claiming that companies like Facebook have enabled the other. Which sources users consider high-quality news may itself be a matter of political affiliation.

Overall, the question of who should decide what we can and cannot see is a complicated one, and while social platforms strive to find the best solutions, the truth is they will never be able to satisfy everyone.


How much moderation do you think is okay? I’d love to hear your thoughts.