When Does a Technology Giant Become a Civil Servant?

Facebook is taking steps into the realm of content moderation, but it is missing a balanced global viewpoint.

By Xische Editorial, May 17, 2019

Source: Art tools design/Shutterstock

From their earliest iterations, Silicon Valley social media platforms have tried to avoid the role of moderator. Facebook, the world's most popular social media platform, is a good example of the path many have taken. Rather than hiring editors and moderators to carefully comb through the material posted to its platform, Facebook has largely stayed out of content moderation. This stance has protected the company and allowed it to operate largely unregulated. There have, of course, been guidelines about what is absolutely off-limits (such as explicit calls to violence), but when it comes to general political discourse, the company has kept its distance. That is rapidly changing.

After the Cambridge Analytica data privacy scandal last year, in which a UK political operation effectively weaponised Facebook user data for political ends, the platform has had to crack down on content in earnest. Under pressure for failing to act against hate speech and misinformation of all stripes, Facebook is finally getting involved in the moderation business. That's a good thing.

Facebook is not alone in breaking down the content-moderation barriers that have long existed in Silicon Valley. Google, Twitter, and even Pinterest have recently taken similar stances on extremist political rhetoric on their platforms. So, when does a technology giant become a civil servant? If a company like Facebook decides it must play a role in safeguarding society from hate speech or election manipulation, who provides the oversight?

There are two primary concerns in this new space for social media companies. The first is the bias of the Facebook employees now tasked with regulating material and with writing the algorithms that underpin how the platform operates. The second is that these companies operate around the world, so who should ensure that Facebook is on top of sensitivities in virtually every market?

According to the Wall Street Journal, Facebook says it has strict guidelines to prevent political bias from factoring into its content-moderation decisions and that including a variety of perspectives is essential to serving its user base of more than 2bn people. That statement appears at odds with ample reporting that the majority of Facebook employees harbour left-leaning political views and lack exposure to markets outside the United States. Now that the company is ready to take a stand against problematic content, how should it regulate its decision-making? How can it ensure its employees make impartial content-moderation decisions?

At present, there is no answer to this question, but US lawmakers seem keen to have a say. Such a move is not a problem in and of itself, but it does ignore the billions of users outside the US. This is an opportunity for smaller markets to get involved. Dubai, a city at the nexus of East and West and home to more than 200 nationalities, is a perfect place for Facebook to invest in content-moderation test beds. Not only do residents of Dubai have local knowledge of a wide variety of places from Africa to Central Asia, but the city also has the infrastructure to sustain a potentially industry-changing operation. Such an operation wouldn't need to be confined to Facebook; it could serve all leading social media platforms.

As these companies invest more resources in artificial intelligence and algorithms capable of predicting user behaviour, the questions of bias and ethics will become even more complex and urgent. Now that Facebook has taken the first step and acknowledged that it has a major content-moderation problem on its hands, it is imperative for countries around the world to ensure they are part of the discussion over how best to handle content going forward. Those that act quickly will be part of a profound shift in how the internet operates.

Major technology platforms are effectively global utility companies at this stage and must be treated as such when it comes to critical questions like content. If small states have a role in ensuring that algorithms are trained on the particularities of their markets and attuned to local ethical considerations, the internet will be a safer and more productive place for everyone.