Former Facebook Content Moderator Files Lawsuit for PTSD
Ron Perillo
Screening “Highly Toxic” Content
To a big company like Facebook, lawsuits are simply part of doing business, and the charges typically come from outside the company. The latest class-action lawsuit, however, comes from a former employee: a content moderator named Selena Scola. She alleges that while filtering content for the social media platform, she was “exposed to highly toxic, unsafe and injurious content”.
Scola’s lawyers claim that she developed Post-Traumatic Stress Disorder as a result of the job. The work involves not just screening and deleting hate speech, but also reviewing offensive images and videos. These can range from merely suggestive photos to extreme violence. According to the lawsuit, the job even includes exposure to “millions of videos, images and broadcasts of child sex abuse, rape, torture, bestiality, beheadings, suicides and murder”.
The lawsuit also alleges that the company does not provide its content moderators with “sufficient training” in handling traumatic content. Her lawyers claim that Ms. Scola’s PTSD can be triggered whenever she touches a computer mouse, hears loud noises or enters a cold building, among other things.
Scola is the only employee named as a plaintiff, but the class-action suit could represent thousands of Facebook content moderators in California. Facebook’s content moderation operation involves thousands more workers worldwide.
What is Facebook’s Position Regarding this Issue?
A Facebook spokesperson told Vice that the company is currently “reviewing this claim”. The spokesperson added that Facebook recognizes that content moderation work is “difficult”, and that it offers psychological support and wellness resources to its workers.
The spokesperson also said that the company has specific training protocols for would-be content moderators. However, the suit alleges that this training is insufficient for the volume of traumatic content the employees see.
The social media giant has also increasingly been working on using AI to filter disturbing content. This screens potentially offensive content before it ever reaches human moderators, thus minimizing the trauma it can inflict.