In The Elements of AI Ethics I talk about content moderator trauma. But I don't think people tend to reflect on just how massive the trauma is. This recent ruling in Kenya gives some insights.

From reporting in The Guardian:

“I’ve seen stuff that you’ve never seen, and I’d never wish for you to see,” Frank Mugisha, 33, a moderator from Uganda, told the Guardian.

"Many said that they didn’t have full knowledge of what they were signing up for when they took the job. Some alleged they were led to believe they were taking customer service roles, only to end up sifting through gruesome content under tight timelines."

“It alters the way you think and react to things,” said Mugisha. “We may be able to find other jobs, but would we be able to keep them? I don’t know. We don’t interact normally any more.”

“I remember my first experience witnessing manslaughter on a live video … I unconsciously stood up and screamed. For a minute, I almost forgot where I was and who I was. Everything went blank,” read one of the written testimonies.

When we talk about the benefits of social media and AI, we also need to talk about harm. The question we don't ask often enough is this: to gain these benefits, how much suffering are we willing to ignore?

https://www.theguardian.com/global-development/2023/jun/07/a-watershed-meta-ordered-to-offer-mental-health-care-to-moderators-in-kenya

#AIEthics #DigitalEthics