Employees say they weren’t adequately warned about the brutality of some of the text and images they would be tasked with reviewing, and were offered no or inadequate psychological support. Workers were paid between $1.46 and $3.74 an hour, according to a Sama spokesperson.

  • Cool. Using slave labor to train tools to strip the best parts of humanity away from us so that AI can do creative activities like poetry and art while we’re more and more stuck in a gig economy.

    Cool cool cool cool.

    •  QHC   ( @QHC@kbin.social ) 

      so that AI can do creative activities

      Let me stop you right there. The current concept of “AI”, otherwise known as Large Language Models because that is really what people are referring to, is not capable of creativity. ChatGPT and things like it just regurgitate stuff they find. They can’t create something new and original.

      • But that’s exactly what’s happening. Bloodsucking capitalists have decided that AI is a cheaper option than paying people a living wage, so creatives are losing their jobs.

        Instead of actually learning how to create art, shitbag grifters claim they’re “taking the power back from creatives” while doing nothing but stealing from actual creatives to make some sort of soulless synthesis, leaving actual creatives high and dry. For just one example, look at how many publishing outlets have stopped taking submissions because of the overwhelming flood of AI spam.

        All the while, people are out here trying to make ends meet and are being forced into shitty, low-paid jobs or gig work.

      • And here I must be crazy for thinking that if it is a US company paying them, maybe they deserve pay equivalent to US employees, no matter what the fucking local pay is.

        That “local pay” bullshit is just an excuse to exploit. Pay them what you would have to pay a US citizen for the same job or fuck right off. They don’t deserve less because of geographic location.

  • To be honest, this isn’t an AI problem, but a content moderation and labor force ethics problem. You can swap out AI with social media and you’ll find the same amount of psychological harm to moderators.

  • This is the best summary I could come up with:


    The 51 moderators in Nairobi working on Sama’s OpenAI account were tasked with reviewing texts, and some images, many depicting graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest, the petitioners say.

    “We are in agreement with those who call for fair and just employment, as it aligns with our mission – that providing meaningful, dignified, living wage work is the best way to permanently lift people out of poverty – and believe that we would already be compliant with any legislation or requirements that may be enacted in this space,” the Sama spokesperson said.

    In sample passages read by the Guardian, text that appeared to have been lifted from chat forums included descriptions of suicide attempts, mass-shooting fantasies and racial slurs.

    The announcement coincided with an investigation by Time, detailing how nearly 200 young Africans in Sama’s Nairobi datacenter had been confronted with videos of murders, rapes, suicides and child sexual abuse as part of their work, earning as little as $1.50 an hour while doing so.

    She wants to see an investigation into the pay, mental health support and working conditions of all content moderation and data labeling offices in Kenya, plus greater protections for what she considers to be an “essential workforce”.


    I’m a bot and I’m open source!

      • Thankfully this one doesn’t require AI. You can generally find the most important sentences in an article by counting the occurrences of every unique word, throwing out the common articles (e.g. a, an, the), and then extracting the sentences which contain the most frequently used words.
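
A minimal sketch of that counting approach (the stop-word list, the naive sentence splitter, and the `summarize` name are illustrative assumptions here, not the bot's actual code):

```python
import re
from collections import Counter

# Small illustrative stop-word list ("common articles" and similar filler).
STOP_WORDS = {"a", "an", "the", "and", "or", "of", "to", "in", "is", "it",
              "that", "as", "for", "on", "with", "was", "were", "by"}

def summarize(text, num_sentences=2):
    # Naive sentence splitter: break on terminal punctuation + whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Count occurrences of every unique word, throwing out stop words.
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOP_WORDS)
    # Score each sentence by the total frequency of the words it contains.
    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))
    # Keep the highest-scoring sentences, preserving their original order.
    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    return [s for s in sentences if s in top]
```

No model involved: the sentences that share the most words with the rest of the article simply score highest and get extracted verbatim.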