The search results varied when tested by different users, but the Guardian
verified through screenshots and its own tests that various stickers portraying
guns surfaced for these three search results. Prompts for “Israeli boy”
generated cartoons of children playing soccer and reading. In response to a
prompt for “Israel army” the AI created drawings of soldiers smiling and
praying, no guns involved. Meta’s own employees have reported and escalated the
issue internally, a person with knowledge of the discussions said.
And even if moderated, it will display new unique biases, as otherwise unassuming things will get moderated out of the pool by people who take exception to them.
Not to mention the absurd and inhuman mental toll this work will take on the exploited workers forced to sort it.
Like, this is all such a waste of time, effort, and human sanity, for tools of marginal use that are mostly just a gimmick to prop up the numbers for tech bros who have borrowed more money than they can pay back.