Was this AI trained on an unbalanced data set? (Only black folks?) Or has it only been used to identify photos of black people? I have so many questions: some technical, some about media sensationalism.

  • IMO, the systemic racism is the fact that the models aren’t accurate with people of color, yet the AI is being put to use on them anyway. If the AI were not good at identifying white people, do we really think it would be in active use for arresting people?

    The issue isn’t that the technology is much, much worse at identifying people of color; it’s that it’s being used anyway despite that.

    And if you say "oh, they’re just being stupid and didn’t realize it’s doing that," then it’s egregious that they didn’t even check.