• Seems like it's not a bias in the AI models themselves, but rather a reflection of the source material.

    That’s what is usually meant by AI bias: a bias in the material used to train the model that is reflected in its behavior.

      • I feel like not everyone is conscious of these biases. We need to raise awareness and try to prevent, for example, HR departments from buying AI-based screening software with a strong bias that the vendors don't disclose (because why would you advertise that?)

        • ( @Bitrot@lemmy.sdf.org )

          I was confused about how a resume or application would be much affected, but the article points out that software is now often used to look over applicants' social media as part of hiring (which is awful).

          The bias it showed when determining guilt or weighing the consequences for a crime is concerning, as more law enforcement agencies integrate black-box algorithms into investigative work.

      • It’s FUCKING OBVIOUS

        What is obvious to you is not always obvious to others. There are already countless examples of AI being used to sort through job applicants, decide who gets audited by child protective services, and determine who can get a visa for a country.

        But it’s also more insidious than that, because the far-reaching implications of this bias often can't be predicted. For example, excluding all gender data from training ended up making sexism worse in a real-world example of AI-assisted financial lending, and the same was true for Apple's credit card. There are even full-blown articles showing how the removal of data can actually reinforce bias, which indicates that it's not just about what material is used to train the model but also about what data is not used or explicitly removed (the toy sketch at the end of this comment shows how that can happen).

        This is so much more complicated than “this is obvious,” and there are a lot of signs pointing toward the need for regulation of AI and ML models being used in places where it really matters, such as decision making, until we understand them a lot better.
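
        As a toy illustration (my own sketch, not from any of the linked articles; the feature names, numbers, and bias mechanism are all made up), here's how dropping a protected attribute from training can still leave its bias intact, because the model reconstructs it from a correlated proxy feature:

        ```python
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 10_000
        gender = rng.integers(0, 2, n)             # hypothetical protected attribute (0/1)
        proxy = gender + rng.normal(0.0, 0.3, n)   # e.g. a shopping-category feature strongly correlated with gender
        income = rng.normal(50.0, 10.0, n)         # a legitimate feature, independent of gender

        # Simulated historical decisions are biased: half of the otherwise
        # qualified gender==1 applicants were rejected anyway.
        qualified = income > 45
        biased_reject = (gender == 1) & (rng.random(n) < 0.5)
        approved = (qualified & ~biased_reject).astype(int)

        # Train WITHOUT the gender column -- only the proxy and the legitimate feature.
        X = np.column_stack([proxy, income])
        model = LogisticRegression().fit(X, approved)
        pred = model.predict(X)

        print("predicted approval rate, group 0:", pred[gender == 0].mean())
        print("predicted approval rate, group 1:", pred[gender == 1].mean())
        # The gap persists even though gender was never a training feature:
        # the model recovers it through the proxy column.
        ```

        The takeaway: fairness has to be measured and enforced against the protected attribute; simply deleting that column doesn't make the bias go away.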