Mastodon, a decentralized alternative to Twitter, has a serious problem with child sexual abuse material (CSAM), according to researchers from Stanford University. In just two days, the researchers found over 100 instances of known CSAM across more than 325,000 posts on Mastodon. They also found hundreds of posts containing CSAM-related hashtags, as well as links pointing to CSAM trading and the grooming of minors. One Mastodon server was even taken down for a period of time due to CSAM being posted. The researchers suggest that decentralized networks like Mastodon need more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.

  •  jarfil   ( @jarfil@beehaw.org ) 

    I think you’re missing the opposite point.

    An AI trained on a given instance’s admin decisions would reproduce the same censorship the admins already apply. We can agree on that.

    An AI trained by a third party on unknown data (data that would actually be illegal to possess), which can detect “CSAM (and potentially other content)”, would increase censorship of both CSAM… and of that “potentially other content”, outside the control, preferences, or knowledge of the instance admins.

    Using an external service and submitting ALL content to an AI trained by a third party to make the decision not only allows the external service to collect ALL the content (not just the censored content), but also to change the decision parameters without prior notice or any kind of oversight, and apply them to ALL content.

    The problem is a difference between:

    • instance modlog -> instance content filtered by instance AI -> makes decisions similar to the instance admins’
    • [illegal to know dataset] -> third-party captures all content, feeds to undisclosed AI -> makes unknown decisions in the name of removing CSAM

    One is an AI that can make mistakes, but mostly follows whatever an admin would do. The other is a 100% surveillance-state nightmare in the name of filtering 0.03% of content.
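
    To make the difference concrete, here is a minimal Python sketch of the two flows under discussion. It is only an illustration: the class name `LocalModlogClassifier`, the endpoint URL, and the response fields are hypothetical, not any real Mastodon or scanning-service API.

    ```python
    # Illustrative sketch of the two moderation flows described above.
    # All names, URLs, and fields here are hypothetical.

    import requests  # assumed available for the external-service example


    class LocalModlogClassifier:
        """Toy stand-in for a model trained only on this instance's own modlog."""

        def __init__(self, modlog_decisions):
            # modlog_decisions: list of (post_text, was_removed) pairs decided by admins
            self.removed_examples = [
                text for text, removed in modlog_decisions if removed
            ]

        def should_remove(self, post_text):
            # Placeholder logic: flag posts resembling content admins already removed.
            return any(example in post_text for example in self.removed_examples)


    def moderate_locally(post_text, classifier):
        # Flow 1: content never leaves the instance; decisions mirror the admins'.
        return classifier.should_remove(post_text)


    def moderate_via_third_party(post_text):
        # Flow 2: ALL content is submitted to an external service whose model,
        # training data, and decision thresholds are opaque to the instance.
        resp = requests.post(
            "https://example-scan-service.invalid/v1/classify",  # hypothetical endpoint
            json={"content": post_text},
            timeout=5,
        )
        return resp.json().get("remove", False)
    ```

    The point of the contrast: in the first flow the model, the training data (the modlog), and the decision all stay under the instance admins’ control; in the second, every post is exfiltrated to a third party whose criteria can change at any time without the instance knowing.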