Mastodon, a decentralized alternative to Twitter, has a serious problem with child sexual abuse material (CSAM), according to researchers from Stanford University. In just two days, the researchers found over 100 instances of known CSAM across more than 325,000 posts on Mastodon. They also found hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and the grooming of minors. One Mastodon server was even taken down for a period of time because CSAM had been posted to it. The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.

  •  stravanasu   ( @pglpm@lemmy.ca ) 
    10 months ago

    I’m not fully sure about the logic here, or the conclusions it seems to hint at. The internet itself is a network with major CSAM problems (so maybe we shouldn’t use it?).

    •  mudeth   ( @mudeth@lemmy.ca ) 
      10 months ago

      It doesn’t help to bring whataboutism into this discussion. This is a known problem with the open nature of federation. So are bigotry and hate speech. To address these problems, it’s important to first acknowledge that they exist.

      Also, since the fediverse is still in its early stages, now is the time to experiment with mechanisms to control these problems. Saying that they are innate to networks only sweeps them under the rug. At some point there will be a watershed event that will force these conversations anyway.

      The challenge is in moderating such content without being ham-fisted. I must admit I have absolutely no idea how; this is just my read of the situation.

      • @mudeth @pglpm you really don’t, beyond our current tools and reporting to the authorities.

        This is not a single monolithic platform, it’s like attributing the bad behavior of some websites to HTTP.

        Our existing moderation tools are already remarkably robust and defederating is absolutely how this is approached. If a server shares content that’s illegal in your country (or otherwise just objectionable) and they have no interest in self-moderating, you stop federating with them.

        Moderation is not about stamping out the existence of these things, it’s about protecting your users from them.

        If they’re not willing to take action against this material on their servers, then the only thing further that can be done is reporting it to the authorities or the court of public opinion.

      •  stravanasu   ( @pglpm@lemmy.ca ) 
        10 months ago

        Maybe my comment wasn’t clear or you misread it. It wasn’t meant to be sarcastic. Obviously there’s a problem and we want (not just need) to do something about it. But it’s also important to be careful about how the problem is presented - and manipulated - and about how fingers are pointed. One can’t point a finger at “Mastodon” the same way one could point it at “Twitter”. Doing so has some similarities to pointing a finger at the HTTP protocol.

        Edit: see for instance the comment by @while1malloc0@beehaw.org to this post.

        •  mudeth   ( @mudeth@lemmy.ca ) 
          10 months ago

          Understood, thanks. Yes I did misread it as sarcasm. Thanks for clearing that up :)

          However, I disagree with @shiri@foggyminds.com: Lemmy, and the Fediverse as a whole, are interacted with as monolithic entities. Not just by people from the outside, but even by their own users. There are people here saying how they love the community on Lemmy, for example. It’s just the way people group things, and no amount of technical explanation will prevent this semantic grouping.

          For example, the person who was arrested for CSAM recently was running a Tor exit node, but that didn’t help his case. As shiri pointed out, defederation works for black-and-white cases. But what about cases where things are a bit more gray, like disagreements over hard political viewpoints? We’ve already seen the open internet devolve into bubbles with no productive discourse. Federation has a unique opportunity to solve that problem by starting from scratch and learning from previous mistakes. Defederation is not the solution; it isn’t granular enough to be one.

          Another problem with defederation is that it is after-the-fact and depends on moderators and admins. There will inevitably be a backlog (as pointed out in the article). With enough community reports, could there be a holding-cell style mechanism in federated networks? I think there is space to explore this more deeply, and the study does the useful job of pointing out liabilities in the current state of the art.

          • Another way to look at it is: How would you solve this problem with email?

            The reality is, there is no way to solve the problem of moderation across disparate servers without some unified point of contact. With any form of federation, your options are:

            1. Close-source the protocol, API, and implementation, and have the creator be the final arbiter, either by proxy of code or by having a back door.
            2. Have every instance agree to a single set of rules/admins.
            3. Don’t, and just let the instances decide where to draw the lines.

            The reality is, any federated system is gonna have these issues, and as long as the protocol is open, anyone can implement any instance they want on top of it. It would be wonderful to solve this issue “properly”, but it’s like dealing with encryption. You can’t force bad people to play by the rules, and any attempt to do so breaks the fundamental purpose of these systems.

          •  stravanasu   ( @pglpm@lemmy.ca ) 
            10 months ago

            I share and promote this attitude. If I must be honest it feels a little hopeless: it seems that since the 1970s or 1980s humanity has been going down the drain. I fear “fediverse wars”. It’s 2023 and we basically have a World War III going on, illiteracy and misinformation steadily increase, corporations play the role of governments, science and scientific truth have become anti-Galilean based on “authorities” and majority votes, and natural stupidity is used to train artificial intelligence. I just feel sad.

            But I don’t mean to be defeatist. No matter the chances we can fight for what’s right.

          • @mudeth @pglpm The grey area is all down to personal choices and how “fascist” your admin is (which in turn comes down to which instance is best for you).

            Defederation is a double-edged sword, because if you defederate constantly for frivolous reasons all you do is isolate your node. This is also why it’s the *final* step in moderation.

            The reality is that it’s a whole bunch of entirely separate environments and we’ve walked this path well with email (the granddaddy of federated social networks). The only moderation we can perform outside of our own instance is to defederate; everything else is just the typical blocking you can do yourself.

            The process here on Mastodon is to decide for yourself what is worth taking action on. If it’s not your instance, you report it to the admin of that instance and they decide if they want to take action and what action to take. And if they decide it’s acceptable, you decide whether this is a personal problem (just block the user or domain in your own account but leave it federating) or a problem for your whole server (in which case you defederate to protect your users).

            Automated action is bad because there’s no automated identity verification here and it’s an open door to denial of service attacks (a harasser generates a bunch of different accounts and uses them all to report a user until that user is auto-suspended).
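
            To make that concrete, here is a rough sketch of why a naive “auto-suspend after N reports” rule falls over. The threshold and account names are invented for illustration; this is not code from Mastodon, Lemmy, or any real instance:

```python
# Hypothetical sketch: a naive "auto-suspend after N reports" rule.
# Threshold and account names are invented; this is not Mastodon or Lemmy code.
from collections import defaultdict

AUTO_SUSPEND_THRESHOLD = 5   # assumed policy: 5 distinct reporters triggers a suspension

reports = defaultdict(set)   # target account -> set of accounts that reported it

def handle_report(reporter: str, target: str) -> bool:
    """Record a report; return True if the naive rule would auto-suspend the target."""
    reports[target].add(reporter)
    return len(reports[target]) >= AUTO_SUSPEND_THRESHOLD

# A harasser mints throwaway accounts and trips the rule with zero human review:
for i in range(AUTO_SUSPEND_THRESHOLD):
    suspended = handle_report(f"sock{i}@burner.example", "victim@mastodon.example")

print(suspended)  # True: five sockpuppets were enough to silence a legitimate user
```

            A human admin in the loop is exactly the safeguard against that kind of abuse.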

            The backlog problem, however, is an intrinsic problem of moderation that every platform struggles with. You can automate moderation, but that gets abused and there are countless cases of it taking action on harmless content; you can farm out moderation, but then you get sloppiness.

            The fediverse actually helps in moderation because each admin is responsible for a group of users and the rest of the fediverse basically decides whether they’re doing their job acceptably via federation and defederation (i.e. if you show that you have no issue with open Nazis on your platform, then most other instances aren’t going to want to connect to you).

            •  mudeth   ( @mudeth@lemmy.ca ) 
              10 months ago

              Defederation is a double-edged sword

              Agreed. It’s not the solution.

              The reality is that it’s a whole bunch of entirely separate environments and we’ve walked this path well with email

              On this I disagree. There are many fundamental differences. Email is private, while federated social media is public. Email is primarily one-to-one, or one-to-few; social media is broadcast-style. The law would see them differently, and the abuse potential is also different. @faeranne@lemmy.blahaj.zone also used e-mail as a parallel, and I don’t think that model works well.

              The process here on Mastodon is to decide for yourself what is worth taking action on.

              I agree for myself, but that wouldn’t shield a lay user. I can recommend that a parent sign up for Reddit, because I know what they’ll see on the front page. Asking them to moderate for themselves can be tricky. As an example, if people could moderate content themselves we wouldn’t have climate skeptics and Holocaust deniers. There is an element of housekeeping to be done top-down for a platform to function as a public service, which is what I assume Lemmy wants to be.

              Otherwise there’s always the danger of it becoming a wild-west platform that’ll attract extremists more than casual users looking for information.

              Automated action is bad because there’s no automated identity verification here and it’s an open door to denial of service attacks

              Good point.

              The fediverse actually helps in moderation because each admin is responsible for a group of users and the rest of the fediverse basically decides whether they’re doing their job acceptably via federation and defederation

              The way I see it, this will inevitably lead to a concentration of users, defeating the purpose of federation. One or two servers will be seen as ‘safe’ and people will recommend those to their friends and family. What stops those two instances from becoming the Reddit of 20 years from now? We’ve seen what the concentration of power in a few internet companies has done to the Internet itself; why retread the same steps?

              Again I may be very naive, but I think with the big idea that is federation, what is sorely lacking is a robust federated moderation protocol.

              • @mudeth I agree 110% with faeranne, especially in that this is much like the topic of encryption, where people (especially politicians) keep arguing that we just need to magically come up with a solution that lets governments access all encrypted communication without impacting security, while somehow preventing people from using existing encryption to bypass it entirely. It’s much like trying to legislate math into functioning differently.

                The closest you can get to a federated moderation protocol is basically just a standard way to report posts/users to admins.
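
                Concretely, a minimal version of that already exists: instances forward reports to one another as ActivityPub Flag activities. Roughly, and with every URL and comment below invented for illustration (expressed here as a Python dict rather than raw JSON), a forwarded report looks something like this:

```python
# Rough shape of a federated report, i.e. an ActivityPub "Flag" activity.
# All ids, domains, and text below are invented for illustration.
flag_report = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://example.social/reports/1",   # hypothetical report id
    "type": "Flag",
    "actor": "https://example.social/actor",    # the instance actor, keeping the human reporter anonymous
    "content": "Spam and harassment, please review",
    "object": [
        "https://other.example/users/someone",  # the reported account
        "https://other.example/notes/12345",    # the offending post(s)
    ],
}
```

                The receiving admin still decides what, if anything, to do with it, which is the whole point.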

                You could absolutely build blocklists that are shared around, but that’s already a thing and will never be universal.
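
                For what it’s worth, those shared lists are usually nothing fancier than a CSV of domains with a severity attached, which any admin tooling can consume. A minimal sketch follows; the column layout is an assumption for illustration, not an official format:

```python
import csv
from io import StringIO

# A shared blocklist as it is often passed around: domain, severity, public comment.
# The columns are assumed for this sketch, not a fixed standard.
shared_list = """domain,severity,comment
spam.example,suspend,commercial spam
edgy.example,silence,repeated harassment
"""

def load_blocklist(text: str) -> dict:
    """Return {domain: severity} for an admin to review and apply (or not)."""
    return {row["domain"]: row["severity"] for row in csv.DictReader(StringIO(text))}

print(load_blocklist(shared_list))  # {'spam.example': 'suspend', 'edgy.example': 'silence'}
```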

                Basically what you’re describing is that someone should come up with a way to *force* me to apply moderation actions to my server that I disagree with. That somehow such a system would be immune to abuse (i.e. because it’s external to my server, it would magically avoid manipulation by hackers and trolls), and that I would have no choice about allowing that access, despite running a server based on open-source software whose code I can edit myself if I wish (but which, somehow, I couldn’t edit to prevent the external moderation from working).

                You largely miss the point of my other arguments: email is a perfect reference point because, private vs. public aside, it faces all the same technical, social, and legal challenges. It’s just an older system with a slightly different purpose (that doesn’t change its technical foundations, only how it’s interacted with), and it’s the closest relative to ActivityPub with much, much larger-scale adoption. These issues and topics have already been discussed there ad nauseam.

                And I didn’t say users would moderate themselves; rather, we each decide what is worth taking action on. If you’re not an admin, you choose whether or not something is worth reporting and whether the server you’re on is acceptable for your wants/needs. If you take issue with anti-vaxxers, climate change deniers, and Nazis, and your server allows all of that (either on the server itself, or by having no issue with other servers that allow it)… then you move to a server that doesn’t.

                Finally, this doesn’t end in centralization because of all the aforementioned gray areas. There are many things that I don’t consider acceptable on my server but aren’t grounds for defederation.

                For example: I won’t tolerate the ignoring of minority voices on topics of cultural appropriation and microaggressions… but I don’t consider it a good idea to defederate other servers over it, because the admins themselves often barely understand it and I would be defederating 90% of the fediverse at that point. If I see it from my own users I will talk to them and take action as appropriate; if it comes from other servers, I’ll report it if the server looks remotely receptive.

      • The problem is even bigger: in some places (ahem, Reddit) you will get deplatformed for explaining and documenting why there is a problem.

        (Even here I’ll censor myself, and hopefully restrict myself to content that isn’t too hard to moderate.)

    •  jarfil   ( @jarfil@beehaw.org ) 
      10 months ago

      The internet itself is a network with major CSAM problems

      Is it, though?

      Over the last year, I’ve seen several reports on TV of IRL group abuse of children, by other children… which left everyone scratching their heads as to what to do, since none of the perpetrators can be legally charged.

      During that same time, I’ve seen exactly 0 (zero) instances of CSAM on the Internet.

      Sounds to me like IRL has a major CSAM, and general sex abuse, problem.