The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.

Fighting fake accounts is hard, and most implementations don’t currently have an effective way of filtering them out. I’m sure the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.

  • This was a problem on reddit too. Anyone could create accounts - heck, I had 8 accounts:

    one main, one alt, one “professional” (linked publicly on my website), and five for my bots (accounts created optimistically, but never properly run). I had all 8 accounts signed in on my third-party app and could easily manipulate votes on my own posts.

    I feel like this is what happened when you’d see posts with hundreds or thousands of upvotes but only 20-ish comments.

    There needs to be a better way to handle this, but I’m unsure if we can truly solve it. Botnets are a problem across all social media (my undergrad thesis many years ago was on detecting botnets on Reddit using Graph Neural Networks).

    Fwiw, I have only one Lemmy account.

    • On Reddit there were literally bot armies that could cast thousands of votes instantly. It will become a problem if votes have any actual effect.

      It’s fine if they’re only there as an indicator, but if votes determine popularity and prioritize visibility, it will become a total shitshow at some point. And it will happen fast. So yeah, better to have a defense system in place ASAP.

    • I always had 3 or 4 reddit accounts in use at once: one for commenting, one for porn, one for discussing drugs, and one for pics that could be linked back to me (of my car, for example). I also made a new commenting account about once a year, so that if someone recognized me they wouldn’t be able to find every comment I’ve ever written.

      On lemmy I have just two for now (the other is for porn), but I’ll probably make one or two more at some point.

    • If you and several other accounts all upvote each other from the same IP address, you’ll get a warning from reddit. If my wife ever found any of my comments in the wild, she would upvote them. The third time she did it, we both got a warning about manipulating votes. They threatened to ban both of our accounts if we did it again.

      But here, no one is going to check that.

    • I think the best solution so far is to require a captcha for every upvote, but that’d lead to poor user experience. I guess it’s a cost-benefit question: does user experience degrade more through fake upvotes or through requiring a captcha?

          • Fun fact: old reddit used to render one of the markdown header functions as an underline. I think it was 5x # that did it. However, this was an unofficial implementation of markdown, and it was discarded with new reddit. Also, being a header function, you could only apply it to an entire line or paragraph rather than to individual words.

    • I had all 8 accounts signed in on my third-party app and could easily manipulate votes on my own posts.

      There’s no chance this works. Reddit surely does a simple IP check.

      • I would think they need to set a somewhat permissive threshold to avoid too many false positives from people sharing a network. For example, a professor may share a reddit post in a class with 600 students whose laptops are connected to the same WiFi. Or several people sharing an airport’s WiFi could be looking at /r/all and upvoting the top posts.

        I think 8 accounts liking the same post every few days wouldn’t be enough to trigger an alarm. But maybe it is, I haven’t tried this.

      • I had one main account, but also a couple for when I didn’t want to mix my “private” life up with other things. I don’t even know if that’s against the TOS?

        Anyway, I stupidly made a Valmond account on several Lemmy instances before I got the hang of it, and when (if!) my server one day works, I’ll make an account there too, so …

        I guess it might be like in the old forum days: you had a respectable account and another for when you wanted to ask a stupid question, etc. Admins could see (if they cared), but not the ordinary users.

  • In case anyone’s wondering, this is what we instance admins can see in the database. In this case it’s an obvious example, but the same data can be used to detect patterns of vote manipulation.
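
    To make that concrete, here’s a rough Python sketch of the kind of lockstep check an admin could run over those rows. The vote-record fields are hypothetical, not Lemmy’s actual schema:

    ```python
    from collections import defaultdict

    def suspicious_instances(votes, min_accounts=5, window_secs=10):
        """Flag (post, instance) pairs where many distinct accounts voted
        within a very short window -- a crude lockstep signal.

        `votes` is a list of dicts with hypothetical fields:
        {"account": str, "instance": str, "post_id": int, "ts": float}
        """
        by_key = defaultdict(list)
        for v in votes:
            by_key[(v["post_id"], v["instance"])].append(v)

        flagged = []
        for (post_id, instance), group in by_key.items():
            group.sort(key=lambda v: v["ts"])
            accounts = {v["account"] for v in group}
            if len(accounts) < min_accounts:
                continue
            span = group[-1]["ts"] - group[0]["ts"]  # seconds from first to last vote
            if span <= window_secs:
                flagged.append((post_id, instance, len(accounts), span))
        return flagged
    ```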

    • This. It’s only a matter of time until we can automatically detect vote manipulation. Furthermore, future versions might decrease the weight of votes coming from instances that look suspicious.

      • And it’s only a matter of time until that detection can be evaded. The knife cuts both ways. Automation and the availability of internet resources make this back and forth inevitable and unending. The devs, instance admins and users that coalesce to make Lemmy what it is have to be dedicated to that. Everyone else will just kind of fade away as edge cases or slow death.

  • So far, the majority of content approaching spam that I’ve come across on Lemmy has been posts on !fediverse@lemmy.ml which highlight an issue attributed to the fediverse, but one that ultimately has a parallel on centralised platforms.

    Obviously there are challenges to running any user-content hosting website, and since Lemmy is a community-driven project, it behooves the community to be aware of these challenges and actively resolve them.

    But a lot of posts, intentionally or not, verge on the implication that the fediverse uniquely has the problem, which just feeds into the astroturfing of large, centralized media.

  • Honestly, thank you for demonstrating a clear limitation of how things currently work. Lemmy (and Kbin) probably should look into internal rate limiting on posts to avoid this.

    I’m a bit naive on the subject, but perhaps there’s a way to detect “over X votes from over X users from this instance” and basically invalidate them?
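
    For what internal rate limiting could look like, here’s a minimal token-bucket sketch in Python. All names and limits are invented for illustration, not anything Lemmy actually implements:

    ```python
    import time

    class VoteRateLimiter:
        """Token bucket per account: at most `rate` votes per second on
        average, with bursts up to `burst`. A per-instance variant would
        work the same way, keyed on the instance domain instead."""

        def __init__(self, rate=0.2, burst=5):
            self.rate, self.burst = rate, burst
            self.buckets = {}  # account -> (tokens, last_update)

        def allow(self, account, now=None):
            now = time.monotonic() if now is None else now
            tokens, last = self.buckets.get(account, (self.burst, now))
            tokens = min(self.burst, tokens + (now - last) * self.rate)
            if tokens < 1:  # bucket empty: drop or defer the vote
                self.buckets[account] = (tokens, now)
                return False
            self.buckets[account] = (tokens - 1, now)
            return True
    ```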

    • How do you differentiate between a small instance, where 10 votes would already be suspicious, and a large instance such as lemmy.world, where 10 would be normal?

      I don’t think instances publish how many users they have and it’s not reliable anyway, since you can easily fudge those numbers.

      • 10 votes within a minute of each other is probably normal. 10 votes all at once, or within microseconds of each other, is statistically far less likely to happen.

        I won’t pretend to be an expert on the subject, but it seems mathematically possible to set some kind of threshold. If a set percentage of users from one instance all interact within microseconds of each other on a single post, that ought to trigger a flag.

        Not all instances advertise their user counts accurately, but the counts are nevertheless exposed through a NodeInfo endpoint.
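
        As a sketch, a detector could pull that self-reported count and scale its threshold accordingly. This assumes the instance serves standard NodeInfo, and the 5% cutoff is made up:

        ```python
        import requests

        def instance_user_count(host, timeout=5):
            """Fetch an instance's self-reported user total via NodeInfo.
            Self-reported means it can be fudged, so treat it as a hint."""
            well_known = requests.get(
                f"https://{host}/.well-known/nodeinfo", timeout=timeout
            ).json()
            href = well_known["links"][-1]["href"]  # usually the newest schema
            info = requests.get(href, timeout=timeout).json()
            return info["usage"]["users"]["total"]

        def votes_look_disproportionate(vote_count, host, max_fraction=0.05):
            """Flag a post that drew votes from more than `max_fraction`
            of an instance's entire claimed user base."""
            users = instance_user_count(host)
            return users > 0 and vote_count / users > max_fraction
        ```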

      • I disagree. I just got massively bandwagon-downvoted into oblivion in this thread and noticed that as soon as a single downvote hits, it’s like blood in the water: the piranhas instantly pile on, even if it’s nonsensical. Downvotes act as a guide for people who don’t really think about the message contents and need instructions on how to vote. I’d love it if comments had their votes hidden for an hour after posting.

        • I was referring to bots: whether they downvote one post/comment to -1000, or upvote the rest to +1000, the effect is the same… for anyone sorting by votes.

          As for people, I agree that downvotes aren’t really constructive; that’s why beehaw.org doesn’t allow them.

          But in general, I’m afraid Lemmy will have to end up offering vote-related features similar to Reddit’s: hide vote counts for some period of time, “randomize” them a bit to drive bots crazy, that kind of stuff.
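
          A tiny sketch of what hide-then-fuzz could look like (the one-hour window and 5% jitter are invented numbers, not anything Reddit or Lemmy actually uses):

          ```python
          import random

          def displayed_score(true_score, post_age_secs,
                              hide_secs=3600, jitter=0.05):
              """Hide the score for the first hour, then show a lightly
              fuzzed value so bots can't confirm exact vote deltas."""
              if post_age_secs < hide_secs:
                  return None  # render as a placeholder in the UI
              noise = int(abs(true_score) * jitter) + 1
              return true_score + random.randint(-noise, noise)
          ```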

          • For bots, the simple effect on the algorithm is similar either way, I agree. My problem is with when the downvotes are visible. If bots downvote you to -1000, that gives admins more info to control bot brigading, but it also helps users act like chimps and just mash the downvote button. Imagine a bot group that downvotes someone you don’t like by a number that’s plausible for the context. You’d be doomed. If that same bot group only gave upvotes to the bot-user, the thread would play out totally differently.

  • IMO, likes need to be handled with extreme prejudice by the Lemmy software. A lot of thought needs to go into this. There are so many cases where the software could reject a likely-fake like with near-zero risk of rejecting valid likes. Putting this policing on instance admins is a recipe for failure.

  • I wonder if it’s possible …and not overly undesirable… to have your instance essentially put an import tax on other instances’ votes. On the one hand, it’s a dangerous direction for a free and equal internet; but on the other, it’s a way of allowing access to dubious communities/instances, without giving them the power to overwhelm your users’ feeds. Essentially, the user gets the content of the fediverse, primarily curated by the community of their own instance.
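
    A rough sketch of what such an “import tax” could look like, with hypothetical per-instance trust factors (nothing like this exists in Lemmy today):

    ```python
    def weighted_score(votes, trust, default_trust=0.5):
        """Score a post with an 'import tax': votes count at the trust
        factor assigned to their home instance; unknown instances get
        a discounted default."""
        score = 0.0
        for v in votes:  # v: {"instance": str, "is_upvote": bool}
            weight = trust.get(v["instance"], default_trust)
            score += weight if v["is_upvote"] else -weight
        return score

    # Local votes at full weight, a dubious instance heavily discounted.
    trust = {"my.instance": 1.0, "dubious.example": 0.1}
    ```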

          • I guess. One’s really effective: it tells everyone around you that the person is a nazi in case they were cloaking it, pushes back on their bullshit, and makes everyone aware that it’s not okay to say shit like that and that it is okay to fight them.

            The other is a downvote, and it just changes where the nazi content ends up in the ranking.

            • Nazis will always act in bad faith, and it shouldn’t surprise us when they use their 10 alts to fuck up voting, which is another reason to hide votes and focus on commenting rather than voting. I don’t agree with the negative style of confrontation, but the positive and neutral kinds are great. Narrate each bad-faith action they take in real time, so the audience understands how stupid nazis are and becomes resistant to bad-faith tactics.

            • You can do both!

              On Reddit, enough downvotes collapse the content, so people who might really not feel up to seeing any Nazi content that day, even with tons of pushback against it, don’t have to see it: you have to open the comment thread to read it. (Of course, the collapsed comment might just be an unpopular but unbigoted, harmless opinion, like “I think Mario games are poorly made and unfun” getting downvoted to hell. But that’s a risk you know you’re taking when you open a collapsed post with a score of -17: unpopular opinion, spammer, or hate speech?) I don’t know if anything on the Fediverse has this functionality yet. Until then, downvotes can still make Nazi content less easy to see by ranking it lower.

              And despite the downvotes, lots of people still responded to the Nazi anyways, in a way that let me know that this was one troll and the community was very much not accepting of bigotry. That was also useful. Both things have a place.

  • Two solutions that I see:

    1. Mods and/or admins need to be notified when a post has a lot of upvotes from accounts on the same instance (see the sketch after this list).
    2. Generalize whitelists and requests to federate from new instances.
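
    A sketch of what the notification check in the first idea might look like (field names and thresholds are invented):

    ```python
    from collections import Counter

    def concentration_alert(votes, min_votes=25, max_share=0.6):
        """Return the instance mods should be notified about, if any:
        fires when one instance supplies more than `max_share` of a
        post's upvotes once there are at least `min_votes` of them."""
        ups = [v for v in votes if v["is_upvote"]]
        if len(ups) < min_votes:
            return None
        instance, count = Counter(v["instance"] for v in ups).most_common(1)[0]
        if count / len(ups) > max_share:
            return instance  # e.g. file a report in the modlog
        return None
    ```
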
            • I didn’t mean YOU are being a dick. If SOMEONE creates “alt” accounts for the sole purpose of vote manipulation, they’re being a dick. I was using the royal “you,” a weird English-language thing. You, yourself, are not a dick. Well, you might be, but I don’t think so.

              • Sorry, I misunderstood. I definitely agree accounts created for the sole purpose of upvoting stuff/bot farms are bad. I just don’t know if there’s an effective way to fight it as they’re getting pretty elaborate these days and it’s hard to distinguish them from real accounts.

                Pretty soon we’ll be at the point where no one will trust anything on the Internet.