The karma system was not even effective against spammers, while it did block out genuine new accounts and people with unpopular opinions. Bots would just repost popular posts and comments to farm karma and bypass the restrictions.
It would have been if it had been used right.
Anyone with half a brain could spot a karma farming account. When I first became a mod of a very large sub there, I looked at the recent ban log and spotted some accounts that had been banned a couple of months back for karma farming. They were being used for things like promoting drug sites, crypto, etc.
The algorithm used karma and history to help filter/restrict accounts. The problem was that not every mod team was committed to acting on it. Most of the meme subs had mods who just didn’t give a shit, and when you sent them a modmail saying ‘Hey, these are clearly bot accounts reposting popular posts word for word, including the links (which is a really good indication it’s a script)’, they either wouldn’t do anything or would respond very aggressively that they wouldn’t do anything. As a result, their subs became ground zero for botters and scammers who wanted to build history on an account.
Reddit had an automated process, as far as I could tell, for banning/restricting these kinds of accounts. If enough large subs banned one in a short period of time, the account would seem to get suspended almost immediately. So if they happened to post in the wrong group of subs when I spotted them, I’d modmail all those subs, they’d all ban the account, and it would disappear; but if they hit the right subs, where the mods didn’t care, then nothing would happen, and it would take a lot more work to get them dealt with.
Karma is a good way to track participation, but if people ignore/abuse the system it falls apart.
The fact that Reddit didn’t have a process in place to track accounts posting word-for-word (including link) reposts and immediately ban them was really weak. Also, the fact that the algorithm checked karma and history, yet the admins gave their blessing to subs like freekarma4u because some subs had karma requirements, was a bit of a joke.
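That kind of word-for-word check is simple to automate. A minimal sketch of the idea with PRAW, where the subreddit name, credentials, and report reason are all placeholders; this only illustrates the exact title-plus-link pattern described above, not anything Reddit actually ran:

```python
import praw

# Placeholder credentials; reporting requires an authenticated account.
reddit = praw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    username="BOT_USERNAME",
    password="BOT_PASSWORD",
    user_agent="exact-repost-check by u/example",
)

subreddit = reddit.subreddit("example")

def fingerprint(submission):
    # Word-for-word reposts reuse the title verbatim and, for link/image
    # posts, the exact same URL, so (title, url) is enough to catch them.
    return (submission.title.strip().lower(), submission.url)

# Seed the index with recent top posts, the usual source material for karma farms.
seen = {fingerprint(s): s.id for s in subreddit.top(time_filter="year", limit=1000)}

for submission in subreddit.stream.submissions(skip_existing=True):
    key = fingerprint(submission)
    if key in seen and seen[key] != submission.id:
        # Flag for human review rather than acting automatically.
        submission.report(f"Exact repost of {seen[key]} (same title and link)")
    else:
        seen[key] = submission.id
```

This only catches the literal copy-paste case, but that is exactly the behaviour being described here.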
At the size of Reddit, it’s impossible as a mod to keep track of every account’s individual activities. A mod of a meme sub with millions of subscribers isn’t going to vet every user in the sub. To recognize a karma farming bot, you would have to know that their content is reposted. But if you’re viewing the bot’s post for the first time, how would you know? You would just assume that the post is original and upvote it. The karma bots also crop or filter their reposted images to make sure “repost identifier” bots don’t catch them.
You don’t need to vet every user in the sub. It’s trivial to write a bot that detects whether or not things are reposts. There was literally a repostsleuth bot that did just that. All they would have had to do was pay attention to it.
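For image posts that get cropped or filtered, exact matching isn’t enough, which is where perceptual hashing comes in. A rough sketch of that general technique using the Python imagehash library; this is not repostsleuth’s actual code, the file names are placeholders, and the distance threshold is an arbitrary illustration:

```python
from PIL import Image
import imagehash

# Perceptual hashes map visually similar images to nearby hash values,
# so a re-encoded, lightly cropped, or filtered repost still lands close
# to the original.
def phash(path):
    return imagehash.phash(Image.open(path))

original = phash("original_meme.jpg")
repost = phash("cropped_filtered_repost.jpg")

# Subtracting two ImageHash objects gives the Hamming distance between
# the 64-bit hashes; identical images score 0.
distance = original - repost

# Threshold chosen purely for illustration; a real bot would tune it
# against its own false-positive tolerance.
if distance <= 8:
    print(f"Likely repost (hash distance {distance})")
else:
    print(f"Probably different images (hash distance {distance})")
```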
I modded a sub of over 25 million subscribers, and there wasn’t an unfiltered post that I didn’t go through. If you aren’t checked out as a mod, it’s pretty easy to spot the reposts after you’ve modded the sub for a while. It’s also fairly easy to spot a bot that needs investigating if you actually click on their profile.
The bots carried on for years doing little more than copying a previous post word for word; even if the image was hosted on Reddit, they’d just repost the same link. They were trivially easy to catch, and the mods of those subs couldn’t even put that little bit of effort in. Right up until the end of pushshift, bots were reposting top posts from subs.
Trying to dismiss their inability to act because bots have gotten more sophisticated doesn’t excuse them, because they didn’t do anything back when the bots were simple.