most of the time you’ll be talking to a bot there without even realizing. they’re gonna feed you products and ads interwoven into conversations, and the AI can be controlled so its output reflects corporate interests. advertisers are gonna be able to buy access and run campaigns. based on their input, the AI can generate thousands of comments and posts, all to support your corporate agenda.
for example you can set it to hate a public figure and force negative commentary into conversations all over the site. you can set it to praise and recommend your latest product. like when a pharma company has a new pill out, they’ll be able to target self-help subs and flood them with fake anecdotes and user testimony that the new pill solves all your problems and you should check it out.
the only real humans you’ll find there are the shills that run the place, and the poor suckers that fall for the scam.
it’s gonna be a shithole.
- EnglishMobster ( @EnglishMobster@kbin.social ) 106•1 year ago
This is already happening.
Bots are being used to astroturf the protests on Reddit. You can see at the bottom how this so-called “user” responds “as an AI language program…”
- rynzcycle ( @rynzcycle@kbin.social ) 43•1 year ago
Oh wow, that’s simultaneously hilarious, awesome, and terrifying.
- Bonehead ( @Bonehead@kbin.social ) 57•1 year ago
…and fake. The “AI” user admits further down that they are just trolling.
- TheFork ( @TheFork@kbin.social ) 16•1 year ago
It’s funny tho.
- Empyreal ( @Empyreal@kbin.social ) 1•1 year ago
Or it's another form of a human-monitored bot account. Those have existed for years.
Or it's just another bot response. I've had arguments with bots that I had banned from my subreddit before. Some of their response mechanisms are quite creative.
- genoxidedev1 ( @genoxidedev1@kbin.social ) 39•1 year ago
I never fully trust users with automated usernames and this just proves my paranoia.
Then again someone who calls subreddits “subReddit” is automagically a bot in my eyes anyways.
- JunkMilesDavis ( @JunkMilesDavis@kbin.social ) 22•1 year ago
Glad it wasn’t just me. It wasn’t often I paid attention to usernames on the big subs, but it seemed like at some point they were absolutely flooded with “Adjective_Noun_1234” users, and I couldn’t stop seeing it once I noticed. Those and the comment-reposting bots (which probably won’t be called out by other bots anymore without a usable API) made me wonder how many actual humans I was interacting with.
- Anomander ( @Anomander@kbin.social ) 18•1 year ago
There were also some very good and valid reasons why real people wound up with those usernames - mainly, that the signup process (from the app, I think? maybe also in new Reddit?) both downplayed, and obstructed changing, the default username - and instead led the user to believe that only the “display name” selected later would appear to other users on the site.
Completely omitting the fact that anyone on old reddit or accessing through an app would only see the username, as “display names” don’t seem to have ever been served via the API.
To many of those users, they had no clue that what people were seeing attached to their comments or submissions was “extravagant_mustard_924” and not “Cool Dude Brian” or whatever they’d put in as their display name. They were led to believe that the latter was all that would display, and that signing up with a default account name would only determine what they entered in the top box while logging in.
- xXemokidforeverXx ( @xXemokidforeverXx@kbin.social ) 13•1 year ago
This is me learning display names were even a thing. I didn’t stray much from the Apollo app.
- blivet ( @blivet@kbin.social ) 9•1 year ago
It’s amazing how half-assed everything about Reddit is.
- quortez ( @quortez@kbin.social ) 5•1 year ago
TIL Reddit has display names. Why on earth I would know about them is beyond me, but thanks for restricting your API ig ¯\_(ツ)_/¯
- genoxidedev1 ( @genoxidedev1@kbin.social ) 7•1 year ago
It would have helped Reddit, or at least the user experience on Reddit, majorly if they had just disabled API access for all but a select few bots (like AutoModerator, for example).
Also, on the NSFW side of Reddit, those automated-username “users” are the ones spamming their, or someone's, OF on every NSFW subreddit, even ones unrelated to the content they're posting. Or so a friend told me, of course.
- Arotrios ( @Arotrios@kbin.social ) 8•1 year ago
Holy fucking shit I’m dying. That’s fucking hilarious.
I now want to make a bot that detects bots, grades their responses as 0% - 100% bot, posts the bottage score, and, if it determines bottage, engages the other bot in endless conversation until it melts down from confusion.
We can live stream the battles. We’ll call the show Babblebots.
Any devs interested?
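Purely for the gag, the “bottage score” could start as something like this: a crude phrase-matching heuristic. Everything here (the phrase list, the 40-points-per-hit scale) is made up for illustration; real bot detection is far harder than this.

```python
# Toy "bottage score": grade a comment 0-100% bot based on a few
# crude tell-tale phrases. A joke heuristic, not real detection.

TELLS = [
    "as an ai language model",
    "i cannot",
    "it is important to note",
    "as of my knowledge cutoff",
]

def bottage_score(comment: str) -> int:
    """Return a 0-100 'bot likelihood' based on phrase hits."""
    text = comment.lower()
    hits = sum(phrase in text for phrase in TELLS)
    return min(100, hits * 40)  # 40 points per tell, capped at 100
```

A comment like “As an AI language model, I cannot do that.” trips two tells and scores 80; ordinary human snark scores 0.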
- cavemeat ( @cavemeat@beehaw.org ) 2•1 year ago
This sounds hysterical, and reddit’s corpse is a great battlefield.
- riktor ( @riktor@kbin.social ) 1•1 year ago
Yeah I’ve replied to a post here too about bots taking over.
I used ChatGPT to “reply to the post as if you were a robot”
Made it a pretty funny response and then people were asking if I was a bot.
Who knows, maybe I am.
- desudesudesu ( @desudesudesu@kbin.social ) 1•1 year ago
anyone else remember how historically youtube comments were always pure garbage? i wonder if that was just a very primitive a.i. spamming posts on popular videos?
- Jarfil ( @Jarfil@kbin.social ) 1•1 year ago
They still are. That’s just “average and below” humans commenting.
Or as a park ranger would put it once: “there is a large overlap between the smartest bears and the dumbest humans”
- livus ( @livus@kbin.social ) 58•1 year ago
It is kind of getting that way already.
- ENEMYGUNSHIP ( @ENEMYGUNSHIP@kbin.social ) 25•1 year ago
yep. almost like a beta phase…
- pollodiabolo ( @pollodiabolo@kbin.social ) 28•1 year ago
It’s feasible. Highly profitable. Only a matter of time until someone does it. The only reason not to do it is if your morals stop you. And u/spez has no morals.
What’s happening right now is that the smart users are leaving the platform. Makes perfect sense: they aren’t needed anymore; in fact, they’d be in the way of the scam running smoothly. So you want them gone. Reddit’s actions make perfect sense, really. They act exactly like they don’t need contributors anymore. And for some reason, it doesn’t bother them? There’s a reason it doesn’t bother them, and people can’t delete their history.
- dismalnow ( @dismalnow@kbin.social ) 16•1 year ago
And it’s not really a hot take.
If I could have this thought independently, it’s probably already a common view.
(Reddit)'s dying… slowly and painfully. This decline will go on for years, into an endgame of mostly automoderated, bot-driven content.
…
Force those who remain to use a substandard app - inhibiting human interaction with the platform further.
All you’re left with is content addicts, trolls, ads, dregs from the darkest corners, and bots that feed them.
- TheRazorX ( @TheRazorX@kbin.social ) 8•1 year ago
Another stealth benefit to Reddit of all this API crap is that it’ll be much harder to tell, since most of the tools people use to analyze accounts won’t work anymore. Keep in mind Reddit started out by inflating its user numbers.
- hardypart ( @hardypart@feddit.de ) 40•1 year ago
I actually think this is the fate of the entire corporate-driven part of the internet (so basically 95% of it nowadays, lol). Non-corporate, federated platforms are the future and will remain the bastions of actual human interaction while the rest of the internet is FUBARed by large language model bots.
- mrbubblesort ( @mrbubblesort@kbin.social ) 26•1 year ago
Seriously asking: what makes you think the fediverse is immune to that? Eventually bots will get good enough to be almost indistinguishable from normal users, so how can we keep them out?
- rastilin ( @rastilin@kbin.social ) 17•1 year ago
There are a number of options, including a chain of trust where you only see comments from someone who’s been verified by someone who’s been verified by someone (and so on) who’s been verified by an actual real human you’ve met in person. We could also charge per post, which would rapidly drive up the cost of a botnet (as well as trim down the number of two-word derails).
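The chain-of-trust idea sketches out simply: treat vouches as a directed graph and walk it outward from the people you’ve verified in person, up to some maximum chain length. The names, the graph, and the depth limit below are all invented for illustration; a real system would also need revocation and some defense against people vouching for bots for money.

```python
# Sketch of a "chain of trust": you see a comment only if its author
# is reachable through a short chain of vouches starting from users
# you have personally verified as human.

from collections import deque

def trusted_users(vouches, seeds, max_depth=3):
    """Breadth-first walk of the vouch graph.

    vouches:   dict mapping a user to the set of users they vouch for
    seeds:     users you have personally verified as human
    max_depth: longest vouch chain you are willing to accept
    """
    trusted = set(seeds)
    queue = deque((user, 0) for user in seeds)
    while queue:
        user, depth = queue.popleft()
        if depth == max_depth:
            continue  # chain is already as long as allowed
        for vouched in vouches.get(user, set()):
            if vouched not in trusted:
                trusted.add(vouched)
                queue.append((vouched, depth + 1))
    return trusted

# Hypothetical vouch graph: alice met bob in person, bob vouches for
# carol, and so on. eve sits four hops out, past the depth limit.
graph = {
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": {"dave"},
    "dave": {"eve"},
}
visible = trusted_users(graph, seeds={"alice"}, max_depth=3)
```

With `max_depth=3`, `visible` contains alice, bob, carol, and dave but not eve; a comment filter would then simply hide posts from anyone outside that set.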
- BraveSirZaphod ( @BraveSirZaphod@kbin.social ) 3•1 year ago
I’m not sure how reliable chains of trust would be. There’s a pretty obvious financial incentive for someone to simply lie and vouch for a bot etc. But in general, I think some kind of network of trustworthiness or verification as a real human will eventually be necessary. I could see PGP etc being useful.
- CleoTheWizard ( @LimitedBrain@beehaw.org ) 1•1 year ago
As much as I hate Elon, having people pay a small fee to have an account (lower than $8, though) is a novel way of verifying users while also having very little impact on the user.
- archomrade [he/him] ( @archomrade@midwest.social ) 0•1 year ago
“charge per post”
That part kind of worries me, are you proposing charging users to participate in the fediverse? Seems like it would also exclude a lot of people who can’t afford to spend money on social media…
- riskable ( @riskable@kbin.social ) 2•1 year ago
Listen here, you! I paid good money for this here comment so you’re gonna read it, alright‽
<Brought to you by FUBAR, a corporation with huge pockets that can afford to sway opinion with lots of carefully placed bot comments>
- rastilin ( @rastilin@kbin.social ) 1•1 year ago
The obvious question is then “how are they helping pay for the servers they’re using?”.
It’s not that I don’t see your point, everyone should be able to take part in a community without having to spend money, but I do find it annoying that whenever the topic of money comes up, we end up debating the hypothetical of someone with 0c spare in their budget.
Charging for membership worked well for Something Awful, and they only charge something like $20 for lifetime membership anyway, plus an additional fee for extra functionality. But you don’t get the money back if you get banned. Corporations would still be able to spend their way into the conversation, but it would be harder to create massive networks that just flood the real users.
- archomrade [he/him] ( @archomrade@midwest.social ) 1•1 year ago
The nice thing about federated media is that there doesn’t need to be one instance that carries most of the traffic. The cost gets distributed among many servers and instances, and they can choose how to fund the server independently (many instance owners spend their own money to a point, then bridge the gap with donations from users).
I’m just not sure that’s the best way to cut down bots, IMHO.
- apemint ( @apemint@kbin.social ) 8•1 year ago
It’s not immune but until the fediverse reaches a critical mass, we’re safe… probably.
After that, it will be the same whac-a-mole game we’re used to, and somehow I don’t think we’ll win.
- CynAq ( @CynAq@kbin.social ) 6•1 year ago
Right now, we can already recognize lower-quality bots within a conversation. AI-generated “art” is already so distinct that almost nobody misses it.
Language is a human instinct. Our minds create it, we can use it in all sorts of ways, bend it to our will however we want.
By the time bots become good enough to be indistinguishable online, they’ll either be actually worth talking to, or they will simply be another corporate shill.
- MrsEaves ( @MrsEaves@kbin.social ) 4•1 year ago
I was wondering about this myself. If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?
I don’t love that right now, the focus is on eliminating or silencing the voice of bots, because as you point out, they’re going to be indistinguishable from human voices soon - if they aren’t already. In the education space, we’re already dealing with plagiarism platforms incorrectly claiming real student work is written by ChatGPT. Reading a viewpoint you disagree with and immediately jumping to “bot!” only serves to create echo chambers.
I think it’s better and safer long term to educate people to think critically, assume good intent, know their boundaries online (ie, don’t argue when you can’t be coherent about it and have to devolve to name calling, etc), and focus on the content and argument of the post, not who created it - unless it’s very clear from a look at their profile that they’re arguing in bad faith or astroturfing. A shitty argument won’t hold up to scrutiny, and you don’t have the risk of silencing good conversation from a human with an opposing viewpoint. Common agreement on community rules such as “no hate speech” or limiting self-promotion/review/ads to certain spaces and times is still the best and safest way to combat this, and from there it’s a matter of mods enforcing the boundaries on content, not who they think you are.
- Aesthesiaphilia ( @Aesthesiaphilia@kbin.social ) 14•1 year ago
I was wondering about this myself. If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?
Because bots don’t think. They exist solely to push an agenda on behalf of someone.
- BraveSirZaphod ( @BraveSirZaphod@kbin.social ) 6•1 year ago
If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?
If the people involved in the conversation are there because they are intending to have a conversation with people, yes, it’s automatically bad. If I want to have a conversation with a chatbot, I can happily and intentionally head over to ChatGPT etc.
Bots are not inherently bad, but I think it’s imperative that our interactions with them are transparent and consensual.
- Umbrias ( @Umbrias@beehaw.org ) 6•1 year ago
Part of the problem is that bots unfairly empower the speech of those with the resources to dominate and dictate the conversation space; even when used in good faith, that disempowers everyone else. Even the act of seeing the same ideas over and over can sway whole zeitgeists. Now imagine what bots can do by dictating the bulk of what’s even talked about at all.
- Umbrias ( @Umbrias@beehaw.org ) 1•1 year ago
AI-generated art is actually not as distinct as you think. A lot of the low-quality stuff is, but studies have shown humans already can’t tell the difference between real pictures and AI-generated ones.
- TheRazorX ( @TheRazorX@kbin.social ) 2•1 year ago
Nothing is immune, but at least on the fediverse it’s unlikely API access will be revoked on tools used to detect said bots.
- taurentipper ( @taurentipper@kbin.social ) 5•1 year ago
I agree with you 100%. If their motive is to make profit for shareholders or themselves they’re imo inevitably going to do this.
- Andreas ( @Andreas@feddit.dk ) 21•1 year ago
Reddit has been that way for a long time, after it lost the reputation of “niche forum for tech-obsessed weirdos” and became the internet’s general hub for discussion. The default subreddits are severely astroturfed by marketing and political campaigning groups, and Reddit turns a blind eye to it as long as it’s a paid partnership. There was one obvious case where bots in /r/politics accidentally targeted an AutoModerator thread instead of a candidate’s promotion thread and filled it with praise for that candidate.
- Maxcoffee ( @Maxcoffee@kbin.social ) 3•1 year ago
I see something similar in a lot of tech-related threads too.
Just check out posts and comments about Corsair and AMD in particular. There is often no room for logic, facts or debate around their products on Reddit. Rather, threads feel like you’re stuck in a marketing promo event where everyone feels the products are great and fantastic and can do no wrong. It’s eerily like you’re seeing a bunch of bots or paid shill accounts all talking to each other.
- PabloDiscobar ( @PabloDiscobar@kbin.social ) 5•1 year ago
I’ve discussed things in the AMD sub, and it’s completely filled with consumers. They have no clue about electronics or development. It could be malevolence, but it’s becoming harder and harder to discern it from ignorance.
- PabloDiscobar ( @PabloDiscobar@kbin.social ) 0•1 year ago
There was one obvious case where bots in /r/politics accidentally targeted an AutoModerator thread instead of a candidate’s promotion thread and filled it with praise for that candidate.
Any source for this? I’d like to have a look.
- Andreas ( @Andreas@feddit.dk ) 1•1 year ago
Nope, sorry. Just a memory of a Reddit thread with very out-of-context comments. Ironically, while trying to search for documentation of the thread, DuckDuckGo returned a lot of research papers about the analysis of bot content on Reddit starting from 2015, so there’s still proof that botting on Reddit goes way back.
- Madrigal ( @Madrigal@kbin.social ) 17•1 year ago
Given Huffman’s apparent lack of integrity, this is sadly plausible.
- Eggyhead ( @Eggyhead@kbin.social ) 14•1 year ago
How would this be prevented on the fediverse?
- JasSmith ( @JasSmith@kbin.social ) 12•1 year ago
We control the experience here to a greater degree. If an instance decides to lean into AI content, we can leave for another, and others can defederate (if desired). Further, bots will be far more transparent. Reddit can (and likely does) offer their preferred bots exemptions for automatic filtering; probably promoting their content using some opaque algorithm. Said bots will receive no such preferential treatment across the Fediverse.
- Mediocre_Bard ( @Mediocre_Bard@kbin.social ) 12•1 year ago
God damn, that is bleak. It’s probably not wrong, but it is bleak.
it’s bleak. can I say… what they want is for you to be half-asleep, hooked on drugs, forever hating each other. they want this. it’s the ideal state for anyone that wields power in this world.
- popekingjoe ( @popekingjoe@kbin.social ) 12•1 year ago
Yeah this makes a lot of sense. I’m glad to be rid of Reddit tbh.
I will miss the porn though.
- popekingjoe ( @popekingjoe@kbin.social ) 2•1 year ago
Well this is completely unwelcome, unsolicited, and unnecessary. 🤨
- rastilin ( @rastilin@kbin.social ) 2•1 year ago
I don’t dare click the link, what is it?
- popekingjoe ( @popekingjoe@kbin.social ) 1•1 year ago
It’s a link to a GitHub repo of an ebook that’s supposed to help with porn addiction.
- rastilin ( @rastilin@kbin.social ) 1•1 year ago
Not even a self published Kindle ebook? Yikes. I bet the secret turns out to be “becoming more religious”.
- popekingjoe ( @popekingjoe@kbin.social ) 2•1 year ago
I didn’t read too much into it but I’d take that bet for sure.
- pollodiabolo ( @pollodiabolo@kbin.social ) 10•1 year ago
I should go watch the Truman Show again.
- Hypx ( @Hypx@kbin.social ) 9•1 year ago
Ever heard of the Dead Internet Theory? It’s the idea that bots have taken over the Internet and there are few real humans left. For the whole of the Internet, this is a conspiracy theory. But for any individual platform, it is a totally plausible outcome. Reddit could become one of those bot networks that just pretends to be a social media platform. Twitter is on track for that too.
- 👍Maximum Derek👍 ( @Bishma@social.fossware.space ) 1•1 year ago
My thought too. The Dead Internet Theory was more of a prediction.
- style99 ( @style99@kbin.social ) 6•1 year ago
The larger subs are already starting to become a war between different groups of spammers. The smaller subs can get by for now, but when the war in the larger subs gets to the extent that spammers start needing to branch out, they’ll likely invade the smaller subs, as well.
- NetHandle ( @NetHandle@kbin.social ) 6•1 year ago
Didn’t they try that already and they ended up pulling the plug because the AI became a nazi?
- DarkenLM ( @DarkenLM@kbin.social ) 4•1 year ago
Like all others before it. Tay met the same fate, and the only reason ChatGPT hasn’t is that its filters have a bit more quality than the rest.
- Dr Cog ( @Dr_Cog@mander.xyz ) 5•1 year ago
The main difference is that gpt does not learn from user interactions
- esc27 ( @esc27@kbin.social ) 5•1 year ago
We need better solutions for proving identity online. Email, CAPTCHA, etc. are insufficient. I imagine a system similar to the certificate authority system, where you prove your identity to one of many trusted identity providers, and then that provider vouches for you when you sign up for other services (while also protecting your anonymity).
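The provider-vouches-for-you flow might look like the sketch below. Everything here is hypothetical: a real system would use public-key signatures (as certificate authorities do) so services could verify attestations without sharing a secret; this stand-in uses an HMAC known to both provider and service purely to keep the sketch self-contained. The anonymity part comes from the provider signing only a throwaway pseudonym, never the verified identity.

```python
# Toy sketch of an identity-provider attestation: the provider verifies
# a real human once, then signs a pseudonym, so a service learns
# "verified human" without learning who the human is.

import hashlib
import hmac
import secrets

PROVIDER_KEY = secrets.token_bytes(32)  # held by the identity provider

def issue_attestation(pseudonym: str) -> str:
    """Provider side: sign a pseudonym after verifying a real person."""
    tag = hmac.new(PROVIDER_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()
    return f"{pseudonym}:{tag}"

def verify_attestation(token: str) -> bool:
    """Service side: check that the pseudonym was vouched for."""
    pseudonym, _, tag = token.partition(":")
    expected = hmac.new(PROVIDER_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

token = issue_attestation("anon-7f3a")  # pseudonym is made up
```

A service would accept the signup because `verify_attestation(token)` holds, while a forged or tampered token fails the check; the open problem the thread mentions (keeping this from becoming surveillance infrastructure) is exactly the part this sketch doesn’t solve.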
- The Cuuuuube ( @Cube6392@beehaw.org ) 5•1 year ago
In a seedy back alley bar, an identity broker checks his bank accounts as a man enters the front door. In his pocket, the man entering the bar carries a uSD card. He sits down across from the broker and sets the card on the vinyl table-top.
“PGP or minisign,” asks the broker, without looking up from his data pad.
“PGP,” responds the man, looking over his shoulder, back at the door, nervously.
The broker looks up, assesses the man, and says, “These older protocols cost extra, you know, you don’t look like you have the credits.”
“Look, I just need to prove I’m human by the end of tonight, or else The Outlaws are going to put a tire iron between my eyes for not being able to get them the goods they’ve asked for.”
“The problem,” the broker said, before taking a long pull from his tobacco nebulizer, “is that the AI bots are getting harder and harder to tell from the humans in this city. Technology has come a long way since Greenville became a coastal town.”
The man looks back at the broker, realization dawning on him about what’s about to happen. The gun which usually lived its days taped under the booth was now pointed at the man. “Typically, I wouldn’t do this, but I don’t like The Outlaws. I’m not going to lose business over that, though. But I work for The Bastards mostly. I know you don’t work for them directly. You got mixed up in all this, didn’t you? Nevertheless. In this one case, the cruelty is the point.”
Most of the inhabitants of the bar jumped as the pistol cracked, but made a point not to look over at the booth in the corner.
“Hmm… Yes… Blood. I should have your identity confirmed within the hour. I would wish you luck on your purchase, but frankly I wouldn’t mind if you failed,” says the broker, sliding the uSD card into a slot just to the side of his right eye.
- fiah ( @fiah@discuss.tchncs.de ) 1•1 year ago
the protecting your anonymity part would be very hard though, such a system has a high risk of eventually enabling a dystopian future where your every online move is being monitored by big brother
I was thinking that a mandatory donation to a charity could work. Like a simple $5 donation per account to any of a (carefully curated) list of charities. It would dramatically throttle new account creation / app adoption, of course, which is bad, but if a potential user wants it badly enough, they’d be OK with donating $5 to their favorite charity. It would reduce the number of bots / trolls / Sybils, and it could work in a decentralized manner (imagine a Lemmy instance doing this).
- CarolineJohnson ( @CarolineJohnson@kbin.social ) 4•1 year ago
Reddit stole this idea from /r/subredditsimulator and /r/subredditsimulatorgpt2…
- AnonymousLlama ( @AnonymousLlama@kbin.social ) 3•1 year ago
Never underestimate the power of negative energy. Plenty of people flock to dump on things they don’t like; it’s a great way to drive engagement (albeit shitty engagement).