I already made some people mad by suggesting that I would make my computer run an Ollama model. I suggested that they make a counter-AI bot to find these accounts that don’t disclose they’re bots. What’s Lemmy’s opinion of AI coming into the fediverse?

  • In general, if it isn’t open source in every sense of the term, GPL license, all weights and parts of the model, and all the training data and training methods, it’s a non-starter for me.

    I’m not even interested in talking about AI integration unless it passes those initial requirements.

    Scraping millions of people’s data and content without their knowledge or consent is morally dubious already.

    Taking that data and using it to train proprietary models with secret methodologies, locking it behind a pay wall, then forcing it back onto consumers regardless of what they want in order to artificially boost their stock price and make a handful of people disgustingly wealthy is downright demonic.

    Especially because it does almost nothing to enrich our lives. In its current form, it is an anti-human technology.

    Now all that being said, if you want to run it totally on your own hardware, to play with and to help you with your own tasks, that’s your choice. Using it in a way that you have total sovereignty over is good.

    • I wondered about the comments you post; according to AI, they’re actually copyright protected. But it’s funny that no one reads the TOS, which basically gives copyright of comments to Meta and Reddit (maybe), so legally the comments can be scraped without the authors’ consent. So there’s plenty of legal and pretty much (technically) ethical source content for LLMs, if you’re okay with capitalism and corporations.

      I look at AI as a tool, and the rich definitely look at it as a tool too, so I’m not going to shy away from it. I found a way to use AI to determine whether a post is about a live stream or not and use that to boost the post on Mastodon. And I built half a dozen scripts with Perplexity and ChatGPT, one of which is a government watchdog to see if there are any ethical or legal violations https://github.com/solidheron/AI-Watchdog-city-council

      I’m not advocating that you should be pro- or anti-AI, but if you’re anti-AI then you should be taking anti-AI measures.

  • For starters, do you have reason to believe a large number of Lemmy users are legitimately bots, or is this just a thing where you saw someone with a different opinion? Lemmy overall is aligned in being generally anti-AI.

  • In the fediverse? Same as outside. It’s a solution looking for a problem. We generate our own content here; everyone is here to get away from the automated bots everywhere else. Look at lemmit.online, an instance dedicated to mirroring Reddit subs for us here, but it’s a ghost town because we all pretty quickly realized it was boring interacting with bots.

    A bot has to have a good purpose here, like an auto-archive bot so people click a better link, or bots like wikibot. I’m not saying AI is useless here, but I haven’t seen a good actual use case for it here yet.

  • While I am an AI enthusiast, generative AI has two issues that make it very hard to accept here.

    One is definitely the fact that we all know they have been trained using our data without our informed consent, not to mention it’s a typical case where copyright only applies to big companies; it doesn’t really protect individuals.

    The second one is simply that we are in a social network. Social. We use it to communicate with people, not to play games or take part in experiments. It’s like using replies to a question for statistical purposes: you have to tell people they are taking part in it.

    Here we want to discuss daily life, politics and hobbies with other people, forming opinions based on what other people think, and spending time and energy to explain our positions to other people. If the other end is a machine, how is this different from an NPC from an RPG game?

    So, I guess the only way to go for it is to have separate communities that specifically allow AI bots, making sure people know about it so they take part only if they are willing. Of course we can expect some instances to cut ties with AI-filled ones; it’s up to them to decide.

  • If it added value then I wouldn’t be opposed. But I don’t see what value AI could possibly add to a social network. Some specific fields, like researchers combing through large data sets, have benefitted from AI. Every other place it’s been shoehorned into has suffered for it.

    If you see a problem and realize AI could address it, then that’s fantastic. If you’re coming at it from the other direction and looking for problems then you’re going to waste everyone’s time.

    • AI actually makes it so computers can process language. I had two use cases: one is tracking police based on where they are, and the other is detecting whether a post is a live stream post. It’s hard to code abstract concepts like that, so you get the LLM to make the determination. Beats going through all the data yourself and figuring out edge cases.
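
      The livestream check described above can be sketched as a single yes/no classification call to a locally running model. This is a minimal sketch assuming a default Ollama install on localhost; the endpoint is Ollama’s standard `/api/generate`, but the model name, prompt wording, and function names are my assumptions, not the commenter’s actual script:

      ```python
      import json
      import urllib.request

      # Default local Ollama endpoint (assumption: a stock Ollama install).
      OLLAMA_URL = "http://localhost:11434/api/generate"

      def build_prompt(post_text: str) -> str:
          # Constrain the model to a one-word answer so parsing stays trivial.
          return (
              "Answer with exactly one word, YES or NO.\n"
              "Is the following post announcing a live stream?\n\n"
              + post_text
          )

      def parse_verdict(reply: str) -> bool:
          # Tolerate minor variations ("Yes.", " yes") and default to False.
          return reply.strip().lower().startswith("yes")

      def is_livestream_post(post_text: str, model: str = "llama3") -> bool:
          # Non-streaming generate call; Ollama returns the full reply
          # in the "response" field of the JSON body.
          payload = json.dumps({
              "model": model,
              "prompt": build_prompt(post_text),
              "stream": False,
          }).encode()
          req = urllib.request.Request(
              OLLAMA_URL, data=payload,
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              return parse_verdict(json.load(resp)["response"])
      ```

      The fuzzy judgment lives entirely in the model; the surrounding code only has to build a prompt and parse one word, which is what makes this kind of classifier so much less work than hand-writing edge cases.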