• One of two things will be true. Either:

      1. AIs can successfully train on AI-generated content OR
      2. AIs will need human-generated content to improve

      If it’s 2, then we’ll have to develop AI that detects AI-generated content. But if you have a machine that can reliably detect whether content falls in the category that helps AI improve, then you already have an algorithm for generating such content: use the detector as a filter and keep only the output it accepts (a sketch of this follows below).

      So either 1 is true, or AI will plateau, or it will be trained only on networks where confirmed humans are the only ones participating.
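
      To make that step concrete, here is a minimal, purely illustrative Python sketch of the argument: a hypothetical detector_score function (standing in for whatever “does this content help training?” classifier gets built) is reused as an acceptance filter for a generator. Both functions are made-up stand-ins, not real APIs.

      ```python
      # Toy illustration of the argument above: any detector that can tell
      # "content that helps training" from "content that doesn't" can be
      # reused as a filter (or reward signal) for a generator.
      # `generate_candidate` and `detector_score` are hypothetical stand-ins.

      import random
      from typing import Callable, List

      def rejection_sample(
          generate_candidate: Callable[[], str],
          detector_score: Callable[[str], float],
          threshold: float = 0.9,
          n_wanted: int = 100,
          max_tries: int = 10_000,
      ) -> List[str]:
          """Keep only generated items the detector rates as training-worthy."""
          kept: List[str] = []
          for _ in range(max_tries):
              if len(kept) >= n_wanted:
                  break
              candidate = generate_candidate()
              # The detector built to *exclude* AI content now acts as the
              # acceptance test that *selects* usable AI content.
              if detector_score(candidate) >= threshold:
                  kept.append(candidate)
          return kept

      if __name__ == "__main__":
          # Made-up stand-ins so the sketch runs end to end.
          corpus = ["a human-sounding sentence.", "obvious AI filler text."]
          print(rejection_sample(
              generate_candidate=lambda: random.choice(corpus),
              detector_score=lambda text: 0.95 if "human" in text else 0.1,
              n_wanted=3,
          ))
      ```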

  • 1. Support Human-Created Content: Supporting human-generated content involves more than just consuming it. You can proactively participate in crowdfunding campaigns, subscribe to creators’ newsletters, or even become a patron on platforms like Patreon. This not only provides direct financial support but also signals to other consumers and platforms that human-generated content is valuable. If you’re an influencer or have a substantial online following, your endorsement of human-generated content can help create a broader cultural shift.

    2. Digital Literacy Education: Start by learning about digital literacy yourself and then share this knowledge with others. This could mean setting up workshops in your local community, offering online webinars, or mentoring a younger person. Use these opportunities to highlight the difference between human and bot-generated content, teach the basics of how algorithms shape online experiences, and foster a critical approach to online information consumption.

    3. Regulate AI and Algorithms: You can get involved in the legislative process at various levels. This could mean everything from writing letters to your local and national representatives, to participating in public protests or movements. You could also consider volunteering for organizations that work on these issues or even pursuing a career in tech policy.

    4. Transparency: Advocate for laws that would require tech companies to disclose their use of bots and AI. Write op-eds, start social media campaigns, or coordinate with organizations that are working towards this. Additionally, as a consumer, you can also ask direct questions to companies about their use of AI and their transparency practices.

    5. Promote Ethical AI Practices: Do research into which companies adhere to ethical AI practices, and consider giving them your business. You can also use your online platform, if you have one, to highlight these companies and their practices. Your recommendations can influence others to do the same.

    6. Use and Develop Tools: If you have coding skills, you can contribute to open-source projects that aim to develop tools for identifying bot-generated content. You can also participate in hackathons or online coding competitions focused on this problem. If you’re not a developer, consider supporting these initiatives financially or advocating for their wider use in your own network. (A toy example of the kind of signal such detection tools look at follows this list.)

    While these actions can help mitigate the “Dead Internet” scenario, it’s important to keep in mind that the internet is a vast and complex ecosystem. It’s influenced by many factors, from the technology that underpins it, to the actions of users and tech companies, to legal and cultural norms. It will require a collective effort to shape its future.
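
    As a toy illustration of point 6, the sketch below shows one kind of surface signal simple open-source detectors can look at: unusually uniform sentence lengths and low lexical variety. The thresholds are invented for the example; real tools rely on trained classifiers or language-model perplexity rather than anything this crude.

    ```python
    # Toy heuristic: machine-generated filler often has very uniform
    # sentence lengths and low lexical variety. Thresholds below are
    # illustrative only, not calibrated against real data.

    import re
    import statistics

    def style_features(text: str) -> dict:
        sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        words = text.lower().split()
        return {
            "avg_sentence_len": statistics.mean(lengths) if lengths else 0.0,
            "sentence_len_stdev": statistics.pstdev(lengths) if len(lengths) > 1 else 0.0,
            "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
        }

    def looks_machine_generated(text: str) -> bool:
        f = style_features(text)
        # Flag text that is both very uniform and lexically repetitive.
        return f["sentence_len_stdev"] < 2.0 and f["type_token_ratio"] < 0.6

    if __name__ == "__main__":
        sample = ("The product is great. The product is useful. "
                  "The product is great for everyone. The product works well.")
        print(style_features(sample))
        print("flagged:", looks_machine_generated(sample))
    ```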

  • I have to say that I feel the most-consumed content on the Internet is currently still mostly human-written; and my evidence is precisely that the trend is only now clearly changing. I have stumbled upon a few AI-generated articles in the past few months, without looking for them specifically. You can tell because they sometimes focus on weird details, or even include something like

    as an AI, I do not have an opinion on the subject […]

    which is so funny when you see it.

    So, yeah, it is definitely starting to happen, and in the next few years I wouldn’t be surprised if 30 to 50% of articles are just AI blurbs built for clicks.

    How to avoid this? We can’t. The only way would be to shut down the Internet, forbid computers, and go back to a simpler life. And that, for many reasons, will not happen unless some world-class destruction event occurs.

    •  lemmyvore   ( @lemmyvore@feddit.nl ) 

      We actually can prevent it. We will go back to human-curated websites, and the links to those websites will also be maintained by humans.

      This is how the early web used to work in the ’90s and early ’00s. We will see a resurgence of things like portals, directories (like the Mozilla Directory project, DMOZ), webrings, and, last but not least, actual journalism.

      Unless Google manages to find a way to tell AI content from human content, they will become irrelevant overnight, because Search is 90% of their revenue. This will kill other search engines too, but it will also remove Google’s stranglehold on browsers.

      This also means we’ll finally get to use interesting technologies that Google currently suppresses by refusing to implement them, like micropayments. Micropayments are an alternative to ads that was proposed a long time ago but never got browser support.

      Micropayments are a way to pay very small sums (a cent or a fraction of a cent) when you visit a webpage, and to make it as painless as possible for both the visitor and the website. It adds up to the same earnings for websites, but it introduces human oversight (you decide whether the page you want to visit is worth that fraction of a cent) and, most importantly, gets rid of the ad plague.
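
      As a rough sketch of that per-page flow (with a hypothetical Wallet class and made-up prices, not a real payment API; proposals like the Web Monetization draft put this logic in the browser and at a payment provider):

      ```python
      # Rough sketch of per-page micropayments: the visitor's wallet pays a
      # fraction of a cent per page view instead of the page loading ads.
      # `Wallet`, `visit`, and the price are hypothetical, for illustration only.

      from dataclasses import dataclass, field

      PAGE_PRICE = 0.002  # dollars per page view, i.e. a fifth of a cent

      @dataclass
      class Wallet:
          balance: float
          spent_log: list = field(default_factory=list)

          def approve_and_pay(self, url: str, price: float) -> bool:
              """Human oversight step: the visitor decides the page is worth it."""
              if price > self.balance:
                  return False
              self.balance -= price
              self.spent_log.append((url, price))
              return True

      def visit(wallet: Wallet, url: str) -> str:
          # The site earns a micro-transfer per view instead of ad revenue.
          if wallet.approve_and_pay(url, PAGE_PRICE):
              return f"200 OK: serving {url} (paid ${PAGE_PRICE:.3f})"
          return "402 Payment Required"

      if __name__ == "__main__":
          w = Wallet(balance=0.05)  # five cents covers ~25 page views
          print(visit(w, "https://example.com/article"))
          print(f"remaining balance: ${w.balance:.3f}")
      ```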

        • a lot of this article is bullshit. there’s lots of “will be’s” and “imagines” and other speculative nonsense. can we make AI better than it is? yes. is there any hard barrier to AI getting better than humans? no. but we aren’t at some point in the future, and the challenges we must overcome to make AI “better” are not well understood. we aren’t approaching some exponential-growth singularity bullshit; these models are stochastic, mathematically well understood, and limited in their scope. and the marketing, dude. fuck. all these companies talking about how impressive and revolutionary this tech is, Meta being like “oh, our model is sooo powerful, and suuuper dangerous guys.” i thought the fediverse was built on skepticism about corporate interests. didn’t you see how these companies shilled for NFTs just a little while ago? it’s the same bullshit. i’m not saying AI tech isn’t cool, and interesting, and useful, it is, and far more so than the blockchain ever was, but some media literacy is in order here.

        so much of this bullshit is marketing hype. if you think that these models can make content more enjoyable than the stuff humans make? do it. prove it. i want to see that. stop whining on and on about how the world is gonna end because we have a search engine that lies to you and an infinite stock photo library. stop catastrophizing about how nobody will ever be able to find a real person online, like bots haven’t existed for fucking forever. this dude is almost certainly a blockchain shill too, you know that right? probably has his life savings in some fucking bullshit coin he’s actively selling as the solution to all your problems, like web3 hasn’t been a solution looking for a problem since its inception. ugh.

        i’m getting irritated. the idea of how to deal with AI making stuff is worth considering. it’s already happening. but honestly? the “Dead Internet Theory” is out of scope for us right now, and also a conspiracy theory. we aren’t living in that world. we’re living in a world where niche subcultures can thrive, connecting diverse people from across the world, where people make their livelihoods online, where the most human creativity ever collected in one place is available to us all. where information about anything and everything we want is at our fingertips. if the internet is dead or dying, i’m just not seeing it. it’s just changing. and maybe that change is for the worse, maybe it’s all going to shit, maybe corporate interests are gonna ruin everything, and we’re all going to be drowned out in a sea of noise and artificial people. but there ain’t no proof it’s actually happening yet, and all the shit we can prove can be just as easily explained by the adtech surveillance industry that’s been around for like a decade at this point.

  • Probably the easiest way to avoid it is to simply rename it to something less scary sounding. Maybe something like Alive Enhanced Rich Content Internet Theory for Human People! See, not a problem now.

    Also maybe we should reread Breakfast of Champions by Kurt Vonnegut. It has a storyline about a guy who finds out he is the only actual real person on Earth. Everyone else is a robot. And he wants to know why.

    •  golli   ( @golli@lemm.ee ) 

      The profit motive seems like the key: in the end, online activity is still rooted in physical hardware that requires resources, and those resources have to be provided by someone. That someone will have an incentive to prune wasteful activity.

      • I’m worried that the costs of the physical hardware are trivial compared to the amount of money that content farming and the like pull in, so it’s just an expense that scales with the amount of junk content they produce.

  • A permanent solution will, to an extent, require us to give up some degree of anonymity. Whether it’s linking a discussion with a real-life meetup… like this (NSFW warning)

    or some sort of government tracking system.

    When nobody knows whether you are a dog, a robot, or a human posting on the internet, mitigations like CAPTCHAs and challenge questions will only slow AI down; they can’t hold it off forever.

    It doesn’t help that on Reddit (where there are/were a lot of interacting users), the top-voted discussion is often the same tired memes and puns. That amounts to a head start for AI trying to imitate human users.

  • I’ve always wondered if anything on the internet is even real. I mean I don’t even know if everyone else is real. Maybe it’s just me and everything else is a simulation. Maybe this is a prison of some sort and every negative event is just part of the punishment.