• Generative AI makes it very easy for anyone to flood the internet with generated text, audio, images, and videos.

    And? There’s already way too much data online to read or watch all of it. We could just move to a “watermark” system where everyone takes credit for their contributions. Things without watermarks could just be dismissed, since they have as much authority as an anonymous comment.

      • How would that work?

        AIs learn from existing images; they could just as well learn to reproduce a tattoo and link the pattern to a person’s name. Recreating it from different angles would require more training data, but it would ultimately get there.

        • For public ones, depending on what people started getting, it’d really strain the AIs. You could go one of two ways, with different people probably picking each.

          Something very uniform but still unique, like a QR code kind of deal; AIs would hallucinate the crap out of that. Or abstractions, like the patterns people use to change the apparent shape of their face to defeat facial recognition.

          For private ones, just don’t ever let it be photographed; any image showing that area without it would probably be fake.

      • Why would anyone pay for the service? Having a “name” is free, and that dumb Worldcoin only works for people. It can’t work for governments or businesses.

        ActivityPub is actually a good way to authenticate things. If an organization vouches for something they can post it on their server and it can be viewed elsewhere.

        • I think the idea of Worldcoin is to have a “wallet” linked to a single physical person; then you can sign any work with the key you got by proving you are a real person.

          IMHO, the coin part is just a hype element to get people to sign up for the password part.

          As for ActivityPub, I don’t see how it helps with anything. An organization vouching for something can already post it on their website, or if they want a distributed system, post it on IPFS.
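          Signing work with a personal key, as described above, can be sketched in a few lines. This is a hypothetical stand-in: a real proof-of-personhood wallet would use a public-key scheme such as Ed25519 so anyone can verify a signature without holding the secret, but stdlib HMAC-SHA256 keeps the sketch self-contained.

```python
import hashlib
import hmac

# Hypothetical sketch of "sign any work with your key".
# HMAC-SHA256 stands in for a real public-key signature scheme.

def sign_work(secret_key: bytes, work: bytes) -> str:
    """Produce a signature tying a piece of content to a key holder."""
    return hmac.new(secret_key, work, hashlib.sha256).hexdigest()

def verify_work(secret_key: bytes, work: bytes, signature: str) -> bool:
    """Check the signature matches the content, rejecting tampered copies."""
    expected = hmac.new(secret_key, work, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

key = b"key-issued-after-proof-of-personhood"  # hypothetical wallet key
work = b"my original article text"

sig = sign_work(key, work)
print(verify_work(key, work, sig))          # True: authentic
print(verify_work(key, work + b"!", sig))   # False: content was altered
```

          With a real asymmetric scheme, only verification keys would need to be published, so anyone could check authorship without being able to forge it.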

  • We didn’t even have AI when the Internet became flooded with faked images and videos, and those are actually incredibly hard to identify as fake. AI-generated images still have very obvious tells if you scrutinize them even a little bit. And video is so bad right now, you don’t have to do anything but have functioning sight to notice it’s not real.

  • Not sure what to make out of this article. The statistics are nice to know, but something like this seems poorly investigated:

    AI overview answers in Google search that tell users to eat glue

    Google’s AI has a strength others lack: not only does it allow users to rate an answer, but it can also use Google’s search data to check whether people are laughing at or mocking its results.

    The “fire breathing swans”, the “glue on pizza”, and the “gasoline flavored spaghetti” have all disappeared from Google’s AI.

    Gemini now also uses a draft system where it reviews and refines its own initial answer several times before presenting the final result.

  • I haven’t read this article as the statement is simply wrong. AI is just a technology. What it does (and doesn’t) depends on how it is used, and this in turn depends on human decision making.

    What Google does here is -once again- deny responsibility. If I use a tool that says I should put glue on my pizza, then it’s me who is responsible, not the tool. It’s not the weapon that kills; it’s the human being who pulls the trigger.