• When the only thing that still works on your ad-filled website is the captcha, I’m not interested in supporting your journalism any more.

    Protip: You can crash self-driving cars by purposefully misclicking during captcha checks when they ask you to identify which image is a bicycle, a car, a pedestrian, etc. Keep misclicking; you are poisoning the AI with each misclick. Just stay safe on the sidewalk.

        • Given the number of bots on the internet trying to crack captchas, this is already happening. I don’t think captchas are being used for AI training that much, since hCaptcha uses AI-generated images with prompts like “Select the images with a hamster eating a watermelon” for its tests. All of the reCaptcha road captchas I receive also have answer validation and won’t let me pass if I answer incorrectly because of a misclick.

        • I don’t think it would have the intended effect. At worst, those captchas would stop being useful for AI training, but it’s not like a car is sitting at a stoplight waiting for a person to identify whether something is a bus or not.

        • Lol, getting 10 thousand users to slightly inconvenience themselves even to stand against things that directly affect them is difficult. Imagine trying to get billions to do it for a slightly indirect possible effect on megacorps.

          There are probably half a billion people alone that would gladly lick the boot of any mega corporation that demanded it.

        • Legally you are correct but ethically you are wrong. If they include false data that causes a crash, everyone that intentionally contributed that data is morally at fault. You don’t get to wash your hands of it just because the business is the one legally liable for it.

          •  Puls3 ( @Puls3@lemmy.ml ) · 1 year ago

            I mean, ethically it’s a debatable topic. If I don’t help fix someone’s car and then he crashes it, it’s not my fault; he shouldn’t have driven it while it was broken.

            Same with user-generated or AI data: it works 99.9% of the time, but that 0.1% is too dangerous to deploy in a life-endangering situation.

            • You’ve got a bit of a point there, I’ll give you that, but it’s an apples-to-oranges comparison, unless you’re intentionally trying to cause them to crash by not helping them fix their car. The person I originally replied to is advocating intentionally trying to cause a crash.

              •  Puls3 ( @Puls3@lemmy.ml ) · 1 year ago

                I think it was a more tongue in cheek reference to the incompetence of the companies and how they will use that data in practice, but I might have read too much into it. Regardless, intentionally clicking the wrong items on captchas shouldn’t cause a crash unless the companies force it to by cutting corners.

                • It doesn’t matter if it was tongue in cheek, if my dumbass took it seriously then you know other dumbass people will take it seriously. And I guess my main issue is about the vocal intent to cause harm which is demonstrated by their mention of making sure to stay safe on the sidewalk.

    •  comfy ( @comfy@lemmy.ml ) · 1 year ago

      Hah, it’s possible in theory but would require coordination that we are almost never going to see.

      Most people will just answer them correctly to pass, and if 997 responses say yes and 3 say no, the majority answer is almost certainly right.
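
The majority-vote point is easy to sketch in code. This is a toy illustration of how crowd answers might be aggregated, not any vendor’s actual pipeline; the function name and threshold are made up for the example:

```python
from collections import Counter

def aggregate_label(responses):
    """Take the most common answer from crowd responses.

    A handful of deliberate misclicks can't flip the result
    unless they approach a majority of all responses.
    """
    label, count = Counter(responses).most_common(1)[0]
    return label, count / len(responses)

# 997 honest "yes" votes vs 3 deliberate "no" misclicks
votes = ["yes"] * 997 + ["no"] * 3
label, confidence = aggregate_label(votes)
# label is "yes" with 99.7% agreement, so the poisoning attempt is drowned out
```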