There are lots of articles about bad use cases of ChatGPT that Google has already enabled for decades.

Want to get bad medical advice for the weird pain in your belly? Google can tell you it’s cancer, no problem.

Do you want to know how to make drugs without a lab? Google even gives you links to stores where you can buy the materials.

Want some racism/misogyny/other evil content? Google is your ever-helpful friend and garbage dump.

What’s the difference apart from ChatGPT’s inability to link to existing sources?

Edit: Just to clear things up: this post is specifically not about the new use cases that come from AI. Sure, Google cannot automatically produce semi-non-functional mini programs, and Google will not write an entire fake paper for me. I am specifically talking about the “this will change the world” articles that mirror things Google can do exactly as well as ChatGPT can.

    • That’s not a workable solution. Since Meta’s LLaMA weights leaked, there has been such rapid advancement on the open-source side of LLMs that the tech has diverged too far to ever be reliably detectable.

      You can now spin up a custom, targeted LLM in a few hours on low-power consumer hardware, and it beats the massive incumbents within the narrow scope it was trained for (see the sketch at the end of this comment).

      Think of a Facebook comment bot, tuned specifically to sound like pro-[VIEW] comments, complete with typos and internet slang. Or a high-school essay bot, trained exclusively on five-paragraph essays.

      Current detectors have a very high false-positive rate, and there are also AI tools that rewrite text specifically to evade the detection tools that do exist.
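
      To make the “few hours on consumer hardware” claim concrete, here is a minimal sketch of that kind of targeted fine-tune using Hugging Face transformers + peft. The base model, the “essays.txt” corpus, and all hyperparameters are illustrative assumptions, not anyone’s actual recipe:

        # Minimal sketch: LoRA fine-tune of a small open model on a narrow corpus.
        # Assumptions: base model choice, "essays.txt" (e.g. five-paragraph essays),
        # and all hyperparameters are placeholders.
        from datasets import load_dataset
        from peft import LoraConfig, get_peft_model
        from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                  DataCollatorForLanguageModeling, Trainer,
                                  TrainingArguments)

        base = "EleutherAI/pythia-410m"  # assumption: any small causal LM works
        tokenizer = AutoTokenizer.from_pretrained(base)
        tokenizer.pad_token = tokenizer.eos_token
        model = AutoModelForCausalLM.from_pretrained(base)

        # LoRA trains a small set of adapter weights instead of the full model,
        # which is what makes a few hours on a consumer GPU plausible.
        model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                                 lora_dropout=0.05,
                                                 task_type="CAUSAL_LM"))

        # Narrow training corpus: one example per line of plain text.
        data = load_dataset("text", data_files={"train": "essays.txt"})["train"]
        data = data.map(lambda x: tokenizer(x["text"], truncation=True,
                                            max_length=512),
                        remove_columns=["text"])

        Trainer(
            model=model,
            args=TrainingArguments(output_dir="essay-bot", num_train_epochs=3,
                                   per_device_train_batch_size=4,
                                   learning_rate=2e-4),
            train_dataset=data,
            data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                                          mlm=False),
        ).train()

      The point is that none of this needs a data center: the adapter weights are a tiny fraction of the model, so a single consumer GPU is enough for a narrowly scoped bot like the examples above.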