- cross-posted to:
- technews@radiation.party
- hackernews@derp.foo
- Spiracle ( @Spiracle@kbin.social ) 22•1 year ago
Finally. I haven’t seen a single positive use of these yet due to the poor performance. It was only slightly more accurate than professors or lawyers asking ChatGPT whether something was written by ChatGPT.
- cloaker ( @cloaker@kbin.social ) 11•1 year ago
Good. There’s no good way to detect whether plain text is AI-written. It’s a language model.
- BubblyMango ( @BubblyMango@lemmy.wtf ) 9•1 year ago
Bullshit. The U.S. Constitution is AI-written. You will never convince me otherwise.
- recycledbits ( @recycledbits@discuss.tchncs.de ) 1•1 year ago
“Is this AI written?” is a difficult/impossible question. “Did you write this?” is not. Running the language model against a text and recording its “amount of surprise per token” for all the released GPT x.y variants is something they definitely can do.
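The "amount of surprise per token" idea is essentially perplexity scoring: a model that generated (or closely matches) a text assigns it high probability, so the average negative log-probability per token is low. A minimal, self-contained sketch of that intuition follows, using a toy character-bigram model as a stand-in for a GPT variant; all function names here are illustrative, not OpenAI's actual tooling.

```python
import math
from collections import defaultdict

def train_bigram(text):
    # Count character-bigram frequencies to build a toy "language model".
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def surprise_per_token(model, text, vocab_size=128):
    # Average negative log2-probability per character,
    # with add-one smoothing so unseen bigrams get nonzero probability.
    total = 0.0
    for a, b in zip(text, text[1:]):
        follows = model[a]
        denom = sum(follows.values()) + vocab_size
        p = (follows[b] + 1) / denom
        total += -math.log2(p)
    return total / max(len(text) - 1, 1)

corpus = "the quick brown fox jumps over the lazy dog " * 50
model = train_bigram(corpus)

# Text drawn from the training distribution surprises the model far less
# than unrelated gibberish -- the core signal behind model-based detection.
low = surprise_per_token(model, "the quick brown fox")
high = surprise_per_token(model, "zxqj vvkp wmmrg")
print(low < high)  # True
```

The catch, and why "Is this AI written?" stays hard, is that low surprise under one model only tells you the text is *probable* under that model, not that the model produced it; that is why checking against specific released GPT x.y variants is more tractable than detecting AI text in general.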
- RagnarokOnline ( @RagnarokOnline@reddthat.com ) 5•1 year ago
Sounds like they took it down to improve it?
“We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated,” OpenAI wrote.
- cloaker ( @cloaker@kbin.social ) 4•1 year ago
Good. There’s no good way to detect whether plain text is AI-written. It’s a language model.
- lorez ( @lorez@lemm.ee ) 1•1 year ago
I just saw the same cat twice.