• (For clarity I’ll re-emphasise that my top comment is the result of my missing the word “documents”, so I’m speaking on general grounds about AI “summaries”, not just about AI “summaries” of documents.)

      The key here is that the LLM is likely to hallucinate the claims of the text being shortened, but not its topic. So provided that you care about the latter but not the former (say, when deciding whether to read the whole thing), it’s good enough.

      And that is useful in a few situations. For example, if you have a metaphorical pile of a hundred or so scientific papers, and you only need the ones about a specific topic (like “Indo-European Urheimat” or “Argiope spiders” or “banana bonds”).
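      A minimal sketch of that triage in Python, for concreteness. Every name here (“shortlist”, the “ask_llm” hook) is a hypothetical placeholder, not any particular library: the model is trusted only for the topic label, never for the claims, and everything it shortlists still gets read in full.

          from typing import Callable

          def shortlist(papers: dict[str, str], topic: str,
                        ask_llm: Callable[[str], str]) -> list[str]:
              """Return the names of papers the LLM says are about `topic`.

              The LLM only narrows the pile; every paper kept here still gets
              read in full, and nothing from the model's output is ever cited.
              """
              keep = []
              for name, abstract in papers.items():
                  # Ask only about the topic (the thing the model rarely gets
                  # wrong), never about the claims.
                  prompt = (f"Answer with a single word, yes or no: "
                            f"is this paper about {topic}?\n\n{abstract}")
                  if ask_llm(prompt).strip().lower().startswith("yes"):
                      keep.append(name)
              return keep

          # Usage sketch: shortlist(abstracts, "Argiope spiders", my_chat_client),
          # where my_chat_client is whatever chatbot wrapper you already have.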

      That brings us back to the OP. The issue with using AI summaries for documents is that you typically already know the topic at hand; it’s the content you want. That’s bad, because then the hallucinations won’t be “harmless”.

      • But the claims of the text are often why you read it in the first place! If you have a hundred scientific papers, you’re going to read the ones whose claims either support or contradict your research.

        You might as well just skim the titles and guess.

        • But the claims of the text are often why you read it in the first place!

          By “not caring about the former” [the claims], I mean in the LLM output, because you know the LLM will fuck them up. But it’ll still represent the topic of the text somewhat accurately, and you can use that to your advantage.

          You might as well just skim the titles and guess.

          Nirvana fallacy.

            • not reading the fucking sidebar

              Yeah, I get that this is a place to vent. And I get why people vent about this. LLMs and other A“I” systems (with quotation marks because this shite is not intelligent!) are being shoved every bloody where, regardless of actual usefulness, safety, or user desire. Telling you to put glue on your pizza, to eat poisonous mushrooms, that “cherish” has five letters, that Latin had no [w], that the Chinese are inferior to Westerners.

              While a crowd of irrationals tells you “it is intelligent, you can’t prove otherwise! TRUST IT YOU DIRTY SCEPTIC/INFIDEL/LUDDITE REEEE! LALALA I’M PRETENDING NOT TO SEE THE HALLUCINATION LALALA”.

              I also get the privacy nightmare that this shit is. And the whole deal behind “we’re using your content as training data, and then selling the result back to you”. Or that it’s eating electricity like there’s no tomorrow, on a planet where global warming is a present issue.

              I get it. I get it all. That’s why I’m here. And if you (or anyone else) think that I’m here for any other reason, by all means, check my profile - you’ll find plenty of criticism of those stupid corporate AI takes from vulture capital. (And plenty of instances of me calling HN “Redditors LARPing as Hax0rz”.)

              However. Pretending that there’s no use case ever for LLMs is the wrong way to go.

              and thinking this is high school debate club fallacy

              If calling it “nirvana fallacy” rubs you the wrong way, here’s an alternative: “this argument is fucking stupid, in a very specific way: it pretends that either something is perfect or it’s useless, with no middle ground.”

              The other user, however, doesn’t deserve the unnecessary abrasiveness, so I’ll keep simply calling it “nirvana fallacy”.

                  •  self ( @self@awful.systems ), 815 days ago

                    fucking right! there’s this unearned assumption that just because the tech’s been invented, it must have worth. and, like, no? there’s so many dead ends in science and technology, and notoriously throwing money at something doesn’t change its fundamental nature

                    and now I’m pissed and trying to decide if it’s even worth explicitly adding “don’t be a debatelord asshole” to the TechTakes sidebar, cause it’s not like they’re gonna stop

          • Unless it doesn’t accurately represent the topic, which happens, and then a researcher chooses not to read the text based on the chatbot’s summary.

            Nirvana fallacy.

            All these chatbots do is guess. I’m just saying a researcher might as well cut out the hallucinating middleman.