One reason hyperlinks work like they do – why they index other kinds of affiliation – is that they were first devised to exhibit the connections researchers made among different sources as they developed new ideas. Early plans for what became the hypertext protocol of Tim Berners-Lee’s World Wide Web were presented as tools for documenting how human minds tend to move from idea to idea, connecting external stimuli and internal reflections. Links treat creativity as the work of remediating and remaking, which is foregrounded in the slogan for Google Scholar: ‘Stand on the shoulders of giants.’
But now Google and other websites are moving away from relying on links in favour of artificial intelligence chatbots. Considered as preserved trails of connected ideas, links make sense as early victims of the AI revolution since large language models (LLMs) such as ChatGPT, Google’s Gemini and others abstract the information represented online and present it in source-less summaries. We are at a moment in the history of the web in which the link itself – the countless connections made by website creators, the endless tapestry of ideas woven together throughout the web – is in danger of going extinct. So it’s pertinent to ask: how did links come to represent information in the first place? And what’s at stake in the movement away from links toward AI chat interfaces?
The work of making connections both among websites and in a person’s own thinking is what AI chatbots are designed to replace. Most discussions of AI are concerned with how soon an AI model will achieve ‘artificial general intelligence’ or at what point AI entities will be able to dictate their own tasks and make their own choices. But a more basic and immediate question is: what pattern of activity do AI platforms currently produce? Does AI dream of itself?
If Pope’s poem floods the reader with voices – from the dunces in the verse to the competing commenters in the footnotes – AI chatbots tend toward the opposite effect. Whether ChatGPT or Google’s Gemini, AI synthesises numerous voices into a flat monotone. The platforms present an opening answer, bulleted lists and concluding summaries. If you ask ChatGPT to describe its voice, it says that it has been trained to answer in a neutral and clear tone. The point of the platform is to sound like no one.