• This is true. Right now the OG internet is kept alive mostly by oral history, but we have the technology to preserve these websites in perpetuity as historical artifacts. That might be a good coding project: a robust archiving system that takes a URL, scrapes everything under its domain, and keeps a static collection of its contents. The catch is that this doesn’t truly “capture” many web pages. Backend data that was served dynamically from a database isn’t retrievable, so the experience of actually using the page may be non-archivable.
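The core of such an archiver can be sketched in a few dozen lines. This is a minimal, hypothetical version using only the Python standard library (all names here are invented for illustration): a breadth-first crawl that saves each page's HTML to disk and follows only links within the starting domain. It also illustrates the limitation above: it captures whatever snapshot the server returned, not the backend state behind it.

```python
# Minimal sketch of a same-domain static archiver (hypothetical names,
# stdlib only). It saves served HTML, not backend state, so dynamic
# pages are archived only as the snapshot the server happened to return.
from collections import deque
from html.parser import HTMLParser
from pathlib import Path
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags, resolved against the page URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))


def url_to_path(url, root="archive"):
    """Map a URL to a local file path, e.g. /about/ -> about/index.html."""
    parsed = urlparse(url)
    path = parsed.path.lstrip("/") or "index.html"
    if path.endswith("/"):
        path += "index.html"
    return Path(root) / parsed.netloc / path


def archive(start_url, limit=100):
    """Breadth-first crawl of start_url's domain, saving each page to disk."""
    domain = urlparse(start_url).netloc
    seen, queue = set(), deque([start_url])
    while queue and len(seen) < limit:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            body = urlopen(url, timeout=10).read()
        except OSError:
            continue  # skip unreachable pages rather than abort the crawl
        dest = url_to_path(url)
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_bytes(body)
        extractor = LinkExtractor(url)
        extractor.feed(body.decode("utf-8", errors="replace"))
        # Stay inside the original domain.
        queue.extend(u for u in extractor.links
                     if urlparse(u).netloc == domain)
```

A production version would also need to fetch CSS, JS, and images, rewrite links to point at the local copies, respect robots.txt, and rate-limit requests; but even with all of that, any page whose content is assembled server-side per request is only captured as a single frozen rendering.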