From the (middle of the) story: The reason CES was so packed with random “AI”-branded products was that sticking those two letters to a new company is seen as something of a talisman, a ritual to bring back the (VC) rainy season.

  • The fundamental problem with tech in the 2020s is that it’s pretty much done eating the world. The last big earth-moving platform shift was smartphones over a decade ago. Ever since they’ve just been trying to make wearables happen, then make VR/AR happen, then make web3 happen, then make AI happen.

    They keep on trying to make these new platforms happen, but they don’t really have any compelling features. Before smartphones, when I’d travel to a new city, I’d buy a paper map… and get lost. I don’t get lost anymore. That’s genuinely a different experience. Nothing since has created any sort of earth-shattering change on that level.

    • I think this is the thing right here.

      We’re hitting walls when it comes to increases in computing power. We’ve made transistors nearly as small as they can get, so all we can do now is parallelize the processing (multi-core, etc.). But even then, computing power has reached such a point that most people don’t even notice the difference from one generation of hardware to the next. I have a MacBook Pro from 2015 that still runs like a champ. I’m sure I’m going to start hitting walls when it comes to what hardware macOS will support, but the hardware itself is perfectly fine. So unless you’re into specific niches (gaming, video production, 3D modeling, high-end workstation needs, etc.), incremental shifts these days are literally nothing to get excited about.

      Operating systems barely change between major upgrades. Aside from the horrendous change of the System Settings pane in macOS to look like its iOS and iPadOS counterparts, you’d be hard-pressed to see much of a difference in macOS across the last 4-5 versions.

      The web has coalesced into a handful of walled gardens full of AI-generated, SEO-driven, algorithmically curated content, with every website looking just like every other one, every post looking like every other, and big UX shifts just for the sake of a version number change.

      And the tech industry has become so toxic, predatory, and socially harmful that it’s really hard to feel excited by anything anymore. More landfill fodder from companies that are actively destroying democracy? The latest AI product to use as an excuse to lay off massive numbers of people and wreck entire industries for a garbage product, just to pad the bottom line? Fuck all of that.

      The paradigm shifts we’re seeing now aren’t technological; capitalists are just using technology as an excuse to wreak havoc on society. It’s fucking bleak.

      • Worth noting this is all a byproduct of capitalism as it exists today. Not nearly enough competition or regulation means every major change these days is really just “How can we put more ads in people’s devices? How can we instill FOMO? How can we charge more money for less service?”

        Look at how bad mainstream UX/UI has become, even on the biggest platforms with the most industry experience. Windows 11 is a fucking joke and everyone is certain the next one will be worse. We know Microsoft can design a good UI, but they choose not to because they don’t care about the consumer except as it pertains to getting clicks and ad revenue. They shove Edge back in your taskbar after every update, they make finding your own shit on your computer impossible so you’ll use Cortana. Who asked for icons to be centered by default? Who asked for them to fuck around with the start menu for at least the fifth time in two decades? Nobody. But everything they do is for the benefit of the shareholders.

    • ChatGPT is way more surprising from the perspective of 2000 than Google Maps. Even if this is as good as it gets, that’s another complete reshaping of our society as all the rote natural-language jobs go the way of the “computing clerk”, which was a real job before mainframes and electronic calculators replaced it.

      Otherwise, yeah, they’ve become so accustomed to “disruption” they have trouble imagining it stopping, even though it totally could.

      • Ok then be specific.

        Like, before smartphones I was lost in Hamburg, looking at a paper map, trying to figure out where the heck I was and to find the street names hidden in the brickwork of the buildings. I had to have a friendly person help me out. I’ve never been lost like that since smartphones. Give me a specific case where ChatGPT would do something like that.

        • From a consumer perspective, it’s less flashy, I guess. It’s helped me figure out things that I can’t find on a search engine, but that’s not quite as big. From an engineer’s perspective, all the tech for Google Maps existed at the time, and for certain users accurate GPS with maps was already an established thing in 2000. On the other hand, we’d been trying to do anything useful with natural language since the ’50s and had thoroughly failed.

          From a business perspective, being able to lay off every order taker at your restaurant chain (and maybe the middle managers and bookkeepers too) is huge. It’s obviously huge for order takers, and it’s pretty big for the restaurant owners and anyone who eats at restaurants as well. I think that qualifies as “eating the world”.

          • On the other hand, we’d been trying to do anything useful with natural language since the ’50s and had thoroughly failed.

            That’s really not true. For instance, machine translation and spam detection (document classification) were getting really good by the late 2000s. Image recognition was great beginning in the late 2010s.

            What we’ve seen in the last few years (besides continual incremental improvements in already-existing solutions) is improvement in the application of generative tools. So far the use cases of generative models appear to be violating copyright, cheating on homework, and producing even more search engine spam. It can also be somewhat useful as a search engine so long as you want your answer to be authoritatively worded but don’t care if it’s true or not.
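            For context on the kind of late-2000s spam detection mentioned above: those filters were typically plain document classifiers, often Naive Bayes over word counts. A minimal sketch in Python (the tiny corpus here is made up purely for illustration; real filters train on thousands of messages):

```python
import math
from collections import Counter

# Toy corpus, invented for illustration only.
spam = ["win money now", "free money offer", "claim your free prize"]
ham = ["meeting moved to noon", "lunch tomorrow", "project notes attached"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, total):
    # Laplace (add-one) smoothing so unseen words don't zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def is_spam(msg):
    # Equal class priors, since half the training messages are spam.
    return (log_likelihood(msg, spam_counts, spam_total)
            > log_likelihood(msg, ham_counts, ham_total))

print(is_spam("free money"))        # → True
print(is_spam("meeting tomorrow"))  # → False
```

            The point being: this works well as classification ("spam or not?") without engaging with meaning at all, which is exactly the distinction being argued over in this thread.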

            • In the ’50s they thought we would have intelligent robot butlers by the ’70s. They had solved more structured problems that seemed hard, like chess, and figured language and simple physical tasks couldn’t be much different. They came up with some hacky chatbots and things in the 20th century, but it was all cheap tricks like strategically changing the subject; I talked to these things enough to tell. ChatGPT passes basically every test of short-term language reasoning we can throw at it. It’s solved the problem for really basic purposes. It can take your Wendy’s order without any fine-tuning.

              Alright, I’m going to respond to the rest of this in quip-like fashion, since you’ve touched on a lot of separate-ish points here, but the tone intended is still neutral.

              Image recognition was great beginning in the late 2010s.

              That was literally the same tech we’re talking about here, just earlier and with a slightly different structure.

              For instance, machine translation and spam detection (document classification) were getting really good by the late 2000s.

              You and I have different memories of older machine translation. It could replace words and a few stock phrases fine, but it broke or produced awkward phrasings very often. It didn’t engage with the underlying meaning at all. Spam detection worked well, but it wasn’t similarly smart, and IIRC in some cases it was neural nets again.

              violating copyright,

              Disagree.

              It can also be somewhat useful as a search engine so long as you want your answer to be authoritatively worded but don’t care if it’s true or not.

              Or if the answer is easily verifiable, like it has been in my own cases.