Had to click through to change my downvote to an upvote, lol.
None of those examples are relevant.
Those examples are specific tools or specific implementation patterns; AI in development is a tool.
It doesn’t dictate how to write software or what the written code will look like; it’s a tool that speeds up your code writing. It catches typos and silly bugs that take hours to debug, it’s able to generate useful unit tests, it can clean up and apply my code style way better than CodeMaid or ReSharper ever could, and it’s taken care of so much tedious shit and made software development fun again.
Vibe coding is not the future of development, but if you aren’t learning to use AI as a tool in development, you are going to be left behind.
It’s more apt to compare it to IDEs. Sure, you can still write your entire app in vim and compile it in the terminal, but you would have been very foolish to deny that the future of development was in IDEs.
You’re describing exactly how all these web tools worked. “HTML, CSS, and JS are too hard to do manually. Here’s a shiny new tool that abstracts all that away and lets you get right to making your site!” Except they all added additional headaches, security concerns, and failed to fill in edge cases, so you still need to know how to do all that HTML, CSS, and JS anyway. That’s exactly how LLM generated code works now. It’ll be useful and common for a while and then the technical debt will pile up and pile up and eventually everyone will look around and think “what the hell were we thinking” and tear it all down.
None of those examples are relevant.
They seem pretty relevant. Those things didn’t go away, but they also didn’t remove the need for programmers (the way their sales people said they would).
It is always hilarious and strange to see the buy-in on these things. We have a single coder in his late 60s that has bought in hard to spicy autocorrect. Meanwhile, the youngest on our team (like 22) won’t touch it with a 10 ft pole.
The other issue is just the morality of it. Do I know people that got rich on Bitcoin? Yes. Do I feel like they’re participating in a pyramid scheme still? Also yes. And with spicy autocorrect, where they got their training data for any and all of these models is so freaking morally bankrupt, and they’re desperate to paper over that and make it “ok” for businesses to use it.
(let me preach a little, I have to listen to my boss gushing about AI every meeting)
Compare AI tools: now vs 3 years ago. All those 2022 “Prompt engineer” courses are totally useless in 2025.
Extrapolate into the future and realize that you’re not losing anything valuable by not learning AI tools today. The whole point of them is that they don’t require any proficiency. It “just works”.
Instead focus on what makes you a good developer: understanding how things work, which solution is good for what problem, centering your divs.
As an old fart you can’t imagine how often I heard or read that.
You should click the link.
Hehe. Damn, absolutely fell for it. Nice 😂
Yeah but it’s different this time!
I do wonder about inventions that actually changed the world or the way people do things, and if there is a noticeable pattern that distinguishes them from inventions that came and went and got lost to history, or that did get adopted but never reached mass adoption. Hindsight is 20/20, but we live in the present and have to make our guesses about what will succeed and what will fail, and it would be nice to have better guesses.
Quality work will always need human craftsmanship
I’d wager that most revolutionary technologies are either those that expand human knowledge and understanding, or (to a lesser extent) those that increase replicability (like assembly lines).
It’s tricky, because there’s no hard definition for what it means to “change the world”, either. To me, it brings to mind technologies like the Internet, the telephone, aviation, or the steam engine. In those cases, it seems like the common thread is to enable us to do something that simply wasn’t possible before, and is also reliably useful.
To me, AI fails on both those points. It doesn’t really enable us to do anything new. We already had chat bots, we already had Photoshop, we already had search algorithms and auto complete. It can do some of those things a lot more quickly than older technologies, but until they solve the hallucination problems it doesn’t seem reliable enough to be consistently useful.
These things make it come off more as a potential incremental improvement that is still in its infancy than as something truly revolutionary.
Well it’ll change the world by consuming a shit ton of electricity and using even more precious water to fill the data centres. So changing the world is correct in that regard.
It needs to be more trustworthy. If I have to double check everything, I still have to figure out how to do whatever it’s doing, then figure out how it’s doing the thing, then verify if it did it right. By then, I could have just done it in step 1.5 probably.
I do wonder about inventions that actually changed the world or the way people do things, and if there is a noticeable pattern that distinguishes them from inventions that came and went and got lost to history,
Cool thought experiment.
Comparing the first iPhone with the arrival of blockchain is a pretty solid way to consider the differences.
We all knew that modern phones were going to be huge. We didn’t need tech bros to tell us to trust them about it. The usefulness was obvious.
After I got my first iPhone, I learned a new thing I could do with it - by word-of-mouth - pretty much every week for the first year.
Even so, Google supposedly under-estimated the demand for the first Android phones by almost a factor of 10x.
Blockchain works fine, but it’s not changing my daily routine every week.
AI is somewhere in between. I do frequently learn something new and cool that AI can do for me, from a peer. It’s not as impactful as my first pocket computer phone, but it’s still useful.
Even with the iPhone release, I was told “learn iPhone programming or you won’t have a job.” I actually did not learn iPhone programming, and I do still have a job. But I did need to learn some things about making code run on phones.
I’d love to read a list of those instances/claims/tech
I imagine one of them was low-code/no-code?
/edit: I see such a list is what the posted link is about.
I’m surprised there’s not low-code/no-code in that list.
You’re right. It belongs on the list.
I was told several times that my programming career was ending when the first low-code/no-code platforms were released.
At my work we explored a low-code platform. It was not low on code at all. Beyond the simplest demos you had to code everything in JavaScript, but in a convoluted, opaque, undocumented environment with a horrendous editing UI. Of course, their marketing said something quite different.
That was not the early days of low-code, mind you. It was rather recent; maybe three or four years ago.
This technology solves every development problem we have had. I can teach you how with my $5000 course.
Yes, I would like to book the $5000 Silverlight course, please.
I still think PWAs are a good idea: instead of needing to download an app on your phone for every website, the website itself can be the app. For example, PWAs could easily replace most banking apps, which are already just PWAs with added tracking.
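For anyone who hasn’t built one, the wiring that makes a plain site installable as an app is surprisingly small: a manifest plus a service worker. A minimal sketch (the file names manifest.json and sw.js are just illustrative, and the worker itself lives in that separate file):

```html
<!-- Minimal PWA wiring: a manifest describing the "app" and a
     service worker registration so it can work offline.
     File names here are illustrative. -->
<link rel="manifest" href="/manifest.json">
<script>
  // Register the service worker if the browser supports it;
  // otherwise everything degrades gracefully to a normal website.
  if ("serviceWorker" in navigator) {
    navigator.serviceWorker.register("/sw.js");
  }
</script>
```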
They’re great for users, which is why Google and Apple are letting them die from lack of development so apps can make them money.
I’m not defending AI here, but “people have been wrong about other things in the past” is a completely worthless argument in any circumstance. See: Heuristics that Almost Always Work.
Interesting article, but you have to be aware of the flipside: “people said flight was impossible”, “people said the earth didn’t revolve around the sun”, “people said the internet was a fad, and now people think AI is a fad”.
It’s cherry-picking. They’re taking the relatively rare examples of transformative technology and projecting that level of impact and prestige onto their new favoured fad.
And here’s the thing, the “information superhighway” was a fad that also happened to be an important technology.
Also the rock argument vanishes the moment anyone arrives with actual reasoning that goes beyond the heuristic. So here’s some actual reasoning:
GenAI is interesting, but it has zero fidelity. Information without fidelity is just noise, so a system that can’t solve the fidelity problem can’t do information work. Information work requires fidelity.
And “fidelity” is just a fancy way of saying “truth”, or maybe “meaning”. Even as conscious beings we haven’t really cracked that issue, and I don’t think you can make a machine that understands meaning without creating AGI.
Saying we can solve the fidelity problem is like Jules Verne in 1867 saying we could get to the moon with a cannon because of “what progress artillery science has made during the last few years”. We’re just not there yet, and until we are, the cannon might have some uses, but it’s not space technology.
Interestingly, artillery science had its role in getting us to the moon, but that was because it gave us the rotating workpiece lathe for making smooth bore holes, which gave us efficient steam engines, which gave us the industrial revolution. Verne didn’t know it, but that critical development had already happened nearly a century prior.
Cannons weren’t really a factor in space beyond that. Edit: actually, metallurgy and solid fuel propellants were crucial for space too, and cannons had a lot to do with that as well. This is all beside the point.
Is it worthless to say “(the current iteration of) AI won’t be a huge revolution”? Perhaps; it might be, and the next decade will determine that.
Is it worthless to say that many companies are throwing massive amounts of money at it, and taking huge risks on it, while it clearly won’t deliver for them? I would say no, that is useful.
And in the end, that’s what this complaint seems like to me. The issue isn’t “AI might be the next big thing”, but “We need to do everything with AI right now”, and then in a couple of years, when they see how bad the results are and how it negatively impacted them, no one will have seen it coming…
Thanks for summing it up so succinctly. As an aging dev, I’ve seen quite a lot of tech come and go. I wish more people interested in technology would spend more time learning the basics and the history of things.
No one can predict the future, one way or the other.
The best way to not be left behind is to be flexible about whatever may come.
Can’t predict the future, but I can see the past. Specifically the part of the past that used standards-based implementations and boring technology. Love that I can pull up HTML with elements in ALL CAPS and table-aligned content. It looks like a hot mess, but it still works, even on mobile. Plain text keeps trucking along. SQLite will outlive me. Exciting things are exciting, but the world is made of boring.
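For anyone who never saw it, that old markup looks roughly like this (a made-up snippet in the nineties style, not from any real page), and every browser today still renders it:

```html
<!-- Nineties-style markup: uppercase tags, layout done with tables.
     A hot mess, but browsers still honor all of it. -->
<TABLE WIDTH="100%" BORDER="1">
  <TR>
    <TD ALIGN="CENTER"><FONT SIZE="5"><B>WELCOME TO MY HOMEPAGE</B></FONT></TD>
  </TR>
  <TR>
    <TD ALIGN="CENTER">Best viewed in any browser, even on mobile.</TD>
  </TR>
</TABLE>
```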
it’s funny, but also holy moly do I not trust a “sign in with github” button
I can see this partly being true in that it’ll be part of a dev’s toolkit. The devs at my previous job loved using it to do busy work coding.
Oh god the hate in this sub. It is definitely another tool for a dev to use. Like autocomplete or a lot of other stuff a good IDE does to help you. If you don’t want to use it, fine. Perhaps you’re such a pro that you don’t need anything but a text editor. If you’re not, and you’re ignoring it for whatever petty reasons, you’ll probably fall behind all the devs who learned how to use it to get more productive (or, in developer terms, lazier)
Agreed. Like it or not, old-school auto complete was the same thing, just not as advanced. That being said, the comment OP probably didn’t click the link.
I agree that it will continue to be a useful tool. I’ve gotten a similar productivity boost from AI auto-complete as I did from regular auto-complete. It’s also pretty good at identifying potential issues in code; again, a similar productivity boost to a good linter. The chatbot does make a good sounding board, especially when you don’t remember the name of the concept you are trying to implement, or need to pro-con two solutions and can’t find articles about it.
But all these claims of 10x improvements in development speed are horse shit. Yeah, you might be able to shit out a 5–10,000 LOC tutorial app in an hour or two with prompt engineering, but try implementing a feature in a 100,000 LOC codebase and it promptly shits the bed: hallucinating internal frameworks and microservices, ignoring internal practices, writing straight-up non-functional code, etc. If you spend enough time prompting it, you can eventually massage the solution you need out of it; problem is, it took longer to do that than writing the damn thing yourself.
I left 10 years ago, web development is shit.
I don’t remember progressive web apps having anywhere near the level of fanfare as the other things on this list, and as someone who has built several PWAs, I feel their usefulness is undervalued.
More apps in the app store should be PWAs instead.
Otherwise this list is great and I love it.
If you’re not using Notepad, I don’t even know what to tell you.