What a world, where students trying not to cheat is considered remarkable. We haven’t even had LLMs that long.
Calling it cheating is about as dumb as when math teachers called calculators cheating. If everybody has access to a calculator that can handle any division problem you throw at it, learning how to do long division is suddenly not very useful.
Learning to do things yourself is exercise for your brain. It doesn't matter that you won't apply that exact skill later; a well-exercised brain can more easily solve new problems in the future. Don't underestimate the destructive impact that outsourcing your cognition can have on your brain.
The thing is that people get a calculator after they've understood how the operations work and have mastered them.
With AIs, it’s the same. It’s not an issue if your teacher gives you an assignment that allows or requires you to use one. But using it despite not being allowed to is cheating. Same as the calculator.
Cheating is when you skip a part of the process, and when the process is there to help you learn something, then you’re cheating yourself. It is the same as math teachers enforcing no-calculator rules. They weren’t doing it to be pointlessly strict. They were doing it to force you to exercise your brain. You need to know the processes they’re asking you to do. Once you know how, then you can use a calculator without missing out. Knowing the process is incredibly helpful in higher math, or in practical applications when you need to figure out how to get to a desired result from what you have.
It’s like going to the gym and having a robot lift the weights for you. Sure, the reps got done but you didn’t actually get anything out of it. Is that useful or are you just wasting your time and money?
Realistically, we learn these manual things by tradition and to understand the basics. But in the working world that work is automated, and we only have to learn other, more important things.
The Big Four accounting firms offer AI products.
Sometimes I feel like I’m the only one not regularly using AI, am I getting old or something?
My work right now is evaluating AI models, but outside work I don’t use them at all. I’m so conditioned to find flaws that my trust level is rock bottom.
I’m also very aware of how easy it is to mentally disengage. When everything appears correct at a glance, your mind has no reason to question it.
“Dissident”
“Terrorist”
Imagine it’s the late 90s to early 2000s, and millions of people are on this anti-Internet bandwagon, while scores and scores of articles (on paper, of course) are always pushing this negative slant towards the Internet. People reading this shit about how the Internet is going to doom us all and how we should reject it in favor of traditional media and research.
This is what these last few years feel like. Just an outright rejection of useful and life-changing technology while the corpos embrace it. A complete 180 from how the late 90s actually turned out, when corpos were slow on the uptake with this whole Internet thing.
Can you not understand that not learning how to think for yourself is bad? Or did the AI you use to run your life tell you that not thinking anymore is more efficient?
I’m talking about the general sentiment from articles like this, not the article itself. The content of the article doesn’t really matter in the grand scheme of things.
It’s the Constant. Fucking. Beratement. of the technology.
Like, we fucking get it: You’re a technophobe and hate technology, and love to write articles that shit on LLMs, because that’s what gets clicks. And judging from the votes on this forum, most everybody falls for the clickbait, which then generates even more hateful articles because they know it gets them views.
Meanwhile, out there in the real world, people go to work and use this sort of technology in their day-to-day jobs. There’s this extreme and jarring disconnect between public opinion, what the news reports, and what’s actually happening in real life. I feel like I’m watching Fox News half the time. It’s like all of these haters of LLMs suffer from a massive cognitive dissonance when they are in the workplace. Or they are so behind the times that they aren’t using this technology. Or they don’t even realize the things they use are using this technology behind the scenes.
Like, we fucking get it: You’re a technophobe and hate technology.
I don’t think you get it, or have even really bothered to read the articles, if that is your takeaway.
This is coming from a person who would probably get a gen-1 brain chip if they were actually as useful as the people making them say they would be.
It’s not that I hate technology; it’s that LLMs are effectively being forced upon people, with bosses literally telling them that they have to use the technology. There are genuinely companies that care more about their employees using LLMs than about them actually doing their jobs well, because they are so invested in LLMs.
There’s also evidence coming out that suggests long-term use of LLMs is not good for brain health, because it involves offloading so much menial thinking. Without use your brain atrophies, and you probably won’t even notice it, because you are your brain.
It will likely be a disaster for the already precarious state of literacy in the general public when people can just ask to have a message rewritten more simply whenever they can’t understand what it’s saying, so they have no incentive to improve.
Lol. Almost no one who has an opinion on AI is a technophobe. I’d argue it’s the opposite: to be informed enough to have an opinion, you must like technology. However, LLMs have been proven to be confidently inaccurate and misleading, which creates situations where people believe they’re correct when the model just made shit up.
Sure, it helps you get an answer pretty quickly, but then you have to check it for accuracy (if you aren’t a fucking idiot who just trusts the thing innately). Most of the time it doesn’t actually save any time if you learn how to do it yourself. For example, I sometimes use a local model to write boilerplate code for me, but if I ask it to write anything that solves a problem, it’s almost never correct. Then I have to parse it and figure out what it was doing in order to fix it, when I could have just written it myself and been done.
Yeah, it’s great if you’re an idiot and just want to sound smart. It’ll give you an answer that seems reasonable, and you can move on with your day. However, there are very good odds it isn’t correct if it’s anything complex and/or niche. (I think I saw something not long ago saying a 70% chance of being wrong.) If it isn’t complex or is common, you didn’t need a fucking LLM to solve it.
difference being, studies back then showed that people became better at research when using the internet, while studies today show them becoming worse when using ai.
rather than a bicycle for the mind that multiplies your effort, ai is a car for the mind.