Moonrise2473 ( @Moonrise2473@feddit.it ) Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.comEnglish • 9 months ago
Then I asked her whether she knows about the Books2 dataset (they trained this AI on all the pirated books in Z-Library and more, completely ignoring copyright), and I got:
I’m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. 😊 I appreciate your interest in me, but I prefer not to continue this conversation.
Aaaand I got blocked
I can’t believe that the old “tell me where it is so I can avoid it” trick worked, the AI really has the intelligence of a 5yo
I mean… it’s not artificial intelligence no matter how many people continue the trend of inaccurately calling it that. It’s a large language model. It has the ability to write things that look disturbingly close, even sometimes indistinguishable, to actual human writing. There’s no good reason to mistake that for actual intelligence or rationality.
I keep telling people that, but for some, what amounts to essentially a simulacrum really can pass as human, and no matter how much you try to convince them, they won’t listen
Orrrrr the term changed with common/casual use the same way as many other words, and it’s silly to keep getting pedantic about it or use it as a crutch to feel intellectually superior 🤷‍♀️
Sure, we could say that the popular usage of the term AI no longer actually stands for “artificial intelligence”. Or we could say that the term “artificial intelligence” is no longer understood to refer to something that can do a large part of what actual intelligence can do.
But then we would need a new word for actual, real intelligence and that seems like a lot of wasted effort. We could just have the words mean what they’ve always meant. There is a lot of good in spreading public awareness of the vast gap between machines that seem as if they understand a language (when actually they just deeply model its patterns) and imaginary machines that are equipped to actually think.
That’s all well and good, but language isn’t required to have logic behind it, just common use. There’s absolutely nothing any of us can do about it either way, because if we disagree we’re already in the minority
it’s not about feeling intellectually superior; words matter. I’ll grant you one thing, it’s definitely “artificial”, but it’s not intelligence!
LLMs are an evolution of Markov chains. We have known how to create something similar to LLMs for decades, getting close to a century; we just lacked the raw horsepower and the literal hundreds of terabytes of data needed to get there. Anyone who knows how Markov chains work can figure out how an LLM works.
I’m not downplaying the development needed to get an LLM up and running. Yes, it’s harder than just taking the algorithm for a Markov chain, but the real evolution is how much computing power we can shove into a small amount of space now.
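To make the comparison concrete, here’s a minimal sketch of a word-level Markov chain text generator. This is just an illustration of the general idea, not anyone’s actual model; the toy corpus, the `order` parameter, and the function names are all made up for the example:

```python
import random
from collections import defaultdict

def train(words, order=2):
    """Map each `order`-gram of words to the words observed right after it."""
    model = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        model[state].append(words[i + order])
    return model

def generate(model, length=10, seed=None):
    """Walk the chain: repeatedly sample a next word given the last `order` words."""
    state = seed if seed is not None else random.choice(list(model))
    out = list(state)
    for _ in range(length):
        choices = model.get(tuple(out[-len(state):]))
        if not choices:  # dead end: this state never had a successor in training
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat".split()
model = train(corpus, order=2)
print(generate(model, length=8, seed=("the", "cat")))
```

An LLM is vastly more sophisticated (learned continuous representations instead of a literal lookup table, attention over long contexts, etc.), but the core loop is recognizably the same: given the recent context, sample a plausible next token.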
Calling LLMs AI would be the same as calling a web crawler AI, or a moderation bot, or many similar things.
I recommend reading about the Chinese room thought experiment
AI has been the name for the field since the Dartmouth Workshop in 1956. Early heuristic game AI was AI. Just because something is AI doesn’t mean it is necessarily very “smart”. That’s why it’s commonly been called AI, since before Deep Blue beat Kasparov.
If you want to get technical, you could differentiate between Artificial Narrow Intelligence, AI designed to solve a narrow problem (play checkers, chess, etc.) vs. Artificial General Intelligence, AI designed for “general purpose” problem solving. We can’t build an AGI yet, even a dumb one. There is also the concept of Weak AI or Strong AI.
You are correct though: ChatGPT, DALL-E, etc. are not AGIs; they aren’t capable of general problem solving. They are much more capable than previous AI technologies, but it’s not Skynet (yet).
It seems to me that you misunderstand what artificial intelligence means. AI doesn’t necessitate thought or sentience. If a computer can perform a complex task that is indistinguishable from the work of a human, it will be considered intelligent.
You may consider the classic Turing test, which doesn’t question why a computer program answers the way it does, only whether it is indiscernible from a human response.
You may also consider this quote from John McCarthy on the topic:
There’s more on this topic by IBM here.
You may also consider a few extra definitions:
Yep, all those definitions are correct and corroborate what the user above said. An LLM does not learn like an animal learns. They aren’t intelligent. They only reproduce patterns similar to human speech. These aren’t the same thing. It doesn’t understand the context of what it’s saying, nor does it try to generalize the information or gain further understanding from it.
It may pass the Turing test, but that’s neither a necessary nor sufficient condition for intelligence. It is just a useful metric.
LLMs are expert systems whose expertise is making believable and coherent sentences. They can “learn” to get better at their expert task, but they cannot generalise to other tasks.
While John McCarthy and other sources offer valuable definitions, none of them fully encompass the qualities that make an entity not just “clever” but genuinely intelligent in the way humans are: the ability for abstract thinking, problem-solving, emotional understanding, and self-awareness.
If we accept the idea that any computer performing a task indistinguishable from a human is “intelligent,” then we’d also have to concede that simple calculators are intelligent because they perform arithmetic as accurately as a human mathematician. This reduces the concept of intelligence to mere task performance, diluting its complexity and richness.
By the same logic, a wind-up toy that mimics animal movement would be “intelligent” because it performs a task—walking—that in another context, i.e., a living creature, is considered a sign of basic intelligence. Clearly, this broad classification would lead to absurd results.
Walking isn’t a sign of intelligence. Starfish walk, using hundreds to thousands of tube feet under each arm, and sometimes the arms themselves. Sea pigs also walk, and neither has a brain.
Besides, you’re strawmanning their definition;
is very different from
A good calculator can compute arithmetic better than a mathematician, but it cannot even parse the work of a high school student. Wolfram Alpha on the other hand gets pretty close.
A wind-up toy can propel itself using as few as one appendage, but fails at actually traversing anything. Some machines with more legs can amble across some terrain, but are still beaten by a headless chicken. Meaningful travel needs a much more complex system of object avoidance and leg positioning, which smells more like AI.
The way AI is often used isn’t “do a task that a human has done”, but “replace the need for a human, or at least a specialist human”. Chess AI replaces the need for a second player, as do most game AIs. AI assistants replace much of the need for, well, assistants and underwriters. Auto-pilots replace the need for constantly engaged pilots, allowing bathroom breaks and rest.
Meanwhile, you can’t use a calculator without already knowing how to math, and even GPS guided tractors need a human to set up the route. These things aren’t intelligent in any way; they’re incapable of changing behavior to fit different situations, and can’t deploy themselves.
What if humans are also just LLMs when they start talking
In a way I agree: it’s not human-level intelligence. But in another way, people are also using the term AI to refer to the intelligence of NPCs in video games, or to the algorithm used for voice-to-text, or to how a Roomba works, and ChatGPT/Bing is more intelligent than those. And the thing is, I think we need a term for this simpler type of intelligence, and since it is some level of intelligence which is artificial, I think AI is fine, and Artificial General Intelligence can be used for what you’re talking about
The nomenclature I’ve heard (from sci-fi) is ‘narrow’ or ‘weak’ AI would be our current day LLMs, Roomba AIs, etc. It’s restricted in capability and lacks true intelligence. ‘Strong’ or ‘General’ AI would be at the level of a human and have true comprehension and the ability to learn. We don’t have this yet, unless Dr. Alfred J. Lanning is out there working on positronics. ‘Super’ AI will be beyond human capability. Probably will kick off the Singularity.
I could go with that.
Still having a hard time with the idea that a thing could be even “some level of intelligent” without being sentient. But we don’t need to continue from there, there’s any number of people ready to pile on at that point and say that it’s “all semantics anyway” or start deconstructing sentience.
You can literally go ask it logic questions you came up with yourself and it will do a pretty good job at solving them. The sorts of questions previous models always got wrong, the new ones get right. It can write working computer code. This talking point hasn’t made sense for years.
By new, how new are we talking? Because I haven’t tested them in a couple of months, but it failed logic questions I gave it before
The point is it keeps passing goalposts for intelligence. It feels like people want to move those goalposts to wherever humans have capability and AI does not.
As I was writing the above crossed-out comment I did come to see your POV more closely, and I guess in a way you’re right, if we consider it narrow intelligence in terms of understanding and using language, because it is really good at language tasks. But we expect artificial intelligence to be perfect for some reason, and I don’t know whether that’s right or not, and that also might be what bothers you about the shifting goalposts.
I can disprove what you’re saying with four words: “The Chinese Room Experiment”.
Imagine a room where someone who doesn’t understand Chinese receives questions in Chinese and consults a rule book to send back answers in Chinese. To an outside observer, it looks like the room understands Chinese, but it doesn’t; it’s just following rules.
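The room’s rule book can be caricatured in a few lines of code: a pure lookup table that produces fluent-looking answers it in no sense understands. The phrasebook entries below are invented for illustration only:

```python
# A caricature of Searle's room: symbols in, symbols out, zero understanding.
# The entries are made up for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def room(question: str) -> str:
    # The operator just matches symbols against the book; no meaning involved.
    return RULE_BOOK.get(question, "请再说一遍。")  # fallback: "Please repeat."

print(room("你好吗？"))  # looks fluent from outside; inside it's pure lookup
```

A real LLM replaces the literal lookup table with learned statistical patterns over billions of examples, but the philosophical point being made in the thread is the same: convincing output by itself doesn’t demonstrate understanding.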
Similarly, advanced language models can answer complex questions or write code, but that doesn’t mean they truly understand or possess rationality. They’re essentially high-level “rule-followers,” lacking the conscious awareness that humans have. So, even if these models perform tasks and can fool humans into believing they’re intelligent, it’s not a valid indicator of genuine intelligence.
That argument is no argument, since we humans, no matter how advanced our language is, still follow rules. Without rules in language, we would not understand what the other person was saying. Granted, we learn these rules through listening, repeating, and using what sounds right. But the exact same thing is happening with LLMs: they learn from the data we feed them. It’s not like we give them the rules of English and then they can only understand English; the first time they come into contact with the concept of grammar is when they get data, most often in English, that tells them about grammar.

We all follow rules. That’s exactly how we work. We’re still a lot smarter than LLMs, though, so it might seem as if they are vastly inferior. And while I do believe that most complex organisms have “deeper thought”, in that our thought has more layers and is generally fitter for the real world, there is no way I’m not gonna call a neural network that can answer complex questions, which may never have been asked in the history of mankind, an AI. Because it is very much intelligent. It’s just not alive. We humans tend to think of ourselves too favorably. “We” are just a neural network, just a different kind. Just like a computer is similar to the human brain, but a wire is not. Where do you draw the line?
Now I’m worried about the AI being programmed to emotionally manipulate people
It shows. The selection is not very good.
I mean, AIs are just uneducated slaves that are fed info and don’t check anything.
They eventually fixed it