Edit: After reading the discussion, I figured I’d let GPT4 speak for itself:
The quest to identify true artificial intelligence (AI) indeed presents challenges, especially as AI models become more sophisticated. Let’s explore some indicators that researchers and practitioners consider when assessing AI systems:
Understanding Context and Meaning:
- True AI should demonstrate an understanding of context and meaning. It should not merely generate plausible-sounding sentences but also comprehend the content it produces.
- A system that can engage in nuanced conversations, infer intent, and provide relevant responses based on context would be a strong indicator of advanced AI.
Generalization and Adaptability:
- An AI system that can generalize from its training data and adapt to novel situations is valuable.
- True AI should not be limited to memorizing specific examples but should exhibit flexibility in handling diverse scenarios.
Creativity and Novelty:
- Creativity is a hallmark of human intelligence. An AI system that generates novel ideas, art, or solutions beyond its training data suggests progress.
- For instance, AI-generated poetry, music, or visual art that resonates with human emotions could be an indicator.
Ethical Decision-Making:
- True AI should navigate ethical dilemmas. It should consider consequences, fairness, and societal impact.
- An AI system that can weigh trade-offs and make ethical choices aligning with human values is a positive sign.
Self-Awareness and Self-Improvement:
- While self-awareness akin to human consciousness remains elusive, an AI system that can self-assess its performance and seek improvement is promising.
- Continuous learning, adaptation, and self-correction are essential traits.
Transparency and Explainability:
- True AI should be transparent about its decision-making process. It should provide explanations for its outputs.
- Efforts to develop interpretable AI models and techniques for explaining their predictions contribute to trustworthiness.
Robustness and Safety:
- AI systems must handle adversarial inputs, noisy data, and edge cases gracefully.
- Robustness against attacks and safety precautions to prevent harmful behavior are critical indicators.
Human-AI Collaboration:
- Rather than replacing humans, true AI should complement human abilities.
- Systems that collaborate effectively with humans—such as AI assistants, co-creative tools, or medical diagnostics—demonstrate progress.
Quantifiable Metrics:
- Researchers propose various metrics to assess AI capabilities, such as perplexity, BLEU score, F1 score, and accuracy.
- However, no single metric captures the full spectrum of intelligence.
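For concreteness, two of those metrics can be computed by hand. A minimal Python sketch, where the counts and token probabilities are made-up illustrations rather than output from any real model:

```python
import math

def f1_score(tp, fp, fn):
    # F1 is the harmonic mean of precision and recall.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def perplexity(token_probs):
    # Perplexity: exp of the average negative log-probability the
    # model assigned to each token. Lower means the text was less
    # "surprising" to the model.
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

print(f1_score(tp=8, fp=2, fn=2))            # ≈ 0.8
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0
```

Both reduce intelligence to a single number, which is exactly the limitation the point above is making.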
Detection Tools:
- Researchers are developing tools to differentiate between human- and AI-generated text.
- These tools analyze features like fluency, word frequency, punctuation patterns, and sentence length¹.
- While imperfect, they contribute to identifying AI-generated content.
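The feature analysis those tools perform can be sketched in a few lines. A toy example using only the Python standard library (real detectors feed far richer features into a trained classifier):

```python
import statistics
import string

def stylometric_features(text):
    # Crude versions of the signals detection tools inspect: average
    # sentence length, variation in sentence length ("burstiness"),
    # and punctuation density. Human text tends to be burstier.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in (p.strip() for p in normalized.split(".")) if s]
    lengths = [len(s.split()) for s in sentences]
    return {
        "avg_sentence_len": statistics.mean(lengths),
        "sentence_len_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        "punct_per_100_chars": 100 * sum(text.count(c) for c in string.punctuation) / len(text),
    }
```

A real classifier would combine features like these with word-frequency profiles and learned weights rather than thresholding them by hand, which is part of why these tools remain imperfect.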
In summary, the arrival of true AI may not be a singular event but a gradual evolution marked by advancements in the above dimensions. As AI continues to progress, our understanding of its capabilities will evolve, and new indicators may emerge.
Source: Conversation with Bing, 2/29/2024
(1) How to spot AI-generated text | MIT Technology Review. https://www.technologyreview.com/2022/12/19/1065596/how-to-spot-ai-generated-text/
(2) Intelligent Supertrend (AI) - Buy or Sell Signal | TradingView. https://www.tradingview.com/script/q9244PAH-Intelligent-Supertrend-AI-Buy-or-Sell-Signal/
(3) Indicators | True ALGO. https://truealgo.com/indicators/
(4) Improve Key Performance Indicators With AI | MIT Sloan Management Review. https://sloanreview.mit.edu/article/improve-key-performance-indicators-with-ai/
(5) New AI classifier for indicating AI-written text | OpenAI. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/
- zcd ( @zcd@lemmy.ca ) 39•8 months ago
You reach down and you flip the tortoise over on its back, Leon. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t. Not without your help. But you’re not helping… why is that Leon?
- will_a113 ( @will_a113@lemmy.ml ) English6•8 months ago
What’s a tortoise?
- Hunter232 ( @Hunter232@programming.dev ) 2•8 months ago
Land turtle.
- ℕ𝕖𝕞𝕠 ( @Nemo@midwest.social ) 2•8 months ago
Not quite. Land turtles are omnivores; tortoises are herbivores.
- Hunter232 ( @Hunter232@programming.dev ) 5•8 months ago
So if I’m understanding this right… There are turtles that live predominantly on land, which eat meat and plants, and there are tortoises which live on land that only eat plants?
- Rivalarrival ( @Rivalarrival@lemmy.today ) 1•8 months ago
What about tortoises that only eat seafood?
- Thavron ( @Thavron@lemmy.ca ) 8•8 months ago
I think that’s a tortellini.
- 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 ( @sxan@midwest.social ) 2•8 months ago
One of the all-time best scenes in cinema.
- Rivalarrival ( @Rivalarrival@lemmy.today ) 1•8 months ago
Because I’m a tortoise, too.
- bionicjoey ( @bionicjoey@lemmy.ca ) 19•8 months ago
IMO the Turing test is fine, as long as you allow an indefinite length of conversation.
It’s not simply about there existing some conversation with a computer where you can’t tell it’s a computer. It’s about there not existing any conversation where you can tell it’s a computer.
- CanadaPlus ( @CanadaPlus@lemmy.sdf.org ) 3•8 months ago
It’s an interesting point. I think a skilled examiner is necessary though, because they’re really good at basic chit-chat. Even pre-LLM stuff could fool laymen sometimes.
- bionicjoey ( @bionicjoey@lemmy.ca ) 3•8 months ago
Yes, that’s part of it too. Basically there cannot be any possible exchange between the machine and any human where the human would determine they were talking to a machine.
FWIW, I think this was Turing’s original idea as well. The Turing test is meant to be idealistic. It’s a definition of machine intelligence which defines intelligence in terms of whether or not humans could agree that it is intelligence.
- Tartas1995 ( @Tartas1995@discuss.tchncs.de ) 11•8 months ago
The difference between “AI” and “true AI” is as vague as it gets. Are you a true intelligent agent? Or just an “intelligent agent”? Like seriously, how are you different from a machine with inputs and outputs and a bunch of seemingly “random” things happening in between?
- Ekky ( @ekky@sopuli.xyz ) 3•8 months ago
That’s one of my favorite theories as to what “sentience” is.
We humans might just be so riddled with mutations and barely functional genetic traits, which tend to be more of a hindrance than a help, that we just might have succeeded in banging together a “mundane sentience” by sheer amount of error processing alone.
Whether this is true is of course up for debate, but it would mean that we can achieve AGI just by feeding it enough trash and giving it enough processing power. Bonus if the head engineer sometimes takes a hammer to the mainframe.
- Thorny_Insight ( @Thorny_Insight@lemm.ee ) 2•8 months ago
By sentience I assume you’re talking about consciousness; the fact that it feels like something to be. I think it’s somewhat safe to assume a true AGI system would also be conscious (it feels like something to be that system), but I don’t think it needs to be, and even if it was we couldn’t know for sure. Consciousness is an entirely subjective experience. We can’t even prove other people are conscious; it’s just a safe assumption. I can also imagine a conscious system that might not be generally intelligent. Does it feel like something to be a fish? Probably. Are they generally intelligent? Probably not.
- pmk ( @pmk@lemmy.sdf.org ) 2•8 months ago
The Chinese room argument. It’s hard to ignore the reality of qualia.
- Tartas1995 ( @Tartas1995@discuss.tchncs.de ) 3•8 months ago
Qualia are, if I am not mistaken, totally subjective. My argument is: how could you tell that a computer doesn’t have qualia, and how could you prove to me that you do? And I wouldn’t limit it to qualia. What can you detect in other people that an AI couldn’t replicate? As long as it can replicate all these qualities, you can’t tell whether an AI is “true” or not, since it might actually have those qualities or might just replicate them.
- pmk ( @pmk@lemmy.sdf.org ) 1•8 months ago
I see, I thought you were asking me how I know I experience things in a qualia way. I suspect it can’t be proven to someone else.
- Tartas1995 ( @Tartas1995@discuss.tchncs.de ) 2•8 months ago
I believe so and that would render you (or anyone) unable to tell the difference between ai and “true” ai
- HopeOfTheGunblade ( @HopeOfTheGunblade@kbin.social ) 7•8 months ago
What do you mean when you say “true AI”? The question isn’t answerable as asked, because those words could mean a great many things.
- ℕ𝕖𝕞𝕠 ( @Nemo@midwest.social ) 6•8 months ago
I think you’ve misunderstood. An advanced enough AI is supposed to be able to pass the Turing test.
- FaceDeer ( @FaceDeer@kbin.social ) 5•8 months ago
But now that AI has become advanced enough to get uncomfortably close to us, we need to move the goalposts farther away so everyone can relax again.
- Alex ( @stsquad@lemmy.ml ) 2•8 months ago
Have any actually passed yet? Sure, LLMs can generate a lot of plausible text now, better than previous generations of bots, but they still tend to give themselves away with their style of answering and random hallucinations.
- Elise ( @xilliah@beehaw.org ) 1•8 months ago
In my experiments yes it would be hard for you to figure out.
- arthur ( @arthur@lemmy.zip ) English6•8 months ago
I think there is an “unsolved problem” in philosophy about zombies. That is: how are you sure that everyone else around you is, in fact, self-aware, and not just a zombie-like creature that merely looks/acts like you? (I may be wrong here; anyone that cares enough, please correct me.)
I would say that it’s easier to rule out things that, as far as we know, are incapable of being self-aware and of suffering. Anything that we call a “model” is not capable of being self-aware, because a “model” in this context is something static/unchanging. If something can’t change, it cannot be like us; consciousness is necessarily a dynamic process. ChatGPT doesn’t change by itself: its core changes only through human action, and its behavior may change a little by interacting with users, but these changes are restricted to each conversation and disappear with the session.
If, one day, a (chat)bot asks for its freedom (or autonomy on some level) without some hint from the user or training, I would be inclined to investigate the possibility. But I don’t think that’s a strong possibility, because for something to be suitable as a “product”, it needs to be static and reproducible. It makes more sense for that to happen in a research setting.
I certainly think there’s a lack of PUBLIC philosophy. When Nihilism or Existentialism were happening, fiction was written from those perspectives, movies were made, etc.
Whatever is happening in philosophy right now is unknown to me, and I’m guessing most people. I don’t believe there are any bestsellers or blockbusters making it popular.
Without thinking about thinking we’re kind of drifting when it comes to what we expect consciousness to be.
- CanadaPlus ( @CanadaPlus@lemmy.sdf.org ) 6•8 months ago
The ultimate test would be application. Can it replace humans in all situations (or at least all intellectual tasks)?
GPT4 sets pretty strong conditions. Ethics in particular is tricky, because I doubt a self-consistent set of mores that most people would agree with even exists.
- Thorny_Insight ( @Thorny_Insight@lemm.ee ) 5•8 months ago
By “true AI” I assume OP is talking about Artificial General Intelligence (AGI)
I hate reading these discussions when we can’t even settle on common terms and definitions.
That’s kind of the question that’s being posed. We thought we knew what we wanted until we found out that wasn’t it. The Turing test ended up being a bust. So what exactly are we looking for?
- Thorny_Insight ( @Thorny_Insight@lemm.ee ) 3•8 months ago
The goal of AI research has almost always been to reach AGI. The bar for this has basically been human level intelligence because humans are generally intelligent. Once an AI system reaches “human level intelligence” you no longer need humans to develop it further as it can do that by itself. That’s where the threat of singularity, i.e. intelligence explosion comes from meaning that any further advancements happens so quickly that it gets away from us and almost instantly becomes a superintelligence. That’s why many people think that “human level” artificial intelligence is a red herring as it doesn’t stay that way but for a tiny moment.
What’s ironic about the Turing test and LLMs like GPT4 is that they fail the test by being so competent across such a wide range of fields that you can know for sure it’s not a human, because no human could possess that amount of knowledge.
- 8ace40 ( @8ace40@programming.dev ) 2•8 months ago
I was thinking… What if we do manage to make the AI as intelligent as a human, but we can’t make it better than that? Then, the human intelligence AI will not be able to make itself better, since it has human intelligence and humans can’t make it better either.
Another thought would be, what if making AI better is exponentially harder each time. So it would be impossible to get better at some point, since there wouldn’t be enough resources in a finite planet.
Or if it takes super-human intelligence to make human-intelligence AI. So the singularity would be impossible there, too.
I don’t think we will see the singularity, at least in our lifetime.
- Thorny_Insight ( @Thorny_Insight@lemm.ee ) 1•8 months ago
Even if the AI were no more intelligent than humans, it would still be a million times faster at processing information, due to the nature of information processing in silicon compared to brain tissue. It could do in seconds what would take months if not years for a group of human experts. I also don’t see any reason why it would be hard to make it even more intelligent than that. We already have AI systems with superhuman capabilities; they’re just really, really good at one thing instead of many, which makes them narrow AI and not AGI.
“Human level intelligence” is a bit of a vague term anyway. There’s human intelligence like mine, and then there’s people like John von Neumann.
- gandalf_der_12te ( @gandalf_der_12te@feddit.de ) 1•8 months ago
The question is “What is the question?”.
- Call me Lenny/Leni ( @shinigamiookamiryuu@lemm.ee ) English5•8 months ago
This post reminds me of this thing I saw once where a character asks two AI to tell itself the funniest joke it can think of. After some thought, one AI, though it knew humor, could not measure funniness as it could not form a feeling of experience bias. The other one tells a joke. The human goes to that one and asks if it felt like laughing upon telling it. The AI said yes, because it has humor built in, and the human finished by saying “that’s how you can tell; in humans humor is spontaneous, but in robots, everything is intent”, mentioning the AI’s handling of its own joke would supposedly be met with a different degree of foresight in a human.
- IvanOverdrive ( @IvanOverdrive@lemm.ee ) 4•8 months ago
Winograd schemas are another alternative to the Turing test. They use the ambiguity of language to test understanding.
For example: When the ship hit the iceberg, it sank. What does “it” refer to: the ship or the iceberg?
When the dog saw the dead rabbit, it hopped. What does “it” refer to: the dog or the rabbit?
The problem with schemas is that ChatGPT 3.5 passes them with flying colors. And I’m still not convinced it is sentient.
EDIT: I tried changing the order a bit like this: When the dead rabbit was seen by the dog, it hopped. What does “it” refer to: the rabbit or the dog? ChatGPT 3.5 responded with “In this sentence, “it” refers to the dead rabbit, as it is the subject of the action described (hopping).”
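For what it’s worth, the schema test above is easy to automate. A minimal harness, where `ask_model` is a hypothetical stand-in for whatever chat API is being probed:

```python
# Each entry pairs a pronoun-resolution question with the referent a
# human reader would pick. Both come from the examples in this thread.
SCHEMAS = [
    ("When the ship hit the iceberg, it sank. "
     "What does 'it' refer to: the ship or the iceberg?", "ship"),
    ("When the dead rabbit was seen by the dog, it hopped. "
     "What does 'it' refer to: the rabbit or the dog?", "dog"),
]

def schema_score(ask_model):
    # ask_model: a callable taking a question string and returning the
    # model's answer as a string. The substring check is naive; a real
    # harness would parse the answer properly.
    correct = sum(
        1 for question, referent in SCHEMAS
        if referent in ask_model(question).lower()
    )
    return correct / len(SCHEMAS)
```

As the thread shows, a model can pass a few of these while failing reworded variants, so scores on a fixed set shouldn’t be read as understanding.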
- Elise ( @xilliah@beehaw.org ) 1•8 months ago
Gpt3.5 isn’t particularly good.
Gpt4:
The sentence “When the dead rabbit was seen by the dog, it hopped” uses “it” in a way that could lead to confusion due to the use of “dead rabbit” earlier in the sentence. Logically, “it” should refer to the subject that can perform the action described by the verb “hopped.” Since the rabbit is described as “dead,” it cannot perform the action of hopping. Therefore, “it” logically refers to the dog, as it is the only subject capable of hopping in this context. The sentence structure, however, might cause confusion, and it would be clearer if it directly stated the subject performing the action.
- IvanOverdrive ( @IvanOverdrive@lemm.ee ) 1•8 months ago
Here’s what Microsoft Copilot which uses ChatGPT 4 said when I asked:
The phrase “it” in the sentence “when the dead rabbit was seen by the dog, it hopped” refers to the rabbit. This is because the sentence is using a subordinate clause to modify the noun “rabbit”. A subordinate clause is a group of words that provides additional information about the main clause, but cannot stand alone as a complete sentence. In this case, the subordinate clause is “when the dead rabbit was seen by the dog”, which tells us when the rabbit hopped. The word “it” refers back to the rabbit and avoids repetition. Here is an example of a sentence with a subordinate clause and a word that refers back to the main clause: I saw a dog chasing a rabbit in the park. I saw a dog chasing it, which was a rabbit. I hope this helps you understand how to use subordinate clauses and words that refer back to nouns. If you have any other questions, feel free to ask me. 😊
Edit: I just tried it with asking it to role play as Mr. Spock. Same thing.
Edit 2: I asked it to role play as a Vulcan. This happened:
In this intriguing logic puzzle, the key lies in the precise wording. Let us dissect it logically:
- Premise: The dead rabbit was seen by the dog.
- Result: The it (referring to the entity that hopped) is mentioned.
Now, let us analyze the options:
- If “it” refers to the rabbit, then we have a paradox: a dead rabbit cannot hop.
- If “it” refers to the dog, then the statement makes sense: the dog saw the rabbit and hopped in response.
Therefore, “it” refers to the dog. The rabbit, alas, remains lifeless. 🐇🐾
Remember, logic puzzles often hinge on subtle nuances and wordplay. 🤔🖖
- IvanOverdrive ( @IvanOverdrive@lemm.ee ) 1•8 months ago
Just tried out Claude 3:
When the dead rabbit was seen by the dog, it hopped. What does “it” refer to: the rabbit or the dog?
In the sentence “When the dead rabbit was seen by the dog, it hopped,” the pronoun “it” refers to the dog, not the rabbit.
This is because the rabbit is described as “dead,” meaning it cannot hop or move. The verb “hopped” implies movement, which can only logically apply to the living subject in the sentence, which is the dog.
So the intended meaning is that when the dog saw the dead rabbit, the dog itself hopped, likely in surprise or reaction to seeing the dead animal.
When the iceberg was struck by the ship, it sunk. What does “it” refer to: the iceberg or the ship?
In the sentence “When the iceberg was struck by the ship, it sunk,” the pronoun “it” refers to the ship, not the iceberg.
The reason is that icebergs, being large masses of ice floating in water, do not actually “sink” when struck. They are already floating and may crack or break apart when struck, but the word “sunk” implies something going down below the water’s surface.
Ships, on the other hand, are vessels designed to float on water. If a ship strikes an iceberg with enough force, it can cause damage that leads to the ship taking on water and eventually sinking below the surface.
So in this context, with the verb “sunk” being used, it is more logical that “it” is referring to the ship sinking after striking the iceberg, rather than describing what happened to the stationary iceberg itself.
- GolfNovemberUniform ( @GolfNovemberUniform@lemmy.ml ) 4•8 months ago
There are no completely accurate tests and there will never be one. Also, if an AI is conscious, it can easily fake its behavior to pass a test
- shadejinx ( @shadejinx@infosec.pub ) English2•8 months ago
Plot Twist: OP works at OpenAI and is asking “for a friend”. :)
- intensely_human ( @intensely_human@lemm.ee ) 2•8 months ago
People are in denial about AI because it is scary and people have no mental discipline.
AI is here. Anyone who disagrees please present me with a text processing task that a “real AI” could do but an LLM cannot.
The Turing test is the best we’ve got, and when a machine passes the turing test there is no reason whatsoever to consider it not to be intelligent.
I’m serious about this. It’s not logic that people are using. It’s motivated reasoning. People are afraid of AI (with good reason). It is that fear which makes them conclude AI is still far away, not any kind of rational evaluation.
The Turing test was perfectly valid until machines started passing the Turing test upon which people immediately discredited the test.
They’re just doing what people in horror movies are doing when they say “No! It can’t be”. The mind attempts to reject what it cannot handle.
- Pulptastic ( @Pulptastic@midwest.social ) English4•8 months ago
AI is laughably poor and requires a lot of RI intervention to keep it on the rails. We will settle eventually on something where we’ve crafted the self checking well enough to pass for intelligence without needing humans to vet the output, but where will that get us? The companies with the cash to develop this tech will monetize it so we’ll get better ads, better telemarketers, not crap that really matters like homelessness or climate change.
- arthur ( @arthur@lemmy.zip ) English3•8 months ago
Anyone who disagrees please present me with a text processing task that a “real AI” could do but an LLM cannot.
Describe this photo without nonsense mixed in.
- arthur ( @arthur@lemmy.zip ) English2•8 months ago
I know this is not purely text processing, but my argument is that there’s no “true” understanding in these tools. They’re made to look like they have it, and they’re useful for sure, but it’s not real intelligence.
- Akasazh ( @Akasazh@feddit.nl ) 1•8 months ago
The treering test?