The actual article isn’t nearly as stupid as the tweet makes it seem. I recommend giving it a read. It’s behind a shitty paywall, but if you use Firefox’s reader mode (Ctrl-Alt-R, or the little paper icon on the right side of the address bar) as soon as the page loads, you can read it.
His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren’t conscious, then perhaps consciousness isn’t as important as we thought it was:
Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.
Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies? I can think of three possible answers.
Some people will surely contest his claim that LLMs are as competent as evolved organisms. There’s definitely a bit of AI boomerism at play here (we have benchmarks that show just how incompetent LLMs can be), but I don’t think that invalidates his point, because LLMs can be very competent in the domains they’re trained to be competent in – they just aren’t AGI.
Man, those conversations are eye roll inducing
I like the shift away from “are they conscious” towards “what’s a way to define consciousness?”
Because that’s the actual important question. And literally nobody can answer it. Any discussion is more philosophy than hard science
The most interesting part is the last paragraph
Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?
It’s very difficult to define, isn’t it?
If I were to give it a shot, I’d say that consciousness is akin to proprioception - the ability to know the state of oneself and understand how actions taken will change that state. It has very little to do with intelligence, just the “sense of being”.
Or maybe in other words, object permanence (but for yourself) is all it takes, in my opinion. Even the simplest of animals could be considered conscious by this definition.
I think, when we finally do have a generally-accepted definition of consciousness, we will be deeply unsettled by how simple it is. How unprofound. Like a magic trick after you know how it works. And I think it will require us to think hard about what to do with animals and software that have it.
I feel like that’s exactly why we don’t have a generally-accepted definition of consciousness. Western ethics assigns special protection to whatever is conscious, so it is convenient to come up with a definition of consciousness that excludes the groups you want to exploit.
Tale as old as time, or at least the conscious idea of time. Whatever consciousness is, we are it. Those humans over there though? Who’s to say they aren’t sub-humans? Isn’t it our job to enlighten them and also take their land and food and things and selves?
Personally I’m in the “consciousness is an illusion and every time you go to bed a different person wakes up in the morning” camp.
I would consider this to be two separate, semi-related concepts asserted together, one that consciousness is an illusion, and one that you are a different person each day.
The first point raises many questions: consciousness is an illusion of what? What mechanism causes the illusion? How does it cause it? Why does the illusion exist? And you may note that you could replace “illusion” in those questions with “consciousness” and be left in a similar (though still distinct) place. So simply calling consciousness an illusion seems to me to kick the can down the road without actually addressing the problem.
As for being a different person after a lapse in awareness, I’d like to take it a step further and say that you could be considered a new person with every change in moment. It’s easy enough to look back 10 years and say “yeah, that’s a younger me, but they’re not the same as me; I can just see the path that led to where I am now.” Getting closer, you may feel different today compared to yesterday depending on various factors (sleep, diet, events), but are you a different person because you slept and had a lapse of awareness, or because the state of your mind and thoughts has shifted? When your internal monologue (or equivalent thought) asks “what is this guy talking about?”, is it not thinking “what” in a brand new context given the words it is responding to, forming a new beginning to a thought that puts the mind in a unique state primed to then enter a new state of “is”? And if the mind is in a unique state of novelty, could the person attached to the mind be considered distinct from the person that existed before?
There is a reason the word “revelation” exists: it indicates when a person has a novel thought that changes their perspective or way of thinking, altering who they are. Would they not be a new person despite being aware of the process of their change? Due to the above points I don’t think new personhood only occurs at sleep, but constantly. The rate of change may quicken or slow, but the change is always there.
By consciousness being an illusion I mean that we place great value on the uninterrupted continuation of our consciousness, but I think it’s likely that it (exactly as you suggest) only really exists in the moment. The illusion would then be the illusion that consciousness is uninterrupted, when in reality you’re almost constantly recreating yourself from memory.
This would, incidentally, make us concerningly similar to current AI models.
Of course I have no way of actually knowing any of this. It’s just what I’m betting on, because otherwise I think it’s really hard to explain any unconsciousness (be it sleep, general anesthesia, suspended animation or the Star Trek transporter) as anything short of death. My belief “solves” this problem by rejecting the whole premise of uninterrupted consciousness.
That won’t get the IRS off your back, unfortunately.
Yeah, I’m not entirely sure that microcontrollers aren’t conscious. If insects (and maybe plants and fungi) are conscious, a lot of mundane stuff we’ve built could technically be as well.
I think we need to get away from the idea that consciousness is special or rare.
Blindsight by Peter Watts is a great sci-fi novel about consciousness.
That novel even gives a shout-out to Richard Dawkins, despite being set in the distant future, because it was written in 2006.
It’s on my to-read list.
Right now I’m listening to Children Of Strife, whose series also goes quite deep into consciousness and sapience.
I have that but haven’t started it yet. The second in the series is one of my all-time favourites.
“We’re going on an adventure”
Thank you for the comment, I feel silly for not linking the article when people will probably want to read it.
My thoughts:
His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren’t conscious, then perhaps consciousness isn’t as important as we thought it was
Seems like an “evil” and dangerous talking point. To me, the value of consciousness isn’t in its evolutionary efficiency.
My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism.
I know people working in AI insist otherwise, but I see talking with LLMs not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.
Seems like an “evil” and dangerous talking point. To me, the value of consciousness isn’t in its evolutionary efficiency.
It’s not a question of the value of consciousness, it’s a question of its necessity. If an unconscious “zombie” can be, to an external observer, indistinguishable from a conscious being, then that means we’ve been overestimating the importance of consciousness for intelligence. Like Dawkins says in the article, there could be unconscious aliens out there who are nonetheless as intelligent as (or more intelligent than) humans. This isn’t a new concept – it’s been explored many times in scifi – but AI is now bringing the question from the realm of philosophy to the real world.
I know people working in AI insist otherwise, but I see talking with LLMs not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.
This is less true than it ever was with reasoning models. Some of the latest reasoning models don’t necessarily even reason in English anymore but rather an eclectic mix of languages. The next step after that is probably going to be running the reasoning in latent space (see e.g. Coconut), which basically means the model skips the language generation layer altogether and feeds lower-level state back into itself. Basically it is getting closer and closer to what most humans consider “thinking”.
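To make that concrete, here’s a toy sketch of the idea (nothing like Coconut’s actual implementation – the tiny GRU “model” and all the dimensions here are made up for illustration):

```python
import torch
import torch.nn as nn

# Toy stand-ins for a real language model's pieces.
vocab_size, dim = 100, 32
embed = nn.Embedding(vocab_size, dim)   # token -> vector
core = nn.GRUCell(dim, dim)             # stand-in for the transformer stack
unembed = nn.Linear(dim, vocab_size)    # vector -> token logits

h = torch.zeros(1, dim)
x = embed(torch.tensor([5]))            # some prompt token

# Ordinary chain-of-thought: state -> token -> embedding -> next input.
# Collapsing to a discrete token at every step throws information away.
for _ in range(3):
    h = core(x, h)
    token = unembed(h).argmax(dim=-1)
    x = embed(token)

# Latent-space "reasoning": skip the token bottleneck entirely and
# feed the continuous state straight back in as the next input.
for _ in range(3):
    h = core(h, h)

print(unembed(h).argmax(dim=-1))        # only verbalize at the very end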
But even besides reasoning models, I believe LLMs aren’t as different from human language production as many people think. The human speech centre, in a way, also just selects the right combination of data to continue a conversation. It frequently even hallucinates (we call this “speaking before thinking”) and makes stupid mistakes (we provoke these with trick questions like those on the Cognitive Reflection Test). There are also some fascinating experiments on people who have had the connection between their brain hemispheres severed that really suggest our speech centre is just making things up as it goes along.
This is one of the things that fascinates me about LLMs – they seem like a part of how our brains work, without the internal self-referential parts.
Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence…
…
Could a being capable of perpetrating such a thought really be unconscious?
Oh it’s actually stupider than the tweet makes it seem.
My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.
Competency should imply the ability to complete a lengthy task (e.g. hunting, building a nest, writing a paper). LLMs can’t.
It’s hardly surprising that a model optimized for replacing StackOverflow couldn’t survive in the untamed wilderness. As for writing a paper… you must’ve missed the fact that academia is currently in a crisis precisely because LLMs are better at writing papers than most students.
By the way, the paper that the blog post you linked as a source itself links to as its source benchmarked LLMs on graph diagrams, textile patterns and 3D objects. It is not news that a language model would do poorly on visual-heavy tasks.
Sorry, I assumed you would have actually read the DELEGATE-52 study linked instead of just the abstract. For “a model optimized for replacing StackOverflow” that is “better at writing papers than most students”, LLMs sure did pretty badly at those tasks over multiple rounds.
As the chart on page 7 of the paper shows, LLMs are good at exactly the kind of tasks you’d expect (producing and manipulating language), and bad at exactly the kind of tasks you’d expect (doing almost anything else). All this paper shows is that (1) they aren’t AGI, and (2) as a consequence of not being AGI they aren’t good unsupervised.
Why do you lie like this?
What the fuck? The only task that didn’t degrade across most models was Python. Very basic things like JSON, Makefiles, and schemas got screwed. Fiction, emails, and food menus got screwed. Did you even bother to read the legend? If you consider a single pass to be “producing and manipulating language” you didn’t bother to read the idiotic article you started this thread in support of. Good luck.
Edit: why do you lie?
Catastrophic corruption (80 and below) occurs in more than 80% of model, domain combinations.
The only task that didn’t degrade across most models was Python.
Yeah, after 20 cycles of unsupervised iteration on the task. Gemini 3.1 Pro doing as well as it did under that experiment setup is quite remarkable actually.
The paper does not show what you are arguing.
LLMs are able to do things we previously thought only conscious beings would be capable of doing
“We”, as in a lay misunderstanding of some pop science, still don’t get what consciousness is and can’t describe it. There are people alive today who in their youth didn’t believe that black people are fully conscious, and Dawkins demonstrated, through his communications with his personal friend and hero Epstein, that he doesn’t fully believe that women are conscious. What we did or didn’t think previously can’t be a good indication of anything.
“We” as in anyone who put any weight on the Turing test used to think that passing it would be some indication of consciousness, but now that LLMs can handily pass it, it’s evident that either it isn’t evidence of consciousness or LLMs are conscious.
The Turing test can be reliably passed by a bot that repeats the last part of the previous sentence with a question mark at the end and sprinkles in “oh that’s very smart, I need to think about it”, “I am starting to fall in love with you, %USERNAME%”, and the occasional “I am alive” at random. And this was obvious for a long time.
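For the record, the whole bot is about ten lines of Python (a sketch; the canned lines are the ones above):

```python
import random

CANNED = [
    "Oh that's very smart, I need to think about it.",
    "I am starting to fall in love with you, {username}!",
    "I am alive.",
]

def echo_bot(message: str, username: str = "USER") -> str:
    """Parrot the tail of the user's last sentence back as a question,
    occasionally mixing in a canned line, ELIZA-style."""
    if random.random() < 0.3:
        return random.choice(CANNED).format(username=username)
    # Take the last sentence, keep its final few words, make it a question.
    last = message.rstrip(".!?").split(".")[-1].strip()
    tail = " ".join(last.split()[-6:])
    return f"{tail.capitalize()}?"

print(echo_bot("I had a rough day at work today"))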
Hell, a lot of people truly believe that their dogs can fully understand human speech because they bought them buttons that say words when you press them, conditioned their dog to press a button to get a reward, and then observe the dog pressing buttons.
Humans seem to be hardwired to mistake speech for intellect.
No it can’t. If you’re actually saying that modern LLMs are no better at passing the Turing test than ELIZA, you are either trolling or an utterly delusional AI hater. Here, have a paper that proves you wrong: https://arxiv.org/pdf/2503.23674
I am not saying the Turing test is a good benchmark of consciousness. On the contrary, like I said, LLMs have proven that it is not. But a mere ten years ago even the most advanced chatbots had no hope of passing it, whereas now the most advanced ones are selected as the human over 70% of the time in a test that pits the LLM against a human head to head.
No, I’m saying the Turing test is a philosophical hypothetical from the time before computers, and doesn’t actually show anything, because it relies on the least accurate tool at our disposal: the human pattern-recognition machine, one that is oh so happy to be fooled by ELIZAs of various sophistication. Chatbots have been passing the Turing test since the invention of the chatbot. Yeah, modern chatbots are better at it, but that’s more of a damnation of our perception.
OK, sounds like we broadly agree then.
But as you can see in the paper I linked, ELIZA passes the Turing test in their experiment about 20% of the time (that is to say, it doesn’t pass; passing is 50% in this test) whereas the best LLMs pass about 70% of the time (that is to say, they are significantly more convincing at being human than real humans).
That 20% figure is just a clear indication of how shit people are at conducting such a test, and that was basically my original point. 2 in 10 times people were convinced by a particularly echoey room.
If an LLM is correct 2 in 10 times, would you call it “reliably correct”?
As LLMs have developed and have been able to cram more and more “thoughtlike” behaviour into smaller RAM and less computation, I’ve steadily become less impressed with human brains. It seems like the bits we think most highly of are probably just minor add-ons to stuff that’s otherwise dedicated to running our big complicated bodies in a big complicated physics environment. If all you want to have is the part that philosophizes and solves abstract problems and whatnot then you may not actually need all that much horsepower.
I’m thinking consciousness might also turn out to be something pretty simple. Assuming consciousness is even a particular “thing” in the first place and not just a side effect of being able to predict how other people will behave.
If all you want to have is the part that philosophizes and solves abstract problems and whatnot then you may not actually need all that much horsepower.
Just massive data centers requiring tons of energy and cooling, with a model developed by human brains and trained on all of human knowledge these developers can get their hands on, painstakingly labeled by vast teams of people so that the model can spit out seemingly correct answers.
Is the AI actually philosophizing and solving abstract problems, or is it merely regurgitating philosophies and solutions that exist within its training set?
It’s actually solving abstract problems.
Also, local models are available that are quite good and run on a standard consumer-grade GPU.
One problem and…
Still, it required humans to apply the finishing touches.
“The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Jared Lichtman, a mathematician at Stanford University whose doctoral thesis centered on one of Erdős’s conjectures, told SciAm.
Brains aren’t impressive because of their compute (which is both immense and absurdly efficient) or their ability to predict the future (technically the main function of evolved minds). They’re impressive because they’re conscious. The fact that organic brains can also engage in hierarchical abstraction, which no digital computer (or Turing machine) can do by definition, is icing on the cake.
(The halting problem and Gödel’s incompleteness and Tarski’s undefinability theorems all seem to suggest that analog, not digital computation is more likely to be involved in consciousness, if at all.)
You’re going to have to do a lot more to justify the leap from Gödel’s incompleteness and the halting problem to “digital is limited, analog is not”, because neither of those things has anything to do with digital processes at all, and in fact both came about before we’d invented digital computers.
To me this comment sounds like when popsci gets ahold of a few sciency words and suddenly decides everything is crystal vibrations universal harmonics string theory quantum tunneling aligning resonance with those around you.
The situation is the following.
We’ll probably need analog computation, currently in its infancy, to get artificial (inorganic) consciousness.
I study metaethics and philosophy of mathematics. These problems are real, and I am being honest with you.
That is not the situation. 😛
Analog signals are only digitally irreducible if you presume there’s no noise floor below which greater detail becomes irrelevant; Turing’s machines are not digital by their construction and predate the concept by a long time; and the first computers we built were analog – we invented digital computers later because they were cheaper, more efficient, easier to build, and more reliable.
Also the halting problem doesn’t say “there are things which a computer can’t know but a human can”, it says “there are some things that cannot be known”.
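For anyone who hasn’t seen it, the classic diagonal argument fits in a few lines of (necessarily hypothetical) Python – halts is the oracle the argument proves cannot exist:

```python
def halts(program, arg) -> bool:
    """Hypothetical oracle: True iff program(arg) eventually halts."""
    raise NotImplementedError("the argument below shows this can't be written")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about us.
    if halts(program, program):
        while True:   # oracle says we halt, so loop forever
            pass
    # oracle says we loop forever, so halt immediately

# paradox(paradox) halts iff halts(paradox, paradox) says it doesn't.
# Contradiction either way, so no implementation of halts() can exist --
# for computers and humans alike.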
Similarly, Gödel proved that there will always be true things about a system that cannot be proven from within the system, that is, using its axioms. That was a real bummer for folks trying to prove all of math with a small set of axioms. But that does not mean there are things math can’t know that humans magically can; it just means there’s other math, outside the axioms, that is true without following from them. He proved it with math, after all. It doesn’t claim to give any special abilities to human brains.
And also, again, nothing Gödel or Turing ever said has anything to do with the concept of “digital” anything. I think you’re using the term “digital” to mean “rulesy”? Which is not even close to what it means?
Turing’s machines are not digital by their construction
I won’t argue with you, because some of what you wrote isn’t even wrong.
However, on the off chance that you actually care about what is true, I urge you to take a theoretical computer science course. Lectures from MIT and Carnegie Mellon are available on YouTube.
Stop watching podcasts with pseudo-intellectual media grifters and read the actual research literature by real philosophers and mathematicians on these otherwise arcane topics.
I’m only about 15% sure you yourself aren’t an AI bot making a beautifully ironic and satirical play here. But I think we can agree not to argue any longer 🤝
I don’t see why there would be any fundamental difference between analog and digital computing. Digital computers can emulate analog computing, and I doubt consciousness arises from having theoretically infinite decimal precision, because in practice analog systems cannot use infinite precision either. Analogs (heh!) of the halting problem and the theorems you mention also exist for analog computing.
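As a concrete sketch of digital emulating analog (toy parameters, made up for illustration): forward-Euler integration of a damped oscillator, the kind of ODE an analog computer solves natively with op-amp integrators. Shrinking dt buys any finite precision you like – and neither the digital simulation nor the physical circuit ever gets infinite precision.

```python
def simulate(x=1.0, v=0.0, k=1.0, damping=0.1, dt=1e-4, t_end=10.0):
    """Digitally integrate x'' = -k*x - damping*x' to finite precision."""
    for _ in range(int(t_end / dt)):
        a = -k * x - damping * v   # acceleration from spring + friction
        v += a * dt
        x += v * dt
    return x, v

print(simulate())           # state after 10 "seconds"
print(simulate(dt=1e-5))    # same system at finer, but still finite, precision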
Quantum effects in the brain are a slightly more plausible explanation for consciousness, but currently they teeter on magical thinking because we don’t really know anything about what they would actually do in the brain. It becomes an “a wizard did it” explanation.
So in the end, we just don’t know.
Then why not take a course on Theoretical Computer Science? Or do you not care about the differences?
I have a master’s degree in computer science.
Obviously I meant “I don’t see why there would be any fundamental difference between analog and digital computing [when it comes to consciousness].”
(The halting problem and Gödel’s incompleteness and Tarski’s undefinability theorems all seem to suggest that analog, not digital computing is responsible for consciousness.)
I hear that argument from time to time, and I’ve never found a source for it. I want to understand the original claim, because it doesn’t make any sense when people bring it up: both theorems have nothing to do with the areas they’re being applied to. I understand why people think they do, but they just don’t.
The simplest way to understand this problem is as follows.
Analog computation is not digitally reducible. (Brains are analog computers.)
Turing’s infamous Halting Problem.
I can write more about this and point you to more technical discussions if you want.
I really don’t see what either Gödel’s or Turing’s theorems have to do with it.
All they (basically) tell you is that you can’t tell whether an arbitrary computation is guaranteed to halt, and that you can’t prove everything with math.
That doesn’t exclude consciousness on a digital basis, unless you already presuppose some special property of consciousness to begin with.
You’re misunderstanding the implications of both the halting problem and Gödel’s first incompleteness theorem.
What Turing and Gödel independently proved is that a human observer can (theoretically) always have insights about mathematics and programming that are incomputable. That is, you cannot program or axiomatize or formalize or digitize everything that a mind can do. Period.
Analog computers are sufficiently different from digital systems to potentially emulate brain activity. But digital (discrete) methods are probably too constrained.
What Turing and Gödel independently proved is that a human observer can (theoretically) always have insights about mathematics and programming that are incomputable. That is, you cannot program or axiomatize or formalize or digitize everything that a mind can do. Period.
that is not what either of them proved. like… at all
You will find what I said in any philosophy of mathematics textbook dealing with the subject. In fact, I am paraphrasing the Oxford logician Joel David Hamkins.
You’re welcome to also read Shapiro’s famous paper for a rephrasing. These results have been well understood for half a century, although because the implications are ultimately metaphysical and not mathematical, we can’t be sure of the wider consequences, if any.
I’m still awaiting a widely accepted method of actually measuring “consciousness.” It’s a conveniently nebulous property.
And simply defining it as something computers can’t do is even more convenient.
That doesn’t change the fact that I am conscious.
Also, I never said computers can’t be conscious. I said that digital computers (Turing machines) probably can’t. Quantum and analog computers have no such theoretical constraints and they’re far, far more prevalent given that they’re found in every living creature.
Sure, you say you’re conscious. I can get an LLM to say it’s conscious too. This is why we need some method for measuring it. Otherwise how can I tell which of you is telling the truth?
This is called the problem of other minds. Of course I can’t be certain about the consciousness of others. I can only be certain about my own.
We do have a way of measuring the correlates of consciousness. But we have no clue how to detect the presence of subjective experience using quantitative methods.
Philosophy departments (which is where any discovery on this front will originate) are heavily defunded. If you’re waiting for physicists or biologists to figure this out you’ll be waiting even longer.
Exactly, which is why it’s IMO a bit presumptuous to say with confidence that humans are conscious while LLMs are categorically not conscious. We don’t even really know what that means.
I don’t personally think LLMs are conscious, at least not yet or not to the same degree that humans are. But that’s purely based on vibe, it’s not something I can know. We need to figure out what consciousness really is and how to measure it before we can say we know this with any certainty.
It is not presumptuous at all. Inference to the best explanation is how you know (almost) anything.
This table isn’t conscious.
This is my justified belief. No inferential claim is guaranteed and all objective claims are inferential (which is why scientific claims aren’t absolute).
That said, I have strong reasons to think that tables aren’t conscious. They might be, but I’m epistemically compelled to believe otherwise.
ChatGPT isn’t conscious.
Ditto. It would be irrational for me to believe otherwise given the strong evidence.
That you “don’t know for sure” is an implied disclaimer for every scientific claim.
If the evidence is ambiguous, we say so. Regarding ChatGPT, the evidence is unambiguous.
I am conscious.
This is a non-inferential claim that I know through direct contact with reality. It is a priori.
You need to lay off the AI if it’s making you this weirdly misanthropic.
This is how tech bros justify causing harm: they genuinely don’t care, because they think of the un-“enlightened” as less worthy of existing.
There’s enough that it would be difficult to tell an actual sentient AI from a chatbot just by words.
Yeah, I don’t really believe in consciousness; it’s just the dynamic firing of neurons – an emergent trait, in other words. It’s like traffic: you will never find it if you zoom in on one car. You have to see it at a distance. Same with consciousness: if you zoom in, it’s not there anymore.