I’m pulling the “Twitter is a microblog” rule even though Twitter is pretty mega now; hope that’s OK.

  • turdas@suppo.fi · 9 days ago

    Seems like an “evil” and dangerous talking point. To me, the value of consciousness isn’t in its evolutionary efficiency.

    It’s not a question of the value of consciousness, it’s a question of its necessity. If an unconscious “zombie” can be, to an external observer, indistinguishable from a conscious being, then that means we’ve been overestimating the importance of consciousness for intelligence. Like Dawkins says in the article, there could be unconscious aliens out there who are nonetheless as intelligent as (or more intelligent than) humans. This isn’t a new concept – it’s been explored many times in scifi – but AI is now bringing the question from the realm of philosophy to the real world.

    I know people working in AI insist otherwise, but I see talking with LLMs not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.

    This is less true than it ever was with reasoning models. Some of the latest reasoning models don’t necessarily even reason in English anymore, but rather in an eclectic mix of languages. The next step after that is probably going to be running the reasoning in latent space (see e.g. Coconut), which basically means the model skips the language generation layer altogether and feeds lower-level state back into itself. In other words, it is getting closer and closer to what most humans consider “thinking”.
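    The difference between reasoning in token space and reasoning in latent space can be sketched with a toy example. Everything below is purely illustrative — random weights standing in for a trained transformer, not Coconut’s actual architecture — but it shows the structural point: classic chain-of-thought squeezes the state through a discrete token at every step, while latent reasoning feeds the continuous hidden state straight back in.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "model": random weights standing in for a trained transformer.
    VOCAB, DIM = 8, 4
    embed = rng.normal(size=(VOCAB, DIM))      # token id -> embedding vector
    core = rng.normal(size=(DIM, DIM)) * 0.5   # stands in for the transformer stack
    unembed = rng.normal(size=(DIM, VOCAB))    # hidden state -> token logits

    def step(hidden):
        """One pass through the model core."""
        return np.tanh(hidden @ core)

    def token_space_reasoning(hidden, n_steps):
        """Classic chain-of-thought: decode a token each step, then re-embed it.
        The continuous state is collapsed to a single token id every step,
        discarding everything the chosen token doesn't capture."""
        for _ in range(n_steps):
            hidden = step(hidden)
            token = int(np.argmax(hidden @ unembed))  # collapse to one token
            hidden = embed[token]                     # re-embed: detail is lost
        return hidden

    def latent_space_reasoning(hidden, n_steps):
        """Coconut-style sketch: skip the language layer entirely and feed the
        hidden state back into the core unchanged."""
        for _ in range(n_steps):
            hidden = step(hidden)
        return hidden

    start = embed[3]
    print(token_space_reasoning(start, 3))
    print(latent_space_reasoning(start, 3))
    ```

    The two runs generally end in different states, because the token-space loop throws away information at every decode/re-embed round trip — which is exactly the bottleneck latent-space reasoning removes.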

    But even besides reasoning models, I believe LLMs aren’t as different from human language production as many people think. The human speech centre, in a way, also just selects the right combination of data to continue a conversation. It frequently even hallucinates (we call this “speaking before thinking”) and makes stupid mistakes (we provoke these with trick questions like those on the Cognitive Reflection Test). There are also some fascinating experiments on split-brain patients — people who have had the connection between their brain hemispheres severed — that really suggest our speech centre is just making things up as it goes along.

    • 5too@lemmy.world · 9 days ago

      This is one of the things that fascinates me about LLMs — they seem like a part of how our brains work, without the internal self-referential parts.