I’m invoking the “Twitter is a microblog” rule even though Twitter is pretty mega now; hope that’s OK.

  • Nalivai@lemmy.world · 9 days ago

    LLMs are able to do things we previously thought only conscious beings would be capable of doing

    “We” as in a lay misunderstanding of some pop science. We still don’t know what consciousness is and can’t describe it. There are people alive today who, in their youth, didn’t believe that Black people are fully conscious. Dawkins demonstrated, through his communication with his personal friend and hero Epstein, that he doesn’t fully believe women are conscious. What we did or didn’t think previously can’t be a good indication of anything.

    • turdas@suppo.fi · 9 days ago

      “We” as in anyone who put any weight in the Turing test used to think that passing it would be some indication of consciousness. But now that LLMs can handily pass it, it’s evident either that it isn’t evidence of consciousness or that LLMs are conscious.

      • Nalivai@lemmy.world · 9 days ago

        The Turing test can be reliably passed by a bot that repeats the last part of the previous sentence with a question mark at the end, and sprinkles in “oh that’s very smart, I need to think about it”, “I am starting to fall in love with you, %USERNAME%”, and the occasional “I am alive” thrown in randomly. And this has been obvious for a long time.
        Hell, a lot of people truly believe their dogs can fully understand human speech, because they bought them buttons that say words when pressed, conditioned the dog to press a button to get a reward, and then observed the dog pressing buttons.
        Humans seem to be hardwired to mistake speech for intellect.
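[Ed.: the “echo bot” strategy described above can be sketched in a few lines. This is a minimal illustration, not any real system; the canned phrases come from the comment, and the one-in-three mixing ratio is an assumption chosen for the sketch.]

```python
import random

# Canned filler lines taken from the comment above; %USERNAME% is left
# as a literal placeholder, as in the original.
CANNED = [
    "Oh, that's very smart, I need to think about it.",
    "I am starting to fall in love with you, %USERNAME%.",
    "I am alive.",
]

def echo_bot(message: str, seed=None) -> str:
    """ELIZA-style responder: echo the tail of the last sentence as a
    question, or occasionally fall back to a canned line."""
    rng = random.Random(seed)
    # Assumed mixing ratio: one time in three, use a canned line.
    if rng.random() < 1 / 3:
        return rng.choice(CANNED)
    # Echo the last few words of the final sentence as a question.
    last_sentence = message.rstrip(".!?").split(".")[-1].strip()
    tail = " ".join(last_sentence.split()[-5:])
    return tail.capitalize() + "?"

print(echo_bot("I think consciousness is hard to define", seed=0))
# → Consciousness is hard to define?
```

The point of the sketch is how little machinery it takes: no model of meaning at all, just string surgery plus a handful of emotionally loaded stock phrases.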

        • turdas@suppo.fi · 9 days ago

          No it can’t. If you’re actually saying that modern LLMs are no better at passing the Turing test than ELIZA, you are either trolling or an utterly delusional AI hater. Here, have a paper that proves you wrong: https://arxiv.org/pdf/2503.23674

          I am not saying the Turing test is a good benchmark of consciousness. On the contrary, like I said, LLMs have proven that it is not. But a mere ten years ago even the most advanced chatbots had no hope of passing it, whereas now the most advanced ones are selected as the human over 70% of the time in a test that pits the LLM against a human head to head.

          • Nalivai@lemmy.world · 8 days ago

            No, I’m saying the Turing test is a philosophical hypothetical from the time before computers, and doesn’t actually show anything, because it relies on the least accurate tool at our disposal: the human pattern-recognition machine, one that is oh so happy to be fooled by ELIZAs of varying sophistication. Chatbots have been passing the Turing test since the invention of the chatbot. Yeah, modern chatbots are better at it, but that’s more of an indictment of our perception.

            • turdas@suppo.fi · 8 days ago

              OK, sounds like we broadly agree then.

              But as you can see in the paper I linked, ELIZA passes the Turing test in their experiment only about 20% of the time (that is to say, it doesn’t pass; passing is 50% in this test), whereas the best LLMs pass about 70% of the time (that is to say, they are significantly more convincing at being human than real humans are).

              • Nalivai@lemmy.world · 8 days ago

                That 20% figure is just a clear indication of how shit people are at conducting such a test, and that was basically my original point. Two times in ten, people were convinced by a particularly echoey room.

                • Fedegenerate@fedinsfw.app · 8 days ago (edited)

                  Turing test can be reliably passed by a bot that repeats last part of the previous sentence with a question mark at the end […]

                  If an LLM were correct 2 times in 10, would you call it “reliably correct”?

                  • Nalivai@lemmy.world · 8 days ago

                    If a person murders people only two days out of ten, they’re still a murderer; to not be a murderer, they need to never do it.
                    Reliably correct means you’re correct always. Demonstrably incorrect means you’re incorrect even sometimes.