I’m pulling the “twitter is a microblog” rule even though twitter is pretty mega now, hope that’s ok.

  • GuyIncognito@lemmy.ca · 8 days ago

    hey dick dorkins, here’s an idea: instead of asking the predictive question-answering machine a question, how about you let it ask you questions of its choosing and at its leisure? What’s that? You can’t? That’s because it’s just a predictive algorithm that generates plausible-sounding responses to questions based on its training data.

    • Echo Dot@feddit.uk · 8 days ago

      I’m sure he actually knows that; he’s just being intransigent as usual. It annoys me that he’s considered a major authority when he’s made his career out of just being awkward and argumentative.

    • Kptkrunch@lemmy.world · 8 days ago

      I know this sounds great to most people, but it demonstrates a very superficial level of thinking. For sure an LLM is capable of asking questions, and if you set it up with real-time “sensory” input it could generate constant reactions to that input, much in the way you are constantly being stimulated to react to your environment. I am not really sure what the distinction is between a biological brain and a predictive model or algorithm. I would ask you what you think your own brain is doing on a fundamental level.

      • Echo Dot@feddit.uk · 8 days ago

        I would actually argue that it is the most important question.

        Surely the most relevant test of any intelligence is whether or not it is self-starting. Any classical description of an artificial general intelligence would surely require the thing to actually do work on its own. If an intelligence is of greater-than-human intellect but has to be prompted in order to do anything, then it will always be limited by what a human can think to prompt for.

        • Kptkrunch@lemmy.world · 7 days ago

          I think you are describing some notion of a “will” or motive, but also potentially an LLM’s lack of temporal experience. I would argue that a human is constantly being “prompted” to react to things happening to them via sensory input, and adding that to an LLM is trivial (provided the input is of a modality it can understand, like text or image embeddings).

          As far as a will or motive to perform tasks goes, some think an AI agent could generate secondary sub-goals, like a will to “survive”, in order to carry out primary tasks like “make paperclips efficiently”. This is called instrumental convergence, and it’s speculative. I think what would really be scary is if someone explicitly optimized a model with billions of parameters to survive or carry out some specific task using online reinforcement learning. I don’t think there is a big technical hurdle there: you could imagine a sort of adversarial-style training where one model predicts damage/danger/threats and the other attempts to avoid them. We could propagate rewards and punishments back over the sequence of actions that led to that state and train while the model is interacting with its environment.
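          The “propagate rewards and punishment back over the sequence of actions” part is standard reinforcement-learning machinery; here’s a toy tabular sketch of that backward propagation (a minimal REINFORCE-flavoured update, not any real system; the episode, action names, and constants are all made up for illustration):

```python
# Toy sketch: propagate reward backward over an action sequence,
# then nudge per-action preferences by the return that followed each action.
# All names and numbers here are illustrative, not from a real system.

GAMMA = 0.9   # discount factor: how much future reward "leaks" backward
ALPHA = 0.1   # learning rate for the preference update

def discounted_returns(rewards, gamma=GAMMA):
    """G_t = r_t + gamma * G_{t+1}, computed right-to-left."""
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

def update_preferences(prefs, episode, gamma=GAMMA, alpha=ALPHA):
    """Credit each action with the discounted return that came after it."""
    returns = discounted_returns([r for _, r in episode], gamma)
    for (action, _), g in zip(episode, returns):
        prefs[action] = prefs.get(action, 0.0) + alpha * g
    return prefs

# One episode of (action, reward) pairs; the final action was "punished",
# so the punishment propagates back and also dents the earlier actions.
episode = [("approach", 1.0), ("approach", 1.0), ("touch", -5.0)]
prefs = update_preferences({}, episode)
```

          Even though the two “approach” steps earned positive immediate reward, the discounted punishment from the final step flows back through them, so both actions end up with negative preference — which is the whole point of propagating over the sequence rather than rewarding steps in isolation.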

      • GuyIncognito@lemmy.ca · 7 days ago

        Fuck if I know, but it seems to me that intelligence is more than just reacting to stimuli. The problem is we’ve broken the Turing test: we’ve made a computer that can sound sentient, but clearly isn’t.