I’m pulling the “twitter is a microblog” rule even though twitter is pretty mega now, hope that’s ok.

  • zarkanian@sh.itjust.works
    8 days ago

    And yet, “having agency” is how they are advertised. That’s what the term “agentic” means. AI instances are called “agents”! That’s part of the marketing.

    It’s easy to handwave this away as “people are stupid”, and there’s certainly some truth to that, but the reason why people believe that LLMs are agents is because tech bros have spent a lot of money to get them to believe that. That’s also why they spread the myth that LLMs are potentially dangerous because they could become conscious and kill all of us. It helps to spread the myth of LLM agency. Of course they can’t become conscious, because that isn’t how things work. If LLMs are killing people, it’s because somebody put an LLM in front of the kill switch and they wanted to have plausible deniability. That is perhaps the most pernicious thing about LLMs: people using them to avoid responsibility. “It isn’t my fault! The bot did it!”

    • sp3ctr4l@lemmy.dbzer0.com
      8 days ago

      Totally agree, which is why I would class anybody marketing these things as ‘agents’ or ‘agentic’ as psychotic.

      Starting several years ago now, I personally used the terms ‘Narrative’ or ‘Conversational’ to describe an LLM doing something that normally didn’t have an LLM doing it.

      It’s not an ‘Agentic Search Engine’, it’s a ‘Conversational Search Engine’.

      Something like that, which at least moves further away from using a term that directly implies it is essentially conscious… because what these things literally are is extremely fancy autocomplete algorithms.

      But uh yeah, yeah, they outspent my marketing budget of $0 on that one.

      Yeah, they already are being broadly used to just… alleviate responsibility for some task where a human would ultimately have had the buck stop with them, at least in theory.

      I think I saw the phrase ‘An LLM cannot find out, therefore it should never be allowed to fuck around’.

      If these things are allowed to exist as a kind of liability black hole, in any sense… legal, colloquial, whatever… it could literally destroy much of human civilization as we currently know it.

      The cognitohazard machine.

      At this point I genuinely can’t tell if the sociopathic narcissist CEOs that are so heavily pushing LLMs are… knowingly foisting a lie on all of us, or if they are actually just fully enraptured by the plagiarism sycophant machines that constantly tell them how smart and special they are.

      I know we have to hold them accountable… otherwise they probably/maybe kill most of us and become functional demigods… but I honestly can’t tell if they are more truly insane, or more truly evil.

      Because the way they are going about this is… just comically stupid and obviously catastrophic to basically everyone who isn’t them, and isn’t themselves enthralled.

      … Maybe pure evil just is pure insane stupidity.