I must confess to getting a little sick of seeing the endless stream of articles about this (along with the season finale of Succession and the debt ceiling), but what do you folks think? Is this something we should all be worrying about, or is it overblown?

EDIT: have a look at this: https://beehaw.org/post/422907

  • years_past_matter@lemmy.ml · 1 year ago

    A lot of the fearmongering surrounding AI, especially LLMs like GPT, is very poorly directed.

    People with a vested interest in LLMs (i.e. shareholders) play up the idea that we’re months away from the AI singularity, because it generates hype for their technology.

    In my opinion, a much more real and immediate risk of the widespread use of ChatGPT, for example, is that people believe what it says. ChatGPT and other LLMs are bullshitters - they give you an answer that sounds correct, without ever checking whether it is correct.

    • TheRtRevKaiser@beehaw.orgM · 1 year ago

      The thing I’m more concerned about is “move fast and break things” techbros implementing these technologies in stupid ways without considering: A) whether the tech is mature enough for the uses they’re putting it to, and B) the biases inherited from training data and methods.

      LLMs inherit biases from their data because their data is shitloads of people talking and writing, and often we don’t even know or understand those biases without close examination. Applying LLMs or other ML models to things like medicine, policing, or housing without being very careful about the deep biases hiding in data that seems impartial is reckless, and it's just asking for negative outcomes for minorities. And the ways I’m seeing most of these ML companies try to mitigate those biases look very much like bandaids as they rush these products out the door to be first to market.

      I’m not at all concerned about the singularity, or AGI, or any of that crap. But I’m quite concerned about ML models being applied in medicine without understanding the deep racial and gender inequities that are inherent in medical datasets. I’m quite concerned about any kind of application in policing or security, or anything making decisions about finance or housing, or really any area with a history of systemic biases that will show up in a million ways in the datasets these models are being trained on.