• dfyxA · 8 months ago

    No joke here. Large Language Models (which people keep calling AI) have no way of checking whether what they're saying is correct. They are essentially just fancy text completion machines that answer the question "what word comes next?" over and over. The result looks like natural language but tends to have logical and factual problems. The screenshot shows an extreme example of this.

    In general, never rely on any information an LLM gives you. It can't look up external information that wasn't in its training set. It can't solve logic problems. It can't even reliably count. It was made to give you a plausible answer, not a correct one. It's not a librarian or a teacher; it's an improv actor who will "yes, and" everything. LLMs will often make up information rather than admit that they don't know. As an easy demonstration, ask ChatGPT for a list of restaurants in your home town that offer both vegan and meat-based options. More often than not, it will happily make you a list with plausible names and descriptions, but when you google them, none of the restaurants actually exist.
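    The "what word comes next?" loop described above can be sketched with a toy bigram model. This is only an illustration under simplified assumptions: real LLMs use neural networks over subword tokens, not word-pair counts, but the generation loop is the same shape, and nothing in it ever checks whether the output is true, only whether it is statistically likely.

    ```python
    import random

    # Toy "training set": bigram statistics learned from a tiny corpus.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    bigrams = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams.setdefault(prev, []).append(nxt)

    def complete(prompt_word, length=5, seed=0):
        """Repeatedly answer 'what word comes next?'.
        Picks a likely continuation at each step; there is no notion
        of correctness, only of what followed what in the training data."""
        rng = random.Random(seed)
        out = [prompt_word]
        for _ in range(length):
            candidates = bigrams.get(out[-1])
            if not candidates:  # no known continuation: stop
                break
            out.append(rng.choice(candidates))
        return " ".join(out)

    print(complete("the"))
    ```

    The output is always fluent-looking word soup drawn from the training data, which is why the result reads as plausible even when it is wrong.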

    • PM_ME_FAT_ENBIES@lib.lgbt · 8 months ago

      Large Language Models (which people keep calling AI)

      As my AI teacher used to say, it’s AI until someone builds it.