Not all searches get AI answers, but Google has been steadily expanding this feature since it debuted last year. One searcher on Reddit spotted a troubling confabulation when searching for crashes involving Airbus planes. AI Overviews, apparently overwhelmed with results reporting on the Air India crash, stated confidently (and incorrectly) that it was an Airbus A330 that fell out of the sky shortly after takeoff. We’ve run a few similar searches—some of the AI results say Boeing, some say Airbus, and some include a strange mashup blaming both Airbus and Boeing. It’s a mess.

Always remember that AI, or more accurately LLMs, are just glorified predictive text, like the autocomplete on your phone. Don’t trust them. Maybe someday they will be reliable, but that day isn’t today.
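
If you want to see the “glorified predictive text” idea in its smallest possible form, here is a toy sketch. The word table and probabilities are invented purely for illustration and have nothing to do with any real model, but the mechanic is the same: pick whichever continuation is statistically most likely, whether or not it happens to be true.

```python
# Toy "predictive text": given the last two words, pick the most probable
# next word from a hand-made table. The table, words, and probabilities are
# invented for illustration; a real LLM does the same thing with billions
# of learned weights instead of a small dictionary.

next_word_probs = {
    ("the", "plane"): {"crashed": 0.4, "landed": 0.35, "taxied": 0.25},
    ("plane", "crashed"): {"shortly": 0.5, "into": 0.3, "after": 0.2},
}

def predict_next(last_two: tuple[str, str]) -> str:
    """Return the highest-probability continuation, true or not."""
    candidates = next_word_probs.get(last_two, {"<unknown>": 1.0})
    return max(candidates, key=candidates.get)

words = ["the", "plane"]
for _ in range(2):
    words.append(predict_next((words[-2], words[-1])))

print(" ".join(words))  # -> "the plane crashed shortly"
```

Nothing in that loop knows or cares whether the plane actually crashed; it only knows which words tend to follow which.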

  • otp@sh.itjust.works · 19 days ago

    “Lies” and even “fabricates” imply intent. “Makes shit up” is probably most accurate, but it also implies intent, which we can’t really apply to an LLM.

    Hallucination is probably the most accurate term. There’s no intent – it’s something made up that the model expresses as true, not because it is trying to mislead, but because it’s just as “true” to the LLM as anything else it says.

    • ctrl_alt_esc@lemmy.ml · 19 days ago

      Or because it was programmed with a bias to respond in a certain way. There may not be intent on the LLM’s part, but the same is not necessarily true for its developers.

      • otp@sh.itjust.works · 19 days ago

        Definitely! Journalists would have to be reasonably certain of the intent to be able to publish it that way, though.

      • Boddhisatva@lemmy.worldOP · 18 days ago

        > There may not be intent on the LLM’s part

        There can’t be intent on the part of a non-sentient program. It has working code, flawed code, and probably intentionally biased code. Don’t think of it as a being that intends to do anything.

    • skuzz@discuss.tchncs.de · 18 days ago

      “Large Language Model travels down the wrong statistical path when choosing words from its N-dimensional matrices and ends up guessing the wrong aircraft manufacturer. Possibly because of training bias against foreign manufacturers in a xenophobic American future.”

      Just doesn’t have that ring to it, versus “AI SLAMS AIRBUS IN HOT TAKE!”