Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

  • tinselpar@feddit.nl · 2 years ago

    AI bots don’t ‘hallucinate’; they just make shit up as they go along, mixed with some stuff they found on Google, and tell it in a confident manner so that it looks like they know what they’re talking about.

    Techbro CEOs are just creeps. They don’t believe their own bullshit, and they know full well that their crap is not for the benefit of humanity, because otherwise they wouldn’t all be doomsday preppers. It’s all a perverse result of the American worship of self-made billionaires.

    See also The super-rich ‘preppers’ planning to save themselves from the apocalypse

    • soiling@beehaw.org · 2 years ago

      “Hallucination” works because everything an LLM outputs is equally true from its perspective. Trying to change the word “hallucination” usually seems to lead to the implication that LLMs are lying, which is not possible. They don’t currently have the capacity to lie, because they don’t have intent and they don’t have a theory of mind.

      • tinselpar@feddit.nl · 2 years ago

        Misinformation is misinformation, whether it is intentional or not. And it’s not farfetched that soon someone will launch a propaganda bot with biased training data that intentionally spreads fake news.

      • variaatio@sopuli.xyz · 2 years ago

        Well, by the “not being able to lie” standard it can’t hallucinate either. To hallucinate would mean there was some other, correct baseline behaviour from which hallucinating is a deviation.

        An LLM is not a mind; one shouldn’t use words like “lie” or “hallucinate” about it. That anthropomorphises a mechanistic algorithm.

        This is simply an algorithm producing arbitrary answers, with no validity or reality checks on the results. By the same token, the times it happens to produce a correct answer aren’t “not hallucinating” either: it is hallucinating (or not) exactly as much regardless of the correctness of the answer, since it’s just doing its algorithmic thing.
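
        For what it’s worth, here’s a toy sketch of the mechanism being described (illustrative only, not any particular model’s actual implementation): the model’s scores for candidate next tokens get turned into probabilities and one is sampled. There is no step where the output is checked against reality, so a correct answer and a made-up one come out of exactly the same procedure.

        ```python
        import math
        import random

        def sample_next_token(logits, temperature=1.0):
            """Sample the next token from a softmax over the model's scores.

            Illustrative sketch: nothing here consults reality, so a factually
            correct continuation and an invented one are drawn by the same rule.
            """
            scaled = [score / temperature for score in logits]
            top = max(scaled)
            exps = [math.exp(s - top) for s in scaled]
            total = sum(exps)
            probs = [e / total for e in exps]

            # weighted random choice over token ids
            r = random.random()
            cumulative = 0.0
            for token_id, p in enumerate(probs):
                cumulative += p
                if r < cumulative:
                    return token_id
            return len(probs) - 1

        # toy example: three candidate tokens with made-up scores
        print(sample_next_token([2.0, 1.0, 0.1]))
        ```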