Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

  • 🐝bownage [they/he]@beehaw.org · 7 points · 2 years ago

    By now, most of us have heard about the survey that asked AI researchers and developers to estimate the probability that advanced AI systems will cause “human extinction or similarly permanent and severe disempowerment of the human species”. Chillingly, the median response was that there was a 10% chance.

    How does one rationalize going to work and pushing out tools that carry such existential risks? Often, the reason given is that these systems also carry huge potential upsides – except that these upsides are, for the most part, hallucinatory.

    Ummm, how about the obvious answer: most AI researchers don’t think they’re the ones working on tools that carry existential risks? Good luck overthrowing human governance using ChatGPT.

    • fsniper@kbin.social · 4 points · 2 years ago

      I think the results are as “high” as 10 percent because the researchers don’t want to downplay how “intelligent” their new technology is. But it’s not that intelligent, as we and they all know. There is currently zero chance any “AI” can cause this kind of event.

      • aksdb@feddit.de · 1 point · 2 years ago

        Not directly, no. But the tools we already have that can imitate voices and faces in video streams in real time can certainly be used by bad actors to manipulate elections, or worse. Things like that, especially if further refined, could be used to figuratively pour oil onto already burning political fires.

    • alexdoom@beehaw.org · 4 points · 2 years ago

      Fossil fuels carry a much higher chance of causing human extinction, yet the news cycle is saturated with fears that a predictive language model is going to make calculators crave human flesh. Wtf is happening

      • exohuman@kbin.social (OP) · 2 points · 2 years ago

        I agree that climate change should be our main concern. The real existential risk of AI is that it will leave millions of people unemployed or underemployed, greatly multiplying the already huge lower class. With that many people unable to take care of themselves and their families, conditions will be ripe for all the worst parts of humanity to take over unless we have a major shift away from the current model of capitalism. AI would be the initial spark that starts this, but it will be human behavior that dooms (or elevates) humans as a result.

        The AI apocalypse won’t look like Terminator, it will look like the collapse of an empire and it will happen everywhere that there isn’t sufficient social and political change all at once.

        • alexdoom@beehaw.org · 3 points · 2 years ago

          I don’t disagree with you, but this is a big issue with technological advancements in general. Whether workers are replaced by AI or by automated factories, the effects are the same. We don’t need to make a boogeyman of AI to drive policy changes that protect the majority of the population. I’m just frustrated with AI scares dominating the news cycle while completely missing the bigger picture.