In an open letter published on Tuesday, more than 1,370 signatories—including business founders, CEOs and academics from various institutions including the University of Oxford—said they wanted to “counter ‘A.I. doom.’”

“A.I. is not an existential threat to humanity; it will be a transformative force for good if we get critical decisions about its development and use right,” they insisted.

  • nanoobot@kbin.social · 1 year ago

    It makes my blood boil when people dismiss the risks of ASI without any notable counterargument. Do you honestly think something a billion times smarter than a human would struggle to kill us all if it decided it wanted to? Why would it need a terminator to do it? A virus would be far easier. And who’s to say how quickly AI will advance now that AI is directly assisting progress? How can you possibly have any certainty on any timelines or risks at all?

    • axum@kbin.social · 1 year ago

      Put down the crack; there is a huge-ass leap between general intelligence and the LLM of the week.

      Next you’re going to tell me Cleverbot is going to launch nukes. We are still incredibly far from general-intelligence AI.

    • SSUPII@sopuli.xyz · 1 year ago

      And how can you be certain that, if virus generation is an actual possibility, there won’t already be another AI made to combat such viruses?