In an open letter published on Tuesday, more than 1,370 signatories—including business founders, CEOs and academics from various institutions including the University of Oxford—said they wanted to “counter ‘A.I. doom.’”

“A.I. is not an existential threat to humanity; it will be a transformative force for good if we get critical decisions about its development and use right,” they insisted.

  • Cade@beehaw.org · 13 points · 1 year ago

    Correct me if I’m wrong, but I thought the big risk with AI is its use as a disinfo tool. Skynet is nowhere near close, but a complete post-truth world is possible. It’s already bad now… Could you imagine AI-generated recordings of crimes being used as evidence against people? There are already scam callers who use recordings to make people think they’ve kidnapped relatives.

    I really feel like most people aren’t afraid of the right things when it comes to AI.

    • Peanut@sopuli.xyz · 1 point · 1 year ago

      That’s largely what these specialists are talking about: people emphasizing the existential apocalypse scenarios when there are more pressing matters. In many cases, I think the intended purpose of a tool should be more of a concern than its training data. People keep freaking out about LLMs and art models while ignoring the plague of models built specifically to manipulate and predict the subconscious habits and activities of individuals. Models built specifically to recreate the concept of a unique individual and their likeness for financial reasons should also be regulated in new, unique ways. People shouldn’t be able to be bought wholesale; instead, they should be able to sell their likeness as a subscription, with the right to withdraw from future production, etc.

      I think the ways we think about a lot of things have to change based around the type of society we want. I vote getting away from a system that lets a few own everything until people no longer have the right to live.

    • FIash Mob #5678@beehaw.org · 1 point · 1 year ago

      Indeed, because the AI just makes shit up.

      That was the problem with the lawyer who cited bullshit ChatGPT-invented cases in court.

      Hell, last week I did a search for last year’s Super Bowl and learned that Patrick Mahomes apparently won it by kicking a game-winning field goal.

      Disinfo is a huge, huge problem with these half-baked AI tools.