But in her order, U.S. District Court Judge Anne Conway said the company’s “large language models” — artificial intelligence systems designed to understand human language — are not speech.

  • Natanael@infosec.pub · 4 days ago

    All you need to argue is that its operators have responsibility for its actions and should filter / moderate out the worst.

    • Opinionhaver@feddit.uk · 3 days ago

      That still assumes a level of understanding that these models don’t have. How could you have prevented this one when suicide was never explicitly mentioned?

      • Natanael@infosec.pub · 3 days ago

        You can have multiple layers of detection mechanisms, not just within the LLM the user is talking to (rough sketch below).
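        A minimal sketch of what that layering could look like, assuming a cheap pattern check plus a separate risk classifier that runs outside the conversational model; the phrase list, labels, and `classifier_layer` stub are illustrative placeholders, not any operator's actual safety pipeline:

```python
import re
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    flagged: bool
    reasons: list[str]

# Illustrative phrases only; a real deployment would use a maintained lexicon.
CRISIS_PATTERNS = [
    r"\bend it all\b",
    r"\bno reason to live\b",
    r"\bwant to disappear\b",
]

def keyword_layer(message: str) -> list[str]:
    """Layer 1: fast pattern match on high-risk phrasing."""
    return [p for p in CRISIS_PATTERNS if re.search(p, message, re.IGNORECASE)]

def classifier_layer(message: str) -> list[str]:
    """Layer 2: placeholder for a separate risk classifier, a small model
    that runs independently of the chat LLM. Stubbed out in this sketch."""
    return []

def screen_message(message: str) -> ScreeningResult:
    """Combine the layers; any hit flags the message for escalation."""
    reasons = keyword_layer(message) + classifier_layer(message)
    return ScreeningResult(flagged=bool(reasons), reasons=reasons)

if __name__ == "__main__":
    print(screen_message("some days I feel like there's no reason to live"))
```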

          • Natanael@infosec.pub · 3 days ago

            I’m told sentiment analysis with LLMs is a whole thing (rough sketch below), but maybe this clever new technology doesn’t do what it’s promised to do? 🤔

            Tldr: make it discourage unhealthy use, or at least be honest in the marketing and tell people this tech is a crapshoot that’s probably lying to you
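            As a rough illustration of LLM-based sentiment/risk labelling, here is a hedged sketch in which a separate classification prompt tags each message; `call_llm` is a hypothetical stand-in for whatever completion API the operator already uses, and the label set is made up for the example:

```python
RISK_LABELS = {"neutral", "distressed", "crisis"}

CLASSIFY_PROMPT = (
    "Label the user's message with exactly one word: neutral, distressed, or crisis.\n\n"
    "Message: {message}\nLabel:"
)

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real completion/chat API call here.
    # Hard-coded so the sketch runs on its own.
    return "distressed"

def assess_message(message: str) -> str:
    """Ask a separate model (not the chat LLM itself) to rate the message."""
    label = call_llm(CLASSIFY_PROMPT.format(message=message)).strip().lower()
    # Fall back to a known label if the model answers with something unexpected.
    return label if label in RISK_LABELS else "neutral"

if __name__ == "__main__":
    label = assess_message("I don't see the point of anything anymore")
    if label == "crisis":
        print("escalate: show crisis resources / route to human review")
    else:
        print(f"label: {label}")
```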