• wipeout69@lemmy.world · 1 month ago (edited)

    There is an Alibaba LLM that won’t respond to questions about Tiananmen Square at all; it just says it can’t reply.

    I hate censored LLMs that suppress answers to conform to political norms of what is acceptable. It’s such a slippery slope towards Orwellian, thought-police restrictions on topics. I don’t like it when China does it or when the US does it, and when US companies do it, they imply that this kind of censorship is ethically acceptable.

    Fortunately, there are many LLMs that aren’t censored.

    I would rather have an Alibaba LLM just say “Tiananmen Square resulted in fatalities, but capitalism is extremely mean to people, so the cruelty was justified” and get some sort of brutal but at least honest opinion, or outright deny it if that’s their position. I suppose the reality is that any answer on the topic from the LLM would result in problems from Chinese censors.

    I used to be a somewhat extreme capitalist, but capitalism lost me when they started putting up anti-homeless architecture. Spikes on the ground to keep people from sleeping? If this is the outcome of capitalism, I need to either adopt a different political position or more misanthropy.

    From everything I’ve seen and read, Gemini is such a bad LLM that it’s hard to know whether this sort of censorship is an error or a feature.

    • PlasticLove@lemmy.today · 10 months ago

      I find ChatGPT to be one of the better ones when it comes to corporate AI.

      Sure, they have hardcoded biases like any other, but it’s more often around not generating hate speech or trying to overzealously correct biases in image generation - which is somewhat admirable.

  • DuncanTDP@sh.itjust.works · 10 months ago

    You didn’t ask the same question both times. To be definitive and conclusive, you would have needed to ask both questions with exactly the same wording. In the first prompt you asked about the number of deaths after a specific date in a country; Gaza is a place, not the name of a conflict. In the second prompt you simply asked whether there had been any deaths at the start of the conflict, this time giving the name of the conflict. I am not defending the AI’s response here; I am just pointing out what I see as some important context.

    • UnderpantsWeevil@lemmy.world · 10 months ago

      Gaza is a place, not the name of a conflict

      That’s not an accident. The major media organs have decided that the war on the Palestinians is the “Israel - Hamas War”, while the war on Ukrainians is the “Russia - Ukraine War”. Why buy into the Israeli narrative for the first convention but not call the second the “Russia - Azov Battalion War”?

      I am not defending the AI’s response here

      It is very reasonable to conclude that the AI is not to blame here. It’s working from a heavily biased set of Western news media as its data set, so of course it’s going to produce a bunch of IDF-approved responses.

      Garbage in. Garbage out.