• Stepos Venzny@beehaw.org

    But I don’t think the software can differentiate between the ideas of defined and undefined characters. It’s all just association between words and aesthetics, right? It can’t know that “Homer Simpson” is a more specific subject than “construction worker” because there’s no actual conceptualization happening about what these words mean.

    I can’t imagine a way to make the tweak you’re asking for that isn’t just a database of every word or phrase that refers to a specific known individual, which every user prompt would get checked against, and I can’t imagine that would be worth the time it’d take to create.

    • Zagorath@aussie.zone

      If they’re inserting random race words in, presumably there’s some kind of preprocessing of the prompt going on. That preprocessor is what would need to know if the character is specific enough to not apply the race words.

      • Big P@feddit.uk

        Yeah, but replace("guy", "ethnically ambiguous guy") is a different problem from “does this sentence reference any possible specific character?”
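A rough sketch of the substitution Big P is describing (the function name is made up, and this is not anyone’s actual pipeline):

```python
# Hypothetical sketch: blind string substitution on the prompt.
# No understanding of who "guy" refers to, just pattern matching.
def naive_preprocess(prompt: str) -> str:
    return prompt.replace("guy", "ethnically ambiguous guy")

print(naive_preprocess("a guy fixing a sink"))
# A prompt like "Homer Simpson fixing a sink" passes through untouched,
# because the hard question ("is this a specific character?") is never asked.
```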

        • stifle867@programming.dev

          I don’t think it’s literally a search-and-replace, but a part of the prompt that’s hidden from the user and inserted either before or after the user’s prompt. Something like [all humans, unless stated otherwise, should be ethnically ambiguous]. Then, when generating, it got confused and took that to mean he should be named “ethnically ambiguous”.
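Something like this, as a rough sketch of the hidden-instruction idea (the wording and names are assumptions, not the real system):

```python
# Hypothetical sketch: a hidden instruction appended after the user's text.
HIDDEN_INSTRUCTION = (
    "[all humans, unless stated otherwise, should be ethnically ambiguous]"
)

def build_full_prompt(user_prompt: str) -> str:
    # The hidden text rides along after whatever the user typed
    return f"{user_prompt} {HIDDEN_INSTRUCTION}"

print(build_full_prompt("a construction worker"))
```

If the model then treats the bracketed text as part of the scene description rather than as an instruction, you get exactly the kind of confusion described above.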

          • intensely_human@lemm.ee

            It’s not hidden from the user. You can see the prompt used to generate the image, to the right of the image.

      • intensely_human@lemm.ee

        Gee, I wonder if there’s any way to use GPT-4 to detect whether a prompt includes reference to any specific characters. 🤔
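The sarcasm aside, that check could plausibly be phrased as a yes/no classification request to the model. A sketch of how such a request might be shaped (the classifier prompt and function name are assumptions, not OpenAI’s actual moderation flow; actually sending the request is left out):

```python
# Hypothetical sketch: asking an LLM to act as a yes/no classifier
# for whether a prompt names a specific character or real person.
CLASSIFIER_TEMPLATE = (
    "Does the following image prompt reference a specific named character "
    "or real person? Answer YES or NO.\n\nPrompt: {prompt}"
)

def build_classifier_request(user_prompt: str) -> dict:
    # Shaped like a chat-completions request body; dispatching it
    # to an API is out of scope for this sketch.
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "user",
                "content": CLASSIFIER_TEMPLATE.format(prompt=user_prompt),
            }
        ],
    }
```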

    • Quokka@quokk.au

      ChatGPT was just able to parse a list of fictional characters out of a mix of concepts, nouns, and historical figures.

      It wasn’t perfect, but if it can take the prompt and check whether any fictional (or even well-defined historical) character is mentioned, it could be made to skip applying the additional tags to the prompt.
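The gating step being proposed could be sketched like this (`references_known_character` stands in for whatever check is used, and the tag wording is an assumption):

```python
# Hypothetical sketch: only append the diversity tag when no specific
# character was detected in the prompt.
def augment(prompt: str, references_known_character: bool) -> str:
    if references_known_character:
        # Leave depictions of specific characters untouched
        return prompt
    return prompt + " [all humans should be ethnically ambiguous]"
```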

      • Stepos Venzny@beehaw.org

        Let’s say hypothetically I had given you that question and that instruction on how to format your response. You would presumably have arrived at the same answer the AI did.

        What steps would you have taken to arrive at that being your response?