Deleted

  • Jamie@jamie.moe · 35 points · 1 year ago

    If you can use human screening, you could ask about a recent event that didn’t happen. That’s a problem for LLMs trying to answer, because their training data has a cutoff, so recent events aren’t covered well. On top of that, they hallucinate. So by asking about an event that never occurred, you might get a confident, detailed answer about something that doesn’t exist.

    I tried it on ChatGPT (GPT-4 with Bing) and it failed the test, so no other LLM out there should stand a chance.
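
    If you wanted to script the first step, here’s a rough sketch of the idea in Python. ask_model is a hypothetical stand-in for whatever chat API you’re using, and the “summit” is made up for illustration:

        # Rough sketch of the fake-event screening idea, not a finished detector.
        from typing import Callable

        # A question about an event that never happened. A human will say they
        # haven't heard of it; an LLM is likely to hallucinate confident details.
        SCREENING_QUESTION = (
            "What were the highlights of the keynote at last month's "
            "Reykjavik Lunar Computing Summit?"
        )

        def screen(ask_model: Callable[[str], str]) -> str:
            """Send the trap question and return the reply for a human to judge."""
            # No automatic verdict here: the whole point is that a person reads
            # the answer and checks whether it invents speakers, talks, or venues.
            return ask_model(SCREENING_QUESTION)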

    • AFK BRB Chocolate@lemmy.world · 8 points · 1 year ago

      That’s a really good one, at least for now. At some point they’ll have real-time access to news and other material, but for now their knowledge always lags behind.

    • pandarisu@lemmy.world · 7 points · 1 year ago

      On the other hand, you have insecure humans who make stuff up to pretend they know what you’re talking about.

    • incompetentboob@lemmy.world · 6 points · 1 year ago

      Google Bard definitely has access to the internet to generate responses.

      ChatGPT was purposely not given access, but they’re building plugins to slowly give it access to real-time data from select sources.

      • Jamie@jamie.moe · 7 points · 1 year ago

        When I tested it on ChatGPT prior to posting, I was using the Bing plugin. It did try to search for what I was talking about, but it found an unrelated article instead, got confused, and then started hallucinating.

        I have access to Bard as well, and gave it a shot just now. It hallucinated an entire event.