ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future

AI for the smart guy?

  • Zeth0s@lemmy.world
    1 year ago

    Unless they reverted the change recently, you can test for yourself the max number of tokens GPT-4 accepts from the web UI; it is now ~4k. It used to be ~8k.
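    One way to run that test yourself: build a prompt of known approximate size and paste progressively larger ones into the web UI until the model starts losing the beginning. A minimal sketch, using word count as a crude stand-in for tokens (a real tokenizer such as tiktoken would give exact counts; the helper name here is mine):

```python
# Rough probe for a chat UI's context limit: numbered filler words,
# so you can ask the model to repeat the first one and see whether
# the start of the prompt survived.
# NOTE: one word != one token; real GPT-4 token counts differ.

def make_probe(n_words: int) -> str:
    # "w0 w1 w2 ..." - each word is distinct, so truncation is visible.
    return " ".join(f"w{i}" for i in range(n_words))

probe = make_probe(4000)
print(len(probe.split()))  # → 4000
```

    Paste a ~4k-word probe and then an ~8k-word one; if the model can no longer recall `w0` from the larger probe, you have found the cutoff.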

    What you are talking about are the APIs, which are different and are not what the news discusses. They may even be different models, in the sense that, depending on the context size, you get different results because of the attention mechanism (unless they are using the same model and just restricting the number of tokens; we don’t know). Unfortunately there is no official benchmark from OpenAI comparing gpt-3.5-turbo models with different context sizes, but I would not trust one much anyway. They are very protective of their data and mainly push out marketing material. I would wait for a third party to do the benchmark.
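    The context size matters in practice because anything beyond the window has to be dropped. A common pattern is trimming conversation history to a token budget before sending it; here is a minimal sketch, again using word count as a placeholder for a real tokenizer (function names are mine, not an OpenAI API):

```python
# Keep the most recent messages that fit within a token budget.
# Word count stands in for a real tokenizer; actual model token
# counts are higher because words split into subword pieces.

def approx_tokens(text: str) -> int:
    # Crude proxy: one word ~ one token.
    return len(text.split())

def trim_to_budget(messages: list[str], budget: int = 4000) -> list[str]:
    # Walk backwards from the newest message, keeping as many
    # recent messages as fit under the budget; older ones are dropped.
    kept, used = [], 0
    for msg in reversed(messages):
        cost = approx_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["a " * 3000, "b " * 1500, "c " * 1000]
print([approx_tokens(m) for m in trim_to_budget(history, budget=4000)])
# → [1500, 1000]  (the 3000-word oldest message no longer fits)
```

    Halving the window from ~8k to ~4k means roughly half as much history survives this trimming, which is exactly why the web UI change is noticeable.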

    “Breaking” jailbreaking is not a bug, but it does limit the ability to instruct the model, i.e. prompt engineering, because it is literally designed to restrict prompt engineering; that is the whole idea behind it.

    Edit: here is a link to a guide that also lists the ~4k limit for GPT-4: https://the-decoder.com/chatgpt-guide-prompt-strategies/