Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi…

  • dbilitated@aussie.zone
    1 year ago

    to be fair, fucking up maths problems is very human-like.

    I wonder if it could also be trained on a great deal of mathematical axioms that are computer generated?
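    The idea above could look something like this: a minimal sketch (hypothetical, not any lab's actual pipeline) of generating arithmetic question/answer pairs by program, so the model sees only correct answers rather than whatever the Internet says.

    ```python
    # Sketch: computer-generated arithmetic training pairs.
    # Names and format are made up for illustration.
    import random

    def generate_pairs(n, seed=0):
        rng = random.Random(seed)
        pairs = []
        for _ in range(n):
            a, b = rng.randint(0, 99), rng.randint(0, 99)
            op = rng.choice(["+", "-", "*"])
            expr = f"{a} {op} {b}"
            # The answer is computed, so every pair is correct by construction.
            pairs.append((f"What is {expr}?", str(eval(expr))))
        return pairs
    ```

    Whether training on such pairs would actually make a language model *calculate* rather than pattern-match is exactly the open question.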

    • Cabrio@lemmy.world
      1 year ago

      It doesn’t calculate anything though. You ask ChatGPT what 5+5 is, and it tells you the most statistically likely response based on its training data. Now, we know there are a lot of both moronic and intentionally belligerent answers on the Internet, so the statistical probability of it getting any mathematical equation correct goes down exponentially with complexity, and it never even approaches 100% certainty even with the simplest equations, because 1+1 = window.
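      A toy illustration of the point (this is not how ChatGPT works internally, just a caricature of "most statistically likely response"): a predictor that answers with whatever it saw most often in its training corpus, noise included.

      ```python
      # Caricature of statistical answering: return the most frequent
      # answer seen in training data, not a computed result.
      from collections import Counter

      # Hypothetical scraped "training data", mostly right but with noise.
      corpus = {
          "5+5": ["10", "10", "10", "window", "11"],
      }

      def most_likely_answer(question, corpus):
          return Counter(corpus[question]).most_common(1)[0][0]

      print(most_likely_answer("5+5", corpus))  # → 10
      ```

      With enough noise in the corpus, the most frequent answer can stop being the correct one, and no amount of frequency counting turns this into arithmetic.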