Top physicist says chatbots are just ‘glorified tape recorders’

Leading theoretical physicist Michio Kaku predicts quantum computers are far more important for solving mankind’s problems.

  • PixelProf@lemmy.ca · 1 year ago

    I understand that he’s framing these relative to quantum computing, and that he’s a scientist deeply invested in that realm, but it still seems too reductionist from a software perspective. Ultimately, yes, we are limited by the architecture of our physical computing paradigm, but that doesn’t discount the incredible advancements we’ve made in the space.

    Maybe I’m being too hyperbolic over this small article, but does this mean any advancement in CS research is basically just a glorified (insert elementary mechanical thing here) because it uses bits and von Neumann architecture?

    I used to adore Kaku when I was young, but as I got into academics, saw how attached he was to string theory long after its expiry date, and watched how popular he got peddling pretty wild and speculative fiction, I struggle to take him too seriously in this realm.

    In my experience, which comes from years in labs working on creative computation, AI, and NLP, these large language models are impressive and revolutionary, but quite frankly, for dumb reasons. The transformer was a great advancement, but seemingly only because we piled obscene, previously unspeculated-of amounts of data onto it. Now we can train smaller bots off the data from these bigger ones, which is neat, but it’s still that mass of data doing the work.
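    Roughly, “training smaller bots off bigger ones” means using the big model to generate responses and then fine-tuning the small one on them as supervised data. A toy sketch of that general idea (the model names are hypothetical placeholders, not any particular lab’s pipeline):

    ```python
    # Toy sketch: a big "teacher" model generates responses, which become
    # fine-tuning data for a small "student" model. Model names below are
    # hypothetical placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    teacher_name = "some-big-teacher-model"  # placeholder
    tok = AutoTokenizer.from_pretrained(teacher_name)
    teacher = AutoModelForCausalLM.from_pretrained(teacher_name)

    prompts = [
        "Explain photosynthesis to a ten-year-old.",
        "Summarize the plot of Hamlet in two sentences.",
    ]

    pairs = []
    for p in prompts:
        inputs = tok(p, return_tensors="pt")
        out = teacher.generate(**inputs, max_new_tokens=128)
        pairs.append({"prompt": p,
                      "response": tok.decode(out[0], skip_special_tokens=True)})

    # `pairs` would then be ordinary supervised fine-tuning data for a much
    # smaller model (training loop omitted).
    ```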

    To the general public: yes, LLMs are overblown. To someone who spent years researching creativity-assistance AI and NLP: these are freaking awesome, and I’m amazed at the capabilities we have now in creating code that can do qualitative analysis and natural-language interfacing. But the model is unsustainable unless techniques like Orca come along and shrink down the data requirements. That said, I’m running pretty competent language and image models on a relatively cheap consumer video card with 12 GB of VRAM, so we’re progressing fast.
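    For a sense of what that looks like in practice: you load a ~7B-parameter model in 4-bit precision so the weights fit in 12 GB of VRAM. A minimal sketch, assuming the Hugging Face transformers, bitsandbytes, and accelerate libraries and a CUDA GPU (the model name is just an example):

    ```python
    # Minimal sketch: run a ~7B-parameter language model on a ~12 GB
    # consumer GPU by loading the weights in 4-bit precision.
    import torch
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              BitsAndBytesConfig)

    model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # example model only

    quant = BitsAndBytesConfig(load_in_4bit=True,
                               bnb_4bit_compute_dtype=torch.float16)
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, quantization_config=quant, device_map="auto"
    )

    inputs = tok("Briefly explain what a transformer is.",
                 return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=100)
    print(tok.decode(out[0], skip_special_tokens=True))
    ```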

    Edit to add: And I do agree that we’re going to see wild stuff with quantum computing one day, but that can’t discount the excellent research being done by folks working with existing hardware, and it’s upsetting to hear a scientist balk at a field like that. And I recognize I led this by speaking down on string theory, but string theory pop science (including Dr. Kaku’s) caused havoc in people taking physics seriously.

    • Goodman@discuss.tchncs.de · 1 year ago

      He is trying to sell his book on quantum computers, which is probably why he brought it up in the first place.

      • PixelProf@lemmy.ca · 1 year ago

        Oh, for sure. And it’s a great realm to research, but it’s pretty dirty to rip apart another field to bolster your own. Then again, string theorist…

    • joe@lemmy.world · 1 year ago

      My opinion is that a good indication that LLMs are groundbreaking is how much research it takes to understand why they give the output they give. And that research could concern just one prediction of one word.
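      For a sense of what “one prediction of one word” means: it’s a full probability distribution over the model’s entire vocabulary, and interpretability research digs into how that distribution was produced. A small sketch inspecting one such distribution with GPT-2 via the Hugging Face transformers library (my illustration, not anything from the article):

      ```python
      # Inspect the probability distribution a small model assigns to
      # the single next token after a prompt.
      import torch
      from transformers import GPT2LMHeadModel, GPT2Tokenizer

      tok = GPT2Tokenizer.from_pretrained("gpt2")
      model = GPT2LMHeadModel.from_pretrained("gpt2")

      ids = tok("The capital of France is", return_tensors="pt").input_ids
      with torch.no_grad():
          logits = model(ids).logits[0, -1]   # scores for the next token only
      probs = torch.softmax(logits, dim=-1)   # one distribution over ~50k tokens

      top = torch.topk(probs, 5)
      for p, i in zip(top.values.tolist(), top.indices.tolist()):
          print(f"{tok.decode([i])!r}: {p:.3f}")
      ```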

      • PixelProf@lemmy.ca · 1 year ago

        For me, it’s the next major milestone in what’s been a decade-ish trend of research, and the groundbreaking part is how rapidly it accelerated. We saw a similar boom in 2012-2018, and now it’s accelerating even further.

        Before 2011/2012, if your network was too deep (too many layers), it would just break down and couldn’t learn, so networks had to perform relatively simple tasks. Then a few techniques were developed that enabled deep learning: the ability to really stretch the number of patterns a network could learn, given enough data. Suddenly, things that were jokes in computer science became reality. Deep networks took image recognition to 95% accuracy, for example: the error rate halved roughly every year, going from about 35-40% incorrect classification to 5% in about five years. That’s the same stuff that powered all the hype around AI beating Go champions and professional StarCraft players.
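        A toy illustration of that breakdown (my own sketch in PyTorch, not a historical experiment): with a saturating activation like the sigmoid, whose derivative never exceeds 0.25, the gradient shrinks geometrically as it flows back through the layers, while ReLU lets much more of it through.

        ```python
        # Compare the gradient that survives a deep stack of layers with
        # a saturating activation (sigmoid) vs. ReLU.
        import torch
        import torch.nn as nn

        def grad_norm(act, depth=30, width=64):
            torch.manual_seed(0)
            layers = []
            for _ in range(depth):
                layers += [nn.Linear(width, width), act()]
            net = nn.Sequential(*layers)
            x = torch.randn(1, width, requires_grad=True)
            net(x).sum().backward()          # backprop through all layers
            return x.grad.norm().item()      # gradient reaching the input

        print("sigmoid:", grad_norm(nn.Sigmoid))  # vanishingly small
        print("relu:   ", grad_norm(nn.ReLU))     # orders of magnitude larger
        ```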

        The Transformer (the T in GPT) came out in 2017, around the peak of the deep learning boom. Two years later, GPT-2 was released, and while it’s funny to look back on now, it practically revolutionized temporal data coherence and showed that throwing lots of data at this architecture didn’t break it, like it had previous ones. Then they kept throwing more and more data at it, and it kept going and improving. With GPT-3 about a year later, we saw, as in 2012, an immediate spike in previously impossible challenges being knocked down, and seemingly the models haven’t degraded with more data yet. While it’s unsustainable, it’s the same kind of puzzle piece that pushed deep learning to the forefront in 2012, and the same concepts are being applied to other domains like image generation, which has also seen massive boosts thanks in part to the 2017 research.
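        The core mechanism behind the Transformer is compact enough to sketch: scaled dot-product self-attention, where every token’s output is a weighted mix of every token’s value. A bare-bones NumPy version of my own (single head, no masking, none of the surrounding machinery):

        ```python
        # Scaled dot-product self-attention over a short token sequence.
        import numpy as np

        def self_attention(X, Wq, Wk, Wv):
            Q, K, V = X @ Wq, X @ Wk, X @ Wv        # queries, keys, values
            scores = Q @ K.T / np.sqrt(K.shape[-1]) # token-to-token affinities
            w = np.exp(scores - scores.max(axis=-1, keepdims=True))
            w /= w.sum(axis=-1, keepdims=True)      # softmax over positions
            return w @ V                            # each output mixes all values

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5, 16))                # 5 tokens, 16-dim embeddings
        Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
        print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 16)
        ```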

        Anyways, small rant, but yeah: its hype lies in its historical context, for me. The chatbot is an incredible demonstration of the underlying advancements in data processing made over the past decade, and if working out patterns from massive quantities of data is a pointless endeavour, I have sad news for all folks with brains.

          • PixelProf@lemmy.ca · 1 year ago

            Hmm… nothing off the top of my head right now. I checked out the Wikipedia page for deep learning, and it’s not bad, but it has quite a bit of technical info and jumps around the timeline, though it does go all the way back to the 1920s as a jumping-off point for its history. Most of what I know came from grad school, from researching creative AI around 2015-2019, and from being a bit obsessed with it growing up before and during my undergrad.

            If I were to pitch some key notes: the page details lots of the cool networks that dominated from the 60s through the 2000s, but it’s worth noting that there were lots of competing models besides neural nets at the time. Then, in 2011, two things happened at right about the same time: the ReLU activation (a simple way to help the signal survive through many layers, allowing more complexity), which, while established in the 60s, only swept deep learning in 2011; and, majorly, Nvidia’s cheap graphics cards, whose parallel processing and CUDA were found to massively boost the efficiency of training networks.
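            That GPU point is easy to demo even today. A rough sketch, assuming PyTorch and an Nvidia card (exact speedups vary wildly by hardware, and a real benchmark would average many warmed-up runs):

            ```python
            # Time the same large matrix multiply on CPU vs. a CUDA GPU.
            import time
            import torch

            a = torch.randn(4096, 4096)
            b = torch.randn(4096, 4096)

            t0 = time.perf_counter()
            _ = a @ b                     # one big matmul on the CPU
            cpu_s = time.perf_counter() - t0

            if torch.cuda.is_available():
                ag, bg = a.cuda(), b.cuda()
                _ = ag @ bg               # warm-up (kernel load, etc.)
                torch.cuda.synchronize()
                t0 = time.perf_counter()
                _ = ag @ bg
                torch.cuda.synchronize()  # wait for the async kernel to finish
                gpu_s = time.perf_counter() - t0
                print(f"CPU: {cpu_s:.4f}s  GPU: {gpu_s:.4f}s")
            else:
                print(f"CPU: {cpu_s:.4f}s (no CUDA device found)")
            ```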

            I found a few links with some cool perspectives:

            • Nvidia post with some technical details

            • Solid and simplified timeline with lots of great details

            It does exclude a few of the big popular-culture events, like Watson on Jeopardy! in 2011. To me that’s fascinating, because Watson’s architecture was an absolute mess by today’s standards: over 100 different algorithms working in conjunction, mixing tons of techniques together to get a pretty specifically tuned question-and-answer machine. It took 2,880 CPU cores to run, and it could win about 70% of the time at Jeopardy. Compare that to today’s GPT models: while ChatGPT requires far more massive amounts of processing power to run, the architecture is otherwise elegant, and I can run awfully competent models on a $400 graphics card. I was actually in a gap year, waiting to start my undergrad in AI and robotics, during the Watson craze, so seeing it and then seeing the 2012 big bang was wild.