Which of the following sounds more reasonable?

  • I shouldn’t have to pay for the content that I use to tune my LLM model and algorithm.

  • We shouldn’t have to pay for the content we use to train and teach an AI.

By calling it AI, the corporations are able to advocate for a position that’s blatantly pro corporate and anti writer/artist, and trick people into supporting it under the guise of a technological development.

  • FancyGUI@lemmy.fancywhale.ca · 1 year ago

    I can tell you for a fact that there's nothing new going on, only the MASSIVE investment from Microsoft that allows them to train on an insane amount of data. I am no "expert" per se, but I've been studying and working with AI for over a decade, so feel free to judge my reply as you please.

    • SCB@lemmy.world · 1 year ago

      nothing new going on

      I can't think of anything less accurate to say about LLMs, other than that they're a world-ending threat.

      This is a bit like saying “The internet is a cute thing for tech nerds but will never go mainstream” in like 1995.

    • Trainguyrom@reddthat.com · 1 year ago

      nothing new going on

      Uhhhh, the available models are improving by leaps and bounds every month, and there's quite a bit of tangible advancement happening every week. Even more critically, the models that can be run on a single computer are very quickly catching up to those that, just a year or two ago, required some percentage of a hyperscaler's datacenter to operate.

      Unless you mean to say that the current insane pace of advancement is all built off of decades of research, and that a lot of the specific recent advancements happen to be fairly small innovations on previous research, infused with a crapload of cash and hype (far more than most researchers could ever dream of).

      • FancyGUI@lemmy.fancywhale.ca · 1 year ago

        all built off of decades of research and a lot of the specific advancements recently happen to be fairly small innovations into previous research infused with a crapload of cash and hype

        That's exactly what I mean! The research projects I worked on 5-7 years ago had already created LLMs like this that were as impressive as GPT. I don't mean that the things going on now aren't impressive; I just mean that there's nothing actually new. That's all. It's similar to the previous hype wave in AI around machine learning models, when Google was pushing deep learning. I really just want to point that out.

        EDIT: Typo