People have been dealing with advertisements while surfing in their browsers for ages. I’ve just recently learned that Google is in the process of killing uBlock Origin on the Chrome browser, as well as on all other Chromium-based browsers.

For years we’ve heard people complaining, bitching, and whining about how they keep seeing ads, and those trying to help them keep wasting time pointing out that these people are surfing without extensions, whether on Chrome, Firefox, or another browser.

By this point, I’ve long since stopped being that helper, because if you cared at all about the advertisements you see, you would have gotten on the adblocker wagon long ago. You bring this on yourself.

  • Dr. Moose@lemmy.world · 10 hours ago

    That AI safety is much more important than AI hurting copyright or artists.

    I say this because the “AI sucks haha” and “AI just steals” rhetoric is very harmful to the AI safety movement, as people just don’t believe AGI, or even close-to-AGI, will be capable enough to harm our society.

    Currently, many estimate that there’s a 1–20% chance that AGI could end our civilization. So fuck the copyright and fuck the artists; when we’re looking at odds like this, we need to start preparing now, even if it’s 10 years away.

    But alas, nobody can think further than the length of their own nose, and honestly I’m just hoping we’re lucky enough to be in that 80%, because clearly we’re not going to do anything about it.

    • JackbyDev@programming.dev · 7 hours ago

      AI safety is definitely an important thing, but when you follow it up with “AGI could end our civilization,” you lose me.

      • capital@lemmy.world · 4 hours ago

        It sounds hyperbolic, but if you assume it will reach human-level intelligence and will have the ability to update its own code, you very quickly have something much smarter than us. Whether it will want to help or hurt us is an unknown. Whether we can control something that’s smarter than us (and getting smarter every second) is unlikely, IMO.

    • Nutteman@lemmy.world · 10 hours ago

      That would require an actual AGI to emerge, which it has not and is not going to. LLMs are fancy text-prediction tools and little more.

      • capital@lemmy.world · 4 hours ago

        Are you assuming LLMs are the only way humans could ever try making an AGI? If so, why do you assume that?

        • Nutteman@lemmy.world · 2 hours ago

          There’s more important shit to worry about than whether an unproven sci-fi concept will come into being any time soon.

        • anothermember@lemmy.zip · 3 hours ago

          I agree that AGI is dangerous, but I don’t see LLMs as evidence that we’re close to AGI; I think they should be treated as separate issues.

          • capital@lemmy.world · 3 hours ago

            Given what I think I know about LLMs, I agree. I don’t think they’re the path to AGI.

            The person I replied to said AGI was never going to emerge.

    • Ceedoestrees@lemmy.world · 10 hours ago

      What we see in AI as average consumers is like an RC Hot Wheels car compared to the state-of-the-art tank being used by big corporations.

      Just imagine: if an early LLM could fool an engineer into thinking it was sentient, what could a state-of-the-art system do, one designed to predict the market, run propaganda bots on social media, or straight up manufacture news stories with the footage to back them up?

      The AI being used by big corporations is so advanced that it’s one of the reasons countries have been trying to digitally isolate themselves. It’s really not an if, it’s a when.

        • Ceedoestrees@lemmy.world · 9 hours ago

        I do. I did get a little lost in the weeds with my point, though, as I was speaking in a more general sense about how AI is already powerful and dangerous, since AI safety is a subject in this thread.