• hertg@infosec.pub
    2 months ago

    If you want to find solutions online, stop using Google.

    Sometimes I post stuff to my blog about things that I could not find a satisfying solution to and where I had to figure one out myself. I post those things because I want it to be discoverable by the next person who is searching for it.

    I did a quick test, and my posts don’t show up anywhere on Google. I can find them via Kagi, DuckDuckGo, and even Bing. But Google doesn’t show my stuff, even when hitting specific keywords that only my post talks about. And if my site shows up at all, it’s only some six months or more after I posted.

    I even tried their Search Console thing; it doesn’t report any issues with my site. So it must be the lack of ads, cookies, and AI-generated content that makes Google suspicious of it.

    So, if you are an engineer looking for solutions to your problems online, just stop using Google. It’s become so utterly useless it’s ridiculous. Of course you’ll miss all the cool AI features and scam ads, but there are always some drawbacks.

    _Reposting my post from Mastodon yesterday; it felt relevant. https://infosec.exchange/@hertg/112989703628721677_

    • kameecoding@lemmy.world
      2 months ago

      Google used to be better than DDG like a year ago; now it’s almost completely unusable for development, and I find myself going back to DDG and actually finding what I want instead of unrelated nonsense, ads, and LLM output crap.

  • Revan343@lemmy.ca
    2 months ago

    The best book is either Consider Phlebas by Iain Banks, or Fine Structure by Sam Hughes.

    Oh, you meant programming books. Maybe still try Sam Hughes; it’ll probably be more blog post than book, though.

    Edit: You might also like Ra by Sam Hughes; it’s magic as a field of science/engineering, and spells have programming-like syntax. Spoiler: ‘magic’ is not actually magic

          • Zacryon@feddit.org
            2 months ago

            If we’re speaking of transformer models like ChatGPT, BERT, or whatever: they don’t have memory at all.

            The closest thing that resembles memory is the accepted length of the input sequence combined with the attention mechanism. (If left unmodified, though, this leads to a quadratic increase in computation time as that sequence grows.) And since the attention weights are a learned property, in practice earlier tokens of the input sequence tend to get basically ignored the further they lie “in the past”, as they usually do not contribute much to the current context.

            “In the past”: Transformers technically “see” the whole input sequence at once. But they are equipped with positional encoding which incorporates spatial and/or temporal ordering into the input sequence (e.g., position of words in a sentence). That way they can model sequential relationships as those found in natural language (sentences), videos, movement trajectories and other kinds of contextually coherent sequences.
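
            A rough sketch of what that looks like in NumPy (a hypothetical minimal single-head scaled dot-product attention plus the standard sinusoidal positional encoding; the names and sizes are made up for illustration, not taken from any particular model):

            ```python
            import numpy as np

            def positional_encoding(seq_len, d_model):
                # Standard sinusoidal encoding: bakes token order into the embeddings.
                pos = np.arange(seq_len)[:, None]            # (seq_len, 1)
                i = np.arange(d_model)[None, :]              # (1, d_model)
                angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
                return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

            def attention(Q, K, V):
                # Scaled dot-product attention: every position attends to every other,
                # which is where the quadratic cost in sequence length comes from.
                scores = Q @ K.T / np.sqrt(Q.shape[-1])      # (seq_len, seq_len)
                w = np.exp(scores - scores.max(axis=-1, keepdims=True))
                w /= w.sum(axis=-1, keepdims=True)           # row-wise softmax
                return w @ V, w

            seq_len, d_model = 8, 16
            x = np.random.randn(seq_len, d_model) + positional_encoding(seq_len, d_model)

            # In a real transformer Q, K, V come from learned projections of x;
            # identity projections here just to keep the sketch short.
            out, weights = attention(x, x, x)
            print(weights.shape)  # (8, 8): the only "memory" is what fits in those positions
            ```

            Nothing persists between calls; whatever doesn’t fit into that input window simply doesn’t exist for the model.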

      • jxk@sh.itjust.works
        2 months ago

        My best guess is that in some configurations it raises SIGSEGV and then dumps core. Then you use a debugger to analyse the core dump. But then again, you could also just set a breakpoint, or, if you absolutely want a core dump, call abort() and make sure the process is allowed to dump core on SIGABRT.
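
        For what it’s worth, a minimal sketch of that abort()-to-core-dump route (Python here just for brevity; C’s abort() behaves the same way, and this assumes a Linux/POSIX system where core dumps aren’t intercepted by something like systemd-coredump):

        ```python
        import os
        import resource

        # Equivalent of `ulimit -c <hard limit>`: raise the soft core-file size limit
        # so the process is allowed to write a core file at all.
        soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
        resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

        os.abort()  # raises SIGABRT; its default disposition terminates the process and dumps core
        ```

        You’d then load the resulting core file into a debugger alongside the matching executable, exactly as you would after a SIGSEGV.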