• 0 Posts
  • 141 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • Models are geared towards producing the response a human is most likely to accept, not necessarily the correct answer itself. The first answer is driven by autocomplete probabilities learned from a huge sample of data, and versions with memory adjust later responses based on how well the human is accepting the answers. There is no actual reasoning over the answers, although that may be coming in the latest variants being worked on, which have components that cycle through hundreds of candidate generations for a problem to try to verify them and pick the best one. Basically, rather than spit out the first autocomplete answer, there’s a subprocess to weed out the junk and narrow in on a hopefully good result (roughly the idea sketched below). Still not AGI, but it’s more useful than the first LLMs.
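
    A minimal sketch of that “generate many, verify, pick the best” loop, in Python for illustration; generate_candidate and score_candidate are hypothetical stand-ins for a model call and a verifier, not any real API:

    ```python
    import random

    def generate_candidate(prompt: str) -> str:
        # Hypothetical stand-in for one sampled LLM completion.
        return f"candidate-{random.randint(0, 9)} for: {prompt}"

    def score_candidate(answer: str) -> float:
        # Hypothetical verifier; a real one might run tests, check the math,
        # or have another model grade the answer.
        return random.random()

    def best_of_n(prompt: str, n: int = 100) -> str:
        # Rather than returning the first autocomplete result,
        # sample many candidates and keep the highest-scoring one.
        candidates = [generate_candidate(prompt) for _ in range(n)]
        return max(candidates, key=score_candidate)

    print(best_of_n("What is 17 * 24?"))
    ```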


  • Unless you want a war-time level mobilisation

    Some of the more “radical” scientists have been calling for exactly that for a while now, and the case is only stronger today: we haven’t done much to change course, and more damage has been done in the meantime. You aren’t wrong that addressing the core problems would be a long and intensive process, and most people would resist even mandatory participation (which says something about the chances of doing much voluntarily).



  • Multi-generational homes don’t necessarily equate to multiple incomes for support. Historically there was a single income earner because the cost of living was more balanced against the average income (not true for everyone and every demographic, but on average). Having two or more people in a family earning a paycheck is a modern development that arose as wages flatlined. I suppose you could go further back, to when the income was the family farm or business and the kids were free labor, but that’s not really comparable to what’s being discussed.


  • It’s not AGI that’s terrifying, but how willing people are to let anything take over control. LLMs are “just” predictive text generation with a lot of extras that sometimes make the output really convincing, and yet so many individuals and companies have basically handed over the keys without even second-guessing the answers.

    These past few years have shown that if (and it’s a big if) AGI/ASI comes along, we are so screwed, because we can’t even handle the dumber tools well. LLMs in the hands of willing idiots can be a disaster in themselves, and it’s possible we’re already there.



  • Rhaedas@kbin.social to Privacy@lemmy.ml · “Privacy = no free speech” · 1 year ago

    Free speech also entails how willing you are to put that speech out there. If you want to put it behind a paywall of some sort, you’re most welcome to do that. Keep in mind that free speech and the actions around it also have consequences. If your content is good enough, people might pay to see it. Free market and all that.


  • These softwares are purging resumes of perfectly qualified candidates without the human hiring managers ever knowing about it.

    I was watching an astronomer’s channel the other day, and she brought up how automated much of the initial processing for telescopes is now. She raised a similar concern: is there good data in what gets filtered out that the humans who only see the “sanitized” end product never know about? Any tool is useful as long as you understand its limitations and don’t trust it blindly. I fear that most people are using AI with blind trust in the “intelligence” part, not understanding that it’s hardly perfect and often very bad when misused, or overused for everything. A toy example of that silent-filtering failure mode is sketched below.
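
    To make the failure mode concrete, here’s a hypothetical sketch of a naive keyword screen of the kind resume software might apply; the resumes and required_keywords are invented for illustration, not taken from any real product:

    ```python
    # Hypothetical keyword screen, invented for illustration.
    required_keywords = {"javascript", "react"}

    resumes = {
        "candidate_a": "Senior front-end dev, six years of React and JavaScript.",
        "candidate_b": "Front-end dev, six years of React and TypeScript.",  # qualified, but dropped
    }

    def passes_screen(text: str) -> bool:
        # Forward a resume only if every required keyword appears verbatim.
        lower = text.lower()
        return all(kw in lower for kw in required_keywords)

    # candidate_b never reaches a human reviewer -- the same blind spot
    # as the automated telescope pipelines.
    forwarded = [name for name, text in resumes.items() if passes_screen(text)]
    print(forwarded)  # ['candidate_a']
    ```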