I suspect he’s just preparing to make the pharmaceutical lobby pay their dues.
Get ready, there’s going to be a whole lot of peeling off the warning labels.
I wonder, will they outright outlaw vaccines, or will they just become private-pay only…
I’m fairly certain they just all got together to make a list of all the stupid things they could do to own the libs.
The neat thing about this one is that it only hurts the people who are going to follow him. The people who didn’t vote for that side know better.
Sir, after the latest round of training, the new LLM hallucinates all the time, talking about nonsense that never happened, and every time you ask it a question, it gets preoccupied with the first answer it comes up with and won’t take any more input.
Wait, I have an idea…
John Oliver needs a little bit of a hiatus. I wouldn’t trust him being around here, with or without security.
Same as the thin blue liners with Punisher tattoos. They’re not in it for the backstories or the morals.
Town of 400? Was she a flight risk? Could they not have asked her to come by and turn herself in? Do keep in mind this is a non-violent crime. This was for show, to make a point, or simply to be cruel.
The police here are probably just two of the 400 people with no oversight.
When the deputy was complaining about her child wandering downtown, it paints a very different picture if there’s only a general store, a Dollar General, and a gas station.
The only thing I can imagine, if the police aren’t being dickheads, is that the kid was getting into trouble. Maybe he was stealing, or being perceived as stealing. Because he was being homeschooled, maybe he was down there during school hours and they were pissed off because he seemed truant but can’t really be truant.
That’s what happens when you have a general store and a Dollar General comes in next door. They end up selling anything you can’t get at the Dollar General, and advertising space.
Yeah, once you have to question its answer, it’s all over. It got stuck and gave you the next-best answer in its weights, which was absolutely wrong.
You can always restart the convo, re-insert the code, say what’s wrong in a slightly different way, and hope the random noise generator leads it down a better path :)
I’m doing some stuff with translation now, and I’m finding you can restart the session, run the same prompt, and get better or worse versions of a translation. After a few runs, you can take all the output and ask it to rank each translation on correctness and critique them. I’m still not completely happy with the output, but it does seem that sometimes, if you MUST get AI to answer the question, there can be value in making it answer across more than one session.
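A minimal sketch of that multi-run-then-rank loop, assuming a local ollama server on its default port (the `/api/generate` endpoint and `response` field are ollama's; the model name, prompts, and run count are just examples):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default endpoint
MODEL = "llama3.2"  # example model; use whatever you've pulled

def ask(prompt: str) -> str:
    """One-shot request; each call is a fresh session with no carried-over context."""
    body = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

source_text = "..."  # the text you want translated
prompt = f"Translate this into English:\n{source_text}"

# Same prompt, several independent runs; sampling noise gives different candidates.
candidates = [ask(prompt) for _ in range(4)]

# Hand all the candidates back and ask for a ranking and critique.
numbered = "\n\n".join(f"Translation {i + 1}:\n{c}" for i, c in enumerate(candidates))
print(ask(f"Rank these translations of the same text by correctness and critique each one:\n\n{numbered}"))
```

The stateless `ask()` is the point here: every call is the cheap equivalent of "restart the session".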
I would cancel that subscription SOOO FAST.
I’d argue that YTMusic is a superior product to YT, but both put together aren’t worth anywhere near the cost. You can get a premium TV/Movie service for that price with family access.
We’ve GOT A PAYERR OVER HEREEEEE!!!
I think Wil Wheaton had something that was supposed to air on Freevee. The link his PR person gave him just threw you back to the Amazon video page; I’ve never actually seen any information about the service, or a working video stream, surface.
It seems like a lot of places are ready to throw millions of dollars into a system and then never freaking market it.
Oh god yes, I ran into this asking for a shell.nix file with a handful of tricky dependencies. It kept trying to do an insanely complicated temporary pull-and-build from git instead of just a six-line file asking for the right packages.
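For contrast, roughly the kind of file it should have produced; a minimal sketch with placeholder package names, since the actual dependencies aren't named here:

```nix
# Minimal shell.nix sketch; swap the placeholder packages for the real dependencies.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = with pkgs; [ python3 ffmpeg pkg-config ];
}
```

Drop into it with `nix-shell`; no git checkouts or temporary builds required.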
This has already started to happen. The new llama3.2 model is only 3.7GB and it’s WAAAAY faster than anything else. It can throw a wall of text at you in just a couple of seconds. You’re still not running it on $20 hardware, but you no longer need a 3090 to have something useful.
You can get a lot done currently with Arc. The mobile Arc versions share system memory, so if you get a mini PC with Arc and upgrade it to 96GB, you can share system RAM with the GPU and load decently large models. They’re a little slow, it not being VRAM and all, but still useful (and cheap).
https://www.youtube.com/watch?v=xyKEQjUzfAk
I have it running on a Zenbook Duo with 32GB, so I can’t load the 70B models, but it works shockingly well.
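If you want to put a number on "shockingly well", ollama's non-streaming response includes eval counts and durations you can turn into tokens per second; a quick sketch, assuming the model is already pulled locally (the field names are from ollama's API; the model and prompt are examples):

```python
import json
import urllib.request

# Quick tokens/sec check against a local ollama server.
body = json.dumps({
    "model": "llama3.2",  # example; whatever you've pulled
    "prompt": "Write a short paragraph about mini PCs.",
    "stream": False,
}).encode()
req = urllib.request.Request("http://localhost:11434/api/generate", data=body,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    r = json.loads(resp.read())

# eval_count = tokens generated, eval_duration = generation time in nanoseconds
print(f"{r['eval_count'] / (r['eval_duration'] / 1e9):.1f} tokens/sec")
```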
Sounds like you’re getting better numbers than we do :) I wonder if there’s some incompatibility in our fleet hardware that you don’t have. We’re mostly Dell XPS. The biggest problem we regularly have is the audio outputs and mic inputs going rogue. They’ll be using the machine with sound all day, no problem, then go into a meeting and there’s no sound. They’ll have the same problem with microphones. Somehow the browser session behind the scenes doesn’t pick up the current default device settings, and the volume for the Slack session ends up being muted.
I certainly don’t want to run Windows on it :)
I’ve been running llama to keep my telemetry out of the hands of Microsoft/Google/"open"AI. I’m kind of shocked how much I can do locally with a half-assed video card, an offline model, and a hacked-up copy of searxng.
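A rough sketch of how those pieces can be wired together, assuming a local searxng instance with the JSON output format enabled in its settings.yml (it's off by default) plus a local ollama server; the hostnames, ports, model, and prompts are illustrative:

```python
import json
import urllib.parse
import urllib.request

SEARX_URL = "http://localhost:8080/search"          # assumed local searxng instance
OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default endpoint

question = "What hardware do I need to run a 7B model locally?"

# searxng returns JSON when 'json' is enabled under search.formats in settings.yml.
query = urllib.parse.urlencode({"q": question, "format": "json"})
with urllib.request.urlopen(f"{SEARX_URL}?{query}") as resp:
    results = json.loads(resp.read())["results"][:5]
snippets = "\n".join(r.get("content", "") for r in results)

# Feed the search snippets to the local model as grounding context.
body = json.dumps({
    "model": "llama3.2",
    "prompt": f"Using these search snippets:\n{snippets}\n\nAnswer this: {question}",
    "stream": False,
}).encode()
req = urllib.request.Request(OLLAMA_URL, data=body,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```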
We have 60 people; it varies.
The directions specifically walk you through using a Docker host and an Elasticsearch host, but there’s certainly no reason you couldn’t just set that up on your own.