I vote for xX-[X]-Xx
Alas, this being the darkest timeline, we’ll probably end up with X Social.
You can list every man page installed on your system with man -k '.', or just apropos '.'
But that’s a lot of random junk. If you only want “executable programs or shell commands”, only grab man pages in section 1 with apropos -s 1 '.'
You can get the path of a man page by using whereis -m pwd (replace pwd with your page name).
You can convert a man page to HTML with man2html (may require apt install man2html or whatever equivalent applies to your distro).
That tool adds a couple of useless lines at the beginning of each file, so we’ll want to pipe its output through tail -n +3 to get rid of them.
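If you haven’t used it before, tail -n +3 means “print from line 3 onward”, discarding the first two lines; a quick sanity check of the behavior:

```shell
# tail -n +3 starts printing at line 3, so "one" and "two" are dropped.
printf 'one\ntwo\nthree\nfour\n' | tail -n +3
# prints:
# three
# four
```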
Combine all of these together in a questionable incantation, and you might end up with something like this:
mkdir -p tmp ; cd tmp
apropos -s 1 . | cut -d' ' -f1 | while read -r page; do whereis -m "$page" ; done | while read -r id path rest; do man2html "$path" | tail -n +3 > "${id::-1}.html"; done
List every command in section 1 and extract just the name. For each one, get a file path. For each name and file path (ignoring the rest), convert to HTML and save it as a file named $id.html.
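The fiddliest part of the incantation is parsing the whereis output, which prints a colon after the page name; here’s a minimal, self-contained sketch of just that step (the sample line is hypothetical — real output depends on your system):

```shell
# whereis -m prints lines like "name: /path/to/manpage", so reading
# them as "id path rest" leaves a trailing colon on the id.
line="pwd: /usr/share/man/man1/pwd.1.gz"  # hypothetical sample line

read -r id path rest <<EOF
$line
EOF

# ${id%:} strips the trailing colon (POSIX; the one-liner above uses
# the bash-only ${id::-1} to the same effect).
echo "${id%:}"   # pwd
echo "$path"     # /usr/share/man/man1/pwd.1.gz
```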
It might take a little while to run, but then you could run firefox . or whatever and browse the resulting mess.
Or keep tweaking all of this until it’s just right for you.
He literally just fixed it, and he learned nothing from this; Dunning-Kruger as strong as ever.
Instead of simply blurring them, it’d be technically possible to feed their images through a stable diffusion prompt, like “humanoid lizards” or “frantic lemmings”…
Also, I understand that a large language model could be made to rewrite articles about them with a matching prompt.
That would be very silly, of course.
More appropriate tools to detect AI generated text you mean?
It’s not a thing. I don’t think it will ever be a thing. Certainly not reliably, and never as a 100% certainty tool.
The punishment for a teacher deciding you cheated on a test or an assignment? I don’t know, but I imagine it sucks. Best case, you’d probably be at risk of failing the class and potentially the grade/semester. Worst case you might get expelled for being a filthy cheater. Because an unreliable tool said so and an unreliable teacher chose to believe it.
If you’re asking what answer teachers should have to defend against AI generated content, I’m afraid I don’t have one. It’s akin to giving students math homework assignments but demanding that they don’t use calculators. That could have been reasonable before calculators were a thing, but not anymore, so teachers no longer expect such a rule to make sense and don’t impose it on students.
There are stories upon stories of students getting shafted by gullible teachers who took one of those AI detectors at face value and decided their students were cheating based solely on its output.
And somehow those teachers are not getting the message that they’re relying on snake oil to harm their students. They certainly won’t see this post, and there just isn’t enough mainstream pushback explaining that AI detectors are entirely inappropriate tools to decide whether to punish a student.
No True Christian would ever activate a fully automated sentry killbot that doesn’t use at least one of its compute cores to pray to the Almighty on a loop.
“I’m not X but <position statement that clearly requires them to be X>” and “I don’t want to Y but <proceeds to do exactly Y>” are used by people who mistakenly believe a disclaimer provides instant absolution.
On the other hand, I’ve never had anybody threaten to yuck my yum in exactly those terms, and I’m slightly intrigued by the prospect.
I was watching the network traffic sent by Twitter the other day, as one does, and apparently whenever you stop scrolling for a few seconds, whatever post is visible on screen at that time gets added to a little pile that then gets “subscribed to” because it generated “engagement”, no click needed.
This whole insidious recommendation nonsense was probably a subplot in the classic sci-fi novel Don’t Create The Torment Nexus.
Almost entirely unrelated, but I’ve been playing The Algorithm (part of the Tenet OST, by Ludwig Göransson) on repeat for a bit now. It’s also become my ring tone, and if I can infect at least one other hapless soul with it, I’ll be satisfied.
Several times now, I’ve sent people I knew links to articles that looked perfectly fine to me, but turned out to be unusable ad-ridden garbage to them.
Since then, I try to remember to disable uBlock Origin to check what they’ll actually see before I share any links.
That’s odd. Their own sidebar points to a “Want to reform work? Start or join a union where you work.” post, so your ban was perhaps not tied to your use of the U-word.
On that note, maybe it would have been more constructive to post your actual question here rather than an “I got banned” post.
Presumably because they don’t have a single delivery employee. They just provide “tech” that lets drivers and customers find each other.
Of course, if those companies were to become responsible for providing a living wage to their “gig workers”, it would become harder to still call them mere “tech” companies (and some might argue that an article using that label to describe them is in fact implicitly picking a side in that lawsuit.)
The term AI was coined many decades ago to encompass a broad set of difficult problems, many of which have become less difficult over time.
There’s a natural temptation to remove solved problems from the set of AI problems, so playing chess is no longer AI, diagnosing diseases through a set of expert system rules is no longer AI, processing natural language is no longer AI, and maybe training and using large models is no longer AI nowadays.
Maybe we do this because we view intelligence as a fundamentally magical property, and anything that has been fully described has necessarily lost all its magic in the process.
But that means that “AI” can never be used to label anything that actually exists, only to gesture broadly at the horizon of what might come.
That sounds like an improbable attempt to leverage the notion that minors can’t enter into a legally binding contract into a loophole to get anything for free by simply having your kid order it.
I’ll note that there are plenty of models out there that aren’t LLMs and that are also being trained on large datasets gathered from public sources.
Image generation models, music generation models, etc.
Heck, it doesn’t even need to be about generation. Music recognition and image recognition models can also be trained on the same sort of datasets, and arguably come with similar IP right questions.
It’s definitely a broader topic than just LLMs, and attempting to enumerate exhaustively the flavors of AIs/models/whatever that should be part of this discussion is fairly futile given the fast evolving nature of the field.
I have a small userscript/style tweak to remove all input fields from reddit, so I’m still allowing myself to browse reddit in read-only mode on desktop, with no mobile access.
It’s a gentle way to wean myself off. I’m still waiting for my GDPR data dump anyway, so I need to check reddit fairly regularly to be able to grab it when/if it arrives.
One of my guilty pleasures is to rewrite trivial functions to be statement-free.
Since I’d be too self-conscious to put those in a PR, I keep them mostly to myself.
For example, here’s an XPath wrapper:
const $$$ = (q,d=document,x=d.evaluate(q,d),a=[],n=x.iterateNext()) => n ? (a.push(n), $$$(q,d,x,a)) : a;
Which you can use as $$$("//*[contains(@class, 'post-')]//*[text()[contains(.,'fedilink')]]/../../..") to get an array of matching nodes.
If I was paid to write this, it’d probably look like this instead:
function queryAllXPath(query, doc = document) {
  const array = [];
  const result = doc.evaluate(query, doc);
  let node = result.iterateNext();
  while (node) {
    array.push(node);
    node = result.iterateNext();
  }
  return array;
}
Seriously boring stuff.
Anyway, since var/let/const are statements, I have no choice but to use optional parameters instead, and since loops are statements as well, recursion saves the day.
Would my quality of life improve if the lambda body could be written as => if n then a.push(n), $$$(q,d,x,a) else a? Obviously, yes.
That last message to /u/ModCodeOfDoWhatWeSay is a bit heartbreaking, like there’s somehow still a chance that this is all one big silly misunderstanding and if only Reddit knew all the facts, they would absolutely revert that decision.
Pixel 7 with a barely customized Nova Launcher, because I’m basic but I need rounded square icons.
The background looks iffy in the shot, but it’s a live wallpaper from Shader Editor running Machine DNA’s GLSL shader with minimal tweaks needed to make it fit on the phone.
That weird twitter icon is a Firefox PWA running twitter.com with various userscripts installed, to remove antifeatures and bad logos.