It’s been a while since AI was introduced into the daily lives of users all across the internet. When it first came out I was curious, yeah, like everyone, and tried some prompts to see “what this thing can do.” Then I never, ever used AI again, because I never saw it as something necessary; we already had automated systems. So time kept moving for me until this day, when I realized something: how dependent people are on this shit. I mean, REALLY dependent. And then they go “I only used it for school 😢” like, are you serious, dude? Do you leave your future to an algorithm?

Coming back to my question: years have passed, and I think we all have a more developed opinion about AI by now. What do you think? Fuck it and use it anyway? If that’s the case, why blame companies for making its use more accessible, like Microsoft putting Copilot even in Notepad? “Microsoft just wants to harvest your data.” Isn’t that what LLMs are about? Why blame them if you’re going to use the same problem in a different flavor? Not defending Microsoft here, I’m only using it as an example; swap in the company of your own preference.
Fuck off and die. That’s addressed to AI and AI companies, not you.
Like every new technology that is hailed as changing everything, it is settling into a small handful of niches.
I use a service called Consensus, which unearths academic papers relevant to a specific clinical question; in the past that could be incredibly time-consuming.
I also sometimes use a service called Heidi that uses voice recognition to document patient encounters. It’s quite good for a specific type of visit that suits a rigid template, but for 90% of my consults I have no idea why the patient is coming in, and for those I find it not much better than writing notes myself.
Obviously for creative work it is near useless.
I want actual AI, and not even necessarily for anything other than answering the question of “can we make a sentient being that isn’t human?”
What is being sold as AI isn’t anything cool, or special, or even super useful outside of extremely specific tasks that are certainly not things that can be sold to the general public.
I find it a little bit useful to supplement a search engine at work as a dev but it can’t write code properly yet.
I can see it doing a lot of harm in the ways it has been implemented unethically, and in some cases we don’t have a legal resolution on whether it’s “legal,” but I think any reasonable person knows that taking an original artist’s work and having a computer generate counterfeits is not really right.
I think there is going to be a massive culling of people who are charlatans anyway, people whose artistic output is meritless. See 98% of webcomics. Most pop music. Those are already producing output so flavorless and bland it might as well have come from an AI model. Those people are going to have to find real jobs that they are good at.
I think the worst of what AI is going to bring is not even in making art, music, video, shit like that… It’s going to be that dark-pattern stuff where human behavioral patterns and psychology are meticulously analyzed and used against us. Industries that target human frailties are going to use these heavily.
Effective communication will become a quaint memory of the past that seniors rant about.
Except for a very few niche use cases (subtitles for the hearing-impaired), almost every aspect of it (techbros, capitalism, art theft, energy consumption, erosion of what is true, etc.) is awful and I’ll not touch it with a stick.
It’s a great new technology that unfortunately has become the subject of baying mobs of angry people ignorant of both the technical details and legal issues involved in it.
It has drawn some unwarranted hype, sure. It’s also drawn unwarranted hate. The common refrain of “it’s stealing from artists!” is particularly annoying; it’s just another verse in the never-ending march to further monetize and control every possible scrap of people’s thoughts and ideas.
I’m eager to see all the new applications for it unfold, and I hope that the people demanding it be restricted with draconian new varieties of intellectual property law, or be placed solely under the control of gigantic megacorporations, won’t prevail (these are the same group of people; they often don’t realize it).
Except they DID steal. Outright. They used millions of people’s copyrighted works (art, books, etc.) to train these models and then sold them off. I don’t know how else you can phrase it.
As I said above:
mobs of angry people IGNORANT of both the technical details and legal issues involved in it.
Emphasis added.
They do not “steal” anything when they train an AI off of something. They don’t even violate copyright when they train an AI off of something, which is what I assume you actually meant when you sloppily and emotively used the word “steal.”
In order to violate copyright you need to distribute a copy of something. Training isn’t doing that. Models don’t “contain” the training material, and neither do the outputs they produce (unless you try really hard to get it to match something specific, in which case you might as well accuse a photocopier manufacturer of being a thief).
Training an AI model involves analyzing information. People are free to analyze information using whatever tools they want to. There is no legal restriction that an author can apply to prevent their work from being analyzed. Similarly, “style” cannot be copyrighted.
A world in which a copyright holder could prohibit you from analyzing their work, or could prohibit you from learning and mimicking their style, would be nothing short of a hellish corporate dystopia. I would say it baffles me how many people are clamoring for this supposedly in the name of “the little guy”, but sadly, it doesn’t. I know how people can be selfish and short-sighted, imagining that they’re owed for their hard work of shitposting on social media (that they did at the time for free and for fun) now that someone else is making money off of it. There are a bunch of lawsuits currently churning through courts in various jurisdictions claiming otherwise, but let us hope that they all get thrown out like the garbage they are because the implications of them succeeding are terrible.
The world is not all about money. Art is not all about money. It’s disappointing how quickly and easily masses of people started calling for their rights to be taken away in exchange for the sliver of a fraction of a penny that they think they can now somehow extract. The offense they claim to feel over someone else making something valuable out of something that is free. How dare they.
And don’t even get me started on the performative environmental ignorance around the “they’re disintegrating all the water!” and “each image generation could power billions of homes!” nonsense.
I’m generally a fan of LLMs for work, but only if you’re already an expert or at least well versed in whatever you’re doing with the model, because it isn’t trustworthy.
If you’re using a model to code, you better already know how that language works and how to debug it, because the AI will just lie.
If you need it to make an SOP, then you better already have an idea of what that operation looks like, because it will just lie.
It speeds up the work process by instantly doing the tedious parts of jobs, but it’s worthless if you can’t verify the accuracy. And I’m worried people don’t care about the accuracy.
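That verification point can be made concrete. A minimal Python sketch, where the `average` function and its bug are my own illustration of the kind of plausible-looking code a model might hand you (not output from any real model): you treat the generated code as untrusted and run your own checks before relying on it.

```python
# Hypothetical function as a model might produce it: looks plausible,
# but silently fails on the empty-list edge case it never mentioned.
def average(values):
    return sum(values) / len(values)  # ZeroDivisionError when values == []

# Treat generated code as untrusted: check it yourself before shipping.
def verify():
    assert average([2, 4, 6]) == 4  # the happy path passes
    try:
        average([])  # the edge case does not
    except ZeroDivisionError:
        return "caught edge-case bug"
    return "passed"

print(verify())  # prints "caught edge-case bug"
```

The point isn’t this particular bug; it’s that the checks only exist because someone who already understood the problem wrote them.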
I’m tired of people’s shit getting stolen, and I’m tired of all the AI bullshit being thrown in my face.
LLMs have been here for a while and have helped a lot of people. The thing is, “AI” now means corporations stealing content from people instead of making their own, or building an LLM on training data that isn’t taken from the general public.
LLMs are fucking amazing; they help with cancer research, IIRC, among other things. I believe autocorrect is a form of LLM. But now capitalism wants more and more, building it with stolen content, which is the wrong direction to be going.
It’s just like any big technological breakthrough. Some people will lose their jobs, jobs that don’t currently exist will be created, and while it’ll create acute problems for some people, the average quality of life will go up. Some people will use it for good things, some people will use it for bad things.
I’m a tech guy, I like it a lot. Before COVID, I used to teach software dev, including neural networks, so seeing this stuff gradually reach the point it has now has been incredible.
That said, at the moment, it’s being put into all kinds of use-cases that don’t need it. I think that’s more harmful than not. There’s no need for Copilot in Notepad.
We have numerous AI tools where I work, but it hasn’t cost anyone their job - they just make life easier for the people who use them. I think too many companies see it as a way to reduce overheads instead of increasing output capability, and all this does is create a negative sentiment towards AI.
- I find it useful for work (I am a software developer/tester).
- I think it’s about as good as it’s ever going to get.
- I believe it is not ever going to be profitable and the benefits are not worth reopening nuclear and coal power plants.
- If US courts rule that training AI with copyrighted materials is fair use, then I will probably stop paying for content and start pirating it again.
I’m fundamentally anti-private property and copyright. So I’m definitely pro AI art. Once it’s on the internet - it’s there forever. It was always being scraped, you just get to see the results now.
That said, I don’t like AI being shoved into everything. The fun picture-recombination machine shouldn’t be deciding who lives and dies. Content-sorting and personalized-feed algorithms are all bad too; this stuff shouldn’t take agency away from people.
The LLMs have impressive mathematics behind them, but can they cure cancer or create world peace? No. Can they confuse people by pretending to be human? Yes. Put all that compute to work solving problems instead of writing emails, basic code, or customer-service replies, and I’ll care. I hear that AlphaFold is useful. I want to hear more about the useful machine learning.
It was fun for a time when their API access was free, so some game developers put LLMs into their games. I liked being able to communicate with my ship’s computer, but I quickly saw how flawed it was.
“Computer, can you tell me what system we’re in?”
“Sure, we’re in the Goober system.”
“But my map says we’re in Tweedledum.”
“Well it appears that your map is wrong.” Lol
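That wrong-system answer is a classic grounding failure: the model has no access to the actual game state, so it confabulates one. The usual workaround is to inject live state into the prompt before every query. A minimal Python sketch, where the state fields and prompt shape are my own illustration, not any particular game’s API:

```python
def build_grounded_prompt(question, game_state):
    """Prepend authoritative game state so the model answers from facts,
    not from whatever system name sounds plausible."""
    facts = "\n".join(f"- {k}: {v}" for k, v in game_state.items())
    return (
        "You are a ship computer. Answer ONLY from these facts:\n"
        f"{facts}\n\nPlayer: {question}\nComputer:"
    )

# Hypothetical game state, refreshed before each question.
state = {"current_system": "Tweedledum", "fuel": "64%"}
prompt = build_grounded_prompt("What system are we in?", state)
print("Tweedledum" in prompt)  # prints True: the real name is in context
```

This doesn’t make the model honest, but at least the correct answer sits in its context window instead of being left to its imagination.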
I’m much more concerned about the future when “AGI” is actually useful and implemented into society. “We” (the ownership class) cannot accept anything other than the standard form of ownership. Those that created the AGI own the AGI and “rent” it to those that will “employ” the AGI. Pair that with the more capable robotics currently being developed and there will be very little need for people to work most jobs. Because of the concept of ownership we will not let go of, if you can’t afford to live then you just die. There will be no “redistribution” to help those who cannot find work. We will start hearing more and more about “we don’t need so many people, X billion is too many. There isn’t enough work for them to support themselves.” Not a fun future…