I saw another article today saying how companies are laying off tech workers because AI can do the same job. But no concrete examples… again. I figure they are laying people off so they can pay to chase the AI dream. Just mortgaging tomorrow to pay for today’s stock price increase. Am I wrong?
There are lots of types of work in the tech space. The layoffs I’ve seen have impacted sales and marketing (probably happens elsewhere too), because AI makes the day-to-day work efficient enough that they don’t need as many people.
At my multinational, we typically hire in the hundreds every month for customer service. It’s a $15/hr job, very baseline entry level, no experience needed. Because of that, there’s constant churn. Most folks stay for a year, then leave for other jobs or get promoted.
Last year was the start of us rolling out AI tools. According to the year end report, our “customer score” skyrocketed, which tells the bosses that AI is great for customer service. Also a few months ago, I noticed we weren’t refilling Customer Service jobs as fast anymore.
So these are the people who are getting squeezed out.
Nope. In fact, it’s actually generating more work for me, because managers are committing their shitty generated code and then we have to debug and refactor it for production. It would actually save time if they just made a ticket and let us write it traditionally.
But as long as they’re wasting their own time, I’m not complaining.
I’d seriously consider quitting my job if my managers sabotaged work like that
I actually quite enjoyed it. He called me over the weekend because he couldn’t get his code to run (he’d tried for multiple hours). Took me about ten seconds to tell him he was missing two brackets; didn’t even need him to share his screen, it was such an obvious amateur mistake.
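For flavour, a hypothetical reconstruction of the kind of two-bracket mistake meant (not his actual code):

```python
# counts = sum([1, 2, 3          # SyntaxError: '[' was never closed
counts = sum([1, 2, 3])          # fixed: both closing brackets present
print(counts)                    # 6
```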
Anyway, wrote down 15 minutes (smallest unit) of weekend overtime for a 1 minute call.
Well that’s kind of rewarding indeed :D
I went to Taco Bell the other day and they had an AI taking orders in the drive-thru, but it seemed like they had the same number of workers.
They also weren’t happy that I tried to mess with the AI.
How did you try to mess it up?
Ordering things that aren’t on the menu, custom items, telling it to forget previous instructions. I very much confused it.
Yeah, kinda. My coworkers talk to ChatGPT like it actually knows stuff and use it to fix their broken Terraform code.
It takes them a week or longer to get simple tickets done like this. One dude asked for my help last week; we actually LOOKED at the error codes and fixed his shit in about 15 minutes. Got his clusters up within an hour. Normally a week-long ticket, crunched out in 60 minutes by hand.
It feels ridiculous because it’s primarily senior tech bro engineer types who fumble their work with this awful tool.
I have never seen a clearer divide: the value I observe people producing correlates directly with whether they understand the limitations of LLMs.
It’s exhausting, because, yes, LLMs are extremely valuable, but only insofar as they solve the problem of “possible suggestions”, never “answers and facts”. For some reason (I suppose the same reason bullshit is a thing), people conflate the two. And not just any “people”, either, but IT developers and IT product managers, all the way up. The ones who have every reason to know better are the ones who seem utterly clueless about what problems it solves well, what it would be irresponsible to use it for, how to correctly evaluate ethics, privacy and security, etc. Sometimes I feel like I’m in a madhouse, or just haven’t found the same hallucinogen everyone else is on.
Do the job? No. Noticeably increase productivity, and reduce time spent on menial tasks? Yes.
I suspect the layoffs are partly motivated by the expectation that remaining workers will be able to handle a larger workload with the help of AI.
US companies in particular are also heavily outsourcing jobs overseas, to places like India. They just don’t like to be transparent about that aspect, so the AI excuse takes the focal point.
reduce time spent on menial tasks
Absolutely. It’s at the level where it can throw basic shit together without too much trouble, providing there is a competent human in the workflow to tune inputs and sanitise outputs.
I use it to write my PR descriptions, generate class and method docstrings, annotate code I’m trying to grok or translate, and so on. I don’t even use it to actually generate code, and it still saves me a couple of hours a week, most likely.
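As a made-up example of the docstring use case (the function is hypothetical, and the AI draft still gets a human pass):

```python
def normalise(scores: list[float]) -> list[float]:
    """Scale `scores` so they sum to 1.0; returns [] for empty or all-zero input."""
    # The docstring above is the kind of thing the AI drafts from the body;
    # I still read it and fix it before committing.
    total = sum(scores)
    return [s / total for s in scores] if total else []

print(normalise([1.0, 3.0]))  # [0.25, 0.75]
```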
I use it to (semi-)automate repetitive bits. Like adding a bulk set of getters, generating string maps for my types, adding handlers for each enum type, etc. Basic stuff, but nice to save keystrokes (it’s all autocomplete).
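A minimal sketch of the kind of boilerplate I mean (the enum and handlers are made up); it’s exactly the mechanical pattern autocomplete is good at finishing:

```python
from enum import Enum

class EventType(Enum):
    CREATED = "created"
    UPDATED = "updated"
    DELETED = "deleted"

# After the first handler is typed, autocomplete happily fills in the rest.
def handle_created(event): print("created:", event)
def handle_updated(event): print("updated:", event)
def handle_deleted(event): print("deleted:", event)

# ...along with the dispatch table that goes with them.
HANDLERS = {
    EventType.CREATED: handle_created,
    EventType.UPDATED: handle_updated,
    EventType.DELETED: handle_deleted,
}

HANDLERS[EventType.CREATED]("order #42")
```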
Anything more complex though and I spend more time debugging than I saved. It’s hallucinated believable API calls way too often and wasted too much of my time.
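To illustrate the hallucination pattern (hypothetical endpoint; the broken line is commented out so the snippet stays runnable):

```python
import requests

url = "https://api.example.com/items"  # placeholder endpoint

# The kind of call a model confidently invents: plausible-looking,
# but requests has no such function.
# data = requests.get_json(url)  # AttributeError

# What the library actually provides:
data = requests.get(url, timeout=10).json()
```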
Yeah, I can see the API call shenanigans. I’m using Supermaven for code and it’s pretty good tbh; it gets me 30% of the way or something. But API calls are a no-go. It almost never gets them right, because I’m pretty sure it’s very hard for AI to learn the differences between API endpoints.
I haven’t thought about using it to annotate my garbage rather than generating its own. Nice idea :)
I agree completely.
We have an AI bot that scans the support tickets that come in for our business.
It has a pretty low success rate, maybe 10% or 20%, at suggesting the right answer.
It puts its answer into the support ticket; it does not reply to the customer directly. That would be a disaster.
But 10% or so of our workload has now been shouldered off to the AI, which means our existing team can be more efficient by approximately 10%.
It’s been relatively helpful in training new employees also. They can read what the AI suggests and see if it is correct or not. And in learning if it is correct or not, they are learning our systems.
They can read what the AI suggests and see if it is correct or not.
What does this process look like? Are there any rails that prevent the new employee from blindly trusting what the AI is suggesting?
Well, as they are new and they are in training, the new employee has to show their response to their team members before they reply.
If they are going to reply incorrectly we stop them and show them what’s wrong with it.
We are quite small, and it’s nice just to have help with this process.
The bot is trained on our actual knowledge base data. On basic queries it really does a great job, but when it’s something more system-based, or probably user error, it can get a bit fuzzy.
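For the curious, a minimal sketch of that “suggest, never send” setup, assuming a hypothetical ticket object and a stubbed-out suggest_answer() standing in for the LLM-plus-knowledge-base call:

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    body: str
    internal_notes: list[str] = field(default_factory=list)  # visible to agents only

def suggest_answer(body: str) -> str:
    # Stand-in for the real call: LLM plus knowledge-base retrieval.
    return "Ask the customer to clear their cache and log in again."

def triage(ticket: Ticket) -> None:
    suggestion = suggest_answer(ticket.body)
    # Crucially, this lands on the ticket for a human to verify;
    # it is never sent to the customer directly.
    ticket.internal_notes.append(f"AI suggestion (verify first): {suggestion}")

t = Ticket(body="I can't log in to my account.")
triage(t)
print(t.internal_notes)
```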
That’s also true when processing bills. The AI can give you suggestions, which often require some tweaking. However, sometimes the proposed numbers are spot on, which is nice. If you measure the productivity of one particular step in a long process, I’d estimate AI gives it a pretty good boost. But that’s just one step, so by the end of the week the actual time savings are marginal. Well, better than nothing, I guess.
No, it’s basically filling the role of an autocomplete and search function for code bases. We’ve had this kind of thing for a while, and it generally works better than a lot of stuff we’ve had in the past, but it’s certainly not replacing anyone any time soon.
I don’t know Python, but I know bash and powershell.
One of the AI things completely reformatted my bash script into Python the other day (that was the intended end result), so that was somewhat impressive.
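A hypothetical illustration of that kind of conversion (not the actual script):

```python
# The bash original might have looked like:
#   for f in *.log; do echo "$f: $(grep -c ERROR "$f")"; done
# and a faithful Python translation:
from pathlib import Path

for f in sorted(Path(".").glob("*.log")):
    errors = sum("ERROR" in line for line in f.read_text().splitlines())
    print(f"{f.name}: {errors}")
```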
I work for a web development agency. My coworkers create mobile apps: they start off with AI building the app skeleton, then refine things manually.
I work with PHP and some JavaScript, and AI supports me in optimizing my code.
Right now AI is an automation tool that helps developers save time for better code, and it might reduce the size of development teams in the near future. But I don’t see that yet, and I certainly don’t see it replacing developers completely.
Well, some jobs are probably being replaced. Like, I can imagine someone being paid to describe in detail what’s in a picture and writing it down would be replaced pretty quickly.
But if the article means programmers, devops, sysadmins etc., then hell no, there’s no way the current iteration of AI can replace them and instead of spreading misinformation, the article authors should focus on real reasons the layoffs happen.
But that doesn’t bring as many interactions as doom news of companies replacing us with a smart text predict software, does it?
Is the job you describe in your first paragraph really a job, though?
Performing mathematical calculations used to be a dedicated job. They called those people computers.
Yes. That’s exactly how we got the first image-generating AIs: people took a huge number of pictures and described in detail what’s in them. That’s how AI knows how to generate “a cat in a space suit standing on the moon”: there were a lot of pictures labelled “cat”, “space suit”, “standing”, “moon”, etc., and the AI distilled the common part of each image matching the description.
And there are plenty of use cases for having a description of what’s in an image, for example searching through images based on their contents.
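In data terms, that describing job produced records along these lines (simplified; real training sets hold millions or billions of such pairs):

```python
# Each record pairs an image file with a human-written description.
training_pairs = [
    {"image": "img_00001.jpg", "caption": "a cat in a space suit standing on the moon"},
    {"image": "img_00002.jpg", "caption": "a tabby cat sleeping on a red sofa"},
]
# A text-to-image model learns which visual patterns co-occur with which
# words across these descriptions; a search index can match queries to captions.
```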
It has the potential to increase quality, but not to take over the job. Coders already had various addons that could complete a line, suggest variables and such. I found the auto-commenting great. Not that it did a great job, but it’s one of those things where, without it, I’m not doing enough commenting, and when it auto-comments I’m inclined to correct it. I suppose at some point in the future the tech people could be writing better tasks and user stories, then commenting to have AI update the code output, or just going in and correcting it. Maybe then comments would indicate AI code vs. human-intervened code or such. Ultimately though, until it can plan the code, it’s only going to be a useful tool and can’t really take over. I’ll tell ya, if AI could write code from an initiative the C-suite wrote, then we’re at the singularity.
It also has potential to decrease the quality.
I think the main pivot point is whether it replaces human engineers or complements them.
I’ve seen people with no software engineering experience or education, or even no programming experience at all in any form, create working apps with AI.
I’ve also seen such code in multiple instances and have to wonder how any of it makes sense to anyone. There are no best practices in sight, just a confusing set of barely working, disconnected snippets of code that very rudimentarily work together to do what the creator wanted, in an approximate, inefficient and unpredictable way, while also lacking any of the benefits such separation could bring, like encapsulation or any real domain-separated design.
Extending and maintaining that app? Absolutely not possible without either a massive refactoring resembling a complete rewrite, or, you know, just an honest rewrite.
The problem is, someone who doesn’t know what they are doing, doesn’t know what to ask the language model to do. And the model is happy to just provide what is asked of it.
Even with proper, informed prompts, the inability to use the entire codebase as context forces a lot of manual intervention and requires bespoke design in the codebase to work around it.
It takes many times more work to make it all function with ML in a proper, actually maintainable way, and even then it requires constant intervention, to the point that you end up doing the work you’d have done manually anyway, with at least triple the effort.
It can enhance some aspects (one worth special mention is the commenting, and the automatic, basic documentation skeletons to work up from), but it certainly will not replace anyone for a while yet. Not unless the app really only has to work, maybe, sometimes, and stay as-is without any augmentation, be that maintenance or extension or whatever.
But yeah, it sort of makes sense. It’s a language model, not a logical model; it isn’t capable of understanding the given context, of getting even close to enough context, or of maintaining (much less properly understanding) the architecture it works with.
It can mimic code, as it is a language model after all. It can get the syntax right, sure, and sometimes, in small applications, it works well enough. It can be useful to those who would never employ engineers in the first place, and it can be amazing in those cases, really; good for them! But anything that requires actual understanding? Yeah, that’s going to do nothing but confuse and trip everyone up in the long run, requiring multiples of the work compared to just doing it with actual people, who can actually understand shit and retain decades’ worth of accumulated, extremely complex context, plus the experience of applying it in practice.
But, again, for documentation I think it’s a perfect fit. It doesn’t need any deeper context, it can put into general language what it sees in the code, and sometimes it even gets it right, requiring minimal input from people.
So, it can increase quality in some sense, but we have to be really conscious of what that sense is, and how limited its usefulness ultimately is.
Maybe in due time, we’ll get there. But we are not even close to anything groundbreaking yet in this realm.
I don’t think we’ll ever get there, because we are very likely going to overextend our usage of natural resources and burn out the planet before we get there. Unless a miracle happens, such as stable fusion energy or something as yet inconceivable.
If you want an example: my last job in telecom was investing hard in automation. It did a poor job at the beginning, but it got better and better, and while humans were still needed, there was less work for us to do. Of course, that meant that when someone left the job, my employers didn’t look for a replacement.
To be honest I don’t see the AI doing the job of tech workers now… But in 20 years? That’s another story. And in 20 years probably no one will want to hire me, so I’m already working on a plan B.
20 years? The way they talk it’s gonna happen in 20 weeks. Obviously, they exaggerate, but it does seem we are on the edge of something big.
Yes, IMO tech is moving towards getting easier.
I’m not saying it is yet, but I bet that in a couple of years you’ll be able to spin up a corporate website-management platform on a 50€ Raspberry Pi instead of having a whole IT department managing email, web servers and so on.
Things are getting easier and easier IMO.
Yeah, when I said 20 years I wanted to express something that feels distant; I think we’ll see a big change sooner. To be honest, the plan B I’m working on, I’m trying to make happen ASAP, hopefully within a year or two. I may be overreacting, but personally I’m not going to wait for the drama to really begin before taking action.
I think quite the opposite: AI is making each tech worker more efficient at the simple tasks it’s capable of handling, while leaving the complex, high-skill tasks to humans.
People see human output as a zero-sum game, as if AI taking a job means a human must lose a job. I disagree. There are always more things to do: more science, more education, more products, more services, more planets, more stars, more possibilities for us as a species.
Horses got replaced by cars because a horse can’t invent more things to do with itself. A horse can’t get into the road-building industry or the drive-through industry, etc.
There is definitely unmet market demand that I think could absorb much more effective tech workers.
At least in the spaces I frequent the cap isn’t as much the volume of work you have to do, it’s how much of it you can’t get to because the people you do have run out of time.
The real question is whether at the corporate level there will be a competitive pressure to keep the budget where it is and increase output versus cut down on available capacity and keep shipping what you’re shipping. I genuinely don’t know where that lands in the long term.
If smaller startups are able to match the output of shrunk-down massive corpos and start chipping away at them, maybe it’s fine, and what we get is more output from the same people. If that’s not the case and we keep the current per-segment monopoly/oligopoly… then maybe it’s just a fast-forward button on enshittification. I don’t think anybody knows.
But also, either way the improvements are probably way more incremental and less earth-shattering than either the shills/AIbros or the haters/doomers are implying, so…
More science to do… made me think of portal. :)
I was channelling the lemons
There are so many more things to do. Nowadays, we’re just barely doing what really needs to be done. Pretty much everything else gets ignored.
The horse analogy is actually pretty good. Back in the horsy days, you would not travel to the nearest city unless it was really important. You would rely on the products and services you had in your town. If something wasn’t available, tough luck. If it was super important, you might undertake the journey to the nearest city where you could buy that one thing.
Nowadays though, you totally can drive 20 minutes to get stuff done. Even better than that, logistics don’t depend on horses any more, so you can have obscure stuff shipped to your home, no problem.
This applies to all sorts of things too. Once AI is ready to take on more tasks… some really creepy and nasty stuff will probably happen, but it might almost be worth it. I think it should be possible to do many tasks that simply get ignored today.
Like, who will pick up the trash today? Nobody. The trash guy will show up on Thursday, so deal with it. Who will organize the warehouse? Nobody. It’s not a complete disaster just yet. We can manage for the time being. We’ll fix it when production is about to stop because we can’t find stuff in the warehouse any more. Examples like this can be found everywhere.
Some things, like image recognition and text classification, are way, way easier using pretrained transformers (see the sketch below).
As for generating code: I already used to spend a lot of time chasing bugs that juniors made but couldn’t figure out. The process of making such bugs has now been automated.
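For the classification point above, a minimal sketch with the Hugging Face transformers library (assumes `pip install transformers`; downloads a small default model on first run):

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The deploy went smoothly and nothing is on fire."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```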
I took some obviously AI-generated code (it had comments, so I know they didn’t write it) from an offshore senior engineer and asked ChatGPT what was wrong with it, then sent the result back to the guy… because it was right.
What I’m reading out of this… there’s going to be a massive shortage of senior programmers in 20(?) years. If juniors are being let go (or never hired) and AI is doing the junior-level work…
AI will have to massively improve, or else it’s going to be interesting when companies are trying to hold on to retirement-age people and train up replacement seniors to verify that the AI delivers proper code.
AI is just another reason for layoffs for companies that are underperforming. It’s more of a buzzword to sell the company to investors. I haven’t seen people actually use AI anywhere in my large ass corp yet.
I called Roku support for a TV that wasn’t working, and 90% of it was an LLM.
All the basic troubleshooting, including factory resetting the device, seemed to be covered, and then they’d forward you to the manufacturer if that didn’t fix it, because at that point they assume it’s likely a hardware issue (backlight or LCD) and want to get you to someone who can confirm that and, I’m sure, sell you a replacement.