Blade Runner director Ridley Scott calls AI a “technical hydrogen bomb” | “we are all completely f**ked”
In an interview with Rolling Stone, Scott, who has directed several movies featuring AI, was asked if the technology worried him. He said he’s always believed the...
I’m sure that a film director is an expert on the technical underpinnings of large language models, which are primarily used to generate blocks of text that have the appearance of being coherent.
Several departments where I work had massive layoffs in favour of implementing customized versions of GPT-4 chatbots (both client-facing services and internal stuff). That’s just the LLM end of AI.
That’s not even considering the generative-image side of AI. I fear for my company’s graphics, web design, and UX/UI teams, who will probably be gone this time next year.
I work freelance but occasionally needed to partner with artists and the like. But I now use various “AI” projects and no longer need to pay people to do the work, as the computer can do it well enough.
I’m not some millionaire, I’m just a guy trying to save money to buy a house one day, so it’s not a large economic impact, but I can’t be the only one.
UX is not about drawing pictures. That work is already automated by UI kits anyway. UX is about thinking through requirements and research.
I know very well what UX is, having studied it as my major at uni. Senior executives do not know what it is, and they have made and are making decisions to “replace” those teams with LLMs and “prompt engineers”. I see it daily at work.
There is a great disconnect: hiring managers and executives see LLMs as a quick win and make cost-cutting moves without doing any analysis.
Suits are idiots. No argument there.
Mm, I’ve already seen marketers present outputs from GPT models as if they were useful customer feedback. My suspicion is this bubble will burst, though, because at some point it will become clear that the models are not as good at what they do as execs have been told they are.
Perhaps, but the egos on “decision makers” are so large that I see them doubling down until the end.
If shareholders’ profits are affected, the decisions will change lol
At the end of the day they’re still TPS reports. I’m afraid the only bubble that’s gonna burst is yours.
We’re a long way out from that fortunately.
Not saying that some jobs won’t be cut/lost, but the companies doing that were likely looking for reasons to downsize.
AI models do not replace competent UI/UX. That’s just not what they’re designed to do. Very different functions.
Even though you are technically correct, you assume the people in charge of making decisions have the same insight and knowledge you do about the current limitations of gen AI.
I absolutely assure you that senior managers think it is fully mature since it gives convincing answers, and they have made permanent and expensive decisions based on this viewpoint. To them, it fully replaces UX/UI and developers. So they have made cuts. We’re currently sourcing some offshore help to fix our customer service chatbot, which keeps giving off-topic advice to users 🤪
Oh, 100 percent right you are. Definitely not saying clueless corporate idiot bosses aren’t going to try and replace their workforce with AI.
But I am saying that it won’t work for them after they do that. They’re going to crash and burn here, and they’ll have lost that talent and expertise within the company, so there’s no replacing it except slowly over time.
From personal experience I think they’ll keep doubling down and when that doesn’t prove successful, lobby governments to make changes or ask for bailouts.
My company (along with a whole onslaught of other similar orgs) successfully lobbied local politicians who convinced the mayor to pass a major bylaw that changed zoning rules and effectively killed remote work in my area.
It’s depressing how right you probably are about how companies are going to cope with this.
Reminds me of that quote: “If Conservatives become convinced that they cannot win democratically, they will not abandon conservatism. They will reject Democracy.”
But, like, apply that to Capitalism and Capitalists rejecting Capitalism in favor of Socialism for them.
I can tell you now that AI won’t come for UX/UI teams, at least not in the near future. Clients are rarely able to really articulate what they need out of software, and until AI is smart enough to suss that out, we’re good. That being said, I’m sure there will be companies that try to go that route, but I doubt it will work, again, in the near term.
I’m not saying that AI will come for UX/UI teams.
It already is. AI is, as you said, not smart enough to truly replace UX/UI teams, but managers, executives, and C-suite individuals don’t understand that. AI has been sold to them as a quick win that lowers costs. To give you an example, 3 members of our CX team were replaced by an annual license to Enterprise GPT-4 and some custom training for business stuff. In the last 2 months, so much has broken down or hasn’t worked well, and clients have complained, so now we are subcontracting a Bangalore firm to try and fix it. Pretty sure we’ve exceeded those 3 people’s salary costs by now.
Oh we’re in agreement here. AI isn’t coming for us, the bosses are.
Jules Verne wasn’t a technical expert either, but here we are somehow. Don’t underestimate a keen and observant imagination.
https://en.wikipedia.org/wiki/Jules_Verne
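Closely related to Verne’s science-fiction reputation is the often-repeated claim that he is a “prophet” of scientific progress, and that many of his novels involve elements of technology that were fantastic for his day but later became commonplace. These claims have a long history, especially in America, but the modern scholarly consensus is that such claims of prophecy are heavily exaggerated. In a 1961 article critical of Twenty Thousand Leagues Under the Seas’ scientific accuracy, Theodore L. Thomas speculated that Verne’s storytelling skill and readers’ faulty memories of a book they read as children caused people to “remember things from it that are not there. The impression that the novel contains valid scientific prediction seems to grow as the years roll by”. As with science fiction, Verne himself flatly denied that he was a futuristic prophet, saying that any connection between scientific developments and his work was “mere coincidence” and attributing his indisputable scientific accuracy to his extensive research: “even before I began writing stories, I always took numerous notes out of every book, newspaper, magazine, or scientific report that I came across.”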
deleted by creator
Mandela effect?
I use Copilot in my work, and watching the ongoing freakout about LLMs has been simultaneously amusing and exhausting.
They’re not even really AI. They’re a particularly beefed-up autocomplete. Very useful, sure. I use it to generate blocks of code in my applications more quickly than I could by hand. I estimate that when you add up the pros and cons (there are several), Copilot improves my speed by about 25%, which is great. But it has no capacity to replace me. No MBA is going to be able to do what I do using Copilot.
As for prose, I’ve yet to read anything written by something like ChatGPT that isn’t dull and flavorless. It’s not creative. It’s not going to replace story writers any time soon. No one’s buying ebooks with ChatGPT listed as the author.
sigh. Can we please stop this shitty argument?
They are. In a very broad sense. They are just not AGI.
I agree with you but this argument is never gonna go away.
It’s never going to go away. AI is like the “god of the gaps” - as more and more tasks can be performed by computers at the same or better level than humans, what exactly constitutes intelligence will shrink until we’re saying, “sure, it can compose a symphony that people prefer to Mozart, and it can write plays that are preferred over Shakespeare, and paint better than van Gogh, but it can’t nail references to the 1991 TV series Dinosaurs, so can we really call it intelligent??”
So much this. Most people under 40 must have grown up with video games. Shouldn’t they have noticed at some point that the enemies and NPCs are AI-controlled? Some games even say that in the settings.
I don’t see the point in the expression “AGI” either. There’s a fundamental difference between the if-else AI of current games and the ANNs behind LLMs. But there is no fundamental change needed to make an ANN-AI that is more general. At what point along that continuum do we talk of AGI? Why should that even be a goal in itself? I want more useful and energy-efficient software tools. I don’t care if it meets any kind of arbitrary definition.
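To make that continuum concrete, here’s a minimal, hypothetical sketch (everything in it is illustrative, not from anyone in the thread): a hand-written if-else NPC policy of the kind games have always shipped, next to a toy bigram next-token predictor. Real LLMs do roughly what the second half does, just with billions of learned parameters instead of a count table.

```python
# Illustrative sketch only: two points on the "AI" continuum.
import random
from collections import Counter, defaultdict


def npc_policy(distance_to_player: float, health: float) -> str:
    """Classic game 'AI': a fixed, hand-written if-else rule."""
    if health < 0.2:
        return "flee"
    if distance_to_player < 5.0:
        return "attack"
    return "patrol"


class BigramModel:
    """Toy next-token predictor: counts word pairs in training text,
    then samples a likely continuation. LLMs are (very roughly) this
    idea scaled up with learned parameters instead of a count table."""

    def __init__(self) -> None:
        self.counts = defaultdict(Counter)

    def train(self, text: str) -> None:
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def next_word(self, prev: str) -> str:
        options = self.counts.get(prev)
        if not options:
            return "<eos>"  # no continuation seen in training
        words, weights = zip(*options.items())
        return random.choices(words, weights=weights)[0]


if __name__ == "__main__":
    print(npc_policy(distance_to_player=3.0, health=0.9))  # -> "attack"
    model = BigramModel()
    model.train("the cat sat on the mat and the cat slept")
    print(model.next_word("the"))  # samples "cat" (2/3) or "mat" (1/3)
```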
Saying this is like saying you’re a particularly beefed-up bacterium. In both cases they operate on the same basic objective (survive and reproduce for you and the bacterium; guess the next word for the LLM and autocomplete), but the former is vastly more complex in the way it achieves those goals.
An 85-year-old film director*
Yes, I thought he was talking about the film industry (“we’re fucked”) and how AI is or would be used in movies. In which case he would be competent to talk about it.
But he’s just confusing science-fiction and reality. Maybe all those ideas he’s got will make good movies, but they’re poor predictions.
You don’t need to be an expert to see a demo and understand what you can do with the tech.
You kinda do, as anyone in tech who has ever had to communicate with customers can attest.