Paywall Bypass Link https://archive.is/0Dxp0
A car that accelerates instead of braking every once in a while is not ready for the road. A faucet that occasionally spits out boiling water instead of cold does not belong in your home. Working properly most of the time simply isn’t good enough for technologies that people are heavily reliant upon. And two and a half years after the launch of ChatGPT, generative AI is becoming such a technology.
Even without actively seeking out a chatbot, billions of people are now pushed to interact with AI when searching the web, checking their email, using social media, and online shopping. Ninety-two percent of Fortune 500 companies use OpenAI products, universities are providing free chatbot access to potentially millions of students, and U.S. national-intelligence agencies are deploying AI programs across their workflows.
Technology has never been perfect, but we’ve trusted it and its purveyors to fix the problems when they’re noticed.
The problem with AI has been that a lot of people trust it unquestioningly, and when it spews out garbage, the attitude has been "Meh, close enough." It'll get better, but only through double- and triple-validating the work, which means even more processing and power usage.
Those people who want to hand full control over to LLMs are in for a rude awakening.
There’s a world coming in which every appliance and automated system you can imagine will have had its onboard OS pretzeled together by vibe coders. Good coding by real human engineers will be considered a luxury reserved for high-end products, while the masses live their lives in a sea of glitchy, unreliable, deeply insecure, highly networked cheap consumer goods, vacuuming up and reselling every byte of data they can get their claws into. This will be heralded by the tech oligarchs and their pet journalists and politicians as a great and revolutionary step forward on the March of Progress.
Can’t wait for the Needle of Reality to smash into this bubble.