cross-posted from: https://programming.dev/post/36394646
In the last two years I’ve written no less than 500,000 words, with many of them dedicated to debunking myths, both new and old, about the state of technology and the tech industry itself. While I feel no resentment — I really enjoy writing, and feel privileged to be able to write about this and make money doing so — I do feel that there is a massive double standard between those perceived as “skeptics” and “optimists.”
To be skeptical of AI is to commit yourself to near-constant demands to prove yourself, and endless nags of “but what about?” with each one — no matter how small — presented as a fact that defeats any points you may have. Conversely, being an “optimist” allows you to take things like AI 2027 — which I will fucking get to — seriously to the point that you can write an entire feature about fan fiction in the New York Times and nobody will bat an eyelid.
In any case, things are beginning to fall apart. Two of the actual reporters at the New York Times (rather than a “columnist”) reported out last week that Meta is yet again “restructuring” its AI department for the fourth time, and that it’s considering “downsizing the A.I. division overall,” which sure doesn’t seem like something you’d do if you thought AI was the future.
Meanwhile, the markets are also thoroughly spooked by an MIT study, covered by Fortune, that found that 95% of generative AI pilots at companies are failing. Though MIT NANDA has now replaced the link to the study with a Google Form to request access, the kind of move that screams “PR firm wants to try and set up interviews” (not for me, thanks!), you can find the full PDF here.
In any case, the report is actually grimmer than Fortune made it sound, saying that “95% of organizations are getting zero return [on generative AI].” The report says that “adoption is high, but transformation is low,” adding that “…few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior.”
Yet the most damning part was the “Five Myths About GenAI in the Enterprise,” which is probably the most withering takedown of this movement I’ve ever seen:
- AI Will Replace Most Jobs in the Next Few Years → Research found limited layoffs from GenAI, and only in industries that are already affected significantly by AI. There is no consensus among executives as to hiring levels over the next 3-5 years.
- Generative AI is Transforming Business → Adoption is high, but transformation is rare. Only 5% of enterprises have AI tools integrated in workflows at scale and 7 of 9 sectors show no real structural change.
- Editor’s note: Thank you! I made this exact point in February.
- Enterprises are slow in adopting new tech → Enterprises are extremely eager to adopt AI and 90% have seriously explored buying an AI solution.
- The biggest thing holding back AI is model quality, legal, data, risk → What’s really holding it back is that most AI tools don’t learn and don’t integrate well into workflows.
- Editor’s note: I really do love “the thing that’s holding AI back is that it sucks.”
- The best enterprises are building their own tools → Internal builds fail twice as often.
These are brutal, dispassionate points that directly deal with the most common boosterisms. Generative AI isn’t transforming anything, AI isn’t replacing anyone, enterprises are trying to adopt generative AI but it doesn’t fucking work, and the thing holding back AI is the fact it doesn’t fucking work. This isn’t a case where “the enterprise” is suddenly going to save these companies, because the enterprise already tried, and it isn’t working.
An incorrect read of the study has been that a “learning gap” is what makes these things less useful, when the study actually says that “…the fundamental gap that defines the GenAI divide [is that] users resist tools that don’t adapt, model quality fails without context, and UX suffers when systems can’t remember.” This isn’t something you learn your way out of. The products don’t do what they’re meant to do, and people are realizing it.
Nevertheless, boosters will still find a way to twist this study to mean something else. They’ll claim that AI is still early, that the opportunity is still there, that we “didn’t confirm that the internet or smartphones were productivity boosting,” or that we’re in “the early days” of AI, somehow, three years and hundreds of billions and thousands of articles in.
I’m tired of having the same arguments with these people, and I’m sure you are too. No matter how much blindingly obvious evidence there is to the contrary, they will find ways to ignore it. They continually make smug comments about people “wishing things would be bad” or suggesting you are stupid — and yes, that is their belief! — for not believing generative AI is disruptive.
Today, I’m going to give you the tools to fight back against the AI boosters in your life. I’m going to go into the generalities of the booster movement — the way they argue, the tropes they cling to, and the ways in which they use your own self-doubt against you.
They’re your buddy, your boss, a man in a gingham shirt at Epic Steakhouse who won’t leave you the fuck alone, a Redditor, a writer, a founder or a simple con artist — whoever the booster in your life is, I want you to have the words to fight them with.
(The paste above stops just before the table of contents)
https://www.wheresyoured.at/how-to-argue-with-an-ai-booster/#table-of-contents
There’s also the fact that what we are currently calling AI isn’t, that there are better options that aren’t environmental catastrophes (I’m hopeful about small language models), and that no one seems to want all the “AI” being jammed into every goddamn thing.
No, I don’t want Gemini in my email or messaging, I want to read messages from people myself. No, I don’t want Copilot summaries of my meetings in Teams, half the folks I work with have accents it can’t parse. Get the hell out of my way when I’m trying to interact with actual human beings.
And I say that as someone whose job literally involves working with LLMs every day. Ugh.
I think the main thing that’s happening is analogous to what’s happened with a lot of electronics over the past couple of decades. It seems like every electronic device runs off of a way more powerful computer than is necessary because it’s easier/cheaper to buy a million little computers and do a little programming than it is to have someone design a bespoke circuit, even if the bespoke circuits would be more resource efficient, robust, and repairable. Our dishwashers don’t need wifi, but if you are running them off a single board computer with wifi built in, why wouldn’t you figure out a way to advertise it?
Similarly, you have all sorts of tasks that can be done with way more computational efficiency (and trust and tweakability) if you have the know-how to set something bespoke up, but it’s easier to throw everything at an overpowered black box and call it a day.
The difference is that the manufacturing cost of tiny computers can come down until they’re cheaper than a bespoke circuit, but anything that decreases the cost of computing applies equally to an LLM and a less complex model. I just hope industry/government pushing isn’t enough to overcome what the “free market” should do. After all, car-centric design (suburbia, etc.) is way less efficient than train-centric, but we still went there.
My work would be improved by the dumbest of dumb retrieval-augmented models: a monkey with a thesaurus, ctrl+f, and a pile of my documents. Unfortunately, the best they can offer is a service where I send my personal documents into the ether and a new wetland is drained in my honor (or insert your ecological disaster metaphor of choice).
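For what it’s worth, the “monkey with ctrl+f” version really is a few lines of standard-library Python. This is a hedged sketch, not anyone’s actual product: the function name, the `*.txt` glob, and the crude term-count scoring are all illustrative assumptions, but it runs entirely locally, with no model and no network.

```python
import re
from pathlib import Path

def search_documents(query: str, doc_dir: str) -> list[tuple[int, str]]:
    """Rank local .txt files by how many query terms they contain.

    No model, no API, no ether: tokenize the query, count term hits
    per file, and sort. Ctrl+f with a scoreboard.
    """
    terms = set(re.findall(r"\w+", query.lower()))
    results = []
    for path in Path(doc_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        words = re.findall(r"\w+", text)
        score = sum(words.count(t) for t in terms)
        if score:  # skip files with no hits at all
            results.append((score, path.name))
    return sorted(results, reverse=True)
```

Obviously this has none of the synonym handling the thesaurus-wielding monkey brings, but it also never sends a document anywhere.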
With dishwashers at least it would be cheaper to manufacture them with a different board without wifi.
Cheaper to manufacture, yes, but then they’d lose all the sweet residuals from selling consumer data.
No one checks the privacy policy for a dishwasher. If a washing machine can send over 3 GB of data in a day, I’d bet every other “smart” appliance is doing something similar.
Also worth checking out is his Hater’s Guide to the AI Bubble. I listened to the three part podcast. He’s the only person I have to listen to at less than 1x, but it’s really good stuff.
Oh my god… This is one of the best blogs I have EVER seen. I added it to my RSS feed right after reading that post.
If you’re into podcasts, he has one too — it’s called Better Offline.
Thanks!
Really? I thought it had a lot of problems. Weird editor’s notes in a bunch of places that add nothing. An intro that is too long.
Some of the arguments were just plain wrong. For example, the argument that it’s obvious that the internet is good for ordering books is an argument from incredulity. And on top of that, people did argue exactly what he’s saying they wouldn’t argue. I remember. I was there.
Most of the general advice is good, and I agree with the premise of the article, but it didn’t strike me as one of the best blogs ever.
Personally, I get a lot of value out of LLMs. I’ve used them almost every day for the last three years in various projects and general chatbots, but I wouldn’t call myself an AI booster. I’ve heard the criticisms about gen AI the author addresses, and I believe this is another case of blaming technology instead of the capitalists behind it. Do you hate the fact that a computer can generate text or images, or do you really just hate the assholes that unethically scraped all of our human achievements for their benefit and are willing to destroy everything to enrich themselves on our labor? I bet it’s the latter.
It’s the latter. Counterpoint: the tool never would have existed without the unethical scraping.
He’s pretty explicit in that regard. He even makes an interesting point at the start of the article: most people he knows who actually work with AI and know shit about it are not boosters. It’s an important distinction that Ed doesn’t ignore.
He is against the overhype of “AGI” and skeptical of the hundreds of billions that have been poured into it for sinister reasons. He’s not denying that the tech has uses, but rather confronting the value of those uses with their actual, non-subsidized cost.