Maybe the waiting is actually the easiest part
I’ve written before about how there are two opposing views of large language models and generative AI, with little to no space between them. In one view, they’re already an essential tool, and anyone who isn’t wholeheartedly embracing their use in day-to-day work might as well be insisting on keeping their job as a telegraph operator. In the other, they’re a maddening case of the emperor’s new clothes—costs are only going up, the frequency of hallucinations is increasing rather than decreasing, and the sooner we admit that OpenAI is the next Enron, the sooner we can all get on with real work.
Opposing viewpoints about a new technology are hardly unheard of, and it’s more difficult than we like to believe not only to spot winners and losers, but even to make generalizations. “Most new technologies fail” and “world-changing technologies are hard to spot” aren’t mutually exclusive. Are they both true? Kinda? Maybe? Just because boosters say a technology is world-changing doesn’t mean that it is, but that doesn’t make the naysayers automatically correct.
What’s weird about generative AI, though, is the seeming rise of a third position—one that accepts most of the criticisms of AI while nonetheless taking it as a given that we should be incorporating it into our workflows wherever possible. Call it, perhaps, the “AI triumphalist” position: it doesn’t matter whether the ethical concerns, or even the practical business considerations, are ever addressed—it’s the future, like it or not, baby. A surprising number of people whose opinions I (usually) enjoy reading or listening to seem to have adopted this: they acknowledge all the problems, then go on to use ChatGPT for coding, incorporate it into their shortcuts, and so on. As far as I know, none of them have used the image generation tools, but it’s quite possible that’s only because these folks are mostly in the Apple-centric space, and Apple’s “Image Playground” is a dumpster fire even by generative AI standards.
I can’t help but wonder if the proof by assertion fallacy is in part at work here. When we keep hearing something repeated over and over, eventually we start to suspect we’re missing something vital. Every tech company is racing to AI all the things, from our code editors to our image editors, our search engines, even our keyboards and mice. Surely, surely, there must be something to all this. When we try it ourselves, it’s promising, isn’t it? The responses to our natural language search queries are just what we want, confidently presented, and correct like nine out of ten times. The message summaries only occasionally screw up in laughable and/or horrific ways. The header images we’re creating for our blog articles might not be what was in our heads, but they’re interesting and colorful, and you have to look closely to see the mistakes, and hey, we wouldn’t have paid a real artist to do this in the first place. That medical diagnosis definitely sounds authoritative, and look, what are the chances those wild mushrooms are that poisonous, really.
So when I listen to pundits and analysts talk about what Apple is (and isn’t) doing in the AI space, it’s often just taken as a given that Apple has to do this. They have to shove AI desperately into all the things, because they’re way behind the curve. The delays in bringing the magical Star Trek-level Siri they pretended they were just about to ship might be existential. The quality isn’t the issue—this is the future, goddammit, and they have to be there yesterday. The AI triumphalist position is the dominant one.
And yet, there’s still no indication that hallucinations are a solvable problem. There’s no obvious path to profitability for OpenAI (even their paid plans cost them more money to run than they make), and there’s virtually no “AI industry” without OpenAI. What if the triumphalists are wrong?
I’m not (necessarily) suggesting this is a binary, that there will be no value in generative AI, let alone value in other machine learning that, for right now, is unfortunately lumped in with it. Even in the anti-triumphalist case, there will be use cases for large language models. But in this “worst” case, people realize that more often than not, the time they have to spend checking their amazing robot assistant’s work reduces or even eliminates the subjective benefits. And companies quietly start backing away. Google rolls out an “improved” summary for results that applies machine learning in different, non-bullshitting ways. The Copilot brand sticks around, but it gets quietly de-emphasized. Companies whose business model is “repackage somebody else’s LLM” disappear, unless they’ve found a genuine niche. Companies whose business model is monetizing LLMs, like OpenAI and Anthropic, get bought or, in some cases, just fold. (Since LLMs as they exist today are largely fungible, building a moat around such a business is incredibly difficult.)
In this scenario, Apple’s mistake won’t prove to be ignoring AI for as long as they did; it will prove to be not continuing to ignore it. Adding a few carefully focused features here and there, like ML-assisted image cleanup and better text prediction? Sure. But there are just so many other, more pressing issues they should be prioritizing—from declining software reliability and concerningly poor UX decisions of late to what on God’s green earth they’re doing with the Vision Pro.
So could Apple’s foot-dragging end up being accidentally genius? Maybe. They’ve always pushed “machine learning” as their brand, and if it’s in their best interest to stop talking about LLMs, they’ll do it in a heartbeat. While Apple rarely kills popular or even cult-hit software and services with the wild abandon that Google does, stuff that never took off gets quietly murdered—Image Playground could easily go the way of Ping and Music Memos.
I suspect WWDC is going to be a big tell this year. The rumors that have people buzzing with excitement, dread, or skepticism aren’t rumors about new “Apple Intelligence” features—not even rumors like “they’ll finally ship what they announced at the last WWDC.” I don’t subscribe to the idea that the huge new UI changes rumored across the line are an evil plot to distract us from Apple Intelligence’s failures, but it’s surely not lost on anyone in Cupertino that a year spent talking about the bold decision to change the UI font to Zapfino or make application icons non-Euclidean or whatever is a year not spent talking about Siri. And that may well mean that if an LLM-backed Siri doesn’t ship until mid-2026, it’ll be entering a market that no longer breathlessly expects generative AI to cure cancer, replace all software developers, and animate Oscar-winning movies over a lunch break. That might just be a better market for all concerned—including Apple.
To support my writing, throw me a tip on Ko-fi.com!