Ready for the revolution

Not so long ago, I considered myself basically a capitalist, and—echoing folks like Ralph Nader and Elizabeth Warren—would have said our problems are with corporate capitalism. Maybe so, but whether or not corporatism is the capitalism that Adam Smith envisioned, it’s the capitalism we have. Inasmuch as the notion of “markets” includes convention dealers’ rooms, great local coffee shops and bookstores, and the guy down the road selling honey made by the bees in his side yard, markets are great—but the capitalists that we have don’t really like markets very much. They’d all be perfectly happy to be the only vendor you could buy from, full stop.

The problem for a tech nerd like me is that the technology industry has become modern capitalism distilled into its most toxic form. Perhaps I’m an idealist, but I don’t think this was always the case; garage startups from idealists who think they can change the world for good may have always been a cliché, but there was a truth there, however aspirational.

Silicon Valley’s outlook tended toward what I called technolibertarianism, the socially liberal but anti-regulation philosophy of Stewart Brand and Wired magazine. Yet, there’s long been a technofascist streak, exemplified by neo-reactionaries like Curtis Yarvin and in even larger part by PayPal Mafia members Peter Thiel, Elon Musk, and David Sacks. There is a great nonfiction book to be written exploring why so much of what’s gone wrong with the tech industry and our country’s politics stems from this one fucking company. In the last decade and change, tech billionaires have come to see fascism as the best way to keep what they have.

This sucks because, at its core, a lot of technology is still pretty damn fun. Mac hardware is great, and while macOS gives me way too much to bitch about these days, I still prefer it to the alternatives I’ve tried. I don’t need a BYOK but I kind of want one. I haven’t been using my iPad Air much since I moved my computing life to the MacBook Pro M5, but I have a place in my world for an OLED iPad mini.

And I’ll be honest: if I could set aside the ethical, legal, economic, and environmental issues with generative AI, it’d be pretty damn cool, too. Large language models give us a quantum leap in natural language processing, proofreading, transcription, translation, and summarization. Yes, I know all the ways in which LLMs are “bad” at all those things, but in comparison to the previous state of the art in machine-powered proofing, transcription, translation, and summarization, it’s just no contest. You can use LLMs in ways that aren’t “generative” in a conventional sense at all. For instance, asking an MCP-powered agent to generate a playlist for you, or using a Shortcut to export a photo from your iPhone photo library and give it a sensible filename by sending the image to an LLM.
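
If you’re curious, here’s roughly what that photo-naming trick looks like outside of Shortcuts: a minimal Python sketch, assuming the openai package and an OpenAI-compatible vision model (the model name and prompt are placeholders of mine, not anything Apple or OpenAI ships):

```python
# Minimal sketch: ask an LLM to suggest a descriptive filename for a photo.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name and prompt are placeholders, not recommendations.
import base64
from openai import OpenAI

client = OpenAI()

def suggest_filename(path: str) -> str:
    # Encode the image so it can be sent inline as a data: URL.
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any vision-capable model works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Suggest a short, filesystem-safe filename (lowercase, "
                         "hyphens, no extension) describing this photo. "
                         "Reply with the filename only."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()

print(suggest_filename("IMG_4312.jpeg"))  # e.g. "golden-retriever-on-beach"
```

Nothing in that sketch generates prose or art; the model is just doing tightly constrained labeling.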

But of course, you can’t set aside the ethical, legal, economic, and environmental issues with generative AI; it’s being used as an excuse to rip up dozens of fields, including my own (technical writing). When it is used to generate “creative” work, it threatens the livelihood of everyone from visual artists to musicians, and its output—even when it’s “good enough”—is, as I’ve repeated ad nauseam, definitionally never better than median. Worse, it’s creating that median output from input that, very often, it has no legal or moral right to. And while the playlist creator and photo namer don’t threaten anyone’s livelihood, they still take advantage of models built on environmentally and legally unsound foundations.

I’ve had an odd journey with generative AI. While I appreciated the problems with it and would never want to use it for creative work, I also appreciated the possibility of that cool stuff, you know? I didn’t want it to write for me, but maybe it could help brainstorm bits and bobs. (It can, albeit like a precocious seventh-grader writing a book report, so its value is questionable.) I don’t like Google’s AI-generated summaries, but Duck Duck Go’s are mostly decent, attempt to give sources, and in general don’t come across like they’re trying to keep you away from other people’s websites. I have little interest in “vibe coding” a complete app, but I’ve generated a few utility scripts, and when I was working on a now-stalled Swift project I tried to use AI to work through a few thorny problems. (Success: mixed.)

But even as generative AI gets better, both in terms of output quality and in practical application, my feeling towards it gets queasier, and now we’re back to—surprise!—capitalism. The problem isn’t the technology in and of itself. It’s the companies pushing it. Every tech product and service screams AAAAAA!!! IIIIII!!! louder, harder, and faster. Microsoft Office is now “Microsoft 365 Copilot”; Visual Studio Code is now “the open source AI code editor.” MinIO, an object storage company, now advertises itself as an “Exascale AI Data Store.” I nearly got a job at MinIO, which turned out to be a dodged bullet: they recently fired all their tech writers because they think they can be replaced by... guess what? Laravel, the most popular PHP web development framework, advertised itself as “the PHP framework for web artisans” for years, until earlier this year when it switched to “the clean stack for artisans and agents.” My favorite crazy story outlining/plotting application, Dramatica Story Expert, has been “reinvented” as a web-based, AI-powered service that costs $20 a month to use at its cheapest level.

Other than the AI companies themselves, who is this all for? Who asked for this? Who benefits from this? The users? If they’re developers and/or productivity nerds, maybe. If you’re using the AI as an assistive tool rather than doing the equivalent of falling asleep in the back seat while your Tesla drives off an overpass, it works, mostly. And I almost wish it didn’t, because it’s led a depressing number of otherwise smart, kind people to swallow the hype wholesale with a hear-no-evil, see-no-evil attitude.

Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

Developers: Wheeeeeeeeee!

James Thomson (@jamesthomson@mastodon.social)

But who else? AI’s being pushed to creative workers, from Dramatica to Sudowrite to whatever fuckery Adobe is up to this week, and obviously those programs have paying customers. But the people in the publishing, film, and game industries who love AI the most aren’t writers, musicians, and artists—they’re the people who want to screw over writers, musicians, and artists. Sudowrite and similar “editors” are writing programs for people who hate writing: the screwing-over crowd is their target market. Dramatica has historically positioned itself as a tool for actual creators, but their marketing increasingly talks up “teams and studios”—as it must, given the prices they have to charge to cover what the AI companies their product is now built on top of charge them.

I’ve been in computers long enough to live through several epochs, but I’d say there have been only three seismic shifts, fundamental changes in the way we relate to our computers: the graphical user interface, the internet, and the smartphone. It feels like it’s the right time for another revolution, and we’re being told by pundits and companies—especially companies—that AI is clearly it, baby.

I don’t think it is.

To repeat it loudly enough for those in the back to hear: I’m not suggesting generative AI is valueless. If the AI industry can solve both the economic and legal/moral issues surrounding it—and those are, to be clear, very fucking big ifs—it has a bright future in verticals where it can do what it’s best at: tightly constrained, non-creative generation. As a former programmer who’s seen what a lot of commercial code actually looks like, I assure you that you and I both, right now, rely on 100% human-written code shittier than what Claude Code farts out. Putting unreviewed LLM-generated code into production is insane, but as long as there are humans who read, understand, and verify the generated code, it’s going to be fine.

But beyond code generation, we fall off the cliff of diminishing returns real fast. Chatbots are good as a jumping-off point for web searches and research, but you can’t rely on them. They’re not good at writing text that requires, or is even just improved by, any kind of verve or voice or original thinking—so that leaves, what, first-level customer support responses? Meeting summaries? Business memos? I’m aware of, and sympathize with, the “this way lies Idiocracy” concerns here, but I suspect they’re going to prove overblown. While tech-centric productivity nerds (I say “tech-centric productivity nerd” with love; I’ve bought more than one guide from MacSparky, including his actual Productivity Field Guide, which is, interestingly, the least tech-centric of all his guides) might go all in on robot assistants, most of us will at best half-heartedly try whatever Copilot does in Microsoft Office, like asking it “what goddamn ‘ribbon’ did you hide the button I’m looking for in?” (It’s great at answering that.)

AI simply doesn’t have the juice to support either its market valuations or the “this is the next big tech revolution” hype. Very few AI “users” who aren’t developers—or tech columnists/podcasters—pay for it. AI companies aren’t just unprofitable, they’re losing money at an earth-shattering rate, and evidence is mounting that the entire industry stays afloat through financial shenanigans on the order of Enron and WorldCom. If OpenAI and Anthropic both disappeared in a flaming cloud of worthless stock options tomorrow, some developers and tech-adjacent productivity nerds would fall into mourning, but everyone else would shrug and move on. Except for visual artists, animators, voice actors, and writers, who would throw a worldwide bacchanalia that made Mardi Gras look like a monastic retreat.

But if AI isn’t the next tech revolution, what is? I’ve seen a lot of hope that we’re collectively finally going to decide to stop giving big tech all the power, that we’re going to lean into owning as much of our digital lives as we can. Indieweb! Decentralization! Federation!

Personally, I’m all for this. But I’ve watched my non-techie communities opt for Bluesky and Threads over Mastodon, recoil from the idea of setting up their own websites—or set them up and stop using them after a couple of months—and choose to bemoan being locked into big-name subscription apps in lieu of investigating alternatives. We all agree it’s terrible to depend on giant companies with predatory business models and/or morally corrupt executives/backers, but darn, it’s just so convenient (they write from their MacBook Pro).

Frankly, I don’t have a good answer for this. I don’t think it’s impossible that the current iteration of AI leads to the next true tech revolution, but we need to have another, less tech-focused and more fundamental revolution first. As much as possible, we need to choose smaller businesses to both buy from and work for; when we choose to buy from or work for bigger companies, we need to take into account what they’re doing for, or to, our environment, our privacy, and our politics. The current system needs to be shaken up, if not outright dismantled, and that’s not going to come from the top. It has to come from us.

Also: fuck off, Laravel, I’m going back to Perl.


