“How are you leveraging AI in your technical writing?”
I shared a LinkedIn post with that title last week. I’m going to share what I wrote there, but I’m going to ask a second question after that, which I may or may not share with the LinkedIn crowd later. If you’re thinking “but I thought you were an AI skeptic!”, hold on. Okay? Okay, here’s the post:
First and foremost, what we’re calling “AI” is not the AI of sci-fi movies. Large language models built on generative pretrained transformers are statistical prediction engines: what they do is generate a statistically likely continuation of their input text. That’s it. Full stop. Now, it turns out that “that’s it” undersells how amazing that output can be. The statistically likely continuation of an input text that’s a search query is a plausible answer to that query; feed a GitHub repository into an LLM with an instruction to document the code, and the statistically likely continuation is plausible documentation.
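To make “statistically likely continuation” concrete, here’s a toy sketch (mine, not anything resembling a real transformer): a bigram model that continues a prompt by always emitting the word that most often followed the previous one in its training text.

```python
from collections import Counter, defaultdict

# A toy "language model": tally which word follows which in a tiny corpus,
# then continue a prompt by always picking the statistically likeliest next
# word. Real LLMs predict tokens with a transformer instead of a lookup
# table, but the core move -- generate the likely continuation -- is the same.
corpus = (
    "the docs describe the api and the api returns json "
    "the api accepts json and the docs describe errors"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt: str, n_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no statistics for this word, so the toy model stalls
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the api"))  # plausible-looking, and purely statistical
```

Note what’s absent from that loop: there’s no understanding anywhere in it, just counting.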
But plausible is not the same as correct. The bigger and more complicated the task you present to an LLM, the likelier it is to go off the rails—maybe just a little, maybe a lot. If you don’t know what the output should look like, you may not catch this. That’s the obvious peril of “vibe coding”; a similar issue arises with “vibe documenting.” The more constrained the task you give it, the better job it’s going to do. Claude will probably nail “write a Python script to transform the date formats in this text file” on the first try; “write a clone of Photoshop that runs as a web app,” not so much. Likewise, “write reference descriptions for this API, including input parameters, sample output, and error conditions” will get you a better first draft than “write a complete casual developer’s guide for this database.”
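For what it’s worth, here’s roughly the script I’d expect back from that first prompt (the filename and the MM/DD/YYYY-to-ISO conversion are my assumptions; the point is that the task is small enough to verify at a glance):

```python
import re
from datetime import datetime

# Rewrite MM/DD/YYYY dates as ISO 8601 (YYYY-MM-DD) throughout a text file.
# The filename and both date formats are placeholders for whatever you have.
DATE = re.compile(r"\b(\d{2})/(\d{2})/(\d{4})\b")

def to_iso(match: re.Match) -> str:
    return datetime.strptime(match.group(0), "%m/%d/%Y").strftime("%Y-%m-%d")

with open("notes.txt", encoding="utf-8") as f:
    text = f.read()

with open("notes.txt", "w", encoding="utf-8") as f:
    f.write(DATE.sub(to_iso, text))
```

I can check that in thirty seconds, which is exactly what makes it a good LLM task.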
Second, the statistically most likely continuation of anything is definitionally the median. An LLM will be much faster at generation than a human, but at its absolute tippy-top best, it’ll be better than about fifty percent of humans at the same task…and worse than about fifty percent of them. The more difficult and expansive the problem you’ve given it is, the less likely it is to be at that absolute tippy-top best. It’ll make mistakes a human wouldn’t, because it will have made what appear to be incorrect decisions or assumptions: calling a library that doesn’t actually exist, or including a guide on Git basics in an overview of how to write game plugins. The thing is, it’s not actually making any decisions or assumptions. It’s just determined that statistically speaking, a lot of code in its corpus calls similar libraries in similar functions, and a lot of documentation in its corpus talks about version control after it talks about NPM modules (or whatever).
So, if you’re going to use AI in documentation:
- Target constrained, clearly defined use cases, and give the LLM enough context that it will stick to the task at hand.
- Expect the output to be first-draft quality, not production-ready.
- Humans must review the output, both to verify the engineering aspects (do the samples work, does the API really have those calls, etc.) and to make sure it stays on topic.
- Do not expect the LLM to have your corporate voice or stick to your corporate style guide, even if you’ve prompted it to do so. It won’t.
- The human reviewer(s) should be using a Markdown linter. (This is easy enough to automate; see the sketch after this list. I think a human pass is a good idea regardless.)
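Here’s the sort of automation I mean for that last bullet: a minimal sketch that assumes markdownlint-cli is available through npx and that the generated docs live under docs/. Swap in whatever linter and paths you actually use.

```python
import subprocess
import sys

# Run markdownlint over the generated docs and fail loudly on any issue.
# The npx invocation and the docs/ path are assumptions for illustration.
result = subprocess.run(
    ["npx", "markdownlint", "docs/**/*.md"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    print(result.stdout or result.stderr, file=sys.stderr)
    sys.exit("Markdown lint failed; clean up the formatting before review.")
print("Markdown lint passed.")
```

Wire that into CI and the robot’s formatting tics get caught before a human ever has to look at them.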
Okay. LinkedIn enough for you? Here’s the second question:
“Should you be leveraging AI in your technical writing?”
A tech writing group I’ve been loosely involved with the past few years, Write the Docs, has been consumed by this question. A writer in that group who lost his job but landed a new one less than a month later rhapsodized about LLMing all the things to get it (rewriting his LinkedIn profile and his resume, editing cover letters, creating custom targeted resumes, etc.). People in the WTD Slack talk about rebranding themselves as “context engineers” or “knowledge engineers.” Technical writers, they say, have been writing clear instructional procedures for technical concepts for decades. That’s literally the job description. Ergo, we’re exactly who companies should be hiring to direct AI systems. We should be kings of the new world!
And, man, my urge to leave tech and open a tiki bar or a coffee shop, or just take up woodworking, has never been stronger.
I’ve talked about my various reasons to be skeptical/wary of AI before, so I won’t rehash them. LLMs have become good code monkeys, and generated code is less likely to be algorithmic plagiarism than generated images are, or, more broadly, than any attempt at creative work. (Also, “less likely” is not the same as “never”.) Training models on copyrighted material is almost certainly a copyright violation if you don’t have a legal right to access it: it’s behind a paywall, it’s a non-freely-licensed ebook, or maybe it’s just a blog article that’s only licensed for non-commercial use (like the Creative Commons “Attribution-NonCommercial-ShareAlike” license I use). Literally all the current popular AI systems use corpora that violate copyright in that regard. So, you know, there’s that.
But even if we set that aside (which we shouldn’t), even if we grant the relatively rose-colored view of the first part of this post, there are two big questions.
First, for companies: do you want your documentation to be middle-of-the-road at best? Because that’s what you’re going to get if LLMs do most of the work. Companies that let AI write all of their documentation won’t even hit that mid-level bar. The AI drafts that I worked with were okay, in the way that a first draft written by a freshman intern hopped up on Red Bulls would be okay. Every section needed editing, and most sections needed to be entirely rewritten. Or moved. Or cut completely. The overcaffeinated robot intern doesn’t understand information architecture or ontologies. You may think you are going to get it there with sufficiently clever prompting; you won’t.
Did using an LLM save any time? In theory, it should. Even if the first draft isn’t that good, writing is almost always slower than editing. In practice, though, if I’m doing so much editing it basically is writing, it’s probably a draw. And if I’m not, the documentation isn’t as good as it could be. If the LLM adds any value here, it’s in giving me something to bounce off of, something to look at and say: “That section doesn’t need to be there, it needs to be over here. The voice here is all wrong. This example is redundant, and this other example doesn’t quite work, but I can find one that does.”
Back in Write the Docs, there seems to be a consensus—a vibe, if you will—that users can’t tell the difference between AI-written and human-written documentation. As “documentarians,” the argument goes, we need to accept that and get with the times if we want to keep our jobs. To which I say: bullshit. Users can tell the difference between good documentation and bad documentation, just like they can tell the difference between good UX design and bad UX design, or good typography and bad typography. Maybe they can’t explain the difference: why this interface is good and that one is bad, why that poster looks beautiful and this one looks like ass, why Stripe’s API documentation is mostly terrific and Apple’s is mostly undercooked tripe. But that doesn’t mean they can’t tell.
So this leads me to the other question, for other writers: is this actually what you want?
Like I said, I can see ways to use LLMs and still be a writer. But prompt engineering is not writing. Could I do a job that involves just tweaking prompts to Claude until it emits something with the general appearance of product documentation? Sure, I guess. But I’m not convinced anyone will pay me enough to keep me in the amount of high-quality rum—we are not talking Captain Morgan here—I’ll require to deal with that.
So, should you be leveraging AI in your technical writing? Here’s a LinkedIn-friendly answer: If you wanna play around with it in the margins—using it for proofreading (not rewriting), as a jumping-off point for initial drafts, for summarizing, for research—try it! But don’t take it for granted that it’s going to make you faster or smarter or better. Check its work constantly.
Here’s an honest answer: I am already so damn tired of thinking about this question.
I’m so tired of the fear that I’ll be penalized if I do not sound sufficiently enthusiastic about a tool which is already being used to replace people in my field despite being demonstrably incapable of doing so. (“Now that we have electric screwdrivers, we’ll never need trained carpenters again!”)
I’m so tired of the entire tech industry willfully ignoring the signs that AI is a bubble, that it is being wildly oversold and jammed into places where it is inappropriate or even dangerous.
And frankly, I’m not just tired, I’m angry seeing other technical writers buy into the idea that the only way they can stay in their career is to help people who never took the field seriously in the first place relentlessly devalue it. Maybe I’m a weirdo here, but I’m a writer because I enjoy writing. I’m a technical writer because I like the work of taking complex topics and figuring out how best to present them to a specific audience. I take pride in my craft. I think there’s even some art to it.
I’m not saying it’s impossible to take pride in figuring out just the right words to type to get Gemini to spit out the best possible mid-tier documentation it’s capable of, but—actually, scratch that. I think that’s what I’m saying.
Now, if you’ll excuse me, I’m going to go work on a tiki cocktail menu. You know, just in case.
To support my writing, throw me a tip on Ko-fi.com!
© 2026 Watts Martin · License: CC BY-NC-SA 4.0