My purpose in blogging

That title sounds like I’m going to try and share something insightful with you, but the truth is that I haven’t the faintest idea what my purpose in blogging is anymore.

I feel like I’ve hit a breaking point with Twitter, but I’ve felt that off and on for over a year, so I won’t make any promises to swear it off. Even so, it’s absolutely been a distraction, and sometimes much worse—call it a depression amplifier, perhaps. Part of me wants to talk about politics, but part of me suspects it’ll just make me sad and angry. (And tie me up in knots.)

Obviously I haven’t been feeling the tech blogging call for a while, either. I still have thoughts; I’m still an Apple user. I like the iPad more after iOS 11, and I travel with it more than my laptop now—I’m writing on it at this very moment. I also still think that there are a myriad of little ways that it’s not as good for writing as a MacBook is, and that in the long run, if Apple wants to truly move the iPad from computing appliance to general-purpose computing platform they’ll have to open it up like, well, a general-purpose computing platform.

But so far I haven’t wanted to get into that, either.

I’m trying to force myself to “de-Twitter” for a while; it’s not easy. I thought maybe joining would encourage me to…well, what, exactly? People don’t use it quite the way they do Twitter, which is probably for the best. In some ways it feels more like an adult version of LiveJournal, albeit without all the wonderful granular access controls. It’s possible that if I stop checking my phone quite so obsessively for tweets, though—and my computer and my iPad and and and—I’ll start finding more to say that’s longer form again.

The “A” word

In the wake of Apple’s HomePod release, I’ve seen talk about whether it is or isn’t an “audiophile” speaker. A Reddit post went through an inordinate number of tests to argue that yes, in fact, it is; Chris Connaker of The Computer Audiophile goes into great golden ear detail to call bullshit.1

First off, let’s address the snake oil in the room. There’s a decades-long debate in audio circles between “objective” (if you can’t measure a difference, you can’t hear a difference) and “subjective” (measurements don’t capture everything you can hear in terms of sound quality). While I lean toward the objectivist side, the subjectivists aren’t intrinsically bananas; it’s not hard to find audio components—particularly speakers—with similar measurements that still produce different sound.

The problem that we run into is when subjectivists step beyond “there are things you can hear that aren’t reflected in measurements” and stand firmly on “there are things you can hear which can’t be subject to testing, period, only listening.” This, in a nutshell, is how audiophile came to be synonymous in the popular imagination with rich white guy who buys $550 USB cables. Sure, the fact that nobody can consistently distinguish the sound of that cable from a $5 one might be due to an inherent flaw with the very concept of double-blind testing, but there are other possibilities one could plausibly entertain.

My own definition of audiophile is fairly prosaic: someone who wants a system capable of reproducing the source recording as closely as possible. I found fairly early on that spending a few hundred bucks more here and there in my system made what were, to my ears, profound differences in quality. (I also learned that calibrating and positioning speakers properly makes just as big a difference.)

When reviews talk about the HomePod being “audiophile,” though, they’re not using my definition, and they’re certainly not using the Computer Audiophile’s. What they mean is: “does this sound good?” Does it outperform other audio-systems-in-a-can? Does it have low lows and high highs and, uh, middle mids? Does it fill the room? Does it seem expensive at $350, or is it punching above its weight class? From most reports, it’s damn impressive for a system that’s only $350.

Did I say “only”? Yes. It’s a lot cheaper than my system, let me tell you. And my system is (ahem) just a wee bit cheaper than Connaker’s: the speakers he used in his reference system were around $45,000 for the pair. He defended this by arguing that he needed a reference system for, well, reference. I’m not saying that’s wrong, but I’m questioning how much we truly learn about a Mazda 3 by taking it out on the Nürburgring with our reference Audi R8.

I’ve only heard the HomePod for a few minutes in an Apple Store. It sounded…fine, other than bass so pumped up it verged on the comical. I’d like to hear it in, ah, a less challenging environment sometime. But I knew going in—as did Connaker, surely—that it not only performs automatic room correction and equalization, it “separates the music into direct and ambient sound,” in Apple’s words. There’s a truckload of processing going on here, which is antithetical to what audio purists look for. It isn’t meant for, as audiophiles would put it, “critical listening”; it’s meant to be something you can stick in a corner, shout Hey, Siri, play The Wandering Hearts at, and have not just that corner but the whole room filled with pretty good-sounding music. It’s a “sound first” smart speaker, not a High-End Reference System in a Tube. It’s Apple saying, “Hey, you can have something like Alexa, but have it sound way better,” and it does.2

The perennial lament of the audiophile is that the Unenlightened Masses accept low quality sound as “good enough,” and that if only they’d spend a little more they’d understand. And, you know, I did spend a little more, and there’s definite truth here. But while cheap junk is generally still cheap junk, the “midrange” of the audio world like Marantz, Denon, B&W and KEF has gotten great over the last couple of decades. I won’t get into arguments over bit rates and audio formats; if you can consistently hear the difference between CD quality and “high resolution” audio in a blind test, good on you, but most people can’t. (If instead you want to argue that proves blind tests are garbage, well, bless your heart.) Personally, I’m not convinced I can reliably tell the difference between lossless and well-mastered, high-bitrate lossy; I do know that when I stream that (gasp) 256K AAC Apple Music through my stereo, it sounds pretty good.

Ironically, that’s why I have no plans to buy a HomePod. It would surely be more convenient than my setup, but the HomePod is aimed squarely at households that don’t have or want a living room AV system, or want a secondary music-only system in another room.3 Neither of those describes my living situation at present. But I know people for whom it would be better than good enough.

  1. It’s worth noting that since the Reddit post was published and widely shared, the author has walked back some of these claims, as per the edit at the post’s top. 
  2. Of course, it’s not Alexa, it’s Siri. But you’ve never heard it say “I’m sorry, I can’t find the track ‘Egg Freckles’” in audio quality this high before! 
  3. The HomePod is very clearly not designed to be a TV speaker, even in a stereo pair, although I’m sure some enterprising nerd will find a way to do it and be dreadfully disappointed. 

The unbearable glibness of tweeting

I still love Twitter. A lot of us still love Twitter. But it’s past time to admit it’s an abusive relationship. (“Yes, he hits me sometimes, but it’s only for the retweets.”)

The common wisdom is that the Big Blue Bird’s problem is their lack of moderation, that the service is Exhibit A in the case against Silicon Valley’s belief that you can solve everything with algorithms. I think that’s some of it, but I don’t think it’s all of it. When your software becomes global community infrastructure, the choices reflected in your design have profound effects on behavior. It’s a choice, for instance, to offer no privacy controls other than “protecting” your account. That one choice alone is a large part of why Twitter is so hospitable to harassers: your only option for controlling who engages with you is flipping your entire feed between open and locked down, and—given that anyone you follow can inject anything into your timeline via retweet—aggressively curating not just who you follow, but who you allow retweets from.

Here are some other choices Twitter’s made. “Favorites” are public accolades, not private bookmarks. Mechanisms for retweets and quote tweets are baked in. Official clients stream notifications about not just who favorited and retweeted you, but who favorited and retweeted your retweets. And let’s not even get into who gets verified and what verification offers. None of these choices are necessarily wrong in either a technical or moral sense. But they’ve created a culture that rewards painting everything in the starkest, loudest terms possible.

There’s a metric crapton of political tweets across the partisan spectrum that I could point to, but as I was writing this piece, a bag of “Lady Doritos” dropped into my lap.

PepsiCo CEO Indra Nooyi gave an interview to the Freakonomics podcast in which she observed that women ate Doritos differently than men did (“they don’t like to crunch too loudly in public”) and said the company was getting ready to launch “snacks for women that can be designed and packaged differently.” The Sun, a UK tabloid, reported this as “Doritos to launch crisps for WOMEN because they don’t like crunching loudly or licking their fingers, boss reveals.” This led to a veritable tortillanado of hot take tweets about snack food sexism.

But wait! Then came the New York Times reporting “Not a Real Thing, Company Now Says,” which quoted PepsiCo’s gently acerbic retort, “We already have Doritos for women. They’re called Doritos.” Snap! Fake news! Well, yes and no. The quotes from Ms. Nooyi in the last paragraph are true; Frito-Lay is working on “snacks for women,” whatever the hell that may mean. The fake news part—in the sense that the Sun came up with it, not Nooyi—was the existence of “Lady Doritos.” Gosh, what an outrage-inducing, easily hashtaggable name they invented! Surely that couldn’t have been their intent. Ha. Ha ha. As of this writing, we’re 48 hours into Chipghazi, and the Twitter trends are just starting to ebb.

And this is a problem inherent in Twitter’s design that may not be solvable. Even if Twitter engineers could just go into the database and type DELETE FROM users WHERE is_nazi = 1, the software’s literally designed to reward superficial hot takes. It’s optimized for tweets that make you go “yeah, get those fuckers!” rather than tweets that make you go “hmm.”

When was the last time you scrolled through your Twitter timeline and felt smarter, happier, and generally more at peace with the world?

Mastodon and both propose that the solution to Twitter’s ills is decentralization. Mastodon has multiple “instances,” like Twitter servers, that each have their own rules and community guidelines. Because all the instances can interact with one another, you can follow any Mastodon user, not just the ones on your instance. is, if anything, more radical: a set of open standards that let good old fashioned weblogs interact with one another in Twitter-esque fashion. You can use it just like Twitter, but under the hood it’s using an RSS-like system to build your timeline. can host your blog for you (paid), which is a full-featured Jekyll install under the hood, but you could host your own blog wherever and on whatever software you want.

So far, these solutions are working, but I’m worried that—particularly in Mastodon’s case—it’s not because they’ve chosen a more resilient design, it’s simply because the community is so much smaller. There’s less social reward for turning the volume on everything up to 11 when the audience is tiny. But Mastodon makes many of the same choices Twitter has, including favorites, quotes (“embeds”), and retweets (“boosts”), then stirs in the questionable belief that moderation issues are effectively moot under their federated server model., by contrast, deliberately has no retweet mechanism. Favorites are just private bookmarks. As far as I can tell you can’t even get a list of followers. Unlike Mastodon, shows replies people make to people you aren’t following, the way Twitter did in its first couple of years. All this adds up to a surprisingly friendly, conversational timeline. (Also,’s first hire has been a community manager, which says a lot about their philosophy here.) But as I alluded to above, if you want to use it just like Twitter—i.e., no work on your part—you need to pay them to host your blog. They’re looking at it as a turnkey blog hosting service, but if it’s perceived as “like Twitter but with fewer features for $5 a month,” that’s a problem.

Yet both and Mastodon are just…nicer. I think is the better of the two, in no small part for the UX choices they’ve made that are explicitly the opposite of both Twitter and Mastodon, but Mastodon’s free nature gives it the potential to grow further. Either way, though, both of them have one huge advantage: they’ve seen the shitshow that’s turned Twitter into a Dead Bird Walking, and they can say, “You know what? Let’s not do that.”

Hanging out the shingle

I’ve been a technical writer for a few years now. While I never pursued “professional blogger” as an income stream, this blog is to blame. (Well, its Tumblr incarnation, strictly speaking.) Folks at RethinkDB found it, liked my writing, and one thing led to another.

RethinkDB ran out of money in late 2016. I spent 2017 at Realm, another startup in a similar space. While they got their needed round of investment, it came with a catch: lay off some of their staff. I was one of the lucky ones.

Now, it’s time to find an income stream again, and I confess I’m struggling with how to go about it. Will I take another position that involves an hour-long commute to a giant open office plan? Ugh. (RethinkDB was close by. Realm wasn’t, but they let me work from home three days a week.) I’d have to want to work for that company, with no reservations.

It turns out planting the “only on my terms” flag in the ground when you’re job-hunting is difficult. Not just for the obvious reasons, but also for some less obvious ones. It takes privilege to say “thanks, but no thanks” to a recruiter—and I won’t have this specific privilege all that long. My non-retirement savings should stretch about half a year, but that’s never as long as it sounds. (Also, a talk with a company isn’t an offer. I’ve already been rebuffed by one company I assumed I had a pretty solid “in” with.)

So this leaves me tempted to try and forge my own path. Listening to podcasters Marco Arment and Myke Hurley try to convince their fellow podcaster Casey Liss to give up his “real job” and go independent is a contributing factor. But the truth is, I’ve thought about this before. More than once. (I’m also, by Silicon Valley standards, ancient. I have unprovable suspicions that’s already been an issue.)

The truth is also that I’ve done consulting work before, and we’ll just say that the Tim Ferriss 4-Hour Workweek® dream proved elusive. I loved setting my own hours and working from wherever I wanted, but those weren’t high-income years. I had no idea what rates to charge and little idea how to rustle up clients; now I at least know how much I should charge. (Okay, I think I know.) Even so, among that set of podcasters I mentioned, I’m temperamentally closer to John Siracusa than the rest (if less particular about pizza). Having a job I enjoy matters, but so does financial security.

And yet. And yet.

I’m inching closer to setting up another small income stream or two, but those won’t be much. So can I make a go of being a technical writing consultant? An awful lot of companies out there need docs of one kind or another. (The subset of companies that both know that and are willing to pay for it is, I suspect, much smaller.) Part of this process involves figuring out what kind of network I have that I can draw on. If you’re reading this, you may be part of that network.

To put it baldly: know anyone who could use some technical writing done?

Thoughts on Patreon’s (rolled-back) fee change

Update, morning of December 13: Patreon has rolled back this announced fee change, which they clearly waited to do until I’d posted this. More seriously, I think a lot of the business dynamics I’ve talked about here with respect to the kind of creators Patreon needs to court in the future still hold true—and are still worth thinking about.

In case you missed this particular part of the internet catching on fire the other day, Patreon—a platform designed to allow content creators to get direct support from their fans, via pledges paid monthly or “per work”—has announced a change to their fee structure. Instead of taking 5% and a sliding (and opaque) transaction processing surcharge of 2–10% from money going to creators, they’ll take a flat 5% from creators and add a transaction processing surcharge of 35¢ plus 2.9% to pledges.

You’ve probably seen the reaction on Twitter and other social media: that this change will cause too many patrons to leave and will devastate creators’ incomes and everything will end in ash and fire. The community does not, as a whole, appear to be pleased, is what I’m saying.

Why make this change?

First off, I don’t think this was a cash grab by Patreon. The description of how the new fees work clarifies they’re going to stop “bundling” pledge charges together in one charge per month; this is not a sneaky way to pocket new transaction fees two through N if you’re supporting N creators. (I’ve seen people nitpick the specific numbers Patreon announced, but they use at least two different back-end processors with differing fees. It’s unlikely Patreon’s making money off this.)

Okay, but what about that marketing weasel guest post from June that everybody linked to? “We’d rather have our (Patreon’s) GMV be made up of fewer, but truly life-changed creators rather than a lot of creators making a few dollars.” (“GMV” is “gross merchandise volume.”) You can definitely read that as “screw the little guys.”

But should you? The argument is that creators need to have “an established online following, even if small” before launching a Patreon, and that it’s in Patreon’s best interest to focus on creators who meet that metric. This may sound brutal, but there’s truth to it. Also, remember Patreon makes the bulk of their revenue from aggregating across the “long tail”; I think the proper translation from Marketing Weasel here is “focus on the head and the tail will follow.”

Having said all this, let me put an asterisk beside “Patreon isn’t blowing off small creators.” I’m coming back to it at the end.

So they wanted this change because…

This year Patreon’s on track to make around $7.5M, according to an article about how they closed a $60M funding round in mid-September. That sounds like a lot, but as an 80-ish person company (in San Francisco!) that probably doesn’t cover their labor costs, much less anything else. They’re taking venture capital money because they need it.

Now, VC money comes with…strings. As I write this, I’m unemployed in part because my last employer’s investment round came with a demand for them to cut headcount, and I was one of the lucky ones. (Yay!) We don’t know what strings were attached to Patreon’s last investment round, but we know that it valued the company at a boggling $450M. Given that the investors would like to see a payout in five or six years rather than thirty or more, Patreon is going to have to change somehow. I’d bet this change connects back to this investment.

Why? Well, as Christie Koehler argued, Patreon may want to get out of the micropayments business. I’m not positive she has all the details right (for instance, payments Patreon processes with Stripe are likely covered by Stripe’s “money transmitter” license), but I suspect the gist of her argument is on point.

At first glance this seems baffling. A lot of people flocked to Patreon because it was the only company with a micro-transaction model! Well, yes, because nobody else could make it work. It’s possible that at the end of the day, Patreon can’t make it work, either. Bottom line: I think Patreon’s most recent round of investment came with a requirement that they move to this billing model. (Update: I still think this is what the investors wanted.)

This doesn’t materially change Patreon’s revenue, though, so how are they going to earn that $450M valuation in five to ten years? Can they do that just by growing the number of creators? Well—maybe. Assuming they don’t change the business model, they just need to increase the number of creators and patrons they have. But it’ll really, really help if they increase the amount of revenue they get from each patron. They’d rather have you spend $15 than $12, right?

So, completely theoretically, what if they make a change that doesn’t technically bring Patreon more revenue, but nonetheless makes $12 one-dollar donations cost more after fees than three five-dollar donations do?
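The arithmetic is easy to sketch. The 35¢ + 2.9% surcharge is the number from Patreon’s announcement; the helper function and the round-half-up-to-a-cent behavior are my own assumptions, chosen to match the per-pledge figures Patreon published:

```python
def patron_cost(pledge_cents: int) -> int:
    """Cost to the patron of one pledge under the announced fees:
    the pledge itself plus a 35-cent + 2.9% surcharge, rounded
    half up to the nearest cent. (Rounding behavior is my guess.)"""
    surcharge = 35 + int(0.029 * pledge_cents + 0.5)
    return pledge_cents + surcharge

# Twelve $1 pledges versus three $5 pledges: $12 of support vs. $15.
twelve_ones = 12 * patron_cost(100)  # 12 * 138 cents = $16.56
three_fives = 3 * patron_cost(500)   # 3 * 550 cents = $16.50
```

Under those assumptions, twelve one-dollar pledges cost their patrons a combined $16.56, while three five-dollar pledges cost $16.50: less money out of patrons’ pockets, more money reaching creators.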


[There’s a section I’ve deleted here that went into specific numbers: “I’m not convinced it’s the shitshow it’s been widely received as.” As Patreon themselves noted in their retraction, this change disproportionately affects low-value pledges, but it doesn’t necessarily follow that creator revenues would go down. While I spent a lot of time digging into those numbers, they’re all moot now.]

Going forward

First, while I can’t be positive this wouldn’t have been the Patreoncalypse everyone else seems to think it is, so far that’s not supported by the data. Let’s check back in a month. Meanwhile, if we’re going to be angry about it, at least let’s make sure we’re angry over accurate information. I’ve seen a lot of (well-intentioned) bullshit passed around.

Second, if you’re a creator using Patreon who’s willing to stick with the platform, but you don’t have reward levels designed to encourage people to go up to the $5 or higher level, consider changing that. I’m not suggesting getting rid of the $1 level, but the transaction fees drop off sharply with just a few extra bucks. ($1 becomes $1.38, but $3 becomes $3.44, and $5 becomes $5.50.) Even without the new transaction fees, this will really help you get higher revenue.

Third, lest I come across too much as saying “everyone should just rally around Patreon,” well: no. I’ve been an advocate of owning your own space on the internet for a long time, and that’s why I consolidated a lot of my presence to a new home earlier this year. There’s a strong argument for hosting your own content on your own site, using a service like Memberful to handle subscription management. But this isn’t something everyone’s up for doing on their own. No matter how easy you make it, it costs time and money.

So here’s that asterisk about large vs. small creators I promised I’d come back to (remember?). To earn that $450M valuation Patreon has, they’re going to have to double revenue every year for the next four or five. Wouldn’t a great way for them to start making serious bank be to start landing creators who can get a few hundred thousand patrons instead of just a few thousand?

As of this writing, just six creators have more than 10,000 patrons. The top of the “long tail” curve Patreon has just isn’t that far above the bottom. This is the part of Patreon’s business that I suspect investors are most keen on changing. It’s great that Patreon can get Amanda Palmer now, but they’re going to need to get Imagine Dragons. I don’t mean “the next” Imagine Dragons, either. I mean an existing artist who can bring a bazillion fans with them.

And to do that, going after Financially Successful Creators™ as they’ve defined it now isn’t good enough. They can’t just go after people they think Patreon can bring to the next level. They’re going to have to go after people who are already making six- or even seven-figure incomes from their art. They have to be able to say, hey, if you take a chance on us, we can give you the same income with fewer middlemen.

Maybe they can do that while still providing good service for “little guys”—by which I mean everyone currently on the platform—the way WordPress seems to have managed. But it’s tough to be both a consumer-focused company and an enterprise-focused one.

And, yes, “enterprise-focused” is what I mean. It’s just that the big value unicorn in Patreon’s space isn’t General Electric. It’s Beyoncé.

The revolution will have a monthly subscription

Introducing the first iteration of the Apple TV with an app store, Tim Cook (in)famously declared, “We believe the future of TV is apps.” The Apple TV stands out from its competitors for only two things: the App Store, and a much more powerful CPU. So it’s safe to say that Apple genuinely does believe this is the future of TV.

It’s also safe to say, though, that it doesn’t appear to be panning out. Pretty much nobody buys Apple TVs for much other than what other streaming boxes do.1 We’re watching Netflix, Hulu, HBO, and Amazon Prime. (Well, we will be watching Prime. Later. Theoretically.)

What we want from TV is—sorry for the buzzword—content. In practice, it doesn’t matter how we get Game of Thrones or Star Trek: Discovery as long as we can get it easily on demand. Apps are arguably more hindrance than help. Imagine having a storefront that had all the shows, and we just paid per episode or per season for permanent access to our favorite shows—we could stream them or download them. Wouldn’t that be much better?

Ha ha! I’m pulling a fast one on you. Sorry. We had that from Apple and Amazon by the mid-2000s. Have you ever bought a TV show on iTunes? No? Yes, but only because it wasn’t available on Netflix? Once we got “all you can eat” streaming for $10–12 a month, we all said fuck this à la carte thing. We’ll just wait for all the networks and all the studios to put all the things on Netflix. Everybody wins!

But studios don’t make as much selling to Netflix as they used to in old syndication deals. They make a lot less. So what are they going to do? Start their own streaming service. Yay! You know Hulu, Netflix, and Amazon Prime, and HBO Go/Now. Maybe you know Walmart’s me-too Vudu service. And you’ve recently heard Trek nerds bitch about CBS All Access. But there’s also Crunchyroll, Feeln, Acorn TV, Filmstruck, BritBox, Shudder, Screambox, YouTube Red, and others that I’m certainly forgetting—and that’s without counting the “cable replacement” services like Sling Orange, PlayStation Vue and Hulu Live. Disney is gearing up for their service, with plans to pull their stuff off other streaming services. And there’s whatever the hell Apple is doing.2

“But nobody’s going to subscribe to all those streaming services!” Not if you’re already paying $100+ a month for cable before you add any streaming services, no. But imagine a world (it’s easy if you try) in which you’re only paying, say, $50 a month for network access with no bundled television. All your shows now come from streaming services. So the chances are you’re going to end up subscribing to more than just Netflix and one other.

If you look at cord cutting as a money-saving move, this sounds depressing: it’s painting a picture of a future where the money you save by going data-only gets eaten up by streaming services. Well, true. But now you’re paying for everything on demand, in most cases commercial-free. Honestly, that’s still a win.

“Okay, but even if you get me to pay for five or six services, you listed eighteen services and claimed you were probably forgetting some. That is not gonna happen.” No, it isn’t. Most of those services aren’t going to survive long-term. They’re going to merge with other services or just quietly vanish. (SeeSo, we hardly knew yeeso.) But streaming video will likely never consolidate to a point where you can get every show you want by ponying up for one or two big names.

Is this just about money? Is it just greed that stops networks and studios from making it easier on all us consumers by just putting everything on Netflix or Hulu? Sort of. But it’s also about control.

Giant aggregators kind of reverse the way we think of monopolies working: instead of giant companies gaining control over a market and gouging consumers at retail, they lower retail prices and deliver the real pain to the suppliers. Walmart is the original giant aggregator, and it’s not hard to find stories of companies driven to bankruptcy by “success” selling through them. Twenty-First Century Walmart, Amazon, is remarkably cavalier about counterfeiters selling physical goods on their site. And you don’t have to be on the take from Penguin Random House to wonder whether it’s particularly healthy for self-publishers to rely on Amazon for three-quarters or more of their sales. If they decide they’d rather only give “indies” a 50% cut of the cover price instead of 70%, well, what are you gonna do about it? Pray they don’t alter the deal any further.3

The music industry still blames Apple’s iTunes ecosystem for destroying the once-lucrative CD market. So it’s not surprising that studios have decided that if on-demand streaming was truly going to be the future of television, they did not, in fact, want to chill with Netflix. Think about streaming music: artists say that unless they’re Taylor Swift, they’re making bupkis from Spotify, yet Spotify pays out so much for music that they’re still not profitable. These sound mutually exclusive, but they’re not: Spotify and friends should have charged $15 or $20 a month for unlimited music streaming, not $10.

Does that mean that Netflix should be charging us more than $10.99? If they wanted to be the video version of Spotify, yes. But they don’t: they want to be a network. Amazon wants to be a network. Hulu wants to be a network. Apple (probably) wants to be a network. CBS wants to remain a network.

And at the end of the day, that’s what this boils down to: video streaming services aren’t the new airwaves, they’re the new networks. And since we’ve pretty much all collectively decided we can’t stand commercial breaks—how we “paid” for most network programming for sixty-odd years—we’re going to end up paying those networks directly.

So the future of TV is not apps—the future of TV is, just like the past of TV, networks. The key shift is a move from an advertising-supported model to consumers paying the networks directly.4

But will this future last as long as what it’s replacing? That’s a lot harder to say; the network-and-affiliate broadcast model has been with us for nearly a century, predating television itself. The model definitely needs tweaks—streaming services need to stop treating their metadata as proprietary secret sauce and let companies building streaming appliances build comprehensive cross-service program guides, for a start. But it seems to me like this future, even if it’s not precisely the one we wanted, has legs.

It’s much less clear to me whether this model will work well for software, as more and more programs take cues from Adobe and Microsoft and move toward subscription models. That, however, is another post.

  1. The Apple TV is arguably most of the way to being a solid “casual” game console, but it’s become clear that Apple has no idea how to make it attractive to either developers or consumers in that space. 
  2. I suspect Jason Snell is correct: Apple will take an “HBO approach…offering a dozen original series and a curated collection of films and classic TV shows.” 
  3. This is what much of Amazon’s stock price was historically based on: investors bet they would do exactly what Walmart did. That this hasn’t come to pass may well be due to Amazon Web Services becoming the company’s biggest revenue driver. 
  4. Advertising-supported services that are free to watch will stick around, but there’s a strong antipathy toward services with monthly bills and ads. I doubt that “blended” model will be with us long-term. 

Feed housekeeping

I finally remembered that I’d been using FeedPress for, er, feed stuff (remember RSS?) on the original Coyote Tracks, and I’ve updated it to pick up the feed from the new site instead. So if you were one of the couple hundred people who’d been reading posts that way, hi!

Since the new blog merges what was Coyote Tracks and my rarely-updated writing blog, Coyote Prints, you may get more than what you want here. If you only want tech posts, you can subscribe to the tech category feed. If you only want writing posts, you can subscribe to the writing category feed. (You can get a feed for any of the categories by going to the category page itself, if you really insist.)

And, if you’re following this as a link from Goodreads (hi?), note that the Goodreads blog is only pulling from the writing category.

Last but not least, if you’re subscribed to the Tumblr feed directly somehow, you’re going to get what’s cross-posted to Tumblr, which is going to be mostly tech but probably kind of random.

Cotton, hay, and rags: giving bias the veneer of rationality

As you’ve surely heard by now, a mid-level engineer at Google—he’s anonymous, so I’ll call him Mr. Rationalface—wrote a memo called “Google’s Ideological Echo Chamber” in which he argued that “differences in distributions of traits between men and women may in part explain why we don’t have 50% representation of women in tech and leadership. Discrimination to reach equal representation is unfair, divisive, and bad for business.” (His words, not mine.) In response, recently former Google engineer Yonatan Zunger wrote the simply-titled “About this Googler’s manifesto,” in which he argues it’s manifest bullshit. (My words, not Zunger’s).

Between the time I started writing this and now, news has come out that Mr. Rationalface has been fired. I’ll come back to that.

I’ve been thinking about responses I saw on Hacker News to Zunger’s piece. The most common defense of Mr. Rationalface’s thesis was to restate its core premise: This whole drive for diversity rests on the premise that there’s no difference between men and women, but the falsehood of that is apparent to even the most casual of observers.

This is a common rhetorical trick I see in this particular corner of the internet (i.e., rationalists who want to rationally prove that PC SJW WTFery is irrational): restate the opposing premise incorrectly, then commence a full frontal assault on the restatement. Of course there are biological differences between men and women; who claimed otherwise? Mr. Rationalface proceeds from here to assert the following totally objective, non-sexist truths:

  • Women are more open toward feelings and aesthetics, while men are more open to ideas.
  • Women have more empathy than men, while men have more interest in systematizing.
  • Women are gregarious and agreeable; men are assertive!
  • Women are more neurotic, with higher anxiety and lower stress tolerance.
  • Women are irrational, that’s all there is to that! Their heads are full of cotton, hay, and rags!

Whoops! While the first four are from Mr. Rationalface, that last bullet point was from noted academic rationalist Henry Higgins.

A fairer way to state the “pro-diversity” case is more like: some perceived differences between men and women, used to justify associating higher-paying professions with men, are rooted in dubious stereotypes. And we can test whether there’s prima facie evidence for that by looking at the actual history of software engineering. In the early days, it was women’s work: it was seen as more like filing and typing than math and logic—the hard stuff was the hardware. By the mid-1970s, though, it was men’s work. The work hadn’t changed. What changed was the perception of the work: society started to consider it high-status white-collar work rather than low.

I know that—irony of ironies—I’m trying to rationally analyze an argument that is, at its heart, not about rationality at all. It’s about reclaiming ground in the Great Culture War. If the gender disparity in the engineering workforce at Google reflects something broken in their culture, it demands a solution that involves taking action one might call “affirmative.” PC! SJW! Cthulhu fhtagn! So don’t even allow the possibility that the problem is in the culture. If the problem isn’t in the culture, it must be in women. The solutions offered must involve working with and around Essential Feminine Nature.

But it’s the argument style that leaves me fascinated, the same style employed by many of his defenders, and a style that echoes through GamerGate, the Sad Puppies and other geeky outposts in the Great Culture War. If I may engage in some stereotyping myself, it’s an argument style beloved of folks who are mostly white, mostly male, mostly under 30, and mostly a little too sure of their razor-sharp logic. I don’t think this kind of guy gets redpilled because of deep-rooted anxiety over losing white male privilege—I think they get redpilled because it’s just effin’ cool to be told you’re one of the few people smart enough to see reality as it is, rather than buying into the conventional wisdom that traps all the other sheeple. This is why so many fringers, from anti-vaxxers to white supremacists, construct elaborate, nearly-logical theories built on a stack of unexamined premises. This is obvious to the most casual of observers, so let’s move on, they say, while the rest of us sheeple are making the time-out signal and saying wait, what?

Isn’t it obvious when premises are false? Isn’t this willful—and malicious—ignorance? Sometimes. If we’re honest with ourselves, more often than not. But the more boxes you tick on the cis-het-white-male line, the more advantages you get for no actual work on your part. You have, if I might be so bold, a rational self-interest in supporting arguments that those advantages are immutable nature, and attacking arguments that they’re uncomfortably squishy social constructs. To paraphrase Upton Sinclair, “It is difficult to get a man to understand something when his social status depends on his not understanding it.”

So about Mr. Rationalface’s firing. If I were his manager, would I have canned him? I admit I’m not comfortable with hey, it’ll only chill the speech we don’t want; you can’t know that only the “right” group of people will take away exactly the message you intend to send. (Exhibit A: Hacker News.) But as Yonatan Zunger noted, a substantial number of Mr. R’s (former) coworkers were likely furious; he might as well have scrawled Does Not Work Well With Others, Especially Wimmen across his face with a Sharpie. From a—dare I say it—coldly rational standpoint, Google HR gets a firestorm no matter what, but keeping him risks a second, bigger firestorm when he shoots his mouth (or text editor) off to a coworker again.

I looked back at Hacker News briefly on the day of his firing and saw, well, what I expected. This is an outrage! This proves all the author’s points! This was not the anti-diversity manifesto the SJWs are claiming it is, it’s a well-written, polite, logical argument! It definitely had the appearance of logic, and it was debatably civil. But from its mischaracterization of the “pro-diversity” arguments through its “you’d agree with me if bias wasn’t blinding you to my truth” conclusion, it was precisely what its critics claimed it was. It’s easy to say Mr. Rationalface lost his job for not kowtowing to liberal groupthink, but sometimes a burning bridge is just a burning bridge.

A new home

Over the years, I’ve ended up with multiple “presences” online:

  • The original Coyote Tracks, hosted at Tumblr
  • “Coyote Prints,” an attempt at a writing news-ish weblog, generated with Jekyll
  • My website, made with a hacky homebrew static site generator
  • The occasional foray onto Medium

That’s not even inclusive of earlier attempts at this, like a LiveJournal and, before that, a very simple bloggy thing that worked by putting files with names like 1999-01-01-entry.txt in a specific directory that were picked up by a small PHP script. (That was back in the days when PHP was just used to embed bits of interactivity in HTML pages, just like that, which is something it’s pretty good at. I’m pretty sure I was doing that in early 1998, which by some measure might make me one of the earliest bloggers, or would if there had been just one damn person reading my home page.)

While this hodgepodge of bloglike objects had good intentions—separation of concerns, trying new platforms, keeping up with the cool kids—it’s become too unwieldy. The decision where to post is sometimes kind of arbitrary. Many of the people who read about my writing are interested in tech; while the reverse isn’t as true, I’d actually kinda like to expose some of my tech audience to my writing, especially stories that involve techy things.

A bigger concern, though, comes down to fully controlling my own content.

This isn’t a new concern; Marco Arment was writing about owning your identity back in 2011. Some blogging services let you bring your own domain—Tumblr does it for free, which is why you go to instead of—and others, like, let you do it for a modest charge. Medium makes it possible, but only for publications (and at a fairly high cost); many other services don’t offer this at all.

So: Welcome to

But while owning your online identity is necessary, it’s not sufficient: you need to own your content, too. I don’t mean that in a legal sense—despite the headless chicken dance the internet goes through every time somebody changes their legal boilerplate, no reputable service ever has tried, or ever will try, to steal your copyright. I mean it in an existential sense.

I still like Tumblr, despite its foibles, but as far as I know it was never profitable on its own, it was never profitable for Yahoo, and it’s on track to never be profitable for Verizon. As for Medium, I love what it’s trying to do, or maybe I love what it was trying to do last business model and not so much now, or maybe vice-versa, or maybe it was three or four business models ago. What other businesses call pivots, Medium calls Tuesdays.

I’ll circle back to that, but the upshot is that I decided I needed a POSSE: “publish (on your) own site, syndicate elsewhere.” (Look, I didn’t make it up.) And that brings me to…WordPress.

I’ll be blunt: I don’t like WordPress. Internally it’s a dumpster fire, full of arcanely formatted non-OO code, bloated HTML, and a theming engine designed by bipolar squirrels.

So I looked at other things. I know there are ways to make static site generators quasi-automatic, and that Matt Gemmell swears it’s faster to blog from his iPad with Jekyll. I’ve done it, with a system not too dissimilar from the one he describes. It works, but I don’t love it. I’m comfortable at a shell prompt, but I don’t want one to be necessary for blogging, especially if I’m on an iPad. (I’m moving back to the Mac for portable writing, but that’s another post.)

I also looked at Ghost, which started with some fanfare a couple years ago as a modern take on WordPress that focused back on blogging essentials rather than shoehorning in a content management system. Now they’re a “professional publishing platform,” and all their messaging is we are not for you, casual blogger, pretty much the opposite of their original ideology.

But I can publish to WordPress right from Ulysses. Or MarsEdit. Or the WordPress web interface, desktop app, or iOS app. The WordPress API is, at least for me, a killer feature. And its ecosystem is unmatched: I have access to thousands of plugins, at least six of which are both worth using and actively maintained.

So: I’m still finding my way. I’ve added a cross-poster which can theoretically post everywhere I want, although I’m not sure if I’m going to use its Medium functionality—I want to be able to vet what it’s posting before it goes live there, so I’ll probably just use Medium’s post importer. And I don’t want to syndicate everything everywhere: I want to syndicate selectively. (This post probably won’t even go to Medium, for instance.)

The semi-ironic footnote: I don’t know if this is really going to make me post more, when all is said and done. I’ve always been guilty of being more interested in building things than running them. But we’ll see.

Form over Frolic: Jony Ive’s quest for boring perfection

Right now I’m sitting in front of a 27″ iMac. It’s the best computer I’ve ever owned, with a 5K display, high color gamut, 24 gigs of RAM and 512 gigs of SSD storage. It’s beautiful and minimalist, just like every iMac they’ve released since they switched to aluminum in 2007.

It’s also the least modifiable desktop computer I’ve ever owned. This trend also goes back to that aluminum iMac, in which—like today’s—only the RAM is user-upgradeable. (Since 2012, even that’s no longer true of the smaller 21″ iMac.) It’s hard not to ask: why is thinness the priority in all of Apple’s designs?

You know the answer: Jony Ive. It’s clear by now that he would like everything Apple produces to look as close to a pure pane of glass as he can make it, with minimal, unadorned metallic frames, as close to unbroken and symmetrical as functionality allows. And Ive’s team is perfectly willing to sacrifice functionality in pursuit of this goal. A female Lightning port is fractionally thinner than a female USB-C port, and now you know why the iPhone will never get USB-C ports. Sorry. You’re lucky the one-port MacBook’s one port isn’t a Lightning port. (I have it on good authority that was under consideration.)

This often gets portrayed as a choice between staying chained to legacy hardware and forging ahead to the future. But if you were using Macs a decade ago, do you remember the way the power indicator light on a Mac, both desktop and laptop, used to slowly pulse when it was asleep, as if it were slowly breathing? Or the way batteries on laptops, both replaceable and permanent, used to let you check charge levels without turning on or waking up the machine? Or, as recently as last year, the way power plugs changed color to show charging state? All of that—along with the illuminated Apple logo and, now, the cheerful startup chime—has gone away.

All the price of progress, right?

A couple years ago, Shawn Blanc published a book about “how to make good things great” called Delight is in the Details. That phrase captures an essential paradox: we want our products to stay out of our way in everyday use, yet products that convert us from merely satisfied customers to fans have little touches that call attention to themselves in just the right way. When I start my Mazda, its display lights up with the words “Zoom Zoom” for just a few seconds. It’s stupid, but after six years it still makes me smile.

“Little touches that call attention to themselves” are the opposite of Ive’s guiding aesthetic. He creates beautiful objects you can appreciate as works of art. You can’t help but marvel at the lengths to which his team will go to make a perfect fusion of glass and metal, to craft UIs that appear to directly manipulate data, to make the hardware disappear while you’re using it. Under Ive’s direction, Apple delivers works which are closer to the science fiction future than any other major consumer electronics company. And yet his designs are relentlessly whimsy-free. There won’t be a moment that catches you off-guard and makes you smile. Ive’s work never aspires to make you giggle with delight.

Software doesn’t escape his penchant for austerity, either. The Ive era of software UX has been about flattening, removing, relentlessly stamping out skeuomorphism. The “traffic light” window controls are just circles now; the swirling barber pole progress bars are simple blue, with a subtle pulse; we don’t even get the little puff of smoke when we pull icons off the dock. I’m surprised the iOS icons still jiggle-dance when they’re in rearrangement mode. I’m not sure that it’s fair to say that we’re seeing a software analog to Apple’s quest for thinness, but I’m not sure it isn’t, either.

I’d hardly be the first one to complain about a perceived drop in software and UX quality, or to question whether Apple’s being a little too aggressive in dropping legacy ports. Yet it feels like that’s always been part of the deal, right? We’re taking away the floppy drive, or only giving you these weird USB ports, or sealing the battery in, but look at how cool we can make this thing now! It’s not like anything else on the market. It’s fun.

This iMac is the best computer I’ve ever owned, but nothing about it screams fun. The quirkiest thing about it is my mechanical keyboard, something Apple would never dream of making on their own these days. (So gauche.)

Yes, but you keep talking about the Mac line. The future is in iOS! Despite revealing myself in past posts as a Mac partisan, I think this is not only true but, overall, good. I’m a fan of that science fiction future, and it’s not one in which I see many people sitting down in front of 27″ monitors and keyboards for their computing needs—even if the monitors are holographic and the keyboards aren’t physical.

But man, talk about the “pure pane of glass” ideal, right?

The argument Apple is implicitly making is that computers—especially the computers of the future that the iPad typifies—are appliances. Appliances can be beautiful, but they shouldn’t exhibit frippery. They should be focused. We should prefer the Kitchen-Aid stand mixer to the plastic knockoff that does twice as much at half the price, because it won’t do any of those things well and it’ll fall apart in a year. (Besides, you can do all those things with the Kitchen-Aid, anyway; you’ll just need to buy some dongles.)

That’s all true. Maybe Ive knows best. But if you showed me a table with an iPad Pro, a Surface Pro, and a Surface Book on it and asked me to rank them in order of Cool Factor, I’d be hard-pressed to put the iPad at the head of the line. Microsoft isn’t trying for tiny-quirk delight, which is just as well (“It looks like you’re trying to add personality to your UX! Can I help?”), but they’re sweating small, thoughtful details. Apple sweats the details of manufacturing processes. That’s great, but it’s not the same thing.

Maybe—just maybe—a little frippery is okay, even if it adds a half-millimeter in depth to a product, or adds a touch of (gasp) skeuomorphism to the UI here and there, or allows a slightly less restrained, tasteful pigment on the anodized aluminum case. Injecting a bit of fun, even weirdness, to their computers in the late ’90s helped pull Apple back from the brink. It may be time for another injection.

Being Kitchen-Aid is a fine goal, but you know what? They sell that stand mixer in nearly three dozen colors.

Originally published at Hacker Noon.