I can say, having finally played enough to burn out / quit, that Farm RPG is pretty much the ideal idle game for me. It is simple, pleasant, non-predatory, and strikes the perfect balance of encouraging habitual play without making you feel bad for not playing it. There's a huge amount of content, the game changes a meaningful and interesting amount over the course of your playthrough, and it rewards creativity & pathing without really requiring you to bust out a spreadsheet.
If there's any flaw in it (besides a meaningful dearth of social aspects, which in my case could almost be argued as a boon), it's that the endgame is toilsome. My last six months of playing Farm RPG were all the same: log in in the morning, spend five minutes doing my daily chores, make a bit of monotone progress on whatever the current milestone was (which was always rote), and then log out. It's hard to hold this against the game specifically because all idle-games seem to struggle with this prolonged endgame — but it's worth calling out nonetheless.
I loved The Phoenix Project, this book's spiritual and literal predecessor — while I quibbled with the prose and characters, I deeply enjoyed not just the concept of a ludocratic narrative but its execution: I came away feeling like I had learned a good amount about management and scaling a technical organization.
The Unicorn Project is meant to be the same book as The Phoenix Project, except with a more specific software engineering focus rather than a general IT focus — and this, perhaps, is where it all falls apart for me: unlike general information technology, I now know a good deal about best practices in high-performing software engineering organizations, and as such I had nothing to learn from this book except that, man, Kim is not good at fiction-writing.
This is not meant to be a condemnation of the book's recommendations! Kim walks through the importance of reproducible builds, continuous integration, functional programming: all good things, all important things, all things that anyone who has spent time in a FAANG already knows to be good. If you're not one of those people (and I don't mean this in a dismissive way, I promise), the book might be more useful.
But I was hoping for a book that teaches how to go from an 80th percentile organization to a 95th percentile organization, and this book is a primer instead on going from a 10th percentile organization to a 70th percentile organization.
When I look back on the critical and popular fervor for the second season of Ted Lasso — and remember that it came out in the first year of a global pandemic that filled the average watcher (myself included!) with a level of dread and despair that has, for most folks, not been experienced before or since — things click into place a bit. What the show lacks in traditional merit (a satisfying, interesting plot; consistent and believable characters; humor that builds over the course of a scene, episode or season; an internal consistency that rewards the viewer's loyalty and attention) it purportedly makes up for in positivity.
I don't think this is undue criticism of the show's faults, and indeed it seems to lean into its own Flanderization to a great extent. We float weightlessly through Richmond's ups and downs as a team (they begin the season 0-0-6, and then suddenly it's 4-4-6, and then suddenly they're on the top of the table, and we have watched maybe five minutes of actual football). I don't begrudge the show for leaning away from the actual mechanics of football — not every show needs to be Friday Night Lights — but it's symptomatic of the absence of any actual episode-to-episode stakes or drama. Things that certainly seem like they should be meaningful (a relegation team jettisoning their biggest sponsor) are never discussed or revisited; personal demons are exorcised after a single therapy session; every character (for the most part, well-portrayed and lovable) floats pleasantly from one surface-level issue to the next.
This is not problematic on its own, except for one thing: the show is not very funny. There are some good one-liners (the Bill Lawrence touch!) but it lacks the Bob's Burgers / Azumanga Daioh DNA of "here are a bunch of wacky people you love doing very funny things", and it tries to keep one foot in the real world for serious depictions (at least the simulacra of serious depictions) of mental health.
The job of a piece of art is to enchant and transform us; I think Ted Lasso did that for a cohort of its die-hard fans back in 2021, and I begrudge neither the show nor its fans when I say that, removed from the pandemic, it shows little ability to do either. Instead, it feels like a Lifetime movie given an Apple TV budget: pleasant and mollifying, but certainly not great.
Hard not to draw parallels to Azumanga Daioh — a show that I think Cromartie edges out as the funniest anime I've ever seen. Whereas Azumanga feels sweet and dated — not in a bad way, but in that it shows its age and clearly influenced a legion of copycat "cute girls doing cute things" successors — Cromartie, despite coming out a single year later, strikes you as incomprehensibly modern. The closest historical parallel that comes to mind is the Adult Swim extended universe, but whereas those shows lean too far into a kind of nihilistic absurdism that never quite resonated with me, Cromartie has a cleverness with every single throwaway gag or callback that wins you over quickly. I'm shocked there's only a single season of this show; I'm shocked it's not talked about more often.
Buttondown's API calls are very fast, and one of the reasons why is that we've removed every single possible database query that we can.
The most recent was what looked like a fairly benign `COUNT(*)` query coming from the default Django paginator; if you're going to paginate things, you need to know how many there are to paginate, fair enough.

However, it irked me a little bit that we were always doing that `COUNT(*)` query even when we didn't need to: say, when we were returning a list of 14 emails and a page can hold up to 50. Objectively speaking, that `COUNT(*)` query is unnecessary overhead: we know there aren't any more emails than that, since we've serialized a full list that is smaller than the page size.
I went poking around for solutions to this problem, and came across a great article from Peter Bengtsson that talks about both the use case I had in mind and the right solution, which at a high level is: serialize (and count) the full results list up to the maximum page size, and only issue a full count query if you actually hit the page size.

Peter's snippet is more pseudocode than actual code, and I wanted something that I could actually use as a drop-in replacement for the Django paginator. Here it is, in full:
```python
from django.core.paginator import Page as DjangoPage
from django.core.paginator import PageNotAnInteger
from django.core.paginator import Paginator as DjangoPaginator


class Paginator(DjangoPaginator):
    def validate_number(self, number) -> int:
        try:
            if isinstance(number, float) and not number.is_integer():
                raise ValueError
            number = int(number)
        except (TypeError, ValueError):
            raise PageNotAnInteger("That page number is not an integer")
        return number

    def page(self, number) -> DjangoPage:
        validated_number = self.validate_number(number)
        if validated_number != 1:
            return super().page(number)
        internal_results = []
        for i in self.object_list[: self.per_page]:
            internal_results.append(i)
            if len(internal_results) == self.per_page:
                break
        if len(internal_results) < self.per_page:
            # The below override correctly throws a type error because we are
            # overriding a read-only cached property (ie a method) with a constant.
            # This is the whole point of this subclass, so we ignore the type error.
            self.count = len(internal_results)  # type: ignore
        return DjangoPage(internal_results, validated_number, self)
```
Note that it is important to override `validate_number`, too: it contains a sneaky little check of `.count`, which is a read-only cached property (ie a method) that triggers the `COUNT(*)` query.
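The short-circuit itself is independent of Django. Here's a minimal, framework-free sketch of the same pattern — the function and the toy backend are illustrative, not Buttondown's actual code:

```python
def paginate_first_page(fetch_page, count_all, per_page):
    """Fetch page 1 and derive the total, skipping the expensive count
    when the page itself implies it.

    fetch_page(n) stands in for `SELECT ... LIMIT n`;
    count_all() stands in for `SELECT COUNT(*)`.
    """
    results = fetch_page(per_page)
    if len(results) < per_page:
        # The page isn't full, so its length *is* the total: no count needed.
        return results, len(results)
    # The page is full; we genuinely don't know the total without counting.
    return results, count_all()


# A toy backend: 14 "emails" with a page size of 50.
rows = list(range(14))
results, total = paginate_first_page(
    fetch_page=lambda n: rows[:n],
    count_all=lambda: len(rows),
    per_page=50,
)
```

With 14 rows and a page size of 50, `count_all` is never invoked; only a full first page forces the real count.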
I don’t think there’s anything wrong with an anodyne, predictable rom-com. I am fully baptized into the Church of Ephron: there are few autumnal traditions more pleasant and comforting to me than whiling away an afternoon to the paint-by-numbers plot beats of You've Got Mail et al (though I’ll exclude When Harry Met Sally from this umbrella, which is of course excellent but at least makes some overtures at novelty in terms of form and function.)
Netflix knows how to produce and distribute these kinds of films en masse: they are increasingly their bread and butter, as they begin to cede more of the “prestige TV with the edges sanded down” territory to Apple TV. What separates the good versions of these films from the awful ones is how much you enjoy spending time in the world, and with the characters, that a bored team of Malibu screenwriters conjured on an otherwise uneventful afternoon — and this is Find Me Falling’s greatest sin, for outside of a somewhat winning performance from Ali Fumiko Whitney (in a very gee-shucks sort of way), every single person in this film appears miserable, as if resigned to their fates. Set aside the incomprehensible leaps in characterization and the quasi-sociopathy required to use a suicide cliff as a plot point: how do you spend six months in Cyprus on a Netflix budget and not at least make me jealous that I’m not in your shoes?
Gilbert & Sullivan would have loved Ted Lasso — absurd and friendly and ever, ever obsessed with having its pathos and eating it too. I write this in a period of what I would describe as post-post-backlash: there was a period where America, particularly during COVID, was obsessed with Ted Lasso, and then a period after that where it was considered overrated and smarmy and pedestrian, and now a point where it is largely — not forgotten, as I'm sure the fourth season will be Apple TV's biggest launch ever — consigned to neither being the center of the brief schizophrenic cultural zeitgeist nor being rubbed out of existence entirely.
I think Ted Lasso's first season is trifling, and a good way to spend some time, and not exactly Great Television. Sudeikis gives a great performance despite a script that cannot decide whether he is competent or caricature; the supporting cast is all winning, and as long as you don't look at the edges of anything for too long you won't be upset. It has, I think, the signature Bill Lawrence (of Scrubs) touch: charming and clever and snackable and mostly empty — but we all deserve a cheat meal every once in a while.
(It is both entertaining and, I think, illustrative, how deeply uninterested the show is with the actual mechanics of football. I love the idea of an EPL team having never done suicides before.)
Who is the target audience for this book? The most charitable answer I can give is "people who are productive individual contributors that have literally never worked in a team setting before"; a less charitable answer (but one that I suspect is more accurate, both towards the authors' aims and the average reader's characteristics) is "people willing to buy Yet Another Management Book because it has the GTD stamp on it."
Certainly, if you have read any sort of literature (and by literature I mean to be broad enough to include, like, blog posts) around management, the vast majority of this book is of no use. Lamont and Allen offer advice of the genre: "meetings should always start on time", "it's important to have clear lines of communication", and "knowing who owns what is important". Nothing is incorrect; everything is anodyne.
(Lamont even has an aside towards the end of the book about how the editor commented on the draft as being "particularly lean" and not needing much editing, trying to spin this as a good thing and that the book was a useful primer to revisit over the years.)
I am being particularly critical of this book because it left a genuinely poor taste in my mouth. I think there is a real need for a practitioner's guide to GTD in a group setting (how do you integrate tooling? how should you revise your understanding of delegation?) and, modulo a too-short appendix, none of this is addressed; indeed, David's main contribution to the book probably accounts for all of ten pages (five of which are simply him saying "I agree with what Ed said".) It's certainly not harmful, but it takes everything I loved about GTD (directness, idiosyncrasy, concreteness) and excises all of it in favor of table-stakes collaboration ideas that most competent people are already familiar with.
Do not buy this book!
It’s tempting to compare Cloud Atlas to its obvious metafictional forebears, If on a winter's night a traveler and The Years of Rice and Salt being chief among them. [1] But almost from the book’s onset my mind kept traveling back to an unlikely comparison: Chrono Cross.
Not just because of the incredibly ambitious, vaguely-multi-dimensional aspects of the plot and structure, nor for the pleasant and somewhat under-utilized grounding in a distinctly island world.
But because Chrono Cross’s much-ballyhoo’d “dynamic speech engine” (that purported to dynamically alter the story and various cutscenes based on which of the thirty possible characters — a huge amount for the PS1 era! — were in your three-person party at any time) amounted to stuff like this:
Pierre:
Ah, oui?
Is this really
a ghost ship?
Nikki:
Could this really be
a ghost ship...?
Korcha:
This ain'tCHA ordinary
ghost ship...
Razzly:
I've never been on a
ghost ship before...
I'm fairy scared...
Mel:
Wow! So this is
a ghost ship!?
Greco:
Is this...
a ghost ship,
amigo...?
Guile:
What is this...?
Glenn:
So this is
the infamous
ghost ship...?
Macha:
Is this whatCHA
call a ghost ship?
Doc:
I sense human activity
aboard this ship...
Luccia:
Is dis a ghost ship?
...Highly illogical.
NeoFio:
Is this really a
ghost ship?
Which is to say: changes in syntax without changes in anything else. (And a feat in its own right, but falling short of what you might hope.)
Cloud Atlas’s central flaw is that, frankly, the individual stories are not that good in terms of plotting and prose. Mitchell does a mostly persuasive job of the mechanics — the period pieces scan as period pieces, but the authorial tone and beats of each piece are so flat and one-dimensional that there is very little to be excited about besides thinking about the threads binding the chapters together. [2]
Compare this to the titles which Cloud Atlas usually is bucketed with: the two aforementioned (two of my favorites, to be sure); Pale Fire; A Visit From The Goon Squad. All of these books are lauded not just for what they do outside the form but what they do within it, and Cloud Atlas reads — for a majority of its time — as pulp. [3]
Conversely, I thought the metafiction worked quite well. I went in expecting a Lost-style puzzle box, with very concrete clues and ties explaining how everything added up to a single gestalt; Mitchell denies you such hedonistic pleasures, and you’re left with a more ethereal understanding of these characters and their relationships — their being united in a struggle against power, their solace in consumption of their predecessor’s literature, and so on. (Though even then, the author giveth and he taketh away. I think it was clever, and correct, to lampshade the implausibility of reincarnation and recurrence to deny the easy answer of “ah, this is the same soul in different bodies”, and yet Mitchell felt bizarrely and amateurishly compelled to make sure every single character references the title of the book, a true That’s Chappie moment.)
Mitchell has, I think, very lovely and sweet things to say about the human experience: he believes in the importance and virtue of emancipation, he believes in the exultatory power of art, he believes in redemption.
Cloud Atlas is summed up by its ending lines:
My life amounts to no more than one drop in a limitless ocean. Yet what is any ocean, but a multitude of drops?
A beautiful metaphor, clumsily delivered.
And, to be clear, both of these are much better pieces of literature than Cloud Atlas. ↩︎
A single exception to this: Robert Frobisher’s section. Frobisher’s character is not just the most interesting and surprising of the entire lot but also by far the most entertaining. ↩︎
You can make, I suppose, the argument that this is intentional — one of the books is literally pulp. ↩︎
Highlights
Glass & peace alike betray proof of fragility under repeated blows.
To fool a judge, feign fascination, but to bamboozle the whole court, feign boredom.
Men invented money. Women invented mutual aid.
To wit: history admits no rules; only outcomes.
Implausible truth can serve one better than plausible fiction, and now was such a time.
How do you view a series of books — or any periodic work, like a long-running TV series? Do you let a single work determine your view of the series, or must you evaluate the whole series as a gestalt with each passing entry?
Spy Line is, at its heart, a dismantling of the four books leading up to it (with Spy Sinker, its sequel, being a near-callous mockery and derision of those books.) Two-thirds of the book are fairly banal and much in the "Samson way" that so comfortably dominated the series (petty, entertaining politics; brutish and clever spycraft; pleasant caricaturing of the British intelligence services), and the other third reveals that the entire plot leading up to these moments has been sleight of hand, and every inch of drama we've been subjected to has largely been a feint and a ruse.
A feint and a ruse. But a waste of time? This is an interesting question, and it largely depends on your faith and trust in Deighton. You can charitably interpret this shift as a very wide-lens postmodern critique of spy literature — Bernie's gullibility and snookerdom is a metaphor for our own, and for the misguided sense that any one well-meaning person can make a legitimate difference in late-20th-century global dynamics. Or you can uncharitably interpret it as something like: Deighton loved writing the first trilogy, and wanted to spend more time with these characters, so he sat down and thought about how to make a second trilogy, but in doing so he burnt most of his forest to the ground.
As much fun as I've had with Deighton's writing, I have to go with the latter interpretation. You can point to little hints and clues in the original few books that this was indeed all part of his master plan, but the characterization makes no sense, and other things along the fringes don't lend Deighton much more credibility — the entire stamp collection escapade, the way in which Samson's red notice is entirely forgotten and everyone pretends it never happened. Indeed, this feels like a writers' room trying to figure out a way to make a compelling fifth season of a show that should have ended after its third: you don't have to dislike the characters to know that the time for their exit has long since passed.
A handful of folks sent me this quip from Nate Silver a few days ago:
Slightly against interest to admit this (I don't want more competition lol) but I think we're still probably a year or two away from Peak Newsletter. It's just a really good distribution mechanism for certain types of writers. It does take some time to build up momentum, though.
One of the things that makes "Peak Newsletter" as a concept both interesting and slightly pernicious is that the growth of newsletters is somewhat nebulous and amorphous compared to other similar content industry booms. [1] For some, P.N. refers to a focus on individual brand over masthead; for others, it refers to emancipating oneself from the distribution + growth channels afforded by traditional media and social media in favor of SMTP; for others, it refers to the very calculated paid acquisition-heavy approaches of buying subscribers and recouping costs on advertising.
With that in mind, here are some scattered thoughts.
- One of the few legitimate things that has changed over the past decade is that it is much easier to solicit paid subscriptions thanks to services like Stripe. This is a technological shift rather than a cultural or consumptive one, and as much as I hate the language of "the creator economy" it does get at a meaningful point: part of the role of traditional media apparatuses is to provide financial tooling, and the marginal value of that role is diminishing-to-none.
- Every large media startup converges on a mission of "change consumer behavior en masse". [2] These missions fail: Netflix did a great job destroying cable, but I don't think they were comparatively successful in the net creation of television consumption; every single podcast startup or podcast-heavy media platform quickly discovered that all the money in the world couldn't get a financially non-trivial segment of listeners to become rabid podcast consumers. Beehiiv's and Substack's missions are both some variant of "we will become a billion-dollar company by not just capturing the existing content landscape but by growing it significantly", and I am similarly skeptical that a meaningful number of people will dramatically change their overall consumption habits.
- One source of jet fuel for the podcast boom of yesteryear was an ocean of capital that went into advertising; the amount of money spent on advertising was justified less on hard ROIC and more on the lack of easy attribution due to the inherent nature of podcasting, and once clear data and metrics emerged a lot of the willingness to throw money at legions of Casper ads was lost (and, as a second-order effect, so was a lot of the willingness to spin up dozens of new podcasts to capture the revenue from said Casper ads). We are already seeing the same effect happen in email advertising: sophisticated vendors are only looking at intent/conversion-level data, and the emergence of co-reg has turned subscriber count and open count into a vanity metric for "platform-based" publishers.
- The trend of every social media platform is towards a capricious algorithm that intentionally penalizes long-term relationships and rewards short-term spotlights. This is antipodal to email subscriptions; creators of any type have more incentive now than ever before to try and capture their audience before TikTok yanks them away.
Are we at Peak Newsletter? Probably not. I think — to steer this into the realm of concrete and falsifiable — that more people will spend more money and time on individual writers three years from now than they do today. I suspect we will see a lot of "pullback", to borrow a financial term, over the coming years, as "content arbitrage"-style businesses suddenly no longer have a viable business model due to the deflation of advertising, but it will continue to get easier and more profitable for great writers to build their own audiences and monetize their work.
In general, I think a pretty easy way to predict much of what will happen in the newsletter landscape is to look at what happened five years earlier with podcasts. ↩︎
Most famously quipped by Reed Hastings, who said Netflix's biggest competitor is sleep. ↩︎
Spy Hook forms the start of a second trilogy of books in the "Bernie Samson" series by Len Deighton; the previous book, which ended the first trilogy, was London Match, which I finished reading a few months ago and wrote:
I think I'll take a (long) break before resuming the nonalogy of these books, but I'm happy to have blitzed through the first three: they were both gripping and rewarding, and Bernard Samson makes for a delight.
I wrote in that book's review that there was a whiff of the workplace sitcom in reading the trilogy; there is a realism and propulsion in these books, in unwinding the narrative and realpolitik, but also a deep comfort in checking in with your ol' pals in MI5 and the Berlin field unit. This sense of — for lack of a better term — amity (reminiscent of Slow Horses as well) is a nice contrast to the work of le Carré, whose thesis is often about the transactional and callous nature of this work. And yet the repetition started to hit me a little bit with this one: how many times can Bernie accuse and then vindicate the same colleague? How many times can we be surprised by one of Werner's romantic flings?
Add to this the sense that — more than any of the preceding three books — Spy Hook is not a self-contained story. It is perhaps an overlong first act, but there is no satisfying conclusion beyond a Dumas-esque "gotta read the next one!" ending. And, of course, read the next one I will — but besides getting to chat again with our old friends at the office, it's hard for me to say that I learned or picked up anything new from this book that I hadn't from the originals.
A really fun and pleasant movie that is never surprising yet extremely fun, start to finish. This is a by-the-numbers bildungsroman — two teens, a cool girl next door, a deified older friend, hijinks — and you know exactly what plot points you are going to get, and they are delivered capably. The script is modern in a way that betrays its theoretical nineties setting (with apologies to the director, Adam Carter Rehmeier, but unless you were really ahead of your time I don't think the Juno-esque bon mots were really that autobiographical!) but is sold so convincingly and brightly by the pair of leads that you don't mind at all.
Does this movie have revelation? I'm not sure about that. Conor Sherry's performance — and his visceral sense of alienation — has a whiff of the sweet, relatable angst that I loved so much about The Secret History and even the first few bits of The Perks of Being a Wallflower. Much of the center does not hold: Mika Abdalla's character is a little too vacuous and the final few beats are just so neatly wrapped that it's hard to really come away from the movie feeling like it's anything better than a more modern Sandlot (which is not that bad of a final analysis, to be clear!)
(More than anything, this movie reminded me of why I loved Everybody Wants Some!! so much: Linklater created a world that was warm and sweet and anodyne and had you exit that world in a way that felt novel.)
Andrew Rea with an interesting and increasingly familiar take about how AI will disrupt software-focused private equity:
Distribution and brand moats can protect your legacy products for a while (esp in enterprise) but eventually you get lapped by competitors with better products, service, pricing, etc. Software is too competitive and changes too fast for this model to work in 2024.
I think most of the takes around AI and software (c.f. Chris Paik’s thesis on the same thing) all center around the same few starting lemmata:
- Over the past twenty years, the fixed costs required to build and distribute software have gone down.
- AI purports to accelerate that trend: maybe significantly, maybe in totality. [1]
- Therefore, software qua software as an asset is going to round down to zero over time, and software companies will differentiate themselves on things outside of core functionality.
From there, they diverge into two main camps:
- This trend is good for incumbents, because incumbents have data, brand, process power, and other strategic assets that don’t get rounded down to zero. (I think this conversation between Des Traynor and Patrick O’Shaughnessy is a particularly good articulation of this thesis.)
- This trend is bad for incumbents, because incumbents rely too heavily on customer inertia and revenue capture and are systemically disinclined to innovate at the rate that a disruptor would. (See Andrew’s essay which opened this post.)
I think these conclusions are less contradictory than they appear. It is getting easier to write and deploy software en masse, which makes it harder for established organizations to stay differentiated on functionality alone; but those organizations can now, at least in theory, use and deploy their other assets for more interesting ends, and a lot of the capital expenditure inherent in significant engineering work suddenly becomes much easier to pencil out.
That being said!
I think it is very easy to look at rate of change and the speed and polish with which startups are building impressive bodies of work and...skip to the epilogue, where they’ve triumphed over the incumbents of the world who are more focused on cash flow extraction than customer value creation. The reality is: the number of industries where people are making retention/churn decisions based purely on functionality alone is smaller than you would think at first glance; the strategies deployed by Thoma Bravo et al (aggressive cross-selling, aggressive contract durations, rolling up to drive down unit economics) are already the right ones, insofar as we define “the right ones” as “the ones that maximize long-term enterprise value.”
Whether you’re an incumbent or a new market entrant, it’s very important to think about strategic long-term moat, points of customer acquisition, tail risks, and useful levers: which was also true in 2020, and 2010, and so on.
Any argument otherwise is science fiction: interesting and thought-provoking but rarely useful. (And remember to taste the kool-aid.)
Epistemic disclaimer: I think “instant, Matter Compiler-style AI-built software products” are so far from the present that they don’t really warrant serious discussion ↩︎
There’s a nascent trend of releasing ostensibly-private material (changelogs, public wikis, handbooks, etc.) to the public as a bit of a marketing push. This is essentially a form of debt, to the extent that you’re taking a lump-sum payment now in exchange for the implicit cost of keeping these things “up to date” indefinitely (and if you don’t, it’s immediately obvious: nothing gives me the ick more than seeing a changelog whose last entry was six months ago or an “internal handbook” that hasn’t changed since it was launched.) There are some teams that do it well (GitLab and Significa come to mind); there are other teams where it fairly vividly reads as “we haven’t figured out a marketing channel yet, maybe this will do the trick.”
Buttondown’s most pernicious form of content debt is the more conventional kind: docs have screenshots that are out of date, blog posts reference features that have been moved or renamed, comparison pages are anchored on old pricing, et cetera. It all boils down to some variation on vestigiality: you publish a thing that doesn’t have a direct line of communication to the source of truth (whether that source of truth is “a YAML file containing pricing plan information” or “the live production-level codebase” or whatever.) A lot of my strategic work the past month, and in the month to come, is focused on widening those lines:
- Replacing screenshots in the docs-site with automatically-generated iframes;
- Replacing hand-written API examples with ones built by httpsnippet and verified in CI;
- Building out a single-source-of-truth demo site with reasonable, semantic fixture data in support of both of the above efforts.
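As a flavor of what "verified in CI" can mean here, a minimal sketch: pull the fenced snippets out of a markdown doc so a test suite can compile or lint them on every build. (This regex-based extractor is an illustrative assumption on my part, not Buttondown's actual tooling; httpsnippet is what generates the examples themselves.)

```python
import re

# Matches ```lang ... ``` fenced blocks in a markdown document.
FENCE = re.compile(r"```(\w+)\n(.*?)```", re.DOTALL)


def extract_snippets(markdown_text):
    """Return (language, code) pairs for every fenced block in the doc."""
    return [(m.group(1), m.group(2)) for m in FENCE.finditer(markdown_text)]


def check_python_snippets(markdown_text):
    """Compile every Python snippet; a stale example raises SyntaxError in CI."""
    for lang, code in extract_snippets(markdown_text):
        if lang == "python":
            compile(code, "<doc-snippet>", "exec")
```

Even this crude version catches the most embarrassing class of drift: examples that no longer parse after a rename.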
This is yeoman’s work; it can be fun to puzzle out efficient solutions, but it’s certainly hard to justify on the grounds of immediate business value alone. But, like investing in paying down technical debt, it earns its keep over the course of years if not months.
Moreover, whenever I’m poking around interesting marketing or content projects the thing that I ask myself first is: “what is the cost of maintaining this accurately and indefinitely?” Which, you know, is a fairly anodyne and 101-level question — but I’m more used to asking it about code than content.
(It also strikes me that this kind of thing is much more fertile ground for better developer tooling. I don’t think I’ve talked to a single founder who feels really good about their process of making sure docs stay up to date.)
Lastly, the organizations I’m most envious of are the ones that sidestep this problem entirely by shipping their application as their marketing site: Rows, Typefully, I’m sure there are others. Do these experiences convert at a rate as high as a dedicated buildout might? I don’t know, but there’s a single-mindedness and clarity that I admire.
Glyph (whose writing and contributions to the Python ecosystem I am deeply grateful for) wrote Against Innovation Tokens yesterday:
In 2015, Dan McKinley laid out a model for software teams selecting technologies. He proposed that each team have a limited supply of “innovation tokens”, and, when selecting a technology, they can choose boring ones for free but “innovative” ones cost a token. This implies that we all know which technologies are innovative, and we assume that they are inherently costly, so we want to restrict their supply. That model has become popular to the point that it is now part of the vernacular. In many discussions, it is accepted as received wisdom, or even common sense. In this post I aim to show you that despite being superficially helpful, this model is wrong, and in fact, may be counterproductive. I believe it is an attractive nuisance in computer programming discourse.
I find his argument unpersuasive, for two reasons:
- His minor quibbles about CBT [1] are enumerated as “it’s incorrect to assume that new technologies have more overhead than old technologies; it’s incorrect to assume that new technologies are harder to learn than old ones; you shouldn’t make technology choices based on how easy it will be to hire people, since you’ll need to train them up regardless.” I think that third point is a nuanced and interesting one, but the whole point of CBT is that there’s no way to truly understand the difficulty of a newer technology without the experience of using it in a deployed environment, and doing so is risky.
- The larger metaphor he uses to illustrate the downside of CBT — deploying Haskell and then wrapping it in Ruby to limit the blast radius — misses the fundamental question one should ask, which is: “do we actually need to use Haskell in the first place?” I think search is a really useful example here because it's something that many engineering teams have encountered — the process of spinning up an Elasticsearch instance (or something similar) and suddenly having to deal with an entirely new class of problems, versus pouring time and effort into improving the existing relational database to make it handle search-shaped workloads.
However, he introduces what I think is a very useful concept: boundary tokens. He writes:
That is to say, rather than evaluating the general sense of weird vibes from your architecture, consider the consistency of that architecture. If you’re using Haskell, use Haskell. You should be all-in on Haskell web frameworks, Haskell ORMs, Haskell OAuth integrations, and so on. To cross the boundary out of Haskell, you need to spend a boundary token, and you shouldn’t have many of those. ... When people complain about programming languages, they’re often complaining about how many different kinds of thing they have to remember in order to use it.
I think this is absolutely correct, and the north star a nascent engineering organization should be pursuing is something along the lines of: how much fixed-cost (onboarding) and marginal-cost (context-switching) time and energy is required to be able to touch every single part of the codebase?
This is hard to quantify, but it's one of those things for which vibe-checking is very effective. At Buttondown, we've got a core app written in Vue and a bunch of smaller auxiliary microservices running on Next; these are separate frameworks, sure, but it's all TypeScript, and plexing between the two is far easier than it would be if you had to hop over to Phoenix or Rails or something else entirely different.
Glyph also talks about the anti-intellectualism inherent in CBT, an argument to which I'm sympathetic. If you were to take CBT to its rhetorical and logical extreme, you'd never use anything new; doing so is a sort of bet against the fundamental promise of technology, which is that things are (jaggedly, but monotonically) only getting better over time.
First off: I think both the accusation and the reality are kind of true. Kubernetes is the go-to punching bag for this kind of thing, but it is important to internalize, deeply internalize, that many new technologies are not going to improve the rate at which the median technology company can create enterprise value. (For more on this, read Use Rails.) If you are a senior member of a technical organization, your job is to keenly and efficiently evaluate various new technologies on the bleeding-edge, to find the rare exceptions where that is not the case.
Second off: I do think it's important to have escape valves in technical organizations so that you can evaluate new technologies in a manner that is less operationally onerous than prod, but more legitimate than a hackathon or side-project. Good candidates include: internal-facing tools, microservices that can be interfaced with over REST, engineering-as-marketing buildouts.
Choose Boring Technology, not, uh, the other one. ↩︎
Here is the entire gist of the book: use envelope-based budgeting (as made famous by, at least in my life, You Need A Budget) for your business. Allocate, say: 45% of your revenue towards owner's compensation, 10% towards profit, 15% towards tax, 30% towards operating expenses. Use these target percentages to back your way into sustainable salaries / cost structures and grow your business within these parameters, rather than shoveling any liquidity back into the business (because that will cause you to lose discipline and line-of-sight on actual business health.)
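As a back-of-the-envelope sketch of the mechanic (using the example percentages above, which are illustrative rather than the book's prescription for every business):

```typescript
// Envelope-based allocation: each dollar of revenue is split across
// fixed-percentage envelopes before it can be spent.
const targets: Record<string, number> = {
  ownersCompensation: 0.45,
  profit: 0.1,
  tax: 0.15,
  operatingExpenses: 0.3,
};

function allocate(revenue: number): Record<string, number> {
  return Object.fromEntries(
    Object.entries(targets).map(([envelope, pct]) => [envelope, revenue * pct]),
  );
}

// allocate(10_000) splits $10k into $4,500 / $1,000 / $1,500 / $3,000.
```

The discipline comes from doing the split first and treating each envelope as a hard ceiling, not from the arithmetic itself.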
I think the book's thesis is good, and even though the true business model of the book is upselling you towards a professional consulting network (not unlike The E-Myth Revisited) it's a quick and easy read — a rare business audiobook where the reader (the author!) enhances rather than dulls the whole thing. I also don't think I'm quite the right target reader, which is not the fault of the book nor its writer.
Of course, there are quibbles:
- The author literally says to throw away the entire institution of GAAP because it's "funny money" and then has to spend the back third of the book re-building GAAP from first principles to handle things like recognized revenue and capital expenditures;
- The business tactics that the author tries to emphasize will emerge from the "Profit First Mentality" are thin-to-threadbare, the best (worst?) example of which being "you absolutely can double your output at half the cost: an example is how I did it in my business, which I cannot elaborate on due to Trade Secrets" [1]
- Mike totally mischaracterizes not just the tragedy of Frankenstein's monster but perhaps the entire message of Frankenstein [2]
All in all, I can't fault this book too much — it was quick, entertaining, and responsible for me making a positive change in my business (I am going to split out my omnibus checking account in Mercury into a bunch of sub-accounts!) I do not think you will get better at business strategy after reading this book, but it is something I would recommend to people who are losing sleep over their business's cash flow.
A deeply real and visceral sort of thriller. There were so many naturalistic flourishes — long, languid shots on our two protagonists in deeply grimy places, bursts of the kind of quiet human humor that rarely brightens a serious script, casual lapses into German and authentically implacable accents — that this felt less like the taut production of Klute or the painterly opus of Ripley and more like something from le Carré or Len Deighton.
Hopper and Ganz are terrific, too, as a pair of leads who convince you both as friends and as enemies.
Inspired by Adam Johnson's test for pending migrations, and of course in conversation with my own love of weird tests, I offer a similar concept: a test for finding stray print statements in your codebase, with a ratchet array for stuff to ignore.
```python
import glob

PATH = "**/*.py"

# The ratchet array: paths we knowingly allow to contain print statements.
irrelevant_paths = (
    "/commands/",
    "node_modules",
)


def test_no_print_statements() -> None:
    all_files = glob.glob(PATH, recursive=True)
    relevant_files = [
        filename
        for filename in all_files
        if not any(
            irrelevant_path in filename
            for irrelevant_path in irrelevant_paths
        )
    ]
    files_with_print = []
    for filename in relevant_files:
        # Use a context manager so file handles are closed promptly.
        with open(filename) as file:
            if "print(" in file.read():
                files_with_print.append(filename)
    assert not files_with_print, f"Print statements found in {files_with_print}"
```
I anticipate a concern being "this is slow! you're opening thousands of files!", to which I reply: this test is faster than any other test you have that touches a database. That being said, I'm sure there are edge cases or ways to improve it, so please let me know!
I watched Gary Bernhardt's talk on static routing a few years back and — I'm not sure if I would call it formative, but it stuck in my craw as a platonic ideal of sorts, as something I couldn't really justify adopting within Buttondown but really wanted.
I built out and open-sourced some feints in this direction — see django-typescript-routes, which provides a TS router generated from a Django backend — but that's not quite the same thing, and time and time again I found myself in the position of pushing bugs that would have been caught if I had a typesafe router in Vue.
Tanner Linsley makes the pithiest possible case for such an abstraction:
Too many people don't realize they're managing the most critical state of their application in a `/string?Record<string, string>#string` type. 🤦♂️
I was thrilled to stumble upon the very poorly named unplugin-vue-router earlier this year and resolved to spend some time hacking with it to see if it was worth the cost. It was, and I'm glad I did it.
How it works
Vue Router is a very simple abstraction: you define a list of routes (where a route is a component plus a matching path and some metadata), and the router handles the rest. Something like this:
```typescript
import { createMemoryHistory, createRouter } from "vue-router";

import HomeView from "./HomeView.vue";
import UserListView from "./UserListView.vue";
import UserDetailView from "./UserDetailView.vue";

const routes = [
  { path: "/", component: HomeView },
  { path: "/users", component: UserListView },
  { path: "/users/:id", component: UserDetailView },
];

const router = createRouter({
  history: createMemoryHistory(),
  routes,
});
```
Nothing particularly magical or fancy. The Faustian bargain you sign with UVR, though, is that in order to get typesafe routing you must also adopt file-based routing:
- `./HomeView.vue` becomes `./index.vue`;
- `./UserListView.vue` becomes `./users.vue`;
- `./UserDetailView.vue` becomes `./users.[id].vue`.
And then the above mapping file gets magicked away:
```typescript
import { createMemoryHistory, createRouter } from "vue-router/auto";

const router = createRouter({
  history: createMemoryHistory(),
});
```
I actually don't mind the file-based routing, but it made adoption much more painful — it was very difficult to do a piecemeal migration, and it basically ended up as an omnibus PR touching every single view in the application. (Though that PR was made much safer by the fact that now all the routes had type information!)
You might also notice that the third file was not `./users/[id].vue` but `./users.[id].vue`. UVR handles nested routing for things like modals differently than I was used to in Next; you nest routes by plopping them in directories, in a way that is logically coherent but still takes a bit of getting used to.
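To illustrate what the generated types actually buy you, here's a toy sketch. `RouteNamedMap` and `push` below are simplified, hand-written stand-ins for what UVR generates and what vue-router's typed navigation enforces — not the real implementation.

```typescript
// A hypothetical, hand-written version of the route map that
// unplugin-vue-router derives from your file tree.
type RouteNamedMap = {
  "/": Record<string, never>;
  "/users": Record<string, never>;
  "/users/[id]": { id: string };
};

// Simplified stand-in for typed navigation: the params argument's type
// is derived from the route name, so a missing or misspelled param is
// a compile-time error rather than a runtime bug.
function push<Name extends keyof RouteNamedMap>(
  name: Name,
  params: RouteNamedMap[Name],
): string {
  let path: string = name;
  for (const [key, value] of Object.entries(params as Record<string, string>)) {
    path = path.replace(`[${key}]`, value);
  }
  return path;
}

push("/users/[id]", { id: "42" }); // returns "/users/42"
// push("/users/[id]", {});          // compile error: `id` is missing
// push("/user/[id]", { id: "42" }); // compile error: no such route
```

The point is that the route names and their parameter shapes come from one generated source, so a bad link can't survive a type check.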
Three months later
By the time I was truly waist-deep in the UVR migration, it felt like it was:
- Too late to turn back;
- Perhaps not worth all the effort just for some type safety.
Three months later, though, I am quite glad I did it. It was a pretty big up-front cost, but it has saved me many times over from pushing bad code, and the doubts I had about the approach being 'janky' and messing with VSCode have not been borne out.
If you're using Vue, highly recommend. (Now all I need to do is get a similar abstraction for query parameters!)