I was excited to finally remove django-extensions from my pyproject.toml file when 5.2 dropped because they added support for automatic model import. However, I found myself missing one other little escape hatch that django-extensions exposed, which was the ability to import other arbitrary modules into the namespace. Django explains how to bring in modules without a namespace, but I wanted to be able to pre-populate my shell, since most of my modules follow a similar structure (exposing a single call function).

It took the bare minimum of sleuthing to figure out how to hack this in for myself, and now here I am to share that sleuthing with you. Behold, a code snippet that is hopefully self-explanatory:

# Lives at e.g. yourapp/management/commands/shell.py, so it overrides the built-in command.
from django.core.management.commands import shell


class Command(shell.Command):
    def get_namespace(self, **options):
        from emails.models.newsletter import actions as newsletter_actions
        from emails.models.subscriber_import import actions as subscriber_import_actions

        namespace = super().get_namespace(**options)
        namespace["newsletter_actions"] = newsletter_actions
        namespace["subscriber_import_actions"] = subscriber_import_actions

        return namespace


Part of running a business like Buttondown is spending more time than you'd like installing esoteric email clients and trying to debug odd rendering behavior. Superhuman is one such client, but they do us the favor of being an Electron app with a concomitant Chrome extension. So rather than having to guess at what the black box is telling you, you can just pop open Chrome Inspector and see exactly what's happening in the DOM.

Specifically, we had someone write in and say that the spacing on their emails was a little wonky for reasons passing understanding. I installed and poked around Superhuman and discovered the root cause: Superhuman wraps the emails it renders in a shadow DOM node, a perfectly reasonable thing to do. However, we declare some CSS variables in our emails, and those variables were what was getting stripped out.

Take the following example:

:root {
  --my-color: red;
}

p {
  color: var(--my-color);
}

Simple enough, right? Now imagine that embedded in a page with a shadow DOM:

<template>
  <style>
    :root {
      --my-color: red;
    }
  </style>
  <p>This text should be red.</p>
</template>

And yet:

The solution is simple, and probably the reason you're here if you googled "Superhuman CSS rendering": instead of declaring the variable only on :root, you also declare it on :host, which scopes it to shadow roots as well:

:root,
:host {
  --my-color: red;
}

p {
  color: var(--my-color);
}


The last of summer's grip finally loosened its hold this September, and Richmond began its annual transformation into something gentler and more contemplative. This morning's walk with Telly required a dusting-off of the closet-buried Patagonia puffer jacket; it's perfect for walks with Lucy, who has graduated into the Big Kid stroller, making it easier than ever for her to point at every dog ("dah!"), every bird (also "dah!"), every passing leaf that dared to flutter in her line of sight.

As you will read below, the big corporate milestone for me this month was sponsoring DjangoCon and having our first offsite over the course of a single week. Sadly, our Seattle trip was once again canceled: Haley and Lucy both got a little sick, and we had to call it off. It's weird to think this will be the first year since 2011 that we have not set foot in the Pacific Northwest.

More than anything though, I learned this month for the first time how impossibly difficult it is to be away from your daughter for six days. It is something I hope I don't have to go through again for a very long time.

Post | Date | Genre
What follows GitHub? | September 2 | Technology
From Up On Poppy Hill | September 3 | Film
Goodwill | September 5 | Business
Onboarding survey, one week in | September 6 | Business
Django forever | September 7 | Technology
63 postcards | September 11 | Personal
Pulumi | September 17 | Technology
Weeknotes 2.0 | September 21 | Personal
SABLE, fABLE | September 22 | Music
Hidden coupons | September 29 | Business


Much of our work at Buttondown revolves around resolving amorphous bits of state and cleaning it up to our ends, particularly state from exogenous sources. This manifests itself in a lot of ways: SMTP error codes, importing archives, et cetera. But one particularly pernicious source is Stripe. An author can come to Buttondown having already set up a Stripe account, whether for some ad hoc use case or because they were using a separate paid-subscriptions platform such as Substack or Ghost that also interfaces with Stripe. And one of the first things we do is slurp up all that data so we understand exactly what their prior history is, how many paid subscribers they have, et cetera. As you might imagine, this is very, very effective, because the biggest perceived barrier for users is friction: how difficult it is for them to move from one place to another. Every time we can make it incrementally easier for them, it's worth our while. However, as you can also imagine, we deal with a lot of edge cases and idiosyncratic bits of behavior from Stripe. (And if anyone from Stripe is reading this essay, please don't interpret it as that large of a complaint, because Connect is a pretty impressive bit of engineering, janky as it is.)

One thing we have to do is pull in all coupon and discount data, for a variety of reasons that are all uninteresting. The point of this essay is to talk about a divergence: where the abstraction breaks down.

You might think, as we once did, that the way to do this is pretty simple. You compile a list of all the available coupons, and then you iterate through every single subscription looking for said coupons. This is also the approach outlined in the docs and surfaced in the dashboard, so your naivete is excusable.

However, this neglects an entirely different genre of discount: ad hoc discounts that are created and applied during the checkout session process (as well as, probably, a couple of other places of which I'm unaware). To find these, you must iterate through the subscriptions themselves:

import stripe

# Coupons visible via the core endpoint.
tangible_coupon_ids = {c.id for c in stripe.Coupon.list().auto_paging_iter()}

# Coupons that only surface on individual subscriptions. (Note the
# auto_paging_iter here, too; a bare .list() only returns the first page.)
intangible_coupon_ids = []
for subscription in stripe.Subscription.list().auto_paging_iter():
    if not subscription.discount:
        continue
    if subscription.discount.coupon.id not in tangible_coupon_ids:
        intangible_coupon_ids.append(subscription.discount.coupon.id)

I'm sure there are a lot of interesting and nuanced reasons why these intangible coupons are not actually available through the core endpoint — I also don't care! It is a bad abstraction that I can get two different answers for "what are the coupons for this account?"; it is particularly bad because the "real" answer is by looking in the non-obvious place.

At the same time, I am sympathetic. "I should not have to create a dedicated Coupon object just to apply a single discount to a single subscription" is a very reasonable papercut that I understand Stripe's desire to solve; in so doing, they created a different (and perhaps more esoteric) problem. This is why API design is a fun and interesting problem.


I have been going through a certain kind of reckoning with my previous self and the things that I loved back when I was an undergrad with fewer things on my mind and more time to listen to music — and even more importantly, more time to spend being the person who has listened to music. I blame the new Blind Pilot album — the idea that my favorite band could record an album that I didn't even find disagreeable, but just supremely forgettable, has me in a constant state of mild panic that the couture indie band that I had made my cause célèbre for some random spring semester turned out, in fact, not to be an important part of my personal tapestry, but yet another 7.4 on Pitchfork to be lost like so many tears in the rain.

And I say all this as preface for the zag, which is that Bon Iver was and is not one of those artists. I did not like a lot of his post-self-titled output because I felt then, as I feel now, that trying to achieve some sort of post-modernism via funny Unicode song titles felt like a cheap and disingenuous way out of whatever he was trying to negotiate with himself. But those first two albums are absolutely above reproach! For Emma, Forever Ago, far from being dated, feels more timeless now than it did when I first listened to it. And it has the double pleasure of taking me not just to the same cabin Justin Vernon sat in all those winters ago, but to the cabins, metaphorical and otherwise, that I was in as I listened to it as that bright-eyed and bushy-tailed undergrad, hoping more than anything that I would be able to get a PBR without getting caught.

(You could probably spin all of that into a single narrative, which is Justin Vernon recognizing himself as being the For Emma guy and spending a decade trying to escape it, as if escaping the pigeonhole he created for himself would also help him escape the conditions that brought him his fame: an angel's voice and a shattered heart.)

It is perhaps overthinking it to call this album an extended reckoning, but it's also — despite the annoyingly ironic capitalization in the title — a very earnest record, bordering, if not infringing, on 80s dad rock at times, in a way that doesn't feel like cosplay but feels like the turn Vampire Weekend made with Father of the Bride — a new kind of music that represents neither stagnation nor escape, but a logical progression. And so many of its songs — If Only I Could Wait, There's a Rhythm — are great in the same way Holocene was great: an instant recognition that it would stick with you for a decade, and maybe longer still.


Once upon a time, I wrote weeknotes for Buttondown. I’ve started them up again—the first edition is linked below. I’ll spare you the navel-gazing about whether they belong there or on the blog (I cover that in the other post). In short: this won’t really affect the blog. Most of what will go into weeknotes are things I’ve been too much of a coward to blog about until now. So, consider this just more content for your enjoyment.

Two quick programming notes:

  1. The subscriber lists for the blog and weeknotes will remain separate. I’ll make sure to occasionally nudge folks from one list to the other, and might even use this as a chance to dogfood some cross-promotional features—like, how do you send a CTA only to people not already subscribed to a given list? Still, it’s important (for my brain and heart) that these stay truly independent publications, as grandiose as that sounds. I want to keep firing off takes about ’70s cinema without worrying about The Brand, and vice versa.

  2. As for this site: I’d like to cross-publish weeknotes here, but right now that’s basically impossible. So, I’ll probably use this as an excuse to rejigger the design into something a bit more microbloggy. I’m especially inspired by Gina's newly launched “note to self,” which nails the vibe I’m after: a mix of short, link-blogging-style posts and longer standalone pieces.

Read the weeknotes

I'm spending a lot more time lately using Pulumi. This is for a handful of reasons. The two biggest ones are as follows.

  1. First, we're ramping up our investment in quote-unquote infrastructure. We're sending a lot of email from our own machines and want to be able to scale that up in a way that is more observable, predictable, and legible.

  2. Second, external infrastructure is often a dark forest. It's very, very easy to change state (swap over a dyno in Heroku, futz with some settings in S3) and never propagate those changes back to internal docs. This creates a vicious cycle in which someone looks at the docs, notices a difference between the docs and reality, and then decides that the docs aren't worth the time to read, let alone to write.

I approach tools like Pulumi as a bit of an outsider. I’ve never really had to deal with this class of DevOps tooling before—either I was at a small company where orchestration wasn’t worth the trouble, or at a big company where the whole problem was abstracted away. So, if you’re reading this with more experience, you might see some of what follows as naive, or even a little clueless. (Tell me what I'm missing!)

That said: Pulumi is genuinely cool, but I still find myself baffled every time I use it. I get the high-level idea: you declare your infrastructure as code, store the state somewhere, diff the code and the state (and the state and reality), and then apply changes. But the process just feels unnecessarily painful, which makes me wonder if I’m missing something obvious.

Why isn’t there a one-click way to declare all my state in Cloudflare? Why do I have to write a bunch of ad hoc scripts just to slurp up zone records? Why does every provider have its own slightly weird authentication scheme? There are a lot of stubbed toes along the way.

And yet, once you get past all that, it’s still extremely cool. I love being able to write five lines of Python and add MX records to 20 servers at once. I love that those same lines of Python can double as living documentation. Pulumi is a genuinely useful abstraction—one that sometimes feels like it’s succeeding in spite of itself.
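The "five lines of Python" bit is barely an exaggeration. Here's a rough sketch of what adding an MX record across a batch of zones looks like, assuming the pulumi_cloudflare provider; the zone IDs and mail host are placeholders, and argument names drift a bit between provider versions (newer ones use content where older ones use value):

```python
import pulumi_cloudflare as cloudflare

# Placeholder zone IDs; in a real program these would come from
# Pulumi config or a zone lookup rather than being hardcoded.
ZONE_IDS = ["zone-id-1", "zone-id-2", "zone-id-3"]

for zone_id in ZONE_IDS:
    cloudflare.Record(
        f"mx-{zone_id}",        # resource name, unique per zone
        zone_id=zone_id,
        name="@",               # apex of the zone
        type="MX",
        value="mx.example.com", # placeholder mail host
        priority=10,
        ttl=3600,
    )
```

Run through pulumi up, the diff against stored state is what actually gets applied; the loop itself doubles as documentation of which zones receive mail.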

A lot of the best advances in developer tooling over the last decade have been about taking practices that were once exclusive to resource-rich, cutting-edge companies and making them accessible to everyone else. There’s a similar opportunity here, especially now that LLMs make it so much easier to reason about your system architecture when it’s all laid out in a tidy YAML file.

Honestly, I think a hyper-opinionated, polished successor to Pulumi—one that nails the first-run experience—would absolutely crush. And if you know of something like that already, please let me know.


I wrote two days ago about how our pytest suite was slow, and how we could speed it up by blessing a suite-wide fixture scoped to session. This was true! But, like a one-year-old with a hammer, I was so gratified by the act of swinging that I found myself trying to pinpoint another performance issue: why does it take so long to run a single smoke test?

There are a lot of known problems: for instance, import stripe takes 400ms. But there was an eight-second lag between starting pytest and the first test running, and I wanted to know why.

All I needed to do was run pyinstrument -m pytest -k test_smoke and I got three useful results that I am cataloging in hopes they might find you:

  1. boto3 hits AWS upon client instantiation, and we had a dependencies file that imported and created the client. Easy fix: just lazily instantiate it.
  2. We were globbing actions/*.py to run a test on every single one of our "actions" (see Use weird tests to capture tacit knowledge). Turns out: this took 2 seconds. Another fix, though less easy: move the globbing output to an autogen file and read from that instead. (Causes a bit of a Rube Goldberg effect, but it's worth it.)
  3. Two more heavy imports (each at 500ms): one for parsing msg files, the other for parsing iCal. Unlike Stripe, the surface area here is fairly well-contained, so it was easy to move things around to lazily import them.
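The boto3 fix in point 1 is just the standard lazy-singleton pattern. A minimal sketch, with a stand-in for the expensive constructor (in the real code it's boto3.client, which does credential and network work at instantiation time):

```python
from functools import lru_cache


def make_expensive_client():
    # Stand-in for something like boto3.client("s3"). In real code the
    # heavy import would also live inside this factory, so it too is
    # deferred until the client is actually needed.
    return object()


@lru_cache(maxsize=1)
def get_client():
    # Built on the first call only; every later call returns the cached
    # instance, so module import costs nothing.
    return make_expensive_client()
```

Callers swap a module-level CLIENT for get_client(), and the startup cost is paid only by code paths that actually touch the client.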



The speed of Buttondown's pytest suite (which I've written about here, here, and here) is a bit of a scissor for my friends and colleagues: depending on who you ask, it is (at around three minutes when parallelized on Blacksmith) either quite fast given its robustness or unfathomably slow.

We've done all of the obvious and reasonable things to speed it up (which I'm defining as "all the stuff in Adam Johnson's excellent book on the subject"): all the Postgres and Django knobs have been turned, all the HTTP requests are mocked, and all the fixtures are colocated. With all of that low-hanging fruit picked, we're left with the problem of finding a very high ladder and then making sure we place it under the right copse of trees.

First off: shout out to pyinstrument. Almost every Python profiler feels dated in some way: annoying to use, difficult to interpret, or both. Pyinstrument is the exception: it's easy to use, easy to interpret, and it's fast. I can drop the following snippet into my suite:

import pathlib

import pytest

try:
    from pyinstrument import Profiler
except ImportError:
    Profiler = None


@pytest.fixture(autouse=True, scope="session")
def auto_profile(request):
    """
    Automatically profile the test session and save an HTML report to
    .profiles. Requires pyinstrument to be installed; no-ops otherwise.
    """
    if Profiler is None:
        yield
        return

    TESTS_ROOT = pathlib.Path(__file__).parent
    PROFILE_ROOT = TESTS_ROOT / ".profiles"
    profiler = Profiler()
    profiler.start()

    yield

    profiler.stop()
    PROFILE_ROOT.mkdir(exist_ok=True)
    results_file = PROFILE_ROOT / "output.html"
    profiler.write_html(str(results_file))

And then, when I run pytest, I get a lovely little HTML file output. Very ergonomic!

Anyway. The problem goes like this:

  1. Buttondown uses pytest-django. This library, amongst other things, manages the Django test client, handles migrations, and generally makes it easier to run tests that are "Django-aware".
  2. pytest-django is (correctly) opinionated against invoking the database in tests, because hitting the database is slow. It forces you to manually enable database access via a db fixture, and what's more, you can't do so at any scope other than function.
  3. As such, all of our database fixtures are scoped to function.
  4. Buttondown's core object is the Newsletter. Many tests require the presence of a Newsletter object in the database (for permissions checking, auditing, etc.).
  5. The Newsletter object is expensive to create: lots of associated objects, lots of lookups. Right now, on my M4, it takes around 100ms to create a Newsletter object in a test.
  6. Expensive object plus no fixture reuse equals slow tests.

There are "flip a switch" answers and there are "chisel away at granite" answers. I am tempted to just mock out a lot of the newsletter creation process; this feels like it would be a thing I regret doing. The real thing to do is to start blessing a "suite-wide" fixture that's scoped to session and can be used for cases where we need a Newsletter object handy but don't actually mutate it, and that's in fact what I've started doing:


@pytest.fixture(autouse=True, scope="session")
def global_fixtures(django_db_setup, django_db_blocker):
    with django_db_blocker.unblock():
        for shard in Newsletter.Shard.values:
            PostmarkServer.objects.get_or_create(
                name=shard,
                postmark_id=f"server_{shard}",
                api_key=f"key_{shard}",
                color=f"color_{shard}",
            )
        user, _ = User.objects.get_or_create(username="test")
        account, _ = Account.objects.get_or_create(username="test", user=user)
        newsletter, _ = Newsletter.objects.get_or_create(
            id=GLOBALLY_AVAILABLE_NEWSLETTER_ID,
            username="test",
            name="Test",
            owning_account=account,
        )
        Permission.objects.get_or_create(newsletter=newsletter, account=account)

Which means instead of having to invoke a newsletter fixture in every test:

def test_subscriber_email_validation(newsletter):
    response = post(
        payload={
            "email_address": "telemachus@buttondown.email",
            "newsletter_id": newsletter.id,
        },
        client=client,
    )
    assert response.status_code == 200, response.json()

I can just do this:

def test_valid_email_with_no_subscriber(client) -> None:
    response = post(
        payload={
            "email_address": "telemachus@buttondown.email",
            "newsletter_id": GLOBALLY_AVAILABLE_NEWSLETTER_ID,
        },
        client=client,
    )
    assert response.status_code == 200, response.json()

This is not, to be clear, a brilliant insight. Everyone knows that having a global fixture set is generally a good idea; but, for all of the nice investments in testing that we've made, a literal global fixture set has not been one of them.


One may also ask if it's worth optimizing this at all. A single test suite that I applied this approach to dropped from four seconds to a little under two seconds: a dramatic change, but is it worth the labor? I don't think it's a clear-cut answer, but I tend to say yes. Two reasons why:

  1. Test suites more than any other part of a codebase tend to accelerate in whatever direction they're headed. Very excellent suites stay very excellent; suites that are a little janky tend to become more janky. (This is particularly true in our LLM world: LLMs are pretty good at writing tests, but they're even better at pattern matching.)
  2. Performance in the test suite is a useful proxy for performance in the codebase as a whole. Many of these issues, like the one we're discussing here, are just about pytest itself and not Buttondown, but many aren't. And, given that a test suite's value is directly proportional to its resemblance to "the real world", being able to clear out artificial noise from the suite's performance means being able to better-surface actual performance issues.

(BTW: if there's something obvious I'm missing here, please let me know!)


We have wrapped up the formal portion of DjangoCon. It is not the first conference that Buttondown has sponsored, but it is the first at which we've actually manned a booth — and we did so in a fashion that I would describe as idiosyncratic, ramshackle, and informed by a charming bootstrapper ethos — which is to say, deeply on brand. We didn't really know what boothing was like, so we showed up with a stack of postcards, because everyone loves a good postcard.

We quickly discovered that every single other booth at DjangoCon had a big banner and some one-pagers, so we printed out a big banner that says EMAILS! and a one-pager about what Buttondown is. (And then, because that felt a little too corporate, a second one-pager with a list of our favorite Django packages that we use. As you might expect, that one was the more popular of the two handouts.)

So, about the postcard. We agonized for a while about what a good little piece of swag could be. We wanted something that was novel, on-brand, interesting, and cheap enough that we could ideally replicate it for future conferences. Steph had the bright idea of a postcard illustrated by a local Chicago artist that folks could then send to friends, loved ones, rivals, et cetera. This was a perfect idea, and we printed them, and hoped people would love them too.

If you had asked me to guess how many completed postcards we would get back, I would have said somewhere in the neighborhood of 30 to 40: DjangoCon had around 250 attendees, so percentage-wise that would be a solid but not absurd result. The idea of being able to send a couple dozen nice postcards that people will hold on to way longer than they hold on to yet another Field Notes notebook or yet another stress ball seemed like a great goal.

But as you might have guessed from the H1 of this blog post, we surpassed that. Not counting any stragglers, we're at a count of 63 postcards. The brand, as I have grown increasingly fond of saying, is stronger than ever, and 63 fridges or tackboards will be emblazoned with the word Buttondown for days and weeks to come.


Dave asks how we decide what to write about. This is something I'm being asked more and more: I don't think I risk braggadocio when I say that Buttondown's writing outkicks its coverage (by many metrics, including the important ones, but the one I like to point to right now is that we've hit the top of Hacker News three times in the past five months). There are two answers here, the literal and the abstract.

The literal answer is that, every month, Matt and Ryan send over a list of four or five ideas for posts that we (meaning anyone in the #marketing channel) whittle down to two, and then they write them and I edit and publish them. This is supplemented by one-off things like announcements that don't make sense on the changelog or thunderbolts of inspiration.

The abstract answer is the more interesting one: at what point did we decide it was a good use of our time and money to write stories about since-deceased competitor protocols to RSS? This is not exactly part of the content marketing meta right now (which is not to say that we don't have some more conventional ideas on the blog, too).

Here, as nakedly and simply as I can manage, is the answer:

  1. Our goal is not SEO. This is for two reasons:
    • other companies can publish a lot of content (and they do), and they will always beat us on volume
    • our marketing site will always be a tiny fraction of search-engine-driven inbound relative to our web archives
  2. Therefore our writing is for brand-building, not for lead capture. (Notice that there is no CTA at the bottom of our blog for anything but signing up to Buttondown.)
  3. Our brand, much like our product, is idiosyncratic, esoteric, nerdy, and high-quality.
  4. Our writing should be idiosyncratic, esoteric, nerdy, and high-quality.

Here, normally, lies the denouement of the essay, in which I gracefully loop back around to the postcards and, in my trademark rococo style, remind you that the postcards followed the same strategy. (I'll leave that as an exercise for the reader.) Instead, I want to pre-emptively answer the question: "what if it's all just a giant waste of time and money, and none of these things move the bottom line?"

That is certainly possible! There are fatalistic afternoons that I look at our acquisition statistics and suspect that every single thing we do pales in comparison to the natural flywheel of the product itself, in which users migrate to Buttondown and bring their subscribers, who then get emails from Buttondown and learn about it, who then migrate their own newsletters to Buttondown, and so on.

But, in such a world where Buttondown's growth trajectory remains unbent by our efforts, which would you rather be left with: hundreds upon hundreds of Taboola-tier posts meant to be consumed primarily by indexers rather than humans, or some really, really nice postcards, and 63 people whose day was brightened (however slightly) by what we've done?


Tomorrow, I am taking a very early morning flight to Chicago to attend DjangoCon US. Buttondown is sponsoring, less as an exercise in lead generation and more as an act of circuitous open source sponsorship; perhaps "sponsorship" is not quite the right word, and "gratuity" in the most literal sense is closer. I have been writing Django for fourteen years now: I started during 1.3, a hinterlands time that I remember mostly in two respects:

  1. Django did not support migrations yet (RIP south);
  2. Beyond that, Django was substantially the same framework it is today, a point often levelled as a criticism that I offer as praise.

Professionally, I see myself mostly surrounded by frameworks largely dominated by superfans: Rails, Laravel, and Next all make it their business (for the latter two, quite literally so) to turn framework adoption into a quasi-religious affair. Each one of these frameworks has something I admire and, in the truest form of admiration, occasionally try to steal wholesale in some form:

  1. Rails has an aggressive, opinionated, and unapologetic stance on elevating an opinionated way to do things "correctly" and making it very easy to do, at the exclusion of all other approaches;
  2. Laravel has turned the in-sourcing of community frameworks that reach sufficient levels of traction and maturity into an art form;
  3. Next looks at every single piece of an application and asks "is there a better way to do this?" and then implements it, consequences be damned.

It is tempting to find elements of Django staid in comparison. contrib is littered with packages that should have been excised long ago (flatpages comes to mind), and it has taken longer than I would find ideal to in-source things that feel like vital parts of the ecosystem: an async runner (though this is on its way!); email adapters; OAuth; a developer toolbar.

And yet: these things pale in comparison to what I think Django gets right, and why I turn to it time and time again (besides, of course, the touch-feel familiarity of having used it for a decade): it nails doing the hard stuff both at an API level (user modeling and everything therein; ORM; routing and middlewares) and an existential level (friendliness and commitment to politesse; extremely good documentation).

It is not an exaggeration in either direction to say: when I look up a StackOverflow answer about Next from one year ago it is usually outdated, and when I look up a StackOverflow answer about Django from six years ago it is almost always still accurate. Stability can be boring; but Buttondown's codebase is at the age (seven years!) where Ship of Theseus metaphors start becoming apt, and while many many parts of the codebase have changed, everything in Django-land core is as rock-solid as ever: ergonomic, performant, and humming away.

I am grateful for everyone who has contributed to Django over the years: the core team, the community, the countless users who have made it what it is today. Using it is a blessing that I do not take lightly.


One extremely compelling form of blogging, both for the reader and the writer, is the admission of defeat. Seriously: whenever you find yourself faced with an empty iA Writer window and a dearth of ideas, ask yourself "what have I been wrong about this week?" and let the digital ink flow.

With that as an ominous intro: we have officially concluded the first week of Buttondown's new onboarding flow. (Of course I am not going to share a screenshot. Try it out yourself on the demo site.)

It's a simple survey meant to scissor our customer base twice, into what seemed to me like the most pertinent and binary-search-y categories:

  1. First, divide them into "people migrating from another platform" and "people who don't have any data";
  2. Second, divide them into archetypes ("casuals", "creators", "technologists", "entrepreneurs").

The goal of a survey like this varies from organization to organization. Our goals are twofold:

  1. Better tailor documentation / lifecycle emails / empty states / etc. based on the information provided.
  2. (And this is kind of the embarrassing one) to get to know, unironically, the customer base better. Buttondown's grown a lot; I no longer have the touch-feel sense of who every customer is, and I am routinely surprised by who is using us and to what end. We have some existential questions to answer about where we want to start building up our infrastructure, and which parts of our increasingly diverse customer base are going to be (relatively speaking) marginal to our plans.

So, the admission of defeat. Or, admissions: the survey has been running for a week, which is not long enough for statistical significance but long enough to garner interesting results. I have been shocked by not one but three things in this little survey's short life:

  1. 70% of new users fill out at least one question. This is way higher than I expected (as someone who is routinely guilty of mashing the "skip" button on every post-registration survey I come across).
  2. Our user base is, unfortunately, precisely divided across those four archetypes. Each answer has at least a 20% share of the pie chart, which is good news insofar as we have a good understanding of our four core segments and bad news insofar as those four segments are extremely broad.
  3. Here was the legitimate shocker to me, and the one that means I have to go back and rethink our onboarding and positioning: eighty percent of new users have never had a newsletter before. In my head it was around fifty-fifty.


Morrowind is made much easier when everyone you talk to has an 80+ Disposition!

One of the amazing things about Morrowind is that it's a combat-rich game in which you can feasibly get through much of it without ever fighting; indeed, gaming up your Disposition with every NPC you meet is the key to a much easier and more interesting experience.


A major vendor that I rely on has had three major incidents this week. If you were to expand the aperture slightly, they've had five in the past 30 days. This is a vendor that I've long liked a lot and advocated for to anyone who asks me about it, for a number of reasons but most of all their customer service, which has been historically excellent.

...Until recently, that is. It's hard to pinpoint the exact time when support crossed the threshold from "good" to "bad," but the memory that immediately comes to mind is being six emails into a thread with someone asking for an RCA on an incident and then getting hijacked by the AI chatbot they were rolling out, which promptly and cheerfully directed me to a bunch of unrelated documentation.


In Naughtiness, I wrote:

Customer goodwill is a real asset; it is one that will probably become more valuable over the next decade, as other software-shaped assets start to become devalued. It feels almost anodyne to say "it is in a company's best interest to do right by their customers", but our low churn and high unpaid growth in a space uniquely defined by lack of vendor lock-in is perhaps a sign that being nice is an undervalued strategy. And "being nice" in a meaningful sense is, like "being naughty", something that gets baked into an organization's culture very early and very deeply.

When thinking about this vendor — and the process, which now feels inevitable, of having to shop around for a replacement — I am reminded of this. Little grace notes and positive interactions build up a war chest of goodwill that can be drawn upon to offset bad days. Goodwill is finite and must be constantly replenished; it will save you from the blowback of one or two things but not, say, five.

People talk about AI-enhanced support largely through the lens of efficiency: support is a cost center, here is a way to reduce those costs. Setting aside the obvious flaws (I have thoughts for another essay on that!), I reject the underlying frame because customer support is a profit center. Trying to outsource the function (whether to an AI or whomever) is not dissimilar to trying to outsource pricing or positioning: if you're comfortable giving up one of the key ways you interact with users and deliver differentiated value, you better have a really good reason.


The summer heat in Richmond clung to everything this August like a second skin, broken only by afternoon thunderstorms that sent Telemachus scurrying to his fortress of solitude (the upstairs bathroom). And then Antibes — Lucy's first European stamps in her passport, her delight at the Mediterranean blue (all ten minutes of it before we scurried to shade), her confusion at why Papa kept trying to order things in a language that clearly wasn't working. The wedding was beautiful; the pissaladière was everything I hoped it would be; there was no pastis, but enough Mojitos (apparently a thing!) and spritzes to compensate.

Post-France, the house had an air I would artfully describe as "pleasantly chaotic". Lucy aged a month in a week, as she often does — the ability to climb stairs and (as of this very morning) flush toilets means that there is no chance to be bored. Still, rituals continue, the smaller the better: bath time splashes, the little hiss of the Moccamaster at dawn, the weight of a snoring corgi across my feet as I write.

September looms. For what feels like the first time all year, I don't end my morning walk flushed and sweaty; the infinite heat is breaking, even slightly. It is almost time to harvest the last of the herbs and the first of the watermelons; we're doing a couple quick trips (Chicago, Seattle) and then, somehow, impossibly, Lucy will turn one year old, marking the end of the fastest and happiest and most mystical year of my life. She will be smiling all day, just as she does now; it is impossible not to smile in kind.

I hope you are well; I hope you are looking forward, as I am, to the changing of the seasons.

Post | Date | Genre
STAR LINE | August 18 | Music
Three apps that will not change your life | August 19 | Technology
shovel.sh | August 20 | Technology
The Bourne Legacy | August 21 | Film
Liberal Arts | August 21 | Film
Ashland, 2025 | August 24 | Personal
Broadcast News | August 25 | Film
The Firm | August 28 | Film


I pray for safe voyages.

Going into From Up on Poppy Hill, my main context was understanding it as nobody's favorite Studio Ghibli film — not to say that it's bad so much as that it is unremarkable in a literal sense.

And, in fairness, there's a certain flatness to its texture: it contains neither the weighty undercurrent sported by Princess Mononoke, Spirited Away, or The Wind Rises, nor the childlike fantasy of Kiki's Delivery Service or My Neighbor Totoro. The reason for that is fairly simple: this movie is more of a vibe than a plot, and I truly do not mean that as a critique.

(The one thing that you could say "happens" in this film is that two very charming people meet and discover that they are not related by blood. But I can't grade Studio Ghibli films by the same rubric that I might grade other things.)

I think what Miyazaki accomplishes and tries to accomplish is a different thing than most other directors, and I think the fun and joy of watching his work is getting to let it all wash over you.

There is something so deeply warm and pure-hearted about the naivete and chaos of this tiny little world we spend ninety minutes in. Modernity, enemy as it is in so much of his work, is minimized even further as a villain here, pushed to the very margins; we spend our time instead on the trials of the Latin Quarter and the sea-side boarding house, and perhaps by extension on how far away today we are from such idylls.


Movies do so many things; great movies even more so. I remember the first time I watched My Neighbor Totoro — five years ago, with my then-partner and now-wife, and suddenly understanding at the beginning of the final act just what the movie was actually about, and becoming absolutely overwhelmed with the terror and panic unique to being a very young person who does not know many things about the world but knows that the person they love is having something bad happen to them, and wanting to do anything you can to help. That Totoro can do that is not a magic trick; it is a portal, and a portal I step into even by writing and thinking about it.

And where Totoro's success is by placing you in the heart and eyes of its protagonists, Poppy Hill is more interested in the distance between the two stories it tells: the tragedy of the parents, the joy of the kids. And that can be a kind of amulet: a daughter to a martyred father and distant mother, facing a world rife with change and upheaval, who spends each day grateful to raise a set of signal flags — whose hardest evening is one with a last-minute trip to the market at the bottom of the hill. Life was hard then, and yet luck and bliss in every corner — life is hard now, and yet blessing upon blessing for all those willing to listen.


It seems fairly clear that, as far as product lifecycle goes, GitHub is in its “Azure metered billing” stage. I don’t mean this as a negative value judgment in and of itself — who am I to argue that GHE is not, from a certain utilitarian point of view, more valuable to the world than all of the things I’m about to kvetch about? — but two things seem quite clear:

  1. The core experience of using GitHub for normal, GitHub-shaped things (reviewing pull requests; browsing a codebase) has degraded substantially over the past five years. Views are slower and buggier.
  2. GitHub has not innovated significantly on the broad process of managing and producing software since Copilot (2021), which while valuable felt then and now like a bit of a sidequest relative to their core interface and mission. Their two major non-enterprise ships in the past five years have been: Projects (2022), which I would describe as a solid iterative improvement on GitHub Issues that is still strictly worse than Linear and Jira; Discussions (2020), which is a clear and obvious success but is slowly being eaten away by Discord and other walled gardens.

Lest you argue that I am being uncharitable: as I write this, the splash page for GitHub shows an above-the-fold carousel of images with five captions (Code, Plan, Collaborate, Automate, and Secure). Three of those images show Copilot; one shows Actions (which, it should be said, I do find really useful!); one shows Projects. The story is much the same below the fold: the goal of GitHub as an organization has shifted largely to a) get larger organizations using GitHub and b) to have all organizations start adopting Copilot.


The above is only problematic if you believe that there is room to grow in the realm of “hosting and owning source code", and that the current model of reviewing a pull request by reading the modified files in alphabetical order is not in fact a global maximum. Linear’s interested in this space; Pierre is, too (though Pierre appears, at least right now, to be much more interested in Being Pierre).

I’m not well equipped to prognosticate here: all I know is that this is not the tool of the future, and whoever replaces GitHub will have a narrative arc of incumbency displacement that will feel obvious and trite in retrospect.


I have been a little bit guilty lately of waxing poetic on the big-budget middle-brow thrillers of yesteryear. It does seem like the 80s and 90s were packed with a kind of not particularly smart, not particularly dumb, but well-crafted, thoughtful thriller or drama — a genre that feels at this point extinct, having been supplanted by streaming miniseries and Marvel movies. I wish we had more of this kind of film and am perhaps predisposed to review their ilk more kindly for want of a modern replacement. They, like any other genre, are easily bruised. The Firm as a plot is not just Grisham but feels like a platonic ideal of Grisham. It is, and I say this without condescension or critique, the simplest and most stereotypical Grisham legal thriller you can imagine. And because that is the case, I find it hard to talk about the gestalt of the movie. Instead, my mind immediately goes to the granular things that worked and didn't work.

I'll start with the things that didn't work so I can end on a more positive series of notes. This is an overlong, overwrought, melodramatic film with some absolutely ridiculous flourishes (I'm thinking of the cartwheel scene and a completely unnecessary 10-minute chase scene involving Wilford Brimley) that is managed rather than buoyed by Tom Cruise, who is given charge of a script that is scattershot and thin. It just plain drags, which is hard to say given how rote and predictable everything is in hindsight. Even the score feels bizarre. The final act of the film has a jaunty jazz piano undercurrent, which makes the whole thing feel like an Ocean's precursor rather than a taut, dramatic climax. That being said, there are some terrific, terrific moments, and all of them are outshone by Gene Hackman playing a still thin script absolutely perfectly. His blend of smarm, menace, and self-awareness quite literally carried the film, both in its earlier acts where a more obvious portrayal would give the whole game away, and in the final act where it would drain the entire "heist" of any nuance or interest.

The firm is largely populated with one-note portrayals of one-note characters, and Hackman devours them all. It's a film worth sitting through for his performance alone (and, in second place, Tripplehorn's).


Holly Hunter, William Hurt, and Albert Brooks in the newsroom

What do you do when your real life exceeds your dreams? / Keep it to yourself.

God, we used to be a great country.

Everything about this film feels true. The foundation, without which nothing matters, is the pitch-perfect triptych of Hunter, Hurt, and Brooks: all three are perfectly cast, perfectly written, and perfectly executed. Every single person in the world has worked with a Jane and an Aaron and a Tom; the movie never flinches in showing us who they really are, both in virtue and in vice, and Brooks is wise enough to resist trope and obvious resolution in favor of a progression that trades satisfaction for honesty.

But around them, an equally impressive orbit. Joan Cusack doing her usual shtick; Jack Nicholson in what feels today like a stunt-casting role that borders on metafiction; sets and dressing details that never feel too outré, and interstitial newsroom scenes (with the exception of a brief and silly Central American sojourn) that feel far less contrived than the usual romantic comedy fare.

I think it is tempting to dwell on the "message" of the movie, and to think about Jane's opening monologue: as someone who grew up with broadcast news firmly in the rear view mirror, the idea of the "advent" of the anchorman and what it did to a serious journalistic praxis is novel, and of course Hunter's character is proven prescient over the forty years that follow. (One is reminded of the ending to Between the Lines, but in that film the folks getting laid off appear to be neither interested nor competent, which of course cannot be said of our protagonists here.)

But this is a film much more interested in the human condition than the newsroom, and while I don't think it ever quite succeeds in marrying the two as neatly as it would like (the workplace is compelling; the characters, dominated by their work, are compelling; I am not sure this symbiosis teaches us much about the way the world works) it does not matter because you leave the theater understanding the world more clearly because you understand these three people in vivid technicolor. Tom and Aaron have found various satisfactions in paths that are still not quite exactly what they wanted; they know it, and you know it. And Jane — god, what a perfect performance from Holly Hunter — is the one who we know is fiercest, is smartest, is bravest, and all we can do is hope that she stops crying.


Trundle is not quite the right word. When I hear trundle, I think of layers, of wool and dampness, of hitting the road before the sun does, wrapped in a blanket and uncaffeinated haze. One cannot trundle in the summer, and so this morning we did not so much trundle as we did shuffle our way to the car: Haley and I sporting matching pairs of sunken eyelids and cans of Celsius that could not kick in soon enough; my mother, gracious as ever, bright-eyed and cheery at 6:00am, and Lucy, still wearing her pajama onesie, awake but not quite conscious, confused but thrilled (as she always is) to be involved. And off we went to Ashland — not a far drive at all, maybe twenty minutes from Richmond. Ashland is a lovely and small town known for the following items: a train station, a sniper attack, a very pretty college campus, a strawberry festival, and a half marathon / 5K.

That last item is the source of our 5am alarm clock. We ran the Ashland 5K two years ago, the weekend before our wedding; we would have run it last year, but Haley was deep into her third trimester.

We get there; we park; we make our way to the packet pickup tent. To give you a sense of the size [or lack thereof] of this race, it is such that we can arrive ten minutes before the race begins, grab our bibs, and still have time left over to stretch and enjoy the sun slowly coming over the quiet environs. Lucy is captivated by the novelty and the relative chaos; she is not sure what’s going on, but she is delighted all the same.

It is very quickly clear that the weather will be perfect for a morning run, a welcome change: it’s been unbearably (though not uncharacteristically) gross in Virginia this year, hot and still and sticky, but this morning it is crisp. We’ve got fresh legs, though too fresh at that: the last time we ran was a 10K a few months back, which itself was the first time we had run since Lucy was born. And, as such, the race proceeds: it’s a very forgiving route, flat and calm, and our pace is a slow and steady jog, neither impressive nor unpleasant.

Time passes. My left knee twinges, as it is wont to do; I shift from listening to Charly Bliss to Navy Blue, and then toss my AirPods in my pocket. We hit the three mile mark; the end is upon us. We turn right onto the main stretch of road that cuts through Ashland, sitting parallel to the train track, and immediately we start to hear the generic “you have finished the race” Top 40 playlist and a guy on the P.A. shouting out the half-marathon finishers (who, frankly, deserve the praise more than us). With one difference: my mother and Lucy, rather than waiting at the finish line, have camped out at the bend, and with a smile and nod I stop, pick up Lucy and begin running again — because, after all, it is her race too.


Haley, Lucy and I all cross the finish line, hand in hand in hand. Lucy is, as always, jostled and deliriously happy.

Strava gently informs me that it was my worst 5K pace in quite a few years; I gently inform it back that Lucy just set a new PR, and she’s only going to get faster from here on out.

Ashland 5K


About the author

I'm Justin Duke — a software engineer, writer, and founder. I currently work as the CEO of Buttondown, the best way to start and grow your newsletter, and as a partner at Third South Capital.
