In Notes on buttondown.com and How Buttondown uses HAProxy, I outlined the slightly kludgy way we serve buttondown.com
both as a marketing site (public-facing, Next/Vercel, largely just content pushed by non-developers) and an author-facing app (behind a login, Django/Heroku/Vue) and recommended developers not do that and instead just do the sensible thing of having:
- `foo.com`, a marketing site
- `app.foo.com`, an application site
This prompted questions, all of the genre: “why do you need your marketing site to be hosted/built differently from your application site?”
A few reasons:
- You are, at some point, going to want someone non-technical to be able to contribute to your marketing site. If you host your marketing site from the same framework/repository as your application site, you have suddenly capped your CMS flexibility at “whatever integrates with my stack.”
- You do not want to couple deployments/SEO of the marketing site with the application site. (Do you want to force CI to run and end-to-end deployments to trigger to fix a typo? Do you want an OOM to take down your blog?)
- Namespacing is much easier (for things like, say, a whole-ass domain migration) when you don’t have to keep a sitemap that contains both internal and external paths.
There are reasonable counterarguments:
- Being able to commingle business logic with marketing can lead to powerful programmatic SEO or other clever things.
- Shunting your marketing site off to a purpose-built CMS like Webflow hamstrings your ability to iterate quickly.
I think the synthesis we landed on — Next, powered by Keystatic — gives us the best of both worlds. Non-technical writers can publish and edit easily; we can do fancy programmatic things. But none of that obviates what is, to my mind, the more clear-cut piece of advice:
Even if you’re dead-set on having a single application serve both the marketing and application site, deploy them to separate domains.
Cable Cowboy is a rollicking read that serves better as a primer for a fascinating industry than as a legitimate profile (or hagiography) of the man from whose eyes the history has unfolded. Robichaux lacks the incisive vigor that made Barbarians at the Gate so compelling (and sometimes frustrating) as a character study and the Caro-esque majesty of vision to more carefully connect the dots of the major players and landscapes not just with each other but with the overarching shift in how the world worked.
But to condemn this book for not adequately interrogating power or serving double duty as a treatise on America's technologically-driven shift from statism into modern neoliberal nationhood is unfair, because what it does do is take you through a whirlwind tour of a fascinating set of companies and operators, and their struggle against all takers — the public sector, the financiers, the relentless march of technological progress — to make money.
TCI — and John Malone — are best known for their approach to capital, and the book covers the broad strokes fairly well (albeit not enough to really yield any new revelations): a focus on lean capital efficiency, an aggressive love of financial instrumentation, an interest in deals qua deals more than areas of strategic investment. There was nothing noteworthy here.
What I found more fascinating — and what I think gets omitted in the SparkNotes version of TCI's growth and history — is how recurrent every company and player is over a sufficiently long timespan in a commoditized industry. Someone who you are at war with in 1983 is a joint venture partner in 1992; a sales connection in 1972 is a potential acquirer in 1977.
Cable Cowboy makes the industry, for lack of a less cheugy metaphor, look like Settlers of Catan: Malone's gift was neither a ruthless efficiency nor an unparalleled understanding of markets, but an uncanny knack for always finding a deal to be done to eke out a little bit of long-term margin, and never burning any bridges.
Highlights
They killed the pig.
We finished Buttondown’s migration from MDX to Markdoc last week. It went swimmingly, except for one little hitch: our RSS feeds, which sat on top of `getServerSideProps` and read in the flat `.mdoc` files, threw 500s in Vercel. (They worked fine locally and in CI, but then those files were purged by Vercel as part of the post-compile deploy.)
I was considering going back to our previous (slightly janky but perfectly reasonable) approach of having a script that generates the RSS files and then just serving them as static assets, but Max Stoiber pointed me in the right direction:
- Create an App Router Route Handler
- Set `dynamic = "force-static"`
- Build the XML in-band.
This means all we had to do was this:
```typescript
import { createReader } from "@keystatic/core/reader";

import config, { localBaseURL } from "../../../keystatic.config";

const reader = createReader(localBaseURL, config);

export const dynamic = "force-static";

const CHANNEL_METADATA = {
  title: "Buttondown's blog",
  description: "Buttondown's blog — guides, tutorials, and more",
  link: "https://buttondown.com",
};

export async function GET() {
  const slugs = await reader.collections.blog.list();
  const rawPostData = await Promise.all(
    slugs.map(async (slug) => {
      const response = await reader.collections.blog.read(slug, {
        resolveLinkedFiles: true,
      });
      return {
        slug,
        ...response,
      };
    })
  );
  const sortedPostData = rawPostData.sort((a, b) => {
    const coercedADate = new Date(a.date || "");
    const coercedBDate = new Date(b.date || "");
    return coercedBDate.getTime() - coercedADate.getTime();
  });
  const items = sortedPostData.map((post) => ({
    title: post.title,
    description: post.description,
    link: `https://buttondown.com/blog/${post.slug}`,
    pubDate: new Date(post.date || "").toUTCString(),
  }));
  const rssFeed = `<rss version="2.0">
  <channel>
    <title>${CHANNEL_METADATA.title}</title>
    <description>${CHANNEL_METADATA.description}</description>
    <link>${CHANNEL_METADATA.link}</link>
    ${items
      .map(
        (item) => `<item>
      <title>${item.title}</title>
      <description>${item.description}</description>
      <link>${item.link}</link>
      <pubDate>${item.pubDate}</pubDate>
    </item>`
      )
      .join("\n")}
  </channel>
</rss>`;
  return new Response(rssFeed, {
    headers: {
      "Content-Type": "text/xml",
    },
  });
}
```
Hopefully, Next will make this all a thing of the past and create a lightweight DSL like they did for sitemaps. In the meantime, though, I hope this helps!
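One caveat worth flagging when building XML via template strings: titles or descriptions containing &, <, or > will produce an invalid feed unless escaped. The route handler above is TypeScript, but the escaping rule itself is easiest to illustrate with Python's standard library:

```python
from xml.sax.saxutils import escape

# Raw ampersands and angle brackets are illegal inside XML text nodes,
# so escape them before interpolating post titles into the feed.
title = "Tips & tricks for <item> elements"
print(escape(title))  # → Tips &amp; tricks for &lt;item&gt; elements
```

The same transformation (a handful of `String.prototype.replace` calls) is easy to inline in the TypeScript handler.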
Yesterday, I was trying to set a unique constraint for comments in Buttondown to prevent accidental double-commenting, and I ran into a problem that I hadn't seen before:
```
index row size 2816 exceeds btree version 4 maximum 2704 for index "emails_comment_email_id_subscriber_id_text_0542cca9_uniq"
DETAIL:  Index row references tuple (165,7) in relation "emails_comment".
HINT:  Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full text indexing.
```
Simple enough: indexing a very long string is going to be prohibitively bad. It was immediately clear that the right path forward was to index the MD5 hash of the text rather than the text itself, but the literature on how to do so within the ORM was somewhat lacking:
- A decades-old tracking ticket had nothing useful
- StackOverflow either recommended dropping down to raw SQL or didn't think it was possible for uniqueness constraints
However, the solution is actually quite easy! Since Django 4.0, you can use expression-based uniqueness constraints, and Django even offers a handy MD5 function right out of the box. All I had to do was this:
```python
from django.db import models
from django.db.models.functions import MD5


class Comment(models.Model):
    text = models.TextField()
    email = models.EmailField()

    class Meta:
        constraints = [
            models.UniqueConstraint(
                MD5("text"), "email", name="unique_text_email_idx"
            )
        ]
```
And that's it!
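The reason the MD5 trick works is that the index never sees the raw text at all: no matter how long the comment, its MD5 digest is a fixed 32-character hex string (16 bytes), comfortably under the btree row limit. A quick sanity check in plain Python:

```python
import hashlib

# A comment far larger than the ~2704-byte btree row limit...
huge_comment = "x" * 100_000

# ...still hashes down to a fixed-width value.
digest = hashlib.md5(huge_comment.encode()).hexdigest()
print(len(digest))  # → 32
```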
In Paul Graham’s latest essay, he writes:
The theme of Brian's talk was that the conventional wisdom about how to run larger companies is mistaken. As Airbnb grew, well-meaning people advised him that he had to run the company in a certain way for it to scale. Their advice could be optimistically summarized as "hire good people and give them room to do their jobs." He followed this advice and the results were disastrous. So he had to figure out a better way on his own, which he did partly by studying how Steve Jobs ran Apple. So far it seems to be working. Airbnb's free cash flow margin is now among the best in Silicon Valley.
Readers are not privy to the exact talk; Graham presents it as a dichotomy between “manager mode” and “founder mode”, where founder mode exists not as a distinct methodology but as a rejection of “manager mode” practices:
Hire good people and give them room to do their jobs. Sounds great when it's described that way, doesn't it? Except in practice, judging from the report of founder after founder, what this often turns out to mean is: hire professional fakers and let them drive the company into the ground.
This dichotomy is reductive, but it hints at two commingled and pernicious issues facing most scaling organizations:
- The process of interviewing and evaluating managers is incredibly inaccurate.
- There exist very few organizational incentives or backstops to curtail growth.
Insofar as management is a science, we are still in the “leeches and humours” stage of things — we do not know how to organize knowledge workers, and we do not know how to evaluate knowledge work. (To paraphrase Huemer: we are more likely to kill companies through bloodletting than save them through germ theory.) So it makes sense that “founder mode” (defined, more bluntly, as a bias towards dancing with the corporate ethos that brought you) is on net better than the current state of the art.
But “don’t overhire and don’t overstratify your management” is necessary, not sufficient: if that’s all it took, presumably we’d see more Valve-shaped companies (small, flat, incredibly prolific). I think it’s important to cultivate what Sebastian Bensusan calls lieutenancy:
Most people don’t realize it is their job to unblock themselves and that they don’t need permission to do it. You need people who act even when they hit “extraordinary blockers”.
Okay, so how do you cultivate lieutenancy? In three ways, each of which is probably worth writing about more:
- Prioritize tenure as an organizational health metric.
- Align compensation with business outcomes.
- Draw very clear lines of ownership, and very high demands for owners.
I am sure all of my breathless praise of Le Samourai — a film that, in retrospect, I should have watched a decade or so ago — has been said before, time and time again. The gorgeous, minimalist direction (I need to watch more Melville, clearly, and have added Le Cercle Rouge to my watchlist); the impassive and perfect performance of Alain Delon (and, in an obviously much smaller but equally delightful role, Nathalie Delon); the perfect-no-notes-copied-many-times-but-improved-never ending.
It is rare for a noir to succeed both at the visceral, tangible level and at the spiritual level. And even then: great noirs leave you asking questions about the world: you finish Chinatown or The Parallax View with an understanding of the events you've witnessed but a deep gnawing doubt about the world around you. Le Samourai inverts that: you understand the world, and you understand not just your place and Jef's place in it, but you sit there and wonder: who is this man? What did he want from the world, and what did he know that we don't?
Someone emailed me in response to Two years as an independent technologist, in which I wrote:
The thing I miss about being at a large company is dealing with deeply cutting-edge technical problems; but my ability to analyze information, make decisions, and perform at a high level has grown very quickly.
They followed up:
I had a lot of trepidation around “losing my edge” not working on “hard engineering problems”. It sounds like you had the exact same concerns as well. Reflecting now, do you think you’ve continued to level up or refine your engineering skills?
My response is as follows!
Depends on how specifically you want to define “engineering skills.” For instance: everything Buttondown-related is, objectively speaking, pretty trivial in terms of scale. Our largest table is on the order of ~five billion rows, and that’s an outlier (event data!); this is simply not that much compared to my time at Stripe/Amazon, where a much bigger part of my job was some variation of “figure out how to architect a system that can handle one or two orders of magnitude more volume than conventional wisdom permits.”
So, unless you’re working on a very specific kind of company, I think it’s just not very likely that you’re going to progress in terms of “hard engineering” through independent work compared to spending time at a FAANG where all of the engineering problems are definitionally hard engineering problems.
However! There are two (slightly related) things that make up for this:
- What you lose in depth, you make up for in breadth. I am exposed to so much more new stuff on a weekly or even daily basis: partially because the ease of adopting and deploying a new piece of technology is much higher (no SecRev, no SOC2, no enterprise sales dance...), partially because there is so much ground to be covered, so quickly.
- You can always outsource your core competencies (don’t want to deal with hardware? use Vercel/Railway! don’t want to deal with hyperscaling a database? use PlanetScale! etc.), but you can’t outsource the decision to outsource those core competencies. There’s no `#search` team in Slack that you get to pepper with questions about the benefits and drawbacks of a certain ElasticSearch use case; you learn by doing, and you learn that it’s important to do so really, really quickly.
Incremental games are tricky beasts. I think the best ones are like Farm RPG and Melvor Idle, which share a handful of common traits:
- An emphasis on pathing and napkin-level theorycrafting, where you feel rewarded for your mastery of simple mechanics by getting from point A to point B 10% faster than you would have otherwise.
- Calm progression punctuated by bursts of epiphany and dopamine (a rare drop that changes your plans; a new skill or item that dramatically unlocks a genre of work).
- An overall feeling of a good time, and sufficient levels of facade to distract you from the fact that you're essentially incorporating Progress Quest into your daily routine.
Wizrobe hits on some of these — particularly the second — but not all three simultaneously, and once the excitement of midgame progression is over, it's a bit of a letdown. There are too many systems for you to feel great and in command of the gestalt, and many of the mechanics are simply underwhelming (the adventure/combat system perhaps most notably). It simply does not sell the illusion of agency strongly enough; you feel Skinner's influence a little too strongly, with little outcome to show for it.
There are few technical decisions I regret more with Buttondown than the decision to combine the author-facing app, the subscriber-facing app, and the marketing site all under a single domain. Most technical decisions are reversible with sufficient grit and dedication; this one is not, because it requires customers to change their URLs and domains.
There are a number of reasons why this was a bad decision, and that’s probably worth an essay in its own right, but this is more meant to discuss how we work around the problem.
At a high level, it looks something like this:
All requests run through `buttondown-load-balancer`, a Docker container on Heroku running HAProxy. I got the bones of this container from a lovely blog post from Plane.
The HAProxy configuration looks something like this:
```
global
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http
    # $PORT comes from Heroku, via some sort of dark magic.
    bind "0.0.0.0:$PORT"
    option forwardfor
    redirect scheme https code 301 if ! { hdr(x-forwarded-proto) https }

    # We want to find references to the old TLD and redirect them to the new one.
    # Don't redirect POST requests, because they're used for webhooks.
    acl is_old_tld hdr(host) -i buttondown.email
    http-request redirect prefix https://buttondown.com code 302 if METH_GET is_old_tld

    # Yes, it is unfortunate that this is hardcoded and can't be pulled out
    # from the sitemap of `marketing`. But in practice I do not think it is
    # a big deal.
    # 1. These represent absolute paths without subfolders.
    acl is_marketing path -i / /climate /alternatives /pricing /sitemap.xml /sitemap-0.xml /stack /open-source
    # 2. These represent subfolders. (Legal has a trailing slash so usernames can start with "legal".)
    acl is_marketing path_beg /features /use-cases /comparisons /legal/ /changelog /rss/changelog.xml /blog /rss/blog.xml /api/og /comparison-guides /stories/ /testimonials /resources
    # Docs technically live on a subdomain, but lots of people try to go to `buttondown.com/docs`.
    acl is_marketing path_beg /docs
    # 3. `_next` is a Next.js-specific thing...
    acl is_marketing path_beg /_next /_vercel
    # 4. But `next-assets` is something I made up just to make it easy to namespace the assets.
    # (There's a corresponding `/next-assets` folder in `marketing`.)
    acl is_marketing path_beg /next-assets

    # Where the magic happens: route all marketing traffic to the marketing app.
    use_backend buttondown-marketing if is_marketing
    default_backend buttondown-app

backend buttondown-app
    # We need to set `req.hdr(host)` so that `app` can correctly route custom domains.
    http-request set-header X-Forwarded-Host %[req.hdr(host)]
    http-request set-header X-Forwarded-Port %[dst_port]
    reqirep ^Host: Host:\ buttondown.herokuapp.com
    server buttondown-app buttondown.herokuapp.com:443 ssl verify none

backend buttondown-marketing
    http-request set-header X-Forwarded-Host buttondown.com
    http-request set-header X-Forwarded-Port %[dst_port]
    reqirep ^Host: Host:\ marketing.buttondown.com
    server buttondown-marketing marketing.buttondown.com:443 ssl verify none
```
This allows us to deploy the marketing site and the application site separately and without real worry about polluting the two (and indeed, our marketing site is on Vercel whereas the application site is on Heroku).
The only real source of angst comes from keeping the routing information up to date. As you can see from the above file, we have to enumerate the (thinner) list of routes on the marketing site, and we often forget to do so, so new pages end up “hidden” (i.e. served by the Django app, which then throws a 404).
Another challenge was testing this. When I first developed this approach three years ago it was, frankly, pretty reasonable to test in prod — Heroku is very quick at reverting Docker deploys, so I could just push and yoink if necessary. Now, though, a few seconds of downtime corresponds to thousands of page-views lost; we’re using Hurl as a very lovely testing harness, with an approach largely inspired by this blog post from InfoQ.
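For the curious, a Hurl file for this kind of routing looks roughly like the following. This is a hypothetical sketch rather than our actual test file, and the asserted header is whatever happens to distinguish the Vercel-served marketing responses from the Heroku-served app responses:

```hurl
# Marketing path: should be served by the marketing backend (Vercel).
GET https://buttondown.com/pricing
HTTP 200
[Asserts]
header "server" contains "Vercel"

# Anything else: should fall through to the Django app.
GET https://buttondown.com/login
HTTP 200
```

Running `hurl --test` against a freshly deployed load balancer catches routing regressions before real traffic does.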
All in all: this approach is janky and required a lot of tinkering, but it is very stable in its steady state. I cannot emphasize enough that if you are starting an app, you should not even entertain the idea of doing this: namespace your actual application under `app.` or `dashboard.` or whatever and call it a day, and your marketing department will thank you. But if you’re already stuck and need a way out: this one works.
(By the way, if someone has a non-insane way to do this, please let me know. This approach has worked well, is stable, and does not feel particularly unperformant, but it feels bad.)
Enough time has passed for me to admit that I thought Room 25 was a poor album. I suspect there's a lot of Trump-era literature that will age poorly in much the same way, which is not to say that it was "too political" but that we found ourselves, briefly, in a time where we were comfortable mistaking incohesion for deconstructionist ambition and poorly-recycled Twitter bits (imagine the mockery if Drake had rapped "I’m struggling to simmer down, maybe I'm an insomni-black") for cleverness. It was a disappointing sophomore release [1] from a great artist, and I suspect many of its plaudits — not unlike Dedication — came more from critics wanting to honor the artist's previous work.
I say this as a precursor to Sundial addressing almost every flaw of Room 25. Noname's politics are sharper and more honest; she conjures again the intimacy that made Telefone such a treat; the production shifts away from a slightly schizophrenic neo-soul thing back to the (Saba-inflected) light jazz rap. Sundial is (and I don't mean this in a faint-praise way) coherent; it sounds like an obvious evolution of her work and her thesis rather than a rush to get something out the door while everyone's listening.
Her studio debut, sure, but calling Telefone a mixtape feels like a distinction without a difference. ↩︎
We spent $85,000 for buttondown.com in April; this was the biggest capital expenditure I've ever made, and though it was coming from cash flow generated by Buttondown rather than my own checking account, it was by rough estimation the largest non-house purchase I've ever made.
As of August, we've officially migrated over from buttondown.email to buttondown.com. I'm sure I'll do a more corporate blog post on the transition in the future, but for now I want to jot down some process notes:
- The entire process was made much more painful due to Buttondown's architecture, which is a hybrid of Vercel/Next (for the marketing site and docs site) and Django/Heroku (for the core app) managed by a HAProxy load balancer to route requests. We ended up using hurl as a test harness around HAProxy, something we probably should have done three years ago.
- I went in expecting SEO traffic to be hit as Google renegotiates legions of canonical URLs; it hasn't, at least thus far. Instead, everything seems to have just bumped fairly healthily.
- I expected more production issues to come up than actually did. I credit this to a fairly clear scope: the goal was "to migrate all web traffic to .com", which meant that a) we didn't need to re-map any paths and b) we didn't need to worry about mapping SMTP traffic (which still runs through buttondown.email).
- The hardest part of the process was the stuff you can't grep for: URLs on other sites, OAuth redirect URLs, that sort of thing.
- Starting with isolated domains (the documentation site, the demo site) that weren't tied to the aforementioned HAProxy load balancer gave me some good early confidence that the migration would be smooth.
Overall: very happy with how it turned out. I would describe the project roughly as "three months of fretting/planning, one week of grepping, and one week of fallout."
Was it worth it? Yes, I think so. Most theoretical capital expenditures Buttondown can make right now have a non-trivial ongoing cost associated with them (buy another newsletter company or content vertical and now you have to run it on a day-to-day basis; do a big marketing build-out and you have to manage it; etc.) — this was a sharp but fixed cost, and it's something that I knew I wanted to do in the fullness of time. (And, most importantly, people stop referring to Buttondown as "Buttondown Email", a personal pet peeve of mine.)
When it comes to AI tooling, I am equal parts optimist and cynic. I have no moral qualm with using these tools (Supermaven is a pretty heavy part of my day-to-day work), but have found most tools quite bad by the metric of "do they make me more productive on Buttondown's code base?" I think it's important to be able to taste the kool-aid with these kinds of things, and try to carve out an hour every weekend to test something new.
My own personal Turing test as of late has been porting some old Django test cases to pytest. Our codebase is around 75% pytest, and I'd love for that to be 100%, though it's not really urgent. The work does have a couple of characteristics that make it particularly useful for testing an AI tool:
- It's immediately obvious whether or not the work was successful (i.e. do the tests execute and pass or not?)
- It's the kind of work that I really want to be able to delegate to a tool — I can do it myself, but it's monotonous and I don't add much value
- There's a good amount of prior art on how pytest works, but it's not as common as `unittest`, and `pytest` fixtures are tricky (they exist in different files, and their usage pattern is non-obvious).
Two standalone tools (GitHub's Copilot Workspace, SourceGraph's Cody) have failed this test; Cursor, however, succeeded.
To emphasize, these are not complicated test files. Here's a very basic (real) file that Cursor succeeded at porting from Django's test framework:
```python
from django.test import TestCase

from monetization.events.charge_refunded import handle
from monetization.models import StripeAccount, StripeCharge
from monetization.tests.utils import construct_event


class ChargeRefundedTestCase(TestCase):
    def setUp(self) -> None:
        self.event = construct_event("charge_refunded.json")
        self.account_id = "acct_whomstever"
        self.account = StripeAccount.objects.create(account_id=self.account_id)

    def test_basic(self) -> None:
        charge = StripeCharge.objects.create(
            charge_id="ch_whatever", account=self.account
        )
        handle(self.event.object, self.account_id)
        assert charge.refunds.count() == 1
```
to pytest:
```python
import pytest

from monetization.events.charge_refunded import handle
from monetization.models import StripeCharge
from monetization.tests.utils import construct_event


@pytest.fixture
def stripe_charge(stripe_account):
    return StripeCharge.objects.create(
        charge_id="ch_whatever", account=stripe_account
    )


def test_basic(stripe_account, stripe_charge):
    event = construct_event("charge_refunded.json")
    handle(event.object, stripe_account.account_id)
    assert stripe_charge.refunds.count() == 1
```
(Note that it's intentional that the `stripe_account` fixture is not actually in this file: it's in a global `conftest.py` that I pointed Cursor to.)
This is basically the most trivial possible port (and, again, Cody + Copilot Workspace both failed). Here's a slightly more complicated one testing out our Exports API:
```python
from unittest import mock
from unittest.mock import MagicMock

from django.test import override_settings

from api.tests.utils import ViewSetTestCase
from emails.models.account.model import Account
from emails.models.export.model import Export
from emails.tests.utils import FakeData


class ExportViewSetTestCase(ViewSetTestCase):
    url = "/v1/exports"

    @override_settings(IS_TEST=False, DEBUG=False)
    def test_list_on_v2(self) -> None:
        account = self.newsletter.owning_account
        account.billing_type = Account.BillingType.V2
        account.save()
        response = self.api_client.get(self.url)
        assert response.status_code == 403, response.content
        assert "upgrade your account" in response.json()["detail"]

    @mock.patch("emails.models.export.actions.s3.put")
    def test_list(self, put_mock: MagicMock) -> None:
        put_mock.return_value = "s3://foo/bar"
        FakeData.export(newsletter=self.newsletter)
        FakeData.export(newsletter=self.newsletter)
        response = self.api_client.get(self.url)
        assert response.status_code == 200, str(response.content)
        assert isinstance(response.json(), dict), response.json()
        assert response.json()["count"] == 2, response.json()

    @mock.patch("emails.models.export.actions.s3.put")
    def test_list_should_not_pollute_across_newsletters(
        self, put_mock: MagicMock
    ) -> None:
        put_mock.return_value = "s3://foo/bar"
        other_newsletter = FakeData.newsletter(
            owning_account=self.newsletter.owning_account
        )
        FakeData.export(newsletter=other_newsletter)
        FakeData.export(newsletter=other_newsletter)
        response = self.api_client.get(self.url)
        assert response.status_code == 200, str(response.content)
        assert isinstance(response.json(), dict), response.json()
        assert response.json()["count"] == 0, response.json()

    @mock.patch("emails.models.export.actions.s3.put")
    def test_export_return_ids(self, put_mock: MagicMock) -> None:
        put_mock.return_value = "s3://foo/bar"
        FakeData.export(newsletter=self.newsletter)
        response = self.api_client.get(self.url)
        assert "id" in response.json()["results"][0], response.content

    @mock.patch("emails.models.export.actions.s3.put")
    def test_POST_request_of_export_api(self, put_mock: MagicMock) -> None:
        put_mock.return_value = "s3://foo/bar"
        self.assertPOSTReturnsStatusCode(
            {
                "collections": ["comments"],
            },
            201,
        )
        assert Export.objects.filter(newsletter=self.newsletter).exists()

    @mock.patch("emails.models.export.actions.s3.put")
    def test_empty_collection_POST_request_of_export_api(
        self, put_mock: MagicMock
    ) -> None:
        put_mock.return_value = "s3://foo/bar"
        self.assertPOSTReturnsStatusCode(
            {
                "collections": [],
            },
            400,
        )

    @mock.patch("emails.models.export.actions.s3.put")
    def test_export_requester_matches_account(self, put_mock: MagicMock) -> None:
        put_mock.return_value = "s3://foo/bar"
        self.user = FakeData.user()
        self.account = Account.objects.get(user_id=self.user.id)
        self.newsletter = FakeData.newsletter(owning_account=self.account)
        self.account.email_address = "[email protected]"
        self.account.save()
        FakeData.export(newsletter=self.newsletter, requester=self.account)
        export = Export.objects.filter(newsletter=self.newsletter).first()
        self.assertEqual(
            export.requester.email_address,
            self.account.email_address,
            "Requester does not match",
        )
```
Beyond the obvious size of the file, there are a couple other things that made this trickier:
- We're overriding settings in some tests, mocking in others, and inheriting from a custom class.
- We're using `FakeData`, a sort of factory-pattern generator that would ideally be replaced with `pytest` fixtures.
Cursor did pretty well, though the final output (seen below) required a couple tweaks on my end:
```python
import json

import pytest
from django.test import override_settings

from emails.models.account.model import Account
from emails.models.export.model import Export
from emails.tests.utils import FakeData


@pytest.fixture
def url():
    return "/v1/exports"


@pytest.fixture
def mocked_s3_put(mocker):
    return mocker.patch(
        "emails.models.export.actions.s3.put", return_value="s3://foo/bar"
    )


@override_settings(IS_TEST=False, DEBUG=False)
def test_list_on_v2(logged_in_client, url, newsletter):
    account = newsletter.owning_account
    account.billing_type = Account.BillingType.V2
    account.save()
    response = logged_in_client.get(url)
    assert response.status_code == 403, response.content
    assert "upgrade your account" in response.json()["detail"]


def test_list(logged_in_client, url, newsletter, mocked_s3_put):
    FakeData.export(newsletter=newsletter)
    FakeData.export(newsletter=newsletter)
    response = logged_in_client.get(url)
    assert response.status_code == 200, str(response.content)
    assert isinstance(response.json(), dict), response.json()
    assert response.json()["count"] == 2, response.json()


def test_list_should_not_pollute_across_newsletters(
    logged_in_client, url, newsletter, mocked_s3_put
):
    other_newsletter = FakeData.newsletter(owning_account=newsletter.owning_account)
    FakeData.export(newsletter=other_newsletter)
    FakeData.export(newsletter=other_newsletter)
    response = logged_in_client.get(url)
    assert response.status_code == 200, str(response.content)
    assert isinstance(response.json(), dict), response.json()
    assert response.json()["count"] == 0, response.json()


def test_export_return_ids(logged_in_client, url, newsletter, mocked_s3_put):
    FakeData.export(newsletter=newsletter)
    response = logged_in_client.get(url)
    assert "id" in response.json()["results"][0], response.content


def test_POST_request_of_export_api(logged_in_client, url, newsletter, mocked_s3_put):
    response = logged_in_client.post(
        url, json.dumps({"collections": ["comments"]}), content_type="application/json"
    )
    assert response.status_code == 201, response.json()
    assert Export.objects.filter(newsletter=newsletter).exists()


def test_empty_collection_POST_request_of_export_api(
    logged_in_client, url, mocked_s3_put
):
    response = logged_in_client.post(
        url, json.dumps({"collections": []}), content_type="application/json"
    )
    assert response.status_code == 400


def test_export_requester_matches_account(db, mocked_s3_put):
    user = FakeData.user()
    account = Account.objects.get(user_id=user.id)
    newsletter = FakeData.newsletter(owning_account=account)
    account.email_address = "[email protected]"
    account.save()
    FakeData.export(newsletter=newsletter, requester=account)
    export = Export.objects.filter(newsletter=newsletter).first()
    assert (
        export.requester.email_address == account.email_address
    ), "Requester does not match"
```
Some notes on its efforts:

- It got the fixture-mocking API down on the first try, which is more than I can say for myself (I always end up trying to create a `MagicMock`, or forgetting to `start` it, or some other such trivial error).
- It originally put those final three tests in a completely empty class for reasons passing understanding; I amended the prompt to say "no classes" and it fixed it.
- The final test (`test_export_requester_matches_account`) was failing because it did not place the `db` fixture, which I had to fix manually.
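For contrast, here is a minimal sketch of the manual `unittest.mock` dance alluded to above, the one `mocker.patch` (from pytest-mock) spares you; the patched target, `json.dumps`, is just an illustrative stand-in:

```python
import json
from unittest import mock

# The manual approach: create a patcher, remember to start() it, and
# remember to stop() it afterwards. Forgetting either step is the
# "trivial error" mentioned above.
patcher = mock.patch("json.dumps", return_value="{}")
mocked = patcher.start()  # easy to forget
try:
    assert json.dumps({"a": 1}) == "{}"  # the patch is active
    assert mocked.call_count == 1
finally:
    patcher.stop()  # just as easy to forget

assert json.dumps({"a": 1}) == '{"a": 1}'  # back to the real function
```

With pytest-mock, `mocker.patch("json.dumps", return_value="{}")` accomplishes the same thing in one line, and teardown happens automatically at the end of the test.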
Overall, I was impressed. Not only did Cursor pass the first bar of "actually accomplishing the task", it also passed the second bar of "accomplishing the task faster than it would have taken me."
Not unlike Icarus, I was emboldened by these results. I tried two other genres of task that I suspected Cursor would be able to handle well:
- Localizing a footer in a handful of different languages. This was, I presume, made easier by the fact that the relevant file already had localization logic.
- Adding some logic to a Django admin form. Not only did this work, it was the clearest example of Cursor doing something that I didn't know how to do off the top of my head. (It's a trivial change, and I would have been able to figure it out in five minutes of Googling — but ten seconds is better than five minutes.)
Cursor did both! ...and then it choked on some more complicated feature work that spanned multiple files. Which is fine: a tool does not need to be flawless to be useful, and Cursor proved itself useful.
I can say, having finally played enough to burn out / quit, that Farm RPG is pretty much the ideal idle game for me. It is simple, pleasant, non-predatory, and strikes the perfect balance of encouraging habitual play without making you feel bad for not playing it. There's a huge amount of content, the game changes a meaningful and interesting amount over the course of your playthrough, and it rewards creativity & pathing without really requiring you to bust out a spreadsheet.
If there's any flaw in it (besides a meaningful dearth of social aspects, which in my case could almost be argued as a boon), it's that the endgame is toilsome. My last six months of playing Farm RPG were all the same: log in in the morning, spend five minutes doing my daily chores, make a bit of monotone progress on whatever the current milestone was (which was always rote), and then log out. It's hard to hold this against the game specifically, because all idle games seem to struggle with this prolonged endgame — but it's worth calling out nonetheless.
I loved The Phoenix Project, this book's spiritual and literal predecessor — while I quibbled with the prose and characters, I deeply enjoyed not just the concept of a ludocratic narrative but also its execution: I felt like I learned a good amount about management and scaling a technical organization.
The Unicorn Project is meant to be the same book as The Phoenix Project, except with a more specific software engineering focus rather than a general IT focus — and this, perhaps, is where it all falls apart for me, because unlike general information technology I now know a good deal about best practices in high-performing software engineering organizations, and as such I had nothing to learn from this book except that, man, Kim is not good at fiction-writing.
This is not meant to be a condemnation of the book's recommendations! Kim walks through the importance of reproducible builds, continuous integration, functional programming: all good things, all important things, all things that anyone who has spent time in a FAANG already knows as good. If you're not one of those people (and I don't mean this in a dismissive way, I promise) — the book might be more useful.
But I was hoping for a book that teaches how to go from an 80th percentile organization to a 95th percentile organization, and this book is a primer instead on going from a 10th percentile organization to a 70th percentile organization.
When I look back on the critical and popular fervor for the second season of Ted Lasso — and remember that it came out in the first year of a global pandemic that filled the average watcher (myself included!) with a level of dread and despair that has, for most folks, not been experienced before or since — things click into place a bit. What the show lacks in traditional merit (a satisfying, interesting plot; consistent and believable characters; humor that builds over the course of a scene, episode or season; an internal consistency that rewards the viewer's loyalty and attention) it purportedly makes up for in positivity.
I don't think this is undue criticism of the show's faults, and indeed it seems to lean into its own Flanderization to a great extent. We float weightlessly through Richmond's ups and downs as a team (they begin the season 0-0-6, and then suddenly it's 4-4-6, and then suddenly they're on top of the table, and we have watched maybe five minutes of actual football). I don't begrudge the show for leaning away from the actual mechanics of football — not every show needs to be Friday Night Lights — but it's symptomatic of the absence of any actual episode-to-episode stakes or drama. Things that certainly seem like they should be meaningful (a relegation team jettisoning their biggest sponsor) are never discussed or revisited; personal demons are exorcised after a single therapy session; every character (for the most part, well-portrayed and lovable) floats pleasantly from one surface-level issue to the next.
This is not problematic on its own, except for one thing: the show is not very funny. There are some good one-liners (the Bill Lawrence touch!) but it lacks the Bob's Burgers / Azumanga Daioh DNA of "here are a bunch of wacky people you love doing very funny things", and it tries to keep one foot in the real world for serious depictions (at least the simulacra of serious depictions) of mental health.
The job of a piece of art is to enchant and transform us; I think Ted Lasso did that for a cohort of its die-hard fans back in 2021, and I begrudge neither the show nor its fans when I say that, removed from the pandemic, it shows little ability to do either. Instead, it feels like a Lifetime movie given an Apple TV budget: pleasant, and mollifying, but certainly not great.
Hard not to draw parallels to Azumanga Daioh — a show that I think Cromartie edges out as the funniest anime I've ever seen. Whereas Azumanga feels sweet and dated — not in a bad way, but in that it shows its age and clearly influenced a legion of copycat "cute girls doing cute things" successors — Cromartie, despite coming out a single year later, strikes you as incomprehensibly modern. The closest historical parallel that comes to mind is the Adult Swim extended universe, but whereas those shows lean too far into a kind of nihilistic absurdism that never quite resonated with me, Cromartie has a cleverness with every single throwaway gag or callback that wins you over quickly. I'm shocked there's only a single season of this show; I'm shocked it's not talked about more often.
Buttondown's API calls are very fast, and one of the reasons why is that we've removed every single possible database query that we can.
The most recent was what looked like a fairly benign `COUNT(*)` query, coming from the default Django paginator; if you're gonna paginate things, you need to know how many to paginate, fair enough.

However, it irked me a little bit that we were always doing that `COUNT(*)` query even when we didn't need to: say, if we were returning a list of 14 emails when we can put up to 50 emails in a page. Objectively speaking, that `COUNT(*)` query is unnecessary overhead: we know there aren't any more emails than that, since we've serialized a full list that is smaller than the page size.
I went poking around for solutions to this problem, and came across a great article from Peter Be that abstractly talks about both the use case I had in mind and the right solution, which at a high level is: count and serialize the full results list up until the maximum page size, and then make a full count query only if you hit the page size.
Peter's snippet is more pseudocode than actual code, and I wanted something that I could actually use as a drop-in replacement for the Django paginator. Here it is, in full:
```python
from django.core.paginator import Page as DjangoPage
from django.core.paginator import PageNotAnInteger
from django.core.paginator import Paginator as DjangoPaginator


class Paginator(DjangoPaginator):
    def validate_number(self, number) -> int:
        try:
            if isinstance(number, float) and not number.is_integer():
                raise ValueError
            number = int(number)
        except (TypeError, ValueError):
            raise PageNotAnInteger("That page number is not an integer")
        return number

    def page(self, number) -> DjangoPage:
        validated_number = self.validate_number(number)
        if validated_number != 1:
            return super().page(number)
        internal_results = []
        for i in self.object_list[: self.per_page]:
            internal_results.append(i)
            if len(internal_results) == self.per_page:
                break
        if len(internal_results) < self.per_page:
            # The below override correctly throws a type error because we are
            # overriding a read-only cached property (ie a method) with a constant.
            # This is the whole point of this subclass, so we ignore the type error.
            self.count = len(internal_results)  # type: ignore
        return DjangoPage(internal_results, validated_number, self)
```
Note that it is important to override `validate_number`, too: the default implementation contains a sneaky little check of `.count`, which is a read-only cached property (ie a method) that triggers the `COUNT(*)` query.
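To see the trick outside of Django, here is a minimal, self-contained sketch of the same count-avoidance logic in plain Python; `LazyList` is a hypothetical stand-in for a queryset, where slicing is cheap but `len()` simulates the expensive `COUNT(*)`:

```python
class LazyList:
    """Stand-in for a queryset: slicing is cheap, len() is "expensive"."""

    def __init__(self, items):
        self._items = items
        self.count_queries = 0  # how many times the expensive count ran

    def __getitem__(self, key):
        return self._items[key]

    def __len__(self):
        self.count_queries += 1
        return len(self._items)


def first_page(object_list, per_page):
    # Serialize up to one full page of results.
    results = list(object_list[:per_page])
    if len(results) < per_page:
        # Short page: the serialized results themselves tell us the total,
        # so no count query is needed.
        return results, len(results)
    # Full page: there may be more rows, so fall back to the real count.
    return results, len(object_list)


short = LazyList(list(range(14)))
page, total = first_page(short, per_page=50)
assert total == 14 and short.count_queries == 0  # count skipped

full = LazyList(list(range(120)))
page, total = first_page(full, per_page=50)
assert total == 120 and full.count_queries == 1  # count needed
```

The real paginator above does the same thing, just expressed through Django's `Paginator` API: the `self.count` override plays the role of the early return.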
I don’t think there’s anything wrong with an anodyne, predictable rom-com. I am fully baptized into the Church of Ephron: there are few autumnal traditions more pleasant and comforting to me than whiling away an afternoon to the paint-by-numbers plot beats of You've Got Mail et al (though I’ll exclude When Harry Met Sally from this umbrella, which is of course excellent but at least makes some overtures at novelty in terms of form and function.)
Netflix knows how to produce and distribute these kinds of films en masse: they are increasingly its bread and butter, as it begins to cede more of the “prestige TV with the edges sanded down” territory to Apple TV. What separates the good versions of these films from the awful ones is how much you enjoy spending time with the world and characters that a bored team of Malibu screenwriters conjured on an otherwise uneventful afternoon — and this is Find Me Falling’s greatest sin, for outside of a somewhat winning performance from Ali Fumiko Whitney (in a very gee-shucks sort of way), every single person in this film appears miserable, as if they are resigned to their fates. Set aside the incomprehensible leaps in characterization and the quasi-sociopathy required to use a suicide cliff as a plot point: how do you spend six months in Cyprus on a Netflix budget and not at least make me jealous that I’m not in your shoes?
Gilbert & Sullivan would have loved Ted Lasso — absurd and friendly and ever, ever obsessed with having its pathos and eating it too. I write this in a period of what I would describe as post-post-backlash: there was a period where America, particularly during COVID, was obsessed with Ted Lasso, and then a period after that where it was considered overrated and smarmy and pedestrian, and now a point where it is largely — not forgotten, as I'm sure the fourth season will be Apple TV's biggest launch ever — consigned to neither being the center of the brief schizophrenic cultural zeitgeist nor being rubbed out of existence entirely.
I think Ted Lasso's first season is trifling, and a good way to spend some time, and not exactly Great Television. Sudeikis gives a great performance despite a script that cannot decide whether he is competent or caricature; the supporting cast is all winning, and as long as you don't look at the edges of anything for too long you won't be upset. It has, I think, the signature Bill Lawrence (of Scrubs) touch: charming and clever and snackable and mostly empty — but we all deserve a cheat meal every once in a while.
(It is both entertaining and, I think, illustrative, how deeply uninterested the show is with the actual mechanics of football. I love the idea of an EPL team having never done suicides before.)