Sat, 17 Jan 2026 15:18:04 +0000
Pluralistic: The world needs an Ireland for disenshittification (17 Jan 2026)


Today's links



A green Irish pillarbox, standing before a verdant, rolling Irish countryside. The pillarbox is emblazoned with the poop emoji from the cover of 'Enshittification,' with angry eyebrows and a grawlix-scrawled black bar over its mouth.

The world needs an Ireland for disenshittification (permalink)

Ireland is a tax haven. In the 1970s and 1980s, life in the civil-war-wracked country was hard – between poverty, scarce employment and civil unrest, the country hemorrhaged its best and brightest. As the saying went, "Ireland's top export is the Irish."

In desperation, Ireland's political class hit on a wild gambit: they would weaponize Ireland's sovereignty in service to corporate tax evasion. Companies that pretended to establish their headquarters in Ireland would be able to hoard their profits, evading their tax obligations to every other country in the world:

https://en.wikipedia.org/wiki/Ireland_as_a_tax_haven

A single country – poor, small, at the literal periphery of a continent – was able to foundationally transform the global order. Any company that has enough money to pretend to be Irish can avoid 25-35% in tax, giving it an unbeatable edge against competitors that lack the multinational's superpower of magicking all its profits into a state of untaxable grace somewhere over the Irish Sea.

The effect this had on Ireland is…mixed. The Irish state is thoroughly captured by the corporations that pretend to call Ireland home. Anything those corporations want, Ireland must deliver, lest the footloose companies up sticks and start pretending to be Cypriot, Luxembourgeois, Maltese or Dutch. This is why Europe's landmark privacy law, the GDPR, has had no effect on America's tech giants. They pretend to be Irish, and Ireland lets them get away with breaking European law. The Irish state even hires these companies' executives to regulate their erstwhile employers:

https://pluralistic.net/2025/12/01/erin-go-blagged/#big-tech-omerta

But there is no denying that Ireland has managed to turn the world's taxable trillions into its own domestic billions. The fact that Ireland is cashing out less than 1% of what it's costing everyone else is terrible for the world's tax systems and competitive markets, but it's been a massive windfall for Ireland, and has lifted the country out of its centuries of colonial poverty and privation.

There are many lessons to be learned from Ireland's experiment with regulatory arbitrage, but one is unequivocal: even a small, poor, disintegrating nation can change the world system by offering a site where you can do things that you can't do anywhere else, and if it does, that poor nation can grow wealthy and comfortable.

What's more, there are plenty of "things that you can't do anywhere else" that are very good. It's not just corporate tax evasion.

First among these things that you can't do anywhere else: it's a crime in virtually every country on earth to modify America's defective, enshittified, privacy-invading, money-stealing technology exports. That's because the US trade representative has spent the past 25 years using the threat of tariffs to bully all of America's trading partners into adopting "anti-circumvention" laws:

https://pluralistic.net/2026/01/15/how-the-light-gets-in/#theories-of-change

There is nothing good about this. The fact that local businesses can't sell you a privacy blocker, an alternative client, a diagnostic tool, a spare part, a consumable, or even software for your American-made devices leaves you defenseless before US tech's remorseless campaign of monetary and informational plunder – and it means that your economy is denied the benefits of creating and exporting these incredibly desirable, profitable products.

Incredibly, Trump deliberately blew up this multi-trillion dollar system of US commercial advantage. By chaotically imposing and rescinding and re-imposing tariffs on the world, he has neutralized the US trade rep's tariff threats. Foreign firms just can't count on exporting to America anymore, so the threat of (more) tariffs grows less intimidating by the minute:

https://pluralistic.net/2025/12/16/k-shaped-recovery/#disenshittification-nations

The time is ripe for the founding of a disenshittification nation, an Ireland for disenshittification. I have no doubt that eventually, most or all of the countries in the world will drop their anti-circumvention laws (the laws that ban the modification of US tech exports). Once one country starts making these disenshittifying tools, there'll be no way to prevent their export, since all it takes to buy one of these tools from a circumvention haven is an internet connection and a payment method.

Once everyone in your country is buying and using jailbreaking tools from abroad, there'll be no point in keeping these laws on your own books. But the first country to get there stands a chance of establishing a durable first-mover advantage – of reaping hundreds of billions selling disenshittifying products around the world. That country could be to enshittification-resistant technology what Finland was to mobile phones during the Nokia decade (and wouldn't you know it, the EU's newly minted "Tech Sovereignty" czar is a Finn!):

https://commission.europa.eu/about/organisation/college-commissioners/henna-virkkunen_en

The world has experimented with many kinds of havens over the centuries. In the early 18th century, Madagascar became a haven for British naval deserters, who were adopted into the island's matriarchal clans. Together, they founded an anarchist pirate utopia:

https://pluralistic.net/2023/01/24/zana-malata/#libertalia

The global system of trade has allowed America's tech companies to steal and hoard trillions, and to put every country at risk of being bricked when their IT systems are switched off at a single word from Trump:

https://pluralistic.net/2026/01/01/39c3/#the-new-coalition

There are more than 200 countries in the world. There's also an ever-expanding cohort of brilliant international technologists whose Silicon Valley dreams have turned into a nightmare of being shot in the face by an ICE goon, or being kidnapped, separated from their families and locked up in a Salvadoran slave-labor prison. These techies are looking for the next place to put down roots and "make a dent in the universe." Lots of countries could be that place.

The Ireland for disenshittification wouldn't just have its pick of international technologists – it would also have plenty of Americans hungering for a better life. Nearly two-thirds of young Americans "are considering leaving the US":

https://www.newsweek.com/nearly-two-thirds-of-young-americans-are-considering-leaving-the-us-11010814

Ireland pulled off its tax-haven gambit by making influential people very rich, so that they would go to bat for Ireland. The Ireland for disenshittification will have the same chance. The new tech companies that unlock US Big Tech's trillions and turn them into their own billions (with the remainder being shared by us, tech users, in the form of lower prices and better products) will be a powerful bloc in support of this project.

Ireland showed us: it just takes one country to defect from this global prisoner's dilemma, and then everything is up for grabs.

(Image: Stuart Caie, CC BY 2.0; Sourabh.biswas003, CC BY-SA 3.0; modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Hollywood’s Member of Parliament makes national news https://web.archive.org/web/20060213161019/http://www.macleans.ca/topstories/politics/article.jsp?content=20060123_120006_120006

#20yrsago Skip $250/plate dinner for dirty MP, eat with copyfighters https://web.archive.org/web/20060118062522/http://www.onlinerights.ca/

#20yrsago Octavia Butler’s “Fledgling”: subtle, thrilling vampire novel https://memex.craphound.com/2006/01/17/octavia-butlers-fledgling-subtle-thrilling-vampire-novel/

#10yrsago Revealed: the hidden web of big-business money backing Europe and America’s pro-TTIP “think tanks” https://thecorrespondent.com/3884/Big-business-orders-its-pro-TTIP-arguments-from-these-think-tanks/855725233704-2febf71a

#10yrsago The bizarre magnetic forest rings of northern Ontario https://www.bldgblog.com/2016/01/rings/

#10yrsago 2016 is the year of the telepathic election, and it’s not pretty http://www.antipope.org/charlie/blog-static/2016/01/some-american-political-marker.html

#10yrsago Trump Casinos lost millions every single year that Donald Trump ran it (but he’s still rich) https://memex.craphound.com/2016/01/17/trump-casinos-lost-millions-every-single-year-that-donald-trump-ran-it-but-hes-still-rich/

#10yrsago Oregon domestic terrorists now destroying public property in earnest https://www.theguardian.com/us-news/2016/jan/16/oregon-militias-behavior-increasingly-brazen-as-public-property-destroyed?CMP=edit_2221

#10yrsago Jeremy Corbyn proposes ban on dividends from companies that don’t pay living wages https://www.theguardian.com/politics/2016/jan/16/jeremy-corbyn-to-confront-big-business-over-living-wage

#10yrsago The Electable Mr Sanders https://web.archive.org/web/20160119083607/http://robertreich.org/post/137454417985

#10yrsago Suspicious, photo-taking “Middle Eastern” men were visually impaired tourists https://www.cbc.ca/news/canada/british-columbia/vancouver-mall-video-men-1.3406619

#5yrsago Fighting fiber was the right's dumbest self-own https://pluralistic.net/2021/01/17/turner-diaries-fanfic/#1a-fiber


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1045 words today, 9348 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

Fri, 16 Jan 2026 17:21:02 +0000
Pluralistic: Catch this! (16 Jan 2026)


Today's links



A juggler, who is juggling email icons. Instant message icons are flying at him from all directions. In the background is a frantic scene from Bosch's 'Garden of Earthly Delights.'

Catch this! (permalink)

Call it "lifehacking," or just call it, "paying attention to how you stay organized" – I don't care what you call it, I am an ardent practitioner of it.

I like improving my processes because I like what I do, and the more efficient I am at all of it (with apologies to Jenny Odell), the more of that stuff I can get done:

https://memex.craphound.com/2019/04/09/how-to-do-nothing-jenny-odells-case-for-resisting-the-attention-economy/

I want to do a lot of stuff. I am one of those people who is ten miles wide and one inch deep (it probably has something to do with imbibing Heinlein's maxim that "specialization is for insects" at an impressionable age). There's a million waterways I want to dip my toe (or my oar) into, and the better organized I am, the more of that stuff I'll get to do before I kick off. I'm 54, and while there's a lot of road ahead of me, I can see the end, off there in the distance. It's coming, and I'm not done – I'm barely getting started.

I've been around lifehacking since the very moment it was born. I was there. I published the notes on Danny O'Brien's seminal 2004 talk at the O'Reilly Emerging Technology Conference, "Life Hacks: Tech Secrets of Overprolific Alpha Geeks":

https://craphound.com/lifehacksetcon04.txt

In the years since, I've cultivated a small – but mighty – repertoire of organizational habits and tools that let me get a hell of a lot done. Weirdly, many of these tools are things that other people hate, and I can see why – they use them in very different ways from me. That's true of browser tabs (I loooove browser tabs):

https://pluralistic.net/2024/01/25/today-in-tabs/#unfucked-rota

And to-do lists, which will totally transform your life, once you realize that the most important to-do list is the one you maintain for everyone else who owes you a response, a package, or money:

https://pluralistic.net/2024/10/26/one-weird-trick/#todo

Other essential tools languish in neglect, artifacts of the old, good web – the elegant weapons that dominated a more civilized age. First among these? RSS readers:

https://pluralistic.net/2024/10/16/keep-it-really-simple-stupid/#read-receipts-are-you-kidding-me-seriously-fuck-that-noise

I will freely stipulate that people have a good reason to hate all this stuff. "Productivity porn" is often proffered as a mix of humblebrag (a way to make other people jealous of your almighty "productivity") and denial (fiddling with your systems is a ready substitute for actually doing things). Many (most?) of the foremost self-appointed pitchmen for "lifehacking" are cringey charlatans peddling "courses" and other nonsense.

But if you keep digging, there's a solid foundation beneath all the rot. At its very best, this stuff is a way to figure out what you really want to do, and to organize your life so that the stuff you want to do is the stuff you're doing.

A lot of people get into this kind of thing thinking it'll let them do everything. No one can do everything. The best you can hope for is to make conscious decisions about which stuff you'll never get to, while leaving at least a little room for serendipity.

Like I said, I want to do a lot of stuff. My organizing tactics are as much about deciding what I won't do as they are about deciding what I will do:

https://locusmag.com/feature/cory-doctorow-how-to-do-everything-lifehacking-considered-harmful/

Which brings me to another tool that everyone hates and I love: email. I live and die by email.

First of all, I filter all my incoming email: mail from people who are in my address book stays in my inbox; mail from people I've never heard from before goes into a mailbox called "People I don't know." When I reply to a message, Thunderbird adds the recipient to my address book, so the next time I hear from them, they'll stay in my main mailbox.

I also filter out anything containing the word "unsubscribe," sending it into a folder called "Unlikely" (but not if the message contains my name – which is how I can stay subscribed to mailing lists I don't have time to read and make sure to reply when someone mentions me).
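To make those rules concrete, here's a minimal sketch of the routing logic as plain Python (not Thunderbird's actual filter format). The folder names come from the description above; the function name, the name constant and the rule ordering are illustrative assumptions, not a reconstruction of my actual filters.

```python
# A minimal sketch of the mail-routing rules described above, written as
# plain Python rather than Thunderbird's filter UI. The folder names
# ("Unlikely", "People I don't know") come from the post; MY_NAME, the
# function name and the rule ordering are illustrative assumptions.

MY_NAME = "Cory Doctorow"  # used for the "mentions me" exception


def route_message(sender: str, body: str, address_book: set[str]) -> str:
    """Return the folder a message should land in."""
    if sender in address_book:
        # Mail from known correspondents stays in the main inbox.
        return "Inbox"
    if "unsubscribe" in body.lower() and MY_NAME not in body:
        # Bulk mail gets deprioritized -- unless it mentions me by name.
        return "Unlikely"
    # Everything else from strangers gets triaged separately.
    return "People I don't know"


# Example: a newsletter from an unknown sender that doesn't mention me.
print(route_message("news@example.com", "Click here to unsubscribe", set()))
# -> Unlikely
```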

Second of all, I have a zillion Quicktext macros that I use to reply to frequently asked questions. I have one that spits out my mailing address; another that spits out my bio; and others for politely saying no to things I don't have time for, for information about how to pay one of my invoices, etc, etc.

Third: I have a small folder of emails that I can't reply to right away (usually because I need some information from a third party), which I review every morning, answering anything I can clear.

Finally, I save it all. I have so much saved email, which means that if you ask me about something from 20 years ago, there's a good chance I can find it – provided we organized it over email.

All of which explains why I refuse – to the extent that I can – to do anything important over instant messaging, whether that's Signal or any of the other messaging tools that come with social media, workplace software, etc.

I understand why people like instant messaging: it does not overwhelm you with the burdens of the past. It is largely ahistorical, with archives that are hard to access and search. Its norms and register are less formal than email.

And, of course, instant messaging is far superior to email in some contexts. If you're on vacation with friends, having a big group-chat where you can say, "I'm making dinner – is everyone OK with cheese?" is indispensable. Same goes for asking a friend for directions, announcing that you've arrived at someone's office, or confirming whether it's OK to substitute 2% for whole milk on a grocery run.

But if you're like me – if you've figured out how to do as many of the things that matter to you as you can possibly squeeze in – then getting an IM mid-flow is like someone walking up to a juggler who's working on a live chainsaw, a bowling ball, and a machete and tossing him a watermelon while shouting, "Hey, catch this!"

The problem is that if you are asking about something important, something that can't be instantaneously managed by the recipient, then they will have to drop everything they're doing and, at the very least, make a note to themselves to go back to your message later and deal with it. Instant messaging doesn't have an inbox with everything you've been sent. Of course, that's why people love it. But the fact that you can't see all the things other people are expecting you to answer doesn't mean that they aren't expecting it. It also doesn't mean that everything will be fine if you just ignore all those messages.

Instant messaging is a great tool for managing something that everyone is doing at the same time. It's also a nice way to keep an ambient social flow of updates from people in a rocking groupchat. But IM is fundamentally unserious. It is antithetical to the project of making a conscious decision about what you won't do, so that you do as many of the things that matter to you before you get to the end of the road.

A massive email inbox is intimidating, but switching to IMs doesn't make all the demands in the email go away. It just puts them out of sight until they either expire or explode. Far better to decide what balls you're going to drop than to have them knocked out of your hand by a fast-moving watermelon.

(Image: Mark James, CC BY 2.5, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Teresa Nielsen Hayden’s formal excommunication from the Latter Day Saints https://web.archive.org/web/20010203204300/http://www.panix.com/~pnh/GodandI.html

#20yrsago King Foundation uses copyright to suppress “I Have a Dream” speech https://www.washingtonpost.com/wp-dyn/content/article/2006/01/14/AR2006011400980.html

#20yrsago Firefly fans trying to raise enough dough to produce a new season https://web.archive.org/web/20060118033219/https://www.browncoatsriseagain.com/

#20yrsago New discussion draft of GNU General Public License is released https://gplv3.fsf.org/

#10yrsago “Late stage capitalism” is the new “Christ, what an asshole” https://x.com/mjg59/status/688238257935548416

#10yrsago Worried about Chinese spies, the FBI freaked out about Epcot Center https://www.muckrock.com/news/archives/2016/jan/14/fbi-epcot/

#10yrsago India’s Internet activists have a SOPA moment: no “poor Internet for poor people” https://www.theguardian.com/world/2016/jan/15/india-net-neutrality-activists-facebook-free-basics

#5yrsago Pelosi kicks Katie Porter off the Finance Committee https://pluralistic.net/2021/01/16/speaker-willie-sutton/#swampgator


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1141 words today, 8278 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

2026-01-16T04:59:54+00:00
Finished reading Storm Front
The Dresden Files series, book 1. 372 pages.
Completed January 15, 2026.
Note published on January 16, 2026 at 1:13 AM UTC

If anyone was still pretending this isn’t just about excluding trans kids from public life, the Trump administration just opened a Title IX investigation into a Maine school district because a trans student is on a co-ed cheerleading squad.

The U.S. Department of Education announced a slew of Title IX-related investigations this week that include 2 school districts in Maine.
Thu, 15 Jan 2026 20:31:40 +0000
Pluralistic: How the Light Gets In (15 Jan 2026)


Today's links



A wall with a crack running through it. Light is flooding through the crack. Circuit board traces are bleeding through the periphery of the wall.

How the Light Gets In (permalink)

Of all the tools that I use to maintain my equilibrium in these dark days, none is so important as remembering the distinction between happiness, optimism and hope.

Happiness is self-explanatory – and fleeting. Even in the worst of times, there are moments of happiness – a delicious meal with friends, a beautiful sunrise, a stolen moment with your love. These are the things we chase, and rightly so. But happiness is always a goal, rarely a steady state.

Optimism, on the other hand, is a toxin to be avoided. Optimism is a subgenre of fatalism, the belief that things will get better no matter what we do. It's just the obverse of pessimism. Both are ways of denying human agency. To be an optimist is to be a passenger of history, along for the ride, with no hope of changing its course.

But hope? That's the stuff. Hope is the belief that if we change the world for the better, even by just a little, we will ascend a gradient towards a better future, and as we rise up that curve, new terrain will be revealed to us that we couldn't see from our lower vantage-point. It's not necessary – or even possible – to see a course from here to the world you want to live in. You can get there in stepwise fashion, one beneficial change at a time:

https://pluralistic.net/2021/10/03/hope-not-optimism/

These days, I am often unhappy, but I am filled with hope.

A couple of weeks ago, I gave a speech, "The Post-American Internet," at the 39th Chaos Communication Congress in Hamburg:

https://pluralistic.net/2026/01/01/39c3/#the-new-coalition

In that talk, I laid out the case for hope. So many of the worst aspects of modern life can be traced to our enshittified technology, from mass surveillance and totalitarian control to wage suppression and conspiratorial cults. This enshittified technology, in turn, is downstream of policy decisions made by politicians who were bullied into their positions by the US trade rep, who used the threat of tariffs to push for laws that protected the right of tech giants to plunder the world's money and data, by criminalizing competitors who disenshittified their products, leaving technology users defenseless.

Trump's tariffs have effectively killed that threat. If you can't tell from day to day – let alone year to year – whether the US will accept your exports, you can't rely on exporting to the USA. What's more, generations of pro-oligarch policies have stripped America's bottom 90% of discretionary income, stagnating their wages and leaving them mired in health, education, and housing debt (even as the system finds ever more sadistic and depraved ways for arm-breakers to collect on that debt):

https://pluralistic.net/2025/12/16/k-shaped-recovery/#disenshittification-nations

This is terrible for Americans, but when life gives you SARS, you make sarsaparilla. With the decline of the US market for global exporters, there's finally political space to stop worrying about tariffs and reconsider anti-circumvention laws, to create "disenshittification nations" that stage raids on the most valuable lines of business of the most profitable companies in world history – Big Tech:

https://pluralistic.net/2026/01/13/not-sorry/#mere-billions

People who dream of turning American tech trillions into their own billions are powerful allies in the fight against enshittification, but they're only one group that we can recruit to our side. There's another powerful bloc waiting in the wings: national security hawks.

These people are rightly terrified that Trump will order his tech companies to switch off their governments, businesses and households, all of which are dependent on US cloud-based administrative software for email, document creation and archiving, databases, and mobile devices. Trump's tech companies could also brick any nation's mobile phones, medical devices, cars, and tractors.

It's the same risk that China hawks warned of when it looked as though Huawei would provide all of the world's 5G infrastructure: allow companies that are absolutely beholden to an autocrat who is not restrained by the rule of law to permeate your society, and your society becomes a prisoner to the autocrat's whims and goodwill.

A coalition of digital rights activists; investors and entrepreneurs; and national security hawks makes for a powerful bloc indeed. Each partner in the coalition can mobilize different constituencies and can influence different parts of the state. These are very different groups, and that's why this coalition is so exciting: this is a three-pronged assault on the hegemony of Big Tech.

That's not to say that this will automatically happen. Nothing happens automatically. Fuck pessimism, and fuck optimism, too. Things happen because people do stuff:

https://pluralistic.net/2021/10/17/against-the-great-forces-of-history/

That's where hope comes in. The door to a better technological future has been slammed shut and triple-locked for 25 years. Today, it is open a crack. A crack isn't much, but as Leonard Cohen taught us, "that's how the light gets in":

https://genius.com/Leonard-cohen-anthem-lyrics

Understand: this isn't a bet on politicians discovering heretofore unsuspected wellsprings of courage or principle. This is a bet on politicians confronting unstoppable political will that corners them into doing the right thing.

I understand why Europeans, Canadians and Britons might feel cynical about their political classes (to say nothing of Americans, of course). It has been decades since a political party delivered broad, structural change that improved the lives of everyday people. Instead, we've had generations of neoliberal austerity sadists, autocrats and corrupt dolts who've helped billionaires stripmine our civilization and set the world on fire.

But politics have changed before, and they can change again (note that I didn't say they will change – just that they can, because we can change them). Society may feel deadlocked, but crises precipitate change. As I said in my Hamburg speech, the EU went from 15 years behind in its solar transition to 10 years ahead, in just a few years, thanks to the energy crisis that slammed into the continent after Putin invaded Ukraine.

Crises precipitate change. The fact that the EU pivoted so quickly away from fossil fuels to solar is nothing short of a miracle. Anyone who feels like their politicians would never buck Big Tech needs to explain how it came to pass that these politicians just told Big Oil to fuck off. The fossil fuel industry is losing. This is goddamned wild – indeed, their loss might just be locked in at this point, because fossil fuel and its applications (like internal combustion) are now more expensive and more impractical than the cleantech alternatives:

https://pluralistic.net/2025/10/02/there-goes-the-sun/#carbon-shifting

Sure, it sucks that Trump has killed incentives to drive an EV and that the EU is dropping its goal for phasing out internal combustion engines, but given that EVs are faster, cheaper and better than conventional automobiles, the writing is on the wall for the IC fleet.

That's the wild thing about better technology: people want it, and they get pissed off when they're told they can't have it. When the Texas legislature tried to pass a law requiring that power companies add a watt of fossil-fuel generation capacity for every watt of solar they brought online, Trump-voting farmers and ranchers from the deepest red parts of Texas (Texas!!) flooded town halls and hearings, demanding an end to "DEI for natural gas":

https://billmckibben.substack.com/p/for-reality

They won.

Politics aren't just terrible today, they're in chaos. Crises precipitate change.

After World War II, one of Britain's two parties, the Liberals (AKA "Whigs"), imploded. With them out of the way, the Labour Party rose to power, with a transformative agenda backed by a mass movement, which created the British welfare state.

Today, the British Conservative Party (AKA "Tories") are also imploding, and look set to be taken over by a fascist MAGA-alike party, Reform. As of a couple of months ago, that seemed like very bad news, since Labour is also set to implode, thanks to Prime Minister Keir Starmer's austerity, authoritarianism, corruption and cowardice. For quite a while, it looked like when Starmer's Labour was totally wiped out in the next election, it would give way to Reform, plunging Britain into Hungarian- (or American-) style autocracy.

But all that has changed. Today, the UK Greens have a new leader, Zack Polanski, who has dragged the Greens into an agenda that promises transformations as bold as the ones that remade the country under Clement Attlee's Labour government. Polanski is a fantastic campaigner, and he is committed to the same kind of grassroots co-governance with a mass movement that characterized Zohran Mamdani's historic NYC mayoral campaign.

In other words, it seems like both of Britain's sclerotic mainstream parties will be wiped out in the next election, and the real fight in the UK is between two transformative upstart parties: one plans to spend billionaires' dark money to mobilize fascists yearning for ethnic cleansing, while the other wants a fair, prosperous and equitable society where we abolish billionaires, confront the climate emergency, and smash corporate power. The UK is heading into an election in which voters have a choice that's more meaningful than Coke vs Pepsi.

Versions of this are playing out around the world. Anti-billionaire policies have surfaced time and again, everywhere, since the late 2010s:

https://pluralistic.net/2025/06/28/mamdani/#trustbusting

None of this means that we will automatically win. I'm not asking you to be an optimist here, but I am demanding that you have hope. Hope is a discipline: it requires that you tirelessly seek out the best ways to climb up that gradient toward a better world, trusting that as you attain higher elevation, you will find new paths up that slope.

The door is open a crack. Now isn't the time to complain that it isn't open wider – now's the time to throw your shoulder against it.

(Image: Joe Mabel, CC BY 3.0)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Journal of a homeless woman in San Francisco: witty, articulate, pregnant, and addicted to heroin https://web.archive.org/web/20010124050200/https://www.thematrix.com/~sherrod/diary.html

#20yrsago Study: how Canadian copyright law is bought by entertainment co’s https://web.archive.org/web/20060207141159/http://www.michaelgeist.ca/index.php?option=com_content&task=view&id=1075

#20yrsago My Toronto Star editorial about Hollywood’s Member of Parliament https://web.archive.org/web/20060616024225/http://www.thestar.com/NASApp/cs/ContentServer?pagename=thestar/Layout/Article_Type1&call_pageid=971358637177&c=Article&cid=1137279034770

#10yrsago Aaron Swartz’s “Against School” – business leaders have been decrying education since 1845 https://newrepublic.com/article/127317/school

#10yrsago Yosemite agrees to change the names of its significant locations to appease trademark troll https://www.outsideonline.com/outdoor-adventure/environment/yosemite-rename-several-iconic-places/?scope=anon

#10yrsago Bernie Sanders support soars among actual voters, if not Democratic Party power-brokers https://www.theguardian.com/commentisfree/2016/jan/14/bernie-sanders-is-winning-with-the-one-group-his-rivals-cant-sway-voters

#5yrsago Tesla's valuation is 1600x its profitability https://pluralistic.net/2021/01/15/hoover-calling/#intangibles

#5yrsago Disneyland kills annual passes https://pluralistic.net/2021/01/15/hoover-calling/#disney-dash

#5yrsago Machine learning is a honeypot for phrenologists https://pluralistic.net/2021/01/15/hoover-calling/#phrenology

#5yrsago Yugoslavia's Cold War obsession with Mexican music https://pluralistic.net/2021/01/15/hoover-calling/#yu-mex

#5yrsago I was investigated by the FBI https://pluralistic.net/2021/01/15/hoover-calling/#g-man

#5yrsago Facebook says it's the best henhouse fox https://pluralistic.net/2021/01/15/hoover-calling/#hens-need-foxes

#5yrsago Laura Poitras fired from First Look https://pluralistic.net/2021/01/15/hoover-calling/#poitras


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1058 words today, 7122 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

2026-01-14T18:39:19+00:00
Read "How Prediction Markets Turned Life Into a Dystopian Gambling Experiment":
You’re sitting in your living room trying to make a few bucks by guessing the date Israel will next strike Lebanon. Meanwhile, someone with inside knowledge of that date is planning to use it to take your money. Meanwhile, the prediction markets are taking a cut of the transaction and using it to buy lobbyists to keep oversight down, brand partnerships to make them look legitimate, and advertising to keep you gambling. Meanwhile, someone in Lebanon is sitting in their apartment hoping their building doesn’t explode. 
Wed, 14 Jan 2026 16:30:37 +0000
Pluralistic: It's not normal (14 Jan 2026)


Today's links



A 1790 illustration entitled 'The World Turned Upside-Down.' It features a topsy-turvy nature scene in which a giant fish stands on the bank, fishing for a human who is gasping in the water. The sky is filled with flying fishes and eels, the sea is filled with swimming birds. The image has been hand-tinted. The background has been replaced with a printed circuit board.

It's not normal (permalink)

Samantha: This town has a weird smell that you're all probably used to…but I'm not.

Mrs Krabappel: It'll take you about six weeks, dear.

-The Simpsons, "Bart's Friend Falls in Love," S3E23, May 7, 1992

We are living through weird times, and they've persisted for so long that you probably don't even notice it. But these times are not normal.

Now, I realize that this covers a lot of ground, and without detracting from all the other ways in which the world is weird and bad, I want to focus on one specific and pervasive and awful way in which this world is not normal, in part because this abnormality has a defined cause, a precise start date, and an obvious, actionable remedy.

6 years, 5 months and 21 days after Fox aired "Bart's Friend Falls in Love," Bill Clinton signed a new bill into law: the Digital Millennium Copyright Act of 1998 (DMCA).

Under Section 1201 of the DMCA, it's a felony to modify your own property in ways that the manufacturer disapproves of, even if your modifications accomplish some totally innocuous, legal, and socially beneficial goal. Not a little felony, either: DMCA 1201 provides for a five-year sentence and a $500,000 fine for a first offense.

Back when the DMCA was being debated, its proponents insisted that their critics were overreacting. They pointed to the legal barriers to invoking DMCA 1201, and insisted that these new restrictions would only apply to a few marginal products in narrow ways that the average person would never even notice.

But that was obvious nonsense, obvious even in 1998, and far more obvious today, more than a quarter-century on. In order for a manufacturer to criminalize modifications to your own property, they have to satisfy two criteria: first, they must sell you a device with a computer in it; and second, they must design that computer with an "access control" that you have to work around in order to make a modification.

For example, say your toaster requires that you scan your bread before it will toast it, to make sure that you're only using a special, expensive kind of bread that kicks back a royalty to the manufacturer. If the embedded computer that does the scanning ships from the factory with a program that is supposed to prevent you from turning off the scanning step, then it is a felony to modify your toaster to work with "unauthorized bread":

https://arstechnica.com/gaming/2020/01/unauthorized-bread-a-near-future-tale-of-refugees-and-sinister-iot-appliances/

If this sounds outlandish, then a) You definitely didn't walk the floor at CES last week, where there were a zillion "cooking robots" that required proprietary feedstock; and b) You haven't really thought hard about your iPhone (which will not allow you to install software of your choosing):

https://pluralistic.net/2024/01/12/youre-holding-it-wrong/#if-dishwashers-were-iphones

But back in 1998, computers – even the kind of low-powered computers that you'd embed in an appliance – were expensive and relatively rare. No longer! Today, manufacturers source powerful "System on a Chip" (SoC) processors at prices ranging from $0.25 to $8. These are full-fledged computers, easily capable of running an "access control" that satisfies DMCA 1201.

Likewise, in 1998, "access controls" (also called "DRM," "technical protection measures," etc) were a rarity in the field. That was because computer scientists broadly viewed these measures as useless. A determined adversary could always find a way around an access control, and they could package up that break as a software tool and costlessly, instantaneously distribute it over the internet to everyone in the world who wanted to do something that an access control impeded. Access controls were a stupid waste of engineering resources and a source of needless complexity and brittleness:

https://memex.craphound.com/2012/01/10/lockdown-the-coming-war-on-general-purpose-computing/

But – as critics pointed out in 1998 – chips were obviously going to get much cheaper, and if the US Congress made it a felony to bypass an access control, then every kind of manufacturer would be tempted to add some cheap SoCs to their products so they could add access controls and thereby felonize any uses of their products that cut into their profits. Basically, the DMCA offered manufacturers a bargain: add a dollar or two to the bill of materials for your product, and in return, the US government will imprison any competitors who offer your customers a "complementary good" that improves on it.

It's even worse than this: another thing that was obvious in 1998 was that once a manufacturer added a chip to a device, they would probably also figure out a way to connect it to the internet. Once that device is connected to the internet, the manufacturer can push software updates to it at will, which will be installed without user intervention. What's more, by using an access control in connection with that over-the-air update mechanism, the manufacturer can make it a felony to block its updates.

Which means that a manufacturer can sell you a device and then mandatorily update it at a later date to take away its functionality, and then sell that functionality back to you as a "subscription":

https://pluralistic.net/2022/10/28/fade-to-black/#trust-the-process

A thing that keeps happening:

https://www.theverge.com/2024/7/20/24202166/snoo-premium-subscription-happiest-baby

And happening:

https://www.eff.org/deeplinks/2020/11/ink-stained-wretches-battle-soul-digital-freedom-taking-place-inside-your-printer

And happening:

https://pluralistic.net/2024/05/24/record-scratch/#autoenshittification

In fact, it happens so often I've coined a term for it, "The Darth Vader MBA" (as in, "I'm altering the deal. Pray I don't alter it any further"):

https://pluralistic.net/2025/09/01/fulu/#i-am-altering-the-deal

Here's what this all means: any manufacturer who devotes a small amount of engineering work and incurs a small hardware expense can extinguish private property rights altogether.

What do I mean by private property? Well, we can look to Blackstone's 1753 treatise:

The right of property; or that sole and despotic dominion which one man claims and exercises over the external things of the world, in total exclusion of the right of any other individual in the universe.

You can't own your iPhone. If you take your iPhone to Apple and they tell you that it is beyond repair, you have to throw it away. If the repair your phone needs involves "parts pairing" (where a new part won't be recognized until an Apple technician "initializes" it through a DMCA-protected access control), then it's a felony to get that phone fixed somewhere else. If Apple tells you your phone is no longer supported because they've updated their OS, then it's a felony to wipe the phone and put a different OS on it (because installing a new OS involves bypassing an "access control" in the phone's bootloader). If Apple tells you that you can't have a piece of software – like ICE Block, an app that warns you if there are nearby ICE killers who might shoot you in the head through your windshield, which Apple has barred from its App Store on the grounds that ICE is a "protected class" – then you can't install it, because installing software that isn't delivered via the App Store involves bypassing an "access control" that checks software to ensure that it's authorized (just like the toaster with its unauthorized bread).

It's not just iPhones: versions of this play out in your medical implants (hearing aid, insulin pump, etc); appliances (stoves, fridges, washing machines); cars and ebikes; set-top boxes and game consoles; ebooks and streaming videos; small appliances (toothbrushes, TVs, speakers), and more.

Increasingly, things that you actually own are the exception, not the rule.

And this is not normal. The end of ownership represents an overturn of a foundation of modern civilization. The fact that the only "people" who can truly own something are the transhuman, immortal colony organisms we call "Limited Liability Corporations" is an absolutely surreal reversal of the normal order of things.

It's a reversal with deep implications: for one thing, it means that you can't protect yourself from raids on your private data or ready cash by adding privacy blockers to your device, which would make it impossible for airlines or ecommerce sites to guess about how rich/desperate you are before quoting you a "personalized price":

https://pluralistic.net/2025/12/11/nothing-personal/#instacartography

It also means you can't stop your device from leaking information about your movements, or even your conversations – Microsoft has announced that it will gather all of your private communications and ship them to its servers for use by "agentic AI":

https://www.youtube.com/watch?v=0ANECpNdt-4

Microsoft has also confirmed that it provides US authorities with warrantless, secret access to your data:

https://www.forbes.com/sites/emmawoollacott/2025/07/22/microsoft-cant-keep-eu-data-safe-from-us-authorities/

This is deeply abnormal. Sure, greedy corporate control freaks weren't invented in the 21st century, but the laws that let those sociopaths put you in prison for failing to arrange your affairs to their benefit – and your own detriment – are.

But because computers got faster and cheaper over decades, the end of ownership has had an incremental rollout, and we've barely noticed that it's happened. Sure, we get irritated when our garage-door opener suddenly requires us to look at seven ads every time we use the app that makes it open or close:

https://pluralistic.net/2023/11/09/lead-me-not-into-temptation/#chamberlain

But societally, we haven't connected that incident to this wider phenomenon. It stinks here, but we're all used to it.

It's not normal to buy a book and then not be able to lend it, sell it, or give it away. Lending, selling and giving away books is older than copyright. It's older than publishing. It's older than printing. It's older than paper. It is fucking weird (and also terrible) (obviously) that there's a new kind of very popular book that you can go to prison for lending, selling or giving away.

We're just a few cycles away from a pair of shoes that can figure out which shoelaces you're using, or a dishwasher that can block you from using third-party dishes:

https://www.theguardian.com/technology/2015/feb/13/if-dishwashers-were-iphones

It's not normal, and it has profound implications for our security, our privacy, and our society. It makes us easy pickings for corporate vampires who drain our wallets through the gadgets and tools we rely on. It makes us easy pickings for fascists and authoritarians who ally themselves with corporate vampires by promising them tax breaks in exchange for collusion in the destruction of a free society.

I know that these problems are more important than whether or not we think this is normal. But still. It. Is. Just. Not. Normal.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#15yrsago Belarusian mobile operators gave police list of demonstrators https://charter97.org/en/news/2011/1/12/35161/

#15yrsago Threatened library gets its patrons to clear the shelves https://www.theguardian.com/books/2011/jan/14/stony-stratford-library-shelves-protest

#15yrsago Canadian regulator smacks Rogers for Net Neutrality failures https://web.archive.org/web/20110116044741/https://www.michaelgeist.ca/content/view/5574/125/

#10yrsago A day in the life of a public service serial killer’s intern https://web.archive.org/web/20160116122141/https://motherboard.vice.com/read/the-killing-jar

#10yrsago How an obsessive jailhouse lawyer revealed the existence of Stingray surveillance devices https://www.theverge.com/2016/1/13/10758380/stingray-surveillance-device-daniel-rigmaiden-case

#10yrsago The Internet of Things in Your Butt: smart rectal thermometer https://web.archive.org/web/20160116182024/https://motherboard.vice.com/read/this-rectal-thermometer-is-the-logical-conclusion-of-the-internet-of-things

#10yrsago UK Home Secretary auditions for a Python sketch: “UK does not undertake mass surveillance” https://web.archive.org/web/20160114224805/https://motherboard.vice.com/read/the-uk-does-not-undertake-mass-surveillance-says-uk-home-secretary

#10yrsago US Treasury Dept wants to know which offshore crimelords are buying all those NYC and Miami penthouses https://www.csmonitor.com/USA/USA-Update/2016/0113/Are-luxury-condo-purchases-hiding-dirty-money

#5yrsago Facebook shows mall ninja gear ads on insurrection articles https://pluralistic.net/2021/01/14/10-point-program/#monetizing

#5yrsago The Black Panther self-care method https://pluralistic.net/2021/01/14/10-point-program/#panthers


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1001 words today, 6053 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

2026-01-14T03:45:22+00:00 Fullscreen Open in Tab
Note published on January 14, 2026 at 3:45 AM UTC
Tue, 13 Jan 2026 17:27:22 +0000 Fullscreen Open in Tab
Pluralistic: Sorry, eh (13 Jan 2026)


Today's links



A Canadian flag, its elements replaced with circuit boards. In the foreground, a bent-double, exhausted Uncle Sam trudges over rocky terrain, shlepping a giant sack on his back. Centered in the maple leaf is the word SORRY.

Sorry, eh (permalink)

Like all the best Americans, I'm Canadian, and while I have lived abroad for most of this century, I still hew faithfully to our folkways, which is why I'd like to start this essay by apologizing.

I'm sorry.

I'm sorry! I'm a technology writer, which means I'm supposed to be encouraging you to throw hundreds of billions of dollars at the money-losingest technology in human history, AI. No one has ever lost as much money as the AI companies.

There is no way to operate one of Nvidia's big AI-optimized GPUs without losing money. The owners of these GPUs who have lost the least money are the ones who rushed into buying GPUs without ensuring they'd have electricity to power them, and have been forced to leave their GPUs to age in warehouses. The minute they plug in those GPUs, they'll start losing money, and the more they use them, the more money they'll lose.

I'm sorry. As a technology writer, I'm supposed to be telling you that this bet will some day pay off, because one day we will have shoveled so many words into the word-guessing program that it wakes up and learns how to actually do the jobs it is failing spectacularly at today. This is a proposition akin to the idea that if we keep breeding horses to run faster and faster, one of them will give birth to a locomotive. Humans possess intelligence, and machines do not. The difference between a human and a word-guessing program isn't how many words the human knows.

I'm sorry. I know that when we talk about "digital sovereignty," we're obliged to talk about how we can build more data-centres that we can fill up with money-losing chips from American silicon monopolists in the hopes of destroying as many jobs as possible while blowing through our clean energy goals and enshittifying as much of our potable water as possible.

I don't have any advice for how to do that. I'm sorry!

As Canada contemplates our response to the collapse of the American empire and its alliances with the world, the cornerstone of our current strategy is sacrificing our dollars, water and energy in order to become more dependent on America, in a weird and improbable bet that we will figure out how to make millions of Canadians unemployed. I'm sorry, that just doesn't sound like a great idea to me.

If I can beg your indulgence, I'd like to propose an alternative.

Back in 2012, Canada passed Bill C-11, the Copyright Modernization Act. It's a law that bans Canadian companies from modifying America's digital tech exports. We passed it because the US threatened us with tariffs:

https://pluralistic.net/2025/05/08/who-broke-the-internet/#bruce-lehman

Thanks to Bill C-11, a Canadian company can't sell jailbreaking kits for phones and consoles, which would let Canadian sellers offer goods and services to Canadian buyers outside of US app stores, sidestepping the 30% app tax that Apple, Google, Microsoft, Sony and others impose on our digital economy.

Thanks to Bill C-11, a Canadian company can't sell mechanics a universal diagnostic tool that turns every "check engine" light into a useful error message. Instead, Canadian mechanics have to send $10,000/year/manufacturer to America for a proprietary car diagnosis kit.

Thanks to Bill C-11, a Canadian company can't offer ink cartridge manufacturers software that will ensure their cartridges work in the printers Canadians buy from the American inkjet cartel. As a result, Canadians have to spend $10,000/gallon on ink, making it the most expensive fluid a Canadian civilian can purchase without a government permit.

Thanks to Bill C-11, a Canadian company can't sell our farmers software that lets them start using their tractors as soon as they've fixed them. Instead, after a Canadian farmer fixes their tractor, they have to wait for a service call from a rep for a US ag-tech monopolist who'll type an unlock code into the tractor's keyboard and charge the farmer a couple hundred bucks for this "service."

Thanks to Bill C-11, a Canadian company can't revive one of the most successful technologies in modern history: the home video recorder. Remember those? First we had VCRs, then we had digital successors like the Tivo. Canadian law says you're allowed to record the video that comes into your home, whether by broadcast, cable, satellite or streaming. But Bill C-11 bans a Canadian company from selling you a gadget that lets you save the video you get in an app or from a set-top box.

It's crazy: we have actually uninvented the VCR! You know how everyone is pissed off about their favourite shows being yanked from the streaming services? Repeal C-11 and you could just save those shows forever. Repeal C-11 and you'd kill the grinchy little racket that services like Prime pull, where Christmas cartoons are in the free tier from March to November, and cost $3.99 to watch between November and March. Just tape 'em in August and save 'em for later!

It doesn't stop there. Remember when Facebook banned all links to the news in Canada? Repeal C-11 and a Canadian company could sell you an alternative Facebook app that puts the news back into your feed! Repeal C-11 and Canadians could get an alternative app that replaces all the streaming services, letting you search and stream every service you have an account for in one place, mixing in Canadian content from the NFB, public broadcasters, and commercial services.

Virtually every Canadian ministry, corporation and household is locked into a US Big Tech silo. Any of these could be shut down at a single word from Trump to any of the tech giants who've lined up to do his bidding. Repeal C-11 and we can extract all our data from these walled gardens/prisons and get it onto auditable, trustworthy, transparent open source software, hosted in data-centres located safely on Canadian soil.

If there's one thing Canadians are good at, it's going to other countries and extracting their wealth. We're world champions at it.

America's tech monopolies have sequestered trillions of dollars worth of monopoly rents on their balance sheets. This is dead capital, being pissed up the wall on nonsense like stock buybacks and data-centres and grotesque executive bonuses.

As Jeff Bezos said to the publishers: "Your margin is my opportunity."

America's tech trillions represent a rich and readily accessible seam that we can extract – safely, from our own country! – and turn into our billions, and an exportable line of products that the whole world would beat a path to our door to buy.

Look, I'm sorry. I don't have any ideas for how Canada can get to a better future by lighting billions on fire in a bet on a failing technology whose dubious profitability depends on ruining our job market, our power grid and our water supply, which will tie the American political situation to our ankles.

All I've got is an idea for how we can make insanely profitable products that people really want to buy, that will insulate us from cyberattacks by US tech giants who are in thrall to Trump, and that Americans will pay us to use in order to free themselves from the tech giants who abuse them, too.

I'm really sorry. I know it's out of step with the times, but all I have is ideas that make money, make us safer, make us richer, and make our technology better.

On the other hand, those chatbots sure are cute. It's funny when they "hallucinate."


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Hey, Mark made me a guest editor! https://memex.craphound.com/2001/01/13/hey-mark-made-me-a/

#15yrsago Woz on Network Neutrality https://www.theatlantic.com/technology/archive/2010/12/steve-wozniak-to-the-fcc-keep-the-internet-free/68294/

#15yrsago Disney World’s awful Tiki Room catches fire https://web.archive.org/web/20110116093950/http://thedisneyblog.com/2011/01/12/fire-reported-at-magic-kingdom-tiki-room/

#10yrsago For the first time in 15 years, there’s a new Violent Femmes album https://www.npr.org/sections/allsongs/2016/01/13/462656061/hear-a-song-from-violent-femmes-first-album-in-15-years

#10yrsago 3D Systems abandons its Cube printers, but DRM means you can’t buy filament from anyone else https://michaelweinberg.org/post/137045828005/free-the-cube

#10yrsago Why Moveon endorsed Bernie Sanders https://medium.com/middle-of-nowhere-center-of-everything/the-top-5-reasons-moveon-members-voted-to-endorse-bernie-with-the-most-votes-and-widest-margin-in-78c2e69990ec#.py5rdi9xc

#10yrsago Sneak-privatization of public schools: attacking teachers, unions and standards https://web.archive.org/web/20160112065749/https://www.washingtonpost.com/news/answer-sheet/wp/2016/01/07/a-primer-on-the-damaging-movement-to-privatize-public-schools/

#10yrsago Income inequality makes the 1% sad, too https://hbr.org/2016/01/income-inequality-makes-whole-countries-less-happy

#5yrsago Will Biden bust trusts? https://pluralistic.net/2021/01/13/two-decades/#thanks-obama

#5yrsago 20 years a blogger https://pluralistic.net/2021/01/13/two-decades/#hfbd


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1037 words today, 5059 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

Mon, 12 Jan 2026 16:01:00 +0000 Fullscreen Open in Tab
Pluralistic: A winning trade war strategy for Canada (11 Jan 2026)


Today's links



A turn of the century Main Street, USA. Over the horizon looms a giant Canadian flag, made out of circuitry. In the foreground is a pixelboard sign reading 'U.S. BORDER CLOSED.'

A winning trade war strategy for Canada (permalink)

As the great Canadian philosopher Keanu Reeves averred in the 1994 public transportation documentary Speed, sometimes the winning move is to shoot the hostage.

That is: when your adversary has trapped you in a deadlock situation where neither of you can win, the winning move is to stop playing the game – rather, change the rules, and a bouquet of new moves will bloom.

Trump thinks he has Canada cornered, but we have a hell of a winning move. Unfortunately, we're not making it (yet).

Thus far, Canada's response to Trump's tariffs has been tit for tat: retaliatory tariffs. America smacked Canada's exports with tariffs, so Canada smacked the goods we import from the US with tariffs, too. This means that everything we buy in Canada is more expensive, which is certainly one way to punish Trump! It's like punching yourself in the face as hard as you can and waiting for the downstairs neighbour to say "ouch!"

https://pluralistic.net/2025/01/15/beauty-eh/#its-the-only-war-the-yankees-lost-except-for-vietnam-and-also-the-alamo-and-the-bay-of-ham

Not only are retaliatory tariffs bad for Canadians, they're also bad for the Americans who are also suffering under Trump. Rather than fostering an alliance with Americans against our common enemy – America's oligarchs and their god-king Trump – Canadians have declared war on all of our American cousins.

Take the decision to eschew delicious American bourbon and switch to Wayne Gretzky's undrinkable rye. Somewhere in a state that begins and ends with a vowel, there is a corn farmer who never did anything to hurt Canada who's suffering as a result of this decision. We get shitty booze, and he can't afford to make payments on his tractor. Everyone loses!

Now, it's a funny thing about that tractor. Chances are, it's made by John Deere, a rapacious ag-tech monopolist that bought out all its competitors and now screws farmers in every imaginable way. One particularly galling scam is how John Deere handles repair. Farmers typically repair their own tractors. After all, a tractor is a business-critical machine with a lot of moving parts that can fail in a million ways.

But after the farmer fixes their tractor, it will not work until they pay John Deere to send a technician to their farm to type an unlock code in the tractor's keyboard. This is a totally superfluous step, inserted solely to allow Deere to rip off their customers. Farmers have been fixing their own farm implements since the first plow – after all, when you need to bring the crops in and the storm is coming, you can't wait for a service call at the end of your lonely country road – but John Deere has declared the end of history. In John Deere's world, farmers can only use their tractors when an ag-tech monopolist says they can:

https://pluralistic.net/2022/08/15/deere-in-headlights/#doh-a-deere

No farmer wants this anti-feature in their tractor. In a normal world, someone would go into business selling farmers a kit to disable it. After all, this is all accomplished with software, and software is infinitely flexible. Every computable program can be executed on every computer. John Deere installed a 10-foot pile of shit in its tractor software, so someone else could go into business shipping 11-foot ladders made out of software that can be delivered instantaneously to anyone in the world with an internet connection and a payment method.

But we don't live in a normal world. We live in a fundamentally broken world. It's been broken since 1998, when Bill Clinton signed a law called the "Digital Millennium Copyright Act" (DMCA). Section 1201 of the DMCA establishes a felony, punishable by a 5-year sentence and a $500k fine, for anyone who "bypasses an access control" on a digital system. This means that if John Deere designs its tractors to ensure that incoming instructions are authorized by the company (say, a manufacturer's password that needs to be entered before you can update the software), then it is a felony to bypass that check. When John Deere puts one of these access controls in its tractor, it conjures up a new felony out of thin air, making it a literal crime for a farmer to modify their own tractor to work the way they want it to. It's what Jay Freeman calls "felony contempt of business model."
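
For concreteness, here's a toy Python sketch of what one of those "access controls" amounts to. Everything in it is hypothetical: the secret, the code format and the function names are invented, and this is emphatically not any real vendor's firmware. The gate itself is trivial; the felony you commit by stepping around it is the point.

    # Hypothetical sketch of a manufacturer "unlock code" gate. Not any real
    # vendor's firmware. After a repair, the machine refuses to run until a code
    # derived from a secret only the manufacturer holds is typed in.
    import hashlib
    import hmac

    MANUFACTURER_SECRET = b"dealer-network-only"  # assumption: a key the OEM keeps to itself

    def unlock_code(part_serial: str) -> str:
        """The code a visiting technician would read off their dealer tool."""
        digest = hmac.new(MANUFACTURER_SECRET, part_serial.encode(), hashlib.sha256)
        return digest.hexdigest()[:8]

    def machine_will_run(part_serial: str, code_entered: str) -> bool:
        """The 'access control': a trivial check, but bypassing it is the crime."""
        return hmac.compare_digest(code_entered, unlock_code(part_serial))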

The US isn't the only country with a law like this – far from it! At the very instant Bill Clinton signed the DMCA, the US Trade Rep sent officials all over the world to bully America's trading partners into enacting their own version of this law, threatening them with tariffs unless they changed their national laws to make it a crime to fix the broken technology America shipped around the globe.

Which brings me back to Canada's retaliatory tariffs, those self-punishing, indiscriminate, ally-alienating tits-for-tat.

Canada presented no more of a challenge for the bullying US Trade Rep than any of those other countries. In 2012, two of Stephen Harper's ministers, James Moore and Tony Clement, rammed a carbon copy of DMCA 1201 through Parliament: Bill C-11, the Copyright Modernization Act:

https://pluralistic.net/2025/05/08/who-broke-the-internet/#bruce-lehman

C-11 was incredibly unpopular. Three earlier attempts to pass a law like this had failed, and in the end, Clement and Moore had to ignore their own consultation results and dismiss the thousands of respondents who wrote in to object to the bill as "babyish…radical extremists."

Harper, Clement and Moore whipped C-11 through Parliament because the US Trade Rep threatened them with tariffs unless they did so, and promised them tariff-free access to the US if they toed the line. Now that Trump has whacked Canada with tariffs, Canada should wipe this law off its books.

There are so many good domestic reasons to do this. Without C-11, Canadian companies could defend their fellow Canadians from American data-theft and cash ripoffs by making alternative clients, jailbreaks, and other add-ons that disenshittify America's defective tech:

https://pluralistic.net/2026/01/10/markets-are-regulations/#carney-found-a-spine

But today, I want to focus on how repealing C-11 would benefit America. You see, America's businesses – large and small – are victims of Big Tech's extraction. The Big Five publishers get screwed by Amazon, as do all the little indie publishers. Every games company gets screwed by Apple and Google, who suck 30 cents out of every dollar their customers spend in an app. Same goes for console games companies, who pay a 30% tax on every dollar they make on Xbox, Nintendo or Playstations (the exception, of course, is the games companies owned by Microsoft, Sony and Nintendo, who don't pay the 30% tax and can therefore always outcompete the independents).

Merchants who sell on Amazon pay a 50-60% junk fee tax. Businesses large and small are locked into cloud products from Microsoft, Oracle, and Google who are training their AIs on their corporate customers' proprietary data. Health providers are locked into Epic, the giant electronic health record monopolist, whose abuses are the stuff of legend:

https://pluralistic.net/2024/10/02/upcoded-to-death/#thanks-obama

Many (if not all) of these scams could be mitigated with new code. For example, anyone stuck paying the app taxes could offer mobile phone and console owners jailbreaks that install third-party app stores, and then offer discounts to anyone who uses them – if you're saving 30% on every payment, you can split those savings with your customers.
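
Here's the back-of-the-envelope arithmetic on that split, sketched in Python. The 30% figure is the app tax described above; the $10 price, the 3% payment-processing cost and the 50/50 split are illustrative assumptions, and the sketch ignores the tiny second-order effect of the card fee shrinking along with the price.

    # Back-of-the-envelope: a seller escapes the 30% app tax and splits the
    # savings with the customer. The specific dollar figures are assumptions.
    PRICE = 10.00
    APP_TAX = 0.30    # the platform's cut, per the text above
    CARD_FEE = 0.03   # assumed cost of ordinary payment processing

    net_via_store = PRICE * (1 - APP_TAX)    # $7.00 reaches the seller
    net_direct = PRICE * (1 - CARD_FEE)      # $9.70 reaches the seller
    freed_up = net_direct - net_via_store    # $2.70 per sale to play with

    discount = freed_up / 2                  # pass half back to the buyer
    print(f"Buyer pays ${PRICE - discount:.2f} instead of ${PRICE:.2f}; "
          f"seller nets ${net_via_store + freed_up - discount:.2f} instead of ${net_via_store:.2f}")

Both sides come out ahead; the only loser is the 30% toll booth.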

Merchants could list their products for sale directly on Amazon through app and website plugins, taking payment and fulfilling orders themselves:

https://pluralistic.net/2022/07/10/view-a-sku/

Performers and content creators could encourage their audiences to escape the platforms' inscrutable algorithms and install jailbroken apps that let users control their recommendations:

https://www.eff.org/deeplinks/2022/05/tracking-exposed-demanding-gods-explain-themselves

Social media startups could offer alt clients that let users who sign up see the messages posted by their friends on legacy platforms like Twitter and Facebook, and push replies to them:

https://pluralistic.net/2022/12/10/e2e/#the-censors-pen

Mechanics, farmers and repair depots who are locked out of diagnostics, who can't use generic parts, and can't initialize OEM parts without paying for a license could jailbreak their customers' devices for them and offer independent repair:

https://pluralistic.net/2023/09/22/vin-locking/#thought-differently

So think back to that corn farmer, currently wondering how to make tractor payments because Canadians are drinking Gretzky's shitty rye instead of delicious bourbon. Rather than pauperizing that blameless farmer, Canada could go into business selling him the tools to escape John Deere's rent-collecting repair racket, to extract all the soil condition data needed for precision agriculture, and to make use of competitors' front-ends (accessories that turn a tractor into a thresher or some other machine).

That farmer is getting screwed by Trump, just like Canadians. He's not a shareholder in Big Tech. He's not gonna be pissed off when Canada turns Big Tech's trillions into Canadian billions – not if he gets lower prices and more reliable technology as a result.

When I talk to Canadians about retaliating against the Trump tariffs by repealing our anti-jailbreaking law, they often express concern that this will make Trump even angrier at us. I mean, of course it will: literally anything that works will make Trump angry. I don't think that means we should only respond to the Trump tariffs with useless gestures.

If Canada goes into business rescuing Americans from their own tech companies, they will become our allies. If those companies depend on selling to the Canadian market to remain profitable, they will become our allies.

Trump is an autocrat, but he's not omnipotent. He's an old, sick man with white matter disease dementia who can't stay awake through a 10-minute briefing or remember what he was talking about from minute to minute. To pursue his agenda, he needs to hold his coalition together, and that's something he's getting progressively worse at as he slides towards his incipient death/permanent incapacity.

All Canada will get if it sticks with its current response to the tariffs is Gretzky's undrinkable novelty booze and the permanent enmity of American businesses. On the other hand, if Canada repeals its anti-circumvention law, we can make billions of dollars, destroy the profits of America's most important technological allies, liberate ourselves from America's defective technology, and forge a durable, powerful anti-Trump alliance with American firms who are preyed upon just as surely as Canadians are.

Let's shoot the hostage. Let's change the rules of the game. Let's break the deadlock. It's what Keanu would tell us to do.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Indie labels give free MP3s to customers who buy vinyl https://web.archive.org/web/20060111215100/https://www.eff.org/deeplinks/archives/004313.php

#20yrsago Hollywood’s Canadian politico lies about her approach to lawmaking https://web.archive.org/web/20110425163053/http://www.michaelgeist.ca/index.php?option=com_content&task=view&id=1071

#20yrsago Correcting the Record: Wikipedia vs The Register https://memex.craphound.com/2006/01/11/correcting-the-record-wikipedia-vs-the-register/

#20yrsago Hollywood’s MP denounces “users,” “EFF members” — video https://web.archive.org/web/20060323035434/http://accordionguy.blogware.com/blog/_archives/2006/1/12/1659162.html

#20yrsago My short-short story “Printcrime” in this week’s Nature magazine https://craphound.com/stories/2006/01/12/printcrime/#more

#15yrsago HOWTO teach your small children to swordfight https://reactormag.com/spec-fic-parenting-this-my-son-is-a-sword/

#15yrsago HOWTO make a secure, decentralized, human-readable name system http://www.aaronsw.com/weblog/squarezooko

#15yrsago Demon rug https://www.flickr.com/photos/missmonstermel/5346690831/in/photostream/

#15yrsago Jeff Koons claims to own all balloon dogs https://www.designboom.com/art/jeff-koons-can-one-copyright-a-balloon-animal/

#10yrsago Brewster Kahle remembers Aaron Swartz: “an open source life” https://www.aaronswartzday.org/brewster-sf-memorial/

#10yrsago Sympathetic Bernie Sanders profile in Bloomberg Businessweek https://www.bloomberg.com/features/2016-bernie-sanders-fundraising/

#10yrsago Internal documents from breathalyzer company Lifesaver dumped online https://web.archive.org/web/20160113075611/https://motherboard.vice.com/read/car-breathalyzer-company-gets-hacked-internal-docs-dumped-on-dark-web

#10yrsago How fraudsters’ call centers work https://krebsonsecurity.com/2016/01/a-look-inside-cybercriminal-call-centers/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+KrebsOnSecurity+(Krebs+on+Security)

#10yrsago Why all scientific diet research turns out to be bullshit https://fivethirtyeight.com/features/you-cant-trust-what-you-read-about-nutrition/?ex_cid=story-facebook

#10yrsago NSA says it will take four years to answer questions about its kids’ coloring book https://web.archive.org/web/20160114074709/https://motherboard.vice.com/read/the-nsa-told-me-it-needs-4-years-to-answer-a-foia-about-a-coloring-book

#10yrsago Bowie, Eno and serendipity https://www.ted.com/talks/tim_harford_how_frustration_can_make_us_more_creative

#10yrsago Chelsea Manning reviews book of Aaron Swartz’s writing https://medium.com/@xychelsea/remembering-aaron-swartz-94d204b9e190#.5fcfs5mby

#10yrsago WATCH: documentary on Walt Disney, the futurist https://www.youtube.com/watch?v=pwLznNpJz2I

#10yrsago Guns filled with guts: Anatomy of War https://www.noahscalin.com/#/anatomyofwar1/

#10yrsago Book says Daddy Koch built Nazi oil refinery & hired a Nazi nanny for his boys, who blackmailed their gay brother https://web.archive.org/web/20160114081716/https://www.washingtonpost.com/news/post-politics/wp/2016/01/11/new-book-father-of-politically-active-koch-brothers-built-a-refinery-for-the-nazis/

#10yrsago Rich Americans are embarrassed by Donald Trump https://web.archive.org/web/20160115052314/https://gawker.com/donald-trumps-personal-brand-is-slowly-excruciatingly-1752374812?utm_source=recirculation&utm_medium=recirculation&utm_campaign=tuesdayAM

#10yrsago New US law says kids can walk to school by themselves https://www.fastcompany.com/3055107/federal-law-now-says-kids-can-walk-to-school-alone

#10yrsago Toronto’s mayor demands an end to competition for fast, affordable broadband https://www.michaelgeist.ca/2016/01/why-mayors-john-tory-and-jim-watson-are-against-competition-for-access-to-affordable-fast-broadband/

#10yrsago Your smartwatch knows your ATM and phone PIN https://arxiv.org/pdf/1512.05616v1

#10yrsago Keep your scythe, the real green future is high-tech, democratic, and radical https://memex.craphound.com/2016/01/12/keep-your-scythe-the-real-green-future-is-high-tech-democratic-and-radical/

#10yrsago Will the W3C strike a bargain to save the Web from DRM? https://www.eff.org/deeplinks/2016/01/you-cant-destroy-village-save-it-w3c-vs-drm-round-two

#5yrsago Bunkered, infectious, maskless Republicans infected Congress https://pluralistic.net/2021/01/12/maskholio/#maskholes

#5yrsago Awful voting-machine demands silence https://pluralistic.net/2021/01/11/seeing-things/#ess

#5yrsago Weaponing and monetizing apophenia https://pluralistic.net/2021/01/11/seeing-things/#woo

#5yrsago DC's security theater panned https://pluralistic.net/2021/01/11/seeing-things/#curtain-call


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.




A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America ( words today, total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

2026-01-12T04:23:51+00:00 Fullscreen Open in Tab
Finished reading Demon World Boba Shop Vol. 5
Finished reading:
Cover image of Demon World Boba Shop Vol. 5
Demon World Boba Shop series, book 5.
Published . 373 pages.
Started ; completed January 11, 2026.
Sat, 10 Jan 2026 15:02:13 +0000 Fullscreen Open in Tab
Pluralistic: Predistribution vs redistribution (Big Tech edition) (10 Jan 2026)


Today's links



A Canadian flag. The Maple Leaf has been replaced with a rotten apple. Crawling out of the apple is a woim. Over the apple is Apple's 'Think Different' wordmark. The woim is crawling through one of the 'e's.

Predistribution vs redistribution (Big Tech edition) (permalink)

All over the world, for all of this decade, governments have been trying to figure out how to rein in America's tech companies. During the Biden years, this seemed like a winner – after all, America was trying to tame its tech companies, too, with brave trustbusters like Lina Khan, Jonathan Kanter, Rohit Chopra and Tim Wu doing more work in four years than their predecessors had done in forty.

But under Trump, the US government has thrown its full weight into defending its tech companies' right to spy on and rip off everyone in the world (including Americans, of course). It's not hard to understand how Big Tech earned Trump's loyalty: from the tech CEOs who personally paid a million dollars each to sit behind Trump on the inauguration dais; to Apple CEO Tim Cook hand-assembling a gold participation trophy for Trump on camera; to Zuckerberg firing all his fact-checkers; to the seven-figure contributions that tech companies made to Trump's Epstein Memorial Ballroom at the White House. Trump is defending America's tech companies because they've bribed him, personally, to do so.

Given that these companies are so much larger than most world governments, this poses a serious barrier to the kind of enforcement that world governments have tried. What's the point of fining Apple billions of Euros if they refuse to pay? What's the point of ordering Apple to open up its App Store if it just refuses?

But here's the thing: most of these enforcement actions have been redistributive. In effect, lawmakers and regulators are saying to America's tech giants: "We know you've stolen a bunch of money and data from our people, and now we want you to give some of it back." There's nothing inherently wrong with redistribution, but redistribution will never be as powerful or effective as predistribution – that is, preventing tech companies from stealing data and money in the first place.

Take Big Tech's relationship to the world's news media. All over the world, media companies have been skeletonized by collapsing ad revenues and even where they can get paid subscribers, tech giants rake off huge junk fees from every subscriber payment. Reaching new or existing subscribers is also increasingly expensive, as tech platforms algorithmically suppress the reach of media companies' posts, even for subscribers who've asked to see their feeds, which lets the platforms charge more junk fees to "boost" content.

Countries all over the world – Australia, Germany, Spain, France, Canada – have arrived at the same solution to this problem: imposing "link taxes" that require tech companies to pay for the privilege of linking to the news or allowing their users to discuss the news. This is pure redistribution: tech stole money from the media companies, so governments are making them give some of that money back.

It hasn't worked. First of all, the thing tech steals from the news isn't the news, it's money. Helping people find and discuss the news isn't theft. News you're not allowed to find or discuss isn't news at all – that's a secret.

Meanwhile, tech companies have an easy way to escape the link tax: they can just ban links to the news on their platform. That's what Meta did in Canada, which means that Canadians on Instagram and Facebook no longer see the actual news, just far-right "influencer" content. Even when tech companies do pay the link tax, the results are far from ideal: in Canada, Google has become a partner of news outlets, which compromises their ability to report on Google's activities. Shortly after Google promised millions to the Toronto Star, the paper dropped its award-winning, hard-hitting "Defanging Big Tech" investigative series. Given that Google came within centimeters of stealing most of downtown Toronto just a few years ago, we can hardly afford to have the city's largest newspaper climb into bed with the company:

https://memex.craphound.com/2019/10/31/leaked-document-reveals-that-sidewalk-labs-toronto-plans-for-private-taxation-private-roads-charter-schools-corporate-cops-and-judges-and-punishment-for-people-who-choose-privacy/

Worse still: any effort to make Big Tech poorer – by curbing its predatory acquisition of our data and money – reduces its ability to pay the link tax, which means that, under a link tax, the media's future depends on Big Tech being able to go on ripping us off.

All of which is not to say that Big Tech should be allowed to go on ripping off the media. Rather, it's to argue that we should stop tech from ripping off Canadians in the first place, as a superior alternative to asking Big Tech to remit a small share of the booty to a few lucky victims.

Together, Meta and Google take 51 cents out of every advertising dollar. This is a huge share. Before the rise of surveillance advertising, the ad industry's share of advertising dollars amounted to about 15%. The Meta/Google ad-tech duopoly has cornered the ad market, and they illegally colluded to rig it, which allows them to steal billions from media outlets, all around the world:

https://en.wikipedia.org/wiki/Jedi_Blue

What would a predistribution approach to ad-tech look like? Canada could ban the collection and sale of consumer data outright, and punish any domestic firm that collects consumer data, which would choke off much of the supply of data that feeds the ad-tech market.

Canada could also repeal its wildly unpopular "anticircumvention" law, the Copyright Modernization Act of 2012 (Bill C-11), which was passed despite the public's overwhelmingly negative response to a consultation on the bill:

https://pluralistic.net/2024/11/15/radical-extremists/#sex-pest

Under this law, it's illegal for Canadian companies to reverse engineer and modify America's tech exports. This means that Canadian companies can't go into business selling an alternative Facebook client that blocks all the surveillance advertising and restores access to the news, and offers non-surveillant, content-based ways for other Canadian businesses to advertise:

https://www.eff.org/deeplinks/2023/05/save-news-we-must-ban-surveillance-advertising

Repealing Bill C-11 would also allow Canadian companies to offer alternative app stores for phones and consoles. Google and Apple have a duopoly on mobile apps, and the two companies have rigged the market to take 30% of every in-app payment. The actual cost of processing a payment is less than 1%. This means that 30 cents out of every in-app subscriber dollar sent to a Canadian news outlet is shipped south to Cupertino or Mountain View. Legalizing made-in-Canada app stores, installed without permission from Apple or Google, would stop those dollars from being extracted in the first place. And not just media companies, of course – the app tax is paid by performers, software authors, and manufacturers. Extend the program to include games consoles and Canada's game companies would be rescued from Microsoft and Nintendo's own app tax, which also runs to 30%.

But a C-11 repeal wouldn't merely safeguard Canadian dollars – it would also safeguard Canadian data. Our mobile phones collect and transmit mountains of data about us and our activities. Yes, even Apple's products – despite its high-flying rhetoric about respecting your privacy, the company spies on everything you do with your phone and sells access to that data to advertisers. Apple doesn't offer any way to opt out of this, and lied about it when it was caught doing it:

https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar

These companies will not voluntarily stop stealing our data. That's the lesson of nine years under the EU's GDPR, a landmark, strong privacy law that US tech companies simply refuse to obey. And because they claim to be headquartered in Ireland (because Ireland lets them cheat on their taxes) and because they have captured the Irish state, they are able to simply flout the law:

https://pluralistic.net/2025/12/01/erin-go-blagged/#big-tech-omerta

Telling Big Tech not to gather our data is redistribution. So is dictating how they can use it after they collect it. The predistribution version of this is modifying our devices so that they don't gather or leak our data in the first place.

Big Tech is able to suck up so much of our data because anticircumvention law – like Canada's Bill C-11, or Article 6 of the EU Copyright Directive – makes it illegal to modify your phone so that it blocks digital spying, preventing the collection and transmission of your data.

Repeal anticircumvention law and businesses could offer Canadians (or Europeans) (or anyone in the world with a credit card and an internet connection) a product that blocks surveillance on their devices. More than half of all web users have installed an ad-blocker for their browser (which offers significant surveillance protection), but no one can install anything like this on their phones (or smart TVs, or smart doorbells, or other gadgets) because anticircumvention law criminalizes this act.

Big Tech are notorious tax cheats, colluding with captured governments like the Irish state to avoid taxes worldwide. Canada tried to impose a "digital services tax" that would make US tech giants pay a small share of the tax they evade in Canada. Trump went bananas and threatened to hit the country with (more) tariffs, and Canada folded.

Tax is redistributive, and getting money back from American companies after they steal it from Canadians is much harder than simply arranging the system so it's much harder for American companies to steal from Canadians in the first place. Blocking spying, clawing back the app tax, unrigging the ad market – these are all predistributive rather than redistributive.

So is selling alternative clients for legacy social media products like Facebook and Twitter – clients that unrig their algorithms and let Canadians see the news they've subscribed to, so they can't be used as hostages to extract "boosting" fees from media outlets who want to reach their own subscribers.

Canada's redistribution efforts have been a consistent failure. Canada keeps trying to get streaming companies like Netflix to include more Canadian content in their offerings and search results. Legalize jailbreaking and a Canadian company could start selling an alternative client that lets you search all your streaming services at once, mixing in results from Canadian media companies and archives like the National Film Board – all while blocking surveillance by the tech giants. This client could also incorporate a PVR, so you could record shows to watch later, without worrying about the tech giants making your favorite program vanish. Remember, if it's legal to record a show from broadcast or cable with a VCR or a Tivo, it's legal to record it from a streaming service with an app.
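
To make the shape of that client concrete, here's a minimal Python sketch of the aggregator idea: one query fanned out over whatever catalogues you have access to, with Canadian public archives surfaced in the results. Every class, service and listing here is invented for illustration; this is the rough architecture, not anyone's actual API.

    # Minimal sketch of a unified search client. All names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Result:
        title: str
        source: str

    class Catalogue:
        """One searchable source: a streaming service you subscribe to, or a public archive."""
        def __init__(self, name: str, titles: list[str]):
            self.name, self.titles = name, titles

        def search(self, query: str) -> list[Result]:
            return [Result(t, self.name) for t in self.titles if query.lower() in t.lower()]

    def unified_search(query: str, catalogues: list[Catalogue]) -> list[Result]:
        """Fan the query out to every catalogue and surface Canadian-archive hits first."""
        hits = [r for c in catalogues for r in c.search(query)]
        return sorted(hits, key=lambda r: r.source != "NFB")

    catalogues = [
        Catalogue("NFB", ["The Log Driver's Waltz", "Neighbours"]),
        Catalogue("SomeStreamingService", ["Neighbours (hypothetical listing)", "Some Other Show"]),
    ]
    for result in unified_search("neighbours", catalogues):
        print(f"{result.title}  [{result.source}]")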

These predistribution tactics don't rely on US tech companies obeying Canada's orders. Instead, they take away American companies' ability to use Canada's courts and law enforcement apparatus to shut down Canadian competitors who disenshittify America's spying, stealing tech exports. Canada may not be able to push Google or Apple or Facebook around, but Canada can always decide whether Google or Apple or Facebook can use its courts to push Canadian competitors around.

Back in December, when Trump started threatening (again) to invade Canada and take over the country, Prime Minister Mark Carney broke off trade talks. Those talks are slated to begin again in a matter of days:

https://www.detroitnews.com/story/business/2025/12/19/canada-u-s-to-start-talks-to-review-free-trade-deal-in-mid-january/87843153007/

Getting Trump to deal fairly with Canada is just as unlikely as getting Trump's tech companies to give Canadians a fair shake. Canada isn't going to win the trade war with an agreement. Canada will win the trade war by winning: with Made-in-Canada tech products that turn America's stolen trillions into Canadian billions, to be divided up among Canadian tech businesses (who will reap profits) and the Canadian public (who will reap savings).

(Image: Dietrich Krieger, CC BY-SA 4.0; Tiia Monto, CC BY 4.0, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago HOWTO convert an Oral B flosser into a vibrating lockpick https://web.archive.org/web/20060113090614/http://www.inventgeek.com/Projects/lockpick/lockpick.aspx

#20yrsago Levi’s to ship iPod jeans https://web.archive.org/web/20060113045708/https://www.popgadget.net/2006/01/levis_ipod_jean.php

#20yrsago Chumbawamba: Why we don’t use DRM on our CDs https://web.archive.org/web/20060112044019/http://www.chumba.com/Chumbawambacopyprotect1.html

#20yrsago UK Parliamentarians demand WiFi https://www.cnet.com/home/internet/british-parliament-members-demand-wi-fi-access/

#15yrsago Sue Townsend talks Adrian Mole with the Guardian book-club https://www.theguardian.com/books/audio/2011/jan/10/sue-townsend-adrian-mole-book-club

#15yrsago Major record labels forced to pay CAD$45M to ripped-off musicians https://web.archive.org/web/20110112055510/https://www.michaelgeist.ca/content/view/5563/125/

#10yrsago Why Americans can’t stop working: the poor can’t afford to, and the rich are enjoying themselves https://www.theatlantic.com/business/archive/2016/01/inequality-work-hours/422775/

#10yrsago Juniper blinks: firewall will nuke the NSA’s favorite random number generator https://www.reuters.com/article/us-spying-juniper-idUSKBN0UN07520160109/

#5yrsago Impeachment and realignment https://pluralistic.net/2021/01/10/realignments/#realignments

#5yrsago Busting myths about the Night of the Short Fingers https://pluralistic.net/2021/01/10/realignments/#mythbusting


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1008 words today, 4020 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

2026-01-09T21:00:50+00:00 Fullscreen Open in Tab
Note published on January 9, 2026 at 9:00 PM UTC
Fri, 09 Jan 2026 14:59:17 +0000 Fullscreen Open in Tab
Pluralistic: bunnie's piggyback hack (09 Jan 2026)


Today's links



A slide from bunnie Huang's 39C3 talk.

bunnie's piggyback hack (permalink)

If Andrew "bunnie" Huang didn't actually exist, I'd swear he was a character out of a(n extraordinarily technologically well-informed) cyberpunk novel. Every time I interact with this legendary hardware hacker, he blows my mind with some incredible project or insight that permanently alters how I think about technology.

I first met bunnie when he came to EFF for help with the threats he'd received from Microsoft. At the time, bunnie was an electrical engineering grad student at MIT, and he'd taken the bootloader locks on the new Xbox platform as a personal affront and challenge. He applied his prodigious skill and talent to these digital handcuffs, and in short order, he had broken the Xbox and installed Linux on it. MIT's general counsel immediately washed its hands of any responsibility to defend this young grad student from bullying by a corporate monopolist, hanging him out to dry. So he turned to us – and we got his back. You can read all about it in Hacking the Xbox, his canonical work about hardware hacking and technological freedom (it's free!):

https://bunniefoo.com/nostarch/HackingTheXbox_Free.pdf

In the many years since, I've been lucky enough to count bunnie as a friend, colleague and comrade, albeit one I only physically run into every year or so, usually at some tech event or on the playa at Burning Man, where he still camps with the MIT crew at The Institute.

I just got to see bunnie in person again, over Christmas week at the Chaos Communications Congress in Hamburg. He gave a late-night presentation with his collaborator Sean "xobs" Cross, entitled "Xous: A Pure-Rust Rethink of the Embedded Operating System":

https://www.youtube.com/watch?v=BbWWGkyIBGM

Don't let the technical-sounding title intimidate you! This was a banger of a talk, and as with every bunnie Huang production, it left a pleasant and persistent aftertaste.

The background for this talk is bunnie's obsession with building a trustworthy computer. For decades, bunnie has been chasing the dream of a computer whose every component – operating system, drivers, firmware, and hardware designs – is open to inspection. Bunnie's reasoning here is that anything that can't be inspected (and, by extension, modified) by its users is a spot where bad guys can hide bad stuff, and where lurking bugs can fester until they are exploited by bad guys. Remember the spectacular (and still mysterious) claims that Apple's servers had all been compromised with minuscule hardware bugs? The single best explanation of that episode that you'll find comes from bunnie:

https://www.youtube.com/watch?v=RqQhWitJ1As

Bunnie was doing all this before there was an "open source hardware" movement, and he remains at its vanguard. His "Precursor" project is a reference hardware platform where every component is open to inspection and modification, from the chassis to the random number generator:

https://www.bunniestudios.com/blog/category/betrusted/precursor/

One area of especial concern and interest for bunnie is the promise and peril of the "system-on-a-chip" (SoC). This is exactly what it sounds like: a cheap chip that incorporates everything you need to do full-fledged computing, including interfaces and drivers for networks, screens, peripherals, etc. SoCs are ubiquitous. You find them in things like individual car engine components and inkjet printer cartridges, and each one is a whole-ass computer, capable of running some really ugly malware.

As bunnie explained back in 2020, there are two problems with SoCs: first, they are packaged such that the silicon traces inside of them can't be readily inspected, and second, they are so expensive to fabricate that someone like bunnie can't possibly come up with the millions needed to make an open, trustworthy, inspectable alternative:

https://pluralistic.net/2020/11/10/dark-matter/#precursor

That's where bunnie's CCC talk comes in. The chips that SoCs are etched upon have lots of space (relatively speaking – we're talking about nanometer-scale circuits, after all). Even after an SoC designer packs in a ton of extra traces to handle oddball applications, the chip is still mostly "dark matter" – blank silicon.

The first half of bunnie and xobs's talk concerns itself with "Xous," a secure operating system for an SoC, written in Rust. But the second half of the talk tackles the problem of procuring an SoC that you can trust to run Xous on. That's where this dark matter comes in.

Bunnie's day-job is consulting on extremely gnarly, high-stakes, high-value hardware design and manufacturing, so naturally, he's got lots of clients and contacts in the SoC manufacturing world. He approached one of these companies with a proposal: let me tape out a whole separate chip that fits in the dark matter for one of your upcoming chips. Adding these traces adds virtually no cost to the production, and adding bunnie's chips to the production run actually saves the manufacturer money, because the prices drop when the quantities increase.

The idea is to put two chips on the same piece of silicon: most of the parts coming off the line get the OEM's branding, while a small rump get bunnie's branding (he calls it the Baochip). On bunnie's parts, the traces to the OEM chip are physically cut, meaning that the Baochips will just be Baochips – the original chip will be inaccessible and unusable.

What's more, bunnie didn't just fit one chip into the OEM's "dark matter" – he fit five separate, specialized SoCs into the unused space. Remember, the beauty of SoCs is that once they're taped out and sent to production, the cost of an actual chip is peanuts, meaning that these Baochips are cheap as hell.

Even better: the traces on these chips are scaled to be readily inspected using relatively low-cost equipment, meaning that many parties around the world can grab one of these chips, stick it in a machine, and compare the traces on the chip to the free, open source file that was used to produce it, confirming that there are no nasty surprises lurking inside.

This was such an exciting talk, and as I sat through it, I had this nagging feeling that it reminded me of something else I'd learned about years before, though I couldn't quite place it. Finally, as bunnie and xobs were stepping off the stage, I had it – it reminded me of another bunnie talk I'd seen – this one at The Institute, the MIT Burning Man camp, more than a decade prior.

Back in 2015, bunnie designed and built a set of really cool, wearable, radio-linked badges for his campmates, which would help them locate one another on the playa at night. The badges used a genetic algorithm to "have sex" with one another and mutate their color patterns. Bunnie even worked in a "consent" mechanism!

https://www.bunniestudios.com/blog/2015/sex-circuits-deep-house/
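
Just to give a flavor of the genetic trick, here's a toy sketch (mine, not bunnie's actual badge firmware; the pattern format, the mutation rate and the function name are all invented for illustration): uniform crossover over a list of colors, plus the occasional random nudge.

    import random

    def mate(pattern_a, pattern_b, mutation_rate=0.05):
        """Mix two color patterns (lists of (r, g, b) tuples) into a child.

        Each color slot is inherited from one parent at random, then
        occasionally mutated by nudging one channel. Purely illustrative:
        the real badges have their own genome format and consent handshake.
        """
        child = []
        for a, b in zip(pattern_a, pattern_b):
            gene = random.choice((a, b))          # uniform crossover
            if random.random() < mutation_rate:   # rare mutation
                channel = random.randrange(3)
                gene = list(gene)
                gene[channel] = max(0, min(255, gene[channel] + random.randint(-32, 32)))
                gene = tuple(gene)
            child.append(gene)
        return child

    # Two four-light patterns produce a child pattern
    parent_a = [(255, 0, 0), (255, 64, 0), (255, 128, 0), (255, 192, 0)]
    parent_b = [(0, 0, 255), (0, 64, 255), (0, 128, 255), (0, 192, 255)]
    print(mate(parent_a, parent_b))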

But the really cool part that stuck with me was the manufacturing story. Bunnie wanted to fabricate custom injection-molded plastic enclosures for these pendants, but injection molding – like chip design – is a mass production phenomenon, with sky-high setup costs and incredibly cheap per-unit costs thereafter.

So (and this might sound familiar) bunnie reached out to a die-maker that he worked with in China and said, "Hey, the next time you're contracted to mill out a die for a client, let me know if there's any extra space on the face of the die, and I'll provide you with a shapefile you can carve out of this 'dark matter.'" This doesn't add any cost to the die setup, and it means that bunnie can run just a couple dozen injection-molded, custom cases at a cost of pennies per unit.

I grabbed bunnie later that night and mentioned this old Burning Man project to him and he said, "You know, I haven't ever thought of it, but you're right, there's definitely a throughline between the two projects."

I asked him what he called this technique and he shrugged and said he didn't really have a name for it, but he thought of it as "piggybacking," which seems like a good name to me.

It seems to me that these can't be the only two kinds of manufacturing that lend themselves to "piggybacking." That's what motivated me to write this post – to get people thinking about other high-setup/low-unit-cost production processes that might be piggybacked for small-batch, delightful projects like bunnie's.

Well, that, and just to do one of my periodic bunnie Huang appreciation posts. If there's one person that I'd recommend people pay more attention to, it's him. He's also a terrific communicator, and an indecently great writer. My readers might be familiar with him thanks to the afterword he contributed to Little Brother:

https://craphound.com/littlebrother/download/

More recently, he wrote a fantastic intro for last year's Science Comics Computers: How Digital Computers Work, a brilliant middle-grades graphic novel that uses steampunk dinosaurs to explain digital logic and the building blocks of computation:

https://pluralistic.net/2025/11/05/xor-xand-xnor-nand-nor/#brawniac

He also co-authored a fascinating research paper with Edward Snowden, after the two of them collaborated on a daughter-board that spots otherwise untraceable malware:

https://assets.pubpub.org/aacpjrja/AgainstTheLaw-CounteringLawfulAbusesofDigitalSurveillance.pdf

Again, my readers will recognize this as a gimmick from my 2020 novel Attack Surface (a Little Brother novel for adults):

https://us.macmillan.com/books/9781250757517/attacksurface/

That's not bunnie's only sweet hardware hack, of course. Check out the insanely clever design for a contact-tracing dongle he prototyped for the EU in 2020:

https://pluralistic.net/2020/06/23/cryptocidal-maniacs/#trace-together

But really, you owe it to yourself to read bunnie at book length, and his best book is 2016's The Hardware Hacker, a tour-de-force, lay-friendly exegesis on the theory and practice of hardware hacking:

https://memex.craphound.com/2016/12/30/the-hardware-hacker-bunnie-huangs-tour-de-force-on-hardware-hacking-reverse-engineering-china-manufacturing-innovation-and-biohacking/


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago John McDaid’s brilliant sf story Keyboard Practice free online https://web.archive.org/web/20060112044109/https://www.sfsite.com/fsf/fiction/jm01.htm

#20yrsago Pledge to boycott DRM CDs https://web.archive.org/web/20060112061657/http://www.pledgebank.com/boycottdrm

#20yrsago Hollywood’s Canadian MP claims she’s no dirtier than the rest https://memex.craphound.com/2006/01/08/hollywoods-canadian-mp-claims-shes-no-dirtier-than-the-rest/

#10yrsago Gene Luen Yang’s inaugural speech as National Ambassador for Young People’s Literature https://memex.craphound.com/2016/01/08/gene-luen-yangs-inaugural-speech-as-national-ambassador-for-young-peoples-literature/

#10yrsago Menstruation is the mother of invention https://lastwordonnothing.com/2016/01/07/the-wonderful-world-of-period-patents/

#10yrsago Juniper’s products are still insecure; more evidence that the company was complicit https://www.wired.com/2016/01/new-discovery-around-juniper-backdoor-raises-more-questions-about-the-company/

#10yrsago Red-baiting water speculator plans to drain the Mojave of its ancient water https://www.wired.com/2016/01/the-2-4-billion-plan-to-water-la-by-draining-the-mojave/?mbid=social_alleniverson


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America ( words today, total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

2026-01-09T02:23:50+00:00 Fullscreen Open in Tab
Note published on January 9, 2026 at 2:23 AM UTC

gotta love the Maine elections coming up where people are going to vote for King or Pingree in the gubernatorial thinking they're their parents (Senator and Congresswoman), or Baldacci in CD-2 thinking he's his brother (former governor)

to her credit, Hannah Pingree's yard signs say "HANNAH"

(i think Angus King III's might also say "ANGUS" but that's less helpful as the son of Angus King Sr. Perhaps he should print a run of signs that say "NOT MY DAD")

Illustration of Molly White sitting and typing on a laptop, on a purple background with 'Molly White' in white serif.
Thu, 08 Jan 2026 13:30:43 +0000 Fullscreen Open in Tab
Pluralistic: Where did the money go? (08 Jan 2026)


Today's links



A US$100 bill, tinted red; the face of Ben Franklin has been replaced with the hostile red eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey.'

Where did the money go? (permalink)

America is trudging through its third consecutive K-shaped recovery (an economic rally where the rich get richer and everyone else gets poorer). The rich have never been richer, and the debt-fueled consumption that kept the economy going is tapering down to a trickle.

This isn't down to the iron laws of economics or the great forces of history. It's because we made rules that let rich people steal from everyone else, including local, state and federal tax authorities, and also workers, customers and suppliers (and society at large). From junk fees to wage theft to greedflation, politicians have thumbed the scales in favor of scumbags who drain the wealth of workers and remit it to parasites.

These crooks and hustlers keep coming up with ways to squeeze a few more drops out of us. They come up with gimmicks like buy now/pay later (and then slam us with massive fees when we can't pay later), or margin-based gambling on cryptocurrency or "prediction markets," both of which are crooked poker tables where you are always the sucker and the house always wins.

The Trump administration didn't invent the idea of government-supported scams and hustles, but they sure supercharged it. Trump rips off his supporters like crazy – as anyone who's long on $TRUMPcoin knows – and surrounds himself with "businessmen" notorious for scamming workers, customers, and the government itself.

But even as Trump throws his support behind hustlers and con artists, he's also backing debt-collectors, whether they're chasing student debt, medical debt, or the spiraling penalties for missing the fourth payment on your Klarna.

Broadly, these are the two industries in America now: scammers who put Americans into debt, and collectors who torment Americans into paying it back. And while these two industries represent a moral crisis for the nation, they also represent an economic crisis, because they are at irreconcilable odds with one another.

If you're in the business of scamming Americans so they go into debt, you want your suckers to have money (so they can give it to you). But if you're in the business of collecting the losses that Americans incur at the hands of scammers, then you're at odds with those scammers themselves – every dollar you collect on the debt from the last scam is a dollar that can't be lost to the next scam.

This is what gave us the Great Financial Crisis: scumbag bankers tricked people into taking out unsustainable mortgages whose "teaser rates" would blow up after a couple years to levels that the borrower couldn't possibly pay back. But the lenders didn't care, because they were only "loan originators" who could pass those loans off to "investors" via exotic financial instruments. These two groups had an irreconcilable conflict: the people making the loans could only keep their scam going so long as the people collecting the loans didn't demand repayment.

But these two groups – scammers and arm-breakers – aren't the only two groups in the economy. There's a third group that you might call, "People who want to make useful things that we like and pay for." This third group is at odds with both the scammers and the arm-breakers, because their potential customers are being tricked (by scammers) and bankrupted (by arm-breakers).

Say you want to go into business renting hotel rooms to people at reasonable rates. You're an honest sort, so you list your room prices right there on your site. But the scumbags you're competing with want to rip people off, so they list a lower price than yours, and then whack the customer with junk fees at check-in that make their room more expensive than yours.

What's more, the scumbags make so much money that they can bribe the handful of dominant travel sites (which are all owned by one of two massive private-equity backed rollups) to list their hotels ahead of yours. They might not like paying bribes – in fact, they probably hate it – but they're willing to part with some of that hard-won ripoff money to keep the money-machine going. Besides, they can make up the difference with more junk fees. Whaddya gonna do, walk away from your nonrefundable, prepaid reservation and try and get a last-minute booking in a strange city?

Societally speaking, the problem is that economic growth only comes from the third group. They're the ones inventing new categories of (useful) products and services that delight their customers and enrich their workers and shareholders (who then buy more things in the economy, keeping the virtuous cycle going).

This festering economic zit is finally coming to a head with AI, whose most profitable use is in predicting how much a vendor can charge you – or how little a boss can pay you – without you walking away from the table:

https://www.reddit.com/r/shitrentals/comments/1q38sh4/if_you_get_promoted_at_work_keep_it_a_secret_from/

AI's most enthusiastic customers, meanwhile, are bosses who dream of firing most of their workers and using the ensuing terror to force down the wages of the remaining workers:

https://pluralistic.net/2026/01/05/fisher-price-steering-wheel/#billionaire-solipsism

If the average American is a squeezed-flat toothpaste tube that's been drained of all its readily extractable contents, then AI is the scissors that slit the tube up the side so that the very last dregs can be scraped out.

As Anil Dash put it,

Those niceties that everybody loved, like great healthcare and decent benefits, were identified by the people running the big tech companies as “market inefficiencies” which indicated some wealth was going to you that should have been going to them.

https://www.anildash.com/2026/01/06/500k-tech-workers-laid-off/

The scammer/arm-breaker economy is fundamentally extractive. When a private equity fund buys a company, sells off its assets, declares a special dividend that it pays to itself, and pronounces the company "right-sized" because it now has to rent the things it used to own, it is setting that company up to fail. All it takes is one rent-shock or a couple bad quarters and a once-healthy business will fall over:

https://pluralistic.net/2024/05/23/spineless/#invertebrates

Looking at America, it's hard not to ask, "Where did all the money go?" Where did free state college tuition, excellent public libraries, public housing, transit, fully staffed national parks and air-traffic control towers all go? Why can't we fix the potholes? How is it that a country that once electrified itself from top to bottom and sea to sea can't figure out how to run fiber lines to the same roofs where all those power lines connect?

It's because the system is organized around cheaters and arm-breakers. The Heritage Foundation – architects of Trump's Project 2025 – were founded and funded by Jay Van Andel and Rich DeVos, the guys who made their billions running Amway, a pyramid scheme that was legalized by their pet Congressman, Gerry Ford, shortly after he became president:

https://pluralistic.net/2025/05/05/free-enterprise-system/#amway-or-the-highway

The nation's system has been colonized and is being operated by people whose institutional home was created by pyramid-scheme hucksters. Why doesn't Trump's administration care about scam ads on Twitter and Facebook that clean out the very same Boomers who voted him into office? Because Trump's ideological project was founded by actual, non-metaphorical, non-hyperbolic con artists.

That's where the money went. Smart people keep asking how Trump plans on stealing Venezuela's oil when the country is in a state of shambolic collapse and its people are starving. Who will invest hundreds of billions of dollars in new equipment when every dollar spent on capital will require a dollar for a gunman to keep it from being stolen and sold for food?

You could ask the same question about America. In a country where we've literally legalized bribery, who wants to invest in productive businesses?

https://www.youtube.com/watch?v=VX9Ej0L6rGk

America's crisis is the world's opportunity. A chaotic mess of cyberwarfare, trade war, and invasions means that America is no longer your ally or your trading partner – it's a threat.

To neutralize that threat, we must take away the money (and thus the power) of America's oligarchs. We start down that path by changing the international laws – passed at the insistence of the US over the past 25 years – that ban foreign tech companies from modifying America's tech products.

Once other countries' companies start producing the tools that let farmers fix their tractors, that let games publishers sell outside of the official ripoff app stores, that let merchants avoid the Amazon tax, they will not only reap billions of dollars, they will also create a market that favors good products, rather than scams:

https://pluralistic.net/2026/01/01/39c3/#the-new-coalition

America's largest companies have amassed trillions by robbing Americans (first) and then everyone else (once the US trade rep got laws passed that prevented non-US tech companies from making defensive products). The project of the next ten years is to convert those trillions to billions (in profits for companies that disenshittify America's defective technology – and in savings for people who use those tools to escape America's scam economy).

The beneficiaries of this program aren't limited to the investors in foreign tech companies, nor their overseas customers. Americans will also benefit from this technology, because Americans were the first victims of the US scam economy. Everyday Americans pay the app tax, the Amazon tax, the streaming tax, the Apple tax, the Google tax, the Microsoft tax. Supply Americans with the digital arms to resist these corporate raids, and they will stage a tax revolt (a thing that Americans are remarkably good at).

Escaping oligarchy, escaping the climate emergency, escaping economic desperation: these goals require doing things and making things. They require real products and services, they require real infrastructure and tools. By and large people would rather have real things than scams.

Ponzi America is breaking down. It's run out of suckers.

We just can't afford to structure our economy like an Amway downline anymore. We never could.

(Image: Cryteria, CC BY 3.0, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#10yrsago Caught lying by an EFF investigation, T-Mobile CEO turns sweary https://www.theverge.com/2016/1/7/10733298/john-legere-binge-on-lie

#10yrsago Code for America’s year in civic tech https://web.archive.org/web/20160811012751/https://www.codeforamerica.org/blog/2015/12/22/this-year-in-civic-tech-2015-in-review/

#10yrsago Flying while trans: still unbelievably horrible https://trans-fusion.blogspot.com/2016/01/traveling-while-trans-false-promise-of.html

#10yrsago Resilience over rigidity: how to solve tomorrow’s computer problems today https://locusmag.com/feature/cory-doctorow-wicked-problems-resilience-through-sensing/

#10yrsago Dear Comcast: broadband isn’t gasoline https://www.techdirt.com/2016/01/07/with-fixed-costs-fat-margins-comcasts-broadband-cap-justifications-are-total-bullshit/

#10yrsago High-rez trip through Florida’s Haunted Mansion with a low-light filter https://www.youtube.com/watch?v=ZKVd-xwxgJs

#5yrsago Revolutionary Colossus https://pluralistic.net/2021/01/07/revolutionary-colossus/#1776


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1003 words today, 2023 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

2026-01-08T00:00:00+00:00 Fullscreen Open in Tab
A data model for Git (and other docs updates)

Hello! This past fall, I decided to take some time to work on Git’s documentation. I’ve been thinking about working on open source docs for a long time – usually if I think the documentation for something could be improved, I’ll write a blog post or a zine or something. But this time I wondered: could I instead make a few improvements to the official documentation?

So Marie and I made a few changes to the Git documentation!

a data model for Git

After a while working on the documentation, we noticed that Git uses the terms “object”, “reference”, and “index” in its documentation a lot, but that it didn’t have a great explanation of what those terms mean or how they relate to other core concepts like “commit” and “branch”. So we wrote a new “data model” document!

You can read the data model here for now. I assume at some point (after the next release?) it’ll also be on the Git website.

I’m excited about this because understanding how Git organizes its commit and branch data has really helped me reason about how Git works over the years, and I think it’s important to have a short (1600 words!) version of the data model that’s accurate.

The “accurate” part turned out to not be that easy: I knew the basics of how Git’s data model worked, but during the review process I learned some new details and had to make quite a few changes (for example how merge conflicts are stored in the staging area).
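
If you want a hands-on feel for what an “object” is at the storage level, here's a tiny sketch (mine, not part of the new document): every commit, tree, blob, and tag is a zlib-compressed file under .git/objects, named by its hash, with a short header before the content. This only handles loose objects, so it won't find anything that's been packed into a packfile, and it assumes HEAD is a symbolic ref to a branch whose ref file exists on disk.

    import pathlib
    import zlib

    def read_loose_object(repo_path, sha):
        """Read one loose Git object and return its type and raw body.

        Loose objects live at .git/objects/<2 hex chars>/<38 hex chars>,
        zlib-compressed, with a header like "commit 231" plus a NUL byte
        before the body. Packed objects (.git/objects/pack/) aren't handled.
        """
        path = pathlib.Path(repo_path, ".git", "objects", sha[:2], sha[2:])
        raw = zlib.decompress(path.read_bytes())
        header, _, body = raw.partition(b"\x00")
        obj_type, size = header.split()
        assert int(size) == len(body)
        return obj_type.decode(), body

    # Resolve HEAD -> branch ref -> commit, then read the commit object
    repo = "."
    head = pathlib.Path(repo, ".git", "HEAD").read_text().strip()
    ref = head.split("ref: ")[1]                      # e.g. "refs/heads/main"
    sha = pathlib.Path(repo, ".git", ref).read_text().strip()
    obj_type, body = read_loose_object(repo, sha)
    print(obj_type)              # "commit"
    print(body.decode()[:200])   # tree ..., parent ..., author ..., message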

updates to git push, git pull, and more

I also worked on updating the introduction to some of Git’s core man pages. I quickly realized that “just try to improve it according to my best judgement” was not going to work: why should the maintainers believe me that my version is better?

I’ve seen a problem come up a lot when discussing open source documentation changes: two expert users of the software arguing about whether an explanation is clear or not (“I think X would be a good way to explain it! Well, I think Y would be better!”).

I don’t think this is very productive (expert users of a piece of software are notoriously bad at being able to tell if an explanation will be clear to non-experts), so I needed to find a way to identify problems with the man pages that was a little more evidence-based.

getting test readers to identify problems

I asked for test readers on Mastodon to read the current version of documentation and tell me what they find confusing or what questions they have. About 80 test readers left comments, and I learned so much!

People left a huge amount of great feedback, for example:

  • terminology they didn’t understand (what’s a pathspec? what does “reference” mean? does “upstream” have a specific meaning in Git?)
  • specific confusing sentences
  • suggestions of things to add (“I do X all the time, I think it should be included here”)
  • inconsistencies (“here it implies X is the default, but elsewhere it implies Y is the default”)

Most of the test readers had been using Git for at least 5-10 years, which I think worked well – if a group of test readers who have been using Git regularly for 5+ years find a sentence or term impossible to understand, it makes it easy to argue that the documentation should be updated to make it clearer.

I thought this “get users of the software to comment on the existing documentation and then fix the problems they find” pattern worked really well and I’m excited about potentially trying it again in the future.

the man page changes

We ended up updating these 4 man pages:

The git push and git pull changes were the most interesting to me: in addition to updating the intro to those pages, we also ended up writing:

Making those changes really gave me an appreciation for how much work it is to maintain open source documentation: it’s not easy to write things that are both clear and true, and sometimes we had to make compromises, for example the sentence “git push may fail if you haven’t set an upstream for the current branch, depending on what push.default is set to.” is a little vague, but the exact details of what “depending” means are really complicated and untangling that is a big project.

on the process for contributing to Git

It took me a while to understand Git’s development process. I’m not going to try to describe it here (that could be a whole other post!), but a few quick notes:

  • Git has a Discord server with a “my first contribution” channel for help with getting started contributing. I found people to be very welcoming on the Discord.
  • I used GitGitGadget to make all of my contributions. This meant that I could make a GitHub pull request (a workflow I’m comfortable with) and GitGitGadget would convert my PRs into the system the Git developers use (emails with patches attached). GitGitGadget worked great and I was very grateful to not have to learn how to send patches by email with Git.
  • Otherwise I used my normal email client (Fastmail’s web interface) to reply to emails, wrapping my text to 80 character lines since that’s the mailing list norm.

I also found the mailing list archives on lore.kernel.org hard to navigate, so I hacked together my own git list viewer to make it easier to read the long mailing list threads.

Many people helped me navigate the contribution process and review the changes: thanks to Emily Shaffer, Johannes Schindelin (the author of GitGitGadget), Patrick Steinhardt, Ben Knoble, Junio Hamano, and more.

(I’m experimenting with comments on Mastodon, you can see the comments here)

2026-01-07T21:08:06+00:00 Fullscreen Open in Tab
Read "The Case for Blogging in the Ruins"
Read:
Virginia Woolf wrote about the importance of having a room of one's own: physical space for creative work, free from interruption and control. A blog is a room of your own on the internet. It's a place where you decide what to write about and how to write about it, where you're not subject to the algorithmic whims of platforms that profit from your engagement regardless of whether that engagement makes you or anyone else nebulously smarter. Diderot built the Encyclopédie because he believed that organizing knowledge properly could change how people thought. He spent two decades on it. He went broke. He watched collaborators quit and authorities try to destroy his work. He kept going because the infrastructure mattered, because how we structure the presentation of ideas affects the ideas themselves. We're not going to get a better internet by waiting for platforms to become less extractive. We build it by building it. By maintaining our own spaces, linking to each other, creating the interconnected web of independent sites that the blogosphere once was and could be again.
Illustration of Molly White sitting and typing on a laptop, on a purple background with 'Molly White' in white serif.
Tagged: blogging.
2026-01-07T21:01:11+00:00 Fullscreen Open in Tab
Read "Writing vs AI"
Read:
Replacing freshman comp with dozens of small groups run like graduate seminars is expensive and hard to imagine. But it would create a generation of students who wouldn't use an AI to write their essays any more than they'd ask an AI to eat a delicious pizza for them. We should aspire to assign the kinds of essays that change the lives of the students who write them, and to teach students to write that kind of essay.
Illustration of Molly White sitting and typing on a laptop, on a purple background with 'Molly White' in white serif.
2026-01-07T18:52:15+00:00 Fullscreen Open in Tab
Published on Citation Needed: "The year of technoligarchy"
Wed, 07 Jan 2026 14:55:46 +0000 Fullscreen Open in Tab
Pluralistic: Writing vs AI (07 Jan 2026)


Today's links

  • Writing vs AI: If you wouldn't ask an AI to eat a delicious pizza for you, why would you ask it to write a college essay?
  • Hey look at this: Delights to delectate.
  • Object permanence: WELL State of the World; A poem in 30m logfiles; Weapons of Math Destruction; The cost of keeping "13" a British secret; Congress v. "Little Green Men"; "Food and Climate Change Without the Hot Air"
  • Upcoming appearances: Where to find me.
  • Recent appearances: Where I've been.
  • Latest books: You keep readin' em, I'll keep writin' 'em.
  • Upcoming books: Like I said, I'll keep writin' 'em.
  • Colophon: All the rest.



A midcentury male figure in a suit seated at a yellow typewriter; his head has been replaced with the hostile red eye of HAL 9000 from Stanley Kubrick's 2001: A Space Odyssey. He sits in a steeply ranked lecture hall filled with wooden seats. A halo radiates from his head.

Writing vs AI (permalink)

I come from a family of teachers – both parents taught all their lives and now oversee Ed.D candidates, brother owns a school – which has left me painfully aware of the fact that I am not a great teacher.

I am, however, a good teacher. The difference is that a good teacher can teach students who want to learn, whereas a great teacher can inspire students to want to learn. I've spent most of my life teaching, here and there, and while I'm not great, I am getting better.

Last year, I started a new teaching gig: I'm one of Cornell's AD White Visiting Professors, meaning that I visit Cornell (and its NYC campus, Cornell Tech) every year or two for six years and teach, lecture, meet, and run activities.

When I was in Ithaca in September for my inaugural stint, I had a string of what can only be called "peak experiences," meeting with researchers, teachers, undergrads, grads and community members. I had so many conversations that will stick with me, and today I want to talk about one of them.

It was a faculty discussion, and one of the people at the table had been involved in a research project to investigate students' attitudes to their education. The research concluded that students come to Cornell to learn – because they love knowledge and critical thinking – but they are so haunted by the financial consequences of failure (wasting tens, if not hundreds, of thousands of dollars repeating a year or failing out altogether, and then entering the job market debt-burdened and degree-less) that they feel pressured not to take intellectual risks, and, at worst, to cheat. They care about learning, but they're afraid of bad grades, and so chasing grades triumphs over learning.

At that same discussion, I met someone who taught Cornell's version of freshman comp, the "here's how to write at a college level" course that every university offers. I've actually guest-taught some of these, starting in 2005/6, when I had a Fulbright Chair at USC.

Now, while I'm not a great teacher, I am a pretty good writing teacher. I was lucky enough to be mentored by Judith Merril (starting at the age of 9!), who taught me how to participate in a peer-based writing workshop:

https://pluralistic.net/2020/08/13/better-to-have-loved/#neofuturians

In high school, I met Harriet Wolff, a gifted writing teacher, whose writing workshop (which Judith Merril had actually founded, decades earlier) was so good that I spent seven years in my four-year high-school, mostly just to keep going to Harriet's workshop:

https://pluralistic.net/2025/08/30/merely-clever/#rip-harriet-wolff

I graduated from the Clarion science fiction and fantasy workshop (where Judith Merril learned to workshop) in 1992, and then went on to teach Clarion and Clarion West on several occasions, as well as other workshops in the field, such as Viable Paradise (today, I volunteer for Clarion's board). I have taught and been taught, and I've learned a thing or two.

Here's the thing about every successful writing workshop I've been in: they don't necessarily make writing enjoyable (indeed, they can be painful), but they make it profoundly satisfying. When you repeatedly sit down with the same writers, week after week, to think about what went wrong with their work, and how they can fix it, and to hear the same about your work, something changes in how you relate to your work. You come to understand how to transform big, inchoate ideas into structured narratives and arguments, sure – but you also learn to recognize when the structure that emerges teaches you something about those big, inchoate ideas that was there all along, but not visible to you.

It's revelatory. It teaches you what you know. It lets you know what you know. It lets you know more than you know. It's alchemical. It creates new knowledge, and dispels superstition. It sharpens how you think. It sharpens how you talk. And obviously, it sharpens how you write.

The freshman comp students I've taught over the years were amazed (or, more honestly, incredulous) when I told them this, because for them, writing was a totally pointless exercise. Well, almost totally pointless. Writing had one point: to get a passing grade so that the student could advance to other subjects.

I'm not surprised by this, nor do I think it's merely because some of us are born to write and others will never get the knack (I've taught too many writers to think that anyone can guess who will find meaning in writing). It's because we don't generally teach writing this way until the most senior levels – the last year or two of undergrad, or, more likely, grad school (and then only if that grad program is an MFA).

Writing instruction at lower levels, particularly in US high schools, is organized around standardized assessment. Students are trained to turn out the world's worst literary form: the five-paragraph essay:

https://www.smbc-comics.com/index.php?id=3749

The five-paragraph essay is so rigid that any attempt to enliven it is actually punished during the grading process. One cannot deviate from the structure, on penalty of academic censure. It's got all the structural constraints of a sonnet, and all the poetry of a car crusher.

The five-paragraph essay is so terrible that a large part of the job of a freshman comp teacher is to teach students to stop writing them. But even after this is done, much of the freshman comp curriculum is also formulaic (albeit with additional flexibility). That's unavoidable: freshman comp classes are typically massive, since so many of the incoming students have to take them. When you're assessing 100-2,000 students, you necessarily fall back on a formula.

Which brings me back to that faculty discussion at Cornell, where we learned first that students want to learn, but are afraid of failure; and then heard from the freshman comp teacher, who told us that virtually all of their students cheated on their assignments, getting chatbots to shit out their papers.

And that's what I've been thinking about since September. Because of course those students cheat on their writing assignments – they are being taught to hit mechanical marks with their writing, improving their sentence structure, spelling and punctuation. What they're not learning is how to use writing to order and hone their thoughts, or to improve their ability to express those thoughts. They're being asked to write like a chatbot – why wouldn't they use a chatbot?

You can't teach students to write – not merely to create formally correct sentences, but to write – through formal, easily graded assignments. Teaching writing is a relational practice. It requires that students interact extensively with one another's work, and with one another's criticism. It requires structure, sure – but the structure is in how you proceed through the critiques and subsequent discussion – not in the work itself.

This is the kind of thing you do in small seminars, not big lecture halls. It requires that each student produce a steady stream of work for critique – multiple pieces per term or semester – and that each student closely read and discuss every other student's every composition. It's an intense experience that pushes students to think critically about critical thought itself. It's hard work that requires close supervision and it only works in small groups.

Now, common sense will tell you that this is an impractical way to run a freshman comp class that thousands of students have to take. Not every school can be Yale, whose Daily Themes writing course, with one instructor for every two students, is its most expensive program to deliver:

https://admissions.yale.edu/bulldogs-blogs/logan/2020/03/01/daily-themes

But think back to the two statements that started me down this line of thinking:

1) Most students want to learn, but are afraid of the financial ruin that academic failure will entail and so they play things very safe; and

2) Virtually all freshman comp students use AI to cheat on their assignments.

By the time we put our students in writing programs that you can't cheat on, and where you wouldn't want to cheat, they've had years of being taught to write like an LLM, but with the insistence that they not use an LLM. No wonder they're cheating! If you wanted to train a graduating class to cheat rather than learn, this is how you'd do it.

Teaching freshman comp as a grammar/sentence structure tutorial misses the point. Sure, student writing is going to be bad at first. It'll be incoherent. It'll be riddled with errors. Reading student work is, for the most part, no fun. But for students, reading other students' writing, and thinking about what's wrong with it and how to fix it is the most reliable way to improve their own work (the dirty secret of writing workshops is that other writers' analysis of your work is generally less useful to you than the critical skills you learn by trying to fix their work).

The amazing thing about bad writing is that it's easy to improve. It's much easier than finding ways to improve the work of a fluid, experienced writer. A beginning writer who makes a lot of easily spotted mistakes is a beginning writer who's making a lot of easily fixed mistakes. That means that the other writers around the circle are capable of spotting those errors, even if they're just starting out themselves. It also means that the writer whose work is under discussion will be able to make huge improvements through simple changes. Beginning writers can get a lot of momentum going this way, deriving real satisfaction from constant, visible progress.

Replacing freshman comp with dozens of small groups run like graduate seminars is expensive and hard to imagine. But it would create a generation of students who wouldn't use an AI to write their essays any more than they'd ask an AI to eat a delicious pizza for them. We should aspire to assign the kinds of essays that change the lives of the students who write them, and to teach students to write that kind of essay.

Freshman comp was always a machine for turning out reliable sentence-makers, not an atelier that trained reliable sense-makers. But AI changes the dynamic. Today, students are asking chatbots to write their essays for the same reason that corporations are asking chatbots to do their customer service (because they don't give a shit):

https://pluralistic.net/2025/08/06/unmerchantable-substitute-goods/#customer-disservice

I'm not saying that small writing workshops of the sort that changed my life will work for everyone. But I am saying that teaching writing in huge lecture halls with assignments optimized for grading works for no one.

(Image: Cryteria, CC BY 3.0, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#10yrsago The annual WELL State of the World, with Bruce Sterling and Jon Lebkowsky https://people.well.com/conf/inkwell.vue/topics/487/Bruce-Sterling-Jon-Lebkowsky-Sta-page01.html

#10yrsago NZ police broke the law when they raided investigative journalist’s home https://www.techdirt.com/2016/01/05/new-zealands-raid-investigatory-journalist-was-illegal/

#10yrsago Someone at the Chaos Communications Congress inserted a poem into at least 30 million servers’ logfiles https://web.archive.org/web/20160106133105/https://motherboard.vice.com/read/chaos-communication-congress-hackers-invaded-millions-of-servers-with-a-poem

#10yrsago Bernie Sanders on small money donations vs sucking up to billionaires https://readersupportednews.org/opinion2/277-75/34452-this-is-not-democracy-this-is-oligarchy

#10yrsago Weapons of Math Destruction: how Big Data threatens democracy https://mathbabe.org/2016/01/06/finishing-up-weapons-of-math-destruction/

#10yrsago Charter schools are turning into the next subprime mortgages https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2704305

#10yrsago New York Public Library does the public domain right https://www.nypl.org/research/resources/public-domain-collections

#10yrsago UK government spent a fortune fighting to keep the number 13 a secret https://www.bbc.com/news/uk-politics-35221173

#5yrsago Congress bans "little green men" https://pluralistic.net/2021/01/06/methane-diet/#ndaa

#5yrsago Mass court: "I agree" means something https://pluralistic.net/2021/01/06/methane-diet/#i-agree

#5yrsago Food and Climate Change Without the Hot Air https://pluralistic.net/2021/01/06/methane-diet/#3kg-per-day


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1013 words, 1013 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

2026-01-07T02:58:51+00:00 Fullscreen Open in Tab
Note published on January 7, 2026 at 2:58 AM UTC
2026-01-06T15:56:24+00:00 Fullscreen Open in Tab
Note published on January 6, 2026 at 3:56 PM UTC
2026-01-04T17:54:17+00:00 Fullscreen Open in Tab
Note published on January 4, 2026 at 5:54 PM UTC
2026-01-04T00:55:56+00:00 Fullscreen Open in Tab
Finished reading The Primal Hunter
Finished reading:
Cover image of The Primal Hunter
The Primal Hunter series, book 1.
Published . 716 pages.
Started ; completed January 3, 2026.
Illustration of Molly White sitting and typing on a laptop, on a purple background with 'Molly White' in white serif.
Tagged: fantasy, litRPG.
2026-01-01T21:41:05+00:00 Fullscreen Open in Tab
Note published on January 1, 2026 at 9:41 PM UTC
2026-01-01T17:13:05+00:00 Fullscreen Open in Tab
Note published on January 1, 2026 at 5:13 PM UTC
2025-12-31T15:37:28+00:00 Fullscreen Open in Tab
Note published on December 31, 2025 at 3:37 PM UTC
2025-12-31T02:50:16+00:00 Fullscreen Open in Tab
Note published on December 31, 2025 at 2:50 AM UTC
2025-12-31T02:06:16+00:00 Fullscreen Open in Tab
Note published on December 31, 2025 at 2:06 AM UTC
2025-11-25T13:25:00-08:00 Fullscreen Open in Tab
Client Registration and Enterprise Management in the November 2025 MCP Authorization Spec

The new MCP authorization spec is here! Today marks the one-year anniversary of the Model Context Protocol, and with it, the launch of the new 2025-11-25 specification.

I’ve been helping out with the authorization part of the spec for the last several months, working to make sure we aren't just shipping something that works for hobbyists, but something that even scales to the enterprise. If you’ve been following my posts like Enterprise-Ready MCP or Let's Fix OAuth in MCP, you know this has been a bit of a journey over the past year.

The new spec just dropped, and while there are a ton of great updates across the board, far more than I can get into in this blog post, there are two changes in the authorization layer that I am most excited about. They fundamentally change how clients identify themselves and how enterprises manage access to AI-enabled apps.

Client ID Metadata Documents (CIMD)

If you’ve ever tried to work with an open ecosystem of OAuth clients and servers, you know the "Client Registration" problem. In traditional OAuth, you go to a developer portal, register your app, and get a client_id and client_secret. That works great when there is one central server (like Google or GitHub) and many clients that want to use that server.

It breaks down completely in an open ecosystem like MCP, where we have many clients talking to many servers. You can't expect a developer of a new AI Agent to manually register with every single one of the 2,000 MCP servers in the MCP server registry. Plus, when a new MCP server launches, that server wouldn't be able to ask every client developer to register either.

Until now, the answer for MCP was Dynamic Client Registration (DCR). But as implementation experience has shown us over the last several months, DCR introduces a massive amount of complexity and risk for both sides.

For Authorization Servers, DCR endpoints are a headache. They require public-facing APIs that need strict rate limiting to prevent abuse, and they lead to unbounded database growth as thousands of random clients register themselves. The number of client registrations will only ever increase, so the authorization server is likely to implement some sort of "cleanup" mechanism to delete old client registrations. The problem is there is no clear definition of what an "old" client is.  And if a dynamically registered client is deleted, the client doesn't know about it, and the user is often stuck with no way to recover. Because of the security implications of an endpoint like this, DCR has also been a massive barrier to enterprise adoption of MCP.

For Clients, it’s just as bad. They have to manage the lifecycle of their client credentials on top of the actual access tokens, and there is no standardized way to check if the client registration is still valid. This frequently leads to sloppy implementations where clients simply register a brand new client_id every single time a user logs in, further increasing the number of client registrations at the authorization server. This isn't a theoretical problem: it's how Mastodon has worked for the last several years, and there are GitHub issue threads describing the challenges it creates.

The new MCP spec solves this by adopting Client ID Metadata Documents.

The OAuth Working Group adopted the Client ID Metadata Document spec in October after about a year of discussion, so it's still relatively new. But seeing it land as the default mechanism in MCP is huge. Instead of the client registering with each authorization server, the client establishes its own identity with a URL it controls and uses the URL to identify itself during an OAuth flow.

When the client starts an OAuth request to the MCP authorization server, it says, "Hi, I'm https://example-app.com/client.json." The server fetches the JSON document at that URL and finds the client's metadata (logo, name, redirect URIs) and proceeds on as usual.

This creates a decentralized trust model based on DNS. If you trust example.com, you trust the client. It removes the registration friction entirely while keeping the security guarantees we need. It’s the same pattern we’ve used in IndieAuth for over a decade, and it fits MCP perfectly.
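
To make that concrete, here's a rough sketch of the two halves of the flow: what a client's metadata document might contain, and the kind of fetch-and-check an authorization server could do when a client identifies itself with that URL. This is my own simplified illustration, not code from the spec or any SDK; the field values are made up, and a real server needs caching, response size limits, SSRF protection, and the other safeguards the spec discusses.

    import json
    import urllib.request

    # What might live at https://example-app.com/client.json: the client's
    # self-published OAuth client metadata (values here are illustrative).
    EXAMPLE_METADATA = {
        "client_id": "https://example-app.com/client.json",
        "client_name": "Example MCP Client",
        "client_uri": "https://example-app.com",
        "redirect_uris": ["https://example-app.com/oauth/callback"],
        "token_endpoint_auth_method": "none",
    }

    def fetch_client_metadata(client_id):
        """Authorization server side: treat client_id as a URL and fetch it.

        A production server would also block private addresses, cap the
        response size, and cache the document, per the spec's guidance.
        """
        if not client_id.startswith("https://"):
            raise ValueError("client_id must be an https URL")
        with urllib.request.urlopen(client_id, timeout=5) as resp:
            metadata = json.load(resp)
        # Sanity check: the document should describe the client_id URL it
        # was fetched from.
        if metadata.get("client_id") != client_id:
            raise ValueError("metadata does not match the client_id URL")
        return metadata

    def redirect_uri_allowed(metadata, redirect_uri):
        """Only honor redirect URIs the client has published for itself."""
        return redirect_uri in metadata.get("redirect_uris", [])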

There are definitely some new considerations and risks this brings, so it's worth diving into the details about Client ID Metadata Documents in the MCP spec as well as the IETF spec. For example, if you're building an MCP client that is running on a web server, you can actually manage private keys and publish the public keys in your metadata document, enabling strong client authentication. And like Dynamic Client Registration, there are still limitations for how desktop clients can leverage this, which can hopefully be solved by a future extension. I talked more about this during a hugely popular session at the Internet Identity Workshop in October, you can find the slides here.

You can try out this new flow today in VSCode, the first MCP client to ship support for CIMD even before it was officially in the spec. You can also learn more and test it out at the excellent website the folks at Stytch created: client.dev.

Enterprise-Managed Authorization (Cross App Access)

This is the big one for anyone asking, "Is MCP safe to use in the enterprise?"

Until now, when an AI agent connected to an MCP server, the connection was established directly between the MCP client and server. For example if you are using ChatGPT to connect to the Asana MCP server, ChatGPT would start an OAuth flow to Asana. But if your Asana account is actually connected to an enterprise IdP like Okta, Okta would only see that you're logging in to Asana, and wouldn't be aware of the connection established between ChatGPT and Asana. This means today there are a huge number of what are effectively unmanaged connections between MCP clients and servers in the enterprise. Enterprise IT admins hate this because it creates "Shadow IT" connections that bypass enterprise policy.

The new MCP spec incorporates Cross App Access (XAA) as the authorization extension "Enterprise-Managed Authorization".

This builds on the work I discussed in Enterprise-Ready MCP leveraging the Identity Assertion Authorization Grant. The flow puts the enterprise Identity Provider (IdP) back in the driver's seat.

Here is how it works:

  1. Single Sign-On: First you log into an MCP Client (like Claude or an IDE) using your corporate SSO, the client gets an ID token.

  2. Token Exchange: Instead of the client starting an OAuth flow to ask the user to manually approve access to a downstream tool (like an Asana MCP server), the client takes that ID token back to the Enterprise IdP to ask for access.

  3. Policy Check: The IdP checks corporate policy. "Is Engineering allowed to use Claude to access Asana?" If the policy passes, the IdP issues a temporary token (ID-JAG) that the client can take to the MCP authorization server.

  4. Access Token Request: The MCP client takes the ID-JAG to the MCP authorization server saying "hey this IdP says you can issue me an access token for this user". The authorization server validates the ID-JAG the same way it would have validated an ID Token (remember this app is also set up for SSO to the same corporate IdP), and issues an access token.

This happens entirely behind the scenes without user interaction. The user doesn't get bombarded with consent screens, and the enterprise admin gets full visibility and revocability. If you want to shut down AI access to a specific internal tool, you do it in one place: your IdP.

Further Reading

There is a lot more in the full spec update, but these two pieces—CIMD for scalable client identity and Cross App Access for enterprise security—are the two I am most excited about. They take MCP to the next level by solving the biggest challenges that were preventing scalable adoption of MCP in the enterprise.

You can read more about the MCP authorization spec update in Den's excellent post, and more about all the updates to the MCP spec in the official announcement post.

Links to docs and specs about everything mentioned in this post are below.

2025-11-25T08:07:14-08:00 Fullscreen Open in Tab
Recurring Events for Meetable

In October, I launched an instance of Meetable for the MCP Community. They've been using it to post working group meetings as well as in-person community events. In just 2 months it already has 41 events listed!

One of the aspects of opening up the software to a new community is stress testing some of the design decisions. An early, intentional design decision was to not support recurring events. For a community calendar, recurring events are often problematic. Once a recurring event is created for something like a weekly meetup, it's no longer clear whether the event is actually going to happen, which is especially true for virtual events. If an organizer of the event silently drops away from the community, it's very likely they will not go delete the event, and you can end up with stale events on the calendar quickly. It's better to have people explicitly create the event on the calendar so that every event is created with intention. To support this, I made a "Clone Event" button to quickly copy the details from a previous instance, and it even predicts the next date based on how often the event has been happening in the past.

But for the MCP community, which is a bit more formal than a pure community calendar, most of the events on their site are weekly or biweekly working group meetings. I had been hearing quite a bit of feedback that the current process of scheduling out the events manually, even with the "clone event" feature, was too much of a burden. So I set out to design a solution for recurring events that strikes a balance between ease of use and, hopefully, avoiding some of the pitfalls of recurring events.

What I landed on is this:

You can create an "event template" from any existing event on the calendar, and give it a recurrence interval like "Every week on Tuesdays" or "Monthly on the 9th".

recurrence options

(I'll add an option for "Monthly on the second Tuesday" later if this ends up being used enough.)

Once the schedule is created, copies of the event will be created at the chosen interval, but only a few weeks out. Weekly events are created 4 weeks in advance, biweekly events 8 weeks out, monthly events 4 months out, and yearly events only one year out. A daily cron job keeps creating future events to maintain that window. If the event template is deleted, future scheduled events will also be deleted.

So effectively for organizers there is nothing they need to do after creating the recurring event schedule. My hope is that by having it work this way, instead of like recurring events on a typical Google calendar, it strikes a balance between ease of use and avoiding orphaned events on the calendar. It still requires an organizer to delete a recurrence, so it should only be used for events that truly have a schedule and are unlikely to be cancelled often.

Hopefully this makes Meetable even more useful for different kinds of communities! You can install your own copy of Meetable from the source code on GitHub.

2025-10-11T09:49:59-07:00 Fullscreen Open in Tab
Adding Support for BlueSky to IndieLogin.com

Today I just launched support for BlueSky as a new authentication option in IndieLogin.com!

IndieLogin.com is a developer service that allows users to log in to a website with their domain. It delegates the actual user authentication out to various external services, whether that is an IndieAuth server, GitHub, GitLab, Codeberg, or just an email confirmation code, and now also BlueSky.

This means if you have a custom domain as your BlueSky handle, you can now use it to log in to websites like indieweb.org directly!

bluesky login

Alternatively, you can add a link to your BlueSky handle from your website with a rel="me atproto" attribute, similar to how you would link to your GitHub profile from your website.

<a href="https://example.bsky.social" rel="me atproto">example.bsky.social</a>

Full setup instructions here

This is made possible thanks to BlueSky's support of the new OAuth Client ID Metadata Document specification, which was recently adopted by the OAuth Working Group. This means that, as the developer of the IndieLogin.com service, I didn't have to register for any BlueSky API keys in order to use the OAuth server! The IndieLogin.com website publishes its own client metadata document, which the BlueSky OAuth server fetches directly. This is the same client metadata that an IndieAuth server will parse as well! Aren't standards fun!

The hardest part about the whole process was probably adding DPoP support. Actually creating the DPoP JWT wasn't that bad but the tricky part was handling the DPoP server nonces sent back. I do wish we had a better solution for that mechanism in DPoP, but I remember the reasoning for doing it this way and I guess we just have to live with it now.
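
To make the nonce dance a little more concrete, here's roughly the shape of the exchange as I understand it from the DPoP spec (RFC 9449); the host, body, and values below are placeholders. The first request carries a DPoP proof JWT with no nonce claim, and the server rejects it while handing back a nonce to use:

POST /oauth/token HTTP/1.1
Host: auth.example.com
Content-Type: application/x-www-form-urlencoded
DPoP: eyJ0eXAiOiJkcG9wK2p3dCIs...

grant_type=authorization_code&code=...

HTTP/1.1 400 Bad Request
DPoP-Nonce: eyJ7S_zG.eyJH0-Z.HX4w-7v
Content-Type: application/json

{"error": "use_dpop_nonce"}

The client then builds a fresh DPoP proof with that value in its nonce claim and retries the request, and has to repeat the whole thing whenever the server rotates the nonce.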

This was a fun exercise in implementing a bunch of the specs I've been working on recently!

Here's the link to the full ATProto OAuth docs for reference.

2025-10-10T00:00:00+00:00 Fullscreen Open in Tab
Notes on switching to Helix from vim

Hello! Earlier this summer I was talking to a friend about how much I love using fish, and how I love that I don’t have to configure it. They said that they feel the same way about the helix text editor, and so I decided to give it a try.

I’ve been using it for 3 months now and here are a few notes.

why helix: language servers

I think what motivated me to try Helix is that I’ve been trying to get a working language server setup (so I can do things like “go to definition”) and getting a setup that feels good in Vim or Neovim just felt like too much work.

After using Vim/Neovim for 20 years, I’ve tried both “build my own custom configuration from scratch” and “use someone else’s pre-built configuration system” and even though I love Vim I was excited about having things just work without having to work on my configuration at all.

Helix comes with built-in language server support, and it feels nice to be able to do things like “rename this symbol” in any language.

the search is great

One of my favourite things about Helix is the search! If I’m searching all the files in my repository for a string, it lets me scroll through the potential matching files and see the full context of the match, like this:

For comparison, here’s what the vim ripgrep plugin I’ve been using looks like:

There’s no context for what else is around that line.

the quick reference is nice

One thing I like about Helix is that when I press g, I get a little help popup telling me places I can go. I really appreciate this because I don’t often use the “go to definition” or “go to reference” feature and I often forget the keyboard shortcut.

some vim -> helix translations

  • Helix doesn’t have marks like ma, 'a; instead I’ve been using Ctrl+O and Ctrl+I to go back (or forward) to the last cursor location
  • I think Helix does have macros, but I’ve been using multiple cursors in every case that I would have previously used a macro. I like multiple cursors a lot more than writing macros all the time. If I want to batch change something in the document, my workflow is to press % (to highlight everything), then s to select (with a regex) the things I want to change, then I can just edit all of them as needed.
  • Helix doesn’t have neovim-style tabs, instead it has a nice buffer switcher (<space>b) I can use to switch to the buffer I want. There’s a pull request here to implement neovim-style tabs. There’s also a setting bufferline="multiple" which can act a bit like tabs with gp, gn for prev/next “tab” and :bc to close a “tab”.

some helix annoyances

Here’s everything that’s annoyed me about Helix so far.

  • I like the way Helix’s :reflow works much less than how vim reflows text with gq. It doesn’t work as well with lists. (github issue)
  • If I’m making a Markdown list, pressing “enter” at the end of a list item won’t continue the list. There’s a partial workaround for bulleted lists but I don’t know one for numbered lists.
  • No persistent undo yet: in vim I could use an undofile so that I could undo changes even after quitting. Helix doesn’t have that feature yet. (github PR)
  • Helix doesn’t autoreload files after they change on disk; I have to run :reload-all (:ra<tab>) to manually reload them. Not a big deal.
  • Sometimes it crashes, maybe every week or so. I think it might be this issue.

The “markdown list” and reflowing issues come up a lot for me because I spend a lot of time editing Markdown lists, but I keep using Helix anyway so I guess they can’t be making me that mad.

switching was easier than I thought

I was worried that relearning 20 years of Vim muscle memory would be really hard.

It turned out to be easier than I expected. I started using Helix on a vacation for a little low-stakes coding project I was doing on the side, and after a week or two it didn’t feel so disorienting anymore. I think it might be hard to switch back and forth between Vim and Helix, but I haven’t needed to use Vim recently so I don’t know if that’ll ever become an issue for me.

The first time I tried Helix I tried to force it to use keybindings that were more similar to Vim and that did not work for me. Just learning the “Helix way” was a lot easier.

There are still some things that throw me off: for example w in vim and w in Helix don’t have the same idea of what a “word” is (the Helix one includes the space after the word, the Vim one doesn’t).

using a terminal-based text editor

For many years I’d mostly been using a GUI version of vim/neovim, so switching to actually using an editor in the terminal was a bit of an adjustment.

I ended up deciding on:

  1. Every project gets its own terminal window, and all of the tabs in that window (mostly) have the same working directory
  2. I make my Helix tab the first tab in the terminal window

It works pretty well, I might actually like it better than my previous workflow.

my configuration

I appreciate that my configuration is really simple, compared to my neovim configuration which is hundreds of lines. It’s mostly just 4 keyboard shortcuts.

theme = "solarized_light"
[editor]
# Sync clipboard with system clipboard
default-yank-register = "+"

[keys.normal]
# I didn't like that Ctrl+C was the default "toggle comments" shortcut
"#" = "toggle_comments"

# I didn't feel like learning a different way
# to go to the beginning/end of a line so
# I remapped ^ and $
"^" = "goto_first_nonwhitespace"
"$" = "goto_line_end"

[keys.select]
"^" = "goto_first_nonwhitespace"
"$" = "goto_line_end"

[keys.normal.space]
# I write a lot of text so I need to constantly reflow,
# and missed vim's `gq` shortcut
l = ":reflow"

There’s a separate languages.toml configuration where I set some language preferences, like turning off autoformatting. For example, here’s my Python configuration:

[[language]]
name = "python"
formatter = { command = "black", args = ["--stdin-filename", "%{buffer_name}", "-"] }
language-servers = ["pyright"]
auto-format = false

we’ll see how it goes

Three months is not that long, and it’s possible that I’ll decide to go back to Vim at some point. For example, I wrote a post about switching to nix a while back but after maybe 8 months I switched back to Homebrew (though I’m still using NixOS to manage one little server, and I’m still satisfied with that).

2025-10-08T12:14:38-07:00 Fullscreen Open in Tab
Client ID Metadata Document Adopted by the OAuth Working Group

The IETF OAuth Working Group has adopted the Client ID Metadata Document specification!

This specification defines a mechanism through which an OAuth client can identify itself to authorization servers, without prior dynamic client registration or other existing registration.

Clients identify themselves with their own URL, and host their metadata (name, logo, redirect URL) in a JSON document at that URL. They then use that URL as the client_id to introduce themselves to an authorization server for the first time.

The mechanism of clients identifying themselves as a URL has been in use in IndieAuth for over a decade, and more recently has been adopted by BlueSky for their OAuth API. The recent surge in interest in MCP has further demonstrated the need for this to be a standardized mechanism, and was the main driver in the latest round of discussion for the document! This could replace Dynamic Client Registration in MCP, dramatically simplifying management of clients, as well as enabling servers to limit access to specific clients if they want.

The folks at Stytch put together a really nice explainer website about it too! cimd.dev

Thanks to everyone for your contributions and feedback so far! And thanks to my co-author Emilia Smith for her work on the document!

2025-10-04T07:32:57-07:00 Fullscreen Open in Tab
Meetable Release Notes - October 2025

I just released some updates for Meetable, my open source event listing website.

The major new feature is the ability to let users log in with a Discord account. A Meetable instance can be linked to a Discord server to enable any member of the server to log in to the site. You can also restrict who can log in based on Discord "roles", so you can limit who can edit events to only certain Discord members.

One of the first questions I get about Meetable is whether recurring events are supported. My answer has always been "no". In general, it's too easy for recurring events on community calendars to get stale. If an organizer forgets to cancel or just stops showing up, that isn't visible unless someone takes the time to clean up the recurrence. Instead, it's healthier to require each event to be created manually. There is a "clone event" feature that makes it easy to copy all the details from a previous event so you can quickly create these sorts of recurring events by hand. In this update, I added a feature to streamline this even further: the next recurrence is now predicted based on the past interval of the event.

For example, for a biweekly cadence, the following steps happen now:

  • You would create the first instance manually, say for October 1
  • You click "Clone Event" and change the date of the new event to October 15
  • Now when you click "Clone Event" on the October 15 event, it will pre-fill October 29 based on the fact that the October 15 event was created 2 weeks after the event it was cloned from

Currently this only works by counting days, so it wouldn't work for things like "first Tuesday of the month" or "the 1st of the month", but I hope this saves some time in the future regardless. If "first Tuesday" or specific days of the month are an important use case for you, let me know and I can try to come up with a solution.

Minor changes/fixes below:

  • Added "Create New Event" to the "Add Event" dropdown menu because it wasn't obvious "Add Event" was clickable.
  • Meeting link no longer appears for cancelled events. (Actually the meeting link only appears for "confirmed" events.)
  • If you add a meeting link but don't set a timezone, a warning message appears on the event.
  • Added a setting to show a message when uploading a photo; you can use this to describe a photo license policy, for example.
  • Added a "user profile" page, and if users are configured to fetch profile info from their website, a button to re-fetch the profile info will appear.

2025-08-06T17:00:00-07:00 Fullscreen Open in Tab
San Francisco Billboards - August 2025

Every time I take a Lyft from the San Francisco airport to downtown going up 101, I notice the billboards. The billboards on 101 are always such a good snapshot in time of the current peak of the Silicon Valley hype cycle. I've decided to capture photos of the billboards every time I am there, to see how this changes over time. 

Here's a photo dump from the 101 billboards from August 2025. The theme is clearly AI. Apologies for the slightly blurry photos, these were taken while driving 60mph down the highway, some of them at night.

2025-06-26T00:00:00+00:00 Fullscreen Open in Tab
New zine: The Secret Rules of the Terminal

Hello! After many months of writing deep dive blog posts about the terminal, on Tuesday I released a new zine called “The Secret Rules of the Terminal”!

You can get it for $12 here: https://wizardzines.com/zines/terminal, or get a 15-pack of all my zines here.

Here’s the cover:

the table of contents

Here’s the table of contents:

why the terminal?

I’ve been using the terminal every day for 20 years but even though I’m very confident in the terminal, I’ve always had a bit of an uneasy feeling about it. Usually things work fine, but sometimes something goes wrong and it just feels like investigating it is impossible, or at least like it would open up a huge can of worms.

So I started trying to write down a list of weird problems I’ve run into in the terminal and I realized that the terminal has a lot of tiny inconsistencies like:

  • sometimes you can use the arrow keys to move around, but sometimes pressing the arrow keys just prints ^[[D
  • sometimes you can use the mouse to select text, but sometimes you can’t
  • sometimes your commands get saved to a history when you run them, and sometimes they don’t
  • some shells let you use the up arrow to see the previous command, and some don’t

If you use the terminal daily for 10 or 20 years, even if you don’t understand exactly why these things happen, you’ll probably build an intuition for them.

But having an intuition for them isn’t the same as understanding why they happen. When writing this zine I actually had to do a lot of work to figure out exactly what was happening in the terminal to be able to talk about how to reason about it.

the rules aren’t written down anywhere

It turns out that the “rules” for how the terminal works (how do you edit a command you type in? how do you quit a program? how do you fix your colours?) are extremely hard to fully understand, because “the terminal” is actually made of many different pieces of software (your terminal emulator, your operating system, your shell, the core utilities like grep, and every other random terminal program you’ve installed) which are written by different people with different ideas about how things should work.

So I wanted to write something that would explain:

  • how the 4 pieces of the terminal (your shell, terminal emulator, programs, and TTY driver) fit together to make everything work
  • some of the core conventions for how you can expect things in your terminal to work
  • lots of tips and tricks for how to use terminal programs

this zine explains the most useful parts of terminal internals

Terminal internals are a mess. A lot of it is just the way it is because someone made a decision in the 80s and now it’s impossible to change, and honestly I don’t think learning everything about terminal internals is worth it.

But some parts are not that hard to understand and can really make your experience in the terminal better, like:

  • if you understand what your shell is responsible for, you can configure your shell (or use a different one!) to access your history more easily, get great tab completion, and so much more
  • if you understand escape codes, it’s much less scary when cating a binary to stdout messes up your terminal, you can just type reset and move on
  • if you understand how colour works, you can get rid of bad colour contrast in your terminal so you can actually read the text

I learned a surprising amount writing this zine

When I wrote How Git Works, I thought I knew how Git worked, and I was right. But the terminal is different. Even though I feel totally confident in the terminal and even though I’ve used it every day for 20 years, I had a lot of misunderstandings about how the terminal works and (unless you’re the author of tmux or something) I think there’s a good chance you do too.

A few things I learned that are actually useful to me:

  • I understand the structure of the terminal better and so I feel more confident debugging weird terminal stuff that happens to me (I was even able to suggest a small improvement to fish!). Identifying exactly which piece of software is causing a weird thing to happen in my terminal still isn’t easy but I’m a lot better at it now.
  • you can write a shell script to copy to your clipboard over SSH (there’s a rough sketch after this list)
  • how reset works under the hood (it does the equivalent of stty sane; sleep 1; tput reset) – basically I learned that I don’t ever need to worry about remembering stty sane or tput reset and I can just run reset instead
  • how to look at the invisible escape codes that a program is printing out (run unbuffer program > out; less out)
  • why the builtin REPLs on my Mac like sqlite3 are so annoying to use (they use libedit instead of readline)
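
For the clipboard-over-SSH trick, here’s a hedged sketch of the kind of script I mean, using the OSC 52 escape sequence (terminal emulator support varies, tmux needs its own configuration, and the file name is just an example):

#!/bin/sh
# osc52-copy.sh: copy stdin to the local clipboard using the OSC 52 escape sequence.
# It works over SSH because the escape sequence travels back to your local terminal.
# usage: cat notes.txt | ./osc52-copy.sh
printf '\033]52;c;%s\a' "$(base64 | tr -d '\n')"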

blog posts I wrote along the way

As usual these days I wrote a bunch of blog posts about various side quests:

people who helped with this zine

A long time ago I used to write zines mostly by myself but with every project I get more and more help. I met with Marie Claire LeBlanc Flanagan every weekday from September to June to work on this one.

The cover is by Vladimir Kašiković, Lesley Trites did copy editing, Simon Tatham (who wrote PuTTY) did technical review, our Operations Manager Lee did the transcription as well as a million other things, and Jesse Luehrs (who is one of the very few people I know who actually understands the terminal’s cursed inner workings) had so many incredibly helpful conversations with me about what is going on in the terminal.

get the zine

Here are some links to get the zine again:

As always, you can get either a PDF version to print at home or a print version shipped to your house. The only caveat is print orders will ship in August – I need to wait for orders to come in to get an idea of how many I should print before sending it to the printer.

2025-06-10T00:00:00+00:00 Fullscreen Open in Tab
Using `make` to compile C programs (for non-C-programmers)

I have never been a C programmer but every so often I need to compile a C/C++ program from source. This has been kind of a struggle for me: for a long time, my approach was basically “install the dependencies, run make, if it doesn’t work, either try to find a binary someone has compiled or give up”.

“Hope someone else has compiled it” worked pretty well when I was running Linux but since I’ve been using a Mac for the last couple of years I’ve been running into more situations where I have to actually compile programs myself.

So let’s talk about what you might have to do to compile a C program! I’ll use a couple of examples of specific C programs I’ve compiled and talk about a few things that can go wrong. Here are three programs we’ll be talking about compiling:

  • paperjam
  • sqlite
  • qf (a pager you can run to quickly open files from a search with rg -n THING | qf)

step 1: install a C compiler

This is pretty simple: on an Ubuntu system if I don’t already have a C compiler I’ll install one with:

sudo apt-get install build-essential

This installs gcc, g++, and make. The situation on a Mac is more confusing but it’s something like “install xcode command line tools”.
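
If I remember right, the Mac incantation is something like this (it pops up Apple’s installer for the command line tools, which include clang and make):

xcode-select --install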

step 2: install the program’s dependencies

Unlike some newer programming languages, C doesn’t have a dependency manager. So if a program has any dependencies, you need to hunt them down yourself. Thankfully because of this, C programmers usually keep their dependencies very minimal and often the dependencies will be available in whatever package manager you’re using.

There’s almost always a section explaining how to get the dependencies in the README, for example in paperjam’s README, it says:

To compile PaperJam, you need the headers for the libqpdf and libpaper libraries (usually available as libqpdf-dev and libpaper-dev packages).

You may need a2x (found in AsciiDoc) for building manual pages.

So on a Debian-based system you can install the dependencies like this.

sudo apt install -y libqpdf-dev libpaper-dev

If a README gives a name for a package (like libqpdf-dev), I’d basically always assume that they mean “in a Debian-based Linux distro”: if you’re on a Mac, brew install libqpdf-dev will not work. I still haven’t 100% gotten the hang of developing on a Mac, so I don’t have many tips there. I guess in this case it would be brew install qpdf if you’re using Homebrew.

step 3: run ./configure (if needed)

Some C programs come with a Makefile and some instead come with a script called ./configure. For example, if you download sqlite’s source code, it has a ./configure script in it instead of a Makefile.

My understanding of this ./configure script is:

  1. You run it, it prints out a lot of somewhat inscrutable output, and then it either generates a Makefile or fails because you’re missing some dependency
  2. The ./configure script is part of a system called autotools that I have never needed to learn anything about beyond “run it to generate a Makefile”.

I think there might be some options you can pass to get the ./configure script to produce a different Makefile but I have never done that.
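
I believe the most common ones are a --help flag that lists the available options, and a --prefix option that controls where make install will eventually put things, something like:

./configure --help
./configure --prefix=$HOME/software
make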

step 4: run make

The next step is to run make to try to build a program. Some notes about make:

  • Sometimes you can run make -j8 to parallelize the build and make it go faster
  • It usually prints out a million compiler warnings when compiling the program. I always just ignore them. I didn’t write the software! The compiler warnings are not my problem.

compiler errors are often dependency problems

Here’s an error I got while compiling paperjam on my Mac:

/opt/homebrew/Cellar/qpdf/12.0.0/include/qpdf/InputSource.hh:85:19: error: function definition does not declare parameters
   85 |     qpdf_offset_t last_offset{0};
      |                   ^

Over the years I’ve learned it’s usually best not to overthink problems like this: if it’s talking about qpdf, there’s a good chance it just means that I’ve done something wrong with how I’m including the qpdf dependency.

Now let’s talk about some ways to get the qpdf dependency included in the right way.

the world’s shortest introduction to the compiler and linker

Before we talk about how to fix dependency problems: building C programs is split into 2 steps (there’s a rough sketch of both steps right after the list below):

  1. Compiling the code into object files (with gcc or clang)
  2. Linking those object files into a final binary (with ld)
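
Roughly, if you did those two steps by hand for a project like paperjam, it might look something like this (the file names are made up for illustration):

# step 1: compile each source file into an object file
c++ -c paperjam.cc -o paperjam.o
c++ -c pdf.cc -o pdf.o

# step 2: link the object files (plus any libraries) into the final binary
# (the c++ command here acts as a front end that runs the linker, ld, for you)
c++ -o paperjam paperjam.o pdf.o -lqpdf -lpaper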

It’s important to know this when building a C program because sometimes you need to pass the right flags to the compiler and linker to tell them where to find the dependencies for the program you’re compiling.

make uses environment variables to configure the compiler and linker

If I run make on my Mac to install paperjam, I get this error:

c++ -o paperjam paperjam.o pdf-tools.o parse.o cmds.o pdf.o -lqpdf -lpaper
ld: library 'qpdf' not found

This is not because qpdf is not installed on my system (it actually is!). But the compiler and linker don’t know how to find the qpdf library. To fix this, we need to:

  • pass "-I/opt/homebrew/include" to the compiler (to tell it where to find the header files)
  • pass "-L/opt/homebrew/lib -liconv" to the linker (to tell it where to find library files and to link in iconv)

And we can get make to pass those extra parameters to the compiler and linker using environment variables! To see how this works: inside paperjam’s Makefile you can see a bunch of environment variables, like LDLIBS here:

paperjam: $(OBJS)
	$(LD) -o $@ $^ $(LDLIBS)

Everything you put into the LDLIBS environment variable gets passed to the linker (ld) as a command line argument.

secret environment variable: CPPFLAGS

Makefiles sometimes define their own environment variables that they pass to the compiler/linker, but make also has a bunch of “implicit” environment variables which it will automatically pass to the C compiler and linker. There’s a full list of implicit environment variables here, but one of them is CPPFLAGS, which gets automatically passed to the C compiler.

(technically it would be more normal to use CXXFLAGS for this, but this particular Makefile hardcodes CXXFLAGS so setting CPPFLAGS was the only way I could find to set the compiler flags without editing the Makefile)

As an aside: it took me a long time to realize how closely tied to C/C++ `make` is -- I used to think that `make` was just a general build system (and of course you can use it for anything!) but it has a lot of affordances for building C/C++ programs that it doesn't have for building any other kind of program.

two ways to pass environment variables to make

I learned thanks to @zwol that there are actually two ways to pass environment variables to make:

  1. CXXFLAGS=xyz make (the usual way)
  2. make CXXFLAGS=xyz

The difference between them is that make CXXFLAGS=xyz will override the value of CXXFLAGS set in the Makefile but CXXFLAGS=xyz make won’t.
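
Here’s a tiny experiment you could run to see that behaviour for yourself (a made-up two-line Makefile; the echo line needs to start with a real tab):

# Makefile
CXXFLAGS = -O2
print-flags:
	@echo $(CXXFLAGS)

$ CXXFLAGS=-g make print-flags    # prints -O2: the Makefile's assignment wins
$ make print-flags CXXFLAGS=-g    # prints -g: the command line assignment wins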

I’m not sure which way is the norm but I’m going to use the first way in this post.

how to use CPPFLAGS and LDLIBS to fix this compiler error

Now that we’ve talked about how CPPFLAGS and LDLIBS get passed to the compiler and linker, here’s the final incantation that I used to get the program to build successfully!

CPPFLAGS="-I/opt/homebrew/include" LDLIBS="-L/opt/homebrew/lib -liconv" make paperjam

This passes -I/opt/homebrew/include to the compiler and -L/opt/homebrew/lib -liconv to the linker.

Also I don’t want to pretend that I “magically” knew that those were the right arguments to pass, figuring them out involved a bunch of confused Googling that I skipped over in this post. I will say that:

  • the -I compiler flag tells the compiler which directory to find header files in, like /opt/homebrew/include/qpdf/QPDF.hh
  • the -L linker flag tells the linker which directory to find libraries in, like /opt/homebrew/lib/libqpdf.a
  • the -l linker flag tells the linker which libraries to link in, like -liconv means “link in the iconv library”, or -lm means “link in the math library”

tip: how to just build 1 specific file: make $FILENAME

Yesterday I discovered this cool tool called qf which you can use to quickly open files from the output of ripgrep.

qf is in a big directory of various tools, but I only wanted to compile qf. So I just compiled qf, like this:

make qf

Basically if you know (or can guess) the output filename of the file you’re trying to build, you can tell make to just build that file by running make $FILENAME

tip: you don’t need a Makefile

I sometimes write 5-line C programs with no dependencies, and I just learned that if I have a file called blah.c, I can just compile it like this without creating a Makefile:

make blah

It gets automatically expanded to cc -o blah blah.c, which saves a bit of typing. I have no idea if I’m going to remember this (I might just keep typing gcc -o blah blah.c anyway) but it seems like a fun trick.

tip: look at how other packaging systems built the same C program

If you’re having trouble building a C program, maybe other people had problems building it too! Every Linux distribution has build files for every package that they build, so even if you can’t install packages from that distribution directly, maybe you can get tips from that Linux distro for how to build the package. Realizing this (thanks to my friend Dave) was a huge ah-ha moment for me.

For example, this line from the nix package for paperjam says:

  env.NIX_LDFLAGS = lib.optionalString stdenv.hostPlatform.isDarwin "-liconv";

This is basically saying “pass the linker flag -liconv to build this on a Mac”, so that’s a clue we could use to build it.

That same file also says env.NIX_CFLAGS_COMPILE = "-DPOINTERHOLDER_TRANSITION=1";. I’m not sure what this means, but when I try to build the paperjam package I do get an error about something called a PointerHolder, so I guess that’s somehow related to the “PointerHolder transition”.

step 5: installing the binary

Once you’ve managed to compile the program, probably you want to install it somewhere! Some Makefiles have an install target that lets you install the tool on your system with make install. I’m always a bit scared of this (where is it going to put the files? what if I want to uninstall them later?), so if I’m compiling a pretty simple program I’ll often just manually copy the binary to install it instead, like this:

cp qf ~/bin

step 6: maybe make your own package!

Once I figured out how to do all of this, I realized that I could use my new make knowledge to contribute a paperjam package to Homebrew! Then I could just brew install paperjam on future systems.

The good thing is that even though the details of all of the different packaging systems differ, they fundamentally all use C compilers and linkers.

it can be useful to understand a little about C even if you’re not a C programmer

I think all of this is an interesting example of how it can be useful to understand some basics of how C programs work (like “they have header files”) even if you’re never planning to write a nontrivial C program in your life.

It feels good to have some ability to compile C/C++ programs myself, even though I’m still not totally confident about all of the compiler and linker flags and I still plan to never learn anything about how autotools works other than “you run ./configure to generate the Makefile”.

Two things I left out of this post:

  • LD_LIBRARY_PATH / DYLD_LIBRARY_PATH (which you use to tell the dynamic linker at runtime where to find dynamically linked files) because I can’t remember the last time I ran into an LD_LIBRARY_PATH issue and couldn’t find an example.
  • pkg-config, which I think is important but I don’t understand yet

2025-05-12T22:01:23-07:00 Fullscreen Open in Tab
Enterprise-Ready MCP

I've seen a lot of complaints about how MCP isn't ready for the enterprise.

I agree, although maybe not for the reasons you think. But don't worry, this isn't just a rant! I believe we can fix it!

The good news is the recent updates to the MCP authorization spec that separate out the role of the authorization server from the MCP server have now put the building blocks in place to make this a lot easier.

But let's back up and talk about what enterprise buyers expect when they are evaluating AI tools to bring into their companies.

Single Sign-On

At a minimum, an enterprise admin expects to be able to put an application under their single sign-on system. This enables the company to manage which users are allowed to use which applications, and prevents their users from needing to have their own passwords at the applications. The goal is to get every application managed under their single sign-on (SSO) system. Many large companies have more than 200 applications, so having them all managed through their SSO solution is a lot better than employees having to manage separate passwords for 200 different applications!

There's a lot more than SSO too, like lifecycle management, entitlements, and logout. We're tackling these in the IPSIE working group in the OpenID Foundation. But for the purposes of this discussion, let's stick to the basics of SSO.

So what does this have to do with MCP?

An AI agent using MCP is just another application enterprises expect to be able to integrate into their single-sign-on (SSO) system. Let's take the example of Claude. When rolled out at a company, ideally every employee would log in to their company Claude account using the company identity provider (IdP). This lets the enterprise admin decide how many Claude licenses to purchase and who should be able to use it.

Connecting to External Apps

The next thing that should happen after a user logs in to Claude via SSO is they need to connect Claude to their other enterprise apps. This includes the built-in integrations in Claude like Google Calendar and Google Drive, as well as any MCP servers exposed by other apps in use within the enterprise. That could cover other SaaS apps like Zoom, Atlassian, and Slack, as well as home-grown internal apps.

Today, this process involves a somewhat cumbersome series of steps each individual employee must take. Here's an example of what the user needs to do to connect their AI agent to external apps:

First, the user logs in to Claude using SSO. This involves a redirect from Claude to the enterprise IdP where they authenticate with one or more factors, and then are redirected back.

SSO Log in to Claude

Next, they need to connect the external app from within Claude. Claude provides a button to initiate the connection. This takes the user to that app (in this example, Google), which redirects them to the IdP to authenticate again, eventually getting redirected back to the app where an OAuth consent prompt is displayed asking the user to approve access, and finally the user is redirected back to Claude and the connection is established.

Connect Google

The user has to repeat these steps for every MCP server that they want to connect to Claude. There are two main problems with this:

  • This user experience is not great. That's a lot of clicking that the user has to do.
  • The enterprise admin has no visibility or control over the connection established between the two applications.

Both of these are significant problems. If you have even just 10 MCP servers rolled out in the enterprise, you're asking users to click through 10 SSO and OAuth prompts to establish the connections, and it will only get worse as MCP is more widely adopted within apps. But also, should we really be asking the user if it's okay for Claude to access their data in Google Drive? In a company context, that's not actually the user's decision. That decision should be made by the enterprise IT admin.

In "An Open Letter to Third-party Suppliers", Patrick Opet, Chief Information Security Officer of JPMorgan Chase writes:

"Modern integration patterns, however, dismantle these essential boundaries, relying heavily on modern identity protocols (e.g., OAuth) to create direct, often unchecked interactions between third-party services and firms' sensitive internal resources."

Right now, these app-to-app connections are happening behind the back of the IdP. What we need is a way to move the connections between the applications into the IdP where they can be managed by the enterprise admin.

Let's see how this works if we leverage a new (in-progress) OAuth extension called "Identity and Authorization Chaining Across Domains", which I'll refer to as "Cross-App Access" for short, enabling the enterprise IdP to sit in the middle of the OAuth exchange between the two apps.

A Brief Intro to Cross-App Access

In this example, we'll use Claude as the application that is trying to connect to Slack's (hypothetical) MCP server. We'll start with a high-level overview of the flow, and later go over the detailed protocol.

First, the user logs in to Claude through the IdP as normal. This results in Claude getting either an ID token or SAML assertion from the IdP, which tells Claude who the user is. (This works the same for SAML assertions or ID tokens, so I'll use ID tokens in the example from here out.) This is no different than what the user would do today when signing in to Claude.

Step 1 and 2 SSO

Then, instead of prompting the user to connect Slack, Claude takes the ID token back to the IdP in a request that says "Claude is requesting access to this user's Slack account."

The IdP validates the ID token, sees it was issued to Claude, and verifies that the admin has allowed Claude to access Slack on behalf of the given user. Assuming everything checks out, the IdP issues a new token back to Claude.

Step 3 and 4 Cross-Domain Request

Claude takes the intermediate token from the IdP to Slack saying "hi, I would like an access token for the Slack MCP server. The IdP gave me this token with the details of the user to issue the access token for." Slack validates the token the same way it would have validated an ID token. (Remember, Slack is already configured for SSO to the IdP for this customer as well, so it already has a way to validate these tokens.) Slack is able to issue an access token giving Claude access to this user's resources in its MCP server.

Step 5-7 Access Token Request

This solves the two big problems:

  • The exchange happens entirely without any user interaction, so the user never sees any prompts or any OAuth consent screens.
  • Since the IdP sits in between the exchange, this gives the enterprise admin a chance to configure the policies around which applications are allowed this direct connection.

The other nice side effect of this is since there is no user interaction required, the first time a new user logs in to Claude, all their enterprise apps will be automatically connected without them having to click any buttons!

Cross-App Access Protocol

Now let's look at what this looks like in the actual protocol. This is based on the adopted in-progress OAuth specification "Identity and Authorization Chaining Across Domains". This spec is actually a combination of two RFCs: Token Exchange (RFC 8693), and JWT Profile for Authorization Grants (RFC 7523). Both RFCs as well as the "Identity and Authorization Chaining Across Domains" spec are very flexible. While this means it is possible to apply this to many different use cases, it does mean we need to be a bit more specific in how to use it for this use case. For that purpose, I've written a profile of the Identity Chaining draft called "Identity Assertion Authorization Grant" to fill in the missing pieces for the specific use case detailed here.

Let's go through it step by step. For this example we'll use the following entities:

  • Claude - the "Requesting Application", which is attempting to access Slack
  • Slack - the "Resource Application", which has the resources being accessed through MCP
  • Okta - the enterprise identity provider which users at the example company can use to sign in to both apps

Cross-App Access Diagram

Single Sign-On

First, Claude gets the user to sign in using a standard OpenID Connect (or SAML) flow in order to obtain an ID token. There isn't anything unique to this spec regarding this first stage, so I will skip the details of the OpenID Connect flow and we'll start with the ID token as the input to the next step.

Token Exchange

Claude, the requesting application, then makes a Token Exchange request (RFC 8693) to the IdP's token endpoint with the following parameters:

  • requested_token_type: The value urn:ietf:params:oauth:token-type:id-jag indicates that an ID Assertion JWT is being requested.
  • audience: The Issuer URL of the Resource Application's authorization server.
  • subject_token: The identity assertion (e.g. the OpenID Connect ID Token or SAML assertion) for the target end-user.
  • subject_token_type: Either urn:ietf:params:oauth:token-type:id_token or urn:ietf:params:oauth:token-type:saml2 as defined by RFC 8693.

This request will also include the client credentials that Claude would use in a traditional OAuth token request, which could be a client secret or a JWT Bearer Assertion.

POST /oauth2/token HTTP/1.1
Host: acme.okta.com
Content-Type: application/x-www-form-urlencoded

grant_type=urn:ietf:params:oauth:grant-type:token-exchange
&requested_token_type=urn:ietf:params:oauth:token-type:id-jag
&audience=https://auth.slack.com/
&subject_token=eyJraWQiOiJzMTZ0cVNtODhwREo4VGZCXzdrSEtQ...
&subject_token_type=urn:ietf:params:oauth:token-type:id_token
&client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer
&client_assertion=eyJhbGciOiJSUzI1NiIsImtpZCI6IjIyIn0...

ID Assertion Validation and Policy Evaluation

At this point, the IdP evaluates the request and decides whether to issue the requested "ID Assertion JWT". The request will be evaluated based on the validity of the arguments, as well as the policy configured by the customer.

For example, the IdP validates that the ID token in this request was issued to the same client that matches the provided client authentication. It evaluates that the user still exists and is active, and that the user is assigned the Resource Application. Other policies can be evaluated at the discretion of the IdP, just like it can during a single sign-on flow.

If the IdP agrees that the requesting app should be authorized to access the given user's data in the resource app's MCP server, it will respond with a Token Exchange response to issue the token:

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store

{
  "issued_token_type": "urn:ietf:params:oauth:token-type:id-jag",
  "access_token": "eyJhbGciOiJIUzI1NiIsI...",
  "token_type": "N_A",
  "expires_in": 300
}

The claims in the issued JWT are defined in "Identity Assertion Authorization Grant". The JWT is signed using the same key that the IdP signs ID tokens with. This is a critical aspect that makes this work, since again we assumed that both apps would already be configured for SSO to the IdP so would already be aware of the signing key for that purpose.

At this point, Claude is ready to request a token for the Resource App's MCP server.

Access Token Request

The JWT received in the previous request can now be used as a "JWT Authorization Grant" as described by RFC 7523. To do this, Claude makes a request to the MCP authorization server's token endpoint with the following parameters:

  • grant_type: urn:ietf:params:oauth:grant-type:jwt-bearer
  • assertion: The Identity Assertion Authorization Grant JWT obtained in the previous token exchange step

For example:

POST /oauth2/token HTTP/1.1
Host: auth.slack.com
Authorization: Basic yZS1yYW5kb20tc2VjcmV0v3JOkF0XG5Qx2

grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer
assertion=eyJhbGciOiJIUzI1NiIsI...

Slack's authorization server can now evaluate this request to determine whether to issue an access token. The authorization server can validate the JWT by checking the issuer (iss) in the JWT to determine which enterprise IdP the token is from, and then check the signature using the public key discovered at that server. There are other claims to be validated as well, described in Section 6.1 of the Identity Assertion Authorization Grant.

Assuming all the validations pass, Slack is ready to issue an access token to Claude in the token response:

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store

{
  "token_type": "Bearer",
  "access_token": "2YotnFZFEjr1zCsicMWpAA",
  "expires_in": 86400
}

This token response is in the same format that Slack's authorization server would return for a traditional OAuth flow. That's another key aspect of this design that makes it scalable. We don't need the resource app to use any particular access token format, since only that server is responsible for validating those tokens.

Now that Claude has the access token, it can make a request to the (hypothetical) Slack MCP server using the bearer token the same way it would have if it got the token using the traditional redirect-based OAuth flow.

Note: Eventually we'll need to define the specific behavior of when to return a refresh token in this token response. The goal is to ensure the client goes through the IdP often enough for the IdP to enforce its access policies. A refresh token could potentially undermine that if the refresh token lifetime is too long. It follows that ultimately the IdP should enforce the refresh token lifetime, so we will need to define a way for the IdP to communicate to the authorization server whether and how long to issue refresh tokens. This would enable the authorization server to make its own decision on access token lifetime, while still respecting the enterprise IdP policy.

Cross-App Access Sequence Diagram

Here's the flow again, this time as a sequence diagram.

Cross-App Access Sequence Diagram

  1. The client initiates a login request
  2. The user's browser is redirected to the IdP
  3. The user logs in at the IdP
  4. The IdP returns an OAuth authorization code to the user's browser
  5. The user's browser delivers the authorization code to the client
  6. The client exchanges the authorization code for an ID token at the IdP
  7. The IdP returns an ID token to the client

At this point, the user is logged in to the MCP client. Everything up until this point has been a standard OpenID Connect flow.

  1. The client makes a direct Token Exchange request to the IdP to exchange the ID token for a cross-domain "ID Assertion JWT"
  2. The IdP validates the request and checks the internal policy
  3. The IdP returns the ID-JAG to the client
  4. The client makes a token request using the ID-JAG to the MCP authorization server
  5. The authorization server validates the token using the signing key it also uses for its OpenID Connect flow with the IdP
  6. The authorization server returns an access token
  7. The client makes a request with the access token to the MCP server
  8. The MCP server returns the response

For a more detailed step by step of the flow, see Appendix A.3 of the Identity Assertion Authorization Grant.

Next Steps

If this is something you're interested in, we'd love your help! The in-progress spec is publicly available, and we're looking for people interested in helping prototype it. If you're building an MCP server and you want to make it enterprise-ready, I'd be happy to help you build this!

You can find me at a few related events coming up:

And of course you can always find me on LinkedIn or email me at aaron.parecki@okta.com.

2025-04-03T16:39:37-07:00 Fullscreen Open in Tab
Let's fix OAuth in MCP
Update: The changes described in this blog post have been incorporated into the 2025-06-18 version of the MCP spec!

Let's not overthink auth in MCP.

Yes, the MCP server is going to need its own auth server. But it's not as bad as it sounds. Let me explain.

First let's get a few pieces of terminology straight.

The confusion that's happening in the discussions I've seen so far is because the spec and diagrams show that the MCP server itself is handling authorization. That's not necessary.

oauth roles

In OAuth, we talk about the "authorization server" and "resource server" as distinct roles. I like to think of the authorization server as the "token factory", that's the thing that makes the access tokens. The resource server (usually an API) needs to be able to validate the tokens created by the authorization server.

combined AS and RS

It's possible to build a single server that is both a resource server and authorization server, and in fact many OAuth systems are built that way, especially large consumer services.

separate AS and RS

But nothing about the spec requires that the two roles are combined, it's also possible to run these as two totally unrelated services.

This flexibility that's been baked into OAuth for over a decade is what has led to its rapid adoption, as well as the proliferation of open source and commercial products that provide an OAuth authorization server as a service.

So how does this relate to MCP?

I can annotate the flow from the Model Context Protocol spec to show the parts where the client talks to the MCP Resource Server separately from where the client talks to the MCP Authorization Server.

MCP Flow showing AS and RS highlighted

Here is the updated sequence diagram showing communication with each role separately.

New MCP diagram showing separate AS and RS

Why is it important to call out this change?

I've seen a few conversations in various places about how requiring the MCP Server to be both an authorization server and resource server is too much of a burden. But actually, very little needs to change about the spec to enable this separation of concerns that OAuth already provides.

I've also seen various suggestions of other ways to separate the authorization server from the MCP server, like delegating to an enterprise IdP and having the MCP server validate access tokens issued by the IdP. These other options also conflate the OAuth roles in an awkward way and would result in some undesirable properties or relationships between the various parties involved.

So what needs to change in the MCP spec to enable this?

Discovery

The main thing currently forcing the MCP Server to be both the authorization server and resource server is how the client does discovery.

One design goal of MCP is to enable a client to bootstrap everything it needs based on only the server URL provided. I think this is a great design goal, and luckily is something that can be achieved even when separating the roles in the way I've described.

The MCP spec currently says that clients are expected to fetch the OAuth Server Metadata (RFC8414) file from the MCP Server base URL, resulting in a URL such as:

https://example.com/.well-known/oauth-authorization-server

This ends up meaning the MCP Resource Server must also be an Authorization Server, which leads to the complications the community has encountered so far. The good news is there is an OAuth spec we can apply here instead: Protected Resource Metadata.

Protected Resource Metadata

The Protected Resource Metadata spec is used by a Resource Server to advertise metadata about itself, including which Authorization Server can be used with it. This spec is both new and old. It was started in 2016, but was never adopted by the OAuth working group until 2023, after I had presented at an IETF meeting about the need for clients to be able to bootstrap OAuth flows given an OAuth resource server. The spec is now awaiting publication as an RFC, and should get its RFC number in a couple months. (Update: This became RFC 9728 on April 23, 2025!)

Applying this to the MCP server would result in a sequence like the following:

New discovery flow for MCP

  1. The MCP Client fetches the Resource Server Metadata file by appending /.well-known/oauth-protected-resource to the MCP Server base URL (a sketch of this document follows this list).
  2. The MCP Client finds the authorization_servers property in the JSON response, and builds the Authorization Server Metadata URL by appending /.well-known/oauth-authorization-server
  3. The MCP Client fetches the Authorization Server Metadata to find the endpoints it needs for the OAuth flow, the authorization endpoint and token endpoint
  4. The MCP Client initiates an OAuth flow and continues as normal
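
As a rough sketch (the field names come from the Protected Resource Metadata spec; the URLs are placeholders), the document fetched in step 1 might look like:

{
  "resource": "https://mcp.example.com",
  "authorization_servers": [
    "https://auth.example.com"
  ]
}

From there, step 2 has the client fetch https://auth.example.com/.well-known/oauth-authorization-server to find the authorization endpoint and token endpoint.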


Note: The Protected Resource Metadata spec also supports the Resource Server returning a WWW-Authenticate header with a link to the resource metadata URL if you want to avoid the requirement that MCP Servers host their metadata at the .well-known endpoint; it just requires an extra HTTP request to support this.
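
Roughly, that option ends up looking like the Resource Server replying with a challenge along these lines (the URL is a placeholder):

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource"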

Access Token Validation

There are two things to keep in mind about how the MCP Server validates access tokens with this new separation of concerns.

If you do build the MCP Authorization Server and Resource Server as part of the same system, you don't need to do anything special to validate the access tokens the Authorization Server issues. You probably already have some sort of infrastructure in place for your normal API to validate tokens issued by your Authorization Server, so nothing changes there.

If you are using an external Authorization Server, whether that's an open source product or a commercial hosted service, that product will have its own docs for how you can validate the tokens it creates. There's a good chance it already supports the standardized JWT Access Tokens described in RFC 9068, in which case you can use off-the-shelf JWT validation middleware for common frameworks.

In either case, the critical design goal here is that the MCP Authorization Server issues access tokens that only ever need to be validated by the MCP Resource Server. This is in line with the security recommendations in Section 2.3 of RFC 9700, in particular that "access tokens SHOULD be audience-restricted to a specific resource server". In other words, it would be a bad idea for the MCP Client to be issued an access token that works with both the MCP Resource Server and the service's REST API.
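
If you're working with JWT-formatted access tokens, here's a quick way to peek at the claims and confirm the audience restriction while debugging. This only decodes the payload and does not check the signature, so it's a debugging aid rather than real validation; ACCESS_TOKEN is a placeholder.

# decode the middle (payload) segment of a JWT to inspect its claims
payload=$(printf '%s' "$ACCESS_TOKEN" | cut -d. -f2 | tr '_-' '/+')
case $(( ${#payload} % 4 )) in   # restore base64 padding
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
# base64 -d is GNU syntax; some BSD versions use -D instead
printf '%s' "$payload" | base64 -d | jq '{iss, aud, exp}'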

Why Require the MCP Server to have an Authorization Server in the first place?

Another argument I've seen is that MCP Server developers shouldn't have to build any OAuth infrastructure at all, instead they should be able to delegate all the OAuth bits to an external service.

In principle, I agree. Getting API access and authorization right is tricky; that's why there are entire companies dedicated to solving the problem.

The architecture laid out above enables this exact separation of concerns. The difference between this architecture and some of the other proposals I've seen is that this cleanly separates the security boundaries so that there are minimal dependencies among the parties involved.

But one thing I haven't seen mentioned in the discussions is that there actually is no requirement that an OAuth Authorization Server provide any UI itself.

An Authorization Server with no UI?

While it is desirable from a security perspective that the MCP Resource Server has a corresponding Authorization Server that issues access tokens for it, that Authorization Server doesn't actually need to have any UI or even any concept of user login or accounts. You can build an Authorization Server that delegates all user account management to an external service. You can see an example of this in the MCP server PayPal recently launched.

PayPal's traditional API already supports OAuth; its authorization and token endpoints are:

  • https://www.paypal.com/signin/authorize
  • https://api-m.paypal.com/v1/oauth2/token

When PayPal built their MCP server, they launched it at https://mcp.paypal.com. If you fetch the metadata for the MCP Server, you'll find the two OAuth endpoints for the MCP Authorization Server:

  • https://mcp.paypal.com/authorize
  • https://mcp.paypal.com/token
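
If you want to check this yourself, fetching the metadata is just an HTTP GET. The exact well-known path below is my assumption based on the current MCP discovery rules described earlier.

curl -s https://mcp.paypal.com/.well-known/oauth-authorization-server \
  | jq '{authorization_endpoint, token_endpoint}'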

When the MCP Client redirects the user to the authorization endpoint, the MCP server itself doesn't provide any UI. Instead, it immediately redirects the user to the real PayPal authorization endpoint which then prompts the user to log in and authorize the client.

Roles with backend API and Authorization Servers

This points to yet another benefit of architecting the MCP Authorization Server and Resource Server this way. It enables implementers to delegate the actual user management to their existing OAuth server with no changes needed to the MCP Client. The MCP Client isn't even aware that this extra redirect step was inserted in the middle. As far as the MCP Client is concerned, it has been talking to only the MCP Authorization Server. It just so happens that the MCP Authorization Server has sent the user elsewhere to actually log in.

Dynamic Client Registration

There's one more point I want to make about why having a dedicated MCP Authorization Server is helpful architecturally.

The MCP spec strongly recommends that MCP Servers (authorization servers) support Dynamic Client Registration. If MCP is successful, there will be a large number of MCP Clients talking to a large number of MCP Servers, and the user is the one deciding which combinations of clients and servers to use. This means it is not scalable to require that every MCP Client developer register their client with every MCP Server.

This is similar to the idea of using an email client with the user's chosen email server. Obviously Mozilla can't register Thunderbird with every email server out there. Instead, there needs to be a way to dynamically establish a client's identity with the OAuth server at runtime. Dynamic Client Registration is one option for how to do that.
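
For reference, a Dynamic Client Registration request (RFC 7591) is just an unauthenticated POST of client metadata. Here's a hedged sketch; the registration endpoint URL and redirect URI are hypothetical, and the field names are the standard RFC 7591 ones.

curl -s -X POST https://auth.example.com/register \
  -H 'Content-Type: application/json' \
  -d '{
        "client_name": "Example MCP Client",
        "redirect_uris": ["http://127.0.0.1:33418/callback"],
        "grant_types": ["authorization_code"],
        "token_endpoint_auth_method": "none"
      }'
# the response contains a client_id (and sometimes a client_secret) that the
# client can then use in the authorization code flow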

The problem is that most commercial APIs are not going to enable Dynamic Client Registration on their production servers. For example, in order to get client credentials to use the Google APIs, you need to register as a developer and then register an OAuth client after logging in. Dynamic Client Registration would allow a client to register itself without that link to the developer's account, which would mean there is no paper trail for who developed the client. The Dynamic Client Registration endpoint can't require authentication by definition, so it is a public endpoint that can create clients, which as you can imagine opens up some potential security issues.

I do, however, think it would be reasonable to expect production services to enable Dynamic Client Registration only on the MCP Authorization Server. This way the dynamically-registered clients wouldn't be able to use the regular REST API, but would only be able to interact with the MCP API.

Mastodon and BlueSky also have a similar problem of needing clients to show up at arbitrary authorization servers without prior coordination between the client developer and authorization server operator. I call this the "OAuth for the Open Web" problem. Mastodon used Dynamic Client Registration as their solution, and has since documented some of the issues that this creates, linked here and here.

BlueSky decided to take a different approach and instead uses an https URL as a client identifier, bypassing the need for a client registration step entirely. This has the added bonus of providing at least some level of confidence in the client's identity, because the client identity is hosted at a domain. It would be a perfectly viable approach to use this method for MCP as well. There is a discussion on that within MCP here. This is an ongoing topic within the OAuth working group; I have a couple of drafts in progress to formalize this pattern: Client ID Metadata Document and Client ID Scheme.

Enterprise IdP Integration

Lastly, I want to touch on the idea of enabling users to log in to MCP Servers with their enterprise IdP.

When an enterprise company purchases software, they expect to be able to tie it into their single-sign-on solution. For example, when I log in to work Slack, I enter my work email and Slack redirects me to my work IdP where I log in. This way employees don't need separate passwords for every app they use in the enterprise: they can log in to everything with the same enterprise account, and all the apps can be protected with multi-factor authentication through the IdP. This also gives the company control over which users can access which apps, as well as a way to revoke a user's access at any time.

So how does this relate to MCP?

Well, plenty of people are already trying to figure out how to let their employees safely use AI tools within the enterprise. So we need a way to let employees use their enterprise IdP to log in and authorize MCP Clients to access MCP Servers.

If you're building an MCP Server in front of an existing application that already supports enterprise Single Sign-On, then you don't need to do anything differently in the MCP Client or Server and you already have support for this. When the MCP Client redirects to the MCP Authorization Server, the MCP Authorization Server redirects to the main Authorization Server, which would then prompt the user for their company email/domain and redirect to the enterprise IdP to log in.

This brings me to yet another thing I've been seeing conflated in the discussions: user login and user authorization.

OAuth is an authorization delegation protocol. OAuth doesn't actually say anything about how users authenticate at the OAuth server; it only talks about how the user can authorize access to an application. This is actually a really great thing, because it means we can get super creative with how users authenticate.

User logs in and authorizes

Remember the yellow box "User logs in and authorizes" from the original sequence diagram? These are actually two totally distinct steps. The OAuth authorization server is responsible for getting the user to log in somehow, but there's no requirement that the user log in with a username and password. This is where we can insert a single-sign-on flow to an enterprise IdP, or really anything you can imagine.

So think of this as two separate boxes: "user logs in", and "user authorizes". Then, we can replace the "user logs in" box with an entirely new OpenID Connect flow out to the enterprise IdP to log the user in, and after they are logged in they can authorize the client.

User logs in with OIDC

I'll spare you the complete expanded sequence diagram, since it looks a lot more complicated than it actually is. But I want to stress again that this is nothing new; this is already how things are commonly done today.

This all just becomes cleaner to understand when you separate the MCP Authorization Server from the MCP Resource Server.

We can push all the complexity of user login, token minting, and more onto the MCP Authorization Server, keeping the MCP Resource Server free to do the much simpler task of validating access tokens and serving resources.

Future Improvements of Enterprise IdP Integration

There are two things I want to call out about how enterprise IdP integration could be improved. Both of these are entire topics on their own, so I will only touch on the problems and link out to other places where work is happening to solve them.

There are two points of friction with the current state of enterprise login for SaaS apps.

  • IdP discovery
  • User consent

IdP Discovery

When a user logs in to a SaaS app, they need to tell the app how to find their enterprise IdP. This is commonly done by either asking the user to enter their work email, or asking the user to enter their tenant URL at the service.

Sign in with SSO

Neither of these is really a great user experience. It would be a lot better if the browser already knew which enterprise IdP the user should be sent to. This is one of my goals with the work happening in FedCM. With this new browser API, the browser can mediate the login, automatically telling the SaaS app which enterprise IdP to use, so the user only needs to click their account icon rather than type anything in.

User Consent

Another point of friction in the enterprise happens when a user starts connecting multiple applications to each other within the company. For example, if you drop a Google Docs link into Slack, Slack will prompt you to connect your Google account to preview the link. Multiply this by the N applications that can preview links and the M applications whose links you might drop in, and you end up sending the user through a huge number of OAuth consent flows.

The problem is only made worse with the explosion of AI tools. Every AI tool will need access to data in every other application in the enterprise. That is a lot of OAuth consent flows for the user to manage. Plus, the user shouldn't really be the one granting consent for Slack to access the company Google Docs account anyway. That consent should ideally be managed by the enterprise IT admin.

What we actually need is a way to enable the IT admin to grant consent for apps to talk to each other company-wide, removing the need for users to be sent through an OAuth flow at all.

This is the basis of another OAuth spec I've been working on, the Identity Assertion Authorization Grant.

The same problem applies to MCP Servers, and with the separation of concerns laid out above, it becomes straightforward to add this extension to move the consent to the enterprise and streamline the user experience.

Get in touch!

If these sound like interesting problems, please get in touch! You can find me on LinkedIn or reach me via email at aaron@parecki.com.

2025-03-07T00:00:00+00:00 Fullscreen Open in Tab
Standards for ANSI escape codes

Hello! Today I want to talk about ANSI escape codes.

For a long time I was vaguely aware of ANSI escape codes (“that’s how you make text red in the terminal and stuff”) but I had no real understanding of where they were supposed to be defined or whether or not there were standards for them. I just had a kind of vague “there be dragons” feeling around them. While learning about the terminal this year, I’ve learned that:

  1. ANSI escape codes are responsible for a lot of usability improvements in the terminal (did you know there’s a way to copy to your system clipboard when SSHed into a remote machine?? It’s an escape code called OSC 52!)
  2. They aren’t completely standardized, and because of that they don’t always work reliably. And because they’re also invisible, it’s extremely frustrating to troubleshoot escape code issues.

So I wanted to put together a list for myself of some standards that exist around escape codes, because I want to know if they have to feel unreliable and frustrating, or if there’s a future where we could all rely on them with more confidence.

what’s an escape code?

Have you ever pressed the left arrow key in your terminal and seen ^[[D? That’s an escape code! It’s called an “escape code” because the first character is the “escape” character, which is usually written as ESC, \x1b, \E, \033, or ^[.

Escape codes are how your terminal emulator communicates various kinds of information (colours, mouse movement, etc) with programs running in the terminal. There are two kinds of escape codes:

  1. input codes which your terminal emulator sends for keypresses or mouse movements that don’t fit into Unicode. For example “left arrow key” is ESC[D, “Ctrl+left arrow” might be ESC[1;5D, and clicking the mouse might be something like ESC[M :3.
  2. output codes which programs can print out to colour text, move the cursor around, clear the screen, hide the cursor, copy text to the clipboard, enable mouse reporting, set the window title, etc.
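
If you want to see input codes for yourself, one low-tech way is to run cat with -v so control characters are printed visibly, press a key like the left arrow, then press Enter (and Ctrl-D to exit). Exactly what you see depends on your terminal and its settings, but on mine the left arrow shows up as:

$ cat -v
^[[D

Another classic trick: at a bash (readline) prompt, press Ctrl-V and then the arrow key to insert the escape sequence literally instead of moving the cursor.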

Now let’s talk about standards!

ECMA-48

The first standard I found relating to escape codes was ECMA-48, which was originally published in 1976.

ECMA-48 does two things:

  1. Define some general formats for escape codes (like “CSI” codes, which are ESC[ + something, and “OSC” codes, which are ESC] + something)
  2. Define some specific escape codes, like how “move the cursor to the left” is ESC[D, or “turn text red” is ESC[31m. In the spec, the “cursor left” one is called CURSOR LEFT and the one for changing colours is called SELECT GRAPHIC RENDITION.
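
You can try both of those ECMA-48 codes from a shell whose printf supports \e (bash's does). The first turns text red with SELECT GRAPHIC RENDITION; the second moves the cursor back two columns so that "xy" overwrites "bc":

$ printf '\e[31mthis text is red\e[0m\n'
$ printf 'abc\e[2Dxy\n'
axy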

The formats are extensible, so there’s room for others to define more escape codes in the future. Lots of escape codes that are popular today aren’t defined in ECMA-48: for example it’s pretty common for terminal applications (like vim, htop, or tmux) to support using the mouse, but ECMA-48 doesn’t define escape codes for the mouse.

xterm control sequences

There are a bunch of escape codes that aren’t defined in ECMA-48, for example:

  • enabling mouse reporting (where did you click in your terminal?)
  • bracketed paste (did you paste that text or type it in?)
  • OSC 52 (which terminal applications can use to copy text to your system clipboard)

I believe (correct me if I’m wrong!) that these and some others came from xterm, are documented in XTerm Control Sequences, and have been widely implemented by other terminal emulators.
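
For example, here's what an OSC 52 sequence looks like. If your terminal emulator supports it (not all do, and some require opting in), this should put the text on your system clipboard, even over SSH; the payload is just base64:

$ printf '\e]52;c;%s\a' "$(printf 'hello from the terminal' | base64)"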

This list of “what xterm supports” is not a standard exactly, but xterm is extremely influential and so it seems like an important document.

terminfo

In the 80s (and to some extent today, but my understanding is that it was MUCH more dramatic in the 80s) there was a huge amount of variation in what escape codes terminals actually supported.

To deal with this, there’s a database of escape codes for various terminals called “terminfo”.

It looks like the standard for terminfo is called X/Open Curses, though you need to create an account to view that standard for some reason. It defines the database format as well as a C library interface (“curses”) for accessing the database.

For example you can run this bash snippet to see every possible escape code for “clear screen” for all of the different terminals your system knows about:

# toe -a lists every terminal type in the terminfo database;
# infocmp -1 prints that terminal's capabilities one per line, and the
# grep/sed pull out just the escape code for the "clear" capability
for term in $(toe -a | awk '{print $1}')
do
  echo $term
  infocmp -1 -T "$term" 2>/dev/null | grep 'clear=' | sed 's/clear=//g;s/,//g'
done

On my system (and probably every system I’ve ever used?), the terminfo database is managed by ncurses.

should programs use terminfo?

I think it’s interesting that there are two main approaches that applications take to handling ANSI escape codes:

  1. Use the terminfo database to figure out which escape codes to use, depending on what’s in the TERM environment variable. Fish does this, for example.
  2. Identify a “single common set” of escape codes which works in “enough” terminal emulators and just hardcode those.
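
As a tiny illustration of the difference: approach #1 asks the terminfo database (via tput) which code to use for whatever TERM is set to, while approach #2 just prints a hardcoded xterm-style code:

$ tput setaf 1; echo 'red, via terminfo'; tput sgr0
$ printf '\e[31mred, via a hardcoded escape code\e[0m\n'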

Some examples of programs/libraries that take approach #2 (“don’t use terminfo”) include:

I got curious about why folks might be moving away from terminfo and I found this very interesting and extremely detailed rant about terminfo from one of the fish maintainers, which argues that:

[the terminfo authors] have done a lot of work that, at the time, was extremely important and helpful. My point is that it no longer is.

I’m not going to do it justice, so I’m not going to summarize it; I think it’s worth reading.

is there a “single common set” of escape codes?

I was just talking about the idea that you can use a “common set” of escape codes that will work for most people. But what is that set? Is there any agreement?

I really do not know the answer to this at all, but from doing some reading it seems like it’s some combination of:

  • The codes that the VT100 supported (though some aren’t relevant on modern terminals)
  • what’s in ECMA-48 (which I think also has some things that are no longer relevant)
  • What xterm supports (though I’d guess that not everything in there is actually widely supported enough)

and maybe ultimately “identify the terminal emulators you think your users are going to use most frequently and test in those”, the same way web developers do when deciding which CSS features are okay to use.

I don’t think there are any resources like Can I use…? or Baseline for the terminal though. (In theory terminfo is supposed to be the “caniuse” for the terminal, but it often seems to take 10+ years for new terminal features to be added after people invent them, which makes it very limited.)

some reasons to use terminfo

I also asked on Mastodon why people found terminfo valuable in 2025 and got a few reasons that made sense to me:

  • some people expect to be able to use the TERM environment variable to control how programs behave (for example with TERM=dumb), and there’s no standard for how that should work in a post-terminfo world
  • even though there’s less variation between terminal emulators than there was in the 80s, there’s far from zero variation: there are graphical terminals, the Linux framebuffer console, the situation you’re in when connecting to a server via its serial console, Emacs shell mode, and probably more that I’m missing
  • there is no one standard for what the “single common set” of escape codes is, and sometimes programs use escape codes which aren’t actually widely supported enough

terminfo & user agent detection

The way that ncurses uses the TERM environment variable to decide which escape codes to use reminds me of how webservers used to sometimes use the browser user agent to decide which version of a website to serve.

It also seems like it’s had some of the same results – the way iTerm2 reports itself as being “xterm-256color” feels similar to how Safari’s user agent is “Mozilla/5.0 (Macintosh; Intel Mac OS X 14_7_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.3 Safari/605.1.15”. In both cases the terminal emulator / browser ends up changing its user agent to get around user agent detection that isn’t working well.

On the web we ended up deciding that user agent detection was not a good practice and to instead focus on standardization so we can serve the same HTML/CSS to all browsers. I don’t know if the same approach is the future in the terminal though – I think the terminal landscape today is much more fragmented than the web ever was as well as being much less well funded.

some more documents/standards

A few more documents and standards related to escape codes, in no particular order:

why I think this is interesting

I sometimes see people saying that the unix terminal is “outdated”, and since I love the terminal so much I’m always curious about what incremental changes might make it feel less “outdated”.

Maybe if we had a clearer standards landscape (like we do on the web!) it would be easier for terminal emulator developers to build new features and for authors of terminal applications to more confidently adopt those features so that we can all benefit from them and have a richer experience in the terminal.

Obviously standardizing ANSI escape codes is not easy (ECMA-48 was first published almost 50 years ago and we’re still not there!). I don’t even know what all of the challenges are. But the situation with HTML/CSS/JS used to be extremely bad too and now it’s MUCH better, so maybe there’s hope.

2025-02-13T12:27:56+00:00 Fullscreen Open in Tab
How to add a directory to your PATH

I was talking to a friend about how to add a directory to your PATH today. It’s something that feels “obvious” to me since I’ve been using the terminal for a long time, but when I searched for instructions for how to do it, I actually couldn’t find something that explained all of the steps – a lot of them just said “add this to ~/.bashrc”, but what if you’re not using bash? What if your bash config is actually in a different file? And how are you supposed to figure out which directory to add anyway?

So I wanted to try to write down some more complete directions and mention some of the gotchas I’ve run into over the years.

Here’s a table of contents:

step 1: what shell are you using?

If you’re not sure what shell you’re using, here’s a way to find out. Run this:

ps -p $$ -o pid,comm=
  • if you’re using bash, it’ll print out 97295 bash
  • if you’re using zsh, it’ll print out 97295 zsh
  • if you’re using fish, it’ll print out an error like “In fish, please use $fish_pid” ($$ isn’t valid syntax in fish, but in any case the error message tells you that you’re using fish, which you probably already knew)

Also bash is the default on Linux and zsh is the default on Mac OS (as of 2024). I’ll only cover bash, zsh, and fish in these directions.

step 2: find your shell’s config file

  • in zsh, it’s probably ~/.zshrc
  • in bash, it might be ~/.bashrc, but it’s complicated, see the note in the next section
  • in fish, it’s probably ~/.config/fish/config.fish (you can run echo $__fish_config_dir if you want to be 100% sure)

a note on bash’s config file

Bash has three possible config files: ~/.bashrc, ~/.bash_profile, and ~/.profile.

If you’re not sure which one your system is set up to use, I’d recommend testing this way:

  1. add echo hi there to your ~/.bashrc
  2. Restart your terminal
  3. If you see “hi there”, that means ~/.bashrc is being used! Hooray!
  4. Otherwise remove it and try the same thing with ~/.bash_profile
  5. You can also try ~/.profile if the first two options don’t work.

(there are a lot of elaborate flow charts out there that explain how bash decides which config file to use, but IMO it’s not worth internalizing them; just testing is the fastest way to be sure)

step 3: figure out which directory to add

Let’s say that you’re trying to install and run a program called http-server and it doesn’t work, like this:

$ npm install -g http-server
$ http-server
bash: http-server: command not found

How do you find what directory http-server is in? Honestly in general this is not that easy – often the answer is something like “it depends on how npm is configured”. A few ideas:

  • Often when you first set up a new installer (like cargo, npm, homebrew, etc), it’ll print out some directions about how to update your PATH. So if you’re paying attention you can get the directions then.
  • Sometimes installers will automatically update your shell’s config file to update your PATH for you
  • Sometimes just Googling “where does npm install things?” will turn up the answer
  • Some tools have a subcommand that tells you where they’re configured to install things, like:
    • Node/npm: npm config get prefix (then append /bin/)
    • Go: go env GOPATH (then append /bin/)
    • asdf: asdf info | grep ASDF_DIR (then append /bin/ and /shims/)

step 3.1: double check it’s the right directory

Once you’ve found a directory you think might be the right one, make sure it’s actually correct! For example, I found out that on my machine, http-server is in ~/.npm-global/bin. I can make sure that it’s the right directory by trying to run the program http-server in that directory like this:

$ ~/.npm-global/bin/http-server
Starting up http-server, serving ./public

It worked! Now that you know what directory you need to add to your PATH, let’s move to the next step!

step 4: edit your shell config

Now we have the 2 critical pieces of information we need:

  1. Which directory you’re trying to add to your PATH (like ~/.npm-global/bin/)
  2. Where your shell’s config is (like ~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish)

Now what you need to add depends on your shell:

bash instructions:

Open your shell’s config file, and add a line like this:

export PATH=$PATH:~/.npm-global/bin/

(obviously replace ~/.npm-global/bin with the actual directory you’re trying to add)

zsh instructions:

You can do the same thing as in bash, but zsh also has some slightly fancier syntax you can use if you prefer:

path=(
  $path
  ~/.npm-global/bin
)

fish instructions:

In fish, the syntax is different:

set PATH $PATH ~/.npm-global/bin

(in fish you can also use fish_add_path, some notes on that further down)

step 5: restart your shell

Now, an extremely important step: updating your shell’s config won’t take effect if you don’t restart it!

Two ways to do this:

  1. open a new terminal (or terminal tab), and maybe close the old one so you don’t get confused
  2. Run bash to start a new shell (or zsh if you’re using zsh, or fish if you’re using fish)

I’ve found that both of these usually work fine.

And you should be done! Try running the program you were trying to run and hopefully it works now.

If not, here are a couple of problems that you might run into:

problem 1: it ran the wrong program

If the wrong version of a program is running, you might need to add the directory to the beginning of your PATH instead of the end.

For example, on my system I have two versions of python3 installed, which I can see by running which -a:

$ which -a python3
/usr/bin/python3
/opt/homebrew/bin/python3

The one your shell will use is the first one listed.

If you want to use the Homebrew version, you need to add that directory (/opt/homebrew/bin) to the beginning of your PATH instead, by putting this in your shell’s config file (it’s /opt/homebrew/bin/:$PATH instead of the usual $PATH:/opt/homebrew/bin/)

export PATH=/opt/homebrew/bin/:$PATH

or in fish:

set PATH /opt/homebrew/bin $PATH

problem 2: the program isn’t being run from your shell

All of these directions only work if you’re running the program from your shell. If you’re running the program from an IDE, from a GUI, in a cron job, or some other way, you’ll need to add the directory to your PATH in a different way, and the exact details might depend on the situation.

in a cron job

Some options:

  • use the full path to the program you’re running, like /home/bork/bin/my-program
  • put the full PATH you want as the first line of your crontab (something like PATH=/bin:/usr/bin:/usr/local/bin:….). You can get the full PATH you’re using in your shell by running echo "PATH=$PATH".

I’m honestly not sure how to handle it in an IDE/GUI because I haven’t run into that in a long time; I’ll add directions here if someone points me in the right direction.

problem 3: duplicate PATH entries making it harder to debug

If you edit your path and start a new shell by running bash (or zsh, or fish), you’ll often end up with duplicate PATH entries, because the shell keeps adding new things to your PATH every time you start your shell.

Personally I don’t think I’ve run into a situation where this kind of duplication breaks anything, but the duplicates can make it harder to debug what’s going on with your PATH if you’re trying to understand its contents.

Some ways you could deal with this:

  1. If you’re debugging your PATH, open a new terminal to do it in so you get a “fresh” state. This should avoid the duplication.
  2. Deduplicate your PATH at the end of your shell’s config (for example in zsh apparently you can do this with typeset -U path)
  3. Check that the directory isn’t already in your PATH when adding it (for example in fish I believe you can do this with fish_add_path --path /some/directory)

How to deduplicate your PATH is shell-specific and there isn’t always a built in way to do it so you’ll need to look up how to accomplish it in your shell.
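
That said, here's one rough sketch for bash or zsh config files: it splits PATH on colons, keeps the first occurrence of each directory, and joins it back together. Put it after all your PATH additions (it assumes awk and paste are available).

PATH=$(printf '%s' "$PATH" | tr ':' '\n' | awk '!seen[$0]++' | paste -sd: -)
export PATH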

problem 4: losing your history after updating your PATH

Here’s a situation that’s easy to get into in bash or zsh:

  1. Run a command (it fails)
  2. Update your PATH
  3. Run bash to reload your config
  4. Press the up arrow a couple of times to rerun the failed command (or open a new terminal)
  5. The failed command isn’t in your history! Why not?

This happens because in bash, by default, history is not saved until you exit the shell.

Some options for fixing this:

  • Instead of running bash to reload your config, run source ~/.bashrc (or source ~/.zshrc in zsh). This will reload the config inside your current session.
  • Configure your shell to continuously save your history instead of only saving the history when the shell exits. (How to do this depends on whether you’re using bash or zsh, the history options in zsh are a bit complicated and I’m not exactly sure what the best way is)

a note on source

When you install cargo (Rust’s installer) for the first time, it gives you these instructions for how to set up your PATH, which don’t mention a specific directory at all.

This is usually done by running one of the following (note the leading DOT):

. "$HOME/.cargo/env"        	# For sh/bash/zsh/ash/dash/pdksh
source "$HOME/.cargo/env.fish"  # For fish

The idea is that you add that line to your shell’s config, and their script automatically sets up your PATH (and potentially other things) for you.

This is pretty common (for example Homebrew suggests you eval "$(brew shellenv)"), and there are two ways to approach this:

  1. Just do what the tool suggests (like adding . "$HOME/.cargo/env" to your shell’s config)
  2. Figure out which directories the script they’re telling you to run would add to your PATH, and then add those manually. Here’s how I’d do that:
    • Run . "$HOME/.cargo/env" in my shell (or the fish version if using fish)
    • Run echo "$PATH" | tr ':' '\n' | grep cargo to figure out which directories it added
    • See that it says /Users/bork/.cargo/bin and shorten that to ~/.cargo/bin
    • Add the directory ~/.cargo/bin to PATH (with the directions in this post)

I don’t think there’s anything wrong with doing what the tool suggests (it might be the “best way”!), but personally I usually use the second approach because I prefer knowing exactly what configuration I’m changing.

a note on fish_add_path

fish has a handy function called fish_add_path that you can run to add a directory to your PATH like this:

fish_add_path /some/directory

This is cool (it’s such a simple command!) but I’ve stopped using it for a couple of reasons:

  1. Sometimes fish_add_path will update the PATH for every session in the future (with a “universal variable”) and sometimes it will update the PATH just for the current session, and it’s hard for me to tell which one it will do. In theory the docs explain this but I could not understand them.
  2. If you ever need to remove the directory from your PATH a few weeks or months later because maybe you made a mistake, it’s kind of hard to do (there are instructions in the comments of this github issue though).

that’s all

Hopefully this will help some people. Let me know (on Mastodon or Bluesky) if there are other major gotchas that have tripped you up when adding a directory to your PATH, or if you have questions about this post!

2025-02-05T16:57:00+00:00 Fullscreen Open in Tab
Some terminal frustrations

A few weeks ago I ran a terminal survey (you can read the results here) and at the end I asked:

What’s the most frustrating thing about using the terminal for you?

1600 people answered, and I decided to spend a few days categorizing all the responses. Along the way I learned that classifying qualitative data is not easy but I gave it my best shot. I ended up building a custom tool to make it faster to categorize everything.

As with all of my surveys the methodology isn’t particularly scientific. I just posted the survey to Mastodon and Twitter, ran it for a couple of days, and got answers from whoever happened to see it and felt like responding.

Here are the top categories of frustrations!

I think it’s worth keeping in mind while reading these comments that

  • 40% of people answering this survey have been using the terminal for 21+ years
  • 95% of people answering the survey have been using the terminal for at least 4 years

These comments aren’t coming from total beginners.

With that caveat, here are the categories! The number in brackets is the number of people with that frustration. I’m mostly writing this up for myself because I’m trying to write a zine about the terminal and I wanted to get a sense for what people are having trouble with.

remembering syntax (115)

People talked about struggles remembering:

  • the syntax for CLI tools like awk, jq, sed, etc
  • the syntax for redirects
  • keyboard shortcuts for tmux, text editing, etc

One example comment:

There are just so many little “trivia” details to remember for full functionality. Even after all these years I’ll sometimes forget where it’s 2 or 1 for stderr, or forget which is which for > and >>.

switching terminals is hard (91)

People talked about struggling with switching systems (for example home/work computer or when SSHing) and running into:

  • OS differences in keyboard shortcuts (like Linux vs Mac)
  • systems which don’t have their preferred text editor (“no vim” or “only vim”)
  • different versions of the same command (like Mac OS grep vs GNU grep)
  • no tab completion
  • a shell they aren’t used to (“the subtle differences between zsh and bash”)

as well as differences inside the same system like pagers being not consistent with each other (git diff pagers, other pagers).

One example comment:

I got used to fish and vi mode which are not available when I ssh into servers, containers.

color (85)

Lots of problems with color, like:

  • programs setting colors that are unreadable with a light background color
  • finding a colorscheme they like (and getting it to work consistently across different apps)
  • color not working inside several layers of SSH/tmux/etc
  • not liking the defaults
  • not wanting color at all and struggling to turn it off

This comment felt relatable to me:

Getting my terminal theme configured in a reasonable way between the terminal emulator and fish (I did this years ago and remember it being tedious and fiddly and now feel like I’m locked into my current theme because it works and I dread touching any of that configuration ever again).

keyboard shortcuts (84)

Half of the comments on keyboard shortcuts were about how on Linux/Windows, the keyboard shortcut to copy/paste in the terminal is different from in the rest of the OS.

Some other issues with keyboard shortcuts other than copy/paste:

  • using Ctrl-W in a browser-based terminal and closing the window
  • the terminal only supports a limited set of keyboard shortcuts (no Ctrl-Shift-, no Super, no Hyper, lots of ctrl- shortcuts aren’t possible like Ctrl-,)
  • the OS stopping you from using a terminal keyboard shortcut (like by default Mac OS uses Ctrl+left arrow for something else)
  • issues using emacs in the terminal
  • backspace not working (2)

other copy and paste issues (75)

Aside from “the keyboard shortcut for copy and paste is different”, there were a lot of OTHER issues with copy and paste, like:

  • copying over SSH
  • how tmux and the terminal emulator both do copy/paste in different ways
  • dealing with many different clipboards (system clipboard, vim clipboard, the “middle click” clipboard on Linux, tmux’s clipboard, etc) and potentially synchronizing them
  • random spaces added when copying from the terminal
  • pasting multiline commands which automatically get run in a terrifying way
  • wanting a way to copy text without using the mouse

discoverability (55)

There were lots of comments about this, which all came down to the same basic complaint – it’s hard to discover useful tools or features! This comment kind of summed it all up:

How difficult it is to learn independently. Most of what I know is an assorted collection of stuff I’ve been told by random people over the years.

steep learning curve (44)

A lot of comments about it generally having a steep learning curve. A couple of example comments:

After 15 years of using it, I’m not much faster than using it than I was 5 or maybe even 10 years ago.

and

That I know I could make my life easier by learning more about the shortcuts and commands and configuring the terminal but I don’t spend the time because it feels overwhelming.

history (42)

Some issues with shell history:

  • history not being shared between terminal tabs (16)
  • limits that are too short (4)
  • history not being restored when terminal tabs are restored
  • losing history because the terminal crashed
  • not knowing how to search history

One example comment:

It wasted a lot of time until I figured it out and still annoys me that “history” on zsh has such a small buffer; I have to type “history 0” to get any useful length of history.

bad documentation (37)

People talked about:

  • documentation being generally opaque
  • lack of examples in man pages
  • programs which don’t have man pages

Here’s a representative comment:

Finding good examples and docs. Man pages often not enough, have to wade through stack overflow

scrollback (36)

A few issues with scrollback:

  • programs printing out too much data making you lose scrollback history
  • resizing the terminal messes up the scrollback
  • lack of timestamps
  • GUI programs that you start in the background printing stuff out that gets in the way of other programs’ outputs

One example comment:

When resizing the terminal (in particular: making it narrower) leads to broken rewrapping of the scrollback content because the commands formatted their output based on the terminal window width.

“it feels outdated” (33)

Lots of comments about how the terminal feels hampered by legacy decisions and how users often end up needing to learn implementation details that feel very esoteric. One example comment:

Most of the legacy cruft, it would be great to have a green field implementation of the CLI interface.

shell scripting (32)

Lots of complaints about POSIX shell scripting. There’s a general feeling that shell scripting is difficult but also that switching to a different less standard scripting language (fish, nushell, etc) brings its own problems.

Shell scripting. My tolerance to ditch a shell script and go to a scripting language is pretty low. It’s just too messy and powerful. Screwing up can be costly so I don’t even bother.

more issues

Some more issues that were mentioned at least 10 times:

  • (31) inconsistent command line arguments: is it -h or help or --help?
  • (24) keeping dotfiles in sync across different systems
  • (23) performance (e.g. “my shell takes too long to start”)
  • (20) window management (potentially with some combination of tmux tabs, terminal tabs, and multiple terminal windows. Where did that shell session go?)
  • (17) generally feeling scared/uneasy (“The debilitating fear that I’m going to do some mysterious Bad Thing with a command and I will have absolutely no idea how to fix or undo it or even really figure out what happened”)
  • (16) terminfo issues (“Having to learn about terminfo if/when I try a new terminal emulator and ssh elsewhere.”)
  • (16) lack of image support (sixel etc)
  • (15) SSH issues (like having to start over when you lose the SSH connection)
  • (15) various tmux/screen issues (for example lack of integration between tmux and the terminal emulator)
  • (15) typos & slow typing
  • (13) the terminal getting messed up for various reasons (pressing Ctrl-S, cating a binary, etc)
  • (12) quoting/escaping in the shell
  • (11) various Windows/PowerShell issues

n/a (122)

There were also 122 answers to the effect of “nothing really” or “only that I can’t do EVERYTHING in the terminal”

One example comment:

Think I’ve found work arounds for most/all frustrations

that’s all!

I’m not going to make a lot of commentary on these results, but here are a couple of categories that feel related to me:

  • remembering syntax & history (often the thing you need to remember is something you’ve run before!)
  • discoverability & the learning curve (the lack of discoverability is definitely a big part of what makes it hard to learn)
  • “switching systems is hard” & “it feels outdated” (tools that haven’t really changed in 30 or 40 years have many problems but they do tend to be always there no matter what system you’re on, which is very useful and makes them hard to stop using)

Trying to categorize all these results in a reasonable way really gave me an appreciation for social science researchers’ skills.

2025-01-11T09:46:01+00:00 Fullscreen Open in Tab
What's involved in getting a "modern" terminal setup?

Hello! Recently I ran a terminal survey and I asked people what frustrated them. One person commented:

There are so many pieces to having a modern terminal experience. I wish it all came out of the box.

My immediate reaction was “oh, getting a modern terminal experience isn’t that hard, you just need to….”, but the more I thought about it, the longer the “you just need to…” list got, and I kept thinking about more and more caveats.

So I thought I would write down some notes about what it means to me personally to have a “modern” terminal experience and what I think can make it hard for people to get there.

what is a “modern terminal experience”?

Here are a few things that are important to me, with which part of the system is responsible for them:

  • multiline support for copy and paste: if you paste 3 commands in your shell, it should not immediately run them all! That’s scary! (shell, terminal emulator)
  • infinite shell history: if I run a command in my shell, it should be saved forever, not deleted after 500 history entries or whatever. Also I want commands to be saved to the history immediately when I run them, not only when I exit the shell session (shell)
  • a useful prompt: I can’t live without having my current directory and current git branch in my prompt (shell)
  • 24-bit colour: this is important to me because I find it MUCH easier to theme neovim with 24-bit colour support than in a terminal with only 256 colours (terminal emulator)
  • clipboard integration between vim and my operating system so that when I copy in Firefox, I can just press p in vim to paste (text editor, maybe the OS/terminal emulator too)
  • good autocomplete: for example commands like git should have command-specific autocomplete (shell)
  • having colours in ls (shell config)
  • a terminal theme I like: I spend a lot of time in my terminal, I want it to look nice and I want its theme to match my terminal editor’s theme. (terminal emulator, text editor)
  • automatic terminal fixing: If a program prints out some weird escape codes that mess up my terminal, I want that to automatically get reset so that my terminal doesn’t get messed up (shell)
  • keybindings: I want Ctrl+left arrow to work (shell or application)
  • being able to use the scroll wheel in programs like less: (terminal emulator and applications)

There are a million other terminal conveniences out there and different people value different things, but those are the ones that I would be really unhappy without.

how I achieve a “modern experience”

My basic approach is:

  1. use the fish shell. Mostly don’t configure it, except to:
    • set the EDITOR environment variable to my favourite terminal editor
    • alias ls to ls --color=auto
  2. use any terminal emulator with 24-bit colour support. In the past I’ve used GNOME Terminal, Terminator, and iTerm, but I’m not picky about this. I don’t really configure it other than to choose a font.
  3. use neovim, with a configuration that I’ve been very slowly building over the last 9 years or so (the last time I deleted my vim config and started from scratch was 9 years ago)
  4. use the base16 framework to theme everything
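
For what it's worth, the fish part of that config is tiny; something like this in ~/.config/fish/config.fish (substitute your own editor, and note that --color=auto is GNU ls syntax):

set -gx EDITOR nvim           # set the EDITOR environment variable
alias ls 'ls --color=auto'    # coloured ls output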

A few things that affect my approach:

  • I don’t spend a lot of time SSHed into other machines
  • I’d rather use the mouse a little than come up with keyboard-based ways to do everything
  • I work on a lot of small projects, not one big project

some “out of the box” options for a “modern” experience

What if you want a nice experience, but don’t want to spend a lot of time on configuration? Figuring out how to configure vim in a way that I was satisfied with really did take me like ten years, which is a long time!

My best ideas for how to get a reasonable terminal experience with minimal config are:

  • shell: either fish or zsh with oh-my-zsh
  • terminal emulator: almost anything with 24-bit colour support, for example all of these are popular:
    • linux: GNOME Terminal, Konsole, Terminator, xfce4-terminal
    • mac: iTerm (Terminal.app doesn’t have 24-bit colour support)
    • cross-platform: kitty, alacritty, wezterm, or ghostty
  • shell config:
    • set the EDITOR environment variable to your favourite terminal text editor
    • maybe alias ls to ls --color=auto
  • text editor: this is a tough one, maybe micro or helix? I haven’t used either of them seriously but they both seem like very cool projects and I think it’s amazing that you can just use all the usual GUI editor commands (Ctrl-C to copy, Ctrl-V to paste, Ctrl-A to select all) in micro and they do what you’d expect. I would probably try switching to helix except that retraining my vim muscle memory seems way too hard. Also helix doesn’t have a GUI or plugin system yet.

Personally I wouldn’t use xterm, rxvt, or Terminal.app as a terminal emulator, because I’ve found in the past that they’re missing core features (like 24-bit colour in Terminal.app’s case) that make the terminal harder to use for me.

I don’t want to pretend that getting a “modern” terminal experience is easier than it is though – I think there are two issues that make it hard. Let’s talk about them!

issue 1 with getting to a “modern” experience: the shell

bash and zsh are by far the two most popular shells, and neither of them provide a default experience that I would be happy using out of the box, for example:

  • you need to customize your prompt
  • they don’t come with git completions by default, you have to set them up
  • by default, bash only stores 500 (!) lines of history and (at least on Mac OS) zsh is only configured to store 2000 lines, which is still not a lot
  • I find bash’s tab completion very frustrating, if there’s more than one match then you can’t tab through them
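
To give a sense of what "not complicated but adds up" looks like, here's roughly what fixing just the history defaults involves (the values are arbitrary, and the zsh history options in particular have lots of variations):

# bash (~/.bashrc)
HISTSIZE=100000
HISTFILESIZE=100000

# zsh (~/.zshrc)
HISTSIZE=100000
SAVEHIST=100000
setopt INC_APPEND_HISTORY   # write history as commands run, not just on exit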

And even though I love fish, the fact that it isn’t POSIX does make it hard for a lot of folks to make the switch.

Of course it’s totally possible to learn how to customize your prompt in bash or whatever, and it doesn’t even need to be that complicated (in bash I’d probably start with something like export PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ ', or maybe use starship). But each of these “not complicated” things really does add up and it’s especially tough if you need to keep your config in sync across several systems.

An extremely popular solution to getting a “modern” shell experience is oh-my-zsh. It seems like a great project and I know a lot of people use it very happily, but I’ve struggled with configuration systems like that in the past – it looks like right now the base oh-my-zsh adds about 3000 lines of config, and often I find that having an extra configuration system makes it harder to debug what’s happening when things go wrong. I personally have a tendency to use the system to add a lot of extra plugins, make my system slow, get frustrated that it’s slow, and then delete it completely and write a new config from scratch.

issue 2 with getting to a “modern” experience: the text editor

In the terminal survey I ran recently, the most popular terminal text editors by far were vim, emacs, and nano.

I think the main options for terminal text editors are:

  • use vim or emacs and configure it to your liking, you can probably have any feature you want if you put in the work
  • use nano and accept that you’re going to have a pretty limited experience (for example I don’t think you can select text with the mouse and then “cut” it in nano)
  • use micro or helix which seem to offer a pretty good out-of-the-box experience, potentially occasionally run into issues with using a less mainstream text editor
  • just avoid using a terminal text editor as much as possible, maybe use VSCode, use VSCode’s terminal for all your terminal needs, and mostly never edit files in the terminal. Or I know a lot of people use code as their EDITOR in the terminal.

issue 3: individual applications

The last issue is that sometimes individual programs that I use are kind of annoying. For example on my Mac OS machine, /usr/bin/sqlite3 doesn’t support the Ctrl+Left Arrow keyboard shortcut. Fixing this to get a reasonable terminal experience in SQLite was a little complicated, I had to:

  • realize why this is happening (Mac OS won’t ship GNU tools, and “Ctrl-Left arrow” support comes from GNU readline)
  • find a workaround (install sqlite from homebrew, which does have readline support)
  • adjust my environment (put Homebrew’s sqlite3 in my PATH)

I find that debugging application-specific issues like this is really not easy and often it doesn’t feel “worth it” – often I’ll end up just dealing with various minor inconveniences because I don’t want to spend hours investigating them. The only reason I was even able to figure this one out at all is that I’ve been spending a huge amount of time thinking about the terminal recently.

A big part of having a “modern” experience using terminal programs is just using newer terminal programs, for example I can’t be bothered to learn a keyboard shortcut to sort the columns in top, but in htop I can just click on a column heading with my mouse to sort it. So I use htop instead! But discovering new more “modern” command line tools isn’t easy (though I made a list here), finding ones that I actually like using in practice takes time, and if you’re SSHed into another machine, they won’t always be there.

everything affects everything else

Something I find tricky about configuring my terminal to make everything “nice” is that changing one seemingly small thing about my workflow can really affect everything else. For example right now I don’t use tmux. But if I needed to use tmux again (for example because I was doing a lot of work SSHed into another machine), I’d need to think about a few things, like:

  • if I wanted tmux’s copy to synchronize with my system clipboard over SSH, I’d need to make sure that my terminal emulator has OSC 52 support
  • if I wanted to use iTerm’s tmux integration (which makes tmux tabs into iTerm tabs), I’d need to change how I configure colours – right now I set them with a shell script that I run when my shell starts, but that means the colours get lost when restoring a tmux session.

and probably more things I haven’t thought of. “Using tmux means that I have to change how I manage my colours” sounds unlikely, but that really did happen to me and I decided “well, I don’t want to change how I manage colours right now, so I guess I’m not using that feature!”.

It’s also hard to remember which features I’m relying on – for example maybe my current terminal does have OSC 52 support and because copying from tmux over SSH has always Just Worked I don’t even realize that that’s something I need, and then it mysteriously stops working when I switch terminals.

change things slowly

Personally even though I think my setup is not that complicated, it’s taken me 20 years to get to this point! Because terminal config changes are so likely to have unexpected and hard-to-understand consequences, I’ve found that if I change a lot of terminal configuration all at once it makes it much harder to understand what went wrong if there’s a problem, which can be really disorienting.

So I usually prefer to make pretty small changes, and accept that changes might take me a REALLY long time to get used to. For example I switched from using ls to eza a year or two ago and while I like it (because eza -l prints human-readable file sizes by default) I’m still not quite sure about it. But also sometimes it’s worth it to make a big change, like I made the switch to fish (from bash) 10 years ago and I’m very happy I did.

getting a “modern” terminal is not that easy

Trying to explain how “easy” it is to configure your terminal really just made me think that it’s kind of hard and that I still sometimes get confused.

I’ve found that there’s never one perfect way to configure things in the terminal that will be compatible with every single other thing. I just need to try stuff, figure out some kind of locally stable state that works for me, and accept that if I start using a new tool it might disrupt the system and I might need to rethink things.

2024-12-12T09:28:22+00:00 Fullscreen Open in Tab
"Rules" that terminal programs follow

Recently I’ve been thinking about how everything that happens in the terminal is some combination of:

  1. Your operating system’s job
  2. Your shell’s job
  3. Your terminal emulator’s job
  4. The job of whatever program you happen to be running (like top or vim or cat)

The first three (your operating system, shell, and terminal emulator) are all kind of known quantities – if you’re using bash in GNOME Terminal on Linux, you can more or less reason about how all of those things interact, and some of their behaviour is standardized by POSIX.

But the fourth one (“whatever program you happen to be running”) feels like it could do ANYTHING. How are you supposed to know how a program is going to behave?

This post is kind of long so here’s a quick table of contents:

programs behave surprisingly consistently

As far as I know, there are no real standards for how programs in the terminal should behave – the closest things I know of are:

  • POSIX, which mostly dictates how your terminal emulator / OS / shell should work together. I think it does specify a few things about how core utilities like cp should work but AFAIK it doesn’t have anything to say about how for example htop should behave.
  • these command line interface guidelines

But even though there are no standards, in my experience programs in the terminal behave in a pretty consistent way. So I wanted to write down a list of “rules” that in my experience programs mostly follow.

these are meant to be descriptive, not prescriptive

My goal here isn’t to convince authors of terminal programs that they should follow any of these rules. There are lots of exceptions to these and often there’s a good reason for those exceptions.

But it’s very useful for me to know what behaviour to expect from a random new terminal program that I’m using. Instead of “uh, programs could do literally anything”, it’s “ok, here are the basic rules I expect, and then I can keep a short mental list of exceptions”.

So I’m just writing down what I’ve observed about how programs behave in my 20 years of using the terminal, why I think they behave that way, and some examples of cases where that rule is “broken”.

it’s not always obvious which “rules” are the program’s responsibility to implement

There are a bunch of common conventions that I think are pretty clearly the program’s responsibility to implement, like:

  • config files should go in ~/.BLAHrc or ~/.config/BLAH/FILE or /etc/BLAH/ or something
  • --help should print help text
  • programs should print “regular” output to stdout and errors to stderr

But in this post I’m going to focus on things that it’s not 100% obvious are the program’s responsibility. For example it feels to me like a “law of nature” that pressing Ctrl-D should quit a REPL, but programs often need to explicitly implement support for it – even though cat doesn’t need to implement Ctrl-D support, ipython does. (more about that in “rule 3” below)

Understanding which things are the program’s responsibility makes it much less surprising when different programs’ implementations are slightly different.

rule 1: noninteractive programs should quit when you press Ctrl-C

The main reason for this rule is that noninteractive programs will quit by default on Ctrl-C if they don’t set up a SIGINT signal handler, so this is kind of a “you should act like the default” rule.

Something that trips a lot of people up is that this doesn’t apply to interactive programs like python3 or bc or less. This is because in an interactive program, Ctrl-C has a different job – if the program is running an operation (like for example a search in less or some Python code in python3), then Ctrl-C will interrupt that operation but not stop the program.

As an example of how this works in an interactive program: here’s the code in prompt-toolkit (the library that IPython uses for handling input) that aborts a search when you press Ctrl-C.
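Here’s a tiny self-contained sketch of the same idea (just an illustration, not prompt-toolkit’s actual code): an interactive loop that catches Ctrl-C so that it interrupts the current operation instead of quitting the program.

import time

while True:
    try:
        command = input("> ")      # the interactive prompt
        print("pretending to run", command)
        time.sleep(10)             # some slow "operation"
    except KeyboardInterrupt:
        # Ctrl-C ends up here: it aborts whatever was happening,
        # but the program itself keeps running
        print("\ninterrupted")
    except EOFError:
        # Ctrl-D on an empty line quits (that's "rule 3" below)
        break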

rule 2: TUIs should quit when you press q

TUI programs (like less or htop) will usually quit when you press q.

This rule doesn’t apply to any program where pressing q to quit wouldn’t make sense, like tmux or text editors.

rule 3: REPLs should quit when you press Ctrl-D on an empty line

REPLs (like python3 or ed) will usually quit when you press Ctrl-D on an empty line. This rule is similar to the Ctrl-C rule – the reason for this is that by default if you’re running a program (like cat) in “cooked mode”, then the operating system will return an EOF when you press Ctrl-D on an empty line.
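Here’s a sketch of what “the operating system will return an EOF” looks like from the program’s side: this little cat-like Python loop doesn’t handle Ctrl-D at all, but it still exits when you press Ctrl-D on an empty line, because the read just returns nothing.

import sys

while True:
    line = sys.stdin.readline()
    if line == "":   # Ctrl-D on an empty line: the read returns "" (end of file)
        break
    sys.stdout.write(line)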

Most of the REPLs I use (sqlite3, python3, fish, bash, etc) don’t actually use cooked mode, but they all implement this keyboard shortcut anyway to mimic the default behaviour.

For example, here’s the code in prompt-toolkit that quits when you press Ctrl-D, and here’s the same code in readline.

I actually thought that this one was a “Law of Terminal Physics” until very recently because I’ve basically never seen it broken, but you can see that it’s just something that each individual input library has to implement in the links above.

Someone pointed out that the Erlang REPL does not quit when you press Ctrl-D, so I guess not every REPL follows this “rule”.

rule 4: don’t use more than 16 colours

Terminal programs rarely use colours other than the base 16 ANSI colours. This is because if you specify colours with a hex code, it’s very likely to clash with some users’ background colour. For example if I print out some text as #EEEEEE, it would be almost invisible on a white background, though it would look fine on a dark background.

But if you stick to the default 16 base colours, you have a much better chance that the user has configured those colours in their terminal emulator so that they work reasonably well with their background color. Another reason to stick to the default base 16 colours is that it makes fewer assumptions about what colours the terminal emulator supports.
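As a quick illustration (just a sketch using the standard escape codes), here’s the difference between asking for an ANSI base colour and asking for an exact colour like #EEEEEE:

# "31" is ANSI colour 1 ("red") – the user's terminal theme decides what it actually looks like
print("\x1b[31m" + "some red-ish text" + "\x1b[0m")

# "38;2;238;238;238" is literally #EEEEEE, which will be nearly invisible on a white background
print("\x1b[38;2;238;238;238m" + "some #EEEEEE text" + "\x1b[0m")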

The only programs I usually see breaking this “rule” are text editors, for example Helix by default will use a purple background which is not a default ANSI colour. It seems fine for Helix to break this rule since Helix isn’t a “core” program and I assume any Helix user who doesn’t like that colorscheme will just change the theme.

rule 5: vaguely support readline keybindings

Almost every program I use supports readline keybindings if it would make sense to do so. For example, here are a bunch of different programs and a link to where they define Ctrl-E to go to the end of the line:

None of those programs actually uses readline directly, they just sort of mimic emacs/readline keybindings. They don’t always mimic them exactly: for example atuin seems to use Ctrl-A as a prefix, so Ctrl-A doesn’t go to the beginning of the line.

Also all of these programs seem to implement their own internal cut and paste buffers so you can delete a line with Ctrl-U and then paste it with Ctrl-Y.

The exceptions to this are:

  • some programs (like git, cat, and nc) don’t have any line editing support at all (except for backspace, Ctrl-W, and Ctrl-U)
  • as usual text editors are an exception, every text editor has its own approach to editing text

I wrote more about this “what keybindings does a program support?” question in entering text in the terminal is complicated.

rule 5.1: Ctrl-W should delete the last word

I’ve never seen a program (other than a text editor) where Ctrl-W doesn’t delete the last word. This is similar to the Ctrl-C rule – by default if a program is in “cooked mode”, the OS will delete the last word if you press Ctrl-W, and delete the whole line if you press Ctrl-U. So usually programs will imitate that behaviour.

I can’t think of any exceptions to this other than text editors but if there are I’d love to hear about them!

rule 6: disable colours when writing to a pipe

Most programs will disable colours when writing to a pipe. For example:

  • rg blah will highlight all occurrences of blah in the output, but if the output is to a pipe or a file, it’ll turn off the highlighting.
  • ls --color=auto will use colour when writing to a terminal, but not when writing to a pipe

Both of those programs will also format their output differently when writing to the terminal: ls will organize files into columns, and ripgrep will group matches with headings.
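Here’s roughly what that decision looks like in code (a sketch, not ls’s or ripgrep’s actual implementation):

import sys

def maybe_colour(text):
    # only add colour codes if stdout is actually a terminal;
    # if the output is going to a pipe or a file, print plain text
    if sys.stdout.isatty():
        return "\x1b[31m" + text + "\x1b[0m"
    return text

print(maybe_colour("a match!"))

If you run that directly you get red text, and if you pipe it into cat you get plain text.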

If you want to force the program to use colour (for example because you want to look at the colour), you can use unbuffer to force the program’s output to be a tty like this:

unbuffer rg blah |  less -R

I’m sure that there are some programs that “break” this rule but I can’t think of any examples right now. Some programs also have a --color flag that you can use to force colour to be on – in the example above you could do rg --color=always blah | less -R instead.

rule 7: - means stdin/stdout

Usually if you pass - to a program instead of a filename, it’ll read from stdin or write to stdout (whichever is appropriate). For example, if you want to format the Python code that’s on your clipboard with black and then copy it, you could run:

pbpaste | black - | pbcopy

(pbpaste is a Mac program, you can do something similar on Linux with xclip)

My impression is that most programs implement this if it would make sense and I can’t think of any exceptions right now, but I’m sure there are many exceptions.
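Implementing this is usually pretty straightforward, something like this sketch:

import sys

def open_input(filename):
    # "-" is the convention for "read from stdin instead of opening a file"
    if filename == "-":
        return sys.stdin
    return open(filename)

for line in open_input(sys.argv[1]):
    sys.stdout.write(line.upper())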

these “rules” take a long time to learn

These rules took me a long time to learn because I had to:

  1. learn that the rule applied anywhere at all ("Ctrl-C will exit programs")
  2. notice some exceptions (“okay, Ctrl-C will exit find but not less”)
  3. subconsciously figure out what the pattern is ("Ctrl-C will generally quit noninteractive programs, but in interactive programs it might interrupt the current operation instead of quitting the program")
  4. eventually maybe formulate it into an explicit rule that I know

A lot of my understanding of the terminal is honestly still in the “subconscious pattern recognition” stage. The only reason I’ve been taking the time to make things explicit at all is because I’ve been trying to explain how it works to others. Hopefully writing down these “rules” explicitly will make learning some of this stuff a little bit faster for others.

2024-11-29T08:23:31+00:00 Fullscreen Open in Tab
Why pipes sometimes get "stuck": buffering

Here’s a niche terminal problem that has bothered me for years but that I never really understood until a few weeks ago. Let’s say you’re running this command to watch for some specific output in a log file:

tail -f /some/log/file | grep thing1 | grep thing2

If log lines are being added to the file relatively slowly, the result I’d see is… nothing! It doesn’t matter if there were matches in the log file or not, there just wouldn’t be any output.

I internalized this as “uh, I guess pipes just get stuck sometimes and don’t show me the output, that’s weird”, and I’d handle it by just running grep thing1 /some/log/file | grep thing2 instead, which would work.

So as I’ve been doing a terminal deep dive over the last few months I was really excited to finally learn exactly why this happens.

why this happens: buffering

The reason why “pipes get stuck” sometimes is that it’s VERY common for programs to buffer their output before writing it to a pipe or file. So the pipe is working fine, the problem is that the program never even wrote the data to the pipe!

This is for performance reasons: writing all output immediately as soon as you can uses more system calls, so it’s more efficient to save up data until you have 8KB or so of data to write (or until the program exits) and THEN write it to the pipe.

In this example:

tail -f /some/log/file | grep thing1 | grep thing2

the problem is that grep thing1 is saving up all of its matches until it has 8KB of data to write, which might literally never happen.

programs don’t buffer when writing to a terminal

Part of why I found this so disorienting is that tail -f file | grep thing will work totally fine, but then when you add the second grep, it stops working!! The reason for this is that the way grep handles buffering depends on whether it’s writing to a terminal or not.

Here’s how grep (and many other programs) decides to buffer its output:

  • Check if stdout is a terminal or not using the isatty function
    • If it’s a terminal, use line buffering (print every line immediately as soon as you have it)
    • Otherwise, use “block buffering” – only print data if you have at least 8KB or so of data to print

So if grep is writing directly to your terminal then you’ll see the line as soon as it’s printed, but if it’s writing to a pipe, you won’t.

Of course the buffer size isn’t always 8KB for every program, it depends on the implementation. For grep the buffering is handled by libc, and libc’s buffer size is defined in the BUFSIZ variable. Here’s where that’s defined in glibc.

(as an aside: “programs do not use 8KB output buffers when writing to a terminal” isn’t, like, a law of terminal physics, a program COULD use an 8KB buffer when writing output to a terminal if it wanted, it would just be extremely weird if it did that, I can’t think of any program that behaves that way)
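If you want to see this for yourself, here’s a tiny script (I’ll call it slow.py, just for illustration) that prints one line per second:

import time

for i in range(5):
    print("line", i)   # if stdout is a pipe, this just goes into an ~8KB buffer
    time.sleep(1)

If you run python3 slow.py you’ll see one line per second, but if you run python3 slow.py | cat you’ll see nothing for 5 seconds and then everything at once (and python3 -u slow.py | cat turns the buffering off again).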

commands that buffer & commands that don’t

One annoying thing about this buffering behaviour is that you kind of need to remember which commands buffer their output when writing to a pipe.

Some commands that don’t buffer their output:

  • tail
  • cat
  • tee

I think almost everything else will buffer output, especially if it’s a command where you’re likely to be using it for batch processing. Here’s a list of some common commands that buffer their output when writing to a pipe, along with the flag that disables block buffering.

  • grep (--line-buffered)
  • sed (-u)
  • awk (there’s a fflush() function)
  • tcpdump (-l)
  • jq (-u)
  • tr (-u)
  • cut (can’t disable buffering)

Those are all the ones I can think of. Lots of other unix commands (like sort) may or may not buffer their output, but it doesn’t really matter, because sort can’t do anything until it finishes receiving its input anyway.

Also I did my best to test both the Mac OS and GNU versions of these but there are a lot of variations and I might have made some mistakes.

programming languages where the default “print” statement buffers

Also, here are a few programming languages where the default print statement will buffer output when writing to a pipe, and some ways to disable buffering if you want:

  • C (disable with setvbuf)
  • Python (disable with python -u, or PYTHONUNBUFFERED=1, or sys.stdout.reconfigure(line_buffering=True), or print(x, flush=True))
  • Ruby (disable with STDOUT.sync = true)
  • Perl (disable with $| = 1)

I assume that these languages are designed this way so that the default print function will be fast when you’re doing batch processing.

Also whether output is buffered or not might depend on how you print, for example in C++ cout << "hello\n" buffers when writing to a pipe but cout << "hello" << endl will flush its output.

when you press Ctrl-C on a pipe, the contents of the buffer are lost

Let’s say you’re running this command as a hacky way to watch for DNS requests to example.com, and you forgot to pass -l to tcpdump:

sudo tcpdump -ni any port 53 | grep example.com

When you press Ctrl-C, what happens? In a magical perfect world, what I would want to happen is for tcpdump to flush its buffer, grep would search for example.com, and I would see all the output I missed.

But in the real world, what happens is that all the programs get killed and the output in tcpdump’s buffer is lost.

I think this problem is probably unavoidable – I spent a little time with strace to see how this works and grep receives the SIGINT before tcpdump anyway so even if tcpdump tried to flush its buffer grep would already be dead.

After a little more investigation, there is a workaround: if you find tcpdump’s PID and kill -TERM $PID, then tcpdump will flush the buffer so you can see the output. That’s kind of a pain but I tested it and it seems to work.

redirecting to a file also buffers

It’s not just pipes, this will also buffer:

sudo tcpdump -ni any port 53 > output.txt

Redirecting to a file doesn’t have the same “Ctrl-C will totally destroy the contents of the buffer” problem though – in my experience it usually behaves more like you’d want, where the contents of the buffer get written to the file before the program exits. I’m not 100% sure whether this is something you can always rely on or not.

a bunch of potential ways to avoid buffering

Okay, let’s talk solutions. Let’s say you’ve run this command:

tail -f /some/log/file | grep thing1 | grep thing2

I asked people on Mastodon how they would solve this in practice and there were 5 basic approaches. Here they are:

solution 1: run a program that finishes quickly

Historically my solution to this has been to just avoid the “command writing to pipe slowly” situation completely and instead run a program that will finish quickly like this:

cat /some/log/file | grep thing1 | grep thing2 | tail

This doesn’t do the same thing as the original command but it does mean that you get to avoid thinking about these weird buffering issues.

(you could also do grep thing1 /some/log/file but I often prefer to use an “unnecessary” cat)

solution 2: remember the “line buffer” flag to grep

You could remember that grep has a flag to avoid buffering and pass it like this:

tail -f /some/log/file | grep --line-buffered thing1 | grep thing2

solution 3: use awk

Some people said that if they’re specifically dealing with a multiple greps situation, they’ll rewrite it to use a single awk instead, like this:

tail -f /some/log/file |  awk '/thing1/ && /thing2/'

Or you would write a more complicated grep, like this:

tail -f /some/log/file |  grep -E 'thing1.*thing2'

(awk also buffers, so for this to work you’ll want awk to be the last command in the pipeline)

solution 4: use stdbuf

stdbuf uses LD_PRELOAD to turn off libc’s buffering, and you can use it to turn off output buffering like this:

tail -f /some/log/file | stdbuf -o0 grep thing1 | grep thing2

Like any LD_PRELOAD solution it’s a bit unreliable – it doesn’t work on static binaries, I think it won’t work if the program isn’t using libc’s buffering, and it doesn’t always work on Mac OS. Harry Marr has a really nice How stdbuf works post.

solution 5: use unbuffer

unbuffer program will force the program’s output to be a TTY, which means that it’ll behave the way it normally would on a TTY (less buffering, colour output, etc). You could use it in this example like this:

tail -f /some/log/file | unbuffer grep thing1 | grep thing2

Unlike stdbuf it will always work, though it might have unwanted side effects – for example grep thing1 will also colour its matches.

If you want to install unbuffer, it’s in the expect package.

that’s all the solutions I know about!

It’s a bit hard for me to say which one is “best” – I think personally I’m most likely to use unbuffer because I know it’s always going to work.

If I learn about more solutions I’ll try to add them to this post.

I’m not really sure how often this comes up

I think it’s not very common for me to have a program that slowly trickles data into a pipe like this, normally if I’m using a pipe a bunch of data gets written very quickly, processed by everything in the pipeline, and then everything exits. The only examples I can come up with right now are:

  • tcpdump
  • tail -f
  • watching log files in a different way like with kubectl logs
  • the output of a slow computation

what if there were an environment variable to disable buffering?

I think it would be cool if there were a standard environment variable to turn off buffering, like PYTHONUNBUFFERED in Python. I got this idea from a couple of blog posts by Mark Dominus in 2018. Maybe NO_BUFFER like NO_COLOR?
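Supporting something like that would only take a couple of lines – for example (NO_BUFFER here is completely hypothetical, it’s not a real standard):

import os, sys

# hypothetical NO_BUFFER support: switch stdout to line buffering
# even when it's writing to a pipe
if os.environ.get("NO_BUFFER"):
    sys.stdout.reconfigure(line_buffering=True)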

The design seems tricky to get right; Mark points out that NetBSD has environment variables called STDBUF, STDBUF1, etc which give you a ton of control over buffering, but I imagine most developers don’t want to implement many different environment variables to handle a relatively minor edge case.

I’m also curious about whether there are any programs that just automatically flush their output buffers after some period of time (like 1 second). It feels like it would be nice in theory but I can’t think of any program that does that so I imagine there are some downsides.

stuff I left out

Some things I didn’t talk about in this post since these posts have been getting pretty long recently and seriously does anyone REALLY want to read 3000 words about buffering?

  • the difference between line buffering and having totally unbuffered output
  • how buffering to stderr is different from buffering to stdout
  • this post is only about buffering that happens inside the program, your operating system’s TTY driver also does a little bit of buffering sometimes
  • other reasons you might need to flush your output other than “you’re writing to a pipe”
2024-11-18T09:35:42+00:00 Fullscreen Open in Tab
Importing a frontend Javascript library without a build system

I like writing Javascript without a build system and for the millionth time yesterday I ran into a problem where I needed to figure out how to import a Javascript library in my code without using a build system, and it took FOREVER to figure out how to import it because the library’s setup instructions assume that you’re using a build system.

Luckily at this point I’ve mostly learned how to navigate this situation and either successfully use the library or decide it’s too difficult and switch to a different library, so here’s the guide I wish I had to importing Javascript libraries years ago.

I’m only going to talk about using Javascript libraries on the frontend, and only about how to use them in a no-build-system setup.

In this post I’m going to talk about:

  1. the three main types of Javascript files a library might provide (ES Modules, the “classic” global variable kind, and CommonJS)
  2. how to figure out which types of files a Javascript library includes in its build
  3. ways to import each type of file in your code

the three kinds of Javascript files

There are 3 basic types of Javascript files a library can provide:

  1. the “classic” type of file that defines a global variable. This is the kind of file that you can just <script src> and it’ll Just Work. Great if you can get it but not always available
  2. an ES module (which may or may not depend on other files, we’ll get to that)
  3. a “CommonJS” module. This is for Node, you can’t use it in a browser at all without using a build system.

I’m not sure if there’s a better name for the “classic” type but I’m just going to call it “classic”. Also there’s a type called “AMD” but I’m not sure how relevant it is in 2024.

Now that we know the 3 types of files, let’s talk about how to figure out which of these the library actually provides!

where to find the files: the NPM build

Every Javascript library has a build which it uploads to NPM. You might be thinking (like I did originally) – Julia! The whole POINT is that we’re not using Node to build our library! Why are we talking about NPM?

But if you’re using a link from a CDN like https://cdnjs.cloudflare.com/ajax/libs/Chart.js/4.4.1/chart.umd.min.js, you’re still using the NPM build! All the files on the CDNs originally come from NPM.

Because of this, I sometimes like to npm install the library even if I’m not planning to use Node to build my library at all – I’ll just create a new temp folder, npm install there, and then delete it when I’m done. I like being able to poke around in the files in the NPM build on my filesystem, because then I can be 100% sure that I’m seeing everything that the library is making available in its build and that the CDN isn’t hiding something from me.

So let’s npm install a few libraries and try to figure out what types of Javascript files they provide in their builds!

example library 1: chart.js

First let’s look inside Chart.js, a plotting library.

$ cd /tmp/whatever
$ npm install chart.js
$ cd node_modules/chart.js/dist
$ ls *.*js
chart.cjs  chart.js  chart.umd.js  helpers.cjs  helpers.js

This library seems to have 3 basic options:

option 1: chart.cjs. The .cjs suffix tells me that this is a CommonJS file, for using in Node. This means it’s impossible to use it directly in the browser without some kind of build step.

option 2: chart.js. The .js suffix by itself doesn’t tell us what kind of file it is, but if I open it up, I see import '@kurkle/color'; which is an immediate sign that this is an ES module – the import ... syntax is ES module syntax.

option 3: chart.umd.js. “UMD” stands for “Universal Module Definition”, which I think means that you can use this file either with a basic <script src>, CommonJS, or some third thing called AMD that I don’t understand.

how to use a UMD file

When I was using Chart.js I picked Option 3. I just needed to add this to my code:

<script src="./chart.umd.js"> </script>

and then I could use the library with the global Chart variable. Couldn’t be easier. I just copied chart.umd.js into my Git repository so that I didn’t have to worry about using NPM or the CDNs going down or anything.

the build files aren’t always in the dist directory

A lot of libraries will put their build in the dist directory, but not always! The build files’ location is specified in the library’s package.json.

For example here’s an excerpt from Chart.js’s package.json.

  "jsdelivr": "./dist/chart.umd.js",
  "unpkg": "./dist/chart.umd.js",
  "main": "./dist/chart.cjs",
  "module": "./dist/chart.js",

I think this is saying that if you want to use an ES Module (module) you should use dist/chart.js, but the jsDelivr and unpkg CDNs should use ./dist/chart.umd.js. I guess main is for Node.

chart.js’s package.json also says "type": "module", which according to this documentation tells Node to treat files as ES modules by default. I think it doesn’t tell us specifically which files are ES modules and which ones aren’t but it does tell us that something in there is an ES module.

example library 2: @atcute/oauth-browser-client

@atcute/oauth-browser-client is a library for logging into Bluesky with OAuth in the browser.

Let’s see what kinds of Javascript files it provides in its build!

$ npm install @atcute/oauth-browser-client
$ cd node_modules/@atcute/oauth-browser-client/dist
$ ls *js
constants.js  dpop.js  environment.js  errors.js  index.js  resolvers.js

It seems like the only plausible root file in here is index.js, which looks something like this:

export { configureOAuth } from './environment.js';
export * from './errors.js';
export * from './resolvers.js';

This export syntax means it’s an ES module. That means we can use it in the browser without a build step! Let’s see how to do that.

how to use an ES module with importmaps

Using an ES module isn’t as easy as just adding a <script src="whatever.js">. Instead, if the ES module has dependencies (like @atcute/oauth-browser-client does) the steps are:

  1. Set up an import map in your HTML
  2. Put import statements like import { configureOAuth } from '@atcute/oauth-browser-client'; in your JS code
  3. Include your JS code in your HTML like this: <script type="module" src="YOURSCRIPT.js"></script>

The reason we need an import map instead of just doing something like import { BrowserOAuthClient } from "./oauth-client-browser.js" is that internally the module has more import statements like import {something} from '@atcute/client', and we need to tell the browser where to get the code for @atcute/client and all of its other dependencies.

Here’s what the importmap I used looks like for @atcute/oauth-browser-client:

<script type="importmap">
{
  "imports": {
    "nanoid": "./node_modules/nanoid/bin/dist/index.js",
    "nanoid/non-secure": "./node_modules/nanoid/non-secure/index.js",
    "nanoid/url-alphabet": "./node_modules/nanoid/url-alphabet/dist/index.js",
    "@atcute/oauth-browser-client": "./node_modules/@atcute/oauth-browser-client/dist/index.js",
    "@atcute/client": "./node_modules/@atcute/client/dist/index.js",
    "@atcute/client/utils/did": "./node_modules/@atcute/client/dist/utils/did.js"
  }
}
</script>

Getting these import maps to work is pretty fiddly, I feel like there must be a tool to generate them automatically but I haven’t found one yet. It’s definitely possible to write a script that automatically generates the importmaps using esbuild’s metafile but I haven’t done that and maybe there’s a better way.

I decided to set up importmaps yesterday to get github.com/jvns/bsky-oauth-example to work, so there’s some example code in that repo.

Also someone pointed me to Simon Willison’s download-esm, which will download an ES module and rewrite the imports to point to the JS files directly so that you don’t need importmaps. I haven’t tried it yet but it seems like a great idea.

problems with importmaps: too many files

I did run into some problems with using importmaps in the browser though – it needed to download dozens of Javascript files to load my site, and my webserver in development couldn’t keep up for some reason. I kept seeing files fail to load randomly and then had to reload the page and hope that they would succeed this time.

It wasn’t an issue anymore when I deployed my site to production, so I guess it was a problem with my local dev environment.

Also one slightly annoying thing about ES modules in general is that you need to be running a webserver to use them, I’m sure this is for a good reason but it’s easier when you can just open your index.html file without starting a webserver.

Because of the “too many files” thing I don’t think using ES modules with importmaps in this way is actually that appealing to me, but it’s good to know it’s possible.

how to use an ES module without importmaps

If the ES module doesn’t have dependencies then it’s even easier – you don’t need the importmaps! You can just:

  • put <script type="module" src="YOURCODE.js"></script> in your HTML. The type="module" is important.
  • put import {whatever} from "https://example.com/whatever.js" in YOURCODE.js

alternative: use esbuild

If you don’t want to use importmaps, you can also use a build system like esbuild. I talked about how to do that in Some notes on using esbuild, but this blog post is about ways to avoid build systems completely so I’m not going to talk about that option here. I do still like esbuild though and I think it’s a good option in this case.

what’s the browser support for importmaps?

CanIUse says that importmaps are in “Baseline 2023: newly available across major browsers” so my sense is that in 2024 that’s still maybe a little bit too new? I think I would use importmaps for some fun experimental code that I only wanted like myself and 12 people to use, but if I wanted my code to be more widely usable I’d use esbuild instead.

example library 3: @atproto/oauth-client-browser

Let’s look at one final example library! This is a different Bluesky auth library than @atcute/oauth-browser-client.

$ npm install @atproto/oauth-client-browser
$ cd node_modules/@atproto/oauth-client-browser/dist
$ ls *js
browser-oauth-client.js  browser-oauth-database.js  browser-runtime-implementation.js  errors.js  index.js  indexed-db-store.js  util.js

Again, it seems like the only real candidate file here is index.js. But this is a different situation from the previous example library! Let’s take a look at index.js:

There’s a bunch of stuff like this in index.js:

__exportStar(require("@atproto/oauth-client"), exports);
__exportStar(require("./browser-oauth-client.js"), exports);
__exportStar(require("./errors.js"), exports);
var util_js_1 = require("./util.js");

This require() syntax is CommonJS syntax, which means that we can’t use this file in the browser at all, we need to use some kind of build step, and ESBuild won’t work either.

Also in this library’s package.json it says "type": "commonjs" which is another way to tell it’s CommonJS.

how to use a CommonJS module with esm.sh

Originally I thought it was impossible to use CommonJS modules without learning a build system, but then someone on Bluesky told me about esm.sh! It’s a CDN that will translate anything into an ES Module. skypack.dev does something similar, I’m not sure what the difference is but one person mentioned that if one doesn’t work sometimes they’ll try the other one.

For @atproto/oauth-client-browser using it seems pretty simple, I just need to put this in my HTML:

<script type="module" src="script.js"> </script>

and then put this in script.js.

import { BrowserOAuthClient } from "https://esm.sh/@atproto/oauth-client-browser@0.3.0"

It seems to Just Work, which is cool! Of course this is still sort of using a build system – it’s just that esm.sh is running the build instead of me. My main concerns with this approach are:

  • I don’t really trust CDNs to keep working forever – usually I like to copy dependencies into my repository so that they don’t go away for some reason in the future.
  • I’ve heard of some issues with CDNs having security compromises which scares me.
  • I don’t really understand what esm.sh is doing.

esbuild can also convert CommonJS modules into ES modules

I also learned that you can also use esbuild to convert a CommonJS module into an ES module, though there are some limitations – the import { BrowserOAuthClient } from syntax doesn’t work. Here’s a github issue about that.

I think the esbuild approach is probably more appealing to me than the esm.sh approach because it’s a tool that I already have on my computer so I trust it more. I haven’t experimented with this much yet though.

summary of the three types of files

Here’s a summary of the three types of JS files you might encounter, options for how to use them, and how to identify them.

Unhelpfully a .js or .min.js file extension could be any of these 3 options, so if the file is something.js you need to do more detective work to figure out what you’re dealing with.

  1. “classic” JS files
    • How to use it: <script src="whatever.js"></script>
    • Ways to identify it:
      • The website has a big friendly banner in its setup instructions saying “Use this with a CDN!” or something
      • A .umd.js extension
      • Just try to put it in a <script src=... tag and see if it works
  2. ES Modules
    • Ways to use it:
      • If there are no dependencies, just import {whatever} from "./my-module.js" directly in your code
      • If there are dependencies, create an importmap and import {whatever} from "my-module"
      • Use esbuild or any ES Module bundler
    • Ways to identify it:
      • Look for an import or export statement. (not module.exports = ..., that’s CommonJS)
      • An .mjs extension
      • maybe "type": "module" in package.json (though it’s not clear to me which file exactly this refers to)
  3. CommonJS Modules
    • Ways to use it:
      • Use https://esm.sh to convert it into an ES module, like https://esm.sh/@atproto/oauth-client-browser@0.3.0
      • Use a build somehow (??)
    • Ways to identify it:
      • Look for require() or module.exports = ... in the code
      • A .cjs extension
      • maybe "type": "commonjs" in package.json (though it’s not clear to me which file exactly this refers to)

it’s really nice to have ES modules standardized

The main difference between CommonJS modules and ES modules from my perspective is that ES modules are actually a standard. This makes me feel a lot more confident using them, because browsers commit to backwards compatibility for web standards forever – if I write some code using ES modules today, I can feel sure that it’ll still work the same way in 15 years.

It also makes me feel better about using tooling like esbuild: even if the esbuild project dies, it’s implementing a standard, so it feels likely that there will be another similar tool in the future that I can replace it with.

the JS community has built a lot of very cool tools

A lot of the time when I talk about this stuff I get responses like “I hate javascript!!! it’s the worst!!!”. But my experience is that there are a lot of great tools for Javascript (I just learned about https://esm.sh yesterday which seems great! I love esbuild!), and that if I take the time to learn how things work I can take advantage of some of those tools and make my life a lot easier.

So the goal of this post is definitely not to complain about Javascript, it’s to understand the landscape so I can use the tooling in a way that feels good to me.

questions I still have

Here are some questions I still have, I’ll add the answers into the post if I learn the answer.

  • Is there a tool that automatically generates importmaps for an ES Module that I have set up locally? (apparently yes: jspm)
  • How can I convert a CommonJS module into an ES module on my computer, the way https://esm.sh does? (apparently esbuild can sort of do this, though named exports don’t work)
  • When people normally build CommonJS modules into regular JS code, what code is doing that? Obviously there are tools like webpack, rollup, esbuild, etc, but do those tools all implement their own JS parsers/static analysis? How many JS parsers are there out there?
  • Is there any way to bundle an ES module into a single file (like atcute-client.js), but so that in the browser I can still import multiple different paths from that file (like both @atcute/client/lexicons and @atcute/client)?

all the tools

Here’s a list of every tool we talked about in this post:

  • npm (for downloading a library’s build so I can look at the files in it)
  • esbuild
  • esm.sh and skypack.dev (CDNs that translate things into ES modules)
  • Simon Willison’s download-esm
  • jspm (for generating importmaps)

Writing this post has made me think that even though I usually don’t want to have a build that I run every time I update the project, I might be willing to have a build step (using download-esm or something) that I run only once when setting up the project and never run again except maybe if I’m updating my dependency versions.

that’s all!

Thanks to Marco Rogers who taught me a lot of the things in this post. I’ve probably made some mistakes in this post and I’d love to know what they are – let me know on Bluesky or Mastodon!

2024-11-09T09:24:29+00:00 Fullscreen Open in Tab
New microblog with TILs

I added a new section to this site a couple weeks ago called TIL (“today I learned”).

the goal: save interesting tools & facts I posted on social media

One kind of thing I like to post on Mastodon/Bluesky is “hey, here’s a cool thing”, like the great SQLite repl litecli, or the fact that cross compiling in Go Just Works and it’s amazing, or cryptographic right answers, or this great diff tool. Usually I don’t want to write a whole blog post about those things because I really don’t have much more to say than “hey this is useful!”

It started to bother me that I didn’t have anywhere to put those things: for example recently I wanted to use diffdiff and I just could not remember what it was called.

the solution: make a new section of this blog

So I quickly made a new folder called /til/, added some custom styling (I wanted to style the posts to look a little bit like a tweet), made a little Rake task to help me create new posts quickly (rake new_til), and set up a separate RSS Feed for it.

I think this new section of the blog might be more for myself than anything, now when I forget the link to Cryptographic Right Answers I can hopefully look it up on the TIL page. (you might think “julia, why not use bookmarks??” but I have been failing to use bookmarks for my whole life and I don’t see that changing ever, putting things in public is for whatever reason much easier for me)

So far it’s been working, often I can actually just make a quick post in 2 minutes which was the goal.

inspired by Simon Willison’s TIL blog

My page is inspired by Simon Willison’s great TIL blog, though my TIL posts are a lot shorter.

I don’t necessarily want everything to be archived

This came about because I spent a lot of time on Twitter, so I’ve been thinking about what I want to do about all of my tweets.

I keep reading the advice to “POSSE” (“post on your own site, syndicate elsewhere”), and while I find the idea appealing in principle, for me part of the appeal of social media is that it’s a little bit ephemeral. I can post polls or questions or observations or jokes and then they can just kind of fade away as they become less relevant.

I find it a lot easier to identify specific categories of things that I actually want to have on a Real Website That I Own:

  • blog posts
  • comics
  • TILs

and then let everything else be kind of ephemeral.

I really believe in the advice to make email lists though – the first two (blog posts & comics) both have email lists and RSS feeds that people can subscribe to if they want. I might add a quick summary of any TIL posts from that week to the “blog posts from this week” mailing list.

2024-11-04T09:18:03+00:00 Fullscreen Open in Tab
My IETF 121 Agenda

Here's where you can find me at IETF 121 in Dublin!

Monday

Tuesday

  • 9:30 - 11:30 • oauth
  • 13:00 - 14:30 • spice
  • 16:30 - 17:30 • scim

Thursday

Get in Touch

My Current Drafts

2024-10-31T08:00:10+00:00 Fullscreen Open in Tab
ASCII control characters in my terminal

Hello! I’ve been thinking about the terminal a lot and yesterday I got curious about all these “control codes”, like Ctrl-A, Ctrl-C, Ctrl-W, etc. What’s the deal with all of them?

a table of ASCII control characters

Here’s a table of all 33 ASCII control characters, and what they do on my machine (on Mac OS), more or less. There are about a million caveats, but I’ll talk about what it means and all the problems with this diagram that I know about.

You can also view it as an HTML page (I just made it an image so it would show up in RSS).

different kinds of codes are mixed together

The first surprising thing about this diagram to me is that there are 33 control codes, split into (very roughly speaking) these categories:

  1. Codes that are handled by the operating system’s terminal driver, for example when the OS sees a 3 (Ctrl-C), it’ll send a SIGINT signal to the current program
  2. Everything else is passed through to the application as-is and the application can do whatever it wants with them. Some subcategories of those:
    • Codes that correspond to a literal keypress of a key on your keyboard (Enter, Tab, Backspace). For example when you press Enter, your terminal gets sent 13.
    • Codes used by readline: “the application can do whatever it wants” often means “it’ll do more or less what the readline library does, whether the application actually uses readline or not”, so I’ve labelled a bunch of the codes that readline uses
    • Other codes, for example I think Ctrl-X has no standard meaning in the terminal in general but emacs uses it very heavily

There’s no real structure to which codes are in which categories, they’re all just kind of randomly scattered because this evolved organically.

(If you’re curious about readline, I wrote more about readline in entering text in the terminal is complicated, and there are a lot of cheat sheets out there)

there are only 33 control codes

Something else that I find a little surprising is that there are only 33 control codes – A to Z, plus 7 more (@, [, \, ], ^, _, ?). This means that if you want to have for example Ctrl-1 as a keyboard shortcut in a terminal application, that’s not really meaningful – on my machine at least Ctrl-1 is exactly the same thing as just pressing 1, Ctrl-3 is the same as Ctrl-[, etc.
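A quick way to see where the 33 come from: roughly speaking, Ctrl-SOMEKEY sends the key’s (uppercased) ASCII value with bit 6 (0x40) flipped, and only @, A-Z, [, \, ], ^, _ (plus ? for DEL) land in the control character range when you do that. A quick Python check:

for ch in ["@", "C", "Z", "[", "?"]:
    print(f"Ctrl-{ch} sends byte {ord(ch) ^ 0x40}")

# Ctrl-@ sends byte 0
# Ctrl-C sends byte 3
# Ctrl-Z sends byte 26
# Ctrl-[ sends byte 27
# Ctrl-? sends byte 127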

Also Ctrl+Shift+C isn’t a control code – what it does depends on your terminal emulator. On Linux Ctrl-Shift-X is often used by the terminal emulator to copy or open a new tab or paste for example, it’s not sent to the TTY at all.

Also I use Ctrl+Left Arrow all the time, but that isn’t a control code, instead it sends an ANSI escape sequence (ctrl-[[1;5D) which is a different thing which we absolutely do not have space for in this post.

This “there are only 33 codes” thing is totally different from how keyboard shortcuts work in a GUI where you can have Ctrl+KEY for any key you want.

the official ASCII names aren’t very meaningful to me

Each of these 33 control codes has a name in ASCII (for example 3 is ETX). When all of these control codes were originally defined, they weren’t being used for computers or terminals at all, they were used for the telegraph machine. Telegraph machines aren’t the same as UNIX terminals so a lot of the codes were repurposed to mean something else.

Personally I don’t find these ASCII names very useful, because 50% of the time the name in ASCII has no actual relationship to what that code does on UNIX systems today. So it feels easier to just ignore the ASCII names completely instead of trying to figure which ones still match their original meaning.

It’s hard to use Ctrl-M as a keyboard shortcut

Another thing that’s a bit weird is that Ctrl-M is literally the same as Enter, and Ctrl-I is the same as Tab, which makes it hard to use those two as keyboard shortcuts.

From some quick research, it seems like some folks do still use Ctrl-I and Ctrl-M as keyboard shortcuts (here’s an example), but to do that you need to configure your terminal emulator to treat them differently than the default.

For me the main takeaway is that if I ever write a terminal application I should avoid Ctrl-I and Ctrl-M as keyboard shortcuts in it.

how to identify what control codes get sent

While writing this I needed to do a bunch of experimenting to figure out what various key combinations did, so I wrote this Python script echo-key.py that will print them out.

There’s probably a more official way but I appreciated having a script I could customize.
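If you’re curious, the general idea of a script like that is pretty short – here’s a rough sketch (a simplified version, not exactly what echo-key.py does): put the terminal into raw mode and print out every byte you get.

import os, sys, termios, tty

fd = sys.stdin.fileno()
old = termios.tcgetattr(fd)
tty.setraw(fd)   # raw mode: ISIG and ICANON are off, so even Ctrl-C arrives as a byte
try:
    while True:
        ch = os.read(fd, 1)
        # raw mode also turns off output processing, so end lines with \r\n by hand
        print(f"got byte {ch[0]} ({ch!r})", end="\r\n")
        if ch == b"q":   # press q to quit, since Ctrl-C won't kill the program anymore
            break
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, old)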

caveat: on canonical vs noncanonical mode

Two of these codes (Ctrl-W and Ctrl-U) are labelled in the table as “handled by the OS”, but actually they’re not always handled by the OS, it depends on whether the terminal is in “canonical” mode or in “noncanonical mode”.

In canonical mode, programs only get input when you press Enter (and the OS is in charge of deleting characters when you press Backspace or Ctrl-W). But in noncanonical mode the program gets input immediately when you press a key, and the Ctrl-W and Ctrl-U codes are passed through to the program to handle any way it wants.

Generally in noncanonical mode the program will handle Ctrl-W and Ctrl-U similarly to how the OS does, but there are some small differences.

Some examples of programs that use canonical mode:

  • probably pretty much any noninteractive program, like grep or cat
  • git, I think

Examples of programs that use noncanonical mode:

  • python3, irb and other REPLs
  • your shell
  • any full screen TUI like less or vim

caveat: all of the “OS terminal driver” codes are configurable with stty

I said that Ctrl-C sends SIGINT but technically this is not necessarily true, if you really want to you can remap all of the codes labelled “OS terminal driver”, plus Backspace, using a tool called stty, and you can view the mappings with stty -a.

Here are the mappings on my machine right now:

$ stty -a
cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>;
	eol2 = <undef>; erase = ^?; intr = ^C; kill = ^U; lnext = ^V;
	min = 1; quit = ^\; reprint = ^R; start = ^Q; status = ^T;
	stop = ^S; susp = ^Z; time = 0; werase = ^W;

I have personally never remapped any of these and I cannot imagine a reason I would (I think it would be a recipe for confusion and disaster for me), but I asked on Mastodon and people said the most common reasons they used stty were:

  • fix a broken terminal with stty sane
  • set stty erase ^H to change how Backspace works
  • set stty ixoff
  • some people even map SIGINT to a different key, like their DELETE key

caveat: on signals

Two signals caveats:

  1. If the ISIG terminal mode is turned off, then the OS won’t send signals. For example vim turns off ISIG
  2. Apparently on BSDs, there’s an extra control code (Ctrl-T) which sends SIGINFO

You can see which terminal modes a program is setting using strace like this, terminal modes are set with the ioctl system call:

$ strace -tt -o out  vim
$ grep ioctl out | grep SET

here are the modes vim sets when it starts (ISIG and ICANON are missing!):

17:43:36.670636 ioctl(0, TCSETS, {c_iflag=IXANY|IMAXBEL|IUTF8,
c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST, c_cflag=B38400|CS8|CREAD,
c_lflag=ECHOK|ECHOCTL|ECHOKE|PENDIN, ...}) = 0

and it resets the modes when it exits:

17:43:38.027284 ioctl(0, TCSETS, {c_iflag=ICRNL|IXANY|IMAXBEL|IUTF8,
c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST|ONLCR, c_cflag=B38400|CS8|CREAD,
c_lflag=ISIG|ICANON|ECHO|ECHOE|ECHOK|IEXTEN|ECHOCTL|ECHOKE|PENDIN, ...}) = 0

I think the specific combination of modes vim is using here might be called “raw mode”, man cfmakeraw talks about that.

there are a lot of conflicts

Related to “there are only 33 codes”, there are a lot of conflicts where different parts of the system want to use the same code for different things, for example by default Ctrl-S will freeze your screen, but if you turn that off then readline will use Ctrl-S to do a forward search.

Another example is that on my machine sometimes Ctrl-T will send SIGINFO and sometimes it’ll transpose 2 characters and sometimes it’ll do something completely different depending on:

  • whether the program has ISIG set
  • whether the program uses readline / imitates readline’s behaviour

caveat: on “backspace” and “other backspace”

In this diagram I’ve labelled code 127 as “backspace” and 8 as “other backspace”. Uh, what?

I think this was the single biggest topic of discussion in the replies on Mastodon – apparently there’s a LOT of history to this and I’d never heard of any of it before.

First, here’s how it works on my machine:

  1. I press the Backspace key
  2. The TTY gets sent the byte 127, which is called DEL in ASCII
  3. the OS terminal driver and readline both have 127 mapped to “backspace” (so it works both in canonical mode and noncanonical mode)
  4. The previous character gets deleted

If I press Ctrl+H, it has the same effect as Backspace if I’m using readline, but in a program without readline support (like cat for instance), it just prints out ^H.

Apparently Step 2 above is different for some folks – their Backspace key sends the byte 8 instead of 127, and so if they want Backspace to work then they need to configure the OS (using stty) to set erase = ^H.

There’s an incredible section of the Debian Policy Manual on keyboard configuration that describes how Delete and Backspace should work according to Debian policy, which seems very similar to how it works on my Mac today. My understanding (via this mastodon post) is that this policy was written in the 90s because there was a lot of confusion about what Backspace should do in the 90s and there needed to be a standard to get everything to work.

There’s a bunch more historical terminal stuff here but that’s all I’ll say for now.

there’s probably a lot more diversity in how this works

I’ve probably missed a bunch more ways that “how it works on my machine” might be different from how it works on other people’s machines, and I’ve probably made some mistakes about how it works on my machine too. But that’s all I’ve got for today.

Some more stuff I know that I’ve left out: according to stty -a Ctrl-O is “discard”, Ctrl-R is “reprint”, and Ctrl-Y is “dsusp”. I have no idea how to make those actually do anything (pressing them does not do anything obvious, and some people have told me what they used to do historically but it’s not clear to me if they have a use in 2024), and a lot of the time in practice they seem to just be passed through to the application anyway so I just labelled Ctrl-R and Ctrl-Y as readline.

not all of this is that useful to know

Also I want to say that I think the contents of this post are kind of interesting but I don’t think they’re necessarily that useful. I’ve used the terminal pretty successfully every day for the last 20 years without knowing literally any of this – I just knew what Ctrl-C, Ctrl-D, Ctrl-Z, Ctrl-R, Ctrl-L did in practice (plus maybe Ctrl-A, Ctrl-E and Ctrl-W) and did not worry about the details for the most part, and that was almost always totally fine except when I was trying to use xterm.js.

But I had fun learning about it so maybe it’ll be interesting to you too.

2024-10-27T07:47:04+00:00 Fullscreen Open in Tab
Using less memory to look up IP addresses in Mess With DNS

I’ve been having problems for the last 3 years or so where Mess With DNS periodically runs out of memory and gets OOM killed.

This hasn’t been a big priority for me: usually it just goes down for a few minutes while it restarts, and it only happens once a day at most, so I’ve just been ignoring it. But last week it started actually causing a problem so I decided to look into it.

This was kind of a winding road where I learned a lot.

there’s about 100MB of memory available

I run Mess With DNS on a VM with about 465MB of RAM, which according to ps aux (the RSS column) is split up something like:

  • 100MB for PowerDNS
  • 200MB for Mess With DNS
  • 40MB for hallpass

That leaves about 110MB of memory free.

A while back I set GOMEMLIMIT to 250MB to try to make sure the garbage collector ran if Mess With DNS used more than 250MB of memory, and I think this helped but it didn’t solve everything.

the problem: OOM killing the backup script

A few weeks ago I started backing up Mess With DNS’s database for the first time using restic.

This has been working okay, but since Mess With DNS operates without much extra memory I think restic sometimes needed more memory than was available on the system, and so the backup script sometimes got OOM killed.

This was a problem because

  1. backups might be corrupted sometimes
  2. more importantly, restic takes out a lock when it runs, and so I’d have to manually do an unlock if I wanted the backups to continue working. Doing manual work like this is the #1 thing I try to avoid with all my web services (who has time for that!) so I really wanted to do something about it.

There’s probably more than one solution to this, but I decided to try to make Mess With DNS use less memory so that there was more available memory on the system, mostly because it seemed like a fun problem to try to solve.

what’s using memory: IP addresses

I’d run a memory profile of Mess With DNS a bunch of times in the past, so I knew exactly what was using most of Mess With DNS’s memory: IP addresses.

When it starts, Mess With DNS loads this database where you can look up the ASN of every IP address into memory, so that when it receives a DNS query it can take the source IP address like 74.125.16.248 and tell you that IP address belongs to GOOGLE.

This database by itself used about 117MB of memory, and a simple du told me that was too much – the original text files were only 37MB!

$ du -sh *.tsv
26M	ip2asn-v4.tsv
11M	ip2asn-v6.tsv

The way it worked originally is that I had an array of these:

type IPRange struct {
	StartIP net.IP
	EndIP   net.IP
	Num     int
	Name    string
	Country string
}

and I searched through it with a binary search to figure out if any of the ranges contained the IP I was looking for. Basically the simplest possible thing and it’s super fast, my machine can do about 9 million lookups per second.

attempt 1: use SQLite

I’ve been using SQLite recently, so my first thought was – maybe I can store all of this data on disk in an SQLite database, give the tables an index, and that’ll use less memory.

So I:

  • wrote a quick Python script using sqlite-utils to import the TSV files into an SQLite database
  • adjusted my code to select from the database instead

This did solve the initial memory goal (after a GC it now hardly used any memory at all because the table was on disk!), though I’m not sure how much GC churn this solution would cause if we needed to do a lot of queries at once. I did a quick memory profile and it seemed to allocate about 1KB of memory per lookup.

Let’s talk about the issues I ran into with using SQLite though.

problem: how to store IPv6 addresses

SQLite doesn’t have support for big integers and IPv6 addresses are 128 bits, so I decided to store them as text. I think BLOB might have been better, I originally thought BLOBs couldn’t be compared but the sqlite docs say they can.

I ended up with this schema:

CREATE TABLE ipv4_ranges (
   start_ip INTEGER NOT NULL,
   end_ip INTEGER NOT NULL,
   asn INTEGER NOT NULL,
   country TEXT NOT NULL,
   name TEXT NOT NULL
);
CREATE TABLE ipv6_ranges (
   start_ip TEXT NOT NULL,
   end_ip TEXT NOT NULL,
   asn INTEGER,
   country TEXT,
   name TEXT
);
CREATE INDEX idx_ipv4_ranges_start_ip ON ipv4_ranges (start_ip);
CREATE INDEX idx_ipv6_ranges_start_ip ON ipv6_ranges (start_ip);
CREATE INDEX idx_ipv4_ranges_end_ip ON ipv4_ranges (end_ip);
CREATE INDEX idx_ipv6_ranges_end_ip ON ipv6_ranges (end_ip);

Also I learned that Python has an ipaddress module, so I could use ipaddress.ip_address(s).exploded to make sure that the IPv6 addresses were expanded so that a string comparison would compare them properly.
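For example, here’s what exploded gives you (just a quick demo):

import ipaddress

print(ipaddress.ip_address("2607:f8b0:4006:824::200e").exploded)
# prints 2607:f8b0:4006:0824:0000:0000:0000:200e, which compares correctly as a string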

problem: it’s 500x slower

I ran a quick microbenchmark, something like this. It printed out that it could look up 17,000 IPv6 addresses per second, and similarly for IPv4 addresses.

This was pretty discouraging – being able to look up 17k addresses per second is kind of fine (Mess With DNS does not get a lot of traffic), but I compared it to the original binary search code and the original code could do 9 million per second.

	ips := []net.IP{}
	count := 20000
	for i := 0; i < count; i++ {
		// create a random IPv6 address
		bytes := randomBytes()
		ip := net.IP(bytes[:])
		ips = append(ips, ip)
	}
	now := time.Now()
	success := 0
	for _, ip := range ips {
		_, err := ranges.FindASN(ip)
		if err == nil {
			success++
		}
	}
	fmt.Println(success)
	elapsed := time.Since(now)
	fmt.Println("number per second", float64(count)/elapsed.Seconds())

time for EXPLAIN QUERY PLAN

I’d never really done an EXPLAIN in sqlite, so I thought it would be a fun opportunity to see what the query plan was doing.

sqlite> explain query plan select * from ipv6_ranges where '2607:f8b0:4006:0824:0000:0000:0000:200e' BETWEEN start_ip and end_ip;
QUERY PLAN
`--SEARCH ipv6_ranges USING INDEX idx_ipv6_ranges_end_ip (end_ip>?)

It looks like it’s just using the end_ip index and not the start_ip index, so maybe it makes sense that it’s slower than the binary search.

I tried to figure out if there was a way to make SQLite use both indexes, but I couldn’t find one and maybe it knows best anyway.

At this point I gave up on the SQLite solution, I didn’t love that it was slower and also it’s a lot more complex than just doing a binary search. I felt like I’d rather keep something much more similar to the binary search.

A few things I tried with SQLite that did not cause it to use both indexes:

  • using a compound index instead of two separate indexes
  • running ANALYZE
  • using INTERSECT to intersect the results of start_ip < ? and ? < end_ip. This did make it use both indexes, but it also seemed to make the query literally 1000x slower, probably because it needed to create the results of both subqueries in memory and intersect them.

attempt 2: use a trie

My next idea was to use a trie, because I had some vague idea that maybe a trie would use less memory, and I found this library called ipaddress-go that lets you look up IP addresses using a trie.

I tried using it (here’s the code), but I think I was doing something wildly wrong, because compared to my naive array + binary search:

  • it used WAY more memory (800MB to store just the IPv4 addresses)
  • it was a lot slower to do the lookups (it could do only 100K/second instead of 9 million/second)

I’m not really sure what went wrong here but I gave up on this approach and decided to just try to make my array use less memory and stick to a simple binary search.

some notes on memory profiling

One thing I learned about memory profiling is that you can use the runtime package to see how much memory is currently allocated in the program. That’s how I got all the memory numbers in this post. Here’s the code:

func memusage() {
	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("Alloc = %v MiB\n", m.Alloc/1024/1024)
	// write mem.prof
	f, err := os.Create("mem.prof")
	if err != nil {
		log.Fatal(err)
	}
	pprof.WriteHeapProfile(f)
	f.Close()
}

Also I learned that if you use pprof to analyze a heap profile there are two ways to analyze it: you can pass either --alloc_space or --inuse_space to go tool pprof. I don’t know how I didn’t realize this before, but alloc_space will tell you about everything that was allocated, and inuse_space will just include memory that’s currently in use.

Anyway I ran go tool pprof -pdf --inuse_space mem.prof > mem.pdf a lot. Also every time I use pprof I find myself referring to my own intro to pprof, it’s probably the blog post I wrote that I use the most often. I should add --alloc_space and --inuse_space to it.

attempt 3: make my array use less memory

I was storing my ip2asn entries like this:

type IPRange struct {
	StartIP net.IP
	EndIP   net.IP
	Num     int
	Name    string
	Country string
}

I had 3 ideas for ways to improve this:

  1. There was a lot of repetition of Name and the Country, because a lot of IP ranges belong to the same ASN
  2. net.IP is a []byte under the hood, which felt like it involved an unnecessary pointer – was there a way to inline it into the struct?
  3. Maybe I didn’t need both the start IP and the end IP, often the ranges were consecutive so maybe I could rearrange things so that I only had the start IP

idea 3.1: deduplicate the Name and Country

I figured I could store the ASN info in an array, and then just store the index into the array in my IPRange struct. Here are the structs so you can see what I mean:

type IPRange struct {
	StartIP netip.Addr
	EndIP   netip.Addr
	ASN     uint32
	Idx     uint32
}

type ASNInfo struct {
	Country string
	Name    string
}

type ASNPool struct {
	asns   []ASNInfo
	lookup map[ASNInfo]uint32
}
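
The pool hands out indexes roughly like this – a sketch of the idea (the method name is made up and it assumes lookup was initialized with make; the real version is in the code linked below):

func (p *ASNPool) Add(info ASNInfo) uint32 {
	// if we've seen this (country, name) pair before, reuse its index
	if idx, ok := p.lookup[info]; ok {
		return idx
	}
	// otherwise append it and remember where it went
	idx := uint32(len(p.asns))
	p.asns = append(p.asns, info)
	p.lookup[info] = idx
	return idx
}

Then each IPRange just stores whatever index it got back, and the Name and Country only get looked up in asns[Idx] when they’re actually needed.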

This worked! It brought memory usage from 117MB to 65MB – a 50MB savings. I felt good about this.

Here’s all of the code for that part.

how big are ASNs?

As an aside – I’m storing the ASN in a uint32, is that right? I looked in the ip2asn file and the biggest one seems to be 401307, though there are a few lines that say 4294901931, which is much bigger but still just inside the range of a uint32. So I can definitely use a uint32.

59.101.179.0	59.101.179.255	4294901931	Unknown	AS4294901931

idea 3.2: use netip.Addr instead of net.IP

It turns out that I’m not the only one who felt that net.IP was using an unnecessary amount of memory – in 2021 the folks at Tailscale released a new IP address library for Go which solves this and many other issues. They wrote a great blog post about it.

I discovered (to my delight) that not only does this new IP address library exist and do exactly what I want, it’s also now in the Go standard library as netip.Addr. Switching to netip.Addr was very easy and saved another 20MB of memory, bringing us to 46MB.
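
Here’s a tiny example of the netip.Addr API, mostly to show why it’s nice here: the address bytes live inside the value itself instead of behind a []byte pointer, and addresses are comparable and easy to sort:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	a := netip.MustParseAddr("74.125.16.248")
	b := netip.MustParseAddr("2607:f8b0:4006:824::200e")
	// netip.Addr is a small comparable value type, so a slice of them
	// is one flat allocation - handy for a binary-searchable array
	fmt.Println(a.Is4(), b.Is6(), a.Less(b))
}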

I didn’t try my third idea (remove the end IP from the struct) because I’d already been programming for long enough on a Saturday morning and I was happy with my progress.

It’s always such a great feeling when I think “hey, I don’t like this, there must be a better way” and then immediately discover that someone has already made the exact thing I want, thought about it a lot more than me, and implemented it much better than I would have.

all of this was messier in real life

Even though I tried to explain this in a simple linear way “I tried X, then I tried Y, then I tried Z”, that’s kind of a lie – I always try to take my actual debugging process (total chaos) and make it seem more linear and understandable because the reality is just too annoying to write down. It’s more like:

  • try sqlite
  • try a trie
  • second guess everything that I concluded about sqlite, go back and look at the results again
  • wait what about indexes
  • very very belatedly realize that I can use runtime to check how much memory everything is using, start doing that
  • look at the trie again, maybe I misunderstood everything
  • give up and go back to binary search
  • look at all of the numbers for tries/sqlite again to make sure I didn’t misunderstand

a note on using 512MB of memory

Someone asked why I don’t just give the VM more memory. I could very easily afford to pay for a VM with 1GB of memory, but I feel like 512MB really should be enough (and really that 256MB should be enough!) so I’d rather stay inside that constraint. It’s kind of a fun puzzle.

a few ideas from the replies

Folks had a lot of good ideas I hadn’t thought of. Recording them as inspiration if I feel like having another Fun Performance Day at some point.

  • Try Go’s unique package for the ASNPool. Someone tried this and it uses more memory, probably because Go’s pointers are 64 bits
  • Try compiling with GOARCH=386 to use 32-bit pointers to save space (maybe in combination with using unique!)
  • It should be possible to store all of the IPv6 addresses in just 64 bits, because only the first 64 bits of the address are public
  • Interpolation search might be faster than binary search since IP addresses are numeric
  • Try the MaxMind db format with mmdbwriter or mmdbctl
  • Tailscale’s art routing table package

the result: saved 70MB of memory!

I deployed the new version and now Mess With DNS is using less memory! Hooray!

A few other notes:

  • lookups are a little slower – in my microbenchmark they went from 9 million lookups/second to 6 million, maybe because I added a little indirection. Using less memory and a little more CPU seemed like a good tradeoff though.
  • it’s still using more memory than the raw text files do (46MB vs 37MB), I guess pointers take up space and that’s okay.

I’m honestly not sure if this will solve all my memory problems, probably not! But I had fun, I learned a few things about SQLite, I still don’t know what to think about tries, and it made me love binary search even more than I already did.

2024-10-07T09:19:57+00:00 Fullscreen Open in Tab
Some notes on upgrading Hugo

Warning: this is a post about very boring yakshaving, probably only of interest to people who are trying to upgrade Hugo from a very old version to a new version. But what are blogs for if not documenting one’s very boring yakshaves from time to time?

So yesterday I decided to try to upgrade Hugo. There’s no real reason to do this – I’ve been using Hugo version 0.40 to generate this blog since 2018, it works fine, and I don’t have any problems with it. But I thought – maybe it won’t be as hard as I think, and I kind of like a tedious computer task sometimes!

I thought I’d document what I learned along the way in case it’s useful to anyone else doing this very specific migration. I upgraded from Hugo v0.40 (from 2018) to v0.135 (from 2024).

Here are most of the changes I had to make:

change 1: template "theme/partials/thing.html" is now partial thing.html

I had to replace a bunch of instances of {{ template "theme/partials/header.html" . }} with {{ partial "header.html" . }}.

This happened in v0.42:

We have now virtualized the filesystems for project and theme files. This makes everything simpler, faster and more powerful. But it also means that template lookups on the form {{ template “theme/partials/pagination.html” . }} will not work anymore. That syntax has never been documented, so it’s not expected to be in wide use.

change 2: .Data.Pages is now site.RegularPages

This seems to be discussed in the release notes for 0.57.2.

I just needed to replace .Data.Pages with site.RegularPages in the template on the homepage as well as in my RSS feed template.

change 3: .Next and .Prev got flipped

I had this comment in the part of my theme where I link to the next/previous blog post:

“next” and “previous” in hugo apparently mean the opposite of what I’d think they’d mean intuitively. I’d expect “next” to mean “in the future” and “previous” to mean “in the past” but it’s the opposite

It looks like they changed this in ad705aac064 so that “next” actually is in the future and “prev” actually is in the past. I definitely find the new behaviour more intuitive.

downloading the Hugo changelogs with a script

Figuring out why/when all of these changes happened was a little difficult. I ended up hacking together a bash script to download all of the changelogs from github as text files, which I could then grep to try to figure out what happened. It turns out it’s pretty easy to get all of the changelogs from the GitHub API.
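
For reference, the same idea in Go looks something like this – a sketch, not the script I actually ran; it uses GitHub’s public releases endpoint (which returns a tag_name and a body for each release, and is paginated and rate-limited for unauthenticated requests):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

type release struct {
	TagName string `json:"tag_name"`
	Body    string `json:"body"`
}

func main() {
	if err := os.MkdirAll("changelogs", 0o755); err != nil {
		panic(err)
	}
	for page := 1; ; page++ {
		url := fmt.Sprintf("https://api.github.com/repos/gohugoio/hugo/releases?per_page=100&page=%d", page)
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		var releases []release
		err = json.NewDecoder(resp.Body).Decode(&releases)
		resp.Body.Close()
		if err != nil {
			panic(err)
		}
		if len(releases) == 0 {
			break // no more pages
		}
		for _, r := range releases {
			// one text file per release, e.g. changelogs/v0.135.0.txt
			if err := os.WriteFile("changelogs/"+r.TagName+".txt", []byte(r.Body), 0o644); err != nil {
				panic(err)
			}
		}
	}
}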

So far everything was not so bad – there was also a change around taxonomies that I can’t quite explain, but it was all pretty manageable. Then we got to the really tough one: the markdown renderer.

change 4: the markdown renderer (blackfriday -> goldmark)

The blackfriday markdown renderer (which was previously the default) was removed in v0.100.0. This seems pretty reasonable:

It has been deprecated for a long time, its v1 version is not maintained anymore, and there are many known issues. Goldmark should be a mature replacement by now.

Fixing all my Markdown changes was a huge pain – I ended up having to update 80 different Markdown files (out of 700) so that they would render properly, and I’m not totally sure I caught everything.

why bother switching renderers?

The obvious question here is – why bother even trying to upgrade Hugo at all if I have to switch Markdown renderers? My old site was running totally fine and I think it wasn’t necessarily a good use of time, but the one reason I think it might be useful in the future is that the new renderer (goldmark) uses the CommonMark markdown standard, which I’m hoping will be somewhat more futureproof. So maybe I won’t have to go through this again? We’ll see.

Also it turned out that the new Goldmark renderer does fix some problems I had (but didn’t know that I had) with smart quotes and how lists/blockquotes interact.

finding all the Markdown problems: the process

The hard part of this Markdown change was even figuring out what changed. Almost all of the problems (including #2 and #3 above) just silently broke the site, they didn’t cause any errors or anything. So I had to diff the HTML to hunt them down.

Here’s what I ended up doing:

  1. Generate the site with the old version, put it in public_old
  2. Generate the new version, put it in public
  3. Diff every single HTML file in public/ and public_old with this diff.sh script and put the results in a diffs/ folder
  4. Run variations on find diffs -type f | xargs cat | grep -E -C 5 '31m|32m' | less -r over and over again to look at every single change until I found something that seemed wrong
  5. Update the Markdown to fix the problem
  6. Repeat until everything seemed okay

(the grep 31m|32m thing is searching for red/green text in the diff)

This was very time consuming but it was a little bit fun for some reason so I kept doing it until it seemed like nothing too horrible was left.

the new markdown rules

Here’s a list of every type of Markdown change I had to make. It’s very possible these are all extremely specific to me but it took me a long time to figure them all out so maybe this will be helpful to one other person who finds this in the future.

4.1: mixing HTML and markdown

This doesn’t work anymore (it doesn’t expand the link):

<small>
[a link](https://example.com)
</small>

I need to do this instead:

<small>

[a link](https://example.com)

</small>

This works too:

<small> [a link](https://example.com) </small>

4.2: << is changed into «

I didn’t want this so I needed to configure:

markup:
  goldmark:
    extensions:
      typographer:
        leftAngleQuote: '&lt;&lt;'
        rightAngleQuote: '&gt;&gt;'

4.3: nested lists sometimes need 4 space indents

This doesn’t render as a nested list anymore if I only indent by 2 spaces, I need to put 4 spaces.

1. a
  * b
  * c
2. b

The problem is that the amount of indent needed depends on the size of the list markers. Here’s a reference in CommonMark for this.

4.4: blockquotes inside lists work better

Previously the > quote here didn’t render as a blockquote, and with the new renderer it does.

* something
> quote
* something else

I found a bunch of Markdown that had been kind of broken (which I hadn’t noticed) that works better with the new renderer, and this is an example of that.

Lists inside blockquotes also seem to work better.

4.5: headings inside lists

Previously this didn’t render as a heading, but now it does. So I needed to replace the # with &num;.

* # passengers: 20

4.6: + or 1) at the beginning of the line makes it a list

I had something which looked like this:

`1 / (1
+ exp(-1)) = 0.73`

With Blackfriday it rendered like this:

<p><code>1 / (1
+ exp(-1)) = 0.73</code></p>

and with Goldmark it rendered like this:

<p>`1 / (1</p>
<ul>
<li>exp(-1)) = 0.73`</li>
</ul>

Same thing if there was an accidental 1) at the beginning of a line, like in this Markdown snippet

I set up a small Hadoop cluster (1 master, 2 workers, replication set to 
1) on 

To fix this I just had to rewrap the line so that the + wasn’t the first character.

The Markdown is formatted this way because I wrap my Markdown to 80 characters a lot and the wrapping isn’t very context sensitive.

4.7: no more smart quotes in code blocks

There were a bunch of places where the old renderer (Blackfriday) was doing unwanted things in code blocks, like replacing ... with … or replacing straight quotes with smart quotes. I hadn’t realized this was happening and I was very happy to have it fixed.

4.8: better quote management

The way this gets rendered got better:

"Oh, *interesting*!"
  • old: “Oh, interesting!“
  • new: “Oh, interesting!”

Before there were two left smart quotes, now the quotes match.

4.9: images are no longer wrapped in a p tag

Previously if I had an image like this:

<img src="https://jvns.ca/images/rustboot1.png">

it would get wrapped in a <p> tag, now it doesn’t anymore. I dealt with this just by adding a margin-bottom: 0.75em to images in the CSS, hopefully that’ll make them display well enough.

4.10: <br> is now wrapped in a p tag

Previously this wouldn’t get wrapped in a p tag, but now it seems to:

<br><br>

I just gave up on fixing this though and resigned myself to maybe having some extra space in some cases. Maybe I’ll try to fix it later if I feel like another yakshave.

4.11: some more goldmark settings

I also needed to

  • turn off code highlighting (because it wasn’t working properly and I didn’t have it before anyway)
  • use the old “blackfriday” method to generate heading IDs so they didn’t change
  • allow raw HTML in my markdown

Here’s what I needed to add to my config.yaml to do all that:

markup:
  highlight:
    codeFences: false
  goldmark:
    renderer:
      unsafe: true
    parser:
      autoHeadingIDType: blackfriday

Maybe I’ll try to get syntax highlighting working one day, who knows. I might prefer having it off though.

a little script to compare blackfriday and goldmark

I also wrote a little program to compare the Blackfriday and Goldmark output for various markdown snippets, here it is in a gist.

It’s not really configured the exact same way Blackfriday and Goldmark were in my Hugo versions, but it was still helpful for understanding what was going on.

a quick note on maintaining themes

My approach to themes in Hugo has been:

  1. pay someone to make a nice design for the site (for example wizardzines.com was designed by Melody Starling)
  2. use a totally custom theme
  3. commit that theme to the same Github repo as the site

So I just need to edit the theme files to fix any problems. Also I wrote a lot of the theme myself so I’m pretty familiar with how it works.

Relying on someone else to keep a theme updated feels kind of scary to me, I think if I were using a third-party theme I’d just copy the code into my site’s github repo and then maintain it myself.

which static site generators have better backwards compatibility?

I asked on Mastodon if anyone had used a static site generator with good backwards compatibility.

The main answers seemed to be Jekyll and 11ty. Several people said they’d been using Jekyll for 10 years without any issues, and 11ty says it has stability as a core goal.

I think a big factor in how appealing Jekyll/11ty are is how easy it is for you to maintain a working Ruby / Node environment on your computer: part of the reason I stopped using Jekyll was that I got tired of having to maintain a working Ruby installation. But I imagine this wouldn’t be a problem for a Ruby or Node developer.

Several people said that they don’t build their Jekyll site locally at all – they just use GitHub Pages to build it.

that’s it!

Overall I’ve been happy with Hugo – I started using it because it had fast build times and it was a static binary, and both of those things are still extremely useful to me. I might have spent 10 hours on this upgrade, but I’ve probably spent 1000+ hours writing blog posts without thinking about Hugo at all so that seems like an extremely reasonable ratio.

I find it hard to be too mad about the backwards incompatible changes, most of them were quite a long time ago, Hugo does a great job of making their old releases available so you can use the old release if you want, and the most difficult one is removing support for the blackfriday Markdown renderer in favour of using something CommonMark-compliant which seems pretty reasonable to me even if it is a huge pain.

But it did take a long time and I don’t think I’d particularly recommend moving 700 blog posts to a new Markdown renderer unless you’re really in the mood for a lot of computer suffering for some reason.

The new renderer did fix a bunch of problems so I think overall it might be a good thing, even if I’ll have to remember to make 2 changes to how I write Markdown (4.1 and 4.3).

Also I’m still using Hugo 0.54 for https://wizardzines.com so maybe these notes will be useful to Future Me if I ever feel like upgrading Hugo for that site.

Hopefully I didn’t break too many things on the blog by doing this, let me know if you see anything broken!

2024-10-01T10:01:44+00:00 Fullscreen Open in Tab
Terminal colours are tricky

Yesterday I was thinking about how long it took me to get a colorscheme in my terminal that I was mostly happy with (SO MANY YEARS), and it made me wonder what about terminal colours made it so hard.

So I asked people on Mastodon what problems they’ve run into with colours in the terminal, and I got a ton of interesting responses! Let’s talk about some of the problems and a few possible ways to fix them.

problem 1: blue on black

One of the top complaints was “blue on black is hard to read”. Here’s an example of that: if I open Terminal.app, set the background to black, and run ls, the directories are displayed in a blue that isn’t that easy to read:

To understand why we’re seeing this blue, let’s talk about ANSI colours!

the 16 ANSI colours

Your terminal has 16 numbered colours – black, red, green, yellow, blue, magenta, cyan, white, and a “bright” version of each of those.

Programs can use them by printing out an “ANSI escape code” – for example if you want to see each of the 16 colours in your terminal, you can run this Python program:

def color(num, text):
    return f"\033[38;5;{num}m{text}\033[0m"

for i in range(16):
    print(color(i, f"number {i:02}"))

what are the ANSI colours?

This made me wonder – if blue is colour number 4, who decides what hex color that should correspond to?

The answer seems to be “there’s no standard, terminal emulators just choose colours and it’s not very consistent”. Here’s a screenshot of a table from Wikipedia, where you can see that there’s a lot of variation:

problem 1.5: bright yellow on white

Bright yellow on white is even worse than blue on black, here’s what I get in a terminal with the default settings:

That’s almost impossible to read (and some other colours like light green cause similar issues), so let’s talk about solutions!

two ways to reconfigure your colours

If you’re annoyed by these colour contrast issues (or maybe you just think the default ANSI colours are ugly), you might think – well, I’ll just choose a different “blue” and pick something I like better!

There are two ways you can do this:

Way 1: Configure your terminal emulator: I think most modern terminal emulators have a way to reconfigure the colours, and some of them even come with some preinstalled themes that you might like better than the defaults.

Way 2: Run a shell script: There are ANSI escape codes that you can print out to tell your terminal emulator to reconfigure its colours. Here’s a shell script that does that, from the base16-shell project. You can see that it has a few different conventions for changing the colours – I guess different terminal emulators have different escape codes for changing their colour palette, and so the script is trying to pick the right style of escape code based on the TERM environment variable.
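
As a tiny example of the kind of escape code those scripts print, here’s the OSC 4 “set palette colour” sequence in Go – with the caveat that not every terminal emulator supports it, and the replacement colour here is just one I made up:

package main

import "fmt"

func main() {
	// ask an xterm-compatible terminal to remap ANSI colour 4 ("blue")
	// to a lighter blue: \033]4;<index>;<colour> is OSC 4, and \033\\ is
	// the string terminator
	fmt.Print("\033]4;4;#6699ff\033\\")
	fmt.Println("\033[34mthis blue should now be easier to read on black\033[0m")
}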

what are the pros and cons of the 2 ways of configuring your colours?

I prefer to use the “shell script” method, because:

  • if I switch terminal emulators for some reason, I don’t need to learn a different configuration system, my colours still Just Work
  • I use base16-shell with base16-vim to make my vim colours match my terminal colours, which is convenient

some advantages of configuring colours in your terminal emulator:

  • if you use a popular terminal emulator, there are probably a lot more nice terminal themes out there that you can choose from
  • not all terminal emulators support the “shell script method”, and even if they do, the results can be a little inconsistent

This is what my shell has looked like for probably the last 5 years (using the solarized light base16 theme), and I’m pretty happy with it. Here’s htop:

Okay, so let’s say you’ve found a terminal colorscheme that you like. What else can go wrong?

problem 2: programs using 256 colours

Here’s what some output of fd, a find alternative, looks like in my colorscheme:

The contrast is pretty bad here, and I definitely don’t have that lime green in my normal colorscheme. What’s going on?

We can see what color codes fd is using by capturing its output (including the color codes) with the unbuffer program:

$ unbuffer fd . > out
$ vim out
^[[38;5;48mbad-again.sh^[[0m
^[[38;5;48mbad.sh^[[0m
^[[38;5;48mbetter.sh^[[0m
out

^[[38;5;48 means “set the foreground color to color 48”. Terminals don’t only have 16 colours – many terminals these days actually have 3 ways of specifying colours:

  1. the 16 ANSI colours we already talked about
  2. an extended set of 256 colours
  3. a further extended set of 24-bit hex colours, like #ffea03
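
Here’s a quick Go program that prints one colour from each of those three systems, so you can see the shape of the escape codes (what they actually look like will depend on your terminal):

package main

import "fmt"

func main() {
	fmt.Println("\033[34mANSI colour 4 (blue)\033[0m")              // 16-colour escape
	fmt.Println("\033[38;5;48m256-colour palette entry 48\033[0m")  // 256-colour escape
	fmt.Println("\033[38;2;255;234;3m24-bit colour #ffea03\033[0m") // 24-bit (truecolor) escape
}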

So fd is using one of the colours from the extended 256-color set. bat (a cat alternative) does something similar – here’s what it looks like by default in my terminal.

This looks fine though and it really seems like it’s trying to work well with a variety of terminal themes.

some newer tools seem to have theme support

I think it’s interesting that some of these newer terminal tools (fd, bat, delta, and probably more) have support for arbitrary custom themes. I guess the downside of this approach is that the default theme might clash with your terminal’s background, but the upside is that it gives you a lot more control over theming the tool’s output than just choosing 16 ANSI colours.

I don’t really use bat, but if I did I’d probably use bat --theme ansi to just use the ANSI colours that I have set in my normal terminal colorscheme.

problem 3: the grays in Solarized

A bunch of people on Mastodon mentioned a specific issue with grays in the Solarized theme: when I list a directory, the base16 Solarized Light theme looks like this:

but iTerm’s default Solarized Light theme looks like this:

This is because in the iTerm theme (which is the original Solarized design), colors 9-14 (the “bright blue”, “bright red”, etc) are mapped to a series of grays, and when I run ls, it’s trying to use those “bright” colours to color my directories and executables.

My best guess for why the original Solarized theme is designed this way is to make the grays available to the vim Solarized colorscheme.

I’m pretty sure I prefer the modified base16 version I use where the “bright” colours are actually colours instead of all being shades of gray though. (I didn’t actually realize the version I was using wasn’t the “original” Solarized theme until I wrote this post)

In any case I really love Solarized and I’m very happy it exists so that I can use a modified version of it.

problem 4: a vim theme that doesn’t match the terminal background

If my vim theme has a different background colour than my terminal theme, I get this ugly border, like this:

This one is a pretty minor issue though and I think making your terminal background match your vim background is pretty straightforward.

problem 5: programs setting a background color

A few people mentioned problems with terminal applications setting an unwanted background colour, so let’s look at an example of that.

Here ngrok has set the background to color #16 (“black”), but the base16-shell script I use sets color 16 to be bright orange, so I get this, which is pretty bad:

I think the intention is for ngrok to look something like this:

I think base16-shell sets color #16 to orange (instead of black) so that it can provide extra colours for use by base16-vim. This feels reasonable to me – I use base16-vim in the terminal, so I guess I’m using that feature and it’s probably more important to me than ngrok (which I rarely use) behaving a bit weirdly.

This particular issue is maybe an obscure clash between ngrok and my colorscheme, but I think this kind of clash is pretty common when a program sets an ANSI background color that the user has remapped for some reason.

a nice solution to contrast issues: “minimum contrast”

A bunch of terminals (iTerm2, tabby, kitty’s text_fg_override_threshold, and folks tell me also Ghostty and Windows Terminal) have a “minimum contrast” feature that will automatically adjust colours to make sure they have enough contrast.

Here’s an example from iTerm. This ngrok accident from before has pretty bad contrast, I find it pretty difficult to read:

With “minimum contrast” set to 40 in iTerm, it looks like this instead:

I didn’t have minimum contrast turned on before but I just turned it on today because it makes such a big difference when something goes wrong with colours in the terminal.

problem 6: TERM being set to the wrong thing

A few people mentioned that they’ll SSH into a system that doesn’t support the TERM environment variable that they have set locally, and then the colours won’t work.

I think the way TERM works is that systems have a terminfo database, so if the value of the TERM environment variable isn’t in the system’s terminfo database, then it won’t know how to output colours for that terminal. I don’t know too much about terminfo, but someone linked me to this terminfo rant that talks about a few other issues with terminfo.

I don’t have a system on hand to reproduce this one so I can’t say for sure how to fix it, but this stackoverflow question suggests running something like TERM=xterm ssh instead of ssh.

problem 7: picking “good” colours is hard

A couple of problems people mentioned with designing / finding terminal colorschemes:

  • some folks are colorblind and have trouble finding an appropriate colorscheme
  • accidentally making the background color too close to the cursor or selection color, so they’re hard to find
  • generally finding colours that work with every program is a struggle (for example you can see me having a problem with this with ngrok above!)

problem 8: making nethack/mc look right

Another problem people mentioned is using a program like nethack or midnight commander which you might expect to have a specific colourscheme based on the default ANSI terminal colours.

For example, midnight commander has a really specific classic look:

But in my Solarized theme, midnight commander looks like this:

The Solarized version feels like it could be disorienting if you’re very used to the “classic” look.

One solution Simon Tatham mentioned to this is using some palette customization ANSI codes (like the ones base16 uses that I talked about earlier) to change the color palette right before starting the program, for example remapping yellow to a brighter yellow before starting Nethack so that the yellow characters look better.

problem 9: commands disabling colours when writing to a pipe

If I run fd | less, I see something like this, with the colours disabled.

In general I find this useful – if I pipe a command to grep, I don’t want it to print out all those color escape codes, I just want the plain text. But what if you want to see the colours?

To see the colours, you can run unbuffer fd | less -r! I just learned about unbuffer recently and I think it’s really cool, unbuffer opens a tty for the command to write to so that it thinks it’s writing to a TTY. It also fixes issues with programs buffering their output when writing to a pipe, which is why it’s called unbuffer.

Here’s what the output of unbuffer fd | less -r looks like for me:

Also some commands (including fd) support a --color=always flag which will force them to always print out the colours.

problem 10: unwanted colour in ls and other commands

Some people mentioned that they don’t want ls to use colour at all, perhaps because ls uses blue, it’s hard to read on black, and maybe they don’t feel like customizing their terminal’s colourscheme to make the blue more readable or just don’t find the use of colour helpful.

Some possible solutions to this one:

  • you can run ls --color=never, which is probably easiest
  • you can also set LS_COLORS to customize the colours used by ls. I think some other programs other than ls support the LS_COLORS environment variable too.
  • also some programs support setting NO_COLOR=true (there’s a list here)

Here’s an example of running LS_COLORS="fi=0:di=0:ln=0:pi=0:so=0:bd=0:cd=0:or=0:ex=0" ls:

problem 11: the colours in vim

I used to have a lot of problems with configuring my colours in vim – I’d set up my terminal colours in a way that I thought was okay, and then I’d start vim and it would just be a disaster.

I think what was going on here is that today, there are two ways to set up a vim colorscheme in the terminal:

  1. using your ANSI terminal colours – you tell vim which ANSI colour number to use for the background, for functions, etc.
  2. using 24-bit hex colours – instead of ANSI terminal colours, the vim colorscheme can use hex codes like #faea99 directly

20 years ago when I started using vim, terminals with 24-bit hex color support were a lot less common (or maybe they didn’t exist at all), and vim certainly didn’t have support for using 24-bit colour in the terminal. From some quick searching through git, it looks like vim added support for 24-bit colour in 2016 – just 8 years ago!

So to get colours to work properly in vim before 2016, you needed to synchronize your terminal colorscheme and your vim colorscheme. Here’s what that looked like, the colorscheme needed to map the vim color classes like cterm05 to ANSI colour numbers.

But in 2024, the story is really different! Vim (and Neovim, which I use now) support 24-bit colours, and as of Neovim 0.10 (released in May 2024), the termguicolors setting (which tells Vim to use 24-bit hex colours for colorschemes) is turned on by default in any terminal with 24-bit color support.

So this “you need to synchronize your terminal colorscheme and your vim colorscheme” problem is not an issue anymore for me in 2024, since I don’t plan to use terminals without 24-bit color support in the future.

The biggest consequence for me of this whole thing is that I don’t need base16 to set colors 16-21 to weird stuff anymore to integrate with vim – I can just use a terminal theme and a vim theme, and as long as the two themes use similar colours (so it’s not jarring for me to switch between them) there’s no problem. I think I can just remove those parts from my base16 shell script and totally avoid the problem with ngrok and the weird orange background I talked about above.

some more problems I left out

I think there are a lot of issues around the intersection of multiple programs, like using some combination tmux/ssh/vim that I couldn’t figure out how to reproduce well enough to talk about them. Also I’m sure I missed a lot of other things too.

base16 has really worked for me

I’ve personally had a lot of success with using base16-shell with base16-vim – I just need to add a couple of lines to my fish config to set it up (+ a few .vimrc lines) and then I can move on and accept any remaining problems that that doesn’t solve.

I don’t think base16 is for everyone though, some limitations I’m aware of with base16 that might make it not work for you:

  • it comes with a limited set of builtin themes and you might not like any of them
  • the Solarized base16 theme (and maybe all of the themes?) sets the “bright” ANSI colours to be exactly the same as the normal colours, which might cause a problem if you’re relying on the “bright” colours to be different from the regular ones
  • it sets colours 16-21 in order to give the vim colorschemes from base16-vim access to more colours, which might not be relevant if you always use a terminal with 24-bit color support, and can cause problems like the ngrok issue above
  • also the way it sets colours 16-21 could be a problem in terminals that don’t have 256-color support, like the linux framebuffer terminal

Apparently there’s a community fork of base16 called tinted-theming, which I haven’t looked into much yet.

some other colorscheme tools

Just one so far but I’ll link more if people tell me about them:

okay, that was a lot

We talked about a lot in this post and while I think learning about all these details is kind of fun if I’m in the mood to do a deep dive, I find it SO FRUSTRATING to deal with it when I just want my colours to work! Being surprised by unreadable text and having to find a workaround is just not my idea of a good day.

Personally I’m a zero-configuration kind of person and it’s not that appealing to me to have to put together a lot of custom configuration just to make my colours in the terminal look acceptable. I’d much rather just have some reasonable defaults that I don’t have to change.

minimum contrast seems like an amazing feature

My one big takeaway from writing this was to turn on “minimum contrast” in my terminal, I think it’s going to fix most of the occasional accidental unreadable text issues I run into and I’m pretty excited about it.

2024-09-27T11:16:00+00:00 Fullscreen Open in Tab
Some Go web dev notes

I spent a lot of time in the past couple of weeks working on a website in Go that may or may not ever see the light of day, but I learned a couple of things along the way I wanted to write down. Here they are:

go 1.22 now has better routing

I’ve never felt motivated to learn any of the Go routing libraries (gorilla/mux, chi, etc), so I’ve been doing all my routing by hand, like this.

	// DELETE /records:
	case r.Method == "DELETE" && n == 1 && p[0] == "records":
		if !requireLogin(username, r.URL.Path, r, w) {
			return
		}
		deleteAllRecords(ctx, username, rs, w, r)
	// POST /records/<ID>
	case r.Method == "POST" && n == 2 && p[0] == "records" && len(p[1]) > 0:
		if !requireLogin(username, r.URL.Path, r, w) {
			return
		}
		updateRecord(ctx, username, p[1], rs, w, r)

But apparently as of Go 1.22, Go now has better support for routing in the standard library, so that code can be rewritten something like this:

	mux.HandleFunc("DELETE /records/", app.deleteAllRecords)
	mux.HandleFunc("POST /records/{record_id}", app.updateRecord)

Though it would also need a login middleware, so maybe something more like this, with a requireLogin middleware.

	mux.Handle("DELETE /records/", requireLogin(http.HandlerFunc(app.deleteAllRecords)))
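
Here’s a sketch of what that middleware pattern can look like – it only needs net/http, and the currentUser helper and the session cookie are made up for the example:

func requireLogin(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		username, err := currentUser(r)
		if err != nil || username == "" {
			http.Error(w, "login required", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// currentUser is a stand-in for however the app actually tracks sessions
func currentUser(r *http.Request) (string, error) {
	c, err := r.Cookie("session")
	if err != nil {
		return "", err
	}
	// a real app would validate the session value here
	return c.Value, nil
}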

a gotcha with the built-in router: redirects with trailing slashes

One annoying gotcha I ran into was: if I make a route for /records/, then a request for /records will be redirected to /records/.

I ran into an issue with this where sending a POST request to /records redirected to a GET request for /records/, which broke the POST request because it removed the request body. Thankfully Xe Iaso wrote a blog post about the exact same issue which made it easier to debug.

I think the solution to this is just to use API endpoints like POST /records instead of POST /records/, which seems like a more normal design anyway.

sqlc automatically generates code for my db queries

I got a little bit tired of writing so much boilerplate for my SQL queries, but I didn’t really feel like learning an ORM, because I know what SQL queries I want to write, and I didn’t feel like learning the ORM’s conventions for translating things into SQL queries.

But then I found sqlc, which will compile a query like this:


-- name: GetVariant :one
SELECT *
FROM variants
WHERE id = ?;

into Go code like this:

const getVariant = `-- name: GetVariant :one
SELECT id, created_at, updated_at, disabled, product_name, variant_name
FROM variants
WHERE id = ?
`

func (q *Queries) GetVariant(ctx context.Context, id int64) (Variant, error) {
	row := q.db.QueryRowContext(ctx, getVariant, id)
	var i Variant
	err := row.Scan(
		&i.ID,
		&i.CreatedAt,
		&i.UpdatedAt,
		&i.Disabled,
		&i.ProductName,
		&i.VariantName,
	)
	return i, err
}

What I like about this is that if I’m ever unsure about what Go code to write for a given SQL query, I can just write the query I want, read the generated function and it’ll tell me exactly what to do to call it. It feels much easier to me than trying to dig through the ORM’s documentation to figure out how to construct the SQL query I want.

Reading Brandur’s sqlc notes from 2024 also gave me some confidence that this is a workable path for my tiny programs. That post gives a really helpful example of how to conditionally update fields in a table using CASE statements (for example if you have a table with 20 columns and you only want to update 3 of them).

sqlite tips

Someone on Mastodon linked me to this post called Optimizing sqlite for servers. My projects are small and I’m not so concerned about performance, but my main takeaways were:

  • have a dedicated object for writing to the database, and run db.SetMaxOpenConns(1) on it. I learned the hard way that if I don’t do this then I’ll get SQLITE_BUSY errors from two threads trying to write to the db at the same time.
  • if I want to make reads faster, I could have 2 separate db objects, one for writing and one for reading

There are more tips in that post that seem useful (like “COUNT queries are slow” and “Use STRICT tables”), but I haven’t done those yet.

Also sometimes if I have two tables where I know I’ll never need to do a JOIN between them, I’ll just put them in separate databases so that I can connect to them independently.
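
Here’s roughly what the “dedicated writer + separate reader” tips above look like in code – a sketch using database/sql with the mattn/go-sqlite3 driver as an example (it also needs the log package and a blank import of the driver):

// one handle just for writes, capped at a single connection so that two
// goroutines can't write at the same time and cause SQLITE_BUSY
writeDB, err := sql.Open("sqlite3", "data.db")
if err != nil {
	log.Fatal(err)
}
writeDB.SetMaxOpenConns(1)

// a separate handle for reads, which can use more connections
readDB, err := sql.Open("sqlite3", "data.db")
if err != nil {
	log.Fatal(err)
}
readDB.SetMaxOpenConns(4)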

Go 1.19 introduced a way to set a GC memory limit

I run all of my Go projects in VMs with relatively little memory, like 256MB or 512MB. I ran into an issue where my application kept getting OOM killed and it was confusing – did I have a memory leak? What?

After some Googling, I realized that maybe I didn’t have a memory leak, maybe I just needed to reconfigure the garbage collector! It turns out that by default (according to A Guide to the Go Garbage Collector), Go’s garbage collector will let the application allocate memory up to 2x the current heap size.

Mess With DNS’s base heap size is around 170MB and the amount of memory free on the VM is around 160MB right now, so if its memory doubles, it’ll get OOM killed.

In Go 1.19, they added a way to tell Go “hey, if the application starts using this much memory, run a GC”. So I set the GC memory limit to 250MB and it seems to have resulted in the application getting OOM killed less often:

export GOMEMLIMIT=250MiB
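
The same limit can also be set from inside the program with the runtime/debug package, which is the same knob GOMEMLIMIT controls (also added in Go 1.19):

// equivalent to GOMEMLIMIT=250MiB: the limit is given in bytes
debug.SetMemoryLimit(250 << 20)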

some reasons I like making websites in Go

I’ve been making tiny websites (like the nginx playground) in Go on and off for the last 4 years or so and it’s really been working for me. I think I like it because:

  • there’s just 1 static binary, all I need to do to deploy it is copy the binary. If there are static files I can just embed them in the binary with embed.
  • there’s a built-in webserver that’s okay to use in production, so I don’t need to configure WSGI or whatever to get it to work. I can just put it behind Caddy or run it on fly.io or whatever.
  • Go’s toolchain is very easy to install, I can just do apt-get install golang-go or whatever and then a go build will build my project
  • it feels like there’s very little to remember to start sending HTTP responses – basically all there is are functions like Serve(w http.ResponseWriter, r *http.Request) which read the request and send a response. If I need to remember some detail of how exactly that’s accomplished, I just have to read the function!
  • also net/http is in the standard library, so you can start making websites without installing any libraries at all. I really appreciate this one.
  • Go is a pretty systems-y language, so if I need to run an ioctl or something that’s easy to do

In general everything about it feels like it makes projects easy to work on for 5 days, abandon for 2 years, and then get back into writing code without a lot of problems.

For contrast, I’ve tried to learn Rails a couple of times and I really want to love Rails – I’ve made a couple of toy websites in Rails and it’s always felt like a really magical experience. But ultimately when I come back to those projects I can’t remember how anything works and I just end up giving up. It feels easier to me to come back to my Go projects that are full of a lot of repetitive boilerplate, because at least I can read the code and figure out how it works.

things I haven’t figured out yet

some things I haven’t done much of yet in Go:

  • rendering HTML templates: usually my Go servers are just APIs and I make the frontend a single-page app with Vue. I’ve used html/template a lot in Hugo (which I’ve used for this blog for the last 8 years) but I’m still not sure how I feel about it.
  • I’ve never made a real login system, usually my servers don’t have users at all.
  • I’ve never tried to implement CSRF

In general I’m not sure how to implement security-sensitive features so I don’t start projects which need login/CSRF/etc. I imagine this is where a framework would help.

it’s cool to see the new features Go has been adding

Both of the Go features I mentioned in this post (GOMEMLIMIT and the routing) are new in the last couple of years and I didn’t notice when they came out. It makes me think I should pay closer attention to the release notes for new Go versions.

2024-09-12T15:09:12+00:00 Fullscreen Open in Tab
Reasons I still love the fish shell

I wrote about how much I love fish in this blog post from 2017 and, 7 years of using it every day later, I’ve found even more reasons to love it. So I thought I’d write a new post with both the old reasons I loved it and some new ones.

This came up today because I was trying to figure out why my terminal doesn’t break anymore when I cat a binary to my terminal, the answer was “fish fixes the terminal!”, and I just thought that was really nice.

1. no configuration

In 10 years of using fish I have never found a single thing I wanted to configure. It just works the way I want. My fish config file just has:

  • environment variables
  • aliases (alias ls eza, alias vim nvim, etc)
  • the occasional direnv hook fish | source to integrate a tool like direnv
  • a script I run to set up my terminal colours

I’ve been told that configuring things in fish is really easy if you ever do want to configure something though.

2. autosuggestions from my shell history

My absolute favourite thing about fish is that as I type, it’ll automatically suggest (in light grey) a matching command that I ran recently. I can press the right arrow key to accept the completion, or keep typing to ignore it.

Here’s what that looks like. In this example I just typed the “v” key and it guessed that I want to run the previous vim command again.

2.5 “smart” shell autosuggestions

One of my favourite subtle autocomplete features is how fish handles autocompleting commands that contain paths in them. For example, if I run:

$ ls blah.txt

that command will only be autocompleted in directories that contain blah.txt – it won’t show up in a different directory. (here’s a short comment about how it works)

As an example, if in this directory I type bash scripts/, it’ll only suggest history commands including files that actually exist in my blog’s scripts folder, and not the dozens of other irrelevant scripts/ commands I’ve run in other folders.

I didn’t understand exactly how this worked until last week, it just felt like fish was magically able to suggest the right commands. It still feels a little like magic and I love it.

3. pasting multiline commands

If I copy and paste multiple lines, bash will run them all, like this:

[bork@grapefruit linux-playground (main)]$ echo hi
hi
[bork@grapefruit linux-playground (main)]$ touch blah
[bork@grapefruit linux-playground (main)]$ echo hi
hi

This is a bit alarming – what if I didn’t actually want to run all those commands?

Fish will paste them all at a single prompt, so that I can press Enter if I actually want to run them. Much less scary.

bork@grapefruit ~/work/> echo hi

                         touch blah
                         echo hi

4. nice tab completion

If I run ls and press tab, it’ll display all the filenames in a nice grid. I can use either Tab, Shift+Tab, or the arrow keys to navigate the grid.

Also, I can tab complete from the middle of a filename – if the filename starts with a weird character (or if it’s just not very unique), I can type some characters from the middle and press tab.

Here’s what the tab completion looks like:

bork@grapefruit ~/work/> ls 
api/  blah.py     fly.toml   README.md
blah  Dockerfile  frontend/  test_websocket.sh

I honestly don’t complete things other than filenames very much so I can’t speak to that, but I’ve found the experience of tab completing filenames to be very good.

5. nice default prompt (including git integration)

Fish’s default prompt includes everything I want:

  • username
  • hostname
  • current folder
  • git integration
  • status of last command exit (if the last command failed)

Here’s a screenshot with a few different variations on the default prompt, including if the last command was interrupted (the SIGINT) or failed.

6. nice history defaults

In bash, the maximum history size is 500 by default, presumably because computers used to be slow and not have a lot of disk space. Also, by default, commands don’t get added to your history until you end your session. So if your computer crashes, you lose some history.

In fish:

  1. the default history size is 256,000 commands. I don’t see any reason I’d ever need more.
  2. if you open a new tab, everything you’ve ever run (including commands in open sessions) is immediately available to you
  3. in an existing session, the history search will only include commands from the current session, plus everything that was in history at the time that you started the shell

I’m not sure how clearly I’m explaining how fish’s history system works here, but it feels really good to me in practice. My impression is that the way it’s implemented is the commands are continually added to the history file, but fish only loads the history file once, on startup.

I’ll mention here that if you want to have a fancier history system in another shell it might be worth checking out atuin or fzf.

7. press up arrow to search history

I also like fish’s interface for searching history: for example if I want to edit my fish config file, I can just type:

$ config.fish

and then press the up arrow to go back the last command that included config.fish. That’ll complete to:

$ vim ~/.config/fish/config.fish

and I’m done. This isn’t so different from using Ctrl+R in bash to search your history but I think I like it a little better over all, maybe because Ctrl+R has some behaviours that I find confusing (for example you can end up accidentally editing your history which I don’t like).

8. the terminal doesn’t break

I used to run into issues with bash where I’d accidentally cat a binary to the terminal, and it would break the terminal.

Every time fish displays a prompt, it’ll try to fix up your terminal so that you don’t end up in weird situations like this. I think this is some of the code in fish to prevent broken terminals.

Some things that it does are:

  • turn on echo so that you can see the characters you type
  • make sure that newlines work properly so that you don’t get that weird staircase effect
  • reset your terminal background colour, etc

I don’t think I’ve run into any of these “my terminal is broken” issues in a very long time, and I actually didn’t even realize that this was because of fish – I thought that things somehow magically just got better, or maybe I wasn’t making as many mistakes. But I think it was mostly fish saving me from myself, and I really appreciate that.

9. Ctrl+S is disabled

Also related to terminals breaking: fish disables Ctrl+S (which freezes your terminal and then you need to remember to press Ctrl+Q to unfreeze it). It’s a feature that I’ve never wanted and I’m happy to not have it.

Apparently you can disable Ctrl+S in other shells with stty -ixon.

10. nice syntax highlighting

By default commands that don’t exist are highlighted in red, like this.

11. easier loops

I find the loop syntax in fish a lot easier to type than the bash syntax. It looks like this:

for i in *.yaml
  echo $i
end

Also it’ll add indentation in your loops which is nice.

12. easier multiline editing

Related to loops: you can edit multiline commands much more easily than in bash (just use the arrow keys to navigate the multiline command!). Also when you use the up arrow to get a multiline command from your history, it’ll show you the whole command the exact same way you typed it instead of squishing it all onto one line like bash does:

$ bash
$ for i in *.png
> do
> echo $i
> done
$ # press up arrow
$ for i in *.png; do echo $i; done

13. Ctrl+left arrow

This might just be me, but I really appreciate that fish has the Ctrl+left arrow / Ctrl+right arrow keyboard shortcut for moving between words when writing a command.

I’m honestly a bit confused about where this keyboard shortcut is coming from (the only documented keyboard shortcut for this I can find in fish is Alt+left arrow / Alt + right arrow which seems to do the same thing), but I’m pretty sure this is a fish shortcut.

A couple of notes about getting this shortcut to work / where it comes from:

  • one person said they needed to switch their terminal emulator from the “Linux console” keybindings to “Default (XFree 4)” to get it to work in fish
  • on Mac OS, Ctrl+left arrow switches workspaces by default, so I had to turn that off.
  • Also apparently Ubuntu configures libreadline in /etc/inputrc to make Ctrl+left/right arrow go back/forward a word, so it’ll work in bash on Ubuntu and maybe other Linux distros too. Here’s a stack overflow question talking about that

a downside: not everything has a fish integration

Sometimes tools don’t have instructions for integrating them with fish. That’s annoying, but:

  • I’ve found this has gotten better over the last 10 years as fish has gotten more popular. For example Python’s virtualenv has had a fish integration for a long time now.
  • If I need to run a POSIX shell command real quick, I can always just run bash or zsh
  • I’ve gotten much better over the years at translating simple commands to fish syntax when I need to

My biggest day-to-day annoyance is probably that for whatever reason I’m still not used to fish’s syntax for setting environment variables, I get confused about set vs set -x.

another downside: fish_add_path

fish has a function called fish_add_path that you can run to add a directory to your PATH like this:

fish_add_path /some/directory

I love the idea of it and I used to use it all the time, but I’ve stopped using it for two reasons:

  1. Sometimes fish_add_path will update the PATH for every session in the future (with a “universal variable”) and sometimes it will update the PATH just for the current session. It’s hard for me to tell which one it will do: in theory the docs explain this but I could not understand them.
  2. If you ever need to remove the directory from your PATH a few weeks or months later because maybe you made a mistake, that’s also kind of hard to do (there are instructions in the comments of this github issue though).

Instead I just update my PATH like this, similarly to how I’d do it in bash:

set PATH $PATH /some/directory/bin

on POSIX compatibility

When I started using fish, you couldn’t do things like cmd1 && cmd2 – it would complain “no, you need to run cmd1; and cmd2” instead.

It seems like over the years fish has started accepting a little more POSIX-style syntax than it used to, like:

  • cmd1 && cmd2
  • export a=b to set an environment variable (though this seems a bit limited, you can’t do export PATH=$PATH:/whatever so I think it’s probably better to learn set instead)

on fish as a default shell

Changing my default shell to fish is always a little annoying, I occasionally get myself into a situation where

  1. I install fish somewhere like maybe /home/bork/.nix-stuff/bin/fish
  2. I add the new fish location to /etc/shells as an allowed shell
  3. I change my shell with chsh
  4. at some point months/years later I reinstall fish in a different location for some reason and remove the old one
  5. oh no!!! I have no valid shell! I can’t open a new terminal tab anymore!

This has never been a major issue because I always have a terminal open somewhere where I can fix the problem and rescue myself, but it’s a bit alarming.

If you don’t want to use chsh to change your shell to fish (which is very reasonable, maybe I shouldn’t be doing that), the Arch wiki page has a couple of good suggestions – either configure your terminal emulator to run fish or add an exec fish to your .bashrc.

I’ve never really learned the scripting language

Other than occasionally writing a for loop interactively on the command line, I’ve never really learned the fish scripting language. I still do all of my shell scripting in bash.

I don’t think I’ve ever written a fish function or if statement.
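
(For context, an interactive loop in fish looks something like this:)

for i in (seq 1 3)
    echo "hello $i"
end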

I ran a highly unscientific poll on Mastodon asking people what shell they use interactively. The results were (of 2600 responses):

  • 46% bash
  • 49% zsh
  • 16% fish
  • 5% other

I think 16% for fish is pretty remarkable, since (as far as I know) there isn’t any system where fish is the default shell, and my sense is that it’s very common to just stick to whatever your system’s default shell is.

It feels like a big achievement for the fish project, even if maybe my Mastodon followers are more likely than the average shell user to use fish for some reason.

who might fish be right for?

Fish definitely isn’t for everyone. I think I like it because:

  1. I really dislike configuring my shell (and honestly my dev environment in general), I want things to “just work” with the default settings
  2. fish’s defaults feel good to me
  3. I don’t spend that much time logged into random servers using other shells so there’s not too much context switching
  4. I liked its features so much that I was willing to relearn how to do a few “basic” shell things, like using parentheses (seq 1 10) to run a command instead of backticks or using set instead of export

Maybe you’re also a person who would like fish! I hope a few more of the people who fish is for can find it, because I spend so much of my time in the terminal and it’s made that time much more pleasant.

2024-08-31T18:36:50-07:00
Thoughts on the Resiliency of Web Projects

I just did a massive spring cleaning of one of my servers, trying to clean up what has become quite the mess of clutter. For every website on the server, I either:

  • Documented what it is, who is using it, and what version of language and framework it uses
  • Archived it as static HTML flat files
  • Moved the source code from GitHub to a private git server
  • Deleted the files

It feels good to get rid of old code, and to turn previously dynamic sites (with all of the risk they come with) into plain HTML.

This is also making me seriously reconsider the value of spinning up any new projects. Several of these are now 10 years old, still churning along fine, but difficult to do any maintenance on because of versions and dependencies. For example:

  • indieauth.com - this has been on the chopping block for years, but I haven't managed to build a replacement yet, and it's still used by a lot of people
  • webmention.io - this is a pretty popular service, and I don't want to shut it down, but there's a lot of problems with how it's currently built and no easy way to make changes
  • switchboard.p3k.io - this is a public WebSub (PubSubHubbub) hub, like Superfeedr, and has weirdly gained a lot of popularity in the podcast feed space in the last few years

One that I'm particularly happy with, despite it being an ugly pile of PHP, is oauth.net. I inherited this site in 2012, and it hasn't needed any framework upgrades since it's just using PHP templates. My ham radio website w7apk.com is similarly a small amount of templated PHP, and it is low stress to maintain, and actually fun to quickly jot some notes down when I want. I like not having to go through the whole ceremony of setting up a dev environment, installing dependencies, upgrading things to the latest version, checking for backwards incompatible changes, git commit, deploy, etc. I can just sftp some changes up to the server and they're live.

Some questions for myself for the future, before starting a new project:

  • Could this actually just be a tag page on my website, like #100DaysOfMusic or #BikeTheEclipse?
  • If it really needs to be a new project, then:
  • Can I create it in PHP without using any frameworks or libraries? Plain PHP ages far better than pulling in any dependencies which inevitably stop working with a version 2-3 EOL cycles back, so every library brought in means signing up for annual maintenance of the whole project. Frameworks can save time in the short term, but have a huge cost in the long term.
  • Is it possible to avoid using a database? Databases aren't inherently bad, but using one does make the project slightly more fragile, since it requires plans for migrations and backups.
  • If a database is required, is it possible to create it in a way that does not result in ever-growing storage needs?
  • Is this going to store data or be a service that other people are going to use? If so, plan on a registration form so that I have a way to contact people eventually when I need to change it or shut it down.
  • If I've got this far with the questions, am I really ready to commit to supporting this code base for the next 10 years?

One project I've been committed to maintaining and doing regular (ok fine, "semi-regular") updates for is Meetable, the open source events website that I run on a few domains:

I started this project in October 2019, excited for all the IndieWebCamps we were going to run in 2020. Somehow that is already 5 years ago now. Well that didn't exactly pan out, but I did quickly pivot it to add a bunch of features that are helpful for virtual events, so it worked out ok in the end. We've continued to use it for posting IndieWeb events, and I also run an instance for two IETF working groups. I'd love to see more instances pop up, I've only encountered one or two other ones in the wild. I even spent a significant amount of time on the onboarding flow so that it's relatively easy to install and configure. I even added passkeys for the admin login so you don't need any external dependencies on auth providers. It's a cool project if I may say so myself.

Anyway, this is not a particularly well thought out blog post, I just wanted to get my thoughts down after spending all day combing through the filesystem of my web server and uncovering a lot of ancient history.

2024-08-19T08:15:28+00:00
Migrating Mess With DNS to use PowerDNS

About 3 years ago, I announced Mess With DNS in this blog post, a playground where you can learn how DNS works by messing around and creating records.

I wasn’t very careful with the DNS implementation though (to quote the release blog post: “following the DNS RFCs? not exactly”), and people started reporting problems that I eventually decided I wanted to fix.

the problems

Some of the problems people have reported were:

  • domain names with underscores weren’t allowed, even though they should be
  • If there was a CNAME record for a domain name, it allowed you to create other records for that domain name, even though it shouldn’t have
  • you could create 2 different CNAME records for the same domain name, which shouldn’t be allowed
  • no support for the SVCB or HTTPS record types, which seemed a little complex to implement
  • no support for upgrading from UDP to TCP for big responses

And there are certainly more issues that nobody got around to reporting, for example that if you added an NS record for a subdomain to delegate it, Mess With DNS wouldn’t handle the delegation properly.

the solution: PowerDNS

I wasn’t sure how to fix these problems for a long time – technically I could have started addressing them individually, but it felt like there were a million edge cases and I’d never get there.

But then one day I was chatting with someone else who was working on a DNS server and they said they were using PowerDNS: an open source DNS server with an HTTP API!

This seemed like an obvious solution to my problems – I could just swap out my own crappy DNS implementation for PowerDNS.

There were a couple of challenges I ran into when setting up PowerDNS that I’ll talk about here. I really don’t do a lot of web development and I think I’ve never built a website that depends on a relatively complex API before, so it was a bit of a learning experience.

challenge 1: getting every query made to the DNS server

One of the main things Mess With DNS does is give you a live view of every DNS query it receives for your subdomain, using a websocket. To make this work, it needs to intercept every DNS query before it gets sent to the PowerDNS server.

There were 2 options I could think of for how to intercept the DNS queries:

  1. dnstap: dnsdist (a DNS load balancer from the PowerDNS project) has support for logging all DNS queries it receives using dnstap, so I could put dnsdist in front of PowerDNS and then log queries that way
  2. Have my Go server listen on port 53 and proxy the queries myself

I originally implemented option #1, but for some reason there was a 1 second delay before every query got logged. I couldn’t figure out why, so I implemented my own very simple proxy instead.
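
A very stripped-down version of that kind of proxy, assuming the github.com/miekg/dns library and a PowerDNS listening on 127.0.0.1:5300 (both assumptions – the post doesn’t say how the real proxy works), could look like this:

package main

import (
	"log"

	"github.com/miekg/dns"
)

func main() {
	// log every query (this is where the websocket live view would hook in),
	// then forward it to PowerDNS and relay the answer back
	handler := dns.HandlerFunc(func(w dns.ResponseWriter, req *dns.Msg) {
		if len(req.Question) > 0 {
			log.Printf("query: %s %s", req.Question[0].Name, dns.TypeToString[req.Question[0].Qtype])
		}
		resp, err := dns.Exchange(req, "127.0.0.1:5300") // assumed PowerDNS address
		if err != nil {
			// if the upstream is unreachable, reply with SERVFAIL
			m := new(dns.Msg)
			m.SetRcode(req, dns.RcodeServerFailure)
			w.WriteMsg(m)
			return
		}
		w.WriteMsg(resp)
	})
	server := &dns.Server{Addr: ":53", Net: "udp", Handler: handler}
	log.Fatal(server.ListenAndServe())
}

A real version would also need a TCP listener (TCP support was one of the problems listed above), but this shows the shape of it.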

challenge 2: should the frontend have direct access to the PowerDNS API?

The frontend used to have a lot of DNS logic in it – it converted emoji domain names to ASCII using punycode, had a lookup table to convert numeric DNS query types (like 1) to their human-readable names (like A), did a little bit of validation, and more.

Originally I considered keeping this pattern and just giving the frontend (more or less) direct access to the PowerDNS API to create and delete records, but writing even more complex code in Javascript didn’t feel that appealing to me – I don’t really know how to write tests in Javascript and it seemed like it wouldn’t end well.

So I decided to take all of the DNS logic out of the frontend and write a new DNS API for managing records, shaped something like this:

  • GET /records
  • DELETE /records/<ID>
  • DELETE /records/ (delete all records for a user)
  • POST /records/ (create record)
  • POST /records/<ID> (update record)

This meant that I could actually write tests for my code, since the backend is in Go and I do know how to write tests in Go.
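
A sketch of how those routes could be wired up with Go 1.22+’s net/http pattern routing (the post doesn’t say which router Mess With DNS actually uses, and the handlers here are empty stubs where the PowerDNS calls and tests would go):

package main

import (
	"log"
	"net/http"
)

// stub handlers just so the sketch compiles; the real ones would talk to PowerDNS
func listRecords(w http.ResponseWriter, r *http.Request)      {}
func createRecord(w http.ResponseWriter, r *http.Request)     {}
func deleteAllRecords(w http.ResponseWriter, r *http.Request) {}
func updateRecord(w http.ResponseWriter, r *http.Request)     { _ = r.PathValue("id") }
func deleteRecord(w http.ResponseWriter, r *http.Request)     { _ = r.PathValue("id") }

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("GET /records", listRecords)
	mux.HandleFunc("POST /records/", createRecord)       // create record
	mux.HandleFunc("DELETE /records/", deleteAllRecords) // delete all records for a user
	mux.HandleFunc("POST /records/{id}", updateRecord)   // update record
	mux.HandleFunc("DELETE /records/{id}", deleteRecord)
	log.Fatal(http.ListenAndServe(":8080", mux))
}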

what I learned: it’s okay for an API to duplicate information

I had this idea that APIs shouldn’t return duplicate information – for example if I get a DNS record, it should only include a given piece of information once.

But I ran into a problem with that idea when displaying MX records: an MX record has 2 fields, “preference” and “mail server”. And I needed to display that information in 2 different ways on the frontend:

  1. In a form, where “Preference” and “Mail Server” are 2 different form fields (like 10 and mail.example.com)
  2. In a summary view, where I wanted to just show the record (10 mail.example.com)

This is kind of a small problem, but it came up in a few different places.

I talked to my friend Marco Rogers about this, and based on some advice from him I realized that I could return the same information in the API in 2 different ways! Then the frontend just has to display it. So I started just returning duplicate information in the API, something like this:

{
  values: {'Preference': 10, 'Server': 'mail.example.com'},
  content: '10 mail.example.com',
  ...
}

I ended up using this pattern in a couple of other places where I needed to display the same information in 2 different ways and it was SO much easier.

I think what I learned from this is that if I’m making an API that isn’t intended for external use (there are no users of this API other than the frontend!), I can tailor it very specifically to the frontend’s needs and that’s okay.

challenge 3: what’s a record’s ID?

In Mess With DNS (and I think in most DNS user interfaces!), you create, update, and delete records.

But that’s not how the PowerDNS API works. In PowerDNS, you create a zone, which is made of record sets. Records don’t have any ID in the API at all.

I ended up solving this by generating a fake ID for each record, made up of:

  • its name
  • its type
  • and its content (base64-encoded)

For example one record’s ID is brooch225.messwithdns.com.|NS|bnMxLm1lc3N3aXRoZG5zLmNvbS4=

Then I can search through the zone and find the appropriate record to update it.
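
My guess at how an ID like that gets assembled (the exact scheme isn’t spelled out in the post, but this reproduces the example above):

package main

import (
	"encoding/base64"
	"fmt"
)

// build a fake record ID from the record's name, type, and base64-encoded content
func recordID(name, recordType, content string) string {
	return name + "|" + recordType + "|" + base64.StdEncoding.EncodeToString([]byte(content))
}

func main() {
	fmt.Println(recordID("brooch225.messwithdns.com.", "NS", "ns1.messwithdns.com."))
	// brooch225.messwithdns.com.|NS|bnMxLm1lc3N3aXRoZG5zLmNvbS4=
}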

This means that if you update a record then its ID will change, which isn’t usually what I want in an ID, but that seems fine.

challenge 4: making clear error messages

I think the error messages that the PowerDNS API returns aren’t really intended to be shown to end users, for example:

  • Name 'new\032site.island358.messwithdns.com.' contains unsupported characters (this error encodes the space as \032, which is a bit disorienting if you don’t know that the space character is 32 in ASCII)
  • RRset test.pear5.messwithdns.com. IN CNAME: Conflicts with pre-existing RRset (this talks about RRsets, which aren’t a concept that the Mess With DNS UI has at all)
  • Record orange.beryl5.messwithdns.com./A '1.2.3.4$': Parsing record content (try 'pdnsutil check-zone'): unable to parse IP address, strange character: $ (mentions “pdnsutil”, a utility which Mess With DNS’s users don’t have access to in this context)

I ended up handling this in two ways:

  1. Do some initial basic validation of values that users enter (like IP addresses), so I can just return errors like Invalid IPv4 address: "1.2.3.4$" (there’s a sketch of this after the list)
  2. If that goes well, send the request to PowerDNS and if we get an error back, then do some hacky translation of those messages to make them clearer.
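
A sketch of the kind of up-front check in step 1, using Go’s standard library (not necessarily the exact validation Mess With DNS does):

package main

import (
	"fmt"
	"net"
)

// validateIPv4 returns a user-friendly error if s isn't a plain IPv4 address
func validateIPv4(s string) error {
	ip := net.ParseIP(s)
	if ip == nil || ip.To4() == nil {
		return fmt.Errorf(`Invalid IPv4 address: "%s"`, s)
	}
	return nil
}

func main() {
	fmt.Println(validateIPv4("1.2.3.4$")) // Invalid IPv4 address: "1.2.3.4$"
	fmt.Println(validateIPv4("1.2.3.4"))  // <nil>
}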

Sometimes users will still get errors from PowerDNS directly, but I added some logging of all the errors that users see, so hopefully I can review them and add extra translations if there are other common errors that come up.

I think what I learned from this is that if I’m building a user-facing application on top of an API, I need to be pretty thoughtful about how I resurface those errors to users.

challenge 5: setting up SQLite

Previously Mess With DNS was using a Postgres database. This was problematic because I only gave the Postgres machine 256MB of RAM, which meant that the database got OOM killed almost every single day. I never really worked out exactly why it got OOM killed every day, but that’s how it was. I spent some time trying to tune Postgres’ memory usage by setting the max connections / work-mem / maintenance-work-mem and it helped a bit but didn’t solve the problem.

So for this refactor I decided to use SQLite instead, because the website doesn’t really get that much traffic. There are some choices involved with using SQLite, and I decided to:

  1. Run db.SetMaxOpenConns(1) to make sure that we only open 1 connection to the database at a time, to prevent SQLITE_BUSY errors from two threads trying to access the database at the same time (just setting WAL mode didn’t work; there’s a sketch of this setup after the list)
  2. Use separate databases for each of the 3 tables (users, records, and requests) to reduce contention. This maybe isn’t really necessary, but there was no reason I needed the tables to be in the same database so I figured I’d set up separate databases to be safe.
  3. Use the cgo-free modernc.org/sqlite, which translates SQLite’s source code to Go. I might switch to a more “normal” sqlite implementation instead at some point and use cgo though. I think the main reason I prefer to avoid cgo is that cgo has landed me with difficult-to-debug errors in the past.
  4. use WAL mode
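
A minimal sketch of points 1, 3 and 4, assuming the modernc.org/sqlite driver (the file name is made up and this isn’t the actual Mess With DNS code):

package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // cgo-free driver; registers itself under the name "sqlite"
)

func openDB(path string) (*sql.DB, error) {
	db, err := sql.Open("sqlite", path)
	if err != nil {
		return nil, err
	}
	// only one connection at a time, to avoid SQLITE_BUSY errors from concurrent writers
	db.SetMaxOpenConns(1)
	// turn on WAL mode
	if _, err := db.Exec("PRAGMA journal_mode=WAL;"); err != nil {
		return nil, err
	}
	return db, nil
}

func main() {
	db, err := openDB("records.sqlite3") // hypothetical file name
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}

Point 2 (separate databases per table) would then just mean calling something like openDB once per database file.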

I still haven’t set up backups, though I don’t think my Postgres database had backups either. I think I’m unlikely to use litestream for backups – Mess With DNS is very far from a critical application, and I think daily backups that I could recover from in case of a disaster are more than good enough.

challenge 6: upgrading Vue & managing forms

This has nothing to do with PowerDNS but I decided to upgrade Vue.js from version 2 to 3 as part of this refresh. The main problem with that is that the form validation library I was using (FormKit) completely changed its API between Vue 2 and Vue 3, so I decided to just stop using it instead of learning the new API.

I ended up switching to some form validation tools that are built into the browser like required and oninvalid (here’s the code). I think it could use some improvement – I still don’t understand forms very well.

challenge 7: managing state in the frontend

This also has nothing to do with PowerDNS, but when modifying the frontend I realized that my state management in the frontend was a mess – every place where I made an API request that modified the state, I had to remember to add a “refresh records” call afterwards, and I wasn’t always consistent about it.

With some more advice from Marco, I ended up implementing a single global state management store which stores all the state for the application, and which lets me create/update/delete records.

Then my components can just call store.createRecord(record), and the store will automatically resynchronize all of the state as needed.
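
I can only guess at the store’s exact shape, but a minimal version of that pattern using Vue 3’s reactive() might look something like this (the /records URLs come from the new API above; everything else here is assumed):

// store.js – a single global store that owns all of the record state
import { reactive } from 'vue'

export const store = reactive({
  records: [],
  async refreshRecords() {
    const resp = await fetch('/records')
    this.records = await resp.json()
  },
  async createRecord(record) {
    await fetch('/records/', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(record),
    })
    // resynchronize all state after every mutation so components never go stale
    await this.refreshRecords()
  },
})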

challenge 8: sequencing the project

This project ended up having several steps because I reworked the whole integration between the frontend and the backend. I ended up splitting it into a few different phases:

  1. Upgrade Vue from v2 to v3
  2. Make the state management store
  3. Implement a different backend API, move a lot of DNS logic out of the frontend, and add tests for the backend
  4. Integrate PowerDNS

I made sure that the website was (more or less) 100% working and then deployed it in between phases, so that the amount of changes I was managing at a time stayed somewhat under control.

the new website is up now!

I released the upgraded website a few days ago and it seems to work! The PowerDNS API has been great to work on top of, and I’m relieved that there’s a whole class of problems that I now don’t have to think about at all, other than potentially trying to make the error messages from PowerDNS a little clearer. Using PowerDNS has fixed a lot of the DNS issues that folks have reported in the last few years and it feels great.

If you run into problems with the new Mess With DNS I’d love to hear about them here.

2024-08-06T08:38:35+00:00
Go structs are copied on assignment (and other things about Go I'd missed)

I’ve been writing Go pretty casually for years – the backends for all of my playgrounds (nginx, dns, memory, more DNS) are written in Go, but many of those projects are just a few hundred lines and I don’t come back to those codebases much.

I thought I more or less understood the basics of the language, but this week I’ve been writing a lot more Go than usual while working on some upgrades to Mess with DNS, and ran into a bug that revealed I was missing a very basic concept!

Then I posted about this on Mastodon and someone linked me to this very cool site (and book) called 100 Go Mistakes and How To Avoid Them by Teiva Harsanyi. It just came out in 2022 so it’s relatively new.

I decided to read through the site to see what else I was missing, and found a couple of other misconceptions I had about Go. I’ll talk about some of the mistakes that jumped out to me the most, but really the whole 100 Go Mistakes site is great and I’d recommend reading it.

Here’s the initial mistake that started me on this journey:

mistake 1: not understanding that structs are copied on assignment

Let’s say we have a struct:

type Thing struct {
    Name string
}

and this code:

thing := Thing{"record"}
other_thing := thing
other_thing.Name = "banana"
fmt.Println(thing)

This prints “record” and not “banana” (play.go.dev link), because thing is copied when you assign it to other_thing.

the problem this caused me: ranges

The bug I spent 2 hours of my life debugging last week was effectively this code (play.go.dev link):

type Thing struct {
  Name string
}
func findThing(things []Thing, name string) *Thing {
  for _, thing := range things {
    if thing.Name == name {
      return &thing
    }
  }
  return nil
}

func main() {
  things := []Thing{Thing{"record"}, Thing{"banana"}}
  thing := findThing(things, "record")
  thing.Name = "gramaphone"
  fmt.Println(things)
}

This prints out [{record} {banana}] – because findThing returned a copy, we didn’t change the name in the original array.

This mistake is #30 in 100 Go Mistakes.

I fixed the bug by changing it to something like this (play.go.dev link), which returns a reference to the item in the array we’re looking for instead of a copy.

func findThing(things []Thing, name string) *Thing {
  for i := range things {
    if things[i].Name == name {
      return &things[i]
    }
  }
  return nil
}

why didn’t I realize this?

When I learned that I was mistaken about how assignment worked in Go I was really taken aback, like – it’s such a basic fact about how the language works! If I was wrong about that then what ELSE am I wrong about in Go????

My best guess for what happened is:

  1. I’ve heard for my whole life that when you define a function, you need to think about whether its arguments are passed by reference or by value
  2. So I’d thought about this in Go, and I knew that if you pass a struct as a value to a function, it gets copied – if you want to pass a reference then you have to pass a pointer
  3. But somehow it never occurred to me that you need to think about the same thing for assignments, perhaps because in most of the other languages I use (Python, JS, Java) I think everything is a reference anyway. Except in Rust, where you do have values that you make copies of, but I think most of the time I had to call .clone() explicitly (though apparently structs will be automatically copied on assignment if the struct implements the Copy trait).
  4. Also obviously I just don’t write that much Go so I guess it’s never come up.

mistake 2: side effects appending slices (#25)

When you subset a slice with x[2:3], the original slice and the sub-slice share the same backing array, so if you append to the new slice, it can unintentionally change the old slice:

For example, this code prints [1 2 3 555 5] (code on play.go.dev)

x := []int{1, 2, 3, 4, 5}
y := x[2:3]
y = append(y, 555)
fmt.Println(x)

I don’t think this has ever actually happened to me, but it’s alarming and I’m very happy to know about it.

Apparently you can avoid this problem by changing y := x[2:3] to y := x[2:3:3], which restricts the new slice’s capacity so that appending to it will re-allocate a new slice. Here’s some code on play.go.dev that does that.
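
That fix looks roughly like this (my reconstruction of what the linked snippet does):

package main

import "fmt"

func main() {
	x := []int{1, 2, 3, 4, 5}
	y := x[2:3:3] // len 1, cap 1, so append can't reuse x's backing array
	y = append(y, 555)
	fmt.Println(x) // prints [1 2 3 4 5] – x is unchanged this time
	fmt.Println(y) // prints [3 555]
}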

mistake 3: not understanding the different types of method receivers (#42)

This one isn’t a “mistake” exactly, but it’s been a source of confusion for me and it’s pretty simple so I’m glad to have it cleared up.

In Go you can declare methods in 2 different ways:

  1. func (t Thing) Function() (a “value receiver”)
  2. func (t *Thing) Function() (a “pointer receiver”)

My understanding now is that basically:

  • If you want the method to mutate the struct t, you need a pointer receiver.
  • If you want to make sure the method doesn’t mutate the struct t, use a value receiver (there’s a tiny example below).
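
A tiny example of the difference (not from the book, just an illustration):

package main

import "fmt"

type Counter struct{ n int }

// value receiver: the method gets a copy of the struct, so this increment is lost
func (c Counter) IncByValue() { c.n++ }

// pointer receiver: the method can mutate the caller's struct
func (c *Counter) IncByPointer() { c.n++ }

func main() {
	c := Counter{}
	c.IncByValue()
	c.IncByPointer()
	fmt.Println(c.n) // prints 1 – only the pointer-receiver call stuck
}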

Explanation #42 has a bunch of other interesting details though. There’s definitely still something I’m missing about value vs pointer receivers (I got a compile error related to them a couple of times in the last week that I still don’t understand), but hopefully I’ll run into that error again soon and I can figure it out.

more interesting things I noticed

Some more notes from 100 Go Mistakes:

Also there are some things that have tripped me up in the past, like:

this “100 common mistakes” format is great

I really appreciated this “100 common mistakes” format – it made it really easy for me to skim through the mistakes and very quickly mentally classify them into:

  1. yep, I know that
  2. not interested in that one right now
  3. WOW WAIT I DID NOT KNOW THAT, THAT IS VERY USEFUL!!!!

It looks like “100 Common Mistakes” is a series of books from Manning and they also have “100 Java Mistakes” and an upcoming “100 SQL Server Mistakes”.

Also I enjoyed what I’ve read of Effective Python by Brett Slatkin, which has a similar “here are a bunch of short Python style tips” structure where you can quickly skim it and take what’s useful to you. There’s also Effective C++, Effective Java, and probably more.

some other Go resources

other resources I’ve appreciated: