Split keyboards are simply better, but there just aren't enough options at retail. Prepare to enter the world of DIY Keyboards
dammit. turns out the cold i got for Christmas is actually COVID. almost made it six years without getting it, too
Today's links
- The Post-American Internet: My speech from Hamburg's Chaos Communications Congress.
- Hey look at this: Delights to delectate.
- Object permanence: Error code 451; Public email address; Mansplaining Lolita; NSA backdoor in Juniper Networks; Don't bug out; Nurses whose shitty boss is a shitty app.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
The Post-American Internet (permalink)
On December 28th, I delivered a speech entitled "A post-American, enshittification-resistant internet" for 39C3, the 39th Chaos Communication Congress in Hamburg, Germany. This is the transcript of that speech.
Many of you know that I'm an activist with the Electronic Frontier Foundation (EFF). I'm about to start my 25th year there. I know that I'm hardly unbiased, but as far as I'm concerned, there's no group anywhere on Earth that does the work of defending our digital rights better than EFF.
I'm an activist there, and for the past quarter-century, I've been embroiled in something I call "The War on General Purpose Computing."
If you were at 28C3, 14 years ago, you may have heard me give a talk with that title. Those are the trenches I've been in since my very first day on the job at EFF, when I flew to Los Angeles to crash the inaugural meeting of something called the "Broadcast Protection Discussion Group," an unholy alliance of tech companies, media companies, broadcasters and cable operators.
They'd gathered because this lavishly corrupt American congressman, Billy Tauzin, had promised them a new regulation: a rule banning the manufacture and sale of digital computers unless they had been backdoored to specifications set by that group, specifications for technical measures to block computers from performing operations that were dispreferred by these companies' shareholders.
That rule was called "the Broadcast Flag," and it actually passed through the American telecoms regulator, the Federal Communications Commission. So we sued the FCC in federal court, and overturned the rule.
We won that skirmish, but friends, I have bad news, news that will not surprise you. Despite wins like that one, we have been losing the war on the general purpose computer for the past 25 years.
Which is why I've come to Hamburg today. Because, after decades of throwing myself against a locked door, the door that leads to a new, good internet, one that delivers both the technological self-determination of the old, good internet, and the ease of use of Web 2.0 that let our normie friends join the party, that door has been unlocked.
Today, it is open a crack. It's open a crack!
And here's the weirdest part: Donald Trump is the guy who's unlocked that door.
Oh, he didn't do it on purpose! But, thanks to Trump's incontinent belligerence, we are on the cusp of a "Post-American Internet," a new digital nervous system for the 21st century. An internet that we can build without worrying about America's demands and priorities.
Now, don't get me wrong, I'm not happy about Trump or his policies. But as my friend Joey DeVilla likes to say, "When life gives you SARS, you make sarsaparilla." The only thing worse than experiencing all the terror that Trump has unleashed on America and the world would be going through all that and not salvaging anything out of the wreckage.
That's what I want to talk to you about today: the post-American Internet we can wrest from Trump's chaos.
A post-American Internet that is possible because Trump has mobilized new coalition partners to join the fight on our side. In politics, coalitions are everything. Any time you see a group of people suddenly succeeding at a goal they have been failing to achieve, it's a sure bet that they've found some coalition partners, new allies who don't want all the same thing as the original forces, but want enough of the same things to fight on their side.
That's where Trump came from: a coalition of billionaires, white nationalists, Christian bigots, authoritarians, conspiratorialists, imperialists, and self-described "libertarians" who've got such a scorching case of low-tax brain worms that they'd vote for Mussolini if he'd promise to lower their taxes by a nickel.
And what's got me so excited is that we've got a new coalition in the War on General Purpose Computers: a coalition that includes the digital rights activists who've been on the front lines for decades, but also people who want to turn America's Big Tech trillions into billions for their own economy, and national security hawks who are quite rightly worried about digital sovereignty.
My thesis here is that this is an unstoppable coalition. Which is good news! For the first time in decades, victory is in our grasp.
#
So let me explain: 14 years ago, I stood in front of this group and explained the "War on General Purpose Computing." That was my snappy name for this fight, but the boring name that they use in legislatures for it is "anticircumvention."
Under anticircumvention law, it's a crime to alter the functioning of a digital product or service, unless the manufacturer approves of your modification, and ā crucially ā this is true whether or not your modification violates any other law.
Anticircumvention law originates in the USA: Section 1201 of the Digital Millennium Copyright Act of 1998 establishes a felony, punishable by a five-year prison sentence and a $500,000 fine for a first offense, for bypassing an "access control" for a copyrighted work.
So practically speaking, if you design a device or service with even the flimsiest of systems to prevent modification of its application code or firmware, it's a felony, a jailable felony, to modify that code or firmware. It's also a felony to disclose information about how to bypass that access control, which means that pen-testers who even describe how they access a device or system face criminal liability.
Under anticircumvention law any manufacturer can trivially turn their product into a no-go zone, criminalizing the act of investigating its defects, criminalizing the act of reporting on its defects, and criminalizing the act of remediating its defects.
This is a law that Jay Freeman rightly calls "Felony Contempt of Business Model." Anticircumvention became the law of the land in 1998 when Bill Clinton signed the DMCA. But before you start snickering at those stupid Americans, know this: every other country in the world has passed a law just like this in the years since. Here in the EU, it came in through Article 6 of the 2001 EU Copyright Directive.
Now, it makes a certain twisted sense for the US to enact a law like this, after all, they are the world's tech powerhouse, home to the biggest, most powerful tech companies in the world. By making it illegal to modify digital products without the manufacturer's permission, America enhances the rent-extracting power of the most valuable companies on US stock exchanges.
But why would Europe pass a law like this? Europe is a massive tech importer. By extending legal protection to tech companies that want to steal their users' data and money, the EU was facilitating a one-way transfer of value from Europe to America. So why would Europe do this?
Well, let me tell you about the circumstances under which other countries came to enact their anticircumvention laws and maybe you'll spot a pattern that will answer this question.
Australia got its anticircumvention law through the US-Australia Free Trade Agreement, which obliges Australia to enact anticircumvention law.
Canada and Mexico got it through the US-Mexico-Canada Free Trade Agreement, which obliges Canada and Mexico to enact anticircumvention laws.
Andean nations like Chile got their anticircumvention laws through bilateral US free trade agreements, which oblige them to enact anticircumvention laws.
And the Central American nations got their anticircumvention laws through CAFTA (the Central American Free Trade Agreement with the USA), which obliges them to enact anticircumvention laws, too.
I assume you've spotted the pattern by now: the US trade representative has forced every one of its trading partners to adopt anticircumvention law, to facilitate the extraction of their own people's data and money by American firms. But of course, that only raises a further question: Why would every other country in the world agree to let America steal its own people's money and data, and block its domestic tech sector from making interoperable products that would prevent this theft?
Here's an anecdote that unravels this riddle: many years ago, in the years before Viktor Orban rose to power, I used to guest-lecture at a summer PhD program in political science at Budapest's Central European University. And one summer, after I'd lectured to my students about anticircumvention law, one of them approached me.
They had been the information minister of a Central American nation during the CAFTA negotiations, and one day, they'd received a phone-call from their trade negotiator, calling from the CAFTA bargaining table. The negotiator said, "You know how you told me not to give the Americans anticircumvention under any circumstances? Well, they're saying that they won't take our coffee unless we give them anticircumvention. And I'm sorry, but we just can't lose the US coffee market. Our economy would collapse. So we're going to give them anticircumvention. I'm really sorry."
That's it. That's why every government in the world allowed US Big Tech companies to declare open season on their people's private data and ready cash.
The alternative was tariffs. Well, I don't know if you've heard, but we've got tariffs now!
I mean, if someone threatens to burn your house down unless you follow their orders, and then they burn your house down anyway, you don't have to keep following their orders. So… Happy Liberation Day?
So far, every country in the world has had one of two responses to the Trump tariffs. The first one is: "Give Trump everything he asks for (except Greenland) and hope he stops being mad at you." This has been an absolute failure. Give Trump an inch, he'll take a mile. He'll take fucking Greenland. Capitulation is a failure.
But so is the other tactic: retaliatory tariffs. That's what we've done in Canada (like all the best Americans, I'm Canadian). Our top move has been to levy tariffs on the stuff we import from America, making the things we buy more expensive. That's a weird way to punish America! It's like punching yourself in the face as hard as you can, and hoping the downstairs neighbor says "Ouch!"
And it's indiscriminate. Why whack some poor farmer from a state that begins and ends with a vowel with tariffs on his soybeans? That guy never did anything bad to Canada.
But there's a third possible response to tariffs, one that's just sitting there, begging to be tried: what about repealing anticircumvention law?
If you're a technologist or an investor based in a country that's repealed its anticircumvention law, you can go into business making disenshittificatory products that plug into America's defective tech exports, allowing the people who own and use those products to use them in ways that are good for them, even if those uses make the company's shareholders mad.
Think of John Deere tractors: when a farmer's John Deere tractor breaks down, they are expected to repair it, swapping in new parts and assemblies to replace whatever's malfing. But the tractor won't recognize that new part and will not start working again, not until the farmer spends a couple hundred bucks on a service callout from an official John Deere tractor repair rep, whose only job is to type an unlock code into the tractor's console, to initialize the part and pair it with the tractor's main computing unit.
Modding a tractor to bypass this activation step violates anticircumvention law, meaning farmers all over the world are stuck with this ripoff garbage, because their own government will lock up anyone who makes a tractor mod that disables the parts-pairing check in this American product.
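John Deere doesn't publish its parts-pairing protocol, so here's a purely hypothetical sketch, in Python, of how this kind of gate works: the firmware refuses to enable a replacement part until it sees an unlock code derived from a secret that only the manufacturer holds. Every name and value below is invented for illustration.

```python
# Hypothetical sketch of a parts-pairing gate. NOT John Deere's actual
# protocol: the names, secret and code format are all invented.
import hashlib
import hmac

VENDOR_SECRET = b"held-by-the-manufacturer-not-the-farmer"  # invented

def expected_code(tractor_id: str, part_serial: str) -> str:
    """The code an authorized rep types into the console after a repair."""
    msg = f"{tractor_id}:{part_serial}".encode()
    return hmac.new(VENDOR_SECRET, msg, hashlib.sha256).hexdigest()[:8]

def enable_part(tractor_id: str, part_serial: str, unlock_code: str) -> bool:
    """The firmware's gate: no valid code, no working tractor."""
    return hmac.compare_digest(unlock_code, expected_code(tractor_id, part_serial))
```

The check itself is trivial, and patching it out of the firmware is equally trivial for a competent engineer; what keeps it in place is the law, not the technology.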
So what if Canada repealed Bill C-11, the Copyright Modernization Act of 2012 (that's our anticircumvention law)? Well, then a company like Honeybee, which makes tractor front-ends and attachments, could hire some smart University of Waterloo computer science grads, and put 'em to work jailbreaking the John Deere tractor's firmware, and offer it to everyone in the world. They could sell the crack to anyone with an internet connection and a payment method, including that poor American farmer whose soybeans we're currently tariffing.
It's hard to convey how much money is on the table here. Take just one example: Apple's App Store. Apple forces all app vendors into using its payment processor, and charges them a 30 percent commission on every euro spent inside of an app.
30 percent! That's such a profitable business that Apple makes $100 billion per year on it. If the EU repeals Article 6 of the Copyright Directive, some smart geeks in Finland could reverse-engineer Apple's bootloaders and make a hardware dongle that jailbreaks phones so that they can use alternative app stores, and sell the dongle, along with the infrastructure to operate an app store, to anyone in the world who wants to go into business competing with Apple for users and app vendors.
Those competitors could offer a 90% discount to every crafter on Etsy, every performer on Patreon, every online news outlet, every game dev, every media store. Offer them a 90% discount on payments, and still make $10b/year.
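The arithmetic behind that claim is easy to check; here's the back-of-the-envelope version, using the round numbers above (the payment volume is inferred from those numbers, not an official Apple figure):

```python
# Back-of-the-envelope math using the round figures from above.
apple_cut = 0.30                    # Apple's commission on in-app payments
apple_revenue = 100e9               # ~$100B/year in App Store commissions
volume = apple_revenue / apple_cut  # implies ~$333B/year in payments

rival_cut = apple_cut * 0.10        # a 90% discount on 30% is 3%
rival_revenue = volume * rival_cut
print(f"${rival_revenue / 1e9:.0f}B/year")  # ~$10B/year at the discounted rate
```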
Maybe Finland will never see another Nokia, but Nokia's a tough business to be in. You've got to make hardware, which is expensive and risky. But if the EU legalizes jailbreaking, then Apple would have to incur all the expense and risk of making and fielding hardware, while those Finnish geeks could cream off the $100b Apple sucks out of the global economy in an act of disgusting, rip-off rent-seeking.
As Jeff Bezos said to the publishers: "Your margin is my opportunity." With these guys, it's always "disruption for thee, but not for me." When they do it to us, that's progress. When we do it to them, it's piracy, and every pirate wants to be an admiral.
Well, screw that. Move fast and break Tim Cook's things. Move fast and break kings!
It's funny: I spent 25 years getting my ass kicked by the US Trade Representative (in my defense, it wasn't a fair fight). I developed a kind of grudging admiration for the skill with which the USTR bound the entire world to a system of trade that conferred parochial advantages to America and its tech firms, giving them free rein to loot the world's data and economies. So it's been pretty amazing to watch Trump swiftly and decisively dismantle the global system of trade and destroy the case for the world continuing to arrange its affairs to protect the interests of America's capital class.
I mean, it's not a path I would have chosen. I'd have preferred no Trump at all to this breakthrough. But I'll take this massive own-goal if Trump insists. I mean, I'm not saying I've become an accelerationist, but at this point, I'm not exactly not an accelerationist.
Now, you might have heard that governments around the world have been trying to get Apple to open its App Store, and they've totally failed at this. When the EU hit Apple with an enforcement order under the Digital Markets Act, Apple responded by offering to allow third party app stores, but it would only allow those stores to sell apps that Apple had approved of.
And while those stores could use their own payment processors, Apple would charge them so much in junk fees that it would be more expensive to process a payment using your own system. And if Apple believed that a user's phone had been outside of the EU for 21 days, they'd remotely delete all that user's data and apps.
When the EU explained that this would not satisfy the regulation, Apple threatened to pull out of the EU. Then, once everyone had finished laughing, Apple filed more than a dozen bullshit objections to the order hoping to tie this up in court for a decade, the way Google and Meta did for the GDPR.
It's not clear that the EU can force Apple to write code that opens up the iOS platform for alternative app stores and payment methods, but there is one thing that the EU can absolutely do with 100% reliability, any time they want: the EU can decide not to let Apple use Europe's courts to shut down European companies that defend European merchants, performers, makers, news outlets, game devs and creative workers, from Apple's ripoff, by jailbreaking phones.
All the EU has to do is repeal Article 6 of the Copyright Directive, and, in so doing, strip Apple of the privilege of mobilizing the European justice system to shore up Apple's hundred billion dollar annual tax on the world's digital economy. The EU company that figures out how to reliably jailbreak iPhones will have customers all over the world, including in the USA, where Apple doesn't just use its veto over which apps you can run on your phone to suck 30% out of every dollar you spend, but where Apple also uses its control over the platform to strip out apps that protect Apple's customers from Trump's fascist takeover.
Back in October, Apple kicked the "ICE Block" app out of the App Store. That's an app that warns the user if there's a snatch squad of masked ICE thugs nearby looking to grab you off the street and send you to an offshore gulag. Apple internally classified ICE kidnappers as a "protected class," and then declared that ICE Block infringed on the rights of these poor, beset ICE goons.
And speaking of ICE thugs, there are plenty of qualified technologists who have fled the US this year, one step ahead of an ICE platoon looking to put them and their children into a camp. Those skilled hackers are now living all over the world, joined by investors who'd like to back a business whose success will be determined by how awesome its products are, and not how many $TRUMP coins they buy.
Apple's margin could be their opportunity.
Legalizing jailbreaking and raiding the highest-margin lines of business of the most profitable companies in America is a much better response to the Trump tariffs than retaliatory tariffs.
For one thing, this is a targeted response: go after Big Tech's margins and you're mounting a frontal assault on the businesses whose CEOs each paid a million bucks to sit behind Trump on the inauguration dais.
Raiding Big Tech's margins is not an attack on the American people, nor on the small American businesses that are ripped off by Big Tech. It's a raid on the companies that screw everyday Americans and everyone else in the world. It's a way to make everyone in the world richer at the expense of these ripoff companies.
It beats the shit out of blowing hundreds of billions of dollars building AI data-centers in the hopes that someday, a sector that's lost nearly a trillion dollars shipping defective chatbots will figure out a use for GPUs that doesn't start hemorrhaging money the minute they plug them in.
So here are our new allies in the war on general-purpose computation: businesses and technologists who want to make billions of dollars raiding Big Tech's margins, and policymakers who want their country to be the disenshittification nation: the country that doesn't merely protect its people's money and privacy by buying jailbreaks from other countries, but rather, the country that makes billions of dollars selling that privacy- and pocketbook-defending tech to the rest of the world.
That's a powerful alliance, but those are not the only allies Trump has pushed into our camp. There's another powerful ally waiting in the wings.
Remember last June, when the International Criminal Court in the Hague issued an arrest warrant for the génocidaire Benjamin Netanyahu, and Trump denounced the ICC, and then the ICC lost its Outlook access, its email archives, its working files, its address books, its calendars?
Microsoft says they didn't brick the ICC, that it's a coincidence. But when it comes to a he-said/Clippy-said between the justices of the ICC and the convicted monopolists of Microsoft, I know who I believe.
This is exactly the kind of infrastructural risk that we were warned of if we let Chinese companies like Huawei supply our critical telecoms equipment. Virtually every government ministry, every major corporation, every small business and every household in the world has locked itself into a US-based, cloud-based service.
The handful of US Big Tech companies that supply the world's administrative tools are all vulnerable to pressure from the Trump admin, and that means that Trump can brick an entire nation.
The attack on the ICC was an act of cyberwarfare, like the Russian hackers who shut down Ukrainian power-generation facilities, except that Microsoft doesn't have to hack Outlook to brick the ICC: they own Outlook.
Under the US CLOUD Act of 2018, the US government can compel any US-based company to disclose any of its users' data, including data belonging to foreign governments, and this is true no matter where that data is stored. Last July, Anton Carniaux, Director of Public and Legal Affairs at Microsoft France, told a French government inquiry that he "couldn't guarantee" that Microsoft wouldn't hand sensitive French data over to the US government, even if that data was stored in a European data-center.
And under the CLOUD Act, the US government can slap gag orders on the companies that it forces to cough up that data, so there'd be no way to even know if this happened, or whether it's already happened.
It doesn't stop at administrative tools, either: remember back in 2022, when Putin's thugs looted millions of dollars' worth of John Deere tractors from Ukraine and those tractors showed up in Chechnya? The John Deere company pushed an over-the-air kill signal to those tractors and bricked 'em.
John Deere is every bit as politically vulnerable to the Trump admin as Microsoft is, and they can brick most of the tractors in the world, and the tractors they can't brick are probably made by Massey Ferguson, the number-two company in the ag-tech cartel, which is also an American company and just as vulnerable to political attacks from the US government.
Now, none of this will be news to global leaders. Even before Trump and Microsoft bricked the ICC, they were trying to figure out a path to "digital sovereignty." But the Trump administration's outrageous conduct and rhetoric over the past 11 months has turned "digital sovereignty" from a nice-to-have into a must-have.
So finally, we're seeing some movement, like "Eurostack," a project to clone the functionality of US Big Tech silos in free/open source software, and to build EU-based data-centers that this code can run on.
But Eurostack is heading for a crisis. It's great to build open, locally hosted, auditable, trustworthy services that replicate the useful features of Big Tech, but you also need to build the adversarial interoperability tools that allow for the mass export of millions of documents, along with their sensitive data-structures and edit histories.
We need scrapers and headless browsers to accomplish the adversarial interoperability that will guarantee ongoing connectivity to institutions that are still hosted on US cloud-based services, because US companies are not going to facilitate the mass exodus of international customers from their platform.
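To give a flavor of what those tools look like: a headless browser driven by Playwright (a real automation library) can walk through a cloud service and pull out rendered documents. Below is a minimal, hypothetical migration-scraper sketch; the URL is a placeholder, and a real exporter would need service-specific logic for logins, edit histories and internal data-structures.

```python
# Minimal migration-scraper sketch using Playwright:
#   pip install playwright && playwright install chromium
# The URL below is a placeholder, not a real service's interface.
from playwright.sync_api import sync_playwright

def export_documents(doc_urls: list[str]) -> dict[str, str]:
    exported = {}
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        for url in doc_urls:
            page.goto(url)
            # Save the rendered text; real migrations would also need
            # attachments, metadata and revision history.
            exported[url] = page.inner_text("body")
        browser.close()
    return exported

if __name__ == "__main__":
    print(export_documents(["https://example.com/doc/1"]))
```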
Just think of how Apple responded to the relatively minor demand to open up the iOS App Store, and now imagine the thermonuclear foot-dragging, tantrum-throwing and malicious compliance they'll come up with when faced with the departure of a plurality of the businesses and governments in a 27-nation bloc of 500,000,000 affluent consumers.
Any serious attempt at digital sovereignty needs migration tools that work without the cooperation of the Big Tech companies. Otherwise, this is like building housing for East Germans and locating it in West Berlin. It doesn't matter how great the housing is, your intended audience is going to really struggle to move in unless you tear down the wall.
Step one of tearing down that wall is killing anticircumvention law, so that we can run virtual devices that can be scripted, break bootloaders to swap out firmware and generally seize the means of computation.
So this is the third bloc in the disenshittification army: not just digital rights hippies like me; not just entrepreneurs and economic development wonks rubbing their hands together at the thought of transforming American trillions into European billions; but also the national security hawks who are 100% justified in their extreme concern about their country's reliance on American platforms that have been shown to be totally unreliable.
This is how we'll get a post-American internet: with an unstoppable coalition of activists, entrepreneurs and natsec hawks.
This has been a long time coming. Since the post-war settlement, the world has treated the US as a neutral platform, a trustworthy and stable maintainer of critical systems for global interchange, what the political scientists Henry Farrell and Abraham Newman call the "Underground Empire." But over the past 15 years, the US has systematically shattered global trust in its institutions, a process that only accelerated under Trump.
Take transoceanic fiber-optic cables: the way the routes were planned, the majority of these cables make landfall on the coasts of the USA, where the interconnections are handled. There's a good case for this hub-and-spoke network topology, especially compared to establishing direct links between every country. That's an O(N^2) problem: directly linking each of the planet Earth's 205 countries to every other country would require 20,910 fiber links.
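That figure is just the handshake formula, n(n-1)/2, which you can check in two lines; the same combinatorial blowup reappears below with pairwise currency markets:

```python
from math import comb

print(comb(205, 2))  # 20910 direct fiber links for 205 countries
print(comb(200, 2))  # 19900 pairwise markets for ~200 currencies
```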
But putting all the world's telecoms eggs in America's basket only works if the US doesn't take advantage of its centrality. Many people worried about what the US could do with the head-ends of the world's global fiber infra, but it wasn't until Mark Klein's 2006 revelations about the NSA's nation-scale fiber-optic taps in AT&T's network, and Ed Snowden's 2013 documents showing the global scale of this wiretapping, that the world had to confront the undeniable reality that the US could not be trusted to serve as the world's fiber hub.
It's not just fiber. The world does business in dollars. Most countries maintain dollar accounts at the Fed in New York as their major source of foreign reserves. But in 2005, American vulture capitalists bought up billions of dollars' worth of Argentinian government bonds after the sovereign nation of Argentina had declared bankruptcy.
They convinced a judge in New York to turn over the government of Argentina's US assets to them to make good on loans that these debt collectors had not issued, but had bought up at pennies on the dollar. At that moment, every government in the world had to confront the reality that they could not trust the US Federal Reserve with their foreign reserves. But what else could they use?
Without a clear answer, dollar dominance continued, but then, under Biden, Putin-aligned oligarchs and Russian firms lost access to the SWIFT system for dollar clearing. Dollar clearing is the system whereby goods, like oil, are priced in dollars, so that buyers only need to find someone who will trade their own currency for dollars, which they can then swap for any commodity in the world.
Again, there's a sound case for dollar clearing: it's just not practical to establish deep, liquid pairwise trading markets for all of the world's nearly 200 currencies; it's another O(N^2) problem.
But it only works if the dollar is a neutral platform. Once the dollar becomes an instrument of US foreign policy (whether or not you agree with that policy), it's no longer a neutral platform, and the world goes looking for an alternative.
No one knows what that alternative's going to be, just as no one knows what configuration the world's fiber links will end up taking. There's kilometers of fiber being stretched across the ocean floor, and countries are trying out some pretty improbable gambits as dollar alternatives, like Ethiopia revaluing its sovereign debt in Chinese renminbi. Without a clear alternative to America's enshittified platforms, the post-American century is off to a rocky start.
But there's one post-American system that's easy to imagine: the project to rip out all the cloud-connected, backdoored, untrustworthy black boxes that power our institutions, our medical implants, our vehicles and our tractors, and replace them with collectively maintained, open, free, trustworthy, auditable code.
This project is the only one that benefits from economies of scale, rather than being paralyzed by exponential crises of scale. That's because any open, free tool adopted by any public institution (like the Eurostack services) can be audited, localized, pen-tested, debugged and improved by institutions in every other country.
It's a commons, more like a science than a technology, in that it is universal and international and collaborative. We don't have dueling western and Chinese principles of structural engineering. Rather, we have universal principles for making sure buildings don't fall down, adapted to local circumstances.
We wouldn't tolerate secrecy in the calculations used to keep our buildings upright, and we shouldn't tolerate opacity in the software that keeps our tractors, hearing aids, ventilators, pacemakers, trains, games consoles, phones, CCTVs, door locks, and government ministries working.
The thing is, software is not an asset, it's a liability. The capabilities that running software delivers (automation, production, analysis and administration), those are assets. But the software itself? That's a liability. Brittle, fragile, forever breaking down as the software upstream of it, downstream of it, and adjacent to it is updated or swapped out, revealing defects and deficiencies in systems that may have performed well for years.
Shifting software to commons-based production is a way to reduce the liability that software imposes on its makers and users, balancing out that liability among many players.
Now, obviously, tech bosses are totally clueless when it comes to this. They really do think that software is an asset. That's why they're so fucking horny to have chatbots shit out software at superhuman speeds. That's why they think it's good that they've got a chatbot that "produces a thousand times more code than a human programmer."
Producing code that isn't designed for legibility and maintainability, that is optimized, rather, for speed of production, is a way to incur tech debt at scale.
This is a neat encapsulation of the whole AI story: the chatbot can't do your job, but an AI salesman can convince your boss to fire you and replace you with a chatbot that can't do your job.
Your boss is an easy mark for that chatbot hustler because your boss hates you. In their secret hearts, bosses understand that if they stopped coming to work, the business would run along just fine, but if the workers stopped showing up, the company would grind to a halt.
Bosses like to tell themselves that they're in the driver's seat, but really, they fear that they're strapped into the back seat playing with a Fisher Price steering wheel. For them, AI is a way to wire the toy steering wheel directly into the company's drive-train. It's the realization of the fantasy of a company without workers.
When I was walking the picket line in Hollywood during the writers' strike, a writer told me that you prompt an AI the same way a studio boss gives shitty notes to a writers' room: "Make me ET, but make it about a dog, and give it a love interest, and a car-chase in the third act."
Say that to a writers' room and they will call you a fucking idiot suit and tell you, "Why don't you go back to your office and make a spreadsheet, you nitwit. The grownups here are writing a movie."
Meanwhile, if you give that prompt to a chatbot, it will cheerfully shit out a script exactly to spec. The fact that this script will be terrible and unusable is less important than the prospect of a working life in which no one calls you a fucking idiot suit.
AI dangles the promise of a writers' room without writers, a movie without actors, a hospital without nurses, a coding shop without coders.
When Mark Zuckerberg went on a podcast and announced that the average American had three friends, but wanted 15 friends, and that he could solve this by giving us chatbots instead of friends, we all dunked on him as an out-of-touch billionaire Martian who didn't understand the nature of friendship.
But the reality is that for Zuck, your friends are a problem. Your friends' interactions with you determine how much time you spend on his platforms, and thus how many revenue-generating ads he can show you.
Your friends stubbornly refuse to organize their relationship with you in a way that maximizes the return to his shareholders. So Zuck is over there in Menlo Park, furiously fantasizing about replacing your friends with chatbots, because that way, he can finally realize the dream of a social media service without any socializing.
Rich, powerful people are, at root, solipsists. The only way to amass a billion dollars is to inflict misery and privation on whole populations. The only way to look yourself in the mirror after you've done that, is to convince yourself that those people don't matter, that, in some important sense, they aren't real.
Think of Elon Musk calling everyone who disagrees with him an "NPC," or all those "Effective Altruists," who claimed the moral high ground by claiming to care about 53 trillion imaginary artificial humans who will come into existence in 10,000 years at the expense of extending moral consideration to people alive today.
Or think of how Trump fired all the US government scientists, and then announced the "Genesis" program, declaring that the US would begin generating annual "moonshot"-scale breakthroughs, with a chatbot. It's science without scientists.
Chatbots can't really do science, but from Trump's perspective, they're still better than scientists, because a chatbot won't ever tell him not to stare at an eclipse, or not to inject bleach. A chatbot won't ever tell him that trans people exist, or that the climate emergency is real.
Powerful people are suckers for AI, because AI fuels the fantasy of a world without people: just a boss and a computer, and no ego-shattering confrontations with people who know how to do things telling you "no."
AI is a way to produce tech debt at scale, to replace skilled writers with defective spicy autocomplete systems, to lose money at a rate not seen in living memory.
Now, compare that with the project of building a post-American internet: a project to reduce tech debt, to unlock America's monopoly trillions and divide them among the world's entrepreneurs (for whom they represent untold profits) and the world's technology users (for whom they represent untold savings), all while building resiliency and sovereignty.
Now, some of you are probably feeling pretty cynical about this right now. After all, your political leaders have demonstrated decades of ineffectual and incompetent deference to the US, and an inability to act, even when the need was dire. If your leaders couldn't act decisively on the climate emergency, what hope do we have of them taking this moment seriously?
But crises precipitate change. Remember when another mad emperor, Vladimir Putin, invaded Ukraine, and Europe experienced a dire energy shortage? In three short years, the continent's solar uptake skyrocketed. The EU went from being 15 years behind in its energy transition to ten years ahead of schedule.
Because when you're shivering in the dark, a lot of fights you didn't think were worth it are suddenly existential battles you can't afford to lose. Sure, no one wants to argue with a tedious neighbor who has an aesthetic temper tantrum at the thought of a solar panel hanging from their neighbor's balcony.
But when it's winter, and there's no Russian gas, and you're shivering in the dark, then that person can take their aesthetic objection to balcony solar, fold it until it's all corners, and shove it right up their ass.
Besides, we don't need Europe to lead the charge on a post-American internet by repealing anticircumvention. Any country could do it! And the country that gets there first gets to reap the profits from supplying jailbreaking tools to the rest of the world, it gets to be the Disenshittification Nation, and everyone else in the world gets to buy those tools and defend themselves from US tech companies' monetary and privacy plunder.
Just one country has to break the consensus, and the case for every country doing so is the strongest it's ever been. It used to be that countries that depended on USAID had to worry about losing food, medical and cash supports if they pissed off America. But Trump killed USAID, so now that's a dead letter.
Meanwhile, America's status as the planet's most voracious consumer has been gutted by decades of anti-worker, pro-billionaire policies. Today, the US is in the grips of its third consecutive "K-shaped" recovery: an economic rally where the rich get richer and everyone else gets poorer. For a generation, America papered over that growing inequality with easy credit, with everyday Americans funding their consumption with credit cards and second and third mortgages.
So long as they could all afford to keep buying, other countries had to care about America as an export market. But a generation of extraction has left the bottom 90% of Americans struggling to buy groceries and other necessities, carrying crushing debt from skyrocketing shelter, education and medical expenses that they can't hope to pay down, thanks to 50 years of wage stagnation.
The Trump administration has sided firmly with debt collectors, price gougers, and rent extractors. Trump neutered enforcement against rent-fixing platforms like RealPage, restarted debt payments for eight million student borrowers, and killed a plan to make life-saving drugs a little cheaper, leaving Americans to continue to pay the highest drug prices in the world.
Every dollar spent servicing a loan is a dollar that can't go to consumption. And as more and more Americans slip into poverty, the US is gutting programs that spend money on the public's behalf, like SNAP, the food stamps program that helps an ever-larger slice of the American public stave off hunger.
America is chasing the "world without people" dream, where working people have nothing, spend nothing, and turn every penny over to rentiers who promptly flush that money into the stock market, shitcoins, or gambling sites. But I repeat myself.
Even the US military, long a sacrosanct institution, is being kneecapped to enrich rent-seekers. Congress just killed a military "right to repair" law. So now, US soldiers stationed abroad will have to continue the Pentagon's proud tradition of shipping materiel, from generators to jeeps, back to America to be fixed by their manufacturers at a 10,000% markup, because the Pentagon routinely signs maintenance contracts that prohibit it from teaching a Marine how to fix an engine.
The post-American world is really coming on fast. As we repeal our anticircumvention laws, we don't have to care what America thinks, we don't have to care about their tariffs, because they're already whacking us with tariffs; and because the only people left in the US who can afford to buy things are rich people, who just don't buy enough stuff. There's only so many Lambos and Sub-Zeros even the most guillotineable plute can usefully own.
But what if European firms want to go on taking advantage of anticircumvention laws? Well, there's good news there, too. "Good news," because the EU firms that rely on anticircumvention are engaged in the sleaziest, most disgusting frauds imaginable.
Anticircumvention law is the reason that Volkswagen could get away with Dieselgate. By imposing legal liability on reverse-engineers who might have discovered this lethal crime, Article 6 of the Copyright Directive created a chilling effect, and thousands of Europeans died every year.
Today, Germany's storied automakers are carrying on the tradition of Dieselgate, sabotaging their cars to extract rent from drivers. From Mercedes, which rents you the accelerator pedal in your luxury car, only unlocking the full acceleration curve of your engine if you buy a monthly subscription; to BMW, which rents you the automated system that automatically dims your high-beams if there's oncoming traffic.
Legalize jailbreaking and any mechanic in Europe could unlock those subscription features for one price, and not share any of that money with BMW and Mercedes.
Then there's Medtronic, a company that pretends it is Irish. Medtronic is the world's largest med-tech company, having purchased all their competitors, and then undertaken the largest "tax-inversion" in history, selling themselves to a tiny Irish firm, in order to magick their profits into a state of untaxable grace, floating in the Irish Sea.
Medtronic supplies the world's most widely used ventilators, and it booby-traps them the same way John Deere booby-traps its tractors. After a hospital technician puts a new part in a Medtronic ventilator, the ventilator's central computing unit refuses to recognize the part until it completes a cryptographic handshake, proving that an authorized Medtronic technician was paid hundreds of euros to certify a repair that the hospital's own technician probably performed.
It's just a way to suck hundreds of euros out of hospitals every time a ventilator breaks. This would be bad enough, but during the covid lockdowns, when every ventilator was desperately needed, and when the planes stopped flying, there was no way for a Medtronic tech to come and bless the hospital technicians' repairs. This was lethal. It killed people.
There's one more European company that relies on anticircumvention that I want to discuss here, because they're old friends of CCC: that's the Polish train company Newag. Newag sabotages its own locomotives, booby-trapping them so that if they sense they have been taken to a rival's service yard, the train bricks itself. When the train operator calls Newag about this mysterious problem, the company "helpfully" remotes into the locomotive's computers to perform "diagnostics," which is just sending an unbricking command to the train, a service for which they charge 20,000 euros.
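Dragon Sector's published findings describe, among other tricks, geofence checks against the GPS coordinates of rival workshops. Here's an illustrative sketch of that kind of booby-trap; the coordinates and structure are invented, not Newag's actual code:

```python
# Illustrative geofence booby-trap (invented coordinates and structure,
# not Newag's actual code): park inside a rival's yard, and the
# controller sets a lock flag that keeps the train from running.
RIVAL_YARDS = [
    # (lat_min, lon_min, lat_max, lon_max) bounding boxes, invented
    (52.40, 16.92, 52.41, 16.93),
]

def should_brick(lat: float, lon: float) -> bool:
    return any(
        lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
        for (lat_min, lon_min, lat_max, lon_max) in RIVAL_YARDS
    )
```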
Last year, Polish hackers from the security research firm Dragon Sector presented on their research into this disgusting racket in this very hall, and now, they're being sued by Newag under anticircumvention law, for making absolutely true disclosures about Newag's deliberately defective products.
So these are the European stakeholders for anticircumvention law: the Dieselgate killers, the car companies who want to rent you your high-beams and accelerator, the med-tech giant that bricked all the ventilators during the pandemic, and the company that tied Poland to the train-tracks.
I relish the opportunity to fight these bastards in Brussels, as they show up and cry "Won't someone think of the train saboteurs?"
The enshittification of technology (the decay of the platforms and systems we rely on) has many causes: the collapse of competition, regulatory capture, the smashing of tech workers' power. But most of all, enshittification is the result of anticircumvention law's ban on interoperability.
By blocking interop, by declaring war on the general-purpose computer, our policy-makers created an enshittogenic environment that rewarded companies for being shitty, and ushered in the enshittocene, in which everything is turning to shit.
Let's call time on enshittification. Let's seize the means of computation. Let's build the drop-in, free, open, auditable alternatives to the services and firmware we rely on.
Let's end the era of silos. I mean, isn't it fucking weird how you have to care which network someone is using if you want to talk to them? Instead of just deciding who you want to talk to?
The fact that you have to figure out whether the discussion you're trying to join is on Twitter or Bluesky, Mastodon or Instagram: that is just the most Prodigy/AOL/Compuserve-ass way of running a digital world. I mean, 1990 called and they want their walled gardens back.
Powerful allies are joining our side in the War on General Purpose Computation. It's not just people like us, who've been fighting for this whole goddamned century, but also countries that want to convert American tech's hoarded trillions into fuel for a single-use rocket that boosts their own tech sector into a stable orbit.
It's national security hawks who are worried about Trump bricking their ministries or their tractors, and who are also worried, with just cause, about Xi Jinping bricking all their solar inverters and batteries. Because, after all, the post-American internet is also a post-Chinese internet!
Nothing should be designed to be field-updatable without the user's permission. Nothing critical should be a black box.
Like I said at the start of this talk, I have been doing this work for 24 years at the Electronic Frontier Foundation, throwing myself at a door that was double-locked and deadbolted, and now that door is open a crack and goddammit, I am hopeful.
Not optimistic. Fuck optimism! Optimism is the idea that things will get better no matter what we do. I know that what we do matters. Hope is the belief that if we can improve things, even in small ways, we can ascend the gradient toward the world we want, and attain higher vantage points from which new courses of action, invisible to us here at our lower elevation, will be revealed.
Hope is a discipline. It requires that you not give in to despair. So I'm here to tell you: don't despair.
All this decade, all over the world, countries have taken up arms against concentrated corporate power. We've had big, muscular antitrust attacks on big corporations in the US (under Trump I and Biden); in Canada; in the UK; in the EU and member states like Germany, France and Spain; in Australia; in Japan and South Korea and Singapore; in Brazil; and in China.
This is a near-miraculous turn of affairs. All over the world, governments are declaring war on monopolies, the source of billionaires' wealth and power.
Even the most forceful wind is invisible. We can only see it by its effects. What we're seeing here is that whenever a politician bent on curbing corporate power unfurls a sail, no matter where in the world that politician is, that sail fills with wind and propels the policy in ways that haven't been seen in generations.
The long becalming of the fight over corporate power has ended, and a fierce, unstoppable wind is blowing. It's not just blowing in Europe, or in Canada, or in South Korea, Japan, China, Australia or Brazil. It's blowing in America, too. Never forget that as screwed up and terrifying as things are in America, the country has experienced, and continues to experience, a tsunami of antitrust bills and enforcement actions at the local, state and federal level.
And never forget that the post-American internet will be good for Americans. Because, in a K-shaped, bifurcated, unequal America, the trillions that American companies loot from the world don't trickle down to Americans. The average American holds a portfolio of assets that rounds to zero, and that includes stock in US tech companies.
The average American isn't a shareholder in Big Tech, the average American is a victim of Big Tech. Liberating the world from US Big Tech is also liberating America from US Big Tech.
That's been EFF's mission for 35 years. It's been my mission at EFF for 25 years. If you want to get involved in this fight ā and I hope you do ā it can be your mission, too. You can join EFF, and you can join groups in your own country, like Netzpolitik here in Germany, or the Irish Council for Civil Liberties, or La Quadrature du Net in France, or the Open Rights Group in the UK, or EF Finland, or ISOC Bulgaria, XNet, DFRI, Quintessenz, Bits of Freedom, Openmedia, FSFE, or any of dozens of organizations around the world.
The door is open a crack, the wind is blowing, the post-American internet is upon us: a new, good internet that delivers all the technological self-determination of the old, good internet, and the ease of use of Web 2.0 so that our normie friends can use it, too.
And I can't wait for all of us to get to hang out there. It's gonna be great.
Hey look at this (permalink)

- The Enshittifinancial Crisis https://www.wheresyoured.at/the-enshittifinancial-crisis/
- Austrian Supreme Court: Meta must give users full access to their data https://noyb.eu/en/austrian-supreme-court-meta-must-give-users-full-access-their-data
- the myth of merit in the managerial class https://backofmind.substack.com/p/the-myth-of-merit-in-the-managerial
- ECI, Ethical Computing Initiative https://aol.codeberg.page/eci/
- BMW Patents Proprietary Screws That Only Dealerships Can Remove https://carbuzz.com/bmw-roundel-logo-screw-patent/
Object permanence (permalink)
#20yrsago Online sf mag Infinite Matrix goes out with a bang: new Gibson, Rucker, Kelly https://web.archive.org/web/20060101120510/https://www.infinitematrix.net/
#20yrsago Wil McCarthy's wonderful "Hacking Matter" as a free download https://web.archive.org/web/20060103052051/http://wilmccarthy.com/hm.htm
#15yrsago Papa Sangre: binaural video game with no video https://web.archive.org/web/20101224170833/http://www.papasangre.com/
#15yrsago DDoS versus human rights organizations https://cyber.harvard.edu/publications/2010/DDoS_Independent_Media_Human_Rights
#15yrsago Why I have a public email address https://www.theguardian.com/technology/2010/dec/21/keeping-email-address-secret-spambots
#15yrsago How the FCC failed the nation on Net Neutrality https://web.archive.org/web/20101224075655/https://www.salon.com/technology/network_neutrality/index.html?story=/tech/dan_gillmor/2010/12/21/fcc_network_neutrality
#15yrsago Bankster robberies: Bank of America and friends wrongfully foreclose on customers, steal all their belongings https://www.nytimes.com/2010/12/22/business/22lockout.html?_r=1&hp
#10yrsago India's deadly exam-rigging scandal: murder, corruption, suicide and scapegoats https://www.theguardian.com/world/2015/dec/17/the-mystery-of-indias-deadly-exam-scam
#10yrsago Copyright infringement "gang" raided by UK cops: 3 harmless middle-aged karaoke fans https://arstechnica.com/tech-policy/2015/12/uk-police-busts-karaoke-gang-for-sharing-songs-that-arent-commercially-available/
#10yrsago IETF approves HTTP error code 451 for Internet censorship https://web.archive.org/web/20151222155906/https://motherboard.vice.com/read/the-http-451-error-code-for-censorship-is-now-an-internet-standard
#10yrsago Billionaire Sheldon Adelson secretly bought newspaper, ordered all hands to investigate judges he hated https://web.archive.org/web/20151220081546/http://www.reviewjournal.com/news/las-vegas/judge-adelson-lawsuit-subject-unusual-scrutiny-amid-review-journal-sale
#10yrsago Tax havens hold $7.6 trillion; 8% of world's total wealth https://web.archive.org/web/20160103142942/https://www.nybooks.com/articles/2016/01/14/parking-the-big-money/
#10yrsago Mansplaining Lolita https://lithub.com/men-explain-lolita-to-me/
#10yrsago Lifelock admits it lied in its ads (again), agrees to $100M fine https://web.archive.org/web/20151218000258/https://consumerist.com/2015/12/17/identity-theft-company-lifelock-once-again-failed-to-actually-keep-identities-protected-must-pay-100m/
#10yrsago Uninsured driver plows through gamer's living-room wall and creams him mid-Fallout 4 https://www.gofundme.com/f/helpforbenzo
#10yrsago Juniper Networks backdoor confirmed, password revealed, NSA suspected https://www.wired.com/2015/12/juniper-networks-hidden-backdoors-show-the-risk-of-government-backdoors/
#10yrsago A survivalist on why you shouldn't bug out https://waldenlabs.com/10-reasons-not-to-bug-out/
#1yrago Nurses whose shitty boss is a shitty app https://pluralistic.net/2024/12/18/loose-flapping-ends/#luigi-has-a-point
#1yrago Proud to be a blockhead https://pluralistic.net/2024/12/21/blockheads-r-us/#vocational-awe
Upcoming appearances (permalink)

- Denver: Enshittification at Tattered Cover Colfax, Jan 22 https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937
- Colorado Springs: Guest of Honor at COSine, Jan 23-25 https://www.firstfridayfandom.org/cosine/
- Ottawa: Enshittification at Perfect Books, Jan 28 https://www.instagram.com/p/DS2nGiHiNUh/
- Toronto: Enshittification and the Age of Extraction with Tim Wu, Jan 30 https://nowtoronto.com/event/cory-doctorow-and-tim-wu-enshittification-and-extraction/
Recent appearances (permalink)
- The Enshittification Life Cycle with David Dayen (Organized Money) https://www.buzzsprout.com/2412334/episodes/18399894
- Enshittification on The Last Show With David Cooper https://www.iheart.com/podcast/256-the-last-show-with-david-c-31145360/episode/cory-doctorow-enshttification-december-16-2025-313385767
- (Digital) Elbows Up (OCADU) https://vimeo.com/1146281673
- How to Stop "Ensh*ttification" Before It Kills the Internet (Capitalisn't) https://www.youtube.com/watch?v=34gkIvYiHxU
- Enshittification on The Daily Show https://www.youtube.com/watch?v=d2e-c9SF5nE
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
-
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
-
A Little Brother short story about DIY insulin PLANNING

This work (excluding any serialized fiction) is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
out of context graphs from Vitalik Buterin's latest blog post
Chris Person speaking my language with this morning's bonus Aftermath blog about split keyboards
the keyboards that you buy at the store simply will not suffice, are not specific and perverted enough to accommodate my aberrant typing
planning baking for holiday cookie tins, asking questions like: if i make lemon curd x days in advance, i need to make y extra cups of it to still end up with ½ cup for cookies on the 24th. thank goodness i took multivariable calculus
Today's links
- A perfect distillation of the social uselessness of finance: A final thought for the yule.
- Hey look at this: Delights to delectate.
- Object permanence: Droidflake; Spy Skymall; Malthus is a dope; Happy Public Domain Day 2025.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
A perfect distillation of the social uselessness of finance (permalink)
I'm about to sign off for the year. Actually, I was ready to do it yesterday, but then I happened upon a brief piece of writing that was so perfect that I decided I'd do one more edition of Pluralistic for 2025.
The piece in question is John Lanchester's "For Every Winner A Loser," in the London Review of Books, in which Lanchester reviews two books about the finance sector: Gary Stevenson's The Trading Game and Rob Copeland's The Fund:
https://www.lrb.co.uk/the-paper/v46/n17/john-lanchester/for-every-winner-a-loser
It's a long and fascinating piece and it's certainly left me wanting to read both books, but that's not what convinced me to do one more newsletter before going on break. Rather, it was a brief passage in the essay's preamble, one that perfectly captures the total social uselessness of the finance sector as a whole.
Lanchester starts by stating that while we think of the role of the finance sector as "capital allocation" (that is, using investors' money to fund new businesses and expansions of existing businesses), that hasn't been important to finance for quite some time. Today, only 3% of bank activity consists of "lending to firms and individuals engaged in the production of goods and services."
The other 97% of finance is gambling. Here's how Stevenson breaks it down: say your farm grows mangoes. You need money before the mangoes are harvested, so you sell the future ownership of the harvest to a broker at $1/crate.
The broker immediately flips that interest in your harvest to a dealer who believes (on the basis of a rumor about bad weather) that mangoes will be scarce this year and is willing to pay $1.10/crate. Next, an international speculator (trading on the same rumor) buys the rights from the dealer at $1.20/crate.
Now come the side bets: a "momentum trader" (who specializes in bets that market trends will continue) buys the rights to your crop for $1.30/crate. A contrarian trader (who bets against momentum traders) short-sells the momentum trader's bet at $1.20. More short sellers pile in and drive the price down to $1/crate.
Now, a new rumor circulates, about conditions being ripe for a bounteous mango harvest, so more short-sellers appear, and push the price to $0.90/crate. This tempts the original broker back in, and he buys your crop back at $1/crate.
That's when the harvest comes. You bring in the mangoes. They go to market, and fetch $1.10/crate.
This is finance: a welter of transactions, only one of which (selling your mangoes to people who eat them) involves the real economy. Everything else is "speculation on the movement of prices." The nine transactions that took place between your planting the crop and someone eating the mangoes are all zero-sum: every trade has an evenly matched winner and loser, and when you sum them all up, they come out to zero. In other words, no value was created.
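To make the zero-sum arithmetic concrete, here's a toy ledger in Python. The chain of trades follows the passage above, but note my simplifications: I've left out the short-sellers' side bets (which are zero-sum by construction), and the buyback-and-sale path at the end is my own sketch, not Stevenson's accounting.

```
# A toy ledger for the mango-crate example (prices per crate).
# Participants' cash changes on each trade; the crate rights just
# change hands. "market" stands in for the final real-economy buyers.

trades = [
    ("farmer",     "broker",      1.00),  # forward sale of the harvest
    ("broker",     "dealer",      1.10),  # flip on a bad-weather rumor
    ("dealer",     "speculator",  1.20),  # same rumor, bigger bet
    ("speculator", "momentum",    1.30),  # betting the trend continues
    ("momentum",   "broker",      1.00),  # shorts drive the price down; broker buys back
    ("broker",     "market",      1.10),  # harvest sold to people who eat mangoes
]

cash = {}
for seller, buyer, price in trades:
    cash[seller] = cash.get(seller, 0) + price
    cash[buyer] = cash.get(buyer, 0) - price

market_paid = -cash.pop("market")  # the only real-economy transaction
farmer_got = cash.pop("farmer")
print(f"farmer received: ${farmer_got:.2f}/crate")
print(f"eaters paid:     ${market_paid:.2f}/crate")
for name, pnl in cash.items():
    print(f"{name:>10} net P&L: ${pnl:+.2f}/crate")
print(f"intermediaries combined: ${sum(cash.values()):+.2f}/crate")
```

Run it and every speculative gain is matched by a counterparty's loss: the middlemen's combined take is exactly the $0.10 spread between what the eaters paid and what the farmer received, and the churn in between created nothing new.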
This is the finance sector. In a world where the real economy generates $105 trillion/year, the financial derivatives market adds up to $667 trillion/year. This is "the biggest business in the world," and it's useless. It produces nothing. It adds no value.
If you work a job where you do something useful, you are on the losing side of this economy. All the real money is in this socially useless, no-value-creating, hypertrophied, metastasized finance sector. Every gain in finance is matched by a loss. It all amounts to, literally, nothing.
So that's what tempted me into one more blog post for the year: an absolutely perfect distillation of the uselessness of "the biggest business in the world," whose masters are the degenerate gamblers who buy and sell our politicians, set our policy, and control our lives. They're the ones enshittifying the internet, burning down the planet, and pushing Elon Musk towards trillionairedom.
It's their world, and we just live on it.
For now.
(Image: Sam Valadi, CC BY 2.0, modified)
Hey look at this (permalink)

- Meta Is Considering Charging Business Pages To Post Links https://www.socialmediatoday.com/news/meta-considering-charging-business-pages-to-post-links/808099/
- The original Mozilla "Dinosaur" logo artwork https://www.jwz.org/blog/2025/12/the-original-mozilla-dinosaur-logo-artwork/
- A Local Self-Reliance Agenda for New York City: ILSR's Memo to Mamdani https://ilsr.org/articles/memo-mamdani/
- Apple loses its appeal of a scathing contempt ruling in iOS payments case https://arstechnica.com/tech-policy/2025/12/epic-celebrates-the-end-of-the-apple-tax-after-appeals-court-win-in-ios-payments-case/
- The Internet's Tollbooth Operators https://prospect.org/2025/12/10/internets-tollbooth-operators-wu-review/
- Barnum's Law of CEOs https://www.antipope.org/charlie/blog-static/2025/12/barnums-law-of-ceos.html
- Google Starts Sharing All Your Text Messages With Your Employer https://archive.ph/wE2U7#selection-3936.0-3936.1
Object permanence (permalink)
#15yrsago Star Wars droidflake https://twitpic.com/3guwfq
#15yrsago TSA misses enormous, loaded .40 calibre handgun in carry-on bag https://web.archive.org/web/20101217223617/https://abclocal.go.com/ktrk/story?section=news/local&id=7848683
#15yrsago Brazilian TV clown elected to high office, passes literacy test https://web.archive.org/web/20111217233812/https://www.google.com/hostednews/afp/article/ALeqM5jmbXSjCjZBZ4z8VUcAZFCyY_n6dA?docId=CNG.b7f4655178d3435c9a54db2e30817efb.381
#15yrsago My Internet problem: an abundance of choice https://www.theguardian.com/technology/blog/2010/dec/17/internet-problem-choice-self-publishing
#10yrsago LEAKED: The secret catalog American law enforcement orders cellphone-spying gear from https://theintercept.com/2015/12/16/a-secret-catalogue-of-government-gear-for-spying-on-your-cellphone/
#10yrsago Putin: Give Sepp Blatter the Nobel; Trump should be president https://www.theguardian.com/football/2015/dec/17/sepp-blatter-fifa-putin-nobel-peace-prize
#10yrsago Star Wars medical merch from Scarfolk, the horror-town stuck in the 1970s https://scarfolk.blogspot.com/2015/12/unreleased-star-wars-merchandise.html
#10yrsago Some countries learned from America's copyright mistakes: TPP will undo that https://www.eff.org/deeplinks/2015/12/how-tpp-perpetuates-mistakes-dmca
#10yrsago No evidence that San Bernardino shooters posted about jihad on Facebook https://web.archive.org/web/20151217003406/https://www.washingtonpost.com/news/post-nation/wp/2015/12/16/fbi-san-bernardino-attackers-didnt-show-public-support-for-jihad-on-social-media/
#10yrsago Exponential population growth and other unkillable science myths https://web.archive.org/web/20151217205215/http://www.nature.com/news/the-science-myths-that-will-not-die-1.19022
#10yrsago UK's unaccountable crowdsourced blacklist to be crosslinked to facial recognition system https://arstechnica.com/tech-policy/2015/12/pre-crime-arrives-in-the-uk-better-make-sure-your-face-stays-off-the-crowdsourced-watch-list/
#1yrago Happy Public Domain Day 2025 to all who celebrate https://pluralistic.net/2024/12/17/dastar-dly-deeds/#roast-in-piss-sonny-bono
Upcoming appearances (permalink)

- Hamburg: Chaos Communications Congress, Dec 27-30
https://events.ccc.de/congress/2025/infos/index.html
- Denver: Enshittification at Tattered Cover Colfax, Jan 22
https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937
- Colorado Springs: Guest of Honor at COSine, Jan 23-25
https://www.firstfridayfandom.org/cosine/
Recent appearances (permalink)
- Enshittification on The Last Show With David Cooper
https://www.iheart.com/podcast/256-the-last-show-with-david-c-31145360/episode/cory-doctorow-enshttification-december-16-2025-313385767
- (Digital) Elbows Up (OCADU)
https://vimeo.com/1146281673
- How to Stop "Ensh*ttification" Before It Kills the Internet (Capitalisn't)
https://www.youtube.com/watch?v=34gkIvYiHxU
- Enshittification on The Daily Show
https://www.youtube.com/watch?v=d2e-c9SF5nE
- Enshittification with Four Ways to Change the World (Channel 4)
https://www.youtube.com/watch?v=tZQaEeuuI3Q
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Colophon (permalink)
Today's top sources: John Naughton (https://memex.naughtons.org/).
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
-
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
-
A Little Brother short story about DIY insulin PLANNING

This work (excluding any serialized fiction) is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Today's links
- Happy Public Domain Day 2026! The best way to cut through the hellishly complex thicket and bring our culture back to life.
- Hey look at this: Delights to delectate.
- Object permanence: Weird D&D advice; Email sabbaticals.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Happy Public Domain Day 2026! (permalink)
In 1998, Congress committed an act of mass cultural erasure, extending copyright by 20 years, even for existing works (including ones that had already entered the public domain), and for 20 years, virtually nothing entered the US public domain.
But then, on January 1, 2019, the public domain reopened. A crop of works from 1923 entered the public domain, to great fanfare, though honestly, precious few of those works were still known (that's what happens when you lock up 50-year-old works for an extra 20 years, ensuring they don't circulate, get reissued, or get reworked). Sure, I sang Yes, We Have No Bananas along with everyone else, but the most important aspect of the Grand Reopening of the Public Domain was the works that were to come:
https://www.youtube.com/watch?v=Z2ryWm0bziE
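(For the calendar-minded, the term math behind these dates is simple. Here's a minimal sketch, assuming the standard rule for works published before 1978 with proper notice and renewal: 95 years of protection, running to the end of the 95th calendar year.)

```
# Term math for US works published before 1978, assuming proper
# notice and renewal: 95 years of protection, expiring at the end of
# the 95th calendar year, so the work is free on January 1 of year + 96.

def public_domain_day(publication_year: int) -> int:
    """Year a renewed, pre-1978 published work enters the US public domain."""
    return publication_year + 95 + 1

assert public_domain_day(1923) == 2019  # the Grand Reopening
assert public_domain_day(1930) == 2026  # this year's crop
```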
The mid/late-1920s were extraordinarily fecund, culturally speaking. A surprising volume of creative work from that era remains in our consciousness, and so, every January 1, we have been treated to a fresh delivery of gifts from the past, works that are free and open and ours to claim and copy and use and remix.
No one chronicles this better than Jennifer Jenkins and James Boyle, the dynamic duo of copyright scholars who run Duke's Center for the Public Domain. During the 20-year public domain drought, Jenkins and Boyle kept the flame of hope alive, publishing an annual roundup of all the works that would have entered the public domain, but for Congress's act of wanton cultural vandalism. But starting in 2019, these yearly reports were transformed: no longer are they laments for the past we're losing; today, they are celebrations of the past that's showering down around us.
2024 marked another turning point for the public domain: that was the year that the first Mickey Mouse cartoons entered the public domain:
https://pluralistic.net/2023/12/20/em-oh-you-ess-ee/#sexytimes
Does that mean that Mickey Mouse is in the public domain? Well, it's complicated. Really complicated. To a first approximation, the aspects of Mickey that were present in those early cartoons entered the public domain that year, while other, later aspects of his character design (e.g. the big white gloves) wouldn't enter the public domain until later. But that's not the whole story, because not every aspect of character design is even copyrightable, so some later refinements to The Mouse were immediately public. This is such a chewy subject that Jenkins devoted a whole separate (and brilliant) article to it:
https://pluralistic.net/2023/12/15/mouse-liberation-front/#free-mickey
You see, Jenkins is a generationally brilliant legal communicator, much sought after for her commentary on these abstract matters. You may have heard her give her characteristically charming, crisp and clear commentaries on NPR's Planet Money.
She and Boyle have produced some of the best copyright textbooks in circulation today, from popular explainers to the definitive casebooks for classroom use, and they release these as free, shareable, open-access works.
Yesterday, Jenkins and Boyle published the 2026 edition of their Public Domain Day omnibus:
https://web.law.duke.edu/cspd/publicdomainday/2026/
There are some spectacular works that are being freed on January 1:
- Dashiell Hammett's Maltese Falcon
- Agatha Christie's Murder at the Vicarage (Miss Marple's debut)
- The first four Nancy Drew books
- The first Dick and Jane book
- TS Eliot's Ash Wednesday
- Olaf Stapledon's Last and First Men
- Sigmund Freud's Civilization and Its Discontents (in German)
- Somerset Maugham's Cakes and Ale
- Bertrand Russell's The Conquest of Happiness
That's just a small selection from thousands of books.
Things are pretty amazing on the film side too: we're getting Academy Award winners like All Quiet on the Western Front; another Marx Brothers movie (Animal Crackers); the debut film appearance of two of the Three Stooges (Soup To Nuts); a Gary Cooper/Marlene Dietrich vehicle (Morocco); Garbo's first talkie (Anna Christie); John Wayne's big break (The Big Trail); a Hitchcock (Murder!); Jean Harlow's debut (Hell's Angels, directed by Howard Hughes); and so, so many more.
Then there's music. On the composition side, there's some great Gershwins (I Got Rhythm, I've Got a Crush on You, Embraceable You). There's Hoagy Carmichael's Georgia On My Mind. There's Dream a Little Dream of Me, Sunny Side of the Street, Livin' in the Sunlight, Lovin' in the Moonlight, Just a Gigolo; and a Sousa march, The Royal Welch Fusiliers.
There are also some banger recordings: Marian Anderson's Nobody Knows the Trouble I've Seen; Bessie Smith and Louis Armstrong's St Louis Blues; Clarence Williams' Blue Five's Everybody Loves My Baby (but My Baby Don't Love Nobody but Me); Louis Armstrong's If I Lose, Let Me Lose; and (again) so many more!
On top of that, there's a bunch of 2D art, including a Mondrian, a Klee, and a ton more work from 1930, which means a lot of Deco, Constructivism, and Neoplasticism. As a collagist, I find this very exciting:
https://pluralistic.net/2025/12/03/cannier-valley/#bricoleur
As with previous editions, Jenkins and Boyle use this year's public domain report as a jumping-off point to explain some of the gnarlier aspects of copyright law. This year's casus belli is the bizarre copyright status of Betty Boop.
https://web.law.duke.edu/cspd/publicdomainday/2026/#boopanchor
On January 1, the first Betty Boop cartoon, Dizzy Dishes, will enter the public domain. But there are many aspects of Betty Boop that are already in the public domain, because the copyright on many later Boop cartoons was never renewed: until 1976, copyright holders were required to file some paperwork at fixed intervals to extend the copyright on their works. While the Fleischer studio (where Betty Boop was created) renewed the copyright on Dizzy Dishes, there were many other shorts that entered the public domain years ago.
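Here's that two-branch rule as a deliberately simplified sketch (1909 Act arithmetic only; real cases turn on notice, publication dates and other wrinkles, so treat this as illustrative, and note that the non-renewed 1933 short in the example is hypothetical):

```
# A simplified model of the old US renewal regime: an initial 28-year
# term, lapsing into the public domain if renewal paperwork wasn't
# filed; if renewed, later extensions stretched the total to 95 years.

def us_pd_year(pub_year: int, renewed: bool) -> int:
    """Approximate year a pre-1964 published work entered/enters the US public domain."""
    if not renewed:
        return pub_year + 28 + 1  # lapsed after the initial 28-year term
    return pub_year + 95 + 1      # renewed: the full (extended) 95-year term

print(us_pd_year(1930, renewed=True))   # Dizzy Dishes: 2026
print(us_pd_year(1933, renewed=False))  # a hypothetical non-renewed short: 1962
```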
That means that all the aspects of Betty Boop that were developed for Dizzy Dishes are about to enter the public domain. But also, all the aspects of Betty Boop from those non-renewed shorts are already in the public domain. And some of the remaining aspects of Betty Boop's character design (those developed in subsequent shorts that were renewed) are also in the public domain, because they aren't copyrightable in the first place: they're "generic" or "trivial," constitute "minuscule variations," or are so standard or indispensable as to be "scènes à faire."
On top of that, there are aspects of the Betty Boop design that may be in copyright, but no one is sure who they belong to, because a lot of the paperwork establishing title to those copyrights vanished during the various times when the Fleischer studio and its archives changed hands.
But we're not done yet! Just because some later aspects of the Betty Boop character design are still in copyright, it doesn't follow that you aren't allowed to use them! US copyright law has a broad set of "limitations and exceptions," including fair use, and if your usage fits into one of these exceptions, you are allowed to reproduce, adapt, display and perform copyrighted works without permission from the copyright holder, even (especially) if the copyright holder objects.
And finally, on top of all of this, there's trademark, which is often lumped in with copyright as part of an incoherent, messy category we call "intellectual property." But trademark is absolutely unlike copyright in virtually every way. Unlike copyrights, trademarks don't automatically expire. Trademarks remain in force for as long as they are used in commerce (which is why a group of cheeky ex-Twitter lawyers is trying to get the rights to the Twitter trademarks that Musk abandoned when he rebranded the company as "X").
But also, trademark exists to prevent marketplace confusion, which means that you're allowed to use trademarks in ways that don't lead to consumers being misled about the origin of goods or services. Even the Supreme Court has (repeatedly) upheld the principle that trademark can't be used as a backdoor to extend copyright.
That's important, because the current Betty Boop license-holders have been sending out baseless legal threats claiming that their trademarks over Betty Boop mean that she's not going into the public domain. They're not the only ones, either! This is a routine, petty scam perpetrated by marketing companies that have scooped up the (usually confused and difficult-to-verify) title to cultural icons and then gone into business extracting rent from people and businesses who want to make new works with them. Scammers in this mold energetically send out bullshit legal threats on behalf of the estates of Charlie Chaplin, Alfred Hitchcock, and Hergé, salting their threats with nonsense about the different terms of copyright in the UK and elsewhere.
As Jenkins and Boyle point out, the thing that copyright expiration gets us is clarity. When the heroic lawyer and Sherlockian Les Klinger successfully wrestled the Sherlock Holmes rights out of the Doyle estate, he did us all a solid:
https://esl-bits.eu/ESL.English.Listening.Short.Stories/Rendition/01/default.html
But "wait until Les gets angry enough to spend five years in court" isn't a scalable solution to the scourge of copyfraud. It's only through the unambiguous expiry of copyright that we can all get clarity on which parts of our culture are free for all to use.
Now, that being said, copyright's limitations and exceptions are also hugely important, because there are plenty of beneficial uses that arise long before a work enters the public domain. To take just one example: for the past week, the song in top rotation on my music player has been the newly (officially) released Fatboy Slim track Satisfaction Skank, a mashup of Slim's giant hit Rockefeller Skank and the Rolling Stones' even bigger hit (I Can't Get No) Satisfaction:
https://www.youtube.com/watch?v=_c_V3oPCe-s
This track is one of Fatboy Slim's all-time crowd-pleasers, the song he would bust out during live shows to get everyone on the dance-floor. But for more than 20 years, the track has been exclusive to his live shows: despite multiple overtures, Fatboy Slim couldn't get the Rolling Stones to respond to his attempts to license Satisfaction for an official release.
That changed when, without explanation, the Rolling Stones reached out to Slim and offered to license the rights, even giving him access to the masters:
https://www.bbc.com/news/articles/c2dzre3z96go
This is a happy ending, but it's also a rarity. For every track like this, where the rightsholders decide to grant permission, even if it takes decades, there are thousands more that can't be officially released. This serves no one's interests: not musicians', not fans'. The irony is that in the golden age of sampling, everyone operated from the presumption that sampling was fair use. High-profile lawsuits and gun-shy labels killed that presumption, and today, sampling remains a gigantic, ugly mess.
Which is all to say that the ongoing growth of the public domain, after its 20-year coma, is a most welcome experience. But if you think the public domain is great, wait'll you see what fair use can do for creativity!
(Image: Jennifer Jenkins and James Boyle, CC BY 4.0)
Hey look at this (permalink)

- NVIDIA Isn't Enron - So What Is It? https://www.wheresyoured.at/nvidia-isnt-enron-so-what-is-it/
- How Google Maps quietly allocates survival across London's restaurants - and how I built a dashboard to see through it https://laurenleek.substack.com/p/how-google-maps-quietly-allocates
- Who do they think you are? https://hidden-selves.wove.co/
- Datacenters in space are a terrible, horrible, no good idea https://taranis.ie/datacenters-in-space-are-a-terrible-horrible-no-good-idea/
- Mobile Voting Project's vote-by-smartphone has real security gaps https://blog.citp.princeton.edu/2025/12/16/mobile-voting-projects-vote-by-smartphone-has-real-security-gaps/
Object permanence (permalink)
#20yrsago Sony DRM Debacle Roundup Part V https://memex.craphound.com/2005/12/16/sony-drm-debacle-roundup-part-v/
#15yrsago Weird D&D advice-column questions https://comicsalliance.com/weird-dd-questions-dungeons-dragons/
#10yrsago America's permanent, ubiquitous tent-cities https://placesjournal.org/article/tent-city-america/
#10yrsago The changing world of webcomics business models https://web.archive.org/web/20151218130702/http://shadowbinders.com/webcomics-changing-business-model-podcast/
#10yrsago Cop who demanded photo of sexting-accused teen's penis commits suicide https://arstechnica.com/tech-policy/2015/12/cop-who-wanted-to-take-pic-of-erection-in-sexting-case-commits-suicide/
#10yrsago Saudi millionaire acquitted of raping teen in London, says he tripped and accidentally penetrated her https://www.telegraph.co.uk/news/uknews/crime/12052901/Ehsan-Abdulaziz-Saudi-millionaire-cleared-of-raping-teenager.html
#10yrsago Someone snuck skimmers into Safeway stores https://krebsonsecurity.com/2015/12/skimmers-found-at-some-calif-colo-safeways/
#10yrsago Philips promises new firmware to permit third-party lightbulbs https://web.archive.org/web/20151216182639/http://www.developers.meethue.com/content/friends-hue-program-update
#5yrsago Jan 1 is Public Domain Day for 1925 https://pluralistic.net/2020/12/16/fraught-superpowers/#public-domain-day
#5yrsago Landmark US financial transparency law https://pluralistic.net/2020/12/16/fraught-superpowers/#financial-secrecy
#5yrsago Chaos Communications Congress https://pluralistic.net/2020/12/16/fraught-superpowers/#rc3
#5yrsago Email sabbaticals https://pluralistic.net/2020/12/16/fraught-superpowers/#email-sabbatical
Upcoming appearances (permalink)

- Hamburg: Chaos Communications Congress, Dec 27-30
https://events.ccc.de/congress/2025/infos/index.html
- Denver: Enshittification at Tattered Cover Colfax, Jan 22
https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937
- Colorado Springs: Guest of Honor at COSine, Jan 23-25
https://www.firstfridayfandom.org/cosine/
Recent appearances (permalink)
- (Digital) Elbows Up (OCADU)
https://vimeo.com/1146281673
- How to Stop "Ensh*ttification" Before It Kills the Internet (Capitalisn't)
https://www.youtube.com/watch?v=34gkIvYiHxU
- Enshittification on The Daily Show
https://www.youtube.com/watch?v=d2e-c9SF5nE
- Enshittification with Four Ways to Change the World (Channel 4)
https://www.youtube.com/watch?v=tZQaEeuuI3Q
- The Plan is to Make the Internet Worse. Forever. (Novara Media)
https://www.youtube.com/watch?v=7wE8G-d7SnY
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
-
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
-
A Little Brother short story about DIY insulin PLANNING

This work (excluding any serialized fiction) is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Issue 98: The world's most corrupt crypto startup operation
Today's links
- America's collapsing consumption is the world's disenshittification opportunity: America's loss is the post-American internet's gain.
- Hey look at this: Delights to delectate.
- Object permanence: DanKam; Backyard M*A*S*H; Blockchain voting is bullshit.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
America's collapsing consumption is the world's disenshittification opportunity (permalink)
We are about to get a "post-American internet," because we are entering a post-American era and a post-American world. Some of that is Trump's doing, and some of that is down to his predecessors.
When we think about the American century, we rightly focus on America's hard power: the invasions, military bases, arms exports, and CIA coups. But it's America's soft power that established and maintained true American dominance, the "weaponized interdependence" that Henry Farrell and Abraham Newman describe in their 2023 book The Underground Empire:
https://pluralistic.net/2023/10/10/weaponized-interdependence/#the-other-swifties
As Farrell and Newman lay out, America established itself as more than a global power: it is a global platform. If you want to buy things from another country, you use dollars, which you keep in an account at the US Federal Reserve, and which you exchange using the US-dominated SWIFT system. If you want to transmit data across a border, chances are you'll use a fiber link that makes its first landfall in the USA, the global center of the world's hub-and-spoke telecoms system.
No one serious truly believed that these US systems were entirely trustworthy, but there was always an assumption that if the US were to instrumentalize (or, less charitably, weaponize) the dollar or fiber, it would do so subtly, selectively, and judiciously. Instead, we got the Snowden revelations that the US was using its position at the center of the world's fiber web to spy on pretty much every person in the world: lords and peasants, presidents and peons.
Instead, we got the US confiscating Argentina's foreign reserves to pay back American vulture capitalists who bought distressed Argentine bonds for pennies on the dollar and then got to raid a sovereign nation's treasury to recoup a loan they never issued. Instead, we saw the SWIFT system mobilized to achieve tactical goals, from the War on Terror to Russia-Ukraine sanctions.
These systems are now no longer trustworthy. It's as though the world's brakes have started to fail intermittently, but we are still obliged to drive down the road at 100mph, desperately casting about for some other way to control the system, and forced to rely on this critical, unreliable mechanism while we do:
https://pluralistic.net/2025/11/26/difficult-multipolarism/#eurostack
This process was well underway before Trump, but Trump's incontinent belligerence has only accelerated it, making us keenly aware that a sudden stop might be in our immediate future and heightening the urgency of finding some alternative to America's faulty brakes. Through trade policy (tariffs) and rhetoric, Trump has called the question.
One of the most urgent questions Trump has forced the world to confront is what we will do about America's control over the internet. By this, I mean both the abstract "governance" control (such as the fact that ICANN is a US corporation, subject to US government coercion), and the material fact that virtually every government, large corporation, small business and household keeps its data (files, email, records) in a US Big Tech silo (also subject to US government control).
When Trump and Microsoft colluded to shut down the International Criminal Court by killing its access to Outlook and Office365 (in retaliation for the ICC issuing an arrest warrant for the génocidaire Benjamin Netanyahu), the world took notice. Trump and Microsoft bricked the ICC, effectively shuttering its operations. If they could do that to the ICC, they could do it to any government agency, any nationally important corporation, any leader, anyone. It was an act of blatant cyberwarfare, no different from Russian hackers bricking Ukrainian power plants (except that Microsoft didn't have to hack Outlook; they own it).
The move put teeth into Trump's frequent reminders that America no longer has allies or trading partners, only rivals and adversaries. That has been the subtext, and the overt message, of the Trump tariffs, ever since "liberation day" on April 2, 2025.
When Americans talk about the Trump tariffs, they focus on what these will do to the cost of living in the USA. When other countries discuss the tariffs, they focus on what this will do to their export markets, and whether their leaders will capitulate to America's absurd demands.
This makes sense: America is gripped by a brutal cost of living crisis, and contrary to Trump's assertions, this is not a Democratic hoax. We know this because (as The Onion points out), "Democrats would never run on a salient issue":
https://theonion.com/fact-checking-trump-on-affordability/
It also makes sense that Canadians and Britons would focus on this because Prime Ministers Carney and Starmer have caved on their plans to tax US Big Tech, ensuring that these companies will always have a cash-basis advantage over domestic rivals (Starmer also rolled over by promising to allow American pharma companies to gouge the NHS):
https://www.independent.co.uk/news/uk/politics/nhs-drug-prices-starmer-trump-tariffs-b2841490.html
But there's another, highly salient aspect to the tariffs that is much neglected, one that is, ultimately, far more important than these short-run changes to other countries' plans to tax American tech giants. Namely: for decades, the US has used the threat of tariffs to force its trading partners into policies that keep their tech companies from competing with American tech giants.
The most important of these Big Tech-defending policy demands is something called "anticircumvention law." This is a law that bans changing how a product works without the manufacturer's permission: for example, modifying your printer so it can use generic ink, or modifying your car so it can be fixed by an independent repair depot, or modifying your phone or games console so it can use a third-party app store.
This ban on modification means that when a US tech giant uses its products to steal money and/or private information from the people in your country (that is, "enshittification"), no one is allowed to give your people the tools to escape these scams. Your domestic investors can't fund your domestic technologists' startups; those startups can't make disenshittifying products; and those products can't be exported globally, to anyone with an internet connection and a payment method.
It's a double whammy: your people are plundered, and your businesses are strangled. The whole world has been made poorer, to the tune of trillions of dollars, by this scam. And the only reason everyone puts up with it is that the US threatened them with tariffs if they didn't.
So now we have the tariffs anyway. And if someone threatens to burn your house down unless you follow orders, then burns it down regardless, you really don't have to keep following their orders.
This is a point I've been making in many forums lately, including, most recently, on a stage in Canada, where I made the case that rather than whacking Americans with retaliatory tariffs, Canada should legalize reverse-engineering and go into business directly attacking the highest margin lines of business of America's most profitable corporations, making everything in Canada cheaper and better, and turning America's trillions in Big Tech ripoffs into Canadian billions by selling these tools to everyone else in the world:
https://pluralistic.net/2025/11/28/disenshittification-nation/#post-american-internet
There are lots of reasons to like this plan. Not only is it a double reverse whammy (making everything cheaper and making billions for a new, globally important domestic tech sector), but it's also unambiguously within Canada's power to do. After all, it's very hard to get American tech giants to do things they don't want to do. Canada tried to do this with Facebook, and failed miserably.
The EU, a far more powerful entity than Canada, has been trying to get Apple to open up its App Store, and Apple has repeatedly told them to go fuck themselves:
https://pluralistic.net/2025/09/26/empty-threats/#500-million-affluent-consumers
Apple, being a truly innovative company, has come up with a whole lot of exciting new ways to tell the EU to fuck itself:
https://www.theregister.com/2025/12/16/apple_dma_complaint/
But anticircumvention law is something that every government has total, absolute control over. Maybe Canada can't order Apple, Google and Facebook to pay their taxes, but it can absolutely decide to stop giving these American companies access to Canada's courts to shut down Canadian competitors so that US companies can go on stealing data and money from the Canadian people:
https://pluralistic.net/2025/11/01/redistribution-vs-predistribution/#elbows-up-eurostack
Funnily enough, this case is so convincing that I've started to hear from Canadian Trump appeasers who insist that we must not repeal our anticircumvention laws because this would work too well. It would inflict too much pain on America's looting tech sector, and save Canadians too much money, and make too much money for Canadian tech businesses. If Canada becomes the world's first disenshittification nation (they say), we will make Trump too angry.
Apparently, these people think that Canada should confine its tariff response to measures that don't work, because anything effective would provoke Trump.
When I try to draw these critics out about what the downside of "provoking Trump" is, they moot the possibility that Trump would roll tanks across the Rainbow Bridge and down Lundy's Lane. This seems a remote possibility to me, and ultimately, they agree. The international response to Trump invading Canada because we made it easier for people (including Americans) to buy cheap printer ink would be…intense.
Next, they mumble something about tariffs. When I point out that the US is already imposing tariffs on Canadian exports, they say "well, it could be worse," and point to various moments when Trump has hiked the tariffs on Canada, e.g. because he was angry over being reminded that Ronald Reagan would have hated his guts:
https://www.youtube.com/watch?v=dCKmMEFiLrI
But of course, the fact that Trump's tariffs yo-yo up and down depending on the progress of his white matter disease means that anyone trying to do forward planning for something they anticipate exporting to America should assume that there might be infinity tariffs the day they load up their shipping container.
But there's another way in which the threat of tariffs is ringing increasingly hollow: American consumption power is collapsing, because billionaires and looters have hoarded all the country's wealth, and no one can afford to buy things anymore.
America is in the grips of its third consecutive "K-shaped recovery":
https://prospect.org/2025/12/01/premiumization-plutonomy-middle-class-spending-gilded-age/
A K-shaped recovery is when the richest people get richer, but everyone else gets worse off. Working people in America have gotten steadily poorer since the 1970s, even as America's wealthiest have seen their net worth skyrocket.
The declining economic power of everyday Americans has multiple causes: stagnating wages, monopoly price-gouging, and the blistering increase in education, housing and medical debt. These all have the same underlying cause, of course: the capture of both political parties (and the courts and administrative agencies) by billionaires, who have neutered antitrust law, jacked up the price of health care and a college education, smashed unions, and cornered entire housing markets.
For decades, America's consumption power has been kept on life-support through consumer debt and second (or third, or fourth) mortgages. But America's monopoly credit card companies are every bit as capable of price-gouging as America's hospitals, colleges and landlords are, and Americans don't just carry more credit-card debt than their foreign counterparts; they also pay more to service that debt:
https://www.justice.gov/archives/opa/pr/justice-department-sues-visa-monopolizing-debit-markets
The point is that every dollar that goes into servicing a debt is a dollar that can't be used to buy something useful. A dollar spent on consumption has the potential to generate multiple, knock-on transactions, as the merchant spends your dollar on a coffee, and the coffee-shop owner spends it on a meal out, and the restaurateur spends it on a local printer who runs off a new set of menus. But a dollar that's shoveled into the debt markets is almost immediately transferred out of the real economy and into the speculative financial economy, landing in the pocket of a one-percenter who buys stocks or other assets with it.
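That knock-on logic is the textbook spending multiplier. Here's a quick illustration (my own, not Lanchester's or Wu's; the 0.8 propensity-to-consume figure is invented for the example):

```
# Toy Keynesian spending multiplier: each recipient re-spends a
# fraction c of what they receive in the real economy, so one dollar
# of consumption supports 1 + c + c^2 + ... = 1/(1-c) dollars of
# transactions. A dollar diverted to the debt markets exits the loop.

def total_transactions(dollar: float, c: float, rounds: int = 100) -> float:
    """Sum the first `rounds` terms of the re-spending series."""
    return sum(dollar * c**k for k in range(rounds))

print(total_transactions(1.0, c=0.8))  # ~5.0: $5 of activity per dollar spent
print(total_transactions(1.0, c=0.0))  # 1.0: a dollar to a debt collector stops dead
```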
The rich just don't buy enough stuff. There's a limit to how many Lambos, Picassos, and Sub-Zero fridges even the most guillotineable plute can usefully own.
Meanwhile, consumers keep having their consumption power siphoned off by debt-collectors and price-gougers, with Trump's help. The GOP just forced eight million student borrowers back into repayment:
https://prospect.org/2025/12/16/gop-forcing-eight-million-student-loan-borrowers-into-repayment/
They've killed a monopolization case against Pepsi and Walmart for colluding to rig grocery prices across the entire economy:
https://www.thebignewsletter.com/p/secret-documents-show-pepsi-and-walmart
They've sanctioned the use of price-fixing algorithms to raise rent:
https://www.thebignewsletter.com/p/an-odd-settlement-on-rent-fixing
As Tim Wu points out in his new book, The Age of Extraction, one consequence of allowing monopoly pricing is that it reduces spending power across the entire economy:
https://www.penguinrandomhouse.com/books/691177/the-age-of-extraction-by-tim-wu/
Take electricity: you would probably pay your power bill even if it tripled. Sure, you'd find ways to conserve electricity and eliminate many discretionary power uses, but anyone who can pay for electricity will, if the alternative is no electricity. Electricity ā like health, shelter, food, and education ā is so essential that you'd forego a vacation, a new car, Christmas gifts, dinners out, a new winter coat, or a vet's visit for your cat if that was the only way to keep the lights on.
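Here's that crowding-out effect as a toy household budget; every number below is invented for illustration:

```
# When an essential's price spikes, the cut comes entirely out of
# discretionary spending. Invented figures, monthly, after tax.

income = 4000.0
essentials = {"power": 200.0, "rent": 1800.0, "food": 700.0, "meds": 150.0}
discretionary = income - sum(essentials.values())
print(f"discretionary before: ${discretionary:.0f}")  # $1150

essentials["power"] *= 3  # the bill triples; you pay it anyway
discretionary = income - sum(essentials.values())
print(f"discretionary after:  ${discretionary:.0f}")  # $750, roughly a 35% cut
```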
Trump's unshakable class solidarity with rent extractors, debt collectors and price gougers has significantly accelerated the collapse of the consumption power of Americans (AKA "the affordability crisis").
But it gets worse: Americans' consumption power isn't limited to the dollars they spend; it also includes the dollars that the government spends on their behalf, through programs like SNAP (food stamps) and Medicaid/Medicare. Those programs have been slashed to the bone and beyond by Trump, Musk, DOGE and the Republican majorities in the House and Senate.
The reason that other countries took the threat of US tariffs so seriously (seriously enough to hamstring their own tech sectors and render their own people defenseless against US tech) is that the US has historically bought a lot of stuff. For any export economy, the US was a critical market, a must-have.
But that has been waning for a generation, as the Lambo-and-Sub-Zero set hoarded more and more of the wealth and the rest of us were able to afford less and less. In less than a year, Trump has slashed the consumption power of an increasing share of the American public to levels approaching the era of WWII ration-books.
The remaining American economy is a collection of cheap gimmicks that are forever on the brink of falling apart. Most of the economy is propped up by building data-centers for AI that no one wants and that can't be powered thanks to Trump's attacks on renewables. The remainder consists of equal parts MLMs, Labubus, Lafufus, cryptocurrency speculation, and degenerate app-based gambling.
None of this is good. This is all fucking terrible. But I raise it here to point out that "Do as I say or Americans won't buy your stuff anymore" starts to ring hollow once most Americans can't afford to buy anything anymore.
America is running out of levers to pull in order to get the rest of the world to do its bidding. American fossil fuels are increasingly being outcompeted by an explosion of cheap, evergreen Chinese solar panels, inverters, batteries, and related technology:
https://pluralistic.net/2025/10/02/there-goes-the-sun/#carbon-shifting
And the US can't exactly threaten to withhold foreign aid to get leverage over other countries, because US foreign aid has dropped to homeopathic levels:
https://www.factcheck.org/2025/02/sorting-out-the-facts-on-waste-and-abuse-at-usaid/
What's more, it's gonna be increasingly difficult for the US to roll tanks anywhere, even across the Rainbow Bridge, now that Pete Hegseth is purging the troops of anyone who can't afford Ozempic.
And Congress just gutted the US military's Right to Repair, meaning that the Pentagon will be forced to continue its proud tradition of shipping busted generators, vehicles and materiel back to the USA for repair.
Eventually, some foreign government is going to wake up to the fact that they can make billions by raiding the US tech giants that have been draining their economy, and, in so doing, defend themselves against Trump's cyberwar threat to order Microsoft (or Oracle, or Apple, or Google) to brick their key ministries and corporations. When they do, US Big Tech will squeal, the way they always do:
https://economicpopulist.substack.com/p/big-tech-zeal-to-weaponize-trade
But money talks and bullshit walks. There's a generation of shit-hot technologists who've been chased out of America by mask-wearing ICE goons who wanted to throw them in a gulag, and a massive cohort of investors looking for alpha who don't want to have to budget for a monthly $TRUMP coin spend in order to remain in business.
And when we do finally get a disenshittification nation, it will be great news for Americans. After all, everyday Americans either own no stock, or so little stock as to be indistinguishable from no stock. We don't benefit from US tech companies' ripoffs ā we are the victims of those ripoffs. America is ground zero for every terrible scam and privacy invasion that a US tech giant can conceive of. No one needs the disenshittification tools that let us avoid surveillance, rent-seeking and extraction more than Americans. And once someone else goes into business selling them, we'll be able to buy them.
Buying digital tools that are delivered over the internet is a hell of a lot simpler than buying cheap medicine online and getting it shipped from a Canadian pharmacy.
For an America First guy, Trump is sure hell-bent on ending the American century.
Hey look at this (permalink)

- The Ross Dowson Archive https://archive.org/details/rossdowson?tab=collection
- The Reverse Centaur's Guide to Criticizing AI https://distro.f-91w.club/reverse-centaur/reverse-centaur_imposed.pdf
- Daddy-Daughter Podcast, 2025 Edition https://craphound.com/news/2025/12/14/daddy-daughter-podcast-2025-edition/
- Old Teslas Are Falling Apart https://futurism.com/advanced-transport/old-teslas-falling-apart
- EFF Launches Age Verification Hub as Resource Against Misguided Laws https://www.eff.org/press/releases/eff-launches-age-verification-hub-resource-against-misguided-laws
Object permanence (permalink)
#20yrsago PSP 2.01 firmware unlocked https://web.archive.org/web/20060115012844/https://psp3d.com/showthread.php?t=874
#20yrsago HOWTO make a DRM CD https://blog.citp.princeton.edu/2005/12/15/make-your-own-copy-protected-cd-passive-protection/
#15yrsago DanKam: mobile app to correct color blindness https://web.archive.org/web/20101217043921/https://dankaminsky.com/2010/12/15/dankam/
#15yrsago UBS's 43-page dress code requires tie-knots that match your facial morphology https://web.archive.org/web/20151115074222/https://www.wsj.com/articles/SB10001424052748704694004576019783931381042
#15yrsago UK demonstrator challenges legality of "kettling" protestors https://web.archive.org/web/20101219075643/https://www.google.com/hostednews/ukpress/article/ALeqM5hK97JtRIOOeKUxESqXRLSeUDBTJw?docId=B39208111292330372A000
#15yrsago Backyard M*A*S*H set replica https://imgur.com/a/mash-ztcon
#15yrsago Bottle-opener shaped like a prohibitionist https://web.archive.org/web/20101222062101/https://blog.modernmechanix.com/2010/12/15/booze-foe-image-opens-bottles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ModernMechanix+(Modern+Mechanix)
#15yrsago Typewriter ribbon tins https://thedieline.com/vintage-packaging-typewriter-tins.html/
#10yrsago Sometimes, starting the Y-axis at zero is the BEST way to lie with statistics https://www.youtube.com/watch?v=14VYnFhBKcY
#10yrsago DEA ignored prosecutor's warning about illegal wiretap warrants, now it's losing big https://www.usatoday.com/story/news/2015/12/09/illegal-dea-wiretap-riverside-money-laundering/77050442/
#10yrsago Lifelock anti-identity theft service helped man stalk his ex-wife https://www.azcentral.com/story/money/business/consumers/2015/11/23/lifelock-used-electronically-track-arizona-woman/75535470/
#10yrsago EFF and Human Rights Watch force DEA to destroy its mass surveillance database https://www.eff.org/deeplinks/2015/12/victory-privacy-and-transparency-hrw-v-dea
#10yrsago Do Androids Dream of Electric Victim-Blamers? https://neverbeenmad.tumblr.com/post/134528463529/voight-kampff-empathy-test-2015-by-smlxist-and
#10yrsago Billionaire GOP superdonors aren't getting what they paid for https://web.archive.org/web/20181119192737/https://nymag.com/intelligencer/2015/12/gop-billionaires-cant-seem-to-buy-this-election.html
#5yrsago EU competition rules have real teeth https://pluralistic.net/2020/12/15/iowa-vs-16-tons-of-bricks/#dsm
#5yrsago Asset forfeiture is just theft https://pluralistic.net/2020/12/15/iowa-vs-16-tons-of-bricks/#stand-and-delivery
#5yrsago Pornhub and payment processors https://pluralistic.net/2020/12/15/iowa-vs-16-tons-of-bricks/#chokepoints
#5yrsago Blockchain voting is bullshit https://pluralistic.net/2020/12/15/iowa-vs-16-tons-of-bricks/#sudoku-voting
Upcoming appearances (permalink)

- Hamburg: Chaos Communications Congress, Dec 27-30
https://events.ccc.de/congress/2025/infos/index.html
- Denver: Enshittification at Tattered Cover Colfax, Jan 22
https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937
- Colorado Springs: Guest of Honor at COSine, Jan 23-25
https://www.firstfridayfandom.org/cosine/
Recent appearances (permalink)
- (Digital) Elbows Up (OCADU)
https://vimeo.com/1146281673
- How to Stop "Ensh*ttification" Before It Kills the Internet (Capitalisn't)
https://www.youtube.com/watch?v=34gkIvYiHxU
- Enshittification on The Daily Show
https://www.youtube.com/watch?v=d2e-c9SF5nE
- Enshittification with Four Ways to Change the World (Channel 4)
https://www.youtube.com/watch?v=tZQaEeuuI3Q
- The Plan is to Make the Internet Worse. Forever. (Novara Media)
https://www.youtube.com/watch?v=7wE8G-d7SnY
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
-
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
-
A Little Brother short story about DIY insulin PLANNING

This work - excluding any serialized fiction - is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Today's links
- Break up bad companies; replace bad union bosses: Labor should be fixed, capital should be vanquished.
- Hey look at this: Delights to delectate.
- Object permanence: "Star Island"; "Mediactive"; Afraid of solar; Well-Armed Peasants; Dumpster fire exits; Wikipedia v Brittanica; Pentagon v Quakers; Stealing whole houses; "Situation Normal."
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Break up bad companies; replace bad union bosses (permalink)
Unions are not perfect. Indeed, it is possible to belong to a union that is bad for workers: either because it is weak, or corrupt, or captured (or some combination of the three).
Take the "two-tier contract." As unions lost ground ā thanks to changes in labor law enforcement under a succession of both Republican and Democratic administrations ā labor bosses hit on a suicidal strategy for contract negotiations. Rather than bargaining for a single contract that covered all the union's dues-paying members, these bosses negotiated contracts that guaranteed benefits for existing members, but did not extend these benefits to new members:
https://pluralistic.net/2021/11/25/strikesgiving/#shed-a-tier
A two-tier contract is one where all workers pay dues, but only the dwindling rump of older, more established workers gets any protection or representation from the union. An ever-larger share of the membership pays dues and gets nothing in return. You couldn't come up with a better way to destroy unions if you tried.
Thankfully, union workers figured out that the answer to this problem was firing their leaders and replacing them with militant, principled leaders who cared about workers, not just a subsection of their members. Radicals in big unions - like the UAW - teamed up with comrades from university grad students' unions to master the arcane rules that had been weaponized by corrupt bosses to prevent free and fair union elections. Together, they forced the first legitimate union elections in generations, and then the newly elected leaders ran historic strikes that won huge gains for workers (and killed off the two-tier contract):
https://theintercept.com/2023/04/07/deconstructed-union-dhl-teamsters-uaw/
Corrupt unions aren't the only life-destroying institutions that radicals have set their sights on this decade. Concentrated corporate power is the most dangerous force in the world today (indeed, it's large, powerful corporations that corrupted those unions). Antitrust activists, environmental activists, consumer rights activists, privacy activists and labor activists have all stepped up the global war on big business. From new antitrust laws to antitrust lawsuits to strikes to boycotts to mass protests and direct action, this decade has marked a turning point in the global consciousness of the danger of corporate power and the need to fight it.
But there's a big, important difference between bad corporations and bad unions: what we should do about them.
The answer to a powerful, corrupt corporation is to take action that strips it of its power: break the company up, whack it with fines, take away its corporate charter, strip its executives of their fortunes, even put them in prison. That's because corporations are foundationally undemocratic institutions, governed by "one share, one vote" (and the billionaires who benefit from corporate power are building a society that's "one dollar, one vote").
They fundamentally exist to consolidate power at the expense of workers, suppliers and customers, to extract wealth by imposing costs on the rest of us, from pollution to political corruption. When a corporation gets big enough to pose a risk to societal wellbeing, we need to smash that corporation, not reform it.
But the answer to a corrupt union is to fire the union bosses and replace them with better ones. The mission of a union is foundationally pro-democratic. A unionized workplace is a democratic workplace. As in any democracy, workplace democracies can be led by bad or incompetent people. But, as with any democracy, the way you fix this is by swapping out the bad leaders for good ones - not by abolishing democracy and replacing it with an atomized society in which it's every worker for themself, bargaining with a boss who will always win a one-on-one fight in the long run.
I raise this because a general strike is back on the table, likely for May Day 2028 (5/1/28):
https://labornotes.org/2025/12/maybe-general-strike-isnt-so-impossible-now
Unions are an important check against fascism. That's why fascists always start by attacking organized labor: solidarity is the opposite of fascism.
To have unions that are fit for purpose in this existential battle for the future of the nation - and, quite possibly, the human race - we desperately need better leaders. Like the union bosses who gave us the two-tier contract, many of our union leaders see their mission as narrowly serving their existing members, and not other workers - not even workers who might some day become their members.
To get a sense of how bad it's gotten, consider these five facts:
I. Public support for unions is at its highest level since the Carter administration;
II. More workers want to join unions than at any time in living memory;
III. Unions have larger cash reserves than at any time in history;
IV. Under Biden, the National Labor Relations Board was more friendly to unions than at any time in generations; and
V. During the Biden years, the number of unionized workers in America went down, not up.
That's because union bosses - sitting on a mountain of cash, surrounded by workers begging to be organized - decided that their priority was their existing members, and declined to spend more than a pittance of their cash reserves on organizing efforts.
This is suicidal - as self-destructive as the two-tier contract was. To pull off a general strike, we will need mass civil disobedience, and a willingness to ignore the Taft-Hartley Act's ban on solidarity strikes. Trump's NLRB isn't just hostile to workers - he's illegally fired so many of its commissioners that it can't even perform most of its functions. But a militant labor movement could turn that to its advantage, because militants know that when Trump fires the refs, you don't have to stop the game - you can throw out the rule book:
https://pluralistic.net/2025/01/29/which-side-are-you-on-2/#strike-three-yer-out
This is the historic opportunity and challenge before us - to occupy our unions, save our workplace democracies, and then save our national democracy itself.
Hey look at this (permalink)

- Secret Documents Show Pepsi and Walmart Colluded to Raise Food Prices Across the Economy https://www.thebignewsletter.com/p/secret-documents-show-pepsi-and-walmart
- 20 Years of Digital Life, Gone in an Instant, thanks to Apple https://hey.paris/posts/appleid/
- Enjoy the new year in your headset https://brucesterling.tumblr.com/post/802750890885906432/gartner-predicts-25-of-people-will-spend-at-least
- Merry Mixmas 2025 https://djriko.com/merry-mixmas-mixes/
- I Wasted 8 Years of My Life in Crypto https://x.com/kenchangh/status/1994854381267947640
Object permanence (permalink)
#20yrsago Sony Artists offering home-burned CDs to replace spyware-infected discs https://web.archive.org/web/20060719082355/http://www.rollingstone.com/news/story/8950981/copyprotection_troubles_grow
#20yrsago Pentagon bravely vigilant against sinister, threatening Quakers https://www.nbcnews.com/id/wbna10454316
#20yrsago Brooklyn camera-store crooks threaten activist's life https://thomashawk.com/2005/12/brooklyn-photographer-don-wiss.html
#20yrsago Britannica averages 3 bugs per entry; Wikipedia averages 4 https://www.nature.com/articles/438900a
#20yrsago Diane Duane wonders if she should self-publish trilogy conclusion https://web.archive.org/web/20051215151654/https://outofambit.blogspot.com/archives/2005_12_01_outofambit_archive.html#113446948274092674
#20yrsago Table converts to truncheon and shield http://www.jamesmcadam.co.uk/portfolio_html/sb_table.html
#20yrsago Royal Society members speak out for open access science publishing https://web.archive.org/web/20051210023301/https://www.frsopenletter.org/
#20yrsago TiVo upgrading company offers $25k for hacks to the new DirecTV PVR https://web.archive.org/web/20051215050848/https://www.wkblog.com/2005/12/weaknees_offers_up_to_25000_fo.html
#20yrsago Michigan HS students will need to take online course to graduate https://web.archive.org/web/20051215052603/https://www.chronicle.com/free/2005/12/2005121301t.htm
#15yrsago Hiaasen's STAR ISLAND: blisteringly funny tale of sleazy popstars and paparazzi https://memex.craphound.com/2010/12/13/hiaasens-star-island-blisteringly-funny-tale-of-sleazy-popstars-and-paparazzi/
#15yrsago Dan Gillmor's Mediactive: masterclass in 21st century journalism demands a net-native news-media https://memex.craphound.com/2010/12/13/dan-gillmors-mediactive-masterclass-in-21st-century-journalism-demands-a-net-native-news-media/
#15yrsago Council of Europe accuses Kosovo's prime minister of organlegging https://www.theguardian.com/world/2010/dec/14/kosovo-prime-minister-llike-mafia-boss
#15yrsago Gold pills turn your innermost parts into chambers of wealth https://web.archive.org/web/20110930011010/https://www.citizen-citizen.com/collections/all/products/gold-pills
#10yrsago The Red Cross brought in an AT&T exec as CEO and now it's a national disaster https://www.propublica.org/article/the-corporate-takeover-of-the-red-cross
#10yrsago Philips pushes lightbulb firmware update that locks out third-party bulbs https://www.techdirt.com/2015/12/14/lightbulb-drm-philips-locks-purchasers-out-third-party-bulbs-with-firmware-update/
#10yrsago UK spy agency posts data-mining software to Github https://github.com/gchq/Gaffer
#10yrsago Cybercrime 3.0: stealing whole houses https://memex.craphound.com/2015/12/14/cybercrime-3-0-stealing-whole-houses/
#10yrsago US politicians, ranked by their willingness to lie https://www.nytimes.com/2015/12/13/opinion/campaign-stops/all-politicians-lie-some-lie-more-than-others.html
#10yrsago 24 privacy tools - not messaging apps - that don't exist https://dymaxion.org/essays/pleasestop.html
#10yrsago North Carolina town rejects solar because it'll suck up sunlight and kill the plants https://web.archive.org/web/20250813151735/https://www.roanoke-chowannewsherald.com/2015/12/08/woodland-rejects-solar-farm/
#10yrsago Giant hats were the cellphones of the silent movie era https://pipedreamdragon.tumblr.com/post/135065922736/movie-movie-etiquette-warnings-shown-before
#10yrsago Plaid Lumberjack Cake https://www.youtube.com/watch?v=_1hDl53c-kw
#10yrsago MRA Scott Adams: pictures and words by Scott Adams, together at last https://web.archive.org/web/20151214002415/https://mradilbert.tumblr.com/
#10yrsago American rents reach record levels of unaffordability https://www.nbcnews.com/business/economy/its-not-just-poor-who-cant-make-rent-n478501
#5yrsago Well-Armed Peasants https://pluralistic.net/2020/12/13/art-thou-down/#forsooth
#5yrsago Where money comes from https://pluralistic.net/2020/12/14/situation-normal/#mmt
#5yrsago China's best investigative stories of 2020 https://pluralistic.net/2020/12/14/situation-normal/#gijn
#5yrsago Situation Normal https://pluralistic.net/2020/12/14/situation-normal/#more-constellation-games
#1yrago Social media needs (dumpster) fire exits https://pluralistic.net/2024/12/14/fire-exits/#graceful-failure-modes
#1yrago The GOP is not the party of workers https://pluralistic.net/2024/12/13/occupy-the-democrats/#manchin-synematic-universe
Upcoming appearances (permalink)

- Hamburg: Chaos Communications Congress, Dec 27-30
https://events.ccc.de/congress/2025/infos/index.html
- Denver: Enshittification at Tattered Cover Colfax, Jan 22
https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937
- Colorado Springs: Guest of Honor at COSine, Jan 23-25
https://www.firstfridayfandom.org/cosine/
Recent appearances (permalink)
- (Digital) Elbows Up (OCADU)
https://vimeo.com/1146281673
- How to Stop "Ensh*ttification" Before It Kills the Internet (Capitalisn't)
https://www.youtube.com/watch?v=34gkIvYiHxU
- Enshittification on The Daily Show
https://www.youtube.com/watch?v=d2e-c9SF5nE
- Enshittification with Four Ways to Change the World (Channel 4)
https://www.youtube.com/watch?v=tZQaEeuuI3Q
- The Plan is to Make the Internet Worse. Forever. (Novara Media)
https://www.youtube.com/watch?v=7wE8G-d7SnY
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
-
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
-
A Little Brother short story about DIY insulin PLANNING

This work - excluding any serialized fiction - is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
For those in the US doing holiday shopping, you have a couple more days to order from my store and receive your items by Christmas! Stickers and prints should arrive in time if ordered by December 17; other items should be ordered in the next day or two.
Today's links
- Federal Wallet Inspectors: Does tech *really* move too fast to regulate?
- Hey look at this: Delights to delectate.
- Object permanence: Soda can Van de Graaff; Amazon rents a copy of the web; Boardgame Remix Kit; No furniture photos please we're British; Youtube vs fair use; Carbon offsets are bullshit; Arkham model railroad; Happy Birthday is in the public domain; Ted Cruz hires Cambridge Analytica; The kid who wanted to join the NSA; Daddy Daughter Xmas Podcast 2020.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Federal Wallet Inspectors (permalink)
Look, I'm not trying to say that new technologies never raise gnarly new legal questions, but what I am saying is that a lot of the time, the "new legal challenges" raised by technology are somewhere between 95% and 100% bullshit, ginned up by none-too-bright tech bros and their investors, and then swallowed by regulators and lawmakers who are either so credulous they'd lose a game of peek-a-boo, or (likely) in on the scam.
Take "fintech." As Trashfuture's Riley Quinn is fond of saying, "when you hear 'fintech,' think 'unregulated bank'":
https://pluralistic.net/2022/03/02/shadow-banking-2-point-oh/#leverage
I mean, the whole history of banking is: "Bankers think of a way to do reckless things that are wildly profitable (in the short term) and catastrophic (in the long term). They offer bribes and other corrupt incentives to their watchdogs to let them violate the rules, which leads to utter disaster." From the 19th century "panics" to the crash of '29 to the S&L collapse to the 2008 Great Financial Crisis and beyond, this just keeps happening.
Much of the time, the bankers involved have some tissue-thin explanation for why what they're doing isn't really a violation of the rules. Think of the lenders who, in the runup to the Great Financial Crisis, insisted that they weren't engaged in risky lending because they had a fancy equation that proved that the mortgage-backed securities they were issuing were all sound, and it was literally impossible that they'd all default at once.
The fact that regulators were bamboozled by this is enraging. In hindsight (and for many of us at least, at the time), it's obvious that the bankers went to their watchdogs and said, "We'd like to break the law," and the watchdogs said, "Sure, but would you mind coming up with some excuse that I can repeat later when someone asks me why I let you do this crime?"
It's like in the old days of medical marijuana, where you'd get on a call with a dial-a-doc and say, "Please can I have some weed?" and the doc would say, "Tell me about your headaches," and you'd say, "Uh, I have headaches?" and they'd say "Great, here's your weed!"
The alternative is that these regulators are so bafflingly stupid that they can't be trusted to dress themselves. "My stablecoin is a fit financial instrument to integrate into the financial system" is as credible a wheeze as some crypto bro walking up to Cory Booker, flashing a homemade badge, and snapping out, "Federal Wallet Inspector, hand it over."
I mean, at that point, I kind of hope they're corrupt, because the alternative is that they are basically a brainstem and a couple of eyestalks in a suit.
What I'm saying is, "We just can't figure out if crypto is violating finance laws" is a statement that can only be attributed to someone very stupid, or in on the game.
Speaking of "someone very stupid, or in on the game," Congress just killed a rule that would have guaranteed that the US military could repair its own materiel.
Military right to repair is the most brainless of all possible no-brainers. When a generator breaks down in the field - even in an active war-zone - the US military has to ship it back to America to be serviced by the manufacturer. That's not because you can't train a Marine to fix a generator - it's because the contractual and technical restrictions that military contractors insist on ban the military from fixing its stuff:
https://www.pogo.org/fact-sheets/fact-sheet-the-right-to-repair-for-the-united-states-military
This violates a very old principle in sound military administration. Abraham Lincoln insisted that the contractors who supplied the Union army had to use standardized tooling and ammo, because it would be very embarrassing for the Commander-in-Chief to have to stand on the field at Gettysburg with a megaphone and shout, "Sorry boys, war's canceled this week, our sole supplier's gone on vacation."
And yet, after mergers of large military contractors resulted in just a handful of "primary" companies serving the Pentagon, private equity vampires snapped up all the subcontractors who were sole-source suppliers of parts to those giants. They slashed the prices of those parts so that the primary contractors would use as many as possible in the materiel they supplied to the US DoD, then raised the prices of replacement parts - some with 10,000% margins - which the Pentagon has to pay for as long as it owns those jets and other big-ticket items:
https://pluralistic.net/2021/01/29/fractal-bullshit/#dayenu
This isn't a complicated scam. It's super straightforward, and the right-to-repair rule that Congress killed addressed it head on. But Congressional enemies of this bill insisted that it would have untold "unintended consequences," and instead passed a complex rule, riddled with loopholes, as though there were something unique and subtle about the blunt issue of price-gouging.
Either these lawmakers are so stupid that they fell for the ole "Federal Wallet Inspector" gambit, or they're in on the game. I know which explanation my money is on.
Maybe this has already occurred to you, but lately I've come to realize that there's another dimension to this, a way in which critics of tech help this gambit along. After all, it's pretty common for tech critics to preface their critiques with words to the effect of, "Of course, this technology has raced ahead of regulators' ability to keep pace with it. Those dastardly tech-bros have slipped the net once again!"
The unspoken (and sometimes very loudly spoken) corollary of this is, "Only a tech critic as perspicacious and forward-looking as me is capable of matching wits with those slippery tech-bros, and I have formulated a sui generis policy prescription that can head them off at the pass."
Take the problem of deepfakes, including deepfake porn. There's a pretty straightforward policy response to this: a privacy law that lets you block the abuse of your private information (including its use to create deepfakes) by anyone who processes your personal data for an illegitimate purpose. To make sure that this law can be enforced, include a "private right of action," which means that individuals can sue to enforce it (and activist orgs and no-win/no-fee lawyers can sue on their behalf). That way, you can get justice even if the state Attorney General or the federal Department of Justice decides not to take your case.
Privacy law is a great idea. It can navigate nuances, like the fact that privacy is collective, not individual - for example, it can intervene when your family members give their (your) DNA to a scam like 23andme, or when a friend posts photos of you online:
https://jacobin.com/2021/05/cory-doctorow-interview-bill-gates-intellectual-property
But privacy law gets a bad rap. In the EU, they've had the GDPR - a big, muscular privacy law - for nine years, and all it's really done is drown the continent in cookie-consent pop-ups. But that's not because the GDPR is flawed; it's because Ireland is a tax-haven that has lured in the world's worst corporate privacy-violators, and to keep them from moving to another tax-haven (like Malta or Cyprus or Luxembourg), it has to turn itself into a crime-haven. So for the entire life of the GDPR, all the important privacy cases in Europe have gone to Ireland, and died there:
https://pluralistic.net/2025/12/01/erin-go-blagged/#big-tech-omerta
Now, again, this isn't a complicated technical question that is hard to resolve through regulation. It's just boring old corruption. I'm not saying that corruption is easy to solve, but I am saying that it's not complicated. Irish politicians made the country's economy dependent on the Irish state facilitating criminal activity by American firms. The EU doesn't want to provoke a constitutional crisis by forcing Ireland (and the EU's other crime-havens) to halt this behavior.
That's a hard thing to do! It's just not a complicated thing to do. The routine violations of EU privacy law by American tech companies aren't the result of "tech racing ahead of the law." It's just corruption. You can't fix corruption by passing more laws; they'll just be corruptly enforced, too.
But thanks to a mix of bad incentives - politicians wanting to be seen to do something without actually upsetting the apple-cart; AI critics wanting to inflate their importance by claiming that they're fighting something novel and complex, as opposed to something that's merely boring and hard - we get policy proposals that will likely worsen the problem.
Take Denmark's decision to fight deepfakes by creating a new copyright over your likeness.
Copyright - a property right - is an incredibly bad way to deal with human rights like privacy. For one thing, it makes privacy into a luxury good that only the wealthy can afford (remember, a discount for clicking through a waiver of your privacy rights is the same thing as an extra charge for not waiving your privacy rights). For another, property rights are very poorly suited to managing things that have joint ownership, such as private information. As soon as you turn private information into private property, you have to answer questions like, "Which twin owns the right to their face?" and "Who owns the right to the fact that your abusive mother is your mother - you, or her? And if it's her, does she get to stop you from publishing a memoir about the abuse?"
Copyright - a state-backed transferable monopoly over expression - is really hard to get right. Legislatures and courts have struggled to balance free expression and copyright for centuries, and there's a complex web of "limitations and exceptions" to copyright. Privacy is also incredibly complex, and has its own limitations and exceptions, and they are very different from copyright's limits. I mean, they have to be: privacy rules defend your human right to a personal zone of autonomy; copyright is intended to create economic incentives to produce new creative works. It would be very weird if the same rules served both ends.
I can't believe that Denmark's legislators failed to consider privacy as the solution to deepfakes. If they did, they are very, very stupid. Rather, they decided that fighting the corruption that keeps privacy law from being enforced in the EU was too hard, so they just did something performative, creating a raft of new problems, without solving the old one.
Here in the USA, there are lots of lawmakers falling into this trap. Take the response to chatbots that give harmful advice to children and teens. The answer that many American politicians (as well as lawmakers abroad, in Australia, Canada, the UK and elsewhere) have come up with is to force AI companies to identify who is and is not a child, and to treat them differently.
This boils down to a requirement for AI companies to collect much more information on their users (to establish their age), which means that all the AI harms that stem from privacy violations (AI algorithms that steal wages, hike prices, discriminate in hiring and lending and policing, etc) are now even harder to stop.
A simple alternative to this would be updating privacy law to limit how AI companies can gather and use everyone's data - which would mean that you could protect kids from privacy invasions without (paradoxically) requiring them (and you) to disclose all kinds of private information to determine how old they are.
The insistence - by AI critics and AI boosters - that AI is so different from other technologies that you can't address it by limiting the collection, retention and processing of private information is a way in which AI critics and AI hucksters end up colluding to promote a view of AI as an exceptional technology. It's not. AI is a normal technology:
https://www.aisnakeoil.com/p/ai-as-normal-technology
Sometimes this argument descends into grimly hilarious parody. Argue for limits on AI companies' collection, retention and processing of private information, and AI boosters will tell you that this would require so much labor-intensive discernment about training data that it would be impossible to continue training AI until it becomes intelligent enough to solve all our problems. But also, when you press the issue, they'll sometimes say that AI is already so "intelligent" that it can derive (that is, guess) private information about you without needing your data, so a new privacy law won't help.
In other words, applying privacy limitations to AI means we'll never get a "superintelligence"; and also, we already have a superintelligence, so there's no point in applying privacy limitations to AI.
It's true that technology can give rise to novel regulatory challenges, but it's also true that claiming that a technology is so novel that existing regulation can't resolve its problems is just a way of buying time to commit more crimes before the regulators finally realize that your flashy new technology is just a boring old scam.
Hey look at this (permalink)

- Every Drink in "Casablanca" (1942) https://bruces.medium.com/every-drink-in-casablanca-1942-348e7c543810
- clbre is a fork of calibre with the aim of stripping out the AI integration https://github.com/grimthorpe/clbre
- EU Report Distills AI-Training Lessons from Napster Piracy Era: Don't Sue, License https://torrentfreak.com/eu-report-distills-ai-training-lessons-from-napster-piracy-era-dont-sue-license/
- Rebuilding Imaginary Futures: Il Versificatore, 2025 https://bruces.medium.com/rebuilding-imaginary-futures-il-versificatore-2025-3178a12be2aa
- John Varley, 1947-2025 https://floggingbabel.blogspot.com/2025/12/john-varley-1947-2025.html
Object permanence (permalink)
#20yrsago Americans smile, Brits grimace? https://www.nytimes.com/2005/12/11/magazine/national-smiles.html
#20yrsago HOWTO make a soda-can Van de Graaff https://scitoys.com/scitoys/scitoys/electro/electro6.html
#20yrsago Credit-card-sized USB drive https://web.archive.org/web/20051214084824/http://walletex.com/
#20yrsago Homeland Security: Mini-golf courses are terrorist targets https://web.archive.org/web/20060215153821/https://www.kron.com/Global/story.asp?S=4226663
#20yrsago Amazon rents access to a copy of the Web https://battellemedia.com/archives/2005/12/alexa_make_that_amazon_looks_to_change_the_game
#15yrsago Pornoscanners trivially defeated by pancake-shaped explosives https://web.archive.org/web/20101225211840/http://springerlink.com/content/g6620thk08679160/fulltext.pdf
#10yrsago HO fhtagn! Detailed model railroad layout recreates HP Lovecraft's Arkham https://web.archive.org/web/20131127042302/http://www.ottgallery.com/MRR.html
#10yrsago Suicide rates are highest in spring - not around Christmas https://www.theatlantic.com/health/archive/2015/12/no-suicides-dont-rise-during-the-holidays/419436/
#10yrsago Airbnb hosts consistently discriminate against black people https://www.benedelman.org/publications/airbnb-011014.pdf
#10yrsago What will it take to get MIT to stand up for its own students and researchers? https://www.youtube.com/watch?v=cQdl_JdTars
#10yrsago Experts baffled to learn that 2-year-olds are being prescribed psychiatric drugs https://www.nytimes.com/2015/12/11/us/psychiatric-drugs-are-being-prescribed-to-infants.html?_r=0
#10yrsago Happy Birthday's copyright status is finally, mysteriously settled https://www.nytimes.com/2015/12/10/business/media/happy-birthday-copyright-case-reaches-a-settlement.html?_r=0
#10yrsago Proposal: keep the nuclear launch codes in an innocent volunteer's chest-cavity https://blog.nuclearsecrecy.com/2012/09/19/the-heart-of-deterrence/
#10yrsago Obama promises statement on encryption before Xmas (maybe) https://web.archive.org/web/20151211042128/https://www.dailydot.com/politics/white-house-encryption-policy-response-petition/
#10yrsago Harlem Cryptoparty: Crypto matters for #blacklivesmatter https://web.archive.org/web/20151218183924/https://motherboard.vice.com/read/the-black-community-needs-encryption
#10yrsago Backslash: a toolkit for protesters facing hyper-militarized, surveillance-heavy police https://arstechnica.com/tech-policy/2015/12/backslash-anti-surveillance-gadgets-for-protesters/
#10yrsago Ted Cruz campaign hires dirty data-miners who slurped up millions of Facebook users' data https://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data
#10yrsago The Tor Project has a new executive director: former EFF director Shari Steele! https://blog.torproject.org/greetings-tors-new-executive-director/
#10yrsago What I told the kid who wanted to join the NSA https://www.theguardian.com/us-news/2015/dec/11/west-point-cybersecurity-nsa-privacy-edward-snowden
#10yrsago Copyfraud: Disney's bogus complaint over toy photo gets a fan kicked off Facebook https://arstechnica.com/tech-policy/2015/12/disney-initially-drops-then-doubles-down-on-dmca-claim-over-star-wars-figure-pic/
#15yrsago Sales pitch from an ATM-skimmer vendor https://krebsonsecurity.com/2010/12/why-gsm-based-atm-skimmers-rule/
#15yrsago Boardgame Remix Kit makes inspired new games out of old Monopoly, Clue, Trivial Pursuit and Scrabble sets https://web.archive.org/web/20101214210548/http://www.boardgame-remix-kit.com/sample/boardgame-remix-kit-sample.pdf
#10yrsago Britons will need copyright licenses to post photos of their own furniture https://arstechnica.com/tech-policy/2015/12/you-may-soon-need-a-licence-to-take-photos-of-that-classic-designer-chair-you-bought/
#5yrsago Outgoing Facebookers blast the company https://pluralistic.net/2020/12/12/fairy-use-tale/#badge-posts
#5yrsago Carbon offsets are bullshit https://pluralistic.net/2020/12/12/fairy-use-tale/#greenwashing
#5yrsago Youtube, fair use, competition, and the death of the artist https://pluralistic.net/2020/12/12/fairy-use-tale/#content-id
#5yrsago A lethally boring story https://pluralistic.net/2020/12/11/number-eight/#erisa
#5yrsago Daddy Daughter Xmas Podcast 2020 https://pluralistic.net/2020/12/11/number-eight/#youll-go-down-in-mystery
#5yrsago Antitrust and Facebook's paid disinformation https://pluralistic.net/2020/12/11/number-eight/#curse-of-bigness
#1yrago The housing emergency and the second Trump term https://pluralistic.net/2024/12/11/nimby-yimby-fimby/#home-team-advantage
#1yrago A Democratic media strategy to save journalism and the nation https://pluralistic.net/2024/12/12/the-view-from-somewhere/#abolish-rogan
Upcoming appearances (permalink)

- Hamburg: Chaos Communications Congress, Dec 27-30
https://events.ccc.de/congress/2025/infos/index.html
- Denver: Enshittification at Tattered Cover Colfax, Jan 22
https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937
- Colorado Springs: Guest of Honor at COSine, Jan 23-25
https://www.firstfridayfandom.org/cosine/
Recent appearances (permalink)
- How to Stop "Ensh*ttification" Before It Kills the Internet (Capitalisn't)
https://www.youtube.com/watch?v=34gkIvYiHxU
- Enshittification on The Daily Show
https://www.youtube.com/watch?v=d2e-c9SF5nE
- Enshittification with Four Ways to Change the World (Channel 4)
https://www.youtube.com/watch?v=tZQaEeuuI3Q
- The Plan is to Make the Internet Worse. Forever. (Novara Media)
https://www.youtube.com/watch?v=7wE8G-d7SnY
- Enshittification (Future Knowledge)
https://futureknowledge.transistor.fm/episodes/enshittification
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
-
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
-
A Little Brother short story about DIY insulin PLANNING

This work - excluding any serialized fiction - is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Today's links
- Instacart reaches into your pocket and lops a third off your dollars: A/B splitting your way into doing a total fucking racism.
- Hey look at this: Delights to delectate.
- Object permanence: Predicting the present; Caller eye-deer; RIP Robert Sheckley; Student protesters and Google Maps vs kettling; If your kids like computers, they're criminals; A billion CC licenses; The moral character of cryptographic work; EC resurrects link-taxes.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Instacart reaches into your pocket and lops a third off your dollars (permalink)
There's a whole greedflation-denial cottage industry that insists that rising prices are either the result of unknowable, untameable and mysterious economic forces, or of workers having too much money and too many jobs.
The one thing we're absolutely not allowed to talk about is the fact that CEOs keep going on earnings calls to announce that they are hiking prices way ahead of any increase in their costs, and blaming inflation:
https://pluralistic.net/2021/11/20/quiet-part-out-loud/#profiteering
Nor are we supposed to notice the "price consultancies" that let the dominant firms in many sectors - from potatoes to meat to rental housing - fix prices in illegal collusive arrangements that are figleafed by the tissue-thin excuse that "if you use an app to fix prices, it's not a crime":
https://pluralistic.net/2025/01/25/potatotrac/#carbo-loading
And we're especially not supposed to notice the proliferation of "personalized pricing" businesses that use surveillance data to figure out how desperate you are and charge you a premium based on that desperation:
https://pluralistic.net/2024/06/05/your-price-named/#privacy-first-again
Surveillance pricing - when you are charged more for the same goods than someone else, based on surveillance data about the urgency of your need and the cash in your bank account - is a way for companies to reach into your pocket and devalue the dollars in your wallet. After all, if you pay $2 for something that I pay $1 for, that's just the company saying that your dollars are only worth half as much as mine:
https://pluralistic.net/2025/06/24/price-discrimination/
It's a form of cod-Marxism: "from each according to their desperation":
https://pluralistic.net/2025/01/11/socialism-for-the-wealthy/#rugged-individualism-for-the-poor
The economy is riddled with surveillance pricing gouging. You are almost certainly paying more than your neighbors for various items, based on algorithmic price-setting, every day. Case in point: More Perfect Union and Groundwork Collaborative teamed up with Consumer Reports to recruit 437 volunteers from across America to log in to Instacart at the same time and buy the same items from 15 stores, and found evidence of surveillance pricing at Albertsons, Costco, Kroger, and Sprouts Farmers Market:
https://groundworkcollaborative.org/work/instacart/
The price-swings are wild. Some test subjects are being charged 23% more than others. The average variance for "the exact same items, from the exact same locations, at the exact same time" comes out to 7%, or "$1,200 per year for groceries" for a family of four (a 7% premium on an annual grocery bill somewhere in the neighborhood of $17,000).
The process by which your greedflation premium is assigned is opaque. The researchers found that Instacart shoppers ordering from Target clustered into seven groups, but it's not clear how Instacart decides how much extra to charge any given shopper.
Instacart - which acquired Eversight, a surveillance pricing company, in 2022 - blamed the merchants (who, in turn, blamed Instacart). Instacart also claimed that it didn't use surveillance data to price goods, but hedged, admitting that the consumer packaged goods duopoly of Unilever and Procter & Gamble do use surveillance data in connection with their pricing strategies.
Finally, Instacart claimed that this was all an "experiment" to "learn what matters most to consumers and how to keep essential items affordable." In other words, they were secretly charging you more (for things like eggs and bread) because somehow that lets them "keep essential items affordable."
Instacart said their goal was to help "retail partners understand consumer preferences and identify categories where they should invest in lower prices."
Anyone who's done online analytics can easily pierce this obfuscation, but for those of you who haven't had the misfortune of directing an iterated, A/B-tested optimization effort, I'll unpack this statement.
Say you have a pool of users and a bunch of variations on a headline. You randomly assign different variants to different users and measure clickthroughs. Then you check to see which variants performed best, and dig into the data you have on those users to see if there are any correlations that tie together users who liked a given approach.
This might let you discover that, say, women over 40 click more often on headlines that mention kittens. Then you generate more variations based on these conclusions - different ways of mentioning kittens - and see which of those variations perform best, and whether the targeted group of users splits into smaller subgroups (women over 40 in the midwest prefer "tabby kitten," while their southern sisters prefer "kitten" without a mention of breed).
By repeatedly iterating over these steps, you can come up with many highly refined variants, and you can use surveillance data to target them to ever narrower, more optimized slices of your user-base.
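To make that loop concrete, here's a minimal sketch of a single assign/measure/segment round, in Python. Everything in it is hypothetical - the variants, the user attributes and the simulated click behavior are invented for illustration, not drawn from any real analytics stack:

# Hypothetical sketch of one round of the iterated A/B loop described above.
import random
from collections import defaultdict

VARIANTS = ["kitten", "tabby kitten", "puppy"]

def simulated_click(user, variant):
    # Stand-in for real user behavior: in this invented world, women
    # over 40 click kitten-flavored headlines more often.
    rate = 0.05
    if user["gender"] == "F" and user["age"] > 40 and "kitten" in variant:
        rate += 0.10
    return random.random() < rate

def run_round(users):
    # Randomly assign a variant to each user and record the outcome.
    results = []
    for user in users:
        variant = random.choice(VARIANTS)
        results.append((user, variant, simulated_click(user, variant)))
    return results

def ctr_by_segment(results, field):
    # Clickthrough rate for every (segment value, variant) pair - the
    # step where correlations between user traits and variants surface.
    tally = defaultdict(lambda: [0, 0])
    for user, variant, clicked in results:
        key = (user[field], variant)
        tally[key][0] += clicked
        tally[key][1] += 1
    return {key: clicks / shown for key, (clicks, shown) in tally.items()}

users = [{"gender": random.choice("MF"), "age": random.randint(18, 75)}
         for _ in range(20000)]
print(ctr_by_segment(run_round(users), "gender"))

Each subsequent round would regenerate the variants and re-slice the segments based on whatever correlations the previous round surfaced.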
Obviously, this is very labor intensive. You have to do a lot of tedious analysis, and generate a lot of variants. This is one of the reasons that slopvertising is so exciting to the worst people on earth: they imagine that they can use AI to create a self-licking ice-cream cone, performing the analysis and generating endless new variations, all untouched by human hands.
But when it comes to prices, it's much easier to produce variants - all you're doing is adding or subtracting from the price you show to shoppers. You don't need to get the writing team together to come up with new ways of mentioning kittens in a headline - you can just raise the price from $6.23 to $6.45 and see if midwestern women over 40 balk or add the item to their shopping baskets.
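As a hypothetical sketch (reusing the shape of the loop above, with the $6.23 example price and invented segments and shopper behavior), the price-probing version is barely any code at all:

import random

def price_variants(base, cents=(-22, 0, 22)):
    # "Generating variants" for prices is just arithmetic on the base.
    return [round(base + c / 100, 2) for c in cents]

def toy_accepts(user, price):
    # Invented stand-in for real shopper behavior.
    return random.random() < (0.9 if price <= 6.23 else 0.6)

def probe(users, base, accepts):
    # Show each user a random price variant; log (segment, price, bought).
    log = []
    for user in users:
        price = random.choice(price_variants(base))
        log.append((user["segment"], price, accepts(user, price)))
    return log

users = [{"segment": random.choice(["midwest_over40", "other"])}
         for _ in range(1000)]
log = probe(users, 6.23, toy_accepts)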
And here's the kicker: you don't need to select by gender, racial or economic criteria to end up with a super-racist and exploitative arrangement. That's because race, gender and socioeconomic status have broad correlates that are easily discoverable through automated means.
For example, thanks to generations of redlining, discriminatory housing policy, wage discrimination and environmental racism, the poorest, sickest neighborhoods in the country are also the most racialized and are also most likely to be "food deserts" where you can't just go to the grocery store and shop for your family.
What's more, the private equity-backed dollar store duopoly has waged a decades-long war on community grocery stores, surrounding them with dollar stores that use their access to preferential discounts (from companies like Unilever and Procter & Gamble, another duopoly) to force grocers out of business:
https://pluralistic.net/2023/03/27/walmarts-jackals/#cheater-sizes
Then these dollar stores run a greedflation scam that is so primitive, it's almost laughable: they just charge customers much higher amounts than the prices shown on the shelves and price-tags:
https://www.consumeraffairs.com/news/do-all-those-low-dollar-store-prices-really-add-up-120325.html
When you live in a food desert where your only store is a Dollar General that defrauds you at the cash-register, you are more likely to accept a higher price from Instacart, because you have fewer choices than someone in a middle-class neighborhood with two or three competing grocers. And the people who live in those food deserts are more likely to be poor, which, in America, is an excellent predictor of whether they are Black or brown.
Which is to say, without ever saying "Charge Black people more for groceries," Instacart can easily A/B split its way into a system where they predictably and reliably charge Black people more for groceries. That's the old cod-Marxism at work: "from each according to their desperation."
This is so well-understood that anyone who sets one of these systems in motion should be understood to be deliberately seeking to do racist profiteering under cover of an algorithm. It's empiricism-washing: "I'm not racist, I just did some math" (that produced a predictably racist outcome).
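A toy simulation makes the point concrete. The revenue-maximizer below never sees race - only a zip code and simulated purchase behavior - but because this invented world bakes in the food-desert dynamics described above (fewer competing grocers means shoppers tolerate higher prices), it reliably lands on a higher price in exactly those neighborhoods. All names, numbers and behavior here are fabricated for illustration:

import random

ZIPS = {"greenfield": 3, "food_desert": 0}  # competing grocers per zip

def will_buy(zipcode, price):
    # Fewer alternatives -> shoppers tolerate higher prices (plus noise).
    reservation = 5.25 + 0.50 * (3 - ZIPS[zipcode])
    return price <= reservation + random.uniform(-0.40, 0.40)

def optimal_price(zipcode, candidates, trials=10000):
    # Pick whichever candidate maximizes simulated revenue per shopper.
    def revenue(price):
        sales = sum(will_buy(zipcode, price) for _ in range(trials))
        return price * sales / trials
    return max(candidates, key=revenue)

for zipcode in ZIPS:
    print(zipcode, optimal_price(zipcode, [4.99, 5.49, 5.99, 6.49]))
# Typically prints: greenfield 4.99, food_desert 5.99 - the food desert
# reliably gets the higher price, no demographic data required.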
This is the dark side and true meaning of "business optimization." The optimal business pays its suppliers and workers nothing, and charges its customers everything it can. Obviously, businesses need to settle for suboptimal outcomes, because workers won't show up if they don't get paid, and customers won't buy things that cost everything they have¹.
¹ Unless, of course, you are an academic publisher, in which case this is just how you do business.
A business "optimizes" its workforce by finding ways to get them to accept lower wages. For example, they can bind their workers with noncompete "agreements" that ban Wendy's cashiers from quitting their job and making $0.25 more per hour at the McDonald's next door (one in 18 American workers have been locked into one of these contracts):
https://pluralistic.net/2025/09/09/germanium-valley/#i-cant-quit-you
Or they can lock their workers in with "training repayment agreement provisions" (TRAPs) - contractual clauses that force workers to pay their bosses thousands of dollars if they quit or get fired:
https://pluralistic.net/2022/08/04/its-a-trap/#a-little-on-the-nose
But the most insidious form of worker optimization is "algorithmic wage discrimination." That's when a company uses surveillance data to lower the wages of workers. For example, contract nurses are paid less if the app that hires them discovers (through the unregulated data-broker sector) that they have a lot of credit-card debt. After all, nurses who are heavily indebted can't afford to be choosy and turn down lowball offers:
https://pluralistic.net/2024/12/18/loose-flapping-ends/#luigi-has-a-point
This is the other form of surveillance pricing: pricing labor based on surveillance data. It's more cod-Marxism: "From each according to their desperation."
Forget "becoming ungovernable": to defeat these evil fuckers, we have to become unoptimizable:
https://pluralistic.net/2025/08/20/billionaireism/#surveillance-infantalism
How do we do that? Well, nearly every form of "optimization" begins with surveillance. They can't figure out whether they can charge you more if they can't spy on you. They can't figure out whether they can pay you less if they can't spy on you, either.
And the reason they can spy on you is because we let them. The last consumer privacy law to pass out of Congress was a 1988 bill that bans video-store clerks from disclosing your VHS rental history. Every other form of consumer surveillance is permitted under US federal law.
So step one of this process is to ban commercial surveillance. Banning algorithmic price discrimination is all well and good, but it is, ultimately, a form of redistribution. We're trying to make the companies share some of the excess they extract from our surveillance data. But predistribution - ending surveillance itself, in this case - is always far more effective than redistribution:
https://pluralistic.net/2025/10/31/losing-the-crypto-wars/#surveillance-monopolism
How do we do that? Well, we need to build a coalition. At the Electronic Frontier Foundation, we call this "privacy first": you can't solve all the internet's problems by fixing privacy, but you won't fix most of them unless you get privacy right, and so the (potential) coalition for a strong privacy regime is large and powerful:
https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy
But of course, "privacy first" doesn't mean "just privacy." We also need tools that target algorithmic pricing per se. In New York State, there's a new law that requires disclosure of algorithmic pricing, in the form of a prominent notification reading, "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA."
This is extremely weaksauce, and might even be worse than nothing. In California we have Prop 65, a rule that requires businesses to post signs and add labels any time they expose you to chemicals "known to the state of California to cause cancer." This caveat emptor approach (warn people, let them vote with their wallets) has led to every corner of California's built environment being festooned with these warnings. Today, Californians just ignore these warnings, the same way that web users ignore the "privacy policy" disclosures on the sites they visit:
https://pluralistic.net/2025/04/19/gotcha/#known-to-the-state-of-california-to-cause-cancer
The right approach isn't to (merely) warn people about carcinogens (or privacy risks). The right approach is regulating harmful business practices, whether those practices give you a tumor or pick your pocket.
Under Biden, then-FTC chair Lina Khan undertook proceedings to ban algorithmic pricing altogether. Trump's FTC killed that, along with all the other quality-of-life-enhancing measures the FTC had in train (Trump's FTC chair replaced these with a program to root out "wokeness" in the agency).
Today, Khan is co-chair of Zohran Mamdani's transition team, and she will use the mayor's authority (under the New York City Consumer Protection Law of 1969, which addresses "unconscionable" commercial practices) to ban algorithmic pricing in NYC:
https://pluralistic.net/2025/11/15/unconscionability/#standalone-authority
Khan wasn't Biden's only de-optimizer. Under chair Rohit Chopra, Biden's Consumer Financial Protection Bureau actually banned the data-brokers who power surveillance pricing.
And of course, Trump's CFPB (neutered by Musk and his broccoli-haired brownshirts at DOGE) killed that effort:
https://pluralistic.net/2025/05/15/asshole-to-appetite/#ssn-for-sale
But the CFPB staffer who ran that effort is now working to leverage a New Jersey state privacy law to crush the data-broker industry:
https://www.wired.com/story/daniels-law-new-jersey-online-privacy-matt-adkisson-atlas-lawsuits/
These are efforts to optimize corporations for human thriving, by making them charge us less and pay us more. For while we are best off when we are unoptimizable, we are also best off when corporations are totally optimized: for our benefit.
(Image: Cryteria, CC BY 3.0, modified)
Hey look at this (permalink)

- Uber is selling your ride and food ordering data to advertisers for marketing insights https://boingboing.net/2025/12/08/uber-is-selling-your-ride-and-food-ordering-data-to-advertisers-for-marketing-insights.html
- 404 Media Is Making a Zine https://www.404media.co/404-media-is-making-a-zine/
- Maybe a General Strike Isn't So Impossible Now https://labornotes.org/2025/12/maybe-general-strike-isnt-so-impossible-now
- The Naibbe cipher: a substitution cipher that encrypts Latin and Italian as Voynich Manuscript-like ciphertext https://www.tandfonline.com/doi/full/10.1080/01611194.2025.2566408
- Bringing organizational maturity to radical groups https://blog.bl00cyb.org/2025/12/bringing-organizational-maturity-to-radical-groups/
Object permanence (permalink)
#20yrsago Free voicemail helps homeless people get jobs https://web.archive.org/web/20051210021850/http://www.cvm.org/
#20yrsago Anti-P2P company decides to focus on selling music instead https://de.advfn.com/borse/NASDAQ/LOUD/nachrichten/13465769/loudeye-to-exit-content-protection-services-busine
#20yrsago Caller Eye-Deer's eyes glow when phone rings https://www.flickr.com/photos/84221353@N00/71889050/in/pool-69453349@N00
#20yrsago EFF to Sunncomm: release a list of all infected CDs! https://web.archive.org/web/20051212072537/https://www.eff.org/deeplinks/archives/004245.php
#20yrsago Only 2% of music-store downloaders care about legality of their music https://web.archive.org/web/20051225200658/http://www.mp3newswire.net/stories/5002/tempo2005.html
#20yrsago Dykes on Bikes gives the Trademark Office a linguistics lesson https://web.archive.org/web/20060523133217/https://www.sfgate.com/cgi-bin/article.cgi?file=/c/a/2005/12/09/MNGQOG5D7P1.DTL&type=printable
#20yrsago Robert Sheckley has died https://nielsenhayden.com/makinglight/archives/007078.html
#20yrsago Xbox 360 DRM makes you rip your CDs again https://www.gamespot.com/articles/microsoft-xbox-360-hands-on-report/1100-6139672/
#20yrsago Music publishers: Jail for lyric-sites http://news.bbc.co.uk/2/hi/entertainment/4508158.stm
#15yrsago 2600 Magazine condemns DDoS attacks against Wikileaks censors https://web.archive.org/web/20101210213130/https://www.2600.com/news/view/article/12037
#15yrsago UK supergroup records 4'33", hopes to top Xmas charts https://www.theguardian.com/music/2010/dec/06/cage-against-machine-x-factor
#15yrsago FarmVille's secret: making you anxious https://web.archive.org/web/20101211120105/http://www.gamasutra.com/view/feature/6224/catching_up_with_jonathan_blow.php?print=1
#15yrsago Rogue Archivist beer https://web.archive.org/web/20101214060929/https://livingproofbrewcast.com/2010/12/giving-the-rogue-archivist-to-its-namesake/
#15yrsago Hossein "Hoder" Derakhshan temporarily released from Iranian prison https://cyrusfarivar.com/blog/2010/12/09/iranian-blogging-pioneer-temporarily-released-from-prison/
#15yrsago Student protesters in London use Google Maps to outwit police "kettling" https://web.archive.org/web/20101212042006/https://bengoldacre.posterous.com/student-protestors-using-live-tech-to-outwit
#15yrsago Google foreclosure maps https://web.archive.org/web/20170412162114/http://ritholtz.com/2010/12/google-map-foreclosures/
#15yrsago Theory and practice of queue design https://passport2dreams.blogspot.com/2010/12/third-queue.html
#15yrsago Legal analysis of the problems of superherodom https://lawandthemultiverse.com/
#10yrsago A great, low-tech hack for teaching high-tech skills https://miriamposner.com/blog/a-better-way-to-teach-technical-skills-to-a-group/
#10yrsago In case you were wondering, there's no reason to squirt coffee up your ass https://scienceblogs.com/insolence/2015/12/10/starbutts-or-how-is-it-still-a-thing-that-people-are-shooting-coffee-up-their-nether-regions
#10yrsago Survey of wealthy customers leads insurer to offer "troll insurance" https://www.telegraph.co.uk/finance/newsbysector/banksandfinance/insurance/12041832/Troll-insurance-to-cover-the-cost-of-internet-bullying.html
#10yrsago US State Department staffer sexually blackmailed women while working at US embassy https://web.archive.org/web/20151210230259/https://www.networkworld.com/article/3013633/security/ex-us-state-dept-worker-pleads-guilty-to-extensive-sextortion-hacking-and-cyberstalking-acts.html
#10yrsago Robert Silverberg's government-funded guide to the psychoactive drugs of sf https://web.archive.org/web/20151211050648/https://motherboard.vice.com/read/the-us-government-funded-an-investigation-into-sci-fi-drug-use-in-the-70s
#10yrsago Toy demands that kids catch crickets and stuff them into an electronic car https://www.wired.com/2015/12/um-so-the-bug-racer-is-an-actual-toy-car-driven-by-crickets/
#10yrsago The crypto explainer you should send to your boss (and the FBI) https://web.archive.org/web/20151209011457/https://www.washingtonpost.com/news/the-switch/wp/2015/12/08/you-already-use-encryption-heres-what-you-need-to-know-about-it/
#10yrsago French PM defies Ministry of Interior, says he won't ban open wifi or Tor https://web.archive.org/web/20160726031106/https://www.connexionfrance.com/Wifi-internet-ban-banned-17518-view-article.html
#10yrsago The no-fly list really is a no-brainer https://www.theguardian.com/us-news/2015/dec/09/no-fly-list-errors-gun-control-obama
#10yrsago America: shrinking middle class, growing poverty, the rich are getting richer https://www.pewresearch.org/social-trends/2015/12/09/the-american-middle-class-is-losing-ground/
#10yrsago Marriott removing desks from its hotel rooms "because Millennials" https://web.archive.org/web/20151210034312/http://danwetzelsports.tumblr.com/post/134754150507/who-stole-the-desk-from-my-hotel-room
#10yrsago China's top Internet censor: "There's no Internet censorship in China" https://hongkongfp.com/2015/12/09/there-is-no-internet-censorship-in-china-says-chinas-top-censor/
#10yrsago Stolen-card crime sites use "cop detection" algorithms to flag purchases https://krebsonsecurity.com/2015/12/when-undercover-credit-card-buys-go-bad/
#10yrsago UK National Crime Agency: if your kids like computers, they're probably criminals https://www.youtube.com/watch?v=DjYrxzSe3DU
#10yrsago US immigration law: so f'ed up that Trump's no-Muslim plan would be constitutional https://www.nytimes.com/2015/12/10/opinion/trumps-anti-muslim-plan-is-awful-and-constitutional.html?_r=0
#10yrsago Ecuador's draft copyright law: legal to break DRM to achieve fair use https://medium.com/@AndresDelgadoEC/big-achievement-for-creative-commons-in-ecuador-national-assembly-decides-that-fair-use-trumps-drm-c8cdd9c57e01#.n1vkccd3r
#10yrsago One billion Creative Commons licenses in use https://stateof.creativecommons.org/2015/
#10yrsago The moral character of cryptographic work https://web.cs.ucdavis.edu/~rogaway/papers/moral-fn.pdf
#10yrsago Everybody knows: FBI won't confirm or deny buying cyberweapons from Hacking Team https://web.archive.org/web/20151209163839/https://motherboard.vice.com/read/the-fbi-wont-confirm-or-deny-buying-hacking-team-spyware-even-though-it-did
#10yrsago European Commission resurrects an unkillable stupid: the link tax https://web.archive.org/web/20160913095014/https://openmedia.org/en/bad-idea-just-got-worse-how-todays-european-copyright-plans-will-damage-internet
#5yrsago Why we can't have nice things https://pluralistic.net/2020/12/10/borked/#bribery
#5yrsago Facebook vs Robert Bork https://pluralistic.net/2020/12/10/borked/#zucked
#1yrago Tech's benevolent-dictator-for-life to authoritarian pipeline https://pluralistic.net/2024/12/10/bdfl/#high-on-your-own-supply
#1yrago Predicting the present https://pluralistic.net/2024/12/09/radicalized/#deny-defend-depose
Upcoming appearances (permalink)

- Hamburg: Chaos Communications Congress, Dec 27-30 https://events.ccc.de/congress/2025/infos/index.html
- Denver: Enshittification at Tattered Cover Colfax, Jan 22 https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937
- Colorado Springs: Guest of Honor at COSine, Jan 23-25 https://www.firstfridayfandom.org/cosine/
Recent appearances (permalink)
- How to Stop "Ensh*ttification" Before It Kills the Internet (Capitalisn't) https://www.youtube.com/watch?v=34gkIvYiHxU
- Enshittification on The Daily Show https://www.youtube.com/watch?v=d2e-c9SF5nE
- Enshittification with Four Ways to Change the World (Channel 4) https://www.youtube.com/watch?v=tZQaEeuuI3Q
- The Plan is to Make the Internet Worse. Forever. (Novara Media) https://www.youtube.com/watch?v=7wE8G-d7SnY
- Enshittification (Future Knowledge) https://futureknowledge.transistor.fm/episodes/enshittification
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
-
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
-
A Little Brother short story about DIY insulin PLANNING

This work, excluding any serialized fiction, is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
https://pluralistic.net
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Today's links
- Big Tech joins the race to build the world's heaviest airplane: Die as Microsoft, or live to become the IBM you overthrew.
- Hey look at this: Delights to delectate.
- Object permanence: Bean-sprouting keyboard; Ink rant; FBI wanted to deport John Lennon; "Concrete Park"; Plutocratic lane-changes.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Big Tech joins the race to build the world's heaviest airplane (permalink)
I have a weird fascination with early-stage Bill Gates, the Gates whose mother convinced a pal of hers (the chairman of IBM's board of directors) to give her son the contract to provide the operating system for the new IBM PC. Gates and his pal Paul Allen tricked another programmer into selling them the rights to DOS, which they sold to IBM, setting Microsoft on the path to becoming one of the most profitable businesses in human history.
IBM could have made its own OS, of course. They were just afraid to, because they'd just narrowly squeaked out of a 12-year antitrust war with the Department of Justice (evocatively memorialized as "Antitrust's Vietnam"):
https://pluralistic.net/2022/10/02/the-true-genius-of-tech-leaders/
The US government traumatized IBM so badly that they turned over their crown jewels to these two prep-school kids, who scammed a pal out of his operating system for $50k and made billions from it. Despite owing his business to IBM (or perhaps because of this fact), Gates routinely mocked IBM as a lumbering dinosaur that was headed for history's scrapheap. He was particularly scornful of IBM's software development methodology, which, to be fair, was pretty terrible: IBM paid programmers by the line of code. Gates called this "the race to build the world's heaviest airplane."
After all, judging software by lines of code is a terrible idea. To the extent that "number of lines of code" has any correlation with software quality, reliability or performance, it's a negative correlation. While it's certainly possible to write software with too few lines of code (e.g. when instructions are stacked on a single line, obfuscating the code's functionality and making it hard to maintain), it's far more common for programmers to use too many steps to solve a problem. The ideal software is just right: verbose enough to be legible to future maintainers, streamlined enough to omit redundancies.
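As a toy illustration (mine, not Gates's), here are two functionally identical Python routines. A pay-by-the-line scheme rewards the first, even though every extra line is just surface area for bugs:

    # Two routines that do exactly the same thing. Counting lines
    # rewards the padded one; the padding adds weight, not lift.

    def total_padded(prices):
        total = 0
        for price in prices:
            value = price
            subtotal = total + value
            total = subtotal
        result = total
        return result

    def total(prices):
        return sum(prices)

    assert total_padded([1, 2, 3]) == total([1, 2, 3]) == 6

The second version is lighter and better, but a by-the-line payroll would rate it at a fraction of the first.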
This is broadly true of many products, and not just airplanes. Office memos should be long enough to be clear, but no longer. Home insulation should be sufficient to maintain the internal temperature, but no more.
Ironically, enterprise tech companies' bread and butter is selling exactly this kind of quantitative measurement to bosses who want an easy, numeric way to decide which of their workers to fire. Leading the pack is Microsoft, whose flagship Office 365 lets bosses assess their workers' performance on meaningless metrics like how many words they type, ranking each worker against other workers in their division, against rival divisions, and against rival firms. Yes, Microsoft actually boasts to companies that if you use its products, it will gather sensitive data about how your workers perform, individually and as a team, and share that information with your competitors!
https://pluralistic.net/2020/11/25/the-peoples-amazon/#clippys-revenge
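To see how hollow those rankings are, consider a toy scoring function. This is my invention, not Microsoft's actual algorithm (which isn't public); it just ranks workers by raw activity counts:

    # Toy bossware-style "productivity score." Not any real product's
    # algorithm; just an illustration of how counting activity rewards
    # volume instead of value.

    workers = {
        "alice": {"words_typed": 2000, "emails_sent": 4},   # wrote the design doc
        "bob":   {"words_typed": 9000, "emails_sent": 60},  # reply-all enthusiast
        "carol": {"words_typed": 300,  "emails_sent": 2},   # fixed the outage
    }

    def activity_score(stats):
        # The weighting is arbitrary; that's the point. Any weighting of
        # raw activity counts measures busyness, not contribution.
        return stats["words_typed"] + 100 * stats["emails_sent"]

    ranking = sorted(workers, key=lambda w: activity_score(workers[w]), reverse=True)
    print(ranking)  # ['bob', 'alice', 'carol']: the outage-fixer ranks last

Any worker who figures out the formula can top the chart by generating noise, which is the "heaviest airplane" dynamic all over again.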
But while tech companies employed programmers to develop this kind of bossware for use on other companies' employees, they were loath to apply it to their own workers. For one thing, it's just a very stupid way to manage a workforce, as Bill Gates himself would be the first to tell you (candidly, at least, provided he wasn't trying to sell you an enterprise Office 365 license). For another, tech workers wouldn't stand for it. After all, these were the "princes of labor," each adding a million dollars or more to their boss's bottom line, and in such scarce supply that a coder could quit a job after the morning scrum and have a new one by the pre-dinner pickleball break:
https://pluralistic.net/2025/04/27/some-animals/#are-more-equal-than-others
Tech workers mistook the fear this dynamic instilled in their bosses for respect. They thought the reason their bosses gave them free massage therapists and kombucha on tap and a gourmet cafeteria was that their bosses liked them. After all, these bosses were all techies. A coder wasn't a worker, they were a temporarily embarrassed founder. That's why Zuck and Sergey tuned into those engineering town hall meetings and tolerated being pelted with impertinent questions about the company's technology and business strategy.
Actually, tech bosses didn't like tech workers. They didn't see them as peers. They saw them as workers. Problem workers, at that. Problems to be solved.
And wouldn't you know it, supply caught up with demand and tech companies instituted a program of mass layoffs. When Google laid off 12,000 workers (just before an $80b stock buyback that would have paid their wages for 27 years), they calmed investors by claiming that they weren't doing this because business was bad; they were just correcting some pandemic-era overhiring. But Google didn't just fire junior programmers: they targeted some of their most senior (and thus mouthiest and highest-paid) techies for the chop.
Today, Sergey and Zuck no longer attend engineering meetings ("Not a good use of my time" -M. Zuckerberg). Tech workers are getting laid off at a rate of knots. And none of these bastards can shut up about how many programmers they plan on replacing with AI:
https://pluralistic.net/2025/08/05/ex-princes-of-labor/#hyper-criti-hype
And wouldn't you know it, the shitty monitoring and ranking technology that programmers made to be used on other workers is finally being used on them:
https://jonready.com/blog/posts/everyone-in-seattle-hates-ai.html
Naturally, the excuse is monitoring AI usage. Microsoft, along with all the other AI-peddling tech companies, keeps claiming that its workers adore using AI to write software, but somehow it also has to monitor those workers so it can figure out which ones to fire for not using AI enough.
This is the "shitty technology adoption curve" in action. When you have a terrible, destructive technology, you can't just deploy it on privileged people who get taken seriously in policy circles. You start with people at the bottom of the privilege gradient: prisoners, mental patients, asylum-seekers. Then, you work your way up the curve ā kids, gig workers, blue collar workers, pink collar workers. Eventually, it comes for all of us:
https://pluralistic.net/2021/02/24/gwb-rumsfeld-monsters/#bossware
As Ed Zitron writes, tech hasn't had a big, successful product (on the scale of, say, the browser or the smartphone) in more than a decade. Tech companies have seemingly run out of new trillion-dollar industries to spawn. Tech bosses are pulling out all the stops to make their companies seem as dynamic and profitable as they were in tech's heyday.
Firing workers and blaming it on AI lets tech bosses transform a story that would freak out investors ("Our business is flagging and we had to fire a bunch of valuable techies") into one that will shake loose fresh billions in capital ("Our AI product is so powerful it let us fire a zillion workers!").
And for tech bosses, mass layoffs offer another, critical advantage: pauperizing those princes of labor, so that the bosses can shed the company gyms and luxury commuter buses, cut wages and benefits, and generally reset the expectations of the tech workers who sit behind a keyboard to match the expectations of the tech workers who assemble iPhones, drive delivery vans, and pack boxes in warehouses.
For tech workers who currently don't have a pee bottle or a suicide net at their job-site, it's long past time to get over this founder-in-waiting bullshit and get organized. Recognize that you're a worker, and that workers' only real source of power isn't ephemeral scarcity, it's durable solidarity:
https://techworkerscoalition.org/
(Image: Cryteria, CC BY 3.0, modified)
Hey look at this (permalink)

- Your Data Might Determine How Much You Pay for Eggs https://www.wired.com/story/algorithmic-pricing-eggs-ny-law/
- Judge hints Vizio TV buyers may have rights to source code licensed under GPL https://www.theregister.com/2025/12/05/vizio_gpl_source_code_ruling/
- Chamberlain blocks smart home integrations with its garage door openers, again https://www.theverge.com/tech/839294/chamberlain-myq-garage-door-opener-update-blocks-aftermarket-controllers
- Smart Garage Door Opener https://3reality.com/product/smartgarage-door-opener/
- The Best Books in eBooks and Audiobooks of 2025 https://www.kobo.com/us/en/p/best-books-of-2025
Object permanence (permalink)
#20yrsago WaWa Digital Cameras threatens to break customer's neck https://thomashawk.com/2005/12/abusive-new-york-camera-store.html
#20yrsago Keyboard used as bean-sprouting medium https://web.archive.org/web/20051205011830/http://www.nada.kth.se/~hjorth/krasse/english.html
#15yrsago Judge to copyright troll: get lost https://torrentfreak.com/acslaw-take-alleged-file-sharers-to-court-but-fail-on-a-grand-scale-101209/
#15yrsago Ink cartridge rant https://web.archive.org/web/20101211080931/http://www.inkcartridges.uk.com/Remanufactured-HP-300-CC640EE-Black.html
#15yrsago 1.1 billion US$100 notes out of circulation due to printing error https://www.cnbc.com/2010/12/07/the-fed-has-a-110-billion-problem-with-new-benjamins.html
#15yrsago EFF wants Righthaven to pay for its own ass-kicking https://web.archive.org/web/20101211011932/https://www.wired.com/threatlevel/2010/12/payup-troll/
#15yrsago danah boyd explains email sabbaticals https://www.zephoria.org/thoughts/archives/2010/12/08/i-am-offline-on-email-sabbatical-from-december-9-january-12.html
#15yrsago TSA subjects India's US ambassador to public grope because of her sari https://web.archive.org/web/20101211113821/http://travel.usatoday.com/flights/post/2010/12/india-diplomat-gets-humiliating-pat-down-at-mississippi-airport-/134197/5?csp=outbrain&csp=obnetwork
#15yrsago California's safety codes are now open source! https://code.google.com/archive/p/title24/
#10yrsago When the INS tried to deport John Lennon, the FBI pitched in to help https://www.muckrock.com/news/archives/2015/dec/08/john-lennons-fbi-file-1/
#10yrsago The Big List of What's Wrong with the TPP https://www.eff.org/deeplinks/2015/12/how-tpp-will-affect-you-and-your-digital-rights
#10yrsago Concrete Park: apocalyptic, afrofuturistic graphic novel of greatness https://memex.craphound.com/2015/12/08/concrete-park-apocalyptic-afrofuturistic-graphic-novel-of-greatness/
#10yrsago Denmark's top anti-piracy law firm pocketed $25m from rightsholders, then went bankrupt https://torrentfreak.com/anti-piracy-lawyer-milked-copyright-holders-for-millions-151208/
#5yrsago Uber pays to get rid of its self-driving cars https://pluralistic.net/2020/12/08/required-reading/#goober
#5yrsago All the books I reviewed in 2020 https://pluralistic.net/2020/12/08/required-reading/#recommended-reading
#5yrsago Ford patents plutocratic lane-changes https://pluralistic.net/2020/12/08/required-reading/#walkaway
Upcoming appearances (permalink)

- Hamburg: Chaos Communications Congress, Dec 27-30 https://events.ccc.de/congress/2025/infos/index.html
- Denver: Enshittification at Tattered Cover Colfax, Jan 22 https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937
- Colorado Springs: Guest of Honor at COSine, Jan 23-25 https://www.firstfridayfandom.org/cosine/
Recent appearances (permalink)
- Enshittification with Four Ways to Change the World (Channel 4) https://www.youtube.com/watch?v=tZQaEeuuI3Q
- The Plan is to Make the Internet Worse. Forever. (Novara Media) https://www.youtube.com/watch?v=7wE8G-d7SnY
- Enshittification (Future Knowledge) https://futureknowledge.transistor.fm/episodes/enshittification
- We have become slaves to Silicon Valley (Politics JOE) https://www.youtube.com/watch?v=JzEUvh1r5-w
- How Enshittification is Destroying The Internet (Frontline Club) https://www.youtube.com/watch?v=oovsyzB9L-s
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
-
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
-
A Little Brother short story about DIY insulin PLANNING

This work, excluding any serialized fiction, is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
https://pluralistic.net
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Today's links
- Elon Musk's Blue Tick scam: The EU bans giant teddybears.
- Hey look at this: Delights to delectate.
- Object permanence: Denver bomb squad vs 8" toy robot; Iceland's atheist religion; Largest strike in human history; Ad-tech is a bubble; Battery rationality; Pasta carpet; "With a Little Help"; Crooked Timber on Piketty; Tiki-mug menorah; China vs Big Data-backstabbing.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Elon Musk's Blue Tick scam (permalink)
In my book Enshittification, I develop the concept of "giant teddybears," a scam that has been transposed from carnival midway games to digital platforms. The EU has just fined Elon Musk $140m for running a giant teddybear scam on Twitter.
When I was growing up, August 15 always meant two things for my family: my mother's birthday and the first day of the CNE, a giant traveling fair that would park itself on Toronto's waterfront for the last three weeks of summer. We'd get there early, and by 10AM, there'd always be some poor bastard lugging around a galactic-scale giant teddybear that was offered as a prize at one of the midway games.
Now, nominally, the way you won a giant teddybear was by getting five balls in a peach basket. To a first approximation, this is a feat that no one has ever accomplished. Rather, a carny had beckoned this guy over and said, "Hey, fella, I like your face. Tell you what I'm gonna do: you get just one ball in the basket and I'll give you one of these beautiful, luxurious keychains. If you win two keychains, I'll let you trade them in for one of these gigantic teddybears."
Why would the carny do this? Because once this poor bastard took possession of the giant teddybear, he was obliged to conspicuously lug it around the CNE midway in the blazing, muggy August heat. All who saw him would think, "Hell if that dumbass can win a giant teddybear, I'm gonna go win one, too!" Charitably, you could call him a walking advertisement. More accurately, though, he was a Judas goat.
Digital platforms can give out giant teddybears at scale. Because everything they do runs on computers, platforms have the flexibility to pick out individual participants and make them King For the Day, showering them in riches that they will boast of, luring in other suckers who will lose everything:
https://pluralistic.net/2023/02/19/twiddler/
That's how Tiktok works: the company's "heating tool" lets them drive traffic to Tiktok performers by cramming their videos into millions of random people's feeds, overriding Tiktok's legendary recommendation algorithm. Those "heated" performers get millions of views on their videos and go on to spam all the spaces where similar performers hang out, boasting of the fame and riches that await other people in their niche if they start producing for Tiktok:
https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys
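Here's a deliberately oversimplified Python sketch of what an override like this could look like. To be clear, this is speculation for illustration only: the names, the 2% rate and the mechanism are invented, not drawn from Tiktok's actual systems.

    # Speculative sketch of a "heating" override: a hand-picked list of
    # videos gets injected into feeds ahead of whatever the recommender
    # would have chosen. Not any platform's real code.

    import random

    HEATED = ["video_123", "video_456"]  # hand-picked by staff
    HEAT_RATE = 0.02                     # ~2% of feed slots overridden

    def build_feed(user_id, recommend, n=20):
        feed = []
        for _ in range(n):
            if random.random() < HEAT_RATE:
                feed.append(random.choice(HEATED))  # ignore relevance entirely
            else:
                feed.append(recommend(user_id))     # business as usual
        return feed

The crucial feature is invisibility: to the heated performer, and to everyone watching their numbers, the windfall looks like organic success.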
Uber does it, too: as Veena Dubal documents in her work on "algorithmic wage discrimination," Uber offers different drivers wildly different wages for performing the same work. The lucky few who get an Uber giant teddybear hang out in rideshare groupchats and forums, trumpeting their incredible gains from the platform, while everyone else blames themselves for "being bad at the app," as they drive and drive, only to go deeper and deeper into debt:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
Everywhere you look online, you see giant teddybears. Think of Joe Rogan being handed hundreds of millions of dollars to relocate his podcast to Spotify, an also-ran podcast platform that is desperately trying to capture the medium of podcasting, turning an open protocol into a proprietary, enclosed, Spotify-exclusive content stream:
https://pluralistic.net/2023/01/27/enshittification-resistance/#ummauerter-garten-nein
The point of the conspicuous, over-the-odds payment to Rogan isn't just to get Rogan onto Spotify: it's to convince every other podcaster that Spotify is a great place to make podcasts. It isn't, though: when Spotify bought Gimlet Media, they locked Gimlet's podcasts inside Spotify's walled garden/maximum security prison. If you wanted to listen to a Gimlet podcast, you had to switch to Spotify's app (and submit to Spotify's invasive surveillance, its restrictions on fast-forwarding through ads, etc.).
Pretty much no one did this. After an internal revolt by Gimlet podcast hosts (whose shows were dwindling to utter irrelevance because no one was listening to them anymore), Spotify moved the Gimlet podcasts back onto the real internet, where they belong.
When Musk bought Twitter, he started handing out tons of giant teddybears. Most notably, he created an opaque monetization scheme for popular Twitter posters, which allowed him to thumb the scales for a few trolls he liked, who obliged him by loudly proclaiming just how much money you could make by trolling professionally on Twitter. Needless to say, the vast majority of people who try this make either nothing, or a sum so small that it rounds to nothing.
But Musk's main revenue plan for Twitter, the thing he repeatedly promised would allow him to recoup the tens of billions he borrowed to buy the platform, was selling blue tick verification.
Twitter created blue ticks to solve a serious platform problem: users kept getting sucked in by impersonators who would trick them into participating in scams or believing false things. To protect those users, Twitter offered a verification scheme for "notable people" who were likely to face impersonation. The verification system was never very good (I successfully lobbied them to improve it a little when I was being impersonated on Twitter: I got them to stop insisting that users fax in a scan of their ID or, more realistically, send it via a random, insecure email-to-fax gateway). But it did the job reasonably well.
Predictably, though, the verification scheme also became something of a (weird and unimportant) status-symbol, allowing a certain kind of culture warrior to peddle grievances about how only "lamestream media libs" were getting blue ticks, while brave Pizzagaters and 4chan refugees were denied this important recognition.
Musk's plan to sell blue ticks leaned heavily into these grievances. He promised to "democratize" verification for $8/month (or, for businesses, many thousands of dollars per month). Users who didn't buy blue ticks would have their content demoted and hidden from their own followers. Users who paid for blue ticks would have their content jammed into everyone's feeds, irrespective of whether Twitter's own recommendation algorithms predicted those users would enjoy it. Best of all, Twitter wouldn't do much verifying: you could give Twitter $8, claim to be anyone at all, and chances are you would be able to assume any identity you wanted, post any bullshit you wanted, and get priority placement in millions of users' feeds.
This was a massive gift to scammers, trolls and disinformation peddlers. For $8, you could pretend to be a celebrity in order to endorse a stock swindle, shitcoin hustle, or identity theft scheme. You could post market-moving disinformation from official-looking corporate accounts. You could pose as a campaigning politician or a reporter and post reputation-destroying nonsense.
This is where the EU comes in. In 2024, the EU's pair of big, muscular Big Tech laws, the Digital Services Act (DSA) and the Digital Markets Act (DMA), came into full force. These are complex pieces of legislation, and I don't like everything in them, but some parts are amazing: bold and imaginative breaks from the dismal history of ineffective or counterproductive tech regulation.
Under the DSA, the EU has fined Twitter about $140m for exposing users to scams via this blue tick giant teddybear wheeze (much of that sum is punitive, because Twitter flagrantly obstructed the Commission's investigations). The DSA (sensibly) doesn't require user verification, but it does expect companies that tell their users that certain accounts are verified and can be trusted to actually verify that those accounts can be trusted.
I think there's a second DSA claim to be made here, beyond the failure to verify. Musk's plan to sell blue ticks was a disaster: while many, many scammers (and a few trolls) bought blue ticks, no one else did. The blue tick, which Musk thought of as a valuable status symbol he could sell, was quickly devalued. "Account with a blue tick" was never all that prestigious, but under Musk, it came to mean "account that pushes scams, gore, disinformation, porn and/or hate."
So Musk did something very funny and sweaty. He restored blue ticks to millions of high-follower accounts (including my own). And despite the fact that Musk had created about a million different kinds of blue ticks that denoted different kinds of organizations and payment schemes, these free blue ticks were indistinguishable from the paid ones.
In other words, Musk set out to trick users into thinking that the most prominent people they followed believed that it was worth spending $8/month on a blue tick. It was an involuntary giant teddybear scam. Every time a prominent user with a free blue tick posts, they help Musk trick regular Twitter users into thinking that these worthless $8/month subscriptions are worth shelling out for.
I think the Commission could run another, equally successful enforcement action against Musk and Twitter over this scam, too.
Trump has been bellyaching nonstop about the DSA and DMA, threatening EU nations and businesses with tariffs and other TACO retribution if they go ahead with DSA/DMA enforcement. Let's hope the EU calls his bluff.
Of course, Musk could get out of paying these fines by moving all his businesses out of the EU, which, frankly, would be a major result for Europe.
(Image: Gage Skidmore, CC BY-SA 4.0, modified)
Hey look at this (permalink)

- Netflix Is Trying to Buy Warner Bros Discovery. That Would Be a Disaster for America. https://www.thebignewsletter.com/p/netflix-is-trying-to-buy-warner-bros
- How popular is ecosocialist transformation? https://jasonhickel.substack.com/p/how-popular-is-ecosocialist-transformation
- Luigi Mangione Official Legal Fund for all 3 Cases https://www.givesendgo.com/luigi-defense-fund
- Trump's Katrina Is Coming https://prospect.org/2025/12/05/trumps-katrina-is-coming-fema/
- DEFT: DSPs for Equitable and Fair Treatment https://deft-us.com/
Object permanence (permalink)
#20yrsago What's involved in different publishing jobs? https://web.archive.org/web/20050306095536/http://www.penguin.co.uk/static/packages/uk/aboutus/jobs_workingpeng.html
#20yrsago Sony finally releases rootkit uninstaller, sort of https://web.archive.org/web/20051204015131/http://cp.sonybmg.com/xcp/english/updates.html
#20yrsago EFF forces Sony/Suncomm to fix its spyware https://web.archive.org/web/20051210024413/https://www.eff.org/news/archives/2005_12.php#004234
#20yrsago Warner Music attacks specialized web-browser https://web.archive.org/web/20051210024927/http://www.pearworks.com/pages/pearLyrics.html
#20yrsago Sony's DRM security fix leaves your computer more vulnerable https://blog.citp.princeton.edu/2005/12/07/mediamax-bug-found-patch-issued-patch-suffers-same-bug/
#15yrsago Internet furnishes fascinating tale of a civil rights era ghost town on demand https://www.reddit.com/r/AskReddit/comments/eddwx/what_the_hell_happened_to_cairo_illinois/
#15yrsago Pasta carpet! https://wemakecarpets.wordpress.com/2010/11/02/pasta-carpet-2/
#15yrsago With a Little Help launch! https://memex.craphound.com/2010/12/07/with-a-little-help-launch/
#15yrsago Denver bomb squad defeats 8" toy robot after hours-long standoff https://www.denverpost.com/2010/12/01/toy-robot-detours-traffic-near-coors-field/
#15yrsago UK govt demands an end to evidence-based drug policy https://www.theguardian.com/politics/2010/dec/05/government-scientific-advice-drugs-policy?&
#10yrsago Iceland's fastest-growing "religion" courts atheists by promising to rebate religious tax https://icelandmonitor.mbl.is/news/politics_and_society/2015/12/01/icelanders_flocking_to_the_zuist_religion/
#10yrsago Springer Nature to release 100,000 titles as DRM-free bundles https://web.archive.org/web/20151210051243/https://www.digitalbookworld.com/2015/bitlit-partners-with-springer-to-offer-ebook-bundles/
#10yrsago Solo: Hope Larson's webcomic of rock-n-roll, romance, and desperation https://memex.craphound.com/2015/12/07/solo-hope-larsons-webcomic-of-rock-n-roll-romance-and-desperation/
#10yrsago Body-painted models disappear into the Wonders of the World https://www.trinamerry.com/trinamerryblog/sevenwondersbodypaint
#10yrsago Make: the simplest electric car toy, a homopolar motor https://www.youtube.com/watch?v=oPzJr1jjHnQ
#10yrsago Thomas Piketty seminar on Crooked Timber https://crookedtimber.org/2016/01/04/thomas-piketty-seminar/
#10yrsago MAKE: a tiki-mug menorah https://web.archive.org/web/20151208123229/http://news.critiki.com/2015/12/05/tiki-mug-menorah-a-how-to-from-poly-hai/
#10yrsago Harvard Business School: Talented assholes are more trouble than they're worth https://www.hbs.edu/ris/Publication
#10yrsago Multi-generational cruelty: America's prisons shutting down kids' visitations https://web.archive.org/web/20151204063410/https://www.thenation.com/article/2-7m-kids-have-parents-in-prison-theyre-losing-their-right-to-visit/
#10yrsago READ: Kim Stanley Robinson's first standalone story in 25 years! https://reactormag.com/oral-argument-kim-stanley-robinson//
#10yrsago French Ministry of Interior wants to ban open wifi, Tor https://arstechnica.com/tech-policy/2015/12/france-looking-at-banning-tor-blocking-public-wi-fi/
#5yrsago China's war on big data backstabbing https://pluralistic.net/2020/12/07/backstabbed/#big-data-backstabbing
#5yrsago The largest strike in human history https://pluralistic.net/2020/12/06/surveillance-tulip-bulbs/#modi-miscalulation
#5yrsago Ad-tech as a bubble overdue for a bursting https://pluralistic.net/2020/12/06/surveillance-tulip-bulbs/#adtech-bubble
#1yrago Battery rationality https://pluralistic.net/2024/12/06/shoenabombers/#paging-dick-cheney
#1yrago A year in illustration (2024) https://pluralistic.net/2024/12/07/great-kepplers-ghost/#art-adjacent
Upcoming appearances (permalink)

- Virtual: Poetic Technologies with Brian Eno (David Graeber Institute), Dec 8 https://davidgraeber.institute/poetic-technologies-with-cory-doctorow-and-brian-eno/
- Madison, CT: Enshittification at RJ Julia, Dec 8 https://rjjulia.com/event/2025-12-08/cory-doctorow-enshittification
- Hamburg: Chaos Communications Congress, Dec 27-30 https://events.ccc.de/congress/2025/infos/index.html
- Denver: Enshittification at Tattered Cover Colfax, Jan 22 https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937
- Colorado Springs: Guest of Honor at COSine, Jan 23-25 https://www.firstfridayfandom.org/cosine/
Recent appearances (permalink)
- The Plan is to Make the Internet Worse. Forever. (Novara Media) https://www.youtube.com/watch?v=7wE8G-d7SnY
- Enshittification (Future Knowledge) https://futureknowledge.transistor.fm/episodes/enshittification
- We have become slaves to Silicon Valley (Politics JOE) https://www.youtube.com/watch?v=JzEUvh1r5-w
- How Enshittification is Destroying The Internet (Frontline Club) https://www.youtube.com/watch?v=oovsyzB9L-s
- Escape Forward with Cristina Caffarra https://escape-forward.com/2025/11/27/enshittification-of-our-digital-experience/
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
-
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
-
A Little Brother short story about DIY insulin PLANNING

This work, excluding any serialized fiction, is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
https://pluralistic.net
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Today's links
- Metabolizing the theory of "political capitalism": How many $TRUMP coins should your company buy?
- Hey look at this: Delights to delectate.
- Object permanence: NZ doesn't want US-style copyright; Tron: Reloaded; Mass shootings and gun profits; Descartes' God has failed; "The Big Fix."
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Metabolizing the theory of "political capitalism" (permalink)
It's a strange fact that the more sophisticated and polished a theory gets, the simpler it tends to be. New theories tend to be inspired by a confluence of many factors, and early attempts to express the theory will seek to enumerate and connect everything that seems related, which is a lot.
But as you develop the theory, it gets progressively more streamlined as you realize which parts can be safely omitted or combined without sacrificing granularity or clarity. This simplification requires a lot of iteration and reiteration, over a lot of time, for a lot of different audiences and critics. As Thoreau wrote (paraphrasing Pascal), "Not that the story need be long, but it will take a long while to make it short."
This week, I encountered a big, exciting theory that is still in the "long and complicated" phase, with so many moving parts that I'm having trouble keeping them straight in my head. But the idea itself is fascinating and has so much explanatory power, and I've been thinking about it nonstop, so I'm going to try to metabolize a part of it here today, both to bring it to your attention, and to try and find some clarity for myself.
At issue is Dylan Riley and Robert Brenner's theory of "political capitalism," which I encountered through John Ganz's writeup of a panel he attended to discuss Riley and Brenner's work:
https://www.unpopularfront.news/p/politics-and-capitalist-stagnation
Riley and Brenner developed this theory through a pair of very long (and paywalled) articles in the New Left Review. First is 2022's "Seven Theses on American Politics" (£3), which followed the Democrats' surprisingly good showing in the 2022 midterms:
https://newleftreview.org/issues/ii138/articles/4813
The second article, "The Long Downturn and Its Political Results" (£4), is even longer; it both restates the theory of "Seven Theses" and addresses several prominent critics of their work.
(If you're thinking about reading the source materials, and I urge you to do so, I think you can safely just read the second article, as it really does recap and streamline the original.)
So what is this theory? Ganz does a good job of breaking it down (better than Riley and Brenner, who, I think, still have a lot of darlings they can't bring themselves to murder). Here's my recap of Ganz's, then, with a few notes from the source texts thrown in.
Riley and Brenner are advancing both an economic and a political theory, with the latter growing out of the former. The economic theory seeks to explain two phenomena, the "Long Boom" (post-WWII to the 1960s or so), and the "Long Downturn" (ever since).
During the Long Boom, the US economy (and some other economies) experienced a period of sustained growth, without the crashes that had been the seemingly inevitable end-point of previous growth periods. Riley and Brenner say that these crashes were the result of business owners making the (locally) rational decision to hang on to older machines and tools even as new ones came online.
Businesses are always looking to invest in new automation in a bid to wring more productivity from their workers. Profits come from labor, not machines, and as your competitors invest in the same machines you've just bought, the higher rate of profit you enjoyed when you upgraded will be eroded as competitors chase each other's customers with lower prices.
But not everyone is willing to upgrade when a new machine is invented. If you're still paying for the old machines, you just can't afford to throw them away and get the latest and greatest ones. Instead, as your competitors slash prices (because they have new machines that let them make the same stuff at a lower price), you must lower your prices too, accepting progressively lower profits.
Eventually, your whole sector is using superannuated machines that everyone is still making payments on, and the overall rate of profit in the sector has dwindled to unsustainable levels. "Zombie companies" (companies that have no plausible chance of paying off their debts) dominate the economy. This is the "secular stagnation" that economists dread. Note that this whole dynamic is driven by the very same forces that make capitalism so dynamic: the falling rate of profit that gives rise to a relentless chase for new, more efficient processes. This is a stagnation born of dynamism, and the harder you yank on the "make capitalism more dynamic" lever, the more stagnant it becomes.
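Here's a toy Python model of that dynamic, with made-up numbers: each generation, a cheaper process arrives, upgraders briefly enjoy fat margins, and then competition drags the price down toward the new cost, squeezing everyone still paying off last generation's machines.

    # Toy model of "stagnation born of dynamism." All parameters are
    # invented; only the direction of the trend matters.

    price = 10.0                   # market price per unit
    old_cost, new_cost = 8.0, 6.0  # unit cost on old vs. new machines

    for gen in range(1, 6):
        upgrader_margin = price - new_cost  # margin on the latest machines
        laggard_margin = price - old_cost   # margin on last generation's machines
        print(f"gen {gen}: price={price:.2f} "
              f"upgrader={upgrader_margin:.2f} laggard={laggard_margin:.2f}")
        price -= 0.5 * (price - new_cost)   # competition bids the price down
        old_cost, new_cost = new_cost, new_cost - 1.0  # a cheaper process arrives

Run it and both margins trend steadily downward: the sector gets relentlessly more efficient and relentlessly less profitable.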
Hoover and Mellon's austerity agenda after the 1929 crash sought to address this by triggering mass bankruptcies, in a brutal bid to "purge" those superannuated machines and the companies that owned them, at the expense of both workers and creditors. This wasn't enough.
Instead, we got WWII, in which the government stepped in to buy things at rates that paid for factories to be retooled, and which pressed the entire workforce into employment. This is the trigger for the Long Boom, as America got a do-over with all-new capital and a freshly trained workforce with high morale and up-to-date skills.
So that's the Long Boom. What about the Long Downturn? This is where Ganz's account begins. As the "late arrivals" (Japan, West Germany, South Korea and, eventually, China) show up on the world stage, they have their own Long Booms, having experienced an even more extreme "purge" of their zombie firms and obsolete machines. This puts downward pressure on profits in the USA (and, eventually, in the late arrivals themselves), leading to the Long Downturn, a 50-year period in which the rate of profit in the USA has steadily declined.
This is most of the economic theory, and it contains the germ of the political theory, too. During the Long Boom, there was plenty to go around, and the US was able to build out a welfare state, its ruling class was willing to tolerate unions, and movements for political and economic equality for women, sexual minorities, disabled people, racial minorities, etc, were able to make important inroads.
But the political theory gets into high gear after years of the Long Downturn. That's when the world has an oversupply of cheap goods and a sustained decline in the rate of profit, and the rate of profit declines further every time someone invents a more efficient and productive technology. Companies in Downturn countries need to find a new way to improve their profits — they need to invest in something other than improved methods of production.
That's where "political capitalism" comes in. Political capitalism is the capitalism you get when the cheapest, most reliable way to improve your rate of profit is to invest in the political process, to get favorable regulation, pork barrel government contracts, and cash bailouts. As Ganz puts it, "capitalists have gone from profit-seekers to rent-seekers," or, as Brenner and Riley write, capitalists now seek "a return on investment largely or completely divorced from material production."
There's a sense in which this is immediately recognizable. The ascendancy of political capitalism tracks with the decline in antitrust enforcement, the rise of monopolies, a series of massive bailouts, and, under Trump, naked kleptocracy. In the US, "raw political power is the main source of return on capital."
The "neoliberal turn" of late Carter/Reagan is downstream of political capitalism. When there was plenty to go around, the capital classes and the political classes were willing to share with workers. When the Great Downturn takes hold, bosses turn instead to screwing workers and taking over the political system. Fans of Bridget Read's Little Bosses Everywhere will know this as the moment in which Gerry Ford legalized pyramid schemes in order to save the founders of Amway, who were big GOP donors who lived in Ford's congressional district:
https://pluralistic.net/2025/05/05/free-enterprise-system/#amway-or-the-highway
Manufacturing's rate of profit has never recovered from this period — there have been temporary rallies, but the overall trend is down, down, down.
But this is just the beginning of the political economy of Brenner and Riley's theory. Remember, this all started with an essay that sought to make sense of the 2022 midterms. Much of the political theory deals with electoral politics, and what has happened with America's two major political parties.
Under political capitalism, workers are split into different groups depending on their relationship to political corruption. The "professional managerial class" (workers with degrees and other credentials) end up aligned with center-left parties, betting that these parties will use political power to fund the kinds of industries that hire credentialed workers, like health and education. Non-credentialed workers align themselves with right-wing parties that promise to raise their wages by banning immigrants and ending free trade.
Ganz's most recent book, When the Clock Broke: Con Men, Conspiracists, and How America Cracked Up in the Early 1990s looks at the origins of the conspiratorial right that became MAGA:
https://us.macmillan.com/books/9780374605445/whentheclockbroke/
He says that Riley and Brenner's theory really helps explain the moment he chronicled in his own book, for example, the way that Ross Perot (an important Trump predecessor) built power by railing against "late arrivals" like Japan, Germany and South Korea.
This is also the heyday of corporate "financialization," which can be thought of as the process by which companies stop concerning themselves with how to make and sell superior products more efficiently, and instead devote themselves to financial gimmicks that allow shareholders to extract wealth from the firm. It's a period of slashed R&D budgets, mass layoffs, union-busting, and massive corporate borrowing.
In the original papers, Riley and Brenner drop all kinds of juicy, eye-opening facts and arguments to support their thesis. For example, in the US, more and more machinery is idle. In the 1960s, the US employed 85% of its manufacturing capacity. It was 78% in the 1980s, and now it's 75%. One quarter of "US plant and equipment is simply stagnating."
Today's economic growth doesn't come from making stuff, it comes from extraction, buttressed by law. Looser debt rules allowed households to continue to consume by borrowing, with the effect that a substantial share of workers' wages go to servicing debt, which is to say, paying corporations for the privilege of existing, over and above the cost of the goods and services we consume.
But the debt industry itself hasn't gotten any more efficient: "the cost of moving a dollar from a saver to a borrower was about two cents in 1910; a hundred years later, it was the same." They're making more, but they haven't made any improvements — all the talk of "fintech" and "financial engineering" has not produced any efficiencies. "This puzzle resolves itself once we recognize that the vast majority of financial innovation is geared towards figuring out how to siphon off resources through fees, insider information and lobbying."
Reading these arguments, I was struck by how this period also covers the rise and rise of "IP." This is a period in which your ability to simply buy things declined, replaced with a system in which you rent and subscribe to things — forever. From your car to your thermostat, the key systems in your life are increasingly a monthly bill, meaning that every time you add something to your life, it's not a one-time expenditure; it's a higher monthly cost of living, forever.
The rise and rise of IP is certainly part of political capitalism. The global system of IP comes from political capture, such as the inclusion of an IP chapter ("TRIPS") in the World Trade Organization agreement, as well as the WIPO Copyright Treaties. This is basically a process by which large (mostly American) businesses reorganized the world's system of governance and law to allow them to extract rents and slash R&D. The absurd, inevitable consequence of this nonsense is today's "capital-light" chip companies, which don't make chips, just designs, which are turned out by one or two gigantic companies, mostly in Taiwan.
Of course, Riley and Brenner aren't the first theorists to observe that our modern economy is organized around extracting rents, rather than winning profits. Yanis Varoufakis likens the modern economy to medieval feudalism, dubbing the new form "technofeudalism":
https://pluralistic.net/2023/09/28/cloudalists/#cloud-capital
Riley and Brenner harken back to a different kind of feudal practice as the antecedent to political capitalism: "tax-farming."
Groups of entrepreneurs would advance money to the sovereign in exchange for the right to collect taxes from a given territory or population. Their "profit" consisted in the difference between the money that they advanced to the ruler for the right to tax and what they could extract from the population through the exercise of that right. So, these entrepreneurs invested in politics, the control of means of administration and the means of violence, as a method for extracting surplus, in this way making for a politically constituted form of rent.
Unlike profits, rents are "largely or completely divorced from material production," "they 'create no wealth' and … they 'reduce economic growth and reallocate incomes from the bottom to the top.'"
To make a rent, you need an asset, and in today's system, high asset prices are a top political priority: governments intervene to keep the prices of houses high, to protect corporate bonds, and, of course, to keep AI companies' shares and IOUs from going to zero. The economy is dominated by "a large group of politically dependent firms and households…profoundly reliant on a policy of easy credit on the part of government… The US economy as a whole is sustained by lending, backed up by government, with profits accruing from production under excruciating pressure."
Our social programs have been replaced by public-private partnerships that benefit these "politically dependent firms." Bush's Prescription Drug Act didn't seek to recoup public investment in pharma research through lower prices — it offered a (further) subsidy to pharma companies in exchange for (paltry/nonexistent) price breaks. Obama's Affordable Care Act transferred hundreds of billions to investors in health corporations, who raised prices and increased their profits. Trump's CARES Act bailed out every corporate debtor in the country. Biden's American Rescue Plan, CHIPS Act and Inflation Reduction Act don't offer public services or transfer funds to workers — instead, they offer subsidies to the for-profit sector.
Electorally, political capitalism is a system of "vertiginous levels of campaign expenditure and open corruption on a vast scale." It pushed workers into the arms of far-right parties, while re-organizing center-left parties as center-right parties of the lanyard class. Both parties are hamstrung because "in a persistently low- or no-growth environment…parties can no longer operate on the basis of programmes for growth."
This is really just scraping the surface. I think it's well worth £4 to read the source document. I look forward to the further development of this theory, to its being streamlined. It's got a lot of important things to say, even if it is a little hard to metabolize at present.
Hey look at this (permalink)

- EU's New Digital Package Proposal Promises Red Tape Cuts but Guts GDPR Privacy Rights https://www.eff.org/deeplinks/2025/12/eus-new-digital-package-proposal-promises-red-tape-cuts-guts-gdpr-privacy-rights
- Looks Like We Can Finally Kiss the Metaverse Goodbye https://gizmodo.com/looks-like-we-can-finally-kiss-the-metaverse-goodbye-2000695825
- A New Anonymous Phone Carrier Lets You Sign Up With Nothing but a Zip Code https://www.wired.com/story/new-anonymous-phone-carrier-sign-up-with-nothing-but-a-zip-code/
- Microsoft drops AI sales targets in half after salespeople miss their quotas https://arstechnica.com/ai/2025/12/microsoft-slashes-ai-sales-growth-targets-as-customers-resist-unproven-agents/
- The Hidden Cost of Ceding Government Procurement to a Monopoly Gatekeeper https://ilsr.org/article/independent-business/turning-public-money-into-amazons-profits/
Object permanence (permalink)
#20yrsago Student ethnographies of World of Warcraft https://web.archive.org/web/20051208020004/http://www.trinity.edu/adelwich/mmo/students.html
#20yrsago Sony rootkit ripped off anti-DRM code to break into iTunes https://blog.citp.princeton.edu/2005/12/04/hidden-feature-sony-drm-uses-open-source-code-add-apple-drm/
#20yrsago English info on France's terrible proposed copyright law https://web.archive.org/web/20060111032903/http://eucd.info/index.php?English-readers
#15yrsago New Zealand leak: US-style copyright rules are a bad deal https://web.archive.org/web/20101206090519/http://www.michaelgeist.ca/content/view/5498/125/
#15yrsago Tron: Reloaded, come for the action, stay for the aesthetics https://memex.craphound.com/2010/12/05/tron-reloaded-come-for-the-action-stay-for-the-aesthetics/
#10yrsago Unelectable Lindsey Graham throws caution to the wind https://web.archive.org/web/20151206030630/https://gawker.com/i-am-tired-of-this-crap-lindsey-graham-plays-thunderi-1746116881
#10yrsago Every time there's a mass shooting, gun execs & investors gloat about future earnings https://theintercept.com/2015/12/03/mass-shooting-wall-st/
#10yrsago How to bake spice-filled sandworm bread https://web.archive.org/web/20151205193104/https://kitchenoverlord.com/2015/12/03/dune-week-spice-filled-sandworm/
#5yrsago Descartes' God has failed and Thompson's Satan rules our computers https://pluralistic.net/2020/12/05/trusting-trust/#thompsons-devil
#5yrsago Denise Hearn and Vass Bednar's "The Big Fix" https://pluralistic.net/2024/12/05/ted-rogers-is-a-dope/#galen-weston-is-even-worse
Upcoming appearances (permalink)

- Virtual: Poetic Technologies with Brian Eno (David Graeber Institute), Dec 8 https://davidgraeber.institute/poetic-technologies-with-cory-doctorow-and-brian-eno/
- Madison, CT: Enshittification at RJ Julia, Dec 8 https://rjjulia.com/event/2025-12-08/cory-doctorow-enshittification
- Hamburg: Chaos Communications Congress, Dec 27-30 https://events.ccc.de/congress/2025/infos/index.html
- Denver: Enshittification at Tattered Cover Colfax, Jan 22 https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937
Recent appearances (permalink)
- Enshittification (Future Knowledge) https://futureknowledge.transistor.fm/episodes/enshittification
- We have become slaves to Silicon Valley (Politics JOE) https://www.youtube.com/watch?v=JzEUvh1r5-w
- How Enshittification is Destroying The Internet (Frontline Club) https://www.youtube.com/watch?v=oovsyzB9L-s
- Escape Forward with Cristina Caffarra https://escape-forward.com/2025/11/27/enshittification-of-our-digital-experience/
- Why Every Platform Betrays You (Trust Revolution) https://fountain.fm/episode/bJgdt0hJAnppEve6Qmt8
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
-
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
-
A Little Brother short story about DIY insulin PLANNING

This work ā excluding any serialized fiction ā is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
The new MCP authorization spec is here! Today marks the one-year anniversary of the Model Context Protocol, and with it, the launch of the new 2025-11-25 specification.
I've been helping out with the authorization part of the spec for the last several months, working to make sure we aren't just shipping something that works for hobbyists, but something that scales to the enterprise. If you've been following my posts like Enterprise-Ready MCP or Let's Fix OAuth in MCP, you know this has been a bit of a journey over the past year.
The new spec just dropped, and while there are a ton of great updates across the board, far more than I can get into in this blog post, there are two changes in the authorization layer that I am most excited about. They fundamentally change how clients identify themselves and how enterprises manage access to AI-enabled apps.
Client ID Metadata Documents (CIMD)
If you've ever tried to work with an open ecosystem of OAuth clients and servers, you know the "Client Registration" problem. In traditional OAuth, you go to a developer portal, register your app, and get a client_id and client_secret. That works great when there is one central server (like Google or GitHub) and many clients that want to use that server.
It breaks down completely in an open ecosystem like MCP, where we have many clients talking to many servers. You can't expect a developer of a new AI Agent to manually register with every single one of the 2,000 MCP servers in the MCP server registry. Plus, when a new MCP server launches, that server wouldn't be able to ask every client developer to register either.
Until now, the answer for MCP was Dynamic Client Registration (DCR). But as implementation experience has shown us over the last several months, DCR introduces a massive amount of complexity and risk for both sides.
For Authorization Servers, DCR endpoints are a headache. They require public-facing APIs that need strict rate limiting to prevent abuse, and they lead to unbounded database growth as thousands of random clients register themselves. The number of client registrations will only ever increase, so the authorization server is likely to implement some sort of "cleanup" mechanism to delete old client registrations. The problem is there is no clear definition of what an "old" client is. And if a dynamically registered client is deleted, the client doesn't know about it, and the user is often stuck with no way to recover. Because of the security implications of an endpoint like this, DCR has also been a massive barrier to enterprise adoption of MCP.
For Clients, it's just as bad. They have to manage the lifecycle of their client credentials on top of the actual access tokens, and there is no standardized way to check whether a client registration is still valid. This frequently leads to sloppy implementations where clients simply register a brand new client_id every single time a user logs in, further increasing the number of client registrations at the authorization server. This isn't a theoretical problem: it's how Mastodon has worked for the last several years, and there are GitHub issue threads describing the challenges it creates.
The new MCP spec solves this by adopting Client ID Metadata Documents.
The OAuth Working Group adopted the Client ID Metadata Document spec in October after about a year of discussion, so it's still relatively new. But seeing it land as the default mechanism in MCP is huge. Instead of the client registering with each authorization server, the client establishes its own identity with a URL it controls and uses the URL to identify itself during an OAuth flow.
When the client starts an OAuth request to the MCP authorization server, it says, "Hi, I'm https://example-app.com/client.json." The server fetches the JSON document at that URL, finds the client's metadata (logo, name, redirect URIs), and proceeds as usual.
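For illustration, here's roughly what such a metadata document could look like. The fields follow standard OAuth client metadata; treat this exact set as an illustrative sketch rather than the normative list from the spec:

```json
{
  "client_id": "https://example-app.com/client.json",
  "client_name": "Example App",
  "client_uri": "https://example-app.com",
  "logo_uri": "https://example-app.com/logo.png",
  "redirect_uris": ["https://example-app.com/oauth/callback"],
  "grant_types": ["authorization_code", "refresh_token"],
  "token_endpoint_auth_method": "none"
}
```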
This creates a decentralized trust model based on DNS. If you trust example.com, you trust the client. It removes the registration friction entirely while keeping the security guarantees we need. It's the same pattern we've used in IndieAuth for over a decade, and it fits MCP perfectly.
There are definitely some new considerations and risks this brings, so it's worth diving into the details about Client ID Metadata Documents in the MCP spec as well as the IETF spec. For example, if you're building an MCP client that runs on a web server, you can actually manage private keys and publish the public keys in your metadata document, enabling strong client authentication. And like Dynamic Client Registration, there are still limitations for how desktop clients can leverage this, which can hopefully be solved by a future extension. I talked more about this during a hugely popular session at the Internet Identity Workshop in October; you can find the slides here.
You can try out this new flow today in VSCode, the first MCP client to ship support for CIMD even before it was officially in the spec. You can also learn more and test it out at the excellent website the folks at Stytch created: client.dev.
Enterprise-Managed Authorization (Cross App Access)
This is the big one for anyone asking, "Is MCP safe to use in the enterprise?"
Until now, when an AI agent connected to an MCP server, the connection was established directly between the MCP client and server. For example, if you are using ChatGPT to connect to the Asana MCP server, ChatGPT would start an OAuth flow to Asana. But if your Asana account is actually connected to an enterprise IdP like Okta, Okta would only see that you're logging in to Asana, and wouldn't be aware of the connection established between ChatGPT and Asana. This means today there are a huge number of what are effectively unmanaged connections between MCP clients and servers in the enterprise. Enterprise IT admins hate this because it creates "Shadow IT" connections that bypass enterprise policy.
The new MCP spec incorporates Cross App Access (XAA) as the authorization extension "Enterprise-Managed Authorization".
This builds on the work I discussed in Enterprise-Ready MCP leveraging the Identity Assertion Authorization Grant. The flow puts the enterprise Identity Provider (IdP) back in the driver's seat.
Here is how it works:
- Single Sign-On: First you log in to an MCP client (like Claude or an IDE) using your corporate SSO, and the client gets an ID token.
- Token Exchange: Instead of the client starting an OAuth flow to ask the user to manually approve access to a downstream tool (like an Asana MCP server), the client takes that ID token back to the Enterprise IdP to ask for access.
- Policy Check: The IdP checks corporate policy: "Is `Engineering` allowed to use `Claude` to access `Asana`?" If the policy passes, the IdP issues a temporary token (ID-JAG) that the client can take to the MCP authorization server.
- Access Token Request: The MCP client takes the ID-JAG to the MCP authorization server, saying "hey, this IdP says you can issue me an access token for this user." The authorization server validates the ID-JAG the same way it would have validated an ID token (remember, this app is also set up for SSO to the same corporate IdP), and issues an access token.
This happens entirely behind the scenes without user interaction. The user doesn't get bombarded with consent screens, and the enterprise admin gets full visibility and revocability. If you want to shut down AI access to a specific internal tool, you do it in one place: your IdP.
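To make the Token Exchange and Access Token Request steps concrete, here's a rough Python sketch of what the client's two calls might look like on the wire. The endpoint URLs are hypothetical, and the parameter names follow RFC 8693-style token exchange as used in the Identity Assertion Authorization Grant draft; consult the spec links below for the authoritative details:

```python
import requests

# Hypothetical endpoints; real clients discover these from server metadata.
IDP_TOKEN_ENDPOINT = "https://idp.example-corp.com/oauth2/token"
MCP_AS_TOKEN_ENDPOINT = "https://auth.asana.example/oauth2/token"

def get_mcp_access_token(sso_id_token: str) -> str:
    # Token Exchange: trade the SSO ID token for an ID-JAG scoped to
    # the downstream MCP server, subject to the IdP's policy check.
    r = requests.post(IDP_TOKEN_ENDPOINT, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "requested_token_type": "urn:ietf:params:oauth:token-type:id-jag",
        "subject_token": sso_id_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
        "resource": "https://mcp.asana.example/",  # the MCP server we want
    })
    r.raise_for_status()
    id_jag = r.json()["access_token"]

    # Access Token Request: present the ID-JAG to the MCP authorization
    # server as a JWT assertion grant to get the actual access token.
    r = requests.post(MCP_AS_TOKEN_ENDPOINT, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": id_jag,
    })
    r.raise_for_status()
    return r.json()["access_token"]
```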
Further Reading
There is a lot more in the full spec update, but these two piecesāCIMD for scalable client identity and Cross App Access for enterprise securityāare the two I am most excited about. They take MCP to the next level by solving the biggest challenges that were preventing scalable adoption of MCP in the enterprise.
You can read more about the MCP authorization spec update in Den's excellent post, and more about all the updates to the MCP spec in the official announcement post.
Links to docs and specs about everything mentioned in this post are below.
- MCP Authorization Spec 2025-11-25
- Client ID Metadata Document (ietf.org)
- Identity Assertion Authorization Grant (ietf.org)
- Enterprise-Ready MCP
- Evolving Client Registration (blog.modelcontextprotocol.io)
- Cross App Access (oauth.net)
In October, I launched an instance of Meetable for the MCP Community. They've been using it to post working group meetings as well as in-person community events. In just 2 months it already has 41 events listed!
One of the aspects of opening up the software to a new community is stress-testing some of the design decisions. An early, intentional design decision was not to support recurring events. For a community calendar, recurring events are often problematic. Once a recurring event is created for something like a weekly meetup, it's no longer clear whether the event is actually going to happen, which is especially true for virtual events. If an organizer of the event silently drops away from the community, it's very likely they will not go delete the event, and you can end up with stale events on the calendar quickly. It's better to have people explicitly create each event on the calendar so that every event is created with intention. To support this, I made a "Clone Event" button to quickly copy the details from a previous instance, and it even predicts the next date based on how often the event has been happening in the past.
But for the MCP community, which is a bit more formal than a pure community calendar, most of the events on their site are weekly or biweekly working group meetings. I had been hearing quite a bit of feedback that the current process of scheduling out the events manually, even with the "clone event" feature, was too much of a burden. So I set out to design a solution for recurring events that strikes a balance between ease of use and, hopefully, avoiding some of the pitfalls of recurring events.
What I landed on is this:
You can create an "event template" from any existing event on the calendar, and give it a recurrence interval like "Every week on Tuesdays" or "Monthly on the 9th".

(I'll add an option for "Monthly on the second Tuesday" later if this ends up being used enough.)
Once the schedule is created, copies of the event will be created at the chosen interval, but only a few weeks out. Weekly events are created 4 weeks in advance, biweekly events 8 weeks out, monthly events 4 months out, and yearly events will have only the next year scheduled. Every day, a cron job creates future events at the scheduled interval. If the event template is deleted, future scheduled events will also be deleted.
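As a sketch of that daily cron pass (in Python pseudocode with hypothetical names; Meetable's actual implementation is PHP, so this is just illustrating the logic described above):

```python
from datetime import date, timedelta

# How far ahead events get materialized, per the scheme described above.
HORIZON = {"weekly": timedelta(weeks=4), "biweekly": timedelta(weeks=8),
           "monthly": timedelta(days=120), "yearly": timedelta(days=365)}

# Gap between occurrences (simplified: real month/year handling would use
# calendar arithmetic rather than fixed day counts).
STEP = {"weekly": timedelta(weeks=1), "biweekly": timedelta(weeks=2),
        "monthly": timedelta(days=30), "yearly": timedelta(days=365)}

def dates_to_create(interval: str, last_scheduled: date, today: date) -> list[date]:
    """Run daily by cron: return the dates of new event copies to create,
    so the calendar always has events scheduled out to the horizon."""
    new_dates = []
    next_date = last_scheduled + STEP[interval]
    while next_date <= today + HORIZON[interval]:
        new_dates.append(next_date)
        next_date += STEP[interval]
    return new_dates
```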
So effectively, there is nothing organizers need to do after creating the recurring event schedule. My hope is that by having it work this way, instead of like recurring events on a typical Google calendar, it strikes a balance between ease of use and avoiding orphaned events on the calendar. It still requires an organizer to delete a recurrence, so it should only be used for events that truly have a schedule and are unlikely to be cancelled often.
Hopefully this makes Meetable even more useful for different kinds of communities! You can install your own copy of Meetable from the source code on GitHub.
Today I launched support for BlueSky as a new authentication option in IndieLogin.com!
IndieLogin.com is a developer service that allows users to log in to a website with their domain. It delegates the actual user authentication out to various external services, whether that is an IndieAuth server, GitHub, GitLab, Codeberg, or just an email confirmation code, and now also BlueSky.
This means if you have a custom domain as your BlueSky handle, you can now use it to log in to websites like indieweb.org directly!

Alternatively, you can add a link to your BlueSky handle from your website with a rel="me atproto" attribute, similar to how you would link to your GitHub profile from your website.
<a href="https://example.bsky.social" rel="me atproto">example.bsky.social</a>
This is made possible thanks to BlueSky's support of the new OAuth Client ID Metadata Document specification, which was recently adopted by the OAuth Working Group. As the developer of the IndieLogin.com service, I didn't have to register for any BlueSky API keys in order to use the OAuth server! The IndieLogin.com website publishes its own client metadata, which the BlueSky OAuth server fetches directly. This is the same client metadata that an IndieAuth server will parse as well! Aren't standards fun!
The hardest part about the whole process was probably adding DPoP support. Actually creating the DPoP JWT wasn't that bad, but the tricky part was handling the DPoP server nonces sent back. I do wish we had a better solution for that mechanism in DPoP, but I remember the reasoning for doing it this way, and I guess we just have to live with it now.
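If you're curious what that involves, here's a rough sketch of building a DPoP proof JWT per RFC 9449. This is illustrative Python using PyJWT and the cryptography library, not IndieLogin's actual code:

```python
import json
import time
import uuid

import jwt  # PyJWT, with the cryptography package installed
from cryptography.hazmat.primitives.asymmetric import ec

# Generate the client's DPoP keypair (P-256).
private_key = ec.generate_private_key(ec.SECP256R1())
public_jwk = json.loads(jwt.algorithms.ECAlgorithm.to_jwk(private_key.public_key()))

def dpop_proof(method, url, nonce=None):
    """Build a DPoP proof JWT for a single HTTP request (RFC 9449)."""
    claims = {
        "jti": str(uuid.uuid4()),  # unique per proof
        "htm": method,             # the HTTP method of the request
        "htu": url,                # the request URL, without query/fragment
        "iat": int(time.time()),
    }
    if nonce is not None:
        # If the server responds with a DPoP-Nonce header (the fiddly part
        # mentioned above), retry the request with that nonce in the proof.
        claims["nonce"] = nonce
    return jwt.encode(claims, private_key, algorithm="ES256",
                      headers={"typ": "dpop+jwt", "jwk": public_jwk})
```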
This was a fun exercise in implementing a bunch of the specs I've been working on recently!
- OAuth 2.1
- DPoP
- Client ID Metadata Document
- Pushed Authorization Requests
- OAuth for Browser-Based Apps
- Protected Resource Metadata
Here's the link to the full ATProto OAuth docs for reference.
Hello! Earlier this summer I was talking to a friend about how much I love using fish, and how I love that I don't have to configure it. They said that they feel the same way about the helix text editor, and so I decided to give it a try.
I've been using it for 3 months now and here are a few notes.
why helix: language servers
I think what motivated me to try Helix is that I've been trying to get a working language server setup (so I can do things like "go to definition") and getting a setup that feels good in Vim or Neovim just felt like too much work.
After using Vim/Neovim for 20 years, I've tried both "build my own custom configuration from scratch" and "use someone else's pre-built configuration system", and even though I love Vim I was excited about having things just work without having to work on my configuration at all.
Helix comes with built-in language server support, and it feels nice to be able to do things like "rename this symbol" in any language.
the search is great
One of my favourite things about Helix is the search! If I'm searching all the files in my repository for a string, it lets me scroll through the potential matching files and see the full context of the match, like this:
For comparison, here's what the vim ripgrep plugin I've been using looks like:
There's no context for what else is around that line.
the quick reference is nice
One thing I like about Helix is that when I press `g`, I get a little help popup telling me places I can go. I really appreciate this because I don't often use the "go to definition" or "go to reference" feature and I often forget the keyboard shortcut.
some vim -> helix translations
- Helix doesn't have marks like `ma`, `'a`; instead I've been using `Ctrl+O` and `Ctrl+I` to go back (or forward) to the last cursor location
- I think Helix does have macros, but I've been using multiple cursors in every case that I would have previously used a macro. I like multiple cursors a lot more than writing macros all the time. If I want to batch change something in the document, my workflow is to press `%` (to highlight everything), then `s` to select (with a regex) the things I want to change, then I can just edit all of them as needed.
- Helix doesn't have neovim-style tabs, instead it has a nice buffer switcher (`<space>b`) I can use to switch to the buffer I want. There's a pull request here to implement neovim-style tabs. There's also a setting `bufferline = "multiple"` which can act a bit like tabs, with `gp`, `gn` for prev/next "tab" and `:bc` to close a "tab".
some helix annoyances
Here's everything that's annoyed me about Helix so far.
- I like the way Helix's `:reflow` works much less than how vim reflows text with `gq`. It doesn't work as well with lists. (github issue)
- If I'm making a Markdown list, pressing "enter" at the end of a list item won't continue the list. There's a partial workaround for bulleted lists but I don't know one for numbered lists.
- No persistent undo yet: in vim I could use an undofile so that I could undo changes even after quitting. Helix doesn't have that feature yet. (github PR)
- Helix doesn't autoreload files after they change on disk; I have to run `:reload-all` (`:ra<tab>`) to manually reload them. Not a big deal.
- Sometimes it crashes, maybe every week or so. I think it might be this issue.
The "markdown list" and reflowing issues come up a lot for me because I spend a lot of time editing Markdown lists, but I keep using Helix anyway so I guess they can't be making me that mad.
switching was easier than I thought
I was worried that relearning 20 years of Vim muscle memory would be really hard.
It turned out to be easier than I expected. I started using Helix on a vacation for a little low-stakes coding project I was doing on the side, and after a week or two it didn't feel so disorienting anymore. I think it might be hard to switch back and forth between Vim and Helix, but I haven't needed to use Vim recently so I don't know if that'll ever become an issue for me.
The first time I tried Helix, I tried to force it to use keybindings that were more similar to Vim, and that did not work for me. Just learning the "Helix way" was a lot easier.
There are still some things that throw me off: for example, `w` in vim and `w` in Helix don't have the same idea of what a "word" is (the Helix one includes the space after the word, the Vim one doesn't).
using a terminal-based text editor
For many years I'd mostly been using a GUI version of vim/neovim, so switching to actually using an editor in the terminal was a bit of an adjustment.
I ended up deciding on:
- Every project gets its own terminal window, and all of the tabs in that window (mostly) have the same working directory
- I make my Helix tab the first tab in the terminal window
It works pretty well; I might actually like it better than my previous workflow.
my configuration
I appreciate that my configuration is really simple, compared to my neovim configuration, which is hundreds of lines. It's mostly just 4 keyboard shortcuts.
theme = "solarized_light"
[editor]
# Sync clipboard with system clipboard
default-yank-register = "+"
[keys.normal]
# I didn't like that Ctrl+C was the default "toggle comments" shortcut
"#" = "toggle_comments"
# I didn't feel like learning a different way
# to go to the beginning/end of a line so
# I remapped ^ and $
"^" = "goto_first_nonwhitespace"
"$" = "goto_line_end"
[keys.select]
"^" = "goto_first_nonwhitespace"
"$" = "goto_line_end"
[keys.normal.space]
# I write a lot of text so I need to constantly reflow,
# and missed vim's `gq` shortcut
l = ":reflow"
There's a separate `languages.toml` configuration where I set some language preferences, like turning off autoformatting.
For example, here's my Python configuration:
[[language]]
name = "python"
formatter = { command = "black", args = ["--stdin-filename", "%{buffer_name}", "-"] }
language-servers = ["pyright"]
auto-format = false
we'll see how it goes
Three months is not that long, and it's possible that I'll decide to go back to Vim at some point. For example, I wrote a post about switching to nix a while back, but after maybe 8 months I switched back to Homebrew (though I'm still using NixOS to manage one little server, and I'm still satisfied with that).
The IETF OAuth Working Group has adopted the Client ID Metadata Document specification!
This specification defines a mechanism through which an OAuth client can identify itself to authorization servers, without prior dynamic client registration or other existing registration.
Clients identify themselves with their own URL, and host their metadata (name, logo, redirect URL) in a JSON document at that URL. They then use that URL as the client_id to introduce themselves to an authorization server for the first time.
The mechanism of clients identifying themselves as a URL has been in use in IndieAuth for over a decade, and more recently has been adopted by BlueSky for their OAuth API. The recent surge in interest in MCP has further demonstrated the need for this to be a standardized mechanism, and was the main driver in the latest round of discussion for the document! This could replace Dynamic Client Registration in MCP, dramatically simplifying management of clients, as well as enabling servers to limit access to specific clients if they want.
The folks at Stytch put together a really nice explainer website about it too! cimd.dev
Thanks to everyone for your contributions and feedback so far! And thanks to my co-author Emilia Smith for her work on the document!
I just released some updates for Meetable, my open source event listing website.
The major new feature is the ability to let users log in with a Discord account. A Meetable instance can be linked to a Discord server to enable any member of the server to log in to the site. You can also restrict who can log in based on Discord "roles", so you can limit who can edit events to only certain Discord members.
One of the first questions I get about Meetable is whether recurring events are supported. My answer has always been "no". In general, it's too easy for recurring events on community calendars to get stale. If an organizer forgets to cancel an event or just stops showing up, that isn't visible unless someone takes the time to clean up the recurrence. Instead, it's healthier to require that each event be created manually. There is a "clone event" feature that makes it easy to copy all the details from a previous event, to be able to quickly create these sorts of recurring events by hand. In this update, I added a feature to streamline this even further: the next recurrence is now predicted based on the past interval of the event.
For example, for a biweekly cadence, the following steps happen now:
- You would create the first instance manually, say for October 1
- You click "Clone Event" and change the date of the new event to October 15
- Now when you click "Clone Event" on the October 15 event, it will pre-fill October 29 based on the fact that the October 15 event was created 2 weeks after the event it was cloned from
Currently this only works by counting days, so it wouldn't work for things like "first Tuesday of the month" or "the 1st of the month", but I hope this saves some time regardless. If "first Tuesday" or specific days of the month are an important use case for you, let me know and I can try to come up with a solution.
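The prediction itself is just date arithmetic. A minimal sketch, with hypothetical names (Meetable's real code is PHP):

```python
from datetime import date

def predict_next_date(cloned_from: date, current: date) -> date:
    """Pre-fill the clone's date by repeating the gap between this
    event and the event it was cloned from."""
    return current + (current - cloned_from)

# The October 15 event was cloned from October 1 (a 14-day gap), so
# cloning it pre-fills October 29:
print(predict_next_date(date(2025, 10, 1), date(2025, 10, 15)))  # 2025-10-29
```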
Minor changes/fixes below:
- Added "Create New Event" to the "Add Event" dropdown menu because it wasn't obvious "Add Event" was clickable.
- Meeting link no longer appears for cancelled events. (Actually the meeting link only appears for "confirmed" events.)
- If you add a meeting link but don't set a timezone, a warning message appears on the event.
- Added a setting to show a message when uploading a photo; you can use this to describe a photo license policy, for example.
- Added a "user profile" page, and if users are configured to fetch profile info from their website, a button to re-fetch the profile info will appear.
Every time I take a Lyft from the San Francisco airport to downtown going up 101, I notice the billboards. The billboards on 101 are always such a good snapshot in time of the current peak of the Silicon Valley hype cycle. I've decided to capture photos of the billboards every time I am there, to see how this changes over time.
Here's a photo dump from the 101 billboards from August 2025. The theme is clearly AI. Apologies for the slightly blurry photos, these were taken while driving 60mph down the highway, some of them at night.
Hello! After many months of writing deep dive blog posts about the terminal, on Tuesday I released a new zine called "The Secret Rules of the Terminal"!
You can get it for $12 here: https://wizardzines.com/zines/terminal, or get a 15-pack of all my zines here.
Here's the cover:
the table of contents
Here's the table of contents:
why the terminal?
I've been using the terminal every day for 20 years, but even though I'm very confident in the terminal, I've always had a bit of an uneasy feeling about it. Usually things work fine, but sometimes something goes wrong and it just feels like investigating it is impossible, or at least like it would open up a huge can of worms.
So I started trying to write down a list of weird problems I've run into in the terminal, and I realized that the terminal has a lot of tiny inconsistencies like:
- sometimes you can use the arrow keys to move around, but sometimes pressing the arrow keys just prints `^[[D`
- sometimes you can use the mouse to select text, but sometimes you can't
- sometimes your commands get saved to a history when you run them, and sometimes they don't
- some shells let you use the up arrow to see the previous command, and some don't
If you use the terminal daily for 10 or 20 years, even if you don't understand exactly why these things happen, you'll probably build an intuition for them.
But having an intuition for them isn't the same as understanding why they happen. When writing this zine I actually had to do a lot of work to figure out exactly what was happening in the terminal to be able to talk about how to reason about it.
the rules aren't written down anywhere
It turns out that the "rules" for how the terminal works (how do you edit a command you type in? how do you quit a program? how do you fix your colours?) are extremely hard to fully understand, because "the terminal" is actually made of many different pieces of software (your terminal emulator, your operating system, your shell, the core utilities like grep, and every other random terminal program you've installed) which are written by different people with different ideas about how things should work.
So I wanted to write something that would explain:
- how the 4 pieces of the terminal (your shell, terminal emulator, programs, and TTY driver) fit together to make everything work
- some of the core conventions for how you can expect things in your terminal to work
- lots of tips and tricks for how to use terminal programs
this zine explains the most useful parts of terminal internals
Terminal internals are a mess. A lot of it is just the way it is because someone made a decision in the 80s and now it's impossible to change, and honestly I don't think learning everything about terminal internals is worth it.
But some parts are not that hard to understand and can really make your experience in the terminal better, like:
- if you understand what your shell is responsible for, you can configure your shell (or use a different one!) to access your history more easily, get great tab completion, and so much more
- if you understand escape codes, it's much less scary when `cat`ing a binary to stdout messes up your terminal: you can just type `reset` and move on (there's a tiny escape-code example after this list)
- if you understand how colour works, you can get rid of bad colour contrast in your terminal so you can actually read the text
I learned a surprising amount writing this zine
When I wrote How Git Works, I thought I knew how Git worked, and I was right. But the terminal is different. Even though I feel totally confident in the terminal, and even though I've used it every day for 20 years, I had a lot of misunderstandings about how the terminal works and (unless you're the author of tmux or something) I think there's a good chance you do too.
A few things I learned that are actually useful to me:
- I understand the structure of the terminal better, and so I feel more confident debugging weird terminal stuff that happens to me (I was even able to suggest a small improvement to fish!). Identifying exactly which piece of software is causing a weird thing to happen in my terminal still isn't easy, but I'm a lot better at it now.
- you can write a shell script to copy to your clipboard over SSH
- how `reset` works under the hood (it does the equivalent of `stty sane; sleep 1; tput reset`) — basically I learned that I don't ever need to worry about remembering `stty sane` or `tput reset`, and I can just run `reset` instead
- how to look at the invisible escape codes that a program is printing out (run `unbuffer program > out; less out`)
- why the builtin REPLs on my Mac like `sqlite3` are so annoying to use (they use `libedit` instead of `readline`)
blog posts I wrote along the way
As usual these days I wrote a bunch of blog posts about various side quests:
- How to add a directory to your PATH
- "rules" that terminal problems follow
- why pipes sometimes get "stuck": buffering
- some terminal frustrations
- ASCII control characters in my terminal on "what's the deal with Ctrl+A, Ctrl+B, Ctrl+C, etc?"
- entering text in the terminal is complicated
- what's involved in getting a "modern" terminal setup?
- reasons to use your shell's job control
- standards for ANSI escape codes, which is really me trying to figure out if I think the `terminfo` database is serving us well today
people who helped with this zine
A long time ago I used to write zines mostly by myself, but with every project I get more and more help. I met with Marie Claire LeBlanc Flanagan every weekday from September to June to work on this one.
The cover is by Vladimir Kašiković, Lesley Trites did copy editing, Simon Tatham (who wrote PuTTY) did technical review, our Operations Manager Lee did the transcription as well as a million other things, and Jesse Luehrs (who is one of the very few people I know who actually understands the terminal's cursed inner workings) had so many incredibly helpful conversations with me about what is going on in the terminal.
get the zine
Here are some links to get the zine again:
As always, you can get either a PDF version to print at home or a print version shipped to your house. The only caveat is print orders will ship in August — I need to wait for orders to come in to get an idea of how many I should print before sending it to the printer.
I have never been a C programmer, but every so often I need to compile a C/C++ program from source. This has been kind of a struggle for me: for a long time, my approach was basically "install the dependencies, run make, if it doesn't work, either try to find a binary someone has compiled or give up".
"Hope someone else has compiled it" worked pretty well when I was running Linux, but since I've been using a Mac for the last couple of years I've been running into more situations where I have to actually compile programs myself.
So let's talk about what you might have to do to compile a C program! I'll use a couple of examples of specific C programs I've compiled and talk about a few things that can go wrong. Here are three programs we'll be talking about compiling:
step 1: install a C compiler
This is pretty simple: on an Ubuntu system, if I don't already have a C compiler, I'll install one with:
sudo apt-get install build-essential
This installs gcc, g++, and make. The situation on a Mac is more confusing, but it's something like "install the Xcode command line tools".
step 2: install the program's dependencies
Unlike some newer programming languages, C doesn't have a dependency manager. So if a program has any dependencies, you need to hunt them down yourself. Thankfully, because of this, C programmers usually keep their dependencies very minimal, and often the dependencies will be available in whatever package manager you're using.
There's almost always a section explaining how to get the dependencies in the README. For example, in paperjam's README, it says:
To compile PaperJam, you need the headers for the libqpdf and libpaper libraries (usually available as libqpdf-dev and libpaper-dev packages).
You may need `a2x` (found in AsciiDoc) for building manual pages.
So on a Debian-based system you can install the dependencies like this:
sudo apt install -y libqpdf-dev libpaper-dev
If a README gives a name for a package (like libqpdf-dev), I'd basically always assume that they mean "in a Debian-based Linux distro": if you're on a Mac, `brew install libqpdf-dev` will not work. I still have not 100% gotten the hang of developing on a Mac, so I don't have many tips there yet. I guess in this case it would be `brew install qpdf` if you're using Homebrew.
step 3: run ./configure (if needed)
Some C programs come with a Makefile, and some instead come with a script called `./configure`. For example, if you download sqlite's source code, it has a `./configure` script in it instead of a Makefile.
My understanding of this `./configure` script is:
- You run it, it prints out a lot of somewhat inscrutable output, and then it either generates a `Makefile` or fails because you're missing some dependency
- The `./configure` script is part of a system called autotools that I have never needed to learn anything about beyond "run it to generate a `Makefile`"
I think there might be some options you can pass to get the `./configure` script to produce a different Makefile, but I have never done that.
step 4: run make
The next step is to run `make` to try to build the program. Some notes about `make`:
- Sometimes you can run `make -j8` to parallelize the build and make it go faster
- It usually prints out a million compiler warnings when compiling the program. I always just ignore them. I didn't write the software! The compiler warnings are not my problem.
compiler errors are often dependency problems
Here's an error I got while compiling paperjam on my Mac:
/opt/homebrew/Cellar/qpdf/12.0.0/include/qpdf/InputSource.hh:85:19: error: function definition does not declare parameters
85 | qpdf_offset_t last_offset{0};
| ^
Over the years I've learned it's usually best not to overthink problems like this: if it's talking about qpdf, there's a good chance it just means that I've done something wrong with how I'm including the qpdf dependency.
Now let's talk about some ways to get the qpdf dependency included in the right way.
the world's shortest introduction to the compiler and linker
Before we talk about how to fix dependency problems: building C programs is split into 2 steps:
- Compiling the code into object files (with `gcc` or `clang`)
- Linking those object files into a final binary (with `ld`)
It's important to know this when building a C program because sometimes you need to pass the right flags to the compiler and linker to tell them where to find the dependencies for the program you're compiling.
make uses environment variables to configure the compiler and linker
If I run make on my Mac to install paperjam, I get this error:
c++ -o paperjam paperjam.o pdf-tools.o parse.o cmds.o pdf.o -lqpdf -lpaper
ld: library 'qpdf' not found
This is not because qpdf is not installed on my system (it actually is!). But the compiler and linker don't know how to find the qpdf library. To fix this, we need to:
- pass `-I/opt/homebrew/include` to the compiler (to tell it where to find the header files)
- pass `-L/opt/homebrew/lib -liconv` to the linker (to tell it where to find library files and to link in `iconv`)
And we can get make to pass those extra parameters to the compiler and linker using environment variables!
To see how this works: inside paperjam's Makefile you can see a bunch of environment variables, like LDLIBS here:
paperjam: $(OBJS)
$(LD) -o $@ $^ $(LDLIBS)
Everything you put into the LDLIBS environment variable gets passed to the linker (`ld`) as a command line argument.
secret environment variable: CPPFLAGS
Makefiles sometimes define their own environment variables that they pass to the compiler/linker, but make also has a bunch of "implicit" environment variables which it will automatically pass to the C compiler and linker. There's a full list of implicit environment variables here, but one of them is CPPFLAGS, which gets automatically passed to the C compiler.
(Technically it would be more normal to use CXXFLAGS for this, but this particular Makefile hardcodes CXXFLAGS, so setting CPPFLAGS was the only way I could find to set the compiler flags without editing the Makefile.)
two ways to pass environment variables to make
I learned thanks to @zwol that there are actually two ways to pass environment variables to make:
- `CXXFLAGS=xyz make` (the usual way)
- `make CXXFLAGS=xyz`
The difference between them is that `make CXXFLAGS=xyz` will override the value of CXXFLAGS set in the Makefile, but `CXXFLAGS=xyz make` won't.
I'm not sure which way is the norm, but I'm going to use the first way in this post.
how to use CPPFLAGS and LDLIBS to fix this compiler error
Now that we've talked about how CPPFLAGS and LDLIBS get passed to the compiler and linker, here's the final incantation that I used to get the program to build successfully!
CPPFLAGS="-I/opt/homebrew/include" LDLIBS="-L/opt/homebrew/lib -liconv" make paperjam
This passes -I/opt/homebrew/include to the compiler and -L/opt/homebrew/lib -liconv to the linker.
Also, I don't want to pretend that I "magically" knew that those were the right arguments to pass; figuring them out involved a bunch of confused Googling that I skipped over in this post. I will say that:
- the `-I` compiler flag tells the compiler which directory to find header files in, like `/opt/homebrew/include/qpdf/QPDF.hh`
- the `-L` linker flag tells the linker which directory to find libraries in, like `/opt/homebrew/lib/libqpdf.a`
- the `-l` linker flag tells the linker which libraries to link in, like `-liconv` means "link in the `iconv` library", or `-lm` means "link math"
tip: how to just build 1 specific file: make $FILENAME
Yesterday I discovered this cool tool called qf, which you can use to quickly open files from the output of ripgrep.
qf is in a big directory of various tools, but I only wanted to compile qf. So I just compiled qf, like this:
make qf
Basically, if you know (or can guess) the output filename of the file you're trying to build, you can tell make to just build that file by running `make $FILENAME`.
tip: you don't need a Makefile
I sometimes write 5-line C programs with no dependencies, and I just learned that if I have a file called blah.c, I can just compile it like this without creating a Makefile:
make blah
It gets automatically expanded to `cc -o blah blah.c`, which saves a bit of typing. I have no idea if I'm going to remember this (I might just keep typing `gcc -o blah blah.c` anyway) but it seems like a fun trick.
tip: look at how other packaging systems built the same C program
If you're having trouble building a C program, maybe other people had problems building it too! Every Linux distribution has build files for every package that they build, so even if you can't install packages from that distribution directly, maybe you can get tips from that Linux distro for how to build the package. Realizing this (thanks to my friend Dave) was a huge ah-ha moment for me.
For example, this line from the nix package for paperjam says:
env.NIX_LDFLAGS = lib.optionalString stdenv.hostPlatform.isDarwin "-liconv";
This is basically saying "pass the linker flag -liconv to build this on a Mac", so that's a clue we could use to build it.
That same file also says `env.NIX_CFLAGS_COMPILE = "-DPOINTERHOLDER_TRANSITION=1";`. I'm not sure what this means, but when I try to build the paperjam package I do get an error about something called a PointerHolder, so I guess that's somehow related to the "PointerHolder transition".
step 5: installing the binary
Once you've managed to compile the program, probably you want to install it somewhere!
Some Makefiles have an install target that lets you install the tool on your system with `make install`. I'm always a bit scared of this (where is it going to put the files? what if I want to uninstall them later?), so if I'm compiling a pretty simple program I'll often just manually copy the binary to install it instead, like this:
cp qf ~/bin
step 6: maybe make your own package!
Once I figured out how to do all of this, I realized that I could use my new
make knowledge to contribute a paperjam package to Homebrew! Then I could
just brew install paperjam on future systems.
The good thing is that even if the details of all the different packaging systems differ, they fundamentally all use C compilers and linkers.
it can be useful to understand a little about C even if you're not a C programmer
I think all of this is an interesting example of how it can be useful to understand some basics of how C programs work (like "they have header files") even if you're never planning to write a nontrivial C program in your life.
It feels good to have some ability to compile C/C++ programs myself, even
though I'm still not totally confident about all of the compiler and linker
flags and I still plan to never learn anything about how autotools works other
than "you run ./configure to generate the Makefile".
Two things I left out of this post:
- LD_LIBRARY_PATH / DYLD_LIBRARY_PATH (which you use to tell the dynamic linker at runtime where to find dynamically linked files), because I can't remember the last time I ran into an LD_LIBRARY_PATH issue and couldn't find an example
- pkg-config, which I think is important but I don't understand yet (there's a small sketch of the one thing I do know about it below)
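(The one thing I do know about pkg-config: it prints out the compiler and linker flags for a library, assuming the library ships a .pc file. The package name libqpdf here is a guess on my part, and the exact output will vary by system:)
$ pkg-config --cflags --libs libqpdf
-I/opt/homebrew/include -L/opt/homebrew/lib -lqpdf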
I've seen a lot of complaints about how MCP isn't ready for the enterprise.
I agree, although maybe not for the reasons you think. But don't worry, this isn't just a rant! I believe we can fix it!
The good news is the recent updates to the MCP authorization spec that separate out the role of the authorization server from the MCP server have now put the building blocks in place to make this a lot easier.
But let's back up and talk about what enterprise buyers expect when they are evaluating AI tools to bring into their companies.
Single Sign-On
At a minimum, an enterprise admin expects to be able to put an application under their single sign-on system. This enables the company to manage which users are allowed to use which applications, and prevents their users from needing to have their own passwords at the applications. The goal is to get every application managed under their single sign-on (SSO) system. Many large companies have more than 200 applications, so having them all managed through their SSO solution is a lot better than employees having to manage 200 separate passwords, one per application!
There's a lot more than SSO too, like lifecycle management, entitlements, and logout. We're tackling these in the IPSIE working group in the OpenID Foundation. But for the purposes of this discussion, let's stick to the basics of SSO.
So what does this have to do with MCP?
An AI agent using MCP is just another application enterprises expect to be able to integrate into their single-sign-on (SSO) system. Let's take the example of Claude. When rolled out at a company, ideally every employee would log in to their company Claude account using the company identity provider (IdP). This lets the enterprise admin decide how many Claude licenses to purchase and who should be able to use it.
Connecting to External Apps
The next thing that should happen after a user logs in to Claude via SSO is they need to connect Claude to their other enterprise apps. This includes the built-in integrations in Claude like Google Calendar and Google Drive, as well as any MCP servers exposed by other apps in use within the enterprise. That could cover other SaaS apps like Zoom, Atlassian, and Slack, as well as home-grown internal apps.
Today, this process involves a somewhat cumbersome series of steps each individual employee must take. Here's an example of what the user needs to do to connect their AI agent to external apps:
First, the user logs in to Claude using SSO. This involves a redirect from Claude to the enterprise IdP where they authenticate with one or more factors, and then are redirected back.

Next, they need to connect the external app from within Claude. Claude provides a button to initiate the connection. This takes the user to that app (in this example, Google), which redirects them to the IdP to authenticate again, eventually getting redirected back to the app where an OAuth consent prompt is displayed asking the user to approve access, and finally the user is redirected back to Claude and the connection is established.

The user has to repeat these steps for every MCP server that they want to connect to Claude. There are two main problems with this:
- This user experience is not great. That's a lot of clicking that the user has to do.
- The enterprise admin has no visibility or control over the connection established between the two applications.
Both of these are significant problems. If you have even just 10 MCP servers rolled out in the enterprise, you're asking users to click through 10 SSO and OAuth prompts to establish the connections, and it will only get worse as MCP is more widely adopted within apps. But also, should we really be asking the user if it's okay for Claude to access their data in Google Drive? In a company context, that's not actually the user's decision. That decision should be made by the enterprise IT admin.
In "An Open Letter to Third-party Suppliers", Patrick Opet, Chief Information Security Officer of JPMorgan Chase writes:
"Modern integration patterns, however, dismantle these essential boundaries, relying heavily on modern identity protocols (e.g., OAuth) to create direct, often unchecked interactions between third-party services and firms' sensitive internal resources."
Right now, these app-to-app connections are happening behind the back of the IdP. What we need is a way to move the connections between the applications into the IdP where they can be managed by the enterprise admin.
Let's see how this works if we leverage a new (in-progress) OAuth extension called "Identity and Authorization Chaining Across Domains", which I'll refer to as "Cross-App Access" for short, enabling the enterprise IdP to sit in the middle of the OAuth exchange between the two apps.
A Brief Intro to Cross-App Access
In this example, we'll use Claude as the application that is trying to connect to Slack's (hypothetical) MCP server. We'll start with a high-level overview of the flow, and later go over the detailed protocol.
First, the user logs in to Claude through the IdP as normal. This results in Claude getting either an ID token or SAML assertion from the IdP, which tells Claude who the user is. (This works the same for SAML assertions or ID tokens, so I'll use ID tokens in the example from here out.) This is no different than what the user would do today when signing in to Claude.

Then, instead of prompting the user to connect Slack, Claude takes the ID token back to the IdP in a request that says "Claude is requesting access to this user's Slack account."
The IdP validates the ID token, sees it was issued to Claude, and verifies that the admin has allowed Claude to access Slack on behalf of the given user. Assuming everything checks out, the IdP issues a new token back to Claude.

Claude takes the intermediate token from the IdP to Slack saying "hi, I would like an access token for the Slack MCP server. The IdP gave me this token with the details of the user to issue the access token for." Slack validates the token the same way it would have validated an ID token. (Remember, Slack is already configured for SSO to the IdP for this customer as well, so it already has a way to validate these tokens.) Slack is able to issue an access token giving Claude access to this user's resources in its MCP server.

This solves the two big problems:
- The exchange happens entirely without any user interaction, so the user never sees any prompts or any OAuth consent screens.
- Since the IdP sits in between the exchange, this gives the enterprise admin a chance to configure the policies around which applications are allowed this direct connection.
The other nice side effect of this is that since there is no user interaction required, the first time a new user logs in to Claude, all their enterprise apps will be automatically connected without them having to click any buttons!
Cross-App Access Protocol
Now let's look at what this looks like in the actual protocol. This is based on the adopted in-progress OAuth specification "Identity and Authorization Chaining Across Domains". This spec is actually a combination of two RFCs: Token Exchange (RFC 8693), and JWT Profile for Authorization Grants (RFC 7523). Both RFCs as well as the "Identity and Authorization Chaining Across Domains" spec are very flexible. While this means it is possible to apply this to many different use cases, it does mean we need to be a bit more specific in how to use it for this use case. For that purpose, I've written a profile of the Identity Chaining draft called "Identity Assertion Authorization Grant" to fill in the missing pieces for the specific use case detailed here.
Let's go through it step by step. For this example we'll use the following entities:
- Claude - the "Requesting Application", which is attempting to access Slack
- Slack - the "Resource Application", which has the resources being accessed through MCP
- Okta - the enterprise identity provider which users at the example company can use to sign in to both apps

Single Sign-On
First, Claude gets the user to sign in using a standard OpenID Connect (or SAML) flow in order to obtain an ID token. There isn't anything unique to this spec regarding this first stage, so I will skip the details of the OpenID Connect flow and we'll start with the ID token as the input to the next step.
Token Exchange
Claude, the requesting application, then makes a Token Exchange request (RFC 8693) to the IdP's token endpoint with the following parameters:
- requested_token_type: The value urn:ietf:params:oauth:token-type:id-jag indicates that an ID Assertion JWT is being requested.
- audience: The Issuer URL of the Resource Application's authorization server.
- subject_token: The identity assertion (e.g. the OpenID Connect ID Token or SAML assertion) for the target end-user.
- subject_token_type: Either urn:ietf:params:oauth:token-type:id_token or urn:ietf:params:oauth:token-type:saml2 as defined by RFC 8693.
This request will also include the client credentials that Claude would use in a traditional OAuth token request, which could be a client secret or a JWT Bearer Assertion.
POST /oauth2/token HTTP/1.1
Host: acme.okta.com
Content-Type: application/x-www-form-urlencoded
grant_type=urn:ietf:params:oauth:grant-type:token-exchange
&requested_token_type=urn:ietf:params:oauth:token-type:id-jag
&audience=https://auth.slack.com/
&subject_token=eyJraWQiOiJzMTZ0cVNtODhwREo4VGZCXzdrSEtQ...
&subject_token_type=urn:ietf:params:oauth:token-type:id_token
&client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer
&client_assertion=eyJhbGciOiJSUzI1NiIsImtpZCI6IjIyIn0...
ID Assertion Validation and Policy Evaluation
At this point, the IdP evaluates the request and decides whether to issue the requested "ID Assertion JWT". The request will be evaluated based on the validity of the arguments, as well as the policy configured by the customer.
For example, the IdP validates that the ID token in this request was issued to the same client that matches the provided client authentication. It evaluates that the user still exists and is active, and that the user is assigned the Resource Application. Other policies can be evaluated at the discretion of the IdP, just like it can during a single sign-on flow.
If the IdP agrees that the requesting app should be authorized to access the given user's data in the resource app's MCP server, it will respond with a Token Exchange response to issue the token:
HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store
{
"issued_token_type": "urn:ietf:params:oauth:token-type:id-jag",
"access_token": "eyJhbGciOiJIUzI1NiIsI...",
"token_type": "N_A",
"expires_in": 300
}
The claims in the issued JWT are defined in "Identity Assertion Authorization Grant". The JWT is signed using the same key that the IdP signs ID tokens with. This is a critical aspect that makes this work, since again we assumed that both apps would already be configured for SSO to the IdP so would already be aware of the signing key for that purpose.
At this point, Claude is ready to request a token for the Resource App's MCP server.
Access Token Request
The JWT received in the previous request can now be used as a "JWT Authorization Grant" as described by RFC 7523. To do this, Claude makes a request to the MCP authorization server's token endpoint with the following parameters:
- grant_type: urn:ietf:params:oauth:grant-type:jwt-bearer
- assertion: The Identity Assertion Authorization Grant JWT obtained in the previous token exchange step
For example:
POST /oauth2/token HTTP/1.1
Host: auth.slack.com
Authorization: Basic yZS1yYW5kb20tc2VjcmV0v3JOkF0XG5Qx2
grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer
assertion=eyJhbGciOiJIUzI1NiIsI...
Slack's authorization server can now evaluate this request to determine whether to issue an access token. The authorization server can validate the JWT by checking the issuer (iss) in the JWT to determine which enterprise IdP the token is from, and then check the signature using the public key discovered at that server. There are other claims to be validated as well, described in Section 6.1 of the Identity Assertion Authorization Grant.
Assuming all the validations pass, Slack is ready to issue an access token to Claude in the token response:
HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store
{
"token_type": "Bearer",
"access_token": "2YotnFZFEjr1zCsicMWpAA",
"expires_in": 86400
}
This token response is in the same format that Slack's authorization server would use to respond to a traditional OAuth flow. That's another key aspect of this design that makes it scalable. We don't need the resource app to use any particular access token format, since only that server is responsible for validating those tokens.
Now that Claude has the access token, it can make a request to the (hypothetical) Slack MCP server using the bearer token the same way it would have if it got the token using the traditional redirect-based OAuth flow.
Note: Eventually we'll need to define the specific behavior of when to return a refresh token in this token response. The goal is to ensure the client goes through the IdP often enough for the IdP to enforce its access policies. A refresh token could potentially undermine that if the refresh token lifetime is too long. It follows that ultimately the IdP should enforce the refresh token lifetime, so we will need to define a way for the IdP to communicate to the authorization server whether and how long to issue refresh tokens. This would enable the authorization server to make its own decision on access token lifetime, while still respecting the enterprise IdP policy.
Cross-App Access Sequence Diagram
Here's the flow again, this time as a sequence diagram.

- The client initiates a login request
- The user's browser is redirected to the IdP
- The user logs in at the IdP
- The IdP returns an OAuth authorization code to the user's browser
- The user's browser delivers the authorization code to the client
- The client exchanges the authorization code for an ID token at the IdP
- The IdP returns an ID token to the client
At this point, the user is logged in to the MCP client. Everything up until this point has been a standard OpenID Connect flow.
- The client makes a direct Token Exchange request to the IdP to exchange the ID token for a cross-domain "ID Assertion JWT"
- The IdP validates the request and checks the internal policy
- The IdP returns the ID-JAG to the client
- The client makes a token request using the ID-JAG to the MCP authorization server
- The authorization server validates the token using the signing key it also uses for its OpenID Connect flow with the IdP
- The authorization server returns an access token
- The client makes a request with the access token to the MCP server
- The MCP server returns the response
For a more detailed step by step of the flow, see Appendix A.3 of the Identity Assertion Authorization Grant.
Next Steps
If this is something you're interested in, we'd love your help! The in-progress spec is publicly available, and we're looking for people interested in helping prototype it. If you're building an MCP server and you want to make it enterprise-ready, I'd be happy to help you build this!
You can find me at a few related events coming up:
- MCP Night on May 14
- MCP Developers Summit on May 23
- AWS MCP Agents Hackathon on May 30
- Identiverse 2025 on June 3-6
And of course you can always find me on LinkedIn or email me at aaron.parecki@okta.com.
Let's not overthink auth in MCP.
Yes, the MCP server is going to need its own auth server. But it's not as bad as it sounds. Let me explain.
First let's get a few pieces of terminology straight.
The confusion that's happening in the discussions I've seen so far is because the spec and diagrams show that the MCP server itself is handling authorization. That's not necessary.

In OAuth, we talk about the "authorization server" and "resource server" as distinct roles. I like to think of the authorization server as the "token factory", that's the thing that makes the access tokens. The resource server (usually an API) needs to be able to validate the tokens created by the authorization server.

It's possible to build a single server that is both a resource server and authorization server, and in fact many OAuth systems are built that way, especially large consumer services.

But nothing about the spec requires that the two roles are combined; it's also possible to run these as two totally unrelated services.
This flexibility that's been baked into OAuth for over a decade is what has led to the rapid adoption, as well as the proliferation of open source and commercial products that provide an OAuth authorization server as a service.
So how does this relate to MCP?
I can annotate the flow from the Model Context Protocol spec to show the parts where the client talks to the MCP Resource Server separately from where the client talks to the MCP Authorization Server.
Here is the updated sequence diagram showing communication with each role separately.
Why is it important to call out this change?
I've seen a few conversations in various places about how requiring the MCP Server to be both an authorization server and resource server is too much of a burden. But actually, very little needs to change about the spec to enable this separation of concerns that OAuth already provides.
I've also seen various suggestions of other ways to separate the authorization server from the MCP server, like delegating to an enterprise IdP and having the MCP server validate access tokens issued by the IdP. These other options also conflate the OAuth roles in an awkward way and would result in some undesirable properties or relationships between the various parties involved.
So what needs to change in the MCP spec to enable this?
Discovery
The main thing currently forcing the MCP Server to be both the authorization server and resource server is how the client does discovery.
One design goal of MCP is to enable a client to bootstrap everything it needs based on only the server URL provided. I think this is a great design goal, and luckily is something that can be achieved even when separating the roles in the way I've described.
The MCP spec currently says that clients are expected to fetch the OAuth Server Metadata (RFC8414) file from the MCP Server base URL, resulting in a URL such as:
https://example.com/.well-known/oauth-authorization-server
This ends up meaning the MCP Resource Server must also be an Authorization Server, which leads to the complications the community has encountered so far. The good news is there is an OAuth spec we can apply here instead: Protected Resource Metadata.
Protected Resource Metadata
The Protected Resource Metadata spec is used by a Resource Server to advertise metadata about itself, including which Authorization Server can be used with it. This spec is both new and old. It was started in 2016, but wasn't adopted by the OAuth working group until 2023, after I had presented at an IETF meeting about the need for clients to be able to bootstrap OAuth flows given an OAuth resource server. The spec is now awaiting publication as an RFC, and should get its RFC number in a couple months. (Update: This became RFC 9728 on April 23, 2025!)
Applying this to the MCP server would result in a sequence like the following:
- The MCP Client fetches the Resource Server Metadata file by appending /.well-known/oauth-protected-resource to the MCP Server base URL (see the curl sketch after this list)
- The MCP Client finds the authorization_servers property in the JSON response, and builds the Authorization Server Metadata URL by appending /.well-known/oauth-authorization-server
- The MCP Client fetches the Authorization Server Metadata to find the endpoints it needs for the OAuth flow, the authorization endpoint and token endpoint
- The MCP Client initiates an OAuth flow and continues as normal
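As a rough sketch of what the first two discovery steps look like on the wire (example.com and the JSON bodies here are illustrative, not from any real server):
$ curl https://example.com/.well-known/oauth-protected-resource
{"resource": "https://example.com",
 "authorization_servers": ["https://auth.example.com"]}

$ curl https://auth.example.com/.well-known/oauth-authorization-server
{"issuer": "https://auth.example.com",
 "authorization_endpoint": "https://auth.example.com/authorize",
 "token_endpoint": "https://auth.example.com/token"}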
Note: The Protected Resource Metadata spec also supports the Resource Server returning WWW-Authenticate with a link to the resource metadata URL, if you want to avoid requiring MCP Servers to host their metadata at the .well-known endpoint; it just takes an extra HTTP request to support this.
Access Token Validation
Two things to keep in mind about how the MCP Server validates access tokens with this new separation of concerns.
If you do build the MCP Authorization Server and Resource Server as part of the same system, you don't need to do anything special to validate the access tokens the Authorization Server issues. You probably already have some sort of infrastructure in place for your normal API to validate tokens issued by your Authorization Server, so nothing changes there.
If you are using an external Authorization Server, whether that's an open source product or a commercial hosted service, that product will have its own docs for how you can validate the tokens it creates. There's a good chance it already supports the standardized JWT Access Tokens described in RFC 9068, in which case you can use off-the-shelf JWT validation middleware for common frameworks.
In either case, the critical design goal here is that the MCP Authorization Server issues access tokens that only ever need to be validated by the MCP Resource Server. This is in line with the security recommendations in Section 2.3 of RFC 9700, in particular that "access tokens SHOULD be audience-restricted to a specific resource server". In other words, it would be a bad idea for the MCP Client to be issued an access token that works with both the MCP Resource Server and the service's REST API.
Why Require the MCP Server to have an Authorization Server in the first place?
Another argument I've seen is that MCP Server developers shouldn't have to build any OAuth infrastructure at all, instead they should be able to delegate all the OAuth bits to an external service.
In principle, I agree. Getting API access and authorization right is tricky, that's why there are entire companies dedicated to solving the problem.
The architecture laid out above enables this exact separation of concerns. The difference between this architecture and some of the other proposals I've seen is that this cleanly separates the security boundaries so that there are minimal dependencies among the parties involved.
But, one thing I haven't seen mentioned in the discussions is that there actually is no requirement that an OAuth Authorization Server provide any UI itself.
An Authorization Server with no UI?
While it is desirable from a security perspective that the MCP Resource Server has a corresponding Authorization Server that issues access tokens for it, that Authorization Server doesn't actually need to have any UI or even any concept of user login or accounts. You can actually build an Authorization Server that delegates all user account management to an external service. You can see an example of this in PayPal's MCP server they recently launched.
PayPal's traditional API already supports OAuth; the authorization and token endpoints are:
- https://www.paypal.com/signin/authorize
- https://api-m.paypal.com/v1/oauth2/token
When PayPal built their MCP server, they launched it at https://mcp.paypal.com. If you fetch the metadata for the MCP Server, you'll find the two OAuth endpoints for the MCP Authorization Server:
- https://mcp.paypal.com/authorize
- https://mcp.paypal.com/token
When the MCP Client redirects the user to the authorization endpoint, the MCP server itself doesn't provide any UI. Instead, it immediately redirects the user to the real PayPal authorization endpoint which then prompts the user to log in and authorize the client.

This points to yet another benefit of architecting the MCP Authorization Server and Resource Server this way. It enables implementers to delegate the actual user management to their existing OAuth server with no changes needed to the MCP Client. The MCP Client isn't even aware that this extra redirect step was inserted in the middle. As far as the MCP Client is concerned, it has been talking to only the MCP Authorization Server. It just so happens that the MCP Authorization Server has sent the user elsewhere to actually log in.
Dynamic Client Registration
There's one more point I want to make about why having a dedicated MCP Authorization Server is helpful architecturally.
The MCP spec strongly recommends that MCP Servers (authorization servers) support Dynamic Client Registration. If MCP is successful, there will be a large number of MCP Clients talking to a large number of MCP Servers, and the user is the one deciding which combinations of clients and servers to use. This means it is not scalable to require that every MCP Client developer register their client with every MCP Server.
This is similar to the idea of using an email client with the user's chosen email server. Obviously Mozilla can't register Thunderbird with every email server out there. Instead, there needs to be a way to dynamically establish a client's identity with the OAuth server at runtime. Dynamic Client Registration is one option for how to do that.
The problem is most commercial APIs are not going to enable Dynamic Client Registration on their production servers. For example, in order to get client credentials to use the Google APIs, you need to register as a developer and then register an OAuth client after logging in. Dynamic Client Registration would allow a client to register itself without the link to the developer's account. That would mean there is no paper trail for who the client was developed by. The Dynamic Client Registration endpoint can't require authentication by definition, so it is a public endpoint that can create clients, which as you can imagine opens up some potential security issues.
I do, however, think it would be reasonable to expect production services to enable Dynamic Client Registration only on the MCP's Authorization Server. This way the dynamically-registered clients wouldn't be able to use the regular REST API, but would only be able to interact with the MCP API.
Mastodon and BlueSky also have a similar problem of needing clients to show up at arbitrary authorization servers without prior coordination between the client developer and authorization server operator. I call this the "OAuth for the Open Web" problem. Mastodon used Dynamic Client Registration as their solution, and has since documented some of the issues that this creates, linked here and here.
BlueSky decided to take a different approach and instead uses an https URL as a client identifier, bypassing the need for a client registration step entirely. This has the added bonus of providing at least some level of confidence in the client's identity, because the client identity is hosted at a domain. It would be a perfectly viable approach to use this method for MCP as well. There is a discussion on that within MCP here. This is an ongoing topic within the OAuth working group; I have a couple of drafts in progress to formalize this pattern: Client ID Metadata Document and Client ID Scheme.
Enterprise IdP Integration
Lastly, I want to touch on the idea of enabling users to log in to MCP Servers with their enterprise IdP.
When an enterprise company purchases software, they expect to be able to tie it in to their single-sign-on solution. For example, when I log in to work Slack, I enter my work email and Slack redirects me to my work IdP where I log in. This way employees don't need to have passwords with every app they use in the enterprise, they can log in to everything with the same enterprise account, and all the apps can be protected with multi-factor authentication through the IdP. This also gives the company control over which users can access which apps, as well as a way to revoke a user's access at any time.
So how does this relate to MCP?
Well, plenty of people are already trying to figure out how to let their employees safely use AI tools within the enterprise. So we need a way to let employees use their enterprise IdP to log in and authorize MCP Clients to access MCP Servers.
If you're building an MCP Server in front of an existing application that already supports enterprise Single Sign-On, then you don't need to do anything differently in the MCP Client or Server and you already have support for this. When the MCP Client redirects to the MCP Authorization Server, the MCP Authorization Server redirects to the main Authorization Server, which would then prompt the user for their company email/domain and redirect to the enterprise IdP to log in.
This brings me to yet another thing I've been seeing conflated in the discussions: user login and user authorization.
OAuth is an authorization delegation protocol. OAuth doesn't actually say anything about how users authenticate at the OAuth server, it only talks about how the user can authorize access to an application. This is actually a really great thing, because it means we can get super creative with how users authenticate.

Remember the yellow box "User logs in and authorizes" from the original sequence diagram? These are actually two totally distinct steps. The OAuth authorization server is responsible for getting the user to log in somehow, but there's no requirement that how the user logs in is with a username/password. This is where we can insert a single-sign-on flow to an enterprise IdP, or really anything you can imagine.
So think of this as two separate boxes: "user logs in", and "user authorizes". Then, we can replace the "user logs in" box with an entirely new OpenID Connect flow out to the enterprise IdP to log the user in, and after they are logged in they can authorize the client.

I'll spare you the complete expanded sequence diagram, since it looks a lot more complicated than it actually is. But I again want to stress that this is nothing new, this is already how things are commonly done today.
This all just becomes cleaner to understand when you separate the MCP Authorization Server from the MCP Resource Server.
We can push all the complexity of user login, token minting, and more onto the MCP Authorization Server, keeping the MCP Resource Server free to do the much simpler task of validating access tokens and serving resources.
Future Improvements of Enterprise IdP Integration
There are two things I want to call out about how enterprise IdP integration could be improved. Both of these are entire topics on their own, so I will only touch on the problems and link out to other places where work is happening to solve them.
There are two points of friction with the current state of enterprise login for SaaS apps.
- IdP discovery
- User consent
IdP Discovery
When a user logs in to a SaaS app, they need to tell the app how to find their enterprise IdP. This is commonly done by either asking the user to enter their work email, or asking the user to enter their tenant URL at the service.

Neither of these is really a great user experience. It would be a lot better if the browser already knew which enterprise IdP the user should be sent to. This is one of my goals with the work happening in FedCM. With this new browser API, the browser can mediate the login, automatically telling the SaaS app which enterprise IdP to use, with the user only needing to click their account icon rather than type anything in.
User Consent
Another point of friction in the enterprise happens when a user starts connecting multiple applications to each other within the company. For example, if you drop a Google Docs link into Slack, Slack will prompt you to connect your Google account to preview the link. Multiply this by N applications that can preview links, and M applications you might drop links into, and you end up sending the user through a huge number of OAuth consent flows.
The problem is only made worse with the explosion of AI tools. Every AI tool will need access to data in every other application in the enterprise. That is a lot of OAuth consent flows for the user to manage. Plus, the user shouldn't really be the one granting consent for Slack to access the company Google Docs account anyway. That consent should ideally be managed by the enterprise IT admin.
What we actually need is a way to enable the IT admin to grant consent for apps to talk to each other company-wide, removing the need for users to be sent through an OAuth flow at all.
This is the basis of another OAuth spec I've been working on, the Identity Assertion Authorization Grant.
The same problem applies to MCP Servers, and with the separation of concerns laid out above, it becomes straightforward to add this extension to move the consent to the enterprise and streamline the user experience.
Get in touch!
If these sound like interesting problems, please get in touch! You can find me on LinkedIn or reach me via email at aaron@parecki.com.
Hello! Today I want to talk about ANSI escape codes.
For a long time I was vaguely aware of ANSI escape codes ("that's how you make text red in the terminal and stuff") but I had no real understanding of where they were supposed to be defined or whether or not there were standards for them. I just had a kind of vague "there be dragons" feeling around them. While learning about the terminal this year, I've learned that:
- ANSI escape codes are responsible for a lot of usability improvements in the terminal (did you know there's a way to copy to your system clipboard when SSHed into a remote machine?? It's an escape code called OSC 52!)
- They aren't completely standardized, and because of that they don't always work reliably. And because they're also invisible, it's extremely frustrating to troubleshoot escape code issues.
So I wanted to put together a list for myself of some standards that exist around escape codes, because I want to know if they have to feel unreliable and frustrating, or if there's a future where we could all rely on them with more confidence.
- what's an escape code?
- ECMA-48
- xterm control sequences
- terminfo
- should programs use terminfo?
- is there a "single common set" of escape codes?
- some reasons to use terminfo
- some more documents/standards
- why I think this is interesting
what's an escape code?
Have you ever pressed the left arrow key in your terminal and seen ^[[D?
That's an escape code! It's called an "escape code" because the first character
is the "escape" character, which is usually written as ESC, \x1b, \E,
\033, or ^[.
Escape codes are how your terminal emulator communicates various kinds of information (colours, mouse movement, etc) with programs running in the terminal. There are two kinds of escape codes:
- input codes which your terminal emulator sends for keypresses or mouse movements that don't fit into Unicode. For example "left arrow key" is ESC[D, "Ctrl+left arrow" might be ESC[1;5D, and clicking the mouse might be something like ESC[M :3.
- output codes which programs can print out to colour text, move the cursor around, clear the screen, hide the cursor, copy text to the clipboard, enable mouse reporting, set the window title, etc. (There's a quick demo of both kinds right after this list.)
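If you want to see both kinds for yourself, here's a quick demo to paste into a shell (this should work in most terminal emulators, though support for any given code varies):
# output code: \033[31m sets the text colour to red, \033[0m resets it
printf '\033[31mthis text is red\033[0m\n'
# input codes: run cat with no arguments and press the arrow keys --
# most terminals will echo sequences like ^[[D back at you (Ctrl-C to quit)
cat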
Now let's talk about standards!
ECMA-48
The first standard I found relating to escape codes was ECMA-48, which was originally published in 1976.
ECMA-48 does two things:
- Define some general formats for escape codes (like "CSI" codes, which are ESC[ + something, and "OSC" codes, which are ESC] + something)
- Define some specific escape codes, like how "move the cursor to the left" is ESC[D, or "turn text red" is ESC[31m. In the spec, the "cursor left" one is called CURSOR LEFT and the one for changing colours is called SELECT GRAPHIC RENDITION.
The formats are extensible, so there's room for others to define more escape codes in the future. Lots of escape codes that are popular today aren't defined in ECMA-48: for example it's pretty common for terminal applications (like vim, htop, or tmux) to support using the mouse, but ECMA-48 doesn't define escape codes for the mouse.
xterm control sequences
There are a bunch of escape codes that aren't defined in ECMA-48, for example:
- enabling mouse reporting (where did you click in your terminal?)
- bracketed paste (did you paste that text or type it in?)
- OSC 52 (which terminal applications can use to copy text to your system clipboard)
I believe (correct me if I'm wrong!) that these and some others came from xterm, are documented in XTerm Control Sequences, and have been widely implemented by other terminal emulators.
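For example, here's roughly what using OSC 52 looks like from a shell, assuming your terminal has OSC 52 support enabled (the format is ESC ] 52 ; c ; <base64-encoded text> BEL):
# copy the string "hello" to the system clipboard via OSC 52
printf '\033]52;c;%s\a' "$(printf 'hello' | base64)"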
This list of "what xterm supports" is not a standard exactly, but xterm is extremely influential and so it seems like an important document.
terminfo
In the 80s (and to some extent today, but my understanding is that it was MUCH more dramatic in the 80s) there was a huge amount of variation in what escape codes terminals actually supported.
To deal with this, there's a database of escape codes for various terminals called "terminfo".
It looks like the standard for terminfo is called X/Open Curses, though you need to create an account to view that standard for some reason. It defines the database format as well as a C library interface ("curses") for accessing the database.
For example you can run this bash snippet to see every possible escape code for "clear screen" for all of the different terminals your system knows about:
for term in $(toe -a | awk '{print $1}')
do
echo $term
infocmp -1 -T "$term" 2>/dev/null | grep 'clear=' | sed 's/clear=//g;s/,//g'
done
On my system (and probably every system I've ever used?), the terminfo database is managed by ncurses.
should programs use terminfo?
I think it's interesting that there are two main approaches that applications take to handling ANSI escape codes:
- Use the terminfo database to figure out which escape codes to use, depending on what's in the TERM environment variable. Fish does this, for example. (See the tput example after this list.)
- Identify a "single common set" of escape codes which works in "enough" terminal emulators and just hardcode those.
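Approach #1 is what you get with the tput command, which looks up escape codes in the terminfo database based on your TERM. A minimal example:
tput setaf 1    # print the "set foreground colour to red" code for this terminal
echo "this is red"
tput sgr0       # print the "reset colours and attributes" code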
Some programs/libraries take approach #2 ("don't use terminfo") and just hardcode a common set of escape codes instead.
I got curious about why folks might be moving away from terminfo and I found this very interesting and extremely detailed rant about terminfo from one of the fish maintainers, which argues that:
[the terminfo authors] have done a lot of work that, at the time, was extremely important and helpful. My point is that it no longer is.
I'm not going to do it justice so I'm not going to summarize it, I think it's worth reading.
is there a "single common set" of escape codes?
I was just talking about the idea that you can use a "common set" of escape codes that will work for most people. But what is that set? Is there any agreement?
I really do not know the answer to this at all, but from doing some reading it seems like it's some combination of:
- The codes that the VT100 supported (though some aren't relevant on modern terminals)
- what's in ECMA-48 (which I think also has some things that are no longer relevant)
- What xterm supports (though I'd guess that not everything in there is actually widely supported enough)
and maybe ultimately "identify the terminal emulators you think your users are going to use most frequently and test in those", the same way web developers do when deciding which CSS features are okay to use.
I don't think there are any resources like Can I use...? or Baseline for the terminal though. (in theory terminfo is supposed to be the "caniuse" for the terminal but it seems like it often takes 10+ years to add new terminal features when people invent them which makes it very limited)
some reasons to use terminfo
I also asked on Mastodon why people found terminfo valuable in 2025 and got a few reasons that made sense to me:
- some people expect to be able to use the TERM environment variable to control how programs behave (for example with TERM=dumb), and there's no standard for how that should work in a post-terminfo world
- even though there's less variation between terminal emulators than there was in the 80s, there's far from zero variation: there are graphical terminals, the Linux framebuffer console, the situation you're in when connecting to a server via its serial console, Emacs shell mode, and probably more that I'm missing
- there is no one standard for what the "single common set" of escape codes is, and sometimes programs use escape codes which aren't actually widely supported enough
terminfo & user agent detection
The way that ncurses uses the TERM environment variable to decide which
escape codes to use reminds me of how webservers used to sometimes use the
browser user agent to decide which version of a website to serve.
It also seems like it's had some of the same results: the way iTerm2 reports itself as being "xterm-256color" feels similar to how Safari's user agent is "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_7_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.3 Safari/605.1.15". In both cases the terminal emulator / browser ends up changing its user agent to get around user agent detection that isn't working well.
On the web we ended up deciding that user agent detection was not a good practice and to instead focus on standardization so we can serve the same HTML/CSS to all browsers. I don't know if the same approach is the future in the terminal though: I think the terminal landscape today is much more fragmented than the web ever was, as well as being much less well funded.
some more documents/standards
A few more documents and standards related to escape codes, in no particular order:
- the Linux console_codes man page documents escape codes that Linux supports
- how the VT 100 handles escape codes & control sequences
- the kitty keyboard protocol
- OSC 8 for links in the terminal (and notes on adoption)
- A summary of ANSI standards from tmux
- this terminal features reporting specification from iTerm
- sixel graphics
why I think this is interesting
I sometimes see people saying that the unix terminal is "outdated", and since I love the terminal so much I'm always curious about what incremental changes might make it feel less "outdated".
Maybe if we had a clearer standards landscape (like we do on the web!) it would be easier for terminal emulator developers to build new features and for authors of terminal applications to more confidently adopt those features so that we can all benefit from them and have a richer experience in the terminal.
Obviously standardizing ANSI escape codes is not easy (ECMA-48 was first published almost 50 years ago and we're still not there!). I don't even know what all of the challenges are. But the situation with HTML/CSS/JS used to be extremely bad too and now it's MUCH better, so maybe there's hope.
I was talking to a friend about how to add a directory to your PATH today. It's
something that feels "obvious" to me since I've been using the terminal for a
long time, but when I searched for instructions for how to do it, I actually
couldn't find something that explained all of the steps: a lot of them just
said "add this to ~/.bashrc", but what if you're not using bash? What if your
bash config is actually in a different file? And how are you supposed to figure
out which directory to add anyway?
So I wanted to try to write down some more complete directions and mention some of the gotchas I've run into over the years.
Here's a table of contents:
- step 1: what shell are you using?
- step 2: find your shell's config file
- step 3: figure out which directory to add
- step 4: edit your shell config
- step 5: restart your shell
- problems:
- notes:
step 1: what shell are you using?
If you're not sure what shell you're using, here's a way to find out. Run this:
ps -p $$ -o pid,comm=
- if you're using bash, it'll print out 97295 bash
- if you're using zsh, it'll print out 97295 zsh
- if you're using fish, it'll print out an error like "In fish, please use $fish_pid" ($$ isn't valid syntax in fish, but in any case the error message tells you that you're using fish, which you probably already knew)
Also bash is the default on Linux and zsh is the default on Mac OS (as of 2024). I'll only cover bash, zsh, and fish in these directions.
step 2: find your shell's config file
- in zsh, it's probably ~/.zshrc
- in bash, it might be ~/.bashrc, but it's complicated, see the note in the next section
- in fish, it's probably ~/.config/fish/config.fish (you can run echo $__fish_config_dir if you want to be 100% sure)
a note on bash's config file
Bash has three possible config files: ~/.bashrc, ~/.bash_profile, and ~/.profile.
If you're not sure which one your system is set up to use, I'd recommend testing this way:
- add echo hi there to your ~/.bashrc
- Restart your terminal
- If you see "hi there", that means ~/.bashrc is being used! Hooray!
- Otherwise remove it and try the same thing with ~/.bash_profile
- You can also try ~/.profile if the first two options don't work.
(there are a lot of elaborate flow charts out there that explain how bash decides which config file to use, but IMO it's not worth internalizing them; just testing is the fastest way to be sure)
step 3: figure out which directory to add
Let's say that you're trying to install and run a program called http-server
and it doesn't work, like this:
$ npm install -g http-server
$ http-server
bash: http-server: command not found
How do you find what directory http-server is in? Honestly in general this is
not that easy: often the answer is something like "it depends on how npm is
configured". A few ideas:
- Often when setting up a new installer (like cargo, npm, homebrew, etc), when you first set it up it'll print out some directions about how to update your PATH. So if you're paying attention you can get the directions then.
- Sometimes installers will automatically update your shell's config file to update your PATH for you
- Sometimes just Googling "where does npm install things?" will turn up the answer
- Some tools have a subcommand that tells you where they're configured to install things (there's a quick check right after this list), like:
  - Node/npm: npm config get prefix (then append /bin/)
  - Go: go env GOPATH (then append /bin/)
  - asdf: asdf info | grep ASDF_DIR (then append /bin/ and /shims/)
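For example, with npm you can combine that subcommand with ls to see what actually landed in that directory (assuming you're in bash or zsh):
ls "$(npm config get prefix)/bin"
# if http-server was installed globally, it should show up in this listing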
step 3.1: double check it's the right directory
Once you've found a directory you think might be the right one, make sure it's
actually correct! For example, I found out that on my machine, http-server is
in ~/.npm-global/bin. I can make sure that it's the right directory by trying to
run the program http-server in that directory like this:
$ ~/.npm-global/bin/http-server
Starting up http-server, serving ./public
It worked! Now that you know what directory you need to add to your PATH,
let's move to the next step!
step 4: edit your shell config
Now we have the 2 critical pieces of information we need:
- Which directory you're trying to add to your PATH (like ~/.npm-global/bin/)
- Where your shell's config is (like ~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish)
Now what you need to add depends on your shell:
bash instructions:
Open your shell's config file, and add a line like this:
export PATH=$PATH:~/.npm-global/bin/
(obviously replace ~/.npm-global/bin with the actual directory you're trying to add)
zsh instructions:
You can do the same thing as in bash, but zsh also has some slightly fancier syntax you can use if you prefer:
path=(
$path
~/.npm-global/bin
)
fish instructions:
In fish, the syntax is different:
set PATH $PATH ~/.npm-global/bin
(in fish you can also use fish_add_path, some notes on that further down)
step 5: restart your shell
Now, an extremely important step: updating your shell's config won't take effect if you don't restart it!
Two ways to do this:
- open a new terminal (or terminal tab), and maybe close the old one so you don't get confused
- Run bash to start a new shell (or zsh if you're using zsh, or fish if you're using fish)
I've found that both of these usually work fine.
And you should be done! Try running the program you were trying to run and hopefully it works now.
If not, here are a couple of problems that you might run into:
problem 1: it ran the wrong program
If the wrong version of a program is running, you might need to add the directory to the beginning of your PATH instead of the end.
For example, on my system I have two versions of python3 installed, which I
can see by running which -a:
$ which -a python3
/usr/bin/python3
/opt/homebrew/bin/python3
The one your shell will use is the first one listed.
If you want to use the Homebrew version, you need to add that directory
(/opt/homebrew/bin) to the beginning of your PATH instead, by putting this in
your shell's config file (it's /opt/homebrew/bin/:$PATH instead of the usual $PATH:/opt/homebrew/bin/)
export PATH=/opt/homebrew/bin/:$PATH
or in fish:
set PATH /opt/homebrew/bin $PATH
problem 2: the program isn't being run from your shell
All of these directions only work if you're running the program from your shell. If you're running the program from an IDE, from a GUI, in a cron job, or some other way, you'll need to add the directory to your PATH in a different way, and the exact details might depend on the situation.
in a cron job
Some options:
- use the full path to the program you're running, like /home/bork/bin/my-program
- put the full PATH you want as the first line of your crontab (something like PATH=/bin:/usr/bin:/usr/local/bin:...; there's a tiny example below). You can get the full PATH you're using in your shell by running echo "PATH=$PATH".
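Here's a tiny illustrative crontab using the second option (the PATH value and my-program are made up):
# cron doesn't read your shell config, so set PATH explicitly at the top
PATH=/usr/local/bin:/usr/bin:/bin:/home/bork/bin
# run my-program at minute 0 of every hour
0 * * * * my-program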
I'm honestly not sure how to handle it in an IDE/GUI because I haven't run into that in a long time, will add directions here if someone points me in the right direction.
problem 3: duplicate PATH entries making it harder to debug
If you edit your path and start a new shell by running bash (or zsh, or
fish), you'll often end up with duplicate PATH entries, because the shell
keeps adding new things to your PATH every time you start your shell.
Personally I don't think I've run into a situation where this kind of
duplication breaks anything, but the duplicates can make it harder to debug
what's going on with your PATH if you're trying to understand its contents.
Some ways you could deal with this:
- If you're debugging your PATH, open a new terminal to do it in so you get a "fresh" state. This should avoid the duplication.
- Deduplicate your PATH at the end of your shell's config (for example in zsh apparently you can do this with typeset -U path)
- Check that the directory isn't already in your PATH when adding it (for example in fish I believe you can do this with fish_add_path --path /some/directory)
How to deduplicate your PATH is shell-specific and there isn't always a
built-in way to do it, so you'll need to look up how to accomplish it in your
shell.
problem 4: losing your history after updating your PATH
Here's a situation that's easy to get into in bash or zsh:
- Run a command (it fails)
- Update your PATH
- Run bash to reload your config
- Press the up arrow a couple of times to rerun the failed command (or open a new terminal)
- The failed command isn't in your history! Why not?
This happens because in bash, by default, history is not saved until you exit the shell.
Some options for fixing this:
- Instead of running bash to reload your config, run source ~/.bashrc (or source ~/.zshrc in zsh). This will reload the config inside your current session.
- Configure your shell to continuously save your history instead of only saving the history when the shell exits. (How to do this depends on whether you're using bash or zsh; the history options in zsh are a bit complicated and I'm not exactly sure what the best way is)
a note on source
When you install cargo (Rust's installer) for the first time, it gives you
these instructions for how to set up your PATH, which don't mention a specific
directory at all.
This is usually done by running one of the following (note the leading DOT):
. "$HOME/.cargo/env" # For sh/bash/zsh/ash/dash/pdksh
source "$HOME/.cargo/env.fish" # For fish
The idea is that you add that line to your shell's config, and their script
automatically sets up your PATH (and potentially other things) for you.
This is pretty common (for example Homebrew suggests you eval brew shellenv), and there are
two ways to approach this:
- Just do what the tool suggests (like adding . "$HOME/.cargo/env" to your shell's config)
- Figure out which directories the script they're telling you to run would add to your PATH, and then add those manually. Here's how I'd do that:
  - Run . "$HOME/.cargo/env" in my shell (or the fish version if using fish)
  - Run echo "$PATH" | tr ':' '\n' | grep cargo to figure out which directories it added
  - See that it says /Users/bork/.cargo/bin and shorten that to ~/.cargo/bin
  - Add the directory ~/.cargo/bin to PATH (with the directions in this post)
I don't think there's anything wrong with doing what the tool suggests (it might be the "best way"!), but personally I usually use the second approach because I prefer knowing exactly what configuration I'm changing.
a note on fish_add_path
fish has a handy function called fish_add_path that you can run to add a directory to your PATH like this:
fish_add_path /some/directory
This is cool (it's such a simple command!) but I've stopped using it for a couple of reasons:
- Sometimes fish_add_path will update the PATH for every session in the future (with a "universal variable") and sometimes it will update the PATH just for the current session, and it's hard for me to tell which one it will do. In theory the docs explain this but I could not understand them.
- If you ever need to remove the directory from your PATH a few weeks or months later because maybe you made a mistake, it's kind of hard to do (there are instructions in the comments of this github issue though, and a rough sketch below).
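Here's a rough sketch of how I'd inspect and undo what fish_add_path did, assuming it stored the directory in the fish_user_paths universal variable (which I believe is its default behaviour):
# show everywhere fish_user_paths is defined and what's in it
set --show fish_user_paths
# remove one directory by its index (match it to what --show printed)
set -e fish_user_paths[1]
# or erase the whole variable to start over
set -e fish_user_paths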
that's all
Hopefully this will help some people. Let me know (on Mastodon or Bluesky) if there are other major gotchas that have tripped you up when adding a directory to your PATH, or if you have questions about this post!
A few weeks ago I ran a terminal survey (you can read the results here) and at the end I asked:
What's the most frustrating thing about using the terminal for you?
1600 people answered, and I decided to spend a few days categorizing all the responses. Along the way I learned that classifying qualitative data is not easy but I gave it my best shot. I ended up building a custom tool to make it faster to categorize everything.
As with all of my surveys the methodology isn't particularly scientific. I just posted the survey to Mastodon and Twitter, ran it for a couple of days, and got answers from whoever happened to see it and felt like responding.
Here are the top categories of frustrations!
I think it's worth keeping in mind while reading these comments that
- 40% of people answering this survey have been using the terminal for 21+ years
- 95% of people answering the survey have been using the terminal for at least 4 years
These comments aren't coming from total beginners.
Here are the categories of frustrations! The number in brackets is the number of people with that frustration. I'm mostly writing this up for myself because I'm trying to write a zine about the terminal and I wanted to get a sense for what people are having trouble with.
remembering syntax (115)
People talked about struggles remembering:
- the syntax for CLI tools like awk, jq, sed, etc
- the syntax for redirects
- keyboard shortcuts for tmux, text editing, etc
One example comment:
There are just so many little "trivia" details to remember for full functionality. Even after all these years I'll sometimes forget whether it's 2 or 1 for stderr, or forget which is which for > and >>.
switching terminals is hard (91)
People talked about struggling with switching systems (for example home/work computer or when SSHing) and running into:
- OS differences in keyboard shortcuts (like Linux vs Mac)
- systems which don't have their preferred text editor ("no vim" or "only vim")
- different versions of the same command (like Mac OS grep vs GNU grep)
- no tab completion
- a shell they aren't used to ("the subtle differences between zsh and bash")
as well as differences inside the same system, like pagers not being consistent with each other (git diff pagers, other pagers).
One example comment:
I got used to fish and vi mode which are not available when I ssh into servers, containers.
color (85)
Lots of problems with color, like:
- programs setting colors that are unreadable with a light background color
- finding a colorscheme they like (and getting it to work consistently across different apps)
- color not working inside several layers of SSH/tmux/etc
- not liking the defaults
- not wanting color at all and struggling to turn it off
This comment felt relatable to me:
Getting my terminal theme configured in a reasonable way between the terminal emulator and fish (I did this years ago and remember it being tedious and fiddly and now feel like I'm locked into my current theme because it works and I dread touching any of that configuration ever again).
keyboard shortcuts (84)
Half of the comments on keyboard shortcuts were about how on Linux/Windows, the keyboard shortcut to copy/paste in the terminal is different from in the rest of the OS.
Some other issues with keyboard shortcuts other than copy/paste:
- using Ctrl-W in a browser-based terminal and closing the window
- the terminal only supporting a limited set of keyboard shortcuts (no Ctrl-Shift-, no Super, no Hyper, lots of Ctrl- shortcuts aren't possible like Ctrl-,)
- the OS stopping you from using a terminal keyboard shortcut (like by default Mac OS uses Ctrl+left arrow for something else)
- issues using emacs in the terminal
- backspace not working (2)
other copy and paste issues (75)
Aside from "the keyboard shortcut for copy and paste is different", there were a lot of OTHER issues with copy and paste, like:
- copying over SSH
- how tmux and the terminal emulator both do copy/paste in different ways
- dealing with many different clipboards (system clipboard, vim clipboard, the "middle click" clipboard on Linux, tmux's clipboard, etc) and potentially synchronizing them
- random spaces added when copying from the terminal
- pasting multiline commands which automatically get run in a terrifying way
- wanting a way to copy text without using the mouse
discoverability (55)
There were lots of comments about this, which all came down to the same basic complaint: it's hard to discover useful tools or features! This comment kind of summed it all up:
How difficult it is to learn independently. Most of what I know is an assorted collection of stuff I've been told by random people over the years.
steep learning curve (44)
A lot of comments about it generally having a steep learning curve. A couple of example comments:
After 15 years of using it, I'm not much faster using it than I was 5 or maybe even 10 years ago.
and
That I know I could make my life easier by learning more about the shortcuts and commands and configuring the terminal but I don't spend the time because it feels overwhelming.
history (42)
Some issues with shell history:
- history not being shared between terminal tabs (16)
- limits that are too short (4)
- history not being restored when terminal tabs are restored
- losing history because the terminal crashed
- not knowing how to search history
One example comment:
It wasted a lot of time until I figured it out and still annoys me that "history" on zsh has such a small buffer; I have to type "history 0" to get any useful length of history.
bad documentation (37)
People talked about:
- documentation being generally opaque
- lack of examples in man pages
- programs which don't have man pages
Hereās a representative comment:
Finding good examples and docs. Man pages often not enough, have to wade through stack overflow
scrollback (36)
A few issues with scrollback:
- programs printing out too much data making you lose scrollback history
- resizing the terminal messes up the scrollback
- lack of timestamps
- GUI programs that you start in the background printing stuff out that gets in the way of other programs' outputs
One example comment:
When resizing the terminal (in particular: making it narrower) leads to broken rewrapping of the scrollback content because the commands formatted their output based on the terminal window width.
"it feels outdated" (33)
Lots of comments about how the terminal feels hampered by legacy decisions and how users often end up needing to learn implementation details that feel very esoteric. One example comment:
Most of the legacy cruft, it would be great to have a green field implementation of the CLI interface.
shell scripting (32)
Lots of complaints about POSIX shell scripting. There's a general feeling that shell scripting is difficult but also that switching to a different, less standard scripting language (fish, nushell, etc) brings its own problems.
Shell scripting. My tolerance to ditch a shell script and go to a scripting language is pretty low. It's just too messy and powerful. Screwing up can be costly so I don't even bother.
more issues
Some more issues that were mentioned at least 10 times:
- (31) inconsistent command line arguments: is it -h or help or --help?
- (24) keeping dotfiles in sync across different systems
- (23) performance (e.g. āmy shell takes too long to startā)
- (20) window management (potentially with some combination of tmux tabs, terminal tabs, and multiple terminal windows. Where did that shell session go?)
- (17) generally feeling scared/uneasy ("The debilitating fear that I'm going to do some mysterious Bad Thing with a command and I will have absolutely no idea how to fix or undo it or even really figure out what happened")
- (16) terminfo issues ("Having to learn about terminfo if/when I try a new terminal emulator and ssh elsewhere.")
- (16) lack of image support (sixel etc)
- (15) SSH issues (like having to start over when you lose the SSH connection)
- (15) various tmux/screen issues (for example lack of integration between tmux and the terminal emulator)
- (15) typos & slow typing
- (13) the terminal getting messed up for various reasons (pressing Ctrl-S, cat-ing a binary, etc)
- (12) quoting/escaping in the shell
- (11) various Windows/PowerShell issues
n/a (122)
There were also 122 answers to the effect of "nothing really" or "only that I can't do EVERYTHING in the terminal".
One example comment:
Think I've found work arounds for most/all frustrations
that's all!
I'm not going to make a lot of commentary on these results, but here are a couple of categories that feel related to me:
- remembering syntax & history (often the thing you need to remember is something you've run before!)
- discoverability & the learning curve (the lack of discoverability is definitely a big part of what makes it hard to learn)
- "switching systems is hard" & "it feels outdated" (tools that haven't really changed in 30 or 40 years have many problems but they do tend to be always there no matter what system you're on, which is very useful and makes them hard to stop using)
Trying to categorize all these results in a reasonable way really gave me an appreciation for social science researchers' skills.
Hello! Recently I ran a terminal survey and I asked people what frustrated them. One person commented:
There are so many pieces to having a modern terminal experience. I wish it all came out of the box.
My immediate reaction was "oh, getting a modern terminal experience isn't that hard, you just need to...", but the more I thought about it, the longer the "you just need to..." list got, and I kept thinking about more and more caveats.
So I thought I would write down some notes about what it means to me personally to have a "modern" terminal experience and what I think can make it hard for people to get there.
what is a "modern terminal experience"?
Here are a few things that are important to me, with which part of the system is responsible for them:
- multiline support for copy and paste: if you paste 3 commands in your shell, it should not immediately run them all! That's scary! (shell, terminal emulator)
- infinite shell history: if I run a command in my shell, it should be saved forever, not deleted after 500 history entries or whatever. Also I want commands to be saved to the history immediately when I run them, not only when I exit the shell session (shell)
- a useful prompt: I can't live without having my current directory and current git branch in my prompt (shell)
- 24-bit colour: this is important to me because I find it MUCH easier to theme neovim with 24-bit colour support than in a terminal with only 256 colours (terminal emulator)
- clipboard integration between vim and my operating system so that when I copy in Firefox, I can just press p in vim to paste (text editor, maybe the OS/terminal emulator too)
- good autocomplete: for example commands like git should have command-specific autocomplete (shell)
- having colours in ls (shell config)
- a terminal theme I like: I spend a lot of time in my terminal, I want it to look nice and I want its theme to match my terminal editor's theme. (terminal emulator, text editor)
- automatic terminal fixing: if a program prints out some weird escape codes that mess up my terminal, I want that to automatically get reset so that my terminal doesn't get messed up (shell)
- keybindings: I want Ctrl+left arrow to work (shell or application)
- being able to use the scroll wheel in programs like less (terminal emulator and applications)
There are a million other terminal conveniences out there and different people value different things, but those are the ones that I would be really unhappy without.
how I achieve a "modern experience"
My basic approach is:
- use the fish shell. Mostly don't configure it, except to:
  - set the EDITOR environment variable to my favourite terminal editor
  - alias ls to ls --color=auto
- use any terminal emulator with 24-bit colour support. In the past I've used GNOME Terminal, Terminator, and iTerm, but I'm not picky about this. I don't really configure it other than to choose a font.
- use neovim, with a configuration that I've been very slowly building over the last 9 years or so (the last time I deleted my vim config and started from scratch was 9 years ago)
- use the base16 framework to theme everything
A few things that affect my approach:
- I don't spend a lot of time SSHed into other machines
- I'd rather use the mouse a little than come up with keyboard-based ways to do everything
- I work on a lot of small projects, not one big project
some "out of the box" options for a "modern" experience
What if you want a nice experience, but don't want to spend a lot of time on configuration? Figuring out how to configure vim in a way that I was satisfied with really did take me like ten years, which is a long time!
My best ideas for how to get a reasonable terminal experience with minimal config are:
- shell: either fish or zsh with oh-my-zsh
- terminal emulator: almost anything with 24-bit colour support, for example all of these are popular:
  - linux: GNOME Terminal, Konsole, Terminator, xfce4-terminal
  - mac: iTerm (Terminal.app doesn't have 24-bit colour support)
  - cross-platform: kitty, alacritty, wezterm, or ghostty
- shell config:
  - set the EDITOR environment variable to your favourite terminal text editor
  - maybe alias ls to ls --color=auto
- text editor: this is a tough one, maybe micro or helix? I haven't used either of them seriously but they both seem like very cool projects and I think it's amazing that you can just use all the usual GUI editor commands (Ctrl-C to copy, Ctrl-V to paste, Ctrl-A to select all) in micro and they do what you'd expect. I would probably try switching to helix except that retraining my vim muscle memory seems way too hard. Also helix doesn't have a GUI or plugin system yet.
Personally I wouldn't use xterm, rxvt, or Terminal.app as a terminal emulator, because I've found in the past that they're missing core features (like 24-bit colour in Terminal.app's case) that make the terminal harder to use for me.
I don't want to pretend that getting a "modern" terminal experience is easier than it is though; I think there are two issues that make it hard. Let's talk about them!
issue 1 with getting to a "modern" experience: the shell
bash and zsh are by far the two most popular shells, and neither of them provides a default experience that I would be happy using out of the box, for example:
- you need to customize your prompt
- they don't come with git completions by default, you have to set them up
- by default, bash only stores 500 (!) lines of history and (at least on Mac OS) zsh is only configured to store 2000 lines, which is still not a lot
- I find bash's tab completion very frustrating, if there's more than one match then you can't tab through them
And even though I love fish, the fact that it isn't POSIX does make it hard for a lot of folks to make the switch.
Of course it's totally possible to learn how to customize your prompt in bash
or whatever, and it doesn't even need to be that complicated (in bash I'd
probably start with something like export PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ ', or maybe use starship).
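To make that concrete, here's a minimal .bashrc sketch covering the complaints above (these are my guesses at sensible values; __git_ps1 comes from git's git-prompt.sh, which lives in different places on different systems):
# ~/.bashrc (a sketch)
# much bigger history than the default 500 lines
HISTSIZE=100000
HISTFILESIZE=100000
# load git's prompt helper; this path varies by system, so it's a guess
source /usr/share/git/completion/git-prompt.sh
# prompt with user, host, current directory, and git branch
export PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ '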
But each of these "not complicated" things really does add up and it's
especially tough if you need to keep your config in sync across several
systems.
An extremely popular solution to getting a "modern" shell experience is oh-my-zsh. It seems like a great project and I know a lot of people use it very happily, but I've struggled with configuration systems like that in the past: it looks like right now the base oh-my-zsh adds about 3000 lines of config, and often I find that having an extra configuration system makes it harder to debug what's happening when things go wrong. I personally have a tendency to use the system to add a lot of extra plugins, make my system slow, get frustrated that it's slow, and then delete it completely and write a new config from scratch.
issue 2 with getting to a "modern" experience: the text editor
In the terminal survey I ran recently, the most popular terminal text editors
by far were vim, emacs, and nano.
I think the main options for terminal text editors are:
- use vim or emacs and configure it to your liking, you can probably have any feature you want if you put in the work
- use nano and accept that you're going to have a pretty limited experience (for example I don't think you can select text with the mouse and then "cut" it in nano)
- use micro or helix, which seem to offer a pretty good out-of-the-box experience, and potentially occasionally run into issues with using a less mainstream text editor
- just avoid using a terminal text editor as much as possible, maybe use VSCode, use VSCode's terminal for all your terminal needs, and mostly never edit files in the terminal. Or I know a lot of people use code as their EDITOR in the terminal.
issue 3: individual applications
The last issue is that sometimes individual programs that I use are kind of
annoying. For example on my Mac OS machine, /usr/bin/sqlite3 doesn't support
the Ctrl+Left Arrow keyboard shortcut. Fixing this to get a reasonable
terminal experience in SQLite was a little complicated, I had to:
- realize why this is happening (Mac OS won't ship GNU tools, and "Ctrl-Left arrow" support comes from GNU readline)
- find a workaround (install sqlite from homebrew, which does have readline support)
- adjust my environment (put Homebrew's sqlite3 in my PATH)
I find that debugging application-specific issues like this is really not easy and often it doesn't feel "worth it": often I'll end up just dealing with various minor inconveniences because I don't want to spend hours investigating them. The only reason I was even able to figure this one out at all is that I've been spending a huge amount of time thinking about the terminal recently.
A big part of having a "modern" experience using terminal programs is just
using newer terminal programs. For example, I can't be bothered to learn a
keyboard shortcut to sort the columns in top, but in htop I can just click
on a column heading with my mouse to sort it. So I use htop instead! But discovering new, more "modern" command line tools isn't easy (though
I made a list here),
finding ones that I actually like using in practice takes time, and if you're
SSHed into another machine, they won't always be there.
everything affects everything else
Something I find tricky about configuring my terminal to make everything "nice" is that changing one seemingly small thing about my workflow can really affect everything else. For example right now I don't use tmux. But if I needed to use tmux again (for example because I was doing a lot of work SSHed into another machine), I'd need to think about a few things, like:
- if I wanted tmux's copy to synchronize with my system clipboard over SSH, I'd need to make sure that my terminal emulator has OSC 52 support
- if I wanted to use iTerm's tmux integration (which makes tmux tabs into iTerm tabs), I'd need to change how I configure colours: right now I set them with a shell script that I run when my shell starts, but that means the colours get lost when restoring a tmux session.
and probably more things I haven't thought of. "Using tmux means that I have to change how I manage my colours" sounds unlikely, but that really did happen to me and I decided "well, I don't want to change how I manage colours right now, so I guess I'm not using that feature!".
It's also hard to remember which features I'm relying on. For example, maybe my current terminal does have OSC 52 support, and because copying from tmux over SSH has always Just Worked I don't even realize that that's something I need, and then it mysteriously stops working when I switch terminals.
change things slowly
Personally even though I think my setup is not that complicated, it's taken me 20 years to get to this point! Because terminal config changes are so likely to have unexpected and hard-to-understand consequences, I've found that if I change a lot of terminal configuration all at once it makes it much harder to understand what went wrong if there's a problem, which can be really disorienting.
So I usually prefer to make pretty small changes, and accept that changes
might take me a REALLY long time to get used to. For example I switched from
using ls to eza a year or two ago and
while I like it (because eza -l prints human-readable file sizes by default)
I'm still not quite sure about it. But also sometimes it's worth it to make a
big change, like I made the switch to fish (from bash) 10 years ago and I'm
very happy I did.
getting a "modern" terminal is not that easy
Trying to explain how "easy" it is to configure your terminal really just made me think that it's kind of hard and that I still sometimes get confused.
I've found that there's never one perfect way to configure things in the terminal that will be compatible with every single other thing. I just need to try stuff, figure out some kind of locally stable state that works for me, and accept that if I start using a new tool it might disrupt the system and I might need to rethink things.
Recently I've been thinking about how everything that happens in the terminal is some combination of:
- Your operating system's job
- Your shell's job
- Your terminal emulator's job
- The job of whatever program you happen to be running (like top or vim or cat)
The first three (your operating system, shell, and terminal emulator) are all kind of known quantities: if you're using bash in GNOME Terminal on Linux, you can more or less reason about how all of those things interact, and some of their behaviour is standardized by POSIX.
But the fourth one ("whatever program you happen to be running") feels like it could do ANYTHING. How are you supposed to know how a program is going to behave?
This post is kind of long so here's a quick table of contents:
- programs behave surprisingly consistently
- these are meant to be descriptive, not prescriptive
- it's not always obvious which "rules" are the program's responsibility to implement
- rule 1: noninteractive programs should quit when you press Ctrl-C
- rule 2: TUIs should quit when you press q
- rule 3: REPLs should quit when you press Ctrl-D on an empty line
- rule 4: don't use more than 16 colours
- rule 5: vaguely support readline keybindings
- rule 5.1: Ctrl-W should delete the last word
- rule 6: disable colours when writing to a pipe
- rule 7: - means stdin/stdout
- these "rules" take a long time to learn
programs behave surprisingly consistently
As far as I know, there are no real standards for how programs in the terminal should behave. The closest things I know of are:
- POSIX, which mostly dictates how your terminal emulator / OS / shell should work together. I think it does specify a few things about how core utilities like cp should work, but AFAIK it doesn't have anything to say about how for example htop should behave.
- these command line interface guidelines
But even though there are no standards, in my experience programs in the terminal behave in a pretty consistent way. So I wanted to write down a list of "rules" that in my experience programs mostly follow.
these are meant to be descriptive, not prescriptive
My goal here isn't to convince authors of terminal programs that they should follow any of these rules. There are lots of exceptions to these and often there's a good reason for those exceptions.
But it's very useful for me to know what behaviour to expect from a random new terminal program that I'm using. Instead of "uh, programs could do literally anything", it's "ok, here are the basic rules I expect, and then I can keep a short mental list of exceptions".
So I'm just writing down what I've observed about how programs behave in my 20 years of using the terminal, why I think they behave that way, and some examples of cases where that rule is "broken".
it's not always obvious which "rules" are the program's responsibility to implement
There are a bunch of common conventions that I think are pretty clearly the program's responsibility to implement, like:
- config files should go in ~/.BLAHrc or ~/.config/BLAH/FILE or /etc/BLAH/ or something
- --help should print help text
- programs should print "regular" output to stdout and errors to stderr
But in this post I'm going to focus on things that it's not 100% obvious are
the program's responsibility. For example it feels to me like a "law of nature"
that pressing Ctrl-D should quit a REPL, but programs often
need to explicitly implement support for it: even though cat doesn't need
to implement Ctrl-D support, ipython does. (more about that in "rule 3" below)
Understanding which things are the program's responsibility makes it much less surprising when different programs' implementations are slightly different.
rule 1: noninteractive programs should quit when you press Ctrl-C
The main reason for this rule is that noninteractive programs will quit by
default on Ctrl-C if they don't set up a SIGINT signal handler, so this is
kind of a "you should act like the default" rule.
Something that trips a lot of people up is that this doesn't apply to
interactive programs like python3 or bc or less. This is because in
an interactive program, Ctrl-C has a different job: if the program is
running an operation (like for example a search in less or some Python code
in python3), then Ctrl-C will interrupt that operation but not stop the
program.
As an example of how this works in an interactive program: here's the code in prompt-toolkit (the library that iPython uses for handling input)
that aborts a search when you press Ctrl-C.
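You can poke at the "quit by default" part from the shell. Here's a sketch to paste into an interactive bash session (nothing here is specific to any real program, it's just demonstrating the default SIGINT behaviour):
# default: this loop dies as soon as you press Ctrl-C
while true; do date; sleep 1; done

# with a SIGINT handler installed, Ctrl-C no longer quits;
# the handler runs and the loop keeps going
trap 'echo "interrupted, but still running"' INT
while true; do date; sleep 1; done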
rule 2: TUIs should quit when you press q
TUI programs (like less or htop) will usually quit when you press q.
This rule doesn't apply to any program where pressing q to quit wouldn't make
sense, like tmux or text editors.
rule 3: REPLs should quit when you press Ctrl-D on an empty line
REPLs (like python3 or ed) will usually quit when you press Ctrl-D on an
empty line. This rule is similar to the Ctrl-C rule: the reason for this is
that by default if you're running a program (like cat) in "cooked mode", then
the operating system will return an EOF when you press Ctrl-D on an empty
line.
Most of the REPLs I use (sqlite3, python3, fish, bash, etc) don't actually use cooked mode, but they all implement this keyboard shortcut anyway to mimic the default behaviour.
For example, here's the code in prompt-toolkit that quits when you press Ctrl-D, and here's the same code in readline.
I actually thought that this one was a "Law of Terminal Physics" until very recently because I've basically never seen it broken, but you can see that it's just something that each individual input library has to implement in the links above.
Someone pointed out that the Erlang REPL does not quit when you press Ctrl-D,
so I guess not every REPL follows this "rule".
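Here's a tiny shell sketch of the cooked-mode default (assuming bash): this one-line "REPL" quits on Ctrl-D even though nothing in it mentions Ctrl-D at all, because the OS turns Ctrl-D on an empty line into EOF and read fails:
# read gets EOF when you press Ctrl-D on an empty line, so the loop ends
while read -r -p '> ' line; do echo "you said: $line"; done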
rule 4: don't use more than 16 colours
Terminal programs rarely use colours other than the base 16 ANSI colours. This
is because if you specify colours with a hex code, it's very likely to clash
with some users' background colour. For example if I print out some text as
#EEEEEE, it would be almost invisible on a white background, though it would
look fine on a dark background.
But if you stick to the default 16 base colours, you have a much better chance that the user has configured those colours in their terminal emulator so that they work reasonably well with their background colour. Another reason to stick to the default base 16 colours is that it makes fewer assumptions about what colours the terminal emulator supports.
The only programs I usually see breaking this "rule" are text editors, for example Helix by default will use a purple background which is not a default ANSI colour. It seems fine for Helix to break this rule since Helix isn't a "core" program and I assume any Helix user who doesn't like that colorscheme will just change the theme.
rule 5: vaguely support readline keybindings
Almost every program I use supports readline keybindings if it would make
sense to do so. For example, here are a bunch of different programs and a link
to where they define Ctrl-E to go to the end of the line:
- ipython (Ctrl-E defined here)
- atuin (Ctrl-E defined here)
- fzf (Ctrl-E defined here)
- zsh (Ctrl-E defined here)
- fish (Ctrl-E defined here)
- tmuxās command prompt (Ctrl-E defined here)
None of those programs actually uses readline directly, they just sort of
mimic emacs/readline keybindings. They don't always mimic them exactly: for
example atuin seems to use Ctrl-A as a prefix, so Ctrl-A doesn't go to the
beginning of the line.
Also all of these programs seem to implement their own internal cut and paste
buffers so you can delete a line with Ctrl-U and then paste it with Ctrl-Y.
The exceptions to this are:
- some programs (like git, cat, and nc) don't have any line editing support at all (except for backspace, Ctrl-W, and Ctrl-U)
- as usual text editors are an exception, every text editor has its own approach to editing text
I wrote more about this "what keybindings does a program support?" question in entering text in the terminal is complicated.
rule 5.1: Ctrl-W should delete the last word
I've never seen a program (other than a text editor) where Ctrl-W doesn't
delete the last word. This is similar to the Ctrl-C rule: by default if a
program is in "cooked mode", the OS will delete the last word if you press
Ctrl-W, and delete the whole line if you press Ctrl-U. So usually programs
will imitate that behaviour.
I can't think of any exceptions to this other than text editors but if there are I'd love to hear about them!
rule 6: disable colours when writing to a pipe
Most programs will disable colours when writing to a pipe. For example:
- rg blah will highlight all occurrences of blah in the output, but if the output is to a pipe or a file, it'll turn off the highlighting.
- ls --color=auto will use colour when writing to a terminal, but not when writing to a pipe
Both of those programs will also format their output differently when writing
to the terminal: ls will organize files into columns, and ripgrep will group
matches with headings.
If you want to force the program to use colour (for example because you want to
look at the colour), you can use unbuffer to force the program's output to be
a tty like this:
unbuffer rg blah | less -R
I'm sure that there are some programs that "break" this rule but I can't think
of any examples right now. Some programs have an --color flag that you can
use to force colour to be on, in the example above you could also do rg --color=always | less -R.
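The way programs detect this is by checking whether stdout is a terminal, and you can do the same check in shell with the -t test. A quick sketch:
# [ -t 1 ] is true if file descriptor 1 (stdout) is a terminal
if [ -t 1 ]; then echo "stdout is a terminal, colour is safe"; else echo "stdout is a pipe/file, disable colour"; fi

# run the same test through a pipe and it takes the other branch:
{ if [ -t 1 ]; then echo "terminal"; else echo "pipe"; fi; } | cat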
rule 7: - means stdin/stdout
Usually if you pass - to a program instead of a filename, it'll read from
stdin or write to stdout (whichever is appropriate). For example, if you want
to format the Python code that's on your clipboard with black and then copy
it, you could run:
pbpaste | black - | pbcopy
(pbpaste is a Mac program, you can do something similar on Linux with xclip)
My impression is that most programs implement this if it would make sense, and I can't think of any exceptions right now, but I'm sure there are many exceptions.
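A couple more examples of the same convention (these are standard uses of - in tar and diff, though the filenames are made up):
# tar: -f - means "write the archive to stdout" so you can pipe it
tar cf - somedir | gzip > somedir.tar.gz

# diff: - means "read one side of the diff from stdin"
echo hello | diff - some-file.txt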
these "rules" take a long time to learn
These rules took me a long time to learn because I had to:
- learn that the rule applied anywhere at all ("Ctrl-C will exit programs")
- notice some exceptions ("okay, Ctrl-C will exit find but not less")
- subconsciously figure out what the pattern is ("Ctrl-C will generally quit noninteractive programs, but in interactive programs it might interrupt the current operation instead of quitting the program")
- eventually maybe formulate it into an explicit rule that I know
A lot of my understanding of the terminal is honestly still in the "subconscious pattern recognition" stage. The only reason I've been taking the time to make things explicit at all is because I've been trying to explain how it works to others. Hopefully writing down these "rules" explicitly will make learning some of this stuff a little bit faster for others.
Here's a niche terminal problem that has bothered me for years but that I never really understood until a few weeks ago. Let's say you're running this command to watch for some specific output in a log file:
tail -f /some/log/file | grep thing1 | grep thing2
If log lines are being added to the file relatively slowly, the result I'd see is... nothing! It doesn't matter if there were matches in the log file or not, there just wouldn't be any output.
I internalized this as "uh, I guess pipes just get stuck sometimes and don't
show me the output, that's weird", and I'd handle it by just
running grep thing1 /some/log/file | grep thing2 instead, which would work.
So as I've been doing a terminal deep dive over the last few months I was really excited to finally learn exactly why this happens.
why this happens: buffering
The reason why "pipes get stuck" sometimes is that it's VERY common for programs to buffer their output before writing it to a pipe or file. So the pipe is working fine, the problem is that the program never even wrote the data to the pipe!
This is for performance reasons: writing all output immediately as soon as you can uses more system calls, so it's more efficient to save up data until you have 8KB or so of data to write (or until the program exits) and THEN write it to the pipe.
In this example:
tail -f /some/log/file | grep thing1 | grep thing2
the problem is that grep thing1 is saving up all of its matches until it has
8KB of data to write, which might literally never happen.
programs don't buffer when writing to a terminal
Part of why I found this so disorienting is that tail -f file | grep thing
will work totally fine, but then when you add the second grep, it stops
working!! The reason for this is that the way grep handles buffering depends
on whether it's writing to a terminal or not.
Here's how grep (and many other programs) decides to buffer its output:
- Check if stdout is a terminal or not using the isatty function
  - If it's a terminal, use line buffering (print every line immediately as soon as you have it)
  - Otherwise, use "block buffering": only print data if you have at least 8KB or so of data to print
So if grep is writing directly to your terminal then you'll see the line as
soon as it's printed, but if it's writing to a pipe, you won't.
Of course the buffer size isn't always 8KB for every program, it depends on the implementation. For grep the buffering is handled by libc, and libc's buffer size is
defined in the BUFSIZ variable. Here's where that's defined in glibc.
(as an aside: "programs do not use 8KB output buffers when writing to a terminal" isn't, like, a law of terminal physics, a program COULD use an 8KB buffer when writing output to a terminal if it wanted, it would just be extremely weird if it did that, I can't think of any program that behaves that way)
commands that buffer & commands that donāt
One annoying thing about this buffering behaviour is that you kind of need to remember which commands buffer their output when writing to a pipe.
Some commands that don't buffer their output:
- tail
- cat
- tee
I think almost everything else will buffer output, especially if it's a command where you're likely to be using it for batch processing. Here's a list of some common commands that buffer their output when writing to a pipe, along with the flag that disables block buffering:
- grep (--line-buffered)
- sed (-u)
- awk (there's a fflush() function)
- tcpdump (-l)
- jq (-u)
- tr (-u)
- cut (can't disable buffering)
Those are all the ones I can think of, lots of unix commands (like sort) may
or may not buffer their output but it doesn't matter because sort can't do
anything until it finishes receiving input anyway.
Also I did my best to test both the Mac OS and GNU versions of these but there are a lot of variations and I might have made some mistakes.
programming languages where the default "print" statement buffers
Also, here are a few programming languages where the default print statement will buffer output when writing to a pipe, and some ways to disable buffering if you want:
- C (disable with setvbuf)
- Python (disable with python -u, or PYTHONUNBUFFERED=1, or sys.stdout.reconfigure(line_buffering=True), or print(x, flush=True))
- Ruby (disable with STDOUT.sync = true)
- Perl (disable with $| = 1)
I assume that these languages are designed this way so that the default print function will be fast when you're doing batch processing.
Also whether output is buffered or not might depend on how you print, for
example in C++ cout << "hello\n" buffers when writing to a pipe but cout << "hello" << endl will flush its output.
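You can watch this happen from the shell. Here's a little experiment with Python's print (a sketch; the one-liner just prints a line every second):
# writing to a pipe: nothing shows up until the program exits,
# because the output sits in Python's block buffer
python3 -c 'import time
for i in range(5):
    print(i)
    time.sleep(1)' | cat

# same thing with -u: each line appears immediately
python3 -u -c 'import time
for i in range(5):
    print(i)
    time.sleep(1)' | cat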
when you press Ctrl-C on a pipe, the contents of the buffer are lost
Let's say you're running this command as a hacky way to watch for DNS requests
to example.com, and you forgot to pass -l to tcpdump:
sudo tcpdump -ni any port 53 | grep example.com
When you press Ctrl-C, what happens? In a magical perfect world, what I would
want to happen is for tcpdump to flush its buffer, grep would search for
example.com, and I would see all the output I missed.
But in the real world, what happens is that all the programs get killed and the
output in tcpdump's buffer is lost.
I think this problem is probably unavoidable: I spent a little time with
strace to see how this works and grep receives the SIGINT before
tcpdump anyway, so even if tcpdump tried to flush its buffer grep would
already be dead.
After a little more investigation, there is a workaround: if you find
tcpdump's PID and kill -TERM $PID, then tcpdump will flush the buffer so
you can see the output. That's kind of a pain but I tested it and it seems to
work.
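In practice that looks something like this, run from another terminal (a sketch; pkill and pgrep send/select by process name, and SIGTERM is their default signal):
# terminate only tcpdump so it flushes its buffer;
# grep stays alive and processes the flushed output
sudo pkill tcpdump
# or, by PID (-n picks the newest matching process):
sudo kill -TERM "$(pgrep -n tcpdump)"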
redirecting to a file also buffers
It's not just pipes, this will also buffer:
sudo tcpdump -ni any port 53 > output.txt
Redirecting to a file doesn't have the same "Ctrl-C will totally destroy the
contents of the buffer" problem though: in my experience it usually behaves
more like you'd want, where the contents of the buffer get written to the file
before the program exits. I'm not 100% sure whether this is something you can
always rely on or not.
a bunch of potential ways to avoid buffering
Okay, let's talk solutions. Let's say you've run this command:
tail -f /some/log/file | grep thing1 | grep thing2
I asked people on Mastodon how they would solve this in practice and there were 5 basic approaches. Here they are:
solution 1: run a program that finishes quickly
Historically my solution to this has been to just avoid the "command writing to pipe slowly" situation completely and instead run a program that will finish quickly like this:
cat /some/log/file | grep thing1 | grep thing2 | tail
This doesn't do the same thing as the original command but it does mean that you get to avoid thinking about these weird buffering issues.
(you could also do grep thing1 /some/log/file | grep thing2 but I often prefer to use an
"unnecessary" cat)
solution 2: remember the "line buffer" flag to grep
You could remember that grep has a flag to avoid buffering and pass it like this:
tail -f /some/log/file | grep --line-buffered thing1 | grep thing2
solution 3: use awk
Some people said that if they're specifically dealing with a multiple greps
situation, they'll rewrite it to use a single awk instead, like this:
tail -f /some/log/file | awk '/thing1/ && /thing2/'
Or you would write a more complicated grep, like this:
tail -f /some/log/file | grep -E 'thing1.*thing2'
(awk also buffers, so for this to work you'll want awk to be the last command in the pipeline)
solution 4: use stdbuf
stdbuf uses LD_PRELOAD to turn off libc's buffering, and you can use it to turn off output buffering like this:
tail -f /some/log/file | stdbuf -o0 grep thing1 | grep thing2
Like any LD_PRELOAD solution it's a bit unreliable: it doesn't work on
static binaries, I think it won't work if the program isn't using libc's
buffering, and it doesn't always work on Mac OS. Harry Marr has a really nice How stdbuf works post.
solution 5: use unbuffer
unbuffer program will force the program's output to be a TTY, which means
that it'll behave the way it normally would on a TTY (less buffering, colour
output, etc). You could use it in this example like this:
tail -f /some/log/file | unbuffer grep thing1 | grep thing2
Unlike stdbuf it will always work, though it might have unwanted side
effects, for example grep thing1's matches will also be coloured.
If you want to install unbuffer, it's in the expect package.
that's all the solutions I know about!
It's a bit hard for me to say which one is "best", I think personally I'm
most likely to use unbuffer because I know it's always going to work.
If I learn about more solutions I'll try to add them to this post.
I'm not really sure how often this comes up
I think it's not very common for me to have a program that slowly trickles data into a pipe like this, normally if I'm using a pipe a bunch of data gets written very quickly, processed by everything in the pipeline, and then everything exits. The only examples I can come up with right now are:
- tcpdump
- tail -f
- watching log files in a different way like with kubectl logs
- the output of a slow computation
what if there were an environment variable to disable buffering?
I think it would be cool if there were a standard environment variable to turn
off buffering, like PYTHONUNBUFFERED in Python. I got this idea from a
couple of blog posts by Mark Dominus
in 2018. Maybe NO_BUFFER like NO_COLOR?
The design seems tricky to get right; Mark points out that NetBSD has environment variables called STDBUF, STDBUF1, etc which give you a
ton of control over buffering, but I imagine most developers don't want to
implement many different environment variables to handle a relatively minor
edge case.
I'm also curious about whether there are any programs that just automatically flush their output buffers after some period of time (like 1 second). It feels like it would be nice in theory but I can't think of any program that does that so I imagine there are some downsides.
stuff I left out
Some things I didn't talk about in this post, since these posts have been getting pretty long recently and seriously does anyone REALLY want to read 3000 words about buffering?
- the difference between line buffering and having totally unbuffered output
- how buffering to stderr is different from buffering to stdout
- this post is only about buffering that happens inside the program; your operating system's TTY driver also does a little bit of buffering sometimes
- other reasons you might need to flush your output other than "you're writing to a pipe"
I like writing Javascript without a build system and for the millionth time yesterday I ran into a problem where I needed to figure out how to import a Javascript library in my code without using a build system, and it took FOREVER to figure out how to import it because the library's setup instructions assume that you're using a build system.
Luckily at this point I've mostly learned how to navigate this situation and either successfully use the library or decide it's too difficult and switch to a different library, so here's the guide I wish I had to importing Javascript libraries years ago.
I'm only going to talk about using Javascript libraries on the frontend, and only about how to use them in a no-build-system setup.
In this post I'm going to talk about:
- the three main types of Javascript files a library might provide (ES Modules, the "classic" global variable kind, and CommonJS)
- how to figure out which types of files a Javascript library includes in its build
- ways to import each type of file in your code
the three kinds of Javascript files
There are 3 basic types of Javascript files a library can provide:
- the "classic" type of file that defines a global variable. This is the kind of file that you can just <script src> and it'll Just Work. Great if you can get it but not always available
- an ES module (which may or may not depend on other files, we'll get to that)
- a "CommonJS" module. This is for Node, you can't use it in a browser at all without using a build system.
I'm not sure if there's a better name for the "classic" type but I'm just going to call it "classic". Also there's a type called "AMD" but I'm not sure how relevant it is in 2024.
Now that we know the 3 types of files, letās talk about how to figure out which of these the library actually provides!
where to find the files: the NPM build
Every Javascript library has a build which it uploads to NPM. You might be thinking (like I did originally): Julia! The whole POINT is that we're not using Node to build our library! Why are we talking about NPM?
But if you're using a link from a CDN like https://cdnjs.cloudflare.com/ajax/libs/Chart.js/4.4.1/chart.umd.min.js, you're still using the NPM build! All the files on the CDNs originally come from NPM.
Because of this, I sometimes like to npm install the library even if I'm not
planning to use Node to build my library at all: I'll just create a new temp
folder, npm install there, and then delete it when I'm done. I like being able to poke
around in the files in the NPM build on my filesystem, because then I can be
100% sure that I'm seeing everything that the library is making available in
its build and that the CDN isn't hiding something from me.
So let's npm install a few libraries and try to figure out what types of
Javascript files they provide in their builds!
example library 1: chart.js
First let's look inside Chart.js, a plotting library.
$ cd /tmp/whatever
$ npm install chart.js
$ cd node_modules/chart.js/dist
$ ls *.*js
chart.cjs chart.js chart.umd.js helpers.cjs helpers.js
This library seems to have 3 basic options:
option 1: chart.cjs. The .cjs suffix tells me that this is a CommonJS
file, for using in Node. This means it's impossible to use it directly in the
browser without some kind of build step.
option 2: chart.js. The .js suffix by itself doesn't tell us what kind of
file it is, but if I open it up, I see import '@kurkle/color'; which is an
immediate sign that this is an ES module: the import ... syntax is ES
module syntax.
option 3: chart.umd.js. "UMD" stands for "Universal Module Definition",
which I think means that you can use this file either with a basic <script src>, CommonJS,
or some third thing called AMD that I don't understand.
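If you don't want to open every file by hand, you can get pretty far with grep. These are just crude heuristics (they assume the build files aren't minified onto one giant line):
# CommonJS syntax: require() / module.exports
grep -l -E 'require\(|module\.exports' dist/*.js
# ES module syntax: top-level import/export statements
grep -l -E '^(import|export) ' dist/*.js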
how to use a UMD file
When I was using Chart.js I picked Option 3. I just needed to add this to my code:
<script src="./chart.umd.js"> </script>
and then I could use the library with the global Chart variable.
Couldn't be easier. I just copied chart.umd.js into my Git repository so that
I didn't have to worry about using NPM or the CDNs going down or anything.
the build files arenāt always in the dist directory
A lot of libraries will put their build in the dist directory, but not
always! The build files' location is specified in the library's package.json.
For example here's an excerpt from Chart.js's package.json:
"jsdelivr": "./dist/chart.umd.js",
"unpkg": "./dist/chart.umd.js",
"main": "./dist/chart.cjs",
"module": "./dist/chart.js",
I think this is saying that if you want to use an ES Module (module) you
should use dist/chart.js, but the jsDelivr and unpkg CDNs should use
./dist/chart.umd.js. I guess main is for Node.
chart.js's package.json also says "type": "module", which according to this documentation
tells Node to treat files as ES modules by default. I think it doesn't tell us
specifically which files are ES modules and which ones aren't but it does tell
us that something in there is an ES module.
example library 2: @atcute/oauth-browser-client
@atcute/oauth-browser-client
is a library for logging into Bluesky with OAuth in the browser.
Let's see what kinds of Javascript files it provides in its build!
$ npm install @atcute/oauth-browser-client
$ cd node_modules/@atcute/oauth-browser-client/dist
$ ls *js
constants.js dpop.js environment.js errors.js index.js resolvers.js
It seems like the only plausible root file in here is index.js, which looks
something like this:
export { configureOAuth } from './environment.js';
export * from './errors.js';
export * from './resolvers.js';
This export syntax means it's an ES module. That means we can use it in
the browser without a build step! Let's see how to do that.
how to use an ES module with importmaps
Using an ES module isn't as easy as just adding a <script src="whatever.js">. Instead, if
the ES module has dependencies (like @atcute/oauth-browser-client does) the
steps are:
- Set up an import map in your HTML
- Put import statements like import { configureOAuth } from '@atcute/oauth-browser-client'; in your JS code
- Include your JS code in your HTML like this: <script type="module" src="YOURSCRIPT.js"></script>
The reason we need an import map instead of just doing something like import { BrowserOAuthClient } from "./oauth-client-browser.js" is that internally the module has more import statements like import {something} from @atcute/client, and we need to tell the browser where to get the code for @atcute/client and all of its other dependencies.
Here's what the importmap I used looks like for @atcute/oauth-browser-client:
<script type="importmap">
{
"imports": {
"nanoid": "./node_modules/nanoid/bin/dist/index.js",
"nanoid/non-secure": "./node_modules/nanoid/non-secure/index.js",
"nanoid/url-alphabet": "./node_modules/nanoid/url-alphabet/dist/index.js",
"@atcute/oauth-browser-client": "./node_modules/@atcute/oauth-browser-client/dist/index.js",
"@atcute/client": "./node_modules/@atcute/client/dist/index.js",
"@atcute/client/utils/did": "./node_modules/@atcute/client/dist/utils/did.js"
}
}
</script>
Getting these import maps to work is pretty fiddly, I feel like there must be a tool to generate them automatically but I haven't found one yet. It's definitely possible to write a script that automatically generates the importmaps using esbuild's metafile but I haven't done that and maybe there's a better way.
I decided to set up importmaps yesterday to get github.com/jvns/bsky-oauth-example to work, so there's some example code in that repo.
Also someone pointed me to Simon Willison's download-esm, which will download an ES module and rewrite the imports to point to the JS files directly so that you don't need importmaps. I haven't tried it yet but it seems like a great idea.
problems with importmaps: too many files
I did run into some problems with using importmaps in the browser though: it needed to download dozens of Javascript files to load my site, and my webserver in development couldn't keep up for some reason. I kept seeing files fail to load randomly and then had to reload the page and hope that they would succeed this time.
It wasn't an issue anymore when I deployed my site to production, so I guess it was a problem with my local dev environment.
Also one slightly annoying thing about ES modules in general is that you need to
be running a webserver to use them. I'm sure this is for a good reason but it's
easier when you can just open your index.html file without starting a
webserver.
Because of the "too many files" thing I think actually using ES modules with importmaps in this way isn't that appealing to me, but it's good to know it's possible.
how to use an ES module without importmaps
If the ES module doesn't have dependencies then it's even easier: you don't need the importmaps! You can just:
- put <script type="module" src="YOURCODE.js"></script> in your HTML. The type="module" is important.
- put import {whatever} from "https://example.com/whatever.js" in YOURCODE.js
alternative: use esbuild
If you don't want to use importmaps, you can also use a build system like esbuild. I talked about how to do that in Some notes on using esbuild, but this blog post is about ways to avoid build systems completely so I'm not going to talk about that option here. I do still like esbuild though and I think it's a good option in this case.
what's the browser support for importmaps?
CanIUse says that importmaps are in
"Baseline 2023: newly available across major browsers" so my sense is that in
2024 that's still maybe a little bit too new? I think I would use importmaps
for some fun experimental code that I only wanted like myself and 12 people to
use, but if I wanted my code to be more widely usable I'd use esbuild instead.
example library 3: @atproto/oauth-client-browser
Let's look at one final example library! This is a different Bluesky auth
library than @atcute/oauth-browser-client.
$ npm install @atproto/oauth-client-browser
$ cd node_modules/@atproto/oauth-client-browser/dist
$ ls *js
browser-oauth-client.js browser-oauth-database.js browser-runtime-implementation.js errors.js index.js indexed-db-store.js util.js
Again, it seems like the only real candidate file here is index.js. But this is a
different situation from the previous example library! Let's take a look at
index.js:
There's a bunch of stuff like this in index.js:
__exportStar(require("@atproto/oauth-client"), exports);
__exportStar(require("./browser-oauth-client.js"), exports);
__exportStar(require("./errors.js"), exports);
var util_js_1 = require("./util.js");
This require() syntax is CommonJS syntax, which means that we can't use this
file in the browser at all, we need to use some kind of build step, and
ESBuild won't work either.
Also in this library's package.json it says "type": "commonjs" which is
another way to tell it's CommonJS.
how to use a CommonJS module with esm.sh
Originally I thought it was impossible to use CommonJS modules without learning a build system, but then someone on Bluesky told me about esm.sh! It's a CDN that will translate anything into an ES Module. skypack.dev does something similar, I'm not sure what the difference is but one person mentioned that if one doesn't work sometimes they'll try the other one.
For @atproto/oauth-client-browser using it seems pretty simple, I just need to put this in my HTML:
<script type="module" src="script.js"> </script>
and then put this in script.js.
import { BrowserOAuthClient } from "https://esm.sh/@atproto/oauth-client-browser@0.3.0"
It seems to Just Work, which is cool! Of course this is still sort of using a build system: it's just that esm.sh is running the build instead of me. My main concerns with this approach are:
- I don't really trust CDNs to keep working forever; usually I like to copy dependencies into my repository so that they don't go away for some reason in the future.
- I've heard of some issues with CDNs having security compromises which scares me.
- I don't really understand what esm.sh is doing.
esbuild can also convert CommonJS modules into ES modules
I also learned that you can use esbuild to convert a CommonJS module
into an ES module, though there are some limitations: the import { BrowserOAuthClient } from syntax doesn't work. Here's a github issue about that.
I think the esbuild approach is probably more appealing to me than the
esm.sh approach because it's a tool that I already have on my computer so I
trust it more. I haven't experimented with this much yet though.
summary of the three types of files
Here's a summary of the three types of JS files you might encounter, options for how to use them, and how to identify them.
Unhelpfully a .js or .min.js file extension could be any of these 3
options, so if the file is something.js you need to do more detective work to
figure out what you're dealing with.
- "classic" JS files
  - How to use it: <script src="whatever.js"></script>
  - Ways to identify it:
    - The website has a big friendly banner in its setup instructions saying "Use this with a CDN!" or something
    - A .umd.js extension
    - Just try to put it in a <script src=... tag and see if it works
- ES Modules
  - Ways to use it:
    - If there are no dependencies, just import {whatever} from "./my-module.js" directly in your code
    - If there are dependencies, create an importmap and import {whatever} from "my-module" (or use download-esm to remove the need for an importmap)
    - Use esbuild or any ES Module bundler
  - Ways to identify it:
    - Look for an import or export statement. (not module.exports = ..., that's CommonJS)
    - An .mjs extension
    - maybe "type": "module" in package.json (though it's not clear to me which file exactly this refers to)
- CommonJS Modules
  - Ways to use it:
    - Use https://esm.sh to convert it into an ES module, like https://esm.sh/@atproto/oauth-client-browser@0.3.0
    - Use a build somehow (??)
  - Ways to identify it:
    - Look for require() or module.exports = ... in the code
    - A .cjs extension
    - maybe "type": "commonjs" in package.json (though it's not clear to me which file exactly this refers to)
it's really nice to have ES modules standardized
The main difference between CommonJS modules and ES modules from my perspective is that ES modules are actually a standard. This makes me feel a lot more confident using them, because browsers commit to backwards compatibility for web standards forever: if I write some code using ES modules today, I can feel sure that it'll still work the same way in 15 years.
It also makes me feel better about using tooling like esbuild because even if
the esbuild project dies, because it's implementing a standard it feels likely
that there will be another similar tool in the future that I can replace it
with.
the JS community has built a lot of very cool tools
A lot of the time when I talk about this stuff I get responses like "I hate javascript!!! it's the worst!!!". But my experience is that there are a lot of great tools for Javascript (I just learned about https://esm.sh yesterday which seems great! I love esbuild!), and that if I take the time to learn how things work I can take advantage of some of those tools and make my life a lot easier.
So the goal of this post is definitely not to complain about Javascript, it's to understand the landscape so I can use the tooling in a way that feels good to me.
questions I still have
Here are some questions I still have; I'll add the answers to the post if I learn them.
- Is there a tool that automatically generates importmaps for an ES Module that I have set up locally? (apparently yes: jspm)
- How can I convert a CommonJS module into an ES module on my computer, the way https://esm.sh does? (apparently esbuild can sort of do this, though named exports don't work)
- When people normally build CommonJS modules into regular JS code, what code is doing that? Obviously there are tools like webpack, rollup, esbuild, etc, but do those tools all implement their own JS parsers/static analysis? How many JS parsers are there out there?
- Is there any way to bundle an ES module into a single file (like atcute-client.js), but so that in the browser I can still import multiple different paths from that file (like both @atcute/client/lexicons and @atcute/client)?
all the tools
Here's a list of every tool we talked about in this post:
- Simon Willison's download-esm, which will download an ES module and convert the imports to point at JS files so you don't need an importmap
- https://esm.sh/ and skypack.dev
- esbuild
- JSPM can generate importmaps
Writing this post has made me think that even though I usually don't want to have a build that I run every time I update the project, I might be willing to have a build step (using download-esm or something) that I run only once when setting up the project, and never run again except maybe when I'm updating my dependency versions.
that's all!
Thanks to Marco Rogers who taught me a lot of the things in this post. I've probably made some mistakes in this post and I'd love to know what they are - let me know on Bluesky or Mastodon!
I added a new section to this site a couple weeks ago called TIL ("today I learned").
the goal: save interesting tools & facts I posted on social media
One kind of thing I like to post on Mastodon/Bluesky is "hey, here's a cool thing", like the great SQLite repl litecli, or the fact that cross compiling in Go Just Works and it's amazing, or cryptographic right answers, or this great diff tool. Usually I don't want to write a whole blog post about those things because I really don't have much more to say than "hey this is useful!"
It started to bother me that I didn't have anywhere to put those things: for example recently I wanted to use diffdiff and I just could not remember what it was called.
the solution: make a new section of this blog
So I quickly made a new folder called /til/, added some custom styling (I wanted to style the posts to look a little bit like a tweet), made a little Rake task to help me create new posts quickly (rake new_til), and set up a separate RSS feed for it.
I think this new section of the blog might be more for myself than anything: now when I forget the link to Cryptographic Right Answers I can hopefully look it up on the TIL page. (You might think "julia, why not use bookmarks??" but I have been failing to use bookmarks for my whole life and I don't see that changing ever; putting things in public is for whatever reason much easier for me.)
So far it's been working: often I can actually just make a quick post in 2 minutes, which was the goal.
inspired by Simon Willison's TIL blog
My page is inspired by Simon Willison's great TIL blog, though my TIL posts are a lot shorter.
I don't necessarily want everything to be archived
This came about because I spent a lot of time on Twitter, so I've been thinking about what I want to do about all of my tweets.
I keep reading the advice to "POSSE" ("post on your own site, syndicate elsewhere"), and while I find the idea appealing in principle, for me part of the appeal of social media is that it's a little bit ephemeral. I can post polls or questions or observations or jokes and then they can just kind of fade away as they become less relevant.
I find it a lot easier to identify specific categories of things that I actually want to have on a Real Website That I Own:
- blog posts here!
- comics at https://wizardzines.com/comics/!
- now TILs at https://jvns.ca/til/
and then let everything else be kind of ephemeral.
I really believe in the advice to make email lists though - the first two (blog posts & comics) both have email lists and RSS feeds that people can subscribe to if they want. I might add a quick summary of any TIL posts from that week to the "blog posts from this week" mailing list.

Here's where you can find me at IETF 121 in Dublin!
Monday
- 9:30 - 11:30 • oauth
- 15:30 - 17:00 • alldispatch
Tuesday
Thursday
- 9:30 - 11:30 • oauth
Get in Touch
My Current Drafts
Hello! I've been thinking about the terminal a lot, and yesterday I got curious about all these "control codes", like Ctrl-A, Ctrl-C, Ctrl-W, etc. What's the deal with all of them?
a table of ASCII control characters
Here's a table of all 33 ASCII control characters, and what they do on my machine (on Mac OS), more or less. There are about a million caveats, but I'll talk about what it means and all the problems with this diagram that I know about.
You can also view it as an HTML page (I just made it an image so it would show up in RSS).
different kinds of codes are mixed together
The first surprising thing about this diagram to me is that there are 33 control codes, split into (very roughly speaking) these categories:
- Codes that are handled by the operating system's terminal driver: for example when the OS sees a 3 (Ctrl-C), it'll send a SIGINT signal to the current program
- Everything else is passed through to the application as-is, and the application can do whatever it wants with them. Some subcategories of those:
  - Codes that correspond to a literal keypress of a key on your keyboard (Enter, Tab, Backspace). For example when you press Enter, your terminal gets sent 13.
  - Codes used by readline: "the application can do whatever it wants" often means "it'll do more or less what the readline library does, whether the application actually uses readline or not", so I've labelled a bunch of the codes that readline uses
  - Other codes: for example I think Ctrl-X has no standard meaning in the terminal in general, but emacs uses it very heavily
There's no real structure to which codes are in which categories; they're all just kind of randomly scattered because this evolved organically.
(If you're curious about readline, I wrote more about readline in entering text in the terminal is complicated, and there are a lot of cheat sheets out there)
there are only 33 control codes
Something else that I find a little surprising is that there are only 33 control codes: A to Z, plus 7 more (@, [, \, ], ^, _, ?). This means that if you want to have, for example, Ctrl-1 as a keyboard shortcut in a terminal application, that's not really meaningful - on my machine at least, Ctrl-1 is exactly the same thing as just pressing 1, Ctrl-3 is the same as Ctrl-[, etc.
Also Ctrl+Shift+C isn't a control code - what it does depends on your terminal emulator. On Linux, Ctrl-Shift-X is often used by the terminal emulator to copy or open a new tab or paste, for example; it's not sent to the TTY at all.
Also I use Ctrl+Left Arrow all the time, but that isn't a control code either; instead it sends an ANSI escape sequence (ctrl-[[1;5D), which is a different thing that we absolutely do not have space for in this post.
This "there are only 33 codes" thing is totally different from how keyboard shortcuts work in a GUI, where you can have Ctrl+KEY for any key you want.
the official ASCII names aren't very meaningful to me
Each of these 33 control codes has a name in ASCII (for example 3 is ETX). When all of these control codes were originally defined, they weren't being used for computers or terminals at all; they were used for the telegraph machine. Telegraph machines aren't the same as UNIX terminals, so a lot of the codes were repurposed to mean something else.
Personally I don't find these ASCII names very useful, because 50% of the time the name in ASCII has no actual relationship to what that code does on UNIX systems today. So it feels easier to just ignore the ASCII names completely instead of trying to figure out which ones still match their original meaning.
It's hard to use Ctrl-M as a keyboard shortcut
Another thing that's a bit weird is that Ctrl-M is literally the same as Enter, and Ctrl-I is the same as Tab, which makes it hard to use those two as keyboard shortcuts.
From some quick research, it seems like some folks do still use Ctrl-I and Ctrl-M as keyboard shortcuts (here's an example), but to do that you need to configure your terminal emulator to treat them differently than the default.
For me the main takeaway is that if I ever write a terminal application, I should avoid Ctrl-I and Ctrl-M as keyboard shortcuts in it.
how to identify what control codes get sent
While writing this I needed to do a bunch of experimenting to figure out what various key combinations did, so I wrote this Python script, echo-key.py, that will print them out.
There's probably a more official way, but I appreciated having a script I could customize.
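The linked script is the real thing; here's a minimal sketch of the same idea (mine, not the post's code), just to show the shape of it - put the terminal in raw mode, then read and print bytes:

import os
import sys
import termios
import tty

fd = sys.stdin.fileno()
old = termios.tcgetattr(fd)
tty.setraw(fd)  # raw mode: no ICANON, no ISIG, bytes come through unmodified
try:
    while True:
        b = os.read(fd, 1)
        # \r because raw mode also disables the usual newline translation
        print(f"got byte: {b[0]}\r")
        if b == b"\x03":  # Ctrl-C: raw mode disables SIGINT, so exit manually
            break
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, old)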
caveat: on canonical vs noncanonical mode
Two of these codes (Ctrl-W and Ctrl-U) are labelled in the table as "handled by the OS", but actually they're not always handled by the OS; it depends on whether the terminal is in "canonical" mode or in "noncanonical" mode.
In canonical mode, programs only get input when you press Enter (and the OS is in charge of deleting characters when you press Backspace or Ctrl-W). But in noncanonical mode the program gets input immediately when you press a key, and the Ctrl-W and Ctrl-U codes are passed through to the program to handle any way it wants.
Generally in noncanonical mode the program will handle Ctrl-W and Ctrl-U similarly to how the OS does, but there are some small differences.
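To make that concrete, here's a hedged sketch (my illustration, not code from the post) of how a program might switch into noncanonical mode using Python's termios module:

import sys
import termios

fd = sys.stdin.fileno()
old = termios.tcgetattr(fd)
new = termios.tcgetattr(fd)
new[3] &= ~termios.ICANON          # index 3 is the local-modes (lflag) field
new[6][termios.VMIN] = 1           # return after 1 byte...
new[6][termios.VTIME] = 0          # ...with no timeout
try:
    termios.tcsetattr(fd, termios.TCSADRAIN, new)
    # in noncanonical mode this returns after one keypress, no Enter needed,
    # and Ctrl-W / Ctrl-U arrive as bytes 23 and 21 instead of editing the line
    ch = sys.stdin.read(1)
    print(f"read: {ch!r}")
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, old)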
Some examples of programs that use canonical mode:
- probably pretty much any noninteractive program, like grep or cat
- git, I think
Examples of programs that use noncanonical mode:
- python3, irb and other REPLs
- your shell
- any full screen TUI like less or vim
caveat: all of the "OS terminal driver" codes are configurable with stty
I said that Ctrl-C sends SIGINT, but technically this is not necessarily true: if you really want to, you can remap all of the codes labelled "OS terminal driver", plus Backspace, using a tool called stty, and you can view the mappings with stty -a.
Here are the mappings on my machine right now:
$ stty -a
cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>;
eol2 = <undef>; erase = ^?; intr = ^C; kill = ^U; lnext = ^V;
min = 1; quit = ^\; reprint = ^R; start = ^Q; status = ^T;
stop = ^S; susp = ^Z; time = 0; werase = ^W;
I have personally never remapped any of these and I cannot imagine a reason I would (I think it would be a recipe for confusion and disaster for me), but I asked on Mastodon, and people said the most common reasons they used stty were:
- fix a broken terminal with stty sane
- set stty erase ^H to change how Backspace works
- set stty ixoff
- some people even map SIGINT to a different key, like their DELETE key
caveat: on signals
Two signals caveats:
- If the ISIG terminal mode is turned off, then the OS won't send signals. For example vim turns off ISIG
- Apparently on BSDs, there's an extra control code (Ctrl-T) which sends SIGINFO
You can see which terminal modes a program is setting using strace like this; terminal modes are set with the ioctl system call:
$ strace -tt -o out vim
$ grep ioctl out | grep SET
here are the modes vim sets when it starts (ISIG and ICANON are
missing!):
17:43:36.670636 ioctl(0, TCSETS, {c_iflag=IXANY|IMAXBEL|IUTF8,
c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST, c_cflag=B38400|CS8|CREAD,
c_lflag=ECHOK|ECHOCTL|ECHOKE|PENDIN, ...}) = 0
and it resets the modes when it exits:
17:43:38.027284 ioctl(0, TCSETS, {c_iflag=ICRNL|IXANY|IMAXBEL|IUTF8,
c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST|ONLCR, c_cflag=B38400|CS8|CREAD,
c_lflag=ISIG|ICANON|ECHO|ECHOE|ECHOK|IEXTEN|ECHOCTL|ECHOKE|PENDIN, ...}) = 0
I think the specific combination of modes vim is using here might be called "raw mode"; man cfmakeraw talks about that.
there are a lot of conflicts
Related to "there are only 33 codes": there are a lot of conflicts where different parts of the system want to use the same code for different things. For example, by default Ctrl-S will freeze your screen, but if you turn that off then readline will use Ctrl-S to do a forward search.
Another example is that on my machine sometimes Ctrl-T will send SIGINFO and sometimes it'll transpose 2 characters and sometimes it'll do something completely different, depending on:
- whether the program has ISIG set
- whether the program uses readline / imitates readline's behaviour
caveat: on "backspace" and "other backspace"
In this diagram I've labelled code 127 as "backspace" and 8 as "other backspace". Uh, what?
I think this was the single biggest topic of discussion in the replies on Mastodon - apparently there's a LOT of history to this and I'd never heard of any of it before.
First, here's how it works on my machine:
- I press the Backspace key
- The TTY gets sent the byte 127, which is called DEL in ASCII
- the OS terminal driver and readline both have 127 mapped to "backspace" (so it works both in canonical mode and noncanonical mode)
- The previous character gets deleted
If I press Ctrl+H, it has the same effect as Backspace if I'm using readline, but in a program without readline support (like cat for instance), it just prints out ^H.
Apparently Step 2 above is different for some folks: their Backspace key sends the byte 8 instead of 127, and so if they want Backspace to work, they need to configure the OS (using stty) to set erase = ^H.
There's an incredible section of the Debian Policy Manual on keyboard configuration that describes how Delete and Backspace should work according to Debian policy, which seems very similar to how it works on my Mac today. My understanding (via this mastodon post) is that this policy was written in the 90s because there was a lot of confusion back then about what Backspace should do, and there needed to be a standard to get everything to work.
There's a bunch more historical terminal stuff here, but that's all I'll say for now.
there's probably a lot more diversity in how this works
I've probably missed a bunch more ways that "how it works on my machine" might be different from how it works on other people's machines, and I've probably made some mistakes about how it works on my machine too. But that's all I've got for today.
Some more stuff I know that I've left out: according to stty -a, Ctrl-O is "discard", Ctrl-R is "reprint", and Ctrl-Y is "dsusp". I have no idea how to make those actually do anything (pressing them does not do anything obvious, and some people have told me what they used to do historically, but it's not clear to me if they have a use in 2024), and a lot of the time in practice they seem to just be passed through to the application anyway, so I just labelled Ctrl-R and Ctrl-Y as readline.
not all of this is that useful to know
Also I want to say that I think the contents of this post are kind of interesting, but I don't think they're necessarily that useful. I've used the terminal pretty successfully every day for the last 20 years without knowing literally any of this - I just knew what Ctrl-C, Ctrl-D, Ctrl-Z, Ctrl-R, Ctrl-L did in practice (plus maybe Ctrl-A, Ctrl-E and Ctrl-W) and did not worry about the details for the most part, and that was almost always totally fine except when I was trying to use xterm.js.
But I had fun learning about it, so maybe it'll be interesting to you too.
I've been having problems for the last 3 years or so where Mess With DNS periodically runs out of memory and gets OOM killed.
This hasn't been a big priority for me: usually it just goes down for a few minutes while it restarts, and it only happens once a day at most, so I've just been ignoring it. But last week it started actually causing a problem, so I decided to look into it.
This was kind of a winding road where I learned a lot, so here's a table of contents:
- there's about 100MB of memory available
- the problem: OOM killing the backup script
- attempt 1: use SQLite
- attempt 2: use a trie
- attempt 3: make my array use less memory
there's about 100MB of memory available
I run Mess With DNS on a VM with about 465MB of RAM, which according to ps aux (the RSS column) is split up something like:
- 100MB for PowerDNS
- 200MB for Mess With DNS
- 40MB for hallpass
That leaves about 110MB of memory free.
A while back I set GOMEMLIMIT to 250MB to try to make sure the garbage collector ran if Mess With DNS used more than 250MB of memory, and I think this helped, but it didn't solve everything.
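(GOMEMLIMIT is read by the Go runtime, either from the environment or set from code with debug.SetMemoryLimit; the post doesn't say which way this setup does it. As an environment variable it looks something like this, with a made-up binary name:)

$ GOMEMLIMIT=250MiB ./mess-with-dns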
the problem: OOM killing the backup script
A few weeks ago I started backing up Mess With DNS's database for the first time, using restic.
This has been working okay, but since Mess With DNS operates without much extra memory, I think restic sometimes needed more memory than was available on the system, and so the backup script sometimes got OOM killed.
This was a problem because
- backups might be corrupted sometimes
- more importantly, restic takes out a lock when it runs, and so I'd have to manually do an unlock if I wanted the backups to continue working. Doing manual work like this is the #1 thing I try to avoid with all my web services (who has time for that!), so I really wanted to do something about it.
There's probably more than one solution to this, but I decided to try to make Mess With DNS use less memory so that there was more available memory on the system, mostly because it seemed like a fun problem to try to solve.
what's using memory: IP addresses
I'd run a memory profile of Mess With DNS a bunch of times in the past, so I knew exactly what was using most of Mess With DNS's memory: IP addresses.
When it starts, Mess With DNS loads this database where you can look up the ASN of every IP address into memory, so that when it receives a DNS query it can take the source IP address like 74.125.16.248 and tell you that IP address belongs to GOOGLE.
This database by itself used about 117MB of memory, and a simple du told me that was too much - the original text files were only 37MB!
$ du -sh *.tsv
26M ip2asn-v4.tsv
11M ip2asn-v6.tsv
The way it worked originally is that I had an array of these:
type IPRange struct {
StartIP net.IP
EndIP net.IP
Num int
Name string
Country string
}
and I searched through it with a binary search to figure out if any of the ranges contained the IP I was looking for. Basically the simplest possible thing, and it's super fast: my machine can do about 9 million lookups per second.
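The post links the real code; here's a hedged sketch of what a lookup like that can look like, using sort.Search and netip.Addr (which the post only switches to later) because it has a convenient Compare method:

import (
	"net/netip"
	"sort"
)

// findRange assumes ranges is sorted by StartIP and returns the range
// containing ip, if any.
func findRange(ranges []IPRange, ip netip.Addr) (*IPRange, bool) {
	// find the first range that starts *after* ip...
	i := sort.Search(len(ranges), func(i int) bool {
		return ranges[i].StartIP.Compare(ip) > 0
	})
	if i == 0 {
		return nil, false
	}
	// ...then the candidate is the range just before that one
	r := &ranges[i-1]
	if r.EndIP.Compare(ip) >= 0 {
		return r, true
	}
	return nil, false
}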
attempt 1: use SQLite
I've been using SQLite recently, so my first thought was: maybe I can store all of this data on disk in an SQLite database, give the tables an index, and that'll use less memory.
So I:
- wrote a quick Python script using sqlite-utils to import the TSV files into an SQLite database
- adjusted my code to select from the database instead
This did solve the initial memory goal (after a GC it now hardly used any memory at all, because the table was on disk!), though I'm not sure how much GC churn this solution would cause if we needed to do a lot of queries at once. I did a quick memory profile and it seemed to allocate about 1KB of memory per lookup.
Let's talk about the issues I ran into with using SQLite though.
problem: how to store IPv6 addresses
SQLite doesn't have support for big integers, and IPv6 addresses are 128 bits, so I decided to store them as text. I think BLOB might have been better; I originally thought BLOBs couldn't be compared, but the sqlite docs say they can.
I ended up with this schema:
CREATE TABLE ipv4_ranges (
start_ip INTEGER NOT NULL,
end_ip INTEGER NOT NULL,
asn INTEGER NOT NULL,
country TEXT NOT NULL,
name TEXT NOT NULL
);
CREATE TABLE ipv6_ranges (
start_ip TEXT NOT NULL,
end_ip TEXT NOT NULL,
asn INTEGER,
country TEXT,
name TEXT
);
CREATE INDEX idx_ipv4_ranges_start_ip ON ipv4_ranges (start_ip);
CREATE INDEX idx_ipv6_ranges_start_ip ON ipv6_ranges (start_ip);
CREATE INDEX idx_ipv4_ranges_end_ip ON ipv4_ranges (end_ip);
CREATE INDEX idx_ipv6_ranges_end_ip ON ipv6_ranges (end_ip);
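The lookup then becomes something like this (a sketch; the EXPLAIN QUERY PLAN section below shows the same query shape):

SELECT asn, country, name
FROM ipv6_ranges
WHERE ? BETWEEN start_ip AND end_ip
LIMIT 1;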
Also I learned that Python has an ipaddress module, so I could use ipaddress.ip_address(s).exploded to make sure that the IPv6 addresses were expanded, so that a string comparison would compare them properly.
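For example:

import ipaddress

# the compressed form doesn't compare correctly as a string,
# but the fully expanded form does
addr = ipaddress.ip_address("2607:f8b0::200e")
print(addr.exploded)  # 2607:f8b0:0000:0000:0000:0000:0000:200e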
problem: it's 500x slower
I ran a quick microbenchmark, something like this. It printed out that it could look up 17,000 IPv6 addresses per second, and similarly for IPv4 addresses.
This was pretty discouraging - being able to look up 17k addresses per second is kind of fine (Mess With DNS does not get a lot of traffic), but I compared it to the original binary search code, and the original code could do 9 million per second.
ips := []net.IP{}
count := 20000
for i := 0; i < count; i++ {
// create a random IPv6 address
bytes := randomBytes()
ip := net.IP(bytes[:])
ips = append(ips, ip)
}
now := time.Now()
success := 0
for _, ip := range ips {
_, err := ranges.FindASN(ip)
if err == nil {
success++
}
}
fmt.Println(success)
elapsed := time.Since(now)
fmt.Println("number per second", float64(count)/elapsed.Seconds())
time for EXPLAIN QUERY PLAN
I'd never really done an EXPLAIN in sqlite, so I thought it would be a fun opportunity to see what the query plan was doing.
sqlite> explain query plan select * from ipv6_ranges where '2607:f8b0:4006:0824:0000:0000:0000:200e' BETWEEN start_ip and end_ip;
QUERY PLAN
`--SEARCH ipv6_ranges USING INDEX idx_ipv6_ranges_end_ip (end_ip>?)
It looks like it's just using the end_ip index and not the start_ip index, so maybe it makes sense that it's slower than the binary search.
I tried to figure out if there was a way to make SQLite use both indexes, but I couldn't find one, and maybe it knows best anyway.
At this point I gave up on the SQLite solution: I didn't love that it was slower, and it's also a lot more complex than just doing a binary search. I felt like I'd rather keep something much more similar to the binary search.
A few things I tried with SQLite that did not cause it to use both indexes:
- using a compound index instead of two separate indexes
- running ANALYZE
- using INTERSECT to intersect the results of start_ip < ? and ? < end_ip. This did make it use both indexes, but it also seemed to make the query literally 1000x slower, probably because it needed to create the results of both subqueries in memory and intersect them.
attempt 2: use a trie
My next idea was to use a trie, because I had some vague idea that maybe a trie would use less memory, and I found this library called ipaddress-go that lets you look up IP addresses using a trie.
I tried using it (here's the code), but I think I was doing something wildly wrong, because compared to my naive array + binary search:
- it used WAY more memory (800MB to store just the IPv4 addresses)
- it was a lot slower to do the lookups (it could do only 100K/second instead of 9 million/second)
I'm not really sure what went wrong here, but I gave up on this approach and decided to just try to make my array use less memory and stick to a simple binary search.
some notes on memory profiling
One thing I learned about memory profiling is that you can use the runtime package to see how much memory is currently allocated in the program. That's how I got all the memory numbers in this post. Here's the code:
func memusage() {
runtime.GC()
var m runtime.MemStats
runtime.ReadMemStats(&m)
fmt.Printf("Alloc = %v MiB\n", m.Alloc/1024/1024)
// write mem.prof
f, err := os.Create("mem.prof")
if err != nil {
log.Fatal(err)
}
pprof.WriteHeapProfile(f)
f.Close()
}
Also I learned that if you use pprof to analyze a heap profile, there are two ways to analyze it: you can pass either --alloc_space or --inuse_space to go tool pprof. I don't know how I didn't realize this before, but alloc_space will tell you about everything that was allocated, and inuse_space will just include memory that's currently in use.
Anyway, I ran go tool pprof -pdf --inuse_space mem.prof > mem.pdf a lot. Also, every time I use pprof I find myself referring to my own intro to pprof; it's probably the blog post I wrote that I use the most often. I should add --alloc_space and --inuse_space to it.
attempt 3: make my array use less memory
I was storing my ip2asn entries like this:
type IPRange struct {
StartIP net.IP
EndIP net.IP
Num int
Name string
Country string
}
I had 3 ideas for ways to improve this:
- There was a lot of repetition of the Name and the Country, because a lot of IP ranges belong to the same ASN
- net.IP is an []byte under the hood, which felt like it involved an unnecessary pointer; was there a way to inline it into the struct?
- Maybe I didn't need both the start IP and the end IP; often the ranges were consecutive, so maybe I could rearrange things so that I only had the start IP
idea 3.1: deduplicate the Name and Country
I figured I could store the ASN info in an array, and then just store the index
into the array in my IPRange struct. Here are the structs so you can see what
I mean:
type IPRange struct {
StartIP netip.Addr
EndIP netip.Addr
ASN uint32
Idx uint32
}
type ASNInfo struct {
Country string
Name string
}
type ASNPool struct {
asns []ASNInfo
lookup map[ASNInfo]uint32
}
This worked! It brought memory usage from 117MB to 65MB - a 50MB savings. I felt good about this.
Here's all of the code for that part.
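As a sketch of how the deduplication in a pool like that might work (my illustration, assuming the structs above; the real code is what's linked):

// Add returns the index for info, reusing an existing entry if this
// Country/Name pair has been seen before.
func (p *ASNPool) Add(info ASNInfo) uint32 {
	if idx, ok := p.lookup[info]; ok {
		return idx
	}
	idx := uint32(len(p.asns))
	p.asns = append(p.asns, info)
	p.lookup[info] = idx
	return idx
}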
how big are ASNs?
As an aside: I'm storing the ASN in a uint32, is that right? I looked in the ip2asn file and the biggest one seems to be 401307, though there are a few lines that say 4294901931, which is much bigger but still just inside the range of a uint32. So I can definitely use a uint32.
59.101.179.0 59.101.179.255 4294901931 Unknown AS4294901931
idea 3.2: use netip.Addr instead of net.IP
It turns out that I'm not the only one who felt that net.IP was using an unnecessary amount of memory - in 2021 the folks at Tailscale released a new IP address library for Go which solves this and many other issues. They wrote a great blog post about it.
I discovered (to my delight) that not only does this new IP address library exist and do exactly what I want, it's also now in the Go standard library as netip.Addr. Switching to netip.Addr was very easy and saved another 20MB of memory, bringing us to 46MB.
I didn't try my third idea (remove the end IP from the struct) because I'd already been programming for long enough on a Saturday morning and I was happy with my progress.
It's always such a great feeling when I think "hey, I don't like this, there must be a better way" and then immediately discover that someone has already made the exact thing I want, thought about it a lot more than me, and implemented it much better than I would have.
all of this was messier in real life
Even though I tried to explain this in a simple linear way ("I tried X, then I tried Y, then I tried Z"), that's kind of a lie - I always try to take my actual debugging process (total chaos) and make it seem more linear and understandable, because the reality is just too annoying to write down. It's more like:
- try sqlite
- try a trie
- second guess everything that I concluded about sqlite, go back and look at the results again
- wait, what about indexes
- very very belatedly realize that I can use runtime to check how much memory everything is using, start doing that
- look at the trie again, maybe I misunderstood everything
- give up and go back to binary search
- look at all of the numbers for tries/sqlite again to make sure I didn't misunderstand
A note on using 512MB of memory
Someone asked why I don't just give the VM more memory. I could very easily afford to pay for a VM with 1GB of memory, but I feel like 512MB really should be enough (and really that 256MB should be enough!), so I'd rather stay inside that constraint. It's kind of a fun puzzle.
a few ideas from the replies
Folks had a lot of good ideas I hadn't thought of. I'm recording them as inspiration in case I feel like having another Fun Performance Day at some point.
- Try Go's unique package for the ASNPool. Someone tried this and it uses more memory, probably because Go's pointers are 64 bits
- Try compiling with GOARCH=386 to use 32-bit pointers to save space (maybe in combination with using unique!)
- It should be possible to store all of the IPv6 addresses in just 64 bits, because only the first 64 bits of the address are public
- Interpolation search might be faster than binary search, since IP addresses are numeric
- Try the MaxMind db format with mmdbwriter or mmdbctl
- Tailscale's art routing table package
the result: saved 70MB of memory!
I deployed the new version and now Mess With DNS is using less memory! Hooray!
A few other notes:
- lookups are a little slower: in my microbenchmark they went from 9 million lookups/second to 6 million, maybe because I added a little indirection. Using less memory and a little more CPU seemed like a good tradeoff though.
- it's still using more memory than the raw text files do (46MB vs 37MB); I guess pointers take up space, and that's okay.
I'm honestly not sure if this will solve all my memory problems - probably not! But I had fun, I learned a few things about SQLite, I still don't know what to think about tries, and it made me love binary search even more than I already did.
Warning: this is a post about very boring yakshaving, probably only of interest to people who are trying to upgrade Hugo from a very old version to a new version. But what are blogs for if not documenting one's very boring yakshaves from time to time?
So yesterday I decided to try to upgrade Hugo. There's no real reason to do this - I've been using Hugo version 0.40 to generate this blog since 2018, it works fine, and I don't have any problems with it. But I thought: maybe it won't be as hard as I think, and I kind of like a tedious computer task sometimes!
I thought I'd document what I learned along the way in case it's useful to anyone else doing this very specific migration. I upgraded from Hugo v0.40 (from 2018) to v0.135 (from 2024).
Here are most of the changes I had to make:
change 1: template "theme/partials/thing.html" is now partial thing.html
I had to replace a bunch of instances of {{ template "theme/partials/header.html" . }} with {{ partial "header.html" . }}.
This happened in v0.42:
We have now virtualized the filesystems for project and theme files. This makes everything simpler, faster and more powerful. But it also means that template lookups on the form {{ template "theme/partials/pagination.html" . }} will not work anymore. That syntax has never been documented, so it's not expected to be in wide use.
change 2: .Data.Pages is now site.RegularPages
This seems to be discussed in the release notes for 0.57.2
I just needed to replace .Data.Pages with site.RegularPages in the template on the homepage as well as in my RSS feed template.
change 3: .Next and .Prev got flipped
I had this comment in the part of my theme where I link to the next/previous blog post:
"next" and "previous" in hugo apparently mean the opposite of what I'd think they'd mean intuitively. I'd expect "next" to mean "in the future" and "previous" to mean "in the past", but it's the opposite
It looks like they changed this in ad705aac064 so that "next" actually is in the future and "prev" actually is in the past. I definitely find the new behaviour more intuitive.
downloading the Hugo changelogs with a script
Figuring out why/when all of these changes happened was a little difficult. I ended up hacking together a bash script to download all of the changelogs from github as text files, which I could then grep to try to figure out what happened. It turns out it's pretty easy to get all of the changelogs from the GitHub API.
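The author's actual script isn't shown; a one-liner in the same spirit might look like this (assuming you have jq, and paging through with ?page=N):

$ curl -s "https://api.github.com/repos/gohugoio/hugo/releases?per_page=100&page=1" \
    | jq -r '.[] | "## \(.tag_name)\n\(.body)\n"' > changelogs.txt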
So far everything was not so bad - there was also a change around taxonomies that I can't quite explain, but it was all pretty manageable. But then we got to the really tough one: the markdown renderer.
change 4: the markdown renderer (blackfriday -> goldmark)
The blackfriday markdown renderer (which was previously the default) was removed in v0.100.0. This seems pretty reasonable:
It has been deprecated for a long time, its v1 version is not maintained anymore, and there are many known issues. Goldmark should be a mature replacement by now.
Fixing all my Markdown changes was a huge pain - I ended up having to update 80 different Markdown files (out of 700) so that they would render properly, and I'm not totally sure it was worth it.
why bother switching renderers?
The obvious question here is: why bother even trying to upgrade Hugo at all if I have to switch Markdown renderers? My old site was running totally fine, and I think it wasn't necessarily a good use of time. But the one reason I think it might be useful in the future is that the new renderer (goldmark) uses the CommonMark markdown standard, which I'm hoping will be somewhat more futureproof. So maybe I won't have to go through this again? We'll see.
Also it turned out that the new Goldmark renderer does fix some problems I had (but didn't know that I had) with smart quotes and with how lists/blockquotes interact.
finding all the Markdown problems: the process
The hard part of this Markdown change was even figuring out what changed. Almost all of the problems (including #2 and #3 above) just silently broke the site; they didn't cause any errors or anything. So I had to diff the HTML to hunt them down.
Here's what I ended up doing:
- Generate the site with the old version, put it in public_old
- Generate the new version, put it in public
- Diff every single HTML file in public/ and public_old/ with this diff.sh script and put the results in a diffs/ folder
- Run variations on find diffs -type f | xargs cat | grep -C 5 '(31m|32m)' | less -r over and over again to look at every single change until I found something that seemed wrong
- Update the Markdown to fix the problem
- Repeat until everything seemed okay
(the grep '31m|32m' thing is searching for red/green text in the diff)
This was very time consuming, but it was a little bit fun for some reason, so I kept doing it until it seemed like nothing too horrible was left.
the new markdown rules
Here's a list of every type of Markdown change I had to make. It's very possible these are all extremely specific to me, but it took me a long time to figure them all out, so maybe this will be helpful to one other person who finds this in the future.
4.1: mixing HTML and markdown
This doesn't work anymore (it doesn't expand the link):
<small>
[a link](https://example.com)
</small>
I need to do this instead, with blank lines separating the HTML tags from the Markdown:
<small>

[a link](https://example.com)

</small>
This works too:
<small> [a link](https://example.com) </small>
4.2: << is changed into «
I didn't want this, so I needed to configure:
markup:
goldmark:
extensions:
typographer:
leftAngleQuote: '<<'
rightAngleQuote: '>>'
4.3: nested lists sometimes need 4 space indents
This doesn't render as a nested list anymore if I only indent by 2 spaces; I need to put 4 spaces.
1. a
  * b
  * c
2. b
The problem is that the amount of indent needed depends on the size of the list markers. Here's a reference in CommonMark for this.
4.4: blockquotes inside lists work better
Previously the > quote here didn't render as a blockquote, and with the new renderer it does.
* something
> quote
* something else
I found a bunch of Markdown that had been kind of broken (which I hadn't noticed) that works better with the new renderer, and this is an example of that.
Lists inside blockquotes also seem to work better.
4.5: headings inside lists
Previously this didn't render as a heading, but now it does. So I needed to replace the # with an escaped # so it wouldn't become a heading.
* # passengers: 20
4.6: + or 1) at the beginning of the line makes it a list
I had something which looked like this:
`1 / (1
+ exp(-1)) = 0.73`
With Blackfriday it rendered like this:
<p><code>1 / (1
+ exp(-1)) = 0.73</code></p>
and with Goldmark it rendered like this:
<p>`1 / (1</p>
<ul>
<li>exp(-1)) = 0.73`</li>
</ul>
Same thing if there was an accidental 1) at the beginning of a line, like in this Markdown snippet
I set up a small Hadoop cluster (1 master, 2 workers, replication set to
1) on
To fix this I just had to rewrap the line so that the + wasn't the first character.
The Markdown is formatted this way because I wrap my Markdown to 80 characters a lot, and the wrapping isn't very context sensitive.
4.7: no more smart quotes in code blocks
There were a bunch of places where the old renderer (Blackfriday) was doing unwanted things in code blocks, like replacing ... with … or replacing quotes with smart quotes. I hadn't realized this was happening, and I was very happy to have it fixed.
4.8: better quote management
The way this gets rendered got better:
"Oh, *interesting*!"
- old: “Oh, interesting!“
- new: “Oh, interesting!”
Before there were two left smart quotes; now the quotes match.
4.9: images are no longer wrapped in a p tag
Previously if I had an image like this:
<img src="https://jvns.ca/images/rustboot1.png">
it would get wrapped in a <p> tag; now it doesn't anymore. I dealt with this just by adding a margin-bottom: 0.75em to images in the CSS; hopefully that'll make them display well enough.
4.10: <br> is now wrapped in a p tag
Previously this wouldn't get wrapped in a p tag, but now it seems to:
<br><br>
I just gave up on fixing this though, and resigned myself to maybe having some extra space in some cases. Maybe I'll try to fix it later if I feel like another yakshave.
4.11: some more goldmark settings
I also needed to:
- turn off code highlighting (because it wasn't working properly and I didn't have it before anyway)
- use the old "blackfriday" method to generate heading IDs so they didn't change
- allow raw HTML in my markdown
Here's what I needed to add to my config.yaml to do all that:
markup:
highlight:
codeFences: false
goldmark:
renderer:
unsafe: true
parser:
autoHeadingIDType: blackfriday
Maybe I'll try to get syntax highlighting working one day, who knows. I might prefer having it off though.
a little script to compare blackfriday and goldmark
I also wrote a little program to compare the Blackfriday and Goldmark output for various markdown snippets; here it is in a gist.
It's not configured exactly the same way Blackfriday and Goldmark were in my Hugo versions, but it was still helpful to have, to help me understand what was going on.
a quick note on maintaining themes
My approach to themes in Hugo has been:
- pay someone to make a nice design for the site (for example wizardzines.com was designed by Melody Starling)
- use a totally custom theme
- commit that theme to the same Github repo as the site
So I just need to edit the theme files to fix any problems. Also, I wrote a lot of the theme myself, so I'm pretty familiar with how it works.
Relying on someone else to keep a theme updated feels kind of scary to me; I think if I were using a third-party theme I'd just copy the code into my site's github repo and then maintain it myself.
which static site generators have better backwards compatibility?
I asked on Mastodon if anyone had used a static site generator with good backwards compatibility.
The main answers seemed to be Jekyll and 11ty. Several people said they'd been using Jekyll for 10 years without any issues, and 11ty says it has stability as a core goal.
I think a big factor in how appealing Jekyll/11ty are is how easy it is for you to maintain a working Ruby / Node environment on your computer: part of the reason I stopped using Jekyll was that I got tired of having to maintain a working Ruby installation. But I imagine this wouldn't be a problem for a Ruby or Node developer.
Several people said that they don't build their Jekyll site locally at all - they just use GitHub Pages to build it.
that's it!
Overall I've been happy with Hugo - I started using it because it had fast build times and it was a static binary, and both of those things are still extremely useful to me. I might have spent 10 hours on this upgrade, but I've probably spent 1000+ hours writing blog posts without thinking about Hugo at all, so that seems like an extremely reasonable ratio.
I find it hard to be too mad about the backwards incompatible changes: most of them were made quite a long time ago, Hugo does a great job of making their old releases available so you can use an old release if you want, and the most difficult one was removing support for the blackfriday Markdown renderer in favour of something CommonMark-compliant, which seems pretty reasonable to me even if it is a huge pain.
But it did take a long time, and I don't think I'd particularly recommend moving 700 blog posts to a new Markdown renderer unless you're really in the mood for a lot of computer suffering for some reason.
The new renderer did fix a bunch of problems, so I think overall it might be a good thing, even if I'll have to remember to make 2 changes to how I write Markdown (4.1 and 4.3).
Also I'm still using Hugo 0.54 for https://wizardzines.com, so maybe these notes will be useful to Future Me if I ever feel like upgrading Hugo for that site.
Hopefully I didn't break too many things on the blog by doing this - let me know if you see anything broken!
Yesterday I was thinking about how long it took me to get a colorscheme in my terminal that I was mostly happy with (SO MANY YEARS), and it made me wonder what it is about terminal colours that makes them so hard.
So I asked people on Mastodon what problems they've run into with colours in the terminal, and I got a ton of interesting responses! Let's talk about some of the problems and a few possible ways to fix them.
problem 1: blue on black
One of the top complaints was "blue on black is hard to read". Here's an example of that: if I open Terminal.app, set the background to black, and run ls, the directories are displayed in a blue that isn't that easy to read:
To understand why we're seeing this blue, let's talk about ANSI colours!
the 16 ANSI colours
Your terminal has 16 numbered colours: black, red, green, yellow, blue, magenta, cyan, white, and a "bright" version of each of those.
Programs can use them by printing out an "ANSI escape code". For example, if you want to see each of the 16 colours in your terminal, you can run this Python program:
def color(num, text):
return f"\033[38;5;{num}m{text}\033[0m"
for i in range(16):
print(color(i, f"number {i:02}"))
what are the ANSI colours?
This made me wonder: if blue is colour number 4, who decides what hex colour that should correspond to?
The answer seems to be "there's no standard; terminal emulators just choose colours, and it's not very consistent". Here's a screenshot of a table from Wikipedia, where you can see that there's a lot of variation:
problem 1.5: bright yellow on white
Bright yellow on white is even worse than blue on black; here's what I get in a terminal with the default settings:
That's almost impossible to read (and some other colours like light green cause similar issues), so let's talk about solutions!
two ways to reconfigure your colours
If you're annoyed by these colour contrast issues (or maybe you just think the default ANSI colours are ugly), you might think: well, I'll just choose a different "blue" and pick something I like better!
There are two ways you can do this:
Way 1: Configure your terminal emulator: I think most modern terminal emulators have a way to reconfigure the colours, and some of them even come with some preinstalled themes that you might like better than the defaults.
Way 2: Run a shell script: There are ANSI escape codes that you can print out to tell your terminal emulator to reconfigure its colours. Here's a shell script that does that, from the base16-shell project.
You can see that it has a few different conventions for changing the colours - I guess different terminal emulators have different escape codes for changing their colour palette, and so the script is trying to pick the right style of escape code based on the TERM environment variable.
what are the pros and cons of the 2 ways of configuring your colours?
I prefer to use the "shell script" method, because:
- if I switch terminal emulators for some reason, I don't need to learn a different configuration system; my colours still Just Work
- I use base16-shell with base16-vim to make my vim colours match my terminal colours, which is convenient
Some advantages of configuring colours in your terminal emulator:
- if you use a popular terminal emulator, there are probably a lot more nice terminal themes out there that you can choose from
- not all terminal emulators support the "shell script method", and even if they do, the results can be a little inconsistent
This is what my shell has looked like for probably the last 5 years (using the solarized light base16 theme), and I'm pretty happy with it. Here's htop:
Okay, so let's say you've found a terminal colorscheme that you like. What else can go wrong?
problem 2: programs using 256 colours
Here's what some output of fd, a find alternative, looks like in my colorscheme:
The contrast is pretty bad here, and I definitely don't have that lime green in my normal colorscheme. What's going on?
We can see what colour codes fd is using by using the unbuffer program to capture its output, including the colour codes:
$ unbuffer fd . > out
$ vim out
^[[38;5;48mbad-again.sh^[[0m
^[[38;5;48mbad.sh^[[0m
^[[38;5;48mbetter.sh^[[0m
out
^[[38;5;48 means "set the foreground colour to colour 48". Terminals don't only have 16 colours - many terminals these days actually have 3 ways of specifying colours (there's a sketch of the escape codes for each after this list):
- the 16 ANSI colours we already talked about
- an extended set of 256 colours
- a further extended set of 24-bit hex colours, like #ffea03
So fd is using one of the colours from the extended 256-colour set. bat (a cat alternative) does something similar - here's what it looks like by default in my terminal.
This looks fine though, and it really seems like it's trying to work well with a variety of terminal themes.
some newer tools seem to have theme support
I think it's interesting that some of these newer terminal tools (fd, bat, delta, and probably more) have support for arbitrary custom themes. I guess the downside of this approach is that the default theme might clash with your terminal's background, but the upside is that it gives you a lot more control over theming the tool's output than just choosing 16 ANSI colours.
I don't really use bat, but if I did I'd probably use bat --theme ansi to just use the ANSI colours that I have set in my normal terminal colorscheme.
problem 3: the grays in Solarized
A bunch of people on Mastodon mentioned a specific issue with grays in the Solarized theme: when I list a directory, the base16 Solarized Light theme looks like this:
but iTerm's default Solarized Light theme looks like this:
This is because in the iTerm theme (which is the original Solarized design), colours 9-14 (the "bright blue", "bright red", etc) are mapped to a series of grays, and when I run ls, it's trying to use those "bright" colours to colour my directories and executables.
My best guess for why the original Solarized theme is designed this way is to make the grays available to the vim Solarized colorscheme.
I'm pretty sure I prefer the modified base16 version I use, where the "bright" colours are actually colours instead of all being shades of gray. (I didn't actually realize the version I was using wasn't the "original" Solarized theme until I wrote this post.)
In any case I really love Solarized, and I'm very happy it exists so that I can use a modified version of it.
problem 4: a vim theme that doesn't match the terminal background
If my vim theme has a different background colour than my terminal theme, I get this ugly border, like this:
This one is a pretty minor issue though, and I think making your terminal background match your vim background is pretty straightforward.
problem 5: programs setting a background color
A few people mentioned problems with terminal applications setting an unwanted background colour, so let's look at an example of that.
Here ngrok has set the background to colour #16 ("black"), but the base16-shell script I use sets colour 16 to be bright orange, so I get this, which is pretty bad:
I think the intention is for ngrok to look something like this:
I think base16-shell sets colour #16 to orange (instead of black) so that it can provide extra colours for use by base16-vim.
This feels reasonable to me - I use base16-vim in the terminal, so I guess I'm using that feature, and it's probably more important to me than ngrok (which I rarely use) behaving a bit weirdly.
This particular issue is a maybe obscure clash between ngrok and my colorscheme, but I think this kind of clash is pretty common when a program sets an ANSI background colour that the user has remapped for some reason.
a nice solution to contrast issues: "minimum contrast"
A bunch of terminals (iTerm2, tabby, kitty's text_fg_override_threshold, and folks tell me also Ghostty and Windows Terminal) have a "minimum contrast" feature that will automatically adjust colours to make sure they have enough contrast.
Here's an example from iTerm. This ngrok accident from before has pretty bad contrast; I find it pretty difficult to read:
With "minimum contrast" set to 40 in iTerm, it looks like this instead:
I didn't have minimum contrast turned on before, but I just turned it on today because it makes such a big difference when something goes wrong with colours in the terminal.
problem 6: TERM being set to the wrong thing
A few people mentioned that they'll SSH into a system that doesn't support the TERM environment variable that they have set locally, and then the colours won't work.
I think the way TERM works is that systems have a terminfo database, so if the value of the TERM environment variable isn't in the system's terminfo database, then it won't know how to output colours for that terminal. I don't know too much about terminfo, but someone linked me to this terminfo rant that talks about a few other issues with terminfo.
I don't have a system on hand to reproduce this one, so I can't say for sure how to fix it, but this stackoverflow question suggests running something like TERM=xterm ssh instead of ssh.
problem 7: picking "good" colours is hard
A couple of problems people mentioned with designing / finding terminal colorschemes:
- some folks are colorblind and have trouble finding an appropriate colorscheme
- accidentally making the background colour too close to the cursor or selection colour, so they're hard to find
- generally, finding colours that work with every program is a struggle (for example you can see me having a problem with this with ngrok above!)
problem 8: making nethack/mc look right
Another problem people mentioned is using a program like nethack or midnight commander, which you might expect to have a specific colourscheme based on the default ANSI terminal colours.
For example, midnight commander has a really specific classic look:
But in my Solarized theme, midnight commander looks like this:
The Solarized version feels like it could be disorienting if you're very used to the "classic" look.
One solution Simon Tatham mentioned for this is using some palette customization ANSI codes (like the ones base16 uses that I talked about earlier) to change the colour palette right before starting the program - for example, remapping yellow to a brighter yellow before starting Nethack so that the yellow characters look better.
problem 9: commands disabling colours when writing to a pipe
If I run fd | less, I see something like this, with the colours disabled.
In general I find this useful: if I pipe a command to grep, I don't want it to print out all those colour escape codes, I just want the plain text. But what if you want to see the colours?
To see the colours, you can run unbuffer fd | less -r! I just learned about unbuffer recently and I think it's really cool: unbuffer opens a tty for the command to write to, so that it thinks it's writing to a TTY. It also fixes issues with programs buffering their output when writing to a pipe, which is why it's called unbuffer.
Here's what the output of unbuffer fd | less -r looks like for me:
Also some commands (including fd) support a --color=always flag, which will force them to always print out the colours.
problem 10: unwanted colour in ls and other commands
Some people mentioned that they don't want ls to use colour at all: perhaps because ls uses blue, it's hard to read on black, and maybe they don't feel like customizing their terminal's colourscheme to make the blue more readable, or they just don't find the use of colour helpful.
Some possible solutions to this one:
- you can run ls --color=never, which is probably easiest
- you can also set LS_COLORS to customize the colours used by ls. I think some other programs besides ls support the LS_COLORS environment variable too.
- also some programs support setting NO_COLOR=true (there's a list here)
Here's an example of running LS_COLORS="fi=0:di=0:ln=0:pi=0:so=0:bd=0:cd=0:or=0:ex=0" ls:
problem 11: the colours in vim
I used to have a lot of problems with configuring my colours in vim: I'd set up my terminal colours in a way that I thought was okay, and then I'd start vim and it would just be a disaster.
I think what was going on here is that today, there are two ways to set up a vim colorscheme in the terminal:
- using your ANSI terminal colours: you tell vim which ANSI colour number to use for the background, for functions, etc.
- using 24-bit hex colours: instead of ANSI terminal colours, the vim colorscheme can use hex codes like #faea99 directly
20 years ago when I started using vim, terminals with 24-bit hex colour support were a lot less common (or maybe they didn't exist at all), and vim certainly didn't have support for using 24-bit colour in the terminal. From some quick searching through git, it looks like vim added support for 24-bit colour in 2016 - just 8 years ago!
So to get colours to work properly in vim before 2016, you needed to synchronize your terminal colorscheme and your vim colorscheme. Here's what that looked like: the colorscheme needed to map the vim colour classes like cterm05 to ANSI colour numbers.
But in 2024, the story is really different! Vim (and Neovim, which I use now) support 24-bit colours, and as of Neovim 0.10 (released in May 2024), the termguicolors setting (which tells Vim to use 24-bit hex colours for colorschemes) is turned on by default in any terminal with 24-bit colour support.
So this "you need to synchronize your terminal colorscheme and your vim colorscheme" problem is not an issue anymore for me in 2024, since I don't plan to use terminals without 24-bit colour support in the future.
The biggest consequence for me of this whole thing is that I don't need base16 to set colours 16-21 to weird stuff anymore to integrate with vim - I can just use a terminal theme and a vim theme, and as long as the two themes use similar colours (so it's not jarring for me to switch between them) there's no problem. I think I can just remove those parts from my base16 shell script and totally avoid the problem with ngrok and the weird orange background I talked about above.
some more problems I left out
I think there are a lot of issues around the intersection of multiple programs, like using some combination of tmux/ssh/vim, that I couldn't figure out how to reproduce well enough to talk about. Also I'm sure I missed a lot of other things too.
base16 has really worked for me
I've personally had a lot of success with using base16-shell with base16-vim - I just need to add a couple of lines to my fish config to set it up (plus a few .vimrc lines) and then I can move on and accept any remaining problems that that doesn't solve.
I don't think base16 is for everyone though; some limitations I'm aware of with base16 that might make it not work for you:
- it comes with a limited set of builtin themes and you might not like any of them
- the Solarized base16 theme (and maybe all of the themes?) sets the "bright" ANSI colours to be exactly the same as the normal colours, which might cause a problem if you're relying on the "bright" colours to be different from the regular ones
- it sets colours 16-21 in order to give the vim colorschemes from base16-vim access to more colours, which might not be relevant if you always use a terminal with 24-bit colour support, and can cause problems like the ngrok issue above
- also the way it sets colours 16-21 could be a problem in terminals that don't have 256-colour support, like the linux framebuffer terminal
Apparently there's a community fork of base16 called tinted-theming, which I haven't looked into much yet.
some other colorscheme tools
Just one so far, but I'll link more if people tell me about them:
- rootloops.sh for generating colorschemes (and "let's create a terminal color scheme")
- Some popular colorschemes (according to people I asked on Mastodon): Catppuccin, Monokai, Gruvbox, Dracula, Modus (a high contrast theme), Tokyo Night, Nord, Rosé Pine
okay, that was a lot
We talked about a lot in this post, and while I think learning about all these details is kind of fun if I'm in the mood to do a deep dive, I find it SO FRUSTRATING to deal with it when I just want my colours to work! Being surprised by unreadable text and having to find a workaround is just not my idea of a good day.
Personally I'm a zero-configuration kind of person and it's not that appealing to me to have to put together a lot of custom configuration just to make my colours in the terminal look acceptable. I'd much rather just have some reasonable defaults that I don't have to change.
minimum contrast seems like an amazing feature
My one big takeaway from writing this was to turn on "minimum contrast" in my terminal: I think it's going to fix most of the occasional accidental unreadable text issues I run into, and I'm pretty excited about it.
I spent a lot of time in the past couple of weeks working on a website in Go that may or may not ever see the light of day, but I learned a couple of things along the way I wanted to write down. Here they are:
go 1.22 now has better routing
I've never felt motivated to learn any of the Go routing libraries (gorilla/mux, chi, etc), so I've been doing all my routing by hand, like this.
// DELETE /records:
case r.Method == "DELETE" && n == 1 && p[0] == "records":
    if !requireLogin(username, r.URL.Path, r, w) {
        return
    }
    deleteAllRecords(ctx, username, rs, w, r)
// POST /records/<ID>
case r.Method == "POST" && n == 2 && p[0] == "records" && len(p[1]) > 0:
    if !requireLogin(username, r.URL.Path, r, w) {
        return
    }
    updateRecord(ctx, username, p[1], rs, w, r)
But apparently as of Go 1.22, Go now has better support for routing in the standard library, so that code can be rewritten something like this:
mux.HandleFunc("DELETE /records/", app.deleteAllRecords)
mux.HandleFunc("POST /records/{record_id}", app.updateRecord)
Though it would also need a login middleware, so maybe something more like
this, with a requireLogin middleware.
mux.Handle("DELETE /records/", requireLogin(http.HandlerFunc(app.deleteAllRecords)))
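For what it's worth, a minimal sketch of what that middleware could look like follows; this is my guess at the shape (the cookie name and isValidSession are made up, and it assumes net/http is imported), not the real code from my project:

func requireLogin(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // look up the session however the app does it; "session" is a made-up cookie name
        cookie, err := r.Cookie("session")
        if err != nil || !isValidSession(cookie.Value) {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        next.ServeHTTP(w, r)
    })
}

// stand-in for real session validation
func isValidSession(s string) bool { return s != "" }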
a gotcha with the built-in router: redirects with trailing slashes
One annoying gotcha I ran into was: if I make a route for /records/, then a
request for /records will be redirected to /records/.
I ran into an issue with this where sending a POST request to /records
redirected to a GET request for /records/, which broke the POST request
because it removed the request body. Thankfully Xe Iaso wrote a blog post about the exact same issue which made it
easier to debug.
I think the solution to this is just to use API endpoints like POST /records
instead of POST /records/, which seems like a more normal design anyway.
sqlc automatically generates code for my db queries
I got a little bit tired of writing so much boilerplate for my SQL queries, but I didn't really feel like learning an ORM, because I know what SQL queries I want to write, and I didn't feel like learning the ORM's conventions for translating things into SQL queries.
But then I found sqlc, which will compile a query like this:
-- name: GetVariant :one
SELECT *
FROM variants
WHERE id = ?;
into Go code like this:
const getVariant = `-- name: GetVariant :one
SELECT id, created_at, updated_at, disabled, product_name, variant_name
FROM variants
WHERE id = ?
`

func (q *Queries) GetVariant(ctx context.Context, id int64) (Variant, error) {
    row := q.db.QueryRowContext(ctx, getVariant, id)
    var i Variant
    err := row.Scan(
        &i.ID,
        &i.CreatedAt,
        &i.UpdatedAt,
        &i.Disabled,
        &i.ProductName,
        &i.VariantName,
    )
    return i, err
}
What I like about this is that if I'm ever unsure about what Go code to write for a given SQL query, I can just write the query I want, read the generated function and it'll tell me exactly what to do to call it. It feels much easier to me than trying to dig through the ORM's documentation to figure out how to construct the SQL query I want.
Reading Brandur's sqlc notes from 2024 also gave me some confidence that this is a workable path for my tiny programs. That post gives a really helpful example of how to conditionally update fields in a table using CASE statements (for example if you have a table with 20 columns and you only want to update 3 of them).
sqlite tips
Someone on Mastodon linked me to this post called Optimizing sqlite for servers. My projects are small and I'm not so concerned about performance, but my main takeaways were:
- have a dedicated object for writing to the database, and run db.SetMaxOpenConns(1) on it (sketched below). I learned the hard way that if I don't do this then I'll get SQLITE_BUSY errors from two threads trying to write to the db at the same time.
- if I want to make reads faster, I could have 2 separate db objects, one for writing and one for reading
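Here's a rough sketch of that two-object setup, assuming database/sql and log are imported and an SQLite driver is registered (the driver name "sqlite" assumes modernc.org/sqlite; mattn/go-sqlite3 registers itself as "sqlite3" instead):

writeDB, err := sql.Open("sqlite", "data.db")
if err != nil {
    log.Fatal(err)
}
writeDB.SetMaxOpenConns(1) // a single writer, so writes can't race each other

readDB, err := sql.Open("sqlite", "data.db")
if err != nil {
    log.Fatal(err)
}
readDB.SetMaxOpenConns(4) // reads can happen concurrently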
There are more tips in that post that seem useful (like "COUNT queries are slow" and "Use STRICT tables"), but I haven't done those yet.
Also, sometimes if I have two tables where I know I'll never need to do a JOIN between them, I'll just put them in separate databases so that I can connect to them independently.
Go 1.19 introduced a way to set a GC memory limit
I run all of my Go projects in VMs with relatively little memory, like 256MB or 512MB. I ran into an issue where my application kept getting OOM killed and it was confusing: did I have a memory leak? What?
After some Googling, I realized that maybe I didn't have a memory leak, maybe I just needed to reconfigure the garbage collector! It turns out that by default (according to A Guide to the Go Garbage Collector), Go's garbage collector will let the application allocate memory up to 2x the current heap size.
Mess With DNS's base heap size is around 170MB and the amount of memory free on the VM is around 160MB right now, so if its memory doubles, it'll get OOM killed.
In Go 1.19, they added a way to tell Go "hey, if the application starts using this much memory, run a GC". So I set the GC memory limit to 250MB and it seems to have resulted in the application getting OOM killed less often:
export GOMEMLIMIT=250MiB
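You can also set the same limit from inside the program with the runtime/debug package (the Go 1.19 API that GOMEMLIMIT is wired up to), which might be handy if you'd rather not manage an environment variable:

import "runtime/debug"

func init() {
    // equivalent to GOMEMLIMIT=250MiB; it's a soft limit, so the GC
    // just works harder as the process approaches it
    debug.SetMemoryLimit(250 << 20)
}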
some reasons I like making websites in Go
I've been making tiny websites (like the nginx playground) in Go on and off for the last 4 years or so and it's really been working for me. I think I like it because:
- there's just 1 static binary; all I need to do to deploy it is copy the binary. If there are static files I can just embed them in the binary with embed (there's a small sketch of this after the list).
- there's a built-in webserver that's okay to use in production, so I don't need to configure WSGI or whatever to get it to work. I can just put it behind Caddy or run it on fly.io or whatever.
- Go's toolchain is very easy to install: I can just do apt-get install golang-go or whatever, and then a go build will build my project
- it feels like there's very little to remember to start sending HTTP responses: basically all there is are functions like Serve(w http.ResponseWriter, r *http.Request) which read the request and send a response. If I need to remember some detail of how exactly that's accomplished, I just have to read the function!
- also net/http is in the standard library, so you can start making websites without installing any libraries at all. I really appreciate this one.
- Go is a pretty systems-y language, so if I need to run an ioctl or something that's easy to do
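To illustrate the first two points, a complete tiny server with embedded static files can look something like this (a sketch, not code from any of my actual projects; it assumes a static/ directory exists next to the source at build time):

package main

import (
    "embed"
    "log"
    "net/http"
)

//go:embed static
var staticFiles embed.FS

func main() {
    mux := http.NewServeMux()
    // the embedded files are compiled into the binary, so deploying
    // really is just copying one file
    mux.Handle("/static/", http.FileServer(http.FS(staticFiles)))
    mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello!"))
    })
    log.Fatal(http.ListenAndServe(":8080", mux))
}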
In general everything about it feels like it makes projects easy to work on for 5 days, abandon for 2 years, and then get back into writing code without a lot of problems.
For contrast, I've tried to learn Rails a couple of times and I really want to love Rails: I've made a couple of toy websites in Rails and it's always felt like a really magical experience. But ultimately when I come back to those projects I can't remember how anything works and I just end up giving up. It feels easier to me to come back to my Go projects that are full of a lot of repetitive boilerplate, because at least I can read the code and figure out how it works.
things I havenāt figured out yet
some things I haven't done much of yet in Go:
- rendering HTML templates: usually my Go servers are just APIs and I make the frontend a single-page app with Vue. I've used html/template a lot in Hugo (which I've used for this blog for the last 8 years) but I'm still not sure how I feel about it.
- I've never made a real login system; usually my servers don't have users at all.
- I've never tried to implement CSRF
In general I'm not sure how to implement security-sensitive features, so I don't start projects which need login/CSRF/etc. I imagine this is where a framework would help.
it's cool to see the new features Go has been adding
Both of the Go features I mentioned in this post (GOMEMLIMIT and the routing) are new in the last couple of years and I didn't notice when they came out. It makes me think I should pay closer attention to the release notes for new Go versions.
I wrote about how much I love fish in this blog post from 2017 and, 7 years of using it every day later, I've found even more reasons to love it. So I thought I'd write a new post with both the old reasons I loved it and some new ones.
This came up today because I was trying to figure out why my terminal doesn't break anymore when I cat a binary to my terminal. The answer was "fish fixes the terminal!", and I just thought that was really nice.
1. no configuration
In 10 years of using fish I have never found a single thing I wanted to configure. It just works the way I want. My fish config file just has:
- environment variables
- aliases (alias ls eza, alias vim nvim, etc)
- the occasional direnv hook fish | source to integrate a tool like direnv
- a script I run to set up my terminal colours
I've been told that configuring things in fish is really easy if you ever do want to configure something though.
2. autosuggestions from my shell history
My absolute favourite thing about fish is that as I type, it'll automatically suggest (in light grey) a matching command that I ran recently. I can press the right arrow key to accept the completion, or keep typing to ignore it.
Here's what that looks like. In this example I just typed the "v" key and it guessed that I want to run the previous vim command again.
2.5 "smart" shell autosuggestions
One of my favourite subtle autocomplete features is how fish handles autocompleting commands that contain paths in them. For example, if I run:
$ ls blah.txt
that command will only be autocompleted in directories that contain blah.txt; it won't show up in a different directory. (here's a short comment about how it works)
As an example, if in this directory I type bash scripts/, it'll only suggest history commands including files that actually exist in my blog's scripts folder, and not the dozens of other irrelevant scripts/ commands I've run in other folders.
I didn't understand exactly how this worked until last week; it just felt like fish was magically able to suggest the right commands. It still feels a little like magic and I love it.
3. pasting multiline commands
If I copy and paste multiple lines, bash will run them all, like this:
[bork@grapefruit linux-playground (main)]$ echo hi
hi
[bork@grapefruit linux-playground (main)]$ touch blah
[bork@grapefruit linux-playground (main)]$ echo hi
hi
This is a bit alarming: what if I didn't actually want to run all those commands?
Fish will paste them all at a single prompt, so that I can press Enter if I actually want to run them. Much less scary.
bork@grapefruit ~/work/> echo hi
touch blah
echo hi
4. nice tab completion
If I run ls and press tab, it'll display all the filenames in a nice grid. I can use either Tab, Shift+Tab, or the arrow keys to navigate the grid.
Also, I can tab complete from the middle of a filename: if the filename starts with a weird character (or if it's just not very unique), I can type some characters from the middle and press tab.
Here's what the tab completion looks like:
bork@grapefruit ~/work/> ls
api/ blah.py fly.toml README.md
blah Dockerfile frontend/ test_websocket.sh
I honestly don't complete things other than filenames very much so I can't speak to that, but I've found the experience of tab completing filenames to be very good.
5. nice default prompt (including git integration)
Fish's default prompt includes everything I want:
- username
- hostname
- current folder
- git integration
- the exit status of the last command (if it failed)
Here's a screenshot with a few different variations on the default prompt, including if the last command was interrupted (the SIGINT) or failed.
6. nice history defaults
In bash, the maximum history size is 500 by default, presumably because computers used to be slow and not have a lot of disk space. Also, by default, commands don't get added to your history until you end your session. So if your computer crashes, you lose some history.
In fish:
- the default history size is 256,000 commands. I don't see any reason I'd ever need more.
- if you open a new tab, everything you've ever run (including commands in open sessions) is immediately available to you
- in an existing session, the history search will only include commands from the current session, plus everything that was in history at the time that you started the shell
I'm not sure how clearly I'm explaining how fish's history system works here, but it feels really good to me in practice. My impression is that it's implemented by continually appending commands to the history file, while fish only loads the history file once, on startup.
I'll mention here that if you want to have a fancier history system in another shell it might be worth checking out atuin or fzf.
7. press up arrow to search history
I also like fish's interface for searching history: for example if I want to edit my fish config file, I can just type:
$ config.fish
and then press the up arrow to go back to the last command that included config.fish. That'll complete to:
$ vim ~/.config/fish/config.fish
and I'm done. This isn't so different from using Ctrl+R in bash to search your history, but I think I like it a little better overall, maybe because Ctrl+R has some behaviours that I find confusing (for example you can end up accidentally editing your history, which I don't like).
8. the terminal doesn't break
I used to run into issues with bash where I'd accidentally cat a binary to the terminal, and it would break the terminal.
Every time fish displays a prompt, it'll try to fix up your terminal so that you don't end up in weird situations like this. I think this is some of the code in fish to prevent broken terminals.
Some things that it does are:
- turn on echo so that you can see the characters you type
- make sure that newlines work properly so that you don't get that weird staircase effect
- reset your terminal background colour, etc
I don't think I've run into any of these "my terminal is broken" issues in a very long time, and I actually didn't even realize that this was because of fish: I thought that things somehow magically just got better, or maybe I wasn't making as many mistakes. But I think it was mostly fish saving me from myself, and I really appreciate that.
9. Ctrl+S is disabled
Also related to terminals breaking: fish disables Ctrl+S (which freezes your terminal, and then you need to remember to press Ctrl+Q to unfreeze it). It's a feature that I've never wanted and I'm happy to not have it.
Apparently you can disable Ctrl+S in other shells with stty -ixon.
10. nice syntax highlighting
By default commands that don't exist are highlighted in red, like this.
11. easier loops
I find the loop syntax in fish a lot easier to type than the bash syntax. It looks like this:
for i in *.yaml
echo $i
end
Also it'll add indentation in your loops, which is nice.
12. easier multiline editing
Related to loops: you can edit multiline commands much more easily than in bash (just use the arrow keys to navigate the multiline command!). Also when you use the up arrow to get a multiline command from your history, it'll show you the whole command the exact same way you typed it, instead of squishing it all onto one line like bash does:
$ bash
$ for i in *.png
> do
> echo $i
> done
$ # press up arrow
$ for i in *.png; do echo $i; done
13. Ctrl+left arrow
This might just be me, but I really appreciate that fish has the Ctrl+left arrow / Ctrl+right arrow keyboard shortcut for moving between
words when writing a command.
I'm honestly a bit confused about where this keyboard shortcut is coming from (the only documented keyboard shortcut for this I can find in fish is Alt+left arrow / Alt+right arrow, which seems to do the same thing), but I'm pretty sure this is a fish shortcut.
A couple of notes about getting this shortcut to work / where it comes from:
- one person said they needed to switch their terminal emulator from the "Linux console" keybindings to "Default (XFree 4)" to get it to work in fish
- on Mac OS, Ctrl+left arrow switches workspaces by default, so I had to turn that off.
- Also apparently Ubuntu configures libreadline in /etc/inputrc to make Ctrl+left/right arrow go back/forward a word, so it'll work in bash on Ubuntu and maybe other Linux distros too. Here's a stack overflow question talking about that
a downside: not everything has a fish integration
Sometimes tools don't have instructions for integrating them with fish. That's annoying, but:
- I've found this has gotten better over the last 10 years as fish has gotten more popular. For example Python's virtualenv has had a fish integration for a long time now.
- If I need to run a POSIX shell command real quick, I can always just run bash or zsh
- I've gotten much better over the years at translating simple commands to fish syntax when I need to
My biggest day-to-day annoyance is probably that, for whatever reason, I'm still not used to fish's syntax for setting environment variables: I get confused about set vs set -x.
another downside: fish_add_path
fish has a function called fish_add_path that you can run to add a directory
to your PATH like this:
fish_add_path /some/directory
I love the idea of it and I used to use it all the time, but I've stopped using it for two reasons:
- Sometimes fish_add_path will update the PATH for every session in the future (with a "universal variable") and sometimes it will update the PATH just for the current session. It's hard for me to tell which one it will do: in theory the docs explain this but I could not understand them.
- If you ever need to remove the directory from your PATH a few weeks or months later because maybe you made a mistake, that's also kind of hard to do (there are instructions in the comments of this github issue though).
Instead I just update my PATH like this, similarly to how I'd do it in bash:
set PATH $PATH /some/directory/bin
on POSIX compatibility
When I started using fish, you couldn't do things like cmd1 && cmd2: it would complain "no, you need to run cmd1; and cmd2" instead.
It seems like over the years fish has started accepting a little more POSIX-style syntax than it used to, like:
- cmd1 && cmd2
- export a=b to set an environment variable (though this seems a bit limited: you can't do export PATH=$PATH:/whatever, so I think it's probably better to learn set instead)
on fish as a default shell
Changing my default shell to fish is always a little annoying; I occasionally get myself into a situation where:
- I install fish somewhere like maybe /home/bork/.nix-stuff/bin/fish
- I add the new fish location to /etc/shells as an allowed shell
- I change my shell with chsh
- at some point months/years later I reinstall fish in a different location for some reason and remove the old one
- oh no!!! I have no valid shell! I can't open a new terminal tab anymore!
This has never been a major issue because I always have a terminal open somewhere where I can fix the problem and rescue myself, but it's a bit alarming.
If you don't want to use chsh to change your shell to fish (which is very reasonable, maybe I shouldn't be doing that), the Arch wiki page has a couple of good suggestions: either configure your terminal emulator to run fish, or add an exec fish to your .bashrc.
I've never really learned the scripting language
Other than occasionally writing a for loop interactively on the command line, I've never really learned the fish scripting language. I still do all of my shell scripting in bash.
I don't think I've ever written a fish function or if statement.
it seems like fish is getting pretty popular
I ran a highly unscientific poll on Mastodon asking people what shell they use interactively. The results were (of 2600 responses):
- 46% bash
- 49% zsh
- 16% fish
- 5% other
I think 16% for fish is pretty remarkable, since (as far as I know) there isn't any system where fish is the default shell, and my sense is that it's very common to just stick to whatever your system's default shell is.
It feels like a big achievement for the fish project, even if maybe my Mastodon followers are more likely than the average shell user to use fish for some reason.
who might fish be right for?
Fish definitely isn't for everyone. I think I like it because:
- I really dislike configuring my shell (and honestly my dev environment in general); I want things to "just work" with the default settings
- fish's defaults feel good to me
- I don't spend that much time logged into random servers using other shells, so there's not too much context switching
- I liked its features so much that I was willing to relearn how to do a few "basic" shell things, like using parentheses (seq 1 10) to run a command instead of backticks, or using set instead of export
Maybe you're also a person who would like fish! I hope a few more of the people who fish is for can find it, because I spend so much of my time in the terminal, and it's made that time much more pleasant.
I just did a massive spring cleaning of one of my servers, trying to clean up what has become quite the mess of clutter. For every website on the server, I either:
- Documented what it is, who is using it, and what version of language and framework it uses
- Archived it as static HTML flat files
- Moved the source code from GitHub to a private git server
- Deleted the files
It feels good to get rid of old code, and to turn previously dynamic sites (with all of the risk they come with) into plain HTML.
This is also making me seriously reconsider the value of spinning up any new projects. Several of these are now 10 years old, still churning along fine, but difficult to do any maintenance on because of versions and dependencies. For example:
- indieauth.com - this has been on the chopping block for years, but I haven't managed to build a replacement yet, and is still used by a lot of people
- webmention.io - this is a pretty popular service, and I don't want to shut it down, but there's a lot of problems with how it's currently built and no easy way to make changes
- switchboard.p3k.io - this is a public WebSub (PubSubHubbub) hub, like Superfeedr, and has weirdly gained a lot of popularity in the podcast feed space in the last few years
One that I'm particularly happy with, despite it being an ugly pile of PHP, is oauth.net. I inherited this site in 2012, and it hasn't needed any framework upgrades since it's just using PHP templates. My ham radio website w7apk.com is similarly a small amount of templated PHP, and it is low stress to maintain, and actually fun to quickly jot some notes down when I want. I like not having to go through the whole ceremony of setting up a dev environment, installing dependencies, upgrading things to the latest version, checking for backwards incompatible changes, git commit, deploy, etc. I can just sftp some changes up to the server and they're live.
Some questions for myself for the future, before starting a new project:
- Could this actually just be a tag page on my website, like #100DaysOfMusic or #BikeTheEclipse?
- If it really needs to be a new project, then:
- Can I create it in PHP without using any frameworks or libraries? Plain PHP ages far better than pulling in any dependencies which inevitably stop working with a version 2-3 EOL cycles back, so every library brought in means signing up for annual maintenance of the whole project. Frameworks can save time in the short term, but have a huge cost in the long term.
- Is it possible to avoid using a database? Databases aren't inherently bad, but using one does make the project slightly more fragile, since it requires plans for migrations and backups.
- If a database is required, is it possible to create it in a way that does not result in ever-growing storage needs?
- Is this going to store data or be a service that other people are going to use? If so, plan on a registration form so that I have a way to contact people eventually when I need to change it or shut it down.
- If I've got this far with the questions, am I really ready to commit to supporting this code base for the next 10 years?
One project I've been committed to maintaining and doing regular (ok fine, "semi-regular") updates for is Meetable, the open source events website that I run on a few domains:
I started this project in October 2019, excited for all the IndieWebCamps we were going to run in 2020. Somehow that is already 5 years ago now. Well that didn't exactly pan out, but I did quickly pivot it to add a bunch of features that are helpful for virtual events, so it worked out ok in the end. We've continued to use it for posting IndieWeb events, and I also run an instance for two IETF working groups. I'd love to see more instances pop up, I've only encountered one or two other ones in the wild. I even spent a significant amount of time on the onboarding flow so that it's relatively easy to install and configure. I even added passkeys for the admin login so you don't need any external dependencies on auth providers. It's a cool project if I may say so myself.
Anyway, this is not a particularly well thought out blog post, I just wanted to get my thoughts down after spending all day combing through the filesystem of my web server and uncovering a lot of ancient history.
About 3 years ago, I announced Mess With DNS in this blog post, a playground where you can learn how DNS works by messing around and creating records.
I wasn't very careful with the DNS implementation though (to quote the release blog post: "following the DNS RFCs? not exactly"), and people started reporting problems, and eventually I decided that I wanted to fix them.
the problems
Some of the problems people have reported were:
- domain names with underscores weren't allowed, even though they should be
- If there was a CNAME record for a domain name, it allowed you to create other records for that domain name, even though it shouldn't have
- you could create 2 different CNAME records for the same domain name, which shouldn't be allowed
- no support for the SVCB or HTTPS record types, which seemed a little complex to implement
- no support for upgrading from UDP to TCP for big responses
And there are certainly more issues that nobody got around to reporting, for example that if you added an NS record for a subdomain to delegate it, Mess With DNS wouldn't handle the delegation properly.
the solution: PowerDNS
I wasn't sure how to fix these problems for a long time: technically I could have started addressing them individually, but it felt like there were a million edge cases and I'd never get there.
But then one day I was chatting with someone else who was working on a DNS server and they said they were using PowerDNS: an open source DNS server with an HTTP API!
This seemed like an obvious solution to my problems: I could just swap out my own crappy DNS implementation for PowerDNS.
There were a couple of challenges I ran into when setting up PowerDNS that I'll talk about here. I really don't do a lot of web development, and I think I've never built a website that depends on a relatively complex API before, so it was a bit of a learning experience.
challenge 1: getting every query made to the DNS server
One of the main things Mess With DNS does is give you a live view of every DNS query it receives for your subdomain, using a websocket. To make this work, it needs to intercept every DNS query before it gets sent to the PowerDNS server:
There were 2 options I could think of for how to intercept the DNS queries:
- dnstap: dnsdist (a DNS load balancer from the PowerDNS project) has support for logging all DNS queries it receives using dnstap, so I could put dnsdist in front of PowerDNS and then log queries that way
- Have my Go server listen on port 53 and proxy the queries myself
I originally implemented option #1, but for some reason there was a 1 second delay before every query got logged. I couldn't figure out why, so I implemented my own very simple proxy instead.
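The proxy is roughly this shape: listen on port 53, log each query, forward it upstream, and relay the response back. Here's a simplified sketch, not the real code (the upstream address is made up, it needs privileges to bind port 53, and a real version also needs TCP support and better error handling):

package main

import (
    "log"
    "net"
    "time"
)

func main() {
    conn, err := net.ListenPacket("udp", ":53")
    if err != nil {
        log.Fatal(err)
    }
    for {
        buf := make([]byte, 4096)
        n, client, err := conn.ReadFrom(buf)
        if err != nil {
            continue
        }
        go func(query []byte, client net.Addr) {
            // this is where Mess With DNS would also stream the query
            // out to the websocket for the live view
            upstream, err := net.Dial("udp", "127.0.0.1:5300") // assumed PowerDNS address
            if err != nil {
                return
            }
            defer upstream.Close()
            upstream.Write(query)
            resp := make([]byte, 4096)
            upstream.SetReadDeadline(time.Now().Add(2 * time.Second))
            m, err := upstream.Read(resp)
            if err != nil {
                return
            }
            conn.WriteTo(resp[:m], client)
        }(buf[:n], client)
    }
}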
challenge 2: should the frontend have direct access to the PowerDNS API?
The frontend used to have a lot of DNS logic in it: it converted emoji domain names to ASCII using punycode, had a lookup table to convert numeric DNS query types (like 1) to their human-readable names (like A), did a little bit of validation, and more.
Originally I considered keeping this pattern and just giving the frontend (more or less) direct access to the PowerDNS API to create and delete records, but writing even more complex code in Javascript didn't feel that appealing to me: I don't really know how to write tests in Javascript and it seemed like it wouldn't end well.
So I decided to take all of the DNS logic out of the frontend and write a new DNS API for managing records, shaped something like this:
- GET /records
- DELETE /records/<ID>
- DELETE /records/ (delete all records for a user)
- POST /records/ (create record)
- POST /records/<ID> (update record)
This meant that I could actually write tests for my code, since the backend is in Go and I do know how to write tests in Go.
what I learned: it's okay for an API to duplicate information
I had this idea that APIs shouldn't return duplicate information: for example if I get a DNS record, it should only include a given piece of information once.
But I ran into a problem with that idea when displaying MX records: an MX record has 2 fields, "preference" and "mail server". And I needed to display that information in 2 different ways on the frontend:
- In a form, where "Preference" and "Mail Server" are 2 different form fields (like 10 and mail.example.com)
- In a summary view, where I wanted to just show the record (10 mail.example.com)
This is kind of a small problem, but it came up in a few different places.
I talked to my friend Marco Rogers about this, and based on some advice from him I realized that I could return the same information in the API in 2 different ways! Then the frontend just has to display it. So I started just returning duplicate information in the API, something like this:
{
values: {'Preference': 10, 'Server': 'mail.example.com'},
content: '10 mail.example.com',
...
}
I ended up using this pattern in a couple of other places where I needed to display the same information in 2 different ways and it was SO much easier.
I think what I learned from this is that if I'm making an API that isn't intended for external use (there are no users of this API other than the frontend!), I can tailor it very specifically to the frontend's needs and that's okay.
challenge 3: what's a record's ID?
In Mess With DNS (and I think in most DNS user interfaces!), you create, add, and delete records.
But that's not how the PowerDNS API works. In PowerDNS, you create a zone, which is made of record sets. Records don't have any ID in the API at all.
I ended up solving this by generating a fake ID for each record, which is made of:
- its name
- its type
- and its content (base64-encoded)
For example one record's ID is brooch225.messwithdns.com.|NS|bnMxLm1lc3N3aXRoZG5zLmNvbS4=
Then I can search through the zone and find the appropriate record to update it.
This means that if you update a record then its ID will change, which isn't usually what I want in an ID, but that seems fine.
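As a sketch, generating that kind of ID could look something like this (my guess at the shape; the function name is made up, and it assumes encoding/base64 and fmt are imported):

// build a synthetic ID out of the fields PowerDNS does give us:
// the record's name, its type, and its base64-encoded content
func makeRecordID(name, recordType, content string) string {
    encoded := base64.StdEncoding.EncodeToString([]byte(content))
    return fmt.Sprintf("%s|%s|%s", name, recordType, encoded)
}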
challenge 4: making clear error messages
I think the error messages that the PowerDNS API returns aren't really intended to be shown to end users, for example:
- Name 'new\032site.island358.messwithdns.com.' contains unsupported characters (this error encodes the space as \032, which is a bit disorienting if you don't know that the space character is 32 in ASCII)
- RRset test.pear5.messwithdns.com. IN CNAME: Conflicts with pre-existing RRset (this talks about RRsets, which aren't a concept that the Mess With DNS UI has at all)
- Record orange.beryl5.messwithdns.com./A '1.2.3.4$': Parsing record content (try 'pdnsutil check-zone'): unable to parse IP address, strange character: $ (mentions "pdnsutil", a utility which Mess With DNS's users don't have access to in this context)
I ended up handling this in two ways:
- Do some initial basic validation of values that users enter (like IP addresses), so I can just return errors like Invalid IPv4 address: "1.2.3.4$"
- If that goes well, send the request to PowerDNS, and if we get an error back, do some hacky translation of those messages to make them clearer.
Sometimes users will still get errors from PowerDNS directly, but I added some logging of all the errors that users see, so hopefully I can review them and add extra translations if there are other common errors that come up.
I think what I learned from this is that if I'm building a user-facing application on top of an API, I need to be pretty thoughtful about how I resurface those errors to users.
challenge 5: setting up SQLite
Previously Mess With DNS was using a Postgres database. This was problematic because I only gave the Postgres machine 256MB of RAM, which meant that the database got OOM killed almost every single day. I never really worked out exactly why it got OOM killed every day, but that's how it was. I spent some time trying to tune Postgres' memory usage by setting the max connections / work-mem / maintenance-work-mem and it helped a bit, but didn't solve the problem.
So for this refactor I decided to use SQLite instead, because the website doesn't really get that much traffic. There are some choices involved with using SQLite, and I decided to:
- Run db.SetMaxOpenConns(1) to make sure that we only open 1 connection to the database at a time, to prevent SQLITE_BUSY errors from two threads trying to access the database at the same time (just setting WAL mode didn't work; there's a sketch of this after the list)
- Use separate databases for each of the 3 tables (users, records, and requests) to reduce contention. This maybe isn't really necessary, but there was no reason I needed the tables to be in the same database, so I figured I'd set up separate databases to be safe.
- Use the cgo-free modernc.org/sqlite, which translates SQLite's source code to Go. I might switch to a more "normal" sqlite implementation instead at some point and use cgo though. I think the main reason I prefer to avoid cgo is that cgo has landed me with difficult-to-debug errors in the past.
- use WAL mode
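Here's roughly how those choices fit together in code. This is a sketch: I believe modernc.org/sqlite registers itself under the driver name "sqlite" and lets you set pragmas in the DSN with _pragma, but double-check that against the driver's docs:

import (
    "database/sql"

    _ "modernc.org/sqlite" // registers the "sqlite" driver
)

func openDB(path string) (*sql.DB, error) {
    // the _pragma query parameter asks the driver to run a PRAGMA on
    // every new connection; this one turns on WAL mode
    db, err := sql.Open("sqlite", "file:"+path+"?_pragma=journal_mode(WAL)")
    if err != nil {
        return nil, err
    }
    db.SetMaxOpenConns(1) // one connection at a time, to avoid SQLITE_BUSY
    return db, nil
}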
I still haven't set up backups, though I don't think my Postgres database had backups either. I think I'm unlikely to use litestream for backups: Mess With DNS is very far from a critical application, and I think daily backups that I could recover from in case of a disaster are more than good enough.
challenge 6: upgrading Vue & managing forms
This has nothing to do with PowerDNS but I decided to upgrade Vue.js from version 2 to 3 as part of this refresh. The main problem with that is that the form validation library I was using (FormKit) completely changed its API between Vue 2 and Vue 3, so I decided to just stop using it instead of learning the new API.
I ended up switching to some form validation tools that are built into the browser, like required and oninvalid (here's the code).
I think it could use some improvement; I still don't understand forms very well.
challenge 7: managing state in the frontend
This also has nothing to do with PowerDNS, but when modifying the frontend I realized that my state management in the frontend was a mess: in every place where I made an API request to the backend, I had to remember to add a "refresh records" call afterwards to update the state, and I wasn't always consistent about it.
With some more advice from Marco, I ended up implementing a single global state management store which stores all the state for the application, and which lets me create/update/delete records.
Then my components can just call store.createRecord(record), and the store
will automatically resynchronize all of the state as needed.
challenge 8: sequencing the project
This project ended up having several steps because I reworked the whole integration between the frontend and the backend. I ended up splitting it into a few different phases:
- Upgrade Vue from v2 to v3
- Make the state management store
- Implement a different backend API, move a lot of DNS logic out of the frontend, and add tests for the backend
- Integrate PowerDNS
I made sure that the website was (more or less) 100% working and then deployed it in between phases, so that the amount of changes I was managing at a time stayed somewhat under control.
the new website is up now!
I released the upgraded website a few days ago and it seems to work! The PowerDNS API has been great to work on top of, and I'm relieved that there's a whole class of problems that I now don't have to think about at all, other than potentially trying to make the error messages from PowerDNS a little clearer. Using PowerDNS has fixed a lot of the DNS issues that folks have reported in the last few years and it feels great.
If you run into problems with the new Mess With DNS I'd love to hear about them here.
I've been writing Go pretty casually for years: the backends for all of my playgrounds (nginx, dns, memory, more DNS) are written in Go, but many of those projects are just a few hundred lines and I don't come back to those codebases much.
I thought I more or less understood the basics of the language, but this week Iāve been writing a lot more Go than usual while working on some upgrades to Mess with DNS, and ran into a bug that revealed I was missing a very basic concept!
Then I posted about this on Mastodon and someone linked me to this very cool site (and book) called 100 Go Mistakes and How To Avoid Them by Teiva Harsanyi. It just came out in 2022 so it's relatively new.
I decided to read through the site to see what else I was missing, and found a couple of other misconceptions I had about Go. I'll talk about some of the mistakes that jumped out to me the most, but really the whole 100 Go Mistakes site is great and I'd recommend reading it.
Here's the initial mistake that started me on this journey:
mistake 1: not understanding that structs are copied on assignment
Let's say we have a struct:
type Thing struct {
    Name string
}
and this code:
thing := Thing{"record"}
other_thing := thing
other_thing.Name = "banana"
fmt.Println(thing)
This prints "record" and not "banana" (play.go.dev link), because thing is copied when you assign it to other_thing.
the problem this caused me: ranges
The bug I spent 2 hours of my life debugging last week was effectively this code (play.go.dev link):
type Thing struct {
    Name string
}

func findThing(things []Thing, name string) *Thing {
    for _, thing := range things {
        if thing.Name == name {
            return &thing
        }
    }
    return nil
}

func main() {
    things := []Thing{Thing{"record"}, Thing{"banana"}}
    thing := findThing(things, "record")
    thing.Name = "gramaphone"
    fmt.Println(things)
}
This prints out [{record} {banana}]: because findThing returned a copy, we didn't change the name in the original array.
This mistake is #30 in 100 Go Mistakes.
I fixed the bug by changing it to something like this (play.go.dev link), which returns a reference to the item in the array we're looking for instead of a copy.
func findThing(things []Thing, name string) *Thing {
    for i := range things {
        if things[i].Name == name {
            return &things[i]
        }
    }
    return nil
}
why didn't I realize this?
When I learned that I was mistaken about how assignment worked in Go I was really taken aback: it's such a basic fact about how the language works! If I was wrong about that then what ELSE am I wrong about in Go????
My best guess for what happened is:
- I've heard for my whole life that when you define a function, you need to think about whether its arguments are passed by reference or by value
- So I'd thought about this in Go, and I knew that if you pass a struct as a value to a function, it gets copied: if you want to pass a reference then you have to pass a pointer
- But somehow it never occurred to me that you need to think about the same thing for assignments, perhaps because in most of the other languages I use (Python, JS, Java) I think everything is a reference anyway. Except for in Rust, where you do have values that you make copies of, but I think most of the time I had to run .clone() explicitly. (though apparently structs will be automatically copied on assignment if the struct implements the Copy trait)
- Also obviously I just don't write that much Go so I guess it's never come up.
mistake 2: side effects appending slices (#25)
When you subset a slice with x[2:3], the original slice and the sub-slice
share the same backing array, so if you append to the new slice, it can
unintentionally change the old slice:
For example, this code prints [1 2 3 555 5] (code on play.go.dev)
x := []int{1, 2, 3, 4, 5}
y := x[2:3]
y = append(y, 555)
fmt.Println(x)
I don't think this has ever actually happened to me, but it's alarming and I'm very happy to know about it.
Apparently you can avoid this problem by changing y := x[2:3] to y := x[2:3:3], which restricts the new slice's capacity so that appending to it will re-allocate a new slice. Here's some code on play.go.dev that does that.
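Inline, that looks like this (same numbers as the example above; this is standard slice semantics, so it should behave the same anywhere):

x := []int{1, 2, 3, 4, 5}
y := x[2:3:3]      // len(y) == 1, cap(y) == 1
y = append(y, 555) // exceeds cap, so append copies to a new backing array
fmt.Println(x)     // prints [1 2 3 4 5]: x is unchanged this time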
mistake 3: not understanding the different types of method receivers (#42)
This one isn't a "mistake" exactly, but it's been a source of confusion for me, and it's pretty simple so I'm glad to have it cleared up.
In Go you can declare methods in 2 different ways:
- func (t Thing) Function() (a "value receiver")
- func (t *Thing) Function() (a "pointer receiver")
My understanding now is that basically:
- If you want the method to mutate the struct t, you need a pointer receiver.
- If you want to make sure the method doesn't mutate the struct t, use a value receiver. (there's a tiny example of the difference below)
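Here's a little example I put together to convince myself (my own toy code, not from the book; it assumes fmt is imported):

type Counter struct{ n int }

func (c Counter) IncByValue()    { c.n++ } // increments a copy; the caller's Counter is untouched
func (c *Counter) IncByPointer() { c.n++ } // increments the caller's Counter

func main() {
    var c Counter
    c.IncByValue()
    fmt.Println(c.n) // 0
    c.IncByPointer() // Go automatically takes &c here
    fmt.Println(c.n) // 1
}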
Explanation #42 has a bunch of other interesting details though. There's definitely still something I'm missing about value vs pointer receivers (I got a compile error related to them a couple of times in the last week that I still don't understand), but hopefully I'll run into that error again soon and I can figure it out.
more interesting things I noticed
Some more notes from 100 Go Mistakes:
- apparently you can name the outputs of your function (#43), though that can have issues (#44) and I'm not sure I want to
- apparently you can put tests in a different package (#90) to ensure that you only use the package's public interfaces, which seems really useful
- there are lots of notes about how to use contexts, channels, goroutines, mutexes, sync.WaitGroup, etc. I'm sure I have something to learn about all of those, but today is not the day I'm going to learn them.
Also there are some things that have tripped me up in the past, like:
- forgetting the return statement after replying to an HTTP request (#80)
- not realizing the httptest package exists (#88)
this "100 common mistakes" format is great
I really appreciated this "100 common mistakes" format: it made it really easy for me to skim through the mistakes and very quickly mentally classify them into:
- yep, I know that
- not interested in that one right now
- WOW WAIT I DID NOT KNOW THAT, THAT IS VERY USEFUL!!!!
It looks like "100 Common Mistakes" is a series of books from Manning, and they also have "100 Java Mistakes" and an upcoming "100 SQL Server Mistakes".
Also I enjoyed what I've read of Effective Python by Brett Slatkin, which has a similar "here are a bunch of short Python style tips" structure where you can quickly skim it and take what's useful to you. There's also Effective C++, Effective Java, and probably more.
some other Go resources
other resources I've appreciated:
- Go by example for basic syntax
- go.dev/play
- obviously https://pkg.go.dev for documentation about literally everything
- staticcheck seems like a useful linter: for example I just started using it to tell me when I've forgotten to handle an error
- apparently golangci-lint includes a bunch of different linters
The other day I asked what folks on Mastodon find confusing about working in the terminal, and one thing that stood out to me was "editing a command you already typed in".
This really resonated with me: even though entering some text and editing it is a very "basic" task, it took me maybe 15 years of using the terminal every single day to get used to using Ctrl+A to go to the beginning of the line (or Ctrl+E for the end; I think I used Home/End instead).
So let's talk about why entering text might be hard! I'll also share a few tips that I wish I'd learned earlier.
it's very inconsistent between programs
A big part of what makes entering text in the terminal hard is the inconsistency between how different programs handle entering text. For example:
- some programs (cat, nc, git commit --interactive, etc) don't support using arrow keys at all: if you press arrow keys, you'll just see ^[[D^[[D^[[C^[[C^
- many programs (like irb, python3 on a Linux machine, and many many more) use the readline library, which gives you a lot of basic functionality (history, arrow keys, etc)
- some programs (like /usr/bin/python3 on my Mac) do support very basic features like arrow keys, but not other features like Ctrl+left or reverse searching with Ctrl+R
- some programs (like the fish shell or ipython3 or micro or vim) have their own fancy system for accepting input which is totally custom
So there's a lot of variation! Let's talk about each of those a little more.
mode 1: the baseline
First, there's "the baseline": what happens if a program just accepts text by calling fgets() or whatever, and doing absolutely nothing else to provide a nicer experience. Here's what using these tools typically looks like for me: if I start the version of dash installed on my machine (a pretty minimal shell) and press the left arrow key, it just prints ^[[D to the terminal.
$ ls l-^[[D^[[D^[[D
At first it doesn't seem like all of these "baseline" tools have much in common, but there are actually a few features that you get for free just from your terminal, without the program needing to do anything special at all.
The things you get for free are:
- typing in text, obviously
- backspace
- Ctrl+W, to delete the previous word
- Ctrl+U, to delete the whole line
- a few other things unrelated to text editing (like Ctrl+C to interrupt the process, Ctrl+Z to suspend, etc)
This is not great, but it means that if you want to delete a word you generally can do it with Ctrl+W instead of pressing backspace 15 times, even if you're in an environment which is offering you absolutely zero features.
You can get a list of all the ctrl codes that your terminal supports with stty -a.
mode 2: tools that use readline
The next group is tools that use readline! Readline is a GNU library to make entering text more pleasant, and it's very widely used.
My favourite readline keyboard shortcuts are:
- Ctrl+E (or End) to go to the end of the line
- Ctrl+A (or Home) to go to the beginning of the line
- Ctrl+left/right arrow to go back/forward 1 word
- up arrow to go back to the previous command
- Ctrl+R to search your history
And you can use Ctrl+W / Ctrl+U from the "baseline" list, though Ctrl+U deletes from the cursor to the beginning of the line instead of deleting the whole line. I think Ctrl+W might also have a slightly different definition of what a "word" is.
There are a lot more (here's a full list), but those are the only ones that I personally use.
The bash shell is probably the most famous readline user (when you use Ctrl+R to search your history in bash, that feature actually comes from readline), but there are TONS of programs that use it: for example psql, irb, python3, etc.
tip: you can make ANYTHING use readline with rlwrap
One of my absolute favourite things is that if you have a program like nc
without readline support, you can just run rlwrap nc to turn it into a
program with readline support!
This is incredible and makes a lot of tools that are borderline unusable MUCH more pleasant to use. You can even apparently set up rlwrap to include your own custom autocompletions, though I've never tried that.
some reasons tools might not use readline
Some reasons tools might not use readline include:
- the program is very simple (like cat or nc) and maybe the maintainers don't want to bring in a relatively large dependency
- license reasons, if the program's license is not GPL-compatible: readline is GPL-licensed, not LGPL
- only a very small part of the program is interactive, and maybe readline support isn't seen as important. For example git has a few interactive features (like git add -p), but not very many, and usually you're just typing a single character like y or n; when you really need to type something significant in git, it'll drop you into a text editor instead.
For example idris2 says they don't use readline to keep dependencies minimal, and suggest using rlwrap to get better interactive features.
how to know if you're using readline
The simplest test I can think of is to press Ctrl+R, and if you see:
(reverse-i-search)`':
then you're probably using readline. This obviously isn't a guarantee (some other library could use the term reverse-i-search too!), but I don't know of another system that uses that specific term to refer to searching history.
the readline keybindings come from Emacs
Because I'm a vim user, it took me a very long time to understand where these keybindings come from (why Ctrl+A to go to the beginning of a line??? so weird!)
My understanding is these keybindings actually come from Emacs: Ctrl+A and Ctrl+E do the same thing in Emacs as they do in Readline, and I assume the other keyboard shortcuts mostly do as well, though I tried out Ctrl+W and Ctrl+U in Emacs and they don't do the same thing as they do in the terminal, so I guess there are some differences.
There's some more history of the Readline project here.
mode 3: another input library (like libedit)
On my Mac laptop, /usr/bin/python3 is in a weird middle ground where it
supports some readline features (for example the arrow keys), but not the
other ones. For example when I press Ctrl+left arrow, it prints out ;5D,
like this:
$ python3
>>> importt subprocess;5D
Folks on Mastodon helped me figure out that this is because in the default
Python install on Mac OS, the Python readline module is actually backed by
libedit, which is a similar library which has fewer features, presumably
because Readline is GPL licensed.
Here's how I was eventually able to figure out that Python was using libedit on my system:
$ python3 -c "import readline; print(readline.__doc__)"
Importing this module enables command line editing using libedit readline.
Generally Python uses readline though if you install it on Linux or through Homebrew. It's just that the specific version that Apple includes on their systems doesn't have readline. Also Python 3.13 is going to remove the readline dependency in favour of a custom library, so "Python uses readline" won't be true in the future.
I assume that there are more programs on my Mac that use libedit but I haven't looked into it.
mode 4: something custom
The last group of programs is programs that have their own custom (and sometimes much fancier!) system for editing text. This includes:
- most terminal text editors (nano, micro, vim, emacs, etc)
- most terminal text editors (nano, micro, vim, emacs, etc)
- some shells (like fish), for example it seems like fish supports Ctrl+Z for undo when typing in a command. Zsh's line editor is called zle.
- some REPLs (like ipython), for example IPython uses the prompt_toolkit library instead of readline
- lots of other programs (like atuin)
Some features you might see are:
- better autocomplete which is more customized to the tool
- nicer history management (for example with syntax highlighting) than the default you get from readline
- more keyboard shortcuts
custom input systems are often readline-inspired
I went looking at how Atuin (a wonderful tool for searching your shell history that I started using recently) handles text input. Looking at the code and some of the discussion around it, their implementation is custom but it's inspired by readline, which makes sense to me: a lot of users are used to those keybindings, and it's convenient for them to work even though atuin doesn't use readline.
prompt_toolkit (the library IPython uses) is similar: it actually supports a lot of options (including vi-like keybindings), but the default is to support the readline-style keybindings.
This is like how you see a lot of programs which support very basic vim keybindings (like j for down and k for up). For example Fastmail supports j and k even though most of its other keybindings don't have much relationship to vim.
I assume that most "readline-inspired" custom input systems have various subtle incompatibilities with readline, but this doesn't really bother me at all personally, because I'm extremely ignorant of most of readline's features. I only use maybe 5 keyboard shortcuts, so as long as they support the 5 basic commands I know (which they always do!) I feel pretty comfortable. And usually these custom systems have much better autocomplete than you'd get from just using readline, so generally I prefer them over readline.
lots of shells support vi keybindings
Bash, zsh, and fish all have a "vi mode" for entering text. In a very unscientific poll I ran on Mastodon, 12% of people said they use it, so it seems pretty popular.
Readline also has a "vi mode" (which is how Bash's support for it works), so by extension lots of other programs have it too.
I've always thought that vi mode seems really cool, but for some reason, even though I'm a vim user, it's never stuck for me.
understanding what situation you're in really helps
I've spent a lot of my life being confused about why a command line application I was using wasn't behaving the way I wanted, and it feels good to be able to more or less understand what's going on.
I think this is roughly my mental flowchart when I'm entering text at a command line prompt:
- Do the arrow keys not work? Probably there's no input system at all, but at least I can use Ctrl+W and Ctrl+U, and I can rlwrap the tool if I want more features.
- Does Ctrl+R print reverse-i-search? Probably it's readline, so I can use all of the readline shortcuts I'm used to, and I know I can get some basic history and press up arrow to get the previous command.
- Does Ctrl+R do something else? This is probably some custom input library: it'll probably act more or less like readline, and I can check the documentation if I really want to know how it works.
Being able to diagnose what's going on like this makes the command line feel more predictable and less chaotic.
some things this post left out
There are lots more complications related to entering text that we didn't talk about at all here, like:
- issues related to ssh / tmux / etc
- the TERM environment variable
- how different terminals (gnome terminal, iTerm, xterm, etc) have different kinds of support for copying/pasting text
- unicode
- probably a lot more
![described UST as a stablecoin that would always hold its $1 peg, and those protocols as safe, reliable way to earn 19-40% yield. It didn't feel like gambling; it felt like the answer we had been praying for. We sold everything at discounted prices: the house, the business, the cars. We scraped together just over $190,000, every dollar of seventeen years of sacrifice, and in late April 2022 I converted it all to UST and locked it into Anchor and Mirror, trusting the promises that had been repeated for months. Just two weeks later, on May 9, 2022, I watched UST fall off its peg. In panic, I almost sold everything to limit the damage, but then I saw Mr. Kwon's tweet: "Deploying more capital - steady lads.". The price started recovering that day. I trusted him again and decided to keep the UST locked. What followed were two weeks of pure terror. The Terra blockchain halted. Mirror Protocol liquidity vanished. I couldn't withdraw a single cent. When the system finally let me out on May 23, UST was trading below $0.07. Our $190,000 had become less than $13,000. Seventeen years of our lives, gone in two weeks. People say you should never risk money you can't afford to lose. But this never felt like risk. The founder of the project had called it "too big to fail," boasted that the Luna Foundation Guard would protect every dollar. To a desperate father trying to feed two families in a foreign country, it sounded like the safest investment ever. The money was only the beginning of the catastrophe. Less than two weeks after the collapse, my wife packed her bags. She could no longer trust that I could keep her and her mother financially safe. A few months later the divorce papers arrived. Our teenage sons chose to stay with me and my parents, to hold what was left of the family together. College was never mentioned again. Both boys quit school and now work as car mechanics to put food on the table. Today I am [redacted]. I live in [redacted] with my elderly parents, taking whatever cash jobs I can find, waiting for a war in my home country to end. My ex-wife lives in another country, babysitting strangers' children to survive. My sons fix cars instead of studying the engineering degrees they once dreamed of. I cannot return to the United States; there is nothing left for us there: no home, no business, no savings, no future. Everything we built over nearly two decades vanished in May 2022. I never imagined a person I had never met, never spoken to, could destroy my family so](https://storage.mollywhite.net/micro/bc295b997c4a6e98cf8e_Screenshot-2025-12-11-at-4.42.55---PM.png)
![Hi, i was a victim of Do Kwon's terra luna scam. I am a carer for my Dad, [redacted] and i invested money into terra luna believing that it was a profitable strategy to support my Dad better with medical expenses and being able to sacrifice taking time off work to support him. When i lost all of my funds in this scam i was devastated. I had to take out loans to get by to fund my Dad's medical bills and just to get by with food and necessities for us both. I am still in debt to this day paying back loans and credit cards. My family has been through immense suffering and i suffer from depression daily since the losses of my life savings. I urge the honourable judge to please hand down harse sentencing to Do Kwon for the crimes that he has committed, i sincerely hope that justice will prevail to recieve some sollace from the torment he has put my family and i through. Kind regards Paul Lynn](https://storage.mollywhite.net/micro/542695386b9c27632d06_Screenshot-2025-12-11-at-5.05.38---PM.png)