2025-11-06T17:52:51+00:00
Today's links
- The 40-year economic mistake that let Google conquer (and enshittify) the world: If reality doesn't fit the theory, ignore reality.
- Hey look at this: Delights to delectate.
- Object permanence: The Master Switch; Dueling useless machines; Chrome delists Symantec; Someone tried to buy the UK; "Made to Kill"; #Audiblegate; Sony lies about de-rootkitifier; “Aurora”; Bluesky and enshittification; Polostan; New Zealand's 3 strikes law; Open Kinect drivers; Co-op platforms vs Uber; Chelsea Manning's surveillance reform bill.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
The 40-year economic mistake that let Google conquer (and enshittify) the world (permalink)
A central fact of enshittification is that the growth of quality-destroying, pocket-picking monopolists wasn't an accident, nor was it inevitable. Rather, named individuals, in living memory, advocated for and created pro-enshittificatory policies, ushering in the enshittocene.
The greatest enshittifiers of all are the neoliberal economists who advocated for the idea that monopolies are good, because (in their perfect economic models), the only way for a company to secure a monopoly is to be so amazing that we all voluntarily start buying its products and services, and the instant a monopoly starts to abuse its market power, new companies will enter the market and poach us all from the bloated incumbent.
This "consumer welfare" theory of antitrust is obviously wrong, and it's the best-known neoliberal monopoly delusion. But it's not the only one! Another pro-monopoly ideology we can thank the Chicago School economists for is "industrial organization" (IO), a theory that insists that vertical monopolies are actually really good. This turns out to be one of the most consequentially catastrophic mistakes in modern economic history.
What's a "vertical monopoly"? That's when a company takes over parts of the supply chain both upstream and downstream from it. Take Essilor Luxottica, the eyeglasses monopoly that owns every brand of frames you've ever heard of, from Coach and Oakley to Versace and Bausch and Lomb. That's a horizontal monopoly – the company took over every eyewear brand under the sun. But they also created a vertical monopoly by buying most of the major eyeglass retailers (Sunglass Hut, Lenscrafters, etc), and by buying up most of the optical labs in the world (Essilor makes the majority of corrective lenses, worldwide). They also own Eyemed, the world's largest eyeglasses insurer.
IO theory predicts that even if a company like Essilor Luxottica uses its monopoly power to price gouge in one part of the eyeglass supply chain (e.g. by raising the price of frames, which Essilor Luxottica has done, by over 1,000%), they will use some of those extraordinary profits to keep all their other products as cheap as possible. If Luxottica can use its market power to mark up the price of frames by a factor of ten, then IO theory predicts that they'll keep the prices of lenses and insurance as low as possible, in order to make it harder for lens or insurance companies to get into the frame business. By using monopoly frame profits to starve those rivals of profits, Essilor Luxottica can keep them so poor that they can't afford to branch out and compete with Essilor Luxottica's high-priced frames.
Like so much in neoliberal economics, this is nothing but "a superior moral justification for selfishness" (h/t John Kenneth Galbraith). IO is a way for the greediest among us to convince policymakers that their greed is good, and produces a benefit for all of us. By energetically peddling this economic nonsense, monopolists and their pet economists have done extraordinary harm to the world, while getting very, very rich.
Google is a real poster child for what happens to a market when regulators adopt IO ideas. "Google’s hidden empire" is a new paper out today from Aline Blankertz, Brianna Rock and Nicholas Shaxson, which tells the story of how IO let Google become the enshittified, thrice-convicted monopolist it is today:
https://arxiv.org/abs/2511.02931
The authors mostly look at the history of how EU regulators dealt with Google's long string of mergers. By the time Google embarked on this shopping spree, the European Commission had already remade itself as a Chicago School, IO-embracing regulator. The authors trace this to 2001, when the EC blocked a merger between GE and Honeywell that had been approved in the USA. This provoked howls of disapproval from Chicago School proponents, who mocked the EC for not hiring enough "IO expertise," contrasting the Commission's staff with the US FTC, which had 50 (neoliberal) PhD economists on the payroll. Stung, the EU embarked on a "Big Bang" hiring spree for Chicago School economists in 2004, remaking the way it viewed competition policy for decades to come.
This is the context for Google's wave of highly consequential vertical mergers, the most important of which was its acquisition of Doubleclick, the ad-tech company that allowed Google to acquire the monopoly it was convicted of operating last year:
https://www.thebignewsletter.com/p/google-found-guilty-of-monopolization
When Google sought regulatory approval in the EU for its Doubleclick acquisition, the EC's economists blithely predicted that this wouldn't lead to any harmful consequences. Sure, it would let Google dominate the tools publishers use to place ads on their pages; the tools advertisers use to buy those ads; and the marketplace in which those seller and buyer tools transacted business. But that's a vertical monopoly, and any (IO-trained) fule kno that this is a perfectly innocuous arrangement that can't possibly lead to harmful monopoly conduct.
The EC arrived at this extraordinary conclusion by paying outside economists a lot of money for advice (that kind of pretzel logic doesn't come cheap). Two decades later, Google/Doubleclick was abusing its monopoly so badly that the EU fined the company €2.95 billion.
It's not like Google/Doubleclick took two decades to start screwing over advertisers and publishers. Right from the jump, it was clear that this merger was an anticompetitive disaster, but that didn't stop the EC from waving through more mergers, like 2020's Google acquisition of Fitbit:
https://pluralistic.net/2020/10/01/the-years-of-repair/#google-fitbit
Once again, the EC concluded that this merger, being "vertical," couldn't have any deleterious effects. In reality, Google-Fitbit was a classic "killer acquisition," in which Google bought out and killed the dominant player in a sector it was planning to enter, in order to shut down a competitor. Within a few years, the Fitbit had been enshittified beyond all recognition.
Despite these regulatory failures (and many more like them), the EC remains firmly committed to IO and its supremely chill posture on vertical monopolization. But as bad as IO is for regulating vertical mergers, it's even less well suited for addressing Google's main tactic for shaping markets: vertical investments.
Google Ventures (GV) is Google's investment arm, and it is vastly larger than the venture arms of other Big Tech companies. Google invests in far more companies than it buys outright, and in far more companies than any other Big Tech company does. GV is the only Big Tech investment fund that shows up in the top-ten list of VCs by deal count.
In the paper, the authors use data from Pitchbook to create a sense of Google's remarkable investment portfolio. Many of these deals go through "Google for Startups," which allows Google to acquire an equity stake in companies for "in-kind contributions," mainly access to Google's cloud servers and data.
By investing so widely, Google can exert enormous force on the shape of the entire tech ecosystem, ensuring that the companies that do succeed don't compete with Google's most lucrative lines of business, but rather funnel users and businesses into using Google's services.
This activity isn't tracked by academics, regulators, or stock analysts. It's the "hidden empire" of the paper's title. 9,556 companies show up in Pitchbook as receiving Big Tech investments up to 2024. 5,899 of those companies got their investments from Google.
Combine Google's free hand to engage in vertical acquisitions and its invisible empire of portfolio companies, and you have a world-spanning entity with damned few checks on its power.
What's more, as the authors write, Google is becoming an arm of US foreign power. Back in 2024, Google made a $24b acquisition offer to the cybersecurity company Wiz, which turned it down, out of fear that the Biden administration's antitrust enforcers would tank the deal. After Donald Trump's election – which saw antitrust enforcement neutralized except as a tool for blackmailing companies Trump doesn't like – Wiz sold to Google for $32b.
The Wiz acquisition is an incredibly dangerous one from a competitive perspective. Wiz provides realtime cybersecurity monitoring for the networks of large corporations, meaning that any Wiz customer necessarily shares a gigantic amount of sensitive data with the company – and now, with Google, which owns Wiz, and competes with many of its customers.
Google has already mastered the art of weaponizing the data that it collects from users, but with Wiz, it gains unprecedented access to sensitive data from the world's businesses.
Google's consolidation of market power – power it has abused so badly that it has lost three federal antitrust cases – can be directly traced to the foolish notions of Industrial Organization theory and its misplaced faith in vertical mergers.
As the authors write, it's long past time we abandoned this failed ideology. The Google/Wiz merger still has to clear regulatory approval in the EU. This represents a chance for the EC to abandon its tragic, decades-long, unrequited love affair with IO and block this nakedly anticompetitive merger.
Hey look at this (permalink)

- Lina Khan co-chairs Mamdani's transition team https://www.nytimes.com/live/2025/11/05/nyregion/nyc-mayor-mamdani
- The Democrats' problem in the Senate is not progressives https://www.gelliottmorris.com/p/democrats-need-a-bigger-senate-solution
- How anti-cybercrime laws are being weaponized to repress journalism https://www.cjr.org/analysis/nigeria-pakistan-jordan-cybercrime-laws-journalism.php
- Internet Archive’s legal fights are over, but its founder mourns what was lost https://arstechnica.com/tech-policy/2025/11/the-internet-archive-survived-major-copyright-losses-whats-next/
- Hack Exposes Kansas City’s Secret Police Misconduct List https://www.wired.com/story/hack-exposes-kansas-city-kansas-polices-secret-misconduct-list/
Object permanence (permalink)
#20yrsago BBC Archive database — early info https://web.archive.org/web/20051102024643/https://www.hackdiary.com/archives/000071.html
#20yrsago Sony releases de-rootkit-ifier, lies about risks from rootkits https://web.archive.org/web/20051126084940/http://www.freedom-to-tinker.com/?p=921
#20yrsago Pew study: Kids remix like hell https://web.archive.org/web/20051104022412/http://www.pewinternet.org/PPF/r/166/source/rss/report_display.asp
#15yrsago How I use the Internet when I’m playing with my kid https://www.theguardian.com/technology/2010/nov/02/cory-doctorow-children-and-computers
#15yrsago Bedtime Story: Supernatural thriller about the dark side of “getting lost in a good book” https://memex.craphound.com/2010/11/02/bedtime-story-supernatural-thriller-about-the-dark-side-of-getting-lost-in-a-good-book/
#15yrsago The Master Switch: Tim “Net Neutrality” Wu explains what’s at stake in the battle for net freedom https://memex.craphound.com/2010/11/01/the-master-switch-tim-net-neutrality-wu-explains-whats-at-stake-in-the-battle-for-net-freedom/
#15yrsago Times Online claims 200K paid users: but where’s the detailed breakdown? https://memex.craphound.com/2010/11/01/times-online-claims-200k-paid-users-but-wheres-the-detailed-breakdown/
#15yrsago Duelling useless machines: a metaphor for polarized politics https://www.youtube.com/watch?v=UkgoSOSGrx4
#15yrsago Hari Prasad, India’s evoting researcher, working to save Indian democracy from dirty voting machines https://www.eff.org/deeplinks/2010/11/2010-pioneer-award-winner-hari-prasad-defends
#15yrsago Science fiction tells us all laws are local — just like the Web https://locusmag.com/feature/cory-doctorow-a-cosmopolitan-literature-for-the-cosmopolitan-web/
#15yrsago UK Lord claims mysterious "foundation" wants to give Britain £17B, no strings attached http://www.antipope.org/charlie/blog-static/2010/11/conspiracy-theories.html
#15yrsago New Zealand proposes “guilty until proven innocent” copyright law to punish accused infringers https://arstechnica.com/tech-policy/2010/11/new-zealand-p2p-proposal-guilty-until-proven-innocent/
#15yrsago Toronto cops who removed their name-tags during the G20 to avoid identification will be docked a day’s pay https://web.archive.org/web/20101107144339/https://www.theglobeandmail.com/news/national/toronto/nearly-100-toronto-officers-to-be-disciplined-over-summit-conduct/article1784884/
#15yrsago $2K bounty for free/open Kinect drivers (Microsoft thinks this is illegal!) https://blog.adafruit.com/2010/11/04/the-open-kinect-project-the-ok-prize-get-1000-bounty-for-kinect-for-xbox-360-open-source-drivers/
#15yrsago TSA official slipped white powder into fliers’ bags, told them they’d been caught with coke and were under arrest https://www.thesmokinggun.com/documents/stupid/memos-detail-tsa-officers-cocaine-pranks
#10yrsago Firefox’s new privacy mode also blocks tracking ads https://web.archive.org/web/20151104081611/https://www.eff.org/deeplinks/2015/11/mozilla-ships-tracking-protection-firefox
#10yrsago Predatory lenders trick Google into serving ads to desperate, broke searchers https://www.theatlantic.com/technology/archive/2015/11/google-searches-privacy-danger/413614/
#10yrsago Fighting Uber’s Death Star with a Rebel Alliance of co-op platforms https://web.archive.org/web/20151107021010/http://www.shareable.net/blog/how-platform-coops-can-beat-death-stars-like-uber-to-create-a-real-sharing-economy
#10yrsago If the Kochs want criminal justice reform, why do they fund tough-on-crime GOP candidates? https://theintercept.com/2015/11/03/soft-on-crime-ads/
#10yrsago Chelsea Manning publishes a 129-page surveillance reform bill from her cell in Leavenworth https://web.archive.org/web/20151103175813/https://s3.amazonaws.com/fftf-cms/media/medialibrary/2015/11/manning-memo.pdf
#10yrsago EPA finds more Dieselgate emissions fraud in VW’s Audis and Porsches https://www.nytimes.com/2015/11/03/business/some-porsche-models-found-to-have-emissions-cheating-software.html
#10yrsago Ranking Internet companies’ data-handling: a test they all fail https://www.theguardian.com/technology/2015/nov/03/ranking-digital-rights-project-data-protection
#10yrsago Big Data refusal: the nuclear disarmament movement of the 21st century https://booktwo.org/notebook/big-data-no-thanks/
#10yrsago Made to Kill: 1960s killer-robot noir detective novel https://memex.craphound.com/2015/11/03/made-to-kill-1960s-killer-robot-noir-detective-novel/
#10yrsago Chrome won’t trust Symantec-backed SSL as of Jun 1 unless they account for bogus certs https://security.googleblog.com/2015/10/sustaining-digital-certificate-security.html
#10yrsago Beautiful, free/open 3D printed book of lost Louis H. Sullivan architectural ornaments https://web.archive.org/web/20151109231301/https://twentysomethingsullivan.com/
#10yrsago America’s a rigged carnival game that rips off the poor to fatten the rich https://web.archive.org/web/20151104012651/http://robertreich.org/post/132363519655
#10yrsago As America’s middle class collapses, no one is buying stuff anymore https://web.archive.org/web/20151105142153/http://uk.businessinsider.com/the-disappearing-middle-class-is-threatening-major-retailers-2015-10
#10yrsago Irish government to decriminalise personal quantities of many drugs https://www.irishtimes.com/news/ireland/irish-news/injection-rooms-for-addicts-to-open-next-year-in-drug-law-change-says-minister-1.2413509
#10yrsago Book and Bed: Tokyo’s coffin hotel/bookstore https://bookandbedtokyo.com/en/
#10yrsago Kim Stanley Robinson’s “Aurora”: space is bigger than you think https://memex.craphound.com/2015/11/02/kim-stanley-robinsons-aurora-space-is-bigger-than-you-think/
#5yrsago Trustbusting Google https://pluralistic.net/2020/11/02/unborked/#borked
#5yrsago Trump billed the White House $3 per glass of water https://pluralistic.net/2020/11/02/unborked/#beltway-bandits
#5yrsago Trump's electoral equilibrium https://pluralistic.net/2020/11/02/unborked/#maso-fascism
#5yrsago A hopeful future https://pluralistic.net/2020/11/03/somebody-will/#somebody-will
#5yrsago Get an extra vote https://pluralistic.net/2020/11/03/somebody-will/#nudge-nudge
#5yrsago How Audible robs indie audiobook creators https://pluralistic.net/2020/11/03/somebody-will/#acx
#5yrsago Past Performance is Not Indicative of Future Results https://pluralistic.net/2020/11/03/somebody-will/#a-not-i
#1yrago Bluesky and enshittification https://pluralistic.net/2024/11/02/ulysses-pact/#tie-yourself-to-a-federated-mast
#1yrago Shifting $677m from the banks to the people, every year, forever https://pluralistic.net/2024/11/01/bankshot/#personal-financial-data-rights
#1yrago Neal Stephenson's "Polostan" https://pluralistic.net/2024/11/04/bomb-light/#nukular
Upcoming appearances (permalink)

- Miami: Cloudfest, Nov 6
https://www.cloudfest.com/usa/
- Burbank: Burbank Book Festival, Nov 8
https://www.burbankbookfestival.com/
- Lisbon: A post-American, enshittification-resistant internet, with Rabble (Web Summit), Nov 12
https://websummit.com/sessions/lis25/92f47bc9-ca60-4997-bef3-006735b1f9c5/a-post-american-enshittification-resistant-internet/
- Cardiff: Hay Festival After Hours, Nov 13
https://www.hayfestival.com/c-203-hay-festival-after-hours.aspx
- Oxford: Enshittification and Extraction: The Internet Sucks Now, with Tim Wu (Oxford Internet Institute), Nov 14
https://www.oii.ox.ac.uk/news-events/events/enshittification-and-extraction-the-internet-sucks-now/
- London: Enshittification with Sarah Wynn-Williams and Chris Morris, Nov 15
https://www.barbican.org.uk/whats-on/2025/event/cory-doctorow-with-sarah-wynn-williams
- London: Downstream IRL with Aaron Bastani (Novara Media), Nov 17
https://dice.fm/partner/tickets/event/oen5rr-downstream-irl-aaron-bastani-in-conversation-with-cory-doctorow-17th-nov-earth-london-tickets
- London: Enshittification with Carole Cadwalladr (Frontline Club), Nov 18
https://www.eventbrite.co.uk/e/in-conversation-enshittification-tickets-1785553983029
- Virtual: Enshittification with Vass Bednar (Vancouver Public Library), Nov 21
https://www.crowdcast.io/@bclibraries-present
- Toronto: Jailbreaking Canada (OCAD U), Nov 27
https://www.ocadu.ca/events-and-exhibitions/jailbreaking-canada
- San Diego: Enshittification at the Mission Hills Branch Library, Dec 1
https://libraryfoundationsd.org/events/doctorow
- Seattle: Neuroscience, AI and Society (University of Washington), Dec 4
https://www.eventbrite.com/e/neuroscience-ai-and-society-cory-doctorow-tickets-1735371255139
- Madison, CT: Enshittification at RJ Julia, Dec 8
https://rjjulia.com/event/2025-12-08/cory-doctorow-enshittification
Recent appearances (permalink)
- Reimagining Digital Public Infrastructure (Attention: Govern Or Be Governed)
https://www.youtube.com/watch?v=F8JuXDfDtBY
- Enshittification and How To Fight It (ILSR)
https://www.whoshallrule.com/p/enshittification-and-how-to-fight
- Big Tech’s “Enshittification” & Bill McKibben on Solar Hope for the Planet
https://www.writersvoice.net/2025/11/cory-doctorow-on-big-techs-enshittification-bill-mckibben-on-solar-hope-for-the-planet/
- Enshittification and the Rot Economy with Ed Zitron (Clarion West)
https://www.youtube.com/watch?v=Tz71pIWbFyc
- Amanpour & Co (New Yorker Radio Hour)
https://www.youtube.com/watch?v=I8l1uSb0LZg
Latest books (permalink)
- "Canny Valley": a limited edition collection of the collages I create for Pluralistic, self-published, September 2025
- "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus and Giroux, October 7, 2025
https://us.macmillan.com/books/9780374619329/enshittification/
- "Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels)
- "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org)
- "The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org)
- "The Internet Con": a nonfiction book about interoperability and Big Tech, Verso, September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245)
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com)
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe, 2022 (https://chokepointcapitalism.com)
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2026
- "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026
- "The Memex Method," Farrar, Straus and Giroux, 2026
- "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.
- A Little Brother short story about DIY insulin. PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Wed, 05 Nov 2025 16:40:38 +0000
Today's links
- "Science Comics Computers: How Digital Hardware Works": Steampunk dinosaurs scratch-build a pressurized air-based, Turing complete, universal von Neumann machine.
- Hey look at this: Delights to delectate.
- Object permanence: Google Print vs Great Ormond; Lincolnbot; "Zoo City"; Killed by your tapeworm's cancer; NZ Colossus; How to have cancer.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
"Science Comics Computers: How Digital Hardware Works" (permalink)
In Science Comics Computers: How Digital Hardware Works, legendary cypherpunk Perry Metzger teams up with Penelope Spector and illustrator Jerel Dye for a tour-de-force young adult comic book that uses hilarious steampunk dinosaurs to demystify the most foundational building-blocks of computers. It's astounding:
"Science Comics" is a long-running series from First Second, the imprint that also published my middle-grades comic In Real Life and my picture book Poesy the Monster-Slayer (they are also publishing my forthcoming middle-grades graphic novel Unauthorized Bread and adult graphic novel Enshittification). But long before I was a First Second author, I was a giant First Second fan, totally captivated by their string of brilliant original comics and English translations of beloved comics from France, Spain and elsewhere. The "Science Comics" series really embodies everything I love about the imprint: the combination of whimsy, gorgeous art, and a respectful attitude towards young readers that meets them at their level without ever talking down to them:
https://us.macmillan.com/series/sciencecomics
But as great as the whole "Science Comics" series is, How Digital Hardware Works is even better. Our guide to the most profound principles in computer science is a T Rex named Professor Isabella Brunel, who dresses in steampunk finery that matches the Victorian, dinosaur-filled milieu in which she operates.
Brunel begins by introducing us to "Veniac," a digital computer that consists of a specially designed room in which a person performs all the steps involved in the operations of a computer. This person – a celebrated mathematician (she has a Fields Medal) velociraptor named Edna – moves slips of paper in and out of drawers, looks up their meaning in a decoder book, tacks them up on a corkboard register, painstakingly completing the operations that comprise the foundations of computing.
Here the authors are showing the reader that computation can be abstracted away from any particular machine. The foundation of computing isn't electrical engineering, microlithography, or programming: it's logic.
When I was six or seven, my father brought home a computer science teaching tool from Bell Labs called "CARDIAC," the "CARDboard Illustrative Aid to Computation." This was a papercraft digital computer that worked in nearly the same way as the Veniac, with you playing the role of Edna, moving little tokens around, penciling and erasing values in registers, and painstakingly performing the operations to run values through adders and then move them to outputs:
https://en.wikipedia.org/wiki/CARDboard_Illustrative_Aid_to_Computation
CARDIAC was profoundly formative for me. No matter how infinitesimal and rapid the components of a modern computer are, I have never lost sight of the fact that they are performing the same operations I performed with a CARDIAC on my child-sized desk in my bedroom. This is exactly the mission of CARDIAC, whose creators, David Hagelbarger and Saul Fingerman, were worried that the miniaturization of computers (in 1968!) was leading to a time when it would be impossible to truly grasp how they worked. If you want to build your own CARDIAC, here's a PDF you can download and get started with:
https://www.instructables.com/CARDIAC-CARDboard-Illustrative-Aid-to-Computation-/
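The fetch-decode-execute loop that CARDIAC walks you through by hand is small enough to sketch in a few lines of Python. This is a simplified, hypothetical machine in CARDIAC's spirit (one accumulator, three-digit words, opcode in the hundreds digit); the opcode assignments below are illustrative, not the card's exact table:

```python
# Sketch of a CARDIAC-style accumulator machine. The opcode subset is
# illustrative (hypothetical), chosen to show the fetch-decode-execute cycle.
def run(program, inputs):
    mem = dict(program)       # address -> 3-digit word
    acc, pc, out = 0, 0, []   # accumulator, program counter, output tape
    data = iter(inputs)
    while True:
        word = mem.get(pc, 0)
        op, addr = divmod(word, 100)   # hundreds digit = opcode, rest = address
        pc += 1
        if op == 0:   mem[addr] = next(data)        # INP: read input into memory
        elif op == 1: acc = mem.get(addr, 0)        # CLA: clear acc, load memory
        elif op == 2: acc += mem.get(addr, 0)       # ADD: add memory to acc
        elif op == 5: out.append(mem.get(addr, 0))  # OUT: copy memory to output
        elif op == 6: mem[addr] = acc               # STO: store acc to memory
        elif op == 8: pc = addr                     # JMP: jump
        elif op == 9: return out                    # HRS: halt

# Read two numbers into cells 50 and 51, add them, store in 52, print, halt.
program = {0: 50, 1: 51, 2: 150, 3: 251, 4: 652, 5: 552, 6: 900}
print(run(program, [7, 35]))  # prints [42]
```

Stepping through this by hand, token by token, is essentially what CARDIAC (and Edna) has you do on paper.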
But of course, you don't need to print, assemble and operate a CARDIAC to get the fingertip feeling of what's going on inside a computer. Watching a sassy velociraptor perform the operations will work just as well. After Edna lays down this conceptual framework, Brunel moves on to building a mechanical digital computer, one composed of mechanical switches that can be built up into logic gates, which can, in turn, be ganged together to create every part of a universal computer that can compute every valid program.
This mechanical computer – the "Brawniac" – runs on compressed air, provided by a system of pumps that either supply positive pressure (forcing corks upwards to either permit or block airflow) or negative pressure (which sucks the corks back down, toggling the switch's state). This simple switch – you could probably build one in your kitchen out of fish-tank tubing and an aquarium pump – is then methodically developed into every type of logic gate. These gates are then combined to replicate every function of Edna in her special Veniac room, firmly anchoring the mechanical nuts-and-bolts of automatic computing with the conceptual framework.
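That construction, from one switch type up to arithmetic, can be mimicked in software. As a rough sketch (my choice of primitive and composition, not the book's exact circuits): start from a single NAND primitive, which a pair of pressure-driven cork switches could plausibly implement, and compose every other gate from it, up to a one-digit adder:

```python
# Everything below is built from one primitive, mirroring the idea that a
# single switch type suffices to construct a universal computer.
def NAND(a, b): return 1 - (a & b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """Sum and carry for one binary digit, composed entirely of NANDs."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = sum {s}, carry {c}")
```

Gang enough of these together (full adders, then registers) and you have the arithmetic core of a computer, whether the switches are transistors, corks, or a velociraptor with a corkboard.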
This goes beyond demystification: the authors here are attaching a handle to this big, nebulous, ubiquitous hyperobject that permeates every part of our lives and days, allowing the reader to grasp and examine it from all angles. While there's plenty of great slapstick, fun art, and terrific characters in this book that will make you laugh aloud, the lasting effect upon turning the last page isn't just entertainment, it's empowerment.
No wonder they were able to tap the legendary hardware hacker Andrew "bunnie" Huang to contribute an outstanding introduction to this book, one that echoes the cri de coeur in the intro that bunnie generously provided for my young adult novel Little Brother. No one writes about the magic of hacking hardware like bunnie.
Bunnie isn't the only computing legend associated with this book. Lead author Perry Metzger founded the Cryptography mailing list and is a computing pioneer in his own right.
The authors have put up a website at veniac.com that promises educator guides and a Veniac simulator. These will doubtless serve as excellent companions to the book itself, but even without them, this is an incredible accomplishment.
Hey look at this (permalink)

- Zohran Mamdani: “Hope Is Alive” https://jacobin.com/2025/11/zohran-mamdani-election-victory-speech/
- Credit Card-Thin Handheld Has 300 Games and A Multiplayer Cable https://www.yankodesign.com/2025/11/05/credit-card-thin-handheld-has-300-games-and-a-multiplayer-cable/
- ‘The Big Short’ Investor Michael Burry Bets Against AI Hype https://gizmodo.com/the-big-short-investor-michael-burry-bets-against-ai-hype-2000681316
- I'm an Amazon employee, and I co-sign this letter anonymously https://www.amazonclimatejustice.org/open-letter#sign-form
- Epic and Google agree to settle their lawsuit and change Android’s fate globally https://www.theverge.com/policy/813991/epic-google-proposed-settlement
Object permanence (permalink)
#20yrsago Google print hurts kids! https://memex.craphound.com/2005/11/05/hospital-google-print-hurts-kids/
#15yrsago HOWTO graft the RFID from a payment-card onto your phone https://www.bunniestudios.com/blog/2010/rfid-transplantation/
#15yrsago Lincolnbot Mark I https://web.archive.org/web/20101107224026/http://disneyparks.disney.go.com/blog/2010/11/walt-disney-one-mans-dream-re-opens-with-new-magic-fond-memories-at-disney’s-hollywood-studios/
#15yrsago Crutchfield Dermatology of Minneapolis claims copyright in everything you write, forever, to keep you from posting complaints on the net https://memex.craphound.com/2010/11/05/crutchfield-dermatology-of-minneapolis-claims-copyright-in-everything-you-write-forever-to-keep-you-from-posting-complaints-on-the-net/
#15yrsago Botmasters include fake control interface to ensnare security researchers https://web.archive.org/web/20101106004833/https://blog.tllod.com/2010/11/03/statistics-dont-lie-or-do-they/
#15yrsago Young Asian refugee claimant sneaks onto Air Canada flight from HK disguised as old white guy https://www.cnn.com/2010/WORLD/americas/11/04/canada.disguised.passenger/index.html
#15yrsago Zoo City: hard-boiled South African urban fantasy makes murder out of magic https://memex.craphound.com/2010/11/05/zoo-city-hard-boiled-south-african-urban-fantasy-makes-murder-out-of-magic/
#15yrsago Shortly after Murdoch buys National Geographic, he fires its award-winning journalists https://www.pewresearch.org/journalism/2011/07/20/wall-street-journal-under-rupert-murdoch/
#10yrsago British government will (unsuccessfully) ban end-to-end encryption https://memex.craphound.com/2015/11/05/british-government-will-unsuccessfully-ban-end-to-end-encryption/
#10yrsago Man killed by his tapeworm’s cancer https://www.livescience.com/52695-tapeworm-cancer.html?cmpid=514645
#10yrsago Washington Redskins’ lawyers enumerate other grossly offensive trademarks for the USPTO https://www.techdirt.com/2015/11/04/how-redskins-delightfully-vulgar-court-filing-won-me-over/
#10yrsago New Zealand’s lost colossus: all-mechanical racetrack oddsmaking computer https://hackaday.com/2015/11/04/tote-boards-the-impressive-engineering-of-horse-gambling/
#5yrsago Ant, Uber, and the true nature of money https://pluralistic.net/2020/11/05/gotta-be-a-pony-under-there/#jack-ma
#1yrago How to have cancer https://pluralistic.net/2024/11/05/carcinoma-angels/#squeaky-nail
Upcoming appearances (permalink)

- Miami: Enshittification at Books & Books, Nov 5
https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469
- Miami: Cloudfest, Nov 6
https://www.cloudfest.com/usa/
- Burbank: Burbank Book Festival, Nov 8
https://www.burbankbookfestival.com/
- Lisbon: A post-American, enshittification-resistant internet, with Rabble (Web Summit), Nov 12
https://websummit.com/sessions/lis25/92f47bc9-ca60-4997-bef3-006735b1f9c5/a-post-american-enshittification-resistant-internet/
- Cardiff: Hay Festival After Hours, Nov 13
https://www.hayfestival.com/c-203-hay-festival-after-hours.aspx
- Oxford: Enshittification and Extraction: The Internet Sucks Now with Tim Wu (Oxford Internet Institute), Nov 14
https://www.oii.ox.ac.uk/news-events/events/enshittification-and-extraction-the-internet-sucks-now/
- London: Enshittification with Sarah Wynn-Williams and Chris Morris, Nov 15
https://www.barbican.org.uk/whats-on/2025/event/cory-doctorow-with-sarah-wynn-williams
- London: Downstream IRL with Aaron Bastani (Novara Media), Nov 17
https://dice.fm/partner/tickets/event/oen5rr-downstream-irl-aaron-bastani-in-conversation-with-cory-doctorow-17th-nov-earth-london-tickets
- London: Enshittification with Carole Cadwalladr (Frontline Club), Nov 18
https://www.eventbrite.co.uk/e/in-conversation-enshittification-tickets-1785553983029
- Virtual: Enshittification with Vass Bednar (Vancouver Public Library), Nov 21
https://www.crowdcast.io/@bclibraries-present
- San Diego: Enshittification at the Mission Hills Branch Library, Dec 1
https://libraryfoundationsd.org/events/doctorow
- Seattle: Neuroscience, AI and Society (University of Washington), Dec 4
https://www.eventbrite.com/e/neuroscience-ai-and-society-cory-doctorow-tickets-1735371255139
- Madison, CT: Enshittification at RJ Julia, Dec 8
https://rjjulia.com/event/2025-12-08/cory-doctorow-enshittification
Recent appearances (permalink)
- Enshittification and How To Fight It (ILSR)
https://www.whoshallrule.com/p/enshittification-and-how-to-fight
- Big Tech’s “Enshittification” & Bill McKibben on Solar Hope for the Planet
https://www.writersvoice.net/2025/11/cory-doctorow-on-big-techs-enshittification-bill-mckibben-on-solar-hope-for-the-planet/
- Enshittification and the Rot Economy with Ed Zitron (Clarion West)
https://www.youtube.com/watch?v=Tz71pIWbFyc
- Amanpour & Co (New Yorker Radio Hour)
https://www.youtube.com/watch?v=I8l1uSb0LZg
- Enshittification is Not Inevitable (Team Human)
https://www.teamhuman.fm/episodes/339-cory-doctorow-enshittification-is-not-inevitable
Latest books (permalink)
- "Canny Valley": a limited edition collection of the collages I create for Pluralistic, self-published, September 2025
- "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus and Giroux, October 7 2025 (https://us.macmillan.com/books/9780374619329/enshittification/)
- "Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels)
- "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org)
- "The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org)
- "The Internet Con": a nonfiction book about interoperability and Big Tech, Verso, September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245)
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com)
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe, 2022 (https://chokepointcapitalism.com)
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2026
- "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026
- "The Memex Method," Farrar, Straus and Giroux, 2026
- "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.
- A Little Brother short story about DIY insulin. PLANNING.

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
2025-11-03T22:52:08+00:00
Published an issue of Citation Needed:
Trump says he has “no idea” who he just pardoned
2025-11-03T19:09:58+00:00
The full CBS interview with Trump about the pardon of Binance's Changpeng Zhao is shocking. "Why did you pardon him?" "I have no idea who he is. I was told that he was a victim ... They sent him to jail and they really set him up. That's my opinion. I was told about it."
"I know nothing about it because I'm too busy." He talks about how his sons are in the crypto industry, and how his son and wife published bestselling books. "I'm proud of them for doing that. I'm focused on this."
"[You're] not concerned about the appearance of corruption with this?"
"I'd rather not have you ask the question."
2025-11-02T19:41:06+00:00
Reviewing the 13 books I read in September and October
Missed my reading wrap-up for September and have been too busy to read as much as usual, so here’s a combined September/October wrap-up. Lots of litRPG, and James S. A. Corey’s Caliban’s War (The Expanse #2) was definitely a highlight!
2025-11-02T14:54:53+00:00
Read:
Larissa MacFarquhar writes about the recent research into the neurodiverse syndromes known as aphantasia and hyperphantasia, their effects on our experience of trauma and memory, and the sense of identity that has grown up around them.
Sat, 01 Nov 2025 19:56:25 +0000
Today's links
- There's one thing EVERY government can do to shrink Big Tech: The path to a post-American internet.
- Hey look at this: Delights to delectate.
- Object permanence: D2020; Sony rootkit; Public Enemy vs the internet; NYC plute Hallowe'en.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
There's one thing EVERY government can do to shrink Big Tech (permalink)
As the old punchline goes, "If you wanted to get there, I wouldn't start from here." It's a gag that's particularly applicable to monopolies: once a company has secured a monopoly, it doesn't just have the power to block new companies from competing with it, it also has the power to capture governments and thwart attempts to regulate it or break it up.
Forty years ago, a group of right-wing economists decided that this was a feature, not a bug, and convinced the world's governments to stop enforcing competition law, anti-monopoly law, and antitrust law, deliberately encouraging a global takeover by monopolies, duopolies and cartels. Today, virtually every sector of our economy is dominated by five or fewer firms:
https://www.openmarketsinstitute.org/learn/monopoly-by-the-numbers
These neoliberal economists knew that in order to stop us from getting there ("there" being a world where everyday people have economic and political freedom), they'd have to get us "here" – a world where even the most powerful governments find themselves unable to address concentrated corporate power. They wanted to drag us into an oligarchy, and take away any hope of us escaping to a fairer, more pluralistic world.
They succeeded. Today, rich and powerful governments struggle to do anything to rein in Big Tech. Canadian Prime Minister Mark Carney contemplated levying a 3% tax on America's tax-dodging tech giants…for all of five seconds. All Trump had to do was meaningfully clear his throat and Carney folded:
Canada also tried forcing payments to Canadian news agencies from tech giants, and failed in the most predictable way imaginable. Facebook simply blocked all Canadian news on its platforms (this being exactly what it had done in every other country where this was tried). Google paid out some money, and the country's largest newspaper killed its long-running investigative series into Big Tech's sins. Then Google slashed its payments.
These payments were always a terrible idea. The only beneficial part of how Big Tech relates to the news is in making it easy for people to find and discuss the news. News you're not allowed to find or talk about isn't "news," it's "a secret." The thing that Big Tech steals from the news isn't links, it's money: 30% of every in-app payment is stolen by the mobile duopoly; 51% of every ad dollar is stolen by the ad-tech duopoly; and social media holds news outlets' subscribers hostage and forces news companies to pay to "boost" their content to reach the people who follow them.
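Run the numbers from that paragraph and the scale of the skim is plain. A quick sketch (the percentages are the ones quoted above; the dollar amounts are hypothetical):

```python
# How much of each dollar survives the tolls described above.
ad_spend = 1.00          # an advertiser's dollar
publisher_share = round(ad_spend * (1 - 0.51), 2)     # ad-tech duopoly: 51%
in_app_payment = 10.00   # a reader's in-app subscription payment
outlet_share = round(in_app_payment * (1 - 0.30), 2)  # app-store duopoly: 30%
print(publisher_share, outlet_share)  # 0.49 7.0
```

In other words, under half of every ad dollar ever reaches the publisher, before social-media "boosting" fees take a further cut.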
In other words, extracting payments for links is a form of redistribution, a clawback of some of Big Tech's stolen loot. It isn't predistribution, which would block Big Tech from stealing the loot in the first place.
Canada is a wealthy nation, but only 41m people call it home. The EU is also wealthy, and it is home to 500m people. You'd think that the EU could get further than Canada, but, faced with the might of the tech cartel, it has struggled to get anything done.
Take the GDPR, Europe's landmark privacy law. In theory, this law bans the kind of commercial surveillance that Big Tech thrives on. In practice, these companies just flew an Irish flag of convenience, which not only let them avoid paying their taxes – it also let them get away with illegal surveillance, by capturing the Irish privacy regulator, who does nothing to defend Europeans' privacy:
https://pluralistic.net/2023/05/15/finnegans-snooze/#dirty-old-town
It's hard to overstate just how supine the Irish state is in relation to the American tech giants that pretend to call Dublin their home. The country's latest privacy regulator is an ex-Meta executive!
(Perhaps he can hang out with the UK's newly appointed head of competition enforcement, who used to be the head of Amazon UK:)
https://pluralistic.net/2025/01/22/autocrats-of-trade/#dingo-babysitter
For the EU, Ireland is just part of the problem when it comes to regulating Big Tech. The EU's latest tech regulations are the sweeping, even visionary Digital Services Act and Digital Markets Act. If tech companies obeyed these laws, that would go a long way to addressing their monopoly abuses. So of course, they're not obeying the laws.
Apple has threatened to leave the EU altogether rather than comply with a modest order requiring it to allow third party payments and app stores:
https://pluralistic.net/2025/09/26/empty-threats/#500-million-affluent-consumers
And they've buried the EU in complex litigation that could drag on for a decade:
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62025TN0354
And Trump has made it clear that he is Big Tech's puppet, and any attempt to get American tech companies to obey EU law will be met with savage retaliation:
https://www.cnn.com/2025/09/05/tech/google-eu-antitrust-fine-adtech
When it comes to getting Big Tech to obey the law, if we wanted to get there, I wouldn't start from here.
But the fact that it's hard to get Big Tech to do the bidding of publicly accountable governments doesn't mean that those governments are powerless. There's one institution a government has total control over: itself.
The world's governments have all signed up to "anticircumvention" laws that criminalize reverse-engineering and modifying US tech products. This was done at the insistence of the US Trade Rep, who has spent this entire century using the threat of tariffs to bully every country in the world into signing up to laws that ban their own technologists from directly blocking American Big Tech companies' scams.
It's because of anticircumvention laws that a Canadian company can't go into business making an alternative Facebook client that blocks ads but restores the news. It's because of anticircumvention laws that a Canadian company can't go into business with a product that lets media companies bypass the Meta/Google ad-tech duopoly.
It's because of anticircumvention laws that a European company can't go into business modifying your phone, car, apps, smart devices and operating system to block all commercial surveillance. If companies can't get your data, they can't violate the GDPR. It's because of anticircumvention laws that a European company can't sell you a hardware dongle that breaks into your iPhone and replaces Apple's ripoff app store with a Made-in-the-EU alternative.
Anticircumvention law is the reason Canada's only response to Trump's illegal tariffs is more tariffs, which make everything in Canada more expensive. Get rid of anticircumvention law and Canada could get into the business of shifting billions of dollars from American tech monopolists to Canadian startups and the Canadian people:
Anticircumvention law is the reason the EU can't get its data out of the Big Tech silos that Trump controls, which lets Trump shut down any European government agency or official that displeases him:
https://pluralistic.net/2025/10/15/freedom-of-movement/#data-dieselgate
American monopolists like John Deere have installed killswitches in every tractor in the world – killswitches that can't be removed until we get rid of anticircumvention laws, which will let us create open source firmware for tractors. Until we do that, Trump can shut down all the agriculture in any country that makes him angry:
https://pluralistic.net/2025/10/20/post-american-internet/#huawei-with-american-characteristics
For a decade, we've been warned that allowing China to supply our telecoms infrastructure was geopolitical suicide, because it would mean that China could monitor and terminate our network traffic. That's the threat that Trump's America now poses for the whole world, as Trump makes it clear that America doesn't have allies or trading partners, only rivals and competitors, and he will stop at nothing to beat them.
And if you are worried about China, well, perhaps you should be. The world's incredible rush to solarization has left us with millions of solar installations whose inverters are also subject to arbitrary updates by their (Chinese) manufacturers, including updates that could render them inoperable. The only way around this? Get rid of anticircumvention law and replace all the software in these critical systems with open source, transparent, owner-controlled alternatives:
https://pluralistic.net/2025/09/23/our-friend-the-electron/#to-every-man-his-castle
Getting Big Tech to do your government's bidding is a big lift. The companies are too big to jail, especially with Trump behind them. That's why each of America's Big Tech CEOs paid $1m out of their own pockets to sit behind him on the dais at the inauguration:
Even America can't bring its tech companies to heel. When Google was convicted of being an illegal monopolist, the judge punished the company by sentencing it to…nothing:
https://pluralistic.net/2025/09/03/unpunishing-process/#fucking-shit-goddammit-fuck
But ultimately, breakups and fines and interoperability mandates are all forms of redistribution – a way to strip the companies of the spoils of their decades-long looting spree. That's a laudable goal, but if we want to get there, we must start with predistribution: halting the companies' ongoing extraction efforts, by getting rid of the laws that prevent other technologists from unfucking their products and halting their cash- and data-ripoffs.
Do that long and hard enough and we stand a real chance of draining off so much of their power that we can get moving on those redistributive moves. And getting rid of anticircumvention laws only requires that governments control their own behavior – unlike taxing or fining companies, which only works if governments can control the behavior of companies that have proven, time and again, to be more powerful than any country in the world.
(Image: Cryteria, CC BY 3.0, modified)
Hey look at this (permalink)

- The Forgotten History of Socialism and the Occult https://jacobin.com/2025/10/socialism-occult-mysticism-marxism-history/
- Study: AI Models Trained On Clickbait Slop Result In AI ‘Brain Rot,’ ‘Hostility’ https://www.techdirt.com/2025/10/31/study-ai-models-trained-on-clickbait-slop-result-in-ai-brain-rot-hostility/
- The Validation Machines https://www.theatlantic.com/ideas/archive/2025/10/validation-ai-raffi-krikorian/684764/
- The Department of Defense Wants Less Proof its Software Works https://www.eff.org/deeplinks/2025/10/department-defense-wants-less-proof-its-software-works
- Ireland: Adopt new, transparent process to appoint Data Protection Commissioner https://www.article19.org/resources/ireland-adopt-new-transparent-process-to-appoint-data-protection-commissioner/
Object permanence (permalink)
#20yrsago Sony DRM uses black-hat rootkits https://web.archive.org/web/20051102053346/http://www.sysinternals.com/blog/2005/10/sony-rootkits-and-digital-rights.html
#20yrsago Suncomm encourages people to break its DRM https://web.archive.org/web/20051116115847/http://bigpicture.typepad.com/comments/2005/10/drm_crippled_cd.html
#20yrsago Public Enemy’s Internet strategy https://web.archive.org/web/20051103053915/https://www.wired.com/news/print/0,1294,69403,00.html
#10yrsago Petition: Rename Stephen Harper to “Calgary International Airport” https://www.change.org/p/rename-stephen-harper-to-calgary-international-airport
#10yrsago Hallowe’en with NYC’s super-rich https://www.nytimes.com/slideshow/2015/10/29/fashion/halloween-in-manhattans-most-expensive-zip-codes/s/29UESHALLOWEEN-slide-LRGS.html
#5yrsago D2020 https://pluralistic.net/2020/10/31/walkies/#probabilistic
#5yrsago The Americans https://pluralistic.net/2020/10/31/walkies/#among-us
Upcoming appearances (permalink)

- Virtual: Peoples and Things with danah boyd and Lee Vinsel, Nov 3
https://www.youtube.com/live/WjFvGPLpskk
- Miami: Enshittification at Books & Books, Nov 5
https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469
- Miami: Cloudfest, Nov 6
https://www.cloudfest.com/usa/
- Burbank: Burbank Book Festival, Nov 8
https://www.burbankbookfestival.com/
- Lisbon: A post-American, enshittification-resistant internet, with Rabble (Web Summit), Nov 12
https://websummit.com/sessions/lis25/92f47bc9-ca60-4997-bef3-006735b1f9c5/a-post-american-enshittification-resistant-internet/
- Cardiff: Hay Festival After Hours, Nov 13
https://www.hayfestival.com/c-203-hay-festival-after-hours.aspx
- Oxford: Enshittification and Extraction: The Internet Sucks Now with Tim Wu (Oxford Internet Institute), Nov 14
https://www.oii.ox.ac.uk/news-events/events/enshittification-and-extraction-the-internet-sucks-now/
- London: Enshittification with Sarah Wynn-Williams and Chris Morris, Nov 15
https://www.barbican.org.uk/whats-on/2025/event/cory-doctorow-with-sarah-wynn-williams
- London: Downstream IRL with Aaron Bastani (Novara Media), Nov 17
https://dice.fm/partner/tickets/event/oen5rr-downstream-irl-aaron-bastani-in-conversation-with-cory-doctorow-17th-nov-earth-london-tickets
- London: Enshittification with Carole Cadwalladr (Frontline Club), Nov 18
https://www.eventbrite.co.uk/e/in-conversation-enshittification-tickets-1785553983029
- Virtual: Enshittification with Vass Bednar (Vancouver Public Library), Nov 21
https://www.crowdcast.io/@bclibraries-present
- Seattle: Neuroscience, AI and Society (University of Washington), Dec 4
https://www.eventbrite.com/e/neuroscience-ai-and-society-cory-doctorow-tickets-1735371255139
- Madison, CT: Enshittification at RJ Julia, Dec 8
https://rjjulia.com/event/2025-12-08/cory-doctorow-enshittification
Recent appearances (permalink)
- Enshittification and the Rot Economy with Ed Zitron (Clarion West)
https://www.youtube.com/watch?v=Tz71pIWbFyc
- Amanpour & Co (New Yorker Radio Hour)
https://www.youtube.com/watch?v=I8l1uSb0LZg
- Enshittification is Not Inevitable (Team Human)
https://www.teamhuman.fm/episodes/339-cory-doctorow-enshittification-is-not-inevitable
- The Great Enshittening (The Gray Area)
https://www.reddit.com/r/philosophypodcasts/comments/1obghu7/the_gray_area_the_great_enshittening_10202025/
- Enshittification (Smart Cookies)
https://www.youtube.com/watch?v=-BoORwEPlQ0
Fri, 31 Oct 2025 16:29:32 +0000
Today's links
- The internet was made for privacy: And unmade by regulators.
- Hey look at this: Delights to delectate.
- Object permanence: Materialist conspiratorialism; TSA ball-fondlers; Anonymous to dox 1,000 Klansmen; Amazon hates private property; The Great Firewall of Cameron.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
The internet was made for privacy (permalink)
While "tech exceptionalism" can be a grave sin (as with the "move fast and break things" ethos that wrecked so much of our world, especially its labor markets), there are ways in which tech is truly exceptional, in the sense of bringing forth capabilities and affordances that have never existed before, in all of human history.
One obvious way in which tech is exceptional: its flexibility. Digital computers are "Turing-complete, universal von Neumann machines," which means that they are engines capable of computing every valid program. They are truly general purpose. We have many other general purpose machines, of course, but they are simple things, like wheels. Computers are unique in that they are both complex and universal, and every computer can run every program. Just as we don't know how to make knives that only cut in beneficial ways, we also don't know how to make computers that only run desirable programs.
Every computer can run every program, including ones that the user doesn't want (viruses), or that the manufacturer doesn't want (ad-blockers). No one knows how to make a computer that is almost Turing-complete. There's no such thing as "Turing-complete minus one." We can't make a computer that only runs the programs the manufacturer has authorized – all we can do is criminalize the act of modifying your own computer to do what you tell it to, even if the manufacturer objects:
https://memex.craphound.com/2012/01/10/lockdown-the-coming-war-on-general-purpose-computing/
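A toy illustration of that universality (mine, not from the post): a short Python program can interpret Brainfuck, a famously minimal language that is nonetheless Turing-complete. Any machine that gives you storage, arithmetic, and conditional loops can emulate any other program, which is exactly why there's no way to build one that's "Turing-complete minus one."

```python
def run_bf(code: str, cells: int = 30000) -> str:
    """Interpret Brainfuck: eight commands, yet Turing-complete."""
    # Pre-match the loop brackets so jumps are O(1).
    stack, match = [], {}
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            match[i], match[j] = j, i
    tape, ptr, pc, out = [0] * cells, 0, 0, []
    while pc < len(code):
        c = code[pc]
        if c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0:
            pc = match[pc]  # jump past the loop
        elif c == "]" and tape[ptr] != 0:
            pc = match[pc]  # jump back to the loop head
        pc += 1
    return "".join(out)

# 8 * 8 + 1 = 65 = ASCII "A"
print(run_bf("++++++++[>++++++++<-]>+."))  # prints "A"
```

The same interpreter trick runs in the other direction, too: any "locked-down" computer that can still run an interpreter can, in principle, run any forbidden program inside it.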
I've devoted a lot of my life to exploring the policy implications of this amazing fact, but that's not the only amazing, exceptional thing about technology. There's at least one other way in which modern digital technology has produced something that is genuinely, civilizationally novel: encryption.
Encryption – scrambling data so that it can only be read by its intended recipient – is an age-old project for both the authorities (who used ciphers to keep their secrets safe since the time of the Caesars) and for those who would overthrow them (revolutionary movements have always used codes to protect themselves from the authorities they sought to dethrone).
But WWII ushered in a new era, in which encryption (and attempts to break it) went digital, as Alan Turing and the codebreakers of Bletchley Park turned their efforts to a computer-aided mathematics of scrambling and descrambling. In the decades that followed, a modern form of encryption emerged, one that was powerful beyond the wildest dreams of the Caesars and their revolutionary adversaries.
Modern, computerized encryption can scramble data to the point where it is literally unscramblable by an unauthorized party. In the eyeblink moment between you pressing the camera button on your phone and the resulting image being saved to its mass storage, the bits that make up that image are scrambled so thoroughly that even if every hydrogen atom in the universe were made into a computer, and even if all those computers were put to work guessing at the key, we would run out of time and universe before we ran out of keys.
Even futuristic, experimental technologies like quantum computing that may revolutionize codebreaking are also revolutionizing scrambling itself:
https://signal.org/blog/pqxdh/
The history of encryption is seriously fraught. Until the early 1990s, the NSA classed working encryption as a munition and banned civilian access to a whole branch of mathematics. It wasn't until Cindy Cohn – then a lawyer for the Electronic Frontier Foundation, now its executive director – convinced a court that the First Amendment protected the right to publish computer code that we were all able to gain access to this essential technology, which today safeguards your messages, files, banking transactions, and the software updates for your car's brakes, your pacemaker, and the informatics on airplanes. Cohn has announced her retirement from EFF in 2026, and while she will be sorely missed, we do have her memoir, Privacy's Defender, to look forward to:
https://mitpress.mit.edu/9780262051248/privacys-defender/
The legalization of encryption was a starting gun for the internet itself, as true information security entered the picture and pervaded every part of service design. Every security crisis, every scandal (e.g. Snowden), jolted the effort to encrypt the internet forward, and in this way, much of the internet lurched into a state we can call "encrypted by default."
But even as this privacy-preserving technology was perfected and made ubiquitous, something weird and contradictory happened: mass surveillance also took off online. The ad-tech industry – and its handmaidens, the data-broker industry – rigged the game so that our private activities were only encrypted in such a way as to defend their privacy, but not ours. Our data is encrypted in transit to the servers we interact with, and when it is at rest on those servers' mass storage devices, but it is not encrypted in a way that prevents companies from data-mining it, or decrypting it and selling it on or giving it away or combining it with surveillance data purchased or traded from others.
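To make that distinction concrete, here's a toy sketch (the cipher below is an illustrative hash-based keystream, NOT real cryptography, and every key name is made up): whether your data is "encrypted" matters far less than who holds the key.

```python
# Toy illustration (NOT real cryptography) of the "who holds the key" distinction.
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR plaintext with a keystream derived from the key (toy only)."""
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR keystream ciphers decrypt by re-encrypting

message = b"my private search history"

# "Encrypted at rest," platform-style: the *server* generates and keeps the
# key, so it can decrypt and mine your data whenever it likes.
server_key = b"key-held-by-the-platform"
stored = toy_encrypt(server_key, message)
assert toy_decrypt(server_key, stored) == message  # the platform reads it freely

# End-to-end: only the *user* holds the key. The same server, holding only
# `stored_e2e`, has nothing it can mine, sell, or hand to a data broker.
user_key = b"key-that-never-leaves-your-device"
stored_e2e = toy_encrypt(user_key, message)
assert toy_decrypt(user_key, stored_e2e) == message
```

Both blobs are "encrypted," and both would satisfy a checkbox audit. Only the second one protects you from the company storing it.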
This isn't an inevitability: it's a choice. The ubiquity of surveillance in the age of encryption is a policy choice. The reason companies don't encrypt our data so that they can't use it against us is that they don't have to. Congress hasn't updated American consumer privacy law since 1988, when it passed a law that prohibits video store clerks from disclosing our VHS rentals:
https://pluralistic.net/2025/02/20/privacy-first-second-third/#malvertising
Why hasn't Congress updated our privacy rights since Die Hard was in theaters? Because American cops and spies love commercial internet surveillance. Tech companies and data brokers are a source of fine-grained, off-the-books, warrantless surveillance data that the American state is totally addicted to. There is no difference between "commercial surveillance" and "government surveillance" – they are a fused symbiote and neither could survive without the other:
https://pluralistic.net/2021/04/13/public-interest-pharma/#axciom
Governments have hated encryption since the Clinton era, and have been attempting to subvert it since computers came in beige boxes and modems screamed in agony every time you tried to look at the internet:
It's no mystery why we don't have federal bans on facial recognition – if we did, ICE wouldn't be able to nonconsensually, warrantlessly steal your face and store it for 15 years (at least):
Why did the EU allow Ireland to facilitate mass surveillance for a decade after the GDPR's passage? Because European authorities also hate encryption and say that it is a "totally erroneous perception that it is everyone's civil liberty to communicate on encrypted messaging services":
https://www.eff.org/deeplinks/2025/09/chat-control-back-menu-eu-it-still-must-be-stopped-0
The internet could be the most privacy-preserving communications medium in history. Instead, it has ushered in an era of nightmarish surveillance. This isn't a technology problem. It's a policy problem. Criminals spy on us online because our governments wanted to spy on us online, so they let corporations spy on us online.
Imagine what the internet would look like today if, in its early regulatory moments, our elected representatives had demanded privacy, rather than trying to ban it. Sure, some corporations would have spied on us anyway, and criminals would have done their best to compromise our privacy, but criminals and rogue firms wouldn't have been able to attract capital to engage in conduct that was likely to give rise to massive fines and criminal prosecutions for violating the privacy laws Congress never bothered to write for us.
Think of it this way: sure, there are e-commerce sites that are just scams, that take your money and never ship you goods. Those sites don't have IPOs, they're not listed on stock exchanges, and they get shut down or blocked. They exist in the shadows, not in the light. Imagine if that was the kind of commercial surveillance industry we'd gotten: marginal, shadowy, illegal, forever on the run. There would still have been some bad privacy invasions, but these would have been crimes, not Harvard Business Review case-studies:
https://www.hbs.edu/faculty/Pages/item.aspx?num=51748
(And before you email me about that one time Paypal closed your account and kept your money, or Ebay wouldn't give you a refund: sure, that's right, those things suck, and the companies should face penalties for them. But their business model isn't stealing money from their customers, while Google, Meta and Apple's business model is 100% stealing data from theirs.)
Instead of treating data theft the way we treat monetary theft, we're now increasingly treating monetary theft like data theft. The legislative formalization of cryptocurrency will now allow companies to steal your money with the same blissful lack of consequence as Google faced for stealing your private information:
https://www.citationneeded.news/issue-89/
We're rounding the corner on a decade since the beginning of the fight against Big Tech, and the efforts to cut it down to size. These keep foundering on the political economy of crushing an all-powerful monopolist – namely, that it is all-powerful.
You can't tax Big Tech:
You can't break it up:
https://www.thebignewsletter.com/p/a-judge-lets-google-get-away-with
Donald Trump has made it clear that he'd rather let Putin annex Brussels than allow the EU to fine tech companies:
https://www.cnn.com/2025/09/05/tech/google-eu-antitrust-fine-adtech
Breakups, taxes and fines are all forms of redistribution, which seek to address the harms of monopoly after the monopoly has been formed. The failure to make privacy protections as inviolable as financial protections is a missed opportunity for predistribution. Bans on data collection, mining, and sale would have prevented these monopolies from forming in the first place. Predistribution is far more effective than redistribution:
https://jacobin.com/2025/10/predistribution-welfare-state-inequality-class
It's amazing that the privacy-invading internet has somehow beaten the encrypted internet. It's crazy that the only entity that will promise to encrypt your data beyond the reach of a data broker, an ad-tech giant, or a government is a ransomware criminal, who will also encrypt your data beyond your reach:
https://www.wired.com/story/state-of-ransomware-2024/
It didn't have to be this way. This wasn't a technology failure. It wasn't a commercial failure. It was a policy failure. Since the 1990s, whenever push came to shove, governments decided that they would rather preserve their ability to spy on us than keep us safe from private spying.
Hey look at this (permalink)

- Animal Costumes from the 1862 Fairytale Ball of the Jung-München Artist’s Association https://publicdomainreview.org/collection/maskenfest/
- New physical attacks are quickly diluting secure enclave defenses from Nvidia, AMD, and Intel https://arstechnica.com/security/2025/10/new-physical-attacks-are-quickly-diluting-secure-enclave-defenses-from-nvidia-amd-and-intel/
- If Musk was broke, he’d just be another asshole with bad ideas https://www.smh.com.au/culture/books/if-musk-was-broke-he-d-just-be-another-asshole-with-bad-ideas-cory-doctorow-20251023-p5n4u6.html
- FCC Republicans force prisoners and families to pay more for phone calls https://arstechnica.com/tech-policy/2025/10/fcc-republicans-force-prisoners-and-families-to-pay-more-for-phone-calls/
- norecognition https://github.com/hevnsnt/norecognition
Object permanence (permalink)
#20yrsago Baen Books to launch online sf mag edited by Eric Flint https://web.archive.org/web/20060702073036/http://www.scifi.com/scifiwire/?id=33090
#15yrsago Dirty debt collectors frightened victims with fake “sheriffs,” “courtroom,” “judges” https://web.archive.org/web/20101106001140/https://www.thepittsburghchannel.com/r/25569199/detail.html
#15yrsago TSA demands testicular fondling as an alternative to naked scanners https://www.theatlantic.com/national/archive/2010/10/for-the-first-time-the-tsa-meets-resistance/65390/
#15yrsago Brain-imaging and neurorealism: what does it mean to “feel something” in your brain? https://www.badscience.net/2010/10/neuro-realism/
#15yrsago Animaniacs vs Newt Gingrich — the lost episode https://www2.cruzio.com/~keeper/UAdearmr.html
#15yrsago Canada’s telcoms regulator gives bloated, throttling incumbent the keys to the kingdom https://web.archive.org/web/20101031090505/http://www.theglobeandmail.com/news/technology/globe-on-technology/crtc-ruling-handcuffs-competitive-market-teksavvy/article1778211/
#15yrsago Rent-seeking in the 21st century: where eBay, free software, Foxconn and the MPAA come from https://web.archive.org/web/20101102151059/http://radar.oreilly.com/2010/10/points-of-control-rent-extract.html
#10yrsago Patent trolls: The Eastern District of Texas must die so that we all may live https://www.eff.org/deeplinks/2015/10/its-time-federal-circuit-shut-down-eastern-district-texas
#10yrsago Anonymous threatens to dump real names of 1,000 KKK members https://www.nbcnews.com/news/us-news/anonymous-hackers-threaten-release-names-ku-klux-klan-members-n453246
#10yrsago UK govt: no crypto back doors, just repeal the laws of mathematics https://betanews.com/2015/10/28/uk-government-says-app-developers-wont-be-forced-to-implement-backdoors/
#10yrsago David Cameron promises law to force ISPs to censor a secret blacklist https://web.archive.org/web/20151029155602/http://www.wired.co.uk/news/archive/2015-10/28/cameron-porn-filter-law-net-neutrality
#10yrsago EU Parliament votes to drop criminal charges and grant asylum to Snowden https://www.theguardian.com/us-news/2015/oct/29/edward-snowden-eu-parliament-vote-extradition
#10yrsago Thanks to the meth wars, cold medicine’s effective ingredient isn’t https://www.forbes.com/sites/daviddisalvo/2015/10/26/the-popular-over-the-counter-cold-medicine-that-science-says-doesnt-work/
#10yrsago UK police & spies will have warrantless access to your browsing history https://www.telegraph.co.uk/news/uknews/crime/11964655/Police-to-be-granted-powers-to-view-your-internet-history.html
#10yrsago NM judge believes daily prison rape is a fit punishment for nearly all defendants https://web.archive.org/web/20151030003120/http://www.ijreview.com/2015/10/458319-judge-calls-18-year-old-a-b-but-shes-only-trying-to-help/
#10yrsago Charity with US Characteristics: how our oligarchs buy their way out of criticism https://www.csmonitor.com/Business/Robert-Reich/2015/0408/How-the-Koch-brothers-and-the-super-rich-are-buying-their-way-out-of-criticism
#10yrsago Christ, what an asshole. https://memex.craphound.com/2015/10/30/christ-what-an-asshole-2/
#5yrsago Facebook loses users, makes more money https://pluralistic.net/2020/10/30/rigged-game/#tails-you-lose
#5yrsago Sue your medical bully https://pluralistic.net/2020/10/29/victim-complex/#i-object
#5yrsago Violent cops' deadly victim complex https://pluralistic.net/2020/10/29/victim-complex/#marsys-law
#5yrsago Amazon says only corporations own property https://pluralistic.net/2020/10/29/victim-complex/#digital-feudalism
#1yrago Conspiratorialism as a material phenomenon https://pluralistic.net/2024/10/29/hobbesian-slop/#cui-bono
#1yrago AI's "human in the loop" isn't https://pluralistic.net/2024/10/30/a-neck-in-a-noose/#is-also-a-human-in-the-loop
Upcoming appearances (permalink)

- Virtual: Peoples and Things with danah boyd and Lee Vinsel, Nov 3 https://www.youtube.com/live/WjFvGPLpskk
- Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469
- Miami: Cloudfest, Nov 6 https://www.cloudfest.com/usa/
- Burbank: Burbank Book Festival, Nov 8 https://www.burbankbookfestival.com/
- Lisbon: A post-American, enshittification-resistant internet, with Rabble (Web Summit), Nov 12 https://websummit.com/sessions/lis25/92f47bc9-ca60-4997-bef3-006735b1f9c5/a-post-american-enshittification-resistant-internet/
- Cardiff: Hay Festival After Hours, Nov 13 https://www.hayfestival.com/c-203-hay-festival-after-hours.aspx
- Oxford: Enshittification and Extraction: The Internet Sucks Now with Tim Wu (Oxford Internet Institute), Nov 14 https://www.oii.ox.ac.uk/news-events/events/enshittification-and-extraction-the-internet-sucks-now/
- London: Enshittification with Sarah Wynn-Williams and Chris Morris, Nov 15 https://www.barbican.org.uk/whats-on/2025/event/cory-doctorow-with-sarah-wynn-williams
- London: Downstream IRL with Aaron Bastani (Novara Media), Nov 17 https://dice.fm/partner/tickets/event/oen5rr-downstream-irl-aaron-bastani-in-conversation-with-cory-doctorow-17th-nov-earth-london-tickets
- London: Enshittification with Carole Cadwalladr (Frontline Club), Nov 18 https://www.eventbrite.co.uk/e/in-conversation-enshittification-tickets-1785553983029
- Virtual: Enshittification with Vass Bednar (Vancouver Public Library), Nov 21 https://www.crowdcast.io/@bclibraries-present
- Seattle: Neuroscience, AI and Society (University of Washington), Dec 4 https://www.eventbrite.com/e/neuroscience-ai-and-society-cory-doctorow-tickets-1735371255139
- Madison, CT: Enshittification at RJ Julia, Dec 8 https://rjjulia.com/event/2025-12-08/cory-doctorow-enshittification
Recent appearances (permalink)
- Enshittification and the Rot Economy with Ed Zitron (Clarion West) https://www.youtube.com/watch?v=Tz71pIWbFyc
- Amanpour & Co (New Yorker Radio Hour) https://www.youtube.com/watch?v=I8l1uSb0LZg
- Enshittification is Not Inevitable (Team Human) https://www.teamhuman.fm/episodes/339-cory-doctorow-enshittification-is-not-inevitable
- The Great Enshittening (The Gray Area) https://www.reddit.com/r/philosophypodcasts/comments/1obghu7/the_gray_area_the_great_enshittening_10202025/
- Enshittification (Smart Cookies) https://www.youtube.com/watch?v=-BoORwEPlQ0
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.
-
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
2025-10-30T15:12:32+00:00
“I have a great and considerable fear that people will freeze to death in their homes this winter if we do not turn this around quickly.”
2025-10-29T16:03:25+00:00
how many cumulative hours will i spend infuriated at the loose connection on my keyboard before i finally spend the ten minutes to resolder it? stay tuned
Wed, 29 Oct 2025 14:00:07 +0000
Today's links
- When AI prophecy fails: Hating workers is a hell of a drug.
- Hey look at this: Delights to delectate.
- Object permanence: SCOTUS lets the FBI kidnap Americans; Inequality perverts justice; Free the McFlurry!
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
When AI prophecy fails (permalink)
Amazon made $35 billion in profit last year, so they're celebrating by laying off 14,000 workers (a number they say will rise to 30,000). This is the kind of thing that Wall Street loves, and this layoff comes after a string of pronouncements from Amazon CEO Andy Jassy about how AI is going to let them fire tons of workers.
That's the AI story, after all. It's not about making workers more productive or creative. The only way to recoup the $700 billion in capital expenditure to date (to say nothing of AI companies' rather fanciful coming capex commitments) is by displacing workers – a lot of workers. Bain & Co say the sector needs to be grossing $2 trillion by 2030 in order to break even, which is more than the combined grosses of Amazon, Google, Microsoft, Apple, Nvidia and Meta:
Every investor who has put a nickel into that $700b capex is counting on bosses firing a lot of workers and replacing them with AI. Amazon is also counting on people buying a lot of AI from it after firing those workers. The company has sunk $120b into AI this year alone.
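For scale, the Bain figure can be sanity-checked against those firms' own top lines. The revenue figures below are approximate FY2024 annual revenues from public filings, rounded to the nearest billion USD; treat them as ballpark, not gospel.

```python
# Rough sanity check: Bain's ~$2T/year AI breakeven target vs the combined
# revenues of the biggest tech firms. Figures are approximate FY2024 annual
# revenues (billions of USD, rounded) from public filings.
approx_fy2024_revenue_bn = {
    "Amazon": 638,
    "Apple": 391,
    "Alphabet (Google)": 350,
    "Microsoft": 245,
    "Meta": 165,
    "Nvidia": 130,
}

combined = sum(approx_fy2024_revenue_bn.values())  # roughly $1.9 trillion
print(f"combined: ~${combined}b vs Bain breakeven target: $2,000b")
assert combined < 2000  # the $2T target exceeds all six giants combined
```

In other words, the AI sector would have to generate more annual revenue than all six companies put together currently do, across all their lines of business.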
There's just one problem: AI can't do our jobs. Oh, sure, an AI salesman can convince your boss to fire you and replace you with an AI that can't do your job, but that's the world's easiest sales-call. Your boss is relentlessly horny for firing you:
https://pluralistic.net/2025/03/18/asbestos-in-the-walls/#government-by-spicy-autocomplete
But there's a lot of AI buyers' remorse. 95% of AI deployments have either produced no return on capital, or have been money-losing:
AI has "no significant impact on workers’ earnings, recorded hours, or wages":
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
What's Amazon to do? How do they convince you to buy enough AI to justify that $180b in capital expenditure? Somehow, they have to convince you that an AI can do your workers' jobs. One way to sell that pitch is to fire a ton of Amazon workers and announce that their jobs have been given to a chatbot. This isn't a production strategy, it's a marketing strategy – it's Amazon deliberately taking an efficiency loss by firing workers in a desperate bid to convince you that you can fire your workers:
https://pluralistic.net/2025/08/05/ex-princes-of-labor/#hyper-criti-hype
Amazon does use a lot of AI in its production, of course. AI is the "digital whip" that Amazon uses to allow itself to control drivers who (nominally) work for subcontractors. This lets Amazon force workers into unsafe labor practices that endanger them and the people they share the roads with, while offloading responsibility onto "independent delivery service" operators and the drivers themselves:
https://pluralistic.net/2025/10/23/traveling-salesman-solution/#pee-bottles
Amazon leadership has announced that AI has replaced or will shortly replace its coders as well. But chatbots can't do software engineering – sure, they can write code, but writing code is only a small part of software engineering. An engineer's job is to maintain a very deep and wide context window, one that considers how each piece of code interacts with the software that executes before it and after it, and with the systems that feed into it and accept its output.
There's one thing AI struggles with beyond all else: maintaining context. In a standard transformer, every increase in the context you demand drives a quadratic increase in computational expense. AI has no object permanence. It doesn't know where it's been and it doesn't know where it's going. It can't remember how many fingers it's drawn, so it doesn't know when to stop. It can write a routine, but it can't engineer a system.
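A rough sense of the cost: standard transformer self-attention compares every token with every other token, so attention compute grows with the square of the context length. The `d_model` value and constant factor below are illustrative, not any specific model's.

```python
# Minimal sketch: self-attention compute scales with the square of context
# length. d_model and the 2x factor are illustrative assumptions.
def attention_flops(context_len: int, d_model: int = 4096) -> int:
    # QK^T score matrix plus attention-weighted values: two n x n x d products
    return 2 * context_len * context_len * d_model

base = attention_flops(1_000)
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens: {attention_flops(n) / base:,.0f}x the attention compute")
# 10x the context costs ~100x the attention compute; 100x costs ~10,000x.
```

That scaling is why "just give it the whole codebase as context" is not a plan: the deep, wide context a software engineer carries around for free is exactly the thing that gets expensive fastest for the machine.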
When tech bosses dream of firing coders and replacing them with AI, they're fantasizing about getting rid of their highest-paid, most self-assured workers and transforming the insecure junior programmers left over into AI babysitters, whose job is to evaluate and integrate machine-generated code at a speed that no one – much less a junior programmer – can sustain while doing a careful and competent job:
https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech-f39
The jobs that can be replaced with AI are the jobs that companies already gave up on doing well. If you've already outsourced your customer service to an overseas call-center whose workers are not empowered to solve any of your customers' problems, why not fire those workers and replace them with chatbots? The chatbots also can't solve anyone's problems, and they're even cheaper than overseas call-center workers:
https://pluralistic.net/2025/08/06/unmerchantable-substitute-goods/#customer-disservice
Amazon CEO Andy Jassy wrote that he "is convinced" that firing workers will make the company "AI ready," but it's not clear what he means by that. Does he mean that the mass firings will save money while maintaining quality, or that mass firings will help Amazon recoup the $180,000,000,000 it spent on AI this year?
Bosses really want AI to work, because they really, really want to fire you. As Allison Morrow writes for CNN, bosses are firing workers in anticipation of the savings AI will produce…someday:
https://www.cnn.com/2025/10/28/business/what-amazons-mass-layoffs-are-really-about
All this can feel improbable. Would bosses really fire workers on the promise of eventual AI replacements, leaving themselves with big bills for AI and falling revenues as the absence of those workers is felt?
The answer is a resounding yes. The AI industry has done such a good job of convincing bosses that AI can do their workers' jobs that each boss for whom AI fails assumes that they've done something wrong. This is a familiar dynamic in con-jobs.
The people who get sucked into pyramid schemes all think that they are the only ones failing to sell any of the "merchandise" they shell out every month to buy, and that no one else has a garage full of unsold leggings or essential oils. They don't know that, to a first approximation, the MLM industry has no sales, and relies entirely on "entrepreneurs" lying to themselves and one another about the demand for their wares, paying out of their own pocket for goods that no one wants.
The MLM industry doesn't just rely on this deception – they capitalize on it, by selling those self-flagellating "entrepreneurs" all kinds of expensive training courses that promise to help them overcome the personal defects that stop them from doing as well as all those desperate liars boasting about their incredible MLM sales success:
https://pluralistic.net/2025/05/05/free-enterprise-system/#amway-or-the-highway
The AI industry has its own version of those sales coaching courses – there's a whole secondary industry of management consultancies and business schools offering high-ticket "continuing education" courses to bosses who think that the only reason the AI they've purchased isn't saving them money is that they're doing AI wrong.
Amazon really needs AI to work. Last week, Ed Zitron published an extensive analysis of leaked documents showing how much Amazon is making from AI companies who are buying cloud services from it. His conclusion? Take away AI and Amazon's cloud division is in steep decline:
https://www.wheresyoured.at/costs/
What's more, those big-money AI customers – like Anthropic – are losing tens of billions of dollars per year, relying on investors to keep handing them money to incinerate. Amazon needs bosses to believe they can fire workers and replace them with AI, because that way, investors will keep giving Anthropic the money it needs to keep Amazon in the black.
Amazon firing 30,000 workers in the run-up to Christmas is a great milestone in enshittification. America's K-shaped recovery means that nearly all of the consumption is coming from the wealthiest American households, and these households overwhelmingly subscribe to Prime. Prime-subscribing households do not comparison shop. After all, they've already prepaid for a year's shipping in advance. These households start and end nearly every shopping trip in the Amazon app.
If Amazon fires 30,000 workers and tanks its logistics network and e-commerce systems, if it allows itself to drown in spam and scam reviews, if it misses its delivery windows and messes up its returns, that will be our problem, not Amazon's. In a world of commerce where Amazon's predatory pricing, lock-in, and serial acquisitions have left us with few alternatives, Amazon can truly be "too big to care":
https://www.theguardian.com/technology/2025/oct/05/way-past-its-prime-how-did-amazon-get-so-rubbish
From that enviable position, Amazon can afford to enshittify its services in order to sell the big AI lie. Killing 30,000 jobs is a small price to pay if it buys them a few months before a reckoning for its wild AI overspending, keeping the AI grift alive for just a little longer.
(Image: Cryteria, CC BY 3.0, modified)
Hey look at this (permalink)

- Eugene Debs and All Of Us https://www.hamiltonnolan.com/p/eugene-debs-and-all-of-us
- US Business Cycles 1954-2020 https://www.youtube.com/watch?v=vXRC3RrngcI
- Ed Zitron Gets Paid to Love AI. He Also Gets Paid to Hate AI https://web.archive.org/web/20251029140249/https://www.wired.com/story/ai-pr-ed-zitron-profile/
- Worried About AI Monopoly? Embrace Copyright’s Limits https://www.lawfaremedia.org/article/worried-about-ai-monopoly–embrace-copyright-s-limits
Object permanence (permalink)
#10yrsago Librarian of Congress puts impossible conditions on your right to jailbreak your 3D printer https://michaelweinberg.org/post/132021560865/unlocking-3d-printers-ruling-is-a-mess
#10yrsago The two brilliant, prescient 20th century science fiction novels you should read this election season https://memex.craphound.com/2015/10/28/the-two-brilliant-prescient-20th-century-science-fiction-novels-you-should-read-this-election-season/
#10yrsago Hundreds of city police license plate cams are insecure and can be watched by anyone https://www.eff.org/deeplinks/2015/10/license-plate-readers-exposed-how-public-safety-agencies-responded-massive
#10yrsago Appeals court holds the FBI is allowed to kidnap and torture Americans outside US borders https://www.techdirt.com/2015/10/28/court-your-fourth-fifth-amendment-rights-no-longer-exist-if-you-leave-country/
#10yrsago South Carolina sheriff fires the school-cop who beat up a black girl at her desk https://www.theguardian.com/us-news/2015/oct/28/south-carolina-parents-speak-out-school-board
#10yrsago The more unequal your society is, the more your laws will favor the rich https://web.archive.org/web/20151028133814/http://america.aljazeera.com/opinions/2015/10/the-more-unequal-the-country-the-more-the-rich-rule.html
#5yrsago Trump abandons supporters to freeze https://pluralistic.net/2020/10/28/trumpcicles/#omaha
#5yrsago RIAA's war on youtube-dl https://pluralistic.net/2020/10/28/trumpcicles/#yt-dl
#1yrago The US Copyright Office frees the McFlurry https://pluralistic.net/2024/10/28/mcbroken/#my-milkshake-brings-all-the-lawyers-to-the-yard
Upcoming appearances (permalink)

- Miami: Enshittification at Books & Books, Nov 5
  https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469
- Miami: Cloudfest, Nov 6
  https://www.cloudfest.com/usa/
- Burbank: Burbank Book Festival, Nov 8
  https://www.burbankbookfestival.com/
- Lisbon: A post-American, enshittification-resistant internet, with Rabble (Web Summit), Nov 12
  https://websummit.com/sessions/lis25/92f47bc9-ca60-4997-bef3-006735b1f9c5/a-post-american-enshittification-resistant-internet/
- Cardiff: Hay Festival After Hours, Nov 13
  https://www.hayfestival.com/c-203-hay-festival-after-hours.aspx
- Oxford: Enshittification and Extraction: The Internet Sucks Now with Tim Wu (Oxford Internet Institute), Nov 14
  https://www.oii.ox.ac.uk/news-events/events/enshittification-and-extraction-the-internet-sucks-now/
- London: Enshittification with Sarah Wynn-Williams and Chris Morris, Nov 15
  https://www.barbican.org.uk/whats-on/2025/event/cory-doctorow-with-sarah-wynn-williams
- London: Downstream IRL with Aaron Bastani (Novara Media), Nov 17
  https://dice.fm/partner/tickets/event/oen5rr-downstream-irl-aaron-bastani-in-conversation-with-cory-doctorow-17th-nov-earth-london-tickets
- London: Enshittification with Carole Cadwalladr (Frontline Club), Nov 18
  https://www.eventbrite.co.uk/e/in-conversation-enshittification-tickets-1785553983029
- Virtual: Enshittification with Vass Bednar (Vancouver Public Library), Nov 21
  https://www.crowdcast.io/@bclibraries-present
- Seattle: Neuroscience, AI and Society (University of Washington), Dec 4
  https://compneuro.washington.edu/news-and-events/neuroscience-ai-and-society/
- Madison, CT: Enshittification at RJ Julia, Dec 8
  https://rjjulia.com/event/2025-12-08/cory-doctorow-enshittification
Recent appearances (permalink)
- Enshittification and the Rot Economy with Ed Zitron (Clarion West)
  https://www.youtube.com/watch?v=Tz71pIWbFyc
- Amanpour & Co (New Yorker Radio Hour)
  https://www.youtube.com/watch?v=I8l1uSb0LZg
- Enshittification is Not Inevitable (Team Human)
  https://www.teamhuman.fm/episodes/339-cory-doctorow-enshittification-is-not-inevitable
- The Great Enshittening (The Gray Area)
  https://www.reddit.com/r/philosophypodcasts/comments/1obghu7/the_gray_area_the_great_enshittening_10202025/
- Enshittification (Smart Cookies)
  https://www.youtube.com/watch?v=-BoORwEPlQ0
Latest books (permalink)
- "Canny Valley": a limited edition collection of the collages I create for Pluralistic, self-published, September 2025
- "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7, 2025
  https://us.macmillan.com/books/9780374619329/enshittification/
- "Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
- "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
- "The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
- "The Internet Con": a nonfiction book about interoperability and Big Tech, Verso, September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com).
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe, 2022 (https://chokepointcapitalism.com).
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
- "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026
- "The Memex Method," Farrar, Straus, Giroux, 2026
- "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.
- A Little Brother short story about DIY insulin. PLANNING.

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
2025-10-28T21:58:35+00:00
Published an issue of Citation Needed:
Issue 95 – The pardon was the payoff
2025-10-28T16:33:42+00:00
Tue, 28 Oct 2025 15:16:00 +0000
Today's links
- Raymond Biesinger's "9 Times My Work Has Been Ripped Off": A self-defense guide for creative workers.
- Hey look at this: Delights to delectate.
- Object permanence: Gingerbread Phantom Manor; Orwell estate censors 1984; The Abaddon; Ferris wheel fine dining.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Raymond Biesinger's "9 Times My Work Has Been Ripped Off" (permalink)
Raymond Biesinger's new book 9 Times My Work Has Been Ripped Off is a masterclass in how creative workers can transform the endless, low-grade seething about the endless ripoffs of the industry into something productive and even profound:
https://drawnandquarterly.com/books/9-times-my-work-has-been-ripped-off/
Biesinger is an iconic designer and illustrator whose instantly recognizable style and entrepreneurial hustle have allowed him to achieve the coveted and elusive status of full-time, economically secure(ish) artist. But over the years – and even in recent times – Biesinger has found himself in the all-too-common and endlessly frustrating circumstance of being owed money by people who refuse to pay it. The sums involved are typically small by the standards of corporate budgets, but it's what Biesinger calls "needed money" – money that makes a huge difference to the life of the artist to whom it is owed.
Speaking from personal experience, getting stiffed is one of the most embittering things that can happen to a creative worker – or any worker (as the tradespeople who've had their wages stolen by Trump can attest). I remember every time I got shafted by a client and often find my mind returning to those humiliating, frustrating moments.
There was the "friend" who hired me to do some work and then just decided never to pay me the $150 we agreed on. There was the university prof who asked me to speak to his class and promised me reimbursement for the taxi and then stiffed me for 20 quid. There was the international magazine who commissioned a short story from me, accepted it, then tried to cram a bullshit contract down my throat and refused to discuss any modifications to its terrible terms, finally stiffing me for the $500 they owed me.
There was the largest publisher in the world, who commissioned a novella from me for an anthology, promising me tens of thousands of dollars, who accepted the novella, and then "discovered" they hadn't ever finalized the contract for the anthology and canceled it, stiffing me in the process. The fact that I went on to sell that novella several times over, both in book form and as a graphic novel, and for film rights (twice!), making far more money in the process, doesn't make me any less angry about these fuckers who just screwed me without a second thought.
Objectively speaking, there is no reason for me to dwell on these little humiliations. It doesn't do me any good. It doesn't make the dickheads who screwed me feel bad. It is, as the proverb goes, "drinking poison and hoping your enemy dies." But I can't help it.
Neither, it seems, can Biesinger. But unlike me, Biesinger has found an incredibly productive – and inspiring – way to deal with that otherwise pointless seething. In 9 Times My Work Has Been Ripped Off, Biesinger reflects on the nine titular ripoffs, telling the story of how he got ripped off, what he did to get his own back, how he felt about it at the time, and how he feels about it in retrospect.
The book's subtitle ("An informal self-defense guide for independent creatives") sets up this book as a kind of manual for navigating these situations in your own life, and there's plenty of that in here – successes and failures for the rest of us to learn from. These stories are often very satisfying, as the little guy gets the justice he deserves. But the most interesting part of this book is Biesinger's reflections on the meaning of the different ripoffs he confronted, and how they relate to his own work.
Because – as Biesinger will tell you – he rips stuff off, too. All artists do. "Good artists copy; great artists steal." (said Picasso) (who was ripping off Faulkner) (or Stravinsky) (or Eliot) (or Trilling). He carefully parses through the muddied ethics of lifting elements for collage, for inspiration, and just because you forgot that you weren't supposed to. Much of Biesinger's early work was collage, and (as a collagist myself), I know you can't do that work without developing complicated feelings about creative ownership.
Biesinger also straddles a line between commercial illustrator – producing commissioned pieces to order for magazine and advertising art directors – and fine artist, making "artistic" pieces for his own satisfaction and selling these as prints. While he's proud of all his work, it's clear that how he relates to his own work depends a great deal on whether it falls into the former category or the latter. Part of that difference is a blanket prohibition on licensing his "artistic" pieces for commercial work.
This just adds to the moral complexity of Biesinger's deliberations: when an extremely well-funded charity misappropriates an "artistic" piece to accompany an exemplary article on women's health advocacy, he wrestles with a whole suite of concerns and mitigations – the "charity"'s reputation as a money-laundry for a wealthy plutocrat, his support for the article, his principle about not licensing his "artistic" work. It's typical of the kind of nuance that Biesinger brings to these chapters.
Also fascinating is Biesinger's chapter about a fan who solicited artistic advice from him, but went on to produce a portfolio of uncredited knock-offs of Biesinger's own signature style. Biesinger describes how he blasted this young artist for abusing his goodwill and unjustly profiting from Biesinger's own work developing his style, and then, in later years, repented of his angry outburst. In a delightful coda, Biesinger recounts how he looked up this artist years later, only to discover that he had matured into a talented, original, successful and ambitious creator. When Biesinger emailed the artist to apologize for his furious letter, the other artist replied that Biesinger's blast had been the kick in the pants he'd needed to finally figure out his own style, and he credits his later success to Biesinger's fury.
At the root of all nine tales of ripoffs is the inadequacy and/or inappropriateness of the legal system as a tool for redress when an independent creator is ripped off. In the case of commercial ripoffs – by agencies large and small, by fly-by-night concert promoters, by gallerists peddling unauthorized reproductions – the sums involved are usually far too small to involve lawyers or the courts. In the case of disputes with other artists – like the copyist who bit Biesinger's style – the law is (rightly) silent, because styles are not copyrightable.
In telling these nine tales, Biesinger beautifully illustrates the limitations of copyright as the sole regulator of creative activity. Copyright law (and its cousin, contract law) might be suitable for mediating commercial transactions between creative workers and businesses, but it's utterly unsuitable for other kinds of interactions, including interactions between artistic peers, or between artists and creators working in related disciplines. The most important thing that Biesinger is doing in this book is setting out a continuum of relationships and detailing many of the different tools available to creators to resolve disputes arising at different points on that continuum.
Given Biesinger's justly deserved fame as an illustrator, this is also a beautiful book, published in pocket-sized trim by Drawn & Quarterly, one of the world's great indie comics presses. The many, many illustrations in this small volume don't just bring the subject matter to life – they're artistic delights in their own right. It's a reminder of how wonderful the "art" part of all this stuff is, and how that complicates the all too familiar labor issues at the book's core.
Hey look at this (permalink)

- Twitter, Free Speech Absolutism, and Adoxastic Enshittification
  https://contemporaryrhetoric.com/wp-content/uploads/2025/10/Alford_Carter_15_3_1.pdf
- This Wreckage Courtesy of the Enshittification Administration: Notes On Late-State Trumpism
  https://www.meditationsinanemergency.com/this-wreckage-courtesy-of-the-enshittification-administration-notes-on-late-state-trumpism/
- Air filters have DRM now
  https://www.youtube.com/watch?v=LCu_n2Nddu0
- Washington’s Battery Strategy Is Upside Down
  https://christopherchico.substack.com/p/washingtons-battery-strategy-is-upside
- The New York Times is wrong about the electoral value of moderation
  https://www.gelliottmorris.com/p/the-new-york-times-makes-several
Object permanence (permalink)
#20yrsago Build a gingerbread Phantom Manor from Disneyland Paris https://www.haunteddimensions.raykeim.com/index506.html
#15yrsago HOWTO explain the Internet to a Dickensian street urchin https://www.fastcompany.com/1697711/flowchart-understanding-web-fans-charles-dickens#self
#10yrsago Librarian of Congress grants limited DRM-breaking rights for cars, games, phones, tablets, and remixers https://memex.craphound.com/2015/10/27/librarian-of-congress-grants-limited-drm-breaking-rights-for-cars-games-phones-tablets-and-remixers/
#10yrsago EU, worn down by telecoms lobbyists, passes brutal net discrimination rules https://arstechnica.com/tech-policy/2015/10/net-neutrality-eu-votes-in-favour-of-internet-fast-lanes-and-slow-lanes/
#10yrsago Ministry of Irony: Orwell estate tries to censor mentions of the number 1984 https://torrentfreak.com/orwell-estate-sends-copyright-takedown-over-the-number-1984-151027/
#10yrsago Pirates are the best customers: just sell good stuff at a reasonable price in a timely fashion https://www.youtube.com/watch?v=XXxzWgl3nHs
#10yrsago Elite “wealth managers”: Renfields to the one percent bloodsuckers https://www.theatlantic.com/business/archive/2015/10/elite-wealth-management/410842/
#10yrsago The Abaddon: graphic novel based loosely on Sartre’s No Exit https://memex.craphound.com/2015/10/27/the-abaddon-graphic-novel-based-loosely-on-sartres-no-exit/
#5yrsago Surveillance startup protected sexual harassers https://pluralistic.net/2020/10/27/peads-r-us/#Verkada
#5yrsago Comcast v Comcast https://pluralistic.net/2020/10/27/peads-r-us/#diseconomies-of-scale
#5yrsago The president's extraordinary powers https://pluralistic.net/2020/10/27/peads-r-us/#peads
#5yrsago Monopolies Suck https://pluralistic.net/2020/10/27/peads-r-us/#sally-hubbard
#5yrsago Ferris wheel fine dining https://pluralistic.net/2020/10/27/peads-r-us/#ferris-dining
Mon, 27 Oct 2025 13:41:21 +0000
Today's links
- Shake Shack wants you to shit yourself to death: The bifurcation of justice is always and ever a prelude to fascism.
- Hey look at this: Delights to delectate.
- Object permanence: CCTV botnets; CEOs and random chance; Foxconn v Trump.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Shake Shack wants you to shit yourself to death (permalink)
Shake Shack has changed the terms of service for its app, adding a "binding arbitration" clause that bans you from suing the company or joining a class action suit against it:
https://shakeshack.com/terms-conditions#/
As Luke Goldstein writes for Jacobin, the ToS update is part of a wave of companies, including fast-food companies, that are taking away their customers' right to seek redress in the courts, forcing them to pursue justice with a private "arbitrator" who works for the company that harmed them:
https://jacobin.com/2025/10/shake-shack-arbitration-law-terms-service/
Now, obviously you don't have to agree to terms of service just to walk into a Shake Shack and order a burger (yet), but Shake Shack, like other fast food companies, is on a full-court press to corral you into using its app to order your food, even if you're picking up that food from the counter and eating it in the restaurant. This is an easy trick to pull off – all Shake Shack needs to do is starve its cash-registers of personnel, creating untenably long lines for people attempting to order from a human.
Forcing diners to use an app has other advantages as well. Remember, an app is just a website skinned in the right kind of IP to make it a felony to add an ad-blocker to it, which means that whenever you use an app instead of a website, you are vulnerable to deep and ongoing commercial surveillance and can be bombarded with ads without you having any recourse:
https://pluralistic.net/2024/01/24/everything-not-mandatory/#is-prohibited
That surveillance can be weaponized against you, through "surveillance pricing," which is when companies raise prices based on their estimation of your desperation, which they can infer from surveillance data. Surveillance pricing lets a company reach into your wallet and devalue your money – if you are charged $10 for a burger that costs the next person $5, that means your dollar is only worth $0.50:
https://pluralistic.net/2025/06/24/price-discrimination/
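The devaluation arithmetic above can be sketched as a quick calculation (a hypothetical illustration only; the function name is my own, and the $10-vs-$5 burger is the example from the text):

```python
def effective_dollar_value(list_price: float, your_price: float) -> float:
    """What $1 buys you, relative to a customer who pays the list price.

    If you're charged a personalized (surveillance) price while someone
    else pays the list price, the ratio is your dollar's effective worth.
    """
    return list_price / your_price

# The example from the text: a $10 burger that costs the next person $5
# means your dollar is only worth $0.50.
print(effective_dollar_value(5.0, 10.0))  # 0.5
```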
But beyond surveillance and price-gouging, app-based ordering offers corporations another way to screw you: they can force you into binding arbitration. Under binding arbitration, you "voluntarily" waive your right to have your grievances heard by a judge. Instead, the corporation hires a fake judge, called an "arbitrator," who hears your case and a rebuttal from the company that signs their paycheck, then decides who is guilty. It will not surprise you to learn that arbitrators overwhelmingly find in favor of their employers, and even when they rule in favor of a wronged customer, the penalties they impose on their bosses add up to little more than a wrist-slap.
This binding arbitration bullshit was illegal until the 2010s, when Antonin Scalia authored a string of binding arbitration decisions for the Supreme Court, opening the hellmouth for the mass imposition of arbitration on anyone that a business could stick an "I agree" button in front of:
https://brooklynworks.brooklaw.edu/cgi/viewcontent.cgi?article=1443&context=blr
A fundamental tenet of conservative doctrine is "incentives matter" – that's why they say we can't have universal healthcare (if going to the doctor is free, you will schedule frivolous doctor's visits) or food or housing assistance (unless your boss can threaten you with homelessness and starvation, you won't go to work anymore). However, this is a highly selective bit of dogma, because incentives never seem to matter to rich people or corporations, whom conservatives are on an endless quest to immunize from any consequences for harming their workers or customers, which somehow won't incentivize them to hurt their workers and/or customers:
https://pluralistic.net/2022/06/12/hot-coffee/#mcgeico
At this point, we should probably ask, "Why would anyone sue a Shake Shack?" To answer that, you just need to look at why people sue other fast-food restaurants, like McDonald's and Chipotle. The short answer? Because those restaurants had defective food-handling and sourcing procedures, and this resulted in their customers contracting life-threatening food-borne illnesses:
By immunizing itself from legal consequences for the most common sources of liability for fast-food restaurants, Shake Shack is reserving the right to make you shit yourself to death. Combine this immunity with Trump's unscheduled rapid midair disassembly of all federal regulations (AKA "Project 2025") and you get a situation where Shake Shack can just make up its own money-saving hygiene shortcuts, and face no consequences if these result in your shitting yourself to death. This is both literal and figurative enshittification.
Of course, Shake Shack doesn't believe this should cut both ways. You can't slip out of Shake Shack's noose by walking into a restaurant with a t-shirt reading:
By reading these words, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. This indemnity will survive the termination of your relationship with your employer.
Shake Shack isn't trying to create a simplified, efficient system of justice – they're creating a two-tiered system of justice. They get to go to court if you hurt them. Vandalize a Shake Shack restaurant and they'll drag your ass in front of a judge before you can say "listeria." But if they cause you to shit yourself to death, you are literally and figuratively shit out of luck.
That's really bad. Two-tiered justice is always and ever a prelude to fascism. The way to keep the normies in line while your brownshirts round up their neighbors and seize their property is by maintaining the "normal" justice system for some people, but not for the disfavored group:
https://encyclopedia.ushmm.org/content/en/article/anti-jewish-legislation-in-prewar-germany
Gradually, the group entitled to "normal" justice dwindles and more and more of us get sucked into the "state of exception" where you aren't entitled to a lawyer, a trial, or any human rights.
Trump isn't just dismantling the regulatory state: his fascist snatch-squads ignore the Constitution and the courts. His supine Congress ignores the separation of powers (Trump: "I'm the President and the Speaker of the House"). This rapid erosion of the rule of law is about to meet and merge with the long-run, Federalist Society project to give corporations their own shadow justice system, where they hire the judges who decide whether you can get justice.
Hey look at this (permalink)

- In Memoriam/gbnewby
  https://www.pgdp.net/wiki/In_Memoriam/gbnewby
- Zohran Mamdani’s 5 Lessons for the Democrats
  https://jacobin.com/2025/10/zohran-mamdani-democrats-nyc-strategy/
- The Internet Doesn’t Have to Suck
  https://nymag.com/intelligencer/article/google-amazon-slop-internet.html
- How Elon Musk Ruined Twitter
  https://jacobin.com/2025/10/enshittification-doctorow-musk-twitter-internet
- Hackers Say They Have Personal Data of Thousands of NSA and Other Government Officials
  https://www.404media.co/hackers-say-they-have-personal-data-of-thousands-of-nsa-and-other-government-officials/
Object permanence (permalink)
#20yrsago Katamari Damacy: the text adventure https://web.archive.org/web/20081011210518/http://www.livejournal.com/community/katamari_damacy/262676.html
#20yrsago danah boyd’s Friendster papers, all in one place https://web.archive.org/web/20051029083531/https://www.zephoria.org/thoughts/archives/2005/10/24/my_articles_on.html
#20yrsago Bruce Sterling’s design future manifesto: viva spime! https://memex.craphound.com/2005/10/26/bruce-sterlings-design-future-manifesto-viva-spime/
#15yrsago South Korea’s US-led copyright policy leads to 65,000 acts of extrajudicial censorship/disconnection/threats by govt bureaucrats https://www.techdirt.com/2010/10/26/a-look-at-how-many-people-have-been-kicked-offline-in-korea-on-accusations-not-convictions-of-infringement/
#15yrsago British Airways chairman: “stop kowtowing to US aviation security demands” https://www.theguardian.com/world/2010/oct/27/airport-security-rules-uk-us
#15yrsago France: 25,000 families a day at risk of losing Internet access https://arstechnica.com/tech-policy/2010/10/french-three-strikes-agency-getting-25k-complaints-a-day/
#15yrsago Taste receptors in our lungs sense bitterness and respond with opened airways https://web.archive.org/web/20101028234103/http://www.nature.com/nm/journal/vaop/ncurrent/full/nm.2237.html
#10yrsago Botnets running on CCTVs and NASs https://www.imperva.com/blog/archive/cctv-ddos-botnet-back-yard/?redirect=Incapsula
#10yrsago A beautiful data-driven Tube ad from 1928 https://www.citymonitor.ai/analysis/1928-ad-london-underground-combines-data-awesome-1513/?cf-view
#10yrsago DoJ to Apple: your software is licensed, not sold, so we can force you to decrypt https://ia600301.us.archive.org/35/items/gov.uscourts.nyed.376325/gov.uscourts.nyed.376325.15.0.pdf
#10yrsago FCC trying to stop phone companies that rip off prisoners’ families https://web.archive.org/web/20151023015659/http://www.bloomberg.com/news/articles/2015-10-22/is-this-the-end-of-sky-high-prison-phone-call-rates-
#10yrsago Putting your kettle on the Internet of Things makes your wifi passwords an open secret https://www.techdirt.com/2015/10/23/easily-hacked-tea-kettle-latest-to-highlight-pathetic-internet-things-security/
#10yrsago 70% of CEOs’ effect on company performance can be attributed to random chance https://www.sciencedaily.com/releases/2015/10/151022192337.htm
#10yrsago Astounding showpiece table full of hidden compartments nested in hidden compartments https://www.youtube.com/watch?v=4sWrgIgBT9M
#10yrsago Antioxidants protect cancer cells, help tumors to spread https://arstechnica.com/science/2015/10/myths-about-antioxidant-supplements-need-to-die/
#10yrsago Investing in David v Goliath: hundreds of millions slosh into litigation finance funds https://www.nytimes.com/2015/10/25/magazine/should-you-be-allowed-to-invest-in-a-lawsuit.html?smid=tw-share
#10yrsago Globe and Mail: TPP's copyright chapter will cost Canadians hundreds of millions https://www.theglobeandmail.com/opinion/editorials/copyright-concessions-may-be-downside-of-tpp-deal/article26939204/
#10yrsago Americans are pretty mellow about climate change, terrified of everything else https://blogs.chapman.edu/wilkinson/2015/10/13/americas-top-fears-2015/
#10yrsago NSA spying: judge tosses out case because Wikipedia isn’t widely read enough https://www.aclu.org/news/national-security/court-chooses-ignore-overwhelming-evidence-nsas-mass
#10yrsago Stylish furniture made from discarded supermarket trolleys https://etiennereijnders.blogspot.com/
#10yrsago Youtube’s pay TV service makes video-creators a deal they literally can’t refuse https://techcrunch.com/2015/10/23/youtube-red-creators/
#10yrsago Secret surveillance laws make it impossible to have an informed debate about privacy https://ijoc.org/index.php/ijoc/article/view/3329/1495
#10yrsago Sony licensed stock footage, then branded its creator a pirate for using it himself https://petapixel.com/2015/10/25/sony-filed-a-copyright-claim-against-the-stock-video-i-licensed-to-them/
#10yrsago Pharma company offers $1/dose version of Martin Shkreli’s drug https://www.chicagotribune.com/2015/10/23/drug-firm-offers-1-version-of-750-turing-pill/
#10yrsago IMF: Cheap oil will bankrupt the Saudis in five years https://web.archive.org/web/20151026052347/https://money.cnn.com/2015/10/25/investing/oil-prices-saudi-arabia-cash-opec-middle-east/index.html?sr=twcnnbrk102515oilpricessaudiarabiacashopecmiddleeast512pStoryMoneyPhoto
#5yrsago Chile restores democratic rule https://pluralistic.net/2020/10/26/viva-allende/#bread-a-roof-and-work
#5yrsago Phone surveillance, made in Canada https://pluralistic.net/2020/10/26/viva-allende/#imsi
#5yrsago Bob Dylan sings a EULA https://pluralistic.net/2020/10/25/musical-chairs/#subterranean-termsick-blues
#5yrsago Facebook threatens ad-transparency group https://pluralistic.net/2020/10/25/musical-chairs/#son-of-power-ventures
#5yrsago RIAA kills youtube-dl https://pluralistic.net/2020/10/24/1201-v-dl-youtube/#1201
#5yrsago Foxconn out-trumped Trump https://pluralistic.net/2020/10/23/foxconned/#foxconned
#5yrsago Bring back the CCC https://pluralistic.net/2020/10/23/foxconned/#ccc
#5yrsago Cracking the Ghislaine Maxwell redactions https://pluralistic.net/2020/10/23/foxconned/#redactions
#5yrsago Student loans are dischargeable https://pluralistic.net/2020/10/23/foxconned/#education-benefit
#1yrago Scientific American endorses Harris https://pluralistic.net/2024/10/23/eisegesis/#norm-breaking
#1yrago The housing crisis considered as an income crisis https://pluralistic.net/2024/10/24/i-dream-of-gini/#mean-ole-mr-median
#1yrago Ian McDonald's "The Wilding" https://pluralistic.net/2024/10/25/bogman/#erin-go-aaaaaaargh
#1yrago Keeping a suspense file gives you superpowers https://pluralistic.net/2024/10/26/one-weird-trick/#todo
Upcoming appearances (permalink)

- Miami: Enshittification at Books & Books, Nov 5
https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469
- Miami: Cloudfest, Nov 6
https://www.cloudfest.com/usa/
- Burbank: Burbank Book Festival, Nov 8
https://www.burbankbookfestival.com/
- Lisbon: A post-American, enshittification-resistant internet, with Rabble (Web Summit), Nov 12
https://websummit.com/sessions/lis25/92f47bc9-ca60-4997-bef3-006735b1f9c5/a-post-american-enshittification-resistant-internet/
- Cardiff: Hay Festival After Hours, Nov 13
https://www.hayfestival.com/c-203-hay-festival-after-hours.aspx
- Oxford: Enshittification and Extraction: The Internet Sucks Now with Tim Wu (Oxford Internet Institute), Nov 14
https://www.oii.ox.ac.uk/news-events/events/enshittification-and-extraction-the-internet-sucks-now/
- London: Enshittification with Sarah Wynn-Williams and Chris Morris, Nov 15
https://www.barbican.org.uk/whats-on/2025/event/cory-doctorow-with-sarah-wynn-williams
- London: Downstream IRL with Aaron Bastani (Novara Media), Nov 17
https://dice.fm/partner/tickets/event/oen5rr-downstream-irl-aaron-bastani-in-conversation-with-cory-doctorow-17th-nov-earth-london-tickets
- London: Enshittification with Carole Cadwalladr (Frontline Club), Nov 18
https://www.eventbrite.co.uk/e/in-conversation-enshittification-tickets-1785553983029
- Virtual: Enshittification with Vass Bednar (Vancouver Public Library), Nov 21
https://www.crowdcast.io/@bclibraries-present
- Seattle: Neuroscience, AI and Society (University of Washington), Dec 4
https://compneuro.washington.edu/news-and-events/neuroscience-ai-and-society/
- Madison, CT: Enshittification at RJ Julia, Dec 8
https://rjjulia.com/event/2025-12-08/cory-doctorow-enshittification
Recent appearances (permalink)
- Enshittification and the Rot Economy with Ed Zitron (Clarion West)
https://www.youtube.com/watch?v=Tz71pIWbFyc
- Amanpour & Co (New Yorker Radio Hour)
https://www.youtube.com/watch?v=I8l1uSb0LZg
- Enshittification is Not Inevitable (Team Human)
https://www.teamhuman.fm/episodes/339-cory-doctorow-enshittification-is-not-inevitable
- The Great Enshittening (The Gray Area)
https://www.reddit.com/r/philosophypodcasts/comments/1obghu7/the_gray_area_the_great_enshittening_10202025/
- Enshittification (Smart Cookies)
https://www.youtube.com/watch?v=-BoORwEPlQ0
Latest books (permalink)
- "Canny Valley": a limited edition collection of the collages I create for Pluralistic, self-published, September 2025
- "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus and Giroux, October 7, 2025 (https://us.macmillan.com/books/9780374619329/enshittification/)
- "Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels)
- "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org)
- "The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org)
- "The Internet Con": a nonfiction book about interoperability and Big Tech, Verso, September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245)
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com)
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe, 2022 (https://chokepointcapitalism.com)
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2026
- "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026
- "The Memex Method," Farrar, Straus and Giroux, 2026
- "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.
- A Little Brother short story about DIY insulin. PLANNING.

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Thu, 23 Oct 2025 19:57:19 +0000
Today's links
- Checking in on the state of Amazon's chickenized reverse-centaurs: When your shitty boss is a shitty app and you're not even allowed to call yourself an employee.
- Hey look at this: Delights to delectate.
- Object permanence: Correcting the Disneyland Railroad's Morse code; Breathalyzer source-code; Teaching Little Brother to math students; Arson attacks on Ferguson's Black churches; Tom Lehrer in the public domain.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Checking in on the state of Amazon's chickenized reverse-centaurs (permalink)
Amazon has invented a new kind of labor travesty: the chickenized reverse centaur. That's a worker who has to foot the bill to outfit a work environment where they nevertheless have no autonomy (chickenization) and whose body is conscripted to act as a peripheral for a digital system (reverse centaur):
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
"Chickenization" is a term out of labor economics, inspired by the brutal state of the poultry industry, where three giant processing companies have divided up the market so that every chicken farmer has just one place to sell their birds. To sell your birds to one of these plants, you have to give it total control over your operation. They sell you the baby chicks, they tell you what kind of coop to build, what lightbulbs to install and when they should be on or off. They tell you which vet to use and which medicines can be administered to your birds. They tell you what to feed your birds and when to feed them. They design your coop and tell you who is allowed to maintain it. The one thing they don't tell you is how much you'll be paid for your birds – that's something you only discover when it's time to sell them, and the sum you're offered is based on the packer's region-wide intelligence about how you and all your competitors are faring, calculated to be the smallest amount that will let you roll over your loans and go into more debt to grow more birds for them.
At its root, "chickenization" is about de-risking, cloaked in the language of entrepreneurship. Chicken farmers assume all the risk for the poultry packers, but they're told that they're their own bosses. The only way in which a chicken farmer resembles an entrepreneur is that they have to bear all the risk of failure – without having any upside for success. Packers can (and do) secretly decide to experiment at farmers' expense, ordering some of their farmers to vary their feeding, light and veterinary routines to see if they can eke new efficiencies out of the process. If that works, the surplus is reaped by the packer. If that fails, the losses are borne by the farmer, who is never told that they were funding an experiment.
Amazon makes extensive use of chickenization in its many commercial arrangements, tightly defining the working conditions of many "self-employed" workers, like the clickwork "turkers" who power the Mechanical Turk service. But the most chickenized of all the people in Amazon's network of cutouts and arm's-length arrangements are the "entrepreneurs" who are lured into starting a "Delivery Service Partner" (DSP) business.
To start a DSP, you borrow lots of money to buy vans that you outfit to Amazon's exacting specifications: filling them with interior and exterior sensors and cameras, painting them with Amazon livery, and kitting them out with shelving and other infrastructure. Then you hire workers – giving Amazon a veto over who you hire – and train them using Amazon's training materials. You sign them up for Amazon's platforms, which monitor and rank those workers, and then you get paid either $0.10 per parcel, or maybe $0.50 per parcel, or sometimes $0.00 per parcel, all at Amazon's sole discretion.
That's a pretty chickenized arrangement. But what about reverse centaurs?
In automation theory, a "centaur" is someone who is assisted by an automation system: a fragile human head atop a tireless machine body. A reverse centaur, then, is a person conscripted to serve as a peripheral for a machine: a human body surmounted and directed by a brute and uncaring head that not only uses them, but uses them up.
The drivers that DSPs hire are reverse centaurs. Using various forms of automation, Amazon drives these workers to work at a dangerous, humiliating and unsustainable pace, setting and enforcing not just quotas, but also scripting where drivers' eyes must be pointed, how they must accelerate and decelerate, what routes they take, and more. These edicts are enforced by the in-van and on-body automation systems that direct and discipline workers, tools that labor activists call "electronic whips":
https://crackedlabs.org/en/data-work/publications/callcenter
The chickenized owners of DSPs must enforce the edicts Amazon brings down on their reverse centaur workers – Amazon can terminate any DSP, at any time, for any reason or no reason, stranding an "independent entrepreneur" with heavily mortgaged rolling stock that can only be used to deliver Amazon packages, long term leases on garages and parking lots, liability for driver accidents caused by automation systems that punish drivers for e.g. braking suddenly if someone steps into the road, and massive loans.
So when Amazon directs a DSP to fire or discipline a worker, that worker is in trouble. Amazon has hybridized chickenization and reverse centaurism, creating a chickenized reverse centaur, a new kind of labor travesty never seen before.
In "Driven Down," a new report from the DAIR Institute, authors Adrienne Williams, Alex Hanna and Sandra Barcenas draw on interviews with DSP drivers and Williams's own experience driving for Amazon to document the state of the Chickenized Reverse Centaur. It's not good:
https://www.dair-institute.org/projects/driven-down/
"Driven Down" vividly describes – often in drivers' own words – how the life of a chickenized reverse centaur is one of wage theft, privacy invasions, humiliation and on-the-job physical risks, for drivers and the communities they drive in.
DSP drivers interact with multiple automation systems – at least nine apps that monitor, score and discipline them. These apps are supposed to run on employer-supplied phones, but these phones are frequently broken, and drivers face severe punishment if these apps aren't all running during their shifts. As a result, drivers routinely install these apps on their own phones, and must give them broad, far-reaching permissions, such that drivers' own phones are surveilling them for Amazon 24/7, whether or not they're on the clock. It's not just DSP owners who are chickenized – it's also drivers, footing the bill for their own electronic whips.
First and foremost, these apps tell the drivers where to go and how to get there. Drivers are dispatched to hundreds of stops per day, on a computer-generated route that is not vetted or sanity-checked by a human before it is non-negotiably handed to a driver. Famously, plotting an optimal route among many points is one of the most computationally intractable problems in computer science, the so-called "traveling salesman" problem:
https://en.wikipedia.org/wiki/Travelling_salesman_problem
But it turns out that there is an optimal solution to the traveling salesman problem: get a computer to make a bizarre and dangerous approximation of the optimal route, and then blame and fine workers when it doesn't work. This doesn't optimize the route, but it does shift all the costs of a suboptimal route to workers.
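To see why deployed routers settle for approximations, here's a minimal sketch of the classic nearest-neighbor heuristic – not Amazon's actual routing algorithm, just the textbook illustration of trading optimality for speed (the function names and sample coordinates are mine):

```python
import math
from itertools import permutations

def nearest_neighbor_route(depot, stops):
    """Greedy heuristic: from each position, drive to the closest
    unvisited stop. Runs in O(n^2), but can yield routes noticeably
    longer than the true optimum."""
    route = [depot]
    remaining = list(stops)
    while remaining:
        last = route[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        route.append(nxt)
        remaining.remove(nxt)
    return route

def route_length(route):
    """Total Euclidean distance along the route."""
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))

# For a handful of stops we can brute-force the true optimum and
# measure the heuristic's gap; at hundreds of stops per shift,
# brute force is hopeless (n! orderings), so approximation is all
# any real dispatcher has.
depot = (0, 0)
stops = [(0, 1), (2, 0), (1, 5), (4, 4)]
greedy = route_length(nearest_neighbor_route(depot, stops))
optimal = min(route_length([depot] + list(p)) for p in permutations(stops))
```

The point isn't that approximation is bad – it's unavoidable – but that a human dispatcher would sanity-check the output, while Amazon's system hands the raw approximation to the driver and fines them when it misfires.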
Crucially, Amazon trusts its computer-generated routes, based on map data, over the word of drivers. For example, drivers are often directed to make "group stops" – where the driver parks the van and then delivers to multiple addresses at once (for example, at an apartment complex or office block). Amazon's mapping service assumes that addresses that are in the same complex or development are close together, even when they are very distant. If a driver dares to move and re-park their van to deliver parcels to distant addresses, the app punishes them for making an unauthorized positional adjustment. If a driver attempts to deliver all the parcels without moving the van, they are penalized for taking too long. Even if drivers report the mapping error, it persists, resulting in strings of infractions, day after day.
When drivers fail to make quota, the DSP's per-parcel payout is reduced. DSPs whose drivers perfectly obey the (irrational, impossible) orders of Amazon's apps get $0.50 per parcel delivered. If drivers fall short of the apps' expectations, the per parcel-rate can fall to $0.10, or, in some cases, zero.
This provides a powerful incentive to DSPs to pressure drivers to engage in unsafe practices if the alternative would displease the app. Drivers are penalized for sudden braking and swerving, for example, but are also penalized for missing quota, which puts drivers in the impossible position of having to drive as quickly as possible but also not to swerve or brake if a sudden traffic hazard pops up. In one absurd tale, a driver describes how they were shifted to an electric van that did regenerative braking when they released the accelerator. The app expected drivers to slow down by releasing the accelerator, not by touching the brakes, but this meant that the van's brake lights never switched on. When a driver slowed at a yellow light, they were badly rear-ended by a following UPS truck, whose driver had assumed the Amazon DSP driver was going to rush the light (because the van's brake lights didn't light up).
Meeting quota means that drivers are also not able to stop for bathroom breaks or to take care of other personal hygiene matters. This is bad enough when it means peeing in a bottle, but it's even worse when the only way to take care of period-related matters is to go into the back of the van – where cameras record everything you do – and manage things there.
Drivers are told many inconsistent things about those cameras. Some drivers have been told that the footage is only reviewed after an accident or complaint, but when drivers do get into accidents or have complaints lodged against them, they are often fired or disciplined without anyone reviewing the footage. Meanwhile, drivers are sometimes punished for things the cameras have recorded even when there was no complaint or accident.
The existence of all that empirical evidence of what happens in and around an Amazon DSP van makes little to no difference to drivers' employment fairness. When a malfunctioning seatbelt sensor insisted that a driver had removed their seatbelt while driving – more than 80 times in a single shift – the driver struggled to get their docked wages restored. When a driver swerved to avoid an oncoming big rig whose driver had fallen asleep and drifted across the median, he was penalized: his score in "Mentor" (one of the many apps) was docked from 850 to 650. Amazon won't tell drivers what their Mentor scores mean, but many drivers – and DSP owners – believe that anything less than a perfect score will result in punishment or termination.
Attaining and maintaining a perfect score is an impossible task, because Amazon will not disclose what drivers are expected to do – it will only penalize them when they fail to do it. Take the photos that Amazon drivers are expected to snap of parcels after they are delivered. The criteria for these photos are incredibly strict – and also not disclosed. Drivers are penalized for having their hands or shoes or reflections in the image, for capturing customers or their pets, for capturing the house-number. They aren't allowed to photograph shoes left on the doormat. Drivers share tips with one another about how to take a picture without losing points, but it's a moving target.
Among drivers, there's a (likely correct) belief that Amazon will not tell them how the apps are generating their scores out of fear that if drivers knew the scoring rubric, they'd start to game it. This is a widespread practice within the world of content moderation and spamfighting, where security practitioners who would normally reject the idea of "security through obscurity" out of hand suddenly embrace secrecy-dependent security measures:
https://pluralistic.net/2022/08/07/como-is-infosec/
All this isn't just dangerous and dehumanizing, it's also impoverishing. Drivers who get downranked by these imperious, unaccountable and unexplained algorithms have their hours cut or get fired altogether. The apps set a quota that can't possibly be reached if drivers take their mandated (and unpaid) 30-minute lunch and two 15-minute breaks (drivers who miss quota twice are automatically terminated). This time is given over to unpaid labor. As the report explains:
Drivers are not paid for their 30 minute lunch. A full-time employee working an 8 to 10 hour shift would be working either 4 or 5 days out of each week. At $20 an hour, that is two hours a week for four-day employees, resulting in $40 of unpaid labor a week, $160 a month, almost $2,000 a year.
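The report's arithmetic is easy to reproduce; a minimal sketch, with the constants taken from the quoted passage:

```python
# Figures as cited in the "Driven Down" report's example.
HOURLY_WAGE = 20.00        # dollars per hour
UNPAID_LUNCH_HOURS = 0.5   # mandatory, unpaid 30-minute lunch per shift
SHIFTS_PER_WEEK = 4        # a four-day schedule of 8-10 hour shifts

weekly = HOURLY_WAGE * UNPAID_LUNCH_HOURS * SHIFTS_PER_WEEK  # $40/week
monthly = weekly * 4                                         # $160/month
yearly = weekly * 52                                         # $2,080/year
```

A 52-week year comes to $2,080, which the report rounds to "almost $2,000" – and that's the lunch break alone, before the unpaid "homework" and stand-up meetings described below.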
Drivers are also assigned "homework" – videos they are required to watch and simulator exercises they are required to complete as remediation for their real or imagined infractions. This, too, is unpaid, mandatory work. Drivers are required to attend "stand up" meetings at the start of their shifts, and this is also often unpaid work.
Amazon makes a big show of "listening to drivers," but they're never heard. A driver who reported being held at gunpoint by literal Nazis who objected to having their parcels delivered by a Jew had his complaints ignored, and those violent, armed Nazi customers continued to get their parcels delivered.
Even modest requests go unanswered. Drivers for one DSP begged for porta-toilets in the parking lot, rather than having to waste time (and miss quota) legging it to a distant bathroom. They were ignored, and all 50 drivers continue to share a single toilet.
But – thanks to chickenization – none of this is Amazon's problem. It's all the problem of a chickenized DSP "entrepreneur" who serves as a useful accountability sink for Amazon and who can be bankrupted at a moment's notice should they fail to do Amazon's precise bidding.
There's one bright spot here, though: the National Labor Relations Board has brought a case in California seeking to have Amazon held to be a "joint employer" of those reverse centaurs behind the wheels of those vans:
This is the very last residue of the NLRB's authority, the rest having been drained away by Trump as part of Project 2025. If they prevail, it will open the door to drivers suing Amazon for unfair labor practices under both federal and state law – and in California and New York, that labor law just got a lot tougher for Amazon:
The chickenized reverse centaur is a new circle of labor hell, a genuinely innovative way of making workers' lives worse in order to extract more billions for one of the most profitable companies in history.
(Image: Cryteria, CC BY 3.0, modified)
Hey look at this (permalink)

- Ethiopia in Talks With China to Convert Dollar Loans to Yuan https://www.bloomberg.com/news/articles/2025-10-20/ethiopia-in-talks-with-china-to-convert-dollar-loans-into-yuan
- This Is How Much Anthropic and Cursor Spend On Amazon Web Services https://www.wheresyoured.at/costs/
- No Tricks, Just Treats: EFF's Halloween Signal Stickers Are Here! https://www.eff.org/deeplinks/2025/10/no-tricks-just-treats-effs-halloween-signal-stickers-are-here
- How Trump is Building a Violent, Shadowy Federal Police Force https://www.propublica.org/article/trump-dhs-ice-secret-police-civil-rights-unaccountable
- Does the Left Have Trouble with Making Things in America? https://www.thebignewsletter.com/p/monopoly-round-up-the-left-can-protest
Object permanence (permalink)
#20yrsago Ham operator corrects Morse code on the Disneyland Railroad https://web.archive.org/web/20050905155040/http://www.hiddenmickeys.org/Disneyland/Secrets/Square/Morse.html
#20yrsago Accused DUIs demand access to breathalyzer software source-code https://blog.citp.princeton.edu/2005/10/21/breathalyzers-and-open-source/
#20yrsago How Disneyland’s Mark Twain riverboat sank https://web.archive.org/web/20051025011944/http://deseretnews.com/dn/view/0,1249,635154764,00.html
#15yrsago Old film rejection slip: “All scenes of an unpleasant nature should be eliminated” https://oldhollywood.tumblr.com/post/1374666427/the-rejection-slip-the-motion-picture-studio
#15yrsago T-shirt turns into a zombie https://web.archive.org/web/20101123131037/http://deezteez.com/funny-t-shirts/460/turn-into-a-zombie-t-shirt.html?SSAID=112726
#15yrsago Terrified feds try to bar Bunnie Huang from testifying at Xbox jailbreaking trial https://web.archive.org/web/20101023061952/https://www.wired.com/threatlevel/2010/10/xbox-modder-tria/
#15yrsago Derren Brown’s Confessions of a Conjuror: funny memoir is also a meditation on attention, theatrics and psychology https://memex.craphound.com/2010/10/21/derren-browns-confessions-of-a-conjuror-funny-memoir-is-also-a-meditation-on-attention-theatrics-and-psychology/
#10yrsago Wikileaks hosting files from CIA director John Brennan’s AOL account https://arstechnica.com/tech-policy/2015/10/wikileaks-publishes-e-mail-from-cia-directors-hacked-aol-account/
#10yrsago Hungarian camerawoman who tripped refugee announces she will sue that refugee https://www.techdirt.com/2015/10/21/hungarian-camera-woman-filmed-tripping-refugees-plans-to-sue-facebook-refugee-she-tripped/
#10yrsago Entropy explained, beautifully, in comic-book form https://www.bostonglobe.com/ideas/2015/10/03/sousanis/XOMd3JBYnEdzQCWHM6twTJ/story.html
#10yrsago How a mathematician teaches “Little Brother” to a first-year seminar https://derekbruff.org/2015/10/21/in-class-collaborative-debate-mapping-or-how-a-mathematician-teaches-a-novel/
#10yrsago UK “anti-radicalisation” law can take kids from thoughtcriming parents in secret trials https://www.techdirt.com/2015/10/21/uk-goes-full-orwell-government-to-take-children-away-parents-if-they-might-become-radicalized/
#10yrsago How enforcing a crappy patent bankrupted the Eskimo Pie company https://web.archive.org/web/20190309071221/https://slate.com/technology/2015/10/what-the-history-of-eskimo-pies-says-about-software-patents-today.html
#10yrsago TPP means no more domain privacy https://www.eff.org/deeplinks/2015/10/us-bypasses-icann-debates-domain-privacy-closed-room-deals-oecd-and-tpp
#10yrsago McDonald’s China debuts a cement-gray bun https://www.telegraph.co.uk/food-and-drink/news/weird-mcdonalds-food-around-the-world/
#10yrsago Terrorists torch five black Ferguson-area churches, nation yawns https://web.archive.org/web/20151020194546/http://usuncut.com/black-lives-matter/black-churches-burning-ferguson-area/
#10yrsago HOWTO make a trashcan Stormtrooper helmet https://scudamor.wordpress.com/2010/10/22/make-your-own-stormtrooper-helmet/
#10yrsago Fable Comics: anthology of great comics artists telling fables from around the world https://memex.craphound.com/2015/10/22/fable-comics-anthology-of-great-comics-artists-telling-fables-from-around-the-world/
#10yrsago J Edgar Hoover fought to write ex-FBI agents out of Hitchcock’s scripts https://www.muckrock.com/news/archives/2015/oct/22/alfred-hitchcocks-fbi-file/
#10yrsago Canada’s new Liberal majority: better than the Tories, still terrible for the Internet https://memex.craphound.com/2015/10/22/canadas-new-liberal-majority-better-than-the-tories-still-terrible-for-the-internet/
#10yrsago Forced laborers sue Mississippi debtors’ prison https://theintercept.com/2015/10/22/lawsuit-challenges-mississippi-debtors-prison/
#10yrsago Son of Dieselgate: second line of VWs may have used “defeat devices” https://www.reuters.com/article/2015/10/22/us-volkswagen-emissions-engines-idUSKCN0SG0US20151022/
#10yrsago Obama administration petitions judge for no mercy in student debt bankruptcy https://readersupportednews.org/news-section2/318-66/33068-obama-administration-urges-no-bankruptcy-relief-for-student-debt
#10yrsago Complexity of financial crimes makes crooks unconvictable https://web.archive.org/web/20151022014805/https://www.bloomberg.com/news/articles/2015-10-21/has-it-become-impossible-to-prosecute-white-collar-crime-
#10yrsago Half of Vanuatu’s government is going to jail https://www.bbc.com/news/world-asia-34600561
#10yrsago DHS admits it uses Stingrays for VIPs, vows to sometimes get warrants, stop lying to judges https://arstechnica.com/tech-policy/2015/10/dhs-now-needs-warrant-for-stingray-use-but-not-when-protecting-president/
#5yrsago Free the law of Wisconsin https://pluralistic.net/2020/10/22/the-robots-are-listening/#rogue-archivist
#5yrsago US border cruelty, powered by Google cloud https://pluralistic.net/2020/10/22/the-robots-are-listening/#poulson
#5yrsago Companies target robots in disclosures https://pluralistic.net/2020/10/22/the-robots-are-listening/#goodharts-bank
#5yrsago ENDSARS https://pluralistic.net/2020/10/22/the-robots-are-listening/#endsars
#5yrsago IDing anonymized cops with facial recognition https://pluralistic.net/2020/10/22/the-robots-are-listening/#sousveillance
#5yrsago Falsehoods programmers believe about time https://pluralistic.net/2020/10/21/each-drop-of-strych-a-nine/#a-sort-of-runic-rhyme
#5yrsago Trustbusting is stimulus https://pluralistic.net/2020/10/21/each-drop-of-strych-a-nine/#break-em-up
#5yrsago Tom Lehrer in the public domain https://pluralistic.net/2020/10/21/each-drop-of-strych-a-nine/#poisoning-pigeons
#1yrago Retiring the US debt would retire the US dollar https://pluralistic.net/2024/10/21/we-can-have-nice-things/#public-funds-not-taxpayer-dollars
Upcoming appearances (permalink)

- Vancouver: Enshittification with David Moscrop (Vancouver Writers Festival), Oct 23
  https://www.showpass.com/2025-festival-39/
- Montreal: Montreal Attention Forum keynote, Oct 24
  https://www.attentionconferences.com/conferences/2025-forum
- Montreal: Enshittification at Librairie Drawn and Quarterly, Oct 24
  https://mtl.drawnandquarterly.com/events/3757420251024
- Ottawa: Enshittification (Ottawa Writers Festival), Oct 25
  https://writersfestival.org/events/fall-2025/enshittification
- Toronto: Enshittification with Dan Werb (Type Books), Oct 27
  https://www.instagram.com/p/DO81_1VDngu/?img_index=1
- Barcelona: Conferencia EUROPEA 4D (Virtual), Oct 28
  https://4d.cat/es/conferencia/
- Miami: Enshittification at Books & Books, Nov 5
  https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469
- Miami: Cloudfest, Nov 6
  https://www.cloudfest.com/usa/
- Burbank: Burbank Book Festival, Nov 8
  https://www.burbankbookfestival.com/
- Lisbon: A post-American, enshittification-resistant internet, with Rabble (Web Summit), Nov 12
  https://websummit.com/sessions/lis25/92f47bc9-ca60-4997-bef3-006735b1f9c5/a-post-american-enshittification-resistant-internet/
- Cardiff: Hay Festival After Hours, Nov 13
  https://www.hayfestival.com/c-203-hay-festival-after-hours.aspx
- Oxford: Enshittification and Extraction: The Internet Sucks Now with Tim Wu (Oxford Internet Institute), Nov 14
  https://www.oii.ox.ac.uk/news-events/events/enshittification-and-extraction-the-internet-sucks-now/
- London: Enshittification with Sarah Wynn-Williams and Chris Morris, Nov 15
  https://www.barbican.org.uk/whats-on/2025/event/cory-doctorow-with-sarah-wynn-williams
- London: Downstream IRL with Aaron Bastani (Novara Media), Nov 17
  https://dice.fm/partner/tickets/event/oen5rr-downstream-irl-aaron-bastani-in-conversation-with-cory-doctorow-17th-nov-earth-london-tickets
- London: Enshittification with Carole Cadwalladr (Frontline Club), Nov 18
  https://www.eventbrite.co.uk/e/in-conversation-enshittification-tickets-1785553983029
- Seattle: Neuroscience, AI and Society (University of Washington), Dec 4
  https://compneuro.washington.edu/news-and-events/neuroscience-ai-and-society/
- Madison, CT: Enshittification at RJ Julia, Dec 8
  https://rjjulia.com/event/2025-12-08/cory-doctorow-enshittification
Recent appearances (permalink)
- Enshittification is Not Inevitable (Team Human)
  https://www.teamhuman.fm/episodes/339-cory-doctorow-enshittification-is-not-inevitable
- The Great Enshittening (The Gray Area)
  https://www.reddit.com/r/philosophypodcasts/comments/1obghu7/the_gray_area_the_great_enshittening_10202025/
- Enshittification (Smart Cookies)
  https://www.youtube.com/watch?v=-BoORwEPlQ0
- Enshittification (The Gist)
  https://www.youtube.com/watch?v=EgBiv_KchI0
- Canadian tariffs with Avi Lewis
  https://plagal.wordpress.com/2025/10/15/cory-doctorow-talks-to-avi-lewis-about-his-proposal-to-fightback-against-trumps-tariff-attack/
Latest books (permalink)
- "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025
-
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/ -
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
-
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
-
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
-
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
-
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
-
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
-
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
-
"The Memex Method," Farrar, Straus, Giroux, 2026
-
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.
-
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
2025-10-23T17:33:54+00:00
beside the point, but can you imagine how mad SBF will be when he hears the news? he's sitting in prison for another 20 years, but the President just pardoned the other crypto criminal who he blames for his own company's collapse
2025-10-23T15:36:35+00:00
Donald Trump has pardoned Binance founder Changpeng Zhao. Binance has been a major supporter of Trump's crypto projects, and Trump has already made millions after Binance accepted a $2 billion investment from an Emirati fund denominated in the Trump family's USD1 stablecoin.
One of the people CZ hired to lobby for the pardon is Teresa Goody Guillén, a lawyer who has simultaneously represented the Trump World Liberty Financial project. She's also lobbied on behalf of Binance on crypto-related topics.
2025-10-22T17:36:24+00:00
he saw Mamdani talking about housing and thought: but what about crypto?
my guess is this is more of a hail mary attempt to get blockchain/AI money and influence behind him. we haven't seen much crypto industry spending in races below the federal level, but i also wouldn't put it past them to get involved here at the last minute
2025-10-22T14:27:21+00:00
Read:
My Signal exchange with the interim U.S. attorney about the Letitia James grand jury.
Tue, 21 Oct 2025 16:18:47 +0000
Today's links
- Carl Hiaasen's 'Fever Beach': If you didn't laugh, you'd have to cry.
- Hey look at this: Delights to delectate.
- Object permanence: Scary Godmother; Nightvale novel; The war on Worker's Comp; Cadillac's murdermobiles.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Carl Hiaasen's 'Fever Beach' (permalink)
Every Carl Hiaasen novel is a cause for celebration, but Fever Beach, his latest, makes it abundantly clear that this moment, this moment of Florida Man violent white nationalist grifting, is the moment that Hiaasen has been training for his whole life:
https://carlhiaasen.com/books/fever-beach/
Hiaasen is a crime novelist who got his start as a newspaper writer, writing columns about Florida's, ah, unique politics – and sublime, imperilled wilderness – for the Miami Herald. That beat, combined with enormous humor and literary talent, produced a writer who perfectly hybridizes Dave Barry's lovable absurdism with the hard-boiled pastoralism of the Travis McGee novels (Hiaasen wrote the introductions for a 1990s reissue of all of John D MacDonald's McGee books).
Hiaasen's method is diabolical and hilarious: each volume introduces a bewildering cast of odd, crooked, charming, and/or loathsome Floridians drawn from his long experience chronicling the state and its misadventures. Every one of these people is engaged in some form of skulduggery, even the heroes, who are every bit as lawless and wild as their adversaries, though Hiaasen's protagonists are always smarter and more competent than his villains. The plots and schemes play out like an intricate clock that has been much-elaborated by a mad clockmaker with an affinity for eccentric gears, all set against the background of Florida, a glorious and beautiful place being fed into a woodchipper powered by unchecked greed and depravity.
After 20-some volumes in this vein (including Bad Monkey, lately adapted for Apple TV), something far weirder than anything Hiaasen ever dreamed up came to pass: Donald Trump, the most Florida Man ever, was elected president. If you asked an LLM to write a Hiaasen novel, you might get Trump: a hacky, unimaginative version of the wealthy, callous, scheming grifters of the Hiaasenverse. Back in 2020, Hiaasen wrote Trump into Squeeze Me, a tremendous and madcap addition to his canon:
https://pluralistic.net/2020/10/05/florida-man/#disappearing-act
Fever Beach is the first Hiaasen novel since Squeeze Me, and boy, does Hiaasen ever have MAGA's number.
The book revolves around a classic Hiaasen bumbler, Dale Figgo, an incompetent white nationalist who was kicked out of the Proud Boys after the Jan 6 insurrection, when he mistook a statue of a revered Confederate general for Ulysses S Grant (it was the beard) and released a video of himself smearing shit all over it. Cast out from the brotherhood of violent racists, Figgo founds his own white nationalist militia: the Strokers for Liberty, which differentiates itself from the Proud Boys by encouraging (rather than forbidding) frequent masturbation. Figgo takes his inspiration from his day-job, where he packs and ships disembodied torso sex-dolls for an adult e-commerce site, and he entices new Strokers by offering them free limbless fuck-dolls (stolen from work) as a signing bonus.
Figgo lives in a house bought for him by his long-suffering – and seriously boxing gym-addicted – mother, who despairs of his virulent racism. Her one source of comfort is Figgo's tenant, Viva Morales, a smart granting officer in the family office of the Minks (an ultra-wealthy Florida oligarch couple) who does not tolerate any of Figgo's bullshit and also pays her rent like clockwork.
Viva is the other fulcrum of the tale: her employers, the elderly couple behind the Mink Foundation, are secret white nationalist bankrollers who use their charity to funnel money to militia groups, including Strokers For Liberty. The conduit between the Minks and the Strokers is Congressman Clure Boyette, a MAGA Republican failson of an ultra-powerful Florida lobbyist, who (unbeknownst to his father) has raised $2m for the Strokers to finance a "Stop the Steal pollwatching" operation designed to terrorize voters who favor his opponent.
As a front for this dark money op, Boyette has founded the "Wee Hammers," a charity that pulls prepubescent children out of school and puts them to work with heavy power tools to construct houses in a child-labor-centric MAGA version of Habitat for Humanity. This goes about as well as you might expect.
Into this maelstrom, Viva Morales draws Twilly Spree, a recurring character first introduced in 2000's Sick Puppy as a successor to Skink, one of Hiaasen's best heroes. Twilly is a millionaire ecoterrorist who uses his family's obscene wealth – secured through investments in planet-raping extraction – to fund his arson, bombings, and general fuckery directed against Florida's most flagrant despoilers (it helps that Twilly has been psychologically gifted with the literal inability to feel fear). Twilly and Viva become a couple, and Twilly does what Twilly does – wreaking hilarious, violent and spectacular chaos upon the book's many characters.
There are so many characters – I've barely scratched the surface here. There's Galaxy, a dominatrix who loses patience with her long-term client, the MAGA Congressman Clure Boyette, after he stiffs her on a payment because he was too busy tweeting about an alleged plan by woke billiard manufacturers to replace the nation's black 8-balls with Pride-themed rainbow versions. There's Clure Boyette's soon-to-be-ex-wife, who must not, on any account, be shown the photos Galaxy took of Clure in a fur dog-collar and leash defecating on the floor of a luxury hotel suite. There's Jonas Onus, the number two man in the Strokers For Liberty, who terrorizes all and sundry by bringing them into contact with Himmler, his 120lb pitbull mix. There's Noel Kristianson, whom Dale Figgo runs over and nearly kills during an altercation over Figgo's practice of stuffing incoherent antisemitic rants into ziplock bags weighted with beach-sand and tossing them onto the driveways of unsuspecting Floridians. There's a constellation of minor characters and spear-carriers, including Key West drag queen martial artists and assorted discount-store Nazis, long-suffering charter bus drivers and a hit man who cannot abide racial prejudice.
The resulting story has more twists and turns than an invasive Burmese python, that apex predator of the gate-guarded McMansion development. It's screamingly funny, devilishly inventive, and deeply, profoundly satisfying. With Fever Beach, Hiaasen makes a compelling case for Florida as the perfect microcosm of the terrifying state of America, and an even more compelling case for his position as its supreme storyteller.
You do not need to have read any of Hiaasen's other novels to love this one. But I'm pretty sure that if you start with this one, you're going to want to dig into the dozens of other Hiaasen books, and you will not be disappointed if you do.
Hey look at this (permalink)

- The pivot https://www.antipope.org/charlie/blog-static/2025/10/the-pivot-1.html
- Video Game Union Workers Rally Against $55 Billion Saudi-Backed Private Acquisition of EA https://www.eurogamer.net/ea-union-workers-rally-against-55bn-saudi-backed-private-acquisition-with-formal-petition-to-regulators
- China Has Overtaken America https://paulkrugman.substack.com/p/china-has-overtaken-america
- How I Reversed Amazon's Kindle Web Obfuscation Because Their App Sucked https://blog.pixelmelt.dev/kindle-web-drm/
- OpenAI Needs $400 Billion In The Next 12 Months https://www.wheresyoured.at/openai400bn/
- China Forces Scott Bessent to Embrace Anti-Monopoly Tactics https://www.thebignewsletter.com/p/welcome-to-the-anti-monopoly-movement
Object permanence (permalink)
#20yrsago WSJ tech writer damns DRM https://web.archive.org/web/20051027023456/http://ptech.wsj.com/archive/ptech-20051020.html
#20yrsago Fundraiser: donate $500 to shut up loudmouth message-board poster https://www.metafilter.com/dios-rothkofundraiser.mefi
#20yrsago Chinese activist to Jerry Yang: You are helping to maintain an evil system https://web.archive.org/web/20051027021122/https://cyberlaw.stanford.edu/blogs/gelman/archives/003388.shtml/
#15yrsago Canadian gov’t scientists protest gag order, go straight to public with own website https://web.archive.org/web/20101020142208/https://www.theglobeandmail.com/news/politics/ottawa-notebook/federal-scientists-go-public-in-face-of-restrictive-media-rules/article1761624/
#15yrsago Scary Godmother: delightful, spooky graphic storybook for kids https://memex.craphound.com/2010/10/20/scary-godmother-delightful-spooky-graphic-storybook-for-kids/
#10yrsago The Welcome to Night Vale novel dances a tightrope between weird humor and real pathos https://memex.craphound.com/2015/10/20/the-welcome-to-night-vale-novel-dances-a-tightrope-between-weird-humor-and-real-pathos/
#10yrsago How a lobbyist/doctor couple are destroying Worker’s Comp across America https://www.propublica.org/article/inside-corporate-americas-plan-to-ditch-workers-comp
#10yrsago How the market for zero-day vulnerabilities works https://arstechnica.com/information-technology/2015/10/the-rise-of-the-zero-day-market/
#10yrsago Reality check: we know nothing whatsoever about simulating human brains https://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/
#10yrsago On saying “no”: creativity, self-care, privilege, and knowing your limits https://tumblr.austinkleon.com/post/120472862666
#5yrsago Solar's "miracle material" https://pluralistic.net/2020/10/20/the-cadillac-of-murdermobiles/#perovskite
#5yrsago Cadillac perfects the murdermobile https://pluralistic.net/2020/10/20/the-cadillac-of-murdermobiles/#caddy
#5yrsago Feds gouge states, subsidize corporations https://pluralistic.net/2020/10/20/the-cadillac-of-murdermobiles/#austerity
Upcoming appearances (permalink)

- Seattle: Enshittification and the Rot Economy, with Ed Zitron (Clarion West), Oct 22
  https://www.clarionwest.org/event/2025-deep-dives-cory-doctorow/
- Vancouver: Enshittification with David Moscrop (Vancouver Writers Festival), Oct 23
  https://www.showpass.com/2025-festival-39/
- Montreal: Montreal Attention Forum keynote, Oct 24
  https://www.attentionconferences.com/conferences/2025-forum
- Montreal: Enshittification at Librairie Drawn and Quarterly, Oct 24
  https://mtl.drawnandquarterly.com/events/3757420251024
- Ottawa: Enshittification (Ottawa Writers Festival), Oct 25
  https://writersfestival.org/events/fall-2025/enshittification
- Toronto: Enshittification with Dan Werb (Type Books), Oct 27
  https://www.instagram.com/p/DO81_1VDngu/?img_index=1
- Barcelona: Conferencia EUROPEA 4D (Virtual), Oct 28
  https://4d.cat/es/conferencia/
- Miami: Enshittification at Books & Books, Nov 5
  https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469
- Miami: Cloudfest, Nov 6
  https://www.cloudfest.com/usa/
- Burbank: Burbank Book Festival, Nov 8
  https://www.burbankbookfestival.com/
- Lisbon: A post-American, enshittification-resistant internet, with Rabble (Web Summit), Nov 12
  https://websummit.com/sessions/lis25/92f47bc9-ca60-4997-bef3-006735b1f9c5/a-post-american-enshittification-resistant-internet/
- Cardiff: Hay Festival After Hours, Nov 13
  https://www.hayfestival.com/c-203-hay-festival-after-hours.aspx
- Oxford: Enshittification and Extraction: The Internet Sucks Now with Tim Wu (Oxford Internet Institute), Nov 14
  https://www.oii.ox.ac.uk/news-events/events/enshittification-and-extraction-the-internet-sucks-now/
- London: Enshittification with Sarah Wynn-Williams and Chris Morris, Nov 15
  https://www.barbican.org.uk/whats-on/2025/event/cory-doctorow-with-sarah-wynn-williams
- London: Downstream IRL with Aaron Bastani (Novara Media), Nov 17
  https://dice.fm/partner/tickets/event/oen5rr-downstream-irl-aaron-bastani-in-conversation-with-cory-doctorow-17th-nov-earth-london-tickets
- Seattle: Neuroscience, AI and Society (University of Washington), Dec 4
  https://compneuro.washington.edu/news-and-events/neuroscience-ai-and-society/
- Madison, CT: Enshittification at RJ Julia, Dec 8
  https://rjjulia.com/event/2025-12-08/cory-doctorow-enshittification
Recent appearances (permalink)
- The Great Enshittening (The Gray Area)
  https://www.reddit.com/r/philosophypodcasts/comments/1obghu7/the_gray_area_the_great_enshittening_10202025/
- Enshittification (Smart Cookies)
  https://www.youtube.com/watch?v=-BoORwEPlQ0
- Enshittification (The Gist)
  https://www.youtube.com/watch?v=EgBiv_KchI0
- Canadian tariffs with Avi Lewis
  https://plagal.wordpress.com/2025/10/15/cory-doctorow-talks-to-avi-lewis-about-his-proposal-to-fightback-against-trumps-tariff-attack/
- Enshittification (This Is Hell)
  https://thisishell.com/interviews/1864-cory-doctorow
2025-10-21T15:29:58+00:00
Read:
They were journalists at major news outlets in New York and D.C. before taking big pay cuts to run the Midcoast Villager, a paper covering a rocky, coastal part of Maine.
2025-10-20T14:25:52+00:00
"it isn't just X—it's Y" is by far the most annoying ChatGPTism
Mon, 20 Oct 2025 14:08:21 +0000
Today's links
- The mad king's digital killswitch: Every accusation is a confession.
- Hey look at this: Delights to delectate.
- Object permanence: Use RSS; Lifehackers in the NYT; Banned Verminous Dickens cake; Fake CIA Fox guy; Ferris wheel offices; EFF finds printer snitch-dots; Officer Bubbles sues Youtube; Sued for criticizing Proctorio; Can I sing Happy Birthday? "Under the Poppy"; International Concatenated Order of Hoo-Hoo.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
The mad king's digital killswitch (permalink)
Remember when we were all worried that Huawei had filled our telecoms infrastructure with listening devices and killswitches? It sure would be dangerous if a corporation beholden to a brutal autocrat became structurally essential to your country's continued operations, huh?
In other, unrelated news, earlier this month, Trump's DoJ ordered Apple and Google to remove apps that allowed users to report ICE's roving gangs of masked thugs, who have kidnapped thousands of our neighbors and sent them to black sites:
https://pluralistic.net/2025/10/06/rogue-capitalism/#orphaned-syrian-refugees-need-not-apply
Apple and Google capitulated. Apple also capitulated to Trump by removing apps that collect hand-verified, double-checked videos of ICE violence. Apple declared ICE's thugs to be a "protected class" that may not be disparaged in apps available to Apple's customers.
Of course, iPhones can (technically) run apps that Apple doesn't want you to run. All you have to do is "jailbreak" your phone and install an independent app store. Just one problem: the US Trade Rep bullied every country in the world into banning jailbreaking, meaning that if Trump (a man who never met a grievance that was too petty to pursue) orders Tim Cook (a man who never found a boot he wouldn't lick) to remove apps from your country's app store, you won't be able to get those apps from anyone else:
https://pluralistic.net/2025/10/15/freedom-of-movement/#data-dieselgate
Now, you could get your government to order Apple to open up its platform to third-party app stores, but they will not comply – instead, they'll drown your country in spurious legal threats:
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62025TN0354
And they'll threaten to pull out of your country altogether:
https://pluralistic.net/2025/09/26/empty-threats/#500-million-affluent-consumers
Of course, Google's no better. Not only do they capitulate to every demand from Trump, but they're also locking down Android so that you'll no longer be allowed to install apps unless Google approves of them (meaning that Trump now has a de facto veto over your Android apps):
https://pluralistic.net/2025/09/01/fulu/#i-am-altering-the-deal
For decades, China hawks have accused Chinese tech giants of being puppeteered by the Chinese state, vehicles for projecting Chinese state power around the world. Meanwhile, the Chinese state has declared war on its tech companies, treating them as competitors, not instruments:
https://pluralistic.net/2021/04/03/ambulatory-wallets/#sectoral-balances
When it comes to US foreign policy, every accusation is a confession. Snowden showed us how the US tech giants were being used to wiretap virtually every person alive for the US government. More than a decade later, Microsoft has been forced to admit that it will still allow Trump's lackeys to plunder Europeans' data, even if that data is stored on servers in the EU.
Microsoft is definitely a means for the US to project its power around the world. When Trump denounced Karim Khan, the Chief Prosecutor of the International Criminal Court, for indicting Netanyahu for genocide, Microsoft obliged by nuking Khan's email, documents, calendar and contacts:
https://apnews.com/article/icc-trump-sanctions-karim-khan-court-a4b4c02751ab84c09718b1b95cbd5db3
This is exactly the kind of thing Trump's toadies warned us would happen if we let Huawei into our countries. Every accusation is a confession.
But it's worse than that. The very worst-case speculative scenario for Huawei-as-Chinese-Trojan-horse is infinitely better than the non-speculative, real ways in which the US has killswitched and bugged the world's devices.
Take CALEA, a Clinton-era law that requires all network switches to be equipped with law-enforcement back-doors that allow anyone who holds the right credential to take over the switch and listen in, block, or spoof its data. Virtually every network switch manufactured is CALEA-compliant, which is how the NSA was able to listen in on the Greek Prime Minister's phone calls to gain competitive advantage for the competing Salt Lake City Olympic bid:
https://en.wikipedia.org/wiki/Greek_wiretapping_case_2004%E2%80%9305
CALEA backdoors are a single point of failure for the world's networking systems. Nominally, CALEA backdoors are under US control, but the reality is that lots of hackers have exploited CALEA to attack governments and corporations, inside the US and abroad. Remember Salt Typhoon, the worst-ever hacking attack on US government agencies and large corporations? The Salt Typhoon hackers used CALEA as their entry point into those networks:
https://pluralistic.net/2024/10/07/foreseeable-outcomes/#calea
US monopolists – within Trump's coercive reach – control so many of the world's critical systems. Take John Deere, the ag-tech monopolist that supplies the majority of the world's tractors. By design, those tractors do not allow the farmers who own them to alter their software. That's so John Deere can force farmers to use Deere's own technicians for repairs, and so that Deere can extract soil data from farmers' tractors to sell into the global futures market.
A tractor is a networked computer in a fancy, expensive case filled with whirling blades, and at any time, Deere can reach into any tractor and permanently immobilize it. Remember when Russian looters stole those Ukrainian tractors and took them to Chechnya, only to have Deere remotely brick their loot, turning the tractors into multi-ton paperweights? A lot of us cheered that high-tech comeuppance, but when you consider that Donald Trump could order Deere to do this to all the tractors, on his whim, this gets a lot more sinister:
https://pluralistic.net/2022/05/08/about-those-kill-switched-ukrainian-tractors/
Any government thinking about the future of geopolitics in an era of Trump's mad king fascism should be thinking about how to flash those tractors – and phones, and games consoles, and medical implants, and ventilators – with free and open software that is under its owner's control. The problem is that every country in the world has signed up to America's ban on jailbreaking.
In the EU, it's Article 6 of the Copyright Directive. In Mexico, it's the IP chapter of the USMCA. In Central America, it's via CAFTA. In Australia, it's the US-Australia Free Trade Agreement. In Canada, it's 2012's Bill C-11, which bans Canadian farmers from fixing their own tractors, Canadian drivers from taking their cars to a mechanic of their choosing, and Canadian iPhone and games console owners from choosing to buy their software from a Canadian store.
These anti-jailbreaking laws were designed as a tool of economic extraction, a way to protect American tech companies' sky-high fees and rampant privacy invasions by making it illegal, everywhere, for anyone to alter how these devices work without the manufacturer's permission.
But today, these laws have created clusters of deep-seated infrastructural vulnerabilities that reach into all our digital devices and services, including the digital devices that harvest our crops, supply oxygen to our lungs, or tell us when Trump's masked shock-troops are hunting people in our vicinity.
It's well past time for a post-American internet. Every device and every service should be designed so that the people who use them have the final say over how they work. Manufacturers' back doors and digital locks that prevent us from updating our devices with software of our choosing were never a good idea. Today, they're a catastrophe.
The world signed up to these laws because the US threatened them with tariffs if they didn't do as they were told: pass America's tech laws or face American tariffs. Well, happy Liberation Day, everyone.
When someone threatens to burn down your house unless you do as you're told, and then they burn your house down anyway, you don't have to keep doing what they told you.
When Putin invaded Ukraine, he inadvertently pushed the EU to accelerate its solarization efforts to escape its reliance on Russian gas, and now Europe is a decade ahead of schedule in meeting its zero-emissions goals:
https://electrek.co/2025/09/30/solar-leads-eu-electricity-generation-as-renewables-hit-54-percent/
Today, another mad dictator is threatening the world's infrastructure. For the rest of the world to escape dictators' demands, they will have to accelerate their independence from American tech – not just Russian gas. A post-American internet starts with abandoning the laws that give US companies – and therefore Trump – a veto over how your technology works.
Hey look at this (permalink)

- Trump’s EV retreat is a huge win for his No. 1 trade rival https://www.cnn.com/2025/10/15/business/trump-ev-retreat-china-nightcap
- Tech Workers Versus Enshittification https://cacm.acm.org/opinion/tech-workers-versus-enshittification/
- Political: Whistle Work https://heidiwaterhouse.com/political-whistle-work/
- About Cory Doctorow's "Microsoft, Tear Down That Wall!" https://euro-stack.com/blog/2025/10/tear-down-this-wall
- Atlanta’s city-run grocery sees early success, sparking debate over government’s role https://www.foxnews.com/politics/atlantas-city-run-grocery-sees-early-success-sparking-debate-over-governments-role
- How Russell Vought Became Trump’s Shadow President https://www.propublica.org/article/russ-vought-trump-shadow-president-omb
Object permanence (permalink)
#20yrsago Fox shuts down Buffy Hallowe’en musical despite Whedon’s protests https://web.archive.org/web/20051021235310/http://www.counterpulse.org/calendar.shtml#buffy
#20yrsago Norway’s public broadcaster sells out taxpayers to Microsoft https://memex.craphound.com/2005/10/16/norways-public-broadcaster-sells-out-taxpayers-to-microsoft/
#20yrsago Lifehackers profile in NYT https://www.nytimes.com/2005/10/16/magazine/meet-the-life-hackers.html
#20yrsago Pan-European DRM proposal https://dissected
#20yrsago EFF cracks hidden snitch codes in color laser prints https://w2.eff.org/Privacy/printers/docucolor/
#20yrsago Nielsen’s top-10 blog usability mistakes https://www.nngroup.com/articles/weblog-usability-top-ten-mistakes/
#20yrsago Microsoft employee calls me a communist and a liar and insists that a Microsoft monopoly will be good for Norway https://memex.craphound.com/2005/10/17/msft-employee-cory-is-a-liar-and-a-communist-msft-is-good-for-norway/
#20yrsago Dear ASCAP: May I sing Happy Birthday for my dad’s 75th? https://web.archive.org/web/20051024004347/https://blog.stayfreemagazine.org/2005/09/happy_birthday.html
#20yrsago 100 oldest .COM names in the registry https://web.archive.org/web/20051024020147/http://www.jottings.com/100-oldest-dot-com-domains.htm
#15yrsago Koja’s UNDER THE POPPY: dark, epic and erotic novel of war and intrigue https://memex.craphound.com/2010/10/18/kojas-under-the-poppy-dark-epic-and-erotic-novel-of-war-and-intrigue/
#15yrsago Ray Ozzie leaves Microsoft https://www.salon.com/2010/10/19/microsoft_roy_ozzie/
#15yrsago Google Book Search will never have an effective competitor https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1417722
#15yrsago Prentiss County, Mississippi Jail requires all inmates to have a Bible, regardless of faith https://web.archive.org/web/20061119033010/https://www.prentisscountysheriff.com/jail.aspx
#15yrsago Early distributed computing video, 1959, prefigures the net https://archive.org/details/AllAboutPolymorphics
#15yrsago Furniture made from rusted Soviet naval mines https://web.archive.org/web/20150206045826/https://marinemine.com/
#15yrsago G20 Toronto cop who was afraid of girl blowing soap bubbles sues YouTube for “ridicule” https://web.archive.org/web/20101019001110/https://www.theglobeandmail.com/news/national/toronto/officer-bubbles-launches-suit-against-youtube/article1760214/
#15yrsago Help wanted: anti-terrorism intern for Disney https://web.archive.org/web/20151015182237/http://thewaltdisneycompany.jobs/burbank-ca/global-intelligence-analyst-intern-corporate-spring-2016/408543725E4D48B196C01CAEEE602D36/job/
#15yrsago Rudy Rucker remembers Benoit Mandelbrot https://www.rudyrucker.com/blog/2010/10/16/remembering-benoit-mandelbrot/
#15yrsago Verminous Dickens cake banned from Melbourne cake show https://web.archive.org/web/20101019004804/https://hothamstreetladies.blogspot.com/2010/09/contraband-cake.html
#15yrsago English Heritage claims it owns every single image of Stonehenge, ever https://blog.fotolibra.com/2010/10/19/stonewalling-stonehenge/
#15yrsago HOWTO Make Mummy Meatloaf https://web.archive.org/web/20101022232509/http://gatherandnest.com/?p=2848
#15yrsago HOWTO catch drilling-dust with a folded Post-It https://cheezburger.com/4078311936
#10yrsago White supremacists call for Star Wars boycott because imaginary brown people https://www.themarysue.com/boycott-star-wars-vii-because-why-again/
#10yrsago In upsidedownland, Verizon upheld its fiber broadband promises to 14 cities https://www.techdirt.com/2015/10/19/close-only-counts-horseshoes-hand-grenades-apparently-verizons-fiber-optic-installs/
#10yrsago Survivor-count for the Chicago PD’s black-site/torture camp climbs to 7,000+ https://www.theguardian.com/us-news/2015/oct/19/homan-square-chicago-police-disappeared-thousands
#10yrsago A Swedish doctor’s collection of English anatomical idioms https://news.harvard.edu/gazette/story/2015/10/body-of-work/
#10yrsago Some suggestions for sad, rich people https://whatever.scalzi.com/2015/10/18/the-1-of-problems/
#10yrsago That “CIA veteran” who was always on Fox News? Arrested for lying about being in the CIA https://www.abc.net.au/news/2015-10-16/fox-news-terrorism-expert-arrested-for-pretending-to-be-cia/6859576
#10yrsago Eric Holder: I didn’t prosecute bankers for reasons unrelated to my $3M/year law firm salary https://theintercept.com/2015/10/16/holder-defends-record-of-not-prosecuting-financial-fraud/
#10yrsago Titanic victory for fair use: appeals court says Google’s book-scanning is legal https://memex.craphound.com/2015/10/16/titanic-victory-for-fair-use-appeals-court-says-googles-book-scanning-is-legal/
#10yrsago Snowden for drones: The Intercept’s expose on US drone attacks, revealed by a new leaker https://theintercept.com/drone-papers/
#10yrsago Tweens are smarter than you think: the wonderful, true story of the ERMAHGERD meme https://www.vanityfair.com/culture/2015/10/ermahgerd-girl-true-story
#10yrsago UK MPs learn that GCHQ can spy on them, too, so now we may get a debate on surveillance https://www.theguardian.com/world/2015/oct/14/gchq-monitor-communications-mps-peers-tribunal-wilson-doctrine
#10yrsago Now we know the NSA blew the black budget breaking crypto, how can you defend yourself? https://www.eff.org/deeplinks/2015/10/how-to-protect-yourself-from-nsa-attacks-1024-bit-DH
#10yrsago 23andme & Ancestry.com aggregated the world’s DNA; the police obliged them by asking for it https://web.archive.org/web/20151023033455/https://fusion.net/story/215204/law-enforcement-agencies-are-asking-ancestry-com-and-23andme-for-their-customers-dna/
#10yrsago A chess-set you wear in a ring https://imgur.com/worlds-smallest-chess-set-ring-Hh3Jeip
#10yrsago Exploiting smartphone cables as antennae that receive silent, pwning voice commands https://www.wired.com/2015/10/this-radio-trick-silently-hacks-siri-from-16-feet-away/
#10yrsago NYPD won’t disclose what it does with its secret military-grade X-ray vans https://web.archive.org/web/20151017212024/http://www.nyclu.org/news/nypd-unlawfully-hiding-x-ray-van-use-city-neighborhoods-nyclu-argues
#10yrsago The International Concatenated Order of Hoo-Hoo: greatly improved, but something important has been lost https://back-then.tumblr.com/post/131407456141/the-international-concatenated-order-of-hoo-hoo
#5yrsago Happy World Standards Day or not https://pluralistic.net/2020/10/18/middle-gauge-muddle/#aoc-flex
#5yrsago Amazon returns end up in landfills https://pluralistic.net/2020/10/16/lucky-ducky/#landfillers
#5yrsago UK to tax Amazon's victims https://pluralistic.net/2020/10/16/lucky-ducky/#amazon-tax
#5yrsago Ferris wheel offices https://pluralistic.net/2020/10/16/lucky-ducky/#gondoliers
#5yrsago Kids reason, adults rationalize https://pluralistic.net/2020/10/19/nanotubes-r-us/#kids-r-alright
#1yrago You should be using an RSS reader https://pluralistic.net/2024/10/16/keep-it-really-simple-stupid/#read-receipts-are-you-kidding-me-seriously-fuck-that-noise
#5yrsago Educator sued for criticising "invigilation" tool https://pluralistic.net/2020/10/17/proctorio-v-linkletter/#proctorio
#1yrago Blue states should play "constitutional hardball" https://pluralistic.net/2024/10/18/states-rights/#cold-civil-war
#1yrago Penguin Random House, AI, and writers' rights https://pluralistic.net/2024/10/19/gander-sauce/#just-because-youre-on-their-side-it-doesnt-mean-theyre-on-your-side
Upcoming appearances (permalink)

- San Francisco: Enshittification at Public Works with Jenny Odell (The Booksmith), Oct 20
  https://app.gopassage.com/events/doctorow25
- PDX: Enshittification at Powell's, Oct 21
  https://www.powells.com/events/cory-doctorow-10-21-25
- Seattle: Enshittification and the Rot Economy, with Ed Zitron (Clarion West), Oct 22
  https://www.clarionwest.org/event/2025-deep-dives-cory-doctorow/
- Vancouver: Enshittification with David Moscrop (Vancouver Writers Festival), Oct 23
  https://www.showpass.com/2025-festival-39/
- Montreal: Montreal Attention Forum keynote, Oct 24
  https://www.attentionconferences.com/conferences/2025-forum
- Montreal: Enshittification at Librarie Drawn and Quarterly, Oct 24
  https://mtl.drawnandquarterly.com/events/3757420251024
- Ottawa: Enshittification (Ottawa Writers Festival), Oct 25
  https://writersfestival.org/events/fall-2025/enshittification
- Toronto: Enshittification with Dan Werb (Type Books), Oct 27
  https://www.instagram.com/p/DO81_1VDngu/?img_index=1
- Barcelona: Conferencia EUROPEA 4D (Virtual), Oct 28
  https://4d.cat/es/conferencia/
- Miami: Enshittification at Books & Books, Nov 5
  https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469
- Miami: Cloudfest, Nov 6
  https://www.cloudfest.com/usa/
- Burbank: Burbank Book Festival, Nov 8
  https://www.burbankbookfestival.com/
- Lisbon: A post-American, enshittification-resistant internet, with Rabble (Web Summit), Nov 12
  https://websummit.com/sessions/lis25/92f47bc9-ca60-4997-bef3-006735b1f9c5/a-post-american-enshittification-resistant-internet/
- Cardiff: Hay Festival After Hours, Nov 13
  https://www.hayfestival.com/c-203-hay-festival-after-hours.aspx
- Oxford: Enshittification and Extraction: The Internet Sucks Now with Tim Wu (Oxford Internet Institute), Nov 14
  https://www.oii.ox.ac.uk/news-events/events/enshittification-and-extraction-the-internet-sucks-now/
- London: Enshittification with Sarah Wynn-Williams and Chris Morris, Nov 15
  https://www.barbican.org.uk/whats-on/2025/event/cory-doctorow-with-sarah-wynn-williams
- London: Downstream IRL with Aaron Bastani (Novara Media), Nov 17
  https://dice.fm/partner/tickets/event/oen5rr-downstream-irl-aaron-bastani-in-conversation-with-cory-doctorow-17th-nov-earth-london-tickets
- Seattle: Neuroscience, AI and Society (University of Washington), Dec 4
  https://compneuro.washington.edu/news-and-events/neuroscience-ai-and-society/
Recent appearances (permalink)
- Enshittification (Smart Cookies)
  https://www.youtube.com/watch?v=-BoORwEPlQ0
- Enshittification (The Gist)
  https://www.youtube.com/watch?v=EgBiv_KchI0
- Canadian tariffs with Avi Lewis
  https://plagal.wordpress.com/2025/10/15/cory-doctorow-talks-to-avi-lewis-about-his-proposal-to-fightback-against-trumps-tariff-attack/
- Enshittification (This Is Hell)
  https://thisishell.com/interviews/1864-cory-doctorow
- Enshittification (Computer Says Maybe)
  https://csm.transistor.fm/episodes/gotcha-enshittification-w-cory-doctorow
Latest books (permalink)
- "Canny Valley": a limited edition collection of the collages I create for Pluralistic, self-published, September 2025
- "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
  https://us.macmillan.com/books/9780374619329/enshittification/
- "Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
- "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
- "The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
- "The Internet Con": a nonfiction book about interoperability and Big Tech (Verso), September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books, http://redteamblues.com.
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 (https://chokepointcapitalism.com).
Upcoming books (permalink)
- "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
- "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026
- "The Memex Method," Farrar, Straus, Giroux, 2026
- "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026
Colophon (permalink)
Today's top sources:
Currently writing:
- "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.
- A Little Brother short story about DIY insulin. PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
2025-10-20T02:47:19+00:00
Trump's crypto windfall represents a mixing of personal and government interests at an unprecedented scale.
2025-10-11T09:49:59-07:00
Today I just launched support for BlueSky as a new authentication option in IndieLogin.com!
IndieLogin.com is a developer service that allows users to log in to a website with their domain. It delegates the actual user authentication out to various external services, whether that is an IndieAuth server, GitHub, GitLab, Codeberg, or just an email confirmation code, and now also BlueSky.
This means if you have a custom domain as your BlueSky handle, you can now use it to log in to websites like indieweb.org directly!

Alternatively, you can add a link to your BlueSky handle from your website with a rel="me atproto" attribute, similar to how you would link to your GitHub profile from your website.
<a href="https://example.bsky.social" rel="me atproto">example.bsky.social</a>
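To illustrate what a consuming service has to do with such a link, here's a minimal discovery sketch using only Python's standard library. This is a hypothetical helper, not IndieLogin.com's actual code: it just collects the `href` of any `<a>` or `<link>` element whose `rel` tokens include both `me` and `atproto`.

```python
from html.parser import HTMLParser

class RelMeAtprotoParser(HTMLParser):
    """Collect hrefs of <a>/<link> elements whose rel attribute
    contains both the "me" and "atproto" tokens."""

    def __init__(self):
        super().__init__()
        self.handles = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("a", "link"):
            return
        attr_map = dict(attrs)
        # rel is a space-separated list of tokens
        rels = (attr_map.get("rel") or "").split()
        if "me" in rels and "atproto" in rels and attr_map.get("href"):
            self.handles.append(attr_map["href"])

parser = RelMeAtprotoParser()
parser.feed('<a href="https://example.bsky.social" rel="me atproto">example.bsky.social</a>')
# parser.handles now holds the discovered handle link(s)
```

A real service would fetch the user's homepage, feed the HTML through a parser like this, and then resolve the discovered handle against ATProto.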
This is made possible thanks to BlueSky's support of the new OAuth Client ID Metadata Document specification, which was recently adopted by the OAuth Working Group. This means that, as the developer of the IndieLogin.com service, I didn't have to register for any BlueSky API keys in order to use the OAuth server! IndieLogin.com publishes its own client metadata, which the BlueSky OAuth server fetches directly. This is the same client metadata that an IndieAuth server will parse as well! Aren't standards fun!
The hardest part about the whole process was probably adding DPoP support. Actually creating the DPoP JWT wasn't that bad but the tricky part was handling the DPoP server nonces sent back. I do wish we had a better solution for that mechanism in DPoP, but I remember the reasoning for doing it this way and I guess we just have to live with it now.
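For readers who haven't seen one, here's a rough sketch of the shape of a DPoP proof JWT (per RFC 9449): a header binding the client's public key, plus claims naming the HTTP method and URI, with a fresh `jti` per request. The key values and token endpoint URL below are placeholders, and the final ES256 signature step is elided since it needs an EC crypto library; this is not IndieLogin.com's actual code.

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Header: typ marks this as a DPoP proof; jwk carries the client's
# public key (placeholder coordinates here).
header = {
    "typ": "dpop+jwt",
    "alg": "ES256",
    "jwk": {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."},
}

# Claims: the HTTP method/URI being proven, issued-at, and a unique jti.
# When the server replies with a DPoP-Nonce header, a fresh proof must
# echo it back in a "nonce" claim -- the fiddly part mentioned above.
claims = {
    "htm": "POST",
    "htu": "https://auth.example/token",  # hypothetical token endpoint
    "iat": int(time.time()),
    "jti": str(uuid.uuid4()),
}

signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
# A real proof appends an ES256 signature over signing_input,
# computed with the private key matching the jwk in the header.
```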
This was a fun exercise in implementing a bunch of the specs I've been working on recently!
- OAuth 2.1
- DPoP
- Client ID Metadata Document
- Pushed Authorization Requests
- OAuth for Browser-Based Apps
- Protected Resource Metadata
Here's the link to the full ATProto OAuth docs for reference.
2025-10-10T00:00:00+00:00
Hello! Earlier this summer I was talking to a friend about how much I love using fish, and how I love that I don’t have to configure it. They said that they feel the same way about the helix text editor, and so I decided to give it a try.
I’ve been using it for 3 months now and here are a few notes.
why helix: language servers
I think what motivated me to try Helix is that I’ve been trying to get a working language server setup (so I can do things like “go to definition”) and getting a setup that feels good in Vim or Neovim just felt like too much work.
After using Vim/Neovim for 20 years, I've tried both "build my own custom configuration from scratch" and "use someone else's pre-built configuration system," and even though I love Vim, I was excited about having things just work without having to work on my configuration at all.
Helix comes with built in language server support, and it feels nice to be able to do things like “rename this symbol” in any language.
the search is great
One of my favourite things about Helix is the search! If I’m searching all the files in my repository for a string, it lets me scroll through the potential matching files and see the full context of the match, like this:
For comparison, here’s what the vim ripgrep plugin I’ve been using looks like:
There’s no context for what else is around that line.
the quick reference is nice
One thing I like about Helix is that when I press g, I get a little help popup
telling me places I can go. I really appreciate this because I don’t often use
the “go to definition” or “go to reference” feature and I often forget the
keyboard shortcut.
some vim -> helix translations
- Helix doesn’t have marks like `ma`/`'a`; instead I’ve been using `Ctrl+O` and `Ctrl+I` to go back (or forward) to the last cursor location
- I think Helix does have macros, but I’ve been using multiple cursors in every case that I would have previously used a macro. I like multiple cursors a lot more than writing macros all the time. If I want to batch change something in the document, my workflow is to press `%` (to highlight everything), then `s` to select (with a regex) the things I want to change, then I can just edit all of them as needed.
- Helix doesn’t have neovim-style tabs; instead it has a nice buffer switcher (`<space>b`) I can use to switch to the buffer I want. There’s a pull request here to implement neovim-style tabs. There’s also a setting `bufferline = "multiple"` which can act a bit like tabs, with `gp`/`gn` for prev/next “tab” and `:bc` to close a “tab”.
some helix annoyances
Here’s everything that’s annoyed me about Helix so far.
- I like the way Helix’s `:reflow` works much less than how vim reflows text with `gq`. It doesn’t work as well with lists. (github issue)
- If I’m making a Markdown list, pressing “enter” at the end of a list item won’t continue the list. There’s a partial workaround for bulleted lists but I don’t know one for numbered lists.
- No persistent undo yet: in vim I could use an undofile so that I could undo changes even after quitting. Helix doesn’t have that feature yet. (github PR)
- Helix doesn’t autoreload files after they change on disk; I have to run `:reload-all` (`:ra<tab>`) to manually reload them. Not a big deal.
- Sometimes it crashes, maybe every week or so. I think it might be this issue.
The “markdown list” and reflowing issues come up a lot for me because I spend a lot of time editing Markdown lists, but I keep using Helix anyway so I guess they can’t be making me that mad.
switching was easier than I thought
I was worried that relearning 20 years of Vim muscle memory would be really hard.
It turned out to be easier than I expected: I started using Helix on a vacation for a little low-stakes coding project I was doing on the side, and after a week or two it didn’t feel so disorienting anymore. I think it might be hard to switch back and forth between Vim and Helix, but I haven’t needed to use Vim recently so I don’t know if that’ll ever become an issue for me.
The first time I tried Helix I tried to force it to use keybindings that were more similar to Vim and that did not work for me. Just learning the “Helix way” was a lot easier.
There are still some things that throw me off: for example, `w` in vim and `w` in Helix don’t have the same idea of what a “word” is (the Helix one includes the space after the word, the Vim one doesn’t).
using a terminal-based text editor
For many years I’d mostly been using a GUI version of vim/neovim, so switching to actually using an editor in the terminal was a bit of an adjustment.
I ended up deciding on:
- Every project gets its own terminal window, and all of the tabs in that window (mostly) have the same working directory
- I make my Helix tab the first tab in the terminal window
It works pretty well, I might actually like it better than my previous workflow.
my configuration
I appreciate that my configuration is really simple, compared to my neovim configuration which is hundreds of lines. It’s mostly just 4 keyboard shortcuts.
theme = "solarized_light"
[editor]
# Sync clipboard with system clipboard
default-yank-register = "+"
[keys.normal]
# I didn't like that Ctrl+C was the default "toggle comments" shortcut
"#" = "toggle_comments"
# I didn't feel like learning a different way
# to go to the beginning/end of a line so
# I remapped ^ and $
"^" = "goto_first_nonwhitespace"
"$" = "goto_line_end"
[keys.select]
"^" = "goto_first_nonwhitespace"
"$" = "goto_line_end"
[keys.normal.space]
# I write a lot of text so I need to constantly reflow,
# and missed vim's `gq` shortcut
l = ":reflow"
There’s a separate languages.toml configuration where I set some language
preferences, like turning off autoformatting.
For example, here’s my Python configuration:
[[language]]
name = "python"
formatter = { command = "black", args = ["--stdin-filename", "%{buffer_name}", "-"] }
language-servers = ["pyright"]
auto-format = false
we’ll see how it goes
Three months is not that long, and it’s possible that I’ll decide to go back to Vim at some point. For example, I wrote a post about switching to nix a while back but after maybe 8 months I switched back to Homebrew (though I’m still using NixOS to manage one little server, and I’m still satisfied with that).
2025-10-08T12:14:38-07:00
The IETF OAuth Working Group has adopted the Client ID Metadata Document specification!
This specification defines a mechanism through which an OAuth client can identify itself to authorization servers, without prior dynamic client registration or other existing registration.
Clients identify themselves with their own URL, and host their metadata (name, logo, redirect URL) in a JSON document at that URL. They then use that URL as the client_id to introduce themselves to an authorization server for the first time.
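For illustration, a client metadata document might look roughly like this. This is a sketch, not a normative example: the field names follow standard OAuth client metadata, the exact set a given server requires varies, and the URLs are hypothetical. Note that the `client_id` is the URL the document itself is served from.

```json
{
  "client_id": "https://app.example.com/oauth/client-metadata.json",
  "client_name": "Example App",
  "client_uri": "https://app.example.com",
  "logo_uri": "https://app.example.com/logo.png",
  "redirect_uris": ["https://app.example.com/callback"],
  "grant_types": ["authorization_code", "refresh_token"],
  "response_types": ["code"],
  "token_endpoint_auth_method": "none"
}
```

The authorization server fetches this document from the `client_id` URL the first time it sees the client, instead of requiring prior registration.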
The mechanism of clients identifying themselves as a URL has been in use in IndieAuth for over a decade, and more recently has been adopted by BlueSky for their OAuth API. The recent surge in interest in MCP has further demonstrated the need for this to be a standardized mechanism, and was the main driver in the latest round of discussion for the document! This could replace Dynamic Client Registration in MCP, dramatically simplifying management of clients, as well as enabling servers to limit access to specific clients if they want.
The folks at Stytch put together a really nice explainer website about it too! cimd.dev
Thanks to everyone for your contributions and feedback so far! And thanks to my co-author Emilia Smith for her work on the document!
2025-10-04T07:32:57-07:00
I just released some updates for Meetable, my open source event listing website.
The major new feature is the ability to let users log in with a Discord account. A Meetable instance can be linked to a Discord server to enable any member of the server to log in to the site. You can also restrict who can log in based on Discord "roles", so you can limit who can edit events to only certain Discord members.
One of the first questions I get about Meetable is whether recurring events are supported. My answer has always been "no". In general, it's too easy for recurring events on community calendars to get stale: if an organizer forgets to cancel or just stops showing up, that isn't visible unless someone takes the time to clean up the recurrence. Instead, it's healthier to require that each event be created manually. There is a "clone event" feature that makes it easy to copy all the details from a previous event to quickly create these sorts of recurring events by hand. In this update, I added a feature to streamline this even further: the next recurrence is now predicted based on the past interval of the event.
For example, for a biweekly cadence, the following steps happen now:
- You would create the first instance manually, say for October 1
- You click "Clone Event" and change the date of the new event to October 15
- Now when you click "Clone Event" on the October 15 event, it will pre-fill October 29 based on the fact that the October 15 event was created 2 weeks after the event it was cloned from
Currently this only works by counting days, so wouldn't work for things like "first Tuesday of the month" or "the 1st of the month", but I hope this saves some time in the future regardless. If "first Tuesday" or specific days of the month are an important use case for you, let me know and I can try to come up with a solution.
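The day-counting prediction described above can be sketched in a few lines. This is a hypothetical helper to show the idea, not Meetable's actual code:

```python
from datetime import date, timedelta

def predict_next_date(cloned_from: date, event: date) -> date:
    """Predict the next occurrence by repeating the interval between
    an event and the event it was cloned from. Counts days only, so
    patterns like "first Tuesday of the month" aren't captured."""
    interval: timedelta = event - cloned_from
    return event + interval

# An Oct 1 event cloned to Oct 15 suggests Oct 29 as the next date.
next_date = predict_next_date(date(2025, 10, 1), date(2025, 10, 15))
```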
Minor changes/fixes below:
- Added "Create New Event" to the "Add Event" dropdown menu because it wasn't obvious "Add Event" was clickable.
- Meeting link no longer appears for cancelled events. (Actually the meeting link only appears for "confirmed" events.)
- If you add a meeting link but don't set a timezone, a warning message appears on the event.
- Added a setting to show a message when uploading a photo, you can use this to describe a photo license policy for example.
- Added a "user profile" page, and if users are configured to fetch profile info from their website, a button to re-fetch the profile info will appear.
2025-08-06T17:00:00-07:00
Every time I take a Lyft from the San Francisco airport to downtown going up 101, I notice the billboards. The billboards on 101 are always such a good snapshot in time of the current peak of the Silicon Valley hype cycle. I've decided to capture photos of the billboards every time I am there, to see how this changes over time.
Here's a photo dump from the 101 billboards from August 2025. The theme is clearly AI. Apologies for the slightly blurry photos; these were taken while driving 60mph down the highway, some of them at night.
2025-06-26T00:00:00+00:00
Hello! After many months of writing deep dive blog posts about the terminal, on Tuesday I released a new zine called “The Secret Rules of the Terminal”!
You can get it for $12 here: https://wizardzines.com/zines/terminal, or get a 15-pack of all my zines here.
Here’s the cover:
the table of contents
Here’s the table of contents:
why the terminal?
I’ve been using the terminal every day for 20 years but even though I’m very confident in the terminal, I’ve always had a bit of an uneasy feeling about it. Usually things work fine, but sometimes something goes wrong and it just feels like investigating it is impossible, or at least like it would open up a huge can of worms.
So I started trying to write down a list of weird problems I’ve run into in the terminal, and I realized that the terminal has a lot of tiny inconsistencies like:
- sometimes you can use the arrow keys to move around, but sometimes pressing the arrow keys just prints `^[[D`
- sometimes you can use the mouse to select text, but sometimes you can’t
- sometimes your commands get saved to a history when you run them, and sometimes they don’t
- some shells let you use the up arrow to see the previous command, and some don’t
If you use the terminal daily for 10 or 20 years, even if you don’t understand exactly why these things happen, you’ll probably build an intuition for them.
But having an intuition for them isn’t the same as understanding why they happen. When writing this zine I actually had to do a lot of work to figure out exactly what was happening in the terminal to be able to talk about how to reason about it.
the rules aren’t written down anywhere
It turns out that the “rules” for how the terminal works (how do
you edit a command you type in? how do you quit a program? how do you fix your
colours?) are extremely hard to fully understand, because “the terminal” is actually
made of many different pieces of software (your terminal emulator, your
operating system, your shell, the core utilities like grep, and every other random
terminal program you’ve installed) which are written by different people with different
ideas about how things should work.
So I wanted to write something that would explain:
- how the 4 pieces of the terminal (your shell, terminal emulator, programs, and TTY driver) fit together to make everything work
- some of the core conventions for how you can expect things in your terminal to work
- lots of tips and tricks for how to use terminal programs
this zine explains the most useful parts of terminal internals
Terminal internals are a mess. A lot of it is just the way it is because someone made a decision in the 80s and now it’s impossible to change, and honestly I don’t think learning everything about terminal internals is worth it.
But some parts are not that hard to understand and can really make your experience in the terminal better, like:
- if you understand what your shell is responsible for, you can configure your shell (or use a different one!) to access your history more easily, get great tab completion, and so much more
- if you understand escape codes, it’s much less scary when `cat`ing a binary to stdout messes up your terminal, you can just type `reset` and move on
- if you understand how colour works, you can get rid of bad colour contrast in your terminal so you can actually read the text
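To see what escape codes actually look like, here’s a tiny demo (the `^[` you’ll see is how `cat -v` displays the invisible ESC character):

```shell
# \033 is the ESC character; "[31m" switches the text colour to red
# and "[0m" resets it.
printf '\033[31mthis is red\033[0m back to normal\n'

# Pipe through `cat -v` to make the invisible escape codes visible
# (ESC shows up as ^[):
printf '\033[31mred\033[0m\n' | cat -v
```

This is the same idea as the `unbuffer program > out; less out` trick mentioned below: make the invisible bytes visible so you can see what a program is actually printing.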
I learned a surprising amount writing this zine
When I wrote How Git Works, I thought I
knew how Git worked, and I was right. But the terminal is different. Even
though I feel totally confident in the terminal and even though I’ve used it
every day for 20 years, I had a lot of misunderstandings about how the terminal
works and (unless you’re the author of tmux or something) I think there’s a
good chance you do too.
A few things I learned that are actually useful to me:
- I understand the structure of the terminal better and so I feel more confident debugging weird terminal stuff that happens to me (I was even able to suggest a small improvement to fish!). Identifying exactly which piece of software is causing a weird thing to happen in my terminal still isn’t easy but I’m a lot better at it now.
- you can write a shell script to copy to your clipboard over SSH
- how `reset` works under the hood (it does the equivalent of `stty sane; sleep 1; tput reset`) – basically I learned that I don’t ever need to worry about remembering `stty sane` or `tput reset` and I can just run `reset` instead
- how to look at the invisible escape codes that a program is printing out (run `unbuffer program > out; less out`)
- why the builtin REPLs on my Mac like `sqlite3` are so annoying to use (they use `libedit` instead of `readline`)
blog posts I wrote along the way
As usual these days I wrote a bunch of blog posts about various side quests:
- How to add a directory to your PATH
- “rules” that terminal problems follow
- why pipes sometimes get “stuck”: buffering
- some terminal frustrations
- ASCII control characters in my terminal on “what’s the deal with Ctrl+A, Ctrl+B, Ctrl+C, etc?”
- entering text in the terminal is complicated
- what’s involved in getting a “modern” terminal setup?
- reasons to use your shell’s job control
- standards for ANSI escape codes, which is really me trying to figure out if I think the `terminfo` database is serving us well today
people who helped with this zine
A long time ago I used to write zines mostly by myself but with every project I get more and more help. I met with Marie Claire LeBlanc Flanagan every weekday from September to June to work on this one.
The cover is by Vladimir Kašiković, Lesley Trites did copy editing, Simon Tatham (who wrote PuTTY) did technical review, our Operations Manager Lee did the transcription as well as a million other things, and Jesse Luehrs (who is one of the very few people I know who actually understands the terminal’s cursed inner workings) had so many incredibly helpful conversations with me about what is going on in the terminal.
get the zine
Here are some links to get the zine again:
As always, you can get either a PDF version to print at home or a print version shipped to your house. The only caveat is print orders will ship in August – I need to wait for orders to come in to get an idea of how many I should print before sending it to the printer.
2025-06-10T00:00:00+00:00
I have never been a C programmer but every so often I need to compile a C/C++
program from source. This has been kind of a struggle for me: for a
long time, my approach was basically “install the dependencies, run make, if
it doesn’t work, either try to find a binary someone has compiled or give up”.
“Hope someone else has compiled it” worked pretty well when I was running Linux but since I’ve been using a Mac for the last couple of years I’ve been running into more situations where I have to actually compile programs myself.
So let’s talk about what you might have to do to compile a C program! I’ll use a couple of examples of specific C programs I’ve compiled and talk about a few things that can go wrong. Here are three programs we’ll be talking about compiling:
step 1: install a C compiler
This is pretty simple: on an Ubuntu system if I don’t already have a C compiler I’ll install one with:
sudo apt-get install build-essential
This installs gcc, g++, and make. The situation on a Mac is more
confusing, but it’s something like “install the Xcode Command Line Tools” (`xcode-select --install`).
step 2: install the program’s dependencies
Unlike some newer programming languages, C doesn’t have a dependency manager. So if a program has any dependencies, you need to hunt them down yourself. Thankfully because of this, C programmers usually keep their dependencies very minimal and often the dependencies will be available in whatever package manager you’re using.
There’s almost always a section explaining how to get the dependencies in the README, for example in paperjam’s README, it says:
To compile PaperJam, you need the headers for the libqpdf and libpaper libraries (usually available as libqpdf-dev and libpaper-dev packages).
You may need `a2x` (found in AsciiDoc) for building manual pages.
So on a Debian-based system you can install the dependencies like this:
sudo apt install -y libqpdf-dev libpaper-dev
If a README gives a name for a package (like libqpdf-dev), I’d basically
always assume that they mean “in a Debian-based Linux distro”: if you’re on a
Mac brew install libqpdf-dev will not work. I still have not 100% gotten
the hang of developing on a Mac, so I don’t have many tips there yet. I
guess in this case it would be brew install qpdf if you’re using Homebrew.
step 3: run ./configure (if needed)
Some C programs come with a Makefile and some instead come with a script called
./configure. For example, if you download sqlite’s source code, it has a ./configure script in
it instead of a Makefile.
My understanding of this ./configure script is:
- You run it, it prints out a lot of somewhat inscrutable output, and then it either generates a `Makefile` or fails because you’re missing some dependency
- The `./configure` script is part of a system called autotools that I have never needed to learn anything about beyond “run it to generate a `Makefile`”
I think there might be some options you can pass to get the ./configure
script to produce a different Makefile but I have never done that.
step 4: run make
The next step is to run make to try to build a program. Some notes about
make:
- Sometimes you can run `make -j8` to parallelize the build and make it go faster
- It usually prints out a million compiler warnings when compiling the program. I always just ignore them. I didn’t write the software! The compiler warnings are not my problem.
compiler errors are often dependency problems
Here’s an error I got while compiling paperjam on my Mac:
/opt/homebrew/Cellar/qpdf/12.0.0/include/qpdf/InputSource.hh:85:19: error: function definition does not declare parameters
85 | qpdf_offset_t last_offset{0};
| ^
Over the years I’ve learned it’s usually best not to overthink problems like
this: if it’s talking about qpdf, there’s a good change it just means that
I’ve done something wrong with how I’m including the qpdf dependency.
Now let’s talk about some ways to get the qpdf dependency included in the right way.
the world’s shortest introduction to the compiler and linker
Before we talk about how to fix dependency problems: building C programs is split into 2 steps:
- Compiling the code into object files (with `gcc` or `clang`)
- Linking those object files into a final binary (with `ld`)
It’s important to know this when building a C program because sometimes you need to pass the right flags to the compiler and linker to tell them where to find the dependencies for the program you’re compiling.
make uses environment variables to configure the compiler and linker
If I run make on my Mac to install paperjam, I get this error:
c++ -o paperjam paperjam.o pdf-tools.o parse.o cmds.o pdf.o -lqpdf -lpaper
ld: library 'qpdf' not found
This is not because qpdf is not installed on my system (it actually is!). But
the compiler and linker don’t know how to find the qpdf library. To fix this, we need to:
- pass `-I/opt/homebrew/include` to the compiler (to tell it where to find the header files)
- pass `-L/opt/homebrew/lib -liconv` to the linker (to tell it where to find library files and to link in `iconv`)
And we can get make to pass those extra parameters to the compiler and linker using environment variables!
To see how this works: inside paperjam’s Makefile you can see a bunch of environment variables, like LDLIBS here:
paperjam: $(OBJS)
$(LD) -o $@ $^ $(LDLIBS)
Everything you put into the LDLIBS environment variable gets passed to the
linker (ld) as a command line argument.
secret environment variable: CPPFLAGS
Makefiles sometimes define their own environment variables that they pass to
the compiler/linker, but make also has a bunch of “implicit” environment
variables which it will automatically pass to the C compiler and linker. There’s a full list of implicit environment variables here,
but one of them is CPPFLAGS, which gets automatically passed to the C compiler.
(technically it would be more normal to use CXXFLAGS for this, but this
particular Makefile hardcodes CXXFLAGS so setting CPPFLAGS was the only
way I could find to set the compiler flags without editing the Makefile)
two ways to pass environment variables to make
I learned thanks to @zwol that there are actually two ways to pass environment variables to make:
- `CXXFLAGS=xyz make` (the usual way)
- `make CXXFLAGS=xyz`
The difference between them is that make CXXFLAGS=xyz will override the
value of CXXFLAGS set in the Makefile but CXXFLAGS=xyz make won’t.
I’m not sure which way is the norm but I’m going to use the first way in this post.
how to use CPPFLAGS and LDLIBS to fix this compiler error
Now that we’ve talked about how CPPFLAGS and LDLIBS get passed to the
compiler and linker, here’s the final incantation that I used to get the
program to build successfully!
CPPFLAGS="-I/opt/homebrew/include" LDLIBS="-L/opt/homebrew/lib -liconv" make paperjam
This passes -I/opt/homebrew/include to the compiler and -L/opt/homebrew/lib -liconv to the linker.
Also I don’t want to pretend that I “magically” knew that those were the right arguments to pass, figuring them out involved a bunch of confused Googling that I skipped over in this post. I will say that:
- the `-I` compiler flag tells the compiler which directory to find header files in, like `/opt/homebrew/include/qpdf/QPDF.hh`
- the `-L` linker flag tells the linker which directory to find libraries in, like `/opt/homebrew/lib/libqpdf.a`
- the `-l` linker flag tells the linker which libraries to link in, like `-liconv` means “link in the `iconv` library”, or `-lm` means “link in the `math` library”
tip: how to just build 1 specific file: make $FILENAME
Yesterday I discovered this cool tool called
qf which you can use to quickly
open files from the output of ripgrep.
qf is in a big directory of various tools, but I only wanted to compile qf.
So I just compiled qf, like this:
make qf
Basically if you know (or can guess) the output filename of the file you’re
trying to build, you can tell make to just build that file by running `make $FILENAME`.
tip: you don’t need a Makefile
I sometimes write 5-line C programs with no dependencies, and I just learned
that if I have a file called blah.c, I can just compile it like this without creating a Makefile:
make blah
It gets automatically expanded to cc -o blah blah.c, which saves a bit of
typing. I have no idea if I’m going to remember this (I might just keep typing
gcc -o blah blah.c anyway) but it seems like a fun trick.
tip: look at how other packaging systems built the same C program
If you’re having trouble building a C program, maybe other people had problems building it too! Every Linux distribution has build files for every package that they build, so even if you can’t install packages from that distribution directly, maybe you can get tips from that Linux distro for how to build the package. Realizing this (thanks to my friend Dave) was a huge ah-ha moment for me.
For example, this line from the nix package for paperjam says:
env.NIX_LDFLAGS = lib.optionalString stdenv.hostPlatform.isDarwin "-liconv";
This is basically saying “pass the linker flag -liconv to build this on a
Mac”, so that’s a clue we could use to build it.
That same file also says env.NIX_CFLAGS_COMPILE = "-DPOINTERHOLDER_TRANSITION=1";. I’m not sure what this means, but when I try
to build the paperjam package I do get an error about something called a
PointerHolder, so I guess that’s somehow related to the “PointerHolder
transition”.
step 5: installing the binary
Once you’ve managed to compile the program, probably you want to install it somewhere!
Some Makefiles have an install target that lets you install the tool on your
system with make install. I’m always a bit scared of this (where is it going
to put the files? what if I want to uninstall them later?), so if I’m compiling
a pretty simple program I’ll often just manually copy the binary to install it
instead, like this:
cp qf ~/bin
step 6: maybe make your own package!
Once I figured out how to do all of this, I realized that I could use my new
make knowledge to contribute a paperjam package to Homebrew! Then I could
just brew install paperjam on future systems.
The good thing is that even if the details of all of the different packaging systems differ, they fundamentally all use C compilers and linkers.
it can be useful to understand a little about C even if you’re not a C programmer
I think all of this is an interesting example of how it can be useful to understand some basics of how C programs work (like “they have header files”) even if you’re never planning to write a nontrivial C program in your life.
It feels good to have some ability to compile C/C++ programs myself, even
though I’m still not totally confident about all of the compiler and linker
flags and I still plan to never learn anything about how autotools works other
than “you run ./configure to generate the Makefile”.
Two things I left out of this post:
- `LD_LIBRARY_PATH` / `DYLD_LIBRARY_PATH` (which you use to tell the dynamic linker at runtime where to find dynamically linked files), because I can’t remember the last time I ran into an `LD_LIBRARY_PATH` issue and couldn’t find an example
- `pkg-config`, which I think is important but I don’t understand yet
2025-05-12T22:01:23-07:00
I've seen a lot of complaints about how MCP isn't ready for the enterprise.
I agree, although maybe not for the reasons you think. But don't worry, this isn't just a rant! I believe we can fix it!
The good news is the recent updates to the MCP authorization spec that separate out the role of the authorization server from the MCP server have now put the building blocks in place to make this a lot easier.
But let's back up and talk about what enterprise buyers expect when they are evaluating AI tools to bring into their companies.
Single Sign-On
At a minimum, an enterprise admin expects to be able to put an application under their single sign-on system. This enables the company to manage which users are allowed to use which applications, and prevents their users from needing to have their own passwords at the applications. The goal is to get every application managed under their single sign-on (SSO) system. Many large companies have more than 200 applications, so having them all managed through their SSO solution is a lot better than employees having to manage 200 passwords for each application!
There's a lot more than SSO too, like lifecycle management, entitlements, and logout. We're tackling these in the IPSIE working group in the OpenID Foundation. But for the purposes of this discussion, let's stick to the basics of SSO.
So what does this have to do with MCP?
An AI agent using MCP is just another application enterprises expect to be able to integrate into their single-sign-on (SSO) system. Let's take the example of Claude. When rolled out at a company, ideally every employee would log in to their company Claude account using the company identity provider (IdP). This lets the enterprise admin decide how many Claude licenses to purchase and who should be able to use it.
Connecting to External Apps
The next thing that should happen after a user logs in to Claude via SSO is they need to connect Claude to their other enterprise apps. This includes the built-in integrations in Claude like Google Calendar and Google Drive, as well as any MCP servers exposed by other apps in use within the enterprise. That could cover other SaaS apps like Zoom, Atlassian, and Slack, as well as home-grown internal apps.
Today, this process involves a somewhat cumbersome series of steps each individual employee must take. Here's an example of what the user needs to do to connect their AI agent to external apps:
First, the user logs in to Claude using SSO. This involves a redirect from Claude to the enterprise IdP where they authenticate with one or more factors, and then are redirected back.

Next, they need to connect the external app from within Claude. Claude provides a button to initiate the connection. This takes the user to that app (in this example, Google), which redirects them to the IdP to authenticate again, eventually getting redirected back to the app where an OAuth consent prompt is displayed asking the user to approve access, and finally the user is redirected back to Claude and the connection is established.

The user has to repeat these steps for every MCP server that they want to connect to Claude. There are two main problems with this:
- This user experience is not great. That's a lot of clicking that the user has to do.
- The enterprise admin has no visibility or control over the connection established between the two applications.
Both of these are significant problems. If you have even just 10 MCP servers rolled out in the enterprise, you're asking users to click through 10 SSO and OAuth prompts to establish the connections, and it will only get worse as MCP is more widely adopted within apps. But also, should we really be asking the user if it's okay for Claude to access their data in Google Drive? In a company context, that's not actually the user's decision. That decision should be made by the enterprise IT admin.
In "An Open Letter to Third-party Suppliers", Patrick Opet, Chief Information Security Officer of JPMorgan Chase writes:
"Modern integration patterns, however, dismantle these essential boundaries, relying heavily on modern identity protocols (e.g., OAuth) to create direct, often unchecked interactions between third-party services and firms' sensitive internal resources."
Right now, these app-to-app connections are happening behind the back of the IdP. What we need is a way to move the connections between the applications into the IdP where they can be managed by the enterprise admin.
Let's see how this works if we leverage a new (in-progress) OAuth extension called "Identity and Authorization Chaining Across Domains", which I'll refer to as "Cross-App Access" for short, enabling the enterprise IdP to sit in the middle of the OAuth exchange between the two apps.
A Brief Intro to Cross-App Access
In this example, we'll use Claude as the application that is trying to connect to Slack's (hypothetical) MCP server. We'll start with a high-level overview of the flow, and later go over the detailed protocol.
First, the user logs in to Claude through the IdP as normal. This results in Claude getting either an ID token or SAML assertion from the IdP, which tells Claude who the user is. (This works the same for SAML assertions or ID tokens, so I'll use ID tokens in the example from here out.) This is no different than what the user would do today when signing in to Claude.

Then, instead of prompting the user to connect Slack, Claude takes the ID token back to the IdP in a request that says "Claude is requesting access to this user's Slack account."
The IdP validates the ID token, sees it was issued to Claude, and verifies that the admin has allowed Claude to access Slack on behalf of the given user. Assuming everything checks out, the IdP issues a new token back to Claude.

Claude takes the intermediate token from the IdP to Slack saying "hi, I would like an access token for the Slack MCP server. The IdP gave me this token with the details of the user to issue the access token for." Slack validates the token the same way it would have validated an ID token. (Remember, Slack is already configured for SSO to the IdP for this customer as well, so it already has a way to validate these tokens.) Slack is able to issue an access token giving Claude access to this user's resources in its MCP server.

This solves the two big problems:
- The exchange happens entirely without any user interaction, so the user never sees any prompts or any OAuth consent screens.
- Since the IdP sits in between the exchange, this gives the enterprise admin a chance to configure the policies around which applications are allowed this direct connection.
The other nice side effect of this is that since there is no user interaction required, the first time a new user logs in to Claude, all their enterprise apps will be automatically connected without them having to click any buttons!
Cross-App Access Protocol
Now let's look at what this looks like in the actual protocol. This is based on the adopted in-progress OAuth specification "Identity and Authorization Chaining Across Domains". This spec is actually a combination of two RFCs: Token Exchange (RFC 8693), and JWT Profile for Authorization Grants (RFC 7523). Both RFCs as well as the "Identity and Authorization Chaining Across Domains" spec are very flexible. While this means it is possible to apply this to many different use cases, it does mean we need to be a bit more specific in how to use it for this use case. For that purpose, I've written a profile of the Identity Chaining draft called "Identity Assertion Authorization Grant" to fill in the missing pieces for the specific use case detailed here.
Let's go through it step by step. For this example we'll use the following entities:
- Claude - the "Requesting Application", which is attempting to access Slack
- Slack - the "Resource Application", which has the resources being accessed through MCP
- Okta - the enterprise identity provider which users at the example company can use to sign in to both apps

Single Sign-On
First, Claude gets the user to sign in using a standard OpenID Connect (or SAML) flow in order to obtain an ID token. There isn't anything unique to this spec regarding this first stage, so I will skip the details of the OpenID Connect flow and we'll start with the ID token as the input to the next step.
Token Exchange
Claude, the requesting application, then makes a Token Exchange request (RFC 8693) to the IdP's token endpoint with the following parameters:
- `requested_token_type`: The value `urn:ietf:params:oauth:token-type:id-jag` indicates that an ID Assertion JWT is being requested.
- `audience`: The Issuer URL of the Resource Application's authorization server.
- `subject_token`: The identity assertion (e.g. the OpenID Connect ID Token or SAML assertion) for the target end-user.
- `subject_token_type`: Either `urn:ietf:params:oauth:token-type:id_token` or `urn:ietf:params:oauth:token-type:saml2` as defined by RFC 8693.
This request will also include the client credentials that Claude would use in a traditional OAuth token request, which could be a client secret or a JWT Bearer Assertion.
POST /oauth2/token HTTP/1.1
Host: acme.okta.com
Content-Type: application/x-www-form-urlencoded
grant_type=urn:ietf:params:oauth:grant-type:token-exchange
&requested_token_type=urn:ietf:params:oauth:token-type:id-jag
&audience=https://auth.slack.com/
&subject_token=eyJraWQiOiJzMTZ0cVNtODhwREo4VGZCXzdrSEtQ...
&subject_token_type=urn:ietf:params:oauth:token-type:id_token
&client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer
&client_assertion=eyJhbGciOiJSUzI1NiIsImtpZCI6IjIyIn0...
ID Assertion Validation and Policy Evaluation
At this point, the IdP evaluates the request and decides whether to issue the requested "ID Assertion JWT". The request will be evaluated based on the validity of the arguments, as well as the configured policy by the customer.
For example, the IdP validates that the ID token in this request was issued to the same client that matches the provided client authentication. It evaluates that the user still exists and is active, and that the user is assigned the Resource Application. Other policies can be evaluated at the discretion of the IdP, just like it can during a single sign-on flow.
If the IdP agrees that the requesting app should be authorized to access the given user's data in the resource app's MCP server, it will respond with a Token Exchange response to issue the token:
HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store
{
"issued_token_type": "urn:ietf:params:oauth:token-type:id-jag",
"access_token": "eyJhbGciOiJIUzI1NiIsI...",
"token_type": "N_A",
"expires_in": 300
}
The claims in the issued JWT are defined in "Identity Assertion Authorization Grant". The JWT is signed using the same key that the IdP signs ID tokens with. This is a critical aspect that makes this work, since again we assumed that both apps would already be configured for SSO to the IdP so would already be aware of the signing key for that purpose.
At this point, Claude is ready to request a token for the Resource App's MCP server.
Access Token Request
The JWT received in the previous request can now be used as a "JWT Authorization Grant" as described by RFC 7523. To do this, Claude makes a request to the MCP authorization server's token endpoint with the following parameters:
- `grant_type`: `urn:ietf:params:oauth:grant-type:jwt-bearer`
- `assertion`: The Identity Assertion Authorization Grant JWT obtained in the previous token exchange step
For example:
POST /oauth2/token HTTP/1.1
Host: auth.slack.com
Authorization: Basic yZS1yYW5kb20tc2VjcmV0v3JOkF0XG5Qx2
grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer
assertion=eyJhbGciOiJIUzI1NiIsI...
Slack's authorization server can now evaluate this request to determine whether to issue an access token. The authorization server can validate the JWT by checking the issuer (iss) in the JWT to determine which enterprise IdP the token is from, and then check the signature using the public key discovered at that server. There are other claims to be validated as well, described in Section 6.1 of the Identity Assertion Authorization Grant.
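As a small illustration of the token structure being validated here, the sketch below builds a toy unsigned JWT and decodes its claims segment, the way a resource app would read `iss` before checking the signature. This is a teaching sketch only: the claim values reuse the example hostnames from above, the signature is fake, and a real ID-JAG must be verified against the IdP's public key.

```shell
# base64url helpers (JWTs use base64url encoding without padding)
b64url() { printf '%s' "$1" | base64 | tr '+/' '-_' | tr -d '=\n'; }
b64url_decode() {
  s=$(printf '%s' "$1" | tr '_-' '/+')
  # restore the stripped padding before decoding
  case $(( ${#s} % 4 )) in 2) s="$s==";; 3) s="$s=";; esac
  printf '%s' "$s" | base64 -d
}

claims='{"iss":"https://acme.okta.com","aud":"https://auth.slack.com/"}'
# A JWT is three base64url segments joined by dots: header.claims.signature
jwt="$(b64url '{"alg":"RS256"}').$(b64url "$claims").fake-signature"

# Pull out the middle (claims) segment and decode it:
payload=${jwt#*.}; payload=${payload%.*}
b64url_decode "$payload"   # prints the claims JSON back out
```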
Assuming all the validations pass, Slack is ready to issue an access token to Claude in the token response:
HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store
{
"token_type": "Bearer",
"access_token": "2YotnFZFEjr1zCsicMWpAA",
"expires_in": 86400
}
This token response is the same format that Slack's authorization server would use to respond to a traditional OAuth flow. That's another key aspect of this design that makes it scalable. We don't need the resource app to use any particular access token format, since only that server is responsible for validating those tokens.
Now that Claude has the access token, it can make a request to the (hypothetical) Slack MCP server using the bearer token the same way it would have if it got the token using the traditional redirect-based OAuth flow.
Note: Eventually we'll need to define the specific behavior of when to return a refresh token in this token response. The goal is to ensure the client goes through the IdP often enough for the IdP to enforce its access policies. A refresh token could potentially undermine that if the refresh token lifetime is too long. It follows that ultimately the IdP should enforce the refresh token lifetime, so we will need to define a way for the IdP to communicate to the authorization server whether and how long to issue refresh tokens. This would enable the authorization server to make its own decision on access token lifetime, while still respecting the enterprise IdP policy.
Cross-App Access Sequence Diagram
Here's the flow again, this time as a sequence diagram.

- The client initiates a login request
- The user's browser is redirected to the IdP
- The user logs in at the IdP
- The IdP returns an OAuth authorization code to the user's browser
- The user's browser delivers the authorization code to the client
- The client exchanges the authorization code for an ID token at the IdP
- The IdP returns an ID token to the client
At this point, the user is logged in to the MCP client. Everything up until this point has been a standard OpenID Connect flow.
- The client makes a direct Token Exchange request to the IdP to exchange the ID token for a cross-domain "ID Assertion JWT"
- The IdP validates the request and checks the internal policy
- The IdP returns the ID-JAG to the client
- The client makes a token request using the ID-JAG to the MCP authorization server
- The authorization server validates the token using the signing key it also uses for its OpenID Connect flow with the IdP
- The authorization server returns an access token
- The client makes a request with the access token to the MCP server
- The MCP server returns the response
For a more detailed step by step of the flow, see Appendix A.3 of the Identity Assertion Authorization Grant.
Next Steps
If this is something you're interested in, we'd love your help! The in-progress spec is publicly available, and we're looking for people interested in helping prototype it. If you're building an MCP server and you want to make it enterprise-ready, I'd be happy to help you build this!
You can find me at a few related events coming up:
- MCP Night on May 14
- MCP Developers Summit on May 23
- AWS MCP Agents Hackathon on May 30
- Identiverse 2025 on June 3-6
And of course you can always find me on LinkedIn or email me at aaron.parecki@okta.com.
2025-04-03T16:39:37-07:00
Update: The changes described in this blog post have been incorporated into the 2025-06-18 version of the MCP spec!
Let's not overthink auth in MCP.
Yes, the MCP server is going to need its own auth server. But it's not as bad as it sounds. Let me explain.
First let's get a few pieces of terminology straight.
The confusion that's happening in the discussions I've seen so far is because the spec and diagrams show that the MCP server itself is handling authorization. That's not necessary.

In OAuth, we talk about the "authorization server" and "resource server" as distinct roles. I like to think of the authorization server as the "token factory", that's the thing that makes the access tokens. The resource server (usually an API) needs to be able to validate the tokens created by the authorization server.

It's possible to build a single server that is both a resource server and authorization server, and in fact many OAuth systems are built that way, especially large consumer services.

But nothing about the spec requires that the two roles are combined, it's also possible to run these as two totally unrelated services.
This flexibility, which has been baked into OAuth for over a decade, is what led to its rapid adoption, as well as the proliferation of open source and commercial products that provide an OAuth authorization server as a service.
So how does this relate to MCP?
I can annotate the flow from the Model Context Protocol spec to show the parts where the client talks to the MCP Resource Server separately from where the client talks to the MCP Authorization Server.
Here is the updated sequence diagram showing communication with each role separately.
Why is it important to call out this change?
I've seen a few conversations in various places about how requiring the MCP Server to be both an authorization server and resource server is too much of a burden. But actually, very little needs to change about the spec to enable this separation of concerns that OAuth already provides.
I've also seen various suggestions of other ways to separate the authorization server from the MCP server, like delegating to an enterprise IdP and having the MCP server validate access tokens issued by the IdP. These other options also conflate the OAuth roles in an awkward way and would result in some undesirable properties or relationships between the various parties involved.
So what needs to change in the MCP spec to enable this?
Discovery
The main thing currently forcing the MCP Server to be both the authorization server and resource server is how the client does discovery.
One design goal of MCP is to enable a client to bootstrap everything it needs based on only the server URL provided. I think this is a great design goal, and luckily is something that can be achieved even when separating the roles in the way I've described.
The MCP spec currently says that clients are expected to fetch the OAuth Server Metadata (RFC8414) file from the MCP Server base URL, resulting in a URL such as:
https://example.com/.well-known/oauth-authorization-server
This ends up meaning the MCP Resource Server must also be an Authorization Server, which leads to the complications the community has encountered so far. The good news is there is an OAuth spec we can apply here instead: Protected Resource Metadata.
Protected Resource Metadata
The Protected Resource Metadata spec is used by a Resource Server to advertise metadata about itself, including which Authorization Server can be used with it. This spec is both new and old. It was started in 2016, but was never adopted by the OAuth working group until 2023, after I had presented at an IETF meeting about the need for clients to be able to bootstrap OAuth flows given an OAuth resource server. The spec is now awaiting publication as an RFC, and should get its RFC number in a couple months. (Update: This became RFC 9728 on April 23, 2025!)
Applying this to the MCP server would result in a sequence like the following:
- The MCP Client fetches the Resource Server Metadata file by appending /.well-known/oauth-protected-resource to the MCP Server base URL.
- The MCP Client finds the authorization_servers property in the JSON response, and builds the Authorization Server Metadata URL by appending /.well-known/oauth-authorization-server
- The MCP Client fetches the Authorization Server Metadata to find the endpoints it needs for the OAuth flow: the authorization endpoint and token endpoint
- The MCP Client initiates an OAuth flow and continues as normal
Note: The Protected Resource Metadata spec also supports the Resource Server returning WWW-Authenticate with a link to the resource metadata URL if you want to avoid the requirement that MCP Servers host their metadata URLs at the .well-known endpoint, it just requires an extra HTTP request to support this.
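To make the discovery steps concrete, here is a minimal sketch of deriving the two metadata URLs from an MCP Server base URL. The base URL and the authorization server value are hypothetical; a real client would fetch each URL (e.g. with curl) and parse the JSON response.

```shell
# Derive the Protected Resource Metadata and Authorization Server Metadata
# URLs from an MCP Server base URL. Both example.com hosts are placeholders.
MCP_SERVER="https://mcp.example.com"   # hypothetical MCP Server base URL

RESOURCE_METADATA_URL="$MCP_SERVER/.well-known/oauth-protected-resource"

# Suppose the authorization_servers property in the fetched JSON points here:
AUTH_SERVER="https://auth.example.com"  # hypothetical value from the metadata
AUTH_SERVER_METADATA_URL="$AUTH_SERVER/.well-known/oauth-authorization-server"

echo "$RESOURCE_METADATA_URL"
echo "$AUTH_SERVER_METADATA_URL"
```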
Access Token Validation
There are two things to keep in mind about how the MCP Server validates access tokens with this new separation of concerns.
If you do build the MCP Authorization Server and Resource Server as part of the same system, you don't need to do anything special to validate the access tokens the Authorization Server issues. You probably already have some sort of infrastructure in place for your normal API to validate tokens issued by your Authorization Server, so nothing changes there.
If you are using an external Authorization Server, whether that's an open source product or a commercial hosted service, that product will have its own docs for how you can validate the tokens it creates. There's a good chance it already supports the standardized JWT Access Tokens described in RFC 9068, in which case you can use off-the-shelf JWT validation middleware for common frameworks.
In either case, the critical design goal here is that the MCP Authorization Server issues access tokens that only ever need to be validated by the MCP Resource Server. This is in line with the security recommendations in Section 2.3 of RFC 9700, in particular that "access tokens SHOULD be audience-restricted to a specific resource server". In other words, it would be a bad idea for the MCP Client to be issued an access token that works with both the MCP Resource Server and the service's REST API.
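As an illustration of the audience restriction above, here is a sketch of checking the aud claim of a JWT access token. The payload is constructed locally so the example is self-contained, and the resource identifier is hypothetical; real validation must also verify the token signature, ideally with an off-the-shelf JWT library as mentioned above.

```shell
# Decode a (locally constructed, unsigned) JWT payload and check its "aud"
# claim against the MCP Resource Server's identifier. This only illustrates
# the audience check; signature verification is deliberately omitted.
EXPECTED_AUD="https://mcp.example.com"   # hypothetical resource identifier

# Build a sample payload and base64url-encode it, like the middle segment
# of a real JWT.
PAYLOAD_JSON='{"iss":"https://auth.example.com","aud":"https://mcp.example.com","exp":9999999999}'
PAYLOAD_B64=$(printf '%s' "$PAYLOAD_JSON" | base64 | tr -d '\n=' | tr '+/' '-_')

# A resource server would split the token on "." and decode the middle part:
# undo the base64url aliasing, re-add padding, then decode.
DECODED=$(printf '%s' "$PAYLOAD_B64" | tr '_-' '/+' \
  | awk '{n=length($0)%4; if(n) $0=$0 substr("====",1,4-n); print}' \
  | base64 -d)

if printf '%s' "$DECODED" | grep -q "\"aud\":\"$EXPECTED_AUD\""; then
  echo "audience ok"
else
  echo "audience mismatch" >&2
fi
```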
Why Require the MCP Server to have an Authorization Server in the first place?
Another argument I've seen is that MCP Server developers shouldn't have to build any OAuth infrastructure at all, instead they should be able to delegate all the OAuth bits to an external service.
In principle, I agree. Getting API access and authorization right is tricky, that's why there are entire companies dedicated to solving the problem.
The architecture laid out above enables this exact separation of concerns. The difference between this architecture and some of the other proposals I've seen is that this cleanly separates the security boundaries so that there are minimal dependencies among the parties involved.
But one thing I haven't seen mentioned in the discussions is that there is actually no requirement that an OAuth Authorization Server provide any UI itself.
An Authorization Server with no UI?
While it is desirable from a security perspective that the MCP Resource Server has a corresponding Authorization Server that issues access tokens for it, that Authorization Server doesn't actually need to have any UI or even any concept of user login or accounts. You can actually build an Authorization Server that delegates all user account management to an external service. You can see an example of this in PayPal's MCP server they recently launched.
PayPal's traditional API already supports OAuth, the authorization and token endpoints are:
https://www.paypal.com/signin/authorize
https://api-m.paypal.com/v1/oauth2/token
When PayPal built their MCP server, they launched it at https://mcp.paypal.com. If you fetch the metadata for the MCP Server, you'll find the two OAuth endpoints for the MCP Authorization Server:
https://mcp.paypal.com/authorize
https://mcp.paypal.com/token
When the MCP Client redirects the user to the authorization endpoint, the MCP server itself doesn't provide any UI. Instead, it immediately redirects the user to the real PayPal authorization endpoint which then prompts the user to log in and authorize the client.

This points to yet another benefit of architecting the MCP Authorization Server and Resource Server this way. It enables implementers to delegate the actual user management to their existing OAuth server with no changes needed to the MCP Client. The MCP Client isn't even aware that this extra redirect step was inserted in the middle. As far as the MCP Client is concerned, it has been talking to only the MCP Authorization Server. It just so happens that the MCP Authorization Server has sent the user elsewhere to actually log in.
Dynamic Client Registration
There's one more point I want to make about why having a dedicated MCP Authorization Server is helpful architecturally.
The MCP spec strongly recommends that MCP Servers (authorization servers) support Dynamic Client Registration. If MCP is successful, there will be a large number of MCP Clients talking to a large number of MCP Servers, and the user is the one deciding which combinations of clients and servers to use. This means it is not scalable to require that every MCP Client developer register their client with every MCP Server.
This is similar to the idea of using an email client with the user's chosen email server. Obviously Mozilla can't register Thunderbird with every email server out there. Instead, there needs to be a way to dynamically establish a client's identity with the OAuth server at runtime. Dynamic Client Registration is one option for how to do that.
The problem is that most commercial APIs are not going to enable Dynamic Client Registration on their production servers. For example, in order to get client credentials to use the Google APIs, you need to register as a developer and then register an OAuth client after logging in. Dynamic Client Registration would allow a client to register itself without the link to the developer's account. That would mean there is no paper trail for who developed the client. The Dynamic Client Registration endpoint can't require authentication by definition, so it is a public endpoint that can create clients, which as you can imagine opens up some potential security issues.
I do, however, think it would be reasonable to expect production services to enable Dynamic Client Registration only on the MCP's Authorization Server. This way the dynamically-registered clients wouldn't be able to use the regular REST API, but would only be able to interact with the MCP API.
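For reference, a Dynamic Client Registration request (RFC 7591) against such an MCP Authorization Server might look like the sketch below. The registration endpoint URL and client metadata are illustrative, not from any real service.

```shell
# Sketch of an RFC 7591 Dynamic Client Registration request. The endpoint
# and all metadata values are hypothetical; a real client would POST this
# JSON and read the client_id out of the response.
REGISTRATION_ENDPOINT="https://mcp-auth.example.com/register"  # hypothetical

REGISTRATION_REQUEST='{
  "client_name": "Example MCP Client",
  "redirect_uris": ["http://127.0.0.1:49152/callback"],
  "grant_types": ["authorization_code"],
  "token_endpoint_auth_method": "none"
}'

echo "POST $REGISTRATION_ENDPOINT"
echo "$REGISTRATION_REQUEST"
# To actually register (requires a server with DCR enabled):
# curl -s -X POST "$REGISTRATION_ENDPOINT" \
#   -H "Content-Type: application/json" -d "$REGISTRATION_REQUEST"
```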
Mastodon and BlueSky also have a similar problem of needing clients to show up at arbitrary authorization servers without prior coordination between the client developer and authorization server operator. I call this the "OAuth for the Open Web" problem. Mastodon used Dynamic Client Registration as their solution, and has since documented some of the issues that this creates, linked here and here.
BlueSky decided to take a different approach and instead uses an https URL as a client identifier, bypassing the need for a client registration step entirely. This has the added bonus of having at least some level of confidence of the client identity because the client identity is hosted at a domain. It would be a perfectly viable approach to use this method for MCP as well. There is a discussion on that within MCP here. This is an ongoing topic within the OAuth working group, I have a couple of drafts in progress to formalize this pattern, Client ID Metadata Document and Client ID Scheme.
Enterprise IdP Integration
Lastly, I want to touch on the idea of enabling users to log in to MCP Servers with their enterprise IdP.
When an enterprise company purchases software, they expect to be able to tie it in to their single-sign-on solution. For example, when I log in to work Slack, I enter my work email and Slack redirects me to my work IdP where I log in. This way employees don't need to have passwords with every app they use in the enterprise, they can log in to everything with the same enterprise account, and all the apps can be protected with multi-factor authentication through the IdP. This also gives the company control over which users can access which apps, as well as a way to revoke a user's access at any time.
So how does this relate to MCP?
Well, plenty of people are already trying to figure out how to let their employees safely use AI tools within the enterprise. So we need a way to let employees use their enterprise IdP to log in and authorize MCP Clients to access MCP Servers.
If you're building an MCP Server in front of an existing application that already supports enterprise Single Sign-On, then you don't need to do anything differently in the MCP Client or Server and you already have support for this. When the MCP Client redirects to the MCP Authorization Server, the MCP Authorization Server redirects to the main Authorization Server, which would then prompt the user for their company email/domain and redirect to the enterprise IdP to log in.
This brings me to yet another thing I've been seeing conflated in the discussions: user login and user authorization.
OAuth is an authorization delegation protocol. OAuth doesn't actually say anything about how users authenticate at the OAuth server, it only talks about how the user can authorize access to an application. This is actually a really great thing, because it means we can get super creative with how users authenticate.

Remember the yellow box "User logs in and authorizes" from the original sequence diagram? These are actually two totally distinct steps. The OAuth authorization server is responsible for getting the user to log in somehow, but there's no requirement that how the user logs in is with a username/password. This is where we can insert a single-sign-on flow to an enterprise IdP, or really anything you can imagine.
So think of this as two separate boxes: "user logs in", and "user authorizes". Then, we can replace the "user logs in" box with an entirely new OpenID Connect flow out to the enterprise IdP to log the user in, and after they are logged in they can authorize the client.

I'll spare you the complete expanded sequence diagram, since it looks a lot more complicated than it actually is. But I again want to stress that this is nothing new, this is already how things are commonly done today.
This all just becomes cleaner to understand when you separate the MCP Authorization Server from the MCP Resource Server.
We can push all the complexity of user login, token minting, and more onto the MCP Authorization Server, keeping the MCP Resource Server free to do the much simpler task of validating access tokens and serving resources.
Future Improvements of Enterprise IdP Integration
There are two things I want to call out about how enterprise IdP integration could be improved. Both of these are entire topics on their own, so I will only touch on the problems and link out to other places where work is happening to solve them.
There are two points of friction with the current state of enterprise login for SaaS apps.
- IdP discovery
- User consent
IdP Discovery
When a user logs in to a SaaS app, they need to tell the app how to find their enterprise IdP. This is commonly done by either asking the user to enter their work email, or asking the user to enter their tenant URL at the service.

Neither of these is really a great user experience. It would be a lot better if the browser already knew which enterprise IdP the user should be sent to. This is one of my goals with the work happening in FedCM. With this new browser API, the browser can mediate the login, automatically telling the SaaS app which enterprise IdP to use, so the user only needs to click their account icon rather than type anything in.
User Consent
Another point of friction in the enterprise happens when a user starts connecting multiple applications to each other within the company. For example, if you drop in a Google Docs link into Slack, Slack will prompt you to connect your Google account to preview the link. Multiply this by N number of applications that can preview links, and M number of applications you might drop links to, and you end up sending the user through a huge number of OAuth consent flows.
The problem is only made worse with the explosion of AI tools. Every AI tool will need access to data in every other application in the enterprise. That is a lot of OAuth consent flows for the user to manage. Plus, the user shouldn't really be the one granting consent for Slack to access the company Google Docs account anyway. That consent should ideally be managed by the enterprise IT admin.
What we actually need is a way to enable the IT admin to grant consent for apps to talk to each other company-wide, removing the need for users to be sent through an OAuth flow at all.
This is the basis of another OAuth spec I've been working on, the Identity Assertion Authorization Grant.
The same problem applies to MCP Servers, and with the separation of concerns laid out above, it becomes straightforward to add this extension to move the consent to the enterprise and streamline the user experience.
Get in touch!
If these sound like interesting problems, please get in touch! You can find me on LinkedIn or reach me via email at aaron@parecki.com.
2025-03-07T00:00:00+00:00
Hello! Today I want to talk about ANSI escape codes.
For a long time I was vaguely aware of ANSI escape codes (“that’s how you make text red in the terminal and stuff”) but I had no real understanding of where they were supposed to be defined or whether or not there were standards for them. I just had a kind of vague “there be dragons” feeling around them. While learning about the terminal this year, I’ve learned that:
- ANSI escape codes are responsible for a lot of usability improvements in the terminal (did you know there’s a way to copy to your system clipboard when SSHed into a remote machine?? It’s an escape code called OSC 52!)
- They aren’t completely standardized, and because of that they don’t always work reliably. And because they’re also invisible, it’s extremely frustrating to troubleshoot escape code issues.
So I wanted to put together a list for myself of some standards that exist around escape codes, because I want to know if they have to feel unreliable and frustrating, or if there’s a future where we could all rely on them with more confidence.
- what’s an escape code?
- ECMA-48
- xterm control sequences
- terminfo
- should programs use terminfo?
- is there a “single common set” of escape codes?
- some reasons to use terminfo
- some more documents/standards
- why I think this is interesting
what’s an escape code?
Have you ever pressed the left arrow key in your terminal and seen ^[[D?
That’s an escape code! It’s called an “escape code” because the first character
is the “escape” character, which is usually written as ESC, \x1b, \E,
\033, or ^[.
Escape codes are how your terminal emulator communicates various kinds of information (colours, mouse movement, etc) with programs running in the terminal. There are two kinds of escape codes:
- input codes, which your terminal emulator sends for keypresses or mouse movements that don’t fit into Unicode. For example “left arrow key” is ESC[D, “Ctrl+left arrow” might be ESC[1;5D, and clicking the mouse might be something like ESC[M :3.
- output codes, which programs can print out to colour text, move the cursor around, clear the screen, hide the cursor, copy text to the clipboard, enable mouse reporting, set the window title, etc.
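You can try some output codes directly with printf (\033 is the ESC character):

```shell
# ESC[31m is "turn text red" (SELECT GRAPHIC RENDITION) and ESC[0m resets
# all text attributes.
printf '\033[31mthis text is red\033[0m\n'

# ESC[1A moves the cursor up one line and ESC[2K clears that line; this is
# how progress bars redraw themselves in place.
printf 'downloading...\n'
printf '\033[1A\033[2Kdone!\n'
```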
Now let’s talk about standards!
ECMA-48
The first standard I found relating to escape codes was ECMA-48, which was originally published in 1976.
ECMA-48 does two things:
- Define some general formats for escape codes (like “CSI” codes, which are ESC[ + something, and “OSC” codes, which are ESC] + something)
- Define some specific escape codes, like how “move the cursor to the left” is ESC[D, or “turn text red” is ESC[31m. In the spec, the “cursor left” one is called CURSOR LEFT and the one for changing colours is called SELECT GRAPHIC RENDITION.
The formats are extensible, so there’s room for others to define more escape codes in the future. Lots of escape codes that are popular today aren’t defined in ECMA-48: for example it’s pretty common for terminal applications (like vim, htop, or tmux) to support using the mouse, but ECMA-48 doesn’t define escape codes for the mouse.
xterm control sequences
There are a bunch of escape codes that aren’t defined in ECMA-48, for example:
- enabling mouse reporting (where did you click in your terminal?)
- bracketed paste (did you paste that text or type it in?)
- OSC 52 (which terminal applications can use to copy text to your system clipboard)
I believe (correct me if I’m wrong!) that these and some others came from xterm, are documented in XTerm Control Sequences, and have been widely implemented by other terminal emulators.
This list of “what xterm supports” is not a standard exactly, but xterm is extremely influential and so it seems like an important document.
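For example, OSC 52 can be emitted by hand. If your terminal emulator supports it (some require opting in), running this copies the text to your system clipboard, even over SSH:

```shell
# OSC 52 is ESC ] 52 ; c ; <base64-encoded text> BEL.
# The "c" selects the system clipboard.
text="hello from the terminal"
encoded=$(printf '%s' "$text" | base64 | tr -d '\n')
printf '\033]52;c;%s\a' "$encoded"
```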
terminfo
In the 80s (and to some extent today, but my understanding is that it was MUCH more dramatic in the 80s) there was a huge amount of variation in what escape codes terminals actually supported.
To deal with this, there’s a database of escape codes for various terminals called “terminfo”.
It looks like the standard for terminfo is called X/Open Curses, though you need to create an account to view that standard for some reason. It defines the database format as well as a C library interface (“curses”) for accessing the database.
For example you can run this bash snippet to see every possible escape code for “clear screen” for all of the different terminals your system knows about:
for term in $(toe -a | awk '{print $1}')
do
echo "$term"
infocmp -1 -T "$term" 2>/dev/null | grep 'clear=' | sed 's/clear=//g;s/,//g'
done
On my system (and probably every system I’ve ever used?), the terminfo database is managed by ncurses.
should programs use terminfo?
I think it’s interesting that there are two main approaches that applications take to handling ANSI escape codes:
- Use the terminfo database to figure out which escape codes to use, depending on what’s in the TERM environment variable. Fish does this, for example.
- Identify a “single common set” of escape codes which works in “enough” terminal emulators and just hardcode those.
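To make the two approaches concrete, here’s a small sketch comparing what terminfo returns with a hardcoded sequence (assuming tput from ncurses is installed; the fallback covers systems where it isn’t):

```shell
# Both approaches, side by side, for "turn text red". tput looks the code
# up in the terminfo database based on TERM; printf hardcodes the sequence
# that most modern terminal emulators understand.
export TERM=xterm   # force a widely available terminfo entry for this demo

# Fall back to the hardcoded sequence if tput or the terminfo entry is missing.
via_terminfo=$(tput setaf 1 2>/dev/null || printf '\033[31m')
hardcoded=$(printf '\033[31m')

if [ "$via_terminfo" = "$hardcoded" ]; then
    echo "terminfo and the hardcoded sequence agree"
fi
```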
Some examples of programs/libraries that take approach #2 (“don’t use terminfo”) include:
I got curious about why folks might be moving away from terminfo and I found this very interesting and extremely detailed rant about terminfo from one of the fish maintainers, which argues that:
[the terminfo authors] have done a lot of work that, at the time, was extremely important and helpful. My point is that it no longer is.
I’m not going to do it justice so I’m not going to summarize it, I think it’s worth reading.
is there a “single common set” of escape codes?
I was just talking about the idea that you can use a “common set” of escape codes that will work for most people. But what is that set? Is there any agreement?
I really do not know the answer to this at all, but from doing some reading it seems like it’s some combination of:
- The codes that the VT100 supported (though some aren’t relevant on modern terminals)
- what’s in ECMA-48 (which I think also has some things that are no longer relevant)
- What xterm supports (though I’d guess that not everything in there is actually widely supported enough)
and maybe ultimately “identify the terminal emulators you think your users are going to use most frequently and test in those”, the same way web developers do when deciding which CSS features are okay to use
I don’t think there are any resources like Can I use…? or Baseline for the terminal though. (In theory terminfo is supposed to be the “caniuse” for the terminal, but it often seems to take 10+ years to add new terminal features after people invent them, which makes it very limited.)
some reasons to use terminfo
I also asked on Mastodon why people found terminfo valuable in 2025 and got a few reasons that made sense to me:
- some people expect to be able to use the TERM environment variable to control how programs behave (for example with TERM=dumb), and there’s no standard for how that should work in a post-terminfo world
- even though there’s less variation between terminal emulators than there was in the 80s, there’s far from zero variation: there are graphical terminals, the Linux framebuffer console, the situation you’re in when connecting to a server via its serial console, Emacs shell mode, and probably more that I’m missing
- there is no one standard for what the “single common set” of escape codes is, and sometimes programs use escape codes which aren’t actually widely supported enough
terminfo & user agent detection
The way that ncurses uses the TERM environment variable to decide which
escape codes to use reminds me of how webservers used to sometimes use the
browser user agent to decide which version of a website to serve.
It also seems like it’s had some of the same results – the way iTerm2 reports itself as being “xterm-256color” feels similar to how Safari’s user agent is “Mozilla/5.0 (Macintosh; Intel Mac OS X 14_7_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.3 Safari/605.1.15”. In both cases the terminal emulator / browser ends up changing its user agent to get around user agent detection that isn’t working well.
On the web we ended up deciding that user agent detection was not a good practice and to instead focus on standardization so we can serve the same HTML/CSS to all browsers. I don’t know if the same approach is the future in the terminal though – I think the terminal landscape today is much more fragmented than the web ever was as well as being much less well funded.
some more documents/standards
A few more documents and standards related to escape codes, in no particular order:
- the Linux console_codes man page documents escape codes that Linux supports
- how the VT 100 handles escape codes & control sequences
- the kitty keyboard protocol
- OSC 8 for links in the terminal (and notes on adoption)
- A summary of ANSI standards from tmux
- this terminal features reporting specification from iTerm
- sixel graphics
why I think this is interesting
I sometimes see people saying that the unix terminal is “outdated”, and since I love the terminal so much I’m always curious about what incremental changes might make it feel less “outdated”.
Maybe if we had a clearer standards landscape (like we do on the web!) it would be easier for terminal emulator developers to build new features and for authors of terminal applications to more confidently adopt those features so that we can all benefit from them and have a richer experience in the terminal.
Obviously standardizing ANSI escape codes is not easy (ECMA-48 was first published almost 50 years ago and we’re still not there!). I don’t even know what all of the challenges are. But the situation with HTML/CSS/JS used to be extremely bad too and now it’s MUCH better, so maybe there’s hope.
2025-02-13T12:27:56+00:00
I was talking to a friend about how to add a directory to your PATH today. It’s
something that feels “obvious” to me since I’ve been using the terminal for a
long time, but when I searched for instructions for how to do it, I actually
couldn’t find something that explained all of the steps – a lot of them just
said “add this to ~/.bashrc”, but what if you’re not using bash? What if your
bash config is actually in a different file? And how are you supposed to figure
out which directory to add anyway?
So I wanted to try to write down some more complete directions and mention some of the gotchas I’ve run into over the years.
Here’s a table of contents:
- step 1: what shell are you using?
- step 2: find your shell’s config file
- step 3: figure out which directory to add
- step 4: edit your shell config
- step 5: restart your shell
- problems:
- notes:
step 1: what shell are you using?
If you’re not sure what shell you’re using, here’s a way to find out. Run this:
ps -p $$ -o pid,comm=
- if you’re using bash, it’ll print out 97295 bash
- if you’re using zsh, it’ll print out 97295 zsh
- if you’re using fish, it’ll print out an error like “In fish, please use $fish_pid” ($$ isn’t valid syntax in fish, but in any case the error message tells you that you’re using fish, which you probably already knew)
Also bash is the default on Linux and zsh is the default on Mac OS (as of 2024). I’ll only cover bash, zsh, and fish in these directions.
step 2: find your shell’s config file
- in zsh, it’s probably ~/.zshrc
- in bash, it might be ~/.bashrc, but it’s complicated; see the note in the next section
- in fish, it’s probably ~/.config/fish/config.fish (you can run echo $__fish_config_dir if you want to be 100% sure)
a note on bash’s config file
Bash has three possible config files: ~/.bashrc, ~/.bash_profile, and ~/.profile.
If you’re not sure which one your system is set up to use, I’d recommend testing this way:
- add echo hi there to your ~/.bashrc
- Restart your terminal
- If you see “hi there”, that means ~/.bashrc is being used! Hooray!
- Otherwise remove it and try the same thing with ~/.bash_profile
- You can also try ~/.profile if the first two options don’t work.
(There are a lot of elaborate flow charts out there that explain how bash decides which config file to use, but IMO it’s not worth internalizing them; just testing is the fastest way to be sure.)
step 3: figure out which directory to add
Let’s say that you’re trying to install and run a program called http-server
and it doesn’t work, like this:
$ npm install -g http-server
$ http-server
bash: http-server: command not found
How do you find what directory http-server is in? Honestly in general this is
not that easy – often the answer is something like “it depends on how npm is
configured”. A few ideas:
- Often when setting up a new installer (like cargo, npm, homebrew, etc), when you first set it up it’ll print out some directions about how to update your PATH. So if you’re paying attention you can get the directions then.
- Sometimes installers will automatically update your shell’s config file to update your PATH for you
- Sometimes just Googling “where does npm install things?” will turn up the answer
- Some tools have a subcommand that tells you where they’re configured to install things, like:
  - Node/npm: npm config get prefix (then append /bin/)
  - Go: go env GOPATH (then append /bin/)
  - asdf: asdf info | grep ASDF_DIR (then append /bin/ and /shims/)
step 3.1: double check it’s the right directory
Once you’ve found a directory you think might be the right one, make sure it’s
actually correct! For example, I found out that on my machine, http-server is
in ~/.npm-global/bin. I can make sure that it’s the right directory by trying to
run the program http-server in that directory like this:
$ ~/.npm-global/bin/http-server
Starting up http-server, serving ./public
It worked! Now that you know what directory you need to add to your PATH,
let’s move to the next step!
step 4: edit your shell config
Now we have the 2 critical pieces of information we need:
- Which directory you’re trying to add to your PATH (like ~/.npm-global/bin/)
- Where your shell’s config is (like ~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish)
Now what you need to add depends on your shell:
bash instructions:
Open your shell’s config file, and add a line like this:
export PATH=$PATH:~/.npm-global/bin/
(obviously replace ~/.npm-global/bin with the actual directory you’re trying to add)
zsh instructions:
You can do the same thing as in bash, but zsh also has some slightly fancier syntax you can use if you prefer:
path=(
$path
~/.npm-global/bin
)
fish instructions:
In fish, the syntax is different:
set PATH $PATH ~/.npm-global/bin
(in fish you can also use fish_add_path, some notes on that further down)
step 5: restart your shell
Now, an extremely important step: updating your shell’s config won’t take effect if you don’t restart it!
Two ways to do this:
- open a new terminal (or terminal tab), and maybe close the old one so you don’t get confused
- Run
bashto start a new shell (orzshif you’re using zsh, orfishif you’re using fish)
I’ve found that both of these usually work fine.
And you should be done! Try running the program you were trying to run and hopefully it works now.
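If you want to double-check, you can print your PATH one entry per line in the new shell and look for the directory. Here’s a self-contained sketch using the directory from the earlier example (substitute your own):

```shell
# Append the directory (as the shell config would) ...
PATH="$PATH:$HOME/.npm-global/bin"
# ... then split PATH on ':' and check the directory is really there:
echo "$PATH" | tr ':' '\n' | grep -Fx "$HOME/.npm-global/bin"
```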
If not, here are a couple of problems that you might run into:
problem 1: it ran the wrong program
If the wrong version of a program is running, you might need to add the directory to the beginning of your PATH instead of the end.
For example, on my system I have two versions of python3 installed, which I
can see by running which -a:
$ which -a python3
/usr/bin/python3
/opt/homebrew/bin/python3
The one your shell will use is the first one listed.
If you want to use the Homebrew version, you need to add that directory
(/opt/homebrew/bin) to the beginning of your PATH instead, by putting this in
your shell’s config file (it’s /opt/homebrew/bin/:$PATH instead of the usual $PATH:/opt/homebrew/bin/)
export PATH=/opt/homebrew/bin/:$PATH
or in fish:
set PATH /opt/homebrew/bin $PATH
problem 2: the program isn’t being run from your shell
All of these directions only work if you’re running the program from your shell. If you’re running the program from an IDE, from a GUI, in a cron job, or some other way, you’ll need to add the directory to your PATH in a different way, and the exact details might depend on the situation.
in a cron job
Some options:
- use the full path to the program you’re running, like /home/bork/bin/my-program
- put the full PATH you want as the first line of your crontab (something like PATH=/bin:/usr/bin:/usr/local/bin:….). You can get the full PATH you’re using in your shell by running echo "PATH=$PATH".
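Putting both options together, a crontab could look something like this (a sketch; the paths here are examples, not from any real system):

```
# cron's default PATH is very minimal, so set one for all the jobs below:
PATH=/bin:/usr/bin:/usr/local/bin:/home/bork/bin
# Or sidestep PATH entirely by using the program's full path:
0 * * * * /home/bork/bin/my-program
```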
I’m honestly not sure how to handle it in an IDE/GUI because I haven’t run into that in a long time, will add directions here if someone points me in the right direction.
problem 3: duplicate PATH entries making it harder to debug
If you edit your path and start a new shell by running bash (or zsh, or
fish), you’ll often end up with duplicate PATH entries, because the shell
keeps adding new things to your PATH every time you start your shell.
Personally I don’t think I’ve run into a situation where this kind of
duplication breaks anything, but the duplicates can make it harder to debug
what’s going on with your PATH if you’re trying to understand its contents.
Some ways you could deal with this:
- If you’re debugging your PATH, open a new terminal to do it in so you get a “fresh” state. This should avoid the duplication.
- Deduplicate your PATH at the end of your shell’s config (for example in zsh apparently you can do this with typeset -U path)
- Check that the directory isn’t already in your PATH when adding it (for example in fish I believe you can do this with fish_add_path --path /some/directory)
How to deduplicate your PATH is shell-specific and there isn’t always a built-in way to do it, so you’ll need to look up how to accomplish it in your shell.
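For example, in bash one way you could deduplicate a PATH-style string is with awk, which keeps only the first occurrence of each entry. This is a sketch with a made-up value, not a drop-in config line:

```shell
# A stand-in for a $PATH that has accumulated duplicates:
demo_path="/usr/bin:/usr/local/bin:/usr/bin:/home/bork/bin:/usr/local/bin"
# Split on ':', keep the first occurrence of each directory, re-join:
deduped="$(echo "$demo_path" | tr ':' '\n' | awk '!seen[$0]++' | paste -sd ':' -)"
echo "$deduped"
```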
problem 4: losing your history after updating your PATH
Here’s a situation that’s easy to get into in bash or zsh:
- Run a command (it fails)
- Update your PATH
- Run bash to reload your config
- Press the up arrow a couple of times to rerun the failed command (or open a new terminal)
- The failed command isn’t in your history! Why not?
This happens because in bash, by default, history is not saved until you exit the shell.
Some options for fixing this:
- Instead of running bash to reload your config, run source ~/.bashrc (or source ~/.zshrc in zsh). This will reload the config inside your current session.
- Configure your shell to continuously save your history instead of only saving the history when the shell exits. (How to do this depends on whether you’re using bash or zsh; the history options in zsh are a bit complicated and I’m not exactly sure what the best way is.)
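For bash, one common way to save history continuously is to append to the history file before every prompt; something like this in ~/.bashrc (a sketch; it tries to preserve any PROMPT_COMMAND you may already have set):

```shell
# Append to the history file instead of overwriting it on exit:
shopt -s histappend
# Run `history -a` (append new history lines to the file) before each prompt,
# keeping any PROMPT_COMMAND that was already configured:
PROMPT_COMMAND="history -a${PROMPT_COMMAND:+; $PROMPT_COMMAND}"
```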
a note on source
When you install cargo (Rust’s installer) for the first time, it gives you
these instructions for how to set up your PATH, which don’t mention a specific
directory at all.
This is usually done by running one of the following (note the leading DOT):
. "$HOME/.cargo/env" # For sh/bash/zsh/ash/dash/pdksh
source "$HOME/.cargo/env.fish" # For fish
The idea is that you add that line to your shell’s config, and their script
automatically sets up your PATH (and potentially other things) for you.
This is pretty common (for example Homebrew suggests you run eval "$(brew shellenv)"), and there are two ways to approach this:
- Just do what the tool suggests (like adding . "$HOME/.cargo/env" to your shell’s config)
- Figure out which directories the script they’re telling you to run would add to your PATH, and then add those manually. Here’s how I’d do that:
  - Run . "$HOME/.cargo/env" in my shell (or the fish version if using fish)
  - Run echo "$PATH" | tr ':' '\n' | grep cargo to figure out which directories it added
  - See that it says /Users/bork/.cargo/bin and shorten that to ~/.cargo/bin
  - Add the directory ~/.cargo/bin to PATH (with the directions in this post)
I don’t think there’s anything wrong with doing what the tool suggests (it might be the “best way”!), but personally I usually use the second approach because I prefer knowing exactly what configuration I’m changing.
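The echo "$PATH" | tr ':' '\n' | grep cargo step from that list looks like this in practice (with a simulated PATH so the example is self-contained; on a real machine you’d inspect your actual $PATH after sourcing the env script):

```shell
# A stand-in for what $PATH might look like after sourcing "$HOME/.cargo/env":
demo_path="/usr/bin:/usr/local/bin:$HOME/.cargo/bin"
# Split on ':' and pick out the entries the script added:
echo "$demo_path" | tr ':' '\n' | grep cargo
```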
a note on fish_add_path
fish has a handy function called fish_add_path that you can run to add a directory to your PATH like this:
fish_add_path /some/directory
This is cool (it’s such a simple command!) but I’ve stopped using it for a couple of reasons:
- Sometimes fish_add_path will update the PATH for every session in the future (with a “universal variable”) and sometimes it will update the PATH just for the current session, and it’s hard for me to tell which one it will do. In theory the docs explain this but I could not understand them.
- If you ever need to remove the directory from your PATH a few weeks or months later because maybe you made a mistake, it’s kind of hard to do (there are instructions in the comments of this github issue though).
that’s all
Hopefully this will help some people. Let me know (on Mastodon or Bluesky) if there are other major gotchas that have tripped you up when adding a directory to your PATH, or if you have questions about this post!
2025-02-05T16:57:00+00:00
A few weeks ago I ran a terminal survey (you can read the results here) and at the end I asked:
What’s the most frustrating thing about using the terminal for you?
1600 people answered, and I decided to spend a few days categorizing all the responses. Along the way I learned that classifying qualitative data is not easy but I gave it my best shot. I ended up building a custom tool to make it faster to categorize everything.
As with all of my surveys the methodology isn’t particularly scientific. I just posted the survey to Mastodon and Twitter, ran it for a couple of days, and got answers from whoever happened to see it and felt like responding.
Here are the top categories of frustrations!
I think it’s worth keeping in mind while reading these comments that
- 40% of people answering this survey have been using the terminal for 21+ years
- 95% of people answering the survey have been using the terminal for at least 4 years
These comments aren’t coming from total beginners.
Here are the categories of frustrations! The number in brackets is the number of people with that frustration. I’m mostly writing this up for myself because I’m trying to write a zine about the terminal and I wanted to get a sense for what people are having trouble with.
remembering syntax (115)
People talked about struggles remembering:
- the syntax for CLI tools like awk, jq, sed, etc
- the syntax for redirects
- keyboard shortcuts for tmux, text editing, etc
One example comment:
There are just so many little “trivia” details to remember for full functionality. Even after all these years I’ll sometimes forget whether it’s 2 or 1 for stderr, or forget which is which for > and >>.
switching terminals is hard (91)
People talked about struggling with switching systems (for example home/work computer or when SSHing) and running into:
- OS differences in keyboard shortcuts (like Linux vs Mac)
- systems which don’t have their preferred text editor (“no vim” or “only vim”)
- different versions of the same command (like Mac OS grep vs GNU grep)
- no tab completion
- a shell they aren’t used to (“the subtle differences between zsh and bash”)
as well as differences inside the same system, like pagers not being consistent with each other (git diff pagers, other pagers).
One example comment:
I got used to fish and vi mode which are not available when I ssh into servers, containers.
color (85)
Lots of problems with color, like:
- programs setting colors that are unreadable with a light background color
- finding a colorscheme they like (and getting it to work consistently across different apps)
- color not working inside several layers of SSH/tmux/etc
- not liking the defaults
- not wanting color at all and struggling to turn it off
This comment felt relatable to me:
Getting my terminal theme configured in a reasonable way between the terminal emulator and fish (I did this years ago and remember it being tedious and fiddly and now feel like I’m locked into my current theme because it works and I dread touching any of that configuration ever again).
keyboard shortcuts (84)
Half of the comments on keyboard shortcuts were about how on Linux/Windows, the keyboard shortcut to copy/paste in the terminal is different from in the rest of the OS.
Some other issues with keyboard shortcuts other than copy/paste:
- using Ctrl-W in a browser-based terminal and closing the window
- the terminal only supports a limited set of keyboard shortcuts (no Ctrl-Shift-, no Super, no Hyper, lots of ctrl- shortcuts aren’t possible like Ctrl-,)
- the OS stopping you from using a terminal keyboard shortcut (like by default Mac OS uses Ctrl+left arrow for something else)
- issues using emacs in the terminal
- backspace not working (2)
other copy and paste issues (75)
Aside from “the keyboard shortcut for copy and paste is different”, there were a lot of OTHER issues with copy and paste, like:
- copying over SSH
- how tmux and the terminal emulator both do copy/paste in different ways
- dealing with many different clipboards (system clipboard, vim clipboard, the “middle click” clipboard on Linux, tmux’s clipboard, etc) and potentially synchronizing them
- random spaces added when copying from the terminal
- pasting multiline commands which automatically get run in a terrifying way
- wanting a way to copy text without using the mouse
discoverability (55)
There were lots of comments about this, which all came down to the same basic complaint – it’s hard to discover useful tools or features! This comment kind of summed it all up:
How difficult it is to learn independently. Most of what I know is an assorted collection of stuff I’ve been told by random people over the years.
steep learning curve (44)
A lot of comments about it generally having a steep learning curve. A couple of example comments:
After 15 years of using it, I’m not much faster than using it than I was 5 or maybe even 10 years ago.
and
That I know I could make my life easier by learning more about the shortcuts and commands and configuring the terminal but I don’t spend the time because it feels overwhelming.
history (42)
Some issues with shell history:
- history not being shared between terminal tabs (16)
- limits that are too short (4)
- history not being restored when terminal tabs are restored
- losing history because the terminal crashed
- not knowing how to search history
One example comment:
It wasted a lot of time until I figured it out and still annoys me that “history” on zsh has such a small buffer; I have to type “history 0” to get any useful length of history.
bad documentation (37)
People talked about:
- documentation being generally opaque
- lack of examples in man pages
- programs which don’t have man pages
Here’s a representative comment:
Finding good examples and docs. Man pages often not enough, have to wade through stack overflow
scrollback (36)
A few issues with scrollback:
- programs printing out too much data making you lose scrollback history
- resizing the terminal messes up the scrollback
- lack of timestamps
- GUI programs that you start in the background printing stuff out that gets in the way of other programs’ outputs
One example comment:
When resizing the terminal (in particular: making it narrower) leads to broken rewrapping of the scrollback content because the commands formatted their output based on the terminal window width.
“it feels outdated” (33)
Lots of comments about how the terminal feels hampered by legacy decisions and how users often end up needing to learn implementation details that feel very esoteric. One example comment:
Most of the legacy cruft, it would be great to have a green field implementation of the CLI interface.
shell scripting (32)
Lots of complaints about POSIX shell scripting. There’s a general feeling that shell scripting is difficult but also that switching to a different less standard scripting language (fish, nushell, etc) brings its own problems.
Shell scripting. My tolerance to ditch a shell script and go to a scripting language is pretty low. It’s just too messy and powerful. Screwing up can be costly so I don’t even bother.
more issues
Some more issues that were mentioned at least 10 times:
- (31) inconsistent command line arguments: is it -h or help or --help?
- (24) keeping dotfiles in sync across different systems
- (23) performance (e.g. “my shell takes too long to start”)
- (20) window management (potentially with some combination of tmux tabs, terminal tabs, and multiple terminal windows. Where did that shell session go?)
- (17) generally feeling scared/uneasy (“The debilitating fear that I’m going to do some mysterious Bad Thing with a command and I will have absolutely no idea how to fix or undo it or even really figure out what happened”)
- (16) terminfo issues (“Having to learn about terminfo if/when I try a new terminal emulator and ssh elsewhere.”)
- (16) lack of image support (sixel etc)
- (15) SSH issues (like having to start over when you lose the SSH connection)
- (15) various tmux/screen issues (for example lack of integration between tmux and the terminal emulator)
- (15) typos & slow typing
- (13) the terminal getting messed up for various reasons (pressing Ctrl-S, cat-ing a binary, etc)
- (12) quoting/escaping in the shell
- (11) various Windows/PowerShell issues
n/a (122)
There were also 122 answers to the effect of “nothing really” or “only that I can’t do EVERYTHING in the terminal”
One example comment:
Think I’ve found work arounds for most/all frustrations
that’s all!
I’m not going to make a lot of commentary on these results, but here are a couple of categories that feel related to me:
- remembering syntax & history (often the thing you need to remember is something you’ve run before!)
- discoverability & the learning curve (the lack of discoverability is definitely a big part of what makes it hard to learn)
- “switching systems is hard” & “it feels outdated” (tools that haven’t really changed in 30 or 40 years have many problems but they do tend to be always there no matter what system you’re on, which is very useful and makes them hard to stop using)
Trying to categorize all these results in a reasonable way really gave me an appreciation for social science researchers’ skills.
2025-01-11T09:46:01+00:00
Hello! Recently I ran a terminal survey and I asked people what frustrated them. One person commented:
There are so many pieces to having a modern terminal experience. I wish it all came out of the box.
My immediate reaction was “oh, getting a modern terminal experience isn’t that hard, you just need to….”, but the more I thought about it, the longer the “you just need to…” list got, and I kept thinking about more and more caveats.
So I thought I would write down some notes about what it means to me personally to have a “modern” terminal experience and what I think can make it hard for people to get there.
what is a “modern terminal experience”?
Here are a few things that are important to me, with which part of the system is responsible for them:
- multiline support for copy and paste: if you paste 3 commands in your shell, it should not immediately run them all! That’s scary! (shell, terminal emulator)
- infinite shell history: if I run a command in my shell, it should be saved forever, not deleted after 500 history entries or whatever. Also I want commands to be saved to the history immediately when I run them, not only when I exit the shell session (shell)
- a useful prompt: I can’t live without having my current directory and current git branch in my prompt (shell)
- 24-bit colour: this is important to me because I find it MUCH easier to theme neovim with 24-bit colour support than in a terminal with only 256 colours (terminal emulator)
- clipboard integration between vim and my operating system so that when I copy in Firefox, I can just press p in vim to paste (text editor, maybe the OS/terminal emulator too)
- good autocomplete: for example commands like git should have command-specific autocomplete (shell)
- having colours in ls (shell config)
- a terminal theme I like: I spend a lot of time in my terminal, I want it to look nice and I want its theme to match my terminal editor’s theme. (terminal emulator, text editor)
- automatic terminal fixing: If a program prints out some weird escape codes that mess up my terminal, I want that to automatically get reset so that my terminal doesn’t get messed up (shell)
- keybindings: I want Ctrl+left arrow to work (shell or application)
- being able to use the scroll wheel in programs like less (terminal emulator and applications)
There are a million other terminal conveniences out there and different people value different things, but those are the ones that I would be really unhappy without.
how I achieve a “modern experience”
My basic approach is:
- use the fish shell. Mostly don’t configure it, except to:
  - set the EDITOR environment variable to my favourite terminal editor
  - alias ls to ls --color=auto
- use any terminal emulator with 24-bit colour support. In the past I’ve used GNOME Terminal, Terminator, and iTerm, but I’m not picky about this. I don’t really configure it other than to choose a font.
- use neovim, with a configuration that I’ve been very slowly building over the last 9 years or so (the last time I deleted my vim config and started from scratch was 9 years ago)
- use the base16 framework to theme everything
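Concretely, that amounts to just a couple of lines of fish config in ~/.config/fish/config.fish (a sketch; nvim is just an example editor choice):

```
# fish syntax: set -gx makes the variable global and exported
set -gx EDITOR nvim
alias ls 'ls --color=auto'
```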
A few things that affect my approach:
- I don’t spend a lot of time SSHed into other machines
- I’d rather use the mouse a little than come up with keyboard-based ways to do everything
- I work on a lot of small projects, not one big project
some “out of the box” options for a “modern” experience
What if you want a nice experience, but don’t want to spend a lot of time on configuration? Figuring out how to configure vim in a way that I was satisfied with really did take me like ten years, which is a long time!
My best ideas for how to get a reasonable terminal experience with minimal config are:
- shell: either fish or zsh with oh-my-zsh
- linux: GNOME Terminal, Konsole, Terminator, xfce4-terminal
- mac: iTerm (Terminal.app doesn’t have 24-bit colour support)
- cross-platform: kitty, alacritty, wezterm, or ghostty
- shell config:
  - set the EDITOR environment variable to your favourite terminal text editor
  - maybe alias ls to ls --color=auto
- text editor: this is a tough one, maybe micro or helix? I haven’t used either of them seriously but they both seem like very cool projects and I think it’s amazing that you can just use all the usual GUI editor commands (Ctrl-C to copy, Ctrl-V to paste, Ctrl-A to select all) in micro and they do what you’d expect. I would probably try switching to helix except that retraining my vim muscle memory seems way too hard. Also helix doesn’t have a GUI or plugin system yet.
Personally I wouldn’t use xterm, rxvt, or Terminal.app as a terminal emulator, because I’ve found in the past that they’re missing core features (like 24-bit colour in Terminal.app’s case) that make the terminal harder to use for me.
I don’t want to pretend that getting a “modern” terminal experience is easier than it is though – I think there are two issues that make it hard. Let’s talk about them!
issue 1 with getting to a “modern” experience: the shell
bash and zsh are by far the two most popular shells, and neither of them provide a default experience that I would be happy using out of the box, for example:
- you need to customize your prompt
- they don’t come with git completions by default, you have to set them up
- by default, bash only stores 500 (!) lines of history and (at least on Mac OS) zsh is only configured to store 2000 lines, which is still not a lot
- I find bash’s tab completion very frustrating, if there’s more than one match then you can’t tab through them
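For example, raising bash’s history limits means setting something like this in ~/.bashrc (the values are arbitrary; pick anything big enough):

```shell
# Bash defaults to 500 history entries; keep far more:
HISTSIZE=100000
HISTFILESIZE=100000
```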
And even though I love fish, the fact that it isn’t POSIX does make it hard for a lot of folks to make the switch.
Of course it’s totally possible to learn how to customize your prompt in bash
or whatever, and it doesn’t even need to be that complicated (in bash I’d
probably start with something like export PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ ', or maybe use starship).
But each of these “not complicated” things really does add up and it’s
especially tough if you need to keep your config in sync across several
systems.
An extremely popular solution to getting a “modern” shell experience is oh-my-zsh. It seems like a great project and I know a lot of people use it very happily, but I’ve struggled with configuration systems like that in the past – it looks like right now the base oh-my-zsh adds about 3000 lines of config, and often I find that having an extra configuration system makes it harder to debug what’s happening when things go wrong. I personally have a tendency to use the system to add a lot of extra plugins, make my system slow, get frustrated that it’s slow, and then delete it completely and write a new config from scratch.
issue 2 with getting to a “modern” experience: the text editor
In the terminal survey I ran recently, the most popular terminal text editors
by far were vim, emacs, and nano.
I think the main options for terminal text editors are:
- use vim or emacs and configure it to your liking, you can probably have any feature you want if you put in the work
- use nano and accept that you’re going to have a pretty limited experience (for example I don’t think you can select text with the mouse and then “cut” it in nano)
- use micro or helix, which seem to offer a pretty good out-of-the-box experience, though you might occasionally run into issues with using a less mainstream text editor
- just avoid using a terminal text editor as much as possible: maybe use VSCode, use VSCode’s terminal for all your terminal needs, and mostly never edit files in the terminal. Or I know a lot of people use code as their EDITOR in the terminal.
issue 3: individual applications
The last issue is that sometimes individual programs that I use are kind of
annoying. For example on my Mac OS machine, /usr/bin/sqlite3 doesn’t support
the Ctrl+Left Arrow keyboard shortcut. Fixing this to get a reasonable
terminal experience in SQLite was a little complicated, I had to:
- realize why this is happening (Mac OS won’t ship GNU tools, and “Ctrl-Left arrow” support comes from GNU readline)
- find a workaround (install sqlite from homebrew, which does have readline support)
- adjust my environment (put Homebrew’s sqlite3 in my PATH)
I find that debugging application-specific issues like this is really not easy and often it doesn’t feel “worth it” – often I’ll end up just dealing with various minor inconveniences because I don’t want to spend hours investigating them. The only reason I was even able to figure this one out at all is that I’ve been spending a huge amount of time thinking about the terminal recently.
A big part of having a “modern” experience using terminal programs is just
using newer terminal programs, for example I can’t be bothered to learn a
keyboard shortcut to sort the columns in top, but in htop I can just click
on a column heading with my mouse to sort it. So I use htop instead! But discovering new more “modern” command line tools isn’t easy (though
I made a list here),
finding ones that I actually like using in practice takes time, and if you’re
SSHed into another machine, they won’t always be there.
everything affects everything else
Something I find tricky about configuring my terminal to make everything “nice” is that changing one seemingly small thing about my workflow can really affect everything else. For example right now I don’t use tmux. But if I needed to use tmux again (for example because I was doing a lot of work SSHed into another machine), I’d need to think about a few things, like:
- if I wanted tmux’s copy to synchronize with my system clipboard over SSH, I’d need to make sure that my terminal emulator has OSC 52 support
- if I wanted to use iTerm’s tmux integration (which makes tmux tabs into iTerm tabs), I’d need to change how I configure colours – right now I set them with a shell script that I run when my shell starts, but that means the colours get lost when restoring a tmux session.
and probably more things I haven’t thought of. “Using tmux means that I have to change how I manage my colours” sounds unlikely, but that really did happen to me and I decided “well, I don’t want to change how I manage colours right now, so I guess I’m not using that feature!”.
It’s also hard to remember which features I’m relying on – for example maybe my current terminal does have OSC 52 support and because copying from tmux over SSH has always Just Worked I don’t even realize that that’s something I need, and then it mysteriously stops working when I switch terminals.
change things slowly
Personally even though I think my setup is not that complicated, it’s taken me 20 years to get to this point! Because terminal config changes are so likely to have unexpected and hard-to-understand consequences, I’ve found that if I change a lot of terminal configuration all at once it makes it much harder to understand what went wrong if there’s a problem, which can be really disorienting.
So I usually prefer to make pretty small changes, and accept that changes
might take me a REALLY long time to get used to. For example I switched from
using ls to eza a year or two ago and
while I like it (because eza -l prints human-readable file sizes by default)
I’m still not quite sure about it. But also sometimes it’s worth it to make a
big change, like I made the switch to fish (from bash) 10 years ago and I’m
very happy I did.
getting a “modern” terminal is not that easy
Trying to explain how “easy” it is to configure your terminal really just made me think that it’s kind of hard and that I still sometimes get confused.
I’ve found that there’s never one perfect way to configure things in the terminal that will be compatible with every single other thing. I just need to try stuff, figure out some kind of locally stable state that works for me, and accept that if I start using a new tool it might disrupt the system and I might need to rethink things.
2024-12-12T09:28:22+00:00
Recently I’ve been thinking about how everything that happens in the terminal is some combination of:
- Your operating system’s job
- Your shell’s job
- Your terminal emulator’s job
- The job of whatever program you happen to be running (like top or vim or cat)
The first three (your operating system, shell, and terminal emulator) are all kind of known quantities – if you’re using bash in GNOME Terminal on Linux, you can more or less reason about how all of those things interact, and some of their behaviour is standardized by POSIX.
But the fourth one (“whatever program you happen to be running”) feels like it could do ANYTHING. How are you supposed to know how a program is going to behave?
This post is kind of long so here’s a quick table of contents:
- programs behave surprisingly consistently
- these are meant to be descriptive, not prescriptive
- it’s not always obvious which “rules” are the program’s responsibility to implement
- rule 1: noninteractive programs should quit when you press Ctrl-C
- rule 2: TUIs should quit when you press q
- rule 3: REPLs should quit when you press Ctrl-D on an empty line
- rule 4: don’t use more than 16 colours
- rule 5: vaguely support readline keybindings
- rule 5.1: Ctrl-W should delete the last word
- rule 6: disable colours when writing to a pipe
- rule 7: - means stdin/stdout
- these “rules” take a long time to learn
programs behave surprisingly consistently
As far as I know, there are no real standards for how programs in the terminal should behave – the closest things I know of are:
- POSIX, which mostly dictates how your terminal emulator / OS / shell should work together. I think it does specify a few things about how core utilities like cp should work, but AFAIK it doesn’t have anything to say about how for example htop should behave.
- these command line interface guidelines
But even though there are no standards, in my experience programs in the terminal behave in a pretty consistent way. So I wanted to write down a list of “rules” that in my experience programs mostly follow.
these are meant to be descriptive, not prescriptive
My goal here isn’t to convince authors of terminal programs that they should follow any of these rules. There are lots of exceptions to these and often there’s a good reason for those exceptions.
But it’s very useful for me to know what behaviour to expect from a random new terminal program that I’m using. Instead of “uh, programs could do literally anything”, it’s “ok, here are the basic rules I expect, and then I can keep a short mental list of exceptions”.
So I’m just writing down what I’ve observed about how programs behave in my 20 years of using the terminal, why I think they behave that way, and some examples of cases where that rule is “broken”.
it’s not always obvious which “rules” are the program’s responsibility to implement
There are a bunch of common conventions that I think are pretty clearly the program’s responsibility to implement, like:
- config files should go in ~/.BLAHrc or ~/.config/BLAH/FILE or /etc/BLAH/ or something
- --help should print help text
- programs should print “regular” output to stdout and errors to stderr
But in this post I’m going to focus on things that it’s not 100% obvious are
the program’s responsibility. For example it feels to me like a “law of nature”
that pressing Ctrl-D should quit a REPL, but programs often
need to explicitly implement support for it – even though cat doesn’t need
to implement Ctrl-D support, ipython does. (more about that in “rule 3” below)
Understanding which things are the program’s responsibility makes it much less surprising when different programs’ implementations are slightly different.
rule 1: noninteractive programs should quit when you press Ctrl-C
The main reason for this rule is that noninteractive programs will quit by
default on Ctrl-C if they don’t set up a SIGINT signal handler, so this is
kind of a “you should act like the default” rule.
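You can see this default from the shell: a process that never installs a SIGINT handler simply dies when the signal arrives, and the parent shell reports exit status 128 + 2 (SIGINT is signal number 2). A sketch:

```shell
# Start a shell with no SIGINT handler and have it send SIGINT to itself;
# the default disposition terminates it immediately.
sh -c 'kill -INT $$'
# A process killed by signal N is reported as exit status 128 + N:
echo "exit status: $?"
```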
Something that trips a lot of people up is that this doesn’t apply to
interactive programs like python3 or bc or less. This is because in
an interactive program, Ctrl-C has a different job – if the program is
running an operation (like for example a search in less or some Python code
in python3), then Ctrl-C will interrupt that operation but not stop the
program.
As an example of how this works in an interactive program: here’s the code in prompt-toolkit (the library that iPython uses for handling input)
that aborts a search when you press Ctrl-C.
rule 2: TUIs should quit when you press q
TUI programs (like less or htop) will usually quit when you press q.
This rule doesn’t apply to any program where pressing q to quit wouldn’t make
sense, like tmux or text editors.
rule 3: REPLs should quit when you press Ctrl-D on an empty line
REPLs (like python3 or ed) will usually quit when you press Ctrl-D on an
empty line. This rule is similar to the Ctrl-C rule – the reason for this is
that by default if you’re running a program (like cat) in “cooked mode”, then
the operating system will return an EOF when you press Ctrl-D on an empty
line.
Most of the REPLs I use (sqlite3, python3, fish, bash, etc) don’t actually use cooked mode, but they all implement this keyboard shortcut anyway to mimic the default behaviour.
For example, here’s the code in prompt-toolkit that quits when you press Ctrl-D, and here’s the same code in readline.
I actually thought that this one was a “Law of Terminal Physics” until very recently because I’ve basically never seen it broken, but you can see that it’s just something that each individual input library has to implement in the links above.
Someone pointed out that the Erlang REPL does not quit when you press Ctrl-D,
so I guess not every REPL follows this “rule”.
rule 4: don’t use more than 16 colours
Terminal programs rarely use colours other than the base 16 ANSI colours. This
is because if you specify colours with a hex code, it’s very likely to clash
with some users’ background colour. For example if I print out some text as
#EEEEEE, it would be almost invisible on a white background, though it would
look fine on a dark background.
But if you stick to the default 16 base colours, you have a much better chance that the user has configured those colours in their terminal emulator so that they work reasonably well with their background colour. Another reason to stick to the default base 16 colours is that it makes fewer assumptions about what colours the terminal emulator supports.
The only programs I usually see breaking this “rule” are text editors, for example Helix by default will use a purple background which is not a default ANSI colour. It seems fine for Helix to break this rule since Helix isn’t a “core” program and I assume any Helix user who doesn’t like that colorscheme will just change the theme.
rule 5: vaguely support readline keybindings
Almost every program I use supports readline keybindings if it would make
sense to do so. For example, here are a bunch of different programs and a link
to where they define Ctrl-E to go to the end of the line:
- ipython (Ctrl-E defined here)
- atuin (Ctrl-E defined here)
- fzf (Ctrl-E defined here)
- zsh (Ctrl-E defined here)
- fish (Ctrl-E defined here)
- tmux’s command prompt (Ctrl-E defined here)
None of those programs actually uses readline directly, they just sort of
mimic emacs/readline keybindings. They don’t always mimic them exactly: for
example atuin seems to use Ctrl-A as a prefix, so Ctrl-A doesn’t go to the
beginning of the line.
Also all of these programs seem to implement their own internal cut and paste
buffers so you can delete a line with Ctrl-U and then paste it with Ctrl-Y.
The exceptions to this are:
- some programs (like `git`, `cat`, and `nc`) don’t have any line editing support at all (except for backspace, `Ctrl-W`, and `Ctrl-U`)
- as usual text editors are an exception: every text editor has its own approach to editing text
I wrote more about this “what keybindings does a program support?” question in entering text in the terminal is complicated.
rule 5.1: Ctrl-W should delete the last word
I’ve never seen a program (other than a text editor) where Ctrl-W doesn’t
delete the last word. This is similar to the Ctrl-C rule – by default if a
program is in “cooked mode”, the OS will delete the last word if you press
Ctrl-W, and delete the whole line if you press Ctrl-U. So usually programs
will imitate that behaviour.
I can’t think of any exceptions to this other than text editors but if there are I’d love to hear about them!
rule 6: disable colours when writing to a pipe
Most programs will disable colours when writing to a pipe. For example:
- `rg blah` will highlight all occurrences of `blah` in the output, but if the output is to a pipe or a file, it’ll turn off the highlighting
- `ls --color=auto` will use colour when writing to a terminal, but not when writing to a pipe
Both of those programs will also format their output differently when writing
to the terminal: ls will organize files into columns, and ripgrep will group
matches with headings.
If you want to force the program to use colour (for example because you want to
look at the colour), you can use unbuffer to force the program’s output to be
a tty like this:
unbuffer rg blah | less -R
I’m sure that there are some programs that “break” this rule but I can’t think
of any examples right now. Some programs have an --color flag that you can
use to force colour to be on, in the example above you could also do rg --color=always | less -R.
rule 7: - means stdin/stdout
Usually if you pass - to a program instead of a filename, it’ll read from
stdin or write to stdout (whichever is appropriate). For example, if you want
to format the Python code that’s on your clipboard with black and then copy
it, you could run:
pbpaste | black - | pbcopy
(pbpaste is a Mac program, you can do something similar on Linux with xclip)
My impression is that most programs implement this if it would make sense and I can’t think of any exceptions right now, but I’m sure there are many exceptions.
these “rules” take a long time to learn
These rules took me a long time to learn because I had to:
- learn that the rule applied anywhere at all ("`Ctrl-C` will exit programs")
- notice some exceptions ("okay, `Ctrl-C` will exit `find` but not `less`")
- subconsciously figure out what the pattern is ("`Ctrl-C` will generally quit noninteractive programs, but in interactive programs it might interrupt the current operation instead of quitting the program")
- eventually maybe formulate it into an explicit rule that I know
A lot of my understanding of the terminal is honestly still in the “subconscious pattern recognition” stage. The only reason I’ve been taking the time to make things explicit at all is because I’ve been trying to explain how it works to others. Hopefully writing down these “rules” explicitly will make learning some of this stuff a little bit faster for others.
2024-11-29T08:23:31+00:00
Here’s a niche terminal problem that has bothered me for years but that I never really understood until a few weeks ago. Let’s say you’re running this command to watch for some specific output in a log file:
tail -f /some/log/file | grep thing1 | grep thing2
If log lines are being added to the file relatively slowly, the result I’d see is… nothing! It doesn’t matter if there were matches in the log file or not, there just wouldn’t be any output.
I internalized this as “uh, I guess pipes just get stuck sometimes and don’t
show me the output, that’s weird”, and I’d handle it by just
running grep thing1 /some/log/file | grep thing2 instead, which would work.
So as I’ve been doing a terminal deep dive over the last few months I was really excited to finally learn exactly why this happens.
why this happens: buffering
The reason why “pipes get stuck” sometimes is that it’s VERY common for programs to buffer their output before writing it to a pipe or file. So the pipe is working fine, the problem is that the program never even wrote the data to the pipe!
This is for performance reasons: writing all output immediately as soon as you can uses more system calls, so it’s more efficient to save up data until you have 8KB or so of data to write (or until the program exits) and THEN write it to the pipe.
In this example:
tail -f /some/log/file | grep thing1 | grep thing2
the problem is that grep thing1 is saving up all of its matches until it has
8KB of data to write, which might literally never happen.
programs don’t buffer when writing to a terminal
Part of why I found this so disorienting is that tail -f file | grep thing
will work totally fine, but then when you add the second grep, it stops
working!! The reason for this is that the way grep handles buffering depends
on whether it’s writing to a terminal or not.
Here’s how grep (and many other programs) decides to buffer its output:
- Check if stdout is a terminal or not using the `isatty` function
- If it’s a terminal, use line buffering (print every line immediately as soon as you have it)
- Otherwise, use “block buffering” – only print data if you have at least 8KB or so of data to print
So if grep is writing directly to your terminal then you’ll see the line as
soon as it’s printed, but if it’s writing to a pipe, you won’t.
Of course the buffer size isn’t always 8KB for every program, it depends on the implementation. For grep the buffering is handled by libc, and libc’s buffer size is
defined in the BUFSIZ variable. Here’s where that’s defined in glibc.
(as an aside: “programs do not use 8KB output buffers when writing to a terminal” isn’t, like, a law of terminal physics, a program COULD use an 8KB buffer when writing output to a terminal if it wanted, it would just be extremely weird if it did that, I can’t think of any program that behaves that way)
commands that buffer & commands that don’t
One annoying thing about this buffering behaviour is that you kind of need to remember which commands buffer their output when writing to a pipe.
Some commands that don’t buffer their output:
- tail
- cat
- tee
I think almost everything else will buffer output, especially if it’s a command where you’re likely to be using it for batch processing. Here’s a list of some common commands that buffer their output when writing to a pipe, along with the flag that disables block buffering.
- grep (`--line-buffered`)
- sed (`-u`)
- awk (there’s a `fflush()` function)
- tcpdump (`-l`)
- jq (`-u`)
- tr (`-u`)
- cut (can’t disable buffering)
Those are all the ones I can think of, lots of unix commands (like sort) may
or may not buffer their output but it doesn’t matter because sort can’t do
anything until it finishes receiving input anyway.
Also I did my best to test both the Mac OS and GNU versions of these but there are a lot of variations and I might have made some mistakes.
programming languages where the default “print” statement buffers
Also, here are a few programming languages where the default print statement will buffer output when writing to a pipe, and some ways to disable buffering if you want:
- C (disable with `setvbuf`)
- Python (disable with `python -u`, or `PYTHONUNBUFFERED=1`, or `sys.stdout.reconfigure(line_buffering=True)`, or `print(x, flush=True)`)
- Ruby (disable with `STDOUT.sync = true`)
- Perl (disable with `$| = 1`)
I assume that these languages are designed this way so that the default print function will be fast when you’re doing batch processing.
Also whether output is buffered or not might depend on how you print, for
example in C++ cout << "hello\n" buffers when writing to a pipe but cout << "hello" << endl will flush its output.
when you press Ctrl-C on a pipe, the contents of the buffer are lost
Let’s say you’re running this command as a hacky way to watch for DNS requests
to example.com, and you forgot to pass -l to tcpdump:
sudo tcpdump -ni any port 53 | grep example.com
When you press Ctrl-C, what happens? In a magical perfect world, what I would
want to happen is for tcpdump to flush its buffer, grep would search for
example.com, and I would see all the output I missed.
But in the real world, what happens is that all the programs get killed and the
output in tcpdump’s buffer is lost.
I think this problem is probably unavoidable – I spent a little time with
strace to see how this works and grep receives the SIGINT before
tcpdump anyway so even if tcpdump tried to flush its buffer grep would
already be dead.
After a little more investigation, there is a workaround: if you find
tcpdump’s PID and kill -TERM $PID, then tcpdump will flush the buffer so
you can see the output. That’s kind of a pain but I tested it and it seems to
work.
redirecting to a file also buffers
It’s not just pipes, this will also buffer:
sudo tcpdump -ni any port 53 > output.txt
Redirecting to a file doesn’t have the same “Ctrl-C will totally destroy the
contents of the buffer” problem though – in my experience it usually behaves
more like you’d want, where the contents of the buffer get written to the file
before the program exits. I’m not 100% sure whether this is something you can
always rely on or not.
a bunch of potential ways to avoid buffering
Okay, let’s talk solutions. Let’s say you’ve run this command:
tail -f /some/log/file | grep thing1 | grep thing2
I asked people on Mastodon how they would solve this in practice and there were 5 basic approaches. Here they are:
solution 1: run a program that finishes quickly
Historically my solution to this has been to just avoid the “command writing to pipe slowly” situation completely and instead run a program that will finish quickly like this:
cat /some/log/file | grep thing1 | grep thing2 | tail
This doesn’t do the same thing as the original command but it does mean that you get to avoid thinking about these weird buffering issues.
(you could also do grep thing1 /some/log/file but I often prefer to use an
“unnecessary” cat)
solution 2: remember the “line buffer” flag to grep
You could remember that grep has a flag to avoid buffering and pass it like this:
tail -f /some/log/file | grep --line-buffered thing1 | grep thing2
solution 3: use awk
Some people said that if they’re specifically dealing with a multiple greps
situation, they’ll rewrite it to use a single awk instead, like this:
tail -f /some/log/file | awk '/thing1/ && /thing2/'
Or you would write a more complicated grep, like this:
tail -f /some/log/file | grep -E 'thing1.*thing2'
(awk also buffers, so for this to work you’ll want awk to be the last command in the pipeline)
solution 4: use stdbuf
stdbuf uses LD_PRELOAD to turn off libc’s buffering, and you can use it to turn off output buffering like this:
tail -f /some/log/file | stdbuf -o0 grep thing1 | grep thing2
Like any LD_PRELOAD solution it’s a bit unreliable – it doesn’t work on static binaries, I think it won’t work if the program isn’t using libc’s buffering, and it doesn’t always work on Mac OS. Harry Marr has a really nice How stdbuf works post.
solution 5: use unbuffer
unbuffer program will force the program’s output to be a TTY, which means
that it’ll behave the way it normally would on a TTY (less buffering, colour
output, etc). You could use it in this example like this:
tail -f /some/log/file | unbuffer grep thing1 | grep thing2
Unlike stdbuf it will always work, though it might have unwanted side
effects – for example `grep thing1` will also colour its matches.
If you want to install unbuffer, it’s in the expect package.
that’s all the solutions I know about!
It’s a bit hard for me to say which one is “best”, I think personally I’m
mostly likely to use unbuffer because I know it’s always going to work.
If I learn about more solutions I’ll try to add them to this post.
I’m not really sure how often this comes up
I think it’s not very common for me to have a program that slowly trickles data into a pipe like this, normally if I’m using a pipe a bunch of data gets written very quickly, processed by everything in the pipeline, and then everything exits. The only examples I can come up with right now are:
- tcpdump
- `tail -f`
- watching log files in a different way, like with `kubectl logs`
- the output of a slow computation
what if there were an environment variable to disable buffering?
I think it would be cool if there were a standard environment variable to turn
off buffering, like PYTHONUNBUFFERED in Python. I got this idea from a
couple of blog posts by Mark Dominus
in 2018. Maybe NO_BUFFER like NO_COLOR?
The design seems tricky to get right; Mark points out that NetBSD has environment variables called STDBUF, STDBUF1, etc., which give you a ton of control over buffering, but I imagine most developers don’t want to
implement many different environment variables to handle a relatively minor
edge case.
I’m also curious about whether there are any programs that just automatically flush their output buffers after some period of time (like 1 second). It feels like it would be nice in theory but I can’t think of any program that does that so I imagine there are some downsides.
stuff I left out
Some things I didn’t talk about in this post since these posts have been getting pretty long recently and seriously does anyone REALLY want to read 3000 words about buffering?
- the difference between line buffering and having totally unbuffered output
- how buffering to stderr is different from buffering to stdout
- this post is only about buffering that happens inside the program, your operating system’s TTY driver also does a little bit of buffering sometimes
- other reasons you might need to flush your output other than “you’re writing to a pipe”
2024-11-18T09:35:42+00:00
I like writing Javascript without a build system and for the millionth time yesterday I ran into a problem where I needed to figure out how to import a Javascript library in my code without using a build system, and it took FOREVER to figure out how to import it because the library’s setup instructions assume that you’re using a build system.
Luckily at this point I’ve mostly learned how to navigate this situation and either successfully use the library or decide it’s too difficult and switch to a different library, so here’s the guide I wish I had to importing Javascript libraries years ago.
I’m only going to talk about using Javascript libraries on the frontend, and only about how to use them in a no-build-system setup.
In this post I’m going to talk about:
- the three main types of Javascript files a library might provide (ES Modules, the “classic” global variable kind, and CommonJS)
- how to figure out which types of files a Javascript library includes in its build
- ways to import each type of file in your code
the three kinds of Javascript files
There are 3 basic types of Javascript files a library can provide:
- the “classic” type of file that defines a global variable. This is the kind of file that you can just `<script src>` and it’ll Just Work. Great if you can get it but not always available
- an ES module (which may or may not depend on other files, we’ll get to that)
- a “CommonJS” module. This is for Node, you can’t use it in a browser at all without using a build system.
I’m not sure if there’s a better name for the “classic” type but I’m just going to call it “classic”. Also there’s a type called “AMD” but I’m not sure how relevant it is in 2024.
Now that we know the 3 types of files, let’s talk about how to figure out which of these the library actually provides!
where to find the files: the NPM build
Every Javascript library has a build which it uploads to NPM. You might be thinking (like I did originally) – Julia! The whole POINT is that we’re not using Node to build our library! Why are we talking about NPM?
But if you’re using a link from a CDN like https://cdnjs.cloudflare.com/ajax/libs/Chart.js/4.4.1/chart.umd.min.js, you’re still using the NPM build! All the files on the CDNs originally come from NPM.
Because of this, I sometimes like to npm install the library even if I’m not
planning to use Node to build my library at all – I’ll just create a new temp
folder, npm install there, and then delete it when I’m done. I like being able to poke
around in the files in the NPM build on my filesystem, because then I can be
100% sure that I’m seeing everything that the library is making available in
its build and that the CDN isn’t hiding something from me.
So let’s npm install a few libraries and try to figure out what types of
Javascript files they provide in their builds!
example library 1: chart.js
First let’s look inside Chart.js, a plotting library.
$ cd /tmp/whatever
$ npm install chart.js
$ cd node_modules/chart.js/dist
$ ls *.*js
chart.cjs chart.js chart.umd.js helpers.cjs helpers.js
This library seems to have 3 basic options:
option 1: chart.cjs. The .cjs suffix tells me that this is a CommonJS
file, for using in Node. This means it’s impossible to use it directly in the
browser without some kind of build step.
option 2: `chart.js`. The `.js` suffix by itself doesn’t tell us what kind of
file it is, but if I open it up, I see import '@kurkle/color'; which is an
immediate sign that this is an ES module – the import ... syntax is ES
module syntax.
option 3: chart.umd.js. “UMD” stands for “Universal Module Definition”,
which I think means that you can use this file either with a basic <script src>, CommonJS,
or some third thing called AMD that I don’t understand.
how to use a UMD file
When I was using Chart.js I picked Option 3. I just needed to add this to my code:
<script src="./chart.umd.js"> </script>
and then I could use the library through the global `Chart` variable.
Couldn’t be easier. I just copied chart.umd.js into my Git repository so that
I didn’t have to worry about using NPM or the CDNs going down or anything.
the build files aren’t always in the dist directory
A lot of libraries will put their build in the dist directory, but not
always! The build files’ location is specified in the library’s package.json.
For example here’s an excerpt from Chart.js’s package.json.
"jsdelivr": "./dist/chart.umd.js",
"unpkg": "./dist/chart.umd.js",
"main": "./dist/chart.cjs",
"module": "./dist/chart.js",
I think this is saying that if you want to use an ES Module (module) you
should use dist/chart.js, but the jsDelivr and unpkg CDNs should use
./dist/chart.umd.js. I guess main is for Node.
chart.js’s package.json also says "type": "module", which according to this documentation
tells Node to treat files as ES modules by default. I think it doesn’t tell us
specifically which files are ES modules and which ones aren’t but it does tell
us that something in there is an ES module.
example library 2: @atcute/oauth-browser-client
@atcute/oauth-browser-client
is a library for logging into Bluesky with OAuth in the browser.
Let’s see what kinds of Javascript files it provides in its build!
$ npm install @atcute/oauth-browser-client
$ cd node_modules/@atcute/oauth-browser-client/dist
$ ls *js
constants.js dpop.js environment.js errors.js index.js resolvers.js
It seems like the only plausible root file in here is index.js, which looks
something like this:
export { configureOAuth } from './environment.js';
export * from './errors.js';
export * from './resolvers.js';
This export syntax means it’s an ES module. That means we can use it in
the browser without a build step! Let’s see how to do that.
how to use an ES module with importmaps
Using an ES module isn’t as easy as just adding a `<script src="whatever.js">`. Instead, if
the ES module has dependencies (like @atcute/oauth-browser-client does) the
steps are:
- Set up an import map in your HTML
- Put import statements like `import { configureOAuth } from '@atcute/oauth-browser-client';` in your JS code
- Include your JS code in your HTML like this: `<script type="module" src="YOURSCRIPT.js"></script>`
The reason we need an import map instead of just doing something like import { BrowserOAuthClient } from "./oauth-client-browser.js" is that internally the module has more import statements like import {something} from @atcute/client, and we need to tell the browser where to get the code for @atcute/client and all of its other dependencies.
Here’s what the importmap I used looks like for @atcute/oauth-browser-client:
<script type="importmap">
{
"imports": {
"nanoid": "./node_modules/nanoid/bin/dist/index.js",
"nanoid/non-secure": "./node_modules/nanoid/non-secure/index.js",
"nanoid/url-alphabet": "./node_modules/nanoid/url-alphabet/dist/index.js",
"@atcute/oauth-browser-client": "./node_modules/@atcute/oauth-browser-client/dist/index.js",
"@atcute/client": "./node_modules/@atcute/client/dist/index.js",
"@atcute/client/utils/did": "./node_modules/@atcute/client/dist/utils/did.js"
}
}
</script>
Getting these import maps to work is pretty fiddly, I feel like there must be a tool to generate them automatically but I haven’t found one yet. It’s definitely possible to write a script that automatically generates the importmaps using esbuild’s metafile but I haven’t done that and maybe there’s a better way.
I decided to set up importmaps yesterday to get github.com/jvns/bsky-oauth-example to work, so there’s some example code in that repo.
Also someone pointed me to Simon Willison’s download-esm, which will download an ES module and rewrite the imports to point to the JS files directly so that you don’t need importmaps. I haven’t tried it yet but it seems like a great idea.
problems with importmaps: too many files
I did run into some problems with using importmaps in the browser though – it needed to download dozens of Javascript files to load my site, and my webserver in development couldn’t keep up for some reason. I kept seeing files fail to load randomly and then had to reload the page and hope that they would succeed this time.
It wasn’t an issue anymore when I deployed my site to production, so I guess it was a problem with my local dev environment.
Also one slightly annoying thing about ES modules in general is that you need to
be running a webserver to use them, I’m sure this is for a good reason but it’s
easier when you can just open your index.html file without starting a
webserver.
Because of the “too many files” thing I think actually using ES modules with importmaps in this way isn’t actually that appealing to me, but it’s good to know it’s possible.
how to use an ES module without importmaps
If the ES module doesn’t have dependencies then it’s even easier – you don’t need the importmaps! You can just:
- put `<script type="module" src="YOURCODE.js"></script>` in your HTML. The `type="module"` is important.
- put `import {whatever} from "https://example.com/whatever.js"` in `YOURCODE.js`
alternative: use esbuild
If you don’t want to use importmaps, you can also use a build system like esbuild. I talked about how to do that in Some notes on using esbuild, but this blog post is about ways to avoid build systems completely so I’m not going to talk about that option here. I do still like esbuild though and I think it’s a good option in this case.
what’s the browser support for importmaps?
CanIUse says that importmaps are in
“Baseline 2023: newly available across major browsers” so my sense is that in
2024 that’s still maybe a little bit too new? I think I would use importmaps
for some fun experimental code that I only wanted like myself and 12 people to
use, but if I wanted my code to be more widely usable I’d use esbuild instead.
example library 3: @atproto/oauth-client-browser
Let’s look at one final example library! This is a different Bluesky auth
library than @atcute/oauth-browser-client.
$ npm install @atproto/oauth-client-browser
$ cd node_modules/@atproto/oauth-client-browser/dist
$ ls *js
browser-oauth-client.js browser-oauth-database.js browser-runtime-implementation.js errors.js index.js indexed-db-store.js util.js
Again, it seems like the only real candidate file here is index.js. But this is a
different situation from the previous example library! Let’s take a look at
index.js:
There’s a bunch of stuff like this in index.js:
__exportStar(require("@atproto/oauth-client"), exports);
__exportStar(require("./browser-oauth-client.js"), exports);
__exportStar(require("./errors.js"), exports);
var util_js_1 = require("./util.js");
This require() syntax is CommonJS syntax, which means that we can’t use this
file in the browser at all, we need to use some kind of build step, and
ESBuild won’t work either.
Also in this library’s package.json it says "type": "commonjs" which is
another way to tell it’s CommonJS.
how to use a CommonJS module with esm.sh
Originally I thought it was impossible to use CommonJS modules without learning a build system, but then someone on Bluesky told me about esm.sh! It’s a CDN that will translate anything into an ES Module. skypack.dev does something similar; I’m not sure what the difference is, but one person mentioned that if one doesn’t work sometimes they’ll try the other one.
For @atproto/oauth-client-browser using it seems pretty simple, I just need to put this in my HTML:
<script type="module" src="script.js"> </script>
and then put this in script.js.
import { BrowserOAuthClient } from "https://esm.sh/@atproto/oauth-client-browser@0.3.0"
It seems to Just Work, which is cool! Of course this is still sort of using a build system – it’s just that esm.sh is running the build instead of me. My main concerns with this approach are:
- I don’t really trust CDNs to keep working forever – usually I like to copy dependencies into my repository so that they don’t go away for some reason in the future.
- I’ve heard of some issues with CDNs having security compromises which scares me.
- I don’t really understand what esm.sh is doing.
esbuild can also convert CommonJS modules into ES modules
I also learned that you can also use esbuild to convert a CommonJS module
into an ES module, though there are some limitations – the import { BrowserOAuthClient } from syntax doesn’t work. Here’s a github issue about that.
I think the esbuild approach is probably more appealing to me than the
esm.sh approach because it’s a tool that I already have on my computer so I
trust it more. I haven’t experimented with this much yet though.
summary of the three types of files
Here’s a summary of the three types of JS files you might encounter, options for how to use them, and how to identify them.
Unhelpfully a .js or .min.js file extension could be any of these 3
options, so if the file is something.js you need to do more detective work to
figure out what you’re dealing with.
- “classic” JS files
  - How to use it: `<script src="whatever.js"></script>`
  - Ways to identify it:
    - The website has a big friendly banner in its setup instructions saying “Use this with a CDN!” or something
    - A `.umd.js` extension
    - Just try to put it in a `<script src=...>` tag and see if it works
- ES Modules
  - Ways to use it:
    - If there are no dependencies, just `import {whatever} from "./my-module.js"` directly in your code
    - If there are dependencies, create an importmap and `import {whatever} from "my-module"` – or use download-esm to remove the need for an importmap
    - Use esbuild or any ES Module bundler
  - Ways to identify it:
    - Look for an `import` or `export` statement (not `module.exports = ...`, that’s CommonJS)
    - An `.mjs` extension
    - maybe `"type": "module"` in `package.json` (though it’s not clear to me which file exactly this refers to)
- CommonJS Modules
  - Ways to use it:
    - Use https://esm.sh to convert it into an ES module, like `https://esm.sh/@atproto/oauth-client-browser@0.3.0`
    - Use a build somehow (??)
  - Ways to identify it:
    - Look for `require()` or `module.exports = ...` in the code
    - A `.cjs` extension
    - maybe `"type": "commonjs"` in `package.json` (though it’s not clear to me which file exactly this refers to)
it’s really nice to have ES modules standardized
The main difference between CommonJS modules and ES modules from my perspective is that ES modules are actually a standard. This makes me feel a lot more confident using them, because browsers commit to backwards compatibility for web standards forever – if I write some code using ES modules today, I can feel sure that it’ll still work the same way in 15 years.
It also makes me feel better about using tooling like esbuild because even if
the esbuild project dies, because it’s implementing a standard it feels likely
that there will be another similar tool in the future that I can replace it
with.
the JS community has built a lot of very cool tools
A lot of the time when I talk about this stuff I get responses like “I hate javascript!!! it’s the worst!!!”. But my experience is that there are a lot of great tools for Javascript (I just learned about https://esm.sh yesterday which seems great! I love esbuild!), and that if I take the time to learn how things works I can take advantage of some of those tools and make my life a lot easier.
So the goal of this post is definitely not to complain about Javascript, it’s to understand the landscape so I can use the tooling in a way that feels good to me.
questions I still have
Here are some questions I still have, I’ll add the answers into the post if I learn the answer.
- Is there a tool that automatically generates importmaps for an ES Module that I have set up locally? (apparently yes: jspm)
- How can I convert a CommonJS module into an ES module on my computer, the way https://esm.sh does? (apparently esbuild can sort of do this, though named exports don’t work)
- When people normally build CommonJS modules into regular JS code, what code is doing that? Obviously there are tools like webpack, rollup, esbuild, etc, but do those tools all implement their own JS parsers/static analysis? How many JS parsers are there out there?
- Is there any way to bundle an ES module into a single file (like atcute-client.js), but so that in the browser I can still import multiple different paths from that file (like both @atcute/client/lexicons and @atcute/client)?
all the tools
Here’s a list of every tool we talked about in this post:
- Simon Willison’s download-esm which will download an ES module and convert the imports to point at JS files so you don’t need an importmap
- https://esm.sh/ and skypack.dev
- esbuild
- JSPM can generate importmaps
Writing this post has made me think that even though I usually don’t want to
have a build that I run every time I update the project, I might be willing to
have a build step (using download-esm or something) that I run only once
when setting up the project and never run again except maybe if I’m updating my
dependency versions.
that’s all!
Thanks to Marco Rogers who taught me a lot of the things in this post. I’ve probably made some mistakes in this post and I’d love to know what they are – let me know on Bluesky or Mastodon!
2024-11-09T09:24:29+00:00
I added a new section to this site a couple weeks ago called TIL (“today I learned”).
the goal: save interesting tools & facts I posted on social media
One kind of thing I like to post on Mastodon/Bluesky is “hey, here’s a cool thing”, like the great SQLite repl litecli, or the fact that cross compiling in Go Just Works and it’s amazing, or cryptographic right answers, or this great diff tool. Usually I don’t want to write a whole blog post about those things because I really don’t have much more to say than “hey this is useful!”
It started to bother me that I didn’t have anywhere to put those things: for example recently I wanted to use diffdiff and I just could not remember what it was called.
the solution: make a new section of this blog
So I quickly made a new folder called /til/, added some
custom styling (I wanted to style the posts to look a little bit like a tweet),
made a little Rake task to help me create new posts quickly (rake new_til), and
set up a separate RSS Feed for it.
I think this new section of the blog might be more for myself than anything, now when I forget the link to Cryptographic Right Answers I can hopefully look it up on the TIL page. (you might think “julia, why not use bookmarks??” but I have been failing to use bookmarks for my whole life and I don’t see that changing ever, putting things in public is for whatever reason much easier for me)
So far it’s been working, often I can actually just make a quick post in 2 minutes which was the goal.
inspired by Simon Willison’s TIL blog
My page is inspired by Simon Willison’s great TIL blog, though my TIL posts are a lot shorter.
I don’t necessarily want everything to be archived
This came about because I spent a lot of time on Twitter, so I’ve been thinking about what I want to do about all of my tweets.
I keep reading the advice to “POSSE” (“post on your own site, syndicate elsewhere”), and while I find the idea appealing in principle, for me part of the appeal of social media is that it’s a little bit ephemeral. I can post polls or questions or observations or jokes and then they can just kind of fade away as they become less relevant.
I find it a lot easier to identify specific categories of things that I actually want to have on a Real Website That I Own:
- blog posts here!
- comics at https://wizardzines.com/comics/!
- now TILs at https://jvns.ca/til/
and then let everything else be kind of ephemeral.
I really believe in the advice to make email lists though – the first two (blog posts & comics) both have email lists and RSS feeds that people can subscribe to if they want. I might add a quick summary of any TIL posts from that week to the “blog posts from this week” mailing list.
2024-11-04T09:18:03+00:00

Here's where you can find me at IETF 121 in Dublin!
Monday
- 9:30 - 11:30 • oauth
- 15:30 - 17:00 • alldispatch
Tuesday
Thursday
- 9:30 - 11:30 • oauth
Get in Touch
My Current Drafts
2024-10-31T08:00:10+00:00
Hello! I’ve been thinking about the terminal a lot and yesterday I got curious
about all these “control codes”, like Ctrl-A, Ctrl-C, Ctrl-W, etc. What’s
the deal with all of them?
a table of ASCII control characters
Here’s a table of all 33 ASCII control characters, and what they do on my machine (on Mac OS), more or less. There are about a million caveats, but I’ll talk about what it means and all the problems with this diagram that I know about.
You can also view it as an HTML page (I just made it an image so it would show up in RSS).
different kinds of codes are mixed together
The first surprising thing about this diagram to me is that there are 33 control codes, split into (very roughly speaking) these categories:
- Codes that are handled by the operating system’s terminal driver, for
  example when the OS sees a 3 (Ctrl-C), it’ll send a SIGINT signal to the
  current program
- Everything else is passed through to the application as-is and the
  application can do whatever it wants with them. Some subcategories of
  those:
  - Codes that correspond to a literal keypress of a key on your keyboard
    (Enter, Tab, Backspace). For example when you press Enter, your
    terminal gets sent 13.
  - Codes used by readline: “the application can do whatever it wants”
    often means “it’ll do more or less what the readline library does,
    whether the application actually uses readline or not”, so I’ve
    labelled a bunch of the codes that readline uses
  - Other codes, for example I think Ctrl-X has no standard meaning in the
    terminal in general but emacs uses it very heavily
There’s no real structure to which codes are in which categories, they’re all just kind of randomly scattered because this evolved organically.
(If you’re curious about readline, I wrote more about readline in entering text in the terminal is complicated, and there are a lot of cheat sheets out there)
there are only 33 control codes
Something else that I find a little surprising is that there are only 33 control codes –
A to Z, plus 7 more (@, [, \, ], ^, _, ?). This means that if you want to
have for example Ctrl-1 as a keyboard shortcut in a terminal application,
that’s not really meaningful – on my machine at least Ctrl-1 is exactly the
same thing as just pressing 1, Ctrl-3 is the same as Ctrl-[, etc.
Also Ctrl+Shift+C isn’t a control code – what it does depends on your
terminal emulator. On Linux Ctrl-Shift-X is often used by the terminal
emulator to copy or open a new tab or paste for example, it’s not sent to the
TTY at all.
Also I use Ctrl+Left Arrow all the time, but that isn’t a control code,
instead it sends an ANSI escape sequence (ctrl-[[1;5D) which is a different
thing which we absolutely do not have space for in this post.
This “there are only 33 codes” thing is totally different from how keyboard
shortcuts work in a GUI where you can have Ctrl+KEY for any key you want.
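The “only 33 codes” behaviour comes from terminals stripping the high bits of the key’s 7-bit ASCII code, which is also why Ctrl-M collides with Enter. Here’s a small Python sketch of that arithmetic (the ctrl helper is hypothetical, but the bit-masking is the conventional way terminals compute these codes):

```python
# Terminals conventionally compute Ctrl-<key> by clearing the top bits of
# the key's 7-bit ASCII code (code & 0x1f), which only yields distinct
# results for @, A-Z, [, \, ], ^, _ -- 33 codes, counting Ctrl-? -> DEL.
def ctrl(key: str) -> int:
    """Return the control code a terminal sends for Ctrl-<key> (hypothetical helper)."""
    if key == "?":          # Ctrl-? is the odd one out: it sends DEL (127)
        return 127
    return ord(key.upper()) & 0x1F

print(ctrl("C"))   # 3  -> normally triggers SIGINT
print(ctrl("M"))   # 13 -> the same byte as Enter
print(ctrl("I"))   # 9  -> the same byte as Tab
print(ctrl("["))   # 27 -> the same byte as Escape
```

This also shows why Ctrl-M and Ctrl-I are hard to use as shortcuts: by the time the bytes reach the application they’re indistinguishable from Enter and Tab.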
the official ASCII names aren’t very meaningful to me
Each of these 33 control codes has a name in ASCII (for example 3 is ETX).
When all of these control codes were originally defined, they weren’t being
used for computers or terminals at all, they were used for the telegraph machine.
Telegraph machines aren’t the same as UNIX terminals so a lot of the codes were repurposed to mean something else.
Personally I don’t find these ASCII names very useful, because 50% of the time the name in ASCII has no actual relationship to what that code does on UNIX systems today. So it feels easier to just ignore the ASCII names completely instead of trying to figure out which ones still match their original meaning.
It’s hard to use Ctrl-M as a keyboard shortcut
Another thing that’s a bit weird is that Ctrl-M is literally the same as
Enter, and Ctrl-I is the same as Tab, which makes it hard to use those two as keyboard shortcuts.
From some quick research, it seems like some folks do still use Ctrl-I and
Ctrl-M as keyboard shortcuts (here’s an example), but to do that
you need to configure your terminal emulator to treat them differently than the
default.
For me the main takeaway is that if I ever write a terminal application I
should avoid Ctrl-I and Ctrl-M as keyboard shortcuts in it.
how to identify what control codes get sent
While writing this I needed to do a bunch of experimenting to figure out what various key combinations did, so I wrote this Python script echo-key.py that will print them out.
There’s probably a more official way but I appreciated having a script I could customize.
caveat: on canonical vs noncanonical mode
Two of these codes (Ctrl-W and Ctrl-U) are labelled in the table as
“handled by the OS”, but actually they’re not always handled by the OS, it
depends on whether the terminal is in “canonical” mode or in “noncanonical mode”.
In canonical mode,
programs only get input when you press Enter (and the OS is in charge of deleting characters when you press Backspace or Ctrl-W). But in noncanonical mode the program gets
input immediately when you press a key, and the Ctrl-W and Ctrl-U codes are passed through to the program to handle any way it wants.
Generally in noncanonical mode the program will handle Ctrl-W and Ctrl-U
similarly to how the OS does, but there are some small differences.
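As a rough illustration of the mode switch (not code from any particular program), here’s a Python sketch that allocates a pty and flips it out of canonical mode with termios; the field indices follow Python’s termios.tcgetattr layout:

```python
# A sketch of how a program switches its terminal out of canonical mode,
# using a freshly-allocated pty so it runs without a real terminal.
import os
import pty
import termios

master_fd, slave_fd = pty.openpty()

attrs = termios.tcgetattr(slave_fd)   # [iflag, oflag, cflag, lflag, ...]
assert attrs[3] & termios.ICANON      # ptys start out in canonical mode

# Clear ICANON (and ECHO) in the "local modes" field; now reads return
# per-keypress instead of per-line, and Ctrl-W/Ctrl-U are passed through.
attrs[3] &= ~(termios.ICANON | termios.ECHO)
termios.tcsetattr(slave_fd, termios.TCSANOW, attrs)

attrs = termios.tcgetattr(slave_fd)
print(bool(attrs[3] & termios.ICANON))  # False: now in noncanonical mode

os.close(master_fd)
os.close(slave_fd)
```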
Some examples of programs that use canonical mode:
- probably pretty much any noninteractive program, like grep or cat
- git, I think

Examples of programs that use noncanonical mode:
- python3, irb and other REPLs
- your shell
- any full screen TUI like less or vim
caveat: all of the “OS terminal driver” codes are configurable with stty
I said that Ctrl-C sends SIGINT but technically this is not necessarily
true, if you really want to you can remap all of the codes labelled “OS
terminal driver”, plus Backspace, using a tool called stty, and you can view
the mappings with stty -a.
Here are the mappings on my machine right now:
$ stty -a
cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>;
eol2 = <undef>; erase = ^?; intr = ^C; kill = ^U; lnext = ^V;
min = 1; quit = ^\; reprint = ^R; start = ^Q; status = ^T;
stop = ^S; susp = ^Z; time = 0; werase = ^W;
I have personally never remapped any of these and I cannot imagine a reason I
would (I think it would be a recipe for confusion and disaster for me), but I
asked on Mastodon and people said the most common reasons they used
stty were:
- fix a broken terminal with stty sane
- set stty erase ^H to change how Backspace works
- set stty ixoff
- some people even map SIGINT to a different key, like their DELETE key
caveat: on signals
Two signals caveats:
- If the ISIG terminal mode is turned off, then the OS won’t send signals.
  For example vim turns off ISIG
- Apparently on BSDs, there’s an extra control code (Ctrl-T) which sends
  SIGINFO
You can see which terminal modes a program is setting using strace like this,
terminal modes are set with the ioctl system call:
$ strace -tt -o out vim
$ grep ioctl out | grep SET
here are the modes vim sets when it starts (ISIG and ICANON are
missing!):
17:43:36.670636 ioctl(0, TCSETS, {c_iflag=IXANY|IMAXBEL|IUTF8,
c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST, c_cflag=B38400|CS8|CREAD,
c_lflag=ECHOK|ECHOCTL|ECHOKE|PENDIN, ...}) = 0
and it resets the modes when it exits:
17:43:38.027284 ioctl(0, TCSETS, {c_iflag=ICRNL|IXANY|IMAXBEL|IUTF8,
c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST|ONLCR, c_cflag=B38400|CS8|CREAD,
c_lflag=ISIG|ICANON|ECHO|ECHOE|ECHOK|IEXTEN|ECHOCTL|ECHOKE|PENDIN, ...}) = 0
I think the specific combination of modes vim is using here might be called “raw mode”, man cfmakeraw talks about that.
there are a lot of conflicts
Related to “there are only 33 codes”, there are a lot of conflicts where
different parts of the system want to use the same code for different things,
for example by default Ctrl-S will freeze your screen, but if you turn that
off then readline will use Ctrl-S to do a forward search.
Another example is that on my machine sometimes Ctrl-T will send SIGINFO
and sometimes it’ll transpose 2 characters and sometimes it’ll do something
completely different depending on:
- whether the program has ISIG set
- whether the program uses readline / imitates readline’s behaviour
caveat: on “backspace” and “other backspace”
In this diagram I’ve labelled code 127 as “backspace” and 8 as “other backspace”. Uh, what?
I think this was the single biggest topic of discussion in the replies on Mastodon – apparently there’s a LOT of history to this and I’d never heard of any of it before.
First, here’s how it works on my machine:
- I press the Backspace key
- The TTY gets sent the byte 127, which is called DEL in ASCII
- the OS terminal driver and readline both have 127 mapped to “backspace”
  (so it works both in canonical mode and noncanonical mode)
- The previous character gets deleted
If I press Ctrl+H, it has the same effect as Backspace if I’m using
readline, but in a program without readline support (like cat for instance),
it just prints out ^H.
Apparently Step 2 above is different for some folks – their Backspace key sends
the byte 8 instead of 127, and so if they want Backspace to work then they
need to configure the OS (using stty) to set erase = ^H.
There’s an incredible section of the Debian Policy Manual on keyboard configuration
that describes how Delete and Backspace should work according to Debian
policy, which seems very similar to how it works on my Mac today. My
understanding (via this mastodon post)
is that this policy was written in the 90s because there was a lot of confusion
about what Backspace should do in the 90s and there needed to be a standard
to get everything to work.
There’s a bunch more historical terminal stuff here but that’s all I’ll say for now.
there’s probably a lot more diversity in how this works
I’ve probably missed a bunch more ways that “how it works on my machine” might be different from how it works on other people’s machines, and I’ve probably made some mistakes about how it works on my machine too. But that’s all I’ve got for today.
Some more stuff I know that I’ve left out: according to stty -a Ctrl-O is
“discard”, Ctrl-R is “reprint”, and Ctrl-Y is “dsusp”. I have no idea how
to make those actually do anything (pressing them does not do anything
obvious, and some people have told me what they used to do historically but
it’s not clear to me if they have a use in 2024), and a lot of the time in practice
they seem to just be passed through to the application anyway so I just
labelled Ctrl-R and Ctrl-Y as
readline.
not all of this is that useful to know
Also I want to say that I think the contents of this post are kind of interesting
but I don’t think they’re necessarily that useful. I’ve used the terminal
pretty successfully every day for the last 20 years without knowing literally
any of this – I just knew what Ctrl-C, Ctrl-D, Ctrl-Z, Ctrl-R,
Ctrl-L did in practice (plus maybe Ctrl-A, Ctrl-E and Ctrl-W) and did
not worry about the details for the most part, and that was
almost always totally fine except when I was trying to use xterm.js.
But I had fun learning about it so maybe it’ll be interesting to you too.
2024-10-27T07:47:04+00:00
I’ve been having problems for the last 3 years or so where Mess With DNS periodically runs out of memory and gets OOM killed.
This hasn’t been a big priority for me: usually it just goes down for a few minutes while it restarts, and it only happens once a day at most, so I’ve just been ignoring it. But last week it started actually causing a problem so I decided to look into it.
This was kind of a winding road where I learned a lot, so here’s a table of contents:
- there’s about 100MB of memory available
- the problem: OOM killing the backup script
- attempt 1: use SQLite
- attempt 2: use a trie
- attempt 3: make my array use less memory
there’s about 100MB of memory available
I run Mess With DNS on a VM with about 465MB of RAM, which according to
ps aux (the RSS column) is split up something like:
- 100MB for PowerDNS
- 200MB for Mess With DNS
- 40MB for hallpass
That leaves about 110MB of memory free.
A while back I set GOMEMLIMIT to 250MB to try to make sure the garbage collector ran if Mess With DNS used more than 250MB of memory, and I think this helped but it didn’t solve everything.
the problem: OOM killing the backup script
A few weeks ago I started backing up Mess With DNS’s database for the first time using restic.
This has been working okay, but since Mess With DNS operates without much extra
memory I think restic sometimes needed more memory than was available on the
system, and so the backup script sometimes got OOM killed.
This was a problem because
- backups might be corrupted sometimes
- more importantly, restic takes out a lock when it runs, and so I’d have to manually do an unlock if I wanted the backups to continue working. Doing manual work like this is the #1 thing I try to avoid with all my web services (who has time for that!) so I really wanted to do something about it.
There’s probably more than one solution to this, but I decided to try to make Mess With DNS use less memory so that there was more available memory on the system, mostly because it seemed like a fun problem to try to solve.
what’s using memory: IP addresses
I’d run a memory profile of Mess With DNS a bunch of times in the past, so I knew exactly what was using most of Mess With DNS’s memory: IP addresses.
When it starts, Mess With DNS loads this database where you can look up the
ASN of every IP address into memory, so that when it
receives a DNS query it can take the source IP address like 74.125.16.248 and
tell you that IP address belongs to GOOGLE.
This database by itself used about 117MB of memory, and a simple du told me
that was too much – the original text files were only 37MB!
$ du -sh *.tsv
26M ip2asn-v4.tsv
11M ip2asn-v6.tsv
The way it worked originally is that I had an array of these:
type IPRange struct {
StartIP net.IP
EndIP net.IP
Num int
Name string
Country string
}
and I searched through it with a binary search to figure out if any of the ranges contained the IP I was looking for. Basically the simplest possible thing and it’s super fast, my machine can do about 9 million lookups per second.
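As a rough sketch of that approach (this is not the actual Go code, and the sample ranges and helper names are made up for illustration), the “sorted array + binary search” idea looks something like this in Python:

```python
# Keep ranges sorted by start IP, binary-search for the candidate range,
# then check that the query address actually falls inside it.
import bisect
import ipaddress

# (start, end, asn, name) rows, sorted by start address as integers.
# These two sample rows are illustrative, not the real database.
RANGES = [
    (int(ipaddress.ip_address("74.125.0.0")),
     int(ipaddress.ip_address("74.125.255.255")), 15169, "GOOGLE"),
    (int(ipaddress.ip_address("104.16.0.0")),
     int(ipaddress.ip_address("104.31.255.255")), 13335, "CLOUDFLARE"),
]
STARTS = [r[0] for r in RANGES]

def find_asn(ip: str):
    """Return (asn, name) for ip, or None if no range contains it."""
    n = int(ipaddress.ip_address(ip))
    # rightmost range whose start is <= n
    i = bisect.bisect_right(STARTS, n) - 1
    if i >= 0 and RANGES[i][0] <= n <= RANGES[i][1]:
        return RANGES[i][2], RANGES[i][3]
    return None

print(find_asn("74.125.16.248"))  # (15169, 'GOOGLE')
print(find_asn("8.8.4.4"))        # None (not in this tiny sample table)
```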
attempt 1: use SQLite
I’ve been using SQLite recently, so my first thought was – maybe I can store all of this data on disk in an SQLite database, give the tables an index, and that’ll use less memory.
So I:
- wrote a quick Python script using sqlite-utils to import the TSV files into an SQLite database
- adjusted my code to select from the database instead
This did solve the initial memory goal (after a GC it now hardly used any memory at all because the table was on disk!), though I’m not sure how much GC churn this solution would cause if we needed to do a lot of queries at once. I did a quick memory profile and it seemed to allocate about 1KB of memory per lookup.
Let’s talk about the issues I ran into with using SQLite though.
problem: how to store IPv6 addresses
SQLite doesn’t have support for big integers and IPv6 addresses are 128 bits,
so I decided to store them as text. I think BLOB might have been better, I
originally thought BLOBs couldn’t be compared but the sqlite docs say they can.
I ended up with this schema:
CREATE TABLE ipv4_ranges (
start_ip INTEGER NOT NULL,
end_ip INTEGER NOT NULL,
asn INTEGER NOT NULL,
country TEXT NOT NULL,
name TEXT NOT NULL
);
CREATE TABLE ipv6_ranges (
start_ip TEXT NOT NULL,
end_ip TEXT NOT NULL,
asn INTEGER,
country TEXT,
name TEXT
);
CREATE INDEX idx_ipv4_ranges_start_ip ON ipv4_ranges (start_ip);
CREATE INDEX idx_ipv6_ranges_start_ip ON ipv6_ranges (start_ip);
CREATE INDEX idx_ipv4_ranges_end_ip ON ipv4_ranges (end_ip);
CREATE INDEX idx_ipv6_ranges_end_ip ON ipv6_ranges (end_ip);
Also I learned that Python has an ipaddress module, so I could use
ipaddress.ip_address(s).exploded to make sure that the IPv6 addresses were
expanded so that a string comparison would compare them properly.
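Here’s a small sketch of why that matters, using the ipv6_ranges schema from above with Python’s sqlite3 (the sample row is made up for illustration): fully-expanded addresses all have the same length and lowercase hex, so text comparison orders them the same way numeric comparison would.

```python
# Demonstrates the .exploded trick: expanded IPv6 strings compare
# correctly as text, so BETWEEN works on the TEXT columns in the schema.
import ipaddress
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ipv6_ranges (
    start_ip TEXT NOT NULL, end_ip TEXT NOT NULL,
    asn INTEGER, country TEXT, name TEXT)""")

def exploded(s: str) -> str:
    return ipaddress.ip_address(s).exploded

# one made-up sample range
db.execute("INSERT INTO ipv6_ranges VALUES (?, ?, ?, ?, ?)",
           (exploded("2607:f8b0::"),
            exploded("2607:f8b0:ffff:ffff:ffff:ffff:ffff:ffff"),
            15169, "US", "GOOGLE"))

row = db.execute(
    "SELECT name FROM ipv6_ranges WHERE ? BETWEEN start_ip AND end_ip",
    (exploded("2607:f8b0:4006:824::200e"),)).fetchone()
print(row)  # ('GOOGLE',)
```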
problem: it’s 500x slower
I ran a quick microbenchmark, something like this. It printed out that it could look up 17,000 IPv6 addresses per second, and similarly for IPv4 addresses.
This was pretty discouraging – being able to look up 17k addresses per second is kind of fine (Mess With DNS does not get a lot of traffic), but I compared it to the original binary search code and the original code could do 9 million per second.
ips := []net.IP{}
count := 20000
for i := 0; i < count; i++ {
// create a random IPv6 address
bytes := randomBytes()
ip := net.IP(bytes[:])
ips = append(ips, ip)
}
now := time.Now()
success := 0
for _, ip := range ips {
_, err := ranges.FindASN(ip)
if err == nil {
success++
}
}
fmt.Println(success)
elapsed := time.Since(now)
fmt.Println("number per second", float64(count)/elapsed.Seconds())
time for EXPLAIN QUERY PLAN
I’d never really done an EXPLAIN in sqlite, so I thought it would be a fun opportunity to see what the query plan was doing.
sqlite> explain query plan select * from ipv6_ranges where '2607:f8b0:4006:0824:0000:0000:0000:200e' BETWEEN start_ip and end_ip;
QUERY PLAN
`--SEARCH ipv6_ranges USING INDEX idx_ipv6_ranges_end_ip (end_ip>?)
It looks like it’s just using the end_ip index and not the start_ip index,
so maybe it makes sense that it’s slower than the binary search.
I tried to figure out if there was a way to make SQLite use both indexes, but I couldn’t find one and maybe it knows best anyway.
At this point I gave up on the SQLite solution, I didn’t love that it was slower and also it’s a lot more complex than just doing a binary search. I felt like I’d rather keep something much more similar to the binary search.
A few things I tried with SQLite that did not cause it to use both indexes:
- using a compound index instead of two separate indexes
- running
ANALYZE - using
INTERSECTto intersect the results ofstart_ip < ?and? < end_ip. This did make it use both indexes, but it also seemed to make the query literally 1000x slower, probably because it needed to create the results of both subqueries in memory and intersect them.
attempt 2: use a trie
My next idea was to use a trie, because I had some vague idea that maybe a trie would use less memory, and I found this library called ipaddress-go that lets you look up IP addresses using a trie.
I tried using it (here’s the code), but I think I was doing something wildly wrong because, compared to my naive array + binary search:
- it used WAY more memory (800MB to store just the IPv4 addresses)
- it was a lot slower to do the lookups (it could do only 100K/second instead of 9 million/second)
I’m not really sure what went wrong here but I gave up on this approach and decided to just try to make my array use less memory and stick to a simple binary search.
some notes on memory profiling
One thing I learned about memory profiling is that you can use runtime
package to see how much memory is currently allocated in the program. That’s
how I got all the memory numbers in this post. Here’s the code:
func memusage() {
runtime.GC()
var m runtime.MemStats
runtime.ReadMemStats(&m)
fmt.Printf("Alloc = %v MiB\n", m.Alloc/1024/1024)
// write mem.prof
f, err := os.Create("mem.prof")
if err != nil {
log.Fatal(err)
}
pprof.WriteHeapProfile(f)
f.Close()
}
Also I learned that if you use pprof to analyze a heap profile there are two
ways to analyze it: you can pass either --alloc_space or --inuse_space to
go tool pprof. I don’t know how I didn’t realize this before but
alloc_space will tell you about everything that was allocated, and
inuse_space will just include memory that’s currently in use.
Anyway I ran go tool pprof -pdf --inuse_space mem.prof > mem.pdf a lot. Also
every time I use pprof I find myself referring to my own intro to pprof, it’s probably
the blog post I wrote that I use the most often. I should add --alloc_space
and --inuse_space to it.
attempt 3: make my array use less memory
I was storing my ip2asn entries like this:
type IPRange struct {
StartIP net.IP
EndIP net.IP
Num int
Name string
Country string
}
I had 3 ideas for ways to improve this:
- There was a lot of repetition of Name and Country, because a lot of IP
  ranges belong to the same ASN
- net.IP is a []byte under the hood, which felt like it involved an
  unnecessary pointer, was there a way to inline it into the struct?
- Maybe I didn’t need both the start IP and the end IP, often the ranges
  were consecutive so maybe I could rearrange things so that I only had
  the start IP
idea 3.1: deduplicate the Name and Country
I figured I could store the ASN info in an array, and then just store the index
into the array in my IPRange struct. Here are the structs so you can see what
I mean:
type IPRange struct {
StartIP netip.Addr
EndIP netip.Addr
ASN uint32
Idx uint32
}
type ASNInfo struct {
Country string
Name string
}
type ASNPool struct {
asns []ASNInfo
lookup map[ASNInfo]uint32
}
This worked! It brought memory usage from 117MB to 65MB – a 50MB savings. I felt good about this.
Here’s all of the code for that part.
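The deduplication pattern itself is language-independent; here’s a hypothetical Python sketch of the same idea as the Go structs above (store each unique (country, name) pair once, and keep a small index in every range instead of repeating the strings):

```python
# Same interning idea as the Go ASNPool: an array of unique entries plus
# a reverse-lookup map from entry to its index.
class ASNPool:
    def __init__(self):
        self.infos = []   # index -> (country, name)
        self.lookup = {}  # (country, name) -> index

    def intern(self, country: str, name: str) -> int:
        """Return the index for this (country, name), adding it if new."""
        key = (country, name)
        if key not in self.lookup:
            self.lookup[key] = len(self.infos)
            self.infos.append(key)
        return self.lookup[key]

pool = ASNPool()
a = pool.intern("US", "GOOGLE")
b = pool.intern("US", "GOOGLE")   # duplicate: same index, no new entry
c = pool.intern("US", "CLOUDFLARENET")
print(a, b, c, len(pool.infos))   # 0 0 1 2
```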
how big are ASNs?
As an aside – I’m storing the ASN in a uint32, is that right? I looked in the ip2asn
file and the biggest one seems to be 401307, though there are a few lines that
say 4294901931, which is much bigger but still just inside the range of a
uint32. So I can definitely use a uint32.
59.101.179.0 59.101.179.255 4294901931 Unknown AS4294901931
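A quick check of that arithmetic (the uint32 maximum is 2^32 - 1):

```python
# Both ASNs seen in the file fit in an unsigned 32-bit integer.
print(401307 < 2**32, 4294901931 < 2**32)  # True True
print(2**32 - 1)  # 4294967295, the uint32 maximum
```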
idea 3.2: use netip.Addr instead of net.IP
It turns out that I’m not the only one who felt that net.IP was using an
unnecessary amount of memory – in 2021 the folks at Tailscale released a new
IP address library for Go which solves this and many other issues. They wrote a great blog post about it.
I discovered (to my delight) that not only does this new IP address library exist and do exactly what I want, it’s also now in the Go
standard library as netip.Addr. Switching to netip.Addr was
very easy and saved another 20MB of memory, bringing us to 46MB.
I didn’t try my third idea (remove the end IP from the struct) because I’d already been programming for long enough on a Saturday morning and I was happy with my progress.
It’s always such a great feeling when I think “hey, I don’t like this, there must be a better way” and then immediately discover that someone has already made the exact thing I want, thought about it a lot more than me, and implemented it much better than I would have.
all of this was messier in real life
Even though I tried to explain this in a simple linear way “I tried X, then I tried Y, then I tried Z”, that’s kind of a lie – I always try to take my actual debugging process (total chaos) and make it seem more linear and understandable because the reality is just too annoying to write down. It’s more like:
- try sqlite
- try a trie
- second guess everything that I concluded about sqlite, go back and look at the results again
- wait what about indexes
- very very belatedly realize that I can use runtime to check how much
  memory everything is using, start doing that
- look at the trie again, maybe I misunderstood everything
- give up and go back to binary search
- look at all of the numbers for tries/sqlite again to make sure I didn’t misunderstand
A note on using 512MB of memory
Someone asked why I don’t just give the VM more memory. I could very easily afford to pay for a VM with 1GB of memory, but I feel like 512MB really should be enough (and really that 256MB should be enough!) so I’d rather stay inside that constraint. It’s kind of a fun puzzle.
a few ideas from the replies
Folks had a lot of good ideas I hadn’t thought of. Recording them as inspiration if I feel like having another Fun Performance Day at some point.
- Try Go’s unique package for the ASNPool. Someone tried this and it uses
  more memory, probably because Go’s pointers are 64 bits
- Try compiling with GOARCH=386 to use 32-bit pointers to save space
  (maybe in combination with using unique!)
- It should be possible to store all of the IPv6 addresses in just 64
  bits, because only the first 64 bits of the address are public
- Interpolation search might be faster than binary search since IP addresses are numeric
- Try the MaxMind db format with mmdbwriter or mmdbctl
- Tailscale’s art routing table package
the result: saved 70MB of memory!
I deployed the new version and now Mess With DNS is using less memory! Hooray!
A few other notes:
- lookups are a little slower – in my microbenchmark they went from 9 million lookups/second to 6 million, maybe because I added a little indirection. Using less memory and a little more CPU seemed like a good tradeoff though.
- it’s still using more memory than the raw text files do (46MB vs 37MB), I guess pointers take up space and that’s okay.
I’m honestly not sure if this will solve all my memory problems, probably not! But I had fun, I learned a few things about SQLite, I still don’t know what to think about tries, and it made me love binary search even more than I already did.
2024-10-07T09:19:57+00:00
Warning: this is a post about very boring yakshaving, probably only of interest to people who are trying to upgrade Hugo from a very old version to a new version. But what are blogs for if not documenting one’s very boring yakshaves from time to time?
So yesterday I decided to try to upgrade Hugo. There’s no real reason to do this – I’ve been using Hugo version 0.40 to generate this blog since 2018, it works fine, and I don’t have any problems with it. But I thought – maybe it won’t be as hard as I think, and I kind of like a tedious computer task sometimes!
I thought I’d document what I learned along the way in case it’s useful to anyone else doing this very specific migration. I upgraded from Hugo v0.40 (from 2018) to v0.135 (from 2024).
Here are most of the changes I had to make:
change 1: template "theme/partials/thing.html is now partial thing.html
I had to replace a bunch of instances of {{ template "theme/partials/header.html" . }} with {{ partial "header.html" . }}.
This happened in v0.42:
We have now virtualized the filesystems for project and theme files. This makes everything simpler, faster and more powerful. But it also means that template lookups on the form {{ template “theme/partials/pagination.html” . }} will not work anymore. That syntax has never been documented, so it’s not expected to be in wide use.
change 2: .Data.Pages is now site.RegularPages
This seems to be discussed in the release notes for 0.57.2
I just needed to replace .Data.Pages with site.RegularPages in the template on the homepage as well as in my RSS feed template.
change 3: .Next and .Prev got flipped
I had this comment in the part of my theme where I link to the next/previous blog post:
“next” and “previous” in hugo apparently mean the opposite of what I’d think they’d mean intuitively. I’d expect “next” to mean “in the future” and “previous” to mean “in the past” but it’s the opposite
It looks like they changed this in ad705aac064 so that “next” actually is in the future and “prev” actually is in the past. I definitely find the new behaviour more intuitive.
downloading the Hugo changelogs with a script
Figuring out why/when all of these changes happened was a little difficult. I ended up hacking together a bash script to download all of the changelogs from github as text files, which I could then grep to try to figure out what happened. It turns out it’s pretty easy to get all of the changelogs from the GitHub API.
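I'm guessing the script looked something like this (a sketch, not the author's actual script): the GitHub releases API serves release notes as JSON, up to 100 releases per page, so you can save each page to a file and grep the saved JSON later.

```shell
# Sketch of a changelog-downloading script (a reconstruction, not the
# author's actual one): save each page of the GitHub releases API to a
# file, then grep those files for the change you're hunting.
fetch_changelogs() {
  repo="$1"; outdir="$2"; pages="${3:-5}"
  mkdir -p "$outdir"
  for page in $(seq 1 "$pages"); do
    curl -s "https://api.github.com/repos/$repo/releases?per_page=100&page=$page" \
      -o "$outdir/releases-page-$page.json"
  done
}

# usage: fetch_changelogs gohugoio/hugo changelogs
# then:  grep -l blackfriday changelogs/*.json
```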
So far everything was not so bad – there was also a change around taxonomies that I can't quite explain, but it was all pretty manageable. But then we got to the really tough one: the markdown renderer.
change 4: the markdown renderer (blackfriday -> goldmark)
The blackfriday markdown renderer (which was previously the default) was removed in v0.100.0. This seems pretty reasonable:
It has been deprecated for a long time, its v1 version is not maintained anymore, and there are many known issues. Goldmark should be a mature replacement by now.
Fixing all my Markdown changes was a huge pain – I ended up having to update 80 different Markdown files (out of 700) so that they would render properly, and I'm not totally sure I caught everything.
why bother switching renderers?
The obvious question here is – why bother even trying to upgrade Hugo at all if I have to switch Markdown renderers? My old site was running totally fine and I think it wasn’t necessarily a good use of time, but the one reason I think it might be useful in the future is that the new renderer (goldmark) uses the CommonMark markdown standard, which I’m hoping will be somewhat more futureproof. So maybe I won’t have to go through this again? We’ll see.
Also it turned out that the new Goldmark renderer does fix some problems I had (but didn’t know that I had) with smart quotes and how lists/blockquotes interact.
finding all the Markdown problems: the process
The hard part of this Markdown change was even figuring out what changed. Almost all of the problems (including #2 and #3 above) just silently broke the site, they didn’t cause any errors or anything. So I had to diff the HTML to hunt them down.
Here’s what I ended up doing:
- Generate the site with the old version, put it in public_old
- Generate the new version, put it in public
- Diff every single HTML file in public and public_old with this diff.sh script and put the results in a diffs/ folder
- Run variations on find diffs -type f | xargs cat | grep -C 5 '(31m|32m)' | less -r over and over again to look at every single change until I found something that seemed wrong
- Update the Markdown to fix the problem
- Repeat until everything seemed okay
(the grep 31m|32m thing is searching for red/green text in the diff)
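That diff step could be sketched like this (assuming GNU diff's --color=always flag in place of the post's diff.sh script, which isn't shown):

```shell
# Sketch of the diff-every-page hunt (assuming GNU diff's --color=always
# stands in for the post's diff.sh script)
diff_site() {
  mkdir -p diffs
  (cd public_old && find . -name '*.html') | while read -r f; do
    # coloured diff of each page; "|| true" because diff exits 1 on changes
    diff --color=always "public_old/$f" "public/$f" > "diffs/${f//\//_}" || true
  done
}

# usage: diff_site
# then:  find diffs -type f | xargs cat | grep -C 5 '(31m|32m)' | less -r
```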
This was very time consuming but it was a little bit fun for some reason so I kept doing it until it seemed like nothing too horrible was left.
the new markdown rules
Here’s a list of every type of Markdown change I had to make. It’s very possible these are all extremely specific to me but it took me a long time to figure them all out so maybe this will be helpful to one other person who finds this in the future.
4.1: mixing HTML and markdown
This doesn’t work anymore (it doesn’t expand the link):
<small>
[a link](https://example.com)
</small>
I need to do this instead (with blank lines around the Markdown):
<small>

[a link](https://example.com)

</small>
This works too:
<small> [a link](https://example.com) </small>
4.2: << is changed into «
I didn’t want this so I needed to configure:
markup:
  goldmark:
    extensions:
      typographer:
        leftAngleQuote: '<<'
        rightAngleQuote: '>>'
4.3: nested lists sometimes need 4 space indents
This doesn’t render as a nested list anymore if I only indent by 2 spaces, I need to put 4 spaces.
1. a
    * b
    * c
2. b
The problem is that the amount of indent needed depends on the size of the list markers. Here’s a reference in CommonMark for this.
4.4: blockquotes inside lists work better
Previously the > quote here didn’t render as a blockquote, and with the new renderer it does.
* something
  > quote
* something else
I found a bunch of Markdown that had been kind of broken (which I hadn’t noticed) that works better with the new renderer, and this is an example of that.
Lists inside blockquotes also seem to work better.
4.5: headings inside lists
Previously this didn't render as a heading, but now it does. So I needed to
replace the # with \#.
* # passengers: 20
4.6: + or 1) at the beginning of the line makes it a list
I had something which looked like this:
`1 / (1
+ exp(-1)) = 0.73`
With Blackfriday it rendered like this:
<p><code>1 / (1
+ exp(-1)) = 0.73</code></p>
and with Goldmark it rendered like this:
<p>`1 / (1</p>
<ul>
<li>exp(-1)) = 0.73`</li>
</ul>
Same thing if there was an accidental 1) at the beginning of a line, like in this Markdown snippet
I set up a small Hadoop cluster (1 master, 2 workers, replication set to
1) on
To fix this I just had to rewrap the line so that the + wasn’t the first character.
The Markdown is formatted this way because I wrap my Markdown to 80 characters a lot and the wrapping isn’t very context sensitive.
4.7: no more smart quotes in code blocks
There were a bunch of places where the old renderer (Blackfriday) was doing
unwanted things in code blocks like replacing ... with … or replacing
quotes with smart quotes. I hadn’t realized this was happening and I was very
happy to have it fixed.
4.8: better quote management
The way this gets rendered got better:
"Oh, *interesting*!"
- old: “Oh, interesting!“
- new: “Oh, interesting!”
Before there were two left smart quotes, now the quotes match.
4.9: images are no longer wrapped in a p tag
Previously if I had an image like this:
<img src="https://jvns.ca/images/rustboot1.png">
it would get wrapped in a <p> tag, now it doesn’t anymore. I dealt with this
just by adding a margin-bottom: 0.75em to images in the CSS, hopefully
that’ll make them display well enough.
4.10: <br> is now wrapped in a p tag
Previously this wouldn’t get wrapped in a p tag, but now it seems to:
<br><br>
I just gave up on fixing this though and resigned myself to maybe having some extra space in some cases. Maybe I’ll try to fix it later if I feel like another yakshave.
4.11: some more goldmark settings
I also needed to
- turn off code highlighting (because it wasn’t working properly and I didn’t have it before anyway)
- use the old “blackfriday” method to generate heading IDs so they didn’t change
- allow raw HTML in my markdown
Here’s what I needed to add to my config.yaml to do all that:
markup:
  highlight:
    codeFences: false
  goldmark:
    renderer:
      unsafe: true
    parser:
      autoHeadingIDType: blackfriday
Maybe I’ll try to get syntax highlighting working one day, who knows. I might prefer having it off though.
a little script to compare blackfriday and goldmark
I also wrote a little program to compare the Blackfriday and Goldmark output for various markdown snippets, here it is in a gist.
It’s not really configured the exact same way Blackfriday and Goldmark were in my Hugo versions, but it was still helpful to have to help me understand what was going on.
a quick note on maintaining themes
My approach to themes in Hugo has been:
- pay someone to make a nice design for the site (for example wizardzines.com was designed by Melody Starling)
- use a totally custom theme
- commit that theme to the same Github repo as the site
So I just need to edit the theme files to fix any problems. Also I wrote a lot of the theme myself so I’m pretty familiar with how it works.
Relying on someone else to keep a theme updated feels kind of scary to me, I think if I were using a third-party theme I’d just copy the code into my site’s github repo and then maintain it myself.
which static site generators have better backwards compatibility?
I asked on Mastodon if anyone had used a static site generator with good backwards compatibility.
The main answers seemed to be Jekyll and 11ty. Several people said they’d been using Jekyll for 10 years without any issues, and 11ty says it has stability as a core goal.
I think a big factor in how appealing Jekyll/11ty are is how easy it is for you to maintain a working Ruby / Node environment on your computer: part of the reason I stopped using Jekyll was that I got tired of having to maintain a working Ruby installation. But I imagine this wouldn’t be a problem for a Ruby or Node developer.
Several people said that they don’t build their Jekyll site locally at all – they just use GitHub Pages to build it.
that’s it!
Overall I’ve been happy with Hugo – I started using it because it had fast build times and it was a static binary, and both of those things are still extremely useful to me. I might have spent 10 hours on this upgrade, but I’ve probably spent 1000+ hours writing blog posts without thinking about Hugo at all so that seems like an extremely reasonable ratio.
I find it hard to be too mad about the backwards-incompatible changes: most of
them were quite a long time ago, and Hugo does a great job of making its old
releases available so you can keep using an old release if you want. The most
difficult change was removing support for the blackfriday Markdown renderer in
favour of something CommonMark-compliant, which seems pretty reasonable to
me even if it is a huge pain.
But it did take a long time and I don’t think I’d particularly recommend moving 700 blog posts to a new Markdown renderer unless you’re really in the mood for a lot of computer suffering for some reason.
The new renderer did fix a bunch of problems so I think overall it might be a good thing, even if I’ll have to remember to make 2 changes to how I write Markdown (4.1 and 4.3).
Also I’m still using Hugo 0.54 for https://wizardzines.com so maybe these notes will be useful to Future Me if I ever feel like upgrading Hugo for that site.
Hopefully I didn’t break too many things on the blog by doing this, let me know if you see anything broken!
2024-10-01T10:01:44+00:00
Yesterday I was thinking about how long it took me to get a colorscheme in my terminal that I was mostly happy with (SO MANY YEARS), and it made me wonder what about terminal colours made it so hard.
So I asked people on Mastodon what problems they’ve run into with colours in the terminal, and I got a ton of interesting responses! Let’s talk about some of the problems and a few possible ways to fix them.
problem 1: blue on black
One of the top complaints was “blue on black is hard to read”. Here’s an
example of that: if I open Terminal.app, set the background to black, and run
ls, the directories are displayed in a blue that isn’t that easy to read:
To understand why we’re seeing this blue, let’s talk about ANSI colours!
the 16 ANSI colours
Your terminal has 16 numbered colours – black, red, green, yellow, blue, magenta, cyan, white, and a “bright” version of each of those.
Programs can use them by printing out an “ANSI escape code” – for example if you want to see each of the 16 colours in your terminal, you can run this Python program:
def color(num, text):
    return f"\033[38;5;{num}m{text}\033[0m"

for i in range(16):
    print(color(i, f"number {i:02}"))
what are the ANSI colours?
This made me wonder – if blue is colour number 4, who decides what hex color that should correspond to?
The answer seems to be “there’s no standard, terminal emulators just choose colours and it’s not very consistent”. Here’s a screenshot of a table from Wikipedia, where you can see that there’s a lot of variation:
problem 1.5: bright yellow on white
Bright yellow on white is even worse than blue on black, here’s what I get in a terminal with the default settings:
That’s almost impossible to read (and some other colours like light green cause similar issues), so let’s talk about solutions!
two ways to reconfigure your colours
If you’re annoyed by these colour contrast issues (or maybe you just think the default ANSI colours are ugly), you might think – well, I’ll just choose a different “blue” and pick something I like better!
There are two ways you can do this:
Way 1: Configure your terminal emulator: I think most modern terminal emulators have a way to reconfigure the colours, and some of them even come with some preinstalled themes that you might like better than the defaults.
Way 2: Run a shell script: There are ANSI escape codes that you can print
out to tell your terminal emulator to reconfigure its colours. Here’s a shell script that does that,
from the base16-shell project.
You can see that it has a few different conventions for changing the colours –
I guess different terminal emulators have different escape codes for changing
their colour palette, and so the script is trying to pick the right style of
escape code based on the TERM environment variable.
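For example, the xterm-style escape sequence for remapping a palette entry is OSC 4 (this is my understanding of the convention base16-shell uses for xterm-like terminals; other terminals use different sequences, which is why the script sniffs TERM):

```shell
# Remap an ANSI palette entry, xterm-style:
#   OSC 4 ; <palette index> ; <colour> ST
set_palette_color() {
  printf '\033]4;%d;%s\033\\' "$1" "$2"
}

# remap colour 4 ("blue") to a lighter, more readable blue
set_palette_color 4 '#6688ff'
```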
what are the pros and cons of the 2 ways of configuring your colours?
I prefer to use the “shell script” method, because:
- if I switch terminal emulators for some reason, I don’t need to learn a different configuration system, my colours still Just Work
- I use base16-shell with base16-vim to make my vim colours match my terminal colours, which is convenient
some advantages of configuring colours in your terminal emulator:
- if you use a popular terminal emulator, there are probably a lot more nice terminal themes out there that you can choose from
- not all terminal emulators support the “shell script method”, and even if they do, the results can be a little inconsistent
This is what my shell has looked like for probably the last 5 years (using the
solarized light base16 theme), and I’m pretty happy with it. Here’s htop:
Okay, so let’s say you’ve found a terminal colorscheme that you like. What else can go wrong?
problem 2: programs using 256 colours
Here’s what some output of fd, a find alternative, looks like in my
colorscheme:
The contrast is pretty bad here, and I definitely don’t have that lime green in my normal colorscheme. What’s going on?
We can see which colour codes fd is using by running it under the unbuffer
program to capture its output, colour codes included:
$ unbuffer fd . > out
$ vim out
^[[38;5;48mbad-again.sh^[[0m
^[[38;5;48mbad.sh^[[0m
^[[38;5;48mbetter.sh^[[0m
out
^[[38;5;48 means “set the foreground color to color 48”. Terminals don’t
only have 16 colours – many terminals these days actually have 3 ways of
specifying colours:
- the 16 ANSI colours we already talked about
- an extended set of 256 colours
- a further extended set of 24-bit hex colours, like #ffea03
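In case it's a useful reference, here's a sketch of the three styles of escape code (the exact sequences are my understanding of the ANSI/xterm conventions, not something from the original post), in the same style as the earlier Python snippet:

```python
# The three ways of specifying a foreground colour, as escape codes:

def ansi16(n, text):
    # n in 0-15; codes 30-37 are the "normal" colours, 90-97 the "bright" ones
    code = 30 + n if n < 8 else 90 + (n - 8)
    return f"\033[{code}m{text}\033[0m"

def ansi256(n, text):
    # n in 0-255, the extended palette
    return f"\033[38;5;{n}m{text}\033[0m"

def truecolor(r, g, b, text):
    # 24-bit "true colour", specified as r/g/b components
    return f"\033[38;2;{r};{g};{b}m{text}\033[0m"

print(ansi16(4, "ANSI blue"))
print(ansi256(48, "fd's green"))
print(truecolor(0xFF, 0xEA, 0x03, "#ffea03"))
```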
So fd is using one of the colours from the extended 256-color set. bat (a
cat alternative) does something similar – here’s what it looks like by
default in my terminal.
This looks fine though and it really seems like it’s trying to work well with a variety of terminal themes.
some newer tools seem to have theme support
I think it’s interesting that some of these newer terminal tools (fd, bat,
delta, and probably more) have support for arbitrary custom themes. I guess
the downside of this approach is that the default theme might clash with your
terminal’s background, but the upside is that it gives you a lot more control
over theming the tool’s output than just choosing 16 ANSI colours.
I don’t really use bat, but if I did I’d probably use bat --theme ansi to
just use the ANSI colours that I have set in my normal terminal colorscheme.
problem 3: the grays in Solarized
A bunch of people on Mastodon mentioned a specific issue with grays in the Solarized theme: when I list a directory, the base16 Solarized Light theme looks like this:
but iTerm’s default Solarized Light theme looks like this:
This is because in the iTerm theme (which is the original Solarized design), colors 9-14 (the “bright blue”, “bright
red”, etc) are mapped to a series of grays, and when I run ls, it’s trying to
use those “bright” colours to color my directories and executables.
My best guess for why the original Solarized theme is designed this way is to make the grays available to the vim Solarized colorscheme.
I’m pretty sure I prefer the modified base16 version I use where the “bright” colours are actually colours instead of all being shades of gray though. (I didn’t actually realize the version I was using wasn’t the “original” Solarized theme until I wrote this post)
In any case I really love Solarized and I’m very happy it exists so that I can use a modified version of it.
problem 4: a vim theme that doesn’t match the terminal background
If my vim theme has a different background colour than my terminal theme, I get an ugly border, like this:
This one is a pretty minor issue though and I think making your terminal background match your vim background is pretty straightforward.
problem 5: programs setting a background color
A few people mentioned problems with terminal applications setting an unwanted background colour, so let’s look at an example of that.
Here ngrok has set the background to color #16 (“black”), but the
base16-shell script I use sets color 16 to be bright orange, so I get this,
which is pretty bad:
I think the intention is for ngrok to look something like this:
I think base16-shell sets color #16 to orange (instead of black)
so that it can provide extra colours for use by base16-vim.
This feels reasonable to me – I use base16-vim in the terminal, so I guess I’m
using that feature and it’s probably more important to me than ngrok (which I
rarely use) behaving a bit weirdly.
This particular issue is a maybe-obscure clash between ngrok and my colorscheme, but I think this kind of clash is pretty common when a program sets an ANSI background color that the user has remapped for some reason.
a nice solution to contrast issues: “minimum contrast”
A bunch of terminals (iTerm2, tabby, kitty’s text_fg_override_threshold, and folks tell me also Ghostty and Windows Terminal) have a “minimum contrast” feature that will automatically adjust colours to make sure they have enough contrast.
Here’s an example from iTerm. This ngrok accident from before has pretty bad contrast, I find it pretty difficult to read:
With “minimum contrast” set to 40 in iTerm, it looks like this instead:
I didn’t have minimum contrast turned on before but I just turned it on today because it makes such a big difference when something goes wrong with colours in the terminal.
problem 6: TERM being set to the wrong thing
A few people mentioned that they’ll SSH into a system that doesn’t support the
TERM environment variable that they have set locally, and then the colours
won’t work.
I think the way TERM works is that systems have a terminfo database, so if
the value of the TERM environment variable isn’t in the system’s terminfo
database, then it won’t know how to output colours for that terminal. I don’t
know too much about terminfo, but someone linked me to this terminfo rant that talks about a few other
issues with terminfo.
I don’t have a system on hand to reproduce this one so I can’t say for sure how
to fix it, but this stackoverflow question
suggests running something like TERM=xterm ssh instead of ssh.
problem 7: picking “good” colours is hard
A couple of problems people mentioned with designing / finding terminal colorschemes:
- some folks are colorblind and have trouble finding an appropriate colorscheme
- accidentally making the background color too close to the cursor or selection color, so they’re hard to find
- generally finding colours that work with every program is a struggle (for example you can see me having a problem with this with ngrok above!)
problem 8: making nethack/mc look right
Another problem people mentioned is using a program like nethack or midnight commander which you might expect to have a specific colourscheme based on the default ANSI terminal colours.
For example, midnight commander has a really specific classic look:
But in my Solarized theme, midnight commander looks like this:
The Solarized version feels like it could be disorienting if you’re very used to the “classic” look.
One solution Simon Tatham mentioned to this is using some palette customization ANSI codes (like the ones base16 uses that I talked about earlier) to change the color palette right before starting the program, for example remapping yellow to a brighter yellow before starting Nethack so that the yellow characters look better.
problem 9: commands disabling colours when writing to a pipe
If I run fd | less, I see something like this, with the colours disabled.
In general I find this useful – if I pipe a command to grep, I don’t want it
to print out all those color escape codes, I just want the plain text. But what if you want to see the colours?
To see the colours, you can run unbuffer fd | less -r! I just learned about
unbuffer recently and I think it’s really cool, unbuffer opens a tty for the
command to write to so that it thinks it’s writing to a TTY. It also fixes
issues with programs buffering their output when writing to a pipe, which is
why it’s called unbuffer.
Here’s what the output of unbuffer fd | less -r looks like for me:
Also some commands (including fd) support a --color=always flag which will
force them to always print out the colours.
problem 10: unwanted colour in ls and other commands
Some people mentioned that they don’t want ls to use colour at all, perhaps
because ls uses blue, it’s hard to read on black, and maybe they don’t feel like
customizing their terminal’s colourscheme to make the blue more readable or
just don’t find the use of colour helpful.
Some possible solutions to this one:
- you can run ls --color=never, which is probably easiest
- you can also set LS_COLORS to customize the colours used by ls. I think some programs other than ls support the LS_COLORS environment variable too.
- also some programs support setting NO_COLOR=true (there's a list here)
Here’s an example of running LS_COLORS="fi=0:di=0:ln=0:pi=0:so=0:bd=0:cd=0:or=0:ex=0" ls:
problem 11: the colours in vim
I used to have a lot of problems with configuring my colours in vim – I’d set up my terminal colours in a way that I thought was okay, and then I’d start vim and it would just be a disaster.
I think what was going on here is that today, there are two ways to set up a vim colorscheme in the terminal:
- using your ANSI terminal colours – you tell vim which ANSI colour number to use for the background, for functions, etc.
- using 24-bit hex colours – instead of ANSI terminal colours, the vim colorscheme can use hex codes like #faea99 directly
20 years ago when I started using vim, terminals with 24-bit hex color support were a lot less common (or maybe they didn’t exist at all), and vim certainly didn’t have support for using 24-bit colour in the terminal. From some quick searching through git, it looks like vim added support for 24-bit colour in 2016 – just 8 years ago!
So to get colours to work properly in vim before 2016, you needed to synchronize
your terminal colorscheme and your vim colorscheme. Here’s what that looked like,
the colorscheme needed to map the vim color classes like cterm05 to ANSI colour numbers.
But in 2024, the story is really different! Vim (and Neovim, which I use now)
support 24-bit colours, and as of Neovim 0.10 (released in May 2024), the
termguicolors setting (which tells Vim to use 24-bit hex colours for
colorschemes) is turned on by default in any terminal with 24-bit
color support.
So this “you need to synchronize your terminal colorscheme and your vim colorscheme” problem is not an issue anymore for me in 2024, since I don’t plan to use terminals without 24-bit color support in the future.
The biggest consequence for me of this whole thing is that I don’t need base16
to set colors 16-21 to weird stuff anymore to integrate with vim – I can just
use a terminal theme and a vim theme, and as long as the two themes use similar
colours (so it’s not jarring for me to switch between them) there’s no problem.
I think I can just remove those parts from my base16 shell script and totally
avoid the problem with ngrok and the weird orange background I talked about
above.
some more problems I left out
I think there are a lot of issues around the intersection of multiple programs, like using some combination tmux/ssh/vim that I couldn’t figure out how to reproduce well enough to talk about them. Also I’m sure I missed a lot of other things too.
base16 has really worked for me
I’ve personally had a lot of success with using
base16-shell with
base16-vim – I just need to add a couple of lines to my
fish config to set it up (+ a few .vimrc lines) and then I can move on and
accept any remaining problems that it doesn’t solve.
I don’t think base16 is for everyone though, some limitations I’m aware of with base16 that might make it not work for you:
- it comes with a limited set of builtin themes and you might not like any of them
- the Solarized base16 theme (and maybe all of the themes?) sets the “bright” ANSI colours to be exactly the same as the normal colours, which might cause a problem if you’re relying on the “bright” colours to be different from the regular ones
- it sets colours 16-21 in order to give the vim colorschemes from base16-vim access to more colours, which might not be relevant if you always use a terminal with 24-bit color support, and can cause problems like the ngrok issue above
- also the way it sets colours 16-21 could be a problem in terminals that don't have 256-color support, like the linux framebuffer terminal
Apparently there’s a community fork of base16 called tinted-theming, which I haven’t looked into much yet.
some other colorscheme tools
Just one so far but I’ll link more if people tell me about them:
- rootloops.sh for generating colorschemes (and “let’s create a terminal color scheme”)
- Some popular colorschemes (according to people I asked on Mastodon): Catppuccin, Monokai, Gruvbox, Dracula, Modus (a high contrast theme), Tokyo Night, Nord, Rosé Pine
okay, that was a lot
We talked about a lot in this post and while I think learning about all these details is kind of fun if I’m in the mood to do a deep dive, I find it SO FRUSTRATING to deal with it when I just want my colours to work! Being surprised by unreadable text and having to find a workaround is just not my idea of a good day.
Personally I’m a zero-configuration kind of person and it’s not that appealing to me to have to put together a lot of custom configuration just to make my colours in the terminal look acceptable. I’d much rather just have some reasonable defaults that I don’t have to change.
minimum contrast seems like an amazing feature
My one big takeaway from writing this was to turn on “minimum contrast” in my terminal, I think it’s going to fix most of the occasional accidental unreadable text issues I run into and I’m pretty excited about it.
2024-09-27T11:16:00+00:00
I spent a lot of time in the past couple of weeks working on a website in Go that may or may not ever see the light of day, but I learned a couple of things along the way I wanted to write down. Here they are:
go 1.22 now has better routing
I’ve never felt motivated to learn any of the Go routing libraries (gorilla/mux, chi, etc), so I’ve been doing all my routing by hand, like this.
// DELETE /records:
case r.Method == "DELETE" && n == 1 && p[0] == "records":
    if !requireLogin(username, r.URL.Path, r, w) {
        return
    }
    deleteAllRecords(ctx, username, rs, w, r)

// POST /records/<ID>
case r.Method == "POST" && n == 2 && p[0] == "records" && len(p[1]) > 0:
    if !requireLogin(username, r.URL.Path, r, w) {
        return
    }
    updateRecord(ctx, username, p[1], rs, w, r)
But apparently as of Go 1.22, Go now has better support for routing in the standard library, so that code can be rewritten something like this:
mux.HandleFunc("DELETE /records/", app.deleteAllRecords)
mux.HandleFunc("POST /records/{record_id}", app.updateRecord)
Though it would also need a login middleware, so maybe something more like
this, with a requireLogin middleware.
mux.Handle("DELETE /records/", requireLogin(http.HandlerFunc(app.deleteAllRecords)))
a gotcha with the built-in router: redirects with trailing slashes
One annoying gotcha I ran into was: if I make a route for /records/, then a
request for /records will be redirected to /records/.
I ran into an issue with this where sending a POST request to /records
redirected to a GET request for /records/, which broke the POST request
because it removed the request body. Thankfully Xe Iaso wrote a blog post about the exact same issue which made it
easier to debug.
I think the solution to this is just to use API endpoints like POST /records
instead of POST /records/, which seems like a more normal design anyway.
sqlc automatically generates code for my db queries
I got a little bit tired of writing so much boilerplate for my SQL queries, but I didn’t really feel like learning an ORM, because I know what SQL queries I want to write, and I didn’t feel like learning the ORM’s conventions for translating things into SQL queries.
But then I found sqlc, which will compile a query like this:
-- name: GetVariant :one
SELECT *
FROM variants
WHERE id = ?;
into Go code like this:
const getVariant = `-- name: GetVariant :one
SELECT id, created_at, updated_at, disabled, product_name, variant_name
FROM variants
WHERE id = ?
`

func (q *Queries) GetVariant(ctx context.Context, id int64) (Variant, error) {
    row := q.db.QueryRowContext(ctx, getVariant, id)
    var i Variant
    err := row.Scan(
        &i.ID,
        &i.CreatedAt,
        &i.UpdatedAt,
        &i.Disabled,
        &i.ProductName,
        &i.VariantName,
    )
    return i, err
}
What I like about this is that if I’m ever unsure about what Go code to write for a given SQL query, I can just write the query I want, read the generated function and it’ll tell me exactly what to do to call it. It feels much easier to me than trying to dig through the ORM’s documentation to figure out how to construct the SQL query I want.
Reading Brandur’s sqlc notes from 2024 also gave me some confidence that this is a workable path for my tiny programs. That post gives a really helpful example of how to conditionally update fields in a table using CASE statements (for example if you have a table with 20 columns and you only want to update 3 of them).
sqlite tips
Someone on Mastodon linked me to this post called Optimizing sqlite for servers. My projects are small and I’m not so concerned about performance, but my main takeaways were:
- have a dedicated object for writing to the database, and run db.SetMaxOpenConns(1) on it. I learned the hard way that if I don't do this then I'll get SQLITE_BUSY errors from two threads trying to write to the db at the same time.
- if I want to make reads faster, I could have 2 separate db objects, one for writing and one for reading
There are more tips in that post that seem useful (like “COUNT queries are slow” and “Use STRICT tables”), but I haven’t done those yet.
Also sometimes if I have two tables where I know I’ll never need to do a JOIN
between them, I’ll just put them in separate databases so that I can connect
to them independently.
Go 1.19 introduced a way to set a GC memory limit
I run all of my Go projects in VMs with relatively little memory, like 256MB or 512MB. I ran into an issue where my application kept getting OOM killed and it was confusing – did I have a memory leak? What?
After some Googling, I realized that maybe I didn’t have a memory leak, maybe I just needed to reconfigure the garbage collector! It turns out that by default (according to A Guide to the Go Garbage Collector), Go’s garbage collector will let the application allocate memory up to 2x the current heap size.
Mess With DNS’s base heap size is around 170MB and the amount of memory free on the VM is around 160MB right now, so if its memory doubled, it’d get OOM killed.
In Go 1.19, they added a way to tell Go “hey, if the application starts using this much memory, run a GC”. So I set the GC memory limit to 250MB and it seems to have resulted in the application getting OOM killed less often:
export GOMEMLIMIT=250MiB
some reasons I like making websites in Go
I’ve been making tiny websites (like the nginx playground) in Go on and off for the last 4 years or so and it’s really been working for me. I think I like it because:
- there’s just 1 static binary, all I need to do to deploy it is copy the binary. If there are static files I can just embed them in the binary with embed.
- there’s a built-in webserver that’s okay to use in production, so I don’t need to configure WSGI or whatever to get it to work. I can just put it behind Caddy or run it on fly.io or whatever.
- Go’s toolchain is very easy to install, I can just do apt-get install golang-go or whatever and then a go build will build my project
- it feels like there’s very little to remember to start sending HTTP responses – basically all there is are functions like Serve(w http.ResponseWriter, r *http.Request) which read the request and send a response. If I need to remember some detail of how exactly that’s accomplished, I just have to read the function!
- also net/http is in the standard library, so you can start making websites without installing any libraries at all. I really appreciate this one.
- Go is a pretty systems-y language, so if I need to run an ioctl or something that’s easy to do
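To make the “very little to remember” point concrete, here’s a self-contained sketch (the handler and names are made up): a complete server in one file using only the standard library, with net/http/httptest exercising the handler without even binding a real port.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// hello is the whole "framework": a plain function that reads the
// request and writes the response.
func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "hello, %s", r.URL.Query().Get("name"))
}

// serveHello spins up a throwaway test server around the handler,
// makes one request, and returns the response body.
func serveHello() string {
	srv := httptest.NewServer(http.HandlerFunc(hello))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/?name=world")
	if err != nil {
		return err.Error()
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(serveHello())
}
```

In a real deployment you’d call http.ListenAndServe(":8080", …) instead of the test server, but the handler function is identical.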
In general everything about it feels like it makes projects easy to work on for 5 days, abandon for 2 years, and then get back into writing code without a lot of problems.
For contrast, I’ve tried to learn Rails a couple of times and I really want to love Rails – I’ve made a couple of toy websites in Rails and it’s always felt like a really magical experience. But ultimately when I come back to those projects I can’t remember how anything works and I just end up giving up. It feels easier to me to come back to my Go projects that are full of a lot of repetitive boilerplate, because at least I can read the code and figure out how it works.
things I haven’t figured out yet
some things I haven’t done much of yet in Go:
- rendering HTML templates: usually my Go servers are just APIs and I make the frontend a single-page app with Vue. I’ve used html/template a lot in Hugo (which I’ve used for this blog for the last 8 years) but I’m still not sure how I feel about it.
- I’ve never made a real login system, usually my servers don’t have users at all.
- I’ve never tried to implement CSRF
In general I’m not sure how to implement security-sensitive features so I don’t start projects which need login/CSRF/etc. I imagine this is where a framework would help.
it’s cool to see the new features Go has been adding
Both of the Go features I mentioned in this post (GOMEMLIMIT and the routing)
are new in the last couple of years and I didn’t notice when they came out. It
makes me think I should pay closer attention to the release notes for new Go
versions.
2024-09-12T15:09:12+00:00
Fullscreen
Open in Tab
I wrote about how much I love fish in this blog post from 2017 and, 7 years of using it every day later, I’ve found even more reasons to love it. So I thought I’d write a new post with both the old reasons I loved it and some new ones.
This came up today because I was trying to figure out why my terminal doesn’t break anymore when I cat a binary to my terminal, the answer was “fish fixes the terminal!”, and I just thought that was really nice.
1. no configuration
In 10 years of using fish I have never found a single thing I wanted to configure. It just works the way I want. My fish config file just has:
- environment variables
- aliases (alias ls eza, alias vim nvim, etc)
- the occasional direnv hook fish | source to integrate a tool like direnv
- a script I run to set up my terminal colours
I’ve been told that configuring things in fish is really easy if you ever do want to configure something though.
2. autosuggestions from my shell history
My absolute favourite thing about fish is that as I type, it’ll automatically suggest (in light grey) a matching command that I ran recently. I can press the right arrow key to accept the completion, or keep typing to ignore it.
Here’s what that looks like. In this example I just typed the “v” key and it guessed that I want to run the previous vim command again.
2.5 “smart” shell autosuggestions
One of my favourite subtle autocomplete features is how fish handles autocompleting commands that contain paths in them. For example, if I run:
$ ls blah.txt
that command will only be autocompleted in directories that contain blah.txt – it won’t show up in a different directory. (here’s a short comment about how it works)
As an example, if in this directory I type bash scripts/, it’ll only suggest
history commands including files that actually exist in my blog’s scripts
folder, and not the dozens of other irrelevant scripts/ commands I’ve run in
other folders.
I didn’t understand exactly how this worked until last week, it just felt like fish was magically able to suggest the right commands. It still feels a little like magic and I love it.
3. pasting multiline commands
If I copy and paste multiple lines, bash will run them all, like this:
[bork@grapefruit linux-playground (main)]$ echo hi
hi
[bork@grapefruit linux-playground (main)]$ touch blah
[bork@grapefruit linux-playground (main)]$ echo hi
hi
This is a bit alarming – what if I didn’t actually want to run all those commands?
Fish will paste them all at a single prompt, so that I can press Enter if I actually want to run them. Much less scary.
bork@grapefruit ~/work/> echo hi
touch blah
echo hi
4. nice tab completion
If I run ls and press tab, it’ll display all the filenames in a nice grid. I can use either Tab, Shift+Tab, or the arrow keys to navigate the grid.
Also, I can tab complete from the middle of a filename – if the filename starts with a weird character (or if it’s just not very unique), I can type some characters from the middle and press tab.
Here’s what the tab completion looks like:
bork@grapefruit ~/work/> ls
api/ blah.py fly.toml README.md
blah Dockerfile frontend/ test_websocket.sh
I honestly don’t complete things other than filenames very much so I can’t speak to that, but I’ve found the experience of tab completing filenames to be very good.
5. nice default prompt (including git integration)
Fish’s default prompt includes everything I want:
- username
- hostname
- current folder
- git integration
- status of last command exit (if the last command failed)
Here’s a screenshot with a few different variations on the default prompt,
including if the last command was interrupted (the SIGINT) or failed.
6. nice history defaults
In bash, the maximum history size is 500 by default, presumably because computers used to be slow and not have a lot of disk space. Also, by default, commands don’t get added to your history until you end your session. So if your computer crashes, you lose some history.
In fish:
- the default history size is 256,000 commands. I don’t see any reason I’d ever need more.
- if you open a new tab, everything you’ve ever run (including commands in open sessions) is immediately available to you
- in an existing session, the history search will only include commands from the current session, plus everything that was in history at the time that you started the shell
I’m not sure how clearly I’m explaining how fish’s history system works here, but it feels really good to me in practice. My impression is that the way it’s implemented is the commands are continually added to the history file, but fish only loads the history file once, on startup.
I’ll mention here that if you want to have a fancier history system in another shell it might be worth checking out atuin or fzf.
7. press up arrow to search history
I also like fish’s interface for searching history: for example if I want to edit my fish config file, I can just type:
$ config.fish
and then press the up arrow to go back to the last command that included config.fish. That’ll complete to:
$ vim ~/.config/fish/config.fish
and I’m done. This isn’t so different from using Ctrl+R in bash to search your history but I think I like it a little better overall, maybe because Ctrl+R has some behaviours that I find confusing (for example you can end up accidentally editing your history, which I don’t like).
8. the terminal doesn’t break
I used to run into issues with bash where I’d accidentally cat a binary to
the terminal, and it would break the terminal.
Every time fish displays a prompt, it’ll try to fix up your terminal so that you don’t end up in weird situations like this. I think this is some of the code in fish to prevent broken terminals.
Some things that it does are:
- turn on echo so that you can see the characters you type
- make sure that newlines work properly so that you don’t get that weird staircase effect
- reset your terminal background colour, etc
I don’t think I’ve run into any of these “my terminal is broken” issues in a very long time, and I actually didn’t even realize that this was because of fish – I thought that things somehow magically just got better, or maybe I wasn’t making as many mistakes. But I think it was mostly fish saving me from myself, and I really appreciate that.
9. Ctrl+S is disabled
Also related to terminals breaking: fish disables Ctrl+S (which freezes your terminal and then you need to remember to press Ctrl+Q to unfreeze it). It’s a feature that I’ve never wanted and I’m happy to not have it.
Apparently you can disable Ctrl+S in other shells with stty -ixon.
10. nice syntax highlighting
By default commands that don’t exist are highlighted in red, like this.
11. easier loops
I find the loop syntax in fish a lot easier to type than the bash syntax. It looks like this:
for i in *.yaml
echo $i
end
Also it’ll add indentation in your loops which is nice.
12. easier multiline editing
Related to loops: you can edit multiline commands much more easily than in bash (just use the arrow keys to navigate the multiline command!). Also when you use the up arrow to get a multiline command from your history, it’ll show you the whole command the exact same way you typed it instead of squishing it all onto one line like bash does:
$ bash
$ for i in *.png
> do
> echo $i
> done
$ # press up arrow
$ for i in *.png; do echo $i; done
13. Ctrl+left arrow
This might just be me, but I really appreciate that fish has the Ctrl+left arrow / Ctrl+right arrow keyboard shortcut for moving between
words when writing a command.
I’m honestly a bit confused about where this keyboard shortcut is coming from
(the only documented keyboard shortcut for this I can find in fish is Alt+left arrow / Alt + right arrow which seems to do the same thing), but I’m pretty
sure this is a fish shortcut.
A couple of notes about getting this shortcut to work / where it comes from:
- one person said they needed to switch their terminal emulator from the “Linux console” keybindings to “Default (XFree 4)” to get it to work in fish
- on Mac OS, Ctrl+left arrow switches workspaces by default, so I had to turn that off.
- Also apparently Ubuntu configures libreadline in /etc/inputrc to make Ctrl+left/right arrow go back/forward a word, so it’ll work in bash on Ubuntu and maybe other Linux distros too. Here’s a stack overflow question talking about that
a downside: not everything has a fish integration
Sometimes tools don’t have instructions for integrating them with fish. That’s annoying, but:
- I’ve found this has gotten better over the last 10 years as fish has gotten more popular. For example Python’s virtualenv has had a fish integration for a long time now.
- If I need to run a POSIX shell command real quick, I can always just run bash or zsh
- I’ve gotten much better over the years at translating simple commands to fish syntax when I need to
My biggest day-to-day annoyance is probably that for whatever reason I’m still not used to fish’s syntax for setting environment variables, I get confused about set vs set -x.
another downside: fish_add_path
fish has a function called fish_add_path that you can run to add a directory
to your PATH like this:
fish_add_path /some/directory
I love the idea of it and I used to use it all the time, but I’ve stopped using it for two reasons:
- Sometimes fish_add_path will update the PATH for every session in the future (with a “universal variable”) and sometimes it will update the PATH just for the current session. It’s hard for me to tell which one it will do: in theory the docs explain this but I could not understand them.
- If you ever need to remove the directory from your PATH a few weeks or months later because maybe you made a mistake, that’s also kind of hard to do (there are instructions in the comments of this github issue though).
Instead I just update my PATH like this, similarly to how I’d do it in bash:
set PATH $PATH /some/directory/bin
on POSIX compatibility
When I started using fish, you couldn’t do things like cmd1 && cmd2 – it
would complain “no, you need to run cmd1; and cmd2” instead.
It seems like over the years fish has started accepting a little more POSIX-style syntax than it used to, like:
- cmd1 && cmd2
- export a=b to set an environment variable (though this seems a bit limited, you can’t do export PATH=$PATH:/whatever so I think it’s probably better to learn set instead)
on fish as a default shell
Changing my default shell to fish is always a little annoying, I occasionally get myself into a situation where
- I install fish somewhere like maybe /home/bork/.nix-stuff/bin/fish
- I add the new fish location to /etc/shells as an allowed shell
- I change my shell with chsh
- at some point months/years later I reinstall fish in a different location for some reason and remove the old one
- oh no!!! I have no valid shell! I can’t open a new terminal tab anymore!
This has never been a major issue because I always have a terminal open somewhere where I can fix the problem and rescue myself, but it’s a bit alarming.
If you don’t want to use chsh to change your shell to fish (which is very reasonable,
maybe I shouldn’t be doing that), the Arch wiki page has a couple of good suggestions –
either configure your terminal emulator to run fish or add an exec fish to
your .bashrc.
I’ve never really learned the scripting language
Other than occasionally writing a for loop interactively on the command line, I’ve never really learned the fish scripting language. I still do all of my shell scripting in bash.
I don’t think I’ve ever written a fish function or if statement.
it seems like fish is getting pretty popular
I ran a highly unscientific poll on Mastodon asking people what shell they use interactively. The results were (of 2600 responses):
- 46% bash
- 49% zsh
- 16% fish
- 5% other
I think 16% for fish is pretty remarkable, since (as far as I know) there isn’t any system where fish is the default shell, and my sense is that it’s very common to just stick to whatever your system’s default shell is.
It feels like a big achievement for the fish project, even if maybe my Mastodon followers are more likely than the average shell user to use fish for some reason.
who might fish be right for?
Fish definitely isn’t for everyone. I think I like it because:
- I really dislike configuring my shell (and honestly my dev environment in general), I want things to “just work” with the default settings
- fish’s defaults feel good to me
- I don’t spend that much time logged into random servers using other shells so there’s not too much context switching
- I liked its features so much that I was willing to relearn how to do a few “basic” shell things, like using parentheses (seq 1 10) to run a command instead of backticks, or using set instead of export
Maybe you’re also a person who would like fish! I hope a few more of the people who fish is for can find it, because I spend so much of my time in the terminal and it’s made that time much more pleasant.
2024-08-31T18:36:50-07:00
Fullscreen
Open in Tab
I just did a massive spring cleaning of one of my servers, trying to clean up what has become quite the mess of clutter. For every website on the server, I either:
- Documented what it is, who is using it, and what version of language and framework it uses
- Archived it as static HTML flat files
- Moved the source code from GitHub to a private git server
- Deleted the files
It feels good to get rid of old code, and to turn previously dynamic sites (with all of the risk they come with) into plain HTML.
This is also making me seriously reconsider the value of spinning up any new projects. Several of these are now 10 years old, still churning along fine, but difficult to do any maintenance on because of versions and dependencies. For example:
- indieauth.com - this has been on the chopping block for years, but I haven't managed to build a replacement yet, and is still used by a lot of people
- webmention.io - this is a pretty popular service, and I don't want to shut it down, but there's a lot of problems with how it's currently built and no easy way to make changes
- switchboard.p3k.io - this is a public WebSub (PubSubHubbub) hub, like Superfeedr, and has weirdly gained a lot of popularity in the podcast feed space in the last few years
One that I'm particularly happy with, despite it being an ugly pile of PHP, is oauth.net. I inherited this site in 2012, and it hasn't needed any framework upgrades since it's just using PHP templates. My ham radio website w7apk.com is similarly a small amount of templated PHP, and it is low stress to maintain, and actually fun to quickly jot some notes down when I want. I like not having to go through the whole ceremony of setting up a dev environment, installing dependencies, upgrading things to the latest version, checking for backwards incompatible changes, git commit, deploy, etc. I can just sftp some changes up to the server and they're live.
Some questions for myself for the future, before starting a new project:
- Could this actually just be a tag page on my website, like #100DaysOfMusic or #BikeTheEclipse?
- If it really needs to be a new project, then:
- Can I create it in PHP without using any frameworks or libraries? Plain PHP ages far better than pulling in any dependencies which inevitably stop working with a version 2-3 EOL cycles back, so every library brought in means signing up for annual maintenance of the whole project. Frameworks can save time in the short term, but have a huge cost in the long term.
- Is it possible to avoid using a database? Databases aren't inherently bad, but using one does make the project slightly more fragile, since it requires plans for migrations and backups.
- If a database is required, is it possible to create it in a way that does not result in ever-growing storage needs?
- Is this going to store data or be a service that other people are going to use? If so, plan on a registration form so that I have a way to contact people eventually when I need to change it or shut it down.
- If I've got this far with the questions, am I really ready to commit to supporting this code base for the next 10 years?
One project I've been committed to maintaining and doing regular (ok fine, "semi-regular") updates for is Meetable, the open source events website that I run on a few domains:
I started this project in October 2019, excited for all the IndieWebCamps we were going to run in 2020. Somehow that is already 5 years ago now. Well that didn't exactly pan out, but I did quickly pivot it to add a bunch of features that are helpful for virtual events, so it worked out ok in the end. We've continued to use it for posting IndieWeb events, and I also run an instance for two IETF working groups. I'd love to see more instances pop up, I've only encountered one or two other ones in the wild. I even spent a significant amount of time on the onboarding flow so that it's relatively easy to install and configure. I even added passkeys for the admin login so you don't need any external dependencies on auth providers. It's a cool project if I may say so myself.
Anyway, this is not a particularly well thought out blog post, I just wanted to get my thoughts down after spending all day combing through the filesystem of my web server and uncovering a lot of ancient history.
2024-08-29T12:59:53-07:00
Fullscreen
Open in Tab
The first law of OAuth states that
the total number of authorized access tokens
in an isolated system
must remain constant over time. Over time.
In the world of OAuth, where the sun always shines,
Tokens like treasures, in digital lines.
Security's a breeze, with every law so fine,
OAuth, oh yeah, tonight we dance online!
The second law of OAuth states that
the overall security of the system
must always remain constant over time.
Over time. Over time. Over time.
In the world of OAuth, where the sun always shines,
Tokens like treasures, in digital lines.
Security's a breeze, with every law so fine,
OAuth, oh yeah, tonight we dance online!
The third law of OAuth states that
as the security of the system approaches absolute,
the ability to grant authorized access approaches zero. Zero!
In the world of OAuth, where the sun always shines,
Tokens like treasures, in digital lines.
Security's a breeze, with every law so fine,
OAuth, oh yeah, tonight we dance online!
Tonight we dance online!
OAuth, oh yeah!
Lyrics and music by AI, prompted and edited by Aaron Parecki
2024-08-19T08:15:28+00:00
Fullscreen
Open in Tab
About 3 years ago, I announced Mess With DNS in this blog post, a playground where you can learn how DNS works by messing around and creating records.
I wasn’t very careful with the DNS implementation though (to quote the release blog post: “following the DNS RFCs? not exactly”), and people started reporting problems that eventually I decided that I wanted to fix.
the problems
Some of the problems people have reported were:
- domain names with underscores weren’t allowed, even though they should be
- If there was a CNAME record for a domain name, it allowed you to create other records for that domain name, even if it shouldn’t
- you could create 2 different CNAME records for the same domain name, which shouldn’t be allowed
- no support for the SVCB or HTTPS record types, which seemed a little complex to implement
- no support for upgrading from UDP to TCP for big responses
And there are certainly more issues that nobody got around to reporting, for example that if you added an NS record for a subdomain to delegate it, Mess With DNS wouldn’t handle the delegation properly.
the solution: PowerDNS
I wasn’t sure how to fix these problems for a long time – technically I could have started addressing them individually, but it felt like there were a million edge cases and I’d never get there.
But then one day I was chatting with someone else who was working on a DNS server and they said they were using PowerDNS: an open source DNS server with an HTTP API!
This seemed like an obvious solution to my problems – I could just swap out my own crappy DNS implementation for PowerDNS.
There were a couple of challenges I ran into when setting up PowerDNS that I’ll talk about here. I really don’t do a lot of web development and I think I’ve never built a website that depends on a relatively complex API before, so it was a bit of a learning experience.
challenge 1: getting every query made to the DNS server
One of the main things Mess With DNS does is give you a live view of every DNS query it receives for your subdomain, using a websocket. To make this work, it needs to intercept every DNS query before it gets sent to the PowerDNS DNS server:
There were 2 options I could think of for how to intercept the DNS queries:
- dnstap: dnsdist (a DNS load balancer from the PowerDNS project) has support for logging all DNS queries it receives using dnstap, so I could put dnsdist in front of PowerDNS and then log queries that way
- Have my Go server listen on port 53 and proxy the queries myself
I originally implemented option #1, but for some reason there was a 1 second delay before every query got logged. I couldn’t figure out why, so I implemented my own very simple proxy instead.
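A minimal sketch of option #2, assuming nothing about the real Mess With DNS code: a UDP listener that “logs” each query (where the real app would push it over the websocket) and then relays it to an upstream server. The uppercasing upstream here is a stand-in for PowerDNS, and the payloads are plain strings rather than real DNS packets.

```go
package main

import (
	"bytes"
	"fmt"
	"net"
	"time"
)

// runProxyDemo starts a fake upstream (standing in for PowerDNS), a proxy
// that intercepts and forwards one query, and a client. It returns the
// query the proxy logged and the reply the client received.
func runProxyDemo() (logged, reply string, err error) {
	// Fake upstream: echoes each packet back uppercased.
	upstream, err := net.ListenPacket("udp", "127.0.0.1:0")
	if err != nil {
		return "", "", err
	}
	defer upstream.Close()
	go func() {
		buf := make([]byte, 512)
		n, addr, err := upstream.ReadFrom(buf)
		if err != nil {
			return
		}
		upstream.WriteTo(bytes.ToUpper(buf[:n]), addr)
	}()

	// The proxy: this is the interception point where the real app
	// would stream the query to the browser before forwarding it.
	loggedCh := make(chan string, 1)
	proxy, err := net.ListenPacket("udp", "127.0.0.1:0")
	if err != nil {
		return "", "", err
	}
	defer proxy.Close()
	go func() {
		buf := make([]byte, 512)
		n, client, err := proxy.ReadFrom(buf)
		if err != nil {
			return
		}
		loggedCh <- string(buf[:n]) // "log" the query

		// Forward to upstream and relay the response back.
		conn, err := net.Dial("udp", upstream.LocalAddr().String())
		if err != nil {
			return
		}
		defer conn.Close()
		conn.Write(buf[:n])
		conn.SetReadDeadline(time.Now().Add(2 * time.Second))
		resp := make([]byte, 512)
		m, err := conn.Read(resp)
		if err != nil {
			return
		}
		proxy.WriteTo(resp[:m], client)
	}()

	// A client sends one "query" through the proxy.
	client, err := net.Dial("udp", proxy.LocalAddr().String())
	if err != nil {
		return "", "", err
	}
	defer client.Close()
	client.Write([]byte("example.com. A"))
	client.SetReadDeadline(time.Now().Add(2 * time.Second))
	buf := make([]byte, 512)
	n, err := client.Read(buf)
	if err != nil {
		return "", "", err
	}
	return <-loggedCh, string(buf[:n]), nil
}

func main() {
	logged, reply, err := runProxyDemo()
	if err != nil {
		panic(err)
	}
	fmt.Printf("logged query: %q, reply: %q\n", logged, reply)
}
```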
challenge 2: should the frontend have direct access to the PowerDNS API?
The frontend used to have a lot of DNS logic in it – it converted emoji domain
names to ASCII using punycode, had a lookup table to convert numeric DNS query
types (like 1) to their human-readable names (like A), did a little bit of
validation, and more.
Originally I considered keeping this pattern and just giving the frontend (more or less) direct access to the PowerDNS API to create and delete, but writing even more complex code in Javascript didn’t feel that appealing to me – I don’t really know how to write tests in Javascript and it seemed like it wouldn’t end well.
So I decided to take all of the DNS logic out of the frontend and write a new DNS API for managing records, shaped something like this:
- GET /records
- DELETE /records/<ID>
- DELETE /records/ (delete all records for a user)
- POST /records/ (create record)
- POST /records/<ID> (update record)
This meant that I could actually write tests for my code, since the backend is in Go and I do know how to write tests in Go.
what I learned: it’s okay for an API to duplicate information
I had this idea that APIs shouldn’t return duplicate information – for example if I get a DNS record, it should only include a given piece of information once.
But I ran into a problem with that idea when displaying MX records: an MX record has 2 fields, “preference”, and “mail server”. And I needed to display that information in 2 different ways on the frontend:
- In a form, where “Preference” and “Mail Server” are 2 different form fields (like 10 and mail.example.com)
- In a summary view, where I wanted to just show the record (10 mail.example.com)
This is kind of a small problem, but it came up in a few different places.
I talked to my friend Marco Rogers about this, and based on some advice from him I realized that I could return the same information in the API in 2 different ways! Then the frontend just has to display it. So I started just returning duplicate information in the API, something like this:
{
values: {'Preference': 10, 'Server': 'mail.example.com'},
content: '10 mail.example.com',
...
}
I ended up using this pattern in a couple of other places where I needed to display the same information in 2 different ways and it was SO much easier.
I think what I learned from this is that if I’m making an API that isn’t intended for external use (there are no users of this API other than the frontend!), I can tailor it very specifically to the frontend’s needs and that’s okay.
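A sketch of what that can look like on the Go side (the struct and field names are illustrative, not the actual Mess With DNS types): the record carries both the structured fields for the form view and the pre-rendered string for the summary view, and the JSON encoder happily sends both.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Record deliberately duplicates the MX data: Values feeds the form
// fields, Content feeds the one-line summary view.
type Record struct {
	Values  map[string]any `json:"values"`
	Content string         `json:"content"`
}

// renderMX returns the JSON the frontend would receive.
func renderMX() string {
	rec := Record{
		Values:  map[string]any{"Preference": 10, "Server": "mail.example.com"},
		Content: "10 mail.example.com",
	}
	b, err := json.Marshal(rec)
	if err != nil {
		return err.Error()
	}
	return string(b)
}

func main() {
	fmt.Println(renderMX())
}
```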
challenge 3: what’s a record’s ID?
In Mess With DNS (and I think in most DNS user interfaces!), you create, add, and delete records.
But that’s not how the PowerDNS API works. In PowerDNS, you create a zone, which is made of record sets. Records don’t have any ID in the API at all.
I ended up solving this by generating a fake ID for each record, which is made of:
- its name
- its type
- and its content (base64-encoded)
For example one record’s ID is brooch225.messwithdns.com.|NS|bnMxLm1lc3N3aXRoZG5zLmNvbS4=
Then I can search through the zone and find the appropriate record to update it.
This means that if you update a record then its ID will change which isn’t usually what I want in an ID, but that seems fine.
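A sketch of that ID scheme (my guess at the construction from the example ID above, not the actual code): join the name, type, and base64-encoded content with a separator that can’t appear in a DNS name or record type.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// recordID builds a synthetic ID for a record from its name, type, and
// base64-encoded content, since PowerDNS records have no IDs of their own.
func recordID(name, rtype, content string) string {
	return name + "|" + rtype + "|" + base64.StdEncoding.EncodeToString([]byte(content))
}

func main() {
	fmt.Println(recordID("brooch225.messwithdns.com.", "NS", "ns1.messwithdns.com."))
}
```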
challenge 4: making clear error messages
I think the error messages that the PowerDNS API returns aren’t really intended to be shown to end users, for example:
- Name 'new\032site.island358.messwithdns.com.' contains unsupported characters (this error encodes the space as \032, which is a bit disorienting if you don’t know that the space character is 32 in ASCII)
- RRset test.pear5.messwithdns.com. IN CNAME: Conflicts with pre-existing RRset (this talks about RRsets, which aren’t a concept that the Mess With DNS UI has at all)
- Record orange.beryl5.messwithdns.com./A '1.2.3.4$': Parsing record content (try 'pdnsutil check-zone'): unable to parse IP address, strange character: $ (mentions “pdnsutil”, a utility which Mess With DNS’s users don’t have access to in this context)
I ended up handling this in two ways:
- Do some initial basic validation of values that users enter (like IP addresses), so I can just return errors like Invalid IPv4 address: "1.2.3.4$"
- If that goes well, send the request to PowerDNS and if we get an error back, then do some hacky translation of those messages to make them clearer.
Sometimes users will still get errors from PowerDNS directly, but I added some logging of all the errors that users see, so hopefully I can review them and add extra translations if there are other common errors that come up.
I think what I learned from this is that if I’m building a user-facing application on top of an API, I need to be pretty thoughtful about how I resurface those errors to users.
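A sketch of that first validation layer using the standard library’s net.ParseIP (the error wording mirrors the example above; the real validation surely covers more cases):

```go
package main

import (
	"fmt"
	"net"
)

// validateIPv4 checks a user-supplied A record value before it is ever
// sent to PowerDNS, so users see a clear error instead of a PowerDNS one.
func validateIPv4(s string) error {
	ip := net.ParseIP(s)
	if ip == nil || ip.To4() == nil { // To4() is nil for IPv6 addresses
		return fmt.Errorf("Invalid IPv4 address: %q", s)
	}
	return nil
}

func main() {
	fmt.Println(validateIPv4("1.2.3.4"))  // <nil>
	fmt.Println(validateIPv4("1.2.3.4$")) // Invalid IPv4 address: "1.2.3.4$"
}
```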
challenge 5: setting up SQLite
Previously Mess With DNS was using a Postgres database. This was problematic
because I only gave the Postgres machine 256MB of RAM, which meant that the
database got OOM killed almost every single day. I never really worked out
exactly why it got OOM killed every day, but that’s how it was. I spent some
time trying to tune Postgres’ memory usage by setting the max connections /
work-mem / maintenance-work-mem and it helped a bit but didn’t solve the
problem.
So for this refactor I decided to use SQLite instead, because the website doesn’t really get that much traffic. There are some choices involved with using SQLite, and I decided to:
- Run db.SetMaxOpenConns(1) to make sure that we only open 1 connection to the database at a time, to prevent SQLITE_BUSY errors from two threads trying to access the database at the same time (just setting WAL mode didn’t work)
- Use separate databases for each of the 3 tables (users, records, and requests) to reduce contention. This maybe isn’t really necessary, but there was no reason I needed the tables to be in the same database so I figured I’d set up separate databases to be safe.
- Use the cgo-free modernc.org/sqlite, which translates SQLite’s source code to Go. I might switch to a more “normal” sqlite implementation instead at some point and use cgo though. I think the main reason I prefer to avoid cgo is that cgo has landed me with difficult-to-debug errors in the past.
- use WAL mode
I still haven’t set up backups, though I don’t think my Postgres database had backups either. I think I’m unlikely to use litestream for backups – Mess With DNS is very far from a critical application, and I think daily backups that I could recover from in case of a disaster are more than good enough.
challenge 6: upgrading Vue & managing forms
This has nothing to do with PowerDNS but I decided to upgrade Vue.js from version 2 to 3 as part of this refresh. The main problem with that is that the form validation library I was using (FormKit) completely changed its API between Vue 2 and Vue 3, so I decided to just stop using it instead of learning the new API.
I ended up switching to some form validation tools that are built into the
browser like required and oninvalid (here’s the code).
I think it could use some improvement, I still don’t understand forms very well.
challenge 7: managing state in the frontend
This also has nothing to do with PowerDNS, but when modifying the frontend I realized that my state management in the frontend was a mess – in every place where I made an API request to the backend, I had to try to remember to add a “refresh records” call after that in every place that I’d modified the state and I wasn’t always consistent about it.
With some more advice from Marco, I ended up implementing a single global state management store which stores all the state for the application, and which lets me create/update/delete records.
Then my components can just call store.createRecord(record), and the store
will automatically resynchronize all of the state as needed.
challenge 8: sequencing the project
This project ended up having several steps because I reworked the whole integration between the frontend and the backend. I ended up splitting it into a few different phases:
- Upgrade Vue from v2 to v3
- Make the state management store
- Implement a different backend API, move a lot of DNS logic out of the frontend, and add tests for the backend
- Integrate PowerDNS
I made sure that the website was (more or less) 100% working and then deployed it in between phases, so that the amount of changes I was managing at a time stayed somewhat under control.
the new website is up now!
I released the upgraded website a few days ago and it seems to work! The PowerDNS API has been great to work on top of, and I’m relieved that there’s a whole class of problems that I now don’t have to think about at all, other than potentially trying to make the error messages from PowerDNS a little clearer. Using PowerDNS has fixed a lot of the DNS issues that folks have reported in the last few years and it feels great.
If you run into problems with the new Mess With DNS I’d love to hear about them here.
2024-08-06T08:38:35+00:00
I’ve been writing Go pretty casually for years – the backends for all of my playgrounds (nginx, dns, memory, more DNS) are written in Go, but many of those projects are just a few hundred lines and I don’t come back to those codebases much.
I thought I more or less understood the basics of the language, but this week I’ve been writing a lot more Go than usual while working on some upgrades to Mess with DNS, and ran into a bug that revealed I was missing a very basic concept!
Then I posted about this on Mastodon and someone linked me to this very cool site (and book) called 100 Go Mistakes and How To Avoid Them by Teiva Harsanyi. It just came out in 2022 so it’s relatively new.
I decided to read through the site to see what else I was missing, and found a couple of other misconceptions I had about Go. I’ll talk about some of the mistakes that jumped out to me the most, but really the whole 100 Go Mistakes site is great and I’d recommend reading it.
Here’s the initial mistake that started me on this journey:
mistake 1: not understanding that structs are copied on assignment
Let’s say we have a struct:
type Thing struct {
    Name string
}
and this code:
thing := Thing{"record"}
other_thing := thing
other_thing.Name = "banana"
fmt.Println(thing)
This prints “record” and not “banana” (play.go.dev link), because thing is copied when you
assign it to other_thing.
the problem this caused me: ranges
The bug I spent 2 hours of my life debugging last week was effectively this code (play.go.dev link):
type Thing struct {
    Name string
}

func findThing(things []Thing, name string) *Thing {
    for _, thing := range things {
        if thing.Name == name {
            return &thing
        }
    }
    return nil
}

func main() {
    things := []Thing{Thing{"record"}, Thing{"banana"}}
    thing := findThing(things, "record")
    thing.Name = "gramaphone"
    fmt.Println(things)
}
This prints out [{record} {banana}] – because findThing returned a copy, we didn’t change the name in the original array.
This mistake is #30 in 100 Go Mistakes.
I fixed the bug by changing it to something like this (play.go.dev link), which returns a reference to the item in the array we’re looking for instead of a copy.
func findThing(things []Thing, name string) *Thing {
    for i := range things {
        if things[i].Name == name {
            return &things[i]
        }
    }
    return nil
}
why didn’t I realize this?
When I learned that I was mistaken about how assignment worked in Go I was really taken aback, like – it’s such a basic fact about how the language works! If I was wrong about that then what ELSE am I wrong about in Go????
My best guess for what happened is:
- I’ve heard for my whole life that when you define a function, you need to think about whether its arguments are passed by reference or by value
- So I’d thought about this in Go, and I knew that if you pass a struct as a value to a function, it gets copied – if you want to pass a reference then you have to pass a pointer
- But somehow it never occurred to me that you need to think about the same thing for assignments, perhaps because in most of the other languages I use (Python, JS, Java) I think everything is a reference anyway. Except for in Rust, where you do have values that you make copies of, but I think most of the time I had to run .clone() explicitly (though apparently structs will be automatically copied on assignment if the struct implements the Copy trait).
- Also obviously I just don’t write that much Go so I guess it’s never come up.
mistake 2: side effects appending slices (#25)
When you subset a slice with x[2:3], the original slice and the sub-slice
share the same backing array, so if you append to the new slice, it can
unintentionally change the old slice:
For example, this code prints [1 2 3 555 5] (code on play.go.dev)
x := []int{1, 2, 3, 4, 5}
y := x[2:3]
y = append(y, 555)
fmt.Println(x)
I don’t think this has ever actually happened to me, but it’s alarming and I’m very happy to know about it.
Apparently you can avoid this problem by changing y := x[2:3] to y := x[2:3:3], which restricts the new slice’s capacity so that appending to it
will re-allocate a new slice. Here’s some code on play.go.dev that does that.
mistake 3: not understanding the different types of method receivers (#42)
This one isn’t a “mistake” exactly, but it’s been a source of confusion for me and it’s pretty simple so I’m glad to have it cleared up.
In Go you can declare methods in 2 different ways:
- func (t Thing) Function() (a “value receiver”)
- func (t *Thing) Function() (a “pointer receiver”)
My understanding now is that basically:
- If you want the method to mutate the struct t, you need a pointer receiver.
- If you want to make sure the method doesn’t mutate the struct t, use a value receiver.
Explanation #42 has a bunch of other interesting details though. There’s definitely still something I’m missing about value vs pointer receivers (I got a compile error related to them a couple of times in the last week that I still don’t understand), but hopefully I’ll run into that error again soon and I can figure it out.
more interesting things I noticed
Some more notes from 100 Go Mistakes:
- apparently you can name the outputs of your function (#43), though that can have issues (#44) and I’m not sure I want to
- apparently you can put tests in a different package (#90) to ensure that you only use the package’s public interfaces, which seems really useful
- there are lots of notes about how to use contexts, channels, goroutines, mutexes, sync.WaitGroup, etc. I’m sure I have something to learn about all of those but today is not the day I’m going to learn them.
Also there are some things that have tripped me up in the past, like:
- forgetting the return statement after replying to an HTTP request (#80)
- not realizing the httptest package exists (#88)
this “100 common mistakes” format is great
I really appreciated this “100 common mistakes” format – it made it really easy for me to skim through the mistakes and very quickly mentally classify them into:
- yep, I know that
- not interested in that one right now
- WOW WAIT I DID NOT KNOW THAT, THAT IS VERY USEFUL!!!!
It looks like “100 Common Mistakes” is a series of books from Manning and they also have “100 Java Mistakes” and an upcoming “100 SQL Server Mistakes”.
Also I enjoyed what I’ve read of Effective Python by Brett Slatkin, which has a similar “here are a bunch of short Python style tips” structure where you can quickly skim it and take what’s useful to you. There’s also Effective C++, Effective Java, and probably more.
some other Go resources
other resources I’ve appreciated:
- Go by example for basic syntax
- go.dev/play
- obviously https://pkg.go.dev for documentation about literally everything
- staticcheck seems like a useful linter – for example I just started using it to tell me when I’ve forgotten to handle an error
- apparently golangci-lint includes a bunch of different linters
2024-07-21T12:54:40-07:00

Here's where you can find me at IETF 120 in Vancouver!
Monday
- 9:30 - 11:30 • alldispatch • Regency C/D
- 13:00 - 15:00 • oauth • Plaza B
- 18:30 - 19:30 • Hackdemo Happy Hour • Regency Hallway
Tuesday
Wednesday
- 9:30 - 11:30 • wimse • Georgia A
- 11:45 - 12:45 • Chairs Forum • Regency C/D
- 17:30 - 19:30 • IETF Plenary • Regency A/B/C/D
Thursday
Friday
- 13:00 - 15:00 • oauth • Regency A/B
My Current Drafts
2024-07-08T13:00:15+00:00
The other day I asked what folks on Mastodon find confusing about working in the terminal, and one thing that stood out to me was “editing a command you already typed in”.
This really resonated with me: even though entering some text and editing it is
a very “basic” task, it took me maybe 15 years of using the terminal every
single day to get used to using Ctrl+A to go to the beginning of the line (or
Ctrl+E for the end – I think I used Home/End instead).
So let’s talk about why entering text might be hard! I’ll also share a few tips that I wish I’d learned earlier.
it’s very inconsistent between programs
A big part of what makes entering text in the terminal hard is the inconsistency between how different programs handle entering text. For example:
- some programs (cat, nc, git commit --interactive, etc) don’t support using arrow keys at all: if you press arrow keys, you’ll just see ^[[D^[[D^[[C^[[C
- many programs (like irb, python3 on a Linux machine and many many more) use the readline library, which gives you a lot of basic functionality (history, arrow keys, etc)
- some programs (like /usr/bin/python3 on my Mac) do support very basic features like arrow keys, but not other features like Ctrl+left or reverse searching with Ctrl+R
- some programs (like the fish shell or ipython3 or micro or vim) have their own fancy system for accepting input which is totally custom
So there’s a lot of variation! Let’s talk about each of those a little more.
mode 1: the baseline
First, there’s “the baseline” – what happens if a program just accepts text by
calling fgets() or whatever and doing absolutely nothing else to provide a
nicer experience. Here’s what using these tools typically looks like for me – if I
start the version of dash installed on my machine (a pretty minimal shell) and
press the left arrow key, it just prints ^[[D to the terminal.
$ ls l-^[[D^[[D^[[D
At first it doesn’t seem like all of these “baseline” tools have much in common, but there are actually a few features that you get for free just from your terminal, without the program needing to do anything special at all.
The things you get for free are:
- typing in text, obviously
- backspace
- Ctrl+W, to delete the previous word
- Ctrl+U, to delete the whole line
- a few other things unrelated to text editing (like Ctrl+C to interrupt the process, Ctrl+Z to suspend, etc)
This is not great, but it means that if you want to delete a word you
generally can do it with Ctrl+W instead of pressing backspace 15 times, even
if you’re in an environment which is offering you absolutely zero features.
You can get a list of all the ctrl codes that your terminal supports with stty -a.
mode 2: tools that use readline
The next group is tools that use readline! Readline is a GNU library to make entering text more pleasant, and it’s very widely used.
My favourite readline keyboard shortcuts are:
- Ctrl+E (or End) to go to the end of the line
- Ctrl+A (or Home) to go to the beginning of the line
- Ctrl+left/right arrow to go back/forward 1 word
- up arrow to go back to the previous command
- Ctrl+R to search your history
And you can use Ctrl+W / Ctrl+U from the “baseline” list, though Ctrl+U
deletes from the cursor to the beginning of the line instead of deleting the
whole line. I think Ctrl+W might also have a slightly different definition of
what a “word” is.
There are a lot more (here’s a full list), but those are the only ones that I personally use.
The bash shell is probably the most famous readline user (when you use
Ctrl+R to search your history in bash, that feature actually comes from
readline), but there are TONS of programs that use it – for example psql,
irb, python3, etc.
tip: you can make ANYTHING use readline with rlwrap
One of my absolute favourite things is that if you have a program like nc
without readline support, you can just run rlwrap nc to turn it into a
program with readline support!
This is incredible and makes a lot of tools that are borderline unusable MUCH more pleasant to use. You can even apparently set up rlwrap to include your own custom autocompletions, though I’ve never tried that.
some reasons tools might not use readline
I think reasons tools might not use readline might include:
- the program is very simple (like
catornc) and maybe the maintainers don’t want to bring in a relatively large dependency - license reasons, if the program’s license is not GPL-compatible – readline is GPL-licensed, not LGPL
- only a very small part of the program is interactive, and maybe readline
support isn’t seen as important. For example
githas a few interactive features (likegit add -p), but not very many, and usually you’re just typing a single character likeyorn– most of the time you need to really type something significant in git, it’ll drop you into a text editor instead.
For example idris2 says they don’t use readline
to keep dependencies minimal and suggest using rlwrap to get better
interactive features.
how to know if you’re using readline
The simplest test I can think of is to press Ctrl+R, and if you see:
(reverse-i-search)`':
then you’re probably using readline. This obviously isn’t a guarantee (some
other library could use the term reverse-i-search too!), but I don’t know of
another system that uses that specific term to refer to searching history.
the readline keybindings come from Emacs
Because I’m a vim user, it took me a very long time to understand where these
keybindings come from (why Ctrl+A to go to the beginning of a line??? so
weird!)
My understanding is these keybindings actually come from Emacs – Ctrl+A and
Ctrl+E do the same thing in Emacs as they do in Readline and I assume the
other keyboard shortcuts mostly do as well, though I tried out Ctrl+W and
Ctrl+U in Emacs and they don’t do the same thing as they do in the terminal
so I guess there are some differences.
There’s some more history of the Readline project here.
mode 3: another input library (like libedit)
On my Mac laptop, /usr/bin/python3 is in a weird middle ground where it
supports some readline features (for example the arrow keys), but not the
other ones. For example when I press Ctrl+left arrow, it prints out ;5D,
like this:
$ python3
>>> import subprocess;5D
Folks on Mastodon helped me figure out that this is because in the default
Python install on Mac OS, the Python readline module is actually backed by
libedit, which is a similar library which has fewer features, presumably
because Readline is GPL licensed.
Here’s how I was eventually able to figure out that Python was using libedit on my system:
$ python3 -c "import readline; print(readline.__doc__)"
Importing this module enables command line editing using libedit readline.
Generally Python uses readline though if you install it on Linux or through Homebrew. It’s just that the specific version that Apple includes on their systems doesn’t have readline. Also Python 3.13 is going to remove the readline dependency in favour of a custom library, so “Python uses readline” won’t be true in the future.
I assume that there are more programs on my Mac that use libedit but I haven’t looked into it.
mode 4: something custom
The last group of programs is programs that have their own custom (and sometimes much fancier!) system for editing text. This includes:
- most terminal text editors (nano, micro, vim, emacs, etc)
- some shells (like fish), for example it seems like fish supports Ctrl+Z for undo when typing in a command. Zsh’s line editor is called zle.
- some REPLs (like ipython), for example IPython uses the prompt_toolkit library instead of readline
- lots of other programs (like atuin)
Some features you might see are:
- better autocomplete which is more customized to the tool
- nicer history management (for example with syntax highlighting) than the default you get from readline
- more keyboard shortcuts
custom input systems are often readline-inspired
I went looking at how Atuin (a wonderful tool for searching your shell history that I started using recently) handles text input. Looking at the code and some of the discussion around it, their implementation is custom but it’s inspired by readline, which makes sense to me – a lot of users are used to those keybindings, and it’s convenient for them to work even though atuin doesn’t use readline.
prompt_toolkit (the library IPython uses) is similar – it actually supports a lot of options (including vi-like keybindings), but the default is to support the readline-style keybindings.
This is like how you see a lot of programs which support very basic vim
keybindings (like j for down and k for up). For example Fastmail supports
j and k even though most of its other keybindings don’t have much
relationship to vim.
I assume that most “readline-inspired” custom input systems have various subtle incompatibilities with readline, but this doesn’t really bother me at all personally because I’m extremely ignorant of most of readline’s features. I only use maybe 5 keyboard shortcuts, so as long as they support the 5 basic commands I know (which they always do!) I feel pretty comfortable. And usually these custom systems have much better autocomplete than you’d get from just using readline, so generally I prefer them over readline.
lots of shells support vi keybindings
Bash, zsh, and fish all have a “vi mode” for entering text. In a very unscientific poll I ran on Mastodon, 12% of people said they use it, so it seems pretty popular.
Readline also has a “vi mode” (which is how Bash’s support for it works), so by extension lots of other programs have it too.
I’ve always thought that vi mode seems really cool, but for some reason even though I’m a vim user it’s never stuck for me.
understanding what situation you’re in really helps
I’ve spent a lot of my life being confused about why a command line application I was using wasn’t behaving the way I wanted, and it feels good to be able to more or less understand what’s going on.
I think this is roughly my mental flowchart when I’m entering text at a command line prompt:
- Do the arrow keys not work? Probably there’s no input system at all, but at least I can use Ctrl+W and Ctrl+U, and I can rlwrap the tool if I want more features.
- Does Ctrl+R print reverse-i-search? Probably it’s readline, so I can use all of the readline shortcuts I’m used to, and I know I can get some basic history and press up arrow to get the previous command.
- Does Ctrl+R do something else? This is probably some custom input library: it’ll probably act more or less like readline, and I can check the documentation if I really want to know how it works.
Being able to diagnose what’s going on like this makes the command line feel more predictable and less chaotic.
some things this post left out
There are lots more complications related to entering text that we didn’t talk about at all here, like:
- issues related to ssh / tmux / etc
- the TERM environment variable
- unicode
- probably a lot more