just discovered 57 minutes into recording an audio version of the newsletter that my failing mic cable had disconnected, and my recording software helpfully failed over to the webcam mic that makes me sound like i’m in a tin can. anyway, how’s your day going?
apparently preventing fraud is “anti-crypto”.
according to this Fortune headline, the SEC going after fraud and deceptive business practices, after a company publicly announced it was going to breach a previous agreement with the agency, is an "anti-crypto campaign".
This is particularly hilarious given that Fortune has skewered Gary Gensler for failing to go after the FTX, Celsius, and Terra frauds.
Schrödinger’s regulator can’t go after fraud before the company collapses, but if it collapses and the SEC didn’t warn us, they failed.
Remember when mainstream news outlets published a bunch of incredibly irresponsible articles about how rich people were getting off crypto, and then people bought in and got wrecked over the two years of "crypto winter" that followed?
Anyway here's a WSJ headline I just saw: "Young Men Are Making Risky Bets on Crypto and Politics—and Raking It In Right Now" (gift link)
Today's links
- Proud to be a blockhead: The true economics of creativity and communication.
- Hey look at this: Delights to delectate.
- This day in history: 2009, 2014, 2019, 2023
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Proud to be a blockhead (permalink)
This is my last Pluralistic post of the year, and rather than round up my most successful posts of the year, I figured I'd write a little about why it's impossible for me to do that, and why that is by design, and what that says about the arts, monopolies, and creative labor markets.
I started Pluralistic nearly five years ago, and from the outset, I was adamant that I wouldn't measure my success through quantitative measures. The canonical version of Pluralistic – the one that lives at pluralistic.net – has no metrics, no analytics, no logs, and no tracking. I don't know who visits the site. I don't know how many people visit the site. I don't know which posts are most popular, and which ones are the least popular. I can't know any of that.
The other versions of Pluralistic are less ascetic, but only because there's no way for me to turn off some metrics on those channels. The Mailman service that delivers the (tracker-free) email version of Pluralistic necessarily has a system for telling me how many subscribers I have, but I have never looked at that number, and have no intention of doing so. I have turned off notifications when someone signs up for the list, or resigns from it.
The commercial, surveillance-heavy channels for Pluralistic – Tumblr, Twitter – have a lot of metrics, but again, I don't consult them. Medium and Mastodon have some metrics, and again, I just pretend they don't exist.
What do I pay attention to? The qualitative impacts of my writing. Comments. Replies. Emails. Other bloggers who discuss it, or discussions on Metafilter, Slashdot, Reddit and Hacker News. That stuff matters to me a lot, because I write for two reasons, which are, in order: to work out my own thinking, and to influence other people's thinking.
Writing is a cognitive prosthesis for me. Working things out on the page helps me work things out in my life. And, of course, working things out on the page helps me work more things out on the page. Writing begets writing:
https://pluralistic.net/2021/05/09/the-memex-method/
Honestly, that is sufficient. Not in the sense that writing, without being read, would make me happy or fulfilled. Being read and being part of a community and a conversation matters a lot to me. But the very act of writing is so important to me that even if no one read me, I would still write.
This is a thing that writers aren't supposed to admit. As I wrote on this blog's fourth anniversary, the most laughably false statement about writing ever uttered is Samuel Johnson's notorious "No man but a blockhead ever wrote but for money":
https://pluralistic.net/2024/02/20/fore/#synthesis
Making art is not an "economically rational" activity. Neither is attempting to persuade other people to your point of view. These activities are not merely intrinsically satisfying, they are also necessary, at least for many of us. The long, stupid fight about copyright that started in the Napster era has rarely acknowledged this, nor has it grappled with the implications of it. On the one hand, you have copyright maximalists who say totally absurd things like, "If you don't pay for art, no one will make art, and art will disappear." This is one of those radioactively false statements whose falsity is so glaring that it can be seen from orbit.
But on the other hand, you know who knows this fact very well? The corporations that pay creative workers. Movie studios, record labels, publishers, games studios: they all know that they are in possession of a workforce that has to make art, and will continue to do so, paycheck or not, until someone pokes their eyes out or breaks their fingers. People make art because it matters to them, and this trait makes workers terribly exploitable. As Fobazi Ettarh writes in her seminal paper on "vocational awe," workers who care about their jobs are at a huge disadvantage in labor markets. Teachers, librarians, nurses, and, yes, artists are all motivated by a sense of mission that often trumps their own self-interest and well-being, and their bosses know it:
https://www.inthelibrarywiththeleadpipe.org/2018/vocational-awe/
One of the most important ideas in David Graeber's magisterial book Bullshit Jobs is that the ground state of labor is to do a job that you are proud of and that matters to you, but late-stage capitalist alienation has gotten so grotesque that some people will actually sneer at the idea that, say, teachers should be well compensated: "Why should you get a living wage – isn't the satisfaction of helping children payment enough?"
These are the most salient facts of the copyright fight: creativity is a non-economic activity, and this makes creative workers extremely vulnerable to exploitation. People make art because they have to. As Marx was finishing Das Kapital, he was often stuck working from home, having pawned his trousers so he could keep writing. The fact that artists don't respond rationally to economic incentives doesn't mean they should starve to death. Art – like nursing, teaching and librarianship – is necessary for human thriving.
No, the implication of the economic irrationality of vocational awe is this: the only tool that can secure economic justice for workers who truly can't help but do their jobs is solidarity. Creative workers need to be in solidarity with one another, and with our audiences – and, often, with the other workers at the corporations who bring our work to market. We are all class allies locked in struggle with the owners of both the entertainment companies and the technology companies that sit between us and our audiences (this is the thesis of Rebecca Giblin's and my 2022 book Chokepoint Capitalism):
https://chokepointcapitalism.com/
The idea of artistic solidarity is an old and important one. Victor Hugo, creator of the first copyright treaty – the Berne Convention – wrote movingly about how the point of securing rights for creators wasn't to allow their biological children to exploit their work after their death, but rather, to ensure that the creative successors of artists could build on their forebears' accomplishments. Hugo – like any other artist who has a shred of honesty and has thought about the subject for more than ten seconds – knew that he was part of a creative community and tradition, one composed of readers and writers and critics and publishing workers, and that this was a community and a tradition worth fighting for and protecting.
One of the most important and memorable interviews Rebecca and I did for our book was with Liz Pelly, one of the sharpest critics of Spotify (our chapter about how Spotify steals from musicians is the only part of the audiobook available on Spotify itself – a "Spotify Exclusive"!):
https://open.spotify.com/show/7oLW9ANweI01CVbZUyH4Xg
Pelly has just published a major, important new book about Spotify's ripoffs, called Mood Machine:
https://www.simonandschuster.com/books/Mood-Machine/Liz-Pelly/9781668083505
A long article in Harper's unpacks one of the core mechanics at the heart of Spotify's systematic theft from creative workers: the use of "ghost artists," whose generic music is cheaper than real music, which is why Spotify crams it into their playlists:
https://harpers.org/archive/2025/01/the-ghosts-in-the-machine-liz-pelly-spotify-musicians/
The subject of ghost artists has long been shrouded in mystery and ardent – but highly selective – denials from Spotify itself. In her article – which features leaked internal chats from Spotify – Pelly gets to the heart of the matter. Ghost artists are musicians who are recruited by shadowy companies that offer flat fees for composing and performing inoffensive muzak that can fade into the background. This is wholesaled to Spotify, which crams it into wildly popular playlists of music that people put on while they're doing something else ("Deep Focus," "100% Lounge," "Bossa Nova Dinner," "Cocktail Jazz," "Deep Sleep," "Morning Stretch") and might therefore settle for an inferior product.
Spotify calls this "Perfect Fit Content" (PFC), and it's the pink slime of music: extruded, music-like content that plugs a music-shaped hole in your life without performing the communicative and aesthetic job that real music exists to do.
After many dead-end leads with people involved in the musical pink slime industry, Pelly finally locates a musician who's willing to speak anonymously about his work (he asks for anonymity because he relies on the pittances he receives for making pink slime to survive). This jazz musician knows very little about where the music he's commissioned to produce ends up, which is by design. The musical pink slime industry, like all sleaze industries, is shrouded in the secrecy sought by bosses who know that they're running a racket they should be ashamed of.
The anonymous musician composes a stack of compositions on his couch, then goes into a studio for a series of one-take recordings. There's usually a rep from the PFC company on hand, and the rep's feedback is always the same: "play simpler." As the anonymous musician explains:
That’s definitely the thing: nothing that could be even remotely challenging or offensive, really. The goal, for sure, is to be as milquetoast as possible.
This source calls the arrangement "shameful." Another musician Pelly spoke to said "it felt unethical, like some kind of money-laundering scheme." The PFC companies say that these composers and performers are just making music, the way anyone might, and releasing it under pseudonyms in a way that "has been popular across mediums for decades." But Pelly's interview subjects told her that they don't consider their work to be art:
It feels like someone is giving you a prompt or a question, and you’re just answering it, whether it’s actually your conviction or not. Nobody I know would ever go into the studio and record music this way.
Artists who are recruited to make new pink slime are given reference links to existing pink slime and ordered to replicate it as closely as possible. The tracks produced this way that do the best are then fed to the next group of musicians to replicate, and so on. It's the musical equivalent of feeding slaughterhouse sweepings to the next generation of livestock, a version of the gag from Catch-22 where a patient in a body-cast has a catheter bag and an IV drip, and once a day a nurse comes and swaps them around.
Pelly reminds us that Spotify was supposed to be an answer to the painful question of the Napster era: how do we pay musicians for their labor? Spotify was sold as a way to bypass the "gatekeepers": the Big Three labels, who own 70% of all recorded music and whose financial maltreatment of artists was seen as moral justification for file sharing ("Why buy the CD if the musician won't see any of the money from it?").
But the way that Spotify secured rights to all the popular music in the world was by handing over big equity stakes in its business to the Big Three labels, and giving them wildly preferential terms that made it impossible for independent musicians and labels to earn more than homeopathic fractions of a penny for each stream, even as Spotify became the one essential conduit for reaching an audience:
https://pluralistic.net/2021/03/16/wage-theft/#excessive-buyer-power
It turns out that getting fans to pay for music has no necessary connection to getting musicians paid. Thanks to vocational awe, the fact that someone has induced a musician to make music doesn't mean the musician is getting a fair share of what you pay for that music. The same goes for every kind of art, and every field where vocational awe plays a role, from nursing to librarianship.
Chokepoint Capitalism tries very hard to grapple with this conundrum; the second half of the book is a series of detailed, shovel-ready policy prescriptions for labor, contract, and copyright reforms that will immediately and profoundly shift the share of income generated by creative labor from bosses to workers.
Which brings me back to this little publishing enterprise of mine, and the fact that I do it for free, and not only that, give it away under a Creative Commons Attribution license that allows you to share and republish it, for money, if you choose:
https://creativecommons.org/licenses/by/4.0/
I am lucky enough that I make a good living from my writing, but I'm also honest enough with myself to know just how much luck was involved with that fact, and insecure enough to live in a state of constant near-terror about what happens when my luck runs out. I came up in science fiction, and I vividly remember the writers I admired whose careers popped like soap-bubbles when Reagan deregulated the retail sector, precipitating a collapse in the grocery stores and pharmacies where "midlist" mass-market paperbacks were sold by the millions across the country:
https://pluralistic.net/2021/07/04/self-publishing/
These writers – the ones who are still alive – are living proof of the fact that you have to break our fingers to get us to stop writing. Some of them haven't had a mainstream publisher in decades, but they're still writing, and self-publishing, or publishing with small presses, and often they're doing the best work of their careers, and almost no one is seeing it, and they're still doing it.
Because we aren't engaged in economically rational activity. We're doing something essential – essential to us, first and foremost, and essential to the audiences and peers our work reaches and changes and challenges.
Pluralistic is, in part, a way for me to face the fear I wake up with every day, that some day, my luck will run out, as it has for nearly all the writers I've ever admired, and to reassure myself that the writing will go on doing what I need it to do for my psyche and my heart even if – when – my career regresses to the mean.
It's a way for me to reaffirm the solidaristic nature of artistic activity, the connection with other writers and other readers (because I am, of course, an avid, constant reader). Commercial fortunes change. Monopolies lay waste to whole sectors, swallowing up the livelihoods of people who believe in what they do the way a whale strains tons of plankton through its baleen. But solidarity endures. Solidarietatis longa, vita brevis.
Happy New Year folks. See you in 2025.
Hey look at this (permalink)
- The How and the Tao of Old-Time Banjo https://ia801601.us.archive.org/34/items/PatrickCostello/The%20How%20and%20the%20Tao%20of%20Old-Time%20Banjo.pdf
- The Debt Limit Should Absolutely Be Eliminated https://prospect.org/blogs-and-newsletters/tap/2024-12-19-debt-limit-should-absolutely-be-eliminated/
- Plumbing poverty: More people living without running water in US cities since global financial crisis https://phys.org/news/2024-12-plumbing-poverty-people-cities-global.html
This day in history (permalink)
#15yrsago Soviet kids’-book robots https://web.archive.org/web/20100107193522/https://ajourneyroundmyskull.blogspot.com/2009/12/mummy-was-robot-daddy-was-small-non.html
#15yrsago EFF’s ebook-buyer’s guide to privacy https://www.eff.org/deeplinks/2009/12/e-book-privacy
#15yrsago Botnet runners start their own ISPs https://web.archive.org/web/20100103161911/http://threatpost.com/en_us/blogs/attackers-buying-own-data-centers-botnets-spam-122109
#15yrsago BBC’s plan to kick free/open source out of UK TV devices https://www.theguardian.com/technology/2009/dec/22/bbc-drm-cory-doctorow
#15yrsago How to Teach Physics to Your Dog: explaining quantum physics through discussions with a German shepherd https://memex.craphound.com/2009/12/22/how-to-teach-physics-to-your-dog-explaining-quantum-physics-through-discussions-with-a-german-shepherd/
#10yrsago Podcast: Happy Xmas! (guest starring Poesy) https://ia801602.us.archive.org/32/items/Cory_Doctorow_Podcast_280/Cory_Doctorow_Podcast_280_Happy_Christmas_with_Poesy.mp3
#10yrsago Homophobic pastor arrested for squeezing man’s genitals in park https://www.attitude.co.uk/news/world/anti-gay-pastor-gaylard-williams-arrested-after-squeezing-mans-genitals-283001/
#10yrsago Clever student uses red/blue masking to double exam cribsheet https://www.reddit.com/r/pics/comments/2pxxaj/told_my_students_they_could_use_a_3_x_5_notecard/
#10yrsago Dollar Store Dungeons! http://www.bladeandcrown.com/blog/2013/12/30/dollar-store-dungeons-the-project/
#10yrsago Delaware school district wants kids to get signed permission before checking out YA library books https://cbldf.org/2014/12/delaware-school-district-considers-permission-slips-for-young-adult-books/
#5yrsago The 2010s were the decade of Citizens United https://slate.com/news-and-politics/2019/12/citizens-united-devastating-impact-american-politics.html
#5yrsago Kentucky’s former GOP governor pardoned a bunch of rapists and murderers on his way out of office, including a child rapist https://www.washingtonpost.com/nation/2019/12/20/matt-bevin-micah-schoettle-child-rapist-hymen-intact-pardon/
#10yrsago Mel Brooks on the 40th Anniversary of his "greatest film," Young Frankenstein https://www.latimes.com/entertainment/movies/la-et-mn-mel-brooks-20140909-story.html
#1yrago A year in illustration, 2023 edition https://pluralistic.net/2023/12/21/collages-r-us/#ki-bosch
Upcoming appearances (permalink)
- Picks and Shovels with Ken Liu (Boston), Feb 14
https://brooklinebooksmith.com/event/2025-02-14/cory-doctorow-ken-liu-picks-and-shovels -
Picks and Shovels with Charlie Jane Anders (Menlo Park), Feb 17
https://www.keplers.org/upcoming-events-internal/cory-doctorow -
Picks and Shovels with Wil Wheaton (Los Angeles), Feb 18
https://www.dieselbookstore.com/event/Cory-Doctorow-Wil-Wheaton-Author-signing -
Picks and Shovels with Dan Savage (Seattle), Feb 19
https://www.eventbrite.com/e/cory-doctorow-with-dan-savage-picks-and-shovels-a-martin-hench-novel-tickets-1106741957989 -
Cloudfest (Europa Park), Mar 17-20
https://cloudfest.link/ -
Picks and Shovels at Imagine! Belfast (Remote), Mar 24
https://www.eventbrite.co.uk/e/cory-doctorow-in-conversation-with-alan-meban-tickets-1106421399189 -
DeepSouthCon63 (New Orleans), Oct 10-12, 2025
http://www.contraflowscifi.org/
Recent appearances (permalink)
- The Intersection of Storytelling and Technology (Grey Matter) https://www.greymatter.show/episodes/s1e109-cory-doctorow-the-intersection-of-storytelling-and-technology
- Can we avoid the enshittification of clean-energy tech? (Volts.wtf) https://www.volts.wtf/p/can-we-avoid-the-enshittification
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It (HOPE XV) https://www.youtube.com/watch?v=YrciT_dc2sc&list=PLcajvRZA8E0_tLLEh1COeAv-TcaDna2k1&index=32
Latest books (permalink)
- "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3062/Available_Feb_20th%3A_The_Bezzle_HB.html#/).
- "The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3007/Pre-Order_Signed_Copies%3A_The_Lost_Cause_HB.html#/)
- "The Internet Con": a nonfiction book about interoperability and Big Tech, Verso, September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com). Signed copies at Dark Delicacies (US) and Forbidden Planet (UK): https://forbiddenplanet.com/385004-red-team-blues-signed-edition-hardcover/.
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe, 2022 (https://chokepointcapitalism.com)
- "Attack Surface": the third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies (https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html)
- "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution (https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59?sk=f6cd10e54e20a07d4c6d0f3ac011af6b). Signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html
- "Little Brother/Homeland": a reissue omnibus edition with a new introduction by Edward Snowden (https://us.macmillan.com/books/9781250774583). Personalized/signed copies: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html
- "Poesy the Monster Slayer": a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/.
Upcoming books (permalink)
- Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It, Farrar, Straus and Giroux, October 2025
- Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025
Colophon (permalink)
Today's top sources:
Currently writing:
- Enshittification: a nonfiction book about platform decay for Farrar, Straus, Giroux. Status: second pass edit underway (readaloud)
-
A Little Brother short story about DIY insulin PLANNING
-
Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS FEB 2025
Latest podcast: Daddy-Daughter Podcast 2024 https://craphound.com/overclocked/2024/12/17/daddy-daughter-podcast-2024/
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
Today's links
- Trumpism's healthcare fracture-lines: How public health can win the Battle of the Cranks.
- Hey look at this: Delights to delectate.
- This day in history: 2009, 2014, 2019, 2023
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Trumpism's healthcare fracture-lines (permalink)
There was never any question as to whether Trump would implement Project 2025, the 900-page brick of terrifying and unhinged policy prescriptions edited by the Heritage Foundation. He would not implement it, because he could not implement it. No one could. It's impossible.
This isn't a statement about constitutional limits on executive authority or the realpolitik of getting bizarre and stupid policies past judges or through a hair-thin Congressional majority. This is a statement about the incoherence of Project 2025 itself. You probably haven't read it. Few have. Realistically, few people are going to read a 900-page group work of neofeudalist fanfic shit out by the most esoteric Fedsoc weirdos the world has ever seen.
But one person who did read Project 2025 was the leftist historian Rick Perlstein, who was the first person to really dig into what a fucking mess that thing is:
https://pluralistic.net/2024/07/14/fracture-lines/#disassembly-manual
Perlstein's excellent analysis doesn't claim that Project 2025's authors aren't sincere in their intentions to wreak great harm upon the nation and its people; rather, his point is that Project 2025 is filled with contradictory, mutually exclusive proposals written by people who fundamentally disagree with one another, and who each have enough power within the Trump coalition that all of their proposals have to be included in a document like this:
https://prospect.org/politics/2024-07-10-project-2025-republican-presidencies-tradition/
Project 2025 isn't just a guide to the masturbatory fantasies of the worst people in American politics – far more importantly, it is a detailed map of the fracture lines in the GOP coalition, the places where it is liable to split and shatter. This is an important point if you want to do more about Trumpism than run around feeling miserable and scared. If you want to fight, Project 2025 is a guide to the weak spots where an attack will do the most damage.
Perlstein's insight continues to be borne out as the Trump regime makes ready to take power. In a new story for KFF Health News, Stephanie Armour and Julie Rovner describe the irreconcilable differences among Trump's picks for the country's top public health authorities:
https://kffhealthnews.org/news/article/trump-rfk-kennedy-health-hhs-fda-cdc-vaccines-covid-weldon/
The brain-worm-infected elephant in the room is, of course, RFK Jr, who has been announced as Trump's head of Health and Human Services. RFK Jr is a notorious antivaxer and the chairman of Children's Health Defense, an anti-vaccine group. Kennedy's view is shared by Trump's chosen CDC boss, Dave Weldon, a physician who has repeated the dangerous lie that vaccinations cause autism. Mehmet "Dr Oz" Oz, the TV "physician" Trump wants to put in charge of Medicare/Medicaid, calls vaccines "oversold" and advocates for treating covid with hydroxychloroquine, another thoroughly debunked hoax.
However, other top Trump public health picks emphatically support vaccines. Marty Makary is Trump's choice for FDA commissioner; he's a Johns Hopkins-trained surgeon who says vaccines "save lives" (though he peddles the lethal, unscientific hoax that childhood vaccines should be "spread out"). Jay Bhattacharya, the economist/MD whom Trump wants to put in charge of the NIH, supports vaccines (he is also one of the country's leading proponents of the eugenicist idea of accepting the mass death of elderly, sick and disabled people rather than imposing quarantines during epidemics). Then there's Janette Nesheiwat, whom Trump has asked to serve as the nation's surgeon general; she calls vaccines "a gift from God."
Like "Bidenism," Trumpism is a fragile coalition of people who thoroughly and irreconcilably disagree with one another. During the Biden administration, this produced self-inflicted injuries like appointing the brilliant trustbuster Lina Khan to run the FTC while also appointing the pro-monopoly corporate lawyer Jacqueline Scott Corley to a lifetime seat as a federal judge, from which perch she ruled against Khan's no-brainer suit to block the Microsoft-Activision merger:
https://www.thebignewsletter.com/p/judge-rules-for-microsoft-mergers
The Trump coalition is even broader than the Biden coalition. That's how he won the 2024 election. But that also means that Trumpism is more fractious and off-balance, and hence easier to disrupt, because it is riven by people in senior positions who hate one another and are actively working for each other's political demise.
The Trump coalition is a coalition of cranks. I'm using "crank" here in a technical, non-pejorative sense. I am a crank, after all. A crank is someone who is overwhelmingly passionate about a single issue, whose uncrossable bright lines are not broadly shared. Cranks can be right or they can be wrong, but we're hard to be in coalition with, because we are uncompromisingly passionate about things that other people largely don't even notice, let alone care about. You can be a crank whose single issue is eliminating water fluoridation, even though this is very, very stupid and dangerous:
https://yourlocalepidemiologist.substack.com/p/the-fluoride-debate
Or you can be a crank about digital rights, a subject that, for decades, was viewed by turns as either unserious or as a sneaky way of shilling for Big Tech (thankfully, that's changing):
https://pluralistic.net/2024/06/18/greetings-fellow-pirates/#arrrrrrrrrr
Cranks make hard coalition partners. Trump's cranks are cranked up about different things – vaccines, culture war trans panics, eugenics – and are total normies about other things. The eugenicist MD/economist who wants to "let 'er rip" rather than engage in nonpharmaceutical pandemic interventions is gonna be horrified by total abortion bans and antivax. These cranks are on a collision course with one another.
This is on prominent display in these public health appointments, and we're very likely about to get a test of the cohesiveness and capability of the second Trump administration, thanks to bird flu. Now that bird flu has infected humans in multiple US states, there is every chance that we will have to confront a public health emergency in the coming weeks. If that happens, the Trump administration's public health divisions over masking, quarantine and (especially) vaccines (Kennedy called the covid vaccine the "deadliest" ever made, without any evidence) will become the most important issue in the country, under constant and pitiless scrutiny and criticism.
Trump's public health shambles is by no means unique. The lesson of Project 2025 is that the entire Trump project is one factional squabble away from collapse at all times.
Hey look at this (permalink)
- Can We Solve Canada's Monopoly Problem? https://www.youtube.com/watch?v=9VjpA36sxVM (h/t Regs to Riches)
- Copyright Abuse Is Getting Luigi Mangione Merch Removed From the Internet https://www.404media.co/copyright-abuse-is-getting-luigi-mangione-merch-removed-from-the-internet/
- SiCKO | A Film by Michael Moore | 2007 | Full Movie https://www.youtube.com/watch?v=YbEQ7acb0IE (h/t Ian Forrester)
This day in history (permalink)
#15yrsago Pope passes special Vatican copyright giving him exclusive right to use his name, title, image https://www.catholicnewsagency.com/news/18122/holy-see-declares-unique-copyright-on-papal-figure
#15yrsago Norwegian public broadcaster torrents 7-hour, hi-def trainride https://nrkbeta.no/2009/12/18/bergensbanen-eng/
#15yrsago Xmaspunk raygun https://www.flickr.com/photos/andrew_colunga/4201119099/
#15yrsago America can’t make things because managers all learn finance instead of production https://newrepublic.com/article/72035/wagoner-henderson
#10yrsago EFF’s copyfighter’s crossword https://www.eff.org/deeplinks/2014/12/crossword-puzzle-year-copyright-news
#10yrsago TX SWAT team beats, deafens nude man in his own home, lies about arrest; judge declines to punish cops or DA https://web.archive.org/web/20141224170549/http://www.myfoxhouston.com/story/27645689/ft-bend-police-prosecutors-accused-of-abuse-in-swat-incident
#10yrsago Outfit a game-designer’s toolkit for < $20 https://web.archive.org/web/20141222165215/http://iq212.com/iQ212Blog/2014/12/16/the-20-dollar-game-designers-tool-kit/
#10yrsago Telcos’ anti-Net Neutrality argument may let the MPAA destroy DNS https://www.techdirt.com/2014/12/18/mpaas-secret-war-net-neutrality-is-key-part-its-plan-to-block-sites/
#10yrsago Musical time-machine to Walt Disney World in the late 1970s https://passport2dreams.blogspot.com/2014/12/another-musical-souvenir-of-walt-disney.html
#10yrsago LISTEN: Wil Wheaton reads “Information Doesn’t Want to Be Free” https://ia600908.us.archive.org/24/items/idwtbf/Cory_Doctorow_-_Information_Doesnt_Want_to_Be_Free_Chapter_1_read_by_Wil_Wheaton.mp3
#10yrsago Kenya’s Parliament erupts into chaos as government rams through brutal “anti-terrorism” law https://www.standardmedia.co.ke/article/2000145159/chaos-disrupt-parliament-special-sitting-on-security-bill
#10yrsago Gingerbread Enterprise https://imgur.com/a/gingerbread-uss-enterprise-pvtYQ
#10yrsago NY DA gives unlicensed driver who killed senior in crosswalk a $400 fine https://nyc.streetsblog.org/2014/12/18/vance-deal-400-fine-for-unlicensed-driver-who-killed-senior-in-crosswalk
#10yrsago FCC seems to have lost hundreds of thousands of net neutrality comments https://www.reddit.com/r/technology/comments/2psxh9/the_fcc_ignored_hundreds_of_thousands_of_net/
#5yrsago Mass convictions of local warlords for 2009 massacre revive faith in Philippines’ justice system https://www.bbc.com/news/world-asia-50770644.amp
#5yrsago A vast network of shadowy news sites promote conservative talking points mixed with floods of algorithmically generated “news” https://www.cjr.org/tow_center_reports/hundreds-of-pink-slime-local-news-outlets-are-distributing-algorithmic-stories-conservative-talking-points.php
#5yrsago Volunteer “stick library” is a hit with neighborhood dogs https://metro.co.uk/2019/12/13/dad-creates-stick-library-dogs-11902209/?ito=article.tablet.share.top.messenger
#5yrsago Students at elite Shanghai university protest the removal of “freedom of thought” from the school charter https://asiatimes.com/2019/12/students-protest-at-shanghais-fudan-university/
#5yrsago NIST confirms that facial recognition is a racist, sexist dumpster-fire https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software
#5yrsago Betsy DeVos quietly spends millions to promote the unpopular policies she hopes to enact as a federal official https://www.salon.com/2019/12/19/exclusive-betsy-devos-family-foundation-funnels-money-to-right-wing-groups-that-boost-her-agenda/
#5yrsago Bernie Sanders got the GAO to study the life chances of millennials, and the report concludes that debt is “crushing their dreams” https://www.teenvogue.com/story/bernie-sanders-report-millennial-living-standards
#5yrsago Doctors who take pharma industry freebies prescribe more of their benefactors’ drugs https://www.propublica.org/article/doctors-prescribe-more-of-a-drug-if-they-receive-money-from-a-pharma-company-tied-to-it#173787
#5yrsago New York Times analyzes a leaked set of location data from a private broker, sounds the alarm https://www.nytimes.com/interactive/2019/12/19/opinion/location-tracking-cell-phone.html
#5yrsago Americans should definitely be worried about the EU’s new copyright rules https://medium.com/berkman-klein-center/why-americans-should-worry-about-the-new-eu-copyright-rules-97800be3f8fc
#5yrsago Illinois schools don’t just lock special ed kids in solitary, they also restrain them https://www.propublica.org/article/illinois-school-restraints#173374
#5yrsago Medicare for All would cut most Americans’ taxes, creating the biggest American take-home pay raise in a generation https://www.theguardian.com/commentisfree/2019/oct/25/medicare-for-all-taxes-saez-zucman
#5yrsago Codifying “Boomerspeak” and debating the ethics of poking fun at it https://www.wired.com/story/boomerspeak-enregisterment/
#5yrsago Alberta’s tax-funded climate denial “war room” ripped off its logo from a US tech company https://edmonton.ctvnews.ca/alberta-s-oil-and-gas-war-room-changing-logo-following-complaints-it-copied-u-s-data-company-1.4737423
#5yrsago My annual Daddy-Daughter Xmas Podcast: interview with an 11-year-old https://ia802801.us.archive.org/18/items/Cory_Doctorow_Podcast_320/Cory_Doctorow_Podcast_320_-_Christmas_2019_with_Poesy.mp3
#1yrago 2024's public domain is a banger https://pluralistic.net/2023/12/20/em-oh-you-ess-ee/#sexytimes
#1yrago What kind of bubble is AI? https://pluralistic.net/2023/12/19/bubblenomics/#pop
Today's links
- Nurses whose shitty boss is a shitty app: "Uber for nurses" is even worse than it sounds.
- Hey look at this: Delights to delectate.
- This day in history: 2014, 2019, 2023
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Nurses whose shitty boss is a shitty app (permalink)
Operating a business is risky: you can't ever be sure how many customers you'll have, or what they'll show up looking for. If you guess wrong, you'll either have too few workers to serve the crowd, or you'll pay workers to stand around and wait for customers. This is true even when your "business" is a "hospital."
Capitalists hate capitalism. Capitalism is defined by risk – like the risk of competitors poaching your customers and workers. Capitalists all secretly dream of a "command economy" in which other people have to arrange their affairs to suit the capitalists' preferences, taking the risk off their shoulders. Capitalists love anti-competitive exclusivity deals with suppliers, and they really love noncompete "agreements" that ban their workers from taking better jobs:
https://pluralistic.net/2023/04/21/bondage-fees/#doorman-building
One of the sleaziest, most common ways for capitalists to shed risk is by shifting it onto their workers' shoulders, for example, by sending workers home on slow days and refusing to pay them for the rest of their shifts. This is easy for capitalists to do because workers have a collective action problem: for workers to force their bosses not to do this, they all have to agree to go on strike, and other workers have to honor their picket-lines. That's a lot of chivvying and bargaining and group-forming, and it's very hard. Meanwhile, the only person the boss needs to convince to screw you this way is themself.
Libertarians will insist that this is impossible, of course, because workers will just quit and go work for someone else when this happens, and so bosses will be disciplined by the competition to find workers willing to put up with their bullshit. Of course, these same libertarians will tell you that it should be legal for your boss to require you to sign a noncompete "agreement" so you can't quit and get a job elsewhere in your field. They'll also tell you that we don't need antitrust enforcement to prevent your boss from buying up all the businesses you might work for if you do manage to quit.
In practice, the only way workers have successfully resisted being burdened with their bosses' risks is by a) forming a union, and then b) using the union to lobby for strong labor laws. Labor laws aren't a substitute for a union, but they are an important backstop, and of course, if you're not unionized, labor law is all you've got.
Enter the tech-bro, app in hand. The tech-bro's most absurd (and successful) ruse is "it's not a crime, I did it with an app." As in "it's not money-laundering, I did it with an app." Or "it's not a privacy violation, I did it with an app." Or "it's not securities fraud, I did it with an app." Or "it's not price-gouging, I did it with an app," or, importantly, "it's not a labor-law violation, I did it with an app."
The point of the "gig economy" is to use the "did it with an app" trick to avoid labor laws, so that bosses can shift risks onto workers, because capitalists hate capitalism. These apps were first used to immiserate taxi-drivers, and this was so successful that it spawned a whole universe of "Uber for __________" apps that took away labor rights from other kinds of workers, from dog-groomers to carpenters.
One group of workers whose rights are being devoured by gig-work apps is nurses, which is bad news, because without nurses, I would be dead by now.
A new report from the Roosevelt Institute goes deep on the way that nurses' lives are being destroyed by gig-work apps that let bosses in America's wildly dysfunctional, for-profit health care industry shift their risks onto the hardest-working group of health care professionals:
https://rooseveltinstitute.org/publications/uber-for-nursing/
The report's authors interviewed nurses employed through three apps – Shiftkey, Shiftmed and Carerev – and reveal a host of risk-shifting, worker-abusing practices that leave nurses working for so little that they can't afford medical insurance themselves.
Take Shiftkey: nurses are required to log into Shiftkey and indicate which shifts they are available for; if they're later assigned one of those shifts but can't take it, their app-based score declines and they risk not being offered shifts in the future. But Shiftkey doesn't guarantee that you'll get work on any of those shifts – in other words, nurses have to pledge not to take any other work during the times when Shiftkey might need them, but they only get paid for the hours when Shiftkey actually calls them out. Nurses assume all the risk that there won't be enough demand for their services.
Each Shiftkey nurse is offered a different pay-scale for each shift. Apps use commercially available financial data – purchased on the cheap from the chaotic, unregulated data broker sector – to predict how desperate each nurse is. The less money you have in your bank accounts and the more you owe on your credit cards, the lower the wage the app will offer you. This is a classic example of what the legal scholar Veena Dubal calls "algorithmic wage discrimination" – a form of wage theft that's supposedly legal because it's done with an app:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
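To make that mechanism concrete, here's a deliberately simplified sketch of what wage-pricing keyed to a worker's finances could look like. This is not Shiftkey's actual code – every function name, weight, and threshold below is invented for illustration – it just shows how broker-sourced balance and debt figures could be folded into a per-worker wage offer:

```python
# Hypothetical illustration of "algorithmic wage discrimination."
# NOT Shiftkey's actual logic: all names, weights, and thresholds are invented.

def desperation_score(bank_balance: float, credit_card_debt: float) -> float:
    """Return a 0..1 score; higher means the broker data suggests the
    worker is more desperate and will accept a lower wage."""
    balance_factor = max(0.0, 1.0 - bank_balance / 5_000)  # low savings -> higher score
    debt_factor = min(1.0, credit_card_debt / 10_000)      # high debt -> higher score
    return 0.5 * balance_factor + 0.5 * debt_factor

def wage_offer(base_rate: float, score: float, max_discount: float = 0.30) -> float:
    """Discount the posted hourly rate in proportion to predicted desperation."""
    return base_rate * (1.0 - max_discount * score)

# Two nurses see different prices for the same shift:
print(wage_offer(23.00, desperation_score(4_000, 1_000)))  # ~$21.97/hr
print(wage_offer(23.00, desperation_score(200, 9_000)))    # ~$16.58/hr
```

The point of the sketch is that the worker never sees the score, only the offer – two nurses looking at the same shift have no way to know they're being quoted different wages.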
Shiftkey workers also have to bid against one another for shifts, with the job going to the worker who accepts the lowest wage. Shiftkey pays nominal wages that sound reasonable – one nurse's topline rate is $23/hour. But by payday, Shiftkey has used junk fees to scrape that rate down to the bone. Workers are charged a daily $3.67 "safety fee" to cover background checks, drug screening, etc. Never mind that these tasks are only performed once per nurse, not every day – and never mind that this is another way to force workers to assume the boss's risks. Nurses also pay daily fees for accident insurance ($2.14) and malpractice insurance ($0.21) – more employer risk being shifted onto workers. Workers also pay $2 per shift if they want to get paid on the same day – a payday-lender-style levy against workers whose wages are priced based on their desperation. Then there's a $6/shift fee nurses pay as a finder's fee to the app, a fee that rises to $7/shift next year. All told, that $23/hour rate cashes out to $13/hour.
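Here's the junk-fee arithmetic as a minimal sketch, using the per-day and per-shift fees quoted above and assuming a hypothetical 8-hour shift (the report doesn't pin a shift length to these figures). Note that the fees alone don't get you from $23/hour down to $13/hour – the rest of the gap comes from the costs described next:

```python
# Fee arithmetic for a Shiftkey-style shift, using the figures quoted above.
# The 8-hour shift length is an assumption for illustration.

HOURLY_RATE = 23.00   # nominal "topline" rate offered to the nurse
SHIFT_HOURS = 8       # assumed shift length

fees = {
    "safety fee (background/drug checks)": 3.67,  # charged daily, not once
    "accident insurance": 2.14,
    "malpractice insurance": 0.21,
    "same-day pay": 2.00,
    "app finder's fee": 6.00,                     # slated to rise to $7/shift
}

gross = HOURLY_RATE * SHIFT_HOURS                 # $184.00
net = gross - sum(fees.values())                  # $169.98
print(f"effective rate after fees: ${net / SHIFT_HOURS:.2f}/hr")  # ~$21.25/hr
# Payroll taxes, uniforms, equipment, and unpaid cancellations (described
# below) account for the rest of the drop to the report's ~$13/hr figure.
```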
On top of that, gig nurses have to pay for their own uniforms, licenses and equipment, including different colored scrubs and even different shoes for each hospital. And because these nurses are "their own bosses," they have to deduct their own payroll taxes from that final figure. As "self-employed" workers, they aren't entitled to overtime or workers' comp, and they get no retirement plan, health insurance, sick days, or vacation.
The apps sell themselves to bosses as a way to get vetted, qualified nurses, but the entire vetting process is automated. Nurses upload a laundry list of documents related to their qualifications and undergo a background check, but are never interviewed by a human. They are assessed through automated means – for example, they have to run a location-tracking app en route to callouts and their reliability scores decline if they lose mobile data service while stuck in traffic.
Shiftmed docks nurses who cancel shifts after agreeing to take them, but bosses who cancel on nurses, even at the last minute, face at most a small penalty (having to pay for the first two hours of a canceled shift) or, more often, no penalty at all. For example, bosses who book nurses through the Carerev app can cancel without penalty on a mere two hours' notice. One nurse quoted in the study describes getting up at 5AM for a 7AM shift, only to discover that the shift had been canceled while she slept, leaving her without any work or pay for the day, after she had already arranged childcare for her kid. The nurse assumes all the risk again: blocking out a day's work, paying for childcare, altering her sleep schedule. If she cancels on Carerev, her score goes down and she gets fewer shifts in the future. But if the boss cancels, he faces no consequences.
Carerev also lets bosses send nurses home early without paying them for the whole day – and it doesn't pay overtime if a nurse stays past the end of her shift to make sure her patients are cared for. The librarian scholar Fobazi Ettarh coined the term "vocational awe" to describe how workers in caring professions will endure abusive conditions and put in unpaid overtime because of their commitment to the patrons, patients, and pupils who depend on them:
https://www.inthelibrarywiththeleadpipe.org/2018/vocational-awe/
Many of the nurses in the study report having shifts canceled on them as they pull into the hospital parking lot. Needless to say, when your shift is canceled just as it was supposed to start, it's unlikely you'll be able to book a shift at another facility.
The American healthcare industry is dominated by monopolies. First came the pharma monopolies, when pharma companies merged and merged and merged, allowing them to screw hospitals with sky-high prices. Then the hospitals gobbled each other up, merging until most regions were dominated by one or two hospital chains, who could use buyer power to get a better deal on pharma prices – but also use seller power to screw the insurers with outrageous prices for care. So the insurers merged, too, until they could fight hospital price-gouging.
Everywhere you turn in the healthcare industry, you find another monopolist: pharmacists and pharmacy benefit managers, group purchasing organizations, medical beds, saline and supplies. Monopoly begets monopoly.
UnitedHealthcare is extraordinary in that its divisions are among the most powerful players in all of these sectors, making it a monopolist among monopolists. For example, UHC is the nation's largest employer of physicians:
https://www.thebignewsletter.com/p/its-time-to-break-up-big-medicine
But there are two key stakeholders in American health-care who can't monopolize: patients and health-care workers. We are the disorganized, loose, flapping ends at the beginning and end of the healthcare supply-chain. We are easy pickings for the monopolists in the middle, which is why patients pay more for worse care every year, and why healthcare workers get paid less for worse working conditions every year.
This is the one area where the Biden administration indisputably took action, bringing cases, making rules, and freaking out investment bankers and billionaires by repeatedly announcing that crimes were still crimes, even if you used an app to commit them.
The kind of treatment these apps mete out to nurses is illegal, app or no. In an important speech just last month, FTC commissioner Alvaro Bedoya explained how the FTC Act empowered the agency to shut down this kind of bossware because it is an "unfair and deceptive" form of competition:
https://pluralistic.net/2024/11/26/hawtch-hawtch/#you-treasure-what-you-measure
This is the kind of thing the FTC could be doing. Will Trump's FTC actually do it? The Trump campaign called the FTC "politicized" – but Trump's pick for the next FTC chair has vowed to politicize it even more:
https://theintercept.com/2024/12/18/trump-ftc-andrew-ferguson-ticket-fees/
Like Biden's FTC, Trump's FTC will have a target-rich environment if it wants to bring enforcement actions on behalf of workers. But Biden's trustbusters chose their targets by giving priority to the crooked companies that were doing the most harm to Americans, while Trump's trustbusters are more likely to give priority to the crooked companies that Trump personally dislikes:
https://pluralistic.net/2024/11/12/the-enemy-of-your-enemy/#is-your-enemy
So if one of these nursing apps pisses off Trump or one of his cronies, then yeah, maybe those nurses will get justice.
(Image: Cryteria, CC BY 3.0, modified)
Hey look at this (permalink)
- It’s time for the European Union to rethink personal social networking https://www.bruegel.org/policy-brief/its-time-european-union-rethink-personal-social-networking (h/t Svea)
- Never Forgive Them https://www.wheresyoured.at/never-forgive-them/
- Margot Susca on How Hedge Funds Helped Destroy American Newspapers https://www.corporatecrimereporter.com/news/200/margot-susca-on-how-hedge-funds-helped-destroy-american-newspapers/
This day in history (permalink)
#10yrsago A modest proposal for Wall Street’s future https://web.archive.org/web/20141215195720/http://www.bloombergview.com/articles/2014-12-15/michael-lewis-eight-things-i-wish-for-wall-street
#5yrsago From Enron to Saudi Arabia, from Rikers Island to ICE’s gulag, how McKinsey serves as “Capitalism’s Consigliere” https://theintercept.com/2019/12/18/capitalisms-consigliere-mckinseys-work-for-insurance-companies-ice-drug-manufacturers-and-despots/
#5yrsago A profile of Cliff “Cuckoo’s Egg” Stoll, a pioneering “hacker hunter” https://www.wired.com/story/meet-the-mad-scientist-who-wrote-the-book-on-how-to-hunt-hackers/
#5yrsago With 5G, 2019 reached peak bullshit https://www.lightreading.com/5g/2019-the-year-telecom-went-doolally-about-5g
#5yrsago Kentucky’s governor insisted that investment bankers could provide broadband. He was wrong https://www.propublica.org/article/there-are-kentuckians-who-still-dont-have-broadband-because-the-former-governor-chose-an-investment-bank-over-experts#173512
#1yrago Debbie Urbanski's 'After World' https://pluralistic.net/2023/12/18/storyworker-ad39-393a-7fbc/#digital-human-archive-project
Upcoming appearances (permalink)
- ISSA-LA Holiday Celebration keynote (Los Angeles), Dec 18 https://issala.org/event/issa-la-december-18-dinner-meeting/
- Picks and Shovels with Ken Liu (Boston), Feb 14 https://brooklinebooksmith.com/event/2025-02-14/cory-doctorow-ken-liu-picks-and-shovels
- Picks and Shovels with Charlie Jane Anders (Menlo Park), Feb 17 https://www.keplers.org/upcoming-events-internal/cory-doctorow
- Picks and Shovels with Wil Wheaton (Los Angeles), Feb 18 https://www.dieselbookstore.com/event/Cory-Doctorow-Wil-Wheaton-Author-signing
- Picks and Shovels with Dan Savage (Seattle), Feb 19 https://www.eventbrite.com/e/cory-doctorow-with-dan-savage-picks-and-shovels-a-martin-hench-novel-tickets-1106741957989
- Cloudfest (Europa Park), Mar 17-20 https://cloudfest.link/
- Picks and Shovels at Imagine! Belfast (Remote), Mar 24 https://www.eventbrite.co.uk/e/cory-doctorow-in-conversation-with-alan-meban-tickets-1106421399189
- DeepSouthCon63 (New Orleans), Oct 10-12, 2025 http://www.contraflowscifi.org/
Recent appearances (permalink)
- Can we avoid the enshittification of clean-energy tech? (Volts.wtf) https://www.volts.wtf/p/can-we-avoid-the-enshittification
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It (HOPE XV) https://www.youtube.com/watch?v=YrciT_dc2sc&list=PLcajvRZA8E0_tLLEh1COeAv-TcaDna2k1&index=32
- How To Keep IoT From Becoming An IoTrash (Def Con) https://www.youtube.com/watch?v=tA7bpp8qXxI
Latest books (permalink)
- "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3062/Available_Feb_20th%3A_The_Bezzle_HB.html#/)
- "The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3007/Pre-Order_Signed_Copies%3A_The_Lost_Cause_HB.html#/)
- "The Internet Con": a nonfiction book about interoperability and Big Tech (Verso), September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245)
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com). Signed copies at Dark Delicacies (US) and Forbidden Planet (UK): https://forbiddenplanet.com/385004-red-team-blues-signed-edition-hardcover/
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid" (with Rebecca Giblin): on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 (https://chokepointcapitalism.com)
- "Attack Surface": the third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies (https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html)
- "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution (https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59?sk=f6cd10e54e20a07d4c6d0f3ac011af6b). Signed copies at Dark Delicacies (https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
- "Little Brother/Homeland": a reissue omnibus edition with a new introduction by Edward Snowden (https://us.macmillan.com/books/9781250774583). Personalized/signed copies at Dark Delicacies (https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html)
- "Poesy the Monster Slayer": a picture book about monsters, bedtime, gender, and kicking ass. Order at https://us.macmillan.com/books/9781626723627; get a personalized, signed copy at Dark Delicacies (https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/)
Upcoming books (permalink)
- Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It, Farrar, Straus, Giroux, October 2025
- Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025
Colophon (permalink)
Today's top sources:
Currently writing:
- Enshittification: a nonfiction book about platform decay for Farrar, Straus, Giroux. Status: second pass edit underway (readaloud)
- A Little Brother short story about DIY insulin. PLANNING
- Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS FEB 2025
Latest podcast: Daddy-Daughter Podcast 2024 https://craphound.com/overclocked/2024/12/17/daddy-daughter-podcast-2024/
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
I’ve been banging the drum about the need for a federal anti-SLAPP law for a long time now, and one has just been proposed. Call your Congresspeople!
More on the bill from the Reporters Committee for Freedom of the Press.
“I can’t leave Substack, the alternatives charge monthly fees!”
For a mid-sized paid newsletter, you will pay:
Ghost Pro: $149–$269/month
Beehiiv: $131–$218/month
Buttondown: $239/month
Mailchimp: $285/month
Substack: $700/month
(this is based on assumptions of 20,000 members, 7% paid at $5/mo; twiddle the math as you see fit)
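If you'd rather twiddle it in code, here's a minimal sketch of that math. The flat prices are the vendor tiers quoted above; Substack is modeled as its standard 10% cut of paid-subscription revenue, which is where the $700 figure comes from:

```python
# Minimal sketch of the comparison above. Assumptions: 20,000
# subscribers, 7% paid at $5/month; flat prices are the quoted tiers.
SUBSCRIBERS = 20_000
PAID_SHARE = 0.07
PRICE = 5.00  # $/month per paid subscriber

revenue = SUBSCRIBERS * PAID_SHARE * PRICE  # $7,000/month gross

monthly_cost = {
    "Ghost Pro": (149, 269),
    "Beehiiv": (131, 218),
    "Buttondown": (239, 239),
    "Mailchimp": (285, 285),
    "Substack": (revenue * 0.10, revenue * 0.10),  # 10% of paid revenue
}

for host, (low, high) in sorted(monthly_cost.items(), key=lambda kv: kv[1]):
    print(f"{host:<10} ${low:,.0f}-${high:,.0f}/mo "
          f"({low / revenue:.1%}-{high / revenue:.1%} of gross revenue)")
```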
Cheapest option is self-hosting, though of course there is added time cost and a technical barrier to entry. I pay $100–$150/mo to self-host Ghost with a bit over 25,000 subscribers.
For free newsletters, Beehiiv is free for up to 2,500 subscribers. ConvertKit is free up to 10,000.
Here’s a comparison for newsletters of different sizes, paid and free:
Crypto Got What It Wanted in November’s Election. Now What?
Will the cryptocurrency industry’s endemic fraud and risk-taking ultimately be backstopped by government bailouts, funded by taxpayers who may themselves have no exposure to crypto assets? Has crypto become too big to fail?
My latest in Bloomberg Businessweek: Crypto Got What It Wanted in November’s Election. Now What?
Today's links
- Happy Public Domain Day 2025 to all who celebrate: A new bumper crop, with many more to come!
- Hey look at this: Delights to delectate.
- This day in history: 2004, 2009, 2014, 2019
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Happy Public Domain Day 2025 to all who celebrate (permalink)
In 1976, Congress set fire to the country's libraries; in 1998, they did it again. Today, in 2024, the flames have died down, and out of the ashes a new public domain is growing. Happy Public Domain Day 2025 to all who celebrate!
For most of US history, copyright was something you had to ask for. To copyright a work, you'd send a copy to the Library of Congress and they'd issue you a copyright. Not only did that let you display a copyright mark on your work – so people would know they weren't allowed to copy it without your permission – but if anyone wanted to figure out who to ask in order to get permission to copy or adapt a work, they could just go look up the paperwork at the LoC.
In 1976, Congress amended the Copyright Act to eliminate the "formality" of copyright registration. Now, all creative works of human authorship were copyrighted "at the moment of fixation" – the instant you drew, typed, wrote, filmed, or recorded them. From a toddler's nursery-school finger-painting to a graffiti mural on a subway car, every creative act suddenly became an article of property.
But whose property? That was on you to figure out, before you could copy, publish, perform, or preserve the work, because without registration, permissions had to start with a scavenger hunt for the person who could grant it. Congress simultaneously enacted a massive expansion of property rights, while abolishing the title registry that spelled out who owned what. As though this wasn't enough, Congress reached back in time and plopped an extra 20 years onto the copyrights of existing works, even ones whose authors were unknown and unlocatable.
For the next 20 years, creative workers, archivists, educators and fans struggled in the face of this regime of unknowable property rights. After decades of well-documented problems, Congress acted again: they made it worse.
In 1998, Congress passed the Sonny Bono Copyright Act, AKA the Mickey Mouse Preservation Act, AKA the Copyright Term Extension Act. The 1998 Act tacked another 20 years onto copyright terms, but not just for works that were still in copyright. At the insistence of Disney, Congress actually yanked works out of the public domain – works that had been anthologized, adapted and re-issued – and put them back into copyright for two more decades. Copyright stretched to the century-plus "life plus 70 years" term. Nothing entered the public domain for the next 20 years.
So many of my comrades in the fight for the public domain were certain that this would happen again in 2018. In 2010, e-book inventor and Project Gutenberg founder Michael S Hart and I got into a friendly email argument because he was positive that in 2018, Congress would set fire to the public domain again. When I insisted that there was no way this could happen given the public bitterness over the 1998 Act, he told me I was being naive, but said he hoped that I was right.
Michael didn't live to see it, but in 2019, the public domain opened again. It was an incredible day.
No one has done a better job of chronicling the fortunes of our fragile, beautiful, bounteous public domain than Jennifer Jenkins and James Boyle of Duke University's Center for the Study of the Public Domain. Every year from 2010 to 2019, Boyle and Jenkins documented the works that weren't entering the public domain because of the 1998 Act, making sure we knew what had been stolen from our cultural commons. In so many cases, these works disappeared before their copyrights expired – the majority of silent films, for example, are lost forever.
Then, in 2019, Jenkins and Boyle got to start cataloging the works that were entering the public domain, most of them from 1923 (copyright is complicated, so not everything that entered the public domain in 2019 was from that year):
https://web.law.duke.edu/cspd/publicdomainday/2019/
Every year since, they've celebrated a new bumper crop. Last year, we got Mickey Mouse!
https://pluralistic.net/2023/12/15/mouse-liberation-front/#free-mickey
Mickey arrived alongside numerous other works – by Woolf, Hemingway, Doyle, Christie, Proust, Hesse, Milne, DuBois, Frost, Chaplin, Escher, and more:
https://pluralistic.net/2023/12/20/em-oh-you-ess-ee/#sexytimes
Now, 2024 was a fantastic year for the public domain, but – as you'll see in the 2025 edition of the Public Domain Day post – 2025 is even better:
https://web.law.duke.edu/cspd/publicdomainday/2025/
So what's entering the public domain this year? Well, for one thing, there's more of the stuff from last year, which makes sense: if Hemingway's first books entered the PD last year, then this year, we'll get the books he wrote next (and this will continue every year until we catch up with Hemingway's tragic death).
There are some big hits from our returning champions, like Woolf's To the Lighthouse and A Farewell to Arms from Hemingway. Jenkins and Boyle call particular attention to one book: Faulkner's The Sound and the Fury, its title taken from a public domain work by Shakespeare. As they write, Faulkner spoke eloquently about the nature of posterity and culture:
[Humanity] is immortal, not because he alone among creatures has an inexhaustible voice, but because he has a soul, a spirit capable of compassion and sacrifice and endurance…The poet’s voice need not merely be the record of man, it can be one of the props, the pillars to help him endure and prevail.
The main attraction on last year's Public Domain Day was the entry of Steamboat Willie – the first Mickey Mouse cartoon – into the public domain. This year, we're getting a dozen new Mickey cartoons, including the first Mickey talkie:
https://en.wikipedia.org/wiki/Mickey_Mouse_(film_series)#1929
Those 12 shorts represent a kind of creative explosion for the Disney Studios. Those early Mickey cartoons were, each and every one, a hybrid of new copyrighted works and the public domain. The backbone of each Mickey short was a beloved, public domain song, with Mickey's motion synched to the beat (animators came to call this "mickey mousing"). In 1929, there was a huge crop of public domain music that anyone could use this way:
Blue Danube, Pop Goes the Weasel, Yankee Doodle, Here We Go Round the Mulberry Bush, Ach Du Lieber Augustin, Listen to the Mocking Bird, A-Hunting We Will Go, Dixie, The Girl I Left Behind Me, a tune known as the snake charmer song, Coming Thru the Rye, Mary Had a Little Lamb, Auld Lang Syne, Aloha ‘Oe, Turkey in the Straw, My Bonnie Lies Over the Ocean, Habanera and Toreador Song from Carmen, Liszt’s Hungarian Rhapsody No. 2, and Goodnight, Ladies.
These were recent compositions, songs that were written and popularized in the lifetimes of the parents and grandparents who took their kids to the movies to see Mickey shorts like "The Barn Dance," "The Opry House" and "The Jazz Fool." The ability to plunder this music at will was key to the success of Mickey Mouse and Disney. Think of all the Mickeys and Disneys we've lost by locking up the public domain for the past half-century!
This year, we're getting some outstanding new old music for our public domain. The complexities of copyright terms mean that compositions from 1929 are entering the public domain, but we're only getting recordings from 1924. 1924's outstanding recordings include:
George Gershwin performing Rhapsody in Blue, Jelly Roll Morton playing Shreveport Stomp, and an early recording from contralto and civil rights icon Marian Anderson, who is famous for her 1939 performance to an integrated audience of over 75,000 people at the Lincoln Memorial. Anderson’s 1924 recording is of the spiritual Nobody Knows the Trouble I’ve Seen.
Meanwhile, the compositions entering the public domain include Singin' in the Rain, Ain't Misbehavin', An American in Paris, Bolero, (What Did I Do to Be So) Black and Blue, Tiptoe Through the Tulips, Happy Days Are Here Again, What Is This Thing Called Love?, Am I Blue? and many, many more.
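The 1929/1924 split comes from term arithmetic: works from this era get 95 years of copyright from publication, while the Music Modernization Act gives pre-1947 sound recordings 100 years, with both entering the public domain on the following January 1. Here's a much-simplified sketch of that math (real status also turns on notice, renewal, and other wrinkles):

```python
# Much-simplified US copyright term math; assumes proper notice/renewal.

def composition_pd_year(pub_year: int) -> int:
    # Works published 1923-1977: 95 years from publication, entering
    # the public domain on January 1 of the following year.
    return pub_year + 95 + 1

def recording_pd_year(pub_year: int) -> int:
    # Pre-1972 sound recordings under the Music Modernization Act.
    if pub_year < 1923:
        return 2022  # everything earlier entered the public domain together in 2022
    if pub_year <= 1946:
        return pub_year + 100 + 1  # 100-year term
    if pub_year <= 1956:
        return pub_year + 110 + 1  # 110-year term
    return 2067  # 1957 through mid-Feb 1972: protected until Feb 15, 2067

assert composition_pd_year(1929) == 2025  # Bolero, Ain't Misbehavin'...
assert recording_pd_year(1924) == 2025    # Gershwin playing Rhapsody in Blue
```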
On the art front, we're getting Salvador Dali's earliest surrealist masterpieces, like Illumined Pleasures, The Accommodations of Desire, and The Great Masturbator. Dali's contemporaries are not so lucky: after a century, the early history of the works of Magritte is so muddy that it's impossible to say whether they are in or out of copyright.
But there's plenty of art with clearer provenance that we can welcome into the public domain this year, most notably, Popeye and Tintin. As the first Popeye and Tintin comics go PD, so too do those characters.
The idea that a fictional character can have a copyright separate from the stories they appear in is relatively new, and it's weird and very stupid. Courts have found that the Batmobile is a copyrightable character (Batman won't enter the public domain until 2035).
Copyright for characters is such a muddy, gross, weird idea. The clearest example of how stupid this gets comes from Sherlock Holmes, whose canon spans many years. The Doyle estate – a rent-seeking copyright troll – claimed that Holmes wouldn't enter the public domain until every Holmes story was in the public domain (that's this year, incidentally!).
This didn't fly, so their next gambit was to claim copyright over those aspects of Holmes's character that were developed later in the stories. For example, they claimed that Holmes didn't show compassion until the later stories, and, on that basis, sued the creators of the Enola Holmes movie for depicting a gender-swapped Sherlock who wasn't a total dick:
As the Enola lawyers pointed out in their briefs, this was tantamount to a copyright over emotions: "Copyright law does not allow the ownership of generic concepts like warmth, kindness, empathy, or respect, even as expressed by a public domain character – which, of course, belongs to the public, not plaintiff."
When Mickey entered the public domain last year, Jenkins did an excellent deep dive into which aspects of Mickey's character and design emerged when:
https://web.law.duke.edu/cspd/mickey/
Jenkins uses this year's entry of Tintin and Popeye into the public domain to further explore the subject of proprietary characters.
Even though copyright extends to characters, it only covers the "copyrightable" parts of those characters. As the Enola lawyers wrote, the generic character traits (their age, emotional vibe, etc) are not protected. Neither is anything "trivial" or "minuscule" – for example, if a cartoonist makes a minor alteration to the way a character's pupils or eyes are drawn, that's a minor detail, not a copyrightable element.
The biggest impediment to using public domain characters isn't copyright, it's trademark. Trademark is very different from copyright: foundationally, trademark is the right to protect your customers from being deceived by your competitors. Coke can use trademark to stop Pepsi from selling its sugary drinks in Coke cans – not because it owns the word "Coke" or the Coke logo, but because it has been deputized to protect Coke drinkers from being tricked into buying not-Coke, thinking that they're getting the true Black Waters of American Imperialism.
Companies claim trademarks over cartoon characters all the time, and license those trademarks on food, clothing, toys, and more (remember Popeye candy cigarettes?).
Indeed, Hearst Holdings claims a trademark over Popeye in many traditional categories, like cartoons, amusement parks, ads and clothes. They're also in the midst of applying for a Popeye NFT trademark (lol).
Does that mean you can't use Popeye in any of those ways? Nope! All you need to do is prominently mention that your use of Popeye is unofficial, not associated with Hearst, and dispel any chance of confusion. A unanimous Supreme Court decision (in Dastar) affirms your right to do so. You can also use Popeye in the title of your unauthorized Popeye comic, thanks to a case called Rogers v Grimaldi.
This all applies to Tintin, too – a big deal, given that Tintin is managed by a notorious copyright bully who delights in cruelly terrorizing fan artists. Tintin is joined in the public domain by Buck Rogers, another old-timey character whose owners are scumbag rent-seekers.
Congress buried the public domain alive in 1976, and dumped a load of gravel over its grave in 1998, but miraculously, we've managed to exhume the PD, and it has been revived and is showing signs of rude health.
2024 saw the blockbuster film adaptation of Wicked, based on the public domain Oz books. It also saw the publication of James, a celebrated retelling of Twain's Huck Finn from the perspective of Huck's enslaved sidekick.
This is completely normal. It's how art has been made since time immemorial. The 40-year experiment in life without a public domain is at an end, and not a minute too soon.
You can piece together a complete-as-possible list of 2025's public domain (including the Marx Brothers' Cocoanuts, Disney's Skeleton Dance, and Del Ruth's Gold Diggers of Broadway) here:
https://onlinebooks.library.upenn.edu/cce/
Hey look at this (permalink)
- Find out why your health insurer denied your claim https://projects.propublica.org/claimfile/ (h/t Gregory Cherlin)
- Woman blames Star Trek license plates for tens of thousands of dollars in accidental tickets https://www.ksl.com/article/51204984/woman-blames-star-trek-license-plates-for-tens-of-thousands-of-dollars-in-accidental-tickets (h/t Hackaday)
- The £25,000 Pre-Amp Repair and the Copyright Strike https://www.youtube.com/watch?v=yPIrCaeVtvI (h/t Sal Fadhley)
This day in history (permalink)
#20yrsago US will shut down GPS to “fight terrorists” https://www.nbcnews.com/id/wbna6720387
#20yrsago Firefox ad in today’s NYT https://web.archive.org/web/20050204043841/https://www.mozilla.org/images/nyt_ad_large_2004.png
#20yrsago Barlow’s trial blogged https://web.archive.org/web/20041229065803/http://vitanuova.loyalty.org/weblog/nb.cgi/view/vitanuova/2004/12/16/1
#20yrsago Donate to EFF, send a lump of coal to MPAA and RIAA https://web.archive.org/web/20041218015602/http://www.downhillbattle.org/coal/
#20yrsago 65MB of vintage random numbers from 1965 https://www.rand.org/pubs/monograph_reports/MR1418.html
#15yrsago Spite Houses, built to piss off the neighbors https://en.wikipedia.org/wiki/Spite_house#
#15yrsago Bug powder causes male bedbugs to stab each other to death with their penises https://www.medindia.net/news/bedbugs-may-be-on-way-out-with-new-discovery-62273-1.htm
#15yrsago Installing Windows considered as a literary genre https://nielsenhayden.com/makinglight/archives/012008.html#012008
#15yrsago Montage of magic “photo enhancement” in cop shows and movies https://www.youtube.com/watch?v=Vxq9yj2pVWk
#15yrsago Leaked secret EU-Canada copyright agreement – EU screws Canada https://web.archive.org/web/20091220121340/https://www.michaelgeist.ca/content/view/4627/125/
#15yrsago Rapist ex-lawmaker claims copyright on his name, threatens legal action against anyone who uses it without permission https://rapidcityjournal.com/news/rapist-former-lawmaker-ted-klaudt-claims-name-copyright/article_03881cae-e9a3-11de-848e-001cc4c002e0.html
#15yrsago RIP, Roy E Disney https://web.archive.org/web/20091220040552/http://abclocal.go.com/wabc/story?section=news&id=7174485
#15yrsago Photos of rotting, abandoned water park at Walt Disney World https://web.archive.org/web/20091213143405/http://disboards.com/showthread.php?t=2344523
#15yrsago Great Firewall of Australia will nationally block sites appearing on a secret, unaccountable list https://web.archive.org/web/20091220042804/http://www.efa.org.au/2009/12/17/filtering-coming-to-australian-in-2010/
#10yrsago Barbaric, backwards ancestor worship https://memex.craphound.com/2014/12/16/barbaric-backwards-ancestor-worship/
#10yrsago UK cops demand list of attendees at university fracking debate https://www.theguardian.com/uk-news/2014/dec/15/police-university-list-fracking-debate
#10yrsago Over 700 million people have taken steps to improve privacy since Snowden https://www.schneier.com/blog/archives/2014/12/over_700_millio.html
#10yrsago Judge convicted of planting meth on woman who reported him for harassment https://web.archive.org/web/20141212022710/http://www.ajc.com/news/news/local/ex-judge-convicted-of-planting-drugs-on-woman/njQwd/
#10yrsago No charges for Japanese man who dumped a quarter-ton of porn in a park https://web.archive.org/web/20141225092617/https://www.afp.com/en/node/2965441/
#10yrsago The strange history of Disney’s cyber-psychedelic “Computers Are People Too” https://www.vice.com/en/article/how-disney-was-hustled-into-making-the-trippiest-movie-about-computers-ever/
#10yrsago HOWTO cut paper snowflakes in the likeness of Nobel physics prizewinners https://www.symmetrymagazine.org/article/december-2014/deck-the-halls-with-nobel-physicists
#5yrsago Insulin prices doubled between 2012 and 2016 https://www.usatoday.com/story/news/health/2019/12/09/insulin-prices-double-ohio-lawmakers-looking-answers/2629115001/
#5yrsago Sloppy security mistakes in smart conferencing gear allows hackers to spy on board rooms, steal presentations https://www.wired.com/story/dten-video-conferencing-vulnerabilities/
#5yrsago Bernie Sanders is the only leading Democrat who hasn’t taken money from billionaires https://www.cbsnews.com/news/bernie-sanders-knocks-rivals-for-taking-donations-from-billionaires/
#5yrsago Privacy activists spent a day on Capitol Hill scanning faces to prove that scanning faces should be banned https://fightfortheftr.medium.com/we-scanned-thousands-of-faces-in-dc-today-to-show-why-facial-recognition-surveillance-should-be-3360958a76f1
#5yrsago Foxconn wants Wisconsin to keep paying it billions, but it won’t disclose what kind of factory it will build https://www.theverge.com/2019/12/13/21020885/foxconn-wisconsin-deal-renegotiate-tax-subsidy-lcd-factory-plant
#5yrsago Citing the Panama Papers, Elizabeth Warren proposes sweeping anti-financial secrecy rules https://medium.com/@teamwarren/my-plan-to-fight-global-financial-corruption-b66492583129
#5yrsago McKinsey is lying about its role in building ICE’s gulags, and paying to own the top search result for “McKinsey ICE” https://www.propublica.org/article/mckinsey-called-our-story-about-its-ice-contract-false-its-not
#5yrsago Boston city council election decided by a single vote https://www.wgbh.org/news/local/2019-12-13/five-takeaways-from-what-might-have-been-the-closest-election-in-boston-history
#5yrsago Bunnie Huang’s classic “Essential Guide to Electronics in Shenzhen” is now free online https://bunniefoo.com/bunnie/essential/essential-guide-shenzhen-web.pdf
#5yrsago Private equity firms should be abolished https://www.thebignewsletter.com/p/why-private-equity-should-not-exist
#5yrsago ICANN hits pause on the sale of .ORG to Republican billionaires’ private equity fund https://www.icann.org/en/blogs/details/org-update-9-12-2019-en
#5yrsago San Diego’s Mysterious Galaxy bookstore is saved! https://www.mystgalaxy.com/new-location-mysterious-galaxy-2020
Upcoming appearances (permalink)
- ISSA-LA Holiday Celebration keynote (Los Angeles), Dec 18 https://issala.org/event/issa-la-december-18-dinner-meeting/
- Picks and Shovels with Charlie Jane Anders (Menlo Park), Feb 17 https://www.keplers.org/upcoming-events-internal/cory-doctorow
- Picks and Shovels with Wil Wheaton (Los Angeles), Feb 18 https://www.dieselbookstore.com/event/Cory-Doctorow-Wil-Wheaton-Author-signing
- Picks and Shovels with Dan Savage (Seattle), Feb 19 https://www.eventbrite.com/e/cory-doctorow-with-dan-savage-picks-and-shovels-a-martin-hench-novel-tickets-1106741957989
- Cloudfest (Europa Park), Mar 17-20 https://cloudfest.link/
- Picks and Shovels at Imagine! Belfast (Remote), Mar 24 https://www.eventbrite.co.uk/e/cory-doctorow-in-conversation-with-alan-meban-tickets-1106421399189
- DeepSouthCon63 (New Orleans), Oct 10-12, 2025 http://www.contraflowscifi.org/
Recent appearances (permalink)
- Can we avoid the enshittification of clean-energy tech? (Volts.wtf) https://www.volts.wtf/p/can-we-avoid-the-enshittification
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It (HOPE XV) https://www.youtube.com/watch?v=YrciT_dc2sc&list=PLcajvRZA8E0_tLLEh1COeAv-TcaDna2k1&index=32
- How To Keep IoT From Becoming An IoTrash (Def Con) https://www.youtube.com/watch?v=tA7bpp8qXxI
Latest books (permalink)
- "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3062/Available_Feb_20th%3A_The_Bezzle_HB.html#/)
- "The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3007/Pre-Order_Signed_Copies%3A_The_Lost_Cause_HB.html#/)
- "The Internet Con": a nonfiction book about interoperability and Big Tech (Verso), September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245)
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com). Signed copies at Dark Delicacies (US) and Forbidden Planet (UK): https://forbiddenplanet.com/385004-red-team-blues-signed-edition-hardcover/
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid" (with Rebecca Giblin): on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 (https://chokepointcapitalism.com)
- "Attack Surface": the third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies (https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html)
- "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution (https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59?sk=f6cd10e54e20a07d4c6d0f3ac011af6b). Signed copies at Dark Delicacies (https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
- "Little Brother/Homeland": a reissue omnibus edition with a new introduction by Edward Snowden (https://us.macmillan.com/books/9781250774583). Personalized/signed copies at Dark Delicacies (https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html)
- "Poesy the Monster Slayer": a picture book about monsters, bedtime, gender, and kicking ass. Order at https://us.macmillan.com/books/9781626723627; get a personalized, signed copy at Dark Delicacies (https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/)
Upcoming books (permalink)
- Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It, Farrar, Straus, Giroux, October 2025
- Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025
Colophon (permalink)
Today's top sources:
Currently writing:
- Enshittification: a nonfiction book about platform decay for Farrar, Straus, Giroux. Status: second pass edit underway (readaloud)
- A Little Brother short story about DIY insulin. PLANNING
- Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS FEB 2025
Latest podcast: Daddy-Daughter Podcast 2024 https://craphound.com/overclocked/2024/12/17/daddy-daughter-podcast-2024/
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
archive spelunking. via Don Pedro Presents: Politics and Protest.
Today's links
- Social media needs (dumpster) fire exits: No one wants to have a fire, but you need to plan for one anyway.
- Hey look at this: Delights to delectate.
- This day in history: 2004, 2009, 2014, 2019, 2023
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Social media needs (dumpster) fire exits (permalink)
Of course you should do everything you can to prevent fires – and also, you should build fire exits, because no matter how hard you try, stuff burns. That includes social media sites.
Social media has its own special form of lock-in: we use social media sites to connect with friends, family members, community members, audiences, comrades, customers…people we love, depend on, and care for. Gathering people together is a profoundly powerful activity, because once people are in one place, they can do things: plan demonstrations, raise funds, organize outings, start movements. Social media systems that attract people then attract more people – the more people there are on a service, the more reasons there are to join that service, and once you join the service, you become a reason for other people to join.
Economists call this the "network effect." Services that increase in value as more people use them are said to enjoy "network effects." But network effects are a trap, because services that grow by connecting people get harder and harder to escape.
That's thanks to something called the "collective action problem." You experience the collective action problem all the time, whenever you try to get your friends together to do something. I mean, you love your friends but goddamn are they a pain in the ass: whether it's deciding what board game to play, what movie to see, or where to go for a drink afterwards, hell is truly other people. Specifically, people that you love but who stubbornly insist on not agreeing to do what you want to do.
You join a social media site because of network effects. You stay because of the collective action problem. And if you leave anyway, you will experience "switching costs." Switching costs are all the things you give up when you leave one product or service and join another. If you leave a social media service, you lose contact with all the people you rely on there.
Social media bosses know all this. They play a game where they try to enshittify things right up to the point where the costs they're imposing on you (with ads, boosted content, undermoderation, overmoderation, AI slop, etc) are just a little less than the switching costs you'd have to bear if you left. That's the revenue maximization strategy of social media: make things shittier for you to make things better for the company, but not so shitty that you go.
The more you love and need the people on the site, the harder it is for you to leave, and the shittier the service can make things for you.
How cursed is that?
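Here's a toy model of that ratchet – my illustration, not anything from a real platform's codebase: extraction rises until the pain sits just below what leaving would cost, so bigger social stakes mean worse treatment.

```python
# Toy model of the enshittification ratchet described above: turn up
# the pain until it sits just below the user's switching cost.

def tolerable_pain(switching_cost: float, step: float = 0.01) -> float:
    pain = 0.0
    while pain + step < switching_cost:  # never quite enough to make you leave
        pain += step                     # one more ad, one more boosted post
    return pain

# The more you'd lose by leaving, the worse you can be treated:
for switching_cost in (1.0, 5.0, 25.0):
    print(f"switching cost {switching_cost:>5} -> tolerable pain "
          f"{tolerable_pain(switching_cost):.2f}")
```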
But digital technology has an answer. Because computers are so marvelously, miraculously flexible, we can create emergency exits between services so when they turn into raging dumpster fires, you can hit the crash-bar and escape to a better service.
For example, in 2006, when Facebook decided to open its doors to the public – not just college kids with .edu addresses – they understood that most people interested in social media already had accounts on Myspace, a service that had been sold to master enshittifier Rupert Murdoch the year before. Myspace users were champing at the bit to leave, but they were holding each other hostage.
To resolve this hostage situation, Facebook gave prospective Myspace users a bot that would take their Myspace login and password and impersonate them on Myspace, scraping all the messages their stay-behind friends had posted for them. These would show up in your Facebook inbox, and when you replied to them, the bot would log back into Myspace as you and autopilot those messages into your outbox, so they'd be delivered to your friends there.
No switching costs, in other words: you could use Facebook and still talk to your Myspace friends, without using Myspace. Without switching costs, there was no collective action problem, because you didn't all have to leave at once. You could trickle from Myspace to Facebook in ones and twos, and stay connected to each other.
Of course, that trickle quickly became a flood. Network effects are a double-edged sword: if you're only stuck to a service because of the people there, then if those people go, there's no reason for you to stick around. The anthropologist danah boyd was able to watch this from the inside, watching Myspace's back-end as whole groups departed en masse:
When I started seeing the disappearance of emotionally sticky nodes, I reached out to members of the MySpace team to share my concerns and they told me that their numbers looked fine. Active uniques were high, the amount of time people spent on the site was continuing to grow, and new accounts were being created at a rate faster than accounts were being closed. I shook my head; I didn’t think that was enough. A few months later, the site started to unravel.
https://www.zephoria.org/thoughts/archives/2022/12/05/what-if-failure-is-the-plan.html
Social media bosses hate the idea of fire exits. For social media enshittifiers, the dumpster fire is a feature, not a bug. If users can escape the minute you turn up the heat, how will you cook them alive?
Facebook nonconsensually hacked fire exits into Myspace and freed all of Rupert Murdoch's hostages. Fire exits represent a huge opportunity for competitors – or at least they did, until the motley collection of rules we call "IP" was cultivated into a thicket that made doing unto Facebook as Facebook did unto Myspace a felony:
https://locusmag.com/2020/09/cory-doctorow-ip/
When Elon Musk set fire to Twitter, people bolted for the exits. The safe harbor they sought out at first was Mastodon, and a wide variety of third party friend-finder services popped up to help Twitter refugees reassemble their networks on Mastodon. All departing Twitter users had to do was put their Mastodon usernames in their bios. The friend-finder services would use the Twitter API to pull the bios of everyone you followed and then automatically follow their Mastodon handles for you. For a couple weeks there, I re-ran a friend-finder service every couple days, discovering dozens and sometimes hundreds of friends in the Fediverse.
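Here's a minimal sketch of the friend-finder trick – my reconstruction, not any particular service's code. The bios dict stands in for what those services pulled from the Twitter API; real ones also had to filter out lookalikes such as plain email addresses:

```python
import re

# Matches handles like "@user@mastodon.social" or "user@hachyderm.io".
FEDI_HANDLE = re.compile(r"@?\b([\w.-]+)@([\w-]+\.[\w.-]+)\b")

def handles_in_bio(bio: str) -> list[str]:
    return [f"{user}@{host}" for user, host in FEDI_HANDLE.findall(bio)]

bios = {  # placeholder data; the services fetched this for every account you followed
    "alice": "photographer. she/her. @alice@mastodon.social",
    "bob": "bob@hachyderm.example | opinions my own",
}

for twitter_user, bio in bios.items():
    for handle in handles_in_bio(bio):
        print(f"{twitter_user} -> follow {handle} on your new server")
```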
Then, Elon Musk shut down the API – bricking up the fire exit. For a time there, Musk even suspended the accounts of Twitter users who mentioned the existence of their Mastodon handles on the platform – the "free speech absolutist" banned millions of his hostages from shouting "fire exit" in a burning theater.
Mastodon is a nonprofit, federated service built on open standards. Anyone can run a Mastodon server, and the servers all talk to each other. This is like email – you can use your Gmail account to communicate with friends who have Outlook accounts. But when you change email servers, you have to manually email everyone in your contact list to get them to switch over, while Mastodon has an automatic forwarding service that switches everyone you follow, and everyone who follows you, onto a new server. This is more like cellular number-porting, where you can switch from Verizon to T-Mobile and keep your phone number, so your friends don't have to care about which network your phone is on, they just call you and reach you.
This federation with automatic portability is the fire exit of all fire exits. It means that when your server turns into a dumpster fire, you can quit it and go somewhere else and lose none of your social connections – just a couple clicks gets you set up on a server run by someone you trust more or like better than the boss on your old server. And just as with real-world fire exits, you can use this fire exit in non-emergency ways, too – like maybe you just want to hang out on a server that runs faster, or whose users you like more, or that has a cooler name. Click-click-click, and you're in the new place. Change your mind? No problem – click-click-click, and you're back where you started.
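Mastodon does all of this with a couple of clicks in your account settings (the real flow also moves your followers over via an ActivityPub "Move" activity). As a rough sketch of what that portability amounts to under the hood, here's a DIY version against Mastodon's public REST API – the servers, handle, and token are placeholders:

```python
import requests

OLD_SERVER = "https://mastodon.example"   # placeholder old server
OLD_ACCT = "you"                          # your handle there
NEW_SERVER = "https://hachyderm.example"  # placeholder new server
NEW_TOKEN = "..."                         # OAuth token with write:follows scope

def account_id(server: str, acct: str) -> str:
    # Resolve a handle to an account id on that server.
    r = requests.get(f"{server}/api/v1/accounts/lookup", params={"acct": acct})
    r.raise_for_status()
    return r.json()["id"]

def following(server: str, acct_id: str):
    # Page through everyone that account follows (Link-header pagination).
    url = f"{server}/api/v1/accounts/{acct_id}/following"
    while url:
        r = requests.get(url, params={"limit": 80})
        r.raise_for_status()
        yield from r.json()
        url = r.links.get("next", {}).get("url")

auth = {"Authorization": f"Bearer {NEW_TOKEN}"}
for account in following(OLD_SERVER, account_id(OLD_SERVER, OLD_ACCT)):
    handle = account["acct"]
    if "@" not in handle:  # local accounts come back without a domain
        handle += "@" + OLD_SERVER.split("://", 1)[1]
    # Ask the new server to resolve the handle, then follow from there.
    hits = requests.get(f"{NEW_SERVER}/api/v2/search", headers=auth,
                        params={"q": handle, "type": "accounts",
                                "resolve": True, "limit": 1}).json()
    for match in hits.get("accounts", []):
        requests.post(f"{NEW_SERVER}/api/v1/accounts/{match['id']}/follow",
                      headers=auth)
```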
This doesn't just protect you from dumpster fires, it's also a flame-retardant, reducing the likelihood of conflagration. A server admin who is going through some kind of enraging event (whomst amongst us etc etc) knows that if they do something stupid and gross to their users, the users can bolt for the exits. That knowledge increases the volume on the quiet voice of sober second thought that keeps us from flying off the handle. And if the admin doesn't listen to that voice? No problem: the fire exit works as an exit – not just as an admin-pacifying measure.
Any public facility should be built with fire exits. Long before fire exits were a legal duty, they were still a widely recognized good idea, and lots of people installed them voluntarily. But after horrorshows like the Triangle Shirtwaist factory fire, fire exits became a legal obligation. Today, the EU's Digital Markets Act imposes a requirement on large platforms to stand up interoperable APIs so that users can quit their services and go to a rival without losing contact with the people they leave behind – it's the world's first fire exit regulation for online platforms.
It won't be the last. Existing data protection laws like California's CCPA, which give users a right to demand copies of their data, arguably impose a duty on Mastodon server hosts to give users the data-files they need to hop from one server to the next. This doesn't just apply to the giant companies captured by the EU's platform rules (the DMA's sibling regulation, the Digital Services Act, calls them "very large online platforms," or "VLOPs" – hands-down my favorite weird EU bureaucratic coinage of all time). CCPA would capture pretty much any server hosted in California and possibly any server with Californian users.
Which is OK! It's fine to tell small coffee-shops and offices with three desks that they need a fire exit, provided that the fire exit doesn't cost so much to install and maintain that it makes it impossible to run a small business or nonprofit or hobby. A duty to hand over your users' data files isn't a crushing compliance burden – after all, the facility for exporting that file comes built into Mastodon, so all a Mastodon server owner has to do to comply is not turn that facility off. What's more, if there's a dispute about whether a Mastodon server operator has provided a user with the file, we can resolve it by simply asking the server operator to send another copy of the file, or, in extreme cases, to provide a regulator with the file so that they can hand it to the user.
This is a great fire exit design. Fire exits aren't a substitute for making buildings less flammable, but they're a necessity, no matter how diligent the building's owner is about fire suppression. People are right to be pissed off about platform content moderation – and content moderation at scale is effectively impossible.
The pain of bad content moderation is not evenly distributed. Typically, the people who get it worst are disfavored minorities with little social power, targeted by large cadres of organized bad actors who engage in coordinated harassment campaigns. Ironically, these people also rely more on one another for support (because they are disfavored, disadvantaged, and targeted) than the median user, which means they pay higher switching costs when they leave a platform and lose one another. That means that the people who suffer the worst from content moderation failures are also the people whom a platform can afford to fail most egregiously without losing their business.
It's the "Fiddler on the Roof" problem: sure, the villagers of Anatevka get six kinds of shit kicked out of them by cossacks every 15 minutes, but if they leave the shtetl, they'll lose everything they have. Their wealth isn't material. Anatekvans are peasants with little more than the clothes on their back and a storehouse of banging musical numbers. The wealth of Anatevka is social, it's one another. The only thing worse than living in Anatevka is leaving Anatevka, because the collective action problem dictates that once you leave Anatevka, you lose everyone you love:
https://pluralistic.net/2022/10/29/how-to-leave-dying-social-media-platforms/
Twitter's exodus remains a trickle, albeit one punctuated by the occasional surge when Musk does something particularly odious and the costs of staying come into sharp relief, pushing users to depart. These days, most of these departures are for Bluesky, not Mastodon.
Bluesky, like Mastodon, was conceived of as a federated social service with easy portability between servers that would let users hop from one server to another. The Bluesky codebase and architecture frames out a really ambitious fire-suppression program, with composable, stackable moderation tools and group follow/block lists that make it harder for dumpster fires to break out. I love this stuff: it's innovative in the good sense of "something that makes life better for technology users" (as opposed to the colloquial meaning of "innovative," which is "something that torments locked-in users to make shareholders richer").
But as I said when I opened this essay, "you should do everything you can to prevent fires – and also, you should build fire exits, because no matter how hard you try, stuff burns."
Bluesky's managers claim they've framed in everything they need to install the fire exits that would let you leave Bluesky and go to a rival server without losing the people you follow and the people who follow you. They've got personal data servers that let you move all your posts. They've got stable, user-controlled identifiers that could maintain connections across federated servers.
But, despite all this, there are no actual fire exits for Bluesky. No Bluesky user has severed all connections with the Bluesky business entity, renounced its terms of service, and abandoned their accounts on Bluesky-managed servers without losing their personal connections to the people they left behind.
Those live, ongoing connections to people – not your old posts or your identifiers – impose the highest switching costs for any social media service. Myspace users who were reluctant to leave for the superior lands of Facebook (where, Mark Zuckerberg assured them, they would never face any surveillance – no, really!) were stuck on Rupert Murdoch's sinking ship by their love of one another, not by their old Myspace posts. Giving users who left Myspace the power to continue talking to the users who stayed was what broke the floodgates, leading to the "unraveling" that boyd observed.
Bluesky management has evinced an admirable and (I believe) sincere devotion to their users' wellbeing, and they've amply demonstrated that commitment with capital expenditures on content moderators and tools to allow users to control their own content moderation. They've invested heavily in fire suppression.
But there are still no fire exits on Bluesky. The exits are on the blueprints, they're roughed into the walls, but no one's installed them. Bluesky users' only defense against a dumpster fire is the ongoing goodwill and wisdom of Bluesky management. That's not enough. As I wrote earlier, every social media service where I'm currently locked in by my social connections was founded by someone I knew personally, liked, and respected (and often still like and respect):
https://pluralistic.net/2024/11/02/ulysses-pact/#tie-yourself-to-a-federated-mast
I would love to use Bluesky, not least because I am fast approaching the point where the costs of using Twitter will exceed the benefits. I'm pretty sure that an account on Bluesky would substitute well for the residual value that keeps me glued to Twitter. But the fact that Twitter is such a dumpster fire is why I'm not going to join Bluesky until they install those fire exits. I've learned my lesson: you should never, ever, ever join another service unless they've got working fire exits.
Hey look at this (permalink)
- It's Time to Break Up Big Medicine https://www.thebignewsletter.com/p/its-time-to-break-up-big-medicine
- 'Kids for Cash' Judge has sentence commuted by President Biden https://www.wnep.com/article/news/investigations/action-16/kids-for-cash-the-new-crisis/kids-for-cash-judge-has-sentence-commuted-by-president-biden-pennsylvania/523-1be56573-6940-4e45-8daa-5a03abd67464
- I have a cunning plan … https://www.antipope.org/charlie/blog-static/2024/12/i-have-a-cunning-plan.html
This day in history (permalink)
#20yrsago Advertising techniques that Web-users hate https://www.nngroup.com/articles/most-hated-advertising-techniques/
#20yrsago Haunted Mansion’s cobwebbing-and-griming regimen https://web.archive.org/web/20041216114700/http://disney.go.com/inside/issues/stories/v041214.html
#15yrsago Danish police abuse climate-change demonstrators https://web.archive.org/web/20091215061955/https://itsgettinghotinhere.org/2009/12/13/crackdown-in-copenhagen/
#15yrsago Three strikes law reintroduced in New Zealand https://memex.craphound.com/2009/12/15/three-strikes-law-reintroduced-in-new-zealand/
#15yrsago SFPD won’t investigate hit-and-run car-v-bike accident https://web.archive.org/web/20091220112602/https://jwz.livejournal.com/1139721.html
#15yrsago Comical legal case names https://kevinunderhill.typepad.com/lowering_the_bar/comical_case_names.html
#10yrsago Photographer beaten, detained in London for being “cocky” to policeman who implies she is a terrorist https://www.youtube.com/watch?v=GAs4gZY1bro
#10yrsago HOWTO: Make glue-gun sticks out of sugar for building gingerbread houses https://memex.craphound.com/2014/12/15/howto-make-glue-gun-sticks-out-of-sugar-for-building-gingerbread-houses/
#10yrsago New York City’s worst landlords https://web.archive.org/web/20141216015548/https://blogs.villagevoice.com/runninscared/2014/12/the_ten_worst_new_york_city_landlords_of_2014.php
#10yrsago Macedonia helped CIA kidnap and torture a German they mistook for a terrorist https://www.thelocal.de/20141210/cia-tortured-german-mistaken-for-terrorist
#10yrsago Why it matters whether or not torture works https://www.theatlantic.com/health/archive/2014/12/the-humane-interrogation-technique-that-works-much-better-than-torture/383698/
#5yrsago Spain’s Xnet: leak-publishing corruption-fighters https://www.smh.com.au/world/spains-wikileaksinspired-xnet-peaceful-guerrilla-movement-fights-graft-using-technology-courts-20141213-126e1d.html
#5yrsago DRM screws blind people https://www.wired.com/2014/12/e-books-for-the-blind-should-be-legal/
#1yrago It all started with a mouse https://pluralistic.net/2023/12/15/mouse-liberation-front/#free-mickey
Upcoming appearances (permalink)
- Should a Public Telecom Be An Election Issue/Davenport NDP (Remote), Dec 15 https://www.davenportndp.ca/public_telecom_town_hall
- ISSA-LA Holiday Celebration keynote (Los Angeles), Dec 18 https://issala.org/event/issa-la-december-18-dinner-meeting/
- Picks and Shovels with Charlie Jane Anders (Menlo Park), Feb 17 https://www.keplers.org/upcoming-events-internal/cory-doctorow
- Picks and Shovels with Wil Wheaton (Los Angeles), Feb 18 https://www.dieselbookstore.com/event/Cory-Doctorow-Wil-Wheaton-Author-signing
- Picks and Shovels with Dan Savage (Seattle), Feb 19 https://www.eventbrite.com/e/cory-doctorow-with-dan-savage-picks-and-shovels-a-martin-hench-novel-tickets-1106741957989
- Cloudfest (Europa Park), Mar 17-20 https://cloudfest.link/
- Picks and Shovels at Imagine! Belfast (Remote), Mar 24 https://www.eventbrite.co.uk/e/cory-doctorow-in-conversation-with-alan-meban-tickets-1106421399189
- DeepSouthCon63 (New Orleans), Oct 10-12, 2025 http://www.contraflowscifi.org/
Recent appearances (permalink)
- Can we avoid the enshittification of clean-energy tech? (Volts.wtf) https://www.volts.wtf/p/can-we-avoid-the-enshittification
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It (HOPE XV) https://www.youtube.com/watch?v=YrciT_dc2sc&list=PLcajvRZA8E0_tLLEh1COeAv-TcaDna2k1&index=32
- How To Keep IoT From Becoming An IoTrash (Def Con) https://www.youtube.com/watch?v=tA7bpp8qXxI
Latest books (permalink)
- "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3062/Available_Feb_20th%3A_The_Bezzle_HB.html#/)
- "The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3007/Pre-Order_Signed_Copies%3A_The_Lost_Cause_HB.html#/)
- "The Internet Con": a nonfiction book about interoperability and Big Tech (Verso), September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245)
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com). Signed copies at Dark Delicacies (US) and Forbidden Planet (UK): https://forbiddenplanet.com/385004-red-team-blues-signed-edition-hardcover/
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid" (with Rebecca Giblin): on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 (https://chokepointcapitalism.com)
- "Attack Surface": the third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies (https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html)
- "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution (https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59?sk=f6cd10e54e20a07d4c6d0f3ac011af6b). Signed copies at Dark Delicacies (https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
- "Little Brother/Homeland": a reissue omnibus edition with a new introduction by Edward Snowden (https://us.macmillan.com/books/9781250774583). Personalized/signed copies at Dark Delicacies (https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html)
- "Poesy the Monster Slayer": a picture book about monsters, bedtime, gender, and kicking ass. Order at https://us.macmillan.com/books/9781626723627; get a personalized, signed copy at Dark Delicacies (https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/)
Upcoming books (permalink)
- Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025
- Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025
Colophon (permalink)
Today's top sources:
Currently writing:
- Enshittification: a nonfiction book about platform decay for Farrar, Straus, Giroux. Status: second pass edit underway (readaloud)
- A Little Brother short story about DIY insulin. PLANNING
- Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS FEB 2025
Latest podcast: Spill, part six (FINALE) (a Little Brother story) https://craphound.com/littlebrother/2024/12/08/spill-part-six-finale-a-little-brother-story/
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
https://pluralistic.net
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
taking psychic damage reading the lawsuit by Justin Sun’s Bit Global against Coinbase
i have to respect the argument that “memecoins... unlike wBTC have no inherent value other than demand created by their memetic potential as jokes”. your honor, wBTC’s lack of inherent value is for a different reason entirely
Today's links
- The GOP is not the party of workers: Which is why the Democrats should be.
- Hey look at this: Delights to delectate.
- This day in history: 2009, 2019, 2023
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
The GOP is not the party of workers (permalink)
The GOP says it's the "party of the working class," and indeed, it has promoted numerous policies that attack select groups within the American ruling class. But just because the party of unlimited power for billionaires is attacking a few of its own, that doesn't make it a friend to working people.
The best way to understand the GOP's relationship to workers is through "boss politics" – that's where one group of elites consolidates its power by crushing rival elites. All elites are bad for working people, so any attack on any elite is, in some narrow sense, "pro-worker." What's more, all elites cheat the system, so any attack on any elite is, again, "pro-fairness."
In other words, if you want to prosecute a company for hurting workers, customers, neighbors and the environment, you have a target-rich environment. But just because you crush a corrupt enterprise that's hurting workers, it doesn't mean you did it for the workers, and – most importantly – it doesn't mean that you will take workers' side next time.
Autocrats do this all the time. Xi Jinping engaged in a massive purge of officials who were indeed corrupt – but he only targeted the corrupt officials who made up his rivals' power-base. His own corrupt officials were unscathed:
Putin did this, too. Russia's oligarchs are, to a one, monsters. When Putin defenestrates a rival – confiscates their fortune and sends them to prison – he acts against a genuinely corrupt criminal and brings some small measure of justice to that criminal's victims. But he only does this to the criminals who don't support him:
https://www.npr.org/sections/money/2022/03/29/1088886554/how-putin-conquered-russias-oligarchy
The Trump camp – notably JD Vance and Josh Hawley – has vowed to keep up the work of the FTC under Lina Khan, the generationally brilliant FTC Chair who accomplished more in four years than her predecessors did in 40. Trump just announced that he would replace Khan with Andrew Ferguson, who sounds like an LLM's bad approximation of Khan, promising to deal with "woke Big Tech" but also to end the FTC's "war on mergers." Ferguson may well plow ahead with the giant, important tech antitrust cases that Khan brought, but he'll do so because this is good grievance politics for Trump's base, and not because Trump or Ferguson are committed to protecting the American people from corporate predation itself:
https://pluralistic.net/2024/11/12/the-enemy-of-your-enemy/#is-your-enemy
Writing in his newsletter today, Hamilton Nolan describes all the ways that the GOP plans to destroy workers' lives while claiming to be a workers' party, and also all the ways the Dems failed to protect workers and so allowed the GOP to outlandishly claim to be for workers:
https://www.hamiltonnolan.com/p/you-cant-rebrand-a-class-war
For example, if Ferguson limits his merger enforcement to "woke Big Tech" companies while ending the "war on mergers," he won't stop the next Albertsons/Kroger merger, a giant supermarket consolidation that just collapsed because Khan's FTC fought it. The Albertsons/Kroger merger had two goals: raising food prices and slashing workers' wages, primarily by eliminating union jobs. Fighting "woke Big Tech" while waving through mergers between giant companies seeking to price-gouge and screw workers does not make you the party of the little guy, even if smashing Big Tech is the right thing to do.
Trump's hatred of Big Tech is highly selective. He's not proposing to do anything about Elon Musk, of course, except to make Musk even richer. Musk's net worth has hit $447b because the market is buying stock in his companies, which stand to make billions from cozy, no-bid federal contracts. Musk is a billionaire welfare queen who hates workers and unions and has a long rap-sheet of cheating, maiming and tormenting his workforce. A pro-worker Trump administration could add labor conditions to every federal contract, disqualifying businesses that cheat workers and union-bust from getting government contracts.
Instead, Trump is getting set to blow up the NLRB, an agency that Reagan put into a coma 40 years ago, until the Sanders/Warren wing of the party forced Biden to install some genuinely excellent people, like general counsel Jennifer Abruzzo, who – like Khan – did more for workers in four years than her predecessors did in 40. Abruzzo and her colleagues could have remained in office for years to come, if Democratic Senators had been able to confirm board member Lauren McFerran (or if two of those "pro-labor" Republican Senators had voted for her). Instead, Joe Manchin and Kyrsten Sinema rushed to the Senate chamber at the last minute in order to vote McFerran down and give Trump total control over the NLRB:
https://www.axios.com/2024/12/11/schumer-nlrb-vote-manchin-sinema
This latest installment in the Manchin Sinematic Universe is a reminder that the GOP's ability to rebrand as the party of workers is largely the fault of Democrats, whose corporate wing has been at war with workers since the Clinton years (NAFTA, welfare reform, etc). Today, that same corporate wing claims that the reason Dems were wiped out in the 2024 election is that they were too left, insisting that the path to victory in the midterms and 2028 is to fuck workers even worse and suck up to big business even more.
We have to take the party back from billionaires. No Dem presidential candidate should ever again have "proxies" who campaign to fire anti-corporate watchdogs like Lina Khan. The path to a successful Democratic Party runs through worker power, and the only reliable path to worker power runs through unions.
Nolan's written frequently about how bad many union leaders are today. It's not just that union leaders are sitting on historically unprecedented piles of cash while doing less organizing than ever – at a moment when unions are more popular than they've been in a century and workers are clamoring to join them, even as membership declines. It's also that some union leaders have actually endorsed Trump – even as the rank and file get ready to strike:
The GOP is going to do everything it can to help a tiny number of billionaires defeat hundreds of millions of workers in the class war. A future Democratic Party victory will come from taking a side in that class war – the workers' side. As Nolan writes:
If billionaires are destroying our country in order to serve their own self-interest, the reasonable thing to do is not to try to quibble over a 15% or a 21% corporate tax rate. The reasonable thing to do is to eradicate the existence of billionaires. If everyone knows our health care system is a broken monstrosity, the reasonable thing to do is not to tinker around the edges. The reasonable thing to do is to advocate Medicare for All. If there is a class war—and there is—and one party is being run completely by the upper class, the reasonable thing is for the other party to operate in the interests of the other, much larger, much needier class. That is quite rational and ethical and obvious in addition to being politically wise.
Nolan's remedy for the Democratic Party is simple and straightforward, if not easy:
The answer is spend every last dollar we have to organize and organize and strike and strike. Women are workers. Immigrants are workers. The poor are workers. A party that is banning abortion and violently deporting immigrants and economically assaulting the poor is not a friend to the labor movement, ever. (An opposition party that cannot rouse itself to participate on the correct side of the ongoing class war is not our friend, either—the difference is that the fascists will always try to actively destroy unions, while the Democrats will just not do enough to help us, a distinction that is important to understand.)
Cosigned.
Hey look at this (permalink)
- Gaytheist https://www.lonniecomics.com
- YouTube quietly made some of its web embeds worse, including ours https://www.theverge.com/2024/12/12/24318124/youtube-player-cant-click-title-sigh
- FTC Revives 1930s Law in Suing Alcohol Distributor https://prospect.org/economy/2024-12-12-ftc-revives-1930s-law-suing-alcohol-distributor/
This day in history (permalink)
#15yrsago Philosophy prof won’t go to jail for making unofficial Derrida translations available to students https://www.ip-watch.org/2009/12/14/restoration-of-french-philosopher’s-work-online-in-argentina-seen-as-an-opening/
#15yrsago How electricity became a right, and what it means for broadband https://web.archive.org/web/20091217080912/http://publicola.net/?p=20687
#15yrsago Haagen Dazs opens no-Indians-allowed store in Delhi https://timesofindia.indiatimes.com/blogs/randomaccess/sorry-indians-not-allowed1/
#5yrsago McKinsey’s internal mythology compares management consultants to “the Marine Corps, the Roman Catholic Church, and the Jesuits” https://www.propublica.org/article/how-mckinsey-makes-its-own-rules
#5yrsago Lawmaker admits not independently researching lobbyist’s claim that ectopic fetuses could be reimplanted in the uterus, blames medical journals https://www.wosu.org/news/2019-12-12/lawmaker-says-he-didnt-research-ectopic-pregnancy-procedure-before-adding-to-bill#stream/0
#5yrsago Private equity looters startled to be called out by name in Taylor Swift award-acceptance speech https://nypost.com/2019/12/13/private-equity-stunned-to-be-dragged-into-battle-between-taylor-swift-and-scooter-braun/
#5yrsago Radicalized is one of the Wall Street Journal’s top sf books of 2019! https://memex.craphound.com/2019/12/14/radicalized-is-one-of-the-wall-street-journals-top-sf-books-of-2019/
#1yrago How the NYPD defeated bodycams https://pluralistic.net/2023/12/14/all-cams-are-private/#spoil-the-bushel
Upcoming appearances (permalink)
- Should a Public Telecom Be An Election Issue/Davenport NDP (Remote), Dec 15
https://www.davenportndp.ca/public_telecom_town_hall
- ISSA-LA Holiday Celebration keynote (Los Angeles), Dec 18
https://issala.org/event/issa-la-december-18-dinner-meeting/
- Picks and Shovels with Charlie Jane Anders (Menlo Park), Feb 17
https://www.keplers.org/upcoming-events-internal/cory-doctorow
- Picks and Shovels with Dan Savage (Seattle), Feb 19
https://www.eventbrite.com/e/cory-doctorow-with-dan-savage-picks-and-shovels-a-martin-hench-novel-tickets-1106741957989
- Cloudfest (Europa Park), Mar 17-20
https://cloudfest.link/
- Picks and Shovels at Imagine! Belfast (Remote), Mar 24
https://www.eventbrite.co.uk/e/cory-doctorow-in-conversation-with-alan-meban-tickets-1106421399189
- DeepSouthCon63 (New Orleans), Oct 10-12, 2025
http://www.contraflowscifi.org/
Recent appearances (permalink)
- Can we avoid the enshittification of clean-energy tech? (Volts.wtf)
https://www.volts.wtf/p/can-we-avoid-the-enshittification
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It (HOPE XV)
https://www.youtube.com/watch?v=YrciT_dc2sc&list=PLcajvRZA8E0_tLLEh1COeAv-TcaDna2k1&index=32
- How To Keep IoT From Becoming An IoTrash (Def Con)
https://www.youtube.com/watch?v=tA7bpp8qXxI
I wonder if Coinbase routinely calls up newsrooms to try to blackball people who criticize them, or if I'm just special
Today's links
- A Democratic media strategy to save journalism and the nation: Hire journalists, publish the news, win the country.
- Hey look at this: Delights to delectate.
- This day in history: 2004, 2009, 2014, 2019, 2023
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
A Democratic media strategy to save journalism and the nation (permalink)
As unbearably cringe as the hunt for a "leftist Joe Rogan" is, it is (to use a shopworn phrase) "directionally correct." Democrats suck at getting their message out, and that exacts a high electoral cost.
The right has an extremely well-funded media ecosystem of high-paid bullshitters backed by algorithm-gaming SEO dickheads. This system isn't necessarily supposed to turn a profit or even break even: the point of PragerU isn't to score ad revenue, it's to ensure that anyone who googles "what the fuck causes inflation" gets 25 minutes of relatable, upbeat, cheerfully sociopathic Austrian economics jammed into their eyeballs. Far-right news isn't a for-profit concern, it's a loss-leader for oligarch-friendly policies. It's a steal: a million bucks' worth of news buys America's ultra-rich a billion dollars' worth of tax-cuts and the right to maim their workers and poison their customers for profit.
Meanwhile, the Democrats have historically relied on the "traditional media" to carry their messages, on the grounds that reality has a well-known leftist bias, so any news outlet that hews to "journalistic ethics" will publish the truth, and the truth will weigh in favor of Democratic positions: trans people are humans, racism is real, abortion isn't murder, housing is a market failure, the planet is on fire, etc, etc, etc.
This is a stupid policy, and it has failed. The "respectable" news media hews to a self-imposed code of "balance" and "neutrality" that is easily gamed: "some people say that Haitians don't eat pet dogs, some people say they do, let's report both sides!" This is called "the view from nowhere" and it gets Democrats precisely nowhere:
http://archive.pressthink.org/2008/03/14/pincus_neutrality.html
Balance and neutrality are bullshit, an excuse that has been so thoroughly weaponized by billionaires and their lickspittles that anyone who takes it seriously demonstrates comprehensively that they, themselves, are deeply unserious:
Press neutrality – the view from nowhere – isn't some eternal verity. In terms of the history of the press, it's an idea that's about ten seconds old. The glory days of the news were dominated by papers with names like The Smallville Democrat and The Ruling Class Republican. Most of the world boggles at the idea that a news outlet wouldn't declare its political posture. Britons know that the Telegraph is the Torygraph; that the Guardian is in the tank for Labour (and specifically, committed to enabling Blairite/Starmerite purges of the left); the Mirror is a leftist tabloid; and the Mail is so far right that its editorial board considers Attila the Hun "woke."
Writing for The American Prospect – an excellent leftist news outlet – Ryan Cooper proposes a solution to the Democratic media gap that's way better than the hunt for the elusive "leftist Joe Rogan": sponsoring explicitly Democratic news outlets:
https://prospect.org/politics/2024-12-12-democrats-lost-propaganda-war/
The country is a bleak landscape of news deserts where voters literally didn't hear about what Trump was saying he would do, and, if they heard about it, they didn't hear from anyone who could explain what it meant. The average normie voter doesn't know what a "tariff" is, and chances are they think it's a tax that other countries inexplicably pay for the privilege of selling very cheap things to Americans.
Ironically, this news desert is also a crowded field of hungry, unemployed, talented journalists. What if Dems funded free newsgathering and publication in news deserts that told the truth? What if these news outlets, by dint of being an explicitly partisan, party-subsidized project, refused to adopt all the anti-reader practices of other websites, like disgusting surveillance, intrusive advertising, AI slop, email-soliciting pop-ups, and all the other crap that makes the news worse and worse every day?
Cooper recounts how this was actually tried on a small scale, to modest good effect, when the Center for American Progress subsidized ThinkProgress, an explicitly leftist news outlet. This was going great until 2019, when corporate Dems and their megadonors killed it because ThinkProgress had the temerity to report on their corrupt dealings:
https://www.thedailybeast.com/thinkprogress-a-top-progressive-news-site-is-shutting-down/
And, Cooper points out, this isn't what happens with far-right subsidized news. Right-wing influencers, personalities and writers can stray pretty far from the party line without getting shut down.
I love the idea of a disenshittified, explicitly political leftist Democratic news media. Imagine a newsroom whose purpose is to get its message repeated as widely as possible. It wouldn't have a paywall – it would be Creative Commons Attribution-only, allowing for commercial republication by anyone who wants to reprint it, so long as they link back to it. It wouldn't wring its hands over AI ingestion or whether a slop site that rewrote its articles got to the top of Google News. That's fine! If the point is to get people to understand your point of view – and not to attract clicks or eyeballs – other people repackaging your content and finding ways to spread it is a feature, not a bug.
Back in the Napster Wars, entertainment industry shills – like Hilary Rosen, who oversaw a campaign to sue tens of thousands of children before becoming a major Democratic Party power-broker – used to tell us that "you can't compete with free." That's not entirely true, but it's not entirely false, either. If your news is a loss-leader for a democratic society that addresses human flourishing and a habitable planet, then you can make that news free-as-in-speech and free-as-in-beer, and avoid all the suckitude that makes reading "real" news so fucking garbage.
For the past five years, I've been publishing a newsletter – this thing you're reading now – that has no analytics, ads, tracking, pop-ups, or other trash. As a writer, it's profoundly satisfying and liberating, because all I have to care about is whether people engage with my ideas. I literally have no idea how many people read this, but I know everything people say about it.
That's how the news worked back in the good old days that everyone says we need to return to. Writers and editors measured the success of a story by how the public reacted to it, not by clicks or metrics that told you how far someone scrolled before giving up. The supposed benefits of "data-driven" editorial policy have not materialized – the "data-driven" part is really a search for the equilibrium between how surveillant and obnoxious a website can be and the point at which you stop reading it forever.
Outlets like ProPublica have done well by adopting much of this program, albeit without any explicit leftist agenda (the fact that they seem leftist reflects nothing more than their commitment to reporting the truth, e.g., that Clarence Thomas is a lavishly corrupt puppet of billionaires who've showered him with riches).
The fact that they've been as successful as they are on a national beat – and partnering with the scant few regional papers to do some local coverage – just proves the point. The Democratic Party doesn't need its own Joe Rogan – they need a nationwide network of local outlets, sponsored by the party, committed to never enshittifying, bringing relevant, timely news to a nation in desperate need of it.
Hey look at this (permalink)
- DJ Riko Merry Mixmas 2024 https://www.youtube.com/watch?v=VyZ7-s05r7k&t=480s
- The Lost Chronicles of Oz https://www.kickstarter.com/projects/freeforall/the-lost-chronicles-of-oz/
- “The Ancient Engineer” by Bruce Sterling https://bruces.medium.com/the-ancient-engineer-by-bruce-sterling-2016-167351c82385
This day in history (permalink)
#20yrsago Forever War with better sex, Starship Troopers without the lectures: Old Man’s War https://memex.craphound.com/2004/12/12/forever-war-with-better-sex-starship-troopers-without-the-lectures-old-mans-war/
#20yrsago Cable companies will expire your Six Feet Under recordings after 2-4 weeks https://memex.craphound.com/2004/12/12/cable-companies-will-expire-your-six-feet-under-recordings-after-2-4-weeks/
#15yrsago FDIC sends a big F-U: completely blacked out documents in response to WaMu takeover freedom of information requests https://web.archive.org/web/20100114010713if_/https://www.bizjournals.com/seattle/blog/2009/12/the_fight_for_wamu_documents.html
#10yrsago IBM’s banking security software demands the right to spy on you https://yro.slashdot.org/story/14/12/11/2233234/bank-security-software-eula-allows-spying-on-users
#10yrsago US Christian terrorism: the other white meat https://web.archive.org/web/20141205144046/https://thinkprogress.org/justice/2014/12/04/3599271/austin-shooter-christian-extremism/
#10yrsago Senate IP address vandalizes Wikipedia to scrub “torture” from CIA torture report https://mashable.com/archive/senate-wikipedia-torture-report
#5yrsago Teespring removes Techdirt’s “Copying is Not Theft” tees for copyright infringement, and won’t discuss the matter any further https://www.techdirt.com/2019/12/12/teespring-takes-down-our-copying-is-not-theft-gear-refuses-to-say-why/
#5yrsago The three biggest Chinese business scams that target foreign firms https://web.archive.org/web/20200107202820/https://www.chinalawblog.com/2019/12/china-scams-our-annual-holiday-edition.html
#5yrsago A WeChat-based “mobile court” presided over by a chatbot has handled 3m legal procedures since March https://web.archive.org/web/20191207192051/https://www.japantimes.co.jp/news/2019/12/07/asia-pacific/crime-legal-asia-pacific/ai-judges-verdicts-via-chat-app-brave-new-world-chinas-digital-courts/#.Xev7n2bP1qY
#5yrsago Facebook promised to provide academics data to study disinformation, but their foot-dragging has endangered the whole project https://socialscience.one/blog/public-statement-european-advisory-committee-social-science-one
#5yrsago DJ Riko is back with the 18th annual Merry Mixmas mashup album! http://djriko.com/mixmases.htm
#5yrsago Family puts Ring camera in children’s room, discovers that hacker is watching their kids 24/7, taunting them through the speaker https://www.vice.com/en/article/how-hackers-are-breaking-into-ring-cameras/
#5yrsago 2019 was the year of voice assistant privacy dumpster fires https://www.bloomberg.com/news/features/2019-12-11/silicon-valley-got-millions-to-let-siri-and-alexa-listen-in
#1yrago An Epic antitrust loss for Google https://pluralistic.net/2023/12/12/im-feeling-lucky/#hugger-mugger
Upcoming appearances (permalink)
- IA et “merdification” d’internet: peut-on envisager un nouveau web? (“AI and the enshittification of the internet: can we envision a new web?”) (Remote), Dec 12
https://www.unige.ch/comprendre-le-numerique/conferences-publiques1/cycle-5-2024-2025/ia-et-merdification-dinternet-peut-envisager-un-nouveau-web/
- Should a Public Telecom Be An Election Issue/Davenport NDP (Remote), Dec 15
https://www.davenportndp.ca/public_telecom_town_hall
- ISSA-LA Holiday Celebration keynote (Los Angeles), Dec 18
https://issala.org/event/issa-la-december-18-dinner-meeting/
- Picks and Shovels with Charlie Jane Anders (Menlo Park), Feb 17
https://www.keplers.org/upcoming-events-internal/cory-doctorow
- Picks and Shovels with Dan Savage (Seattle), Feb 19
https://www.eventbrite.com/e/cory-doctorow-with-dan-savage-picks-and-shovels-a-martin-hench-novel-tickets-1106741957989
- Cloudfest (Europa Park), Mar 17-20
https://cloudfest.link/
- DeepSouthCon63 (New Orleans), Oct 10-12, 2025
http://www.contraflowscifi.org/
Recent appearances (permalink)
- Can we avoid the enshittification of clean-energy tech? (Volts.wtf)
https://www.volts.wtf/p/can-we-avoid-the-enshittification
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It (HOPE XV)
https://www.youtube.com/watch?v=YrciT_dc2sc&list=PLcajvRZA8E0_tLLEh1COeAv-TcaDna2k1&index=32
- How To Keep IoT From Becoming An IoTrash (Def Con)
https://www.youtube.com/watch?v=tA7bpp8qXxI
Recently I’ve been thinking about how everything that happens in the terminal is some combination of:
- Your operating system’s job
- Your shell’s job
- Your terminal emulator’s job
- The job of whatever program you happen to be running (like `top` or `vim` or `cat`)
The first three (your operating system, shell, and terminal emulator) are all kind of known quantities – if you’re using bash in GNOME Terminal on Linux, you can more or less reason about how all of those things interact, and some of their behaviour is standardized by POSIX.
But the fourth one (“whatever program you happen to be running”) feels like it could do ANYTHING. How are you supposed to know how a program is going to behave?
This post is kind of long so here’s a quick table of contents:
- programs behave surprisingly consistently
- these are meant to be descriptive, not prescriptive
- it’s not always obvious which “rules” are the program’s responsibility to implement
- rule 1: noninteractive programs should quit when you press `Ctrl-C`
- rule 2: TUIs should quit when you press `q`
- rule 3: REPLs should quit when you press `Ctrl-D` on an empty line
- rule 4: don’t use more than 16 colours
- rule 5: vaguely support readline keybindings
- rule 5.1: `Ctrl-W` should delete the last word
- rule 6: disable colours when writing to a pipe
- rule 7: `-` means stdin/stdout
- these “rules” take a long time to learn
programs behave surprisingly consistently
As far as I know, there are no real standards for how programs in the terminal should behave – the closest things I know of are:
- POSIX, which mostly dictates how your terminal emulator / OS / shell should work together. I think it does specify a few things about how core utilities like `cp` should work, but AFAIK it doesn’t have anything to say about how, for example, `htop` should behave.
- these command line interface guidelines
But even though there are no standards, in my experience programs in the terminal behave in a pretty consistent way. So I wanted to write down a list of “rules” that programs mostly follow.
these are meant to be descriptive, not prescriptive
My goal here isn’t to convince authors of terminal programs that they should follow any of these rules. There are lots of exceptions to these rules, and often there’s a good reason for the exception.
But it’s very useful for me to know what behaviour to expect from a random new terminal program that I’m using. Instead of “uh, programs could do literally anything”, it’s “ok, here are the basic rules I expect, and then I can keep a short mental list of exceptions”.
So I’m just writing down what I’ve observed about how programs behave in my 20 years of using the terminal, why I think they behave that way, and some examples of cases where that rule is “broken”.
it’s not always obvious which “rules” are the program’s responsibility to implement
There are a bunch of common conventions that I think are pretty clearly the program’s responsibility to implement, like:
- config files should go in `~/.BLAHrc` or `~/.config/BLAH/FILE` or `/etc/BLAH/` or something
- `--help` should print help text
- programs should print “regular” output to stdout and errors to stderr
But in this post I’m going to focus on things that it’s not 100% obvious are the program’s responsibility. For example it feels to me like a “law of nature” that pressing `Ctrl-D` should quit a REPL, but programs often need to explicitly implement support for it – even though `cat` doesn’t need to implement `Ctrl-D` support, `ipython` does. (more about that in “rule 3” below)
Understanding which things are the program’s responsibility makes it much less surprising when different programs’ implementations are slightly different.
rule 1: noninteractive programs should quit when you press `Ctrl-C`
The main reason for this rule is that noninteractive programs will quit by default on `Ctrl-C` if they don’t set up a `SIGINT` signal handler, so this is kind of a “you should act like the default” rule.
Something that trips a lot of people up is that this doesn’t apply to interactive programs like `python3` or `bc` or `less`. This is because in an interactive program, `Ctrl-C` has a different job – if the program is running an operation (like for example a search in `less` or some Python code in `python3`), then `Ctrl-C` will interrupt that operation but not stop the program.
As an example of how this works in an interactive program: here’s the code in prompt-toolkit (the library that IPython uses for handling input) that aborts a search when you press `Ctrl-C`.
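To make that concrete, here’s a minimal sketch of my own (in Python; the prompt and messages are invented for illustration) of how an interactive program overrides the default so that `Ctrl-C` interrupts the current operation instead of killing the process:

```python
import signal

# By default, Ctrl-C delivers SIGINT and the process dies (in Python,
# via an uncaught KeyboardInterrupt). Installing a handler replaces
# that default – this is what makes Ctrl-C "abort the operation"
# in interactive programs.
def on_sigint(signum, frame):
    print("\n(interrupted – program still running, Ctrl-D to quit)")

signal.signal(signal.SIGINT, on_sigint)

while True:
    try:
        line = input("> ")
    except EOFError:  # Ctrl-D on an empty line (see rule 3)
        break
    print(f"echo: {line}")
```

A noninteractive program that never installs a handler gets the quit-on-`Ctrl-C` behaviour for free, which is why rule 1 is mostly about *not* overriding the default.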
rule 2: TUIs should quit when you press `q`
TUI programs (like `less` or `htop`) will usually quit when you press `q`.
This rule doesn’t apply to any program where pressing `q` to quit wouldn’t make sense, like `tmux` or text editors.
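Implementing the rule is typically a one-line check in the TUI’s event loop – here’s a toy sketch of mine using Python’s standard `curses` module (not code from `less` or `htop`):

```python
import curses

def main(stdscr):
    # draw something, then wait for keys; "q" is the conventional quit key
    stdscr.addstr(0, 0, "a very small TUI – press q to quit")
    while True:
        if stdscr.getkey() == "q":
            break

# wrapper() puts the terminal into the right mode on entry
# and restores it on exit, even if the program crashes
curses.wrapper(main)
```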
rule 3: REPLs should quit when you press `Ctrl-D` on an empty line
REPLs (like `python3` or `ed`) will usually quit when you press `Ctrl-D` on an empty line. This rule is similar to the `Ctrl-C` rule – the reason for this is that by default if you’re running a program (like `cat`) in “cooked mode”, then the operating system will return an `EOF` when you press `Ctrl-D` on an empty line.
Most of the REPLs I use (sqlite3, python3, fish, bash, etc) don’t actually use cooked mode, but they all implement this keyboard shortcut anyway to mimic the default behaviour.
For example, here’s the code in prompt-toolkit that quits when you press Ctrl-D, and here’s the same code in readline.
I actually thought that this one was a “Law of Terminal Physics” until very recently because I’ve basically never seen it broken, but you can see that it’s just something that each individual input library has to implement in the links above.
Someone pointed out that the Erlang REPL does not quit when you press `Ctrl-D`, so I guess not every REPL follows this “rule”.
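In a hand-rolled Python REPL, following this rule amounts to catching `EOFError` – a sketch of my own, not code from any of the REPLs above:

```python
# input() raises EOFError when it reads an EOF – which is what the
# terminal driver produces when you press Ctrl-D on an empty line
while True:
    try:
        line = input(">>> ")
    except EOFError:
        print()  # move past the prompt on the way out, like python3 does
        break
    print(f"you typed: {line!r}")
```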
rule 4: don’t use more than 16 colours
Terminal programs rarely use colours other than the base 16 ANSI colours. This is because if you specify colours with a hex code, it’s very likely to clash with some users’ background colour. For example if I print out some text as `#EEEEEE`, it would be almost invisible on a white background, though it would look fine on a dark background.
But if you stick to the default 16 base colours, you have a much better chance that the user has configured those colours in their terminal emulator so that they work reasonably well with their background colour. Another reason to stick to the default base 16 colours is that it makes fewer assumptions about what colours the terminal emulator supports.
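You can see the difference with two escape sequences (a quick illustration of mine): `\033[31m` asks for whatever the user’s terminal considers “red”, while the 24-bit `38;2;R;G;B` form hard-codes an exact colour:

```python
# colour 1 of the base-16 palette: whatever "red" the user configured
print("\033[31mred from the user's 16-colour palette\033[0m")

# hard-coded 24-bit #EEEEEE: fine on a dark background,
# nearly invisible on a white one
print("\033[38;2;238;238;238mhard-coded #EEEEEE\033[0m")
```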
The only programs I usually see breaking this “rule” are text editors, for example Helix by default will use a purple background which is not a default ANSI colour. It seems fine for Helix to break this rule since Helix isn’t a “core” program and I assume any Helix user who doesn’t like that colorscheme will just change the theme.
rule 5: vaguely support readline keybindings
Almost every program I use supports `readline` keybindings if it would make sense to do so. For example, here are a bunch of different programs and a link to where they define `Ctrl-E` to go to the end of the line:
- ipython (Ctrl-E defined here)
- atuin (Ctrl-E defined here)
- fzf (Ctrl-E defined here)
- zsh (Ctrl-E defined here)
- fish (Ctrl-E defined here)
- tmux’s command prompt (Ctrl-E defined here)
None of those programs actually uses `readline` directly; they just sort of mimic emacs/readline keybindings. They don’t always mimic them exactly: for example atuin seems to use `Ctrl-A` as a prefix, so `Ctrl-A` doesn’t go to the beginning of the line.
Also, all of these programs seem to implement their own internal cut and paste buffers, so you can delete a line with `Ctrl-U` and then paste it with `Ctrl-Y`.
The exceptions to this are:
- some programs (like `git`, `cat`, and `nc`) don’t have any line editing support at all (except for backspace, `Ctrl-W`, and `Ctrl-U`)
- as usual, text editors are an exception: every text editor has its own approach to editing text
I wrote more about this “what keybindings does a program support?” question in entering text in the terminal is complicated.
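As an aside, some languages let you get these keybindings for free: in CPython, on systems where GNU readline or libedit is available (an assumption – not every platform has it), merely importing the standard-library `readline` module upgrades `input()` with the full set of bindings:

```python
import readline  # imported only for its side effect on input()

# With the import above, input() supports Ctrl-A, Ctrl-E, Ctrl-U,
# Ctrl-Y, arrow-key history, and the rest of the readline bindings.
# Without it, you only get the OS's cooked-mode editing
# (backspace, Ctrl-W, Ctrl-U).
line = input("try some readline keybindings> ")
print(line)
```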
rule 5.1: `Ctrl-W` should delete the last word
I’ve never seen a program (other than a text editor) where `Ctrl-W` doesn’t delete the last word. This is similar to the `Ctrl-C` rule – by default if a program is in “cooked mode”, the OS will delete the last word if you press `Ctrl-W`, and delete the whole line if you press `Ctrl-U`. So usually programs will imitate that behaviour.
I can’t think of any exceptions to this other than text editors but if there are I’d love to hear about them!
rule 6: disable colours when writing to a pipe
Most programs will disable colours when writing to a pipe. For example:
- `rg blah` will highlight all occurrences of `blah` in the output, but if the output is to a pipe or a file, it’ll turn off the highlighting.
- `ls --color=auto` will use colour when writing to a terminal, but not when writing to a pipe
Both of those programs will also format their output differently when writing to the terminal: `ls` will organize files into columns, and ripgrep will group matches with headings.
If you want to force the program to use colour (for example because you want to look at the colour), you can use `unbuffer` to make the program’s output a tty, like this:

unbuffer rg blah | less -R
I’m sure that there are some programs that “break” this rule but I can’t think of any examples right now. Some programs have a `--color` flag that you can use to force colour to be on; in the example above you could also do `rg --color=always blah | less -R`.
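The usual implementation of this auto-detection is an `isatty` check on stdout. A minimal sketch in Python (the colour constants and message are mine):

```python
import sys

RED = "\033[31m"
RESET = "\033[0m"

def colourize(text: str) -> str:
    # stdout is a terminal: use colour; stdout is a pipe or file: plain text
    if sys.stdout.isatty():
        return f"{RED}{text}{RESET}"
    return text

print(colourize("a match!"))
```

Run directly, this prints in red; piped through `cat`, it prints plain text – exactly the `--color=auto` behaviour described above.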
rule 7: `-` means stdin/stdout
Usually if you pass `-` to a program instead of a filename, it’ll read from stdin or write to stdout (whichever is appropriate). For example, if you want to format the Python code that’s on your clipboard with `black` and then copy it, you could run:

pbpaste | black - | pbcopy
(`pbpaste` is a Mac program; you can do something similar on Linux with `xclip`)
My impression is that most programs implement this if it would make sense and I can’t think of any exceptions right now, but I’m sure there are many exceptions.
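Implementing the convention is usually just a special case wherever the program opens its input – a sketch of mine, not taken from any particular program:

```python
import sys

def open_input(filename: str):
    # "-" conventionally means "read from stdin" rather than a real file
    if filename == "-":
        return sys.stdin
    return open(filename)

# cat-like behaviour: works as `python3 mycat.py somefile`
# or as `pbpaste | python3 mycat.py -` (mycat.py is a made-up name)
name = sys.argv[1] if len(sys.argv) > 1 else "-"
with open_input(name) as f:
    sys.stdout.write(f.read())
```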
these “rules” take a long time to learn
These rules took a long time for me to learn because I had to:
- learn that the rule applied anywhere at all (“`Ctrl-C` will exit programs”)
- notice some exceptions (“okay, `Ctrl-C` will exit `find` but not `less`”)
- subconsciously figure out what the pattern is (“`Ctrl-C` will generally quit noninteractive programs, but in interactive programs it might interrupt the current operation instead of quitting the program”)
- eventually maybe formulate it into an explicit rule that I know
A lot of my understanding of the terminal is honestly still in the “subconscious pattern recognition” stage. The only reason I’ve been taking the time to make things explicit at all is that I’ve been trying to explain how it works to others. Hopefully writing down these “rules” explicitly will make learning some of this stuff a little bit faster for others.
Today's links
- The housing emergency and the second Trump term: Weaponized shelter gave us Trump II; can we fix it despite Trumpism?
- Hey look at this: Delights to delectate.
- This day in history: 2009, 2014, 2019, 2023
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
The housing emergency and the second Trump term (permalink)
Postmortems and blame for the 2024 elections are thick on the ground, but amidst all those theories and pointed fingers, one explanation looms large and credible: the American housing emergency. If the system can't put a roof over your head, that system needs to go.
American housing has been in crisis for decades, of course, but it keeps getting worse…and worse…and worse. Americans pay more for worse housing than at any time in their history. Homelessness is at a peak that is soul-crushing to witness and maddening to experience. We turned housing – a human necessity second only to air, food and water – into an asset governed almost entirely by market forces, and so created a crisis that has consumed the nation.
The Trump administration has no plan to deal with housing. Or rather, they do have plans, but strictly of the "bad ideas only" variety. Trump wants to deport 11m undocumented immigrants, and their families, including citizens and Green Card holders (otherwise, that would be "family separation" and that's cruel). Even if you are the kind of monster who can set aside the ghoulishness of solving your housing problems by throwing someone in a concentration camp at gunpoint and then deporting them to a country where they legitimately fear for their lives, this still doesn't solve the housing emergency, and will leave America several million homes short.
Their other solution? Deregulation and tax cuts. We've seen this movie before, and it's an R-rated horror flick. Financial deregulation created the speculative mortgage markets that led to the 2008 housing crisis, which created a seemingly permanent incapacity to build new homes in America, as skilled tradespeople retired or changed careers and housebuilding firms left the market. Handing giant tax cuts to the monopolists who gobbled up the remains of these bankrupt small companies minted a dozen new housing billionaires who preside over companies that make more money than ever by building fewer homes:
This isn't working. Homelessness is ballooning. The only answer Trump and his regime have for our homeless neighbors is to just make it a crime to be homeless, sweeping up homeless encampments and busting homeless people for "loitering" (that is, existing in space). There is no universe in which this reduces homelessness. People who lose their homes aren't going to dig holes, crawl inside, and pull the dirt down on top of themselves. If anything, sweeps and arrests will make homelessness worse, by destroying the possessions, medication and stability that homeless people need if they are to become housed.
Today, The American Prospect published an excellent package on the housing emergency, looking at its causes and the road-tested solutions that can work even when the federal government is doing everything it can to make the problem worse:
https://prospect.org/infrastructure/housing/2024-12-11-tackling-the-housing-crisis/
The Harris campaign ran on Biden's economic record, insisting that he had tamed inflation. It's true that the Biden admin took action against monopolists and greedflation, including criminal price-fixing companies like RealPage, which helps landlords coordinate illegal conspiracies to rig rents. RealPage sets the rents for the majority of homes in major metros, like Phoenix:
Of course, reducing inflation isn't the same as bringing prices down – it just means prices are going up more slowly. And sure, inflation is way down in many categories, but not in housing. In housing, inflation is accelerating:
The housing emergency makes everything else worse. Blue states are in danger of losing Congressional seats because people are leaving big cities: not because they want to, but because they literally can't afford to keep a roof over their heads. LGBTQ people fleeing fascist red state legislatures and their policies on trans and gay rights can't afford to move to the states where they will be allowed to simply live:
https://www.nytimes.com/2024/07/11/business/economy/lgbtq-moving-cost.html
So what are the roots of this problem, and what can we do about it? The housing emergency doesn't have a unitary cause, but among the most important factors is the fuckery that led to the Great Financial Crisis and the fuckery that followed from it, as Ryan Cooper writes:
The Glass-Steagall Act was a 1933 banking regulation created to prevent Great Depression-style market crashes. It was killed in 1999 by Bill Clinton, who declared, "the Glass–Steagall law is no longer appropriate." Nine years later, the global economy melted down in a Great Depression-style market crash fueled by reckless speculation of the sort that Glass-Steagall had prohibited.
The crash of 2008 took down all kinds of industries, but none were so hard-hit as home-building (after all, mortgages were the raw material of the financial bubble that popped in 2008). After 2008, construction of new housing fell by 90% for the next two years. This protracted nuclear winter in the housing market killed many associated industries. Skilled tradespeople retrained, or "left the job market" (a euphemism for becoming disabled, homeless, or destroyed). Waves of bankruptcies swept through the construction industry. The construction workforce didn't recover to pre-crisis levels for 16 years (and of course, by then, there was a huge backlog of unbuilt homes, and a larger population seeking housing).
Meanwhile, the collapse of every part of the housing supply chain – from raw materials to producers – set the stage for monopoly rollups, with the biggest firms gobbling up all these distressed smaller firms. Thanks to this massive consolidation, homebuilders were able to build fewer houses and extract higher profits by gouging on price. They doubled down on this monopoly price-gouging during the pandemic supply shocks, raising prices well above the pandemic shortage costs.
The housing market is monopolized in ways that will be familiar to anyone angry about consolidation in other markets – from eyeglasses to pharma to tech. One builder, D.R. Horton, is the largest player in 3 of the country's largest markets, and it has tripled its profits since 2005 while building half as many houses. Modern homebuilders don't build: they use their scale to get land at knock-down rates, slow-walk the planning process, and then farm out the work to actual construction firms at rates that barely keep the lights on:
https://www.thebignewsletter.com/p/its-the-land-stupid-how-the-homebuilder
Monopolists can increase profits by constraining supply. 60% of US markets are "highly concentrated" and the companies that dominate these markets are starving homebuilding in them to the tune of $106b/year:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3303984
There are some obvious fixes to this, but they are either unlikely under Trump (antitrust action to break up builders based on their share in each market) or impossible to imagine (closing tax loopholes that benefit large building firms). Likewise, we could create a "homes guarantee" that would act as an "automatic stabilizer": any time the economy slips into recession, automatic funding would kick in to pay firms to build public housing, thus stimulating the economy and alleviating the housing supply crisis:
https://www.peoplespolicyproject.org/wp-content/uploads/2018/04/SocialHousing.pdf
The Homes Guarantee is further explained in a separate article in the package by Sulma Arias from People's Action, who describes how grassroots activists fighting redlining planted the seeds of a legal guarantee of a home:
https://prospect.org/infrastructure/housing/2024-12-11-why-we-need-homes-guarantee/
Arias describes the path to a right to a home as running through the mass provision of public housing – and what makes that so exciting is that public housing can be funded, administered and built by local or state governments, meaning this is a thing that can happen even in the face of a hostile or indifferent federal regime.
In his story on FIMBY (finance in my back yard), Paul E Williams, executive director of the Center for Public Enterprise, offers an inspirational story of how local governments can provide thousands of homes:
https://prospect.org/infrastructure/housing/2024-12-11-fimby-finance-in-my-backyard/
Williams recounts the events of 2021 in Montgomery County, Maryland, where a county agency stepped in to loan money to a property developer who had land, zoning approval and work crews to build a major new housing block, but couldn't find finance. Montgomery County's Housing Opportunities Commission made a short-term loan at market rates to the developer.
By 2023, the building was up and the loan had been repaid. All 268 units are occupied and a third are rented at rates tailored to low-income tenants. The HOC is the permanent owner of those homes. It worked so well that Montgomery's HOC is on track to build 3,000 more public homes this way:
https://www.nytimes.com/2023/08/25/business/affordable-housing-montgomery-county.html
Others – in red states! – have followed suit, with lookalike funds and projects in Atlanta and Chattanooga, and "dozens" more plans underway at state and local levels. The Massachusetts Momentum Fund is set to fund 40,000 homes.
The Center for Public Enterprise has a whole report on these "Government Sponsored Enterprises" and the role they can play in creating a supply of homes priced at a rate that working people can afford:
https://prospect.org/infrastructure/housing/2024-12-11-fimby-finance-in-my-backyard/
Of course, for a GSE to loan money to build a home, that home has to be possible. YIMBYs are right to point to restrictive zoning as a major impediment to building new homes, and Robert Cruickshank from California YIMBY has a piece breaking down the strategy for fixing zoning:
https://prospect.org/infrastructure/housing/2024-12-11-make-it-legal-to-build/
Cruickshank lays out YIMBY success stories in cities like Austin and Minneapolis, which adopted YIMBY-style zoning rules and saw significant improvements in rental prices. These success stories reflect a broader recognition – at least among Democratic politicians – that restrictive zoning is a major contributor to the housing emergency.
Repeating these successes in the rest of the country will take a long time, and in the meantime, American tenants are sitting ducks for predatory landlords. With criminal enterprises like RealPage enabling collusive price-fixing for housing, and monopoly developers deliberately restricting supply to keep prices up (a recent BlackRock investor communique gloated over the undersupply of housing as a source of profits for its massive portfolio of rental properties), tenants pay more and more of their paychecks for worse and worse accommodations. They can't wait for the housing emergency to be solved through zoning changes and public housing. They need relief now.
That's where tenants' unions come in, as Ruthy Gourevitch and Tara Raghuveer of the Tenant Union Federation write in their piece on the tenants across the country who are coordinating rent strikes to protest obscene rent-hikes and dangerous living conditions:
https://prospect.org/infrastructure/housing/2024-12-11-look-for-the-tenant-union/
They describe a country where tenants work multiple jobs, send the majority of their take-home pay to their landlords – a quarter of tenants pay 70% of their wages in rent – and live in vermin-filled homes without heat or ventilation:
https://www.phenomenalworld.org/analysis/terms-of-investment/
Public money from Freddie Mac and Fannie Mae floods into the speculative market for multifamily homes – a largely unregulated, subsidized bonanza that lets the wealthy place bets and leaves the poor to cover their losses.
In response, tenants' unions are popping up all across the country, especially in red-state cities like Bozeman, MT and Louisville, KY. They organize for "just cause" eviction rules that stop landlords from taking tenants' homes away without a legitimate reason. They seek fair housing voucher distribution practices. They seek to close eviction loopholes like the LA wheeze that lets landlords kick you out following "renovations."
The National Tenant Policy Agenda demands "national rent caps, anti-eviction protections, habitability standards, and antitrust action," measures that would immediately and profoundly improve the lives of millions of American workers:
https://docs.google.com/document/d/1JF1-fTalW1tOBO0FhYDcVvEd1kQ2HIzkYFNRo6zmSsg/edit
They caution that it's not enough to merely increase housing supply. Without a strong countervailing force from organized tenants, new housing can be just another source of extraction and speculation for the rich. They say that the Federal Housing Finance Agency – regulator for Fannie and Freddie – could play an active role in ensuring that new housing addresses the needs of people, not corporations.
In the meantime, a tenants' union in KC successfully used a rent strike – where every tenant in a building refuses to pay rent – to get millions in overdue repairs. More strikes are planned across the country.
The American system is in crisis. A country that cannot house its people is a failure. As Rachael Dziaba writes in the final piece for the package, the situation is so bad that water has started to flow uphill: the cities with the most inward migration have the least job growth:
https://prospect.org/infrastructure/housing/2024-10-18-housing-blues/
It's not just housing, of course. Americans pay more for health care than anyone else in the rich world and get worse outcomes than anyone else in the rich world. Their monopoly grocers have spiked their food prices. The incoming administration has declared war on public education and seeks to relegate poor children to unsupervised schools where "education" can consist of filling in forms on a Chromebook and learning that the Earth is only 5,000 years old.
A system that can't shelter, feed, educate or care for its people is a failure. People in failed states will vote for anyone who promises to tear the system down. The decision to turn life's necessities over to unregulated, uncaring markets has produced a populace who are so desperate for change, they'll even vote for their own destruction.
Hey look at this (permalink)
- Luigi's Manifesto https://www.kenklippenstein.com/p/luigis-manifesto
- Waiting for Takeoff: The short-term impact of AI adoption on firm productivity https://dais.ca/reports/waiting-for-takeoff/
- Debanking (and Debunking?) https://www.bitsaboutmoney.com/archive/debanking-and-debunking/
This day in history (permalink)
#15yrsago Mall cops in Norwich, England get police powers https://web.archive.org/web/20091220231959/https://www.edp24.co.uk/content/edp24/news/story.aspx/?brand=EDPOnline&category=News&tBrand=EDPOnline&tCategory=xDefault&itemid=NOED10%20Dec%202009%2015%3A22%3A21%3A773
#15yrsago Kenyan bike-mechanic’s homemade tools https://www.youtube.com/watch?v=gEeyY09YzEY
#15yrsago Science fiction fandom is 80 today https://web.archive.org/web/20091214023834/http://www.tor.com/index.php?option=com_content&view=blog&id=58405
#15yrsago English anti-terror cops ask nursery school workers to watch 4 year olds for signs of “radicalization” https://web.archive.org/web/20100106032907/http://www.timesonline.co.uk/tol/news/uk/crime/article6952503.ece
#15yrsago Just look at this awesome EU banana curvature regulation. https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CONSLEG:1994R2257:20060217:EN:PDF
#15yrsago Anti-Olympic mural censored in Vancouver https://web.archive.org/web/20091214014017/https://www.theglobeandmail.com/news/national/british-columbia/vancouver-orders-removal-of-anti-olympic-mural/article1396541/
#15yrsago RIAA, MPAA and US Chamber of Commerce declare war on blind and disabled people https://web.archive.org/web/20091214062920/https://www.wired.com/threatlevel/2009/12/blind_block/
#15yrsago Dr Peter Watts, Canadian science fiction writer, beaten and arrested at US border https://memex.craphound.com/2009/12/11/dr-peter-watts-canadian-science-fiction-writer-beaten-and-arrested-at-us-border/
#10yrsago Google News shuts down in Spain https://www.eff.org/deeplinks/2014/12/google-news-shuts-shop-spain-thanks-ancillary-copyright-law
#10yrsago Calling out the doctors who abetted CIA torture https://kottke.org/14/12/medical-profession-aided-cia-torture
#10yrsago Lawquake! Judge rules that explaining jailbreaking isn’t illegal https://www.eff.org/deeplinks/2014/12/pointing-users-drm-stripping-software-isnt-copyright-infringement-judge-rules
#10yrsago We know you love privacy, Judge Posner. We just wish you’d share. https://www.techdirt.com/2014/12/09/judge-posner-says-nsa-should-be-able-to-get-everything-that-privacy-is-overrated/
#10yrsago Furry convention evacuated after chlorine-gas attack https://www.themarysue.com/furry-con-terrorist-attack/
#5yrsago Twitter wants to develop an open, decentralized, federated social media standard…and then join it https://www.techdirt.com/2019/12/11/twitter-makes-bet-protocols-over-platforms/
#5yrsago The true nature of creativity: pilfering and recombining the work of your forebears (who, in turn, pilfered and recombined) https://www.youtube.com/watch?v=CB1KE5dbOZo
#5yrsago South Carolina’s feudal magistrate system may take a modest step toward modernization https://www.propublica.org/article/we-investigated-magistrates-now-lawmakers-want-to-overhaul-the-system
#1yrago Daddy-Daughter Podcast 2023 https://pluralistic.net/2023/12/11/daddy-daughter-2023/#not-bye
Upcoming appearances (permalink)
- IA et “merdification“ d’internet: peut-on envisager un nouveau web? (Remote), Dec 12 https://www.unige.ch/comprendre-le-numerique/conferences-publiques1/cycle-5-2024-2025/ia-et-merdification-dinternet-peut-envisager-un-nouveau-web/
- Should a Public Telecom Be An Election Issue/Davenport NDP (Remote), Dec 15 https://www.davenportndp.ca/public_telecom_town_hall
- ISSA-LA Holiday Celebration keynote (Los Angeles), Dec 18 https://issala.org/event/issa-la-december-18-dinner-meeting/
- Picks and Shovels with Charlie Jane Anders (Menlo Park), Feb 17 https://www.keplers.org/upcoming-events-internal/cory-doctorow
- Picks and Shovels with Dan Savage (Seattle), Feb 19 https://www.eventbrite.com/e/cory-doctorow-with-dan-savage-picks-and-shovels-a-martin-hench-novel-tickets-1106741957989
- Cloudfest (Europa Park), Mar 17-20 https://cloudfest.link/
- DeepSouthCon63 (New Orleans), Oct 10-12, 2025 http://www.contraflowscifi.org/
Recent appearances (permalink)
- Can we avoid the enshittification of clean-energy tech? (Volts.wtf) https://www.volts.wtf/p/can-we-avoid-the-enshittification
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It (HOPE XV) https://www.youtube.com/watch?v=YrciT_dc2sc&list=PLcajvRZA8E0_tLLEh1COeAv-TcaDna2k1&index=32
- How To Keep IoT From Becoming An IoTrash (Def Con) https://www.youtube.com/watch?v=tA7bpp8qXxI
Latest books (permalink)
- The Bezzle: a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3062/Available_Feb_20th%3A_The_Bezzle_HB.html#/).
- "The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3007/Pre-Order_Signed_Copies%3A_The_Lost_Cause_HB.html#/)
- "The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com. Signed copies at Dark Delicacies (US): and Forbidden Planet (UK): https://forbiddenplanet.com/385004-red-team-blues-signed-edition-hardcover/.
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
- "Attack Surface": The third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html
- "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59?sk=f6cd10e54e20a07d4c6d0f3ac011af6b (signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
- "Little Brother/Homeland": A reissue omnibus edition with a new introduction by Edward Snowden: https://us.macmillan.com/books/9781250774583; personalized/signed copies here: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html
- "Poesy the Monster Slayer": a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/.
Upcoming books (permalink)
- Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025
- Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025
Colophon (permalink)
Today's top sources:
Currently writing:
- Enshittification: a nonfiction book about platform decay for Farrar, Straus, Giroux. Status: second pass edit underway (readaloud)
- A Little Brother short story about DIY insulin PLANNING
- Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS FEB 2025
Latest podcast: Spill, part six (FINALE) (a Little Brother story) https://craphound.com/littlebrother/2024/12/08/spill-part-six-finale-a-little-brother-story/
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
sometimes i scroll r/booksuggestions and get totally overwhelmed by how many amazing books exist
The outgoing Sherrod Brown addressing the Senate Committee on Banking, Housing, and Urban Affairs:
This committee must ready itself for the fights and challenges ahead:
Rising housing costs, private equity infiltrating more and more of our economy, insurance costs going up, risks building up in the private credit market, new technology that’s increasingly being used in our financial system – from algorithmic prices to AI to crypto.
All these risks have one thing in common: they all have the potential to take even more money away from working Americans…and funnel it to the same corporate elite that always seem to come out ahead.
Incredible essay by Maxwell Neely-Cohen about the importance and challenges of digital archiving, as well as the various imperfect strategies for achieving “century-scale” digital archives.
We picked a century scale because most physical objects can survive 100 years in good care. It is attainable, and yet we selected it because the design of mainstream digital storage mediums are nowhere close to even considering this mark.
The current web pages and marketing for Microsoft Azure and Google Cloud do not mention cultural or historical preservation at any point. ... At this precise moment all of these services mention AI (a lot) and how it’s going to change everything. ... Two years ago their marketing materials mentioned web3 and the metaverse (a lot) and how it was going to change everything, and how if your business did not adapt you were going to be left behind—yet those sentiments no longer appear.
The Jack Welch school of shareholder supremacy is completely incompatible with the sorts of values that would ensure a cloud storage provider would reliably exist for a century.
The progeny of [early internet filesharing] platforms still exist, and in some cases, thrive, though they are no longer a dominant means of distributing media. Sci-Hub, Library Genesis, and Z-Library offer academic journal articles for free to anyone who wants to download them, flouting intellectual property laws and invoking the right to science and culture under Article 27 of the Universal Declaration of Human Rights.
It’s worth considering the efficacy of piracy and the intentional breaking of intellectual property law as a long-term preservation tactic. Abigail De Kosnik, a professor in the Berkeley Center for New Media, contends that, given the nature of digital cultural output and the failures of the current corporate and institutional orders to properly care for them, piracy-based media preservation efforts are more likely to survive catastrophic future events than traditional institutions. On the other hand, as the notorious prosecution of Aaron Swartz or the legal cases against the Internet Archive demonstrate, engaging in copyright infringement at scale runs the constant risk of sanction and shutdown from state actors.
On blockchain-based filestorage projects:
If providing storage generates revenue, that revenue will centralize because it is incentivized to centralize, just like other supposedly decentralized offerings in an unregulated market context. The untested legal status of these systems also poses potential problems. ... None of these schemes have so far proven that they can function, let alone thrive, as functional viable marketplaces for a sustained period of time, nor that they can reliably incentivize storage in times of strife or scarcity. ... To directly peg an archival storage method to a market system with stakeholders that feed on volatility is equivalent to burying your hard drives in a 100-year flood zone.
If your goal in century-scale storage is avoiding kinetic, Hollywood-ready catastrophes, then decentralized solutions are ideal, but whether they can combat neglect is less clear. If a decentralized scheme wants to be successful at century scale, this is what they should and must attack. One of the few clear benefits of centralization is that it inspires care. If people know something is important, of value, potentially even the last of something, they tend to fight every day to protect it.
What is consistent about these examples is that they all involve groups who care. The most enduring decentralized efforts don’t owe their success to technological or organizational innovation, but rather by having enlisted generations of people with an emotional and intellectual investment in their worth. For both cloud storage services and distributed storage schemes, the question is whether they can provoke the necessary level of passion and watchfulness. Are they and their technologies empowering those who care, or setting them up to fail? Can cloud storage corporations transform themselves into wardens? Can distributed storage systems turn each node into a guardian?
The librarians and archivists of the world have been tackling the challenges of digital preservation for decades—the issue is that no one else is. The real solution to century-scale storage, especially at scale, is to change this reality. Successful century-scale storage will require a massive investment in digital preservation, a societal commitment. Politicians, governments, companies, and investors will have to be convinced, incentivized, or even bullied.
Every time a media company destroys an archive, every time a video game company prosecutes the preservers of content it has abandoned, every time a tech company kills a well-used product with no plan for preservation, these actions should be met with attention and resistance. We are on the brink of a dark age, or have already entered one. The scale of art, music, and literature being lost each day as the World Wide Web shifts and degenerates represents the biggest loss of human cultural production since World War II. My generation was continuously warned by teachers, parents, and authority figures that we should be careful online because the internet is written in ink, and yet it turned out to be the exact opposite. As writer and researcher Kevin T. Baker remarked, “On the internet, Alexandria burns daily.”
In order to survive, a data storer, and the makers of the tools they use, must be prepared to adopt a skeptical and even defiant attitude toward the societies in which they live. They must accept the protection of a patron while also preparing for the possibility of betrayal. If you’re wondering why much of this essay takes such an antagonistic pose toward external political and economic actors, while also considering the fruits of their offerings, it is because the century-scale archivist must sometimes be in service of an ideology that only answers to itself—to the protection of the collected artifacts at all costs. This ideology, an “Archivism,” entails a belief in the preservation of that which we make and think for future generations, at the expense of anything else. Century-scale storage can span methods and platforms, be enabled by governments and titans of industry, be helped by religions, cultures, artists, scenes, fans, collectors, technocrats, and engineers, but it must, at the end of the day, retain its values internally.
This is where, once again, the only true solution is an aggressive and massive investment in archives, libraries, digital preservationists, and software and hardware maintainers at every level, in every form of practice and economic circumstance. This needs to happen not just for states, corporations, and institutions, but for hobbyists and consumers.
The goal of century-scale storage must be to preserve that which we have created so that others, those we will never meet, may experience their intricacies and ecstasies, their capacities for enlightenment. This should be done by whatever means necessary, whatever method or decision ensures the possibility of that future—one day at a time—and be willing to change at any moment, to scrap and claw against the forces attempting to smother the light.
did you know if you open a HYSA at american express, you cannot transfer funds from a business checking account linked to that HYSA?
the only way to do it would be AMEX checking -> some other account -> AMEX HYSA
and this has apparently been an issue for YEARS
dealing with banks is so infuriating oh my god
Today's links
- Tech's benevolent-dictator-for-life to authoritarian pipeline: It's damned hard to be an anti-authoritarian in a C-suite.
- Hey look at this: Delights to delectate.
- This day in history: 2014, 2019
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Tech's benevolent-dictator-for-life to authoritarian pipeline (permalink)
Silicon Valley's "authoritarian turn" is hard to miss: tech bosses have come out for autocrats like Trump, Orban, Milei, Bolsonaro, et al, and want to turn San Francisco into a militia-patrolled apartheid state operated for the benefit of tech bros:
https://newrepublic.com/article/180487/balaji-srinivasan-network-state-plutocrat
Smart people have written well about what this means, and have gotten me thinking, too:
https://www.programmablemutter.com/p/why-did-silicon-valley-turn-right
Regular readers will know that I make a kind of hobby of collecting definitions of right-wing thought:
https://pluralistic.net/2021/09/29/jubilance/#tolerable-racism
One of these – a hoary old cliche – is that "a conservative is a liberal who's been mugged." I don't give this one much credence, but it takes on an interesting sheen when combined with this anonymous gem: "Conservatives say they long for the simpler times of their childhood, but what they miss is that the reason they lived simpler lives back then wasn't that the times were simpler; rather, it's because they were children."
If you're a tech founder who once lived in a world where your workers were also your pals and didn't shout at you about labor relations, perhaps that's not because workers got "woke," but rather, because when you were all scrapping at a startup, you were all on an equal footing and there weren't any labor relations to speak of. And if you're a once-right-on tech founder who used to abstractly favor "social justice" but now find yourself beset by people demanding that you confront your privilege, perhaps what's changed isn't those people, but rather the amount of privilege you have.
In other words, "a reactionary tech boss is a liberal tech boss who hired a bunch of pals only to have them turn around and start a union." And also: "Tech founders say things were simpler when they were running startups, but what they miss is that the reason no one asked their startup to seriously engage with the social harms it caused is that the startup was largely irrelevant to society, while the large company it turned into is destroying millions of peoples' lives today."
The oft-repeated reactionary excuse that "I didn't leave the progressive movement, they left me," can be both technically true and also profoundly wrong: if progressives in your circle never bothered you about your commercial affairs, perhaps that's because those affairs didn't matter when you were grinding out code in your hacker house, but they matter a lot now that you have millions of users and thousands of employees.
I've been in tech circles since before the dawn of the dotcoms; I was part of a movement of people who would come over to your house with a stack of floppies and install TCP/IP and PPP networking software on your computer and show you how to connect to a BBS or ISP, because we wanted everyone to have as much fun as we were having.
Some of us channeled that excitement into starting companies that let people get online, create digital presences of their own, and connect with other people. Some of us were more .ORG than .COM and gave our lives over to activism and nonprofits, missing out on the stock options and big paydays. But even though we ended up in different places, we mostly started in the same place, as spittle-flecked, excited kids talking a mile a minute about how cool this internet thing would be and helping you, a normie, jump into it.
Many of my peers from the .ORG and .COM worlds went on to set up institutions – both companies and nonprofits – that have since grown to be critical pieces of internet infrastructure: classified ad platforms, online encyclopedias, CMSes and personal publishing services, critical free/open source projects, standards bodies, server-to-server utilities, and more.
These all started out as benevolent autocracies: personal projects started by people who pitched in to help their virtual neighbors with the new, digital problems we were all facing. These good people, with good impulses, did good: their projects filled an important need, and grew, and grew, and became structurally important to the digital world. What started off as "Our pal's project that we all pitch in on," became, "Our pal's important mission that we help with, but that also has paid staff and important stakeholders, which they oversee as 'benevolent dictator for life.'"
Which was fine. The people who kicked off these projects had nurtured them all the way from a napkin doodle to infrastructure. They understood them better than anyone else, had sacrificed much for them, and it made sense for them to be installed as stewards.
But what they did next, how they used their powers as "BDFLs," made a huge difference. Because we are all imperfect, we are all capable of rationalizing our way into bad choices, we are all riven with insecurities that can push us to do things we later regret. When our actions are checked – by our peers' social approval or opprobrium; by the need to keep our volunteers happy; by the possibility of a mass exodus of our users or a fork of our code – these imperfections are balanced by consequences.
Dictators aren't necessarily any more prone to these lapses in judgment than anyone else. Benevolent dictators actually exist, people who only retain power because they genuinely want to use that power for good. Those people aren't more likely to fly off the handle or talk themselves into bad places than you or me – but to be a dictator (benevolent or otherwise) is to exist without the consequences that prevent you from giving in to those impulses. Worse: if you are the dictator – again, benevolent or otherwise – of a big, structurally important company or nonprofit that millions of people rely on, the stakes of these lapses are enormous.
This is how BDFL arrangements turn sour: by removing themselves from formal constraint, the people whose screwups matter the most end up with the fewest guardrails to prevent themselves from screwing up.
No wonder people who set out to do good, to help others find safe and satisfying digital homes online, find themselves feeling furious and beset. Given those feelings, can we really be surprised when "benevolent" dictators discover that they have sympathy for real-world autocrats whose core ethos is, "I know what needs to be done and I could do it, if only the rest of you would stop nagging me about petty bullshit that you just made up 10 minutes ago but now insist is the most important thing in the world?"
That all said, it's interesting to look at the process by which some BDFL-run projects transitioned to community governance with checks and balances. I often think about how Wikipedia's BDFL, the self-avowed libertarian Jimmy Wales, decided (correctly, and to his everlasting credit) that the project he raised from a weird idea into a world-historic phenomenon should not be ruled over by one guy, not even him.
(Jimmy is one of those libertarians who believes that we don't need governments to make us be kind and take care of one another because he is kind and takes care of other people – see also John Gilmore and Penn Jillette:)
https://www.cracked.com/article_40871_penn-jillette-wants-to-talk-it-all-out.html
Jimmy's handover to the Wikimedia Foundation gives me hope for our other BDFLs. He's proof that you can find yourself in the hotseat without being so overwhelmed with personal grievance that you end up in sympathy with actual fascists, but rather, have the maturity and self-awareness to know that the reason people are demanding so much of you is that you have – deliberately and with great effort – created a situation in which you owe the world a superhuman degree of care and attention, and the only way to resolve that situation equitably and secure your own legacy is to share that power around, not demand that you be allowed to wield it without reproach.
Hey look at this (permalink)
- Dane County judge strikes down Act 10, restoring public employee union bargaining rights https://www.wpr.org/news/dane-county-judge-strikes-down-act-10-restoring-public-employee-union-bargaining-rights (h/t Metafilter)
- Specific Suggestions: Simple Sabotage for the 21st Century https://specificsuggestions.com/
- How to Cut $2 Trillion in Federal Spending Without Breaking a Sweat https://stephaniekelton.substack.com/p/how-to-cut-2-trillion-in-federal
This day in history (permalink)
#10yrsago Tech companies should do something about harassment, but not this https://www.theverge.com/2014/12/8/7350597/why-its-so-hard-to-stop-online-harassment
#10yrsago Information Doesn’t Want to Be Free: the audiobook, read by Wil Wheaton https://craphound.com/info/2014/12/10/information-doesnt-want-to-be-free-audiobook/
#10yrsago World-beating email EULA https://memex.craphound.com/2014/12/10/world-beating-email-eula/
#10yrsago Great Firewall of Cameron blocks Parliamentary committee on rendition/torture https://b2fxxx.blogspot.com/2014/12/virgin-media-blocking-website-of.html
#10yrsago Police, technology and bodycams https://www.eff.org/deeplinks/2014/12/obamas-plan-better-policing-good-bad-and-body-cameras
#10yrsago NYC theater overrules MPAA rating for Snowden documentary https://twitter.com/tommycollison/status/541787027315101696
#5yrsago Youtube copyright trolls Adrev claim to own a homemade MIDI rendition of 1899’s Flight of the Bumblebee https://www.ghostwheel.com/2019/12/08/the-absurdity-of-youtubes-copyright-claim-system/
#5yrsago NYC paid McKinsey $27.5m to reduce violence at Riker’s, producing useless recommendations backed by junk evidence https://www.propublica.org/article/new-york-city-paid-mckinsey-millions-to-stem-jail-violence-instead-violence-soared
#5yrsago Chinese law professor’s social media denunciation of facial recognition in the Beijing subway system https://docs.google.com/document/d/18L4FuiUjGN5Y2j_4-VnYi2U116KWEIaG4pxVp78x-ss/edit?tab=t.0
#5yrsago Distinguishing between “platforms” and “aggregators” in competition law https://memex.craphound.com/2019/12/10/distinguishing-between-platforms-and-aggregators-in-competition-law/
#5yrsago Pete Buttigieg’s prizewinning high-school essay praising Bernie Sanders: “the power to win back the faith of a voting public weary and wary of political opportunism” https://jacobin.com/2019/12/pete-buttigieg-essay-contest-bernie-sanders/
#5yrsago Amazon’s Ring surveillance doorbell leaks its customers’ home addresses, linked to their doorbell videos https://gizmodo.com/ring-s-hidden-data-let-us-map-amazons-sprawling-home-su-1840312279
Upcoming appearances (permalink)
- IA et “merdification“ d’internet: peut-on envisager un nouveau web? (Remote), Dec 12 https://www.unige.ch/comprendre-le-numerique/conferences-publiques1/cycle-5-2024-2025/ia-et-merdification-dinternet-peut-envisager-un-nouveau-web/
- Should a Public Telecom Be An Election Issue/Davenport NDP (Remote), Dec 15 https://www.davenportndp.ca/public_telecom_town_hall
- ISSA-LA Holiday Celebration keynote (Los Angeles), Dec 18 https://issala.org/event/issa-la-december-18-dinner-meeting/
- Picks and Shovels with Charlie Jane Anders (Menlo Park), Feb 17 https://www.keplers.org/upcoming-events-internal/cory-doctorow
- Picks and Shovels with Dan Savage (Seattle), Feb 19 https://www.eventbrite.com/e/cory-doctorow-with-dan-savage-picks-and-shovels-a-martin-hench-novel-tickets-1106741957989
- Cloudfest (Europa Park), Mar 17-20 https://cloudfest.link/
- DeepSouthCon63 (New Orleans), Oct 10-12, 2025 http://www.contraflowscifi.org/
Recent appearances (permalink)
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It (HOPE XV) https://www.youtube.com/watch?v=YrciT_dc2sc&list=PLcajvRZA8E0_tLLEh1COeAv-TcaDna2k1&index=32
- How To Keep IoT From Becoming An IoTrash (Def Con) https://www.youtube.com/watch?v=tA7bpp8qXxI
- How Big Tech made Trump 2.0 (Real News Network) https://therealnews.com/how-big-tech-made-trump-2-0
Latest books (permalink)
- The Bezzle: a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3062/Available_Feb_20th%3A_The_Bezzle_HB.html#/).
- "The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3007/Pre-Order_Signed_Copies%3A_The_Lost_Cause_HB.html#/)
- "The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com. Signed copies at Dark Delicacies (US): and Forbidden Planet (UK): https://forbiddenplanet.com/385004-red-team-blues-signed-edition-hardcover/.
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
- "Attack Surface": The third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html
- "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59?sk=f6cd10e54e20a07d4c6d0f3ac011af6b (signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
- "Little Brother/Homeland": A reissue omnibus edition with a new introduction by Edward Snowden: https://us.macmillan.com/books/9781250774583; personalized/signed copies here: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html
- "Poesy the Monster Slayer": a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/.
Upcoming books (permalink)
- Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025
- Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025
Colophon (permalink)
Today's top sources:
Currently writing:
- Enshittification: a nonfiction book about platform decay for Farrar, Straus, Giroux. Status: second pass edit underway (readaloud)
- A Little Brother short story about DIY insulin PLANNING
- Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS FEB 2025
Latest podcast: Spill, part six (FINALE) (a Little Brother story) https://craphound.com/littlebrother/2024/12/08/spill-part-six-finale-a-little-brother-story/
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
Today's links
- Predicting the present: Thinking about "Radicalized" after Brian Thompson's assassination.
- Hey look at this: Delights to delectate.
- This day in history: 2009, 2014, 2019, 2023
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Predicting the present (permalink)
Back in 2018, around the time I emailed my immigration lawyer about applying for US citizenship, I started work on a short story called "Radicalized," which eventually became the title story of a collection that came out in 2019:
https://us.macmillan.com/books/9781250228598/radicalized/
"Radicalized" is a story about America, and about guns, and about health care, and about violence. I live in Burbank, which ranks second in gun-stores-per-capita in the USA, a dubious honor that represents a kind of regulatory arbitrage with our neighboring goliath, the City of Los Angeles, where gun store licensing is extremely tight. If you're an Angeleno in search of a firearm, you're almost certainly coming to Burbank to buy it.
Walking, cycling and driving past more gun stores than I'd ever seen in my Canadian life got me thinking about Americans and guns, a subject that many Canadians have passed comment upon. Americans kill each other, and especially themselves, at rates that baffle everyone else in the world, and they do it with guns. When we moved here, my UK-born-and-raised daughter came home from her first elementary school lockdown drill perplexed and worried. Knowing what I did about US gun violence, I understood that while school shootings and other spree killings happened with dismal and terrifying regularity, they only accounted for a small percentage of the gun deaths here. If you die with a bullet in you, the chances are that the finger on the trigger was your own. The next most likely suspect is someone you know. After that, a cop. Getting shot by a stranger out of uniform is something of a rarity here – albeit a spectacular one that captures our imaginations in ways that deliberate or accidental self-slayings and related-party shootings do not.
So I told her, "Look, you can basically ignore everything they tell you during those lockdown drills, because they almost certainly have nothing to do with your future. But if a friend ever says to you, 'Hey, wanna see my dad's gun?' I want you to turn around and leave and get in touch with me right away, that instant."
Guns turn the murderous impulse – which, let's be honest, we've all felt at some time or another – into a murderous act. Same goes for suicide, which explains the high levels of non-accidental self-shootings in the USA: when you've got a gun, the distance between suicidal ideation and your death is the ten feet from the sofa to the gun in the closet.
Americans get angry at people and then, if they have a gun to hand, sometimes they shoot them. In a thread on /r/Burbank about how people at our local cinemas are rude and use their phones, someone posted, "Well, you should just ask them to stop." The reply: "That's a great way to get shot." No one chimed in to say, "Don't be ridiculous, no one would shoot you for asking them to put away their phone during a movie." Same goes for "road rage."
And while Americans shoot people they've only just gotten angry at, they also sometimes plan shooting sprees and kill a bunch of people because they're just generically angry. Being angry about the state of the world is a completely relatable emotion, of course, but the targets of these shootings are arbitrary. Sure, sometimes these killings have clear, bigoted targets – mass shootings at Black supermarkets or mosques or synagogues or gay bars – but more often the people who get sprayed with bullets (at country and western concerts or elementary schools or movie theaters) are almost certainly not the people the gunman (almost always a man) is angry at.
This line of thought kept surfacing as I went through the immigration process, but not just when I was dealing with immigration paperwork. I was also spending an incredible amount of time dealing with our health insurer, Cigna, who kept refusing treatments my pain doctor – one of the most-cited pain researchers in the country – thought I would benefit from. I've had chronic pain since I was a teenager, and it's only ever gotten worse. I've had decades of pain care in Canada and the UK, and while the treatments never worked for very long, that care was never compounded by the kinds of bureaucratic obstacles I went through with my US insurer.
The multi-hour phone calls with Cigna that went nowhere would often have me seeing red – literally, a red tinge closing in around my vision – and usually my hands would be shaking by the time I got off the call.
And I had it easy! I wasn't terminally ill, and I certainly wasn't calling in on behalf of a child or a spouse or parent who was seriously ill or dying, whose care was being denied by their insurer. Bernie's 2016 Medicare For All campaign promise had filled the air with statistics (Americans pay more for care and get worse outcomes than anyone else in the rich world), and stories. So many stories – stories that just tore your heart out, about parents who literally had to watch their children die because the insurance they paid for refused to treat their kids. As a dad, I literally couldn't imagine how I'd cope in that situation. Just thinking about it filled me with rage.
One day, as I was swimming in the community pool across the street – a critical part of my pain management strategy – I was struck with a thought: "Why don't these people murder health insurance executives?" Not that I wanted them to. I don't want anyone to kill anyone. But why do American men who murder their wives and the people who cut them off in traffic and random classrooms full of children leave the health insurance industry alone? This is an industry that is practically designed to fill the people who interact with it with uncontrollable rage. I mean, if you're watching your wife or your kid die before your eyes because some millionaire CEO decided to aim for a $10 billion stock buyback this year instead of his customary $9 billion target, wouldn't you feel that kind of murderous rage?
Around this time, my parents came out for a visit from Canada. It was a great trip, until one night, my mom woke me up after midnight: "We have to take your father to the ER. He's really sick." He was: shaking, nauseated, feverish. We raced down the street to the local hospital, part of a gigantic chain that has swallowed nearly all the doctors' practices, labs and hospitals within an hour's drive of here.
Dad had kidney stones, and they'd gone septic. When the ER docs removed the stones, all the septic gunk in his kidneys was flushed into his bloodstream, and he crashed. If he hadn't been in an ER recovery room at the time, he would have died. As it was, he was in a coma for three days and it was touch and go. My brother flew down from Toronto, not sure if this was his last chance to see our dad alive. The nurses and doctors took great care of my dad, though, and three days later, he emerged from his coma, and today, he's better than ever.
But on day two, when we thought he was probably at the end of his life, as my mother sat at his side, holding the hand of her husband of fifty years, someone from the hospital billing department came to her side and said, "Mrs Doctorow, I know this is a difficult time, but I'd like to discuss the matter of your husband's bill with you."
The bill was $176,000. Thankfully, the travel medical insurance plan offered by the Ontario Teachers' Union pension covered it all (I don't suppose anyone gets very angry with them).
How do people tolerate this? Again, not in the sense of "people should commit violent acts in the face of these provocations," but rather, "How is it that in a country filled with both assault rifles and unimaginable acts of murderous cruelty committed by fantastically wealthy corporations, people don't leap from their murderous impulses to their murderous weapons to commit murderous acts?"
For me, writing fiction is an accretive process. I can tell that a story is brewing when thoughts start rattling around in my mind, resurfacing at odd times. I think of them as stray atoms, seeking molecules with available docking sites to glom onto. I process all my emotions – but especially my negative ones – through this process, by writing stories and novels. I could tell that something was cooking, but it was missing an ingredient.
Then I found it: an interview with the woman who coined the term "incel." It was on the Reply All podcast, and Alana, a queer Canadian woman, explained that she had struggled all her life to find romantic and sexual partnership, and jokingly started referring to herself as "involuntarily celibate," and then, as an "incel":
https://gimletmedia.com/shows/reply-all/76h59o
Alana started a message board where other "incels" could offer each other support, and it was remarkably successful. The incels on Alana's message board helped each other work through the problems that stood between them and love, and when they did, they drifted away from the board to pursue a happier life.
That was the problem, Alana explained. If you're in a support group for people with a drinking problem, the group elders, the ones who've been around forever, are the people who've figured it out and gotten sober. When life seems impossible, those elders step in to tell you, I know it's terrible right now, but it'll get better. I was where you are and I got through it. You will, too. I'm here for you. We all are.
But on Alana's incel board, the old timers were the people who couldn't figure it out. They were the ones for whom mutual support and advice didn't help them figure out what they needed to do in order to find the love they sought. The longer the message board ran, the more it became dominated by people who were convinced that it was hopeless, that love was impossible for the likes of them. When newbies posted in rage and despair, these Great Old Ones were there to feed it: You're right. It will never get better. It only gets worse. There is no hope.
That was the missing piece. My short story Radicalized was born. It's a story about men on a message board called Fuck Cancer Right In the Fucking Face (FCKRFF, or "Fuckriff"), who are watching the people they love the most in the world be murdered by their insurance companies, who egg each other on to spectacular acts of mass violence against health insurance company employees, hospital billing offices, and other targets of their rage. As of today, anyone can read this story for free, courtesy of my publishers at Macmillan, who gave permission for the good folks at The American Prospect to post it:
https://prospect.org/culture/books/2024-12-09-radicalized-cory-doctorow-story-health-care/
I often heard from people about this story even before an unknown (at the time of writing) man assassinated Brian Thompson, CEO of UnitedHealthcare, the murderous health insurance monopoly that is the largest medical insurer in the USA. Since then, hundreds of people have gotten in touch to ask me how I feel about this turn of events, how it feels to have "predicted" this.
I've been thinking about it for a few days now, and I gotta tell you, I have complicated feelings.
You've doubtless seen the outpourings of sarcastic graveyard humor about Thompson's murder. People hate UnitedHealthcare, for good reason, and they hated Thompson, who personally decided – or approved – countless policies that killed people by cheating them until they died.
Nurses and doctors hate Thompson and United. United kills people, for money. During the most acute phase of the pandemic, the company charged the US government $11,000 for each $8 covid test:
https://pluralistic.net/2020/09/06/137300-pct-markup/#137300-pct-markup
UHC leads the nation in claims denials, with a denial rate of 32% (!!). If you want to understand how the US can spend 20% of its GDP on health care and get the worst health outcomes in the rich world, just connect the dots between those two facts: the largest health insurer in human history charges the government a 137,300% markup on covid tests and also denies a third of its claims.
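(A quick back-of-the-envelope check of that markup figure – this is my own arithmetic, assuming the $8 and $11,000 per-test numbers above:

\[
\text{markup} = \frac{\$11{,}000 - \$8}{\$8} \times 100\% \approx 137{,}400\%
\]

which is within rounding distance of the 137,300% in the permalink above; the small gap presumably reflects the exact per-test cost.)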
UHC is a vertically integrated, murdering health profiteer. They bought Optum, the largest pharmacy benefit manager ("A spreadsheet with political power" -Matt Stoller) in the country. Then they starved Optum of IT investment in order to give more money to their shareholders. Then Optum was hacked by a ransomware gang and no one could get their prescriptions for weeks. This killed people:
The irony is, Optum is terrible even when it's not hacked. The purpose of Optum is to make you pay more for pharmaceuticals. If that's more than you can afford, you die. Optum – that is, UHC – kills people:
https://pluralistic.net/2024/09/23/shield-of-boringness/#some-men-rob-you-with-a-fountain-pen
Optum isn't the only murderous UHC division. Take Navihealth, an algorithm that United uses to kick people out of their hospital beds even if they're so frail, sick or injured they can't stand or walk. Doctors and nurses routinely watch their gravely ill patients get thrown out of their hospitals. Many die. UHC kills them, for money:
https://prospect.org/health/2024-08-16-steward-bankruptcy-physicians-private-equity/
The patients murdered by Navihealth are on Medicare Advantage. Medicare is the public health care system the USA extends to old people. Medicare Advantage is a privatized system you can swap your Medicare coverage for, and UHC leads the country in Medicare Advantage, blitzing seniors with deceptive ads that trick them into signing up for UHC Medicare Advantage. Seniors who do this lose access to their doctors and specialists, have to pay hundreds or thousands of dollars for their medication, and get hit with $400 surprise bills to use the "free" ambulance service:
https://prospect.org/health/2024-12-05-manhattan-medicare-murder-mystery/
No wonder the public spends 22% more subsidizing Medicare Advantage than they spend on the care for seniors who stick with actual Medicare:
It's not just the elderly: it's also people who are addicted or mentally ill. UHC illegally denies coverage for mental health and substance abuse treatment. Imagine watching a family member spiral out of control, OD, or end up on the streets with hallucinations, and knowing that the health insurance company that takes thousands of dollars out of your paycheck refused to treat them:
Unsurprisingly, the internal culture at UHC is callous beyond belief. How could it not be? How could you go to work at UHC and know you were killing people and not dehumanize those victims? A lawsuit by a chronically ill patient whose care UHC had denied uncovered recorded phone calls in which UHC employees laughed long and hard about the denied claims, dismissing the patient's desperate, tearful pleas as "tantrums":
https://www.propublica.org/article/unitedhealth-healthcare-insurance-denial-ulcerative-colitis
Those UHC workers are just trying to get by, of course, and the calluses they develop so they can bear to go to work were ripped off by last week's murder. UHC's executive team knows this, and has gone on a rampage to stop employees from leaking their own horror stories, or even mentioning that the internal company announcement of Thompson's death was seen by 16,000 employees, of whom only 28 left a comment:
https://www.kenklippenstein.com/p/unitedhealthcare-tells-employees
Doctors and nurses hate UHC on behalf of their patients, but it's also personal. UHC screws doctors' practices by refusing to pay them, making them chase payments for months or even years, and then it offers them a payday-lending service that helps them keep the lights on while they wait to get paid:
https://www.youtube.com/watch?v=frr4wuvAB6U
Is it any surprise that Reddit's nursing forums are full of nurses making grim, satisfied jokes about the assassination of the $10m/year CEO who ran the $400b/year corporation that does all this?
We're not supposed to experience – much less express – schadenfreude when someone is murdered in the street, no matter who they are. We're meant to express horror at the idea of political violence, even when that violence claims only a single life, a fraction of the body count UHC produced under Thompson's direction. As Malcolm Harris put it, "'Every life is precious' stuff about a healthcare CEO whose company is noted for denying coverage is pretty silly":
https://twitter.com/BigMeanInternet/status/1864471932386623753
As Woody Guthrie wrote, "Some will rob you with a six-gun/And some with a fountain pen." The weapon is lethal when it's a pistol and when it's an insurance company. The insurance company merely serves as an accountability sink, a layer of indirection that lets a murder happen without any person being the technical murderer:
https://profilebooks.com/work/the-unaccountability-machine/
I don't want people to kill insurance executives, and I don't want insurance executives to kill people. But I am unsurprised that this happened. Indeed, I'm surprised that it took so long. It should not be controversial to note that if you run an institution that makes people furious, they will eventually become furious with you. This is the entire pitch of Thomas Piketty's Capital in the 21st Century: that wealth concentration leads to corruption, which is destabilizing, and in the long run it's cheaper to run a fair society than it is to pay for the guards you'll need to keep the guillotines off your lawn:
https://memex.craphound.com/2014/06/24/thomas-pikettys-capital-in-the-21st-century/
But we've spent the past 40 years running in the other direction, maximizing monopolies, inequality and corruption, and gaslighting the public when they insist that this is monstrous and unfair. Back in 2022, when UHC was buying Change Healthcare – the dominant payment network for hospitals, which would allow UHC to surveil all its competitors' payments – the DOJ sued to block the merger. The Trump-appointed judge in the case, Carl Nichols – who owned tens of thousands of dollars in UHC bonds – ruled against the DOJ, saying that it would all be fine thanks to United's "culture of trust and integrity":
https://www.thebignewsletter.com/p/the-antitrust-shooting-war-has-started
We don't know much about Thompson's killer yet, but he's already becoming a folk hero, with lookalike contests in NYC:
https://twitter.com/CollinRugg/status/1865472577478553976
And gigantic graffiti murals praising him and reproducing the words he wrote on the shell casings of the bullets he used to kill Thompson, "delay, deny, depose":
https://www.tumblr.com/radicalgraff/769193188403675136/killin-fuckin-ceos-freight-graff-in-the-bay
I get why this is distasteful. Thompson is said to have been a "family man" who loved his kids, and I have no reason to disbelieve this. I can only imagine that his wife and kids are shattered by this. Every living person is the apex of a massive project involving dozens, hundreds of people who personally worked to raise, nurture and love them. I wrote about this in my novel Walkaway, as the characters consider whether to execute a mercenary sent to kill them, whom they have taken hostage:
She had parents. People who loved her. Every human was a hyper-dense node of intense emotional and material investment. Speaking meant someone had spent thousands of hours cooing to you. Those lean muscles, the ringing tone of command — their inputs were from all over the world, carefully administered. The merc was more than a person: like a spaceship launch, her existence implied thousands of skilled people, generations of experts, wars, treaties, scholarship and supply-chain management. Every one of them was all that.
But so often, the formula for "folk hero" is "killing + time." The person who terrorizes the people who terrorize you is your hero, and eventually we sanitize the deaths, and just remember them as fighters for justice. If you doubt it, consider the legend of Robin Hood:
https://twitter.com/mcmansionhell/status/1865554985842352501
The health industry is trying to put a lid on this, palpably afraid that – as in my story "Radicalized" – this one murderer will become a folk hero who inspires others to acts of spectacular violence. They're insisting that it's unseemly to gloat about Thompson's death. They're right, but this is an obvious loser strategy. The health industry is full of people whose deaths would be deplorable, but not surprising. As Clarence Darrow had it:
I’ve never wished a man dead, but I have read some obituaries with great pleasure.
Murder is never the answer. Murder is not a healthy response to corruption. But it is healthy for people to fear that if they kill people for greed, they will be unsafe. On December 5 – the day after Thompson's killing – the health insurer Anthem announced that it would not pay for anesthesia for medical procedures that ran long. The next day, they retracted the policy, citing "outrage".
Sure, maybe it was their fear of reputation damage that got them to decide to reverse this inhumane, disgusting, murderous policy. But maybe it was also someone in the C-suite thinking about what share of the profits from this policy would have to be spent on additional bodyguards for every Anthem exec if it went into effect, and decided that it was a money-loser after all.
Think about hospital exec Ralph de la Torre, who cheerfully testified to Congress that he'd killed patients in pursuit of profit. De la Torre clearly doesn't fear any kind of consequences for his actions. He owns hospitals that are filled with tens of thousands of bats (he stiffed the exterminators), where none of the elevators work (he stiffed the repair techs), where there's no medicine or blood (he stiffed the suppliers) and where the doctors and nurses can't make rent (he stiffed them too). De la Torre doesn't just own hospitals – he also owns a pair of superyachts:
https://pluralistic.net/2024/02/28/5000-bats/#charnel-house
It is a miracle that so many people have lost their mothers, sons, wives and husbands so Ralph de la Torre could buy himself another superyacht, and that those people live in a country where you can buy an assault rifle, and that Ralph de la Torre isn't forced to live in a bunker and travel in a tank.
It's a rather beautiful sort of miracle, to be honest. I like to think that it comes from a widespread belief among the people of this country, of which I have since become a citizen, that we should solve our problems politically, rather than with bullets.
But the assassination of Brian Thompson is a wake-up call, a warning that if we don't solve this problem politically, we may not have a choice about whether it's solved with violence. As a character in "Radicalized" says, "They say violence never solves anything, but to quote The Onion: that's only true so long as you ignore all of human history":
https://prospect.org/culture/books/2024-12-09-radicalized-cory-doctorow-story-health-care/
Hey look at this (permalink)
- Wil Wheaton audiobooks https://wilwheaton.bandcamp.com/
- Why is printer ink so expensive? https://www.digitalrightsbytes.org/topics/why-is-printer-ink-so-expensive
- Shipt’s Algorithm Squeezed Gig Workers. They Fought Back https://spectrum.ieee.org/shipt
This day in history (permalink)
#15yrsago Google CEO says privacy doesn’t matter. Google blacklists CNet for violating CEO’s privacy. https://www.schneier.com/blog/archives/2009/12/my_reaction_to.html
#15yrsago Spanish cops called in over allegation that band was playing “contemporary” music at jazz festival, medical necessity cited https://www.theguardian.com/music/2009/dec/09/jazz-festival-larry-ochs-saxophone
#15yrsago US lobbyist: Canadians would get US government infrastructure contracts if it adopted US copyright laws https://web.archive.org/web/20091213133326/https://www.theglobeandmail.com/news/politics/could-copyright-reform-win-buy-american-battle/article1392951/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TheGlobeAndMail-Business+(The+Globe+and+Mail+-+Business+News)
#15yrsago Famous architecture photographer swarmed by multiple police vehicles in London for refusing to tell security guard why he was photographing famous church https://www.theguardian.com/uk/2009/dec/08/police-search-photographer-terrorism-powers
#10yrsago Corporate sovereignty: already costing the EU billions https://www.techdirt.com/2014/12/09/true-cost-corporate-sovereignty-eu-35bn-already-paid-30bn-demanded-even-before-taftattip/
#10yrsago Taxpayers pick up the tab for violent, abusive, murdering cops 99.8% of the time https://nyulawreview.org/issues/volume-89-number-3/police-indemnification/
#10yrsago Modern slavery: the Mexican megafarms that supply America’s top grocers https://graphics.latimes.com/product-of-mexico-camps/
#10yrsago San Francisco’s Monkeybrains ISP offering gigabit home wireless connections https://www.indiegogo.com/projects/gigabit-wireless-to-the-home–2#/
#5yrsago The New Yorker’s profile of William Gibson: “Droll, chilled out, and scarily articulate” https://www.newyorker.com/magazine/2019/12/16/how-william-gibson-keeps-his-science-fiction-real
#5yrsago Model stealing, rewarding hacking and poisoning attacks: a taxonomy of machine learning’s failure modes https://learn.microsoft.com/en-us/security/engineering/failure-modes-in-machine-learning
#5yrsago The blood of poor Americans is now a leading export, bigger than corn or soy https://www.mintpressnews.com/harvesting-blood-americas-poor-late-stage-capitalism/263175/
#5yrsago Popular Chinese video game invites players to “hunt down traitors” in Hong Kong https://www.globaltimes.cn/content/1172323.shtml
#5yrsago The student movements at the vanguard of Chile’s protests are allied with former student leaders now serving in Congress https://apnews.com/article/student-loans-santiago-chile-business-social-services-819108269b65dc2dd4dffcfd7712d53a
#5yrsago In any other industry, emergency medical billing would be considered fraudulent https://www.nytimes.com/2019/12/07/opinion/sunday/medical-billing-fraud.html
#5yrsago US pharma and biotech lobbyists’ documents reveal their plan to gouge Britons in any post-Brexit trade-deal https://theintercept.com/2019/12/09/brexit-american-trade-deal-boris-johnson/
#5yrsago As the end nears for Yahoo Groups, Verizon pulls out all the stops to keep archivists from preserving them https://modsandmembersblog.wordpress.com/2019/12/08/verizon-yahoo-bad-form/
#5yrsago Church nativity scene puts the holy family in cages, because that’s how America deals with asylum-seekers like Christ https://www.nbcnews.com/news/us-news/church-nativity-depicts-jesus-mary-joseph-family-separated-border-n1097891
Upcoming appearances (permalink)
- ACM Conext-2024 Workshop on the Decentralization of the Internet (Los Angeles), Dec 9
https://conferences.sigcomm.org/co-next/2024/#!/din
- IA et “merdification“ d’internet: peut-on envisager un nouveau web? (Remote), Dec 12
https://www.unige.ch/comprendre-le-numerique/conferences-publiques1/cycle-5-2024-2025/ia-et-merdification-dinternet-peut-envisager-un-nouveau-web/
- Should a Public Telecom Be An Election Issue/Davenport NDP (Remote), Dec 15
https://www.davenportndp.ca/public_telecom_town_hall
- ISSA-LA Holiday Celebration keynote (Los Angeles), Dec 18
https://issala.org/event/issa-la-december-18-dinner-meeting/
- Picks and Shovels with Charlie Jane Anders (Menlo Park), Feb 17
https://www.keplers.org/upcoming-events-internal/cory-doctorow
- Picks and Shovels with Dan Savage (Seattle), Feb 19
https://www.eventbrite.com/e/cory-doctorow-with-dan-savage-picks-and-shovels-a-martin-hench-novel-tickets-1106741957989
- Cloudfest (Europa Park), Mar 17-20
https://cloudfest.link/
- DeepSouthCon63 (New Orleans), Oct 10-12, 2025
http://www.contraflowscifi.org/
Recent appearances (permalink)
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It (HOPE XV)
https://www.youtube.com/watch?v=YrciT_dc2sc&list=PLcajvRZA8E0_tLLEh1COeAv-TcaDna2k1&index=32
- How To Keep IoT From Becoming An IoTrash (Def Con)
https://www.youtube.com/watch?v=tA7bpp8qXxI
- How Big Tech made Trump 2.0 (Real News Network)
https://therealnews.com/how-big-tech-made-trump-2-0
Latest books (permalink)
- The Bezzle: a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3062/Available_Feb_20th%3A_The_Bezzle_HB.html#/).
- "The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3007/Pre-Order_Signed_Copies%3A_The_Lost_Cause_HB.html#/)
- "The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com. Signed copies at Dark Delicacies (US): and Forbidden Planet (UK): https://forbiddenplanet.com/385004-red-team-blues-signed-edition-hardcover/.
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
- "Attack Surface": The third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html
- "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution (https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59?sk=f6cd10e54e20a07d4c6d0f3ac011af6b) (signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
- "Little Brother/Homeland": A reissue omnibus edition with a new introduction by Edward Snowden: https://us.macmillan.com/books/9781250774583; personalized/signed copies here: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html
- "Poesy the Monster Slayer" a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/.
Upcoming books (permalink)
- Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025
- Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025
Colophon (permalink)
Today's top sources:
Currently writing:
- Enshittification: a nonfiction book about platform decay for Farrar, Straus, Giroux. Status: second pass edit underway (readaloud)
- A Little Brother short story about DIY insulin. PLANNING
- Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS FEB 2025
Latest podcast: Spill, part six (FINALE) (a Little Brother story) https://craphound.com/littlebrother/2024/12/08/spill-part-six-finale-a-little-brother-story/
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
Here’s a niche terminal problem that has bothered me for years but that I never really understood until a few weeks ago. Let’s say you’re running this command to watch for some specific output in a log file:
tail -f /some/log/file | grep thing1 | grep thing2
If log lines are being added to the file relatively slowly, the result I’d see is… nothing! It doesn’t matter if there were matches in the log file or not, there just wouldn’t be any output.
I internalized this as “uh, I guess pipes just get stuck sometimes and don’t show me the output, that’s weird”, and I’d handle it by just running grep thing1 /some/log/file | grep thing2 instead, which would work.
So as I’ve been doing a terminal deep dive over the last few months I was really excited to finally learn exactly why this happens.
why this happens: buffering
The reason why “pipes get stuck” sometimes is that it’s VERY common for programs to buffer their output before writing it to a pipe or file. So the pipe is working fine, the problem is that the program never even wrote the data to the pipe!
This is for performance reasons: writing all output immediately as soon as you can uses more system calls, so it’s more efficient to save up data until you have 8KB or so of data to write (or until the program exits) and THEN write it to the pipe.
In this example:
tail -f /some/log/file | grep thing1 | grep thing2
the problem is that grep thing1 is saving up all of its matches until it has 8KB of data to write, which might literally never happen.
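If you want to see this for yourself, here’s a little experiment (a sketch in Python, just for illustration – the slow.py filename and the one-line-per-second rate are made up):
# slow.py: write one matching line per second, forever
import sys
import time
while True:
    print("hello thing1 thing2")
    sys.stdout.flush()  # make sure this script isn't the one doing the buffering
    time.sleep(1)
Running python3 slow.py | grep thing1 | grep thing2 prints nothing for ages even though every single line matches, because the first grep is quietly filling up its buffer. Swap in grep --line-buffered thing1 (more on that flag later) and the matches show up once a second.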
programs don’t buffer when writing to a terminal
Part of why I found this so disorienting is that tail -f file | grep thing will work totally fine, but then when you add the second grep, it stops working!! The reason for this is that the way grep handles buffering depends on whether it’s writing to a terminal or not.
Here’s how grep (and many other programs) decides to buffer its output:
- Check if stdout is a terminal or not using the isatty function
- If it’s a terminal, use line buffering (print every line immediately as soon as you have it)
- Otherwise, use “block buffering” – only print data if you have at least 8KB or so of data to print
So if grep is writing directly to your terminal then you’ll see the line as soon as it’s printed, but if it’s writing to a pipe, you won’t.
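You can poke at the same isatty check from Python, which exposes it directly (a quick sketch – the “8KB” here is just libc’s usual default, more on that next):
import sys
# mimic the decision libc makes for stdout
if sys.stdout.isatty():
    mode = "line buffered (flush after every newline)"
else:
    mode = "block buffered (flush roughly every 8KB)"
# write to stderr so the answer is visible even when stdout is piped
print("stdout would be", mode, file=sys.stderr)
Run it bare and you get the line buffered answer; add | cat to the end and the answer flips.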
Of course the buffer size isn’t always 8KB for every program, it depends on the implementation. For grep the buffering is handled by libc, and libc’s buffer size is defined in the BUFSIZ variable. Here’s where that’s defined in glibc.
(as an aside: “programs do not use 8KB output buffers when writing to a terminal” isn’t, like, a law of terminal physics, a program COULD use an 8KB buffer when writing output to a terminal if it wanted, it would just be extremely weird if it did that, I can’t think of any program that behaves that way)
commands that buffer & commands that don’t
One annoying thing about this buffering behaviour is that you kind of need to remember which commands buffer their output when writing to a pipe.
Some commands that don’t buffer their output:
- tail
- cat
- tee
I think almost everything else will buffer output, especially if it’s a command where you’re likely to be using it for batch processing. Here’s a list of some common commands that buffer their output when writing to a pipe, along with the flag that disables block buffering.
- grep (--line-buffered)
- sed (-u)
- awk (there’s a fflush() function)
- tcpdump (-l)
- jq (-u)
- tr (-u)
- cut (can’t disable buffering)
Those are all the ones I can think of, lots of unix commands (like sort) may or may not buffer their output but it doesn’t matter because sort can’t do anything until it finishes receiving input anyway.
Also I did my best to test both the Mac OS and GNU versions of these but there are a lot of variations and I might have made some mistakes.
programming languages where the default “print” statement buffers
Also, here are a few programming languages where the default print statement will buffer output when writing to a pipe, and some ways to disable buffering if you want:
- C (disable with setvbuf)
- Python (disable with python -u, or PYTHONUNBUFFERED=1, or sys.stdout.reconfigure(line_buffering=True), or print(x, flush=True) – there’s a little demo below)
- Ruby (disable with STDOUT.sync = true)
- Perl (disable with $| = 1)
I assume that these languages are designed this way so that the default print function will be fast when you’re doing batch processing.
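Here’s a tiny demo of the Python case (the demo.py filename is made up): run python3 demo.py | cat and nothing appears until the program exits, then run python3 -u demo.py | cat (or add flush=True) and you get a line every second.
# demo.py: print one line per second using the default buffering
import time
for i in range(5):
    print("line", i)  # add flush=True to see each line immediately when piped
    time.sleep(1)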
Also whether output is buffered or not might depend on how you print, for example in C++ cout << "hello\n" buffers when writing to a pipe but cout << "hello" << endl will flush its output.
when you press Ctrl-C on a pipe, the contents of the buffer are lost
Let’s say you’re running this command as a hacky way to watch for DNS requests to example.com, and you forgot to pass -l to tcpdump:
sudo tcpdump -ni any port 53 | grep example.com
When you press Ctrl-C, what happens? In a magical perfect world, what I would want to happen is for tcpdump to flush its buffer, grep would search for example.com, and I would see all the output I missed.
But in the real world, what happens is that all the programs get killed and the output in tcpdump’s buffer is lost.
I think this problem is probably unavoidable – I spent a little time with strace to see how this works, and grep receives the SIGINT before tcpdump anyway, so even if tcpdump tried to flush its buffer grep would already be dead.
After a little more investigation, there is a workaround: if you find tcpdump’s PID and kill -TERM $PID, then tcpdump will flush the buffer so you can see the output. That’s kind of a pain but I tested it and it seems to work.
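If you wanted to script that workaround, it could look something like this (a sketch – it assumes pgrep is installed and just grabs the newest tcpdump process, and you may need root since tcpdump usually runs under sudo):
import os
import signal
import subprocess
# find the most recently started tcpdump and ask it to exit cleanly,
# which gives it a chance to flush its output buffer
pid = int(subprocess.check_output(["pgrep", "-n", "tcpdump"]))
os.kill(pid, signal.SIGTERM)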
redirecting to a file also buffers
It’s not just pipes, this will also buffer:
sudo tcpdump -ni any port 53 > output.txt
Redirecting to a file doesn’t have the same “Ctrl-C will totally destroy the contents of the buffer” problem though – in my experience it usually behaves more like you’d want, where the contents of the buffer get written to the file before the program exits. I’m not 100% sure whether this is something you can always rely on or not.
a bunch of potential ways to avoid buffering
Okay, let’s talk solutions. Let’s say you’ve run this command:
tail -f /some/log/file | grep thing1 | grep thing2
I asked people on Mastodon how they would solve this in practice and there were 5 basic approaches. Here they are:
solution 1: run a program that finishes quickly
Historically my solution to this has been to just avoid the “command writing to pipe slowly” situation completely and instead run a program that will finish quickly like this:
cat /some/log/file | grep thing1 | grep thing2 | tail
This doesn’t do the same thing as the original command but it does mean that you get to avoid thinking about these weird buffering issues.
(you could also do grep thing1 /some/log/file but I often prefer to use an “unnecessary” cat)
solution 2: remember the “line buffer” flag to grep
You could remember that grep has a flag to avoid buffering and pass it like this:
tail -f /some/log/file | grep --line-buffered thing1 | grep thing2
solution 3: use awk
Some people said that if they’re specifically dealing with a multiple greps situation, they’ll rewrite it to use a single awk instead, like this:
tail -f /some/log/file | awk '/thing1/ && /thing2/'
Or you would write a more complicated grep, like this:
tail -f /some/log/file | grep -E 'thing1.*thing2'
(awk also buffers, so for this to work you’ll want awk to be the last command in the pipeline)
solution 4: use stdbuf
stdbuf uses LD_PRELOAD to turn off libc’s buffering, and you can use it to turn off output buffering like this:
tail -f /some/log/file | stdbuf -o0 grep thing1 | grep thing2
Like any LD_PRELOAD solution it’s a bit unreliable – it doesn’t work on static binaries, I think it won’t work if the program isn’t using libc’s buffering, and it doesn’t always work on Mac OS. Harry Marr has a really nice How stdbuf works post.
solution 5: use unbuffer
unbuffer program will force the program’s output to be a TTY, which means that it’ll behave the way it normally would on a TTY (less buffering, colour output, etc). You could use it in this example like this:
tail -f /some/log/file | unbuffer grep thing1 | grep thing2
Unlike stdbuf it will always work, though it might have unwanted side effects, for example grep thing1 will also colour its matches.
If you want to install unbuffer, it’s in the expect package.
that’s all the solutions I know about!
It’s a bit hard for me to say which one is “best”, I think personally I’m most likely to use unbuffer because I know it’s always going to work.
If I learn about more solutions I’ll try to add them to this post.
I’m not really sure how often this comes up
I think it’s not very common for me to have a program that slowly trickles data into a pipe like this, normally if I’m using a pipe a bunch of data gets written very quickly, processed by everything in the pipeline, and then everything exits. The only examples I can come up with right now are:
- tcpdump
- tail -f
- watching log files in a different way like with kubectl logs
- the output of a slow computation
what if there were an environment variable to disable buffering?
I think it would be cool if there were a standard environment variable to turn off buffering, like PYTHONUNBUFFERED in Python. I got this idea from a couple of blog posts by Mark Dominus in 2018. Maybe NO_BUFFER like NO_COLOR?
The design seems tricky to get right; Mark points out that NetBSD has environment variables called STDBUF, STDBUF1, etc, which give you a ton of control over buffering, but I imagine most developers don’t want to implement many different environment variables to handle a relatively minor edge case.
I’m also curious about whether there are any programs that just automatically flush their output buffers after some period of time (like 1 second). It feels like it would be nice in theory but I can’t think of any program that does that so I imagine there are some downsides.
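For what it’s worth, building that behaviour yourself isn’t much code – here’s one way it could look in Python, flushing from a background timer (a sketch, not something I’ve seen a real program do; one obvious downside is that a flush can land in the middle of a line):
import sys
import threading
def start_autoflush(interval=1.0):
    # flush stdout every `interval` seconds from a daemon thread
    def flush():
        sys.stdout.flush()
        timer = threading.Timer(interval, flush)
        timer.daemon = True
        timer.start()
    flush()
start_autoflush()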
stuff I left out
Some things I didn’t talk about in this post since these posts have been getting pretty long recently and seriously does anyone REALLY want to read 3000 words about buffering?
- the difference between line buffering and having totally unbuffered output
- how buffering to stderr is different from buffering to stdout
- this post is only about buffering that happens inside the program, your operating system’s TTY driver also does a little bit of buffering sometimes
- other reasons you might need to flush your output other than “you’re writing to a pipe”
I like writing Javascript without a build system, and yesterday, for the millionth time, I ran into a problem where I needed to figure out how to import a Javascript library in my code without using a build system. It took FOREVER to figure out how to import it, because the library’s setup instructions assume that you’re using a build system.
Luckily at this point I’ve mostly learned how to navigate this situation and either successfully use the library or decide it’s too difficult and switch to a different library, so here’s the guide I wish I had to importing Javascript libraries years ago.
I’m only going to talk about using Javascript libraries on the frontend, and only about how to use them in a no-build-system setup.
In this post I’m going to talk about:
- the three main types of Javascript files a library might provide (ES Modules, the “classic” global variable kind, and CommonJS)
- how to figure out which types of files a Javascript library includes in its build
- ways to import each type of file in your code
the three kinds of Javascript files
There are 3 basic types of Javascript files a library can provide:
- the “classic” type of file that defines a global variable. This is the kind of file that you can just <script src> and it’ll Just Work. Great if you can get it but not always available
- an ES module (which may or may not depend on other files, we’ll get to that)
- a “CommonJS” module. This is for Node, you can’t use it in a browser at all without using a build system.
I’m not sure if there’s a better name for the “classic” type but I’m just going to call it “classic”. Also there’s a type called “AMD” but I’m not sure how relevant it is in 2024.
Now that we know the 3 types of files, let’s talk about how to figure out which of these the library actually provides!
where to find the files: the NPM build
Every Javascript library has a build which it uploads to NPM. You might be thinking (like I did originally) – Julia! The whole POINT is that we’re not using Node to build our library! Why are we talking about NPM?
But if you’re using a link from a CDN like https://cdnjs.cloudflare.com/ajax/libs/Chart.js/4.4.1/chart.umd.min.js, you’re still using the NPM build! All the files on the CDNs originally come from NPM.
Because of this, I sometimes like to npm install the library even if I’m not planning to use Node to build my library at all – I’ll just create a new temp folder, npm install there, and then delete it when I’m done. I like being able to poke around in the files in the NPM build on my filesystem, because then I can be 100% sure that I’m seeing everything that the library is making available in its build and that the CDN isn’t hiding something from me.
So let’s npm install a few libraries and try to figure out what types of Javascript files they provide in their builds!
example library 1: chart.js
First let’s look inside Chart.js, a plotting library.
$ cd /tmp/whatever
$ npm install chart.js
$ cd node_modules/chart.js/dist
$ ls *.*js
chart.cjs chart.js chart.umd.js helpers.cjs helpers.js
This library seems to have 3 basic options:
option 1: chart.cjs. The .cjs suffix tells me that this is a CommonJS file, for using in Node. This means it’s impossible to use it directly in the browser without some kind of build step.
option 2: chart.js. The .js suffix by itself doesn’t tell us what kind of file it is, but if I open it up, I see import '@kurkle/color'; which is an immediate sign that this is an ES module – the import ... syntax is ES module syntax.
option 3: chart.umd.js. “UMD” stands for “Universal Module Definition”, which I think means that you can use this file either with a basic <script src>, CommonJS, or some third thing called AMD that I don’t understand.
how to use a UMD file
When I was using Chart.js I picked Option 3. I just needed to add this to my code:
<script src="./chart.umd.js"> </script>
and then I could use the library with the global Chart variable. Couldn’t be easier. I just copied chart.umd.js into my Git repository so that I didn’t have to worry about using NPM or the CDNs going down or anything.
the build files aren’t always in the dist directory
A lot of libraries will put their build in the dist directory, but not always! The build files’ location is specified in the library’s package.json. For example here’s an excerpt from Chart.js’s package.json:
"jsdelivr": "./dist/chart.umd.js",
"unpkg": "./dist/chart.umd.js",
"main": "./dist/chart.cjs",
"module": "./dist/chart.js",
I think this is saying that if you want to use an ES Module (module) you should use dist/chart.js, but the jsDelivr and unpkg CDNs should use ./dist/chart.umd.js. I guess main is for Node.
chart.js’s package.json also says "type": "module", which according to this documentation tells Node to treat files as ES modules by default. I think it doesn’t tell us specifically which files are ES modules and which ones aren’t, but it does tell us that something in there is an ES module.
example library 2: @atcute/oauth-browser-client
@atcute/oauth-browser-client is a library for logging into Bluesky with OAuth in the browser.
Let’s see what kinds of Javascript files it provides in its build!
$ npm install @atcute/oauth-browser-client
$ cd node_modules/@atcute/oauth-browser-client/dist
$ ls *js
constants.js dpop.js environment.js errors.js index.js resolvers.js
It seems like the only plausible root file in here is index.js, which looks something like this:
export { configureOAuth } from './environment.js';
export * from './errors.js';
export * from './resolvers.js';
This export syntax means it’s an ES module. That means we can use it in the browser without a build step! Let’s see how to do that.
how to use an ES module with importmaps
Using an ES module isn’t as easy as just adding a <script src="whatever.js">. Instead, if the ES module has dependencies (like @atcute/oauth-browser-client does) the steps are:
- Set up an import map in your HTML
- Put import statements like import { configureOAuth } from '@atcute/oauth-browser-client'; in your JS code
- Include your JS code in your HTML like this: <script type="module" src="YOURSCRIPT.js"></script>
The reason we need an import map instead of just doing something like import { BrowserOAuthClient } from "./oauth-client-browser.js" is that internally the module has more import statements like import {something} from @atcute/client, and we need to tell the browser where to get the code for @atcute/client and all of its other dependencies.
Here’s what the importmap I used looks like for @atcute/oauth-browser-client:
<script type="importmap">
{
"imports": {
"nanoid": "./node_modules/nanoid/bin/dist/index.js",
"nanoid/non-secure": "./node_modules/nanoid/non-secure/index.js",
"nanoid/url-alphabet": "./node_modules/nanoid/url-alphabet/dist/index.js",
"@atcute/oauth-browser-client": "./node_modules/@atcute/oauth-browser-client/dist/index.js",
"@atcute/client": "./node_modules/@atcute/client/dist/index.js",
"@atcute/client/utils/did": "./node_modules/@atcute/client/dist/utils/did.js"
}
}
</script>
Getting these import maps to work is pretty fiddly, I feel like there must be a tool to generate them automatically but I haven’t found one yet. It’s definitely possible to write a script that automatically generates the importmaps using esbuild’s metafile but I haven’t done that and maybe there’s a better way.
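As a starting point, here’s a rough sketch of what a generator could look like (in Python, for illustration): it reads each installed package’s package.json and uses its module (or main) field. It ignores subpath imports like @atcute/client/utils/did, which you’d still have to add by hand, so it’s definitely not a complete tool:
import json
import os
NODE_MODULES = "node_modules"
def package_names():
    # scoped packages (@foo/bar) live one directory deeper
    for entry in os.listdir(NODE_MODULES):
        if entry.startswith("."):
            continue
        if entry.startswith("@"):
            for sub in os.listdir(os.path.join(NODE_MODULES, entry)):
                yield entry + "/" + sub
        else:
            yield entry
imports = {}
for name in package_names():
    pkg_path = os.path.join(NODE_MODULES, name, "package.json")
    if not os.path.exists(pkg_path):
        continue
    with open(pkg_path) as f:
        pkg = json.load(f)
    entry_point = pkg.get("module") or pkg.get("main")
    if entry_point:
        imports[name] = "./" + os.path.normpath(os.path.join(NODE_MODULES, name, entry_point))
print(json.dumps({"imports": imports}, indent=2))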
I decided to set up importmaps yesterday to get github.com/jvns/bsky-oauth-example to work, so there’s some example code in that repo.
Also someone pointed me to Simon Willison’s download-esm, which will download an ES module and rewrite the imports to point to the JS files directly so that you don’t need importmaps. I haven’t tried it yet but it seems like a great idea.
problems with importmaps: too many files
I did run into some problems with using importmaps in the browser though – it needed to download dozens of Javascript files to load my site, and my webserver in development couldn’t keep up for some reason. I kept seeing files fail to load randomly and then had to reload the page and hope that they would succeed this time.
It wasn’t an issue anymore when I deployed my site to production, so I guess it was a problem with my local dev environment.
Also one slightly annoying thing about ES modules in general is that you need to be running a webserver to use them. I’m sure this is for a good reason, but it’s easier when you can just open your index.html file without starting a webserver.
Because of the “too many files” thing, I think using ES modules with importmaps in this way isn’t that appealing to me, but it’s good to know it’s possible.
how to use an ES module without importmaps
If the ES module doesn’t have dependencies then it’s even easier – you don’t need the importmaps! You can just:
- put <script type="module" src="YOURCODE.js"></script> in your HTML. The type="module" is important.
- put import {whatever} from "https://example.com/whatever.js" in YOURCODE.js
alternative: use esbuild
If you don’t want to use importmaps, you can also use a build system like esbuild. I talked about how to do that in Some notes on using esbuild, but this blog post is about ways to avoid build systems completely so I’m not going to talk about that option here. I do still like esbuild though and I think it’s a good option in this case.
what’s the browser support for importmaps?
CanIUse says that importmaps are in “Baseline 2023: newly available across major browsers”, so my sense is that in 2024 that’s still maybe a little bit too new? I think I would use importmaps for some fun experimental code that I only wanted like myself and 12 people to use, but if I wanted my code to be more widely usable I’d use esbuild instead.
example library 3: @atproto/oauth-client-browser
Let’s look at one final example library! This is a different Bluesky auth library than @atcute/oauth-browser-client.
$ npm install @atproto/oauth-client-browser
$ cd node_modules/@atproto/oauth-client-browser/dist
$ ls *js
browser-oauth-client.js browser-oauth-database.js browser-runtime-implementation.js errors.js index.js indexed-db-store.js util.js
Again, it seems like the only real candidate file here is index.js. But this is a different situation from the previous example library! Let’s take a look at index.js. There’s a bunch of stuff like this in it:
__exportStar(require("@atproto/oauth-client"), exports);
__exportStar(require("./browser-oauth-client.js"), exports);
__exportStar(require("./errors.js"), exports);
var util_js_1 = require("./util.js");
This require() syntax is CommonJS syntax, which means that we can’t use this file in the browser at all: we need to use some kind of build step, and ESBuild won’t work either.
Also in this library’s package.json it says "type": "commonjs", which is another way to tell it’s CommonJS.
how to use a CommonJS module with esm.sh
Originally I thought it was impossible to use CommonJS modules without learning a build system, but then someone on Bluesky told me about esm.sh! It’s a CDN that will translate anything into an ES Module. skypack.dev does something similar; I’m not sure what the difference is, but one person mentioned that if one doesn’t work they’ll sometimes try the other one.
For @atproto/oauth-client-browser using it seems pretty simple, I just need to put this in my HTML:
<script type="module" src="script.js"> </script>
and then put this in script.js:
import { BrowserOAuthClient } from "https://esm.sh/@atproto/oauth-client-browser@0.3.0"
It seems to Just Work, which is cool! Of course this is still sort of using a build system – it’s just that esm.sh is running the build instead of me. My main concerns with this approach are:
- I don’t really trust CDNs to keep working forever – usually I like to copy dependencies into my repository so that they don’t go away for some reason in the future.
- I’ve heard of some issues with CDNs having security compromises, which scares me.
- I don’t really understand what esm.sh is doing.
esbuild can also convert CommonJS modules into ES modules
I also learned that you can use esbuild to convert a CommonJS module into an ES module, though there are some limitations – the import { BrowserOAuthClient } from syntax doesn’t work. Here’s a github issue about that.
I think the esbuild approach is probably more appealing to me than the esm.sh approach because it’s a tool that I already have on my computer, so I trust it more. I haven’t experimented with this much yet though.
summary of the three types of files
Here’s a summary of the three types of JS files you might encounter, options for how to use them, and how to identify them.
Unhelpfully a .js or .min.js file extension could be any of these 3 options, so if the file is something.js you need to do more detective work to figure out what you’re dealing with.
- “classic” JS files
  - How to use it:
    - <script src="whatever.js"></script>
  - Ways to identify it:
    - The website has a big friendly banner in its setup instructions saying “Use this with a CDN!” or something
    - A .umd.js extension
    - Just try to put it in a <script src=... tag and see if it works
- ES Modules
  - Ways to use it:
    - If there are no dependencies, just import {whatever} from "./my-module.js" directly in your code
    - If there are dependencies, create an importmap and import {whatever} from "my-module"
      - or use download-esm to remove the need for an importmap
    - Use esbuild or any ES Module bundler
  - Ways to identify it:
    - Look for an import or export statement (not module.exports = ..., that’s CommonJS)
    - An .mjs extension
    - maybe "type": "module" in package.json (though it’s not clear to me which file exactly this refers to)
- CommonJS Modules
  - Ways to use it:
    - Use https://esm.sh to convert it into an ES module, like https://esm.sh/@atproto/oauth-client-browser@0.3.0
    - Use a build somehow (??)
  - Ways to identify it:
    - Look for require() or module.exports = ... in the code
    - A .cjs extension
    - maybe "type": "commonjs" in package.json (though it’s not clear to me which file exactly this refers to)
it’s really nice to have ES modules standardized
The main difference between CommonJS modules and ES modules from my perspective is that ES modules are actually a standard. This makes me feel a lot more confident using them, because browsers commit to backwards compatibility for web standards forever – if I write some code using ES modules today, I can feel sure that it’ll still work the same way in 15 years.
It also makes me feel better about using tooling like esbuild, because even if the esbuild project dies, it’s implementing a standard, so it feels likely that there will be another similar tool in the future that I can replace it with.
the JS community has built a lot of very cool tools
A lot of the time when I talk about this stuff I get responses like “I hate javascript!!! it’s the worst!!!”. But my experience is that there are a lot of great tools for Javascript (I just learned about https://esm.sh yesterday which seems great! I love esbuild!), and that if I take the time to learn how things works I can take advantage of some of those tools and make my life a lot easier.
So the goal of this post is definitely not to complain about Javascript, it’s to understand the landscape so I can use the tooling in a way that feels good to me.
questions I still have
Here are some questions I still have, I’ll add the answers into the post if I learn the answer.
- Is there a tool that automatically generates importmaps for an ES Module that I have set up locally? (apparently yes: jspm)
- How can I convert a CommonJS module into an ES module on my computer, the way https://esm.sh does? (apparently esbuild can sort of do this, though named exports don’t work)
- When people normally build CommonJS modules into regular JS code, what code is doing that? Obviously there are tools like webpack, rollup, esbuild, etc, but do those tools all implement their own JS parsers/static analysis? How many JS parsers are there out there?
- Is there any way to bundle an ES module into a single file (like atcute-client.js), but so that in the browser I can still import multiple different paths from that file (like both @atcute/client/lexicons and @atcute/client)?
all the tools
Here’s a list of every tool we talked about in this post:
- Simon Willison’s download-esm which will download an ES module and convert the imports to point at JS files so you don’t need an importmap
- https://esm.sh/ and skypack.dev
- esbuild
- JSPM can generate importmaps
Writing this post has made me think that even though I usually don’t want to have a build that I run every time I update the project, I might be willing to have a build step (using download-esm or something) that I run only once when setting up the project, and never run again except maybe if I’m updating my dependency versions.
that’s all!
Thanks to Marco Rogers who taught me a lot of the things in this post. I’ve probably made some mistakes in this post and I’d love to know what they are – let me know on Bluesky or Mastodon!
I added a new section to this site a couple weeks ago called TIL (“today I learned”).
the goal: save interesting tools & facts I posted on social media
One kind of thing I like to post on Mastodon/Bluesky is “hey, here’s a cool thing”, like the great SQLite repl litecli, or the fact that cross compiling in Go Just Works and it’s amazing, or cryptographic right answers, or this great diff tool. Usually I don’t want to write a whole blog post about those things because I really don’t have much more to say than “hey this is useful!”
It started to bother me that I didn’t have anywhere to put those things: for example recently I wanted to use diffdiff and I just could not remember what it was called.
the solution: make a new section of this blog
So I quickly made a new folder called /til/, added some custom styling (I wanted to style the posts to look a little bit like a tweet), made a little Rake task to help me create new posts quickly (rake new_til), and set up a separate RSS feed for it.
I think this new section of the blog might be more for myself than anything, now when I forget the link to Cryptographic Right Answers I can hopefully look it up on the TIL page. (you might think “julia, why not use bookmarks??” but I have been failing to use bookmarks for my whole life and I don’t see that changing ever, putting things in public is for whatever reason much easier for me)
So far it’s been working, often I can actually just make a quick post in 2 minutes which was the goal.
inspired by Simon Willison’s TIL blog
My page is inspired by Simon Willison’s great TIL blog, though my TIL posts are a lot shorter.
I don’t necessarily want everything to be archived
This came about because I spent a lot of time on Twitter, so I’ve been thinking about what I want to do about all of my tweets.
I keep reading the advice to “POSSE” (“post on your own site, syndicate elsewhere”), and while I find the idea appealing in principle, for me part of the appeal of social media is that it’s a little bit ephemeral. I can post polls or questions or observations or jokes and then they can just kind of fade away as they become less relevant.
I find it a lot easier to identify specific categories of things that I actually want to have on a Real Website That I Own:
- blog posts here!
- comics at https://wizardzines.com/comics/!
- now TILs at https://jvns.ca/til/
and then let everything else be kind of ephemeral.
I really believe in the advice to make email lists though – the first two (blog posts & comics) both have email lists and RSS feeds that people can subscribe to if they want. I might add a quick summary of any TIL posts from that week to the “blog posts from this week” mailing list.
Here's where you can find me at IETF 121 in Dublin!
Monday
- 9:30 - 11:30 • oauth
- 15:30 - 17:00 • alldispatch
Tuesday
Thursday
- 9:30 - 11:30 • oauth
Hello! I’ve been thinking about the terminal a lot and yesterday I got curious about all these “control codes”, like Ctrl-A, Ctrl-C, Ctrl-W, etc. What’s the deal with all of them?
a table of ASCII control characters
Here’s a table of all 33 ASCII control characters, and what they do on my machine (on Mac OS), more or less. There are about a million caveats, but I’ll talk about what it means and all the problems with this diagram that I know about.
You can also view it as an HTML page (I just made it an image so it would show up in RSS).
different kinds of codes are mixed together
The first surprising thing about this diagram to me is that there are 33 control codes, split into (very roughly speaking) these categories:
- Codes that are handled by the operating system’s terminal driver, for example when the OS sees a 3 (Ctrl-C), it’ll send a SIGINT signal to the current program
- Everything else is passed through to the application as-is and the application can do whatever it wants with them. Some subcategories of those:
  - Codes that correspond to a literal keypress of a key on your keyboard (Enter, Tab, Backspace). For example when you press Enter, your terminal gets sent 13.
  - Codes used by readline: “the application can do whatever it wants” often means “it’ll do more or less what the readline library does, whether the application actually uses readline or not”, so I’ve labelled a bunch of the codes that readline uses
  - Other codes, for example I think Ctrl-X has no standard meaning in the terminal in general but emacs uses it very heavily
There’s no real structure to which codes are in which categories, they’re all just kind of randomly scattered because this evolved organically.
(If you’re curious about readline, I wrote more about readline in entering text in the terminal is complicated, and there are a lot of cheat sheets out there)
there are only 33 control codes
Something else that I find a little surprising is that there are only 33 control codes – A to Z, plus 7 more (@, [, \, ], ^, _, ?). This means that if you want to have for example Ctrl-1 as a keyboard shortcut in a terminal application, that’s not really meaningful – on my machine at least Ctrl-1 is exactly the same thing as just pressing 1, Ctrl-3 is the same as Ctrl-[, etc.
Also Ctrl+Shift+C isn’t a control code – what it does depends on your terminal emulator. On Linux Ctrl-Shift-X is often used by the terminal emulator to copy or open a new tab or paste, for example; it’s not sent to the TTY at all.
Also I use Ctrl+Left Arrow all the time, but that isn’t a control code; instead it sends an ANSI escape sequence (ctrl-[[1;5D), which is a different thing which we absolutely do not have space for in this post.
This “there are only 33 codes” thing is totally different from how keyboard shortcuts work in a GUI, where you can have Ctrl+KEY for any key you want.
the official ASCII names aren’t very meaningful to me
Each of these 33 control codes has a name in ASCII (for example 3 is ETX). When all of these control codes were originally defined, they weren’t being used for computers or terminals at all, they were used for the telegraph machine.
Telegraph machines aren’t the same as UNIX terminals so a lot of the codes were repurposed to mean something else.
Personally I don’t find these ASCII names very useful, because 50% of the time the name in ASCII has no actual relationship to what that code does on UNIX systems today. So it feels easier to just ignore the ASCII names completely instead of trying to figure out which ones still match their original meaning.
It’s hard to use Ctrl-M as a keyboard shortcut
Another thing that’s a bit weird is that Ctrl-M is literally the same as Enter, and Ctrl-I is the same as Tab, which makes it hard to use those two as keyboard shortcuts.
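The reason they collide is just arithmetic: a control code is the key’s ASCII value with everything but the low 5 bits cleared, which you can check in Python:
def ctrl(key):
    # a control code keeps only the low 5 bits of the key's ASCII value
    return ord(key.upper()) & 0x1F
print(ctrl("C"))  # 3, the code the terminal driver turns into SIGINT
print(ctrl("M"))  # 13, a carriage return, which is exactly what Enter sends
print(ctrl("I"))  # 9, a tab, which is exactly what Tab sends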
From some quick research, it seems like some folks do still use Ctrl-I and Ctrl-M as keyboard shortcuts (here’s an example), but to do that you need to configure your terminal emulator to treat them differently than the default.
For me the main takeaway is that if I ever write a terminal application I should avoid Ctrl-I and Ctrl-M as keyboard shortcuts in it.
how to identify what control codes get sent
While writing this I needed to do a bunch of experimenting to figure out what various key combinations did, so I wrote this Python script echo-key.py that will print them out.
There’s probably a more official way but I appreciated having a script I could customize.
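The core of a script like that is putting the terminal into raw (noncanonical) mode and printing each byte as it arrives. Here’s a minimal version of the idea (a sketch, Unix-only, and not the actual echo-key.py):
import sys
import termios
import tty
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
    tty.setraw(fd)  # noncanonical mode: we see every byte immediately
    while True:
        byte = sys.stdin.buffer.read(1)[0]
        # raw mode also turns off newline translation, so print \r\n ourselves
        sys.stdout.write("got byte: %d\r\n" % byte)
        sys.stdout.flush()
        if byte == 3:  # Ctrl-C: raw mode means no SIGINT, so exit manually
            break
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)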
caveat: on canonical vs noncanonical mode
Two of these codes (Ctrl-W and Ctrl-U) are labelled in the table as “handled by the OS”, but actually they’re not always handled by the OS, it depends on whether the terminal is in “canonical” mode or in “noncanonical” mode.
In canonical mode, programs only get input when you press Enter (and the OS is in charge of deleting characters when you press Backspace or Ctrl-W). But in noncanonical mode the program gets input immediately when you press a key, and the Ctrl-W and Ctrl-U codes are passed through to the program to handle any way it wants.
Generally in noncanonical mode the program will handle Ctrl-W and Ctrl-U similarly to how the OS does, but there are some small differences.
Some examples of programs that use canonical mode:
- probably pretty much any noninteractive program, like grep or cat
- git, I think
Examples of programs that use noncanonical mode:
- python3, irb and other REPLs
- your shell
- any full screen TUI like less or vim
caveat: all of the “OS terminal driver” codes are configurable with stty
I said that Ctrl-C sends SIGINT, but technically this is not necessarily true: if you really want to, you can remap all of the codes labelled “OS terminal driver”, plus Backspace, using a tool called stty, and you can view the mappings with stty -a.
Here are the mappings on my machine right now:
$ stty -a
cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>;
eol2 = <undef>; erase = ^?; intr = ^C; kill = ^U; lnext = ^V;
min = 1; quit = ^\; reprint = ^R; start = ^Q; status = ^T;
stop = ^S; susp = ^Z; time = 0; werase = ^W;
I have personally never remapped any of these and I cannot imagine a reason I would (I think it would be a recipe for confusion and disaster for me), but I asked on Mastodon and people said the most common reasons they used stty were:
- fix a broken terminal with stty sane
- set stty erase ^H to change how Backspace works
- set stty ixoff
- some people even map SIGINT to a different key, like their DELETE key
caveat: on signals
Two signals caveats:
- If the ISIG terminal mode is turned off, then the OS won’t send signals. For example vim turns off ISIG
- Apparently on BSDs, there’s an extra control code (Ctrl-T) which sends SIGINFO
You can see which terminal modes a program is setting using strace like this; terminal modes are set with the ioctl system call:
$ strace -tt -o out vim
$ grep ioctl out | grep SET
here are the modes vim sets when it starts (ISIG and ICANON are missing!):
17:43:36.670636 ioctl(0, TCSETS, {c_iflag=IXANY|IMAXBEL|IUTF8,
c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST, c_cflag=B38400|CS8|CREAD,
c_lflag=ECHOK|ECHOCTL|ECHOKE|PENDIN, ...}) = 0
and it resets the modes when it exits:
17:43:38.027284 ioctl(0, TCSETS, {c_iflag=ICRNL|IXANY|IMAXBEL|IUTF8,
c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST|ONLCR, c_cflag=B38400|CS8|CREAD,
c_lflag=ISIG|ICANON|ECHO|ECHOE|ECHOK|IEXTEN|ECHOCTL|ECHOKE|PENDIN, ...}) = 0
I think the specific combination of modes vim is using here might be called “raw mode”, man cfmakeraw talks about that.
there are a lot of conflicts
Related to “there are only 33 codes”, there are a lot of conflicts where different parts of the system want to use the same code for different things. For example, by default Ctrl-S will freeze your screen, but if you turn that off then readline will use Ctrl-S to do a forward search.
Another example is that on my machine sometimes Ctrl-T will send SIGINFO and sometimes it’ll transpose 2 characters and sometimes it’ll do something completely different depending on:
- whether the program has ISIG set
- whether the program uses readline / imitates readline’s behaviour
caveat: on “backspace” and “other backspace”
In this diagram I’ve labelled code 127 as “backspace” and 8 as “other backspace”. Uh, what?
I think this was the single biggest topic of discussion in the replies on Mastodon – apparently there’s a LOT of history to this and I’d never heard of any of it before.
First, here’s how it works on my machine:
- I press the Backspace key
- The TTY gets sent the byte 127, which is called DEL in ASCII
- the OS terminal driver and readline both have 127 mapped to “backspace” (so it works both in canonical mode and noncanonical mode)
- The previous character gets deleted
If I press Ctrl+H, it has the same effect as Backspace if I’m using readline, but in a program without readline support (like cat for instance), it just prints out ^H.
Apparently Step 2 above is different for some folks – their Backspace key sends the byte 8 instead of 127, and so if they want Backspace to work then they need to configure the OS (using stty) to set erase = ^H.
There’s an incredible section of the Debian Policy Manual on keyboard configuration that describes how Delete and Backspace should work according to Debian policy, which seems very similar to how it works on my Mac today. My understanding (via this mastodon post) is that this policy was written in the 90s because there was a lot of confusion about what Backspace should do in the 90s and there needed to be a standard to get everything to work.
There’s a bunch more historical terminal stuff here but that’s all I’ll say for now.
there’s probably a lot more diversity in how this works
I’ve probably missed a bunch more ways that “how it works on my machine” might be different from how it works on other people’s machines, and I’ve probably made some mistakes about how it works on my machine too. But that’s all I’ve got for today.
Some more stuff I know that I’ve left out: according to stty -a, Ctrl-O is “discard”, Ctrl-R is “reprint”, and Ctrl-Y is “dsusp”. I have no idea how to make those actually do anything (pressing them does not do anything obvious, and some people have told me what they used to do historically but it’s not clear to me if they have a use in 2024), and a lot of the time in practice they seem to just be passed through to the application anyway, so I just labelled Ctrl-R and Ctrl-Y as readline.
not all of this is that useful to know
Also I want to say that I think the contents of this post are kind of interesting but I don’t think they’re necessarily that useful. I’ve used the terminal pretty successfully every day for the last 20 years without knowing literally any of this – I just knew what Ctrl-C, Ctrl-D, Ctrl-Z, Ctrl-R, Ctrl-L did in practice (plus maybe Ctrl-A, Ctrl-E and Ctrl-W), and did not worry about the details for the most part, and that was almost always totally fine except when I was trying to use xterm.js.
But I had fun learning about it so maybe it’ll be interesting to you too.
I’ve been having problems for the last 3 years or so where Mess With DNS periodically runs out of memory and gets OOM killed.
This hasn’t been a big priority for me: usually it just goes down for a few minutes while it restarts, and it only happens once a day at most, so I’ve just been ignoring it. But last week it started actually causing a problem so I decided to look into it.
This was kind of a winding road where I learned a lot, so here’s a table of contents:
- there’s about 100MB of memory available
- the problem: OOM killing the backup script
- attempt 1: use SQLite
- attempt 2: use a trie
- attempt 3: make my array use less memory
there’s about 100MB of memory available
I run Mess With DNS on a VM with about 465MB of RAM, which according to ps aux (the RSS column) is split up something like:
- 100MB for PowerDNS
- 200MB for Mess With DNS
- 40MB for hallpass
That leaves about 110MB of memory free.
A while back I set GOMEMLIMIT to 250MB to try to make sure the garbage collector ran if Mess With DNS used more than 250MB of memory, and I think this helped but it didn’t solve everything.
the problem: OOM killing the backup script
A few weeks ago I started backing up Mess With DNS’s database for the first time using restic.
This has been working okay, but since Mess With DNS operates without much extra
memory I think restic
sometimes needed more memory than was available on the
system, and so the backup script sometimes got OOM killed.
This was a problem because:
- backups might be corrupted sometimes
- more importantly, restic takes out a lock when it runs, and so I’d have to manually do an unlock if I wanted the backups to continue working. Doing manual work like this is the #1 thing I try to avoid with all my web services (who has time for that!) so I really wanted to do something about it.
There’s probably more than one solution to this, but I decided to try to make Mess With DNS use less memory so that there was more available memory on the system, mostly because it seemed like a fun problem to try to solve.
what’s using memory: IP addresses
I’d run a memory profile of Mess With DNS a bunch of times in the past, so I knew exactly what was using most of Mess With DNS’s memory: IP addresses.
When it starts, Mess With DNS loads this database where you can look up the ASN of every IP address into memory, so that when it receives a DNS query it can take the source IP address like 74.125.16.248 and tell you that IP address belongs to GOOGLE.
This database by itself used about 117MB of memory, and a simple du told me that was too much – the original text files were only 37MB!
$ du -sh *.tsv
26M ip2asn-v4.tsv
11M ip2asn-v6.tsv
The way it worked originally is that I had an array of these:
type IPRange struct {
	StartIP net.IP
	EndIP   net.IP
	Num     int
	Name    string
	Country string
}
and I searched through it with a binary search to figure out if any of the ranges contained the IP I was looking for. Basically the simplest possible thing and it’s super fast, my machine can do about 9 million lookups per second.
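Roughly, that lookup could look like this – a sketch of the approach, not the actual Mess With DNS code, and it assumes the ranges are sorted by end IP and that every stored address is normalized to 16 bytes:

import (
	"bytes"
	"net"
	"sort"
)

func findASN(ranges []IPRange, ip net.IP) (IPRange, bool) {
	ip = ip.To16() // normalize so bytes.Compare always sees 16-byte addresses
	// find the first range whose EndIP is >= ip
	i := sort.Search(len(ranges), func(i int) bool {
		return bytes.Compare(ranges[i].EndIP, ip) >= 0
	})
	// check that the candidate range actually starts at or before ip
	if i < len(ranges) && bytes.Compare(ranges[i].StartIP, ip) <= 0 {
		return ranges[i], true
	}
	return IPRange{}, false
}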
attempt 1: use SQLite
I’ve been using SQLite recently, so my first thought was – maybe I can store all of this data on disk in an SQLite database, give the tables an index, and that’ll use less memory.
So I:
- wrote a quick Python script using sqlite-utils to import the TSV files into an SQLite database
- adjusted my code to select from the database instead
This did solve the initial memory goal (after a GC it now hardly used any memory at all because the table was on disk!), though I’m not sure how much GC churn this solution would cause if we needed to do a lot of queries at once. I did a quick memory profile and it seemed to allocate about 1KB of memory per lookup.
Let’s talk about the issues I ran into with using SQLite though.
problem: how to store IPv6 addresses
SQLite doesn’t have support for big integers and IPv6 addresses are 128 bits,
so I decided to store them as text. I think BLOB
might have been better, I
originally thought BLOB
s couldn’t be compared but the sqlite docs say they can.
I ended up with this schema:
CREATE TABLE ipv4_ranges (
    start_ip INTEGER NOT NULL,
    end_ip INTEGER NOT NULL,
    asn INTEGER NOT NULL,
    country TEXT NOT NULL,
    name TEXT NOT NULL
);
CREATE TABLE ipv6_ranges (
    start_ip TEXT NOT NULL,
    end_ip TEXT NOT NULL,
    asn INTEGER,
    country TEXT,
    name TEXT
);
CREATE INDEX idx_ipv4_ranges_start_ip ON ipv4_ranges (start_ip);
CREATE INDEX idx_ipv6_ranges_start_ip ON ipv6_ranges (start_ip);
CREATE INDEX idx_ipv4_ranges_end_ip ON ipv4_ranges (end_ip);
CREATE INDEX idx_ipv6_ranges_end_ip ON ipv6_ranges (end_ip);
Also I learned that Python has an ipaddress module, so I could use ipaddress.ip_address(s).exploded to make sure that the IPv6 addresses were expanded so that a string comparison would compare them properly.
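(If you wanted the same normalization from Go, the standard library’s net/netip package can produce the expanded form too – a tiny sketch:)

addr := netip.MustParseAddr("2607:f8b0:4006:824::200e")
// prints 2607:f8b0:4006:0824:0000:0000:0000:200e – fixed width, so a
// plain string comparison orders addresses the same way as a numeric one
fmt.Println(addr.StringExpanded())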
problem: it’s 500x slower
I ran a quick microbenchmark, something like this. It printed out that it could look up 17,000 IPv6 addresses per second, and similarly for IPv4 addresses.
This was pretty discouraging – being able to look up 17k addresses per second is kind of fine (Mess With DNS does not get a lot of traffic), but I compared it to the original binary search code and the original code could do 9 million per second.
ips := []net.IP{}
count := 20000
for i := 0; i < count; i++ {
	// create a random IPv6 address
	bytes := randomBytes()
	ip := net.IP(bytes[:])
	ips = append(ips, ip)
}
now := time.Now()
success := 0
for _, ip := range ips {
	_, err := ranges.FindASN(ip)
	if err == nil {
		success++
	}
}
fmt.Println(success)
elapsed := time.Since(now)
fmt.Println("number per second", float64(count)/elapsed.Seconds())
time for EXPLAIN QUERY PLAN
I’d never really done an EXPLAIN in sqlite, so I thought it would be a fun opportunity to see what the query plan was doing.
sqlite> explain query plan select * from ipv6_ranges where '2607:f8b0:4006:0824:0000:0000:0000:200e' BETWEEN start_ip and end_ip;
QUERY PLAN
`--SEARCH ipv6_ranges USING INDEX idx_ipv6_ranges_end_ip (end_ip>?)
It looks like it’s just using the end_ip index and not the start_ip index, so maybe it makes sense that it’s slower than the binary search.
I tried to figure out if there was a way to make SQLite use both indexes, but I couldn’t find one and maybe it knows best anyway.
At this point I gave up on the SQLite solution, I didn’t love that it was slower and also it’s a lot more complex than just doing a binary search. I felt like I’d rather keep something much more similar to the binary search.
A few things I tried with SQLite that did not cause it to use both indexes:
- using a compound index instead of two separate indexes
- running ANALYZE
- using INTERSECT to intersect the results of start_ip < ? and ? < end_ip. This did make it use both indexes, but it also seemed to make the query literally 1000x slower, probably because it needed to create the results of both subqueries in memory and intersect them.
attempt 2: use a trie
My next idea was to use a trie, because I had some vague idea that maybe a trie would use less memory, and I found this library called ipaddress-go that lets you look up IP addresses using a trie.
I tried using it (here’s the code), but I think I was doing something wildly wrong because, compared to my naive array + binary search:
- it used WAY more memory (800MB to store just the IPv4 addresses)
- it was a lot slower to do the lookups (it could do only 100K/second instead of 9 million/second)
I’m not really sure what went wrong here but I gave up on this approach and decided to just try to make my array use less memory and stick to a simple binary search.
some notes on memory profiling
One thing I learned about memory profiling is that you can use runtime
package to see how much memory is currently allocated in the program. That’s
how I got all the memory numbers in this post. Here’s the code:
func memusage() {
	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("Alloc = %v MiB\n", m.Alloc/1024/1024)
	// write mem.prof
	f, err := os.Create("mem.prof")
	if err != nil {
		log.Fatal(err)
	}
	pprof.WriteHeapProfile(f)
	f.Close()
}
Also I learned that if you use pprof to analyze a heap profile there are two ways to analyze it: you can pass either --alloc_space or --inuse_space to go tool pprof. I don’t know how I didn’t realize this before, but alloc_space will tell you about everything that was allocated, and inuse_space will just include memory that’s currently in use.
Anyway I ran go tool pprof -pdf --inuse_space mem.prof > mem.pdf a lot. Also every time I use pprof I find myself referring to my own intro to pprof, it’s probably the blog post I wrote that I use the most often. I should add --alloc_space and --inuse_space to it.
attempt 3: make my array use less memory
I was storing my ip2asn entries like this:
type IPRange struct {
	StartIP net.IP
	EndIP   net.IP
	Num     int
	Name    string
	Country string
}
I had 3 ideas for ways to improve this:
- There was a lot of repetition of Name and Country, because a lot of IP ranges belong to the same ASN
- net.IP is an []byte under the hood, which felt like it involved an unnecessary pointer, was there a way to inline it into the struct?
- Maybe I didn’t need both the start IP and the end IP, often the ranges were consecutive so maybe I could rearrange things so that I only had the start IP
idea 3.1: deduplicate the Name and Country
I figured I could store the ASN info in an array, and then just store the index into the array in my IPRange struct. Here are the structs so you can see what I mean:
type IPRange struct {
	StartIP netip.Addr
	EndIP   netip.Addr
	ASN     uint32
	Idx     uint32
}

type ASNInfo struct {
	Country string
	Name    string
}

type ASNPool struct {
	asns   []ASNInfo
	lookup map[ASNInfo]uint32
}
This worked! It brought memory usage from 117MB to 65MB – a 50MB savings. I felt good about this.
Here’s all of the code for that part.
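The linked code has the real details, but my paraphrase of the interning part looks roughly like this (structs from above, methods are my guess at the shape):

func (p *ASNPool) Add(info ASNInfo) uint32 {
	// if we’ve seen this ASN before, reuse its index
	if idx, ok := p.lookup[info]; ok {
		return idx
	}
	// otherwise append it and remember where it went
	idx := uint32(len(p.asns))
	p.asns = append(p.asns, info)
	p.lookup[info] = idx
	return idx
}

func (p *ASNPool) Get(idx uint32) ASNInfo {
	return p.asns[idx]
}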
how big are ASNs?
As an aside – I’m storing the ASN in a uint32, is that right? I looked in the ip2asn file and the biggest one seems to be 401307, though there are a few lines that say 4294901931 which is much bigger, but also just inside the range of a uint32. So I can definitely use a uint32.
59.101.179.0 59.101.179.255 4294901931 Unknown AS4294901931
idea 3.2: use netip.Addr instead of net.IP
It turns out that I’m not the only one who felt that net.IP was using an unnecessary amount of memory – in 2021 the folks at Tailscale released a new IP address library for Go which solves this and many other issues. They wrote a great blog post about it.
I discovered (to my delight) that not only does this new IP address library exist and do exactly what I want, it’s also now in the Go standard library as netip.Addr. Switching to netip.Addr was very easy and saved another 20MB of memory, bringing us to 46MB.
I didn’t try my third idea (remove the end IP from the struct) because I’d already been programming for long enough on a Saturday morning and I was happy with my progress.
It’s always such a great feeling when I think “hey, I don’t like this, there must be a better way” and then immediately discover that someone has already made the exact thing I want, thought about it a lot more than me, and implemented it much better than I would have.
all of this was messier in real life
Even though I tried to explain this in a simple linear way “I tried X, then I tried Y, then I tried Z”, that’s kind of a lie – I always try to take my actual debugging process (total chaos) and make it seem more linear and understandable because the reality is just too annoying to write down. It’s more like:
- try sqlite
- try a trie
- second guess everything that I concluded about sqlite, go back and look at the results again
- wait what about indexes
- very very belatedly realize that I can use runtime to check how much memory everything is using, start doing that
- look at the trie again, maybe I misunderstood everything
- give up and go back to binary search
- look at all of the numbers for tries/sqlite again to make sure I didn’t misunderstand
a note on using 512MB of memory
Someone asked why I don’t just give the VM more memory. I could very easily afford to pay for a VM with 1GB of memory, but I feel like 512MB really should be enough (and really that 256MB should be enough!) so I’d rather stay inside that constraint. It’s kind of a fun puzzle.
a few ideas from the replies
Folks had a lot of good ideas I hadn’t thought of. Recording them as inspiration if I feel like having another Fun Performance Day at some point.
- Try Go’s unique package for the ASNPool. Someone tried this and it uses more memory, probably because Go’s pointers are 64 bits
- Try compiling with GOARCH=386 to use 32-bit pointers to save space (maybe in combination with using unique!)
- Interpolation search might be faster than binary search since IP addresses are numeric
- Try the MaxMind db format with mmdbwriter or mmdbctl
- Tailscale’s art routing table package
the result: saved 70MB of memory!
I deployed the new version and now Mess With DNS is using less memory! Hooray!
A few other notes:
- lookups are a little slower – in my microbenchmark they went from 9 million lookups/second to 6 million, maybe because I added a little indirection. Using less memory and a little more CPU seemed like a good tradeoff though.
- it’s still using more memory than the raw text files do (46MB vs 37MB), I guess pointers take up space and that’s okay.
I’m honestly not sure if this will solve all my memory problems, probably not! But I had fun, I learned a few things about SQLite, I still don’t know what to think about tries, and it made me love binary search even more than I already did.
Warning: this is a post about very boring yakshaving, probably only of interest to people who are trying to upgrade Hugo from a very old version to a new version. But what are blogs for if not documenting one’s very boring yakshaves from time to time?
So yesterday I decided to try to upgrade Hugo. There’s no real reason to do this – I’ve been using Hugo version 0.40 to generate this blog since 2018, it works fine, and I don’t have any problems with it. But I thought – maybe it won’t be as hard as I think, and I kind of like a tedious computer task sometimes!
I thought I’d document what I learned along the way in case it’s useful to anyone else doing this very specific migration. I upgraded from Hugo v0.40 (from 2018) to v0.135 (from 2024).
Here are most of the changes I had to make:
change 1: template "theme/partials/thing.html" is now partial thing.html
I had to replace a bunch of instances of {{ template "theme/partials/header.html" . }} with {{ partial "header.html" . }}.
This happened in v0.42:
We have now virtualized the filesystems for project and theme files. This makes everything simpler, faster and more powerful. But it also means that template lookups on the form {{ template “theme/partials/pagination.html” . }} will not work anymore. That syntax has never been documented, so it’s not expected to be in wide use.
change 2: .Data.Pages is now site.RegularPages
This seems to be discussed in the release notes for 0.57.2
I just needed to replace .Data.Pages with site.RegularPages in the template on the homepage as well as in my RSS feed template.
change 3: .Next and .Prev got flipped
I had this comment in the part of my theme where I link to the next/previous blog post:
“next” and “previous” in hugo apparently mean the opposite of what I’d think they’d mean intuitively. I’d expect “next” to mean “in the future” and “previous” to mean “in the past” but it’s the opposite
It looks like they changed this in ad705aac064 so that “next” actually is in the future and “prev” actually is in the past. I definitely find the new behaviour more intuitive.
downloading the Hugo changelogs with a script
Figuring out why/when all of these changes happened was a little difficult. I ended up hacking together a bash script to download all of the changelogs from github as text files, which I could then grep to try to figure out what happened. It turns out it’s pretty easy to get all of the changelogs from the GitHub API.
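My script was bash, but to give a flavour of how easy the API is, here’s the same idea sketched in Go. The releases endpoint is the real GitHub API; everything else is just illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// one page of releases; a real script would follow the Link header for more
	resp, err := http.Get("https://api.github.com/repos/gohugoio/hugo/releases?per_page=100")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var releases []struct {
		TagName string `json:"tag_name"`
		Body    string `json:"body"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&releases); err != nil {
		log.Fatal(err)
	}
	for _, r := range releases {
		fmt.Printf("=== %s ===\n%s\n", r.TagName, r.Body)
	}
}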
So far everything was not so bad – there was also a change around taxonomies that I can’t quite explain, but it was all pretty manageable. Then we got to the really tough one: the markdown renderer.
change 4: the markdown renderer (blackfriday -> goldmark)
The blackfriday markdown renderer (which was previously the default) was removed in v0.100.0. This seems pretty reasonable:
It has been deprecated for a long time, its v1 version is not maintained anymore, and there are many known issues. Goldmark should be a mature replacement by now.
Fixing all my Markdown changes was a huge pain – I ended up having to update 80 different Markdown files (out of 700) so that they would render properly, and I’m not totally sure it was worth it.
why bother switching renderers?
The obvious question here is – why bother even trying to upgrade Hugo at all if I have to switch Markdown renderers? My old site was running totally fine and I think it wasn’t necessarily a good use of time, but the one reason I think it might be useful in the future is that the new renderer (goldmark) uses the CommonMark markdown standard, which I’m hoping will be somewhat more futureproof. So maybe I won’t have to go through this again? We’ll see.
Also it turned out that the new Goldmark renderer does fix some problems I had (but didn’t know that I had) with smart quotes and how lists/blockquotes interact.
finding all the Markdown problems: the process
The hard part of this Markdown change was even figuring out what changed. Almost all of the problems (including #2 and #3 above) just silently broke the site, they didn’t cause any errors or anything. So I had to diff the HTML to hunt them down.
Here’s what I ended up doing:
- Generate the site with the old version, put it in public_old
- Generate the new version, put it in public
- Diff every single HTML file in public/ and public_old with this diff.sh script and put the results in a diffs/ folder
- Run variations on find diffs -type f | xargs cat | grep -C 5 '(31m|32m)' | less -r over and over again to look at every single change until I found something that seemed wrong
- Update the Markdown to fix the problem
- Repeat until everything seemed okay
(the grep 31m|32m thing is searching for red/green text in the diff)
This was very time consuming but it was a little bit fun for some reason so I kept doing it until it seemed like nothing too horrible was left.
the new markdown rules
Here’s a list of every type of Markdown change I had to make. It’s very possible these are all extremely specific to me but it took me a long time to figure them all out so maybe this will be helpful to one other person who finds this in the future.
4.1: mixing HTML and markdown
This doesn’t work anymore (it doesn’t expand the link):
<small>
[a link](https://example.com)
</small>
I need to do this instead, with blank lines separating the Markdown from the HTML tags:
<small>

[a link](https://example.com)

</small>
This works too:
<small> [a link](https://example.com) </small>
4.2: << is changed into «
I didn’t want this so I needed to configure:
markup:
  goldmark:
    extensions:
      typographer:
        leftAngleQuote: '<<'
        rightAngleQuote: '>>'
4.3: nested lists sometimes need 4 space indents
This doesn’t render as a nested list anymore if I only indent by 2 spaces, I need to put 4 spaces.
1. a
    * b
    * c
2. b
The problem is that the amount of indent needed depends on the size of the list markers. Here’s a reference in CommonMark for this.
4.4: blockquotes inside lists work better
Previously the > quote here didn’t render as a blockquote, and with the new renderer it does.
* something
> quote
* something else
I found a bunch of Markdown that had been kind of broken (which I hadn’t noticed) that works better with the new renderer, and this is an example of that.
Lists inside blockquotes also seem to work better.
4.5: headings inside lists
Previously this didn’t render as a heading, but now it does. So I needed to escape the # (writing \# instead) so it wouldn’t be parsed as a heading.
* # passengers: 20
4.6: + or 1) at the beginning of the line makes it a list
I had something which looked like this:
`1 / (1
+ exp(-1)) = 0.73`
With Blackfriday it rendered like this:
<p><code>1 / (1
+ exp(-1)) = 0.73</code></p>
and with Goldmark it rendered like this:
<p>`1 / (1</p>
<ul>
<li>exp(-1)) = 0.73`</li>
</ul>
Same thing if there was an accidental 1) at the beginning of a line, like in this Markdown snippet:
I set up a small Hadoop cluster (1 master, 2 workers, replication set to
1) on
To fix this I just had to rewrap the line so that the + wasn’t the first character.
The Markdown is formatted this way because I wrap my Markdown to 80 characters a lot and the wrapping isn’t very context sensitive.
4.7: no more smart quotes in code blocks
There were a bunch of places where the old renderer (Blackfriday) was doing unwanted things in code blocks like replacing ... with … or replacing quotes with smart quotes. I hadn’t realized this was happening and I was very happy to have it fixed.
4.8: better quote management
The way this gets rendered got better:
"Oh, *interesting*!"
- old: “Oh, interesting!“
- new: “Oh, interesting!”
Before there were two left smart quotes, now the quotes match.
4.9: images are no longer wrapped in a p tag
Previously if I had an image like this:
<img src="https://jvns.ca/images/rustboot1.png">
it would get wrapped in a <p> tag, now it doesn’t anymore. I dealt with this just by adding a margin-bottom: 0.75em to images in the CSS, hopefully that’ll make them display well enough.
4.10: <br> is now wrapped in a p tag
Previously this wouldn’t get wrapped in a p tag, but now it seems to:
<br><br>
I just gave up on fixing this though and resigned myself to maybe having some extra space in some cases. Maybe I’ll try to fix it later if I feel like another yakshave.
4.11: some more goldmark settings
I also needed to
- turn off code highlighting (because it wasn’t working properly and I didn’t have it before anyway)
- use the old “blackfriday” method to generate heading IDs so they didn’t change
- allow raw HTML in my markdown
Here’s what I needed to add to my config.yaml to do all that:
markup:
  highlight:
    codeFences: false
  goldmark:
    renderer:
      unsafe: true
    parser:
      autoHeadingIDType: blackfriday
Maybe I’ll try to get syntax highlighting working one day, who knows. I might prefer having it off though.
a little script to compare blackfriday and goldmark
I also wrote a little program to compare the Blackfriday and Goldmark output for various markdown snippets, here it is in a gist.
It’s not really configured the exact same way Blackfriday and Goldmark were in my Hugo versions, but it was still helpful to have to help me understand what was going on.
a quick note on maintaining themes
My approach to themes in Hugo has been:
- pay someone to make a nice design for the site (for example wizardzines.com was designed by Melody Starling)
- use a totally custom theme
- commit that theme to the same Github repo as the site
So I just need to edit the theme files to fix any problems. Also I wrote a lot of the theme myself so I’m pretty familiar with how it works.
Relying on someone else to keep a theme updated feels kind of scary to me, I think if I were using a third-party theme I’d just copy the code into my site’s github repo and then maintain it myself.
which static site generators have better backwards compatibility?
I asked on Mastodon if anyone had used a static site generator with good backwards compatibility.
The main answers seemed to be Jekyll and 11ty. Several people said they’d been using Jekyll for 10 years without any issues, and 11ty says it has stability as a core goal.
I think a big factor in how appealing Jekyll/11ty are is how easy it is for you to maintain a working Ruby / Node environment on your computer: part of the reason I stopped using Jekyll was that I got tired of having to maintain a working Ruby installation. But I imagine this wouldn’t be a problem for a Ruby or Node developer.
Several people said that they don’t build their Jekyll site locally at all – they just use GitHub Pages to build it.
that’s it!
Overall I’ve been happy with Hugo – I started using it because it had fast build times and it was a static binary, and both of those things are still extremely useful to me. I might have spent 10 hours on this upgrade, but I’ve probably spent 1000+ hours writing blog posts without thinking about Hugo at all so that seems like an extremely reasonable ratio.
I find it hard to be too mad about the backwards incompatible changes, most of them were quite a long time ago, Hugo does a great job of making their old releases available so you can use the old release if you want, and the most difficult one is removing support for the blackfriday Markdown renderer in favour of using something CommonMark-compliant, which seems pretty reasonable to me even if it is a huge pain.
But it did take a long time and I don’t think I’d particularly recommend moving 700 blog posts to a new Markdown renderer unless you’re really in the mood for a lot of computer suffering for some reason.
The new renderer did fix a bunch of problems so I think overall it might be a good thing, even if I’ll have to remember to make 2 changes to how I write Markdown (4.1 and 4.3).
Also I’m still using Hugo 0.54 for https://wizardzines.com so maybe these notes will be useful to Future Me if I ever feel like upgrading Hugo for that site.
Hopefully I didn’t break too many things on the blog by doing this, let me know if you see anything broken!
Yesterday I was thinking about how long it took me to get a colorscheme in my terminal that I was mostly happy with (SO MANY YEARS), and it made me wonder what about terminal colours made it so hard.
So I asked people on Mastodon what problems they’ve run into with colours in the terminal, and I got a ton of interesting responses! Let’s talk about some of the problems and a few possible ways to fix them.
problem 1: blue on black
One of the top complaints was “blue on black is hard to read”. Here’s an example of that: if I open Terminal.app, set the background to black, and run ls, the directories are displayed in a blue that isn’t that easy to read:
To understand why we’re seeing this blue, let’s talk about ANSI colours!
the 16 ANSI colours
Your terminal has 16 numbered colours – black, red, green, yellow, blue, magenta, cyan, white, and a “bright” version of each of those.
Programs can use them by printing out an “ANSI escape code” – for example if you want to see each of the 16 colours in your terminal, you can run this Python program:
def color(num, text):
    return f"\033[38;5;{num}m{text}\033[0m"

for i in range(16):
    print(color(i, f"number {i:02}"))
what are the ANSI colours?
This made me wonder – if blue is colour number 4, who decides what hex color that should correspond to?
The answer seems to be “there’s no standard, terminal emulators just choose colours and it’s not very consistent”. Here’s a screenshot of a table from Wikipedia, where you can see that there’s a lot of variation:
problem 1.5: bright yellow on white
Bright yellow on white is even worse than blue on black, here’s what I get in a terminal with the default settings:
That’s almost impossible to read (and some other colours like light green cause similar issues), so let’s talk about solutions!
two ways to reconfigure your colours
If you’re annoyed by these colour contrast issues (or maybe you just think the default ANSI colours are ugly), you might think – well, I’ll just choose a different “blue” and pick something I like better!
There are two ways you can do this:
Way 1: Configure your terminal emulator: I think most modern terminal emulators have a way to reconfigure the colours, and some of them even come with some preinstalled themes that you might like better than the defaults.
Way 2: Run a shell script: There are ANSI escape codes that you can print out to tell your terminal emulator to reconfigure its colours. Here’s a shell script that does that, from the base16-shell project.
You can see that it has a few different conventions for changing the colours – I guess different terminal emulators have different escape codes for changing their colour palette, and so the script is trying to pick the right style of escape code based on the TERM environment variable.
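For a flavour of what those escape codes look like: one common convention is the “OSC 4” sequence, which remaps a single palette entry. A tiny Go sketch (the colour value is just an example, and not every terminal supports this):

// remap palette colour 4 (“blue”) to a lighter blue
fmt.Print("\033]4;4;rgb:26/8b/d2\a")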
what are the pros and cons of the 2 ways of configuring your colours?
I prefer to use the “shell script” method, because:
- if I switch terminal emulators for some reason, I don’t need to learn a different configuration system, my colours still Just Work
- I use base16-shell with base16-vim to make my vim colours match my terminal colours, which is convenient
some advantages of configuring colours in your terminal emulator:
- if you use a popular terminal emulator, there are probably a lot more nice terminal themes out there that you can choose from
- not all terminal emulators support the “shell script method”, and even if they do, the results can be a little inconsistent
This is what my shell has looked like for probably the last 5 years (using the solarized light base16 theme), and I’m pretty happy with it. Here’s htop:
Okay, so let’s say you’ve found a terminal colorscheme that you like. What else can go wrong?
problem 2: programs using 256 colours
Here’s what some output of fd, a find alternative, looks like in my colorscheme:
The contrast is pretty bad here, and I definitely don’t have that lime green in my normal colorscheme. What’s going on?
We can see what color codes fd is using by using the unbuffer program to capture its output including the color codes:
$ unbuffer fd . > out
$ vim out
^[[38;5;48mbad-again.sh^[[0m
^[[38;5;48mbad.sh^[[0m
^[[38;5;48mbetter.sh^[[0m
out
^[[38;5;48 means “set the foreground color to color 48”. Terminals don’t only have 16 colours – many terminals these days actually have 3 ways of specifying colours:
- the 16 ANSI colours we already talked about
- an extended set of 256 colours
- a further extended set of 24-bit hex colours, like #ffea03
So fd is using one of the colours from the extended 256-color set. bat (a cat alternative) does something similar – here’s what it looks like by default in my terminal.
This looks fine though and it really seems like it’s trying to work well with a variety of terminal themes.
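If you want to see the three flavours side by side, here’s a small Go snippet that prints one line with each kind of escape code (the specific colours are arbitrary picks):

// 16-colour: SGR code 34 is “blue”
fmt.Print("\033[34mANSI blue (colour 4)\033[0m\n")
// 256-colour: 38;5;<n> sets the foreground to palette entry n
fmt.Print("\033[38;5;48m256-colour number 48\033[0m\n")
// 24-bit: 38;2;<r>;<g>;<b> sets an exact RGB foreground
fmt.Print("\033[38;2;255;234;3m24-bit #ffea03\033[0m\n")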
some newer tools seem to have theme support
I think it’s interesting that some of these newer terminal tools (fd, bat, delta, and probably more) have support for arbitrary custom themes. I guess the downside of this approach is that the default theme might clash with your terminal’s background, but the upside is that it gives you a lot more control over theming the tool’s output than just choosing 16 ANSI colours.
I don’t really use bat, but if I did I’d probably use bat --theme ansi to just use the ANSI colours that I have set in my normal terminal colorscheme.
problem 3: the grays in Solarized
A bunch of people on Mastodon mentioned a specific issue with grays in the Solarized theme: when I list a directory, the base16 Solarized Light theme looks like this:
but iTerm’s default Solarized Light theme looks like this:
This is because in the iTerm theme (which is the original Solarized design), colors 9-14 (the “bright blue”, “bright red”, etc) are mapped to a series of grays, and when I run ls, it’s trying to use those “bright” colours to color my directories and executables.
My best guess for why the original Solarized theme is designed this way is to make the grays available to the vim Solarized colorscheme.
I’m pretty sure I prefer the modified base16 version I use where the “bright” colours are actually colours instead of all being shades of gray though. (I didn’t actually realize the version I was using wasn’t the “original” Solarized theme until I wrote this post)
In any case I really love Solarized and I’m very happy it exists so that I can use a modified version of it.
problem 4: a vim theme that doesn’t match the terminal background
If my vim theme has a different background colour than my terminal theme, I get this ugly border, like this:
This one is a pretty minor issue though and I think making your terminal background match your vim background is pretty straightforward.
problem 5: programs setting a background color
A few people mentioned problems with terminal applications setting an unwanted background colour, so let’s look at an example of that.
Here ngrok has set the background to color #16 (“black”), but the base16-shell script I use sets color 16 to be bright orange, so I get this, which is pretty bad:
I think the intention is for ngrok to look something like this:
I think base16-shell sets color #16 to orange (instead of black) so that it can provide extra colours for use by base16-vim.
This feels reasonable to me – I use base16-vim in the terminal, so I guess I’m using that feature and it’s probably more important to me than ngrok (which I rarely use) behaving a bit weirdly.
This particular issue is a maybe obscure clash between ngrok and my colorscheme, but I think this kind of clash is pretty common when a program sets an ANSI background color that the user has remapped for some reason.
a nice solution to contrast issues: “minimum contrast”
A bunch of terminals (iTerm2, tabby, kitty’s text_fg_override_threshold, and folks tell me also Ghostty and Windows Terminal) have a “minimum contrast” feature that will automatically adjust colours to make sure they have enough contrast.
Here’s an example from iTerm. This ngrok accident from before has pretty bad contrast, I find it pretty difficult to read:
With “minimum contrast” set to 40 in iTerm, it looks like this instead:
I didn’t have minimum contrast turned on before but I just turned it on today because it makes such a big difference when something goes wrong with colours in the terminal.
problem 6: TERM being set to the wrong thing
A few people mentioned that they’ll SSH into a system that doesn’t support the TERM environment variable that they have set locally, and then the colours won’t work.
I think the way TERM works is that systems have a terminfo database, so if the value of the TERM environment variable isn’t in the system’s terminfo database, then it won’t know how to output colours for that terminal. I don’t know too much about terminfo, but someone linked me to this terminfo rant that talks about a few other issues with terminfo.
I don’t have a system on hand to reproduce this one so I can’t say for sure how to fix it, but this stackoverflow question suggests running something like TERM=xterm ssh instead of ssh.
problem 7: picking “good” colours is hard
A couple of problems people mentioned with designing / finding terminal colorschemes:
- some folks are colorblind and have trouble finding an appropriate colorscheme
- accidentally making the background color too close to the cursor or selection color, so they’re hard to find
- generally finding colours that work with every program is a struggle (for example you can see me having a problem with this with ngrok above!)
problem 8: making nethack/mc look right
Another problem people mentioned is using a program like nethack or midnight commander which you might expect to have a specific colourscheme based on the default ANSI terminal colours.
For example, midnight commander has a really specific classic look:
But in my Solarized theme, midnight commander looks like this:
The Solarized version feels like it could be disorienting if you’re very used to the “classic” look.
One solution Simon Tatham mentioned to this is using some palette customization ANSI codes (like the ones base16 uses that I talked about earlier) to change the color palette right before starting the program, for example remapping yellow to a brighter yellow before starting Nethack so that the yellow characters look better.
problem 9: commands disabling colours when writing to a pipe
If I run fd | less, I see something like this, with the colours disabled.
In general I find this useful – if I pipe a command to grep, I don’t want it to print out all those color escape codes, I just want the plain text. But what if you want to see the colours?
To see the colours, you can run unbuffer fd | less -r! I just learned about unbuffer recently and I think it’s really cool: unbuffer opens a tty for the command to write to so that it thinks it’s writing to a TTY. It also fixes issues with programs buffering their output when writing to a pipe, which is why it’s called unbuffer.
Here’s what the output of unbuffer fd | less -r looks like for me:
Also some commands (including fd) support a --color=always flag which will force them to always print out the colours.
problem 10: unwanted colour in ls and other commands
Some people mentioned that they don’t want ls to use colour at all, perhaps because ls uses blue, it’s hard to read on black, and maybe they don’t feel like customizing their terminal’s colourscheme to make the blue more readable or just don’t find the use of colour helpful.
Some possible solutions to this one:
- you can run ls --color=never, which is probably easiest
- you can also set LS_COLORS to customize the colours used by ls. I think some other programs other than ls support the LS_COLORS environment variable too.
- also some programs support setting NO_COLOR=true (there’s a list here)
Here’s an example of running LS_COLORS="fi=0:di=0:ln=0:pi=0:so=0:bd=0:cd=0:or=0:ex=0" ls:
problem 11: the colours in vim
I used to have a lot of problems with configuring my colours in vim – I’d set up my terminal colours in a way that I thought was okay, and then I’d start vim and it would just be a disaster.
I think what was going on here is that today, there are two ways to set up a vim colorscheme in the terminal:
- using your ANSI terminal colours – you tell vim which ANSI colour number to use for the background, for functions, etc.
- using 24-bit hex colours – instead of ANSI terminal colours, the vim colorscheme can use hex codes like #faea99 directly
20 years ago when I started using vim, terminals with 24-bit hex color support were a lot less common (or maybe they didn’t exist at all), and vim certainly didn’t have support for using 24-bit colour in the terminal. From some quick searching through git, it looks like vim added support for 24-bit colour in 2016 – just 8 years ago!
So to get colours to work properly in vim before 2016, you needed to synchronize your terminal colorscheme and your vim colorscheme. Here’s what that looked like: the colorscheme needed to map the vim color classes like cterm05 to ANSI colour numbers.
But in 2024, the story is really different! Vim (and Neovim, which I use now) support 24-bit colours, and as of Neovim 0.10 (released in May 2024), the termguicolors setting (which tells Vim to use 24-bit hex colours for colorschemes) is turned on by default in any terminal with 24-bit color support.
So this “you need to synchronize your terminal colorscheme and your vim colorscheme” problem is not an issue anymore for me in 2024, since I don’t plan to use terminals without 24-bit color support in the future.
The biggest consequence for me of this whole thing is that I don’t need base16 to set colors 16-21 to weird stuff anymore to integrate with vim – I can just use a terminal theme and a vim theme, and as long as the two themes use similar colours (so it’s not jarring for me to switch between them) there’s no problem.
I think I can just remove those parts from my base16 shell script and totally avoid the problem with ngrok and the weird orange background I talked about above.
some more problems I left out
I think there are a lot of issues around the intersection of multiple programs, like using some combination tmux/ssh/vim that I couldn’t figure out how to reproduce well enough to talk about them. Also I’m sure I missed a lot of other things too.
base16 has really worked for me
I’ve personally had a lot of success with using base16-shell with base16-vim – I just need to add a couple of lines to my fish config to set it up (+ a few .vimrc lines) and then I can move on and accept any remaining problems that that doesn’t solve.
I don’t think base16 is for everyone though, some limitations I’m aware of with base16 that might make it not work for you:
- it comes with a limited set of builtin themes and you might not like any of them
- the Solarized base16 theme (and maybe all of the themes?) sets the “bright” ANSI colours to be exactly the same as the normal colours, which might cause a problem if you’re relying on the “bright” colours to be different from the regular ones
- it sets colours 16-21 in order to give the vim colorschemes from base16-vim access to more colours, which might not be relevant if you always use a terminal with 24-bit color support, and can cause problems like the ngrok issue above
- also the way it sets colours 16-21 could be a problem in terminals that don’t have 256-color support, like the linux framebuffer terminal
Apparently there’s a community fork of base16 called tinted-theming, which I haven’t looked into much yet.
some other colorscheme tools
Just one so far but I’ll link more if people tell me about them:
- rootloops.sh for generating colorschemes (and “let’s create a terminal color scheme”)
- Some popular colorschemes (according to people I asked on Mastodon): Catppuccin, Monokai, Gruvbox, Dracula, Modus (a high contrast theme), Tokyo Night, Nord, Rosé Pine
okay, that was a lot
We talked about a lot in this post and while I think learning about all these details is kind of fun if I’m in the mood to do a deep dive, I find it SO FRUSTRATING to deal with it when I just want my colours to work! Being surprised by unreadable text and having to find a workaround is just not my idea of a good day.
Personally I’m a zero-configuration kind of person and it’s not that appealing to me to have to put together a lot of custom configuration just to make my colours in the terminal look acceptable. I’d much rather just have some reasonable defaults that I don’t have to change.
minimum contrast seems like an amazing feature
My one big takeaway from writing this was to turn on “minimum contrast” in my terminal, I think it’s going to fix most of the occasional accidental unreadable text issues I run into and I’m pretty excited about it.
I spent a lot of time in the past couple of weeks working on a website in Go that may or may not ever see the light of day, but I learned a couple of things along the way I wanted to write down. Here they are:
go 1.22 now has better routing
I’ve never felt motivated to learn any of the Go routing libraries (gorilla/mux, chi, etc), so I’ve been doing all my routing by hand, like this.
// DELETE /records:
case r.Method == "DELETE" && n == 1 && p[0] == "records":
	if !requireLogin(username, r.URL.Path, r, w) {
		return
	}
	deleteAllRecords(ctx, username, rs, w, r)
// POST /records/<ID>
case r.Method == "POST" && n == 2 && p[0] == "records" && len(p[1]) > 0:
	if !requireLogin(username, r.URL.Path, r, w) {
		return
	}
	updateRecord(ctx, username, p[1], rs, w, r)
But apparently as of Go 1.22, Go now has better support for routing in the standard library, so that code can be rewritten something like this:
mux.HandleFunc("DELETE /records/", app.deleteAllRecords)
mux.HandleFunc("POST /records/{record_id}", app.updateRecord)
Though it would also need a login middleware, so maybe something more like this, with a requireLogin middleware.
mux.Handle("DELETE /records/", requireLogin(http.HandlerFunc(app.deleteAllRecords)))
a gotcha with the built-in router: redirects with trailing slashes
One annoying gotcha I ran into was: if I make a route for /records/, then a request for /records will be redirected to /records/.
I ran into an issue with this where sending a POST request to /records redirected to a GET request for /records/, which broke the POST request because it removed the request body. Thankfully Xe Iaso wrote a blog post about the exact same issue which made it easier to debug.
I think the solution to this is just to use API endpoints like POST /records instead of POST /records/, which seems like a more normal design anyway.
sqlc automatically generates code for my db queries
I got a little bit tired of writing so much boilerplate for my SQL queries, but I didn’t really feel like learning an ORM, because I know what SQL queries I want to write, and I didn’t feel like learning the ORM’s conventions for translating things into SQL queries.
But then I found sqlc, which will compile a query like this:
-- name: GetVariant :one
SELECT *
FROM variants
WHERE id = ?;
into Go code like this:
const getVariant = `-- name: GetVariant :one
SELECT id, created_at, updated_at, disabled, product_name, variant_name
FROM variants
WHERE id = ?
`

func (q *Queries) GetVariant(ctx context.Context, id int64) (Variant, error) {
	row := q.db.QueryRowContext(ctx, getVariant, id)
	var i Variant
	err := row.Scan(
		&i.ID,
		&i.CreatedAt,
		&i.UpdatedAt,
		&i.Disabled,
		&i.ProductName,
		&i.VariantName,
	)
	return i, err
}
What I like about this is that if I’m ever unsure about what Go code to write for a given SQL query, I can just write the query I want, read the generated function and it’ll tell me exactly what to do to call it. It feels much easier to me than trying to dig through the ORM’s documentation to figure out how to construct the SQL query I want.
Reading Brandur’s sqlc notes from 2024 also gave me some confidence that this is a workable path for my tiny programs. That post gives a really helpful example of how to conditionally update fields in a table using CASE statements (for example if you have a table with 20 columns and you only want to update 3 of them).
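Calling the generated code then looks roughly like this (assuming the generated package is named db and conn is a *sql.DB – both names are whatever you configure in sqlc):

// New is generated by sqlc and wraps the database handle
queries := db.New(conn)
variant, err := queries.GetVariant(ctx, 42)
if err != nil {
	log.Fatal(err)
}
fmt.Println(variant.ProductName)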
sqlite tips
Someone on Mastodon linked me to this post called Optimizing sqlite for servers. My projects are small and I’m not so concerned about performance, but my main takeaways were:
- have a dedicated object for writing to the database, and run db.SetMaxOpenConns(1) on it. I learned the hard way that if I don’t do this then I’ll get SQLITE_BUSY errors from two threads trying to write to the db at the same time.
- if I want to make reads faster, I could have 2 separate db objects, one for writing and one for reading (there’s a sketch of this setup at the end of this section)
There are more tips in that post that seem useful (like “COUNT queries are slow” and “Use STRICT tables”), but I haven’t done those yet.
Also sometimes if I have two tables where I know I’ll never need to do a JOIN between them, I’ll just put them in separate databases so that I can connect to them independently.
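Here’s a minimal sketch of the reader/writer split, assuming the mattn/go-sqlite3 driver (any database/sql SQLite driver would work the same way):

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3"
)

func openDBs(path string) (reader, writer *sql.DB, err error) {
	writer, err = sql.Open("sqlite3", path)
	if err != nil {
		return nil, nil, err
	}
	// serialize all writes through one connection to avoid SQLITE_BUSY
	writer.SetMaxOpenConns(1)

	reader, err = sql.Open("sqlite3", path)
	if err != nil {
		return nil, nil, err
	}
	return reader, writer, nil
}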
Go 1.19 introduced a way to set a GC memory limit
I run all of my Go projects in VMs with relatively little memory, like 256MB or 512MB. I ran into an issue where my application kept getting OOM killed and it was confusing – did I have a memory leak? What?
After some Googling, I realized that maybe I didn’t have a memory leak, maybe I just needed to reconfigure the garbage collector! It turns out that by default (according to A Guide to the Go Garbage Collector), Go’s garbage collector will let the application allocate memory up to 2x the current heap size.
Mess With DNS’s base heap size is around 170MB and the amount of memory free on the VM is around 160MB right now, so if its memory doubled, it’d get OOM killed.
In Go 1.19, they added a way to tell Go “hey, if the application starts using this much memory, run a GC”. So I set the GC memory limit to 250MB and it seems to have resulted in the application getting OOM killed less often:
export GOMEMLIMIT=250MiB
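(The same limit can also be set from inside the program via runtime/debug instead of the environment variable – a sketch:)

import "runtime/debug"

func init() {
	// equivalent to GOMEMLIMIT=250MiB
	debug.SetMemoryLimit(250 << 20)
}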
some reasons I like making websites in Go
I’ve been making tiny websites (like the nginx playground) in Go on and off for the last 4 years or so and it’s really been working for me. I think I like it because:
- there’s just 1 static binary, all I need to do to deploy it is copy the binary. If there are static files I can just embed them in the binary with embed.
- there’s a built-in webserver that’s okay to use in production, so I don’t need to configure WSGI or whatever to get it to work. I can just put it behind Caddy or run it on fly.io or whatever.
- Go’s toolchain is very easy to install, I can just do apt-get install golang-go or whatever and then a go build will build my project
- it feels like there’s very little to remember to start sending HTTP responses – basically all there is are functions like Serve(w http.ResponseWriter, r *http.Request) which read the request and send a response. If I need to remember some detail of how exactly that’s accomplished, I just have to read the function!
- also net/http is in the standard library, so you can start making websites without installing any libraries at all. I really appreciate this one.
- Go is a pretty systems-y language, so if I need to run an ioctl or something that’s easy to do
In general everything about it feels like it makes projects easy to work on for 5 days, abandon for 2 years, and then get back into writing code without a lot of problems.
For contrast, I’ve tried to learn Rails a couple of times and I really want to love Rails – I’ve made a couple of toy websites in Rails and it’s always felt like a really magical experience. But ultimately when I come back to those projects I can’t remember how anything works and I just end up giving up. It feels easier to me to come back to my Go projects that are full of a lot of repetitive boilerplate, because at least I can read the code and figure out how it works.
things I haven’t figured out yet
some things I haven’t done much of yet in Go:
- rendering HTML templates: usually my Go servers are just APIs and I make the frontend a single-page app with Vue. I’ve used html/template a lot in Hugo (which I’ve used for this blog for the last 8 years) but I’m still not sure how I feel about it.
- I’ve never made a real login system, usually my servers don’t have users at all.
- I’ve never tried to implement CSRF
In general I’m not sure how to implement security-sensitive features so I don’t start projects which need login/CSRF/etc. I imagine this is where a framework would help.
it’s cool to see the new features Go has been adding
Both of the Go features I mentioned in this post (GOMEMLIMIT and the routing) are new in the last couple of years and I didn’t notice when they came out. It makes me think I should pay closer attention to the release notes for new Go versions.
I wrote about how much I love fish in this blog post from 2017 and, 7 years of using it every day later, I’ve found even more reasons to love it. So I thought I’d write a new post with both the old reasons I loved it and some new reasons.
This came up today because I was trying to figure out why my terminal doesn’t break anymore when I cat a binary to my terminal, the answer was “fish fixes the terminal!”, and I just thought that was really nice.
1. no configuration
In 10 years of using fish I have never found a single thing I wanted to configure. It just works the way I want. My fish config file just has:
- environment variables
- aliases (alias ls eza, alias vim nvim, etc)
- the occasional direnv hook fish | source to integrate a tool like direnv
- a script I run to set up my terminal colours
I’ve been told that configuring things in fish is really easy if you ever do want to configure something though.
2. autosuggestions from my shell history
My absolute favourite thing about fish is that as I type, it’ll automatically suggest (in light grey) a matching command that I ran recently. I can press the right arrow key to accept the completion, or keep typing to ignore it.
Here’s what that looks like. In this example I just typed the “v” key and it guessed that I want to run the previous vim command again.
2.5 “smart” shell autosuggestions
One of my favourite subtle autocomplete features is how fish handles autocompleting commands that contain paths in them. For example, if I run:
$ ls blah.txt
that command will only be autocompleted in directories that contain blah.txt – it won’t show up in a different directory. (here’s a short comment about how it works)
As an example, if in this directory I type bash scripts/, it’ll only suggest history commands including files that actually exist in my blog’s scripts folder, and not the dozens of other irrelevant scripts/ commands I’ve run in other folders.
I didn’t understand exactly how this worked until last week, it just felt like fish was magically able to suggest the right commands. It still feels a little like magic and I love it.
3. pasting multiline commands
If I copy and paste multiple lines, bash will run them all, like this:
[bork@grapefruit linux-playground (main)]$ echo hi
hi
[bork@grapefruit linux-playground (main)]$ touch blah
[bork@grapefruit linux-playground (main)]$ echo hi
hi
This is a bit alarming – what if I didn’t actually want to run all those commands?
Fish will paste them all at a single prompt, so that I can press Enter if I actually want to run them. Much less scary.
bork@grapefruit ~/work/> echo hi
touch blah
echo hi
4. nice tab completion
If I run ls and press tab, it’ll display all the filenames in a nice grid. I can use either Tab, Shift+Tab, or the arrow keys to navigate the grid.
Also, I can tab complete from the middle of a filename – if the filename starts with a weird character (or if it’s just not very unique), I can type some characters from the middle and press tab.
Here’s what the tab completion looks like:
bork@grapefruit ~/work/> ls
api/ blah.py fly.toml README.md
blah Dockerfile frontend/ test_websocket.sh
I honestly don’t complete things other than filenames very much so I can’t speak to that, but I’ve found the experience of tab completing filenames to be very good.
5. nice default prompt (including git integration)
Fish’s default prompt includes everything I want:
- username
- hostname
- current folder
- git integration
- status of last command exit (if the last command failed)
Here’s a screenshot with a few different variations on the default prompt, including if the last command was interrupted (the SIGINT) or failed.
6. nice history defaults
In bash, the maximum history size is 500 by default, presumably because computers used to be slow and not have a lot of disk space. Also, by default, commands don’t get added to your history until you end your session. So if your computer crashes, you lose some history.
In fish:
- the default history size is 256,000 commands. I don’t see any reason I’d ever need more.
- if you open a new tab, everything you’ve ever run (including commands in open sessions) is immediately available to you
- in an existing session, the history search will only include commands from the current session, plus everything that was in history at the time that you started the shell
I’m not sure how clearly I’m explaining how fish’s history system works here, but it feels really good to me in practice. My impression is that the way it’s implemented is the commands are continually added to the history file, but fish only loads the history file once, on startup.
I’ll mention here that if you want to have a fancier history system in another shell it might be worth checking out atuin or fzf.
7. press up arrow to search history
I also like fish’s interface for searching history: for example if I want to edit my fish config file, I can just type:
$ config.fish
and then press the up arrow to go back to the last command that included config.fish. That’ll complete to:
$ vim ~/.config/fish/config.fish
and I’m done. This isn’t so different from using Ctrl+R in bash to search your history but I think I like it a little better over all, maybe because Ctrl+R has some behaviours that I find confusing (for example you can end up accidentally editing your history which I don’t like).
8. the terminal doesn’t break
I used to run into issues with bash where I’d accidentally cat a binary to the terminal, and it would break the terminal.
Every time fish displays a prompt, it’ll try to fix up your terminal so that you don’t end up in weird situations like this. I think this is some of the code in fish to prevent broken terminals.
Some things that it does are:
- turn on echo so that you can see the characters you type
- make sure that newlines work properly so that you don’t get that weird staircase effect
- reset your terminal background colour, etc
I don’t think I’ve run into any of these “my terminal is broken” issues in a very long time, and I actually didn’t even realize that this was because of fish – I thought that things somehow magically just got better, or maybe I wasn’t making as many mistakes. But I think it was mostly fish saving me from myself, and I really appreciate that.
9. Ctrl+S is disabled
Also related to terminals breaking: fish disables Ctrl+S (which freezes your terminal and then you need to remember to press Ctrl+Q to unfreeze it). It’s a feature that I’ve never wanted and I’m happy to not have it.
Apparently you can disable Ctrl+S in other shells with stty -ixon.
10. fish_add_path
I have mixed feelings about this one, but in Fish you can use fish_add_path /opt/whatever/bin to add a path to your PATH, globally, permanently, across all open shell sessions. This can get a bit confusing if you forget where those PATH entries are configured but overall I think I appreciate it.
11. nice syntax highlighting
By default commands that don’t exist are highlighted in red, like this.
12. easier loops
I find the loop syntax in fish a lot easier to type than the bash syntax. It looks like this:
for i in *.yaml
echo $i
end
Also it’ll add indentation in your loops which is nice.
13. easier multiline editing
Related to loops: you can edit multiline commands much more easily than in bash (just use the arrow keys to navigate the multiline command!). Also when you use the up arrow to get a multiline command from your history, it’ll show you the whole command the exact same way you typed it instead of squishing it all onto one line like bash does:
$ bash
$ for i in *.png
> do
> echo $i
> done
$ # press up arrow
$ for i in *.png; do echo $i; done
14. Ctrl+left arrow
This might just be me, but I really appreciate that fish has the Ctrl+left arrow / Ctrl+right arrow keyboard shortcut for moving between words when writing a command.
I’m honestly a bit confused about where this keyboard shortcut is coming from (the only documented keyboard shortcut for this I can find in fish is Alt+left arrow / Alt+right arrow which seems to do the same thing), but I’m pretty sure this is a fish shortcut.
A couple of notes about getting this shortcut to work / where it comes from:
- one person said they needed to switch their terminal emulator from the “Linux console” keybindings to “Default (XFree 4)” to get it to work in fish
- on Mac OS, Ctrl+left arrow switches workspaces by default, so I had to turn that off.
- Also apparently Ubuntu configures libreadline in /etc/inputrc to make Ctrl+left/right arrow go back/forward a word, so it’ll work in bash on Ubuntu and maybe other Linux distros too. Here’s a stack overflow question talking about that
a downside: not everything has a fish integration
Sometimes tools don’t have instructions for integrating them with fish. That’s annoying, but:
- I’ve found this has gotten better over the last 10 years as fish has gotten more popular. For example Python’s virtualenv has had a fish integration for a long time now.
- If I need to run a POSIX shell command real quick, I can always just run bash or zsh
- I’ve gotten much better over the years at translating simple commands to fish syntax when I need to
My biggest day-to-day annoyance is probably that for whatever reason I’m still not used to fish’s syntax for setting environment variables: I get confused about set vs set -x.
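For reference, plain set defines a shell variable, while set -x also exports it to child processes (EDITOR here is just an example variable):
$ set foo bar          # shell variable, not visible to child processes
$ set -x EDITOR vim    # exported, roughly `export EDITOR=vim` in bash
$ env | grep EDITOR
EDITOR=vim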
on POSIX compatibility
When I started using fish, you couldn’t do things like cmd1 && cmd2 – it would complain “no, you need to run cmd1; and cmd2” instead.
It seems like over the years fish has started accepting a little more POSIX-style syntax than it used to, like:
- cmd1 && cmd2
- export a=b to set an environment variable (though this seems a bit limited: you can’t do export PATH=$PATH:/whatever, so I think it’s probably better to learn set instead)
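For what it’s worth, the set version of that PATH example looks like this – PATH in fish is a list rather than a colon-separated string, so appending is just:
$ set -x PATH $PATH /whatever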
on fish as a default shell
Changing my default shell to fish is always a little annoying, I occasionally get myself into a situation where:
- I install fish somewhere like maybe /home/bork/.nix-stuff/bin/fish
- I add the new fish location to /etc/shells as an allowed shell
- I change my shell with chsh
- at some point months/years later I reinstall fish in a different location for some reason and remove the old one
- oh no!!! I have no valid shell! I can’t open a new terminal tab anymore!
This has never been a major issue because I always have a terminal open somewhere where I can fix the problem and rescue myself, but it’s a bit alarming.
If you don’t want to use chsh to change your shell to fish (which is very reasonable, maybe I shouldn’t be doing that), the Arch wiki page has a couple of good suggestions – either configure your terminal emulator to run fish or add an exec fish to your .bashrc.
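A minimal sketch of the .bashrc approach (the guard conditions here are my own, not the Arch wiki’s exact snippet – checking for an interactive shell keeps scripts and scp from being dropped into fish):
# at the end of ~/.bashrc
if [[ $- == *i* ]] && command -v fish >/dev/null 2>&1; then
    exec fish
fi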
I’ve never really learned the scripting language
Other than occasionally writing a for loop interactively on the command line, I’ve never really learned the fish scripting language. I still do all of my shell scripting in bash.
I don’t think I’ve ever written a fish function or if statement.
it seems like fish is getting pretty popular
I ran a highly unscientific poll on Mastodon asking people what shell they use interactively. The results were (of 2600 responses):
- 46% bash
- 49% zsh
- 16% fish
- 5% other
I think 16% for fish is pretty remarkable, since (as far as I know) there isn’t any system where fish is the default shell, and my sense is that it’s very common to just stick to whatever your system’s default shell is.
It feels like a big achievement for the fish project, even if maybe my Mastodon followers are more likely than the average shell user to use fish for some reason.
who might fish be right for?
Fish definitely isn’t for everyone. I think I like it because:
- I really dislike configuring my shell (and honestly my dev environment in general), I want things to “just work” with the default settings
- fish’s defaults feel good to me
- I don’t spend that much time logged into random servers using other shells so there’s not too much context switching
- I liked its features so much that I was willing to relearn how to do a few “basic” shell things, like using parentheses (seq 1 10) to run a command instead of backticks, or using set instead of export
Maybe you’re also a person who would like fish! I hope a few more of the people who fish is for can find it, because I spend so much of my time in the terminal and it’s made that time much more pleasant.
I just did a massive spring cleaning of one of my servers, trying to clean up what has become quite the mess of clutter. For every website on the server, I either:
- Documented what it is, who is using it, and what version of language and framework it uses
- Archived it as static HTML flat files
- Moved the source code from GitHub to a private git server
- Deleted the files
It feels good to get rid of old code, and to turn previously dynamic sites (with all of the risk they come with) into plain HTML.
This is also making me seriously reconsider the value of spinning up any new projects. Several of these are now 10 years old, still churning along fine, but difficult to do any maintenance on because of versions and dependencies. For example:
- indieauth.com - this has been on the chopping block for years, but I haven't managed to build a replacement yet, and is still used by a lot of people
- webmention.io - this is a pretty popular service, and I don't want to shut it down, but there's a lot of problems with how it's currently built and no easy way to make changes
- switchboard.p3k.io - this is a public WebSub (PubSubHubbub) hub, like Superfeedr, and has weirdly gained a lot of popularity in the podcast feed space in the last few years
One that I'm particularly happy with, despite it being an ugly pile of PHP, is oauth.net. I inherited this site in 2012, and it hasn't needed any framework upgrades since it's just using PHP templates. My ham radio website w7apk.com is similarly a small amount of templated PHP, and it is low stress to maintain, and actually fun to quickly jot some notes down when I want. I like not having to go through the whole ceremony of setting up a dev environment, installing dependencies, upgrading things to the latest version, checking for backwards incompatible changes, git commit, deploy, etc. I can just sftp some changes up to the server and they're live.
Some questions for myself for the future, before starting a new project:
- Could this actually just be a tag page on my website, like #100DaysOfMusic or #BikeTheEclipse?
- If it really needs to be a new project, then:
- Can I create it in PHP without using any frameworks or libraries? Plain PHP ages far better than pulling in any dependencies which inevitably stop working with a version 2-3 EOL cycles back, so every library brought in means signing up for annual maintenance of the whole project. Frameworks can save time in the short term, but have a huge cost in the long term.
- Is it possible to avoid using a database? Databases aren't inherently bad, but using one does make the project slightly more fragile, since it requires plans for migrations and backups
- If a database is required, is it possible to create it in a way that does not result in ever-growing storage needs?
- Is this going to store data or be a service that other people are going to use? If so, plan on a registration form so that I have a way to contact people eventually when I need to change it or shut it down.
- If I've got this far with the questions, am I really ready to commit to supporting this code base for the next 10 years?
One project I've been committed to maintaining and doing regular (ok fine, "semi-regular") updates for is Meetable, the open source events website that I run on a few domains:
I started this project in October 2019, excited for all the IndieWebCamps we were going to run in 2020. Somehow that is already 5 years ago now. Well that didn't exactly pan out, but I did quickly pivot it to add a bunch of features that are helpful for virtual events, so it worked out ok in the end. We've continued to use it for posting IndieWeb events, and I also run an instance for two IETF working groups. I'd love to see more instances pop up, I've only encountered one or two other ones in the wild. I even spent a significant amount of time on the onboarding flow so that it's relatively easy to install and configure, and added passkeys for the admin login so you don't need any external dependencies on auth providers. It's a cool project if I may say so myself.
Anyway, this is not a particularly well thought out blog post, I just wanted to get my thoughts down after spending all day combing through the filesystem of my web server and uncovering a lot of ancient history.
About 3 years ago, I announced Mess With DNS in this blog post, a playground where you can learn how DNS works by messing around and creating records.
I wasn’t very careful with the DNS implementation though (to quote the release blog post: “following the DNS RFCs? not exactly”), and people started reporting problems that I eventually decided I wanted to fix.
the problems
Some of the problems people have reported were:
- domain names with underscores weren’t allowed, even though they should be
- If there was a CNAME record for a domain name, it allowed you to create other records for that domain name, even if it shouldn’t
- you could create 2 different CNAME records for the same domain name, which shouldn’t be allowed
- no support for the SVCB or HTTPS record types, which seemed a little complex to implement
- no support for upgrading from UDP to TCP for big responses
And there are certainly more issues that nobody got around to reporting, for example that if you added an NS record for a subdomain to delegate it, Mess With DNS wouldn’t handle the delegation properly.
the solution: PowerDNS
I wasn’t sure how to fix these problems for a long time – technically I could have started addressing them individually, but it felt like there were a million edge cases and I’d never get there.
But then one day I was chatting with someone else who was working on a DNS server and they said they were using PowerDNS: an open source DNS server with an HTTP API!
This seemed like an obvious solution to my problems – I could just swap out my own crappy DNS implementation for PowerDNS.
There were a couple of challenges I ran into when setting up PowerDNS that I’ll talk about here. I really don’t do a lot of web development and I think I’ve never built a website that depends on a relatively complex API before, so it was a bit of a learning experience.
challenge 1: getting every query made to the DNS server
One of the main things Mess With DNS does is give you a live view of every DNS query it receives for your subdomain, using a websocket. To make this work, it needs to intercept every DNS query before it gets sent to the PowerDNS DNS server:
There were 2 options I could think of for how to intercept the DNS queries:
- dnstap: dnsdist (a DNS load balancer from the PowerDNS project) has support for logging all DNS queries it receives using dnstap, so I could put dnsdist in front of PowerDNS and then log queries that way
- Have my Go server listen on port 53 and proxy the queries myself
I originally implemented option #1, but for some reason there was a 1 second delay before every query got logged. I couldn’t figure out why, so I implemented my own very simple proxy instead.
challenge 2: should the frontend have direct access to the PowerDNS API?
The frontend used to have a lot of DNS logic in it – it converted emoji domain names to ASCII using punycode, had a lookup table to convert numeric DNS query types (like 1) to their human-readable names (like A), did a little bit of validation, and more.
Originally I considered keeping this pattern and just giving the frontend (more or less) direct access to the PowerDNS API to create and delete, but writing even more complex code in Javascript didn’t feel that appealing to me – I don’t really know how to write tests in Javascript and it seemed like it wouldn’t end well.
So I decided to take all of the DNS logic out of the frontend and write a new DNS API for managing records, shaped something like this:
GET /records
DELETE /records/<ID>
DELETE /records/ (delete all records for a user)
POST /records/ (create record)
POST /records/<ID> (update record)
This meant that I could actually write tests for my code, since the backend is in Go and I do know how to write tests in Go.
what I learned: it’s okay for an API to duplicate information
I had this idea that APIs shouldn’t return duplicate information – for example if I get a DNS record, it should only include a given piece of information once.
But I ran into a problem with that idea when displaying MX records: an MX record has 2 fields, “preference”, and “mail server”. And I needed to display that information in 2 different ways on the frontend:
- In a form, where “Preference” and “Mail Server” are 2 different form fields (like 10 and mail.example.com)
- In a summary view, where I wanted to just show the record (10 mail.example.com)
This is kind of a small problem, but it came up in a few different places.
I talked to my friend Marco Rogers about this, and based on some advice from him I realized that I could return the same information in the API in 2 different ways! Then the frontend just has to display it. So I started just returning duplicate information in the API, something like this:
{
values: {'Preference': 10, 'Server': 'mail.example.com'},
content: '10 mail.example.com',
...
}
I ended up using this pattern in a couple of other places where I needed to display the same information in 2 different ways and it was SO much easier.
I think what I learned from this is that if I’m making an API that isn’t intended for external use (there are no users of this API other than the frontend!), I can tailor it very specifically to the frontend’s needs and that’s okay.
challenge 3: what’s a record’s ID?
In Mess With DNS (and I think in most DNS user interfaces!), you create, update, and delete records.
But that’s not how the PowerDNS API works. In PowerDNS, you create a zone, which is made of record sets. Records don’t have any ID in the API at all.
I ended up solving this by generating a fake ID for each record, which is made of:
- its name
- its type
- and its content (base64-encoded)
For example one record’s ID is brooch225.messwithdns.com.|NS|bnMxLm1lc3N3aXRoZG5zLmNvbS4=
Then I can search through the zone and find the appropriate record to update it.
This means that if you update a record then its ID will change, which isn’t usually what I want in an ID, but that seems fine.
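A sketch of that ID construction in Go (the function name here is mine, not the actual Mess With DNS code):
package main

import (
	"encoding/base64"
	"fmt"
)

// makeID builds a synthetic record ID out of the record's name, type,
// and base64-encoded content, since PowerDNS records have no real IDs.
func makeID(name, rtype, content string) string {
	return name + "|" + rtype + "|" + base64.StdEncoding.EncodeToString([]byte(content))
}

func main() {
	fmt.Println(makeID("brooch225.messwithdns.com.", "NS", "ns1.messwithdns.com."))
	// prints brooch225.messwithdns.com.|NS|bnMxLm1lc3N3aXRoZG5zLmNvbS4=
}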
challenge 4: making clear error messages
I think the error messages that the PowerDNS API returns aren’t really intended to be shown to end users, for example:
- Name 'new\032site.island358.messwithdns.com.' contains unsupported characters (this error encodes the space as \032, which is a bit disorienting if you don’t know that the space character is 32 in ASCII)
- RRset test.pear5.messwithdns.com. IN CNAME: Conflicts with pre-existing RRset (this talks about RRsets, which aren’t a concept that the Mess With DNS UI has at all)
- Record orange.beryl5.messwithdns.com./A '1.2.3.4$': Parsing record content (try 'pdnsutil check-zone'): unable to parse IP address, strange character: $ (mentions “pdnsutil”, a utility which Mess With DNS’s users don’t have access to in this context)
I ended up handling this in two ways:
- Do some initial basic validation of values that users enter (like IP addresses), so I can just return errors like Invalid IPv4 address: "1.2.3.4$"
- If that goes well, send the request to PowerDNS, and if we get an error back, do some hacky translation of those messages to make them clearer.
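That first validation step is straightforward with Go’s standard library; here’s a minimal sketch (the function and the exact error wording are mine):
package main

import (
	"fmt"
	"net"
)

// validateIPv4 checks a user-supplied A record value before it ever
// reaches PowerDNS, so users see a friendly error instead of a
// PowerDNS-internal one.
func validateIPv4(s string) error {
	ip := net.ParseIP(s)
	if ip == nil || ip.To4() == nil {
		return fmt.Errorf("Invalid IPv4 address: %q", s)
	}
	return nil
}

func main() {
	fmt.Println(validateIPv4("1.2.3.4$")) // Invalid IPv4 address: "1.2.3.4$"
	fmt.Println(validateIPv4("1.2.3.4"))  // <nil>
}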
Sometimes users will still get errors from PowerDNS directly, but I added some logging of all the errors that users see, so hopefully I can review them and add extra translations if there are other common errors that come up.
I think what I learned from this is that if I’m building a user-facing application on top of an API, I need to be pretty thoughtful about how I resurface those errors to users.
challenge 5: setting up SQLite
Previously Mess With DNS was using a Postgres database. This was problematic because I only gave the Postgres machine 256MB of RAM, which meant that the database got OOM killed almost every single day. I never really worked out exactly why it got OOM killed every day, but that’s how it was. I spent some time trying to tune Postgres’ memory usage by setting the max connections / work_mem / maintenance_work_mem, and it helped a bit but didn’t solve the problem.
So for this refactor I decided to use SQLite instead, because the website doesn’t really get that much traffic. There are some choices involved with using SQLite, and I decided to:
- Run db.SetMaxOpenConns(1) to make sure that we only open 1 connection to the database at a time, to prevent SQLITE_BUSY errors from two threads trying to access the database at the same time (just setting WAL mode didn’t work)
- Use separate databases for each of the 3 tables (users, records, and requests) to reduce contention. This maybe isn’t really necessary, but there was no reason I needed the tables to be in the same database so I figured I’d set up separate databases to be safe.
- Use the cgo-free modernc.org/sqlite, which translates SQLite’s source code to Go. I might switch to a more “normal” sqlite implementation instead at some point and use cgo though. I think the main reason I prefer to avoid cgo is that cgo has landed me with difficult-to-debug errors in the past.
- Use WAL mode
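Putting those choices together, the setup might look something like this (a sketch under those assumptions, not the actual Mess With DNS code):
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // cgo-free SQLite driver, registered as "sqlite"
)

func openDB(path string) (*sql.DB, error) {
	db, err := sql.Open("sqlite", path)
	if err != nil {
		return nil, err
	}
	// Only allow one connection at a time, to prevent SQLITE_BUSY errors.
	db.SetMaxOpenConns(1)
	// Turn on WAL mode.
	if _, err := db.Exec("PRAGMA journal_mode=WAL;"); err != nil {
		return nil, err
	}
	return db, nil
}

func main() {
	// One database file per table, to reduce contention.
	for _, path := range []string{"users.db", "records.db", "requests.db"} {
		db, err := openDB(path)
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()
	}
}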
I still haven’t set up backups, though I don’t think my Postgres database had backups either. I think I’m unlikely to use litestream for backups – Mess With DNS is very far from a critical application, and I think daily backups that I could recover from in case of a disaster are more than good enough.
challenge 6: upgrading Vue & managing forms
This has nothing to do with PowerDNS but I decided to upgrade Vue.js from version 2 to 3 as part of this refresh. The main problem with that is that the form validation library I was using (FormKit) completely changed its API between Vue 2 and Vue 3, so I decided to just stop using it instead of learning the new API.
I ended up switching to some form validation tools that are built into the browser, like required and oninvalid (here’s the code).
I think it could use some improvement, I still don’t understand forms very well.
challenge 7: managing state in the frontend
This also has nothing to do with PowerDNS, but when modifying the frontend I realized that my state management was a mess – in every place where I made an API request to the backend, I had to remember to add a “refresh records” call afterwards, and I wasn’t always consistent about it.
With some more advice from Marco, I ended up implementing a single global state management store which stores all the state for the application, and which lets me create/update/delete records.
Then my components can just call store.createRecord(record), and the store will automatically resynchronize all of the state as needed.
challenge 8: sequencing the project
This project ended up having several steps because I reworked the whole integration between the frontend and the backend. I ended up splitting it into a few different phases:
- Upgrade Vue from v2 to v3
- Make the state management store
- Implement a different backend API, move a lot of DNS logic out of the frontend, and add tests for the backend
- Integrate PowerDNS
I made sure that the website was (more or less) 100% working and then deployed it in between phases, so that the amount of changes I was managing at a time stayed somewhat under control.
the new website is up now!
I released the upgraded website a few days ago and it seems to work! The PowerDNS API has been great to work on top of, and I’m relieved that there’s a whole class of problems that I now don’t have to think about at all, other than potentially trying to make the error messages from PowerDNS a little clearer. Using PowerDNS has fixed a lot of the DNS issues that folks have reported in the last few years and it feels great.
If you run into problems with the new Mess With DNS I’d love to hear about them here.
I’ve been writing Go pretty casually for years – the backends for all of my playgrounds (nginx, dns, memory, more DNS) are written in Go, but many of those projects are just a few hundred lines and I don’t come back to those codebases much.
I thought I more or less understood the basics of the language, but this week I’ve been writing a lot more Go than usual while working on some upgrades to Mess with DNS, and ran into a bug that revealed I was missing a very basic concept!
Then I posted about this on Mastodon and someone linked me to this very cool site (and book) called 100 Go Mistakes and How To Avoid Them by Teiva Harsanyi. It just came out in 2022 so it’s relatively new.
I decided to read through the site to see what else I was missing, and found a couple of other misconceptions I had about Go. I’ll talk about some of the mistakes that jumped out to me the most, but really the whole 100 Go Mistakes site is great and I’d recommend reading it.
Here’s the initial mistake that started me on this journey:
mistake 1: not understanding that structs are copied on assignment
Let’s say we have a struct:
type Thing struct {
Name string
}
and this code:
thing := Thing{"record"}
other_thing := thing
other_thing.Name = "banana"
fmt.Println(thing)
This prints “record” and not “banana” (play.go.dev link), because thing is copied when you assign it to other_thing.
the problem this caused me: ranges
The bug I spent 2 hours of my life debugging last week was effectively this code (play.go.dev link):
type Thing struct {
Name string
}
func findThing(things []Thing, name string) *Thing {
for _, thing := range things {
if thing.Name == name {
return &thing
}
}
return nil
}
func main() {
things := []Thing{Thing{"record"}, Thing{"banana"}}
thing := findThing(things, "record")
thing.Name = "gramaphone"
fmt.Println(things)
}
This prints out [{record} {banana}] – because findThing returned a copy, we didn’t change the name in the original array.
This mistake is #30 in 100 Go Mistakes.
I fixed the bug by changing it to something like this (play.go.dev link), which returns a reference to the item in the array we’re looking for instead of a copy.
func findThing(things []Thing, name string) *Thing {
for i := range things {
if things[i].Name == name {
return &things[i]
}
}
return nil
}
why didn’t I realize this?
When I learned that I was mistaken about how assignment worked in Go I was really taken aback, like – it’s such a basic fact about how the language works! If I was wrong about that then what ELSE am I wrong about in Go????
My best guess for what happened is:
- I’ve heard for my whole life that when you define a function, you need to think about whether its arguments are passed by reference or by value
- So I’d thought about this in Go, and I knew that if you pass a struct as a value to a function, it gets copied – if you want to pass a reference then you have to pass a pointer
- But somehow it never occurred to me that you need to think about the same thing for assignments, perhaps because in most of the other languages I use (Python, JS, Java) I think everything is a reference anyway. Except for in Rust, where you do have values that you make copies of, but I think most of the time I had to run .clone() explicitly. (though apparently structs will be automatically copied on assignment if the struct implements the Copy trait)
- Also obviously I just don’t write that much Go so I guess it’s never come up.
mistake 2: side effects appending slices (#25)
When you subset a slice with x[2:3], the original slice and the sub-slice share the same backing array, so if you append to the new slice, it can unintentionally change the old slice.
For example, this code prints [1 2 3 555 5] (code on play.go.dev):
x := []int{1, 2, 3, 4, 5}
y := x[2:3]
y = append(y, 555)
fmt.Println(x)
I don’t think this has ever actually happened to me, but it’s alarming and I’m very happy to know about it.
Apparently you can avoid this problem by changing y := x[2:3] to y := x[2:3:3], which restricts the new slice’s capacity so that appending to it will re-allocate a new slice. Here’s some code on play.go.dev that does that.
mistake 3: not understanding the different types of method receivers (#42)
This one isn’t a “mistake” exactly, but it’s been a source of confusion for me and it’s pretty simple so I’m glad to have it cleared up.
In Go you can declare methods in 2 different ways:
- func (t Thing) Function() (a “value receiver”)
- func (t *Thing) Function() (a “pointer receiver”)
My understanding now is that basically:
- If you want the method to mutate the struct t, you need a pointer receiver.
- If you want to make sure the method doesn’t mutate the struct t, use a value receiver.
Explanation #42 has a bunch of other interesting details though. There’s definitely still something I’m missing about value vs pointer receivers (I got a compile error related to them a couple of times in the last week that I still don’t understand), but hopefully I’ll run into that error again soon and I can figure it out.
more interesting things I noticed
Some more notes from 100 Go Mistakes:
- apparently you can name the outputs of your function (#43), though that can have issues (#44) and I’m not sure I want to
- apparently you can put tests in a different package (#90) to ensure that you only use the package’s public interfaces, which seems really useful
- there are lots of notes about how to use contexts, channels, goroutines, mutexes, sync.WaitGroup, etc. I’m sure I have something to learn about all of those but today is not the day I’m going to learn them.
Also there are some things that have tripped me up in the past, like:
- forgetting the return statement after replying to an HTTP request (#80)
- not realizing the httptest package exists (#88)
this “100 common mistakes” format is great
I really appreciated this “100 common mistakes” format – it made it really easy for me to skim through the mistakes and very quickly mentally classify them into:
- yep, I know that
- not interested in that one right now
- WOW WAIT I DID NOT KNOW THAT, THAT IS VERY USEFUL!!!!
It looks like “100 Common Mistakes” is a series of books from Manning and they also have “100 Java Mistakes” and an upcoming “100 SQL Server Mistakes”.
Also I enjoyed what I’ve read of Effective Python by Brett Slatkin, which has a similar “here are a bunch of short Python style tips” structure where you can quickly skim it and take what’s useful to you. There’s also Effective C++, Effective Java, and probably more.
some other Go resources
other resources I’ve appreciated:
- Go by example for basic syntax
- go.dev/play
- obviously https://pkg.go.dev for documentation about literally everything
- staticcheck seems like a useful linter – for example I just started using it to tell me when I’ve forgotten to handle an error
- apparently golangci-lint includes a bunch of different linters
Here's where you can find me at IETF 120 in Vancouver!
Monday
- 9:30 - 11:30 • alldispatch • Regency C/D
- 13:00 - 15:00 • oauth • Plaza B
- 18:30 - 19:30 • Hackdemo Happy Hour • Regency Hallway
Tuesday
Wednesday
- 9:30 - 11:30 • wimse • Georgia A
- 11:45 - 12:45 • Chairs Forum • Regency C/D
- 17:30 - 19:30 • IETF Plenary • Regency A/B/C/D
Thursday
Friday
- 13:00 - 15:00 • oauth • Regency A/B
My Current Drafts
The other day I asked what folks on Mastodon find confusing about working in the terminal, and one thing that stood out to me was “editing a command you already typed in”.
This really resonated with me: even though entering some text and editing it is a very “basic” task, it took me maybe 15 years of using the terminal every single day to get used to using Ctrl+A to go to the beginning of the line (or Ctrl+E for the end – I think I used Home/End instead).
So let’s talk about why entering text might be hard! I’ll also share a few tips that I wish I’d learned earlier.
it’s very inconsistent between programs
A big part of what makes entering text in the terminal hard is the inconsistency between how different programs handle entering text. For example:
- some programs (cat, nc, git commit --interactive, etc) don’t support using arrow keys at all: if you press arrow keys, you’ll just see ^[[D^[[D^[[C^[[C^
- many programs (like irb, python3 on a Linux machine, and many many more) use the readline library, which gives you a lot of basic functionality (history, arrow keys, etc)
- some programs (like /usr/bin/python3 on my Mac) do support very basic features like arrow keys, but not other features like Ctrl+left or reverse searching with Ctrl+R
- some programs (like the fish shell or ipython3 or micro or vim) have their own fancy system for accepting input which is totally custom
So there’s a lot of variation! Let’s talk about each of those a little more.
mode 1: the baseline
First, there’s “the baseline” – what happens if a program just accepts text by calling fgets() or whatever and does absolutely nothing else to provide a nicer experience. Here’s what using these tools typically looks like for me – if I start the version of dash installed on my machine (a pretty minimal shell) and press the left arrow key, it just prints ^[[D to the terminal:
$ ls l-^[[D^[[D^[[D
At first it doesn’t seem like all of these “baseline” tools have much in common, but there are actually a few features that you get for free just from your terminal, without the program needing to do anything special at all.
The things you get for free are:
- typing in text, obviously
- backspace
- Ctrl+W, to delete the previous word
- Ctrl+U, to delete the whole line
- a few other things unrelated to text editing (like Ctrl+C to interrupt the process, Ctrl+Z to suspend, etc)
This is not great, but it means that if you want to delete a word you generally can do it with Ctrl+W instead of pressing backspace 15 times, even if you’re in an environment which is offering you absolutely zero features.
You can get a list of all the ctrl codes that your terminal supports with stty -a.
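On my machine the interesting part of that output looks something like this (trimmed – the exact list varies by system):
$ stty -a
...
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; susp = ^Z;
werase = ^W; lnext = ^V; ...
Here werase = ^W and kill = ^U are the “delete word” and “delete line” codes mentioned above.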
mode 2: tools that use readline
The next group is tools that use readline! Readline is a GNU library to make entering text more pleasant, and it’s very widely used.
My favourite readline keyboard shortcuts are:
- Ctrl+E (or End) to go to the end of the line
- Ctrl+A (or Home) to go to the beginning of the line
- Ctrl+left/right arrow to go back/forward 1 word
- up arrow to go back to the previous command
- Ctrl+R to search your history
And you can use Ctrl+W / Ctrl+U from the “baseline” list, though Ctrl+U deletes from the cursor to the beginning of the line instead of deleting the whole line. I think Ctrl+W might also have a slightly different definition of what a “word” is.
There are a lot more (here’s a full list), but those are the only ones that I personally use.
The bash shell is probably the most famous readline user (when you use Ctrl+R to search your history in bash, that feature actually comes from readline), but there are TONS of programs that use it – for example psql, irb, python3, etc.
tip: you can make ANYTHING use readline with rlwrap
One of my absolute favourite things is that if you have a program like nc without readline support, you can just run rlwrap nc to turn it into a program with readline support!
This is incredible and makes a lot of tools that are borderline unusable MUCH more pleasant to use. You can even apparently set up rlwrap to include your own custom autocompletions, though I’ve never tried that.
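For example (the host and port here are just placeholders):
$ rlwrap nc example.com 1234
and now arrow keys, history, and Ctrl+R work inside the nc session.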
some reasons tools might not use readline
Some reasons tools might not use readline include:
- the program is very simple (like cat or nc) and maybe the maintainers don’t want to bring in a relatively large dependency
- license reasons, if the program’s license is not GPL-compatible – readline is GPL-licensed, not LGPL
- only a very small part of the program is interactive, and maybe readline support isn’t seen as important. For example git has a few interactive features (like git add -p), but not very many, and usually you’re just typing a single character like y or n – when you really need to type something significant in git, it’ll drop you into a text editor instead.
For example idris2 says they don’t use readline to keep dependencies minimal and suggest using rlwrap to get better interactive features.
how to know if you’re using readline
The simplest test I can think of is to press Ctrl+R, and if you see:
(reverse-i-search)`':
then you’re probably using readline. This obviously isn’t a guarantee (some other library could use the term reverse-i-search too!), but I don’t know of another system that uses that specific term to refer to searching history.
the readline keybindings come from Emacs
Because I’m a vim user, it took me a very long time to understand where these keybindings come from (why Ctrl+A to go to the beginning of a line??? so weird!).
My understanding is these keybindings actually come from Emacs – Ctrl+A and Ctrl+E do the same thing in Emacs as they do in readline, and I assume the other keyboard shortcuts mostly do as well, though I tried out Ctrl+W and Ctrl+U in Emacs and they don’t do the same thing as they do in the terminal, so I guess there are some differences.
There’s some more history of the Readline project here.
mode 3: another input library (like libedit)
On my Mac laptop, /usr/bin/python3 is in a weird middle ground where it supports some readline features (for example the arrow keys), but not the other ones. For example when I press Ctrl+left arrow, it prints out ;5D, like this:
$ python3
>>> import subprocess;5D
Folks on Mastodon helped me figure out that this is because in the default Python install on Mac OS, the Python readline module is actually backed by libedit, a similar library with fewer features, presumably because readline is GPL licensed.
Here’s how I was eventually able to figure out that Python was using libedit on my system:
$ python3 -c "import readline; print(readline.__doc__)"
Importing this module enables command line editing using libedit readline.
Generally Python uses readline though if you install it on Linux or through Homebrew. It’s just that the specific version that Apple includes on their systems doesn’t have readline. Also Python 3.13 is going to remove the readline dependency in favour of a custom library, so “Python uses readline” won’t be true in the future.
I assume that there are more programs on my Mac that use libedit but I haven’t looked into it.
mode 4: something custom
The last group of programs is programs that have their own custom (and sometimes much fancier!) system for editing text. This includes:
- most terminal text editors (nano, micro, vim, emacs, etc)
- some shells (like fish), for example it seems like fish supports Ctrl+Z for undo when typing in a command. Zsh’s line editor is called zle.
- some REPLs (like ipython), for example IPython uses the prompt_toolkit library instead of readline
- lots of other programs (like atuin)
Some features you might see are:
- better autocomplete which is more customized to the tool
- nicer history management (for example with syntax highlighting) than the default you get from readline
- more keyboard shortcuts
custom input systems are often readline-inspired
I went looking at how Atuin (a wonderful tool for searching your shell history that I started using recently) handles text input. Looking at the code and some of the discussion around it, their implementation is custom but it’s inspired by readline, which makes sense to me – a lot of users are used to those keybindings, and it’s convenient for them to work even though atuin doesn’t use readline.
prompt_toolkit (the library IPython uses) is similar – it actually supports a lot of options (including vi-like keybindings), but the default is to support the readline-style keybindings.
This is like how you see a lot of programs which support very basic vim keybindings (like j for down and k for up). For example Fastmail supports j and k even though most of its other keybindings don’t have much relationship to vim.
I assume that most “readline-inspired” custom input systems have various subtle incompatibilities with readline, but this doesn’t really bother me at all personally because I’m extremely ignorant of most of readline’s features. I only use maybe 5 keyboard shortcuts, so as long as they support the 5 basic commands I know (which they always do!) I feel pretty comfortable. And usually these custom systems have much better autocomplete than you’d get from just using readline, so generally I prefer them over readline.
lots of shells support vi keybindings
Bash, zsh, and fish all have a “vi mode” for entering text. In a very unscientific poll I ran on Mastodon, 12% of people said they use it, so it seems pretty popular.
Readline also has a “vi mode” (which is how Bash’s support for it works), so by extension lots of other programs have it too.
I’ve always thought that vi mode seems really cool, but for some reason even though I’m a vim user it’s never stuck for me.
understanding what situation you’re in really helps
I’ve spent a lot of my life being confused about why a command line application I was using wasn’t behaving the way I wanted, and it feels good to be able to more or less understand what’s going on.
I think this is roughly my mental flowchart when I’m entering text at a command line prompt:
- Do the arrow keys not work? Probably there’s no input system at all, but at least I can use Ctrl+W and Ctrl+U, and I can rlwrap the tool if I want more features.
- Does Ctrl+R print reverse-i-search? Probably it’s readline, so I can use all of the readline shortcuts I’m used to, and I know I can get some basic history and press up arrow to get the previous command.
- Does Ctrl+R do something else? This is probably some custom input library: it’ll probably act more or less like readline, and I can check the documentation if I really want to know how it works.
Being able to diagnose what’s going on like this makes the command line feel more predictable and less chaotic.
some things this post left out
There are lots more complications related to entering text that we didn’t talk about at all here, like:
- issues related to ssh / tmux / etc
- the TERM environment variable
- how different terminals (gnome terminal, iTerm, xterm, etc) have different kinds of support for copying/pasting text
- unicode
- probably a lot more
Hello! Today someone on Mastodon asked about job control (fg, bg, Ctrl+Z, wait, etc). It made me think about how I don’t use my shell’s job control interactively very often: usually I prefer to just open a new terminal tab if I want to run multiple terminal programs, or use tmux if it’s over ssh.
But I was curious about whether other people used job control more often than me.
So I asked on Mastodon for reasons people use job control. There were a lot of great responses, and it even made me want to consider using job control a little more!
In this post I’m only going to talk about using job control interactively (not in scripts) – the post is already long enough just talking about interactive use.
what’s job control?
First: what’s job control? Well – in a terminal, your processes can be in one of 3 states:
- in the foreground. This is the normal state when you start a process.
- in the background. This is what happens when you run some_process &: the process is still running, but you can’t interact with it anymore unless you bring it back to the foreground.
- stopped. This is what happens when you start a process and then press Ctrl+Z. This pauses the process: it won’t keep using the CPU, but you can restart it if you want.
“Job control” is a set of commands for seeing which processes are running in a terminal and moving processes between these 3 states.
how to use job control
- fg brings a process to the foreground. It works on both stopped processes and background processes. For example, if you start a background process with cat < /dev/zero &, you can bring it back to the foreground by running fg
- bg restarts a stopped process and puts it in the background.
- Pressing Ctrl+Z stops the current foreground process.
- jobs lists all processes that are active in your terminal
- kill sends a signal (like SIGKILL) to a job (this is the shell builtin kill, not /bin/kill)
- disown removes the job from the list of running jobs, so that it doesn’t get killed when you close the terminal
- wait waits for all background processes to complete. I only use this in scripts though.
- apparently in bash/zsh you can also just type %2 instead of fg %2
I might have forgotten some other job control commands but I think those are all the ones I’ve ever used.
You can also give fg or bg a specific job to foreground/background. For example if I see this in the output of jobs:
$ jobs
Job Group State Command
1 3161 running cat < /dev/zero &
2 3264 stopped nvim -w ~/.vimkeys $argv
then I can foreground nvim with fg %2. You can also kill it with kill -9 %2, or just kill %2 if you want to be more gentle.
how is kill %2 implemented?
I was curious about how kill %2 works – does %2 just get replaced with the PID of the relevant process when you run the command, the way environment variables are? Some quick experimentation shows that it isn’t:
$ echo kill %2
kill %2
$ type kill
kill is a function with definition
# Defined in /nix/store/vicfrai6lhnl8xw6azq5dzaizx56gw4m-fish-3.7.0/share/fish/config.fish
So kill is a fish builtin that knows how to interpret %2. Looking at the source code (which is very easy in fish!), it uses jobs -p %2 to expand %2 into a PID, and then runs the regular kill command.
on differences between shells
Job control is implemented by your shell. I use fish, but my sense is that the basics of job control work pretty similarly in bash, fish, and zsh.
There are definitely some shells which don’t have job control at all, but I’ve only used bash/fish/zsh so I don’t know much about that.
Now let’s get into a few reasons people use job control!
reason 1: kill a command that’s not responding to Ctrl+C
I run into processes that don’t respond to Ctrl+C pretty regularly, and it’s always a little annoying – I usually switch terminal tabs to find and kill the process. A bunch of people pointed out that you can do this in a faster way using job control!
How to do this: press Ctrl+Z, then kill %1 (or the appropriate job number if there’s more than one stopped/background job, which you can get from jobs). You can also use kill -9 if it’s really not responding.
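As a transcript, that looks something like this (the stuck command is just a stand-in):
$ some_stuck_command
^C
^C
^Z
$ kill -9 %1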
reason 2: background a GUI app so it’s not using up a terminal tab
Sometimes I start a GUI program from the command line (for example with wireshark some_file.pcap), forget to start it in the background, and don’t want it eating up my terminal tab.
How to do this:
- move the GUI program to the background by pressing Ctrl+Z and then running bg
- you can also run disown to remove it from the list of jobs, to make sure that the GUI program won’t get closed when you close your terminal tab
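In practice that sequence is just:
$ wireshark some_file.pcap
^Z
$ bg
$ disown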
Personally I try to avoid starting GUI programs from the terminal if possible because I don’t like how their stdout pollutes my terminal (on a Mac I use open -a Wireshark instead because I find it works better), but sometimes you don’t have another choice.
reason 2.5: accidentally started a long-running job without tmux
This is basically the same as the GUI app thing – you can move the job to the background and disown it.
I was also curious about if there are ways to redirect a process’s output to a file after it’s already started. A quick search turned up this Linux-only tool which is based on nelhage’s reptyr (which lets you for example move a process that you started outside of tmux to tmux) but I haven’t tried either of those.
reason 3: running a command while using vim
A lot of people mentioned that if they want to quickly test something while editing code in vim or another terminal editor, they like to use Ctrl+Z to stop vim, run the command, and then run fg to go back to their editor.
You can also use this to check the output of a command that you ran before starting vim.
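That workflow looks something like this (the file and the test command are just examples):
$ vim main.go
^Z
$ go test ./...
$ fg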
I’ve never gotten in the habit of this, probably because I mostly use a GUI version of vim. I feel like I’d also be likely to switch terminal tabs and end up wondering “wait… where did I put my editor???” and have to go searching for it.
reason 4: preferring interleaved output
A few people said that they prefer to have the output of all of their commands interleaved in the terminal. This really surprised me because I usually think of having the output of lots of different commands interleaved as being a bad thing, but one person said that they like to do this with tcpdump specifically and I think that actually sounds extremely useful. Here’s what it looks like:
# start tcpdump
$ sudo tcpdump -ni any port 1234 &
tcpdump: data link type PKTAP
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type PKTAP (Apple DLT_PKTAP), snapshot length 524288 bytes
# run curl
$ curl google.com:1234
13:13:29.881018 IP 192.168.1.173.49626 > 142.251.41.78.1234: Flags [S], seq 613574185, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 2730440518 ecr 0,sackOK,eol], length 0
13:13:30.881963 IP 192.168.1.173.49626 > 142.251.41.78.1234: Flags [S], seq 613574185, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 2730441519 ecr 0,sackOK,eol], length 0
13:13:31.882587 IP 192.168.1.173.49626 > 142.251.41.78.1234: Flags [S], seq 613574185, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 2730442520 ecr 0,sackOK,eol], length 0
# when you're done, kill the tcpdump in the background
$ kill %1
I think it’s really nice here that you can see the output of tcpdump inline in your terminal – when I’m using tcpdump I’m always switching back and forth and I always get confused trying to match up the timestamps, so keeping everything in one terminal seems like it might be a lot clearer. I’m going to try it.
reason 5: suspend a CPU-hungry program
One person said that sometimes they’re running a very CPU-intensive program, for example converting a video with ffmpeg, and they need to use the CPU for something else, but don’t want to lose the work that ffmpeg already did.
You can do this by pressing Ctrl+Z to pause the process, and then running fg when you want to start it again.
reason 6: you accidentally ran Ctrl+Z
Many people replied that they didn’t use job control intentionally, but that they sometimes accidentally ran Ctrl+Z, which stopped whatever program was running, so they needed to learn how to use fg to bring it back to the foreground.
There were also some mentions of accidentally running Ctrl+S too (which stops your terminal and I think can be undone with Ctrl+Q). My terminal totally ignores Ctrl+S so I guess I’m safe from that one though.
reason 7: already set up a bunch of environment variables
Some folks mentioned that they already set up a bunch of environment variables that they need to run various commands, so it’s easier to use job control to run multiple commands in the same terminal than to redo that work in another tab.
reason 8: it’s your only option
Probably the most obvious reason to use job control to manage multiple processes is “because you have to” – maybe you’re in single-user mode, or on a very restricted computer, or SSH’d into a machine that doesn’t have tmux or screen and you don’t want to create multiple SSH sessions.
reason 9: some people just like it better
Some people also said that they just don’t like using terminal tabs: for instance a few folks mentioned that they prefer to be able to see all of their terminals on the screen at the same time, so they’d rather have 4 terminals on the screen and then use job control if they need to run more than 4 programs.
I learned a few new tricks!
I think my two main takeaways from this post are that I’ll probably try out job control a little more for:
- killing processes that don’t respond to Ctrl+C
- running tcpdump in the background with whatever network command I’m running, so I can see both of their output in the same place
Hello! I’ve been writing about git on here nonstop for months, and the git zine is FINALLY done! It came out on Friday!
You can get it for $12 here: https://wizardzines.com/zines/git, or get a 14-pack of all my zines here.
Here’s the cover:
the table of contents
Here’s the table of contents:
who is this zine for?
I wrote this zine for people who have been using git for years and are still afraid of it. As always – I think it sucks to be afraid of the tools that you use in your work every day! I want folks to feel confident using git.
My goals are:
- To explain how some parts of git that initially seem scary (like “detached HEAD state”) are pretty straightforward to deal with once you understand what’s going on
- To show some parts of git you probably should be careful around. For example, the stash is one of the places in git where it’s easiest to lose your work in a way that’s incredibly annoying to recover from, and I avoid using it heavily because of that.
- To clear up a few common misconceptions about how the core parts of git (like commits, branches, and merging) work
what’s the difference between this and Oh Shit, Git!?
You might be wondering – Julia! You already have a zine about git! What’s going on? Oh Shit, Git! is a set of tricks for fixing git messes. “How Git Works” explains how Git actually works.
Also, Oh Shit, Git! is the amazing Katie Sylor Miller’s concept: we made it into a zine because I was such a huge fan of her work on it.
I think they go really well together.
what’s so confusing about git, anyway?
This zine was really hard for me to write because when I started writing it, I’d been using git pretty confidently for 10 years. I had no real memory of what it was like to struggle with git.
But thanks to a huge amount of help from Marie as well as everyone who talked to me about git on Mastodon, eventually I was able to see that there are a lot of things about git that are counterintuitive, misleading, or just plain confusing. These include:
- confusing terminology (for example “fast-forward”, “reference”, or “remote-tracking branch”)
- misleading messages (for example how Your branch is up to date with 'origin/main' doesn’t necessarily mean that your branch is up to date with the main branch on the origin)
- uninformative output (for example how I STILL can’t reliably figure out which code comes from which branch when I’m looking at a merge conflict)
- a lack of guidance around handling diverged branches (for example how when you run git pull and your branch has diverged from the origin, it doesn’t give you great guidance for how to handle the situation)
- inconsistent behaviour (for example how git’s reflogs are almost always append-only, EXCEPT for the stash, where git will delete entries when you run git stash drop)
The more I heard from people about how confusing they find git, the more it became clear that git really does not make it easy to figure out what its internal logic is just by using it.
handling git’s weirdnesses becomes pretty routine
The previous section made git sound really bad, like “how can anyone possibly use this thing?”.
But my experience is that after I learned what git actually means by all of its weird error messages, dealing with it became pretty routine! I’ll see an error: failed to push some refs to 'github.com:jvns/wizard-zines-site', realize “oh right, probably a coworker made some changes to main since I last ran git pull”, run git pull --rebase to incorporate their changes, and move on with my day. The whole thing takes about 10 seconds.
Or if I see a You are in 'detached HEAD' state warning, I’ll just make sure to run git checkout mybranch before continuing to write code. No big deal.
For me (and for a lot of folks I talk to about git!), dealing with git’s weird language can become so normal that you totally forget why anybody would even find it weird.
a little bit of internals
One of my biggest questions when writing this zine was how much to focus on what’s in the .git directory. We ended up deciding to include a couple of pages about internals (“inside .git”, pages 14-15), but otherwise focus more on git’s behaviour when you use it and why sometimes git behaves in unexpected ways.
This is partly because there are lots of great guides to git’s internals out there already (1, 2), and partly because I think even if you have read one of these guides to git’s internals, it isn’t totally obvious how to connect that information to what you actually see in git’s user interface.
For example: it’s easy to find documentation about remotes in git – for example this page says:
Remote-tracking branches […] remind you where the branches in your remote repositories were the last time you connected to them.
But even if you’ve read that, you might not realize that the statement Your branch is up to date with 'origin/main' in git status doesn’t necessarily mean that you’re actually up to date with the remote main branch.
So in general in the zine we focus on the behaviour you see in Git’s UI, and then explain how that relates to what’s happening internally in Git.
the cheat sheet
The zine also comes with a free printable cheat sheet: (click to get a PDF version)
it comes with an HTML transcript!
The zine also comes with an HTML transcript, to (hopefully) make it easier to read on a screen reader! Our Operations Manager, Lee, transcribed all of the pages and wrote image descriptions. I’d love feedback about the experience of reading the zine on a screen reader if you try it.
I really do love git
I’ve been pretty critical about git in this post, but I only write zines about technologies I love, and git is no exception.
Some reasons I love git:
- it’s fast!
- it’s backwards compatible! I learned how to use it 10 years ago and everything I learned then is still true
- there’s tons of great free Git hosting available out there (GitHub! Gitlab! a million more!), so I can easily back up all my code
- simple workflows are REALLY simple (if I’m working on a project on my own, I can just run git commit -am 'whatever' and git push over and over again and it works perfectly)
- Almost every internal file in git is a pretty simple text file (or has a version which is a text file), which makes me feel like I can always understand exactly what’s going on under the hood if I want to.
I hope this zine helps some of you love it too.
people who helped with this zine
I don’t make these zines by myself!
I worked with Marie Claire LeBlanc Flanagan every morning for 8 months to write clear explanations of git.
The cover is by Vladimir Kašiković, Gersande La Flèche did copy editing, James Coglan (of the great Building Git) did technical review, our Operations Manager Lee did the transcription as well as a million other things, my partner Kamal read the zine and told me which parts were off (as he always does), and I had a million great conversations with Marco Rogers about git.
And finally, I want to thank all the beta readers! There were 66 this time which is a record! They left hundreds of comments about what was confusing, what they learned, and which of my jokes were funny. It’s always hard to hear from beta readers that a page I thought made sense is actually extremely confusing, and fixing those problems before the final version makes the zine so much better.
get the zine
Here are some links to get the zine again:
- get How Git Works
- get a 14-pack of all my zines here.
As always, you can get either a PDF version to print at home or a print version shipped to your house. The only caveat is print orders will ship in July – I need to wait for orders to come in to get an idea of how many I should print before sending it to the printer.
thank you
As always: if you’ve bought zines in the past, thank you for all your support over the years. And thanks to all of you (1000+ people!!!) who have already bought the zine in the first 3 days. It’s already set a record for most zines sold in a single day and I’ve been really blown away.
IndieWebCamp Düsseldorf took place this weekend, and I was inspired to work on a quick hack for demo day to show off a new feature I've been working on for IndieAuth.
Since I do actually use my website to log in to different websites on a regular basis, I am often presented with the login screen asking for my domain name, which is admittedly an annoying part of the process. I don't even like having to enter my email address when I log in to a site, and entering my domain isn't any better.
So instead, I'd like to get rid of this prompt, and let the browser handle it for you! Here's a quick video of logging in to a website using my domain with the new browser API:
So how does this work?
For the last couple of years, there has been an ongoing effort at the Federated Identity Community Group at the W3C to build a new API in browsers that can sit in the middle of login flows. It's primarily being driven by Google for their use case of letting websites show a Google login popup dialog without needing 3rd party cookies and doing so in a privacy-preserving way. There's a lot to unpack here, more than I want to go into in this blog post. You can check out Tim Cappalli's slides from the OAuth Security Workshop for a good explainer on the background and how it works.
However, there are a few experimental features that are being considered for the API to accommodate use cases beyond the "Sign in with Google" case. The one that's particularly interesting to the IndieAuth use case is the IdP Registration API. This API allows any website to register itself as an identity provider that can appear in the account chooser popup, so that a relying party website doesn't have to list out all the IdPs it supports, it can just say it supports "any" IdP. This maps to how IndieAuth is already used today, where a website can accept any user's IndieAuth server without any prior relationship with the user. For more background, check out my previous blog post "OAuth for the Open Web".
So now, with the IdP Registration API in FedCM, your website can tell your browser that it is an IdP, then when a website wants to log you in, it asks your browser to prompt you. You choose your account from the list, the negotiation happens behind the scenes, and you're logged in!
One of the nice things about combining FedCM with IndieAuth is it lends itself nicely to running the FedCM IdP as a separate service from your actual website. I could run an IndieAuth IdP service that you could sign up for and link your website to. Since your identity is your website, your website would be the thing ultimately sent to the relying party that you're signing in to, even though it was brokered through the IdP service. Ultimately this means much faster adoption is possible, since all it takes to turn your website into a FedCM-supported site is adding a single `<link>` tag to your home page.
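To give a concrete sense of what that looks like, here's a sketch. In IndieAuth today, a site links itself to its server with a rel tag like the one below; whether the FedCM registration flow will reuse this exact rel value is my assumption, not something that's been finalized, and the URLs are placeholders:

<!-- hypothetical: link your home page to the IdP service you use -->
<link rel="indieauth-metadata" href="https://idp.example.com/.well-known/oauth-authorization-server">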
So if this sounds interesting to you, leave a comment below! The IdP registration API is currently an early experiment, and Google needs to see actual interest in it in order to keep it around! In particular, they are looking for Relying Parties who would be interested in actually using this to log users in. I am planning on launching this on webmention.io as an experiment. If you have a website where users can sign in with IndieAuth, feel free to get in touch and I'd be happy to help you set up FedCM support as well!
The draft specification OAuth for Browser-Based Applications has just entered Working Group Last Call!
https://datatracker.ietf.org/doc/html/draft-ietf-oauth-browser-based-apps
This begins a two-week period to collect final comments on the draft. Please review the draft and reply on the OAuth mailing list if you have any comments or concerns. And if you've reviewed the document and are happy with the current state, it is also extremely helpful if you can reply on the list to just say "looks good to me"!
If joining the mailing list is too much work, you're also welcome to comment on the Last Call issue on GitHub.
In case you were wondering, yes your comments matter! Even just a small indication of support goes a long way in these discussions!
I am extremely happy with how this draft has turned out, and would like to again give a huge thanks to Philippe De Ryck for the massive amount of work he's put in to the latest few versions to help get this over the finish line!
While writing about Git, I’ve noticed that a lot of folks struggle with Git’s error messages. I’ve had many years to get used to these error messages so it took me a really long time to understand why folks were confused, but having thought about it much more, I’ve realized that:
- sometimes I actually am confused by the error messages, I’m just used to being confused
- I have a bunch of strategies for getting more information when the error message git gives me isn’t very informative
So in this post, I’m going to go through a bunch of Git’s error messages, list a few things that I think are confusing about them for each one, and talk about what I do when I’m confused by the message.
improving error messages isn’t easy
Before we start, I want to say that trying to think about why these error messages are confusing has given me a lot of respect for how difficult maintaining Git is. I’ve been thinking about Git for months, and for some of these messages I really have no idea how to improve them.
Some things that seem hard to me about improving error messages:
- if you come up with an idea for a new message, it’s hard to tell if it’s actually better!
- work like improving error messages often isn’t funded
- the error messages have to be translated (git’s error messages are translated into 19 languages!)
That said, if you find these messages confusing, hopefully some of these notes will help clarify them a bit.
error: git push on a diverged branch
$ git push
To github.com:jvns/int-exposed
 ! [rejected]        main -> main (non-fast-forward)
error: failed to push some refs to 'github.com:jvns/int-exposed'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

$ git status
On branch main
Your branch and 'origin/main' have diverged,
and have 2 and 1 different commits each, respectively.
Some things I find confusing about this:
- You get the exact same error message whether the branch is just behind or the branch has diverged. There’s no way to tell which it is from this message: you need to run `git status` or `git pull` to find out.
- It says `failed to push some refs`, but it’s not totally clear which references it failed to push. I believe everything that failed to push is listed with `! [rejected]` on the previous line – in this case just the `main` branch.
What I like to do if I’m confused:
- I’ll run `git status` to figure out what the state of my current branch is.
- I think I almost never try to push more than one branch at a time, so I usually totally ignore git’s notes about which specific branch failed to push – I just assume that it’s my current branch
error: git pull on a diverged branch
$ git pull
hint: You have divergent branches and need to specify how to reconcile them.
hint: You can do so by running one of the following commands sometime before
hint: your next pull:
hint:
hint: git config pull.rebase false # merge
hint: git config pull.rebase true # rebase
hint: git config pull.ff only # fast-forward only
hint:
hint: You can replace "git config" with "git config --global" to set a default
hint: preference for all repositories. You can also pass --rebase, --no-rebase,
hint: or --ff-only on the command line to override the configured default per
hint: invocation.
fatal: Need to specify how to reconcile divergent branches.
The main thing I think is confusing here is that git is presenting you with a kind of overwhelming number of options: it’s saying that you can either:
- configure `pull.rebase false`, `pull.rebase true`, or `pull.ff only` locally
- or configure them globally
- or run `git pull --rebase` or `git pull --no-rebase`
It’s very hard to imagine how a beginner to git could easily use this hint to sort through all these options on their own.
If I were explaining this to a friend, I’d say something like “you can use `git pull --rebase` or `git pull --no-rebase` to resolve this with a rebase or merge right now, and if you want to set a permanent preference, you can do that with `git config pull.rebase false` or `git config pull.rebase true`.” `git config pull.ff only` feels a little redundant to me because that’s git’s default behaviour anyway (though it wasn’t always).
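Collecting those commands in one place (these are the same commands from git’s hint, just annotated):

$ git pull --rebase                       # resolve it this once, with a rebase
$ git pull --no-rebase                    # resolve it this once, with a merge
$ git config pull.rebase true             # always rebase on pull, in this repository
$ git config pull.rebase false            # always merge on pull, in this repository
$ git config --global pull.rebase true    # set the default for every repository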
What I like to do here:
- run `git status` to see the state of my current branch
- maybe run `git log origin/main` or `git log` to see what the diverged commits are
- usually run `git pull --rebase` to resolve it
- sometimes I’ll run `git push --force` or `git reset --hard origin/main` if I want to throw away my local or remote work (for example because I accidentally committed to the wrong branch, or because I ran `git commit --amend` on a personal branch that only I’m using and want to force push)
error: git checkout asdf (a branch that doesn't exist)
$ git checkout asdf
error: pathspec 'asdf' did not match any file(s) known to git
This is a little weird because my intention was to check out a branch, but `git checkout` is complaining about a path that doesn’t exist.
This is happening because `git checkout`’s first argument can be either a branch or a path, and git has no way of knowing which one you intended. This seems tricky to improve, but I might expect something like “No such branch, commit, or path: asdf”.
What I like to do here:
- in theory it would be good to use `git switch` instead, but I keep using `git checkout` anyway
- generally I just remember that I need to decode this as “branch `asdf` doesn’t exist”
error: git switch asdf (a branch that doesn't exist)
$ git switch asdf
fatal: invalid reference: asdf
`git switch` only accepts a branch as an argument (unless you pass `-d`), so why is it saying `invalid reference: asdf` instead of `invalid branch: asdf`?
I think the reason is that internally, `git switch` is trying to be helpful in its error messages: if you run `git switch v0.1` to switch to a tag, it’ll say:
$ git switch v0.1
fatal: a branch is expected, got tag 'v0.1'
So what git is trying to communicate with `fatal: invalid reference: asdf` is “`asdf` isn’t a branch, but it’s not a tag either, or any other reference”. From my various git polls my impression is that a lot of git users have literally no idea what a “reference” is in git, so I’m not sure if that’s coming across.
What I like to do here:
90% of the time when a git error message says `reference` I just mentally replace it with `branch`.
error: git checkout HEAD^
$ git checkout HEAD^
Note: switching to 'HEAD^'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 182cd3f add "swap byte order" button
This is a tough one. Definitely a lot of people are confused about this message, but obviously there's been a lot of effort to improve it too. I don't have anything smart to say about this one.
What I like to do here:
- my shell prompt tells me if I’m in detached HEAD state, and generally I can remember not to make new commits while in that state
- when I’m done looking at whatever old commits I wanted to look at, I’ll run `git checkout main` or something to go back to a branch
message: git status when a rebase is in progress
This isn’t an error message, but I still find it a little confusing on its own:
$ git status
interactive rebase in progress; onto c694cf8
Last command done (1 command done):
   pick 0a9964d wip
No commands remaining.
You are currently rebasing branch 'main' on 'c694cf8'.
  (fix conflicts and then run "git rebase --continue")
  (use "git rebase --skip" to skip this patch)
  (use "git rebase --abort" to check out the original branch)

Unmerged paths:
  (use "git restore --staged <file>..." to unstage)
  (use "git add <file>..." to mark resolution)
	both modified:   index.html

no changes added to commit (use "git add" and/or "git commit -a")
Two things I think could be clearer here:
- I think it would be nice if `You are currently rebasing branch 'main' on 'c694cf8'.` were on the first line instead of the 5th line – right now the first line doesn’t say which branch you’re rebasing.
- In this case, `c694cf8` is actually `origin/main`, so I feel like `You are currently rebasing branch 'main' on 'origin/main'` might be even clearer.
What I like to do here:
My shell prompt includes the branch that I’m currently rebasing, so I rely on that instead of the output of `git status`.
error: git rebase when a file has been deleted
$ git rebase main
CONFLICT (modify/delete): index.html deleted in 0ce151e (wip) and modified in HEAD.  Version HEAD of index.html left in tree.
error: could not apply 0ce151e... wip
The thing I still find confusing about this is – `index.html` was modified in `HEAD`. But what is `HEAD`? Is it the commit I was working on when I started the merge/rebase, or is it the commit from the other branch? (The answer is “`HEAD` is your branch if you’re doing a merge, and it’s the ‘other branch’ if you’re doing a rebase”, but I always find that hard to remember.)
I think I would personally find it easier to understand if the message listed the branch names if possible, something like this:
CONFLICT (modify/delete): index.html deleted on `main` and modified on `mybranch`
error: git status during a merge or rebase (who is "them"?)
$ git status
On branch master
You have unmerged paths.
  (fix conflicts and run "git commit")
  (use "git merge --abort" to abort the merge)

Unmerged paths:
  (use "git add/rm <file>..." as appropriate to mark resolution)
	deleted by them: the_file

no changes added to commit (use "git add" and/or "git commit -a")
I find this one confusing in exactly the same way as the previous message: it says `deleted by them:`, but what “them” refers to depends on whether you did a merge or rebase or cherry-pick.
- for a merge, `them` is the other branch you merged in
- for a rebase, `them` is the branch that you were on when you ran `git rebase`
- for a cherry-pick, I guess it’s the commit you cherry-picked
What I like to do if I’m confused:
- try to remember what I did
- run `git show main --stat` or something to see what I did on the `main` branch if I can’t remember
error: git clean
$ git clean
fatal: clean.requireForce defaults to true and neither -i, -n, nor -f given; refusing to clean
I just find it a bit confusing that you need to look up what `-i`, `-n` and `-f` are to be able to understand this error message. I’m personally way too lazy to do that, so even though I’ve probably been using `git clean` for 10 years I still had no idea what `-i` stood for (`interactive`) until I was writing this down.
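For reference, here’s a quick summary of what those three flags mean (paraphrased from `git clean --help`, worth double-checking against your git version):

$ git clean -n   # "dry run": just list the untracked files that would be deleted
$ git clean -i   # interactive: show the files and ask what to do with them
$ git clean -f   # force: actually delete the untracked files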
What I like to do if I’m confused:
Usually I just chaotically run `git clean -f` to delete all my untracked files and hope for the best, though I might actually switch to `git clean -i` now that I know what `-i` stands for. Seems a lot safer.
that’s all!
Hopefully some of this is helpful!
I noticed some tech bloggers I follow have been making April Cools Day posts about topics they don’t normally write about (like decaf or microscopes). The goal isn’t to trick anyone, just to write about something different for a day.
I thought those posts were fun so here is a post with some notes on learning to crochet tiny cacti.
first, the cacti
I’ve been trying to do some non-computer hobbies, without putting a lot of pressure on myself to be “good” at them. Here are some cacti I crocheted:
They are a little wonky and I like them.
a couple of other critters
Here are a couple of other things I made: an elephant, an orange guy, a much earlier attempt at a cactus, and an in-progress cactus
Some of these are also pretty wonky, but sometimes it adds to the charm: for example the elephant’s head is attached at an angle which was not on purpose but I think adds to the effect. (orange guy pattern, elephant pattern)
I haven’t really been making clothing: I like working in a pretty chaotic way and I think you need to be a lot more careful when you make clothing so that it will actually fit.
the first project: a mouse
The first project I made was this little mouse. It took me a few hours (maybe 3 hours?) and I made a lot of mistakes and it definitely was not as cute as it was in the pictures in the pattern, but it was still good! I can’t find a picture right now though.
buying patterns is great
Originally I started out using free patterns, but I found some cacti patterns I really liked in an ebook called Knotmonsters: Cactus Gardens Edition, so I bought it.
I like the patterns in that book and also buying patterns seems like a nice way to support people who are making fun patterns. I found this guide to designing your own patterns through searching on Ravelry and it seems like a lot of work! Maybe I will do it one day but for now I appreciate the work of other people who make the patterns.
modifying patterns chaotically is great too
I’ve been modifying all of the patterns I make in a somewhat chaotic way, often just because I made a mistake somewhere along the way and then decide to move forward and change the pattern to adjust for the mistake instead of undoing my work. Some of the changes I’ve made are:
- remove rows
- put fewer stitches in a row
- use a different stitch
This doesn’t always work but often it works well enough, and I think all of the mistakes help me learn.
no safety eyes
A lot of the patterns I’ve been seeing for animals suggest using “safety eyes” (plastic eyes). I didn’t really feel like buying those, so I’ve been embroidering eyes on instead. “Embroidering” might not be accurate: really I just sew some black yarn on in a haphazard way and hope it doesn’t come out looking too weird.
My crochet kit came with a big plastic yarn needle that I’ve been using to embroider the eyes.
no stitch markers
My crochet kit came with some plastic “stitch markers” which you can use to figure out where the beginning of your row is, so you know when you’re done. I’ve been finding it easier to just use a short piece of scrap yarn instead.
on dealing with all the counting
In crochet there is a LOT of counting. Like “single crochet 3 times, then double crochet 1 time, then repeat that 6 times”. I find it hard to do that accurately without making mistakes, and all of the counting is not that fun! A few things that have helped:
- go back and look at my stitches to see what I did (“have I done 1 single crochet, or 2?”). I’m not actually very good at doing this, but I find it easier to see my stitches with wool/cotton yarn than with acrylic yarn for some reason.
- count how many stitches in total I’ve done since the last row, and make sure it seems approximately right (“well, I’m supposed to have 20 stitches and I have 19, that’s pretty close!”). Then I’ll maybe just add an extra stitch in the wrong place to adjust, or maybe just leave it the way it is.
notes on yarn
So far I’ve tried three kinds of yarn: merino (for the elephant), cotton (for the cacti), and acrylic (for the orange dude). I still don’t know which one I like best, but since I’m doing small projects it feels like the right move is still to just buy small amounts of yarn and experiment. I think I like the cotton and merino more than the acrylic.
For the cacti I used Ricorumi cotton yarn, which comes in tiny balls (which is good for me because if I don’t end up liking it, I don’t have a lot of extra!) and in a lot of different colours.
There are a lot of yarn weights (lace! sock! sport! DK! worsted! bulky! and more!). I don’t really understand them yet but I think so far I’ve been mostly using DK and worsted yarn.
hook size? who knows!
I’ve mostly been using a 3.5mm hook, probably because I read a tutorial that said to use a 3.5mm hook. It seems to work fine! I used a larger hook size when making a hat, and that also worked.
I still don’t really know how to choose hook sizes but that doesn’t seem to have a lot of consequences when making cacti.
every stitch I’ve learned
I think I’ve probably only learned how to do a handful of things in crochet so far:
- magic ring (mr)
- single crochet (sc)
- half double crochet (hdc)
- front post half double crochet (fphdc)
- double crochet (dc)
- back loops only/front loops only (flo/blo)
- increase/decrease
The way I’ve been approaching learning new crochet stitches is:
- find a pattern I want to make
- start it without reviewing it very much at all
- when I get to a stitch I don’t know, watch youtube videos
- don’t watch it very carefully and get it wrong
- eventually realize that it doesn’t look right at all, rewatch the video, and continue
I’ve been using Sarah Maker’s pages a lot, except for the magic ring where I used this 3-minute youtube video.
The magic ring took me a very long time to learn to do correctly: I didn’t pay attention very closely to the 3-minute youtube video, so I did it wrong in maybe 4 projects before I figured out how to do it right.
every single thing I’ve bought
So far I’ve only needed:
- a crochet kit (which I got as a gift). it came with yarn, a bunch of crochet needles in different sizes, big sewing needles, and some other things I haven’t needed yet.
- some Ricorumi cotton (for the cacti)
- 1 ball of gray yarn (for the elephant)
I’ve been trying to not buy too much stuff, because I never know if I’ll get bored with a new hobby, and if I get bored it’s annoying to have a bunch of stuff lying around. Some examples of things I’ve avoided buying so far:
- Instead of buying polyester fiberfill, to fill all of the critters I’ve just been cutting up an old sweater I have that was falling apart.
- I’ve been embroidering the eyes instead of buying safety eyes
Everything I have right now fits in the box the crochet kit came in (which is about the size of a large shoebox), and my plan is to keep it that way for a while.
that’s all!
Mainly what I like about crochet so far is that:
- it’s a way to not be on the computer, and you can chat with people while doing it
- you can do it without buying too much stuff, it’s pretty compact
- I end up with cacti in our living room which is great (I also have a bunch of real succulents, so they go with those)
- it seems extremely forgiving of mistakes and experimentation
There are definitely still a lot of things I’m doing “wrong” but it’s fun to learn through trial and error.
These are common questions when writing documentation for OAuth-related things. While these terms are all used in RFC 6749 and many extensions, the differences between them are never actually explained.
I wanted to finally write down a definition of the terms, along with examples of when each is appropriate.
- flow – use "flow" when referring to the end-to-end process, for example:
  - "the client initiates the flow by..."
  - "the flow ends with the successful issuance of an access token"
  - This can also be combined with the type of flow, for example: "The Authorization Code flow starts by..."
- grant – use "grant" when referring to the specific POST request to the token endpoint (there's an example request after this list), for example:
  - "The authorization code grant includes the PKCE code verifier..."
  - "The refresh token grant can be used with or without client authentication..."
  - "Grant" also refers to the abstract concept of the user having granted authorization, which is expressed as the authorization code, or implicitly with the client credentials grant. This is a bit of an academic definition of the term, and is used much less frequently in normal conversation around OAuth.
- grant type – use "grant type" when referring to the definition of the flow in the spec itself, for example:
  - "there are several drawbacks to the Implicit grant type"
  - "the Authorization Code grant type enables the use of..."
Let me know if you have any suggestions for clarifying any of this, or any other helpful examples to add! I'm planning on adding this summary to OAuth 2.1 so that we have a formal reference for it in the future!
A new thing I’ve been trying while writing this Git zine is doing a bunch of polls on Mastodon to learn about:
- which git commands/workflows people use (like “do you use merge or rebase more?” or “do you put your current git branch in your shell prompt?”)
- what kinds of problems people run into with git (like “have you lost work because of a git problem in the last year or two?”)
- which terminology people find confusing (like “how confident do you feel that you know what HEAD means in git?”)
- how people think about various git concepts (“how do you think about git branches?”)
- in what ways my usage of git is “normal” and in what ways it’s “weird”. Where am I pretty similar to the majority of people, and where am I different?
It’s been a lot of fun and some of the results have been surprising to me, so here are some of the results. I’m partly just posting these so that I can have them all in one place for myself to refer to, but maybe some of you will find them interesting too.
these polls are highly unscientific
Polls on social media that I thought about for approximately 45 seconds before posting are not the most rigorous way of doing user research, so I’m pretty cautious about drawing conclusions from them. Potential problems include: I phrased the poll badly, the set of possible responses aren’t chosen very carefully, some of the poll responses I just picked because I thought they were funny, and the set of people who follow me on Mastodon is not representative of all git users.
But here are a couple of examples of why I still find these poll results useful:
- The first poll is “what’s your approach to merge commits and rebase in git?” 600 people (30% of responders) replied “I usually use merge, rarely/never rebase”. It’s helpful for me to know that there are a lot of people out there who rarely/never use rebase, because I use rebase all the time – it’s a good reminder that my experience isn’t necessarily representative.
- For the poll “how confident do you feel that you know what HEAD means in git?”, 14% of people replied “literally no idea”. That tells me to be careful about assuming that people know what `HEAD` means in my writing.
where to read more
If you want to read more about any given poll, you can click on the date at the bottom – there’s usually a bunch of interesting follow-up discussion.
Also this post has a lot of CSS so it might not work well in a feed reader.
Now! Here are the polls! I’m mostly just going to post the results without commenting on them.
merge and rebase
poll: what's your approach to merge commits and rebase in git?
merge conflicts
poll: if you use git, how often do you deal with nontrivial merge conflicts? (like where 2 people were really editing the same code at the same time and you need to take time to think about how to reconcile the edits)
another merge conflict poll:
have you ever seen a bug in production caused by an incorrect merge conflict resolution? I've heard about this as a reason to prefer merges over rebase (because it makes the merge conflict resolution easier to audit) and I'm curious about how common it is
I thought it was interesting in the next one that “edit the weird text file by hand” was most people’s preference:
poll: when you have a merge conflict, how do you prefer to handle it?
merge conflict follow up: if you prefer to edit the weird text file by hand instead of using a dedicated merge conflict tool, why is that?
poll: did you know that in a git merge conflict, the order of the code is different when you do a merge/rebase?
merge:
<<<<<<< HEAD
YOUR CODE
=======
OTHER BRANCH'S CODE
>>>>>>> c694cf8aabe
rebase:
<<<<<<< HEAD
OTHER BRANCH'S CODE
=======
YOUR CODE
>>>>>>> d945752 (your commit message)
(where "YOUR CODE" is the code from the branch you were on when you ran `git merge` or `git rebase`)
git pull
poll: do you prefer `git fetch` or `git pull`?
(no lectures about why you think `git pull` is bad please but if you use both I'd be curious to hear in what cases you use fetch!)
commits
[poll] how do you think of a git commit?
(sorry, you can't pick “it’s all 3”, I'm curious about which one feels most true to you)
branches
poll: how do you think about git branches? (I'll put an image in a reply with pictures for the 3 options)
as with all of these polls obviously all 3 are valid, I'm curious which one feels the most true to you
git environment
poll: do you put your current git branch in your shell prompt?
poll: do you use git on the command line or in a GUI?
(you can pick more than one option if it’s a mix of both, sorry magit users I didn't have space for you in this poll)
losing work
poll: have you lost work because of a git problem in the last year or two? (it counts even if it was "your fault" :))
meaning of various git terms
These polls gave me the impression that for a lot of git terms (fast-forward, reference, HEAD), there are a lot of git users who have “literally no idea” what they mean. That makes me want to be careful about using and defining those terms.
poll: how confident do you feel that you know what HEAD means in git?
another poll: how do you think of HEAD in git?
poll: when you see this message in `git status`:
”Your branch is up to date with 'origin/main’.”
do you know that your branch may not actually be up to date with the `main` branch on the remote?
poll: how confident do you feel that you know what the term "fast-forward" means in git, for example in this error message:
`! [rejected] main -> main (non-fast-forward)`
or this one:
fatal: Not possible to fast-forward, aborting.
(I promise this is not a trick question, I'm just writing a blog post about git terminology and I'm trying to gauge how people feel about various core git terms)
poll: how confident do you feel that you know what a "ref" or "reference" is in git? (“ref” and “reference” are the same thing)
for example in this error message (from `git push`)
error: failed to push some refs to 'github.com:jvns/int-exposed'
or this one: (from `git switch mybranch`)
fatal: invalid reference: mybranch
another git terminology poll: how confident do you feel that you know what a git commit is?
(not a trick question, I'm mostly curious how this one relates to people's reported confidence about more "advanced" terms like reference/fast-forward/HEAD)
poll: in git, do you think of "detached HEAD state" and "not having any branch checked out" as being the same thing?
poll: how confident do you feel that you know what the term "current branch" means in git?
(deleted & reposted to clarify that I'm asking about the meaning of the term)
other version control systems
I occasionally hear “SVN was better than git!” but this “svn vs git” poll makes me think that’s a minority opinion. I’m much more cautious about concluding anything from the hg-vs-git poll but it does seem like some people prefer git and some people prefer Mercurial.
poll 2: if you've used both svn and git, which do you prefer?
(no replies please, i have already read 300 comments about git vs other version control systems today and they were great but i can't read more)
gonna do a short thread of git vs other version control systems polls just to get an overall vibe
poll 1: if you've used both hg and git, which do you prefer?
(no replies please though, i have already read 300 comments about git vs other version control systems today and i can't read more)
that’s all!
It’s been very fun to run all of these polls and I’ve learned a lot about how people use and think about git.
Hello! I know I just wrote a blog post about HEAD in git, but I’ve been thinking more about what the term “current branch” means in git and it’s a little weirder than I thought.
four possible definitions for “current branch”
1. It’s what’s in the file `.git/HEAD`. This is how the git glossary defines it.
2. It’s what `git status` says on the first line.
3. It’s what you most recently checked out with `git checkout` or `git switch`.
4. It’s what’s in your shell’s git prompt. I use fish_git_prompt so that’s what I’ll be talking about.
I originally thought that these 4 definitions were all more or less the same, but after chatting with some people on Mastodon, I realized that they’re more different from each other than I thought.
So let’s talk about a few git scenarios and how each of these definitions plays out in each of them. I used `git version 2.39.2 (Apple Git-143)` for all of these experiments.
scenario 1: right after git checkout main
Here’s the most normal situation: you check out a branch.
- `.git/HEAD` contains `ref: refs/heads/main`
- `git status` says `On branch main`
- The thing I most recently checked out was: `main`
- My shell’s git prompt says: `(main)`
In this case the 4 definitions all match up: they’re all `main`. Simple enough.
scenario 2: right after git checkout 775b2b399
Now let’s imagine I check out a specific commit ID (so that we’re in “detached HEAD state”).
- `.git/HEAD` contains `775b2b399fb8b13ee3341e819f2aaa024a37fa92`
- `git status` says `HEAD detached at 775b2b39`
- The thing I most recently checked out was `775b2b399`
- My shell’s git prompt says `((775b2b39))`
Again, these all basically match up – some of them have truncated the commit ID and some haven’t, but that’s it. Let’s move on.
scenario 3: right after git checkout v1.0.13
What if we’ve checked out a tag, instead of a branch or commit ID?
- `.git/HEAD` contains `ca182053c7710a286d72102f4576cf32e0dafcfb`
- `git status` says `HEAD detached at v1.0.13`
- The thing I most recently checked out was `v1.0.13`
- My shell’s git prompt says `((v1.0.13))`
Now things start to get a bit weirder! `.git/HEAD` disagrees with the other 3 indicators: `git status`, the git prompt, and what I checked out are all the same (`v1.0.13`), but `.git/HEAD` contains a commit ID.
The reason for this is that git is trying to help us out: commit IDs are kind of opaque, so if there’s a tag that corresponds to the current commit, `git status` will show us that instead.
Some notes about this:
- If we check out the commit by its ID (`git checkout ca182053c7710a286d72`) instead of by its tag, what shows up in `git status` and in my shell prompt are exactly the same – git doesn’t actually “know” that we checked out a tag.
- it looks like you can find the tags matching `HEAD` by running `git describe HEAD --tags --exact-match` (here’s the fish git prompt code)
- You can see where `git-prompt.sh` added support for describing a commit by a tag in this way in commit 27c578885 in 2008.
- I don’t know if it makes a difference whether the tag is annotated or not.
- If there are 2 tags with the same commit ID, it gets a little weird. For example, if I add the tag `v1.0.12` to this commit so that it’s tagged with both `v1.0.12` and `v1.0.13`, you can see here that my git prompt changes, and then the prompt and `git status` disagree about which tag to display:
bork@grapefruit ~/w/int-exposed ((v1.0.12))> git status
HEAD detached at v1.0.13
(my prompt shows `v1.0.12` and `git status` shows `v1.0.13`)
scenario 4: in the middle of a rebase
Now: what if I check out the `main` branch, do a rebase, but then there was a merge conflict in the middle of the rebase? Here’s the situation:
- `.git/HEAD` contains `c694cf8aabe2148b2299a988406f3395c0461742` (the commit ID of the commit that I’m rebasing onto, `origin/main` in this case)
- `git status` says `interactive rebase in progress; onto c694cf8`
- The thing I most recently checked out was `main`
- My shell’s git prompt says `(main|REBASE-i 1/1)`
Some notes about this:
- I think that in some sense the “current branch” is `main` here – it’s what I most recently checked out, it’s what we’ll go back to after the rebase is done, and it’s where we’d go back to if I run `git rebase --abort`
- in another sense, we’re in a detached HEAD state at `c694cf8aabe2`. But it doesn’t have the usual implications of being in “detached HEAD state” – if you make a commit, it won’t get orphaned! Instead, assuming you finish the rebase, it’ll get absorbed into the rebase and put somewhere in the middle of your branch.
- it looks like during the rebase, the old “current branch” (`main`) is stored in `.git/rebase-merge/head-name`. Not totally sure about this though.
scenario 5: right after git init
What about when we create an empty repository with `git init`?
- `.git/HEAD` contains `ref: refs/heads/main`
- `git status` says `On branch main` (and “No commits yet”)
- The thing I most recently checked out was, well, nothing
- My shell’s git prompt says: `(main)`
So here everything mostly lines up, except that we’ve never run `git checkout` or `git switch`. Basically Git automatically switches to whatever branch was configured in `init.defaultBranch`.
scenario 6: a bare git repository
What if we clone a bare repository with `git clone --bare https://github.com/rbspy/rbspy`?
- `HEAD` contains `ref: refs/heads/main`
- `git status` says `fatal: this operation must be run in a work tree`
- The thing I most recently checked out was, well, nothing; `git checkout` doesn’t even work in bare repositories
- My shell’s git prompt says: `(BARE:main)`
So #1 and #4 match (they both agree that the current branch is “main”), but `git status` and `git checkout` don’t even work.
Some notes about this one:
- I think `HEAD` in a bare repository mainly only really affects 1 thing: it’s the branch that gets checked out when you clone the repository. It’s also used when you run `git log`.
- if you really want to, you can update `HEAD` in a bare repository to a different branch with `git symbolic-ref HEAD refs/heads/whatever`. I’ve never needed to do that though, and it seems weird because `git symbolic-ref` doesn’t check if the thing you’re pointing `HEAD` at is actually a branch that exists. Not sure if there’s a better way. (There’s an example after this list.)
all the results
Here’s a table with all of the results:
| | .git/HEAD | git status | checked out | prompt |
|---|---|---|---|---|
| 1. checkout main | ref: refs/heads/main | On branch main | main | (main) |
| 2. checkout 775b2b | 775b2b399... | HEAD detached at 775b2b39 | 775b2b399 | ((775b2b39)) |
| 3. checkout v1.0.13 | ca182053c... | HEAD detached at v1.0.13 | v1.0.13 | ((v1.0.13)) |
| 4. inside rebase | c694cf8aa... | interactive rebase in progress; onto c694cf8 | main | (main\|REBASE-i 1/1) |
| 5. after git init | ref: refs/heads/main | On branch main | n/a | (main) |
| 6. bare repository | ref: refs/heads/main | fatal: this operation must be run in a work tree | n/a | (BARE:main) |
“current branch” doesn’t seem completely well defined
My original instinct when talking about git was to agree with the git glossary and say that `HEAD` and the “current branch” mean the exact same thing.
But this doesn’t seem as ironclad as I used to think anymore! Some thoughts:
- `.git/HEAD` is definitely the one with the most consistent format – it’s always either a branch or a commit ID. The others are all much messier.
- I have a lot more sympathy than I used to for the definition “the current branch is whatever you last checked out”. Git does a lot of work to remember which branch you last checked out (even if you’re currently doing a bisect or a merge or something else that temporarily moves HEAD off of that branch) and it feels weird to ignore that.
- `git status` gives a lot of helpful context – these 5 status messages say a lot more than just what `HEAD` is set to currently:
  - `on branch main`
  - `HEAD detached at 775b2b39`
  - `HEAD detached at v1.0.13`
  - `interactive rebase in progress; onto c694cf8`
  - `on branch main, no commits yet`
some more “current branch” definitions
I’m going to try to collect some other definitions of the term “current branch” that I heard from people on Mastodon here and write some notes on them.
- “the branch that would be updated if I made a commit”
  - Most of the time this is the same as `.git/HEAD`
  - Arguably if you’re in the middle of a rebase, it’s different from `HEAD`, because ultimately that new commit will end up on the branch in `.git/rebase-merge/head-name`
- “the branch most git operations work against”
  - This is sort of the same as what’s in `.git/HEAD`, except that some operations (like `git status`) will behave differently in some situations, like how `git status` won’t tell you the current branch if you’re in a bare repository
on orphaned commits
One thing I noticed that wasn’t captured in any of this is whether the current commit is orphaned or not – the `git status` message (`HEAD detached from c694cf8`) is the same whether or not your current commit is orphaned.
I imagine this is because figuring out whether or not a given commit is orphaned might take a long time in a large repository: you can find out if the current commit is orphaned with `git branch --contains HEAD`, and that command takes about 500ms in a repository with 70,000 commits.
Git will warn you if the commit is orphaned (“Warning: you are leaving 1 commit behind, not connected to any of your branches…”) when you switch to a different branch though.
that’s all!
I don’t have anything particularly smart to say about any of this. The more I think about git the more I can understand why people get confused.
Hello! The other day I ran a Mastodon poll asking people how confident they were that they understood how HEAD works in Git. The results (out of 1700 votes) were a little surprising to me:
- 10% “100%”
- 36% “pretty confident”
- 39% “somewhat confident?”
- 15% “literally no idea”
I was surprised that people were so unconfident about their understanding – I’d been thinking of `HEAD` as a pretty straightforward topic.
Usually when people say that a topic is confusing when I think it’s not, the reason is that there’s actually some hidden complexity that I wasn’t considering. And after some follow-up conversations, it turned out that `HEAD` actually was a bit more complicated than I’d appreciated!
Here’s a quick table of contents:
- HEAD is actually a few different things
- the file .git/HEAD
- HEAD as in git show HEAD
- next: all the output formats
HEAD is actually a few different things
After talking to a bunch of different people about `HEAD`, I realized that `HEAD` actually has a few different closely related meanings:
- The file `.git/HEAD`
- `HEAD` as in `git show HEAD` (git calls this a “revision parameter”)
- All of the ways git uses `HEAD` in the output of various commands (`<<<<<<< HEAD`, `(HEAD -> main)`, `detached HEAD state`, `On branch main`, etc)
These are extremely closely related to each other, but I don’t think the relationship is totally obvious to folks who are starting out with git.
the file .git/HEAD
Git has a very important file called `.git/HEAD`. The way this file works is that it contains either:
- The name of a branch (like `ref: refs/heads/main`)
- A commit ID (like `96fa6899ea34697257e84865fefc56beb42d6390`)
This file is what determines what your “current branch” is in Git. For example, when you run `git status` and see this:
$ git status
On branch main
it means that the file `.git/HEAD` contains `ref: refs/heads/main`.
If `.git/HEAD` contains a commit ID instead of a branch, git calls that “detached HEAD state”. We’ll get to that later.
(People will sometimes say that HEAD contains the name of a reference or a commit ID, but I’m pretty sure that the reference has to be a branch. You can technically make `.git/HEAD` contain the name of a reference that isn’t a branch by manually editing `.git/HEAD`, but I don’t think you can do it with a regular git command. I’d be interested to know if there is a regular-git-command way to make `.git/HEAD` a non-branch reference though, and if so why you might want to do that!)
HEAD as in git show HEAD
It’s very common to use `HEAD` in git commands to refer to a commit ID, like:
- `git diff HEAD`
- `git rebase -i HEAD^^^^`
- `git diff main..HEAD`
- `git reset --hard HEAD@{2}`
All of these things (`HEAD`, `HEAD^^^`, `HEAD@{2}`) are called “revision parameters”. They’re documented in man gitrevisions, and Git will try to resolve them to a commit ID.
(I’ve honestly never actually heard the term “revision parameter” before, but that’s the term that’ll get you to the documentation for this concept)
`HEAD` in `git show HEAD` has a pretty simple meaning: it resolves to the current commit you have checked out! Git resolves `HEAD` in one of two ways (there’s a small demo after this list):
- if `.git/HEAD` contains a branch name, it’ll be the latest commit on that branch (for example by reading it from `.git/refs/heads/main`)
- if `.git/HEAD` contains a commit ID, it’ll be that commit ID
next: all the output formats
Now we’ve talked about the file `.git/HEAD`, and the “revision parameter” `HEAD`, like in `git show HEAD`. We’re left with all of the various ways git uses `HEAD` in its output.
git status: “on branch main” or “HEAD detached”
When you run `git status`, the first line will always look like one of these two:
- `on branch main`. This means that `.git/HEAD` contains a branch.
- `HEAD detached at 90c81c72`. This means that `.git/HEAD` contains a commit ID.
I promised earlier I’d explain what “HEAD detached” means, so let’s do that now.
detached HEAD state
“HEAD is detached” or “detached HEAD state” mean that you have no current branch.
Having no current branch is a little dangerous because if you make new commits, those commits won’t be attached to any branch – they’ll be orphaned! Orphaned commits are a problem for 2 reasons:
- the commits are more difficult to find (you can’t run `git log somebranch` to find them)
- orphaned commits will eventually be deleted by git’s garbage collection (more on that below)
Personally I’m very careful about avoiding creating commits in detached HEAD state, though some people prefer to work that way. Getting out of detached HEAD state is pretty easy though, you can either:
- Go back to a branch (`git checkout main`)
- Create a new branch at that commit (`git checkout -b newbranch`)
- If you’re in detached HEAD state because you’re in the middle of a rebase, finish or abort the rebase (`git rebase --abort`)
Okay, back to other git commands which have `HEAD` in their output!
git log: (HEAD -> main)
When you run `git log` and look at the first line, you might see one of the following 3 things:
- `commit 96fa6899ea (HEAD -> main)`
- `commit 96fa6899ea (HEAD, main)`
- `commit 96fa6899ea (HEAD)`
It’s not totally obvious how to interpret these, so here’s the deal:
- inside the `(...)`, git lists every reference that points at that commit, for example `(HEAD -> main, origin/main, origin/HEAD)` means `HEAD`, `main`, `origin/main`, and `origin/HEAD` all point at that commit (either directly or indirectly)
- `HEAD -> main` means that your current branch is `main`
- If that line says `HEAD,` instead of `HEAD ->`, it means you’re in detached HEAD state (you have no current branch)
If we use these rules to explain the 3 examples above, the result is:
- `commit 96fa6899ea (HEAD -> main)` means:
  - `.git/HEAD` contains `ref: refs/heads/main`
  - `.git/refs/heads/main` contains `96fa6899ea`
- `commit 96fa6899ea (HEAD, main)` means:
  - `.git/HEAD` contains `96fa6899ea` (HEAD is “detached”)
  - `.git/refs/heads/main` also contains `96fa6899ea`
- `commit 96fa6899ea (HEAD)` means:
  - `.git/HEAD` contains `96fa6899ea` (HEAD is “detached”)
  - `.git/refs/heads/main` either contains a different commit ID or doesn’t exist
merge conflicts: <<<<<<< HEAD is just confusing
When you’re resolving a merge conflict, you might see something like this:
<<<<<<< HEAD
def parse(input):
return input.split("\n")
=======
def parse(text):
return text.split("\n\n")
>>>>>>> somebranch
I find `HEAD` in this context extremely confusing and I basically just ignore it. Here’s why.
- When you do a merge, `HEAD` in the merge conflict is the same as what `HEAD` was when you ran `git merge`. Simple.
- When you do a rebase, `HEAD` in the merge conflict is something totally different: it’s the other commit that you’re rebasing on top of. So it’s totally different from what `HEAD` was when you ran `git rebase`. It’s like this because rebase works by first checking out the other commit and then repeatedly cherry-picking commits on top of it.
Similarly, the meaning of “ours” and “theirs” are flipped in a merge and rebase.
The fact that the meaning of `HEAD` changes depending on whether I’m doing a rebase or merge is really just too confusing for me, and I find it much simpler to just ignore `HEAD` entirely and use another method to figure out which part of the code is which.
some thoughts on consistent terminology
I think HEAD would be more intuitive if git’s terminology around HEAD were a little more internally consistent.
For example, git talks about “detached HEAD state”, but never about “attached HEAD state” – git’s documentation never uses the term “attached” at all to refer to `HEAD`. And git talks about being “on” a branch, but never “not on” a branch.
So it’s very hard to guess that `on branch main` is actually the opposite of `HEAD detached`. How is the user supposed to guess that `HEAD detached` has anything to do with branches at all, or that “on branch main” has anything to do with `HEAD`?
that’s all!
If I think of other ways `HEAD` is used in Git (especially ways HEAD appears in Git’s output), I might add them to this post later.
If you find HEAD confusing, I hope this helps a bit!
It was 11am at the Fort Lauderdale airport, an hour after my non-stop flight to Portland was supposed to have boarded. As I had been watching our estimated departure get pushed back in 15 minute increments, I finally received the dreaded news over the loudspeaker - the flight was cancelled entirely. As hordes of people started lining up to rebook their flights with the gate agent, I found a quiet spot in the corner and opened up my laptop to look at my options.
The other Alaska Airlines flight options were pretty terrible. There was a Fort Lauderdale to Seattle to Portland option that would have me landing at midnight. A flight on a partner airline had a 1-hour connection through Dallas, and there were only middle seats available on both legs. So I started to get creative, and searched for flights from Orlando, about 200 miles north. There was a non-stop on Alaska Airlines at 7pm, with plenty of available seats, so I called up customer service and asked them to change my booking. Since the delay was their fault, there were no change fees even though the flight was leaving from a different airport.
So now it was my responsibility to get myself from Fort Lauderdale to Orlando by 7pm. I could have booked a flight on a budget airline for $150, but it wouldn't have been a very nice experience, and I'd have a lot of time to kill in the Orlando airport. Then I remembered that the Brightline train recently opened new service from Miami to Orlando, supposedly taking less time than driving there.
Brightline Station Fort Lauderdale
Never having tried to take that train before, I didn't realize they run a shuttle service from the Fort Lauderdale airport to the train station, so I jumped in an Uber headed to the station. On the way there, I booked a ticket on my phone. The price from Miami to Orlando was $144 for Coach, or $229 for Premium class. Since this will probably be the only time I take this train for the foreseeable future, I splurged for the Premium class ticket to see what that experience is like.
Astute readers will have noticed that I mentioned I booked a ticket from Miami rather than Fort Lauderdale. We'll come back to that in a bit. Once I arrived at the station, I began my Brightline experience.
Walking in to the station felt like something between an airport and a car rental center.
There was a small ticket counter in the lobby, but I already had a ticket on my phone so I went up the escalators.
At the top of the escalators was an electronic gate where you scan your QR code to go through. Mine didn't work (again, more on that later), but it was relatively empty and a staff member was able to look at my ticket on my phone and let me through anyway. There was a small X-Ray machine; I tossed my roller bag and backpack onto the belt, but kept my phone and wallet in my pocket, and walked through the security checkpoint.
Once through the minimal security checkpoint, I was up in the waiting area above the platform with a variety of different sections. There was a small bar with drinks and snacks, a couple large seating areas, an automated mini mart, some tall tables...
... and the entrance to the Premium lounge.
Brightline Station Premium Lounge
The Premium Lounge entrance had another electronic gate with a QR code scanner. I tried getting in but it also rejected my boarding pass. My first thought was that I had booked my ticket just 10 minutes earlier so it hadn't synced up yet, so I went back to the security checkpoint and asked what was wrong. They looked at my boarding pass and had no idea what was wrong, and let me in to the lounge via the back employee-only entrance instead.
Once inside the lounge, I did a quick loop to see what kind of food and drink options there were. The lounge was entirely un-attended, the only staff I saw were at the security checkpoint, and someone occasionally coming through to take out dirty dishes.
The first thing you're presented with after entering the lounge is the beverage station. There are 6 taps with beer and wine, and you use a touch screen to make your selection and pour what you want.
On the other side of the wall is the food. I arrived at the tail end of the breakfast service, so there were pretty slim pickings by the end.
There were yogurts, granola, a bowl of bacon and egg mix, several kinds of pastries, and a bowl of fruit that nobody seemed to have touched. I don't know if this was just because this was the end of the morning, but if you were vegan or gluten free there was really nothing you could eat there.
There was also a coffee and tea station with some minimal options.
Shortly after I arrived, it rolled over to lunch time, so the staff came out to swap out the food at the food station. The lunch options were also minimal, but there was a bit more selection.
There was a good size meat and cheese spread. I'm not a big fan of when they mix the meat and cheese on the same plate, but there was enough of a cheese island in the middle I was reasonably confident I wasn't eating meat juice off the side of the cheeses. The pasta dish also had meat so I didn't investigate further. Two of the three wraps had meat and I wasn't confident about which were which so I skipped those. There was a pretty good spinach and feta salad, and some hummus as well as artichoke dip, and a variety of crackers. If you like desserts, there was an even better selection of small desserts as well.
At this point I was starting to listen for my train's boarding announcement. There was barely any staff visible anywhere, but the few people I saw made it clear that trains would be announced over the loudspeakers when it was time to board. There was also a sign at the escalators to the platform that said boarding opens 10 minutes before the train departs.
The trains run northbound and southbound every 1-2 hours, so it's likely that you'll only hear one announcement for a train other than yours the entire time you're there.
The one train announcement I heard was a good demonstration of how quickly the whole process actually is once the train shows up. The train pulls up, they call everyone down to the platform, and you have ten minutes to get onto the train. Ten minutes isn't much, but you're sitting literally right on top of the train platform so it takes no time to get down there.
Once your train is called, it's time to head down the escalator to the train platform!
Boarding the Train
But wait, I mentioned my barcode had failed to scan a couple times at this point. Let me explain. Apparently, in my haste in the back of the Uber, I had actually booked a ticket from Miami to Orlando, but since I was already at the Fort Lauderdale airport, I had gone to the Fort Lauderdale Brightline station, the closest one. So the departure time on my ticket didn't match the time the train arrived at Fort Lauderdale, and the ticket gates refused to let me in because the ticket didn't depart from that station. I don't know why none of the employees who looked at my ticket ever mentioned this. It didn't end up being a big deal because thankfully Miami was earlier in the route, so I essentially just got on my scheduled train 2 stops late.
So anyway, I made my way down to the platform to board the train. I should also mention at this point that I was on a conference call from my phone. I had previously connected my phone to the free wifi at the station, and it was plenty good enough for the call. As I went down the escalator to the platform, it broke up a bit in the middle of the escalator, but picked back up once I was on the platform outside.
There were some signs on the platform to indicate "Coach 1", "Coach 2" and "Coach 3" cars. However my ticket was a "Premium" ticket, so I walked to where I assumed the front of the train would be when it pulled up.
I got on the train on the front car marked "SMART" and "3", seats 9-17. It wasn't clear what "SMART" was since I didn't see that option when booking online. My seat was seat 9A, so I wasn't entirely sure I was in the right spot, but I figured better to be on the train than on the platform, so I just went in. We started moving shortly after. As soon as I walked in, I had to walk past the train attendant pushing a beverage cart through the aisles. I made it to seat 9, but it was occupied. I asked the attendant where my seat was, and she said it was in car 1 at the "front", and motioned to the back of the train. I don't know why their cars are in the opposite order you'd expect. So I took my bags back to car 1 where I was finally greeted with the "Premium" sign I was looking for.
I was quickly able to find my seat, which was not in fact occupied. The Premium car was configured with 2 seats on one side and 1 seat on the other side.
The Brightline Premium Car
Some of the seats are configured to face each other, so there is a nice variety of seating options. You could all be sitting around a table if you booked a ticket for 4 people, or you could book 2 tickets and sit either next to each other or across from each other.
Since I had booked my ticket so last minute, I had basically the last available seat in the car, so I was sitting next to someone. As soon as I sat down, the beverage cart came by with drinks. The cart looked like the same type you'd find on an airplane, down to identical warning stickers such as the "must be secured for takeoff and landing" sign. The drink options were also similar to what you'd get in a premium economy flight service. I opted for a glass of prosecco and made myself comfortable.
The tray table at the seat had two configurations. You could either drop down a small flap or the whole tray.
The small tray was big enough to hold a drink or an iPad or phone, but not much else. The large tray was big enough for my laptop with a drink next to it as well as an empty glass or bottle behind it.
Under the seat there was a single 120V power outlet shared between the two seats, as well as two USB-C ports.
Shortly after I had settled in, the crew came back with a snack tray and handed me four snacks without really giving me the option of refusing any of them.
At this point I wasn't really hungry since I had just eaten at the airport, so I stuffed the snacks in my bag, except for the prosciutto, which I offered to my seat mate but he refused.
By this point we were well on our way to the Boca Raton stop. A few people got off and on there, and we continued on. I should add that I always feel a bit unsettled when there's that much churn of people getting on and off. The stops were about 20-30 minutes apart, which meant I never really felt settled in for the first part of the ride. This is the same reason I prefer one 6-hour flight over two 3-hour flights: I like to be able to settle in and just not think about anything until we arrive.
We finally left the last of the South Florida stops, West Palm Beach, and started the rest of the trip to Orlando. A bunch of people got off at West Palm Beach, enough that the Premium cabin was nearly empty at that point. I was able to move to the seat across the aisle which was a window/aisle seat all to myself!
Finally I could settle in for the long haul. Shortly before 3, the crew came by with the lunch cart. The options were vegetarian or non-vegetarian, which made the choice easy for me.
The vegetarian option was a tomato basil mozzarella sandwich, a side of fruit salad, and some vegetables with hummus. The hummus was surprisingly good, not like the little plastic tubs you get at the airport. The sandwich was okay, but did have a nice pesto spread on it.
After lunch, I opened up my computer to start writing this post and worked on it for most of the rest of the trip.
As the train started making a left turn to head west, the conductor came on the loudspeaker and made an announcement along the lines of "we're about to head west onto the newest tracks that have been built in the US in 100 years. We'll be reaching 120 miles per hour, so feel free to feel smug as we whiz by the cars on the highway." And sure enough, we really picked up speed on that stretch! While we had hit 100-120mph briefly during the trip north, that last stretch was a solid 120mph sustained for about 20 minutes!
Orlando Station
We finally slowed down and pulled into the Orlando station at the airport.
Disembarking the train was simple enough. This was the last stop of the train so there wasn't quite as much of a rush to get off before the train started again. There's no need to mind the gap as you get off since there's a little platform that extends from the train car.
At the Orlando station there was a short escalator up and then you exit through the automated gates.
I assumed I would have to scan my ticket when exiting, but that ended up not being the case, which meant the only time my ticket was ever checked was when entering the station. I never saw anyone come through to check tickets on the train.
At this point I was already in the airport, and it was a short walk around the corner to the tram that goes directly to the airport security checkpoint.
The whole trip took 176 minutes for 210 miles, an average speed of about 71 miles per hour. When moving, we were typically doing anywhere from 80 to 120 miles per hour.
Summary
- The whole experience was way nicer than an airplane; I would take this over a short flight from Miami to Orlando any day.
- It felt similar to a European train, but with service closer to an airline.
- The cart service needs to be better timed around the stops, so it isn't blocking the aisle while people are boarding.
- The only ticket check was when entering the station, nobody came to check my ticket or seat on the train, or even when I left the destination station.
- While the Premium car food and drinks were free, I'm not sure it was worth the $85 extra ticket price over just buying the food you want.
- Unfortunately the ticket cost was similar to that of budget airlines; I would have preferred the cost to be slightly lower. But even so, I would definitely take this train over a budget airline at the same price.
We need more high speed trains in the US! I go from Portland to Seattle often enough that a train running every 90 minutes, faster than a car and easier and more comfortable than an airplane, would be so nice!
After a lot of discussion on the mailing list over the last few months, and after some excellent discussions at the OAuth Security Workshop, we've been working on revising the OAuth for Browser-Based Apps draft to provide clearer guidance and a clearer discussion of the threats and consequences of the various architectural patterns it describes.
I would like to give a huge thanks to Philippe De Ryck for stepping up to work on this draft as a co-author!
This version is a huge restructuring of the draft. It now starts with a concrete description of the possible threats posed by malicious JavaScript, along with the consequences of each, and the architectural patterns have been updated to reference which of those threats each pattern mitigates. This restructuring should help readers make a better informed decision by letting them evaluate the risks and benefits of each solution.
https://datatracker.ietf.org/doc/html/draft-ietf-oauth-browser-based-apps
https://www.ietf.org/archive/id/draft-ietf-oauth-browser-based-apps-15.html
Please give this a read, I am confident that this is a major improvement to the draft!
Bluesky, a new social media platform built on the AT Protocol, is unsurprisingly running up against the same challenges and limitations that Flickr, Twitter and many other social media platforms faced in the 2000s: passwords!
You wouldn't give your Gmail password to Yelp, right? So why should you give your Bluesky password to random apps?
The current official Bluesky iOS application unsurprisingly works by logging in with a username and password. That's the easiest form of authentication to implement, even if it is the least secure. Since Bluesky and the AT Protocol actually intend to create an entire ecosystem of servers and clients, this is inevitably going to lead to a complete security disaster. In fact, we're already seeing people spin up prototype Bluesky clients and share links to them, which teaches users that there's nothing wrong with handing out their account passwords to random websites and applications that ask for them. Clearly there has to be a solution, right?
The good news is there has been a solution for about 15 years -- OAuth! This is exactly the problem OAuth was created to solve: how do we let third-party applications access data in a web service without sharing the password with that application?
What's novel about Bluesky (and other similarly decentralized and open services like WordPress, Mastodon, and Micro.blog) is that there is an expectation that any user should be able to bring any client to any server, without prior relationships between client developers and servers. This is in contrast to consumer services like Twitter and Google, which limit which developers can access their APIs through a developer registration process. I wrote more about this problem in a previous blog post, OAuth for the Open Web.
There are two separate problems that Bluesky can solve with OAuth, especially a flavor of OAuth like IndieAuth.
- How apps can access data in the user's Personal Data Server (PDS)
- How the user logs in to their PDS
How apps can access the user's data
This is the problem OAuth solved when it was originally created, and the problem ATProto currently has. It's obviously very unsafe to have users give their PDS password to every third-party application that's created, especially since the ecosystem is totally open, so there's no way for a user to know how legitimate a particular application is. OAuth solves this by having the application redirect to the OAuth server; the user logs in there, and the application gets back only an access token.
ATProto already uses access tokens and refresh tokens (although it strangely calls them accessJwt and refreshJwt), so this is a small leap to make. OAuth support in mobile apps has gotten a lot better than it was 10 years ago, and there is now first-class support for this pattern on iOS and Android that makes the experience work much better than the plain redirect model of a decade ago.
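For context, here's roughly what the current password-based login looks like, as a Python sketch against the public com.atproto.server.createSession XRPC method (the handle and password here are placeholders, and of course this is exactly the pattern third-party apps shouldn't be using):

```python
import requests

# Current ATProto login: the app collects the user's password directly
# and exchanges it for the JWT pair at the PDS. This is the pattern
# OAuth is designed to replace.
resp = requests.post(
    "https://bsky.social/xrpc/com.atproto.server.createSession",
    json={"identifier": "jay.bsky.social", "password": "hunter2"},
)
resp.raise_for_status()
session = resp.json()

access_jwt = session["accessJwt"]    # short-lived access token
refresh_jwt = session["refreshJwt"]  # used to obtain a new accessJwt later
```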
Here is the rough experience the user would see when logging in to an app:
- The user launches the app and taps the "Sign In" button
- The user enters their handle or server name (e.g. jay.bsky.social, bsky.social, or aaronpk.com)
- The app discovers the user's OAuth server, and launches an in-app browser
- The user lands on their own PDS server, and logs in there (however they log in is not relevant to the app, it could be with a password, via email magic link, a passkey, or even delegated login to another provider)
- The user is presented with a dialog asking if they want to grant access to this app (this step is optional, but it's up to the OAuth server whether to do this and what it looks like)
- The application receives the authorization code and exchanges it at the PDS for an access token and refresh token
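To make those last steps concrete, here's a minimal sketch of the app's side of the flow, using the standard authorization code flow with PKCE. The endpoint URLs and client_id are hypothetical placeholders; in practice the app would learn the endpoints from the discovery step:

```python
import base64, hashlib, secrets
from urllib.parse import urlencode

import requests

# Placeholder endpoints -- in practice these come from discovery (step 3).
AUTHORIZATION_ENDPOINT = "https://pds.example.com/oauth/authorize"
TOKEN_ENDPOINT = "https://pds.example.com/oauth/token"
CLIENT_ID = "https://app.example.com/"  # IndieAuth-style URL client_id
REDIRECT_URI = "https://app.example.com/callback"

# PKCE: generate a one-time verifier and send only its hash up front,
# so an intercepted authorization code can't be exchanged by an attacker.
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
code_challenge = base64.urlsafe_b64encode(
    hashlib.sha256(code_verifier.encode()).digest()
).rstrip(b"=").decode()

# Steps 4-5: open this URL in the in-app browser; the user logs in to
# their PDS there and approves the app.
auth_url = AUTHORIZATION_ENDPOINT + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "state": secrets.token_urlsafe(16),
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
})

# Step 6: the PDS redirects back with ?code=...; exchange it for tokens.
def exchange_code(code: str) -> dict:
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "code_verifier": code_verifier,
    })
    resp.raise_for_status()
    return resp.json()  # contains access_token and refresh_token
```

The important property is that the app only ever handles the authorization code and the resulting tokens; however the user actually authenticated at their PDS never passes through the app.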
Most of this is defined in the core OAuth specifications. The parts that are missing from OAuth are:
- discovering an OAuth server given a server name
- and how clients should be identified when there is no client preregistration step.
That's where IndieAuth comes in. With IndieAuth, the user's authorization server is discovered by fetching the web page at their URL. IndieAuth avoids the need for client registration by also using URLs as OAuth client_ids.
This does mean IndieAuth assumes there is an HTML document hosted at the URL the user enters, which works well for web-based solutions, and might even work well for Bluesky given the number of people who have already rushed to set their Bluesky handle to the same URL as their personal website. But long term, it might be an additional burden for people who want to bring their own domain to Bluesky if they aren't also hosting a website there.
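To illustrate, here's a rough sketch of that discovery step in Python: fetch the page at the user's URL and look for a rel link pointing at their authorization server. (This shows the older rel=authorization_endpoint style for simplicity; current IndieAuth discovers a metadata document via rel=indieauth-metadata, and a complete client would also check HTTP Link headers.)

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests

class RelLinkParser(HTMLParser):
    """Collect the first href seen for each rel value on <link>/<a> tags."""
    def __init__(self):
        super().__init__()
        self.rels: dict[str, str] = {}

    def handle_starttag(self, tag, attrs):
        if tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        rel, href = attrs.get("rel"), attrs.get("href")
        if rel and href:
            for r in rel.split():  # rel can hold multiple space-separated values
                self.rels.setdefault(r, href)

def discover_authorization_endpoint(user_url: str) -> str | None:
    resp = requests.get(user_url)
    resp.raise_for_status()
    parser = RelLinkParser()
    parser.feed(resp.text)
    href = parser.rels.get("authorization_endpoint")
    # rel links may be relative URLs, so resolve against the final page URL
    return urljoin(resp.url, href) if href else None

print(discover_authorization_endpoint("https://aaronpk.com/"))
```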
There's a new discussion happening in the OAuth working group to enable this kind of authorization server discovery from a URL which could rely on DNS or a well-known endpoint. This is in-progress work at the IETF, and I would love to have ATProto/Bluesky involved in those discussions!
How the user logs in to their PDS
Currently, the AT Protocol specifies that login happens with a username and password to get the tokens the app needs. Once clients start using OAuth to log in instead, this method can be dropped from the specification, which interestingly opens up a lot of new possibilities.
Passwords are inherently insecure, and there has been a multi-year effort to improve the security of every online service by adding two-factor authentication and even moving away from passwords entirely by using passkeys instead.
Imagine Bluesky wants to add multifactor authentication to its current service today. There's no good way to add it to the existing API, since the Bluesky client sends the password to the API and expects an access token immediately. If Bluesky switches to the OAuth flow described above, then the app never sees the password, which means the Bluesky server can start doing more fun things with multifactor auth, and even passwordless flows!
Logging in with a passkey
Here is the same sequence of steps but this time swapping out the password step for a passkey.
- The user launches the app and taps the "Sign In" button
- The user enters their handle or server name (e.g. jay.bsky.social, bsky.social, or aaronpk.com)
- The app discovers the user's OAuth server, and launches an in-app browser
- The user lands on their own PDS server, and logs in there with a passkey
- The user is presented with a dialog asking if they want to grant access to this app (this step is optional, but it's up to the OAuth server whether to do this and what it looks like)
- The application receives the authorization code and exchanges it at the PDS for an access token and refresh token
This is already a great improvement, and the nice thing is app developers don't need to worry about implementing passkeys, they just need to implement OAuth! The user's PDS implements passkeys and abstracts that away by providing the OAuth API instead.
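To illustrate that division of labor, here's a hand-wavy sketch of what the PDS side might look like (Flask-style; verify_passkey_assertion and issue_authorization_code are stand-ins for a real WebAuthn library and the PDS's OAuth internals, not real APIs). The key point is that none of this code lives in the app:

```python
from flask import Flask, request, redirect, session

app = Flask(__name__)
app.secret_key = "change-me"

def verify_passkey_assertion(assertion: dict) -> str:
    """Stand-in for a real WebAuthn library call that validates the
    passkey assertion and returns the authenticated user's handle."""
    raise NotImplementedError

def issue_authorization_code(user: str, client_id: str) -> str:
    """Stand-in for the PDS's OAuth server minting a one-time code
    bound to this user and this client."""
    raise NotImplementedError

@app.route("/oauth/authorize/passkey", methods=["POST"])
def authorize_with_passkey():
    # The PDS's own login page collected this assertion via
    # navigator.credentials.get() in the in-app browser.
    user = verify_passkey_assertion(request.json["assertion"])

    # From here on it's plain OAuth: mint an authorization code and
    # send the user back to the app. The app never saw the passkey.
    code = issue_authorization_code(user, client_id=session["client_id"])
    return redirect(f"{session['redirect_uri']}?code={code}&state={session['state']}")
```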
Logging in with IndieAuth
Another variation of this would be if the Bluesky service itself supported delegating logins instead of managing any passwords or passkeys at all.
Since Bluesky already supports users setting their handle to their own personal website, it's a short leap to imagine allowing users to authenticate themselves to Bluesky using their website as well!
That is exactly the problem IndieAuth already solves, and there are quite a few implementations of IndieAuth providers in the wild, including Micro.blog, a WordPress plugin, a Drupal module, and many options for self-hosting an endpoint.
Let's look at what the sequence would look like for a user to use the bsky.social PDS with their custom domain handle mapped to it.
- The user launches the app and taps the "Sign In" button
- The user enters their server name (e.g. bsky.social)
- The app discovers the OAuth server and launches an in-app browser
- The user enters their handle, and bsky.social determines whether to prompt for a password or do an IndieAuth flow to their server
- The user is redirected to their own website (IndieAuth server) and authenticates there, and is then redirected back to bsky.social
- The user is presented by bsky.social with a dialog asking if they want to grant access to this app
- The application receives the authorization code and exchanges it at the PDS for an access token and refresh token
This is very similar to the previous flows; the difference is that in this version, bsky.social is the OAuth server as far as the app is concerned. The app never sees the user's actual IndieAuth server at all.
Further Work
These are some ideas to kick off the discussion of improving the security of Bluesky and the AT Protocol. Let me know if you have any thoughts on this! There is of course a lot more detail to discuss about the specifics, so if you're interested in diving in, a good place to start is reading up on OAuth as well as the IndieAuth extension to OAuth, which solves some of the problems that exist in this space.
You can reply to this post by sending a Webmention from your own website, or you can get in touch with me via Mastodon or, of course, find me on Bluesky as @aaronpk.com!