ongoing fragmented essay by Tim Bray
Totem Tribe Towers 7 Mar 2025, 9:00 pm
I bought new speakers. This story combines beautiful music with advanced analogue technology and nerdy obsession. Despite which, many of you are not fascinated by high-end audio; you can leave now. Hey, this is a blog, I get to write about what excites me. The seventeen of you who remain will probably enjoy the deep dive.

Totem Tribe Tower loudspeakers, standing on a subwoofer. This picture makes them look bigger than they really are. They come in black or white, satin or gloss finish. Prettier with the grille on, I think.
Why?
My main speakers were 22 years old and bore scars from toddlers (now grown) and cats (now deceased). While they still sounded beautiful, there was a loss of precision. They’d had a good run.
Speakers matter
Just in the last year, I’ve become convinced, and argued here, that both DACs and amplifiers are pretty well solved problems, that there’s no good reason to spend big money on them, and that you should focus your audio investments on speakers and maybe room treatment. So this purchase is a big deal for me.
How to buy?
The number of boutique speaker makers, from all over the world, is mind-boggling; check out the Stereophile list of recommendations. Here’s the thing: Pretty well all of them sound wonderful. (The speakers I bought haven’t been reviewed by Stereophile.)
So there are too many options. Nobody could listen to even a small proportion of them, at any price point. Fortunately, I had three powerful filters to narrow things down. The speakers had to (1) look nice, (2) be Canadian products, and probably (3) come from Totem Acoustic.
Decor?
I do not have, nor do I want, a man-cave. I’ve never understood the concept.
And you have to be careful. There are high-end speakers, some very well-reviewed, with design sensibilities right out of Mad Max or Brazil. And then a whole bunch that are featureless rectangles with drivers on the front.
Ours have to live in a big media alcove just off the kitchen; they are shared by the pure-audio system and the huge TV. The setup has to please the eyes of the whole family.
Canadian?
At this point in time, a position of “from anywhere but the US, the malignant force threatening our sovereignty” would be unsurprising in a Canadian. But there are unsentimental reasons, too. It turns out Canadian speaker makers have had an advantage stretching back many decades.
This is mostly due to the work of Floyd Toole, electrical engineer and acoustician, once an employee of Canada’s National Research Council, who built an anechoic chamber at the NRC facility, demonstrated that humans can reliably detect differences in speaker accuracy, and made his facility available to commercial speaker builders. So there have been quite a few good speakers built up here over the years.
Totem?
What happened was, in 1990 or so I went to an audio show down East somewhere and met Vince Bruzzese, founder of Totem Acoustic, who was showing off his then-brand-new “Model One” speakers. They were small and basic-black, and they entirely melted my heart playing a Purcell string suite. They still sell them, I see. Also, the Totem exhibit was having a quiet spell so there was time to talk, and it turned out that Bruzzese and I liked a lot of the same music.
So I snapped up the Model Ones and that same set is still sounding beautiful over at our cabin. And every speaker I’ve bought in the intervening decades has come from Totem or from PSB, another excellent Toole-influenced Canadian shop. I’ve also met and conversed with Paul Barton, PSB’s founder and main brain. Basically, there’s a good chance that I’ll like anything Vince or Paul ship.
My plan was to give a listen to those two companies’ products. A cousin I’d visited last year had big recent PSB speakers and I liked them a whole lot, so they were on my menu. But PSB seems to have given up on audio dealers and wants to sell online. Huh?! Maybe it’ll work for them, but it doesn’t work for me.
So I found a local Totem dealer: Audiofi in Mount Pleasant.
Auditioning
For this, you should use some of your most-listened-to tracks from your own collection. I took my computer along for that purpose, but it turned out that Qobuz had ’em all. (Hmm, maybe I should look closer at Qobuz.)
Here’s what was on my list. I should emphasize that, while I like all these tracks, they’re not terribly representative of what I listen to. Each is selected to stress a specific aspect of audio reproduction. The Americana and Baroque and Roots Rock that I’m currently fixated on are pretty easy to reproduce.
200 More Miles from the Cowboy Junkies’ Trinity Session. Almost any track from this record would do; they recorded with a single Ambisonic microphone and any competent setup should make it feel like you’re in the room with them. And Margo’s singing should make you want to cry.
The Longships, from Enya’s Watermark album. This is a single-purpose test for low bass. It has these huge carefully-tuned bass-drum whacks that just vanish on most speakers without extreme bass extension, and the music makes much less sense without them. You don’t have to listen to the whole track; but it’s fine music, Enya was really on her game back then.
The opening of Dvořák’s Symphony #9, “From the New World”. There are plenty of good recordings, but I like Solti and the Chicago Symphony. Dvořák gleefully deploys jump-scare explosions of massed strings and other cheap orchestration tricks in the first couple of minutes to pull you into the symphony. What I’m looking for is the raw physical shock of the first big full-orchestra entrance.
Death Don’t Have No Mercy from Hot Tuna’s Live At Sweetwater Two. Some of the prettiest slide guitar you’ll hear anywhere from Kaukonen, and magic muscle from Casady. And then Jorma’s voice, as comfortable as old shoes and full of grace. About three minutes in there’s an instrumental break and you want to hear the musical lines dancing around each other with no mixups at all.
First movement of Beethoven’s Sonata #23, “Appassionata”, Ashkenazy on London. Pianos are very difficult; two little speakers have a tiny fraction of the mass and vibrating surface of a big concert grand. It’s really easy for the sound to be on the one hand too small, or on the other all jumbled up. Ashkenazy and the London engineers do a fine job here; it really should sound like he’s sitting across the room from you.
Cannonball, the Breeders’ big hit. It’s a pure rocker and a real triumph of arrangement and production, with lots of different guitar/keys/drum tones. You need to feel it in your gut, and the rock & roll edge should be frightening.
Identikit from Radiohead’s A Moon Shaped Pool. This is mostly drums and voice, although there are eventually guitar interjections. It’s a totally artificial construct, no attempt to sound like live musicians in a real space. But the singing and drumming are fabulous and they need to be 100% separated in space, dancing without touching. And Thom Yorke in good voice had better make you shiver a bit.
Miles Runs The Voodoo Down from Bitches Brew. This is complex stuff, and Teo Macero’s production wizardry embraces the complexity without losing any of that fabulous band’s playing. Also Miles plays two of the greatest instrumental solos ever recorded, any instrument, any genre, and one or two of the ascending lines should feel like he’s pulling your whole body up out of your chair.
Emmylou Harris. This would better be phrased as “Some singer you have strong emotional reactions to.” I listened to the title track and Deeper Well from the Wrecking Ball album. If a song that can make you feel that way doesn’t make you feel that way, try different speakers.
The listening session
I made an appointment with Phil at Audiofi, and we spent much of an afternoon listening. I thought Audiofi was fine, would go back. Phil was erudite and patient and not pushy and clearly loves the technology and music and culture.
I was particularly interested in the Element Fire V2, which has been creating buzz in online audiophile conversation. They’re “bookshelf” (i.e. stand-mounted) rather than floorstanders, but people keep saying they sound like huge tower speakers that are taller than you are. So I was predisposed to find them interesting, and I listened to maybe half of the list above.
But I was unhappy; it just wasn’t making me smile. Sure, there was a stereo image, but at no point did I get a convincing musicians-are-right-over-there illusion. It was particularly painful on the Cowboy Junkies. The sound leapt satisfactorily out of the speakers on the Dvořák and was brilliant on Cannonball, but there were too many misses.
Also, the longer I looked at them the less they pleased my eyes.
“Not working, sorry. Let’s listen to something else,” I said. I’d already noticed the Tribe Towers, which, even though they were floorstanders, looked skinny and pointy compared to the Elements. I’d never read anything about them, but they share the Elements’ interesting driver technology, and are cheaper.
So we set them up and they absolutely aced everything the Elements had missed. The speakers just vanished, I mean, and there was a three-dimensional posse of musicians across the room, filling the space with music. They flunked the Enya drum-thwack test, but that’s OK because I have a subwoofer (from PSB) at home. In particular, they handled Ashkenazy pounding out the Beethoven just absolutely without effort. I’m not sure I’ve ever heard better piano reproduction.
And the longer I looked at them the more my thinking switched from “skinny and pointy” to “slender and elegant”.
A few minutes in, I told Phil, I was two-thirds sold. He suggested I look at some Magico speakers, but they were huge and like $30K; as an audiophile I’m only mildly deranged. And they’re American, so no thanks.
I went home to think about it. I was worried that I’d somehow been unfair to the Elements. Then I read the Stereophile review, and while the guy who did the subjective listening test loved ’em, the lab measurements seemed to show real problems.
I dunno. Maybe that was the wrong room for them. Or the wrong amplifier. Or the wrong positioning. Or maybe they’re just a rare miss from Totem.
My research didn’t turn up a quantitative take on the Tribes, just a lot of people writing that they sound much bigger than they really are, and that they were happy they’d bought them.
And I’d been happy listening to them. So I pulled the trigger. My listening space is acoustically friendlier than the one at Audiofi and if they made me happy there, they’d make me happy at home.
And they do. Didn’t worry too much about positioning, just made sure it was symmetric. The first notes they played were brilliant.
But how does it sound?
See all those auditioning tracks up above, where it says what speakers “should” do? They do, that’s what they sound like.
I’ve been a little short on sleep, staying up late to listen to music.
Follow-up: Customer service
As noted above I have a subwoofer, and my preamp lets you configure where to roll off the bass going to the main speakers and hand off to the subwoofer. I wrote off to Totem’s customer-support email address wondering if they had any guidance on the crossover frequency. They got back to me with specific advice, and another couple of things to double-check.
High-end audio. Simpatico salespeople. The products last decades. The vendors answer emails from random customers. Businesses it’s still possible to like.
Bye, Prime 6 Mar 2025, 9:00 pm
Today I canceled my Amazon Prime subscription.

Why?
As I wrote in Not an Amazon Problem (and please go read that if you haven’t), I don’t see myself as an enemy of Amazon, particularly. I think the pressures of 21st-century capitalism have put every large company in a place where they really can’t afford to be ethical, or the financial sector will rip them to shreds and replace the CEO with someone who will maximize shareholder return at all costs, without any of that amateurish “ethics” stuff.
To the extent that Amazon is objectionable, it’s a symptom of those circumstances.
I’m bailing out of Prime not to hurt Amazon, but because it doesn’t make commercial or emotional sense for me just now.
Commercial?
Yes, free next-day delivery is pretty great. In fact, in connection with our recent move, I’ve been ordering small cheap stuff furiously: USB cables, light switches, closet organizers, a mailbox, a TV mount, WiFi hubs, banana plugs, you name it.
But the moving operations are mostly done, there are few (if any) things we really need the next day, and we’re fortunate, living in the center of a 15-minute city. So getting my elderly ass out of my chair and going to a store is a good option, for more than one reason.
Second, for a lot of things you want to order, the manufacturer has its own online store these days and a lot of them are actually well-built, perfectly pleasant to use.
Third, Amazon’s prices aren’t notably cheaper than the alternatives.
Emotional?
Amazon is a US corporation and the US is now hostile to Canada, repeatedly threatening to annex us. So I’m routing my shopping dollars away from there generally and to Canadian suppliers specifically. Dumping Prime is an easy way to help that along.
Second, shopping on Amazon for the kinds of small cheap things listed above is more than a little unpleasant. The search-results page is a battle of tooth and claw among low-rent importers. Also it’s just really freaking ugly, hurts my eyes to look at it.

Really? I have no idea what they were.
Finally, one of Prime’s big benefits used to be Prime Video, but no longer. There was just no excuse for greenlighting that execrable Rings of Power show, and I’m not aware of anything else I want to watch.
Amazon is good at lots of things, but has never been known for good taste. I mean, look at that search-results page.

Yep.
Is it easy?
Yep, no complaints. There were only two please-don’t-go begs and neither was offensive.
No hard feelings.
Moved 28 Feb 2025, 9:00 pm
It is traditional in this season in this space to tickle your eyes with pictures of our early spring crocuses, while gently dunking a bit on our fellow Canadians who, away from the bottom left corner of the country, are still snowbound. So, here you go. Only not really.

Yes, those are this spring’s crocuses. But they’re not our crocuses, they’re someone else’s. We don’t have any. Because we moved.
It’s a blog isn’t it? I’ve written up childbirths and pet news and vacations and all that stuff. So why not this?
What happened was, we bought a house in 1996 and then, after 27 years and raising two kids and more cats, it was, well, not actually dingy, but definitely tired. The floors. The paint. The carpet. The cupboards. So we started down two paths at once, planning for a major renovation on one side, and shopping for a new place on the other. Eighteen months later we hadn’t found anything to buy, and the reno was all planned and permitted and we were looking for rentals to camp out in.
Then, 72 hours from when we were scheduled to sign the reno contract, this place came on the market across our back alley and three houses over. The price was OK and it didn’t need much work and, well, now we live there.
I’m sweeping a lot of drama under the rug. Banking drama and real-estate drama and insurance drama and floor-finishing drama and Internet-setup drama and A/V drama and storage drama. And of course moving drama. Month after month now, Lauren and I have ended more days than not exhausted.
But here we are. And we’re not entirely without our plants.

This is Jason of Cycle Driven Gardening, who lent his expertise to moving our favorite rosebushes, whose history goes back decades. Of course, there could be no guarantee that those old friends would survive the process.
Today was unseasonably warm and our new back patio is south-facing, so we soaked up the sun and cleared it of leftover moving rubble. Then we ventured into the back yard, much-ignored over winter.
Each and every rosebush has buds peeking out. So it looks, Dear Reader, like I’ll be able to inflict still more blossom pictures on you, come spring.
And we’ll be putting in crocuses, but those photos will have to wait twelve months or so.
See, even in 2025, there are stories with happy endings.
Safari Cleanup 26 Feb 2025, 9:00 pm
Like most Web-heads I spent years living in Chrome, but now feel less comfy there, because Google. I use many browsers but now my daily driver is Safari. I’m pretty happy with it but there’s ugly stuff hiding in its corners that needs to be cleaned up. This fragment’s mostly about those corners, but I include notes on the bigger browser picture and a couple of ProTips.
Many browsers?
If your life is complicated at all you need to use more than one. By way of illustration not recommendation, here’s what I do:
Safari is where I spend most of my time. As I write this I have 36 tabs, eight of them pinned. That the pinned number is eight is no accident; it’s because of the Tab Trick, which, if you don’t know about it, you really need to learn.
More on Safari later.
I use Chrome for business. It’s where I do banking and time-tracking and invoicing. (Much of this relies on Paymo, which is great. It takes seconds to track my time, and like ten minutes to do a super-professional monthly invoice.)
I use Firefox when I need to be @coop@cosocial.ca or go anywhere while certain that no Google accounts are logged in.
I use Chrome Canary for an organization I work with that has Chrome-dependent stuff that I don’t want to mix up with any of my personal business.
Safari, you say?
We inhabit the epoch of Late Capitalism. Which means there’s no reason for me to expect any company to exhibit ethical behavior. Because ethics is for amateurs.
So when I go looking for infrastructure that offers privacy protection, I look for a provider whose business model depends at least in part on it. That leaves Safari.
Yeah, I know about Cook kissing Trump’s ring, and detest companies who route billions of nominal profits internationally to dodge taxes, and am revolted at the App Store’s merciless rent-extraction from app developers who make Apple products better.
But still, I think their privacy story is pretty good, and it makes me happy when their marketing emphasizes it. Because if privacy is on their path to profit, I don’t have to misplace my faith in any large 21st-century corporation’s “ethical values”.
Also, Safari is technically competent. It’s fast enough, and (unlike even a very few years ago) compatible with wherever I go. The number of Chrome-only sites, thank goodness, seems to be declining rapidly.
So, a tip o’ the hat to the Safari team, they’re mostly giving me what I need. But there are irritants.
Tab fragility
This is my biggest gripe. Every so often, Safari just loses all my tabs when… well, I can’t spot a pattern. Sometimes it’s when I accidentally ⌘-Q it, sometimes it’s when I have two windows open for some reason and ⌘-W something. I think. Maybe. Sometimes they’re just gone.
Yes, I know about the “Reopen all windows from last session” operation. If it solved the problem I wouldn’t be writing this.
This is insanely annoying, and more than once over the past few years it has seriously damaged my progress on projects. Fortunately, I discovered that the Bookmarks menu has a one-click thing to create bookmarks for all my open tabs. So I hit that now and again, and it’s saved me from tab-loss damage a couple of times now.
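If you wanted to automate that insurance, something like the following would do it. This is strictly a sketch of my own devising, not anything Safari ships: a little Node/TypeScript script that shells out to AppleScript, grabs the URL of every tab in every window, and writes them to a timestamped file. The file location and naming are arbitrary; run it every few minutes from launchd or cron.

```typescript
// Tab backstop: ask Safari (via AppleScript) for the URL of every tab
// in every window, and dump the list to a timestamped file. macOS will
// ask once for permission to automate Safari.
import { execFileSync } from "node:child_process";
import { writeFileSync } from "node:fs";

const script =
  'tell application "Safari" to get URL of every tab of every window';
const urls = execFileSync("osascript", ["-e", script], { encoding: "utf8" });

const stamp = new Date().toISOString().replace(/:/g, "-");
writeFileSync(`${process.env.HOME ?? "."}/safari-tabs-${stamp}.txt`, urls);
console.log("tab snapshot saved");
```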
Someone out there might be thinking of suggesting that I not use browser tabs to store my current professional status. Please don’t, that would be rude.
Pin fragility
Even weirder, sometimes when I notice I’ve lost my main window and use the History menu to try to bring it back, I get a new window with all my tabs except for the pinned ones. Please, Safari.
Kill-pinned-tab theater
Safari won’t let me ⌘-W a pinned tab. This is good, correct where Chrome is wrong.
But when I try, does it quietly ignore me, or emit a gentle beep? No, it abruptly shifts to the first un-pinned tab. Which makes me think that I indeed killed the tab I was on; then I realize that no, I didn’t; then I panic because obviously I killed something, and go looking for it. I try Shift-⌘-T to bring back the most recently closed tab, realize I killed that an hour ago, and sit there blank-faced and worried.
New window huh?
When I’m in Discord or my Mail client or somewhere and I click on a link, sometimes it puts up a new Safari window. Huh? But usually not, I can’t spot the pattern. When I kill the new window, sometimes I lose all my tabs. Sigh.
Passive-aggressive refresh
When I have some tab that’s been around and unvisited for a while, sometimes there’s this tasteful decoration across the top.

I think this used to say “significant memory” rather than “significant energy”? But really, Safari, try to imagine how little I care about your memory/energy problems; just do what you need to and keep it to yourself. And if you can’t, at least spruce up the typography and copy-editing.
Better back button
[This is partly a MacOS rather than Safari issue.] On my Android, I can click on something in Discord that takes me to the GitHub app, another click and I’m in the browser, then click on something there and be in the YouTube app, and so on and so on. And then I can use “Back” to retrace my steps from app to app. This is just incredibly convenient.
Safari’s memory of “how did I get here” apparently lives in the same evanescent place my tab configuration does, and usually vanishes the instant I step outside the browser. Why shouldn’t the Back operation always at least try to do something useful?
Hey Apple, it’s your operating system and your browser, why not catch up with Android in an area where you’re clearly behind?
I humbly suggest
… that Safari do these things:
Save my current-tabs setup every few seconds on something more robust than the current fabric of spider webs and thistledown. Offer a “Restore Tabs” entry in the History menu that always works.
Don’t just exit on ⌘-Q. Chrome gets this right, offering an option where I have to hold that key combo down for a second or two.
When I try to kill a pinned tab, just ignore me or beep or put up a little message or something.
Never create a new Safari window unless I ask for it.
Kill the dumb “this webpage was refreshed…” banner.
Offer a “back” affordance that always works, even across applications.
Other browsers?
I already use Firefox every day, and I know about Opera, Vivaldi, Brave, Arc, etc., and I’ve tried them, and none ever stuck. Or the experience felt good until something emerged about the provider that was scammy or scary or just dumb. (And the recent rumblings out of Mozilla are not reassuring.)
While it’d sure be nice for there to be a world-class unencumbered open-source browser from an organization I respect, I’m not holding my breath. So it’s Safari for me for now.
And it seems to me that the things that bother me should be easy to fix. Please do.
Posting and Fascism 8 Feb 2025, 9:00 pm
Recently, Janus Rose’s You Can’t Post Your Way Out of Fascism crossed my radar on a hundred channels. It’s a smart piece that says smart things. But I ended up mostly disagreeing. I’m not saying you can post your way out of Fascism, but I do think it’s gonna be hard to build the opposition without a lot of posting. The what and especially the where matter. But the “posting is useless” stance is dangerously reductive.
Before I get into my gripes with Ms Rose’s piece, let me highlight the good part: Use your browser’s search-in-page to scroll forward to “defend migrants”. Here begins a really smart and inspirational narrative of things people are doing to deflect and defeat the enemy.
But it ends with the observation that all the useful progressive action “arose from existing networks of neighbors and community organizers”. Here’s where I part ways. Sure, local action is the most accessible and in most cases the only action, but right now Fascism is a global problem and these fighters here need to network with those there, for values of “here” and “there” that are not local.
Which is gonna involve a certain amount of posting: Analyses, critiques, calls to action, date-setting, message-sharpening; it’s just not sensible to rely on networks of neighbors to accomplish this.
What to post about?
Message sharpening feels like the top of the list. Last month I posted In The Minority, making the (obvious I think) point that current progressive messaging isn’t working very well; we keep losing elections! What needs to be changed? I don’t know and I don’t believe anybody who says they do.
It’s not as simple as “be more progressive” or conversely “be more centrist”. I personally think the way to arrive at the right messaging strategies and wording is going to involve a lot of trial balloons and yes, local efforts. Since I unironically think that progressive policies will produce results that a majority of people will like, I also believe that there absolutely must be a way of explaining why and how that will move the needle and lead to victories.
Where to post it?
Short answer: Everywhere, almost.
Granted that TV, whatever that means these days, is useless. Anyone doing mass broadcasting is terrified of controversy and can’t afford to be seen as a progressive nexus.
And Ms Rose is 100% right that Tiktok, Xitter, Facebook, Insta, or really any other centralized profit-driven corporate “social network” products are just not useful for progressives. These are all ad-supported, and (at this historical moment) under heavy pressure from governments controlled by our enemies, and in some cases, themselves owned and operated by Fascists.
That leaves decentralized social media (the Fediverse and (for the moment) Bluesky), Net-native operations like 404/Vice/Axios/Verge (even though most of them are struggling), and mainstream “quality publications”: The Atlantic, the Guardian, and your local progressive press (nearest to me here in Canada, The Tyee).
Don’t forget blogs. They can still move the needle.
And, I guess, as Ms Rose says, highly focused local conversations on Discord, WhatsApp, and Signal. (Are there other tech options for this kind of thing?)
Are you angry?
I am. And here I part paths with Ms Rose, who is vehement that we should see online anger as an anti-pattern. Me, I’m kinda with Joe Strummer, anger can be power. Rose writes “researchers have found that the viral outrage disseminated on social media in response to these ridiculous claims actually reduces the effectiveness of collective action”. I followed that link and found the evidence unconvincing.
Also, if there’s one thing I believe it’s that in the social-media context, being yourself, exposing the person behind the words, is central to getting anywhere. And if the enemy’s actions are filling me with anger, it would be disingenuous and ineffective to edit that out of my public conversation.
Posting is a progressive tool
Not gonna say more about principles or theory, just offer samples.
50501 has done it all with hashtags and micro-posts. Let’s see how it works.
Here’s Semafor arguing that the Democrats’ litigation-centric resistance is working pretty well.
Heidi Li Feldman, in Fear and loathing plus what blue states should be doing now argues on her blog for resistance at the state-government level, disengaging from and pushing back against toxic Musk/Trump projects.
Here’s Josh Marshall at Talking Points Memo calling for pure oppositionism and then arguing that Democrats should go to the mattresses on keeping the government open and raising the debt limit.
Here’s the let’s-both-sides-Fascism New York Times absolutely savaging the GOP campaign to keep Mayor Adams in place as a MAGA puppet.
Here’s yours truly posting about who progressives should talk to.
Here’s Mark Cuban on Bluesky saying hardass political podcasts are the only way to reach young men.
Here’s Elizabeth Kolbert in The New Yorker making very specific suggestions as to the tone and content of progressive messaging.
Here’s Cory Doctorow on many channels as usual, on how Canada should push back against the Trump tariffs.
There’s lots more strong stuff out there. Who’s right?
I don’t know. Not convinced anyone does.
Let’s keep posting about it till we get it right.
Photo Philosophizing 4 Feb 2025, 9:00 pm
What happened was, I went to Saskatchewan to keep my mother company, and got a little obsessed about photo composition and complexity. Which in these troubled times is a relief.
This got started just after take-off from Vancouver. As the plane climbed over the city I thought “That’s a nice angle” and pointed the Pixel through the plexiglass.

You might want to enlarge this one.
A couple of days into my Prairie visit I got around to processing the photos and thought that Vancouver aerial had come out well. No credit to the photographer here, got lucky on the opportunity, but holy crap modern mobile-device camera tech is getting good these days. I’ll take a little credit for the Lightrooming; this has had heavy dehazing and other prettifications applied.
A couple of days later I woke up and the thermometer said -36°C (in Fahrenheit that’s “too freaking cold”). The air was still and the hazy sunlight was weird. “There has to be a good photo in this somewhere, maybe to contrast that Vancouver shot” I thought. So I tucked the Fujifilm inside my parka (it claims to be only rated to -10°) and went for a walk. Mom politely declined my invitation to come along without, to her credit, getting that “Is he crazy?” expression on her face.
Her neighborhood isn’t that photogenic but there’s a Pitch-n-putt golf course a block away so I trudged through that. The snow made freaky squeaking sounds underfoot. At that temperature, it feels like you have to push the air aside with each step. Also, you realize that your lungs did not evolve to process that particular atmospheric condition.
Twenty minutes in I had seen nothing that made me want to pull out the camera, and was thinking it was about time to head home. So I stopped in a place where there was a bit of shape and shadow, and decided that if I had to force a photo opportunity into existence by pure will, so be it.

It ain’t a great city framed by coastal mountains. But it ain’t nothing either. I had to take my gloves off to shoot, and after just a couple of minutes of twisting around looking for angles, my fingers were screaming at me.
The two pictures are at opposite ends of the density-vs-minimalism spectrum, but they share, um, snow, so that’s something.
Anyhow, here’s the real reason I was there.

Jean Bray, who’ll be turning 95 this year.
I find photography to be a very useful distraction from what’s happening to the world.
In The Minority 22 Jan 2025, 9:00 pm
That’s us. I assume you’re among those horrified at the direction of politics and culture in recent years and especially recent weeks, in the world at large and especially in America. We are a minority. We shouldn’t try to deny it, we should be adults and figure out how to deal with it.
Denialists
I’m out of patience with people who put the blame on the pollsters or the media or Big Tech, or really any third party. People generally heard what Mr Trump was offering — portrayed pretty accurately I thought — and enough of them liked it to elect him. Those who didn’t are in a minority. Quit dodging and deal.
Clearly, we the minority have failed in explaining our views. Many years ago I wrote an essay called Two Laws of Explanation. One law says that if you’re explaining something and the person you’re explaining to doesn’t get it, that’s not their problem, it’s your problem. I still believe this, absolutely.
So let’s try to figure out better explanations.
But first, a side trip into economic perception and reality.
Economists
A strong faction of progressives and macroeconomists are baffled by people being disaffected when the economy, they say, is great. Paul Krugman beats this drum all the time. Unemployment and inflation are low! Everything’s peachy! Subtext: If the population disagrees, they are fools.
I call bullshit. The evidence of homelessness is in my face wherever I go, even if there are Lamborghinis cruising past the sidewalk tents. Food banks are growing. I give a chunk of money every year to Adopt a School, which puts free cafeterias in Vancouver schools where kids are coming to school hungry. Kingston, a mid-sized mid-Canadian city, just declared an emergency because one household in three is suffering from food insecurity.
Even among those who are making it, for many it’s just barely:
… half of Canadians (50%, +8) are now $200 or less away each month from not being able to pay their bills and debt payments. This is a result of significantly more Canadians saying they are already insolvent (35%, +9) compared to last quarter. Canadians who disproportionately report being $200 or less away from insolvency continue to be women (55%, +4) but the proportion of men at risk has increased to 44%, up 13 points from last quarter.
Source: MNP Consumer Debt Index. The numbers like “+8” give the change since last quarter.
(Yes, this data is Canadian, because I am. But I can’t imagine that America is statistically any better.)
Majorities
Minorities need to study majorities closely. So let me sort them, the ones who gave Trump the election I mean, into baskets:
Stone racists who hate immigrants, especially brown ones.
Culture warriors who hate gays and trans people and so on.
Class warriors; the conventional billionaire-led Republican faction who are rich, voting for anyone they think offers lower taxes and less regulation.
People who don’t pay much attention to the news but remember that gas was cheaper when Trump was in office.
Oh wait, I forgot one: People who heard Trump say what boiled down to “The people who are running things don’t care about you and are corrupt!” This worked pretty well because far too many don’t and are. A whole lot of the people who heard this are financially stressed (see above).
Who to talk to?
Frankly, I wouldn’t bother trying to reach out to either of the first two groups. Empirically, some people are garbage. You can argue that it’s not their fault; maybe they had a shitty upbringing or just fell into the wrong fellowships. Maybe. But you can be sure that that’s not your fault. The best practice is some combination of ignoring them and defending against their attacks, politics vs politics and force versus force.
I think talking to the 1% is worthwhile. The fascist leaders are rich, but not all of the rich are fascist. Some retain much of their humanity. And presumably some are smart enough to hear an argument that on this economic path lie tumbrils and guillotines.
That leaves the people who mostly ignore the news and the ones who have just had it with the deal they’re getting from late-Capitalist society. I’m pretty sure that’s who we should be talking to, mostly.
What to say?
I’m not going to claim I know. I hear lots of suggestions…
In the New Yorker, Elizabeth Kolbert’s Does One Emotion Rule All Our Ethical Judgments? makes two points. First, fear generally trumps all other emotions. So, try phrasing your arguments in terms of the threats that fascism poses directly to the listener, rather than abstract benefits to be enjoyed by everyone in a progressive world.
Second, she points out the awesome power of anecdote: MAGA made this terrible thing happen to this actual person, identified by name and neighborhood.
On Bluesky, Mark Cuban says we need offensive hardass progressive political podcasts, and offers a sort of horrifying example that might work.
On Bloomberg (paywalled) they say that the ruling class should be terrified of a K-shaped recovery; by inference, progressives should be using that as an attack vector.
Josh Marshall has been arguing for weeks that since the enemies won the election, they have the power and have to own the results. Progressives don’t need to sweat alternative policies, they just have to highlight the downsides of encroaching fascism (there are plenty) and say “What we are for is NOT THAT!” and just keep saying it. Here’s an example.
Maybe one of these lines of attack is right. I think they’re all worth trying. And I’m pretty sure I know one ingredient that’s going to have to be part of any successful line of attack…
Be blunt
Looking back at last year’s Presidential campaign, there’s a thing that strikes me as a huge example of What Not To Do. I’m talking about the Harris campaign’s slogan: “Opportunity Economy”. This is marketing-speak. If there’s one thing we should have learned, it’s that the population as a whole — rich, poor, Black, white, queer, straight, any old gender — has learned to see through this kind of happy talk.
Basically, in Modern Capitalism, whenever, and I mean whenever without exception, whenever someone offers you an “opportunity”, they’re trying to take advantage of you. This is appallingly tone-deaf, and apparently nobody inside that campaign asked themselves the simple question “Would I actually use this language in talking to someone I care about?” Because they wouldn’t.
Be blunt. Call theft theft. Call lies lies. Call violence violence. Call ignorance ignorance. Call stupidity stupidity.
Also, talk about money a lot. Because billionaires are unpopular.

From a good AP poll.
Don’t say anything you wouldn’t say straight-up in straight-up conversation with a real person. Don’t let any marketing or PR professionals edit the messaging. This is the kind of messaging that social media is made for.
Maybe I’m oversimplifying, but I don’t think so.
Protocol Churn 14 Jan 2025, 9:00 pm
Bluesky and the Fediverse are our best online hopes for humane human conversation. Things happened on 2025/01/13; I’ll hand the microphone to Anil Dash, whose post starts “This is a monumental day for the future of the social web.”

What happened? Follow Anil’s links: Mastodon and Bluesky (under the “Free Our Feeds” banner). Not in his sound-bite: Both groups are seeking donations, raising funds to meet those goals.


I’m sympathetic to both these efforts, but not equally. I’m also cynical, mostly about the numbers: They’ve each announced a fundraising target, and both the targets are substantial, and I’m not going to share either, because they’re just numbers pulled out of the air, written on whiteboards, designed to sound impressive.
What is true
These initiatives, just by existing, are evidence in letters of fire 500 miles high that people are noticing something important: Corporately-owned town squares are irreversibly discredited. They haven’t worked in the past, they don’t work now, and they’ll never work.
Something decentralized is the only way forward. Something not owned by anyone, defined by freely-available protocols. Something like email. Or like the Fediverse, which runs on the ActivityPub protocol. Or, maybe Bluesky, where by “Bluesky” I mean independent service providers federated via the AT Protocol, “ATProto” for short.
What is hard?
I’ll tell you what’s hard: Raising money for a good cause, when that good cause is full of abstractions about openness and the town square and so on. Which implies you’re not intending that the people providing the money will make money. So let’s wish both these efforts good luck. They’ll need it.
What matters
Previously in Why Not Bluesky I argued that, when thinking about the future of conversational media, what matters isn’t the technology, or even so much the culture, but the money: Who pays for the service? On that basis, I’m happy about both these initiatives.
But now I’m going to change course and talk about technology a bit. At the moment, the ATProto implementation that drives Bluesky is the only one in the world. If the company operating it failed in execution or ran out of money, the service would shut down.
So, in practice, Bluesky’s not really decentralized at all. Thus, I’m glad that the “Free Our Feeds” effort is going to focus on funding an alternative ATProto implementation. In particular, they’re talking about offering an alternative ATProto “Relay”.
Before I go on, you’re going to need a basic understanding of what ATProto is and how its parts work. Fortunately, as usual, Wikipedia has a terse, accurate introduction. If you haven’t looked into ATProto yet, please hop over there and remedy that. I’ll wait.
Now that you know the basics, you can understand why Free Our Feeds is focusing on the Relay. Because, assuming that Bluesky keeps growing, this is going to be a big, challenging piece of software to build, maintain, and operate, and the performance of the whole service depends on it.
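To get a concrete feel for why, here’s a minimal sketch that taps the Relay’s public firehose and just measures its volume. It assumes Node.js with the ws package and Bluesky’s public relay endpoint; the frames are binary DAG-CBOR, which you’d need a CBOR library to actually decode, so this only counts traffic.

```typescript
// Firehose meter: connect to the public Bluesky Relay and measure how
// much raw event traffic it fans out to subscribers.
import WebSocket from "ws";

const RELAY = "wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos";

let messages = 0;
let bytes = 0;

const sock = new WebSocket(RELAY);
sock.on("open", () => console.log(`connected to ${RELAY}`));
sock.on("message", (frame: Buffer) => {
  messages += 1;
  bytes += frame.length;
});
sock.on("error", (err: Error) => console.error("firehose error:", err.message));

// Report once a second. Every post, like, follow, and block on the
// whole network flows through here, around the clock.
setInterval(() => {
  console.log(`${messages} msgs/s, ${(bytes / 1024).toFixed(0)} KiB/s`);
  messages = 0;
  bytes = 0;
}, 1000);
```

Leave that running for a minute and the scale of the thing, and of the bill for operating it, becomes obvious.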
The Fediverse in general and Mastodon in particular don’t rely on a global firehose feed that knows everything that happens, like an eye in the sky. In fact, the ActivityPub protocol assumes a large number of full-stack peer implementations that chatter with each other, in stark contrast to ATProto’s menagerie of Repos and PDSes and Relays and App Views and Lexicons.
The ATProto approach has advantages; since the Relay knows everything, you can be confident of seeing everything relevant. The Fediverse makes no such promise, and it’s well-known that in certain circumstances you can miss replies to your posts. And perhaps more important, miss replies to others’ posts, which opens the door to invisible attackers.
And this makes me nervous. Because why would anyone make the large engineering and financial investments that’d be required to build and operate an ATProto Relay?
ActivityPub servers may have their flaws, but in practice they are pretty cheap to operate. And it’s easy to think of lots of reasons why lots of organizations might want to run them:
A university, to provide a conversational platform for its students…
… or its faculty.
A Developer Relations team, to talk to geeks.
Organized religion, for evangelism, scholarship, and ministry.
Marketing and PR teams, to get the message out.
Government departments that provide services to the public.
Or consider my own instance, CoSocial, the creation of Canadians who (a) are fans of the co-operative movement, (b) are concerned about Canadians’ data staying in Canada, and (c) want to explore modes of funding conversational media that aren’t advertising or Patreon.
Maybe, having built and run a Relay, the Free Our Feeds people will discover a rationale for why anyone else should do this.
So, anyhow…
I hope both efforts hit their fundraising targets. I hope both succeed at what they say they’re going to try.
But for my own conversation with the world, I’m sticking with the Fediverse.
Most of all, I’m happy that so many people, whatever they think of capitalism, have realized that it’s an unsuitable foundation for online human conversation. And most of all I hope that that number keeps growing.
AI Noise Reduction 10 Jan 2025, 9:00 pm
What happened was, there was a pretty moon in the sky, so I got out a tripod and the big honkin’ Tamron 150-500 and fired away. Here’s the shot I wanted to keep.

Sadly, the clouds had shifted
and Luna had lost her pretty bronze shading.
I thought the camera and lens did OK given that I was shooting from sea level through soggy Pacific-Northwest winter air. But when I zoomed in there was what looked like pretty heavy static. So I applied Lightroom to the problem, twice.


I’ll be surprised if many of you can see a significant difference. (Go ahead and enlarge.) But you would if it were printed on a big piece of paper and hung on a wall. So we’ll look at the zoomed-in version. But first…
Noise reduction, old-school
Lightroom has had a Luminance-noise reduction tool for years. Once you wake it up, you can further refine with “Detail” and “Contrast” sliders, whose effects are subtle at best. For the moon shot, I cranked the Luminance slider pretty much all the way over and turned up Detail quite a bit too.
Noise reduction, with AI
In recent Lightroom versions there’s a “Denoise…” button. Yes, with an ellipsis and a note that says “Reduce noise with AI.” It’s slow; took 30 seconds or more to get where it was going.
Anyhow, here are the close-up shots.



Original first, then noise-reduced
in Lightroom by hand, then with AI.
What do you think?
I have a not-terribly-strong preference for the by-hand version. I think both noise reductions add value to the photo. I wonder why the AI decided to enhance the very-slight violet cast? You can look at the rim of one crater or another and obsess about things that nobody just admiring the moon will ever see.
It’s probably worth noting that the static in the original version isn’t “Luminance noise”, which is what you get when you’re pushing your sensor too hard to capture an image in low light. When you take pictures of the moon you quickly learn that it’s not a low-light scenario at all; the moon is a light-colored object in direct sunlight. These pix were taken at f/7.1 with a 1/4000-second shutter. I think the static is just the Earth’s atmosphere getting in the way. So I’m probably abusing Lightroom’s Luminance slider. Oh well.
You could take this as an opportunity to sneer at AI, but that would be dumb. First, Lightroom’s AI-driven “select sky” and “select subject” tools work astonishingly well, most times. Second, Adobe’s been refining that noise-reduction code for decades and the AI isn’t even a year old yet.
We’ll see how it goes.
Bitcoin Lessons 4 Jan 2025, 9:00 pm
Here we are, it’s 2025 and Bitcoin is surging. Around $100K last time I looked. While its creation spews megatons of carbon into our atmosphere, investors line up to buy it in respectable ETFs, and long-term players like retirement pools and university endowments are looking to get in. Many of us are finding this extremely annoying. But I look at Bitcoin and I think what I’m seeing is Modern Capitalism itself, writ large and in brutally sharp focus.
[Disclosure: In 2017 I made a lot of money selling Bitcoins at around $20K, ones I’d bought in 2013. Then in 2021 I lost money shorting Bitcoin (but I’m still ahead on this regrettable game).]
What is a Bitcoin?
It is verifiable proof that a large amount of computing has been done. Let’s measure it in carbon: it’s complicated and I’ve seen a range of answers, but they all come in over 100 tonnes of CO2 per Bitcoin. That proof is all that a Bitcoin is.
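If you want to see the idea in miniature, here’s a toy sketch, purely my own illustration and nowhere near Bitcoin’s real parameters: grind nonces until a hash clears a difficulty bar, which is expensive; checking the winning nonce takes exactly one hash, which is nearly free. That asymmetry is the whole trick.

```typescript
// Toy proof-of-work: grind nonces until SHA-256(header + nonce) has at
// least `bits` leading zero bits. Finding a winner costs about 2^bits
// hashes on average; verifying one costs a single hash. (Illustrative
// only: real Bitcoin double-SHA-256es an 80-byte block header against
// a vastly harder target.)
import { createHash } from "node:crypto";

function sha256(data: string): Buffer {
  return createHash("sha256").update(data).digest();
}

function leadingZeroBits(hash: Buffer): number {
  let bits = 0;
  for (const byte of hash) {
    if (byte === 0) { bits += 8; continue; }
    return bits + Math.clz32(byte) - 24; // clz32 counts from bit 31
  }
  return bits;
}

function mine(header: string, bits: number): number {
  for (let nonce = 0; ; nonce++) {
    if (leadingZeroBits(sha256(header + nonce)) >= bits) return nonce;
  }
}

const header = "block 42 | prev: ... | transactions: ...";
const nonce = mine(header, 20); // ~a million hashes, a second or two
console.log(`nonce ${nonce} proves the work was done`);
console.log(`verified with one hash: ${leadingZeroBits(sha256(header + nonce))} zero bits`);
```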

Bitcoin is also a store of value. It doesn’t matter whether you think it should be, empirically it is, because lots of people are exchanging lots of money for Bitcoins on the assumption that they will store the value of that money. Is it a good store of value? Many of us think not, but who cares what we think?
Is Bitcoin useful?
I mean, sure, there are currency applications in gun-running, ransoms, narcotics, and sanctions-dodging. But nope, the blockchain is so expensive and slow that all most people can really do with Bitcoin is refresh their wallets hoping to see number go up.
Bitcoin and late capitalism
The success of Bitcoin teaches the following about capitalism in the 2020s:
Capitalism doesn’t care about aesthetics. Bitcoins in and of themselves in no way offer any pleasure to any human.
Capitalism doesn’t care about negative externalities generally, nor about the future of the planet in particular. As long as the number goes up, the CO2 tonnage is simply invisible. Even as LA burns.
Capitalism can be oblivious to the sunk-cost fallacy as long as people are making money right now.
Capitalism doesn’t care about utility; the fact that you can’t actually use Bitcoins for anything is apparently irrelevant.
Capitalism is oblivious to crime, too. The fact that most actual use of Bitcoins as a currency carries the stench of international crime doesn’t seem to bother anyone.
Capitalism doesn’t care about resiliency or sustainability. Bitcoins are fragile; very easy to lose forever by forgetting a password or failing to back up data just right. Also, on the evidence, easy to steal.
Capitalism can get along with obviously crazy behavior, for example what MicroStrategy is doing: Turning a third-rate software company into a bag of Bitcoins and having an equity valuation that is higher than the value of the bag; see Matt Levine (you have to scroll down a bit, look for “MicroStrategy”).
Capitalism says: “Only money is real. Those other considerations are for amateurs. Also, fuck the future.”
Do I hate capitalism?
Not entirely. As Paul Krugman points out, a market-based economy can in practice deliver reasonably good results for a reasonably high proportion of the population, as America’s did in the decades following 1945. Was that a one-time historical aberration? Maybe.
But as for what capitalism has become in the 21st century? Everything got financialized and Bitcoin isn’t the disease, it’s just a highly visible symptom. Other symptoms: The explosion of homelessness, the destruction of my children’s ecosystem, the gig economy, and the pervasiveness of wage theft. It’s really hard to find a single kind word to say.
Are Bitcoins dangerous?
Not existentially. I mean, smart people are worried, for example Rostin Behnam, chair of the Commodity Futures Trading Commission: “You still have a large swath of the digital asset space unregulated in the US regulatory system and it’s important — given the adoption we’ve seen by some traditional financial institutions, the huge demand for these products by both the retail and institutional investors — that we fill this gap.”
All that granted, the market cap of Bitcoin is around two trillion US dollars as I write this. Yes, that’s a lot of money. But most of them are held by market insiders, so even in the (plausible) case that it plunges close to zero, the damage to the mainstream economy shouldn’t be excessive.
It’s just immensely annoying.
Bitcoin and gold
One of the things Bitcoin teaches us is that there is too much money in the world, more than can be put to work in sensible investments. So the people who have it do things like buy Bitcoins.
Gold is also a store of value, also mostly just because people believe it is. But it has the virtues of beauty and of applications in jewellery and electronics. I dunno, I’m seriously thinking about buying some on the grounds that the people who have too much money are going to keep investing in it. In particular if Bitcoin implodes.
Having fun staying poor
I’ve been snarling at cryptocurrencies since 2018 or so. But, number go up. So I’ll close by linking to HODLers apology.
Question
Is this the best socio-economic system we as a species can build?
December 24th Lasagna 2 Jan 2025, 9:00 pm
We had thirteen people at my Mom’s house this last Christmas. One of our traditions is a heroic Lasagna for Christmas Eve, a specialty of a family member. This year we asked them for the recipe and they agreed, but would rather remain uncredited. It’s called “Very Rich Red Sauce and Four-Cheese Lasagna”.
I sort of enjoy cooking but would never have the courage to take on a project like this. I can testify that the results are wonderful.
Sauce
Key sauce ingredients are red wine & sun-dried tomatoes; lasagna is all about mozza, and the provolone adds a certain extra something. Over to the chef:
Ingredients
Sauce:
butter - half a lb block or less
olive oil to sauté
carrots - 2 to 4 depending on size, make sure they are fresh!
celery - 5-6 spears, say ⅔ of a bunch
onions (ideally yellow, but whatevs) - 3-6 depending on size.
(by volume, you are looking to get roughly 3:2:1 proportions when chopped of onions:celery:carrots)
a red sweet pepper
extra lean hamburger - 1 kg
3 whole cloves (the spice, not garlic)
salt - half a tablespoon? more? (tomato sauces tend to need a little more salt than other applications)
black pepper - if not using cayenne, a LOT, perhaps a tablespoon of freshly ground
cayenne - if using, to taste, I find strength is highly variable so hard to say how much, perhaps a couple of teaspoons?
Two big cans of diced tomatoes (NOT AYLMER! yuck, Unico is acceptable, Italian imported best)
3 small cans or one big can of tomato paste (or tubes, if you get them from an Italian grocery store, which are a rather superior product) It is hard to use too much tomato paste
(pro tip for cans of tomato paste: use the can opener to open BOTH the top and bottom of the can, then the paste slides out neatly and you don’t have to fool around trying to spatula the paste out of the can!)
about a half bottle of cheap red wine (the grocery store sells cheap cooking wine in demi bottles, though red can be hard to find, one demi bottle is more than enough)
most of a garlic bulb, say three quarters, once again FINELY chopped (no not pressed or processed!)
a LOT of dried oregano, say 3-4 tablespoons plus
a fair amount of dried basil, say 2 tablespoons plus
some dried thyme, say a half tablespoon plus
(herbs are tough to give measurements for, as they vary enormously in potency with brand, age & storage)
about a tablespoon of powdered beef stock
about 1½ to 2 cups of finely chopped sun-dried tomatoes NO SUBSTITUTIONS!!!!!
Lasagna
1 disposable foil pan
foil to cover
butter to grease pan
melted butter to cover top lightly
1¼ margarine tins of sauce above
1½ boxes of lasagna noodles (NOT the “instant” or pre-cooked kind)
salted water to boil noodles
1 big block of mozzarella cheese, coarsely grated
2 packages (12-16 slices) of provolone cheese
1 tin of ricotta cheese
grated parmesan cheese
little bit of freshly ground black pepper
Procedure: Sauce
For the mirepoix, the vegetables all need to be FINELY chopped up for sauce consistency. And I mean FINELY chopped; I have tried both powered and manual food processors, and they do not do a good job, either reducing things to mush or leaving big chunks. The idea here is to get everything about the same size (the garlic too, later); it gives the sauce some consistency.
This is tedious and takes a long time.
To give you an idea, a celery spear can be cut into 5-7 strips longwise with the grain, and then the strips chopped as finely as possible across the grain. A big carrot can yield say 6 long flat pieces that can be cut longways into strips, and then the strips cut finely against the grain to produce little cubes.
The mirepoix is important; we need the natural sugars to balance out the acidity of the tomatoes and wine. I cook the onions separately from the rest…
Caramelize the onions. Frankly, this is a pain in the ass. There are no shortcuts; the internet has all these handy tips like using a pressure cooker at first or a slow cooker or adding baking soda or water or whatnot, I’ve tried them, and just no. So a big non-stick frying pan, and sloooow saute with a little butter and salt, frequent stirring, a lot of patience, at LEAST 40 minutes. Maybe an hour. And constant attention, the bastards will burn on you at the drop of a hat. There is no hiding it, this is tricky and somewhat difficult. But worth it, caramelized onions are magic.
Meanwhile, in as big a heavy-bottomed pot as you can find, saute the celery and carrots (and peppers if you are using them) with the 3 dried cloves in a little butter. You want to shrink them down and bring out the carrots’ natural sugars, and soften the cloves & bring out their flavour; this takes a while, say 20 minutes or more?
Remove the onions and carrots and celery to a BIG bowl, clean out the big heavy-bottomed pot, and brown the hamburger, about 1kg. DO NOT USE “regular” as it is mostly fat; lately I have been using the “extra lean” because even the lean is quite fatty these days. A little olive oil in the pot, a fair amount of salt (half tablespoon?) & a lot of freshly ground black pepper (approaching a tablespoon?) on the meat, and lots of stirring.
This is where a lot of people screw up: essentially they “grey” the meat, they don’t BROWN it, i.e. achieve the Maillard reaction. As with the mirepoix, we are trying to bring out natural sugars. A lot of stirring/scraping, to break the hamburger up into as small pieces as possible, i.e. the same size as the veggies in the mirepoix.
While this is happening, add the rest of the ingredients to the mirepoix in the big bowl: tomatoes, tomato paste (more is better), wine.
Also the chopped garlic, oregano, basil, thyme, about a tablespoon of powdered beef stock (yes this is cheating, shhh).
A word about sun-dried tomatoes: they are absolutely wonderful things, but hard to find. Ideally, you want the dried kind, not the “in oil” kind, but you may have to settle for that. There are two different kinds of “in oil”: the ones in jars floating in oil like pickles in brine (they are prepped for salads, not sauces, and disintegrate once they are in a sauce), which you really do not want, and the ones in packages that have been oiled for preservation, which is usually what you have to settle for. The best ones are dry and quite hard.
Of these, the worst sun-dried tomatoes are the North American product; the big food companies noted there was a demand for them and started tossing field tomatoes into drying kilns. These are quite inferior, but useable if they’re all you can find. The best ones are Turkish, followed by Italian; you can usually find them in Italian delicatessens or grocery stores (even the Italians agree the Turkish ones are best).
[Tim says: I found them in Vancouver and the chef is right, the flavor is to die for.]
If you get the fully dried/no oil kind, you can simmer them a little in a little water to soften them before you try chopping them up; the water (or some of it anyway) should be added to the sauce…
Drain any fat from the browned meat, add the mix to the meat in the big pot. Stir it all up some more, get it well mixed. You can add a little wine and/or tomato juice if it seems too thick, but be aware that it will liquify a little during cooking. If you are going to use it as a pasta sauce, you want it more liquid, if for lasagna, you want it as thick as practical. In any case, it is very difficult to judge the final thickness at this point.
You can add a little (sun-dried tomato?) water, or tomato juice, or wine, if it needs it, but be cautious.
Put on medium to low heat and stir frequently until it comes to a high simmer/low boil (don’t let it burn!) then turn the heat down to minimum, and slow-cook for at least 4 hours, stirring every 20 minutes or so. Cooking it longer is a Good Thing, I often cook it for 7 or more hours.
LEAVE it out overnight! (this is important!)
Warm it up the next day, bringing it back to a high simmer/low boil and stirring a lot, serve it on pasta. Or make lasagna. Or freeze it. One margarine tin makes a meal with leftovers (each serving would be small, because the stuff is quite strong). One margarine tin (plus a little, ideally) makes for one lasagna. You should get about 3 or so margarine tins from the above. It freezes just fine.
Whew.
Lasagna
In theory, one box of lasagna noodles is just enough to make one pan of lasagna. However, there are almost always a lot of broken pieces in a box, and pieces break or stick together during cooking. I allow a box and a half per lasagna pan.
Ingredients:
enough lasagna noodles, cooked al dente in salted boiling water
1¼ margarine tins of sauce, gently heated up
Butter (better than margarine but you can use that if necessary)
1 big block of mozzarella (the regular supermarket kind is fine; I tried the expensive high-quality Italian stuff, and it was a LITTLE bit better, but not enough to justify the cost)
1 tin of ricotta cheese, or failing that, dry curd cottage cheese
12-16 slices of provolone cheese (usually 2 packages)
parmesan cheese (like with the mozza, I find the cheap Kraft product fully acceptable for this purpose)
freshly ground black pepper
foil lasagna pan
heavy duty foil
Grease the pan with butter. Do a thorough job; you don’t want huge amounts of butter, but you do not want any bit of foil that might come in contact with lasagna left ungreased.
Coarsely grate the mozzarella.
Lay an overlapping layer of lasagna noodles crossways in the pan, with the sides going up the sides of the pan. The pieces at either end need to overlap the piece on the bottom, but bend up to cover the short end of the pan as well. This is a pain in the ass to get right.
The sauce goes a long way; you do NOT want a thick layer. Spread a thin layer over the bottom of the pan, about a third of the sauce. Put a layer of mozza on top, about a third of it. Put half the provolone on top. Put down a layer of lasagna noodles long-ways, overlapping a bit.
Spread a thin layer of sauce, another third, covered with mozza, another third, and cover that with the ricotta (or dry-curd cottage). Put down another layer of lasagna long-ways.
The rest of the sauce, the rest of the mozza, and the rest of the provolone.
Cover with the cross-wise lasagna noodles, overlapping, trying to tuck the ends in to seal the package. This is tricky at the ends; I find putting a small cut in the lasagna noodle at the corner makes it easier to fold over neatly. (Usually I am tired by this point and less inclined to be finicky and careful, which is a pity.)
Pour a little melted butter over the top, enough to grease the surface. Sprinkle some freshly ground pepper and a fair amount of parmesan over the top. Cover with foil. Put in fridge (you can pre-make hours before use if you are having a party or something) or immediately put in pre-heated oven.
Preheat the oven to pretty hot, 375-400°F. Cook the lasagna for 45 minutes to an hour. Remove the foil 10-15 minutes before you take the pan out of the oven. Put the foil back on and let it stand for AT LEAST 15 minutes, better if you can hold off for half an hour (the hotter it is, the more likely it is to disintegrate when being served; you want the melted cheese to start to re-set a little).
Serve with garlic bread and salad and an impertinent red wine. One pan goes a fairly long way, serves 6-8ish? Depends how many hungry teens you have around…
QRS: Dot-matching Redux 29 Dec 2024, 9:00 pm
Recently I posted Matching “.” in UTF-8, in which I claimed that you could match the regular-expression “.” in a UTF-8 stream with either four or five states in a byte-driven finite automaton, depending how you define the problem. That statement was arguably wrong, and you might need three more states, for a total of eight. But you can make a case that really, only four should be needed, and another case calling for quite a few more. Because that phrase “depending how you define the problem” is doing a lot of work.
But first, thanks: Ed Davies, whose blog contributions (1, 2, 3) were getting insufficient attention from me until Daphne Preston-Kendal insisted I look more closely.
To summarize Ed’s argument: There are a bunch of byte combinations that look (and work) like regular UTF-8 but are explicitly ruled out by the Unicode spec, in particular Section 3.9.3 and its Table 3.7.
Moar States!
Ed posted a nice picture of a corrected 8-state automaton that will fail to match any of these forbidden sequences.

(Original SVG here.)
I looked closely at Ed’s proposal and it made sense, so I implemented it and (more important) wrote a bunch of unit tests exploring the code space, and it indeed seems to accept/reject everything correctly per Unicode 3.9.3.
So, argument over, and I should go forward with the 8-state Davies automaton, right? Why am I feeling nervous and grumpy, then?
Not all Unicode
I’ve already mentioned in this series that your protocols and data structures just gotta support Unicode in the 21st century, but you almost certainly don’t want to support all the Unicode characters, where by “character” I mean, well… if you care at all about this stuff, please go read Unicode Character Repertoire Subsets (“Unichars” for short), a draft inching its way through the IETF, with luck an RFC some day. And if you really care, dig into RFC 8264, PRECIS Framework: Preparation, Enforcement, and Comparison of Internationalized Strings in Application Protocols. Get a coffee first; PRECIS has multiple walls of text and isn’t simple at all. But it goes to tremendous lengths to address security issues and other best practices.
If you don’t have the strength, take my word for it that the following things are true:
We don’t talk much about abstract characters; instead we focus on the numeric “code points” that represent them.
JSON, for historical reasons, accepts all the code points.
There are several types of code points that don’t represent characters: “Surrogates”, “controls”, and “noncharacters”.
There are plenty of code points that are problematic because phishers and other attackers can use them to fool their victims: they look just like other characters.
There are characters that you shouldn’t use because they represent one or another of the temporary historical hacks used in the process of migrating from previous encoding schemes to Unicode.
The consequence of all this is that there are many subsets of Unicode that you might want to restrict users of your protocols or data structures to:
JSON characters: That is to say, all of them, including all the bad stuff.
Unichars “Scalars”: Everything except the surrogates.
Unichars “XML characters”: Lots but not all of the problematic code points excluded.
Unichars “Unicode Assignables”: “All code points that are currently assigned, excluding legacy control codes, or that might in future be assigned.”
PRECIS “IdentifierClass”: “Strings that can be used to refer to, include, or communicate protocol strings like usernames, filenames, data feed identifiers, and chatroom names.”
PRECIS “FreeformClass”: “Strings that can be used in a free-form way, e.g., as a password in an authentication exchange or a nickname in a chatroom.”
Some variation where you don’t accept any unassigned code points; risky, because that changes with every Unicode release.
(I acknowledge that I am unreasonably fond of numbered lists, which is probably an admission that I should try harder to compose smoothly-flowing linear arguments that don’t need numbers.)
You’ll notice that I didn’t provide links for any of those entries. That’s because you really shouldn’t pick one without reading the underlying document describing why it exists.
What should you accept?
I dunno. None of the above are crazy. I’m kind of fond of Unicode Assignables, which I co-invented. The only thing I’m sure of is that you should not go with JSON characters, because their rules make the following chthonic horror perfectly legal:
{"example": "\u0000\u0089\uDEAD\uD9BF\uDFFF"}
Unichars describes it:
The value of the “example” field contains the C0 control NUL, the C1 control "CHARACTER TABULATION WITH JUSTIFICATION", an unpaired surrogate, and the noncharacter U+7FFFF encoded per JSON rules as two escaped UTF-16 surrogate code points. It is unlikely to be useful as the value of a text field. That value cannot be serialized into well-formed UTF-8, but the behavior of libraries asked to parse the sample is unpredictable; some will silently parse this and generate an ill-formed UTF-8 string.
No, really.
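If you want to see the unpredictability first-hand, here’s a tiny Go program, my own illustration rather than anything from the Unichars draft; the precise behavior of encoding/json, as I understand its documentation, is that it accepts the value, quietly swapping the unpaired surrogate for U+FFFD while letting the noncharacter through:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Raw string literal, so the backslashes reach the JSON decoder intact.
	raw := `{"example": "\u0000\u0089\uDEAD\uD9BF\uDFFF"}`
	var m map[string]string
	if err := json.Unmarshal([]byte(raw), &m); err != nil {
		fmt.Println("rejected:", err)
		return
	}
	// No error: the NUL and C1 control survive, \uDEAD becomes U+FFFD,
	// and the surrogate pair decodes to the noncharacter U+7FFFF.
	// Expect something like "\x00\u0089\ufffd\U0007ffff".
	fmt.Printf("%+q\n", m["example"])
}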
What is Quamina for?
If you’re wondering what a “Quamina” is, you probably stumbled into this post through some link and, well, there’s a lot of history. Tl;dr: Quamina is a pattern-matching library in Go with an unusual (and fast) performance envelope; it can match thousands of Patterns to millions of JSON blobs per second. For much, much more, peruse the Quamina Diary series on this blog.
Anyhow, all this work in being correctly restrictive as to the shape of the incoming UTF-8 was making me uncomfortable. Quamina is about telling you what byte patterns are in your incoming data, not enforcing rules about what should be there.
And it dawned on me that it might be useful to ask Quamina to look at a few hundred thousand inputs per second and tell you which had ill-formed-data problems. Quamina’s dumb-but-fast byte-driven finite automaton would be happy to do that, and very efficiently too.
Conclusion
So, having literally lain awake at night fretting over this, here’s what I think I’m going to do:
I’ll implement a new Quamina pattern called ill-formed or some such that will match any field that has busted UTF-8 of the kind we’ve been talking about here. It’d rely on an automaton that is basically the inverse of Davies’ state machine.
By default, the meaning of “.” will be “matches the Davies automaton”; it’ll match well-formed UTF-8 covering all code points except surrogates.
I’ll figure out how to parameterize regular-expression matches so you can change the definition of “.” to match one or more of the smaller subsets like those in the list above from Unichars and PRECIS.
But who knows, maybe I’ll end up changing my mind again. I already have, multiple times. Granted that implementing regular expressions is hard, you’d think that matching “.” would be the easy part. Ha ha ha.
QRS: Matching “.” in UTF-8 18 Dec 2024, 9:00 pm
Back on December 13th, I posted a challenge on Mastodon: In a simple UTF-8 byte-driven finite automaton, how many states does it take to match the regular-expression construct “.”, i.e. “any character”?
Commenter Anthony Williams responded, getting it almost right I think, but I found his description a little hard to understand. In this piece I’m going to dig into what “.” actually means, and then how many states you need to match it.
[Update: Lots more on this subject, and some of the material below is arguably wrong, but just “arguably”; see Dot-matching Redux.]
The answer surprised me. Obviously this is of interest only to the faction of people who are interested in automaton wrangling, problematic characters, and the finer points of UTF-8. I expect close attention from all 17 of you!
The answer is…
Four. Or five, depending.
What’s a “Unicode character”?
They’re represented by “code points”, which are numbers in the range 0 … 17×2¹⁶, which is to say 1,114,112 possible values. It turns out you don’t actually want to match all of them; more on that later.
How many states?
Quamina is a “byte-level automaton”, which means it’s in a state, it reads a byte, and it looks up the value of that byte in a map, yielding either the next state or nil, which means no match. Repeat until you match or fail.
What bytes are we talking about here? We’re talking about UTF-8 bytes. If you don’t understand UTF-8 the rest of this is going to be difficult. I wrote a short explainer called Characters vs. Bytes twenty-one years ago. I now assume you understand UTF-8 and know that code points are encoded as sequences of 1 to 4 bytes.
Let’s count!
1. When you match a code point successfully you move to the part of the automaton that’s trying to match the next one; let’s call this condition MATCHED.
(From here on, all the numbers are hex, I’ll skip the leading 0x. And all the ranges are inclusive.)
2. In multi-byte characters, all the UTF-8 bytes but the first have bitmasks like 10XXXXXX, so there are six significant bits, thus 2⁶ or 64 distinct possible values, ranging from 80-BF.
3. There’s a Start state. It maps byte values 00-7F (as in ASCII) to MATCHED. That’s our first state, and we’ve handled all the one-byte code points.
4. In the Start state, the 32 byte values C0-DF, all of which begin 110, signaling a two-byte code point, are mapped to the Last state. In the Last state, the 64 values 80-BF are mapped to MATCHED. This takes care of all the two-byte code points and we’re up to two states.
5. In the Start state, the 16 byte values E0-EF, all of which begin 1110, signaling a three-byte code point, are mapped to the LastInter state. In that state, the 64 values 80-BF are mapped to the Last state. Now we’re up to three states and we’ve handled the three-byte code points.
6. In the Start state, the 8 byte values F0-F7, all of which begin 11110, signaling a four-byte code point, are mapped to the FirstInter state. In that state, the 64 values 80-BF are mapped to the LastInter state. Now we’ve handled all the code points with four states; there’s a little code sketch of the result below.
But wait!
I mentioned above about not wanting to match all the code points. “Wait,” you say, “why wouldn’t you want to be maximally inclusive?!” Once again, I’ll link to Unicode Character Repertoire Subsets, a document I co-wrote that is making its way through the IETF and may become an RFC some year. I’m not going to try to summarize a draft that bends over backwards to be short and clear; suffice it to say that there are good reasons for leaving out several different flavors of code point.
Probably the most pernicious code points are the “Surrogates”, U+D800-U+DFFF. If you want an explanation of what they are and why they’re bad, go read that Repertoire Subsets draft or just take my word for it. If you were to encode them per UTF-8 rules (which the UTF-8 spec says you’re not allowed to), the low and high bounds would be ED,A0,80 and ED,BF,BF.
Go’s UTF-8 implementation agrees that Surrogates Are Bad and The UTF-8 Spec Is Good and flatly refuses to convert those UTF-8 sequences into code points or vice versa. The resulting subset of code points even has a catchy name: Unicode Scalars. Case closed, right?
Wrong. JSON was designed before we’d thought through these problems; it explicitly says it’s OK to include any code point whatsoever, including surrogates. And Quamina is used for matching JSON data. So, standards fight!
I’m being a little unfair here. I’m sure that if Doug Crockford were inventing JSON now instead of in 2001, he’d exclude surrogates and probably some of the other problematic code points discussed in that Subsets doc.
Anyhow, Quamina will go with Go and exclude surrogates. Any RFC 8259 purists out there, feel free to accuse me of standards apostasy; I will grant your point but won’t change Quamina. Actually, not true; at some point I’ll probably add an option to be more restrictive and exclude more than just surrogates.
Which means that now we have to go back to the start of this essay and figure out how many states it takes to match “.”. Let’s see…
The Start state changes a bit. See #5 in the list above. Instead of mapping all of E0-EF to the LastInter state, it maps one byte in that range, ED, to a new state we’ll call, let’s see, how about ED.
In ED, just as in LastInter, 80-9F are mapped to Last. But A0-BF aren’t mapped to anything, because on that path lie the surrogates.
So, going with the Unicode Scalar path of virtue means I need five states, not four.
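In terms of the earlier sketch (again, my illustration, not Quamina’s code), the change is small: ED gets its own state whose A0-BF row is simply left empty.

// Surrogates U+D800-U+DFFF would encode as ED A0 80 through ED BF BF,
// so ED gets its own state that refuses the A0-BF continuations.
var afterED state

func excludeSurrogates() {
	span(&start, 0xED, 0xED, &afterED) // ED no longer feeds lastInter
	span(&afterED, 0x80, 0x9F, &last)  // ED 80-9F is fine (U+D000-U+D7FF)
	// afterED leaves A0-BF nil: that path would accept surrogates.
}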
1994 Hong Kong Adventure 14 Dec 2024, 9:00 pm
This story is about Hong Kong and mountains and ferries and food and beer. What happened was, there’s a thirty-year-old picture I wanted to share and it brought the story to mind. I was sure I’d written it up but can’t find it here on the blog, hard as I try, so here we go. Happy ending promised!
The picture I wanted to share is from a business trip to Hong Kong in 1994 and hey, it turns out I have lots more pictures from that trip.

Kai Tak airport in 1994.

Rats for sale in a sketchy corner of Kowloon.

Kai Tak, what an airport that was. If you could open the plane’s windows, you’d have been able to grab laundry hung to dry on people’s balconies. My fast-talking HK friend said “Safest airport in the world! You know pilot paying 100% attention!”
My trip extended over a weekend and I wanted to get out of town so I read up on interesting walks; on paper of course, the Web only just barely existed. Lantau Island was recommended; there was a good hike up over the local mountains that reached a Trappist monastery with a well-reviewed milk bar. So I took the ferry from Central to Mui Wo.
The view from the ferry was great!


I revisited Mui Wo in 2019, visiting the Big Buddha.
It was easy to find the hiking trail up the mountains, well-maintained but steep. I stopped to take pictures maybe more often than strictly necessary because it was in the high Celsius thirties with 99% humidity and my North-Euro metabolism wasn’t dealing well. Visions of Trappist ice-cream danced in my head as the sweat dripped off my chin.


Having said that, I’m glad I stopped because the pictures please my eye. These are all Ektachrome; can’t remember whether I took them with the Pentax SLR or the little Nikon pocket camera.
Lantau has the new international airport on it now; I wonder if those green hills are still unspoiled.
Eventually, sweat-soaked and my body screaming for mercy, I reached a small mountaintop. I could see the monastery, but it was a couple of little mountains over, so I arrived in poor condition. Sadly for me, it was a Sunday so, commerce deferring to the sacred, the joint was closed. Poor Tim. Especially since I hadn’t brought anything to eat.
Fortunately I didn’t have to hike all the way back to Mui Wo; almost straight downhill there was a “Monastery Pier” with an occasional ferry to the nearby islet of Peng Chau and a connection back to Central. Looks like there still is.
It was midafternoon, the heat approaching its peak, and walking downhill has its own stresses and strains. By the time I got to the pier I was a sad excuse for a human. Here’s a picture of the ferry.

As you can see, it was pretty crowded, but unsurprisingly, nobody wanted to share the bench the big sweaty panting hungry-looking pink person was on.
Peng Chau itself was visually charming but the ferry connection was tight so I couldn’t explore.

Peng Chau waterfront in 1994. This is the picture I wanted to share that led me to (re?)tell this story. My conversational-media home on the Net is on Mastodon, but I like to keep an eye on Bluesky, so I post random pictures there under the tag #blueskyabove; this will be one.
Trudging onto the medium-sized ferry back home, I encountered a food-service option: A counter with one guy and a big steaming pot of soup behind it. My spirit lifted. The guy’s outfit might have once been white; he was unshaven and sweaty but then so was I, and my clothes were nothing to write home about either.
I stopped and pointed at the bowls. He filled one, then wanted to upsell me on a leathery, greasy-looking fried egg to go on top but there are limits. Disappointed, he stepped aside to put it back, revealing a small glass-fronted fridge, icicles hanging off it, full of big cans of San Miguel beer. My spirit lifted again.
The soup was salty and delicious. I’m not sure I’ve enjoyed a beer more in the thirty years since that day. The ferry was fast enough to generate a refreshing breeze all the way, and there were charming boats to photograph.

The tourist who walked off the boat at Central was a dry, well-hydrated, and cheerful specimen of humanity. The next day, my fast-talking HK friend said “You climb over Lantau in that weather yesterday? White guys so weird!” “It was great!” I told him, smirking obnoxiously.
I’ve been back to HK a few times over the years, but it’s not really a happy place any more.
QRS: Parsing Regexps 12 Dec 2024, 9:00 pm
Parsing regular expression syntax is hard. I’ve written a lot of parsers and, for this one, adopted a couple of techniques that I hadn’t used before. I learned things that might be of general interest.
I was initially surprised that the problem was harder than it looked, but quickly realized that I shouldn’t have been, because my brain has also always had a hard time parsing them.
They’re definitely a write-only syntax and just because I’m gleefully writing this series doesn’t mean I’m recommending you reach for REs as a tool very often.
But I bet most people in my profession find themselves using them pretty regularly, in the common case where they’re the quickest path from A to B. And I know for sure that, on a certain number of occasions, they’ve ended up regretting that choice.
Anyhow, I console myself with the thought that the I-Regexp RE dialect has less syntax and fewer footguns than PCREs generally. Plus, I’ve been having fun implementing them. So knock yourselves out. (Not legal nor investing advice.)
Sample-driven development
When I started thinking seriously about the parser, the very first thought in my mind was “How in the freaking hell am I going to test this?” I couldn’t stand the thought of writing a single line of code without having a plausible answer. Then it occurred to me that since I-Regexp subsets XSD Regular Expressions, and since XSD (which I mostly dislike) is widely deployed and used, maybe someone already wrote a test suite? So I stuck my head into an XML community space (still pretty vigorous after all these years) and asked “Anyone have an XSD regexp test suite?”
And it worked! (I love this profession sometimes.)
Michael Kay pointed me at a few things, notably including this GitHub repo. The _regex-syntax-test-set.xml there, too big to display, contains just under a thousand regular expressions, some valid, some not, many equipped with strings that should and should not match.
The process by which I turned it into a *_test.go file, Dear Reader, was not pretty. I will not share the ugliness, which involved awk and emacs, plus hideous and largely untested one-off Go code.
But I gotta say, if you have to write a parser for anything, having 992 sample cases makes the job a whole lot less scary.
Lesson: When you’re writing code to process a data format that’s new to you, invest time, before you start, in looking for samples.
Recursive descent
The I-Regexp specification contains a complete ABNF grammar for the syntax. For writing parsers I tend to like finite-automaton based approaches, but for a freakishly complicated mini-language like this, I bowed in the direction of Olympus for that grammar and started recursively descending.
I think at some point I understood the theory of Regular Languages and LL(1) and so on, but not any more. Having said that, the recursive-descent technique is conceptually simple, so I plowed ahead. And it worked eventually. But there seemed to be a lot of sloppy corners where I had to peek one byte ahead or backtrack one. Maybe if I understood LL(1) better it’d have been smoother.
The “character-class” syntax [abc0-9] is particularly awful. The possible leading - or ^ makes it worse, and it has the usual \-prefixed stanzas. Once again, I salute the original specifiers who managed to express this in a usable grammar.
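To show the flavor of those sloppy corners, here’s a toy Go fragment of my own, much simpler than Quamina’s real parser (no escapes, no \p{…} stanzas, no error positions), that parses the inside of a character class with exactly that peek-one/backtrack-one shuffle:

package main

import "fmt"

type parser struct {
	input []rune
	pos   int
}

func (p *parser) peek() (rune, bool) {
	if p.pos >= len(p.input) {
		return 0, false
	}
	return p.input[p.pos], true
}

type ccRange struct{ lo, hi rune }

// parseClass parses what follows “[”, up to and including “]”.
func (p *parser) parseClass() (negated bool, ranges []ccRange, err error) {
	if r, ok := p.peek(); ok && r == '^' {
		negated = true
		p.pos++
	}
	for {
		r, ok := p.peek()
		if !ok {
			return false, nil, fmt.Errorf("unterminated character class")
		}
		p.pos++
		if r == ']' {
			return negated, ranges, nil
		}
		lo, hi := r, r
		// Peek one ahead: “-” might mean a range like a-z …
		if n, ok := p.peek(); ok && n == '-' {
			p.pos++
			if n2, ok2 := p.peek(); ok2 && n2 != ']' {
				p.pos++
				hi = n2
			} else {
				p.pos-- // … or a literal trailing “-”; back up one
			}
		}
		ranges = append(ranges, ccRange{lo, hi})
	}
}

func main() {
	p := &parser{input: []rune("^a-z0-9-]")}
	neg, rr, _ := p.parseClass()
	fmt.Println(neg, rr) // true [{97 122} {48 57} {45 45}]
}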
I was tempted, but ended up making no use of Go’s regexp library to help me parse REs.
I have to say that I don’t like the code I ended up with as much as any of my previous (automaton-based) parsers, nor as much as the rest of the Quamina code. But it seems to work OK. Speaking of that…
Test coverage
When I eventually got the code to do the right thing for each of Michael Kay’s 992 test cases, I was feeling a warm glow. So then I ran the test-coverage tool, and got a disappointingly-low number. I’m not a 100%-coverage militant generally, but I am for ultra-low-level stuff like this with a big blast radius.
And here’s the lesson: Code coverage tools are your friend. I went in and looked at the green and red bars; they revealed that while my tests had passed, I was really wrong in my assumptions about the paths they would make the code take. Substantial refactoring ensued.
Second, and somewhat disappointingly, there were a lot of coverage misses on Go’s notorious little if err != nil stanza. Which revealed that my sample set didn’t cover the RE-syntax space quite as thoroughly as I’d hoped. In particular, there was really no coverage of the code’s reaction to malformed UTF-8.
The reason I’m writing this is to emphasize that, even if you’re in a shop where the use of code-coverage tools is (regrettably) not required, you should use one anyhow, on basically every important piece of code. I have absolutely never failed to get surprises, and consequently improved code, by doing this.
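If you haven’t done this in Go, the whole exercise is a couple of commands; the green and red bars mentioned above come from the HTML view:

go test -coverprofile=cover.out ./...
go tool cover -html=cover.out   # per-line green/red in your browser
go tool cover -func=cover.out   # per-function percentages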
Sharing the work
I don’t know if I-Regexp is going to be getting any uptake, but it wouldn’t surprise me if it did; it’s a nice tractable subset that hits a lot of use cases. Anyhow, now I have reasonably robust and well-tested I-Regexp parsing code. I’d like to share it, but there’s a problem.
To do that, I’d have to put it in a separate repo; nobody would want to import all of Quamina, which is a fair-sized library, just to parse REs. But then that other repo would become a Quamina dependency. And one of my favorite things about Quamina is that it has 0 dependencies!
It’s not obvious what the right thing to do is; any ideas?
QRS: Quamina Regexp Series 12 Dec 2024, 9:00 pm
Implementing regular expressions is hard. Hard in interesting ways that make me want to share the lessons. Thus this series, QRS for short.
People who keep an eye on my Quamina open-source pattern-matching project will have noticed a recent absence of updates and conversation. That’s because, pursuant to Issue #66, I’m working on adding fairly-full regular-expression matching.
Personal note
This is turning out to be hard. Either it’s the hardest nut I’ve had to crack in many years, or maybe my advanced age is dulling my skills. It’s going to be some time before I can do the first incremental release. Whatever; the learning experiences coming out of this work still feel fresh and fascinating and give me the urge to share.
I hope I can retain that urge as long as I’m still mentally present. In fact, I hope I can retain the ability to work on software. For various reasons, I’ve been under a lot of personal stress in recent years. Stealing time from my adult responsibilities to wrestle with executable abstractions has been a pillar of my sanity.
Anyhow, normally when I code I blog about it, but so far I haven’t because the work is unfinished. Then I realized that it’s too big, and addresses too many distinct problems, to be just one piece, thus this mini-series.
[Readers who don’t know what regular expressions are should probably close this tab now. Don’t feel guilty; nobody who’s not a full-time computing professional should have to know, much less care.]
[Notation: I’m gonna say “Regexp” or maybe just “RE” in this series.]
I’ll use this post as a table of contents:
1. Parsing REs.
2. (Future) Representing parsed REs.
3. (Future) Implementing UTF-8-based automata for REs.
At the moment, I think the hardest part of the work is #1, Parsing. (Maybe that’s because I haven’t really dug very deep into other parts yet.) I’d be amazed if the final series had only three parts.
Now, introductory material.
Which regular expressions?
They come in lots of flavors. The one I’m implementing is I-Regexp, RFC 9485. The observant reader will notice that I co-edited that RFC, and I cheerfully confess to bias.
I-Regexp is basically a subset of XSD Regular Expressions (chosen as the base because they have a nice clean immutable spec), which are a lot like good ol’ PCRE (Perl-compatible regular expressions). Except for:
They are designed assuming they will only ever be used to match against a string and return a “yes” or “no” answer.
They are anchored, which is to say that (unlike PCREs) they’re all assumed to start with ^ and end with $.
They omit popular single-character escapes like \w and \S because those are sketchy in the Unicode context.
They don’t have capture groups or back-references.
They don’t support character-class subtraction, e.g. [a-z-[m-p]].
I’m going to claim that they hit a very useful 80/20 point if what you’re interested in is asking “Did the field value match?”, which of course is all Quamina is interested in doing.
Project strategy
I’m totally not going to try to do all this as a big bang. I’ve got a reliable RE parser now (it was hard!) that recognizes ten different RE features, ranging from “.” to everything in (a+b*c?).|d[ef]{3,9}\?\P{Lu}
Unbackslashing again
Go check out Unbackslash. Tl;dr: It’s terribly painful to deal with the standard RE escaping character \ in Go software that is processing JSON, because both Go and JSON use \ for escaping and your unit tests eventually fill up with \\ and \\\\\\\\ and become brutally hard to read. So after publishing that blog piece and running polls on Mastodon, ~ is the new \. So that RE above becomes (a+b*c?).|d[ef]{3,9}~?~P{Lu}.
You’re allowed to not like it. But I request that you hold off pushing the big button that sends me to Hell until you’ve tried writing a few unit tests for REs that you want Quamina to process.
Back to strategy: The first feature is going to be that lovely little dot operator. And thus…
Quiz
Just for fun, here’s an intellectual challenge. Suppose you’re building a byte-at-a-time state machine to process UTF-8 text. How many states, roughly, would it take to match “.”, i.e. any single Unicode code point? By “match” I mean reject any byte sequence that doesn’t, and when it does match, consume just enough bytes to leave you positioned after the “.” and ready to start matching whatever’s next.
I think I’ve found the correct answer. It surprised me, so I’m still sanity-checking, but I think I’m right. I am convinced the problem isn’t as simple as it looks.
Remembering Bonnie 2 Dec 2024, 9:00 pm
The murderer I emailed with is still in prison. And the software that got him pissed off at me still runs, so I ran it. Now here I am to pass on the history and then go all geeky. Here’s the tell: If you don’t know what a “filesystem” is (that’s perfectly OK, few reasonable adults need to) you might want to stay for the murderer story then step off the train.
Filesystems are one of the pieces of software that computers need to run, where “computers” includes your phone and laptop and each of the millions of servers that drive the Internet and populate the cloud. There are many flavors of filesystem and people who care about them care a lot.
One of the differences between filesystems is how fast they are. This matters because how fast the apps you use run depends (partly) on how fast the underlying filesystems are.
Writing filesystem software is very, very difficult and people who have done this earn immense respect from their peers. So, a lot of people try. One of the people who succeeded was named Hans Reiser and for a while his “ReiserFS” filesystem was heavily used on many of those “Linux” servers out there on the Internet that do things for you.
Reiser at one point worked in Russia and used a “mail-order bride” operation to look for a spouse. He ended up marrying Nina Sharanova, one of the bride-brokerage translators, and bringing her back to the US with him. They had two kids, got divorced, and then, on September 3, 2006, he strangled her and buried her in a hidden location.
To make a long story short, he eventually pleaded guilty to a reduced charge in exchange for revealing the grave location, and remains in prison. I haven’t provided any links because it’s a sad, tawdry story, but if you want to know the details the Internet has them.
I had interacted with Reiser a few times as a consequence of having written a piece of filesystem-related software called “Bonnie” (more on Bonnie below). I can’t say he was obviously murderous but I found him unpleasant to deal with.
As you might imagine, people generally did not want to keep using the murderer’s filesystem software, but it takes a long time to make this kind of infrastructure change and just last month, ReiserFS was removed as a Linux option. Which led to this Mastodon exchange:

Here’s a link to that post and the conversation that followed.
(People who don’t care about filesystems can stop reading now.)
Now, numbers
After that conversation, on a whim I tracked down the Bonnie source and ran it on my current laptop, a 2023 M2 MacBook Pro with 32G of RAM and 3T of disk. I think the numbers are interesting in and of themselves even before I start discoursing about benchmarking and filesystems and disks and so on.
                 -------Sequential Output--------- ---Sequential Input--- --Random--
                 -Per Char- --Block--- -Rewrite--  -Per Char- --Block---  --Seeks---
Machine       GB M/sec %CPU M/sec %CPU M/sec %CPU  M/sec %CPU M/sec %CPU   /sec %CPU
MBP-M2-32G    64  56.9 99.3  3719 89.0  2772 83.4   59.7 99.7  6132 88.0  33613 33.6
Bonnie says:
This puppy can write 3.7 GB/second to a file, and read it back at 6.1 GB/sec.
It can update a file in place at 2.8 GB/sec.
It can seek around randomly in a 64GB file at 33K seeks/second.
Single-threaded sequential file I/O is almost but not quite CPU-limited.
I wonder: Are those good numbers for a personal computer in 2024? I genuinely have no idea.
Bonnie
I will shorten the story, because it’s long. In 1988 I was an employee of the University of Waterloo, working on the New Oxford English Dictionary Project. The computers we were using typically had 16MB or so of memory (so the computer I’m typing this on has two thousand times as much) and the full text of the OED occupied 572MB. Thus, we cared really a lot about I/O performance. Since the project was shopping for disks and computers I bashed out Bonnie in a couple of afternoons.
I revised it lots over the years, and Russell Coker made an excellent fork called Bonnie++ that (for a while at least) was more popular than Bonnie. Then I made my own major revision at some point called Bonnie-64.
In 1996, Linus Torvalds recommended Bonnie, calling it a “reasonable disk performance benchmark”.
That’s all I’m going to say here. If for some weird reason you want to know more, Bonnie’s quaint Nineties-flavor home and description pages are still there, plus this blog has documented Bonnie’s twisty history quite thoroughly. And explored, I claim, filesystem-performance issues in a useful way.
I will address a couple of questions here, though.
Do filesystems matter?
Many performance-sensitive applications go to a lot of work to avoid reading and/or writing filesystem data on their critical path. There are lots of ways to accomplish this, the most common being to stuff everything into memory using Redis or Memcached or, well, those two dominate the market, near as I can tell. Another approach is to have the data in a file but access it with mmap rather than filesystem logic. Finally, since real disk hardware reads and writes data in fixed-size blocks, you could arrange for your code to talk straight to the disk, entirely bypassing filesystems. I’ve never seen this done myself, but have heard tales of major commercial databases doing so.
I wonder if anyone has ever done a serious survey study of how the most popular high-performance data repositories, including Relational, NoSQL, object stores, and messaging systems, actually persist the bytes on disk when they have to?
I have an opinion, based on intuition and on having seen the non-public insides of several huge high-performance systems at previous employers: yes, filesystem performance still matters. I’ve no way to prove or even publicly support that intuition. But my bet is that benchmarks like Bonnie are still relevant.
I bet a few of the kind of people who read this blog have intuitions of their own, which, however, might be entirely different from mine. I’d like to hear them.
What’s a “disk”?
There is a wide range of hardware and software constructs which are accessed through filesystem semantics. They have wildly different performance envelopes. If I didn’t have so many other hobbies and projects, it’d be fun to run Bonnie on a sample of EC2 instance types with files on various EBS and EFS and so on configurations.
For the vast majority of CPU/storage operations in the cloud, there’s at least one network hop involved. Out there in the real world, there is still really a lot of NFS in production. None of these things are much like that little SSD slab in my laptop. Hmmm.
Today’s benchmarks
I researched whether some great-great-grandchild of Bonnie was the new hotness in filesystem benchmarking, adopting the methodology of typing “filesystem benchmark” into Web search. The results were disappointing; it doesn’t seem like this is a thing people do a lot. Which would suggest that people don’t care about filesystem performance that much? Which I don’t believe. Puzzling.
Whenever there was a list of benchmarks you might look at, Bonnie and Bonnie++ were on that list. Looks to me like IOZone gets the most ink and is thus probably the “industry-leading” benchmark. But I didn’t really turn up any examples of quality research comparing benchmarks in terms of how useful the results are.
Those Bonnie numbers
The biggest problem in benchmarking filesystem I/O is that Linux tries really hard to avoid doing it, aggressively using any spare memory as a filesystem cache. This is why serving static Web traffic out of the filesystem often remains a good idea in 2024; your server will take care of caching the most heavily fetched data in RAM without you having to do cache management, which everyone knows is hard.
I have read of various cache-busting strategies and have never really been convinced that they’ll outsmart this aspect of Linux, which was written by people who are way smarter and know way more than I think I do. So Bonnie has always used a brute-force approach: Work on a test file which is much bigger than main memory, so Linux has to do at least some real I/O. Ideally you’d like it to be several times the memory size.
But this has a nasty downside. The computer I’m typing on has 32GB of memory, so I ran Bonnie with a 64G filesize (128G would have been better) and it took 35 minutes to finish. I really don’t see any way around this annoyance but I guess it’s not a fatal problem.
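To make the time cost concrete, here’s the shape of the thing as a deliberately dumb Go sketch of my own, nothing like Bonnie’s real C code: sequential 1 MiB writes until the file is far bigger than RAM. At a few GB/sec, 64 GB of writing alone eats a big chunk of those 35 minutes, and the benchmark then has to read and rewrite the same file several ways.

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const chunk = 1 << 20  // 1 MiB per write
	const total = 64 << 30 // 64 GiB; pick something much bigger than RAM
	buf := make([]byte, chunk)
	f, err := os.Create("scratch.dat")
	if err != nil {
		panic(err)
	}
	defer os.Remove("scratch.dat")
	defer f.Close()
	start := time.Now()
	for written := int64(0); written < total; written += chunk {
		if _, err := f.Write(buf); err != nil {
			panic(err)
		}
	}
	f.Sync() // force the dirty page-cache pages out before stopping the clock
	fmt.Printf("wrote %d GiB sequentially in %v\n", total>>30, time.Since(start))
}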
Oh, and those numbers: Some of them look remarkably big to me. But I’m an old guy with memories of how we had to move the bits back and forth individually back in the day, with electrically-grounded tweezers.
Reiser again
I can’t remember when this was, but some important organization was doing an evaluation of filesystems for inclusion in a big contract or standard or something, and so they benchmarked a bunch, including ReiserFS. Bonnie was one of the benchmarks.
Bonnie investigates the rate at which programs can seek around in a file by forking off three child processes that do a bunch of random seeks, read blocks, and occasionally dirty them and write them back. You can see how this could be stressful for filesystem code, and indeed, it occasionally made ReiserFS misbehave, which was noted by the organization doing the benchmarking.
Pretty soon I had email from Reiser claiming that what Bonnie was doing was actually violating the contract specified for the filesystem API in terms of concurrent write access. Maybe he was right? I can’t remember how the conversation went, but he annoyed me and in the end I don’t think I changed any code.
Here’s Bonnie
At one time Bonnie was on SourceForge, then Google Code, but I decided that if I were going to invest effort in writing this blog, it should be on GitHub too, so here it is. I even filed a couple of bugs against it.
I make no apologies for the rustic style of the code; it was another millennium and I was just a kid.
I cheerfully admit that I felt a warm glow checking in code originally authored 36 years ago.
Why Not Bluesky 15 Nov 2024, 9:00 pm
As a dangerous and evil man drives people away from Xitter, many stories are talking up Bluesky as the destination for the diaspora. This piece explains why I kind of like Bluesky but, for the moment, have no intention of moving my online social life away from the Fediverse.
(By “Fediverse” I mean the social network built around the ActivityPub protocol, which for most people means Mastodon.)
If we’re gonna judge social-network alternatives, here are three criteria that, for me, really matter: Technology, culture, and money.
I don’t think that’s controversial. But this is: Those are in increasing order of importance. At this point in time, I don’t think the technology matters at all, and money matters more than all the others put together. Here’s why.
Technology
Mastodon and the rest of the fediverse rely on ActivityPub implementations. Bluesky relies on the AT Protocol, of which so far there’s only one serious implementation.
Both of these protocols are good enough. We know this is true because both are actually working at scale, providing good and reliable experiences to large numbers of people. It’s reasonable to worry what happens when you get to billions of users and also about which is more expensive to operate. But speaking as someone who spent decades in software and saw it from the inside at Google and AWS, I say: meh. My profession knows how to make this shit work and work at scale. Neither alternative is going to fail, or to trounce its competition, because of technology.
I could write many paragraphs about the competing nice features and problems of the competing platforms, and many people have. But it doesn’t matter that much because they’re both OK.
Culture
At the moment, Bluesky seems, generally speaking, to be more fun. The Fediverse is kind of lefty and geeky and queer. The unfortunate Mastodon culture of two years ago (“Ewww, you want us to have better tools and be more popular? Go away!”) seems to have mostly faded out. But the Fediverse doesn’t have much in the way of celebrities shitposting about the meme-du-jour. In fact it’s definitely celebrity-lite.
I enjoy both cultural flavors, but find Fedi quite a lot more conversational. There are others who find the opposite.
More important, I don’t think either culture is set in stone, or has lost the potential to grow in multiple new, interesting directions.
Money
Here’s the thing. Whatever you think of capitalism, the evidence is overwhelming: Social networks with a single proprietor have trouble with long-term survival, and those that do survive have trouble with user-experience quality: see Enshittification.
The evidence is also perfectly clear that it doesn’t have to be this way. The original social network, email, is now into its sixth decade of vigorous life. It ain’t perfect but it is essential, and not in any serious danger.
The single crucial difference between email and all those other networks — maybe the only significant difference — is that nobody owns or controls it. If you have a deployment that can speak the languages of IMAP and SMTP and the many anti-spam tools, you are de facto part of the global email social network.
The definitive essay on this question is Mike Masnick’s Protocols, Not Platforms: A Technological Approach to Free Speech. (Mike is now on Bluesky’s Board of Directors.)
What does success look like?
My bet for the future (and I think it’s the only one with a chance) is a global protocol-based conversation with many thousands of individual service providers, many of which aren’t profit-oriented businesses. One of them could be your local Buddhist temple, and another could be Facebook. The possibilities are endless: Universities, government departments, political parties, advocacy organizations, sports teams, and, yes, tech companies.
It’s obvious to me that the Fediverse has the potential to become just this. Because it’s most of the way there already.
Could Bluesky? Well, maybe. As far as I can tell, the underlying AT Protocol is non-proprietary and free for anyone to build on. Which means that it’s not impossible. But at the moment, the service and the app are developed and operated by “Bluesky Social, PBC”. In practice, if that company fails, the app and the network go away. Here’s a bit of Bluesky dialogue:

In practice, “Bsky corp” is not in immediate danger of hard times. Their team is much larger than Mastodon’s and on October 24th they announced they’d received $15M in funding, which should buy them at least a year.
But that isn’t entirely good news. The firm that led the investment is seriously sketchy, with strong MAGA and cryptocurrency connections.
The real problem, in my mind, isn’t in the nature of this particular Venture-Capital operation. Because the whole raison-d’etre of Venture Capital is to make money for the “Limited Partners” who provide the capital. Since VC investments are high-risk, most are expected to fail, and the ones that succeed have to exhibit exceptional revenue growth and profitability. Which is a direct path to the problems of survival and product quality that I mentioned above.
Having said that, the investment announcement is full of soothing words about focus on serving the user and denials that they’ll go down the corrupt and broken crypto road. I would like to believe that, but it’s really difficult.
To be clear, I’m a fan of the Bluesky leadership and engineering team. With the VC money as fuel, I expect their next 12 months or so to be golden, with lots of groovy features and mind-blowing growth. But that’s not what I’ll be watching.
I’ll be looking for ecosystem growth in directions that enable survival independent of the company. In the way that email is independent of any technology provider or network operator.
Just like Mastodon and the Fediverse already are.
Yes, in comparison to Bluesky, Mastodon has a smaller development team and slower growth and fewer celebrities and less buzz. It’s supported by Patreon donations and volunteer labor. And in the case of my own registered co-operative instance CoSocial.ca, membership dues of $50/year.
Think of the Fediverse not as just one organism, but a population of mammals, scurrying around the ankles of the bigger and richer alternatives. And when those alternatives enshittify or fall to earth, the Fediversians will still be there. That’s why it’s where my social-media energy is still going.
Read more
On the Fediverse you can follow a hashtag and I’m subscribed to #Bluesky, which means a whole lot of smart, passionate writing on the subject has been coming across my radar. If you’re interested enough to have read to the bottom of this piece, I bet one or more of these will reward an investment of your time:
Maybe Bluesky has “won”, by Gavin Anderegg, goes deep on the trade-offs around Bluesky’s AT Protocol and shares my concern about money.
Blue Sky Mine, by Rob Horning, ignores technology and wonders about the future of text-centric social media and is optimistic about Bluesky.
Does Bluesky have the juice?, by Max Read, is kind of cynical but says smart things about the wave of people currently landing on Bluesky.
The Great Migration to Bluesky Gives Me Hope for the Future of the Internet, by Jason Koebler over at 404 Media, is super-optimistic: “Bluesky feels more vibrant and more filled with real humans than any other social media network on the internet has felt in a very long time.” He also wonders out loud if Threads’ flirtation with Mastodon has been damaging. Hmm.
And finally there’s Cory Doctorow, probably the leading thinker about the existential conflict between capitalism and life online, with Bluesky and enshittification. This is the one to read if you’re thinking that I’m overthinking and over-worrying about a product that is actually pretty nice and currently doing pretty well. If you don’t know what a “Ulysses Pact” is, you should read up and learn about it. Strong stuff.
TV In 2024 11 Nov 2024, 9:00 pm
It’s probably part of your life too. What happened was, we moved to a new place and it had a room set up for a huge TV, so I was left with no choice but to get one. Which got me thinking about TV in general and naturally it spilled over here into the blog. There is good and bad news.
Buying a TV
It’s hard. You visit Wirecutter and Consumer Reports and the model numbers they recommend often don’t quite match the listings at the bigbox Web site. Plus too many choices. Plus it’s deceiving because all the name-brand TVs these days have fabulous pictures.
Having bought a TV doesn’t make me an expert, but for what it’s worth we got a 77" Samsung S90C, which is Samsung’s second-best offering from 2023. Both WC and CR liked it last year and specifically called out that it works well in a bright room; ours is south-facing. And hey, it has quantum dots, so it must be good.
Actually I do have advice. There seems to be a pattern where last year’s TV is often a good buy, if you can find one. And you know where you can often find last year’s good product at a good price? Costco, that’s where, and that’s where we went. Glad we did, because when after a week’s use it frapped out, Costco answered the phone pretty quick and sent over a replacement.
But anyhow, the upside is that you’ll probably like whatever you get. TVs are just really good these days.
Standards!
We were moving the gear around and I snapped a picture of all the video and audio pieces stacked up together. From behind.

The enlarged version of this photo has
embedded
Content Credentials to establish provenance.
Speaking as a guy who’s done standards: This picture is evidence of excellence. All those connections, and the signals they exchange, are 100% interoperable. All the signals are “line-level” (RCA or XLR wires and connectors), or video streams (HDMI), or to speakers (you should care about impedance and capacitance, but 12ga copper is good enough).
Put another way: No two of those boxes come from the same vendor, but when I wired them all up it Just Worked. First time, no compatibility issues. The software profession can only dream of this level of excellence.
Because of this, you can buy all the necessary connectors and cabling super-cheap from your favorite online vendor, but if you’re in Vancouver, go to Lee’s Electronics, where they have everything and intelligent humans will help you find it.
Fixing the picture
Out of the box the default settings yield eye-stabbing brilliance and contrast, entirely unrealistic, suitable (I guess?) for the bigbox shelves.
“So, adjust the picture,” you say. Cue bitter laughter. There are lots of dials to twist; too many really. And how do you know when you’ve got it right? Of course there are YouTubers with advice, but they don’t agree with each other and are short on quantitative data or color science, it’s mostly “This is how I do it and it looks great so you should too.”
What I want is the equivalent of the Datacolor “Spyder” color calibrators. And I wonder why such a thing couldn’t be a mobile app — phonecams are very high-quality these days and have nice low-level APIs. You’d plug your phone into the screen with a USB-C-to-HDMI adapter, it’d put patterns on the screen, and you’d point your phone at them, and it’d tell you how close you are to neutral.
It turns out there are objective standards and methods for measuring color performance; for example, see “Delta-E” in the Tom’s Hardware S90C review. But they don’t help consumers, even reasonably technical ones like me, fine-tune their own sets.
Anyhow, most modern TVs have a “Filmmaker” or “Cinema” setting which is said to be the truest-to-life. So I pick that and then fine-tune it, subjectively. Measurements, who needs ’em?
Privacy
Our TVs spy on us. I have repeatedly read that hardware prices are low because the profit is in mining and selling your watching habits. I’ve not read anything that has actual hard facts about who’s buying and how much they’re paying, but it feels so obvious that it’d be stupid not to believe it.
It’s hopeless to try and keep it from happening. If you’re watching a show on Netflix or a ballgame on MLB.tv, or anything on anything really, they’re gonna sell that fact, they’re up-front about it.
What really frosts my socks, though, is ACR, Automatic Content Recognition, where the TV sends hashed screenshots to home base so it (along with Netflix and MLB and so on) can sell your consumption habits to whoever.
Anyhow, here’s what we do. First, prevent the TV from connecting to the Internet, then play all the streaming services through a little Roku box. (With the exception of one sports streamer that only does Chromecast.) Roku lets you turn off ACR, and Chromecast promises not to. Imperfect but better than nothing.
What to watch?
That’s the problem, of course. It seems likely we’re in the declining tail-end of the Golden Age of TV. The streamers, having ripped the guts out of the cable-TV industry, are turning into Cable, the Next Generation. The price increases are relentless. I haven’t so far seen a general quality decline but I’ve read stories about cost-cutting all over the industry. Even, for example, at Apple, which is currently a quality offering.
And, of course, subscription fatigue. There are lots of shows that everyone agrees are excellent that we’ll never see because I just absolutely will not open my wallet on a monthly basis to yet another outgoing funnel. I keep thinking I should be able to pay to watch individual shows that I want to watch, no matter who’s streaming them. Seems that’s crazy talk.
We only watch episodic TV one evening or so a week, and only a couple episodes at a time, so we’re in no danger of running out of input. I imagine being (unlike us) a real video connoisseur must be an (expensive) pain in the ass these days.
But you already knew all that.
Can it work?
Well, yeah. A big honkin’ modern TV being driven by a quality 4K signal can be pretty great. We’re currently watching 3 Body Problem, which has occasional fabulous visuals and also good sound design. I’m pretty sure the data show that 4K, by any reasonable metric, offers enough resolution and color-space coverage for any screen that can fit in a reasonable home. (Sidebar: Why 8K failed.)
The best picture these days is the big-money streamer shows. But not only. On many evenings, I watch YouTube concert videos before I go to bed. The supply is effectively infinite. Some of them are shakycam productions filmed from row 54 (to be fair, some of those capture remarkably good sound). But others are quality 4K productions and I have to say that can be a pretty dazzling sensory experience.
Here are a couple of captures from a well-shot show on PJ Harvey’s current tour, which by the way is musically fabulous. No, I didn’t get it off the TV, I got it from a 4K monitor on my Mac, but I think it gives the feel.


Content Credentials here too.
AUDIO visual
In our previous place we had a big living room with the deranged-audiophile stereo in it, and the TV was in a little side-room we called the Video Cave. The new place has a media room with the big TV wall, so I integrated the systems and now the sound accompanying the picture goes through high-end amplification and (for the two front channels) speakers.
It makes more difference than I would have thought. If you want to improve your home-theatre experience, given that TV performance is plateauing, better speakers might be a good option.
Live sports
I like live-sports TV. I acknowledge many readers will find this distasteful, for reasons I can’t really disagree with; not least is maybe encouraging brain-damaging behavior in young men. I can’t help it; decades ago I was a pretty good basketball player at university and a few of those games remain among my most intense memories.
I mean, I like drama. In particular I like unscripted drama, where neither you nor your TV hosts know how the show’s going to end. Which is to say, live sports.
I’ve griped about this before, but once again: The state of the sports-broadcasting art is shamefully behind what the hardware can do.
The quality is all over the map, but football, both fútbol and gridiron, is generally awful. I’ve read that the problem is the expense of the in-stadium broadcast infrastructure, routing all the fat 4K streams to the TV truck and turning them into a coherent broadcast.
In practice, what we’re getting is not even as good as a quality 1080p signal. It’s worth noting that Apple TV’s MLS and MLB broadcasts are noticeably better (the sound is a lot better).
It can only improve, right?
Control how?
When I sit on the comfy video-facing furniture, I need to control Samsung, Parasound, Marantz, Roku, and Chromecast devices. We use a Logitech Harmony; I have another in reserve, both bought off eBay. Logitech has dropped the product but someone is still updating the database; or at least was through 2023, because it knows how to talk to that Samsung TV.
They work well enough that I don’t have to be there for other family members to watch a show. Once they wear out, I have no freaking idea what Plan B is. That’s OK, maybe I’ll die first. And because (as noted above) the audio side has superb interoperability, I can count on upgrading speakers and amplifiers and so on for as long as I last.
Golden age?
Yes, I guess, for TV hardware. As for the shows, who knows? Not my problem; I’m old enough and watch little enough that there’s plenty out there to fill the remainder of my life.