ongoing fragmented essay by Tim Bray

Blueskid Demo 19 Sep 2021, 3:00 pm

I’ve been in the conversation around Twitter’s @bluesky project, and last December I posted @bluesky Identity, a proposal for mapping between social-media identities based on public keys and signatures. Recently @bluesky announced the Satellite contest, whose goal is to take identities on three or more online properties and “Link them in a way that anyone can verify you are the author/owner of all.” Which is more or less what @bluesky Identity is all about. So I pulled together a working demo called “Blueskid” (GitHub). This is a quick walk-through of Blueskid.

Blueskid?

Well, I needed a project name and “@bluesky identity” has six syllables. “Blueskid” is euphonious and only has two, and blueskid.net was available. And I get this mental image of a kid playing blues.

Contest?

Blueskid is not an entry in the Satellite contest. First of all, I’m sort of a @bluesky insider and the idea is to bring in ideas from the community. Second, Satellite is looking for something with a focus on decentralization and radical innovation. Blueskid uses public-key and ledger technologies that, in the software-technology context, are as old as dirt.

I offer Blueskid as a low bar that the Satellite offerings really ought to raise.

Let’s watch it at work.

The assertions

Here’s a recent Twitter post:

twitter.com@timbray claims the Bluesky Identity 55555

https://twitter.com/timbray/status/1438391330879590400

This tweet contains an “assertion” representing a claim by the Provider Identity (PID) twitter.com@timbray to the Bluesky Identity (BID) 0000000000055555. Let’s unpack that.

  1. We know that the claim is from twitter.com@timbray because it’s posted to the @timbray Twitter account.

  2. Since we’re going to be posting assertions to social media, they need syntax to delimit them from any other text that might be in the post, and to separate the fields. Since I’m a fun-loving guy, the beginning and end of an assertion are marked by "🥁" (U+1F941 DRUM) and the fields are separated by "🎸" (U+1F3B8 GUITAR). I’m not going to claim this is optimal but it worked OK in the demo.

  3. This assertion has two fields. The first is “C”, saying that this is a Claim assertion. The second gives the BID that’s being claimed.

  4. In Blueskid, Bluesky Identities are represented by unsigned 64-bit integers. There’s a lot to be said about how they might be structured and minted, but for the purposes of the demo we just need something that can be represented in a string, in this case upper-case hex characters. (There’s a little code sketch of the assertion syntax just after this list.)
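
If you prefer reading code to prose, here’s roughly what assembling a Claim assertion looks like in Go, which is what Blueskid is written in. It’s a little illustration of the syntax described above, not Blueskid’s actual code:

package main

import (
	"fmt"
	"strings"
)

const (
	delim = "🥁" // marks the start and end of an assertion
	sep   = "🎸" // separates the fields
)

// claimAssertion builds a Claim ("C") assertion for a BID given as an
// upper-case hex string.
func claimAssertion(bid string) string {
	return delim + strings.Join([]string{"C", bid}, sep) + delim
}

func main() {
	fmt.Println(claimAssertion("55555")) // 🥁C🎸55555🥁
}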

Another Twitter post:

twitter.com@timbray grants BID 55555 to tumblr.com@t-runic

https://twitter.com/timbray/status/1439270526598332424

It was followed shortly by a Tumblr post:

tumblr.com@t-runic accepts BID 55555 from twitter.com@timbray

https://t-runic.tumblr.com/post/662682878549852160/mpe

These two assertions are designed to work together. Each has six fields:

  1. In the first, “G” says this assertion Grants a BID, “A” that it Accepts one.

  2. The second field gives the BID being granted.

  3. The third is a nonce (in base64). This is currently 64 bits, which is kind of short by nonce standards, and I need to find someone with real cryptographic/security skills for advice. I’m having trouble thinking through attack models. At the moment I think 64 bits is plenty. But it’d be unsurprising if I were wrong.

  4. The fourth is an ed25519 public key, once again in base64. The encoding uses the horrible old ASN.1/PEM/PKIX machinery, which would be silly if the whole world used Go, but many other popular libraries in popular languages assume this is the one and only way to interchange public keys. Thus it’s the right thing to do in an Internet Protocol.

  5. The fifth is the signature (base64 again) produced by applying the corresponding private key to the nonce.

  6. The sixth is the counterparty: in the Grant assertion the receiving PID, and in the Accept assertion the granting PID. (There’s a code sketch of generating such a pair just after this list.)
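
Here’s a minimal Go sketch of what generating a pair like that involves. It follows the field list above but glosses over details (for instance, exactly what bytes get signed; here it’s the raw nonce), so don’t treat it as the real Blueskid code:

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/x509"
	"encoding/base64"
	"fmt"
	"strings"
)

const (
	delim = "🥁"
	sep   = "🎸"
)

func assertion(fields ...string) string {
	return delim + strings.Join(fields, sep) + delim
}

// signedNonce makes a fresh 64-bit nonce and signs it, returning both in base64.
func signedNonce(priv ed25519.PrivateKey) (nonce64, sig64 string) {
	nonce := make([]byte, 8)
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}
	sig := ed25519.Sign(priv, nonce) // signing the raw nonce bytes is the illustrative choice here
	return base64.StdEncoding.EncodeToString(nonce), base64.StdEncoding.EncodeToString(sig)
}

func main() {
	// Ephemeral keypair; the private key can be thrown away once the two signatures exist.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	// The public key travels PKIX/ASN.1-encoded, then base64, as described above.
	pubDER, err := x509.MarshalPKIXPublicKey(pub)
	if err != nil {
		panic(err)
	}
	pub64 := base64.StdEncoding.EncodeToString(pubDER)

	bid := "55555"
	grantNonce, grantSig := signedNonce(priv)
	acceptNonce, acceptSig := signedNonce(priv)
	fmt.Println(assertion("G", bid, grantNonce, pub64, grantSig, "tumblr.com@t-runic"))
	fmt.Println(assertion("A", bid, acceptNonce, pub64, acceptSig, "twitter.com@timbray"))
}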

There is another pair of posts to grant that same 55555 Bluesky Identity from twitter.com@timbray to mastodon.cloud@timbray, here and here. Also, note that:

  1. The nonces are different and so are the signatures.

  2. The keys are identical.

  3. The BIDs are identical.

  4. The Grant post is known to be published by twitter.com@timbray and names tumblr.com@t-runic, while the Accept post is known to be published by tumblr.com@t-runic and names twitter.com@timbray.

You might ask where the private key corresponding to the public key is stored. The answer is “nowhere”; it existed in the Blueskid server just long enough to produce the two signatures, then it was overwritten. It doesn’t exist any more.

It is my belief that these social-media posts, taken together, establish that at some point the owner of twitter.com@timbray and of tumblr.com@t-runic had access to the same private key, and published commitments respectively to grant and accept the “55555” BID. (The same exercise was performed for mastodon.cloud@timbray.)

Blueskid also knows about an “Unclaim” assertion, not illustrated here, whose effect is what you’d expect.

Q.E.D.

My claim is that these assertions in social-media posts constitute a verifiable proof that the same entity controlled both PIDs and expressed an intent to share a BID.

But, even if you agree with me, the social-media posts by themselves aren’t very useful. If you wanted to know what BIDs exist and which PIDs they’re shared between, you’d need to read all the posts from everyone in the universe and look for Blueskid assertions. So…

The Ledger

As the @bluesky Identity post outlines, you need a Ledger to make this work. For each of the BID Claim, Grant, and Unclaim assertions, there needs to be a Ledger entry noting what has been done and pointing to the social-media posts that prove it. The Ledger needs to be publicly readable and reliably immutable. Clearly, by processing the Ledger, you can build a little database of what BIDs exist and which PIDs are mapped to them.

The Ledger could be constructed with blockchain technology. That’s not how I’d build it if you asked me to, but it’d work OK. The write rate is probably low enough to survive blockchain’s pathetic update performance.

There’s an important issue the Ledger needs to address, based on the fact that social-media posts are not immutable; even Tweets can be deleted. Simply because I publish an assertion pair like the one illustrated above doesn’t mean that everyone can be confident that they can go and verify it years hence.

Therefore, the Ledger implementation needs to make a believable claim that it won’t append anything to the ledger that it hasn’t verified by fetching the social-media posts and validating all the constraints listed above. I’m not sure what the best way to achieve this is, but I have one idea: There could be multiple implementations, each reading new assertions as they are added to the ledger, repeating the verification, and rejecting assertions that can’t be validated. Hey, this is starting to sound like a blockchain.

What Blueskid does

First of all, it helps generate assertions. For example, you can ask it to make that Twitter/Tumblr BID grant assertion pair for you. Send this to the /grant-assertions endpoint:

{
  "BID": "55555",
  "Granter": "twitter.com@timbray",
  "Accepter": "tumblr.com@t-runic"
}

Then it’ll come back with:

{
 "GrantAssertion": "🥁G🎸55555🎸0E8hIvntXJc=🎸MCowBQYDK2VwAyEAzHaDqVdyhle4wVY/leNyZrtBKJKMVqVWZFfVJ3S8v60=🎸U1vPM6cQ+c5rdTKwa/2l/wjr2Z0Zu33t/qE59+94Ni/0TjEjDqcAZ/LfaFcJ6i+v+uLNhiN5LeiekFYByPWVAQ==🎸tumblr.com@t-runic🥁",
 "AcceptAssertion": "🥁A🎸55555🎸V5+dt5Me0kw=🎸MCowBQYDK2VwAyEAzHaDqVdyhle4wVY/leNyZrtBKJKMVqVWZFfVJ3S8v60=🎸1fLK2wHtRA24c/wu9uiiB42WOFur3TI9VozsYKImY0Vq3HgwDJU6xCX8GiW8rM+KIjOUTem6sQt5vTybK+dbCw==🎸twitter.com@timbray🥁"
}

Then, once the assertions are posted to social media, you can update the ledger. Here’s an example of a post that records the Twitter/Tumblr assertion pair, which you’d post to the /grant-bid endpoint:

{
  "GrantPost": "https://twitter.com/timbray/status/1439270526598332424",
  "AcceptPost": "https://t-runic.tumblr.com/post/662682878549852160/mpe"
}   
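
For the record, this is all just JSON over HTTP; here’s a minimal Go client sketch for that last call (the localhost:8080 address is only a placeholder for wherever the demo server happens to be running):

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	base := "http://localhost:8080" // placeholder; point it at your Blueskid instance
	body := []byte(`{
	  "GrantPost": "https://twitter.com/timbray/status/1439270526598332424",
	  "AcceptPost": "https://t-runic.tumblr.com/post/662682878549852160/mpe"
	}`)
	resp, err := http.Post(base+"/grant-bid", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}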

After you’d posted that, sending a GET to the /ledger endpoint would yield this:

{
 "Records": [
  {
   "RecType": 0,
   "BID": "0000000000055555",
   "PIDs": [
    "twitter.com@timbray"
   ],
   "PostURLs": [
    "https://twitter.com/timbray/status/1438391330879590400"
   ],
   "Key": ""
  },
  {
   "RecType": 1,
   "BID": "0000000000055555",
   "PIDs": [
    "twitter.com@timbray",
    "tumblr.com@t-runic"
   ],
   "PostURLs": [
    "https://twitter.com/timbray/status/1439270526598332424",
    "https://t-runic.tumblr.com/post/662682878549852160/mpe"
   ],
   "Key": "MCowBQYDK2VwAyEA7bk+ldmZEGCSAdR1RQek1nQ4Lp058QpcaNGnDlfsS/A="
  },
  {
   "RecType": 1,
   "BID": "0000000000055555",
   "PIDs": [
    "twitter.com@timbray",
    "mastodon.cloud@timbray"
   ],
   "PostURLs": [
    "https://twitter.com/timbray/status/1439271202699157511",
    "https://mastodon.cloud/@timbray/106953703798946745"
   ],
   "Key": "MCowBQYDK2VwAyEA+BBQLd4ks4vdJZzX1F4j51gtyfJpLBFpeqkT7t5GJ/0="
  }
 ]
}

I’m not going to spelunk through the JSON, but it says that the BID was claimed then granted twice, and links to the social-media posts which contain the assertions that prove it.
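
If you were writing a client, replaying the ledger into a BID-to-PIDs table is only a few lines of Go. The struct below just mirrors the JSON field names above; it’s a sketch of a consumer of the /ledger output, not of Blueskid’s internals:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// ledger mirrors the JSON returned by the /ledger endpoint.
type ledger struct {
	Records []struct {
		RecType  int
		BID      string
		PIDs     []string
		PostURLs []string
		Key      string
	}
}

func main() {
	// Read the /ledger JSON from stdin and build a BID-to-set-of-PIDs map.
	var l ledger
	if err := json.NewDecoder(os.Stdin).Decode(&l); err != nil {
		panic(err)
	}
	pidsForBID := map[string]map[string]bool{}
	for _, r := range l.Records {
		if pidsForBID[r.BID] == nil {
			pidsForBID[r.BID] = map[string]bool{}
		}
		for _, pid := range r.PIDs {
			pidsForBID[r.BID][pid] = true
		}
	}
	for bid, pids := range pidsForBID {
		fmt.Printf("%s is shared by %d PIDs\n", bid, len(pids))
	}
}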

The code tries to be careful. It blocks BIDs from being claimed more than once and, when it processes assertion pairs, takes care that all the conditions listed above apply: The BIDs and keys are the same, the nonces and signatures are different, the signatures validate, and so on. Also it enforces the @bluesky Identity constraint that no public key can be used in more than one BID-grant transaction.
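
The cryptographic half of that checking is pleasantly small with Go’s standard library; here’s a sketch, again assuming the signature is over the raw nonce bytes:

package main

import (
	"crypto/ed25519"
	"crypto/x509"
	"encoding/base64"
	"errors"
	"fmt"
	"strings"
)

const (
	delim = "🥁"
	sep   = "🎸"
)

// fields strips the drum delimiters and splits an assertion on the guitar separator.
func fields(assertion string) []string {
	return strings.Split(strings.Trim(assertion, delim), sep)
}

// verifySignature checks that the base64 signature field verifies over the
// base64 nonce field, under the base64/PKIX-encoded public key field.
func verifySignature(nonce64, key64, sig64 string) error {
	nonce, err := base64.StdEncoding.DecodeString(nonce64)
	if err != nil {
		return err
	}
	keyDER, err := base64.StdEncoding.DecodeString(key64)
	if err != nil {
		return err
	}
	sig, err := base64.StdEncoding.DecodeString(sig64)
	if err != nil {
		return err
	}
	parsed, err := x509.ParsePKIXPublicKey(keyDER)
	if err != nil {
		return err
	}
	pub, ok := parsed.(ed25519.PublicKey)
	if !ok {
		return errors.New("not an ed25519 public key")
	}
	if !ed25519.Verify(pub, nonce, sig) {
		return errors.New("signature does not verify")
	}
	return nil
}

func main() {
	// For a Grant/Accept pair you'd split both assertions, check that the BID
	// and key fields match, that the nonces and signatures differ, and then
	// call verifySignature on each.
	fmt.Println(fields("🥁C🎸55555🥁")) // [C 55555]
}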

It also provides endpoints that let you query all the PIDs associated with a BID (/pids-for-bid), the reverse query (/bids-for-pid), and, given any PID, list the group of PIDs that are mapped together with it via at least one BID. Here’s a little terminal session:

Interacting with Blueskid on the command line

What Blueskid doesn’t do

It doesn’t actually post the assertions to the social-media sites; I did that by hand. This will require a lot of API wrangling and the APIs are frankly not that lovable. It does actually use the Twitter V2 API to retrieve tweets. But Tumblr and Mastodon are just HTTP GETs followed by code that roots through their horrible HTML to find the assertion.

Blueskid’s ledger is a fake. It’s in memory, not persisted at all, and it doesn’t do signature chaining to ensure that it’s immutable. Databases and Merkle trees are hard, but implementing them to do this kind of thing is a fully solved problem.

Acknowledgments

The idea of establishing key ownership by publishing signed assertions in social-media posts is originally due to Keybase.IO, quite some number of years ago.

This work has benefited from several interventions by Paul Hoffman.

Saving þ 9 Sep 2021, 3:00 pm

Herewith a lost-pet story with (spoiler) a happy ending, starring a real bloodhound. Soon to be a major motion picture, I bet.

Hunter

Here’s the hound, who’s called Hunter.

What happened was, we’ve been introducing our 9-month-old cat þ (pronounced “Thorn”, because reasons) to the great outdoors. This is hard to avoid because we have a back porch we eat most meals on when it’s nice, and it’s a pain in the butt to not have the door to it open.

It’s been going great because þ is a cautious kind of cat. He skedaddles back inside or up a tree if a car goes down the road outside or someone drops a spoon on the floor or even sneezes. We kept him on a leash/harness but then after a while stopped, because he seemed to have no interest in going outside our yard. He’d flit briefly under the fence to the neighbor’s yard to investigate a chittering squirrel, but stayed away from the street in front and the alley in back. And he never wanted to stay outside long.

Then, last Saturday, we needed to make an overnight trip to our cabin in connection with the renovations; our 22-year-old son who lives in our basement was happy to mind the fort. When we left, þ was on a chair on the back porch, fascinated with all the to-ing and fro-ing with totes and duffel bags.

þ the cat

The picture we used in the lost-cat posters.
[Photo: Lauren Wood.]

When we came home Sunday, we discovered our son hadn’t seen him since we left. Suddenly we were The People With The Lost Pet. We put up posters. We walked the neighborhood, whistling and calling. We advertised on the SPCA lost-animals page and on a similar one on Facebook. Since þ is microchipped, we updated that registry too. Lauren, who is more assiduous than I, talked to more neighbors in 48 hours than we had in the last year or two. Everyone was sympathetic.

Now, we live in a dense urban neighborhood with lots of cars and the occasional coyote, not to mention, from time to time, damaged and possibly predatory humans. So, there are risks for cats. But still, it didn’t make sense, we just couldn’t figure out the scenario in which he’d managed to get far enough away to be really lost. We’d had other cats that roamed far and wide and casually walked into neighbors’ houses, about whom we worried terribly, but they all had long peaceful lives.

Eventually we ran across Pet Searchers Canada, whose service is comprehensive and bloodhound-assisted. Also it’s not cheap, but we were feeling pretty emotionally beat-up. After we’d signed the contract and paid, Savannah the handler showed up with Hunter the bloodhound.

Hunter the bloodhound

We provided þ’s harness as a scent sample, then Savannah and Hunter vanished for the best part of two hours. Savannah explained that she’d picked up his scent on the next street south and the intervening alley, and the western continuation of our alley. She told us she’d send a marked-up map showing where Hunter got the strongest signal, and advised us to go out after dark to check those areas out. Also to take along well-worn clothes, heavy with our scent, and drag them along behind us to lay a scent trail as we came home from our expedition, on the theory that the poor little unadventurous guy had got a little too far away and just didn’t know the way home.

She emailed us the marked-up map and Wednesday evening that’s what we did. And as we walked along, I occasionally offered the special “come here for a treat” whistle, and Lauren made “come for dinner” sounds. After dark is definitely the time to do this, because it’s quieter and also people aren’t going to be giving you the side-eye for walking along trailing a pair of jeans in the grass behind you.

While we were walking along the alley between our street and the next one south, territory we’d tried before albeit not in the quiet and the dark, suddenly there were plaintive cat cries answering our whistles and calls. þ has a high-pitched and penetrating voice when he has something he thinks it’s important that we hear.

We converged on someone’s tall back fence that faced the alley with an apparently-locked gate, but there was a gap under it and almost immediately, a pointy furry little black face looking through it. The gap was pretty low and þ had to put in some real squirming, but he made it out.

Now he’s at home. He won’t be going out for a while. He had a minor injury on one front leg, but it was already healing; no call for a vet visit.

What’s shocking is that he was in the back yard of a house across the street and maybe two houses east of us. I guess if you stay in your own yard you’re not going to learn how to find your way home. There’s a lesson in that.

Now, our neighborhood has plenty of cats and they do not co-exist very peaceably, so I suspect that poor þ was cornered and chased by one of the local feline bullies and that’s how he misplaced himself. Our daughter has vowed to walk him around the ’hood on his harness whether he wants to or not, so that he’ll at least know the nearby territory.

Judging by the number of posters on utility poles I see, pets often go missing. But you can do more than put up posters. I recommend bloodhounds.

LG C1 6 Sep 2021, 3:00 pm

What happened was, our TV is ten years old and (following on some renovations) we could use one at the cabin for rainy winter evenings. So I bought a 48" LG C1 4K OLED screen (48" is the smallest in the C1 line), which is kind of this year’s hot TV. It’s not a life-changer, but the world of TV has shifted some in a decade, so here’s a dispatch from the front. Includes a pointer to a truly great TV stand.

By the way, this thing seems to be on sale at all the big boxes, which is a little weird given the global supply-chain crunch. We got ours at Costco, but in the US it seems Amazon has ’em cheaper. [Caution: Affiliate link.]

OLED

The Wirecutter and several other review sites I visited seemed pretty unanimous that LG and Sony OLED are ahead of the pack, and LG is quite a bit cheaper. Also I liked the look of the sets. OLED, compared to other display technologies, is said to offer more dynamic range, better color gamut, faster pixels, and other goodies.

What they keep coming back to is “blacker blacks”, so bear that in mind.

LG C1 TV

Blacker blacks!

4K

It means, basically, 3840×2160, which is to say 8,294,400 pixels. Does anyone really need that many? I got interested in the subject back in 2013 when 4K was new, and wrote code to figure out if there’s any value-add compared to regular HD (1920×1080, a mere 2,073,600 pixels).

In Is 4K BS? I concluded that the pixel count probably didn’t matter, but boy did that blog piece ever go viral, so I ended up writing More Things About TV, which noted (among other things) that the 4K spec also bumps the frame rate and color depth. And here too, blacker blacks come up.

So, does it make a difference?

Yeah, but… well, first of all, modern TVs have way smaller bezels, so even though our video cave is pretty small, we were able to replace a 42" model with a 48". And that does add impact. Also, these days they’re thinner and sleeker and generally easier on the eye — among other things, the backs are smooth black surfaces without vents and other uglification. So in fact I could have squeezed in the 55" but it’s OK, what we got is big enough.

As for the rest, well — and here’s the important part — it really depends on the source. Like a lot of people, our family watches streaming services, mostly episodic TV but some movies, and also live sports via a cable provider. Except for baseball, for which we subscribe to MLB.tv. The cable is still on 720p, no 4K there.

So, if you have a well-photographed, well-produced show, and if its visual palette is sort of noir, then yeah, a big 4K OLED is going to make you smile and say “Wow!” A couple of examples would be Lupin on Netflix and especially The Expanse on Prime. In particular the later seasons. Space, baby!

As for live sports, the story isn’t good. The picture comes nowhere near pushing the edge of what the screen can do. Note that MLB.tv is 4K as opposed to cable’s 720p, but still. You get a close-up on a batter’s face and OK, it’s dramatic, but then the crowd shots and wide whole-field views are super-disappointing. I know that they can do better. I wonder what the bottleneck is, and suspect it’s just stingy management that’s unwilling to pay up to pump more bits through the wires.

I’ve been thinking about dumping cable and subscribing to one or two sports streamers (in Canada, Sportsnet and TSN). It might save money; if it got me a better picture it’d be a no-brainer. But I suspect the problem isn’t with the streamers, it’s at the source, with the leagues. Anyhow, interesting territory.

Software

These days you have to worry about your TV’s operating system. The LGs come with “WebOS”; when I saw that name I thought “Didn’t that used to be the nice Palm thing that was killed by iOS and Android?” It turns out that this is that; a distant successor, anyhow, that’s weaved back and forth between owners and in and out of Open-Source respectability. See WebOS on Wikipedia for details. Anyhow, it’s cool that the TV runs Linux.

And it’s pleasant enough to interact with. But mostly we don’t, because the little Roku box that drove the previous TV still works fine, and it spits out 4K and has Netflix and Prime and (unlike WebOS) MLB and plenty of other nice stuff. It turns out Roku is Linux too, so there.

I generally like Roku, it seems to pretty well Just Work and get out of the way. But I suppose they’ll turn out to be evil, just like every other big player in the entertainment ecosystem.

Privacy

Modern TVs spy on you. They are part of the global adTech ecosystem, a dismal, dark, diseased, and dysfunctional landscape that, generally speaking, contains nothing good.

“I’ll get a dumb TV,” you exclaim, “Then they can’t track me!” Well no, but your cable box still is, and if you have a Roku or a Chromecast or really any other widget that routes entertainment bytes from the Internet to your eyeballs, it’s probably tracking the hell out of you.

My neighbor has erected a small but exotic-looking “digital antenna” on his roof and tells me he gets lots of channels in rock-solid first-class high-resolution high def. And yep, nobody’s tracking him. But, no internet goodies for him either.

The situation isn’t hopeless. In my case, I care about the TV itself, the Roku, and the cable box. A bit of Web search reveals, for anything reasonably modern (in my case, the Roku & LG), how to minimize tracking. I’ve done that of course, but in my heart I think they’re probably lying liars who are watching my unhealthy affection for big soccer tournaments and anything with good space battles or that has Idris Elba.

And bear in mind that your mobile phone is tracking you all the time too, as are bushels and bushels of JavaScript embedded in pretty well every Web page you visit. So I’m going to suggest that the TV may not be the most intrusive internet-connected device decorating your lifestyle.

But I still hate being watched in the TV cave, and think someone should pass draconian legislation to end this travesty.

Imperfections

We are happy customers of Logitech Harmony remotes, which are now being discontinued because Logitech is evil and hates customers. I joyfully discovered that the C1 TVs are in the Harmony database; maybe one of the last things to be added? So I reconfigured our remote’s setup but now it won’t sync with any of our computers. There are a bunch of workarounds and hacks we haven’t tried yet, and if all else fails you can buy a new Harmony on eBay, still pretty cheap.

My heart sinks at the prospect of operating a system with a Roku and a cable box and disk player and a PlayStation and a Chromecast, all plugged into an A/V receiver, without some sort of universal remote. Wish us luck. And it seems totally batshit crazy that there isn’t a good business to be built around solving this problem.

One other thing. We watch Netflix & Prime via the Roku, but now the TV has them too. Maybe the picture is better that way? But I haven’t figured out how to make the Marantz AV receiver route the HDMI Audio Return Channel (ARC) from the TV to the speakers. I’ll probably wrestle it to the ground eventually.

Hardware elevation and VESA joy

There’s a problem with the LG TVs: They ship with this ugly low-riding plasticky-silver stand that positions the screen just barely above whatever surface it’s sitting on. Which raises the question: Where do the speakers go? Assuming that you don’t want to use the shitty ones built into the TV.

Whether you’ve got a (*sigh*) sound-bar or (as in our case) a pair of very decent little PSB Alpha speakers with an outboard subwoofer, these are things that want to sit under the TV. But with that base they can’t.

I’m here to help. I spent an absurd amount of time searching for “tv stands” and “monitor risers” and other permutations, thus routing money from Amazon to Google because all the unsatisfactory answers had Amazon at the top of the list, and that doesn’t come for free.

It took forever, but I hit pay dirt, in the form of the STAND-TV00Y (great product name there) from an outfit called VIVO. Based on this product, I will definitely have a close look at VIVO next time I need to configure a desk/monitor combo.

VIVO Universal Tabletop TV Stand for 22 to 65 inch LCD Flat Screens

Our TV, like this one, is backed by a brick wall. I couldn’t resist this picture from the VIVO website, even though I’m deeply concerned about what’s in that ominous wooden ladle. I’m pretty sure the Worried Jungle People shouldn’t let the Stoned Jungle Person drink it.

Pardon me for going all fanboy, but this is brilliant. It comes with (and I sob that this should be such a rare thing) crystal-clear unambiguous directions for putting it together that Just Work.

I mentioned VESA (a.k.a. FDMI), which is the standard that describes how to fasten TVs to stands and booms and walls and so on. The VIVO uses that and my first experience with it is good. Except that the stand comes with a little plastic multipouch containing a remarkable number of different-sized fastening bolts, because I guess VESA didn’t standardize that.

Should you upgrade?

Most people with TVs that work OK probably shouldn’t. But ten years’ progress does make a difference.

How Much Range? 1 Sep 2021, 3:00 pm

When someone wants to talk to me about my car, invariably the first question is some variation on “How far can you go on a charge?” The next is “How long does it take to charge?” Ladies, gentlemen, and other flavors, please take note. These questions are wrong. I’m here today to explain why, and suggest what the right ones are.

[This piece provoked by my recent Trans-Canada driving experiment.]

“How far can you go on a charge?”

For almost everyone, 95% of their driving is commuting and shopping and going to the gym or whatever. Every contemporary electric car you can buy has more than enough range. Most EV drivers I know charge less than once a week.

Therefore, the question is really only relevant if you need to drive long-haul. I’m going to define “long haul” as “more than 250km” (about 150 miles). That number 250 may be controversial but I think it’s reasonable, because as of mid-2021, it’s becoming easy to buy an EV with that kind of range, with the price creeping further into mass affordability every quarter.

Now, when you’re long-hauling, you’re never going to use all of your range. To start with, when you’re using one of the fast-chargers on the highway, the process slows down when your battery hits 80% full, by a factor of as much as three. So if you arrived at 20% full, it’d take you the same time to get from 20% to 80% as from 80% to 100%. Since you want to get back on the road, and you don’t want to hog the charger unduly, you usually take off when you hit 80%. So to answer the long-haul range query, start by subtracting 20%.

Not only do long-haulers not start out full, they don’t run the battery down to zero. These days, there’s always the danger that when you get to the charger, it’s broken or busy or you just can’t find it. So you need to leave some reserve. People who plan ahead generally look for a charger where there’s another nearby to serve as a Plan B. What do we mean by “nearby”? Well, if there’s a Plan B charger a couple of blocks from your target, you’ll be willing to run pretty far down. If the chargers are say 50km apart, you’re going to want more reserve.

So the correct arithmetic isn’t “max-range - 20%”, it’s “(max-range - 20%) - Plan-B-safety-margin”.

Of course, there’s a special case when you’re starting from home, or ending up there. Where by “home” I mean somewhere that there’s a reliable low-tech level-2 charger where you leave your car plugged in all night and it’s back at 100% in the morning. So when you’re starting from home you don’t need to compensate for that 20%, and when you’re ending up at home you don’t need the safety margin.

But it’s more complicated than that. Because your range depends on how fast you’re going, how often you’re stopping, whether you’re going up and down hills, how hot or cold it is, and how hard it’s raining. For example, the worst-case scenario I can think of is the eastbound BC Highway 5 (“The Coquihalla”) which is 500+ km long, mostly uphill, and has approximately 0km of flat sections. Also, it has a speed limit of 120km/h. Also, it’s in Canada, which means that the local climate includes rain, snow, and extreme temperatures.

Among all these variables, there’s one you can partly control: your speed. I’ve been told that the formula for air resistance includes at least one quantity containing the square of your speed. So when a long-hauler is calculating the next leg of their journey, they’ll need to take that into account.

So the correct question is something like: “On the rare occasions I’m driving cross-country, how far can I go in one hop, after taking off 20% for charging efficiency (unless I’m starting at home), allowing for Plan B at the destination (unless I’m ending at home), and compensating for speed, weather, temperature, and hills?”
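
If you’d rather have that as arithmetic, here’s the back-of-the-envelope version, in Go because why not; every number in it is a placeholder, not a measurement:

package main

import "fmt"

// usableLeg estimates how far you can go in one long-haul hop. maxRange is the
// car's best-case range, conditions is a fudge factor for speed, weather,
// temperature, and hills (1.0 = best case), and the other knobs correspond to
// the 20% charging cutoff and the Plan-B reserve described above.
func usableLeg(maxRange, conditions, planBReserve float64, startingFromHome, endingAtHome bool) float64 {
	r := maxRange * conditions
	if !startingFromHome {
		r *= 0.8 // on the road you usually unplug at 80%
	}
	if !endingAtHome {
		r -= planBReserve
	}
	return r
}

func main() {
	// e.g. a nominal 400km car, highway speed and weather knocking 25% off,
	// and 50km held back for Plan B:
	fmt.Printf("%.0f km\n", usableLeg(400, 0.75, 50, false, false)) // 190 km
}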

So someone who asks me that question is apt to get a long answer. Or in the (unlikely) event that I don’t want to explain, or the (common) event that I don’t think they have the patience, I say “Max 400km best-case, but I can always get 300.”

The right question

I suggest “Can you go 250km between chargers on a cross-country trip?” I confess that I’m influenced by the design of Petro-Canada’s Electric Highway project, which aims to have chargers no more than 250km apart. I think that’s about right.

Depending on how the ecosystem of EVs grows, we might end up using either a larger or smaller number. Of course, the more charging networks are out there, the easier Plan B gets, so the minimum viable long-haul-leg range gets smaller.

“How long does it take to recharge?”

If you own an EV, your life will be much easier if you have reliable access to a “Level 2” charger. This can cost less than a thousand bucks if you’re lucky enough to have a garage that already has decent electrical service. But it’ll be more for most people. For those who park on the street or in their apartment’s basement, it can be a real problem.

With that Level 2, for basically every electric car on the market, if you adopt a discipline of “Plug it in overnight whenever it gets down below half charged”, you’ll never have to think about it.

So once again, this only matters when you’re long-hauling. But then it matters a lot, because it’ll have a major influence on how fast you get there.

Once again, the answer is complicated. This time I’ll cook the factors down into a list:

  1. How far do you have to go? If the next leg is much less than your range (after all the corrections and adjustments listed above) then just charge up that much, plus enough for Plan B.

  2. How fast can your car charge? Some of the older and cheaper electrics can barely soak up 50kW. Mainstream high-quality cars these days can use 100kW (up until 80% full, that is). The Porsche Taycan and Hyundai Ioniq 5, however, can both use more than 200kW and this is what I’d expect from the whole next generation of electrics.

  3. How fast can the charger pump electrons? In my recent Trans-Canada trip, I encountered “fast” chargers at 50, 100, 200, and 350kW. (There’s a rough charge-time sketch just after this list.)
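
Putting those three together, the rough charge-stop arithmetic looks like this; it ignores the over-80% taper and all the other real-world messiness, and the numbers are purely illustrative:

package main

import "fmt"

// chargeHours estimates how long a stop takes: the energy you need divided by
// whichever is lower, the car's maximum charge rate or the charger's. It
// ignores the slowdown above 80% state-of-charge.
func chargeHours(batteryKWh, fromSoC, toSoC, carMaxKW, chargerKW float64) float64 {
	needed := batteryKWh * (toSoC - fromSoC)
	rate := carMaxKW
	if chargerKW < rate {
		rate = chargerKW
	}
	return needed / rate
}

func main() {
	// e.g. a 90kWh battery going from 20% to 80% in a 100kW-limited car
	// plugged into a 350kW charger: the car is the bottleneck.
	fmt.Printf("%.1f hours\n", chargeHours(90, 0.2, 0.8, 100, 350)) // 0.5 hours
}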

The take-away

In areas of the world with a decent charging network, pretty well any reasonably recent EV will long-haul. Probably the most important quality-of-life factor is your charging speed.

The areas of the world without a decent network are shrinking and will shrink lots more, quickly.

CL XLI: Forest Stories 24 Aug 2021, 3:00 pm

Recently I’ve had the joy and privilege of time spent walking in the Pacific Northwest forest, on a small island where we engage in Cottage Life. Walking in the forest provides a fine opportunity to think, although the raw beauty of the forest pouring in through your eyes and ears will regularly interrupt. While forest-walking, I thought about pictures, modern mapping technology, strangers’ identities, and The Green Knight movie.

Forest on Keats Island

Snapping

I have a problem: It’s hard to photograph the forest. Out of sheer embarrassment I won’t share the number of times a combination of light and space and color has brought the camera to my eye. Because almost every such effort, on later consideration, ends up looking like a snapshot of some trees. I occasionally get the light and color but the special space eludes almost always.

Challenge accepted, OK? If that rainforest thinks it can hide its beauty from my camera, it’s got another think coming. With any luck I should have a couple more decades of life to work on the problem.

Forest on Keats Island

Mapping

There’s a problem walking in these woods. The trail network is a bit complicated and generally speaking, the trail forks look like the other trail forks. This makes it hard to re-create an excellent walk with a length known in advance, for example when you’re showing off the island to a first-time visitor who might not be up for a challenging two-hour scramble.

So I decided to map them. I surveyed the (many!) Android apps designed for this purpose. It seems that AllTrails is the most popular, but I found its learning curve onerous. So I installed Gaia GPS and Lauren installed Wikiloc, and we set out. They both worked pretty well. I think that if you’re signed into Gaia, this map should show my recently-marked trails. But I’m not sure I actually understand the publishing process yet.

Having created a Gaia GPS account and used the app/site briefly, I was charmed to get an email from them advertising that they were hiring and anyone interested in some combination of cartography, mobile apps, and server-side tech should get in touch. If I were younger I might.

Forest on Keats Island

Green Knight

I have a special relationship with the poem behind the recent movie.

The movie was our first such outing since Covid started. We even took a public-transit train to get there. I masked on the train but, since the film’s been out for a while and isn’t a big hit, there were only six people in the theater, widely separated, so I went bare-faced. It was frankly a thrill to go out and do adult things.

As for the movie, meh. The middle section, with Gawain wandering the wilderness seeking the Green Chapel, was very good. But I thought the ending, completely different from the one in the poem, was not an improvement.

And while the location shooting was very beautiful, the sound design was awful, with obtrusive heavy-handed Foley; for example, Gawain’s horse plods slowly down a muddy forest path, and with each pace a huge “thud!” explodes from the theater speakers.

I think the problem is that the movie didn’t take itself seriously enough, as witness the hokey episode titles and the really dorky final line of dialogue assigned to the Green Knight.

I hope someone tries again and does it better, because the underlying poem is a fine piece of work.

Forest on Keats Island

Identity

We were walking one of those trails and my eye was captured by a flash of rectangular white in the undergrowth. It turned out to be a BC Services Card, which combines the functions of driver’s license and healthcare access. I’d sure be upset if I dropped mine on a forest trail — I’ve never had to replace one but I imagine the bureaucratic snarl is pretty awful.

Fortunately, the card displays, along with the holder’s full name, gender, and birth-date, their mailing address. So it was easy enough to put it in an envelope and drop it in a post-box.

But I was unsatisfied, because if it was my card I’d want to know right away that it’d been found. So I went to look up the holder, a woman who had a dirt-common surname but moderately unusual first and middle names; I thought given an email address or social-media handle, I could set her mind at rest.

Google: No luck. Facebook: No luck. LinkedIn: No luck. The phone company’s “white pages” site (if you don’t know what white pages are, that’s perfectly OK): Yes, correct first-name/last-name combo in the right suburb. I called it and got a fax machine. Uh…

Anyhow, she got the card and I got an online thank-you via LinkedIn. But, first of all, I was surprised that with this much information, I was still unable to find any online evidence of this person’s existence. Weird, right?

No, maybe I’m weird. Given a random slice of a thousand or so people across the population, how many of them should one expect to be able to turn up online? How far has the Internet penetrated, really, into the fabric of society?

I don’t know, but I’d like to. I’m the last person to ask because I live online and the space of people who don’t is pretty well closed to me.

Thanks for listening

And if there’s a forest anywhere near you, count yourself fortunate and go take a walk in it. You won’t regret it.

Apps Getting Worse 7 Aug 2021, 3:00 pm

Too often, a popular consumer app unexpectedly gets worse: Some combination of harder to use, missing features, and slower. At a time in history when software is significantly eating the world, this is nonsensical. It’s also damaging to the lives of the people who depend on these products.

First, a few examples to clarify the kind of thing I’m talking about. These are just the ones I’ve had personal experience with.

iPain

One super-obvious example is the long, sad story of iPhoto and iMovie.

For years after the introduction of iMovie ’08, you could still get and use the ’06 version and lots of people did, because it was simple, straightforward, and the obvious things you needed to do were always within reach. I was using the program back then and, since I’m a tech geek, updated to the newest and greatest, then was reduced to inchoate screams of rage by ’08. I couldn’t figure out how to do lots of obvious things; everything was klunky. There wasn’t a single dimension along which ’08 was better.

As for iPhoto, I never used it much, but my eighty-something mother did, and took lots of great photos with the Sony RX100 I gave her when I gave up on pocket cams. She’s not geeky but has a Bachelor’s in the sciences and is really smart. At some point they broke iPhoto so she couldn’t figure out how to do anything, and when she asked me for help she had tears in her eyes. I tried to get her fixed up, but she doesn’t take pictures much any more. I miss them.

Economist pain

I was still a Developer Advocate in the Android group when The Economist shipped their app. I thought it had one of the best user experiences ever. You started at the beginning of the current issue, swiped down through an article to the bottom, then swiped to bring the next article in from the right. It remembered where you’d got to, which supports The Economist’s vision of being a weekly newspaper; one pass through and you’re caught up on the world that week. There was always a gesture to get to the Table of Contents, but I found I usually didn’t need it much, just swiped over the things I didn’t care about. I praised it to the skies at the time, and (admittedly) since criticized its “Back” affordance, but that was a minor gripe.

The most recent version has been fancified and crippled. First of all, when you open the app, it doesn’t take you to where you were last reading. It insists on starting with “news of the day” (there are lots of other sites for that stuff) and you have to press “week” to get back into the actual publication. When you do that, even though it knows which articles you’ve read (marking them with a check-mark in the table of contents) it maddeningly doesn’t take you to where you were last. So you have to hunt through the table of contents to get yourself restarted.

And when you get to the bottom of an article, it doesn’t stop, it drops you into some weird bastardized section-specific table of contents thingie. All I want is to flip down then flip right until I get to the damn end of the damn magazine. Why?!

MLB

I’ve used the Roku/MLB combo for years to watch ball games on our big TV. The app has evolved over the years and mostly gotten better. I find things on Roku tend to be a little sluggish, but MLB wasn’t too bad; it’d drop you, pretty quickly, into a screen containing a nice picture of a baseball stadium, then overlay a grid of games that were on; pick the one you want and away you go.

Suddenly, it’s become immensely slower, and apparently is spending that time trying to use some AI voodoo to figure out which game I’d like to watch. After an endless delay, you get live video of the game it thinks you want to watch, with a few other games and menu choices overlaid around the edge. It’s reasonably good at guessing which game I want to watch, but way slower at getting me there than it used to be. When it’s (regularly) wrong, there are two (slow) menu transitions to get back to the grid of all the games.

Also, they screwed up the Android Auto app — I find listening to a game a good way to pass the time on the road. It’s always had a flaw in that it tries to guess which game you want to watch and starts playing that — the guesses are laughably bad and I often end up with something like Miami Marlins in Spanish. But, you were one tap away from a nice list of everything on offer.

Recently, the startup screen is trying to be smarter, thus much slower, in presenting its guess as to what you might want to hear, with a few others (not all) offered as options. So I have to wait forever for this to manifest, then hit a teeny little “More…” target to get the actual list of all the games on offer.

Why does this happen?

It’s obvious. Every high-tech company has people called “Product Managers” (PMs) whose job it is to work with customers and management and engineers to define what products should do. No PM in history has ever said “This seems to be working pretty well, let’s leave it the way it is.” Because that’s not bold. That’s not visionary. That doesn’t get you promoted.

It is the dream of every PM to come up with a bold UX innovation that gets praise, and many believe the gospel that the software is better at figuring out what the customer wants than the customer is. And you get extra points these days for using ML.

Also, any time you make any change to a popular product, you’ve imposed a retraining cost on its users. Unfortunately, in their evaluations, PMs consider the cost of customer retraining time to be zero.

How to fix this? Well, in my days at Amazon Web Services, I saw exactly zero instances of major service releases that, in the opinion of customers, crippled or broke the product. I’m not going to claim that our UX was generally excellent because it wasn’t; the fact that most users were geeks let us somewhat off the hook.

Why no breakage? Because these were Enterprise products, so the number of customers was orders of magnitude smaller than iAnything, so the PM could go talk to them and bounce improvement ideas off them. Customers are pretty good at spotting UX goofs in the making.

The evidence suggests that for mass-market products used by on the order of 10⁷ people, it’s really difficult to predict which changes will be experienced as stupid, broken, and insulting.

Maybe we ought to start promoting PMs who are willing to stand pat for an occasional release or three. Maybe we ought to fire all the consumer-product PMs. Maybe we ought to start including realistic customer-retraining-cost estimates in our product planning process.

We need to stop breaking the software people use. Everyone deserves better.

Western Electric 5 Aug 2021, 3:00 pm

At 6:30 PM on Wednesday August 4th my 15-year-old daughter and I pulled up in the Jaguar I-Pace electric car in front of my 91-year-old Mom’s place in Regina, Saskatchewan. I was tired and achey because I’d just finished driving 1,725km (1,072 miles) across two days to see her for the first time since Covid started. I was happy to see Mom, happy about the first road-trip in a long time, and happy to have tested the hypothesis that, in 2021, a fully-electric vehicle can handle long-haul travel.

Arriving in Regina after driving 1725km in electric car

My Mom, welcoming us to Saskatchewan. Normally she doesn’t look like the queen.

This essay gathers together the data from the trip and tries to draw conclusions. There’s also a real-time Twitter thread with typos and bad pictures.

To my non-metric readers: Sorry, it’s in km. I’ll convert a few of the key numbers.

The experience

It was pretty wonderful, actually. The Jaguar is a comfortable modern car with great seats, good audio, and all the automation you’d expect. It has awesome, overwhelming acceleration power for when you’re in a tricky passing situation. My daughter was excellent company. Cruising along a good road — and a lot of the Trans-Canada highway is — becomes a pretty pleasing experience.

The worst part, by a wide margin, was the wildfire smoke, between us and some of the world’s most fantastic scenery. But that’s a symptom of the onrushing climate crisis, and one of the best things we can do to mitigate the devastation is to stop burning fossil fuels to travel.

Smoky sun in Calgary

Smoky sun while charging in West Hills Mall, Calgary

Of course, this road trip was different from any previous experience, because charging. In a fossil car, you don’t have to think, you just wait for the tank to get a half or three quarters down, then pull over at the next station. Recharging requires planning; fortunately the tools are pretty good; more on that below.

The chargers

One reason I decided this experiment was worth trying was Petro-Canada’s message about their Electric Highway program, and I quote: “We have a charger every 250 km or less from Halifax, N.S. to Victoria, B.C.” There’s one not far from where I live in Vancouver, I tried it, and it worked first time with just a credit-card tap, no fuss no muss.

Trouble is, that quote is kind of a lie. There are gaps, and that’s when all the chargers are working, which fairly regularly they’re not. But my experience is that Petro-Can, while good, is never your only charging option.

Some background is required here. “High-power” Fast DC chargers come at multiple power levels: I saw 50, 100, 200, and 350kW. The difference makes a difference. Our Jag can only really charge at 100kW, but my personal perception is that the higher-power chargers fill up that last 20% much faster. And it feels complicated; for example, in my experience with a Co-op Connect charger, rated at “only” 100kW, it felt faster.

When you use these things, they feel like first-generation tech, pushing the edges of what’s possible (or at least maintainable). In particular, when you plug a 350kW charger into a car with a really low battery, once it’s finished syncing and starts pumping electrons, the sound torques up like a 747 taking off. And the installations include multiple big tall metal boxes (see the picture below). Also, the huge big thick connecting wire gets super hot to the touch.

Charging at a Petro-can in Canmore, Alberta

Charging can be glamorous!

Anyhow, my impression of the Petro-Can network remains mostly positive. The machines work well. It’s annoying that some are 100kW, some 200, and some 350, for no obvious reason. It’s annoying that sometimes they’re stuck into a weird grubby back corner of the lot in a way that makes it hard to get your car in the right position to reach the charging port with the wire. But, good on ’em.

Electrify Canada is another organization that’s promising a national network of fast chargers. They’re a partner of Electrify America, constructed by Volkswagen as part of their settlement over cheating on emissions testing. Anyhow, maybe they’ll be great some day. Once I was far enough into the trip to have Petro-Can fully worked out, I tried to find an Electrify charger in working condition but failed.

If you’re OK with using 50kW chargers there are loads and loads of options. Many smaller-town Visitor Centres and Chambers of Commerce put one in, as has my own electrical utility, BC Hydro. Once you’ve worked with a higher-power charger though, they’re just not a satisfying experience.

The numbers

Each line in the table below represents one driving leg and includes the charging experience at the beginning of that leg (thus absent on each day’s initial leg). I think the column headings are mostly pretty obvious, except perhaps for:

  1. kWh/h is the amount of juice divided by the driving pause. Often the charger would report less time, which I put down to initialization delay, so I think I’m using the right value. The variation here is a little random, because how fast it goes depends strongly on how empty your battery is.

  2. km/ch stands for “km per charge-hour”, estimating the amount of road range you get per hour of charging. Which I think is a really important number. (There’s a worked example just after the table.)

  3. Network; “PC” is Petro-Canada, “Flo” isn’t an acronym, and “Co-op” is Co-op Connect.

There are two data sources: The drive-time data is from Jaguar’s trip logging via their Incontrol app; thus the awkwardness caused by non-charge roadside stops. The charge-time data is the output from the various charging sessions along the way. I’m certain that neither is perfect, but the results seem intuitively in the right neighborhood, based on my experience.

Charging, then driving
Start time End time Charge time kWh $ $/kWh kWh/h km/ch Network Start End Drive time km Speed Regen kWh/100km
6:26 8:00 PC Vancouver Hope 1:34 149.5 96 1.6 23.6
8:46 10:32 0:46 28.3 10.37 0.37 36.91 148.94 PC Hope Kamloops 1:46 193.8 112 3.5 26.7
11:26 12:46 0:54 55.8 13.97 0.25 62.00 250.17 PC Kamloops Salmon Arm 1:20 113.8 87 3.3 20.4
13:32 16:25 0:46 27.1 9.53 0.35 35.35 142.63 PC Salmon Arm Golden 2:53 248.8 88 6.9 22.4
17:11 17:25 0:46 48.8 10.70 0.22 63.65 256.83 PC Golden (roadside) 0:14 18.1 69 1.0 33.0
17:25 19:06 (roadside) Canmore 1:41 146.0 89 1.9 21.4
5:49 6:51 60.4 21.20 0.35 PC Canmore Calgary 1:02 101.2 96 1.6 23.9
7:44 9:51 0:53 22.3 13.85 0.62 25.25 101.86 Flo Calgary (roadside) 2:07 221.8 107 2.7 23.1
10:03 10:39 (roadside) Medicine Hat 0:36 64.1 108 0.8 23.9
11:45 13:56 1:06 72.5 21.08 0.29 65.91 265.94 PC Medicine Hat Swift Current 2:11 230.0 107 1.4 25.5
14:50 16:24 0:54 54.0 14.28 0.26 60.00 242.10 PC Swift Current Moose Jaw 1:34 170.7 110 1.1 25.6
16:43 17:27 0:19 23.0 5.54 0.24 72.63 293.07 Co-op Moose Jaw Regina 0:44 67.1 92 0.8 27.9
Total 6:24 392.20 $120.52 0.31 66.18 17:42 1724.9 26.6
Average 0:48 43.6 $13.39 52.7 212.7 1:28 143.74 97.5 24.8
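
To make km/ch concrete with one row: the charge at Hope, before the Hope-to-Kamloops leg, delivered 28.3kWh in 46 minutes, which is about 36.9kWh/h; at the trip-average consumption of roughly 24.8kWh/100km, that buys about 149km of road per hour plugged in, which is (near enough) the 148.94 in that row.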

Let’s have a closer look at the numbers that seem interesting to me.

Charge time

17:42 driving, 6:24 charging (including one evening charge after the day’s drive was finished). Not that great on the face of it. Now, it clearly could have been less; since my confidence that any given charger would Just Work started out weak, I was carefully allowing for failures and not running the battery very low. Later on in the trip as I gained confidence (specifically in the Petro-Canada network) I was willing to take on things like the Calgary-to-Medicine Hat leg, 2:43 and 285.9km, running the battery from 90% down to 11%.

Also my daughter is after all a teen-ager, and perhaps not quite as quick as I moving through cafes and restrooms and so on.

Also note that the legs are kind of short; this car can go 400km on a charge. But not when you’re on a big wide modern Prairie highway with almost no other traffic, blasting along at 110km/h (65mph) or more, continuously. I think you’d find this true of pretty well every electric vehicle.

But, here’s the thing: It didn’t feel excessive. I can only remember a total of maybe fifteen minutes when we were consciously just hanging waiting for charge. Most places, we got a coffee or lunch, hit the bathroom, took a walk around the block for our knees’ sake, and then it was time to unplug and go.

Having said that, this is a 2019 model-year car and the charging technology is improving. If we’d had a Porsche Taycan and nothing but 350kW chargers, the story would have been very different. Will the fast-charging technology make the leap into the mainstream-car price plane? Will 350kW chargers become ubiquitous? I’d like to know.

In this context, there’s another number there that I think is really interesting: The “km/ch”, how far you can get on an hour’s charge. For this particular car on this selection of chargers, it was over 200km (124 miles) per charge-hour. I think that’s enough? Maybe in the lower regions of enough, but there.

And finally, I suspect every driving-safety professional would beam in approval of a power system that forces you to get out of the car and move around every couple of hundred km.

$$$

It cost us $120.52 in electricity. Is that a lot or a little? I tentatively think they’re undercharging. While the electricity itself is pretty cheap, the charging infrastructure isn’t. If this is going to work, the charging networks are going to have to make money and I don’t see it at these prices.

Bear in mind that at home with the Level 2 charger in the carport, charging feels close to free. Travel maybe doesn’t need to be as cheap as the networks are currently making it.

Also, charging by the minute seems wrong. I guess having a time-based component makes sense to keep slow chargers from soaking up all the time, but especially at an extra-high-powered charger, a Porsche Taycan is going to get a whole lot more range out of each minute than a five-year-old Nissan Leaf, so why should they pay less for the same amount of range? Hmmmm.

PlugShare

If you’ve got a Tesla there’s less planning, the cars know where the Superchargers are. If you have anything else, you really need PlugShare. There are a few apps in this space, but PlugShare is best at showing you a map with all the chargers on it, and thus helping you route-plan. The reason it works is because it’s social; whenever you hit a charging station you can “Check In” and leave a note saying whether it’s working and how fast it goes. This dramatically reduces the risk of rolling up to a station and finding it broken. I absolutely don’t think this journey would have been possible without it.

Pro tip: When you’re planning a trip on PlugShare, put in all the chargers you might be able to use as you go along. Then when you’re driving, you can look at your remaining range and your upcoming options and most choices become pretty easy. It’s got a limited but decent Android Auto app that I used a lot. (I assume CarPlay too?)

Jaguar and Mustang charging up in Hope, BC

Jaguar I-Pace and Mustang Mach-E charging up in Hope, BC.

Futures

A question: Are there enough chargers, or too many? At the moment, the answer is probably “too many”. One of the things I really worried about was limping into some charging station to find all the chargers occupied and having to wait for an hour before I could even start. The picture above shows the only time I saw other humans; a young couple with a four-day-old Mustang Mach-E, off for a joyride to Kamloops. Otherwise, the chargers we visited showed no signs of life. Somebody spent a lot of money to build an expensive resource that is today largely un-used.

Having said that, anyone with even a shred of optimism about our future has to believe there are going to be a whole lot more battery-electric cars coming. Here in BC at the west edge of Canada we have North America’s highest EV uptake, pushing 10% of new car sales.

When we were charging at the big Petro-Can station in Kamloops, walking from the two well-positioned chargers to the coffee shop, we went by the gas-sales part, which was massive, at least a dozen pumps and cars lined up for every one.

At some point that picture will flip, and there’ll be occasional vendors that still sell gas, but mostly just slick, fast, chargers. I worry that the process will be kind of painful, but I’m sure it’ll happen. So I hope someone’s planning the transition.

Would you do it again?

Definitely.

And everyone should stop driving fossil vehicles starting now. Because the climate crisis is upon us. We can’t prevent it now, but we can save lives and reduce destruction if we slash carbon output. There’s no excuse not to.

Long Links 1 Aug 2021, 3:00 pm

Welcome to the August 1st edition of “Long Links”, which assembles long-form pieces that I have the luxury of enjoying due to semi-retirement. Nobody with a real job has time to read all this stuff, but one or two items might enrich your life without burning too many minutes. Note: There was no July 1st Long Links because either I was busier or the world’s long-form authors less prolific in June. Highlights this time out: Taxing wealth, attacking Amazon, guitar music, and God.

Many people have come to share the belief that the global distribution of wealth (and consequently, power) is so stupidly unequal as to be damaging to our economy and civic fabric. (I’m one of them.) What sort of policies might most effectively accomplish redistribution? The obvious answer is: tax. But it’s complicated.

Here’s a useful little Twitter thread on basic income-tax dodging. For a much deeper look, check out Pro Publica’s The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax. Also, in Mother Jones: It’s Not Just Income Taxes. Billionaires Don’t Pay Inheritance Taxes Either.

So, what to do? There seems to be growing political will. Sharon Zhang at Truthout offers “Tax the Rich” Gains Momentum After Explosive Report on Billionaire Tax Dodging. The simplest possible approach would be a wealth tax, something like a fraction of a percentage point for holdings over a threshold such as $20M. Problem: That might well be unconstitutional in the US. So here’s a plausible alternative, a tax on unrealized capital gains: Don’t wait for billionaires to sell their stock. Tax their riches now.

Speaking of redistribution, The Economist, by most measures the best-written business-friendly news provider, surprised me with Workers on the march, which notes a rising tide of working-class economic dissatisfaction, and even allows that the workers may have a gripe. I’ve been a subscriber to the rag for decades, and can testify that The Economist has been one of the loudest voices cheering on the growing imbalance over those years. Time after time they would call for “painful but necessary reform” and time after time, what they were calling for were changes to increase the power of employers and reduce that of workers. This is perhaps not the single best-written piece on this now-popular subject, but the fact of its existence feels significant.

Let’s move on to the Middle East. The (hopefully final) exit of Netanyahu from the center of Israeli politics is the biggest story in many years. For good solid analysis see The transformative legacy of Mr. Status Quo in +972, which is becoming one of my favorite sources for IsraPal reportage. (972 is the telephone country code for Israel.) For more on the subject, see the always-excellent Peter Beinart’s Benjamin Netanyahu, Father of our Illiberal Age.

Enough about the political world; let’s talk about serious stuff, namely music. Pitchfork, in its inimitably-overwritten style, offers many many words on Black Sabbath: Paranoid. Which, yes, is serious music.

A few months ago I published a few words somewhere about how much I enjoy surf-guitar instrumentals. Within a day or two I got the nicest email agreeing with me and offering to send me some. I was delighted and received an impeccably-packaged, beautifully played collection entitled Ancient Winds, from The Madeira. If the drums were mixed a little further forward it’d be better, but it’s very good; the guitar-playing is exquisite. The maddening thing is that when I started to write this section, I totally failed to find the correspondence in my email, so I can’t thank the kind gentleman who sent me the record. Sorry and, if it was you, thanks!

Away from music, back to less-serious stuff. These days we are much troubled by the evangelists for and believers in conspiracy theories. Which leaves many of us shaking our heads: How can anyone believe that ridiculous crap?! Over at 538, Kaleigh Rogers and Jasmine Mithani try to explain, in Why People Fall For Conspiracy Theories. I thought it was compelling and useful.

No “Long Links” would be complete without something on the onrushing climate emergency. I offer some less-terrible-than-usual rhetoric: How the U.S. Made Progress on Climate Change Without Ever Passing a Bill.

One of the major news stories in the technology sphere was the drum-roll of draft legislation out of Representative Cicilline’s congressional committee aimed at reforming and constraining the Big Tech sphere, perhaps by breaking up a few of them. The Cicilline Salvo from Ben Thompson is a good introductory overview. John Gruber reacts, predictably, against the notion of breaking up Apple. He’s wrong, but always worth reading. David Heinemeier Hansson’s overview, Here comes the law, stands out by taking a close look at how this might affect software developers. (Spoiler: It’d be great!)

Speaking of developers, most organizations that employ them are now trying to figure out specifically whether they need to come back to the office and, more generally, what the future of the profession looks like. Steven Sinofsky, who at various times has run Windows and Office for Microsoft, offers Creating the Future of Work. I’d call it generally optimistic, and usefully cynical in noting that you can argue in theory, or you can buckle down and ship working technology, and “They who ship, win.” I don’t agree with all of it but was very glad to have read it.

Regular readers know of my ongoing fascination with the long-ongoing conundrum of whether Dark Matter, a theoretically-useful construct, actually exists. Testing galaxy formation and dark matter with low surface brightness galaxies casts still more doubt on whether it’s really out there.

Now, I’m not sure whether this next piece should be read as politics or comedy. National Review is one of the bastions of the American Right, although they are these days occasionally anti-Trump. Political Discrimination as Civil-Rights Struggle laments the decline of conservative respectability at universities, prestige publications, and the other habitats of the educated elite. The author bemoans the unwillingness of university women to date conservatives, and (as the title suggests) sees this as a civil-rights issue, the young Trumpkins unjustly starved of feminine company. There’s lots here to laugh at, but if you’re interested in how a (relatively) thoughtful section of the right wing sees the world, this covers that waterfront pretty well. There’s little risk it’ll change your mind on anything important, but some things that don’t make sense might become a bit more comprehensible.

Hey, let’s talk about another subject close to my heart: Making the Internet work better. There hasn’t been a time in my memory when Cory Doctorow hasn’t been active on the side of the angels. At the EFF site he’s published Adversarial Interoperability, an overview of his work with a whole lot of links to really good pieces of that work.

What is the Internet, anyhow? It’s not a thing or a place. In fact, it’s a collection of incredibly detailed and boring documents, published by the World Wide Web Consortium and the IEEE, but mostly by the Internet Engineering Task Force (IETF). These documents provide the information a programmer needs to make any piece of software or hardware connect to any Internet endpoint or service, usually without asking permission or making any payments. They are now a central component of humanity’s intellectual heritage. The Internet isn’t perfect — mistakes were made, as the saying goes — but by and large things work. The days when this sort of independent professional/technical organization could make all the rules may be ending because, like it or not, governments now think this stuff is too important to leave to the geeks. It doesn’t matter whether or not you or I agree. One of the few people who’s worked as long or as hard as Cory on making the Net better is Mark Nottingham (mnot), and he’s coming from a deep well of hands-on experience in How the Next Layer of the Internet is Going to be Standardised. If you care about the Net you should read this.

Since we’re talking about the Internet, let’s turn to my former employer Amazon, which is not having a good 2021 in public-image terms (financially it’s doing just fine).

While it’s true that I rage-quit the company last year, I’ve never seen myself as an enemy of Amazon, as such. I see the company more as a symptom of the hideously-imbalanced state of the global twenty-first-century economy. It’s a company that (I thought) plays by the rules. The problem is that those rules are so broken that the results are often hideous. In my experience on the AWS side, the company was intelligently and humanely managed, did a great job for its customers, and was by far not the worst place in Big Tech to work.

But these last few months, I keep reading really painful stories about Amazon. In Mother Jones, How Amazon Bullies, Manipulates, and Lies to Reporters is a nasty tale. Since leaving I’ve talked to quite a few professionally public-facing folk and they get this ugly expression, weary disdain I’d call it, whenever Amazon PR comes up. The ultra-hard-line approach of “Every negative word written about us is a bug which must be squashed” is manifestly yielding diminishing returns. I’m pretty sure there was an Amazon side to some of the recent nasty stories that might have got more press if Amazon PR had been a little less scorched-earth.

This one is unsurprising: Amazon Delivery Companies Revolt Against Amazon, Shut Down. I hated these faux-independent firms that Amazon encouraged and financed the moment I heard about them, and could not for the life of me see why anyone would found one and take on the personal burden and liability in exchange for the privilege of being a leech whose blood-flow is dependent on the whims of a single whale. They were created in a way that left them intrinsically powerless, and now they’re learning the cost. The fact that Amazon, famous for being able to squeeze a profit out of any number of unglamorous businesses, wasn’t willing to take on this sector’s risk, should have been a big red light. I have no notion of the rights and wrongs or legal issues in what looks like nasty impending litigation, but still, entirely predictable.

Here’s the one that most shocked me: Amazon opens discrimination investigation after internal petition wins backing of hundreds of employees. Because if the accusations of bad behavior are true, they’re happening in AWS. Granted, in ProServe as opposed to one of the actual Service operators, but still. The other dimension of shock here is that anti-gay bigotry is alleged; my experience suggested that that particular culture war was over and done with, not just at AWS but across most of Big Tech, because the good guys won. Apple’s Tim Cook is not an aberration, and also it’s not just the “G” in the LGBTQ* spectrum that was well-represented and, it seemed to me, fully accepted, among those I worked with.

So I have to admit to apparently missing things that I shouldn’t have. And for heaven’s sake, it sounds like some ProServe heads need to roll, soonest.

Enough about Amazon. For a refreshing change, here’s Ed Snowden’s new Substack launch, Lifting the mask. I’m not 100% a fan of all the directions Snowden has gone, but damn, he’s an interesting guy to read.

While we’re speaking of historic figures, let’s turn our attention back a millennium. Josh Marshall, the founder and biggest voice at the excellent liberal politiblog Talking Points Memo, stumbled into that domain via a Ph.D. in History. This is from 2019: History’s Heroic Failures. It’s an entertaining and erudite romp through events around the year 1000, showing how even in those days, the world was interconnected to a really surprising degree. Also contains recommendations for books that look like they’d be great fun.

And finally, God. David Weinberger is a former colleague, a fine writer, and would be a friend were he not so far away. His Agnostic Belief, Believer's Experience talks thoughtfully about moral foundations and the absence of faith. It’s fun to read!

CL XL: Under and Over 29 Jul 2021, 3:00 pm

Wow, the last Cottage Life piece was in 2019, suggesting there was no such thing in 2020. And, what with Covid, there was less. While this story happened in situ, it’s really about something else: How much residential construction and software are like each other, and share the same really-important rule: Underpromise and overdeliver. [Includes compulsory nature shots.]

Evergreens by the edge of the ocean at high tide

You can tell that this is an exceptionally high tide.

Enlargement

What happened was, we decided our cabin (yes, that’s not the title of this blog series, but it’s what we say these days) needed to be bigger. It had only two small bedrooms. So when our kids were little, they could have their friends over and all crash in the room with two upper/lower bunkbeds. But we couldn’t have grown-up friends or relations over.

So, after discussions with a couple of contractors, we decided on adding an upper story with two more bedrooms and a bathroom, and thus double our carrying capacity. Work started last fall and completion was forecast for May, which meant we’d be able to enjoy it this summer.

Pain

As anyone who’s ever managed this sort of thing would expect, the schedule has gone seriously awry and the work is far from finished as I write this in late July. We’re actually here to assist with painting and sorting and decision-making. We have a working kitchen, a working bathroom, but it’s still basically a construction site.

As you can imagine, I have been testy with the contractors, who have alternated episodes of ghosting us with friendly promises of dates and work items that then don’t happen. Now, let’s be fair: They have had trouble with Covid and have been seriously jerked around by plumbing, insulating, and drywall subcontractors. (But you know, a key contractor skill is supposed to be managing the subs.)

Driftwood with green vegetation

Some pieces of driftwood stick for years, becoming more and more interesting.

At the end of the day, while the delays were annoying, shit does happen and we shouldn’t have been too surprised. The communication failure, however, was maddening. And, to my mind, overwhelmingly reminiscent of the kind of friction that occurs between customers and developers on many of the software projects I’ve been near. Anyhow, here’s a lightly-edited version of a note I sent to everyone we knew at the contractor.

Under and Over

Folks, I spent 40 years in industry doing construction projects. Software construction, but a lot of things in common with your work: Ambitious deliverables, demanding customers, deadlines not all of which were met, things which are supposed to fit together but don’t, hard-to-control dependencies on other people who didn’t work for me.

Let me pass on a free lesson I learned that I think is appropriate in both domains: Underpromise and overdeliver.

We understand you’re under-resourced for the current workload — the whole world is suffering from this problem at the moment. It’s irritating but understandable and forgivable.

What’s irritating and completely unnecessary is when we are told “These people will show up and do these things tomorrow” and then (a) they don’t show up and (b) nobody tells us “plans have changed, they’re not going to show up”. Don’t make the commitment unless you’re sure you can do it, and if you make the promise and have to break it, proactively get in front of the situation.

When I ask “could we have X done by date Y” and the answer is “we hope to do that”, I now assume it means “Nope”. So just say “probably not” and hey, if you get lucky, I’m going to be delighted, as opposed to being pissed when it doesn’t happen and there’s no messaging about that fact.

The happiness that the customer gets from glowing promises is NEVER as big as the anger when whatever it is doesn’t happen.

We’re not asking you to work harder or magically have more employees. Just to talk straight with us. This is the one thing we’ve asked for all along.

Dark rainforest view

It’s hard

I don’t know why, but I think there’s an essential human characteristic here; something that makes people hate being a bad-news bearer so much they’ll construct pleasing pieces of science fiction to avoid it. Even though the consequences are inevitably worse.

Yes, your customer will get cheesed off and grumpy when you tell them the truth. But less so than the alternative. Trust me on this.

Music Notes 17 Jul 2021, 3:00 pm

Herewith notes on what I’m listening to in 2021, and why that’s a problem. With recommendations both for music and for things we can do to keep it alive.

Sometimes I listen to music on LPs — usually a combination of classical, elderly, and obscure. Otherwise these days it’s mostly YouTube Music (YTM). Which is very good at one of its jobs, namely finding me interesting music. But it’s terrible at its other job, which is being a constructive part of the music ecosystem.

Pretty soon, Covid allowing, I’ll be adding another mode: Live concerts! You should too; more on that below.

YouTube Music

YouTube Music

It’s the successor to Google Music, which attracted most of its customers because it was quick, easy, and free to upload your own personal music collection (however acquired). My collection is old and eclectic and includes lots of stuff that, I’ve always assumed, would never make it into a mainstream online service. From my social-media stream, I learned I was far from alone in liking this.

GMusic automatically scanned your iTunes music library and efficiently uploaded it all with no fuss. YTMusic can upload but you have to do it a track at a time, so your 10K-song collection is a real problem. I wonder which Google Thought Leader decided to toss out the most attractive feature? Now to be fair, YTMusic did bring along my uploaded GMusic library so I’m fine personally. Maybe this was something useful only to grizzled Boomers and Google knows what it’s doing.

I decided to pay for YTMusic in the hope that money would filter through to musicians. When you first fire it up, it throws up a huge random selection of artists and asks you to select a few you like. It reacted badly to me picking twenty-five or so.

Given a little time for the algorithm to stabilize, I’d have to say it does an awesome job of discovery. I’ve fallen in love with multiple artists who (probably due to being old) I’d never heard of.

Having said that, I occasionally feel like I’m wrestling with the algorithm. The only tools you have are the thumbs up/down buttons, but it seems to interpret those sensibly. For some reason it initially thought I was all about slow dreamy/doomy stuff and yeah, I do like a lot of that, but then the world also has Rock & Roll and funk and bluegrass and, you know, everything created before 1900 or so.

Bohren & der Club of Gore

For a while it got the idea that all I really wanted was Bohren & der Club of Gore —  German Doom Jazz, more or less. And yeah, they’re fine. For a while.

Enough bitching. When I turn on what it calls “Your Supermix” I usually end up happy with what I hear.

On top of which there are some really brilliant thematic mixes; probably my favorite is Produced By: Sly & Robbie, just dripping with Reggae/Dub excellence and then some occasional surprises from for example Grace Jones.

I’m not saying Spotify or Apple or Amazon isn’t just as good at this stuff. I don’t use them so I don’t know.

Musical breakage

I’d like to introduce you to a couple of my new jams. But first, there’s something wrong with this picture: It’s starving musicians. For an excellent (albeit UK-focused) overview I recommend the BBC’s MPs call for complete reset of music streaming to ensure fair pay for artists. Basically, the streaming services pay a derisory pittance for each song delivered, which the business side eats most of and emits a few pennies to the actual musicians. It’s horrible.

I pay about US$8.50/month for YTMusic. A while back there was a week when I had to do a lot of driving. I told Android Auto “Play Radiohead” and left it there for a few days. I tried to work out how much Thom & the boys took home for earning quite a few hours of my continuous attention. It’s hard because the whole system is opaque; the answers I got were all over the map, but all amounted to “not enough for anyone to live on”.
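
Here, for what it’s worth, is the shape of that back-of-the-envelope arithmetic, as a little Go sketch. Every number in it is a guess; per-stream payouts are opaque and the published estimates vary wildly.

package main

import "fmt"

func main() {
    hours := 10.0        // hypothetical hours of continuous listening
    songsPerHour := 14.0 // roughly, for four-and-a-bit-minute songs
    perStream := 0.004   // guessed gross payout per stream, in dollars
    artistShare := 0.2   // guessed fraction that actually reaches the artist

    streams := hours * songsPerHour
    gross := streams * perStream
    toArtist := gross * artistShare
    fmt.Printf("%.0f streams, about $%.2f gross, maybe $%.2f to the band\n",
        streams, gross, toArtist)
    // Prints: 140 streams, about $0.56 gross, maybe $0.11 to the band.
}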

Neither musicians nor (it seems) music lovers enjoy much political influence. And the music biz, or what’s left of it after recovering from its Twentieth-century addiction to selling cheap pieces of plastic at like 90% gross margin, is pretty happy with the way things are.

How can we help out the creators? Well, to the extent there are petitions to sign and campaigns to support, sign and support. But there’s one concrete thing you can do starting now that will send money to the people who need it and also improve your own quality of life.

Buy concert tickets!

Live performance is about the last useful way that a musician can generate noticeable revenue and retain a sane proportion of it. And it’s not a bed of roses, what with Ticketmaster’s egregious monopoly and the way a high proportion of tickets mysteriously migrate to extra-cost resellers. By the way, my own province is trying to do something about it with the just-arrived BC Ticket Sales Act. Good on ’em!

I’ve been watching the concert announcements like a hawk and have purchased tickets to upcoming Vancouver shows by Cousin Harley, Tinariwen, the Cowboy Junkies, July Talk, Godspeed You! Black Emperor, and Sons of Kemet.

You know what? Some of these are many months off. I might be out of town. I might be sick. I might be dead. Covid might come back and screw everything up again. So what? My concert-going budget for the last 19 months has been exactly zero, and it’s time to make up for that.

The classical concert scene seems to be having a really tough time getting rebooted. I hear them saying things like “We can’t book anything until we have absolute clarity about allowed audience sizes.” Um, there’s no flexibility even when the alternative is impoverishment? Go learn from the rockers and the jazzbos, they’re getting back on the damn road, figure it out.

Enough ranting about the industry. By way of thanks for listening, let me introduce you to a song.

Farewell Transmission

I was driving somewhere and suddenly there was a pair of voices flowing like water, a nice sinuous mellow male and then this woman wielding her voice like a razor. They sang alternately and together, in a graceful descending line:

The real truth about it is no one gets it right
The real truth about it is we’re all supposed to try
There ain’t no end to the sands I’ve been trying to cross
The real truth about it is my kind of life’s no better off
If I’ve got the maps or if I'm lost

Farewell Transmission

This song is Farewell Transmission, written by Jason Molina, whom I’d never heard of. He created a lot of good music and drank himself to death in 2013, aged 40. Damn, rock & roll eats so many of its children. The performance is by Kevin Morby and Waxahatchee’s Katie Crutchfield; the two are currently sweethearts.

There’s a YouTube video but they both look nervous, out of sorts — here’s the YTMusic link or just dial it up on whatever other streamer.

And when streaming technology turns you on to an artist you hadn’t known about, go look up their tour schedule and pull out your credit card if they’re coming anywhere near. Because streaming isn’t anywhere near the least you could do.

Where’s the Apple M2? 12 Jul 2021, 3:00 pm

DPReview just published Apple still hasn't made a truly “Pro” M1 Mac – so what’s the holdup? Following on the good performance and awesome power efficiency of the Apple M1, there’s a hungry background rumble in Mac-land along the lines of “Since the M1 is an entry-level chip, the next CPU is gonna blow everyone’s mind!” But it’s been eight months since the M1 shipped and we haven’t heard from Apple. I have a good guess what’s going on: It’s proving really hard to make a CPU (or SoC) that’s perceptibly faster than the M1. Here’s why.

Apple M1

Attribution: Henriok, CC0, via Wikimedia Commons

But first, does it matter? Obviously, people who (like me) spend a lot of time in compute-intensive programs like Lightroom Classic want those apps to be faster. To make it concrete: I’d cheerfully upgrade my 2019 16" MBP if there were an M1 version that was noticeably better. But there isn’t.

But let’s be clear: The M1 is plenty fast enough for the vast majority of what people do with computers: Email, video-watching, document-writing, slideshow-authoring, music playing, and so on. And it’s quiet and doesn’t use much juice. Yay. But…

The M1 is already fast!

Check out this benchmark in the DPReview piece.

DPReview Lightroom Classic import benchmark

If you’re interested in this stuff at all, you should really go read the article. There are lots more good graphs; also, the config and (especially) prices of the systems they benchmarked against are interesting.

I sorely miss the benchmark I saw in some other publication but can’t find now, where they measured the interactive performance when you load up a series of photos on-screen. These import & export measurements are useful, but frankly when I do that kind of thing I go read email or get a coffee while it’s happening, so it doesn’t really hold me up as such.

To date, I haven’t heard anyone saying Lightroom is significantly snappier on an M1 than on a recent Intel MBP. I’d be happy to be corrected.

Anyhow, this graph shows the M1 holding its own well against some pretty elite Intel and AMD silicon. (On top of which, it’ll be burning way fewer watts.) (But I don’t care that much when I’m at my desktop, which I usually am when doing media work.) So, right away, it looks like the M1 already sets a pretty high bar; a significant improvement won’t be cheap or easy.

If you look a little closer, the M1 clock speed maxes out at 3.2GHz, which is respectable but nothing special. In the benchmark above, the Intel CPU is specced to run at up to 5.1GHz and the AMD at up to 4.6. It’s interesting that Apple is getting competitive performance with fewer (specced) compute cycles.

But there’s plenty more digging to do there; all these clock rates are marked “Turbo” or “Boost” and thus mean “The speed the chip is guaranteed to never go faster than”. The actual number of delivered cycles you get when wrangling a big RAW camera image is what matters. It’s not crazy to assume that’s at least related to the specced max clock, but also not obviously true.

So, one obvious path Apple can take toward a snappier-feeling experience is upping the clock rate. Which it’s fair to assume they’re working on. But that’s a steep hill to climb; it’s cost Intel and AMD billions upon billions of investment to get those clock rates up.

Obviously, the M1 is evidence that Apple has an elite silicon design team. They’ve proved they can squeeze more compute out of fewer cycles burning fewer watts. This does not imply that they’ll be able to squeeze more cycles out of today’s silicon processes. I’m not saying they can’t. But it’s not surprising that, 8 months post-M1, they haven’t announced anything.

But threads!

It’s a long time since Moore’s law meant faster cycle times; most of the transistors Moore gives you go into more cores per chip and more threads per core. Also, memory controllers and I/O.

In the benchmark above, the M1 has something like half the effective threads offered by the Intel & AMD competition. So, is it surprising that the M1 still competes so well?

Nope. Here’s the dirty secret: Making computer programs run faster by spreading the work around multiple compute units is hard. In fact, the article you are reading will be the seventy-sixth on this blog tagged Technology/Concurrency. It’s a subject I’ve put a lot of work into, because it’s hard in interesting ways.

I guarantee that the Lightroom engineers at Adobe have worked their asses off trying to use the threads on modern CPUs to make the program feel faster. I can personally testify that over the years I’ve spent with Lightroom, the speedups have been, um, modest, while the slowdowns due to camera files getting bigger and photo-processing tools getting more sophisticated have been, um, not modest.

A lot of times when you’re waiting, irritated, for a computer to do something, you’re blocked on a single thread’s progress. So GHz really can matter.

Here’s another fact that matters. As programmers try to spread work around multiple cores, the return you get from each one added tends to fall off. Discouragingly steeply. So, I have no trouble believing that, at the moment, the fact that the M1 doesn’t have as many threads just doesn’t matter for interactive media-wrangling software.

Which means that an M2 distinguished by having lots more threads probably wouldn’t make people very happy.
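
To put a rough number on how steeply those returns fall off, here’s a minimal Amdahl’s-law sketch in Go. The 70% parallel fraction is a number I picked for illustration, not anything measured from Lightroom or anywhere else.

package main

import "fmt"

// speedup is the best case predicted by Amdahl's law when a fraction p
// of the work can run in parallel across n cores.
func speedup(p float64, n int) float64 {
    return 1.0 / ((1.0 - p) + p/float64(n))
}

func main() {
    p := 0.7 // assumed parallel fraction, picked for illustration only
    for _, n := range []int{2, 4, 8, 16, 32} {
        fmt.Printf("%2d cores: %.2fx\n", n, speedup(p, n))
    }
    // Climbs from about 1.5x at 2 cores to barely 3.1x at 32; the curve
    // flattens out fast, and it can never beat 1/(1-p), here 3.3x.
}

However you tweak the assumed fraction, the shape is the same: most of the gain arrives with the first few cores.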

But memory!

Yep, one problem with the M1 is that it supports a mere 16G of RAM; the competitors in the benchmark both had 32. So when the M2 comes along and supports 64G, it’ll wipe the floor with those pussies, right?

Um, not really. Let’s just pop up the performance monitor on my 16" MBP here, currently running Signal, Element, Chrome, Safari, Microsoft Edge, Goland, IntelliJ, Emacs, and Word. Look at that, only 20 of my 32G are being used. But wait, Lightroom isn’t running! I can fix that, hold on a second. Now it’s up to 21.5G.

The fact that I have 10G+ of RAM showing free shows that I’m under zero memory pressure. If this were a 16G box, some of those programs I’m not using just now would get squeezed out of memory and Lightroom would get what it needs.

OK, yes, I can and have maxed out this machine’s memory. But the returns on memory investment past 16G are, for most people, just not gonna be that dramatic in general and, specifically, probably won’t make your media operations feel faster. I speculate that there are 4K video tasks like color grading where you might notice the effect.

I’m totally sure that if supporting 32G would take Apple Silicon to the next level, they’d have shipped their next chip by now. But it wouldn’t so they haven’t.

Before we leave the subject of memory behind, there’s the issue of memory controllers and caching architectures and so on. Having lots of memory doesn’t help if you can’t feed its contents to your CPU fast enough. Since CPUs run a lot faster than memory — really a lot faster — this is a significant challenge. If Apple could use their silicon talents to build a memory-access subsystem with better throughput and latency than the competition, I’m pretty sure you’d notice the effects and it wouldn’t be subtle. Maybe they can. But it’s not surprising that they haven’t yet.

But I/O!

Where does the stuff in memory come from? From your disks, which these days are totally not going to be anything that spins, they’re going to be memory-only-slower. It feels to me like storage performance has progressed faster than CPU or memory in recent years. This matters. Once again, if Apple could figure out a way to give the path to and from storage significantly lower latency and higher throughput, you’d notice all right.

And to combine themes, using multiple cores to access storage in parallel can be a fruitful source of performance improvements. But, once again, it’s hard. And in the specific case of media wrangling, is probably more Adobe’s problem than Apple’s.

GPUs

Everybody knows that GPUs are faster than CPUs for fast compute. So wouldn’t better GPUs be a good way to make media apps faster?

The idea isn’t crazy. The last few releases of Lightroom have claimed to make more use of the GPU, but I haven’t really felt the speedup. Perhaps that’s because the GPU on this Mac is a (yawn) 8GB AMD Radeon Pro 5500M?

Anyhow, it’d be really surprising if Apple managed to get ahead of GPU makers like NVidia. Now, at this point, based on the M1 we should expect surprises from Apple. But I’m not even sure that’d be their best silicon bet.

Summarizing

If Apple wanted to build the M2 of my dreams, a faster clock rate might help. A better memory subsystem almost certainly would. Seriously better I/O, too. And a breakthrough in concurrent-software tech. Things that probably wouldn’t help: More threads, more memory, better GPU.

Will there be an awesome M2?

Where by “awesome” I mean “Tim thinks Lightroom Classic feels a lot faster.” Honestly: I don’t know. I suspect there are a whole lot of Mac geeks out there who just assume that this is going to happen based on how great the M1 is. If you’ve read this far you’ll know that I’m less optimistic.

But, who knows? Maybe Apple can find the right combination of clock speedup and memory magic and concurrency voodoo to make it happen. Best of luck to ’em.

They’ll need it.

Murderbot Diaries 10 Jul 2021, 3:00 pm

I suffered a massive loss of productivity for a few days last week because I started reading the first of this series and found that I had to read them all pretty much without stopping.

The Murderbot Diaries by Martha Wells

Several people on social media and in real life, people whose taste I respect, had recommended these books by Martha Wells, but to be honest I was put off by the title. Right at this point in time I’m not looking for dark dystopian ultraviolence.

What happened was, Lauren found Network Effect (vol. 5) in one of the local Little Free Libraries and raved, so we both went back to the start and moved on from there. Hmm, I’m wondering if the increasingly ubiquitous LFLs are becoming a force in book culture. Our experience suggests that publishers of extended series might benefit from driving around dropping individual books into LFLs here and there.

But back to the books. Yes, there is ultraviolence. But the people and bots on the receiving ends pretty well all deserve what they’re getting. And anyhow that’s less than half the material. The majority is occupied by the protagonist’s internal monologue. Which is misanthropic, cynical and amusing, and has much more depth than would really be needed to just move the story along. It is impossible — well, it was impossible for me — to avoid starting to care about them.

The central issue is that while the protagonist does not think it’s human, it oozes humanity all over the place. And isn’t terribly happy about it.

Anyhow, fun stuff, super well-written, unqualifiedly recommended.

TV?

These days, any time you find yourself enjoying a fiction series, you have to wonder how it’ll play in extended-story-arc streaming TV.

Murderbot, well, I dunno. The ultraviolence could be done with lots of sci-fi eye candy in The Expanse style, I imagine. But all that running commentary? It could be done — you’d need a super strong voice actor in perfect command of a broad range of tones. And I sense (but don’t have a clear picture of) an opportunity to break the fourth wall, where the diary itself becomes a character in the story.

Anyhow, if you haven’t read it already, chances are you’d enjoy it.

[Disclosure: There’s a link above to the books at Amazon and if you click it I might make a few pennies. Click like mad!]

Shorting Bitcoin 26 Jun 2021, 3:00 pm

I just bought put options on MicroStrategy ($MSTR), Coinbase ($COIN), and Purpose Bitcoin ETF ($BTCC-B.TO), all at a strike price not far off the current (late June) price, expiring around Christmas. Here’s the thinking.

Context

But first: This is part of this blog’s Investing theme, whose Intro makes it clear that I have no investment expertise and nobody should take this as investment advice, because it’s not. It’s just a bloggy disclosure of some of my own financial positions, which I owe readers anyhow.

Disclosures

I have personally made money buying and selling Bitcoin.

While I’m an admirer of the technology, I’ve repeatedly criticized Bitcoin specifically and blockchain in general, on the grounds that I’ve seen no practical real-world applications.

bitcoin logo

Attribution: Flying Logos, CC BY-SA 4.0, via Wikimedia Commons

Beliefs

I believe the following things about Bitcoin. This is not a scholarly article so I’m not going to provide references, but I’ve seen enough evidence that I’m willing to bet my own money based on them.

  1. A high proportion of all Bitcoins are owned by insiders; miners and people close to the exchanges. Their cost basis is much lower than the current Bitcoin price, and that cost is in practice sunk.

  2. Bitcoin is not usable as a currency because the transaction costs and latency are both too high. (Yes, I know about the Lightning network.)

  3. A high proportion of Bitcoin trading is intermediated by Tethers (USDT). There are strong reasons to suspect that Tethers are a highly unstable stablecoin. The facts about whatever backs them up are mostly unknown. In practice they’re quite difficult to convert to real money. There are repeated allegations that Tethers are created out of thin air to prop up the price of Bitcoin.

  4. The Bitcoin market is largely unregulated, and it’s easy to believe that much of the trading is seriously sketchy, whether that’s based on ad-hoc Tether creation, wash trading, or other well-known pump/dump schemes. These practices have run rampant on every financial market in human history that hasn’t regulated against them fiercely. Why should Bitcoin be any different?

  5. The net effect is that money flows in from, in effect, suckers and rubes, then into the pockets of the insiders. A bit goes back out to non-insiders but, as I know personally, converting Bitcoin into cash is a high-latency high-friction operation. Converting Tether to cash? Good luck with that.

  6. Bitcoin’s Byzantine-generals solution, based on proof-of-waste, is unacceptable in the face of the oncoming climate crisis.

  7. To the extent that Bitcoin has an ideology, it’s some sort of mutant greedhead libertarian claptrap. Most people on the scene can’t spell “ideology” and are there to make a quick speculative buck. Since Bitcoin has no practical uses, the buyer is a fool who is counting on eventually finding a greater fool.

My best guess is that pretty soon the supply of greater fools runs out. At that point the insiders holding the bulk of Bitcoins will rationally be willing to unload for dramatically lower prices, which probably leads to a dramatic deflation. This could be provoked by a Tether collapse, or legal action from any one of a number of governments, or the public exposure of egregious insider sleaze. Or some surprise of the kind that history is full of, the kind that nobody was expecting.

When will it happen? I dunno. I’ll be astonished if we get through 2021 without an explosion.

How to short Bitcoin?

The classic short would be to borrow Bitcoins and sell them, in the expectation of being able to buy them back for much less when the time comes to return them. But I’ve sold Bitcoin and I didn’t like the experience. Also, if I’m wrong, the downside is unlimited, which violates our #BeCareful investing principle. So no.
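
If you don’t trade options, here’s a toy Go sketch of why a put caps the downside where a naked short doesn’t; the strike, the premium, and the prices are invented round numbers, and the sketch ignores fees, margin, and taxes.

package main

import "fmt"

// shortProfit: sell borrowed coins at soldAt, buy them back at spot.
// The loss grows without bound as spot rises.
func shortProfit(soldAt, spot float64) float64 {
    return soldAt - spot
}

// putProfit: the right to sell at strike. Worst case the option expires
// worthless and the loss is capped at the premium paid.
func putProfit(strike, spot, premium float64) float64 {
    payoff := strike - spot
    if payoff < 0 {
        payoff = 0
    }
    return payoff - premium
}

func main() {
    for _, spot := range []float64{10000, 35000, 100000} {
        fmt.Printf("price %6.0f: short %+8.0f, put %+8.0f\n",
            spot, shortProfit(35000, spot), putProfit(35000, spot, 2000))
    }
}

The short’s loss column just keeps growing as the price does; the put’s never gets worse than the premium.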

Some of the crypto exchanges offer options, including puts. But I personally have little to no faith in the integrity or durability of these organizations. Should Bitcoin take the kind of dive I expect, the chances of getting your Put exercised would be about zilch. So, no.

BTCC.B Canadian Bitcoin ETF

In Canada, there’s the Purpose Bitcoin ETF (BTCC.B on TSX), which actually trades on the mainstream market. Which I take to mean that a put option should exercise fine even during a meltdown because whoever wrote it would have had to establish their ability to cover margin. Nothing fancy about it; as I write this its assets are 21597.3588 BTC.

Then there’s this company called MicroStrategy ($MSTR) which has been around since 1989 and sells business-intelligence and analytics software. I have no idea if the software is any good.

They became infamous in March 2000 upon revealing “accounting problems”. The share price collapsed, marking the start of the dot-com crash. The US Securities & Exchange Commission sued their asses for fraud and eventually the company settled, paying big fines without admitting any guilt.

21 years later, the company still has the same CEO, Michael Saylor. His Twitter avatar now has Bitcoin-y laser eyeballs, as you can see in the tweet below.

Saylor announces more $MSTR Bitcoin buys

Between August 2020 and June 2021, MicroStrategy bought a lot of Bitcoin. There’s a prominent Bitcoin-labeled pointer on the front page at microstrategy.com which leads to hope.com — Bitcoin is Hope. It’s absurd.

Recall the words “rube” and “sucker” that I used above? I think MicroStrategy is one of those, corporately. Maybe their share price can survive the Bitcoin bet going south? But I doubt it. So I bought puts.

Finally, Coinbase. I see no reason to think they’re dishonest or stupid, and I know people who’ve used them for Bitcoin trading and came away happy. If you believe in the long-term existence of a lively Bitcoin marketplace, they’re probably a good investment. But I don’t. So, I picked up puts.

Looking forward

Puts are pretty cheap. If I’m totally wrong and Bitcoin is still sailing along at the end of 2021, I’ll be annoyed but not impoverished. If it crashes I’ll be sad for the unfortunates who lost their stakes, and entirely unsympathetic to the insider community.

Will report back.

Investing Intro 25 Jun 2021, 3:00 pm

We’ve started to actively manage some of our family investments. It’s entertaining me, and I notice people really like talking about money, so why not talk about it here? This is the start of a new blog category.

[Important: I have no training or expertise in managing money, am not trying to influence or convince anyone, and you would be very foolish to treat this as investment advice, because it isn’t.]

[Also important: I think it’s important that you know about the financial interests of anyone whose words you’re reading, and any potential conflicts of interest. I will likely write positively or negatively about areas where I’m invested, and I think I owe my readers disclosure. So I might as well make blog fodder out of it.]

Background

I’ve been employed in the high-tech sector since 1981, my spouse since 1990 or so; salaries are good, stock options pay off sometimes, and we’ve had strokes of luck. We think we have enough saved up to get us by and educate our kids.

So we’ve parked the savings with a smallish money-management firm who build customers a conservative, balanced, and diversified portfolio in exchange for a very small fully-disclosed fee. The effect is that the money (net of fees) grows, though not as fast as the stock market does when it’s on a tear (as at the moment), and shrinks a lot less when the market’s tumbling.

This approach has worked OK for us — my involvement is limited to glancing at the balance once or twice a month — but I’m not claiming it’s the only way; I know people in similar situations who get good results working with giants like Fidelity and Vanguard.

We have a family corporation because back in the Nineties when I was an indie, IBM wanted me to consult for them but wouldn’t do the deal if we weren’t a company. Lauren’s used it since then to facilitate her consulting practice. Then, this year, the company had a little windfall when a US M&A deal unexpectedly turned some shares I’d earned at another advisory gig into cash.

So, rather than put the new eggs in the existing money-management basket, and because it wasn’t that much money, we decided to run it ourselves.

By the way: Canadian tax law means that a great big chunk of the windfall will eventually go off to Ottawa. Which I’m OK with; being Canadian is, on balance, a good financial bargain.

When we talked about managing this money ourselves, we agreed on a set of principles aimed at minimizing stress and maximizing peace-of-mind. Here they are.

Principle: Be careful

We’re cautious, no gambling instinct at all. So there’ll be no big white-knuckle bets like short-selling or option-writing.

Also, we believe in the conventional wisdom about buying low-cost ETFs not shares or mutuals, about diversifying, about not trying to time the market, and so on.

Principle: Do no harm

We are not gonna route money to anything that’s a significant contributor to the climate emergency.

Similarly, we’ll try to avoid supporting oppressive governments such as China’s and technologies which aim to achieve damaging ends, such as AdTech, surveillance, or the gig economy.

I’m starting to see interesting progressive investment opportunities; we’ll watch that space.

Principle: Support the transition

We think that fortunes will be made in transitioning the energy economy to clean, renewable sources. Others will be made remediating the damaging effects of climate change. Also, the planet needs these things to happen. So they feel like two good sectors to invest in.

Principle: Short bad tech

We have unusually deep exposure to the technology business. Still, I’d be nervous about trying to pick winners.

But, looking back, I’ve been good at spotting technology crazes that were empty at their core and failed to deliver value, and also big well-regarded companies that were making bad technology bets. On social media, I’ve sneered freely at various technologies I didn’t like. Now I can put a little money where my mouth is.

Going forward

I expect there to be occasional short blog pieces in which I discuss individual investment moves. I hope they start arguments. I’ll be honest when we turn out to have been wrong, and try to (at least mostly) restrain my gloating when we’re right.

Galaxy Tab S7+ 22 Jun 2021, 3:00 pm

I impulse-bought this big Samsung slab which I guess represents the state of the art in Android tabletry and is trying to occupy an iPad-like spot in the ecosystem. It’s got issues but I’m keeping it. I’m writing this based on my perception that not many people have a tablet that’s not an iPad, so the territory is only lightly explored.

Credit is due to Nelson Minar, who has a sort-of-blog where he diarizes his personal tech divagations. It was his short, to-the-point piece on the S7+ that awoke the impulse and got me here. If you’re interested at all in this thing you should go read that. I’ll wait.

This is not gonna be an iPad comparo, if only because I’m not up-to-date on those. I occasionally use a two-year-old basic entry-level iPad and it’s cool, but I don’t think it’s trying to be what this is trying to be. I’m not too worried; I think that there are a lot of people who (like me) are pretty invested in either the iOS or Android ecosystems and, if they’re the latter and want a tablet, this is going to be a strong candidate.

[Update: Nelson dropped in a comment below pointing to his earlier post, Android Tablet: Samsung Galaxy Tab S5e, which talks more generally about Android tablets and software on them.]

Samsung S7+ tab displaying an MLB.tv ball game

The best part…

Nelson opened with the screen quality and wow, well yeah. If you want specs go look at the Wikipedia entry but let me tell you, it’s really big and really bright and really sharp.

The screen is made much more useful by this, uh, flat thing, that attaches magnetically to the back of the tab and has a little bulge on it to store the “S pen” stylus. It’s got a flap thingie at the bottom that you can bend out and back — the resistance is stiff — which is designed to prop up the tablet, both horizontally and vertically; the pen-holding bulge keeps it from falling over. Here’s a picture.

Galaxy Tab S7+ mounted vertically running Kindle.

This is Kindle displaying the 2nd page of Fritz Leiber’s Our Lady of Darkness, which by the way is the best fiction ever written, if your criterion is imaginative and skilful use of San Francisco as a backdrop.

I normally read books either on paper or on a Kindle Oasis, and like both. But I have to say I really like having a huge slab of bright, crisp, well-typeset text that holds itself up so I can scratch my butt or sip my coffee while I read. I’ve already inhaled one book on it and I’m sure there will be more.

The Economist app is brilliant too; many articles fit on a single page. Nelson mentioned comic books, which I haven’t tried yet.

Fast!

Really freaking fast, I mean; another nice thing about the device. I’ve never felt the urge to complain about my (now-outmoded I guess) Pixel 3’s performance, but this does everything faster. Plus once you’ve experienced 120Hz scrolling, it starts to feel addictive.

Android

Off the top there’s all this weird Samsung shit in your face but I followed Nelson’s advice and dropped in Nova Launcher, yielding a very Pixel-like experience. And Android, all these years later, has the best notification system of any computing environment, any form factor, that I’ve ever been near. To this day I’ll be working on my Mac and a couple of notifications will float up in the corner of the screen when I’m zoned in on something else; so what I do when I want to refresh context is (walking across the room first if necessary) grab my Pixel and pull down the notifications to see what’s happened.

Plus the gestural navigation and (maybe most important?) a Back button that basically always does the right thing.

The worst thing

It’s a real klunker. If you put the back-case-thingie on the back and the keyboard (yes, we’ll get to that) on the front, the combo is heavier than my wife’s 13" M1 MacBook Air. I usually just have the back thingie attached, because it’s so useful and also the pen-holding bulge is a comfy carry-grip. Feels heavier than I’d like.

Stripped of all attachments it’s acceptably light, I guess, but my hands don’t like the sharp corners.

Photography

Yep, it’s got a camera. At one point in history I wondered why tablets might need them, then I saw tourists walking around taking pictures with iPads and realized that they had attained the long-cherished ideal of WYSIWYG photography. So why not? Here are two pictures I took just now with it.

White rose, shot with Galaxy Tab S7+
Native honeysuckles, shot with Galaxy Tab S7+

They’re OK. What’s actually interesting is that they were not only shot but processed on the S7+ with Lightroom, which is actually pretty delightful on this device. Delightful enough, in fact, that I wonder if I should look at non-Classic Lightroom, which I think would let me edit my Fuji pix on the slab.

[Having said that, for some reason I can’t get Lightroom to auto-import the pictures.]

A keyboard, you said?

Indeed. I plugged it in and set it up and yeah, it works, but I was having severe cognitive dissonance. What’s it actually for? You’re not gonna set this contraption up to facilitate replying to a chat message. Don’t know about you, but I use a keyboard when I’m in creative mode, which often means writing. So, here’s a shot of the S7+ with keyboard beside my 16" MBP for context, set up for writing.

Galaxy Tab S7+ beside a 16" MacBook Pro

What, you wonder, might be on the screen? Well, Emacs, obviously, because that’s what I write this in. In fact, here’s a screen photo with the first few paragraphs of an early draft of what you’re now reading.

My custom Emacs mode for blogging works fine, although the syntax coloring went off the rails somehow.

Emacs running on the Galaxy Tab S7+

Yaks were shaved. There seems to be no native Android Emacs? I’m surprised and disappointed. But you can install Termux, which gets you a perfectly acceptable shell environment and a pkg command that can install open-source packages from, uh, somewhere. Then it’s more work than you might think to get files and programs onto the device; I ended up using Curl mostly.

So yeah, I could in principle blog on this thing. Mind you, I’d need to get Perl and MySQL and so on running, but if Emacs can do it that ought to be possible.

OK, I kid. Normal people who write their blogs in a nice JS-browser environment would probably find themselves perfectly comfortable living their social-media lives on an S7+.

But that keyboard… it’s a pretty strange beast. It has 12 function keys and, down in the bottom left corner, Ctrl, Fn, Cmd, and Alt keys; to the spacebar’s right are keys labeled “Lang” and “Alt Gr”. Because I’m in Canada (I assume) some of the labels are bilingual and there’s a special key reserved for É. What’s just wrong is that some of the keys don’t produce what the labels say they do. While I was fooling with the shell I obviously needed “<” and “>”. There are keys with those symbols on them but they don’t emit those characters. Fortunately, my muscle memory took me to shift-, and shift-. which worked.

So something went off the rails here. Having said that, it’s a perfectly nice responsive keyboard and, with a bit of practice, I could live with it.

Miscellania

There’s a stylus called the “S Pen” that is said to be magically responsive, understand drawing pressure changes, and usable as a remote control so you can wiggle it around and drive the S7+ from across the table. Don’t ask me about drawing, one reason I like computers is that I have shaky hands, plus no shred of talent at drawing or penmanship. So it’s unlikely I’d ever use this thing.

It comes with a SIM card slot. Um, I guess that’s nice? Not sure what the scenario is that makes that interesting.

The battery life is OK. I binge-read most of a book in a single multi-hour sitting and that burned half the power. So it’s unlikely you’d ever run flat in a day; that’s all anyone really needs I think?

Samsung

I bought it direct from their website, which was cheaper than Amazon, and it showed up plenty fast. But they strongly de-emphasized the S7+ in favor of the smaller S7; near as I can tell there isn’t actually a dedicated S7+ page, and I had to do considerable backing and filling to actually order the big slab. Perhaps this is a consequence of me being in Canada?

Nelson says he thinks the S8 is imminent, which might also explain the weirdness. If he’s right, and the S8 turns out to be lighter and more graceful, I’ll be grouchy.

History

I have a bit, with Android tablets. Back in the fall of 2010 I took the first “Samsung Galaxy Tab” on a world tour. It was controversial because Google hadn’t managed to ship the first “official” Android Tablet, so Andy Rubin was pissed. Later, I repeatedly sang the praises of the Nexus 7, which I carried for years. Both of these were at the 7" form factor, so this is the only “big” tablet I’ve ever owned.

Does the world have a place for Android tablets? I dunno. But I’m holding on to this one for now.

Long Links 1 Jun 2021, 3:00 pm

Welcome to the June 2021 issue of Long Links, in which I curate long-form works that I enjoyed last month. Even if you think all these look interesting, you probably don’t have time to read them assuming you have a job, which I don’t. My hope is that one or two will reward your attention.

Has an Old Soviet Mystery at Last Been Solved? — they’re talking about the Dyatlov Pass incident, which has provided fuel for mystery-lovers and conspiracy nuts for a half-century now. If you’ve not heard the Dyatlov story you might want to read this anyhow because it’s colorful and fearful. If you have, then you definitely want to dive into this one because I’m pretty well convinced they’ve figured it out.

Chipotle Is a Criminal Enterprise Built on Exploitation. Tl;dr: New York is suing Chipotle’s ass, looking for a half-billion dollars in penalties for wage theft. Even by the low standards of 21st-century capitalism, Chipotle seems like a terrible citizen of the world. Don’t eat there.

Why Did It Take So Long to Accept the Facts About Covid? Among the many reasons Covid-19 is interesting (aside from “Will it kill me?”) is as a case study of how science accumulates data, draws conclusions, and communicates them. The specific story is the move from the spring-2020 narrative of “Wash your hands, masks are irrelevant” to 2021’s “Indoor aerosol-based transmission is dominant, so let’s worry about that.” The earlier narrative probably cost us huge numbers of human lives. Nobody suspects anyone of evil motives, but it’s clearly a problem worth thinking about when the official narrative is so slow to update. Masterfully told by Zeynep Tufekci, a sociologist who has become one of the best commentators on Covid public-health issues.

Although I grew up in the Middle East, I’m reluctant to write about it because there’s lots of atrocities to denounce but no good guys to praise. The people who wrote the following are more courageous than I am. The central controversy is, of course, over whether the “Two-state solution” is still possible and if not, what then? Everyone agrees on one thing: The current official “peace process” is dead and rotting stinkily. The Old Israeli-Palestinian Conflict Is Dead — Long Live the Emerging Israeli-Palestinian Conflict is from Nathan J. Brown at the Carnegie Endowment for International Peace; it writes off two-states, acknowledges that one-state is unlikely too, and offers tentative ideas about ways forward.

A Liberal Zionist’s Move to the Left on the Israeli-Palestinian Conflict is about Peter Beinart, a long-time lion of intellectual Judaism. He is a rigorous thinker and that rigor has forced him into a two-states-is-dead position. Now he’s arguing for the Palestinian Right of Return; just thinking this probably puts him at grave risk of assassination. This is a big long piece and although I’ve watched the Mideast closely for decades, I felt I’d learned useful things.

Gershom Gorenberg has for a long time been one of my favorite Israeli voices; sentimental but clear-eyed and really smart. His latest big piece is Israelis and Palestinians can’t go on like this. Weep for us. It’s a profoundly pessimistic piece about how Israel got into its current mindset, which is very hard for people who don’t live there to understand. Such strong writing.

Let’s talk about some cheerful stuff, in particular about recent progress on the climate emergency. Everyone’s already written about Big Oil’s defeats in the courts and boardrooms. So here’s J.P. Morgan’s Energy Outlook. It’s huge and I haven’t read all of it, but it feels to me like a nice comprehensive summary of the current state of play. The investment community, of course, is trying to figure out how to make money in a post-fossil-fuels world. I wish them the best of luck and if you’re one of them, you should read this.

Staying with the climate emergency, check out Separating Hype from Hydrogen – Part Two: The Demand Side. Anyone who cares about this stuff has to be wondering if a hypothetical Hydrogen Economy is a significant part of our path forward. The question is a little hard to answer because for some reason hydrogen has attracted a cohort of pitchmen who want to tell you it’s the best solution for everything. A close, clear-eyed look suggests that yes, there is a role for hydrogen, but it’s less important than the enthusiasts want you to think. The conclusions are helpfully pictured in this slide.

More good news from Germany; the courts are starting to kick ass. Germany’s more ambitious climate goals pressure industry to clean up has the details.

Let’s talk about my favorite nontechnical hobby, photography. Hmm, all these pieces are from DPReview. Let’s start with New York Times unveils prototype system aimed at inspiring confidence in photojournalism. I may have mentioned the Content Authenticity Initiative before. On the Internet we say “Pictures or it didn’t happen!” but we should be worrying about “Pictures and it didn’t happen!”. Because photos and video are way too easy to manipulate these days. The Initiative, whose key launch partner was Adobe if I’m reading the history right, tries to use digital signatures to establish a provenance chain from a photographer to the graphic you see on your screen. I’m delighted this is happening, and optimistic that this description will raise consciousnesses about what’s possible these days with modern security technology. No, blockchain is not involved.

Enthusiast photographers tend to obsess about lenses, and one of the standard lenses almost every such person loves is a fast 50mm prime lens, a “nifty fifty”. They make the people you’re taking pictures of look better and have also traditionally had the virtues of being cheap and simple. No longer. Why are modern 50mm lenses so damned complicated? explains.

Finally, it’s all in the photographer’s wrist. The Best & Worst Ways To Hold Your Camera is a YouTube full of exciting wrist action.

Hey, let’s do politics. These days, my feelings are that occasionally laughing at US “conservatives” is essential therapy, otherwise you might do something crazy, albeit not as crazy as what they’re doing. The G.O.P. Won It All in Texas. Then It Turned on Itself has details. Your eyes will roll.

David Shor, a Democratic-party strategist and number-cruncher, impresses me more with everything he produces. For example, David Shor on Why Trump Was Good for the GOP and How Dems Can Win in 2022 is a long interview with him, to which I say “Wow”.

Only one science/engineering entry this month. I am delighted every time I discover some obvious part of the human experience for which science doesn’t have a good explanation. We can all use the humility. For example: No One Can Explain Why Planes Stay in the Air.

Stepping across the Pacific, here’s Tired of Running in Place, Young Chinese ‘Lie Down’. Now watch out, this is from Sixth Tone, which is out of Shanghai and thus indirectly an organ of China’s ethnofascist autocracy. Having said that, they regularly manage to be interesting.

Ending the Long Links on a musical note, let me recommend Brent Morrison's Rockin’ Blues Show, an Internet Radio show and exactly what it says. Everybody’s life can benefit from rockin’ blues. And now for something completely different: Lebanese Music From A Millionaires' Playground is a production from 1962, featuring Fairuz, Lebanon’s musical queen, who, I discovered while writing this, is still living. Her voice has always touched my heart. Finally, something to ease your troubled mind: Holly Bowling, live on a Colorado mountaintop. She’s a pianist with (to me) a Keith Jarrett influence (not a bad thing) whose music is mostly sourced from songs by the Grateful Dead and Phish. From those songs as performed live, of course.

Hang in there, everyone.

Sixteen Classics 22 May 2021, 3:00 pm

I just finished writing about the process of, and lessons from, processing 900 inherited LPs into my collection. I thought it wouldn’t be fair to stay meta, so here is a handful of my favorites from among the new arrivals. Yes, they are all music by dead white men, performed by other dead white men (and a couple of women). Sorry. They were released between 1953 and 1978.

You can buy these online at discogs.com but I don’t recommend it because Discogs is sleazy, or on eBay, but I don’t recommend that either. Because why not go to an actual record store (most towns have ’em) and cruise the Classical section? (It’s usually pretty quiet.) Pick up some Rock & Roll and Country and Dubstep and Reggaeton and Dixieland and Bollywood while you’re there too. My favorite thing to do in a record store is leaf through the “new arrivals”. Which, granted, wouldn’t include any of these.

Trumpet trills!

Trumpet Concertos played by Adolf Scherbaum

Trumpet Concertos, performed by Adolf Scherbaum. Early stuff; the composers are Haydn, Leopold Mozart, Marc Antoine Charpentier, Alessandro Stradella, Giuseppe Torelli, Vivaldi, Telemann, Johann Christoph Graupner, Johann Friedrich Fasch.

None of these works are musical monuments, but Scherbaum digs into the big baroque lilt and offers the juiciest trills and ornamentation I’ve ever heard anywhere; impossible not to smile. Sound quality is adequate.

Date: 1965.

Verdi Overtures

Opera, but no singing

Verdi Overtures, with Abbado and the London Symphony. I enjoy live opera but for no reason I can explain just don’t listen to recordings. Which is probably unfair to Verdi, because on the evidence here, he wrote really superior music. Abbado and the Londoners really bring it; this is the kind of record that, if you’re reading a book or something, regularly grabs your attention away. Terrific sound too.

Date: 1978.

David Oistrakh’s 60th birthday jubilee concert

Oistrakh!

David Oistrakh’s 60th Birthday Jubilee Concert. Oistrakh plays the Violin Concerto with the Moscow State Philharmonic, and then conducts them in the 6th “Pathétique” symphony. Recorded live on two successive evenings in September 1968 by Melodiya, the old Soviet record label.

This is my favorite among the 900 LPs, and it’s not close. I can’t say which performance I like better. Oistrakh, to quote George Clinton, tears the muthafuckin roof off in the violin concerto, with fabulous tone and dynamics, and isn’t shy about using showy cheap tricks, which is fine by me. Also this is one of Peter Ilyich’s best works.

The Pathétique is more controlled, but it’s controlled ferocity; some of the sequences have an aesthetic that is straight out of heavy metal.

In case it’s not obvious, listening to this album is like being at a very fine rock concert by someone who’s on top of their game, knows it, and is determined to send the audience home with their heads reeling.

Date: 1968.

Tchaikovsky Op. 50, Beaux Arts

Beaux Artistes

I mean the Beaux Arts Trio, who performed for fifty years starting in 1955. The Tchaikovsky Piano Trio in A Minor, Op. 50 is a really fine piece of chamber writing, and the Artistes bring loads of dynamics; this explodes out of the speakers.

Beethoven Archduke Trio, Beaux Arts

Beethoven “Archduke” trio, Op. 97. Another fine performance of essential music. There’s a problem with chamber music; it can be more fun to play than to listen to, in particular because a lot of recordings are dominated by steely mid-string surface noise. These two just speak truth, the wiggles in the LP grooves really capture what those fine instruments, well-played, sound like.

Date: 1971, 1975.

Mantovani, More Golden Hits

Very easy listening

Mantovani and his Orchestra — More Golden Hits. I have a bit of a soft spot for “easy listening” music from the period of my childhood, I think because my Dad liked to play it sometimes. Now I own several inches of such LPs, of which this is the best by miles. One generally doesn’t expect musical depth; in this case, one would be wrong. To start with, Mantovani gets a string sound — perhaps with the help of studio magic? — that is ravishing, just astonishingly sweet. Also, the arrangements are fine, the playing polished, and there are nice tunes here, especially Stranger in Paradise.

Date: 1976.

Victoria de los Ángeles

Hail Victoria

I mean Victoria de los Ángeles (1923-2005), Spanish soprano, and A World Of Song. I dropped the needle on this one and thought “pretty scratchy, probably not keeping it” but then Ms dlÁ started singing and I sat back and shut up. As I said, I rarely-to-never listen to recorded opera singers, but this is just an explosion of passion and power. I wish I’d found a way to see her perform in her lifetime. Wow.

Date: 1965.

Festival of Russian Music

Russian Reiner

Marche Slave — A Festival of Russian Music. I don’t know the Russian for “chestnut” in the musical sense, but these are those. If you ever took a high-school music class and the teacher played you Famous Russian Tunes, you’ll probably find them here. They’re famous because they’re good. But let’s be honest, this one makes the list at least partly because it’s an audio showpiece. Turn it up as loud as you can without someone calling 911, and sink into the musical goo.

Date: 1960.

Heifetz Bruch and Mozart

Bruch/Mozart violin

Heifetz — Bruch Concerto in G Minor — Mozart Concerto in D Major. Fine music, fine fiddler, fine orchestra, great sound; this is just a treat for the ears. The 19-year-old Mozart said his own debut performance of the D Major "went like oil" and I can see why, this is extremely smooth music.

Heifetz charges through the Bruch, high drama, maximum dynamics, thundering orchestra, soaring violin. What’s not to like?

Kyung-Wha Chung plays Bruch

Wait, did I mention the B side of the record first? Yes, Heifetz’s assault on the Bruch is good fun. But the stack of inherited LPs also included Bruch / Kyung-Wha Chung Violin Concerto and Scottish Fantasia. She decided to maximize flow not drama and it’s really a much better take. Then on the Fantasia she and the orchestra reach back and let ’er rip, impossible to listen without smiling.

Date: 1963, 1972.

Bach and Vivaldi sonatas by George Malcolm and Julian Bream

Julian & George

Julian Bream / George Malcolm ‎– Sonatas For Lute And Harpsichord. The sonatas are by Bach and Vivaldi and they’re great. Also, I like that the cover photo of Bream and Malcolm looks like they’re just back from several hours at the pub. Here we have two instruments neither of which have any sustain, so achieving legato is a challenge, which the boys laugh at; the slow movements are the best part. They take liberties with the arrangements, with a sure hand I think. The shift between Johann Sebastian’s music and Antonio’s is unsubtle, and left me appreciating each of them more.

Date: 1969.

Barenboim plays Mozart concertos

Tight Mozart

Daniel Barenboim, The English Chamber Orchestra - Mozart – Piano Concerto No. 14 In E-Flat, K. 449; Piano Concerto No. 15 In B-Flat, K. 450. It’s hard to go wrong with Barenboim and Mozart; also the orchestra is excellent and the recording just fine. But the thing that grabbed me here was the absolutely exceptional ensemble, as though the soloist and orchestra were sharing a single brain. Reminds me of a rock concert I stage-managed one time; the band had been on the road for a long time and, standing listening, one of my stagehands said to me “They so tight, they loose.” Yeah.

Date: 1968.

Peaceful arias by Albinoni, Handel, and others

Serenity

No, not the spaceship in Firefly, the feeling. L'Adagio D'Albinoni / Le Largo De Haendel Et Huit Arias Célèbres provides it. Along with Albinoni and Handel, there are tracks by Bach, Haydn, Gluck, and Mozart; every single one is a gem. Lots of modern-life situations demand serenity and it’s good to know there’s a flat round piece of vinyl that has it to offer.

Also, wonderful sound, but you probably won’t notice that.

Date: 1976.

Silverman plays Rachmaninoff

Take it down

Rachmaninoff / Silverman – Piano Sonata No. 1 In D Minor, Op. 28 / Etude-Tableau In C Minor, Op. 33 No. 3. Silverman is a Canadian piano player who brings a lot of feeling to what he plays and I’m a fan. This outing is moody and restrained; I’d describe the tone as “heavy” in a good way, as in fully laden. There’s no rushing and no theatrics, just a whole lot of very good playing that is entirely sure of where it’s going and makes you content to wait while it gets there.

Date: 1976.

Dorati/Hungarica Haydn symphonies Volume 7

Symphony boxes

Haydn — Philharmonia Hungarica, Antal Dorati — Symphonies 20 - 35. Dorati put together an orchestra of Hungarians who got through the Iron Curtain after the 1956 uprising and they released all of Haydn’s hundred-plus symphonies, in multi-LP box sets. I inherited three such boxes, have only listened to a sampling of the sides, but have I ever enjoyed them. It’s mind-boggling that a mere mortal could create this much music at a very, very high standard. The production gets out of the way, the sound is decent, and the playing is unflashy but then so are the symphonies, mostly. Awfully good stuff.

Date: 1973.

Heifetz plays Bach solo violin

Violin alone

Heifetz Plays Bach — Unaccompanied Sonatas And Partitas (complete). For many years I’ve cherished Gidon Kremer’s recording of this material, blogging at length about it in 2007. Heifetz isn’t as precise and arguably doesn’t dig as deep into this very deep music. But it’s impossible not to enjoy because Heifetz sounds like he’s enjoying playing it so much. Kremer’s instrument vanishes for me, I just hear the music. Listening to Heifetz I think “That’s some damn fine fiddling.” The fact that it’s in mono is irrelevant for a recording of a single instrument, and the sound is just fine.

Date: 1953.

LP Victory 22 May 2021, 3:00 pm

Some readers may remember a February 2019 blog post describing how I inherited 900 or so used LPs, mostly classical. As of this week, I’ve listened to all of them (or rejected them out of hand), and kept 220 or so (counting methods were imperfect). Herewith lessons and reconfigurations. Also there’s a companion piece on sixteen albums I especially liked.

The ritual

First of all, I just want to say how much I’ve enjoyed this. It was a late-evening thing, usually with the kids away from the big room, so just Lauren and me or sometimes just me. The pleasure comes in quanta of a 20-minute LP side.

I’ll miss it. So as I sat down to write this I grabbed randomly into the middle of the serried record ranks and came up with Fela Kuti’s Roforofo Fight. No recollection when or where I came by this, but it’s good.

LPs, really?!

Yes, really. I do not argue that vinyl is somehow better, in some absolute way, than digital. Nor do I believe that what pleases me is euphonious distortion. If you care about these issues, I wrote a principled defence of my analog habit.

How many?

It occurred to me that I should probably say how many records I now have. So I started counting but found it wearying so I asked Google “How many LPs per foot?” and the answer seems to be 70-ish. If that’s true I have about 600, a number most real collectors would scoff at. But on average they’re really good.

Why 600+ discards?

Because they were scratched, or had lousy sound that wasn’t redeemed by great music, or because the music was objectively bad (for example operetta or Ferrante & Teicher or several albums featuring the stylings of the house band at some resort in Bermuda), or I just didn’t like it. An example of the latter would be anything by Bartok — people I respect like him but I just don’t get it.

On the other hand, I started this project convinced that I hated everything by Debussy but I ended up keeping a few. And, as a Canadian and Bach lover I’m thus a Glenn Gould fan, but the guy who passed on this collection to me had more GG than any human should reasonably be asked to enjoy.

Too classical?

It would be reasonable to wonder if all this music by Dead White Euros is, well, a suspect use of listening time? To which I answer, in 2345, the music from 2021 that’s still being listened to will probably be the really, really good stuff. And I know for a fact it’ll be less white and male.

I’ll be honest: My soul has shrunk in the absence of loud electric guitar noise. Which these days I enjoy only alone in my car. But it’s not like being in the room; I suspect that on my first post-Covid rock-&-roll excursion I’ll cry like a baby.

Too old?

When I look at my Sixteen Classics, the music isn’t just classical, it’s pretty old stuff. Nothing from Stravinsky, any of his contemporaries, or anything later, with the exception of Mantovani’s awesome easy-listening offering, not exactly ground-breaking stuff. I plead guilty and defer to the taste of the gentleman whose records I inherited.

Having said that, I kept a couple of Schoenbergs and Weberns that I expected to hate but didn’t. Should write about that stuff.

Discogs

I had the feeling that I should track which records I was keeping so as a matter of principle I recorded them all at discogs.com, so now you can visit my collection, currently composed almost entirely of the tour-through-900 output. I sort of hate Discogs because the time I used it to try to buy music was such a disaster. But the database is by any sane measure a treasure. And they let me use it for my personal collection-management purposes for no money, so what’s not to like?

New record player

Long-time readers will know that for many years I rejoiced in the sounds produced by my Simon Yorke Series 9. But it failed under pressure. Its construction, while marvelous, offers no protection from toddlers or kittens or clumsiness, and also it’s very difficult to get a new cartridge mounted correctly.

I’d bought, and rejoiced in the sound of, the hand-built Dynavector DV-20X2 and managed to get it perfectly in place and sounding wonderful when household circumstances destroyed it. So, consumed by anger, I put the Simon Yorke in a box and bought a nice Rega Planar Six and stayed with the Dynavector cartridge.

I should probably write more about this choice but for now, suffice it to say that the Rega has a substantial plastic lid that folds down and protects the delicate parts from family mischief. On top of which it sounds excellent. Sometimes I feel I’ve lost a fractional point of quality at the margin, but I can live with it.

Except that I should sell the Simon Yorke to someone who’d appreciate it.

Nice little hobby

Vinyl has become one. There’s record-store culture and there’s record-player culture and there’s a lot of good music out there. You can do it at a reasonable price. Sure, Spotify or YouTube will eventually stream everything, but there’s something to be said for going out and hunting your own music. And there’s a lot to be said for sitting down in a quiet room and listening carefully.

Testing in the Twenties 15 May 2021, 3:00 pm

Grown-up software developers know perfectly well that testing is important. But — speaking here from experience — many aren’t doing enough. So I’m here to bang the testing drum, which our profession shouldn’t need to hear but apparently does.

This was provoked by two Twitter threads (here and here) from Justin Searls, from which a couple of quotes: “almost all the advice you hear about software testing is bad. It’s either bad on its face or it leads to bad outcomes or it distracts by focusing on the wrong thing (usually tools)” and “Nearly zero teams write expressive tests that establish clear boundaries, run quickly & reliably, and only fail for useful reasons. Focus on that instead.” [Note: Justin apparently is in the testing business.]

Twitter threads twist and fork and are hard to follow, so I’m going to reach in and reproduce a couple of image grabs from one branch.

[Two diagrams of testing shapes from those threads: one picture credited to Dodds, the other credited to Spotify.]

Let me put a stake in the ground: I think those misshapen blobs are seriously wrong in important ways.

My prejudices

I’ve been doing software for money since 1979 and while it’s perfectly possible that I’m wrong, it’s not for lack of experience. Having said that, almost all my meaningful work has been low-level infrastructural stuff: Parsers, message routers, data viz frameworks, Web crawlers, full-text search. So it’s possible that some of my findings are less true once you get out of the infrastructure space.

History

In the first twenty years of my programming life, say up till the turn of the millennium, there was shockingly little software testing in the mainstream. One result was, to quote Gerald Weinberg’s often-repeated crack, “If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.”

Back then it seemed that for any piece of software I wrote, after a couple of years I started hating it, because it became increasingly brittle and terrifying. Looking back in the rear-view, I’m thinking I was reacting to the experience, common with untested code, of small changes unexpectedly causing large breakages for reasons that are hard to understand.

Sometime in the first decade of this millennium, the needle moved. My perception is that the initial impetus came at least partly out of the Ruby community, accelerated by the rise of Rails. I started to hear the term “test-infected”, and I noticed that code submissions were apt to be coldly rejected if they weren’t accompanied by decent unit tests.

Others have told me they initially got test-infected by the conversation around Martin Fowler’s Refactoring book, originally from 1999, which made the point that you can’t really refactor untested code.

In particular I remember attending the Scottish Ruby Conference in 2010 and it seemed like more or less half the presentations were on testing best-practices and technology. I learned lessons there that I’m still using today.

I’m pretty convinced that the biggest single contributor to improved software in my lifetime wasn’t object-orientation or higher-level languages or functional programming or strong typing or MVC or anything else: It was the rise of testing culture.

What I believe

The way we do things now is better. In the builders-and-programmers metaphor, civilization need not fear woodpeckers.

For example: In my years at Google and AWS, we had outages and failures, but very very few of them were due to anything as simple as a software bug. Botched deployments, throttling misconfigurations, cert problems (OMG cert problems), DNS hiccups, an intern doing a load test with a Python script, malfunctioning canaries, there are lots of branches in that trail of tears. But usually not just a bug.

I can’t remember when precisely I became infected, but I can testify: Once you are, you’re never going to be comfortable in the presence of untested code.

Yes, you could use a public toilet and not wash your hands. Yes, you could eat spaghetti with your fingers. But responsible adults just don’t do those things. Nor do they ship untested code. And by the way, I no longer hate software that I’ve been working on for a while.

I became monotonically less tolerant of lousy testing with every year that went by. I blocked promotions, pulled rank, berated senior development managers, and was generally pig-headed. I can get away with this (mostly) without making enemies because I’m respectful and friendly and sympathetic. But not, on this issue, flexible.

So, here’s the hill I’ll die on (er, well, a range of foothills I guess):

  1. Unit tests are an essential investment in your software’s future.

  2. Test coverage data is useful and you should keep an eye on it.

  3. Untested legacy code bases can and should be improved incrementally.

  4. Unit tests need to run very quickly with a single IDE key-combo, and it’s perfectly OK to run them every few seconds like a nervous tic.

  5. There’s no room for testing religions; do what works.

  6. Unit tests empower code reviewers.

  7. Integration tests are super important and super hard, particularly in a microservices context.

  8. Integration tests need to pass 100%, it’s not OK for there to be failures that are ignored.

  9. Integration tests need to run “fast enough”.

  10. It’s good for tests to include benchmarks.

Now I’ll expand on the claims in that list. Some of them need no further defense (e.g. “unit tests should run fast”) and will get none. But first…

Can you prove it works?

Um, nope. I’ve looked around for high-quality research on testing efficacy, and didn’t find much.

Which shouldn’t be surprising. You’d need to find two substantial teams doing nontrivial development tasks where there is rough-or-better equivalence in scale, structure, tooling, skill levels, and work practices — in everything but testing. Then you’d need to study productivity and quality over a decade or longer. As far as I know, nobody’s ever done this and frankly, I’m not holding my breath. So we’re left with anecdata, what Nero Wolfe called “Intelligence informed by experience.”

So let’s not kid ourselves that our software-testing tenets constitute scientific knowledge. But the world has other kinds of useful lessons, so let’s also not compromise on what our experience teaches us is right.

Unit tests matter now and later

When you’re creating a new feature and implementing a bunch of functions to do it, don’t kid yourself that you’re smart enough, in advance, to know which ones are going to be error-prone, which are going to be bottlenecks, and which ones are going to be hard for your successors to understand. Nobody is smart enough! So write tests for everything that’s not a one-line accessor.
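To make that concrete, here is a minimal sketch, in Go, of the sort of plain table-driven unit test I have in mind; the package and the SplitFields function it exercises are hypothetical, just to show the shape.

    package parser

    import (
        "reflect"
        "testing"
    )

    // TestSplitFields drives a (hypothetical) SplitFields(string) []string
    // through a table of cases; each case is one input plus the fields we
    // expect back.
    func TestSplitFields(t *testing.T) {
        cases := []struct {
            name  string
            input string
            want  []string
        }{
            {"empty", "", nil},
            {"single field", "a", []string{"a"}},
            {"two fields", "a,b", []string{"a", "b"}},
        }
        for _, c := range cases {
            t.Run(c.name, func(t *testing.T) {
                if got := SplitFields(c.input); !reflect.DeepEqual(got, c.want) {
                    t.Errorf("SplitFields(%q) = %v, want %v", c.input, got, c.want)
                }
            })
        }
    }

Nothing clever, which is the point: adding a case costs one line, and when something breaks the failure message tells you which input did it.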

In case it’s not obvious, the graphic above from Spotify that dismisses unit testing with the label “implementation detail” offends me. I smell Architecture Astronautics here, people who think all the work is getting the boxes and arrows right on the whiteboard, and are above dirtying their hands with semicolons and if statements. If your basic microservice code isn’t well-tested you’re building on sand.

Working in a well-unit-tested codebase gives developers courage. If a little behavior change would benefit from re-implementing an API or two you can be bold, can go ahead and do it. Because with good unit tests, if you screw up, you’ll find out fast.

And remember that code is read and updated way more often than it’s written. I personally think that writing good tests helps the developer during the first development pass and doesn’t slow them down. But I know, as well as I know anything about this vocation, that unit tests give a major productivity and pain-reduction boost to the many subsequent developers who will be learning and revising this code. That’s business value!

Exceptions

Where can we ease up on unit-test coverage? Back in 2012 I wrote about how testing UI code, and in particular mobile-UI code, is unreasonably hard, hard enough to probably not be a good investment in some cases.

Here’s another example, specific to the Java world, where in the presence of dependency-injection frameworks you have huge files with literally thousands of lines of config gibberish [*cough* Spring Boot *cough*] and life’s just too short.

A certain number of exception-handling scenarios are so far-fetched that you’d expect your data center to be in flames before they happen, at which point an IOException is going to be the least of your troubles. So maybe don’t obsess about those particular if err != nil clauses.
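For instance, here’s a hedged little Go sketch of the kind of error branch I mean; loadConfig and its path argument are invented for illustration.

    package config

    import (
        "fmt"
        "os"
    )

    // loadConfig reads a small file that ships alongside the binary. The
    // error branch below only fires if the deployment is mangled or the
    // local filesystem is in serious trouble; chasing test coverage of it
    // is probably not the best use of anyone's time.
    func loadConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("reading config %q: %w", path, err)
        }
        return data, nil
    }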

Coverage data

I’m not dogmatic about any particular codebase hitting any particular coverage number. But the data is useful and you should pay attention to it.

First of all, look for anomalies: Files that have noticeably low (or high) coverage numbers. Look for changes between check-ins.

And coverage data is more than just a percentage number. When I’m most of the way through some particular piece of programming, I like to do a test run with coverage on and then quickly glance at all the significant code chunks, looking at the green and red sidebars. Every time I do this I get surprises, usually in the form of some file where I thought my unit tests were clever but there are huge gaps in the coverage. This doesn’t just make me want to improve the testing, it teaches me something I didn’t know about how my code is reacting to inputs.
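For concreteness, in a Go codebase that green-and-red view comes straight from the stock toolchain; other ecosystems (JaCoCo, coverage.py, Istanbul) have equivalents.

    go test -coverprofile=cover.out ./...   # run the tests, recording coverage
    go tool cover -html=cover.out           # browse the per-line green/red view
    go tool cover -func=cover.out           # per-function percentages

The point is the habit of looking, not the particular tool.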

Having said that, there are software groups I respect immensely who have hard coverage requirements and stick to them. There’s one at AWS that actually has a 100%-coverage blocking check in their CI/CD pipeline. I’m not sure that’s reasonable, but these people are doing very low-level code on a crucial chunk of infrastructure where it’s maybe reasonable to be unreasonable. Also they’re smarter than me.

Legacy code coverage

I have never, and I mean never, worked with a group that wasn’t dragging along weakly-tested legacy code. Even a testing maniac like me isn’t going to ask anyone to retro-fit high-coverage unit testing onto that stinky stuff.

Here’s a policy I’ve seen applied successfully. It has two parts: First, when you make any significant change to a function that doesn’t have unit tests, write them. Second, no check-in is allowed to make the coverage numbers go down.

This works out well because, when you’re working with a big old code-base, updates don’t usually scatter uniformly around it; there are hot spots where useful behavior clusters. So if you apply this policy, the code’s “hot zone” will organically grow pretty good test coverage while the rest, which probably hasn’t been touched or looked at for years, is ignored, and that’s OK.

No religion

Testing should be an ultimately-pragmatic activity with no room for ideology.

Please don’t come at me with pedantic arm-waving about mocks vs stubs vs fakes; nobody cares. On a related subject, when I discovered that lots of people were using DynamoDB Local in their unit tests for code that runs against DynamoDB, I was shocked. But hey, it works, it’s fast, and it’s a lot less hassle than either writing yet another mock or setting up a linkage to the actual cloud service. Don’t be dogmatic!
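For what it’s worth, the whole trick is just pointing the client at a local endpoint. A minimal sketch, assuming the v1 Go AWS SDK and DynamoDB Local listening on its default port 8000; the package and function names are illustrative.

    package store

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/dynamodb"
    )

    // newLocalDynamo returns a DynamoDB client wired to a local
    // DynamoDB Local process rather than the real service, so unit
    // tests run fast and never touch the cloud.
    func newLocalDynamo() (*dynamodb.DynamoDB, error) {
        sess, err := session.NewSession(&aws.Config{
            Region:      aws.String("us-west-2"), // any region name will do locally
            Endpoint:    aws.String("http://localhost:8000"),
            Credentials: credentials.NewStaticCredentials("local", "local", ""),
        })
        if err != nil {
            return nil, err
        }
        return dynamodb.New(sess), nil
    }

Production code builds its client from the default configuration; only the test wiring differs.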

Then there’s the TDD/BDD faith. Sometimes, for some people, it works fine. More power to ’em. It almost never works for me in a pure form, because my coding style tends to be chaotic in the early stages, I keep refactoring and refactoring the functions all the time. If I knew what I wanted them to do before I started writing them, then TDD might make sense. On the other hand, when I’ve got what I think is a reasonable set of methods sketched in and I’m writing tests for the basic code, I’ll charge ahead and write more for stuff that’s not there yet. Which doesn’t qualify me for a membership of the church of TDD but I don’t care.

Here’s another religion: Java doesn’t make it easy to unit-test private methods. Java is wrong. Some people claim you shouldn’t want to test those methods because they’re not part of the class contract. Those people are wrong. It is perfectly reasonable to compromise encapsulation and make a method non-private just to facilitate testing. Or to write an API to take an interface rather than a class object for the same reason.

When you’re running a bunch of tests against a complicated API, it’s tempting to write a runTest() helper that puts the arguments in the right shape and runs standardized checks against the results. If you don’t do this, you end up with a lot of repetitive cut-n-pasted code.

There’s room for argument here, none for dogma. I’m usually vaguely against doing this. Because when I change something and a unit test I’ve never seen before fails, I don’t want to have to go understand a bunch of helper routines before I can figure out what happened.

Anyhow, if your engineers are producing code with effective tests, don’t be giving them any static about how it got that way.

The reviewer’s friend

Once I got a call out of the blue from a Very Important Person saying “Tim, I need a favor. The [REDACTED] group is spinning their wheels, they’re all fucked up. Can you have a look and see if you can help them?” So I went over and introduced myself and we talked about the problems they were facing, which were tough.

Then I got them to show me the codebase and I pulled up a few review requests. The first few I looked at had no unit tests but did have notes saying “Unit tests to come later.” I walked into their team room and said “People, we need to have a talk right now.”

[Pause for a spoiler alert: The unit tests never come along later.]

Here’s the point: The object of code reviewing is not correctness-checking. A reviewer is entitled to assume that the code works. The reviewer should be checking for O(N³) bottlenecks, readability problems, klunky function arguments, shaky error-handling, and so on. It’s not fair to ask a reviewer to think about that stuff if you don’t have enough tests to demonstrate your code’s basic correctness.

And it goes further. When I’m reviewing, it’s regularly the case that I have trouble figuring out what the hell the developer is trying to accomplish in some chunk of code or another. Maybe it’s appropriate to put in a review comment about readability? But first, I flip to the unit test and see what it’s doing, because sometimes that makes it obvious what the dev thought the function was for. This also works for subsequent devs who have to modify the code.

Integration testing

The people who made the pictures up above all seem to think it’s important. They’re right, of course. I’m not sure the difference between “integration” and “end-to-end” matters, though.

The problem is that moving from monoliths to microservices, which makes these tests more important, also makes them harder to build. Which is another good reason to stick with a nice simple monolith if you can. No, I’m not kidding.

Which in turn means you have to be sure to budget time, including design and maintenance time, for your integration testing. (Unit testing is just part of the basic coding budget.)

Complete and fast

I know I find these hard to write and I know I’m not alone because I’ve worked with otherwise-excellent teams who have crappy integration tests.

One way they’re bad is that they take hours to run. This is hardly controversial enough to be worth saying but, since it’s a target that’s often missed, let’s say it: Integration tests don’t need to be as quick as unit tests but they do need to be fast enough that it’s reasonable to run them every time you go to the bathroom or for coffee, or get interrupted by a chat window. Which, once again, is hard to achieve.

Finally, time after time I see integration-test logs show failures and some dev says “oh yeah, those particular tests are flaky, they just fail sometimes.” For some reason they think this is OK. Either the tests exercise something that might fail in production, in which case you should treat failures as blockers, or they don’t, in which case you should take them out of the damn test suite which will then run faster.

Benchmarks

Since I’ve almost always worked on super-performance-sensitive code, I often end up writing benchmarks, and after a while I got into the habit of leaving a few of them live in the test suite. Because I’ve observed more than a few outages caused by a performance regression, something as dumb as a config tweak pushing TLS compute out of hardware and into Java bytecodes. You’d really rather catch that kind of thing before you push.
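In Go, at least, leaving a benchmark live in the suite is cheap; it’s just another function in a _test file. A sketch, with a hypothetical ParseMessage standing in for the hot path:

    package parser

    import "testing"

    // BenchmarkParseMessage lives in the regular test file, so a
    // performance regression in the hot path shows up the next time
    // somebody runs "go test -bench=." against this package.
    func BenchmarkParseMessage(b *testing.B) {
        msg := []byte(`{"kind":"example","n":42}`)
        b.ReportAllocs()
        for i := 0; i < b.N; i++ {
            if _, err := ParseMessage(msg); err != nil {
                b.Fatal(err)
            }
        }
    }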

Tooling

There’s plenty. It’s good enough. Have your team agree on which they’re going to use and become expert in it. Then don’t blame tools for your shortcomings.

Where we stand

The news is I think mostly good, because most sane organizations are starting to exhibit pretty good testing discipline, especially on server-side code. And like I said, this old guy sees a lot fewer bugs in production code than there used to be.

And every team has to wrestle with those awful old stagnant pools of untested legacy. Suck it up; dealing with that is just part of the job. Anyhow, you probably wrote some of it.

But here and there every day, teams lose their way and start skipping the hand-wash after the toilet visit. Don’t. And don’t ship untested code.
