Last updated: July 12, 2020 10:22 PM (All times are UTC.)

July 10, 2020

Reading List 262 by Bruce Lawson (@brucel)

July 09, 2020

Many parts of a modern software stack have been around for a long time. That has trade-offs, but in terms of user experience it’s a great thing: software can be incrementally improved, providing customers with familiarity and stability. No need to learn an entirely new thing, because your existing thing just keeps on working.

It’s also great for developers, because it means we don’t have to play red queen, always running just to stand still. We can focus on improving that customer experience, knowing that everything we wrote to date still works. And it does still work. Cocoa, for example, has a continuous history back to 2001, and there’s code written to use Cocoa APIs going back to 1994. Let’s port some old Cocoa software, to see how little effort it is to stay up to date.

Bean is a free word processor for macOS. It’s written in Objective-C, using mostly Cocoa (but some Carbon) APIs, and uses the Cocoa Text system. The current version, Bean 3.3.0, is free and supports macOS 10.14–10.15. The open source (GPL2) version, Bean 2.4.5, supports 10.4–10.5 on Intel and PowerPC. What would it take to make that a modern Cocoa app? Not much—a couple of hours’ work gave me a fully-working Bean 2.4.5 on Catalina. And a lot of that was unnecessary side-questing.

Step 1: Make Xcode happy

Bean 2.4.5 was built using the OS X 10.5 SDK, so probably needed Xcode 3. Xcode 11 doesn’t have the OS X 10.5 SDK, so let’s build with the macOS 10.15 SDK instead. While I was here, I also accepted whatever suggested updated settings Xcode showed. That enabled the -fobjc-weak flag (not using automatic reference counting), which we can now just turn off because the deployment target won’t support it. So now we just build and run, right?

Not quite.

Step 2: Remove references to NeXT Objective-C runtime

Bean uses some “method swizzling” (i.e. swapping method implementations at runtime), mostly to work around differences in API behaviour between Tiger (10.4) and Leopard (10.5). That code no longer compiles:

/Users/leeg/Projects/Bean-2-4-5/ApplicationDelegate.m:66:23: error: incomplete
      definition of type 'struct objc_method'
                        temp1 = orig_method->method_types;
In file included from /Users/leeg/Projects/Bean-2-4-5/ApplicationDelegate.m:31:
/Applications/ note: 
      forward declaration of 'struct objc_method'
typedef struct objc_method *Method;

The reason is that when Apple introduced the Objective-C 2.0 runtime in Leopard, they made it impossible to directly access the data structures used by the runtime. Those structures stayed in the headers for a couple of releases, but they’re long gone now. My first thought (and first fix) was just to delete this code, but I eventually relented and wrapped it in #if !__OBJC2__ so that my project should still build back to 10.4, assuming you update the SDK setting. It now builds cleanly, using clang and Xcode 11.5 (it builds in the beta of Xcode 12 too, in fact).

OK, ship it, right?

Step 3: Diagnose a stack smash

No, I launched it, but it crashed straight away. The stack trace looks like this:

* thread #1, queue = '', stop reason = EXC_BAD_ACCESS (code=2, address=0x7ffeef3fffc8)
  * frame #0: 0x00000001001ef576 libMainThreadChecker.dylib`checker_c + 49
    frame #1: 0x00000001001ee7c4 libMainThreadChecker.dylib`trampoline_c + 67
    frame #2: 0x00000001001c66fc libMainThreadChecker.dylib`handler_start + 144
    frame #3: 0x00007fff36ac5d36 AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 132
    frame #4: 0x00007fff36ac5e6d AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 443
    frame #40240: 0x00007fff36ac5e6d AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 443
    frame #40241: 0x00007fff36ac5e6d AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 443
    frame #40242: 0x00007fff36a6d98c AppKit`-[NSTextView(NSPrivate) _viewDidDrawInLayer:inContext:] + 328

That’s, um. Well, it’s definitely not good. All of the backtrace is in API code, except for main() at the top. Has NSTextView really changed so much that it gets into an infinite loop when it tries to draw the cursor?

No. Actually one of the many patches to AppKit in this app is not swizzled, it’s a category on NSTextView that replaces the two methods you can see in that stack trace. I could change those into swizzled methods and see if there’s a way to make them work, but for now I’ll remove them.
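The difference matters because a category replacement clobbers the original implementation with nothing keeping a reference to it, while swizzling saves the original so the replacement can call through. A toy Python analogue (monkey-patching standing in for categories and swizzling; all names invented for illustration):

```python
class TextView:
    def draw_insertion_point(self):
        return "framework drawing"

# Category-style replacement: nothing keeps a reference to the original
# implementation, so a call meant to reach the framework dispatches
# straight back into the replacement and the stack grows until it smashes.
def category_replacement(self):
    return self.draw_insertion_point()

TextView.draw_insertion_point = category_replacement

try:
    TextView().draw_insertion_point()
except RecursionError:
    print("recursed, just like the Bean stack trace")

# Swizzle-style replacement: capture the original first, then call
# through the captured reference instead of re-dispatching.
class TextView2:
    def draw_insertion_point(self):
        return "framework drawing"

_original = TextView2.draw_insertion_point

def swizzled(self):
    return "patched " + _original(self)

TextView2.draw_insertion_point = swizzled
print(TextView2().draw_insertion_point())  # prints "patched framework drawing"
```

The Objective-C version of the second form is what Bean’s swizzling code does with the runtime’s Method functions; the category in the crash behaved like the first.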

Side quest: rationalise some version checks

Everything works now. An app that was built for PowerPC Mac OS X and ported at some early point to 32-bit Intel runs, with just a couple of changes, on x86_64 macOS.

I want to fix one more thing. This message appears on launch and I would like to get rid of it:

2020-07-09 21:15:28.032817+0100 Bean[4051:113348] WARNING: The Gestalt selector gestaltSystemVersion
is returning 10.9.5 instead of 10.15.5. This is not a bug in Gestalt -- it is a documented limitation.
Use NSProcessInfo's operatingSystemVersion property to get correct system version number.

Call location:
2020-07-09 21:15:28.033531+0100 Bean[4051:113348] 0   CarbonCore                          0x00007fff3aa89f22 ___Gestalt_SystemVersion_block_invoke + 112
2020-07-09 21:15:28.033599+0100 Bean[4051:113348] 1   libdispatch.dylib                   0x0000000100362826 _dispatch_client_callout + 8
2020-07-09 21:15:28.033645+0100 Bean[4051:113348] 2   libdispatch.dylib                   0x0000000100363f87 _dispatch_once_callout + 87
2020-07-09 21:15:28.033685+0100 Bean[4051:113348] 3   CarbonCore                          0x00007fff3aa2bdb8 _Gestalt_SystemVersion + 945
2020-07-09 21:15:28.033725+0100 Bean[4051:113348] 4   CarbonCore                          0x00007fff3aa2b9cd Gestalt + 149
2020-07-09 21:15:28.033764+0100 Bean[4051:113348] 5   Bean                                0x0000000100015d6f -[JHDocumentController init] + 414
2020-07-09 21:15:28.033802+0100 Bean[4051:113348] 6   AppKit                              0x00007fff36877834 -[NSCustomObject nibInstantiate] + 413

A little history, here. Back in classic Mac OS, Gestalt was used like Unix programmers use sysctl and soda drink makers use high fructose corn syrup. Want to expose some information? Add a gestalt! Not bloated enough? Drink more gestalt!

It’s an API that takes a selector, and a pointer to some memory. What gets written to the memory depends on the selector. The gestaltSystemVersion selector makes it write the OS version number to the memory, but not very well. It only uses 32 bits, packed as binary-coded decimal: one byte for the major version and a nibble each for the minor and patch numbers, so Mac OS 8.5.1 was represented as 0x0851. This turned out to be fine, because Apple didn’t release many operating systems.

When Mac OS X came along, Gestalt was part of the Carbon API, and versions were reported as if the major release had bumped up to 0x10: 0x1000 was the first version, 0x1028 was the 10.2.8 patch release of Jaguar, and so on.

At some point, someone at Apple realised that if they ever shipped a tenth patch release or a tenth minor release, this would break: a nibble of binary-coded decimal can only represent 0–9. So they capped each of the patch/minor numbers at 9, and just told you to stop using gestaltSystemVersion. I would like to stop using it here, too.
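The packed format, and the clamping that produces the 10.9.5 in that warning, can be sketched in a few lines (a Python illustration, not Apple’s code):

```python
def decode_gestalt_version(value):
    """Unpack gestaltSystemVersion's layout: one byte of binary-coded
    decimal for the major version, then a nibble each for minor and patch."""
    major = ((value >> 12) & 0xF) * 10 + ((value >> 8) & 0xF)
    minor = (value >> 4) & 0xF
    patch = value & 0xF
    return major, minor, patch

print(decode_gestalt_version(0x0851))  # (8, 5, 1)   Mac OS 8.5.1
print(decode_gestalt_version(0x1028))  # (10, 2, 8)  Jaguar 10.2.8
# Minor and patch are capped at 9, so on Catalina 10.15.5 the selector
# reports 0x1095 -- exactly the 10.9.5 the warning above complains about:
print(decode_gestalt_version(0x1095))  # (10, 9, 5)
```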

There are lots of version number checks all over Bean. I’ve put them all in one place, and given it two ways to check the version: if -[NSProcessInfo isOperatingSystemAtLeastVersion:] is available, we use that. Actually that will never be relevant, because the tests are all for versions between 10.3 and 10.6, and that API was added in 10.10. So we then fall back to Gestalt again, but with the separate gestaltSystemVersionMajor/Minor selectors. These exist back to 10.4, which is perfect: if that fails, you’re on 10.3, which is the earliest OS Bean “runs” on. Actually it tells you it won’t run, and quits: Apple added a minimum-system check to Launch Services so you could use Info.plist to say whether your app works, and that mechanism isn’t supported in 10.3.

Ship it!

We’re done!

Bean 2.4.5 on macOS Catalina

Haha, just kidding, we’re not done. Launching the thing isn’t enough, we’d better test it too.

Bean 2.4.5 on macOS Catalina in dark mode

Dark mode wasn’t a thing in Tiger, but it is a thing now. Bean assumes that if it doesn’t set the background of an NSTextView, then it’ll be white. We’ll explicitly set that.

Actually ship it!

You can see the source on GitHub, and particularly how few changes are needed to make a 2006-era Cocoa application work on 2020-era macOS, despite a couple of CPU family switches and a bunch of changes to the UI.

Repackaging for Uncertainty

It’s not news that the global covid-19 pandemic has severely impacted cultural institutions around the world. Here in New York, what was initially announced as a 1-month closure for Broadway was recently extended to the end of 2020, forcing theaters to go dark for at least 9 months. For single ticket purchases, sales can be painfully pushed back, shows rescheduled, credits offered, and emails blasted. But at this point, organizations are now looking at a 20/21 season that has been reduced by 50-70%. So while they ponder what a physically and financially safe reopening looks like, they’re also having to turn their attentions to another key portion of their audience: subscribers.

Every company will tell you they want to cultivate a relationship with their customers. From a brand perspective, you always want a key base of loyal consumers, and financially these are the patrons who will consistently return to make purchases. Arts and cultural institutions use annual packages to leverage their reliable volume and quality of programming, allowing them to build relationships with their patrons which span decades, and sometimes generations. Rather than several tickets to individual shows, a package is a bundle deal. The benefits vary from org to org (early access to show dates, choice of seats, and discounted prices on the shows are some of the most common), but one thing is constant: in order to reap any of these benefits, patrons commit to multiple shows over the course of the year. As a relationship tool, packages are very effective. Theatre Communications Group (TCG) in New York City produces an annual fiscal analysis of national trends in the non-profit theater sector by gathering data from across the country. In 2018 they gathered data from 177 theaters and found that, on average, subscribers renewed 74% of the time.

This is about more than brand loyalty though: these packages represent a source of consistent income for theaters. Beyond the face value of individual tickets purchased by subscribers, which according to TCG’s research tally up to an average of $835k a year (with the highest-earning tier of theaters surveyed pulling in an average of $2.5 million), subscribers are much more likely to convert their attendance into other kinds of support for the organization. These patrons can be marketed to with low risk (of annoyance) and high reward (in literal dollars). Thus, the value of subscriber relationships goes well beyond their sheer presence in the venue, making them one of the most consistently considered portions of the audience. At a time when uncertainty is the name of the game, a set of dependable patrons might seem like the perfect audience slice to reach out to right now.

However, since the rewards for packages typically revolve around early access, and require a multi-show commitment, subscription purchases and renewals usually receive a big push very early in the season sales cycle, putting them in a particularly vulnerable position at the start of the covid-19 pandemic. Furthermore, the extension of venue closures has not only pushed back sales dates but also the seasons themselves, leaving patrons with fewer shows to choose from, and fewer to commit to. This is forcing everyone to figure out how to salvage the potential income while still providing a valuable, and fair, experience to patrons. Thus far, we’re seeing three different answers to this question emerge.


Mixed Packages

One straightforward approach is to roll with the punches and create mixed packages between this season (Season A) and next season (Season B). This works particularly well for orgs with consistent types of content every year. For example, if an organization has a package of 5 Jazz concerts in Season A, but 3 of them are cancelled due to the pandemic, you just take 3 shows from Season B to replace them. Behind the scenes, this strategy does require a lot of hands-on adjustments if shows continue to be cancelled, but it has the benefit of preserving the structure of the original package. Patrons also have a transparent view into how the value of their package is being preserved; however, they are still tied to a specific show date with no knowledge of what the situation will be at that point.

Voucher Exchange

Another solution that has started to emerge is a voucher system. Rather than trying to reschedule after a show in a package is cancelled, patrons are given vouchers which can be redeemed for a ticket to a future performance. For organizations, this option puts a lot of the workload at the front end, as it requires detailed business consideration: do vouchers expire? If so, how far in the future? Do the vouchers have a dollar value, or can they be exchanged 1:1 for a production? What happens if prices change between now and reopening, or if a patron wants a ticket of a different value? (You get the gist.) All that being said, once those business rules are set, it has the potential to put the other choices in the hands of the patron. Consequently, for patrons this option takes off some of the pressure: they don’t need to commit to another uncertain date in the future; instead, they can be assured they are receiving the value of their package at a time that they feel comfortable.

Pay It Forward

A third option is to push the guesswork entirely to the future and allow patrons to purchase a set bundle of shows as normal, but with no mention of dates or seats. Instead of setting a calendar for the year, patrons are committing to content: 5 shows, rather than 5 dates. Some organizations have had this in place for early renewals for years, and find it an easy way to serve patrons who are loyal to the organization through thick and thin. Ultimately though, this allows both parties to make more informed decisions about their theatergoing habits closer to the show itself, rather than 3 months ahead of an unknown future. That being said, this solution requires a lot of upfront discussion within the organization, and with the patron, about what might happen if patrons cannot attend the dates they are assigned, whether due to conflicts or continued safety concerns.

Anyone who’s remotely involved in the arts & culture sector will not be surprised that there is no one-size-fits-all solution. Some organizations will enjoy the straightforwardness of mixing packages, others will want to allow for uncertainty and opt for the voucher system or the pay-it-forward option, and still others will come up with any number of alternative approaches to this issue. And of course, these solutions all depend on an optimistic future which is still a huge question mark: some areas are opening up, others are extending their closures, previously bankable organizations are filing for bankruptcy, and for every positive trend in cases there’s a spike somewhere else. In the game of whack-a-mole that is covid-19, the path towards reopening, and specifically towards a positive subscriber experience, is a tightrope: business rules will need to be clearly defined, messaging carefully considered, and customer service well briefed on the new practices. No matter what solution organizations opt for, it will need to be tailor-fitted so that the patron relationships which will keep theater alive beyond this pandemic can be cultivated. Otherwise they run the risk of patrons feeling milked for money and lemming-marched into the theater.

July 08, 2020

In Part One, I explored the time of transition from Mac OS 8 to Mac OS X (not a typo: Mac OS 9 came out during the transition period). From a software development perspective, this included the Carbon and Cocoa UI frameworks. I mooted the possibility that Apple’s plan was “erm, actually, Java” and that this failed to come about not because Apple didn’t try, but because developers and users didn’t care. The approach described by Steve Jobs, of working out what the customer wants and working backwards to the technology, allowed them to be flexible about their technology roadmap and adapt to a situation where Cocoa on Objective-C, and Carbon on C and C++, were the tools of choice.[*]

So this time, we want to understand what the plan is. The technology choices available are, in the simplistic version: SwiftUI, Swift Catalyst/UIKit, ObjC Catalyst/UIKit, Swift AppKit, ObjC AppKit. In the extended edition, we see that Apple still supports the “sweet solution” of Javascript on the web, and despite trying previously to block them still permits various third-party developer systems: Javascript in React Native, Ionic, Electron, or whatever’s new this week; Xamarin.Forms, JavaFX, Qt, etc.

What the Carbon/Cocoa plan tells us is that this isn’t solely Apple’s plan to implement. They can have whatever roadmap they want, but if developers aren’t on it, it doesn’t mean much. This is a good thing: if Apple had sufficient market dominance not to be reasonably affected by competitive forces or market trends, then society would have a problem and the US DOJ or the EU Directorate-General for Competition would have to weigh in. If we don’t want to use Java, we won’t use Java. If enough of us are still using Catalyst for our apps, then they’re supporting Catalyst.

Let’s put this into the context of #heygate.

These apps do not offer in-app purchase — and, consequently, have not contributed any revenue to the App Store over the last eight years.

— Rejection letter from Apple, Inc. to Basecamp

When Steve Jobs talked about canning OpenDoc, it was in the context of a “consistent vision” that he could take to customers to motivate “eight billion, maybe ten billion dollars” of sales. It now takes Apple about five days to make that sort of money, so they’re probably looking for something more than that. We could go as far as to say that any technology that contributes to non-revenue-generating apps is an anti-goal for Apple, unless they can conclusively point to a halo effect (it probably costs Apple quite a bit to look after Facebook, but not having it would be platform suicide, for example).

From Tim Cook’s and Craig Federighi’s height, these questions about “which GUI framework should we promote” probably don’t even show up on the radar. Undoubtedly SwiftUI came up with the SLT before its release, but the conversation probably looked a lot like “developers say they can iterate on UIs really quickly with React, so I’ve got a TPM with a team of ten people working on how we counter that.” “OK, cool.” A fraction of a percent of the engineering budget to nullify a gap between the native tools and the cross-platform things that work on your stack anyway? OK, cool.

And, by the way, it’s a fraction of a percent of the engineering budget because Apple is so gosh-darned big these days. To say that “Apple” has a “UI frameworks plan” is a bit like saying that the US navy has a fast destroyers plan: sure, bits of it have many of them.

At the senior level, the “plan” is likely to be “us” versus “not us”, where all of the technologies you hear of in somewhere like ATP count as “us”. The Java thing didn’t pan out, Sun went sideways in the financial crisis of 2007, how do we make sure that doesn’t happen again?

And even then, it’s probably more like “preferably us” versus “not us, but better with us”: if people want to use cross-platform tools, and they want to do it on a Mac, then they’re still buying Macs. If they support Sign In With Apple, and Apple Pay, then they still “contribute any revenue to the App Store”, even if they’re written in Haskell.

Apple made the Mac a preeminent development and deployment platform for Java technology. One year at WWDC I met some Perl hackers in a breakout room, then went to the Presidio to watch a brown-bag session by Python creator Guido van Rossum. When Rails became big, everyone bought a Mac laptop and a TextMate license, to edit their files for their Linux web apps.

Apple lives in an ecosystem, and it needs help from other partners, it needs to help other partners. And relationships that are destructive don’t help anybody in this industry as it is today. … We have to let go of this notion that for Apple to win, Microsoft has to lose, OK? We have to embrace the notion that for Apple to win, Apple has to do a really good job.

— Steve Jobs, 1997

[*] even this is simplistic. I don’t want to go overboard here, but definitely would point out that Apple put effort into supporting Swing with native-esque controls on Java, language bridges for Perl, Python, Ruby, an entire new runtime for Ruby, in addition to AppleScript, Automator, and a bunch of other programming environments for other technologies like I/O Kit. Like the man said, sometimes the wrong choice is taken, but that’s good because at least it means someone was making a decision.


No CEO dominated a market without a plan, but no market was dominated by following the plan.

— I made this quote up. Let’s say it was Rockefeller or someone.

In Accidental Tech Podcast 385: Temporal Smear, John Siracusa muses on what “the plan” for Apple’s various GUI frameworks might be. In summary, and I hope this is a fair representation, he says that SwiftUI is modern, works everywhere but isn’t fully-featured, UIKit (including Mac Catalyst) is not as modern, not as portable, but has good feature coverage, and AppKit is old, works only on Mac, but is the gold standard for capability in Mac applications.

He compares the situation now with the situation in the first few years of Mac OS X’s existence, when Cocoa (works everywhere, designed in the mid-80s, not fully-featured) and Carbon (works everywhere, designed in the slightly earlier mid-80s, gold standard for Mac apps) were the two technologies for building Mac software. Clearly “the plan” was to drop Carbon, but Apple couldn’t tell us that, or wouldn’t tell us that, while important partners were still writing software using the library.

This is going to be a two-parter. In part one, I’ll flesh out some more details of the Carbon-to-Cocoa transition to show that it was never this clear-cut. Part two will take this model and apply it to the AppKit-to-SwiftUI transition.

A lot of “the future” in NeXT-merger-era Apple was based on neither C with Toolbox/Carbon nor Objective-C with OPENSTEP/Yellow Box/Cocoa but on Java. NeXT had only released WebObjects a few months before the merger announcement in December 1996, but around merger time they released WO 3.1 with very limited Java support. A year later came WO 3.5 with full Java support (on Yellow Box for Windows, anyway). By May 2001, a few weeks after the GM release of Mac OS X 10.0, WebObjects 5 was released and had been completely rewritten in Java.

Meanwhile, Java was also important on the client. A January 1997 joint statement by NeXT and Apple mentions ObjC 0 times, and Java 5 times. Apple released the Mac Run Time for Java on that day, as well as committing to “make both Mac OS and Rhapsody preeminent development and deployment platforms for Java technology”—Rhapsody was the code-name-but-public for NeXT’s OS at Apple.

The statement also says “Apple plans to carry forward key technologies such as OpenDoc”, which clearly didn’t happen, and led to this exchange which is important for this story:

One of the things I’ve always found is that you’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re gonna try to sell it. And I’ve made this mistake probably more than anybody else in this room, and I’ve got the scar tissue to prove it.

Notice that the problem here is that Apple told us the plan, did something else, and made a guy with a microphone very unhappy. He’s unhappy not at Gil Amelio for making a promise he couldn’t keep, but at Steve Jobs for doing something different.

OpenDoc didn’t carry on, but Java did. In the Rhapsody developer releases, several apps (including TextEdit, which was sample code in many releases of OpenStep, Rhapsody and Mac OS X) were written in Yellow Box Java. In Mac OS X 10.0 and 10.1, several apps were shipped using Cocoa-Java. Apple successfully made Mac OS and Rhapsody (a single) preeminent development and deployment platform for Java technology.

I do most of my engineering on my PowerBook … it’s got fully-functional, high-end software development tools.

— James Gosling, creator of Java

But while people carried on using Macs, and people carried on using Java, and people carried on using Macs to use Java, few people carried on using Java to make software for Macs. It did happen, but not much. Importantly, tastemakers in the NeXT developer ecosystem who were perfectly happy with Objective-C thank you carried on being perfectly happy with Objective-C, and taught others how to be happy with it too. People turned up to WWDC in t-shirts saying [objc retain];. Important books on learning Cocoa said things like:

The Cocoa frameworks were written in and for Objective-C. If you are going to write Cocoa applications, use Objective-C, C, and C++. Your application will launch faster and take up less memory than if it were written in Java. The quirks of the Java bridge will not get in the way of your development. Also, your project will compile much faster.

If you are going to write Java applications, use Swing. Swing, although not as wonderful as Cocoa, was written from the ground up for Java.

— Aaron Hillegass, Cocoa Programming for Mac OS X

Meanwhile, WebObjects for Java was not going great guns either. It still had customers, but didn’t pick up new customers particularly well. Big companies who wanted to pay for a product with the word Enterprise in the title didn’t really think of Apple as an Enterprise-ish company, when Sun or IBM still had people in suits. By the way, one of Sun’s people in suits was Jonathan Schwartz, who had run a company that made Cocoa applications in Objective-C back in the 1990s. Small customers, who couldn’t afford to use products with Enterprise in the title and who had no access to funding after the dot-bomb, were discovering the open source LAMP (Linux, Apache, MySQL, PHP/Perl) stack.

OK, so that’s Cocoa, what about Carbon? It’s not really the Classic Mac OS Toolbox APIs on Mac OS X, it’s some other APIs that are like those APIs. Carbon was available for both Mac OS 8.1+ (as an add-on library) and Mac OS X. Developers who had classic Mac apps still had to work to “carbonise” their software before it would work on both versions.

It took significant engineering effort to create Carbon, effectively rewriting a lot of Cocoa to depend on an intermediate C layer that could also support the Carbon APIs. Apple did this not because it had been part of their plan all along, but because developers looked at Rhapsody with its Cocoa (ObjC and Java) and its Blue Box VM for “classic” apps and said that they were unhappy and wouldn’t port their applications soon. Remember that “you’ve got to start with the customer experience and work backwards to the technology”, and if your customer experience is “I want to use Eudora, Word, Excel, and Photoshop” then that’s what you give ’em.

With this view, Cocoa and Carbon are actually the same age. Cocoa is OpenStep minus Display PostScript (Quartz 2D/Core Graphics taking its place) and with the changes necessary to be compatible with Carbon. Carbon is some MacOS Toolbox-like things that are adapted to be compatible with Cocoa. Both are new to Mac developers in 2001, and neither is furthering the stated goal of making Mac OS a preeminent development and deployment environment for Java.

To the extent that Apple had a technology roadmap, it couldn’t survive contact with their customers and developers—and it was a good thing that they didn’t try to force it. To the extent that they had a CEO-level plan, it was “make things that people want to buy more than they wanted to buy our 1997 products”, and in 2001 Apple released the technology that would settle them on that path. It was a Carbonised app called iTunes.

July 06, 2020

Anti-lock brakes by Graham Lee

Chances are, if you bought a new car or even a new motorcycle within the last few years, you didn’t even get an option on ABS. It came as standard, and in cars it was legally mandated. Anti-lock brakes work by measuring the rotational acceleration of the wheels, or comparing their rotational velocities. If one wheel is rotating very much slower than the others, or suddenly decelerates, it’s probably about to lock, so the ABS backs off the pressure on the brake for that wheel.
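As a sketch of that comparison logic (thresholds and numbers invented for illustration; nothing like a real ABS controller):

```python
def abs_brake_pressures(wheel_speeds, demanded_pressure, slip_threshold=0.85):
    """Toy ABS step: compare each wheel's rotational speed with the
    fastest wheel; a wheel turning much slower than its peers is
    probably about to lock, so back off that wheel's brake pressure."""
    reference = max(wheel_speeds)
    pressures = []
    for speed in wheel_speeds:
        if reference > 0 and speed / reference < slip_threshold:
            pressures.append(demanded_pressure * 0.5)  # release the brake a bit
        else:
            pressures.append(demanded_pressure)        # apply what was demanded
    return pressures

# Front-left wheel (first entry) is decelerating hard relative to the rest:
print(abs_brake_pressures([12.0, 20.0, 19.5, 19.8], demanded_pressure=100.0))
# [50.0, 100.0, 100.0, 100.0]
```

A real system runs a loop like this many times a second, pulsing the pressure rather than halving it once, but the shape of the decision is the same.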

ABS turns everyone into a pretty capable brake operator, in most circumstances. This is great, because many people are not pretty capable at operating brakes, even when they think they are, and ABS makes them better at it. Of course, some people are very capable at it, but ABS levels them too, making them merely pretty capable.

But even a highly capable brake operator can panic, or make mistakes. When that happens, ABS means that the worst effect of their mistake is that they are merely pretty capable.

In some circumstances, having ABS is strictly worse than not having it. An ABS car will take longer to stop on a gravel surface or on snow than a non-ABS car. Cars with ABS tend to hit each other much less often than those without, but tend to run off the road more often than those without. But for most vehicles, the ABS is always on, even in situations where it will get in your way. Bring up that it is getting in your way, and someone will tell you how much safer it is than not having it. Which is true, in the other situations.

Of course the great thing about anti-lock brakes is that the user experience is the same as what most sub-pretty-capable drivers had before. No need to learn a different paradigm or plan your route differently. When you want to stop, press the thing that makes the car stop very hard.

Something, something, programming languages.

July 04, 2020

Let’s look at other software on the desktop, to understand why there isn’t (as a broad, popular platform) Linux on the desktop, then how there could be.

Over on De Programmatica Ipsum I discussed the difference between the platform business model, and the technology platform. In the platform model, the business acts as a matchmaking agent, connecting customers to vendors. An agricultural market is a platform, where stud farmers can meet dairy farmers to sell cattle, for example.

Meanwhile, when a technology platform is created by a business, it enables a two-sided business model. The (technology) platform vendor sells their APIs to developers as a way of making applications. They sell their technology to consumers with the fringe benefit that these third-party applications are available to augment the experience. The part of the business that is truly a platform model is the App Store, but those came late as an effort to capture a share of the (existing) developer-consumer sales revenue, and don’t really make the vendors platform businesses.

In fact, I’m going to drop the word platform now, as it has these two different meanings. I’ll say “store” or “App Store” when I’m talking about a platform business in software, and “stack” or “software stack” when I’m talking about a platform technology model.

Stack vendors have previously been very protective of their stack, trying to fend off alternative technologies that allow consumers to take their business elsewhere. Microsoft famously “poisoned” Java, an early and capable cross-platform application API, by bundling their own runtime that made Java applications deliberately run poorly. Apple famously added a clause to their store rules that forbade any applications made using off-stack technology.

Both of these situations are now in the past: Microsoft have even embraced some cross-platform technology options, making heavy use of Electron in their own applications and even integrating the Chromium rendering engine into their own browser to increase compatibility with cross-platform technology and reduce the cost of supporting those websites and applications made with Javascript. Apple have abandoned that “only” clause in their rules, replacing it with a collection of “but also” rules: yes you can make your applications out of whatever you want, but they have to support sign-in and payment mechanisms unique to their stack. So a cross-stack app is de jure better integrated in Apple’s sandpit.

These actions show us how these stack vendors expect people to switch stacks: they find a compelling application, they use it, they discover that this application works better or is better integrated on another stack, and so they change to it. If you’re worried about that, then you block those applications so that your customers can’t discover them. If you’re not worried about that, then you allow the technologies, and rely on the fact that applications are commodities and nobody is going to find a “killer app” that makes them switch.

Allowing third-party software on your own platform (cross-stack or otherwise) comes with a risk, that people are only buying your technology as an incidental choice to run something else, and that if it disappears from your stack, those customers might go away to somewhere that it is available. Microsoft have pulled that threat out of their briefcase before, settling a legal suit with Apple after suggesting that they would remove Word and Excel from the Mac stack.

That model of switching explains why companies that are otherwise competitors seem willing to support one another by releasing their own applications on each others’ stacks. When Apple and Microsoft are in competition, we’ve already seen that Microsoft’s applications give them leverage over Apple: they also allow Apple customers to be fringe players in the Microsoft sandpit, which may lead them to switch (for example when they see how much easier it is for their Windows-using colleagues to use all of the Microsoft collaboration tools their employers use). But Apple’s applications also give them leverage over Microsoft: the famed “halo effect” of Mac sales being driven by the iPod fits this model: you buy an iPod because it’s cool, and you use iTunes for Windows. Then you see how much better iTunes for Mac works, and your next computer is a Mac. The application is a gateway to the stack.

What has all of this got to do with desktop Linux? Absolutely nothing, and that’s my point. There’s never been a “halo effect” for the Free Software world because there’s never been a nucleus around which that halo can form. The bazaar model does a lot to ensure that. Let’s take a specific example: for many people, Thunderbird is the best email client you can possibly get. It also exists on multiple stacks, so it has the potential to be a “gateway” to desktop Linux.

But it won’t be. The particular bazaar hawkers working on Thunderbird don’t have any particular commitment to the rest of the desktop Linux stack: they’re not necessarily against it, but they’re not necessarily for it either. If there’s an opportunity to make Thunderbird better on Windows, anybody can contribute to exploit that opportunity. At best, Thunderbird on desktop Linux will be as good as Thunderbird anywhere else. Similarly, the people in the Nautilus file manager area of the bazaar have no particular commitment to tighter integration with Thunderbird, because their users might be using GNUMail or Evolution.

At one extreme, the licences of software in the bazaar dissuade switching, too. Let’s say that CUPS, the common UNIX printing subsystem, is the best way to do printing on any platform. Does that mean that, say, Mac users with paper-centric workflows or lifestyles will be motivated to switch to desktop Linux, to get access to CUPS? No, it means Apple will take advantage of the CUPS licence to integrate it into their stack, giving them access to the technology.

The only thing the three big stack vendors seem to agree on when it comes to free software licensing is that the GPL version 3 family of licences is incompatible with their risk appetites, particularly their weaponised patent portfolios. So that points to a way to avoid the second of these problems blocking a desktop Linux “halo effect”. Were there a GPL3 killer app, the stack vendors probably wouldn’t pick it up and integrate it. Of course, with no software patent protection, they’d be able to reimplement it without problem.

But even with that dissuasion, we still find that the app likely wouldn’t be a better experience on a desktop Linux stack than on Mac, or on Windows. There would be no halo, and there would be no switchers. Well, not no switchers, but probably no more switchers.

Am I minimising the efforts of consistency and integration made by the big free software desktop projects, KDE and GNOME? I don’t think so. I’ve used both over the years, and I’ve used other desktop environments for UNIX-like systems (please may we all remember CDE so that we never repeat it). They are good, they are tightly integrated, and thanks to the collaboration on specifications in the Free Desktop Project they’re also largely interoperable. What they aren’t is breakout. Where Thunderbird is a nucleus without a halo, Evolution is a halo without a nucleus: it works well with the other GNOME tools, but it isn’t a lighthouse attracting users from, say, Windows, to ditch the rest of their stack for GNOME on Linux.

Desktop Linux is a really good desktop stack. So is, say, the Mac. You could get on well with either, but unless you’ve got a particular interest in free software, or a particular frustration with Apple, there’s no reason to switch. Many people do not have that interest or frustration.

July 03, 2020

July 02, 2020

Every week, I sit down with a coffee and work through this list. It takes anywhere between 20 and 60 minutes, depending on how much time I have.

Process #

  • Email inbox
  • Things inbox (Things is my task manager of choice)
  • FreeAgent (invoicing, expenses)
  • Trello (sales pipeline, active projects)
  • Check backups

Review #

  • Bank accounts
  • Quarterly goals
  • Yearly theme
  • Calendar
  • Things
    • Ensure all active projects are relevant and have a next action
    • Review someday projects
    • Check tasks have relevant tags

Plan #

  • Write out this week’s goals & intentions
  • Answer Mastermind Automatic Check-ins
    • Did you read, watch or listen to anything that you think the rest of the group might find useful or interesting?
    • What did you accomplish or learn this week?
    • What are your plans for the week?

This is one of the checklists I use to run my business.

If you have feedback, get in touch – I’d love to hear it.

These are the questions I ask a client to help guide our working engagement. They help me understand the client’s needs and scope out a solution. I pick and choose the items relevant to the engagement at hand.

  • Describe your company and the products or services your business provides
  • When does the project need to be finished? What is driving that?
  • Who are the main contacts for this project and what are their roles?
  • What are the objectives for this project? (e.g. increase traffic, improve conversion rate, reduce time searching for information)
  • How will we know if this project has been a success? (e.g. 20% increase in sales, 20% increase in traffic)
  • If this project fails, what is most likely to have derailed us?
  • How do you intend to measure this?
  • What is your monthly traffic/conversions/revenues?
  • How have you calculated the value of this project internally?
  • What budget have you set for this project?
  • Who are your closest competitors?

This is one of the checklists I use to run my business. I run these answers past my Minimum Level of Engagement.

If you have feedback, get in touch – I’d love to hear it.

July 01, 2020

Leadership changes at Made

It’s been an odd few months for us here at Made Media, and I’m sure it has for all of you as well. Our clients all around the world have temporarily closed their doors to the public, and have gone through waves of grief, hope, and exhaustion. At the same time, our team has been busy working through the final stages of some big new projects, that are all due to launch in the next month. Tough times, new beginnings.

While all of us in the arts and cultural sector are struggling with the impact of Covid-19, we have also seen greatly increased demand for our Crowdhandler Virtual Waiting Room product. As a result, we have decided to spin this product out from our core agency business in order to give it the focus and investment it deserves. Effective 1 July, we are setting up a new Crowdhandler company, majority-owned by Made, and we are making the following leadership changes:

I will become CEO of the new Crowdhandler company. It’s with a mixture of excitement and nostalgia that I am stepping down as CEO of Made Media today, after eighteen years, taking up the new role of non-executive Chair. A team of developers and cloud engineers from Made Media are moving with me to join the new Crowdhandler company.

James Baggaley, currently Managing Director, becomes CEO of Made Media. James joined Made in 2017 as Strategy Director, and since then has worked with Made clients around the world on some of our most involved projects. He has a long background of working in, with, and for arts and cultural organisations, with a focus on digital, e-commerce and ticketing strategy.

Meanwhile the Made team is on hand to help our clients as they navigate the coming months and beyond, and we can’t wait for all of these brilliant organisations to start welcoming audiences again.

June 30, 2020

Accessible data tables by Bruce Lawson (@brucel)

I’ve been working on a client project and one of the tasks was remediating some data tables. As I began researching the subject, it became obvious that most of the practical, tested advice comes from my old mates Steve Faulkner and Adrian Roselli.

I’ve collated them here so they’re in one place when I need to do this again, and in case you’re doing similar. But all the glory belongs with them. Buy them a beer when you next see them.

June 29, 2020

Announcing CultureCast

The rise of Covid-19 has forced cultural and arts organisations around the world to rapidly move their events online. In response to that, we have developed an easy to use, cloud-based paywall application to control access to your online video content.

It’s called CultureCast and it lets you control who has access to your video content, and sell access on a pay-per-view basis. It’s a standalone application, and you don’t have to be an existing Made client to start using it!

Tessitura integration

CultureCast is currently available exclusively for Tessitura users, because it integrates with the Tessitura REST API to authenticate and identify users. You can sell video access via ticket purchases in Tessitura, and control access to videos via Tessitura constituency codes.


You can control the look and feel of CultureCast to match your brand, for no extra cost. You can also mask the URL with a dedicated subdomain, for a one-off setup fee.

Video embed

You can embed video content into CultureCast from any online video provider that supports oEmbed. Vimeo and Brightcove are supported out of the box, and we’re working on built-in support for further providers.
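Under the hood, oEmbed is just an HTTP request to a provider’s endpoint, which answers with JSON describing the embed. A minimal sketch of building such a request (in Python purely for illustration; the Vimeo endpoint shown is an example of a public oEmbed endpoint, and actually fetching the URL is left to the caller):

```python
import urllib.parse

def build_oembed_url(endpoint, video_url):
    """Build an oEmbed request URL. The provider responds with JSON
    whose 'html' field contains the embed markup for the video."""
    query = urllib.parse.urlencode({"url": video_url, "format": "json"})
    return f"{endpoint}?{query}"

# Example (Vimeo's public oEmbed endpoint):
# build_oembed_url("https://vimeo.com/api/oembed.json",
#                  "https://vimeo.com/76979871")
```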

Stripe payments

Customer payments are straightforward. You can process in-app payments via Stripe (with support for Apple Pay and other mobile payment methods), with a Stripe account in your name and with the money automatically paid out to you on a rolling basis. You can also control the look and feel of the confirmation emails within the Stripe dashboard.


There are no set-up costs to start using CultureCast.

You simply pay a service fee, set as a percentage of the revenue taken through the app. You are also responsible for paying Stripe payment processing fees, and for the cost of your video hosting platform. You don’t need to pay anything to get started, although there is a minimum charge once you reach over 1,000 video views per month via CultureCast.

This is just the beginning: we’re excited to continue development of CultureCast over the coming months and for that we need your feedback - so please try it out and let us know what you think!

If you’d like more details, check out our website or drop us an email.

June 28, 2020

These are the questions I ask myself to determine if a client will be a good fit to work with me.

  • Do they have a sufficient budget?
  • Is it a project I can add value to?
  • Do they have reasonable expectations?
  • Do they understand what’s involved (both their end and my end)?
  • Will this help me grow my skills and experience?
  • Will this help me with future sales?
  • Do I think this business/idea will succeed?
  • Do I have enough time to do my best work under their deadline?
  • Do they have clear goals for the project?
  • How organised are they?

This is one of the checklists I use to run my business.

If you have feedback, get in touch – I’d love to hear it.

The following are the checklists I use to run my business.

Each checklist has a few simple criteria:

  • It should be clear what the checklist is for and when it should be used
  • The checklist should be as short as possible
  • The wording should be simple and precise

Why use a checklist? #

I’ve previously written about the importance of a checklist. I summarised by saying:

The problem is that our jobs are too complex to be carried out by memory alone. The humble checklist provides defence against our own limitations in more tasks than we might realise.

David Perell in The Checklist Habit:

The less you have to think, the better. When you’re fully dependent on your memory, consider making a checklist. Don’t under-estimate the compounding benefits of fixing small operational leaks. Externalizing your processes is the best way to see them with clear eyes. Inefficiencies stand out like a lone tree on a desert plateau. Once the checklist is made, almost anybody in the organization can perform that operation.

The Checklist Manifesto is a great book on the subject.

Pre-project checklists #

Minimum level of engagement
Questions to determine if a client will be a good fit.

Questions to ask before a client engagement
Questions to run through before engaging on a project.

Project Starter Checklist (coming soon)
Things required before starting a project.

Website launch #

Pre-launch Checklist (coming soon)
Run through this checklist before launching a website

Post-launch Checklist (coming soon)
Run through this checklist after launching a website.

End of project #

Project Feedback (coming soon)
Questions once the project has been completed.

Testimonials (coming soon)
A series of questions to ask to get good testimonials.

Reviews #

Weekly review

Quarterly Review (coming soon)

Annual Review (coming soon)

June 27, 2020

WWDC2020 was the first WWDC I’ve been to in, what, five years? Whenever I last went, it was in San Francisco. There’s no way I could’ve got my employer to expense it this year had I needed to go to San Jose, nor would I have personally been able to cover the costs of physically going. So I wouldn’t even have entered the ticket lottery.

Lots of people are saying that it’s “not the same” as physically being there, and that’s true. It’s much more accessible than physically being there.

For the last couple at least, Apple have done a great job of putting the presentations on the developer site with very short lag. But remotely attending has still felt like being the remote worker on an office-based team: you know you’re missing most of the conversations and decisions.

This time, everything is remote-first: conversations happen on social media, or in the watch party sites, or wherever your community is. The bundling of sessions released once per day means there’s less of a time zone penalty to being in the UK, NZ, or India than in California or Washington state. Any of us who participated are as much of a WWDC attendee as those within a few blocks of the McEnery or Moscone convention centres.

June 26, 2020

Reading List 261 by Bruce Lawson (@brucel)

June 25, 2020

June 24, 2020

I sometimes get asked to review, or “comment on”, the architecture for an app. Often the app already exists, and the architecture documentation consists of nothing more than the source code and the folder structure. Sometimes the app doesn’t exist, and the architecture is a collection of hopes and dreams expressed on a whiteboard. Very, very rarely, both exist.

To effectively review an architecture and make recommendations for improving it, we need much more information than that. We need to know what we’re aiming for, so that we can tell whether the architecture is going to support or hinder those goals.

We start by asking about the functional requirements of the application. Who is using this, what are they using it for, how do they do that? Does the architecture make it easy for the programmers to implement those things, for the testers to validate those things, for whoever deploys and maintains the software to provide those things?

If you see an “architecture” that promotes the choice of technical implementation pattern over the functionality of the system, it’s getting in the way. I don’t need to know that you have three folders of Models, Views and Controllers, or of Actions, Components, and Containers. I need to know that you let people book children’s weather forecasters for wild atmospheric physics parties.

We can say the same about non-functional requirements. When I ask what the architecture is supposed to be for, a frequent response is “we need it to scale”. How? Do you want to scale the development team? By doing more things in parallel, or by doing the same things faster, or by requiring more people to achieve the same results? Hold on, did you want to scale the team up or down?

Or did you want to scale the number of concurrent users? Have you tried… y’know, selling the product to people? Many startups in particular need to learn that a CRM is a better tool for scaling their web app than Kubernetes. But anyway, I digress. If you’ve got a plan for getting to a million users, and it’s a realistic plan, does your architecture allow you to do that? Does it show us how to keep that property as we make changes?

The architecture should protect and promote the important things you want your system to do. It should make it easy to do the right thing, and difficult to regress. It should prevent going off into the weeds, or doing work that counters those goals.

That means that the system’s architecture isn’t really about the technology, it’s about the goals. If you show me a list of npm packages in response to questions about your architecture, you’re not showing me your architecture. Yes, I could build your system using those technologies. But I could probably build anything else, too.

June 22, 2020

I had planned to add anchor links to headings on this site when I came across On Adding IDs to Headings by Chris Coyier.

The Gutenberg heading block allows us to manually add IDs for each heading but, like Chris, I want IDs to be automatically added to each heading.

To see an example, navigate to my /uses page and hover over a title. You’ll see a # that you can click on which will jump directly to that heading.

Chris initially used jQuery to add IDs to headings but after that stopped working he started using a plugin called Add Anchor Links.

The plugin does the job but as someone who 1) likes to have as few plugins as possible installed and 2) loves to tinker, I thought I’d try and come up with a solution that doesn’t involve installing a plugin.

Here’s the code:

add_filter( 'render_block', 'origin_add_id_to_heading_block', 10, 2 );

function origin_add_id_to_heading_block( $block_content, $block ) {
	if ( 'core/heading' == $block['blockName'] ) {
		$block_content = preg_replace_callback( "#<(h[1-6])>(.*?)</\\1>#", "origin_add_id_to_heading", $block_content );
	}
	return $block_content;
}

function origin_add_id_to_heading( $match ) {
	list( , $heading, $title ) = $match;
	$id = sanitize_title_with_dashes( $title );
	$anchor = '<a href="#' . $id . '" aria-hidden="true" class="anchor" id="' . $id . '" title="Anchor for ' . $title . '">#</a>';
	return "<$heading id='$id'>$title $anchor</$heading>";
}
To get this working, you’ll need to add this code to functions.php in your theme.

Here’s how the code snippet above works:

  • We’re using the render_block filter to modify the markup of the block before it is rendered
  • The render_block filter calls the origin_add_id_to_heading_block function which then checks to make sure we’re only updating the heading block
  • We then use the preg_replace_callback function to detect h1, h2, h3, etc. using a regex, before calling the origin_add_id_to_heading function
  • The origin_add_id_to_heading function creates the markup we need: it creates the ID by taking the heading text and replacing spaces with dashes using WordPress’s sanitize_title_with_dashes function, then returns the markup we need
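To make the ID-generation step concrete, here’s a rough re-implementation of the slug logic in Python (illustrative only — WordPress’s sanitize_title_with_dashes handles many more edge cases, such as HTML entities and accented characters):

```python
import re

def slugify(title):
    """Rough approximation of WordPress's sanitize_title_with_dashes():
    lower-case the heading text, drop punctuation, and join the remaining
    words with dashes."""
    title = title.lower()
    title = re.sub(r"[^a-z0-9\s-]", "", title)   # strip punctuation
    return re.sub(r"[\s-]+", "-", title).strip("-")

print(slugify("Why use a checklist?"))  # -> why-use-a-checklist
```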

Let me know if you have any suggestions or improvements.

Update (24/06/20): This solution only works on posts/pages that use Gutenberg. The Add Anchor Links plugin works on all content, so use that plugin if not all of your content has been converted to Gutenberg.

Live and on-demand video integration for the Detroit Symphony Orchestra

Even before a global pandemic forced arts audiences to seek out their cultural activities from their homes, Detroit Symphony Orchestra had an accessibility-first mission to bring world class programming to audiences locally and around the world. When Made took on the challenge to create a web experience worthy of that mission, one of the key features was integrating with their live and on-demand video programming.

Prior to the planned launch of the new website, the orchestra was switching to Vimeo OTT as a host for their DSO Live, DSO Replay, and DSO Classroom Edition channels. Some videos were to be open to the public, while others required viewers to make an annual donation to the orchestra before they could access them.

To accomplish this within the new website, we built a viewing experience using the Vimeo OTT API, paired with the Tessitura API for user credentials and access. When a user on the site goes to watch a DSO Replay video, they are prompted to log in or make a donation to continue. Once they’ve been successfully identified as a donor, they then have access to the full back catalogue of DSO Replay content.

In order for Vimeo OTT to grant access to videos directly, accounts need to be created on their platform, based on the user’s email address. In order for Tessitura to grant video access, a specific constituency code must be active.


The first part of this implementation is to add the constituency when someone makes a contribution online. This allows them to watch the video directly and immediately through the DSO website. Additionally, an account is created in Vimeo OTT with the customer’s email address, so that future visits directly to the hosted platform do not require a website login. This also means that these customer credentials will work for any non-web deployments of Vimeo OTT, including via TV apps.


The other piece of this puzzle was how to provide access to customers who donate in person, over the phone, or by mail. We worked with the Vimeo team and DSO’s technology team on a scheduled SQL job, creating credentialed accounts within the OTT platform for any new offline donors.

Collectively, these pieces created a seamless integration with a third-party OTT system, which gives donors the ability to view videos either on the DSO site or on the hosted platform. Free versus paid access at the video level is controlled entirely in Vimeo, while which specific users can access paid content is controlled in Tessitura.
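The donation-to-access flow described above can be sketched roughly as follows. This is a hypothetical illustration: the function names, the VIDEO_DONOR constituency code, and the API wrapper objects are all invented for the sketch, not taken from the Tessitura or Vimeo OTT APIs.

```python
# Hypothetical sketch of the donation -> video-access flow. All names here
# are invented for illustration; the real integration talks to the
# Tessitura and Vimeo OTT REST APIs.

DONOR_CODE = "VIDEO_DONOR"  # assumed constituency code granting Replay access

def handle_online_donation(tessitura, vimeo_ott, constituent_id, email):
    """On an online donation, grant video access in both systems."""
    # Tag the donor in Tessitura so the website access check passes.
    tessitura.add_constituency(constituent_id, DONOR_CODE)
    # Mirror an account into Vimeo OTT keyed on the email address, so
    # visits direct to the hosted platform (or TV apps) also work.
    vimeo_ott.create_customer(email)

def can_watch_replay(tessitura, constituent_id):
    """Access check when a logged-in user opens a DSO Replay video."""
    return DONOR_CODE in tessitura.get_constituencies(constituent_id)
```

The scheduled SQL job for offline donors would call the same two granting steps in bulk for any new donor records.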

Like many companies, stickee has traded office antics for lockdown life to keep everyone safe during the pandemic. While we may have swapped out desks...

The post stickee in lockdown appeared first on stickee.

June 18, 2020

My friend Andy Henson wrote an excellent primer on Roam Research, a tool I’ve been using a lot recently.

With Roam, you have to break some of your built-in expectations and assumptions. The atomic unit of a note is no longer a page. If you have only been used to the concept of a note being a page of text with a title, filled with one or more paragraphs, you may be immediately put off by the bullets. This is part of its secret sauce. Each bullet, or block in Roam’s parlance, is its own thing. Typically a complete thought. It’s a note in itself. You can indent blocks infinitely, much like an outliner. But then, each word or phrase within it can be turned into a reference and given its own page. At which point, you can add more blocks of thoughts about that idea or concept, and collect further references to it.

I’ve had a sneak peek of Andy’s upcoming Roam Course and it’s going to be incredibly useful. If you have any interest in writing to improve your thinking, sign up to be notified when it launches.

June 16, 2020

Forearmed by Graham Lee

In researching my piece for the upcoming de Programmatica Ipsum issue on cloud computing, I had thoughts about Apple, arm, and any upcoming transition that didn’t fit in the context of that article. So here’s a different post, about that. I’ve worked at both companies so don’t have a neutral point of view, but I’ve also been in bits of the companies far enough from their missions that I don’t have any insider insight into this transition.

So, let’s dismiss the Mac transition part of this thread straight away: it probably will happen, for the same reasons that the PowerPC->Intel transition happened (the things Apple needed from the parts – mostly lower power consumption for similar performance – weren’t the same things that the suppliers needed, and the business Apple brought wasn’t big enough to make the suppliers change their mind), and it probably will be easier, because Apple put the groundwork in to make third-party devs aware of porting issues during the Intel transition, and encourage devs to use high-level frameworks and languages.

Whether you think the point is convergence (now your Catalyst apps are literally iPad IPAs that run on a Mac), or cost (Apple buy arm chipset licences, but have to buy whole chips from Intel, and don’t get the discount everybody else does for sticking the Intel Inside holographic sticker on the case), or just “betterer”, the arm CPUs can certainly provide. On the “betterer” argument, I don’t predict that will be a straightforward case of tomorrow’s arm Mac being faster than today’s Intel Mac. Partly because compilers: gcc certainly has better optimisations on Intel and I wouldn’t be surprised to find that llvm does too. Partly because workload, as iOS/watchOS/tvOS all keep the platform on guard rails that make the energy use/computing need expectations more predictable, and those guard rails are only slowly being added to macOS now.

On the other hand, it’s long been the case that computers have controller chips in for interfacing with the hardware, and that those chips are often things that could be considered CPUs for systems in their own rights. Your mac certainly already has arm chips in if you bought it recently: you know what’s running the OS for the touch bar? Or the T2 security chip? (Actually, if you have an off-brand PC with an Intel-compatible-but-not-Intel chip, that’s probably an arm core running the x86-64 instructions in microcode). If you beef one of those up so that it runs the OS too, then take a whole bunch of other chips and circuits off the board, you both reduce the power consumption and put more space in for batteries. And Apple do love talking battery life when they sell you a computer.

OK, so that’s the Apple transition done. But now back to arm. They’re a great business, and they’ve only been expanding of late, but it’s currently coming at a cost. We don’t have up to date financial information on Arm Holdings themselves since they went private, but that year they lost ¥31bn (I think about $300M). Since then, their corporate parent Softbank Group has been doing well, but massive losses from their Vision Fund have led to questions about their direction and particularly Masayoshi Son’s judgement and vision.

arm (that’s how they style it) have, mostly through their partner network, fingers in many computing pies. From the server and supercomputer chips from manufacturers like Marvell to smart lightbulbs powered by Nordic Semiconductor, arm have tentacles everywhere. But their current interest is squarely on the IoT side. When I worked in their HPC group in 2017, Simon Segars described their traditional chip IP business as the “legacy engine” that would fund the “disruptive unit” he was really interested in, the new Internet of Things Business Unit. Now arm’s mission is to “enable a trillion connected devices”, and you can bet there isn’t a world market for a trillion Macs or Mac-like computers.

If some random software engineer on the internet can work this out, you can bet Apple’s exec team have worked it out, too. It seems apparent that (assuming it happens) Apple are transitioning the Mac platform to arm at the start of the (long, slow) exit arm are making from the traditional computing market, and still chose to do it. This suggests they have something else in mind (after all, Apple already designs its chips in-house, so why not have them design RISC-V or MIPS chips, or something entirely different?). A quick timetable of Mac CPU instruction sets:

  • m68k 1984 – 1996, 12 years (I exclude the Lisa)
  • ppc 1994 – 2006, 12 years
  • x86 and x86-64 2006 – 2021?, 15 years?
  • arm 2020? – 203x?, 1x years?

I think it likely that the Mac will wind down with arm’s interest in traditional computing, and therefore arm will be the last ever CPU/SoC architecture for computers called Macs. That the plan for the next decade is that Apple is still at the centre of a services-based, privacy-focused consumer electronics experience, but that what they sell you is not a computer.

June 13, 2020

Reading List 260 by Bruce Lawson (@brucel)

June 07, 2020

Bella’s in the witch elm
Regal in a taffeta gown
Queen of her forest realm
Autumn leaves form her crown

Bella’s in the witch elm
She won’t say how she came there
She’s still as seasons turn
And winter winds wind her hair

Beneath the cries of the weeping willow
You can hear the sighs from the witch elm’s hollow

Bella’s in the witch elm
wearing her gold wedding ring
She’s silent at the coming of Spring
At the maypole, children dance and sing

Bella’s in the witch elm
Wearing just one summer shoe
She will never tell
Someone put her there – but who?

Who put Bella where she can’t see
Who put Bella in the Witch Elm Tree? Who?

Words and music © Bruce Lawson 2020, all rights reserved.
Production, drums and bass guitar: Shez of @silverlakemusic.

June 05, 2020

June 04, 2020

I’m not outside by Stuart Langridge (@sil)

I’m not outside.

Right now, a mass of people are in Centenary Square in Birmingham.

They’ll currently be chanting. Then there’s music and speeches and poetry and a lie-down. I’m not there. I wish I was there.

This is part of the Black Lives Matter protests going on around the world, because again a black man was murdered by police. His name was George Floyd. That was in Minneapolis; a couple of months ago Breonna Taylor, a black woman, was shot eight times by police in Louisville. Here in the UK, black and minority-ethnic people die in police custody twice as often as others.

It’s 31 years to the day since the Tiananmen Square protests in China in which a man stood in front of a tank, and then he disappeared. Nobody even knows his name, or what happened to him.

The protests in Birmingham today won’t miss one individual voice, mine. And the world doesn’t need the opinion of one more white guy on what should be done about all this, about the world crashing down around our ears; better that I listen and support. I can’t go outside, because I’m immunocompromised. The government seems to flip-flop on whether it’s OK for shielding people to go out or not, but in a world where there are more UK deaths from the virus than the rest of the EU put together, where as of today nearly forty thousand people have died in this country — not been inconvenienced, not caught the flu and recovered, died, a count over half that of UK civilian deaths in World War 2 except this happened in half a year — in that world, I’m frightened of being in a large crowd, masks and social distancing or not. But the crowd are right. The city is right. When some Birmingham council worker painted over an I Can’t Breathe emblem, causing the council to claim there was no political motive behind that (tip for you: I’m sure there’s no council policy to do it, and they’ve unreservedly apologised, but whichever worker did it sure as hell had a political motive), that emblem was back in 24 hours, and in three other places around the city too. Good one, Birmingham.

Protestors in Centenary Square today

There are apparently two thousand of you. I can hear the noise from the window, and it’s wonderful. Shout for me too. Wish I could be there with you.

June 03, 2020

Here at stickee we are proud of the variety of specialised talent packed into our office every day. This week, we are pleased to welcome...

The post Mike Wilson joins the stickee team appeared first on stickee.

June 02, 2020

June 01, 2020

I enjoyed CGP Grey’s Lockdown Productivity video. I found the reminder about keeping strict boundaries on physical spaces particularly useful. And I love the sentiment: come back better than before.

May 31, 2020

AWS Certification Progress

In March, I achieved the AWS Cloud Practitioner certification. In May, I decided to go for the Associate Solutions Architect certification. It’s a lot more in depth than Cloud Practitioner, but I’m looking forward to coming out the other side.

To prepare, I bought the AWS Certified Solutions Architect - Associate 2020 course on Udemy. Because it was on sale, and I’m cheap.

What Motivates Me in My Job?

One of the tasks in Apprenticeship Patterns was to come up with 15 things that motivate you in your career. In no particular order:

  1. Making things, and doing it well.
  2. The money is good. Might be tacky to say it out loud, but I’m not complaining.
  3. Working on things that will hopefully have a positive impact on the world.
  4. I’m very lucky to have found work that is very close to play.
  5. Working on something as a team.
  6. Getting the chance to teach people things and elevate them to your level.
  7. Creating an ordered system out of a mess of ideas and requirements.
  8. The cool programmer aesthetic.
  9. Being able to (largely) plan out my own days, and come up with my own solutions to things.
  10. Flexing creative muscles.
  11. The tools! Software is cool.
  12. The hardware.
  13. Being able to surround myself with people much smarter than I am.
  14. Having an opportunity to write regularly.
  15. There’s always something to learn.

And then it asked for five more.

  1. It’s a respectable profession.
  2. Software is unlikely to be replaced or eliminated any time soon.
  3. The community has some amazing people in it.
  4. I get to contribute positively to my part of the community.
  5. I have flexibility to work remotely, or at odd hours.

Things I Read

May 30, 2020

Amiga-Smalltalk now has continuous integration. I don’t know if it’s the first Amiga program ever to have CI, but it’s definitely the first I know of. Let me tell you about it.

I’ve long been using AROS, the AROS Research Operating System (formerly the A stood for Amiga) as a convenient place to (manually) test Amiga-Smalltalk. AROS will boot natively on PC but can also be “hosted” as a user-space process on Linux, Windows or macOS. So it’s handy to build a program like Amiga-Smalltalk in the AROS source tree, then launch AROS and check that my program works properly. Because AROS is source compatible with Amiga OS (and binary compatible too, on m68k), I can be confident that things work on real Amigas.

My original plan for Amiga-Smalltalk was to build a Docker image containing AROS, add my test program to S:User-startup (the script on Amiga that runs at the end of the OS boot sequence), then look to see how it fared. But when I discussed it on the aros-exec forums, AROS developer deadwood had a better idea.

He’s created AxRuntime, a library that lets Linux processes access the AROS APIs directly without having to be hosted in AROS as a sub-OS. So that’s what I’m using. You can look at my Github workflow to see how it works, but in a nutshell:

  1. check out source.
  2. install libaxrt. I’ve checked the packages into ./vendor (along with a patched library, which fixes clean termination of the Amiga process) to avoid making network calls in my CI. The upstream source is deadwood’s repo.
  3. launch Xvfb. This lets the process run “headless” on the CI box.
  4. build and run ast_tests, my test runner. The Makefile shows how it’s compiled.

That’s it! All there is to running your Amiga binaries in CI.
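The four steps above map naturally onto a small GitHub Actions job. This is only a sketch of the shape such a workflow might take, not the real file (the workflow name, package paths and make targets here are assumptions):

```yaml
name: Amiga-Smalltalk CI
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # 1. check out source
      - uses: actions/checkout@v2
      # 2. install the vendored AxRuntime packages (no network calls)
      - run: sudo dpkg -i ./vendor/*.deb
      # 3. start a headless X server for the Amiga process to talk to
      - run: Xvfb :99 &
      # 4. build and run the test runner
      - run: make ast_tests && DISPLAY=:99 ./ast_tests
```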

May 29, 2020

Reading List 259 by Bruce Lawson (@brucel)

May 28, 2020

As you may have noticed, I moved this site to new fangled static site generator Eleventy, using the Hylia starter kit as a base.

By default this uses Netlify, but I wasn't interested in the 3rd-party CMS bit, so I opted for a simple GitHub Action for deploying. There's an existing action available for plain Eleventy apps over here. However, it doesn't include the Sass build part of the Hylia setup that's part of its npm scripts.

A quick bit of hacking about with one of the standard node actions and I built the following action to deploy instead:

name: Hylia Build
on: [push]

jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@master
      - uses: actions/setup-node@v1
        with:
          node-version: '10.x'
      - run: npm install
      - run: npm run production
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v1.1.0
        env:
          # v1 of this action is configured via env vars; adjust the
          # deploy key and output directory for your repository.
          ACTIONS_DEPLOY_KEY: ${{ secrets.ACTIONS_DEPLOY_KEY }}
          PUBLISH_BRANCH: gh-pages
          PUBLISH_DIR: ./dist

Which hopefully is useful to somebody else.

Oh, and you'll need to add a passthrough copy of the CNAME file to the build if you are using a custom domain name. Add the following to your Eleventy config:

eleventyConfig.addPassthroughCopy('CNAME');

And add your domain's CNAME file to the main source. Otherwise every time you push it'll get removed from the GitHub Pages config of the output.

May 27, 2020

Mature Optimization by Graham Lee

This comment on why NetNewsWire is fast brings up one of the famous tropes of computer science:

The line between [performance considerations pervading software design] and premature optimization isn’t clearly defined.

If only someone had written a whole paper about premature optimization, we’d have a bit more information. …wait, they did! The idea that premature optimization is the root of all evil comes from Donald Knuth’s Structured Programming with go to Statements. Knuth attributes it to C.A.R. Hoare in The Errors of TeX, though Hoare denied that he had coined the phrase.

Anyway, the pithy phrase “premature optimization is the root of all evil”, which has been interpreted as “optimization before the thing is already running too slow is to be avoided”, actually appears in this context:

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, [they] will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgements about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.

Indeed this whole subsection on efficiency opens with Knuth explaining that he does put a lot of effort into optimizing the critical parts of his code.

I now look with an extremely jaundiced eye at every operation in a critical inner loop, seeking to modify my program and data structure […] so that some of the operations can be eliminated. The reasons for this approach are that: a) it doesn’t take long, since the inner loop is short; b) the payoff is real; and c) I can then afford to be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged. Tools are being developed to make this critical-loop identification job easy (see for example [Dan Ingalls, The execution time profile as a programming tool] and [E. H. Satterthwaite, Debugging tools for high level languages]).

So yes, optimize your code, but optimize the bits that benefit from optimization. NetNewsWire is a Mac application, and Apple’s own documentation on improving your app’s performance describes an iterative approach for finding underperforming characteristics (note: not “what is next to optimize”, but “what are users encountering that needs improvement”), making changes, and verifying that the changes led to an improvement:

Plan and implement performance improvements by approaching the problem scientifically:

  1. Gather information about the problems your users are seeing.
  2. Measure your app’s behavior to find the causes of the problems.
  3. Plan one change to improve the situation.
  4. Implement the change.
  5. Observe whether the app’s performance improves.

I doubt that this post will change the “any optimization is the root of all evil” narrative, because there isn’t a similarly-trite epithet for the “optimize the parts that need it” school of thought, but at least I’ve tried.

May 26, 2020

This post is to encourage you to go and play a museum-themed online Escape Game I built. So, you can skip the rest of this article and head straight here to play! Now, you may have already seen that I have a brand new tutorial which shows you how to create your own online Audio […]

An interesting writeup by Brian Kardell on web engine diversity and ecosystem health, in which he puts forward a thesis that we currently have the most healthy and open web ecosystem ever, because we’ve got three major rendering engines (WebKit, Blink, and Gecko), they’re all cross-platform, and they’re all open source. This is, I think, true. Brian’s argument is that this paints a better picture of the web than a lot of the doom-saying we get about how there are only a few large companies in control of the web. This is… well, I think there’s truth to both sides of that. Brian’s right, and what he says is often overlooked. But I don’t think it’s the whole story.

You see, diversity of rendering engines isn’t actually in itself the point. What’s really important is diversity of influence: who has the ability to make decisions which shape the web in particular ways, and do they make those decisions for good reasons or not so good? Historically, when each company had one browser, and each browser had its own rendering engine, these three layers were good proxies for one another: if one company’s browser achieved a lot of dominance, then that automatically meant dominance for that browser’s rendering engine, and also for that browser’s creator. Each was isolated; a separate codebase with separate developers and separate strategic priorities. Now, though, as Brian says, that’s not the case. Basically every device that can see the web and isn’t a desktop computer and isn’t explicitly running Chrome is a WebKit browser; it’s not just “iOS Safari’s engine”. A whole bunch of long-tail browsers are essentially a rethemed Chrome and thus Blink: Brave and Edge are high up among them.

However, engines being open source doesn’t change who can influence the direction; it just allows others to contribute to the implementation. Pick something uncontroversial which seems like a good idea: say, AVIF image format support, which at time of writing (May 2020) has no support in browsers yet. (Firefox has an in-progress implementation.) I don’t think anyone particularly objects to this format; it’s just not at the top of anyone’s list yet. So, if you were mad keen on AVIF support being in browsers everywhere, then you’re in a really good position to make that happen right now, and this is exactly the benefit of having an open ecosystem. You could build that support for Gecko, WebKit, and Blink, contribute it upstream, and (assuming you didn’t do anything weird), it’d get accepted. If you can’t build that yourself then you ring up a firm, such as Igalia, whose raison d’etre is doing exactly this sort of thing and they write it for you in exchange for payment of some kind. Hooray! We’ve basically never been in this position before: currently, for the first time in the history of the web, a dedicated outsider can contribute to essentially every browser available. How good is that? Very good, is how good it is.

Obviously, this only applies to things that everyone agrees on. If you show up with a patchset that provides support for the <stuart> element, you will be told: go away and get this standardised first. And that’s absolutely correct.

But it doesn’t let you influence the strategic direction, and this is where the notions of diversity in rendering engines and diversity in influence begins to break down. If you show up to the Blink repository with a patchset that wires an adblocker directly into the rendering engine, it is, frankly, not gonna show up in Chrome. If you go to WebKit with a complete implementation of service worker support, or web payments, it’s not gonna show up in iOS Safari. The companies who make the browsers maintain private forks of the open codebase, into which they add proprietary things and from which they remove open source things they don’t want. It’s not actually clear to me whether such changes would even be accepted into the open source codebases or whether they’d be blocked by the companies who are the primary sponsors of those open source codebases, but leave that to one side. The key point here is that the open ecosystem is only actually open to non-controversial change. The ability to make, or to refuse, controversial changes is reserved to the major browser vendors alone: they can make changes and don’t have to ask your permission, and you’re not in the same position. And sure, that’s how the world works, and there’s an awful lot of ingratitude out there from people who demand that large companies dedicate billions of pounds to a project and then have limited say over what it’s spent on, which is pretty galling from time to time.

Brian references Jeremy Keith’s Unity in which Jeremy says: “But then I think of situations where complete unity isn’t necessarily a good thing. Take political systems, for example. If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!” This is true, but again the nuance is different, because what this is about is influence. If one party wins a large majority, then it doesn’t matter whether they’re opposed by one other party or fifty, because they don’t have to listen to the opposition. (And Jeremy makes this point.) This was the problem with Internet Explorer: it was dominant enough that MS didn’t have to give a damn what anyone else thought, and so they didn’t. Now, this problem does eventually correct itself in both browsers and political systems, but it takes an awfully long time; a dominant thing has a lot of inertia, and explaining to a peasant in 250AD that the Roman Empire will go away eventually is about as useful as explaining to a web developer in 2000AD that CSS is coming soon, i.e., cold comfort at best and double-plus-frustrating at worst.

So, a qualified hooray, I suppose. I concur with Brian that “things are better and healthier because we continue to find better ways to work together. And when we do, everyone does better.” There is a bunch of stuff that is uncontroversial, and does make the web better, and it is wonderful that we’re not limited to begging browser vendors to care about it to get it. But I think that definition excludes a bunch of “things” that we’re not allowed, for reasons we can only speculate about.

May 19, 2020

Kindness by Ben Paddock (@_pads)

It’s mental health awareness week and the theme is kindness.  

One of my more recent fond stories of kindness was a trip to Colchester Zoo. There was a long queue for tickets, but my friends all had passes to get in. I would have been waiting on my own for quite some time, but a person at the front of the queue bought me a ticket so I could skip it (I did pay him). I don’t know the guy’s name, but that made my day.

Just one small, random act of kindness.

May 15, 2020

Reading List 258 by Bruce Lawson (@brucel)

May 14, 2020

Episode 6 of the SICPers podcast is over on Youtube. I introduce a C compiler for the Sinclair ZX Spectrum. For American readers, that’s the Timex Sinclair TS2068.

Remediating sites by Stuart Langridge (@sil)

Sometimes you’ll find yourself doing a job where you need to make alterations to a web page that already exists, and where you can’t change the HTML, so your job is to write some bits of JavaScript to poke at the page, add some attributes and some event handlers, maybe move some things around. This sort of thing comes up a lot with accessibility remediations, but maybe you’re working with an ancient CMS where changing the templates is a no-no, or you’re plugging in some after-the-fact support into a site that can’t be changed without a big approval process but adding a script element is allowed. So you write a script, no worries. How do you test it?

Well, one way is to actually do it: we assume that the way your work will eventually be deployed is that you’ll give the owners a script file, they’ll upload it somehow to the site and add a script element that loads it. That’s likely to be a very slow and cumbersome process, though (if it wasn’t, then you wouldn’t need to be fixing the site by poking it with JS, would you? you’d just fix the HTML as God intended web developers to do) and so there ought to be a better way. A potential better way is to have them add a script element that points at your script on some other server, so you can iterate on that and then eventually send over the finished version when done. But that’s still pretty annoying, and it means putting that on the live server (“a ‘staging’ server? no, I don’t think we’ve got one of those”) and then having something in your script which only runs it if it’s you testing. Alternatively, you might download the HTML for the page with Save Page As and grab all the dependencies. But that never works quite right, does it?

The way I do this is with Greasemonkey. Greasemonkey, or its Chrome-ish cousin Tampermonkey, has been around forever, and it lets you write custom scripts which it then takes care of loading for you when you visit a specified URL. Great stuff: write your thing as a Greasemonkey script to test it and then when you’re happy, send the script file to the client and you’re done.

There is a little nuance here, though. A Greasemonkey script isn’t exactly the same as a script in the page. This is partially because of browser security restrictions, and partially because GM scripts have certain magic privileged access that scripts in the page don’t have. What this means is that the Greasemonkey script environment is quite sandboxed away; it doesn’t have direct access to stuff in the page, and stuff in the page doesn’t have direct access to it (in the early days, there were security problems where in-page script walked its way back up the object tree until it got hold of one of the magic Greasemonkey objects and then used that to do all sorts of naughty privileged things that it shouldn’t have been able to, and so it all got rigorously sandboxed away to prevent that). So, if the page loads jQuery, say, and you want to use that, then you can’t, because your script is in its own little world with a peephole to the page, and getting hold of in-page objects is awkward. Obviously, your remediation script can’t be relying on any of these magic GM privileges (because it won’t have them when it’s deployed for real), so you don’t intend to use them, but because GM doesn’t know that, it still isolates your script away. Fortunately, there’s a neat little trick to have the best of both worlds; to create the script in GM to make it easy to test and iterate, but have the script run in the context of the page so it gets the environment it expects.

What you do is, put all your code in a function, stringify it, and then push that string into an in-page script. Like this:

// ==UserScript==
// @name     Stuart's GM remediation script
// @version  1
// @grant    none
// ==/UserScript==

function main() {
    /* All your code goes below here... */

    /* ...and above here. */
}

let script = document.createElement("script");
script.textContent = "(" + main.toString() + ")();";
document.body.appendChild(script);

That’s it. Your code is defined in Greasemonkey, but it’s actually executed as though it were a script element in the page. You should basically pretend that that code doesn’t exist and just write whatever you planned to inside the main() function. You can define other functions, add event handlers, whatever you fancy. This is a neat trick; I’m not sure if I invented it or picked it up from somewhere else years ago (and if someone knows, tell me and I’ll happily link to whoever invented it), but it’s really useful; you build the remediation script, doing whatever you want it to do, and then when you’re happy with it, copy whatever’s inside the main() function to a new file called whatever.js and send that to the client, and tell them: upload this to your creaky old CMS and then link to it with a script element. Job done. Easier for you, easier for them!
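Stripped of the DOM injection, the wrapping trick itself is tiny. Here is a runnable sketch of just the stringify-and-wrap step, where main() is a stand-in for whatever remediation code you would really write:

```javascript
// Demonstration of the stringify-and-wrap trick, with the DOM parts omitted.
// main() here is a placeholder for your real remediation code.
function main() {
    return "remediated";
}

// Serialise the function's source and wrap it in an immediately-invoked
// function expression, exactly as the script does before assigning the
// string to script.textContent.
const wrapped = "(" + main.toString() + ")();";

// Evaluating the string behaves like calling main() directly.
if (eval(wrapped) !== "remediated") throw new Error("wrapping failed");
```

In the browser, that string is handed to a script element instead of eval, so the code runs in the page's own context rather than Greasemonkey's sandbox.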

May 12, 2020

The AWS Certified Cloud Practitioner examination is intended for individuals who have the knowledge and skills necessary to effectively demonstrate an overall understanding of the AWS Cloud, independent of specific technical roles addressed by other AWS Certifications.

I had previous experience with AWS services such as EC2 and S3 from a previous role, but when I switched jobs in June my usage of AWS went through the roof. Call me boring, but I tend to learn best when I have a curriculum to follow, so I figured I’d better get some certifications under my belt. Especially as my company is kind enough to pay for them.

I decided to play it safe and start with the Cloud Practitioner exam, rather than going straight for one of the associate tier certifications.

I used the following resources, and I’m happy enough with them. I imagine the much-recommended courseware from the likes of A Cloud Guru is objectively better, but the Cloud Practitioner exam is straightforward enough that you can probably pass it without shelling out the big bucks for polished video courses.

I can’t remember exactly when I started seriously studying for it, but I’d say it was about three months from deciding to gain the certification to receiving the email with the good news.

Congratulations, You are Now AWS Certified!

Next, I’ve set my eyes on one of the associate-level certifications. Probably Solutions Architect, because it sounds fancy and I’ve seen a few recommendations to start with that one before moving on to the others. Admittedly, most of these recommendations were from companies with a financial interest in people needing the training materials for as many exams as possible, but hey.

I’ve never regretted taking the time to build a strong foundation yet.


  • It took me 2-3 months, your mileage may vary
  • Free or inexpensive resources were more than adequate

May 11, 2020

“Look, it’s perfectly simple. Go back to work, but don’t use public transport. Travel in a chauffeur-driven ministerial limousine. Use common sense – under no circumstances shake hands with people you know to have the virus. Covid-19 appeared in December, which makes it a Sagittarius, so Taureans and Libras should wear masks. But it also appeared in China, which makes it a Rat, so anyone called Mickey or Roland is advised to wear gloves. We’re following the science, so here’s a graph.

Incomprehensible graph

Remember, this is Blighty, not a nation of Moaning Minnies, Fondant Fancies or Coughing Keirs (thanks, Dom!). England expects every interchangeable low-paid worker and old person in a care home to Do Their Duty: let’s just Get Dying Done. God save the Queen, Tally-ho!”

May 08, 2020

It lives! Kinda. Amiga-Smalltalk now runs on Amiga. Along the way I review the K&R book as a tutorial for C programming, mentioning my previous comparison to the Brad Cox and Bjarne Stroustrup books. I also find out how little I know “C”: it turns out I’ve been using GNU C for the last 20 years.

Thanks to Alan Francis for his part in my downfall.

May 06, 2020

Hammer and nails by Stuart Langridge (@sil)

There is a Linux distribution called Gentoo, named after a type of penguin (of course it’s named after a penguin), where installing an app doesn’t mean that you download a working app. Instead, when you say “install this app”, it downloads the source code for that app and then compiles it on your computer. This apparently gives you the freedom to make changes to exactly how that app is built, even as it requires you to have a full set of build tools and compilers and linkers just to get a calculator. I think it’s clear that the world at large has decided that this is not the way to do things, as evidenced by how almost no other OSes take this approach — you download a compiled binary of an app and run it, no compiling involved — but it’s nice that it exists, so that the few people who really want to take this approach can choose to do so.

This sort of thing gets a lot of sneering from people who think that all Linux OSes are like that, that people who run Linux think that it’s about compiling your own kernels and using the Terminal all the time. Why would you want to do that sort of thing, you neckbeard, is the underlying message, and I largely agree with it; to me (and most people) it seems complicated and harder work for the end user, and mostly a waste of time — the small amount of power I get from being able to tweak how a thing is built is vastly outweighed by the annoyance of having to build it if I want it. Now, a Gentoo user doesn’t actually have to know anything about compilation and build tools, of course; it’s all handled quietly and seamlessly by the install command, and the compilers and linkers and build tools are run for you without you needing to understand. But it’s still a bunch of things that my computer has to do that I’m just not interested in it doing, and I imagine you feel the same.

So I find it disappointing that this is how half the web industry have decided to make websites these days.

We don’t give people a website any more: something that already works, just HTML and CSS and JavaScript ready to show them what they want. Instead, we give them the bits from which a website is made and then have them compile it.

Instead of an HTML page, you get some templates and some JSON data and some build tools, and then that compiler runs in your browser and assembles a website out of the component parts. That’s what a “framework” does… it builds the website, in real time, from separate machine-readable pieces, on the user’s computer, every time they visit the website. Just like Gentoo does when you install an app. Sure, you could make the case that the browser is always assembling a website from parts — HTML, CSS, some JS — but this is another step beyond that; we ship a bunch of stuff in a made-up framework and a set of build tools, the build tools assemble HTML and CSS and JavaScript, and then the browser still has to do its bit to build that into a website. Things that should be a long way from the user are now being done much closer to them. And why? “We’re layering optimizations upon optimizations in order to get the SPA-like pattern to fit every use case, and I’m not sure that it is, well, worth it.” says Tom MacWright.

Old joke: someone walks into a cheap-looking hotel and asks for a room. You’ll have to make your own bed, says the receptionist. The visitor agrees, and is told: you’ll find a hammer and nails behind the door.

Almost all of us don’t want this for our native apps, and think it would be ridiculous; why have we decided that our users have to have it on their websites? Web developers: maybe stop insisting that your users compile your apps for you? Or admit that you’ll put them through an experience that you certainly don’t tolerate on your own desktops, where you expect to download an app, not to be forced to compile it every time you run it? You’re not neckbeards… you just demand that your users have to be. You’re neckbeard creators. You want to browse this website? Here’s a hammer and nails.

Unless you run Gentoo already, of course! In which case… compile away.

May 05, 2020

We’re Offering Free 1-Hour Digital Consultations

The Covid-19 pandemic has forced arts, cultural and live performance organisations to shut down, move online, and work in totally new ways. We don’t know how long the crisis will last, and we’re not sure what the world will look like in the immediate aftermath. But we do know that digital technology is going to be part of the strategic toolkit as we come out of this, and that as a result many leaders will be re-thinking how they make use of technology.

If you’re working through the ramifications of what this means for you, and would like some strategic consultation about the issues or challenges you’re facing, read on…

If you are:

  • a CEO, or senior marketing, development or IT leader,
  • And, you work in an arts, cultural, or live performance organisation,

I’d like to offer you a FREE consultation about anything digital-related you’d like to talk through. If you want to brainstorm a digital challenge, talk through your digital strategy for critical feedback, or just have a digital counselling session, this time is for you.

Perhaps you want to talk about making money from online performances or archival footage, develop your digital donor strategy during lockdown, or consider simpler online booking models during the period of social distancing.

About me: I’m Managing Director at Made Media — a leading digital strategy and design agency that works with live performance and cultural institutions across the world. I’ve got many years of experience working with arts and cultural organisations on their use of digital technology, with a particular focus on user experience, ticketing and CRM. I have a background in digital technology and arts management. Prior to joining Made, I worked as Administrative Director at The Place in London – the UK’s leading centre for contemporary dance development — and between 2009 and 2015 I held a series of leadership roles at Spektrix in both the US and UK.

Consultation slots are limited, and can be booked here:

The small-ish print:

  • Sessions will last up to 1 hour and will take place via Zoom.
  • You can come with a specific or general digital challenge, or email me in advance if you prefer (details will be in the confirmation email).
  • Session participation is limited to 2 people per organisation.
  • You don’t have to be a Made client to sign up. If you are a Made client, and would like this sort of consultation, do reach out to me or your account manager and we can set something up outside of this booking process.
  • Sessions are open to leaders at both commercial and nonprofit organisations.
  • Session times are listed in British Summer Time; you can change your time zone as you book to help you match it against your diary. I’ve tried to choose time slots with a good crossover between Europe and all parts of the mainland US. If these time zones don’t work out for you, please get in touch via our website and we’ll try to work something out.

May 01, 2020

Reading List 257 by Bruce Lawson (@brucel)

We’re back to Amiga-Smalltalk today, as the moment when it runs on a real Amiga inches closer. Listen here.

I think I’ve isolated all extraneous sound except the nearby motorway, which I can’t do much about. I hope the experience is better!

April 30, 2020

April 27, 2020

Remote Applause by Stuart Langridge (@sil)

That’s a cool idea, I thought.

So I built Remote Applause, which does exactly that. Give your session a name, and it gives you a page to be the “stage”, and a link to send to everyone in the “audience”. The audience link has “clap” and “laugh” buttons; when anyone presses one, your stage page plays the sound of a laughter track or applause. Quite neat for an afternoon hack, so I thought I’d talk about how it works.

the Remote Applause audience page

Basically, it’s all driven by WebRTC data connections. WebRTC is notoriously difficult to get right, but fortunately PeerJS exists, and it does most of the heavy lifting.1 It seemed to be abandoned a few years ago, but development has since picked up again, which is good news. Essentially, the way the thing works is as follows:

When you name your session, the “stage” page calculates a unique ID from that name and registers under that ID on PeerJS’s coordination server. The audience page calculates the same ID2, registers itself with a random ID, and opens a PeerJS data connection to the stage page (because it can compute the stage’s ID). PeerJS is just using WebRTC data connections under the covers, but the PeerJS people provide the signalling server, which the main alternative, simple-peer, doesn’t, and I didn’t want to run a signalling server myself because then I’d need server-side hosting for it.
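The scheme above can be sketched roughly like this. The PeerJS calls (`new Peer(id)`, `peer.connect(id)`, the `connection` and `data` events, `conn.send`) are real PeerJS API; the `sessionId` helper and the session name are illustrative, not Remote Applause’s actual source:

```javascript
// Derive a PeerJS-safe ID from the session name with a JS port of Java's
// hashCode (see footnote 2): PeerJS IDs only allow a restricted character set.
function sessionId(name) {
  let h = 0;
  for (let i = 0; i < name.length; i++) {
    // 31 * h + charCode, wrapped to 32 bits like Java's int arithmetic
    h = (Math.imul(31, h) + name.charCodeAt(i)) | 0;
  }
  return "ra-" + Math.abs(h).toString(36);
}

// Stage page: register under the derived ID and wait for audience connections.
// const stage = new Peer(sessionId("my-talk"));
// stage.on("connection", conn => conn.on("data", msg => handle(conn.peer, msg)));

// Audience page: register with a random ID, then dial the stage's derived ID.
// const me = new Peer();
// const conn = me.connect(sessionId("my-talk"));
// conn.on("open", () => conn.send({ type: "clap" }));
```

Because both pages hash the same name, they arrive at the same stage ID without any server-side coordination of their own.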

The audience page can then send a “clap” or “laugh” message down that connection whenever the respective button is pressed, and the stage page receives that message and plays the appropriate sound. Well, it’s a fraction more complex than that. The two sounds, clapping and laughing, are constantly playing on a loop but muted. When the stage receives messages, it changes the volume on the sounds. Fortunately, the stage knows how many incoming connections there are, and it knows who the messages are coming in from, so it can scale the volume change appropriately; if most of the audience send a clap, then the stage can jack the clapping volume up to 100%, and if only a few people do then it can still play the clapping but at much lower volume. This largely does away with the need for moderation; at the very worst, a malicious actor who hammers the clap button as often as they can only makes the applause track play at full volume, and most of the time they’ll be one in 50 people and so can only make it play at a couple of percent volume.
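The proportional-volume idea can be sketched as a small pure function (the names here are hypothetical, not taken from the Remote Applause source): volume is the fraction of connected audience members who reacted recently, counting each member once, so a single spammer among fifty listeners can never push it past 2%:

```javascript
// Scale clap volume by the fraction of the audience clapping (0..1).
// recentClappers: array of peer IDs seen in the recent window (may repeat);
// totalConnections: number of currently connected audience pages.
function clapVolume(recentClappers, totalConnections) {
  if (totalConnections === 0) return 0;
  // Count each audience member at most once, however fast they hammer the button.
  const unique = new Set(recentClappers).size;
  return Math.min(1, unique / totalConnections);
}
```

For example, `clapVolume(["a", "a", "a"], 50)` stays at 0.02 no matter how often peer `"a"` claps, while a full house drives it to 1.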

There are a couple of extra wrinkles. The first one is that autoplaying sounds are a no-no, because of all the awful advertising people who misused them to have autoplaying videos as soon as you opened a page; sound can only start playing if it’s driven by a user gesture of some kind. So the stage has an “enable sounds” checkbox; turning that checkbox on counts as the user gesture, so we can start actually playing the sounds but at zero volume, and we also take advantage of that to send a message to all the connected audience pages to tell them it’s enabled… and the audience pages don’t show the buttons until they get that message, which is handy. The second thing is that when the stage receives a clap or laugh from an audience member it rebroadcasts that to all other audience members; this means that each audience page can show a little clap emoji when that happens, so you can see how many other people are clapping as well as hear it over the conference audio. And the third… well, the third is a bit more annoying.
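The rebroadcast step in the second wrinkle is simple to sketch. `conn.peer` (the remote peer’s ID) and `conn.send` are real PeerJS `DataConnection` members; the function shape itself is my illustration, not the app’s actual code:

```javascript
// When the stage receives a reaction from one audience member, forward it to
// every *other* audience connection so their pages can show the clap emoji.
function rebroadcast(connections, senderId, msg) {
  for (const conn of connections) {
    if (conn.peer !== senderId) conn.send(msg);
  }
}
```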

If an audience member closes their page, the stage ought to get told about that somehow. And it does… in Firefox. The PeerJS connection object fires a close event when this happens, so, hooray. In Chrome, though, we never get that event. As far as I can tell it’s a known bug in PeerJS, or possibly in Chrome’s WebRTC implementation; I didn’t manage to track it down further than the PeerJS issues list. So what we also do in the stage is poll the PeerJS connection object for every connection every few seconds with setInterval, because it exposes the underlying WebRTC connection object, and that does indeed have a property dictating its current state. So we check that and if it’s showing disconnected, we treat that the same as the close event. Easily enough solved.
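The Chrome workaround might look something like this. PeerJS does expose the raw `RTCPeerConnection` as `conn.peerConnection`, and that object has a `connectionState` property; the polling interval, `connections` list, and `dropConnection` handler are hypothetical names for this sketch:

```javascript
// Treat terminal RTCPeerConnection states the same as a "close" event.
function seemsClosed(state) {
  return state === "disconnected" || state === "failed" || state === "closed";
}

// Poll every few seconds, since Chrome never fires PeerJS's close event.
// setInterval(() => {
//   for (const conn of connections) {
//     const pc = conn.peerConnection;
//     if (pc && seemsClosed(pc.connectionState)) dropConnection(conn);
//   }
// }, 3000);
```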

There are more complexities than that, though. WebRTC is pretty goshdarn flaky, in my experience. If the stage runner is using a lot of their bandwidth, then the connections to the stage drop, like, a lot, and need to be reconnected. I suppose it would be possible to quietly gloss over this in the UI and just save stuff up for when the connection comes back, but I didn’t do that, firstly because I hate it when an app pretends it’s working but actually it isn’t, and secondly because of…

Latency. This is the other big problem, and I don’t think it’s one that Remote Applause can fix, because it’s not RA’s problem. You see, the model for this is that I’m streaming myself giving a talk as part of an online conference, right? Now, testing has demonstrated that when doing this on Twitch or YouTube Live or whatever, there’s a delay of anything from 5 to 30 seconds or so in between me saying something and the audience hearing it. Anyone who’s tried interacting with the live chat while streaming will have experienced this. Normally that’s not all that big a problem (except for interacting with the live chat) but it’s definitely a problem for this, because even if Remote Applause is instantaneous (which it pretty much is), when you press the button to applaud, the speaker is 10 seconds further into their talk. So you’ll be applauding the wrong thing. I’m not sure that’s fixable; it’s pretty much an inherent limitation of streaming video. Microsoft reputedly have a low latency streaming video service but most people aren’t using it; maybe Twitch and YouTube will adopt this technology.

Still, it was a fun little project! Nice to have a reason to use PeerJS for something. And it’s hosted on GitHub Pages because it’s all client side, so it doesn’t cost me anything to run, which is nice, and means I can just leave it up even if nobody’s using it. And I quite like the pictures, too; the stage page shows a view of an audience from the stage (specifically, the old Met in New York), and the audience page shows a speaker on stage (specifically, Rika Jansen (page in Dutch), a Dutch singer, mostly because I liked the picture and she looks cool).

  1. but it requires JavaScript! Aren’t you always going on about that, Stuart? Well, yes. However, it’s flat-out not possible to do real-time two-way communication sanely without it, so I’m OK with requiring JS in this particular case. For your restaurant menu, no.
  2. using a quick JS version of Java’s hashCode function, because PeerJS has requirements on ID strings that exclude some of the characters in base64 (so I couldn’t use window.btoa()), I didn’t want (or need) a whole hash library, and the Web Crypto API is complex

April 25, 2020

The latest episode of SICPers, in which I muse on what programming 1980s microcomputers taught me about reading code, is now live. Here’s the podcast RSS feed.

It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration. – Edsger Dijkstra, “How do we tell truths that might hurt?”

As always, your feedback and suggestions are most welcome.

April 24, 2020

Reading List 256 by Bruce Lawson (@brucel)

Paint a Rainbow by Ben Paddock (@_pads)

Paint a rainbow.
Nature’s colours have come to help you.

New patterns emerging.
Everyone is still learning.
Go easy, be gentle.

Realising what we cherish most.
Is still with us between these four walls.
In flesh or pixel form.

Dust off those running shoes.
Clean that paint brush.
Tune that guitar.

Grateful for the small things.
That delivery from your neighbours.
Those online game nights.

Take each day as it comes.
Tomorrow is tomorrow’s problem.
Embrace the new normal.

Empty ribbons of tarmac.
Fitter lungs.
Time on your side.

How to spend it best?
Paint a rainbow.

April 17, 2020

In Troubled Times by Ben Paddock (@_pads)

In troubled times.
We look to the past for comfort.
And the future for hope.
But what about the present?

Music from the 90s.
That new house with a cat and a garden.
The sky that is always blue.

That broken relationship.
Wondering will she ever speak to me again.
Feeling lost at sea.

An elevated heart rate.
These tense shoulders.
Climbing mountains.

Breathe in, breathe out.
Say hello to thoughts then wave them goodbye.
Rinse and repeat.

Relying on time and incremental change.
To get through to better days.
And on that first day with contentment and clarity.
Look back and smile.

Back to Top