Last updated: June 01, 2020 02:22 AM (All times are UTC.)

May 30, 2020

Amiga-Smalltalk now has continuous integration. I don’t know whether it’s the first Amiga program ever to have CI, but it’s definitely the first I know of. Let me tell you about it.

I’ve long been using AROS, the AROS Research Operating System (formerly the A stood for Amiga) as a convenient place to (manually) test Amiga-Smalltalk. AROS will boot natively on PC but can also be “hosted” as a user-space process on Linux, Windows or macOS. So it’s handy to build a program like Amiga-Smalltalk in the AROS source tree, then launch AROS and check that my program works properly. Because AROS is source compatible with Amiga OS (and binary compatible too, on m68k), I can be confident that things work on real Amigas.

My original plan for Amiga-Smalltalk was to build a Docker image containing AROS, add my test program to S:User-startup (the script on Amiga that runs at the end of the OS boot sequence), then look to see how it fared. But when I discussed it on the aros-exec forums, AROS developer deadwood had a better idea.

He’s created AxRuntime, a library that lets Linux processes access the AROS APIs directly, without having to be hosted in AROS as a sub-OS. So that’s what I’m using. You can look at my GitHub workflow to see how it works, but in a nutshell:

  1. Check out the source.
  2. Install libaxrt. I’ve checked the packages into ./vendor (along with a patched library, which fixes clean termination of the Amiga process) to avoid making network calls in my CI. The upstream source is deadwood’s repo.
  3. Launch Xvfb. This lets the process run “headless” on the CI box.
  4. Build and run ast_tests, my test runner. The Makefile shows how it’s compiled.

That’s it! All there is to running your Amiga binaries in CI.
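For a flavour of it, here’s a sketch of what such a workflow might look like. The vendored package filename and make target are illustrative assumptions, not the real thing; the actual workflow is in the Amiga-Smalltalk repository.

name: Amiga-Smalltalk CI
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # 1. check out source
      - uses: actions/checkout@v2
      # 2. install libaxrt from the vendored packages, avoiding network calls
      - run: sudo dpkg -i ./vendor/*.deb
      # 3 & 4. launch Xvfb for a “headless” display, then build and run the tests
      - run: |
          Xvfb :99 &
          make ast_tests
          DISPLAY=:99 ./ast_tests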

May 29, 2020

Reading List 259 by Bruce Lawson (@brucel)

May 28, 2020

As you may have noticed, I moved this site to the newfangled static site generator Eleventy, using the Hylia starter kit as a base.

By default this uses Netlify, but I wasn't interested in the third-party CMS bit, so I opted for a simple GitHub Action for deploying. There's an existing action available for plain Eleventy apps over here. However, it doesn't include the Sass build part of the Hylia setup that's part of its npm scripts.

After a quick bit of hacking about with one of the standard Node actions, I built the following action to deploy instead:

name: Hylia Build
on: [push]

jobs:
  build_deploy:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@master
      - uses: actions/setup-node@v1
        with:
          node-version: '10.x'
      - run: npm install
      - run: npm run production
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v1.1.0
        env:
          PUBLISH_DIR: dist
          PUBLISH_BRANCH: gh-pages
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Hopefully this is useful to somebody else.

Oh, and you'll need to add a passthrough copy of the CNAME file to the build if you are using a custom domain name. Add the following to your Eleventy config:

  config.addPassthroughCopy('CNAME');

And add your domain's CNAME file to the main source. Otherwise, every time you push, it'll get removed from the GitHub Pages config of the output.
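For context, a minimal .eleventy.js with that passthrough in place might look like this sketch (the directory names are assumptions based on Hylia's defaults; your real config will have more in it):

// .eleventy.js — a minimal sketch, not Hylia's full config.
module.exports = function (config) {
  // Copy CNAME verbatim into the output so GitHub Pages keeps the custom domain.
  config.addPassthroughCopy('CNAME');

  return {
    dir: {
      input: 'src',   // assumption: Hylia's source directory
      output: 'dist'  // must match PUBLISH_DIR in the action above
    }
  };
};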

May 27, 2020

Mature Optimization by Graham Lee

This comment on why NetNewsWire is fast brings up one of the famous tropes of computer science:

The line between [performance considerations pervading software design] and premature optimization isn’t clearly defined.

If only someone had written a whole paper about premature optimization, we’d have a bit more information. …wait, they did! The idea that premature optimization is the root of all evil comes from Donald Knuth’s Structured Programming with go to Statements. Knuth attributes it to C.A.R. Hoare in The Errors of TeX, though Hoare denied that he had coined the phrase.

Anyway, the pithy phrase “premature optimization is the root of all evil”, which has been interpreted as “optimization before the thing is already running too slow is to be avoided”, actually appears in this context:

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, [they] will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgements about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.

Indeed this whole subsection on efficiency opens with Knuth explaining that he does put a lot of effort into optimizing the critical parts of his code.

I now look with an extremely jaundiced eye at every operation in a critical inner loop, seeking to modify my program and data structure […] so that some of the operations can be eliminated. The reasons for this approach are that: a) it doesn’t take long, since the inner loop is short; b) the payoff is real; and c) I can then afford to be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged. Tools are being developed to make this critical-loop identification job easy (see for example [Dan Ingalls, The execution time profile as a programming tool] and [E. H. Satterthwaite, Debugging tools for high level languages]).

So yes, optimize your code, but optimize the bits that benefit from optimization. NetNewsWire is a Mac application, and Apple’s own documentation on improving your app’s performance describes an iterative approach for finding underperforming characteristics (note: not “what is next to optimize”, but “what are users encountering that needs improvement”), making changes, and verifying that the changes led to an improvement:

Plan and implement performance improvements by approaching the problem scientifically:

  1. Gather information about the problems your users are seeing.
  2. Measure your app’s behavior to find the causes of the problems.
  3. Plan one change to improve the situation.
  4. Implement the change.
  5. Observe whether the app’s performance improves.

I doubt that this post will change the “any optimization is the root of all evil” narrative, because there isn’t a similarly-trite epithet for the “optimize the parts that need it” school of thought, but at least I’ve tried.

May 26, 2020

This post is to encourage you to go and play a museum-themed online Escape Game I built. So, you can skip the rest of this article and head straight here to play! Now, you may have already seen that I’ve a brand new tutorial which shows you how to make your own online Audio Tours.  […]

An interesting writeup by Brian Kardell on web engine diversity and ecosystem health, in which he puts forward a thesis that we currently have the most healthy and open web ecosystem ever, because we’ve got three major rendering engines (WebKit, Blink, and Gecko), they’re all cross-platform, and they’re all open source. This is, I think, true. Brian’s argument is that this paints a better picture of the web than a lot of the doom-saying we get about how there are only a few large companies in control of the web. This is… well, I think there’s truth to both sides of that. Brian’s right, and what he says is often overlooked. But I don’t think it’s the whole story.

You see, diversity of rendering engines isn’t actually in itself the point. What’s really important is diversity of influence: who has the ability to make decisions which shape the web in particular ways, and do they make those decisions for good reasons or not so good? Historically, when each company had one browser, and each browser had its own rendering engine, these three layers were good proxies for one another: if one company’s browser achieved a lot of dominance, then that automatically meant dominance for that browser’s rendering engine, and also for that browser’s creator. Each was isolated; a separate codebase with separate developers and separate strategic priorities. Now, though, as Brian says, that’s not the case. Basically every device that can see the web and isn’t a desktop computer and isn’t explicitly running Chrome is a WebKit browser; it’s not just “iOS Safari’s engine”. A whole bunch of long-tail browsers are essentially a rethemed Chrome and thus Blink: Brave and Edge are high up among them.

However, engines being open source doesn’t change who can influence the direction; it just allows others to contribute to the implementation. Pick something uncontroversial which seems like a good idea: say, AVIF image format support, which at time of writing (May 2020) has no support in browsers yet. (Firefox has an in-progress implementation.) I don’t think anyone particularly objects to this format; it’s just not at the top of anyone’s list yet. So, if you were mad keen on AVIF support being in browsers everywhere, then you’re in a really good position to make that happen right now, and this is exactly the benefit of having an open ecosystem. You could build that support for Gecko, WebKit, and Blink, contribute it upstream, and (assuming you didn’t do anything weird), it’d get accepted. If you can’t build that yourself then you ring up a firm, such as Igalia, whose raison d’etre is doing exactly this sort of thing and they write it for you in exchange for payment of some kind. Hooray! We’ve basically never been in this position before: currently, for the first time in the history of the web, a dedicated outsider can contribute to essentially every browser available. How good is that? Very good, is how good it is.

Obviously, this only applies to things that everyone agrees on. If you show up with a patchset that provides support for the <stuart> element, you will be told: go away and get this standardised first. And that’s absolutely correct.

But it doesn’t let you influence the strategic direction, and this is where the notions of diversity in rendering engines and diversity in influence begin to break down. If you show up to the Blink repository with a patchset that wires an adblocker directly into the rendering engine, it is, frankly, not gonna show up in Chrome. If you go to WebKit with a complete implementation of service worker support, or web payments, it’s not gonna show up in iOS Safari. The companies who make the browsers maintain private forks of the open codebase, into which they add proprietary things and from which they remove open source things they don’t want. It’s not actually clear to me whether such changes would even be accepted into the open source codebases or whether they’d be blocked by the companies who are the primary sponsors of those open source codebases, but leave that to one side. The key point here is that the open ecosystem is only actually open to non-controversial change. The ability to make, or to refuse, controversial changes is reserved to the major browser vendors alone: they can make changes and don’t have to ask your permission, and you’re not in the same position. And sure, that’s how the world works, and there’s an awful lot of ingratitude out there from people who demand that large companies dedicate billions of pounds to a project and then have limited say over what it’s spent on, which is pretty galling from time to time.

Brian references Jeremy Keith’s Unity in which Jeremy says: “But then I think of situations where complete unity isn’t necessarily a good thing. Take political systems, for example. If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!” This is true, but again the nuance is different, because what this is about is influence. If one party wins a large majority, then it doesn’t matter whether they’re opposed by one other party or fifty, because they don’t have to listen to the opposition. (And Jeremy makes this point.) This was the problem with Internet Explorer: it was dominant enough that MS didn’t have to give a damn what anyone else thought, and so they didn’t. Now, this problem does eventually correct itself in both browsers and political systems, but it takes an awfully long time; a dominant thing has a lot of inertia, and explaining to a peasant in 250AD that the Roman Empire will go away eventually is about as useful as explaining to a web developer in 2000AD that CSS is coming soon, i.e., cold comfort at best and double-plus-frustrating at worst.

So, a qualified hooray, I suppose. I concur with Brian that “things are better and healthier because we continue to find better ways to work together. And when we do, everyone does better.” There is a bunch of stuff that is uncontroversial, and does make the web better, and it is wonderful that we’re not limited to begging browser vendors to care about it to get it. But I think that definition excludes a bunch of “things” that we’re not allowed, for reasons we can only speculate about.

May 19, 2020

Kindness by Ben Paddock (@_pads)

It’s mental health awareness week and the theme is kindness.  

One of my more recent fond stories of kindness was a trip to Colchester Zoo. There was a long queue for tickets, but my friends all had passes to get in. I would have been waiting on my own for quite some time, but a person at the front of the queue bought me a ticket so I could skip it (I did pay him). I don’t know the guy’s name, but that made my day.

Just one small, random act of kindness.

May 15, 2020

Reading List 258 by Bruce Lawson (@brucel)

May 14, 2020

Episode 6 of the SICPers podcast is over on YouTube. I introduce a C compiler for the Sinclair ZX Spectrum. For American readers, that’s the Timex Sinclair TS2068.

Remediating sites by Stuart Langridge (@sil)

Sometimes you’ll find yourself doing a job where you need to make alterations to a web page that already exists, and where you can’t change the HTML, so your job is to write some bits of JavaScript to poke at the page, add some attributes and some event handlers, maybe move some things around. This sort of thing comes up a lot with accessibility remediations, but maybe you’re working with an ancient CMS where changing the templates is a no-no, or you’re plugging in some after-the-fact support into a site that can’t be changed without a big approval process but adding a script element is allowed. So you write a script, no worries. How do you test it?

Well, one way is to actually do it: we assume that the way your work will eventually be deployed is that you’ll give the owners a script file, they’ll upload it somehow to the site and add a script element that loads it. That’s likely to be a very slow and cumbersome process, though (if it wasn’t, then you wouldn’t need to be fixing the site by poking it with JS, would you? you’d just fix the HTML as God intended web developers to do) and so there ought to be a better way. A potential better way is to have them add a script element that points at your script on some other server, so you can iterate on that and then eventually send over the finished version when done. But that’s still pretty annoying, and it means putting that on the live server (“a ‘staging’ server? no, I don’t think we’ve got one of those”) and then having something in your script which only runs it if it’s you testing. Alternatively, you might download the HTML for the page with Save Page As and grab all the dependencies. But that never works quite right, does it?

The way I do this is with Greasemonkey. Greasemonkey, or its Chrome-ish cousin Tampermonkey, has been around forever, and it lets you write custom scripts which it then takes care of loading for you when you visit a specified URL. Great stuff: write your thing as a Greasemonkey script to test it and then when you’re happy, send the script file to the client and you’re done.

There is a little nuance here, though. A Greasemonkey script isn’t exactly the same as a script in the page. This is partially because of browser security restrictions, and partially because GM scripts have certain magic privileged access that scripts in the page don’t have. What this means is that the Greasemonkey script environment is quite sandboxed away; it doesn’t have direct access to stuff in the page, and stuff in the page doesn’t have direct access to it (in the early days, there were security problems where in-page script walked its way back up the object tree until it got hold of one of the magic Greasemonkey objects and then used that to do all sorts of naughty privileged things that it shouldn’t have been able to, and so it all got rigorously sandboxed away to prevent that). So, if the page loads jQuery, say, and you want to use that, then you can’t, because your script is in its own little world with a peephole to the page, and getting hold of in-page objects is awkward. Obviously, your remediation script can’t be relying on any of these magic GM privileges (because it won’t have them when it’s deployed for real), so you don’t intend to use them, but because GM doesn’t know that, it still isolates your script away. Fortunately, there’s a neat little trick to have the best of both worlds; to create the script in GM to make it easy to test and iterate, but have the script run in the context of the page so it gets the environment it expects.

What you do is, put all your code in a function, stringify it, and then push that string into an in-page script. Like this:

// ==UserScript==
// @name     Stuart's GM remediation script
// @version  1
// @grant    none
// ==/UserScript==

function main() {
    /* All your code goes below here... */



    /* ...and above here. */
}

let script = document.createElement("script");
script.textContent = "(" + main.toString() + ")();";
document.body.appendChild(script);

That’s it. Your code is defined in Greasemonkey, but it’s actually executed as though it were a script element in the page. You should basically pretend that that code doesn’t exist and just write whatever you planned to inside the main() function. You can define other functions, add event handlers, whatever you fancy. This is a neat trick; I’m not sure if I invented it or picked it up from somewhere else years ago (and if someone knows, tell me and I’ll happily link to whoever invented it), but it’s really useful; you build the remediation script, doing whatever you want it to do, and then when you’re happy with it, copy whatever’s inside the main() function to a new file called whatever.js and send that to the client, and tell them: upload this to your creaky old CMS and then link to it with a script element. Job done. Easier for you, easier for them!

May 12, 2020

The AWS Certified Cloud Practitioner examination is intended for individuals who have the knowledge and skills necessary to effectively demonstrate an overall understanding of the AWS Cloud, independent of specific technical roles addressed by other AWS Certifications.

I had experience with AWS services such as EC2 and S3 from a previous role, but when I switched jobs in June my usage of AWS went through the roof. Call me boring, but I tend to learn best when I have a curriculum to follow, so I figured I’d better get some certifications under my belt. Especially as my company is kind enough to pay for them.

I decided to play it safe and start with the Cloud Practitioner exam, rather than going straight for one of the associate tier certifications.

I used the following resources, and I’m happy enough with them. I imagine the much-recommended courseware from the likes of A Cloud Guru is objectively better, but the Cloud Practitioner exam is straightforward enough that you can probably pass it without shelling out the big bucks for polished video courses.

I can’t remember exactly when I started seriously studying for it, but I’d say it was about three months from deciding to gain the certification to receiving the email with the good news.

Congratulations, You are Now AWS Certified!

Next, I’ve set my eyes on one of the associate-level certifications. Probably Solutions Architect, because it sounds fancy and I’ve seen a few recommendations to start with that one before moving on to the others. Admittedly, most of these recommendations were from companies with a financial interest in people needing the training materials for as many exams as possible, but hey.

I’ve never yet regretted taking the time to build a strong foundation.

tl;dr

  • It took me 2-3 months; your mileage may vary
  • Free or inexpensive resources were more than adequate

May 11, 2020

“Look, it’s perfectly simple. Go back to work, but don’t use public transport. Travel in a chauffeur-driven ministerial limousine. Use common sense – under no circumstances shake hands with people you know to have the virus. Covid-19 appeared in December, which makes it a Sagittarius, so Taureans and Libras should wear masks. But it also appeared in China, which makes it a Rat, so anyone called Mickey or Roland is advised to wear gloves. We’re following the science, so here’s a graph.

Incomprehensible graph

Remember, this is Blighty, not a nation of Moaning Minnies, Fondant Fancies or Coughing Keirs (thanks, Dom!). England expects every interchangeable low-paid worker and old person in a care home to Do Their Duty: let’s just Get Dying Done. God save the Queen, Tally-ho!”

May 08, 2020

It lives! Kinda. Amiga-Smalltalk now runs on Amiga. Along the way I review the K&R book as a tutorial for C programming, mentioning my previous comparison to the Brad Cox and Bjarne Stroustrup books. I also find out how little I know “C”: it turns out I’ve been using GNU C for the last 20 years.

Thanks to Alan Francis for his part in my downfall.

May 06, 2020

Hammer and nails by Stuart Langridge (@sil)

There is a Linux distribution called Gentoo, named after a type of penguin (of course it’s named after a penguin), where installing an app doesn’t mean that you download a working app. Instead, when you say “install this app”, it downloads the source code for that app and then compiles it on your computer. This apparently gives you the freedom to make changes to exactly how that app is built, even as it requires you to have a full set of build tools and compilers and linkers just to get a calculator. I think it’s clear that the world at large has decided that this is not the way to do things, as evidenced by how almost no other OSes take this approach — you download a compiled binary of an app and run it, no compiling involved — but it’s nice that it exists, so that the few people who really want to take this approach can choose to do so.

This sort of thing gets a lot of sneering from people who think that all Linux OSes are like that, that people who run Linux think that it’s about compiling your own kernels and using the Terminal all the time. Why would you want to do that sort of thing, you neckbeard, is the underlying message, and I largely agree with it; to me (and most people) it seems complicated and harder work for the end user, and mostly a waste of time — the small amount of power I get from being able to tweak how a thing is built is vastly outweighed by the annoyance of having to build it if I want it. Now, a Gentoo user doesn’t actually have to know anything about compilation and build tools, of course; it’s all handled quietly and seamlessly by the install command, and the compilers and linkers and build tools are run for you without you needing to understand. But it’s still a bunch of things that my computer has to do that I’m just not interested in it doing, and I imagine you feel the same.

So I find it disappointing that this is how half the web industry have decided to make websites these days.

We don’t give people a website any more: something that already works, just HTML and CSS and JavaScript ready to show them what they want. Instead, we give them the bits from which a website is made and then have them compile it.

Instead of an HTML page, you get some templates and some JSON data and some build tools, and then that compiler runs in your browser and assembles a website out of the component parts. That’s what a “framework” does… it builds the website, in real time, from separate machine-readable pieces, on the user’s computer, every time they visit the website. Just like Gentoo does when you install an app. Sure, you could make the case that the browser is always assembling a website from parts — HTML, CSS, some JS — but this is another step beyond that; we ship a bunch of stuff in a made-up framework and a set of build tools, the build tools assemble HTML and CSS and JavaScript, and then the browser still has to do its bit to build that into a website. Things that should be a long way from the user are now being done much closer to them. And why? “We’re layering optimizations upon optimizations in order to get the SPA-like pattern to fit every use case, and I’m not sure that it is, well, worth it.” says Tom MacWright.

Old joke: someone walks into a cheap-looking hotel and asks for a room. You’ll have to make your own bed, says the receptionist. The visitor agrees, and is told: you’ll find a hammer and nails behind the door.

Almost all of us don’t want this for our native apps, and think it would be ridiculous; why have we decided that our users have to have it on their websites? Web developers: maybe stop insisting that your users compile your apps for you? Or admit that you’ll put them through an experience that you certainly don’t tolerate on your own desktops, where you expect to download an app, not to be forced to compile it every time you run it? You’re not neckbeards… you just demand that your users have to be. You’re neckbeard creators. You want to browse this website? Here’s a hammer and nails.

Unless you run Gentoo already, of course! In which case… compile away.

May 05, 2020



We’re Offering Free 1-Hour Digital Consultations

The Covid-19 pandemic has forced arts, cultural and live performance organisations to shut down, move online, and work in totally new ways. We don’t know how long the crisis will last, and we’re not sure what the world will look like in the immediate aftermath. But we do know that digital technology is going to be part of the strategic toolkit as we come out of this, and that as a result many leaders will be re-thinking how they make use of technology.

If you’re working through the ramifications of what this means for you, and would like some strategic consultation about the issues or challenges you’re facing, read on…

If you are:

  • a CEO, or senior marketing, development or IT leader,
  • And, you work in an arts, cultural, or live performance organisation,

I’d like to offer you a FREE consultation about anything digital-related you’d like to talk through. If you want to brainstorm a digital challenge, talk through your digital strategy for critical feedback, or just have a digital counselling session, this time is for you.

Perhaps you want to talk about making money from online performances or archival footage, develop your digital donor strategy during lockdown, or consider simpler online booking models during the period of social distancing.

About me: I’m Managing Director at Made Media — a leading digital strategy and design agency that works with live performance and cultural institutions across the world. I’ve got many years of experience working with arts and cultural organisations on their use of digital technology, with a particular focus on user experience, ticketing and CRM. I have a background in digital technology and arts management. Prior to joining Made, I worked as Administrative Director at The Place in London – the UK’s leading centre for contemporary dance development — and between 2009 and 2015 I held a series of leadership roles at Spektrix in both the US and UK.

Consultation slots are limited, and can be booked here:
https://made.bookafy.com/?locale=en

The small-ish print:

  • Sessions will last up to 1 hour and will take place via Zoom.
  • You can come with a specific or general digital challenge, or email me in advance if you prefer (details will be in the confirmation email).
  • Session participation is limited to 2 people per organisation.
  • You don’t have to be a Made client to sign up. If you are a Made client, and would like this sort of consultation, do reach out to me or your account manager and we can set something up outside of this booking process.
  • Sessions are open to leaders at both commercial and nonprofit organisations.
  • Session times are listed in British Summer Time; you can change your time zone as you book to help you match it against your diary. I’ve tried to choose time slots with a good crossover between Europe and all parts of the mainland US. If these time zones don’t work for you, please get in touch via our website and we’ll try and work something out.

May 01, 2020

Reading List 257 by Bruce Lawson (@brucel)

We’re back to Amiga-Smalltalk today, as the moment when it runs on a real Amiga inches closer. Listen here.

I think I’ve isolated all extraneous sound except the nearby motorway, which I can’t do much about. I hope the experience is better!

April 30, 2020

April 27, 2020

Remote Applause by Stuart Langridge (@sil)

That’s a cool idea, I thought.

So I built Remote Applause, which does exactly that. Give your session a name, and it gives you a page to be the “stage”, and a link to send to everyone in the “audience”. The audience link has “clap” and “laugh” buttons; when anyone presses one, your stage page plays the sound of a laughter track or applause. Quite neat for an afternoon hack, so I thought I’d talk about how it works.

the Remote Applause audience page

Basically, it’s all driven by WebRTC data connections. WebRTC is notoriously difficult to get right, but fortunately PeerJS exists, which does most of the heavy lifting [1]. It seemed to be abandoned a few years ago, but they’ve picked it up again since, which is good news. Essentially, the way the thing works is as follows:

When you name your session, the “stage” page calculates a unique ID from that name, and registers with that ID on PeerJS’s coordination server. The audience page calculates the same ID [2], registers itself with a random ID, and opens a PeerJS data connection to the stage page (because it knows what the stage’s ID is). PeerJS is just using WebRTC data connections under the covers, but the PeerJS people provide the signalling server, which the main alternative, simple-peer, doesn’t, and I didn’t want to have to run a signalling server myself because then I’d need server-side hosting for it.

The audience page can then send a “clap” or “laugh” message down that connection whenever the respective button is pressed, and the stage page receives that message and plays the appropriate sound. Well, it’s a fraction more complex than that. The two sounds, clapping and laughing, are constantly playing on a loop but muted. When the stage receives messages, it changes the volume on the sounds. Fortunately, the stage knows how many incoming connections there are, and it knows who the messages are coming in from, so it can scale the volume change appropriately; if most of the audience send a clap, then the stage can jack the clapping volume up to 100%, and if only a few people do then it can still play the clapping, but at much lower volume. This largely does away with the need for moderation; a malicious actor who hammers the clap button as fast as they can will at the very worst only make the applause track play at full volume, and most of the time they’ll be one in 50 people and so can only make it play at 5% volume or something.

There are a couple of extra wrinkles. The first one is that autoplaying sounds are a no-no, because of all the awful advertising people who misused them to have autoplaying videos as soon as you opened a page; sound can only start playing if it’s driven by a user gesture of some kind. So the stage has an “enable sounds” checkbox; turning that checkbox on counts as the user gesture, so we can start actually playing the sounds but at zero volume, and we also take advantage of that to send a message to all the connected audience pages to tell them it’s enabled… and the audience pages don’t show the buttons until they get that message, which is handy. The second thing is that when the stage receives a clap or laugh from an audience member it rebroadcasts that to all other audience members; this means that each audience page can show a little clap emoji when that happens, so you can see how many other people are clapping as well as hear it over the conference audio. And the third… well, the third is a bit more annoying.

If an audience member closes their page, the stage ought to get told about that somehow. And it does… in Firefox. The PeerJS connection object fires a close event when this happens, so, hooray. In Chrome, though, we never get that event. As far as I can tell it’s a known bug in PeerJS, or possibly in Chrome’s WebRTC implementation; I didn’t manage to track it down further than the PeerJS issues list. So what we also do in the stage is poll the PeerJS connection object for every connection every few seconds with setInterval, because it exposes the underlying WebRTC connection object, and that does indeed have a property dictating its current state. So we check that and if it’s showing disconnected, we treat that the same as the close event. Easily enough solved.
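Putting those pieces together, a stripped-down sketch of the stage side might look like this (PeerJS loaded; the variable and element names are invented for illustration, and the real Remote Applause code does more):

// Assumes sessionId has already been derived from the session name (see footnote 2).
const peer = new Peer(sessionId);             // register with the PeerJS broker
const connections = new Map();                // open audience connections, by peer ID
const recentClappers = new Set();             // who clapped in the last second
const clapSound = document.querySelector("#clap"); // looping <audio>, volume 0

function updateVolume() {
  // Scale volume by the fraction of the audience clapping right now, so a
  // lone button-masher can only ever nudge the applause up a little.
  clapSound.volume = connections.size
    ? Math.min(1, recentClappers.size / connections.size)
    : 0;
}

peer.on("connection", (conn) => {
  connections.set(conn.peer, conn);

  conn.on("data", (msg) => {
    if (msg !== "clap") return;
    recentClappers.add(conn.peer);
    updateVolume();
    setTimeout(() => { recentClappers.delete(conn.peer); updateVolume(); }, 1000);
    // Rebroadcast so the other audience pages can show a little clap emoji.
    for (const other of connections.values()) {
      if (other.peer !== conn.peer) other.send("clap");
    }
  });

  // Firefox fires this when an audience page closes…
  conn.on("close", () => { connections.delete(conn.peer); updateVolume(); });
});

// …Chrome doesn’t, so also poll the underlying RTCPeerConnection’s state.
setInterval(() => {
  for (const [id, conn] of connections) {
    if (conn.peerConnection && conn.peerConnection.connectionState === "disconnected") {
      connections.delete(id);
      updateVolume();
    }
  }
}, 5000);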

There are more complexities than that, though. WebRTC is pretty goshdarn flaky, in my experience. If the stage runner is using a lot of their bandwidth, then the connections to the stage drop, like, a lot, and need to be reconnected. I suppose it would be possible to quietly gloss over this in the UI and just save stuff up for when the connection comes back, but I didn’t do that, firstly because I hate it when an app pretends it’s working but actually it isn’t, and secondly because of…

Latency. This is the other big problem, and I don’t think it’s one that Remote Applause can fix, because it’s not RA’s problem. You see, the model for this is that I’m streaming myself giving a talk as part of an online conference, right? Now, testing has demonstrated that when doing this on Twitch or YouTube Live or whatever, there’s a delay of anything from 5 to 30 seconds or so in between me saying something and the audience hearing it. Anyone who’s tried interacting with the live chat while streaming will have experienced this. Normally that’s not all that big a problem (except for interacting with the live chat) but it’s definitely a problem for this, because even if Remote Applause is instantaneous (which it pretty much is), when you press the button to applaud, the speaker is 10 seconds further into their talk. So you’ll be applauding the wrong thing. I’m not sure that’s fixable; it’s pretty much an inherent limitation of streaming video. Microsoft reputedly have a low latency streaming video service but most people aren’t using it; maybe Twitch and YouTube will adopt this technology.

Still, it was a fun little project! Nice to have a reason to use PeerJS for something. And it’s hosted on GitHub Pages because it’s all client side, so it doesn’t cost me anything to run, which is nice, and I can just leave it up even if nobody’s using it. And I quite like the pictures, too; the stage page shows a view of an audience from the stage (specifically, the old Met in New York), and the audience page shows a speaker on stage (specifically, Rika Jansen (page in Dutch), a Dutch singer, mostly because I liked the picture and she looks cool).

  1. But it requires JavaScript! Aren’t you always going on about that, Stuart? Well, yes. However, it’s flat-out not possible to do real-time two-way communication sanely without it, so I’m OK with requiring JS in this particular case. For your restaurant menu, no.
  2. Using a quick JS version of Java’s hashCode function (sketched below), because PeerJS has requirements on ID strings that exclude some of the characters in base64, so I couldn’t use window.btoa(); I didn’t want (or need) a whole hash library, and the Web Crypto API is complex.
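For reference, that hashCode port goes something like the sketch below. The “ra-” prefix is an invented detail, there just to guarantee the ID starts with a letter; the real function may differ.

// Fold each character into a 32-bit integer, as Java’s String.hashCode() does,
// then render it in base 36 so the ID is plain alphanumeric, which PeerJS accepts.
function hashCode(str) {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    hash = (Math.imul(31, hash) + str.charCodeAt(i)) | 0; // wrap at 32 bits like Java
  }
  return "ra-" + (hash >>> 0).toString(36);
}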

April 25, 2020

The latest episode of SICPers, in which I muse on what programming 1980s microcomputers taught me about reading code, is now live. Here’s the podcast RSS feed.

It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration. (Edsger Dijkstra, “How do we tell truths that might hurt?”)

As always, your feedback and suggestions are most welcome.

April 24, 2020

Reading List 256 by Bruce Lawson (@brucel)

Paint a Rainbow by Ben Paddock (@_pads)

Paint a rainbow.
Nature’s colours have come to help you.

New patterns emerging.
Everyone is still learning.
Go easy, be gentle.

Realising what we cherish most.
Is still with us between these four walls.
In flesh or pixel form.

Dust off those running shoes.
Clean that paint brush.
Tune that guitar.

Grateful for the small things.
That delivery from your neighbours.
Those online game nights.

Take each day as it comes.
Tomorrow is tomorrow’s problem.
Embrace the new normal.

Empty ribbons of tarmac.
Fitter lungs.
Time on your side.

How to spend it best?
Paint a rainbow.

April 17, 2020

In Troubled Times by Ben Paddock (@_pads)

In troubled times.
We look to the past for comfort.
And the future for hope.
But what about the present?

Music from the 90s.
That new house with a cat and a garden.
The sky that is always blue.

That broken relationship.
Wondering will she ever speak to me again.
Feeling lost at sea.

An elevated heart rate.
These tense shoulders.
Climbing mountains.

Breath in, breath out.
Say hello to thoughts then wave them goodbye.
Rinse and repeat.

Relying on time and incremental change.
To get through to better days.
And on that first day with contentment and clarity.
Look back and smile.

April 16, 2020

Episode 2 is live!

The only link I promised this week is to the BCHS web application stack. Short for BSD, C, httpd, SQLite, it’s a minimalist approach to making web applications.

As ever, your feedback is welcome, here or wherever you find me.

April 15, 2020

A UK map made of squares by Stuart Langridge (@sil)

For a visualisation thing I was doing, I wanted a UK map made out of small squares: these seem a useful way to make heatmaps of the way a thing affects the UK. There are plenty of such maps but they all seem to be on stock image sites which want you to licence them and so on and that seems a bit annoying, so I figured I’d make one.

George Hodan has created a public domain (CC0) map of the UK (mirror here), so that was a good place to start. Then a small Python script and I’d made an SVG of the map:

a 20x30 map of the UK

and that’s all I wanted. Hooray. The script lets me tell it how many squares I want the UK map divided up into, so I generated it in various different sizes (10x15, 15x22, 20x30, 30x45, 40x61, 50x76) because that was convenient. If you want it in a size that isn’t one of those, grab the script and go for it.

April 11, 2020

I’m trying to record a cover version each week of songs that have really influenced me. They’re not especially polished, but it gives me a chance to experiment with my recording studio outside my usual working practices.

This is the first Velvet Underground song I heard. I was at a student party, sitting next to the speaker that Lou Reed suddenly shouts his vocals out of. It made me jump and I dropped the communal spliff into my beer. But I forgave them and became a total VU anorak.

April 10, 2020

Reading List 255 by Bruce Lawson (@brucel)

This week, my friend Vadim Makeev and I released the first episode of our podcast, The F-word, which discusses front-end development, browsers and standards. The web site is built on Eleventy, hosted on GitHub so anyone can contribute, and has a 100% Lighthouse score. The pilot episode is 38 minutes long—why not have a listen!

  • Inclusive Inputs – An exploration into how to make inputs more accessible.
  • Beginner’s Guide to Eleventy by Tatiana Mac
  • The WebAIM Million, updated – “Home pages with WCAG failures up to 98.1% (from 97.8% last year). Page complexity increased 10.4% in that time. Home pages with ARIA present averaged 60% more errors than those without.”
  • Good Email Code – templates for HTML emails “making sure it is semantic, functional, accessible and meeting user expectations. Consistency between email clients and pixel perfect design are also important but always secondary.”
  • Web Animations in Safari 13.1
  • Updates to form controls and focus – Nice changes to forms aesthetics, focus and a11y in Chromium
  • Accessible SVGs – an oldie but gold article
  • Helping Seniors During the Covid-19 Crisis – How my chums at @wixeng partnered with local authorities to build a volunteer call center app to help vulnerable populations during the current crisis, in one week. “we would be happy to translate it to other languages, adjust it to other government regulations, and assist in implementing it, if requested.”
  • Webcam Hacking – “a technical walkthrough of how I discovered several zero-day bugs in Safari during my hunt to hack the iOS/MacOS camera. This project resulted in me gaining unauthorized access to Front & Rear Cameras, Mic, Plaintext Passwords & More”
  • colors.lol – “Overly descriptive color palettes.”

April 09, 2020

I made a podcast! Full show notes here due to the character limit at Podbean.

  • Amiga-Smalltalk project on GitHub

  • Free books on Smalltalk: the three Addison-Wesley books “Smalltalk-80: The Interactive Programming Environment”, “Smalltalk-80: The Language and its Implementation” and “Smalltalk-80: Bits of History, Words of Advice” are mentioned in this podcast, and the second of those is “the blue book” at the centre of the episode.

  • AROS Research Operating System is the Amiga-compatible open source operating system. It can boot on (i386 or m68k) hardware, or run hosted in Linux, FreeBSD, macOS, and Windows.

Please let me know what you think! You can find me on Twitter at @iwasleeg, and I gave out my email address in the podcast. You could also comment here!

Errata: I said the Amiga 1000 had 128kB of RAM but it had 256kB, sorry!

Managing remotely by Marc Jenkins (@marcjenkins)

Julie Zhuo’s tips on working from home during the lockdown are the best I’ve read so far. A few of my favourites:

  • Cancel as many meetings as you can
  • More documents, fewer PowerPoint and Keynote decks
  • Turn some meetings into walking meetings
  • Do your toughest work when you have the most energy

April 08, 2020

A right-wing friend got angry with me because I refused to “clap for Boris”, saying now is not the time to make political points.

If you think this is not a time to make political points, you’re wrong. Boris Johnson has Covid-19 because he went around shaking Covid patients’ hands, against expert advice. Those experts who, in 2016, Gove said everyone is tired of.

He shook people’s hands because he had a plan to boost herd immunity – we should all “take it on the chin” he said. This policy was dreamed up by him and Dominic Cummings, who said “herd immunity to protect the economy and if a few pensioners die, so be it”. That’s your dad and my mum he was prepared to sacrifice.

And because of this deranged policy (which models showed would cause the death of an extra quarter of a million British people), he delayed ordering the Personal Protective Equipment that the health workers need — the health workers whom he voted to deny a 1% pay rise. Mass testing and contact tracing are what got China and South Korea through this. But even the lefty paper the Daily Mail is reporting that the “herd immunity” delay means we won’t have enough of the chemicals needed to produce the 100,000 tests that Matt Hancock promised by the end of the month. (After Johnson falsely promised 250,000.)

In October 2016 the UK government ran a national pandemic flu exercise, codenamed Exercise Cygnus. “We’ve just had in the UK a three-day exercise on flu, on a pandemic that killed a lot of people,” chief medical officer Sally Davies said at the time. “It became clear that we could not cope with the excess bodies,” Davies said. One conclusion was that Britain, as Davies put it, faced the threat of “inadequate ventilation” in a future pandemic.

What did the Tory government at the time do? Nothing. Johnson was a senior Cabinet Minister at that time.

Matt Hancock was invited by the EU to collaborate in bulk-buying ventilators. Johnson said no, because he didn’t like the politics of collaborating with the EU. End result? We don’t have enough ventilators.

I hope he gets better, because I’m a socialist, so I value his life more than he values mine (or yours). I hope he recovers and comes back more humble, more humane. And as a patriot, I will not stop holding to account this dangerous man whose bad political choices mean that the UK will have Europe’s worst death toll:

In the early stages of the UK outbreak, deaths climbed steeply, which the IHME says is a major driver of predicted deaths.

The flirtation in government with the idea of “herd immunity” as a way out of the epidemic meant there was a delay in implementing physical distancing until 23 March, when there were already 54 daily deaths.

It is unequivocally evident that social distancing can, when well-implemented and maintained, control the epidemic, leading to declining death rates.

His political choices will cause far more of our compatriots to die than would have otherwise. His policies require scrutiny. He deserves no applause.

In late February, I began to wake up to the fact that Covid-19 was going to dramatically impact all of our lives. I started checking the news for hours a day, following with trepidation as the pandemic was unfolding and markets began crashing around the globe.

Here’s the thing with the news: a small amount of high-quality information keeps you informed, but any more than that adds unnecessary stress and anxiety.

I’ve always struggled to get this balance right. I want to be informed and understand what’s going on, but I don’t want to be bombarded with information that doesn’t have any benefit.

I’ve since reined in my impulses and got my news diet back on track. My current news source of choice is The Economist. I use the app on my iPhone to read the “Espresso” every morning, which contains 6-8 paragraphs of world news, updated daily. It takes a few minutes and I’m done, and it’s wonderful.

While I’ve cut back on my news consumption, I am still listening to podcasts to learn more about what’s going on. Podcasts are great because they allow for a deeper and more nuanced discussion.

Here are a few podcasts I’ve listened to about Covid-19 that were helpful or interesting in some way:

If you’ve listened to a podcast that you found useful (or an article, for that matter), I’d love to hear about it: marc@16by9.net.

April 07, 2020

Stay on target… by Graham Lee

I introduce the kind of customer who needs the Labrary’s advice with the following description:

Your software team was a sight to behold, when it started out. You very quickly got to an MVP, validated its fit with early successes, iterated on the user experience and added the missing features. You hired a few more developers to cover the demand.

Now, things are starting to feel slower. The team insists they’re still continuing apace, but you haven’t kept that initial excitement. Developers are grumbling about technical debt. The backlog keeps growing. Testers aren’t keeping up – despite automation. The initial customers aren’t getting the benefits they first expected, and new customers aren’t being won at the rate you’d like.

The problem was that the way you hit it out of the park worked well in the early days, when you had a green field project and no existing code or customers to support. Now your customers expect all new features and surprising and delightful interactions: but they also expect nothing to change, and certainly not to break. The desired qualities of your software have changed, and so the quality of your software must change.

Plenty of people, typically CTOs and heads of software development, typically at growth scale, identify with this description, so it’s worth digging deeper into how it comes about.

At the early stage, your company has a small team, a vague idea of what the product is, and no customers. Literally anything your engineering team can do will be valuable: it will either be a product that fits the market, or tell you where the market isn’t. Obviously there’s some hand-waving about being able to market and sell whatever it is that your engineering team build, but by and large anything you come up with is somehow useful. You are either defining a new market, in which case all work is market-leading, or entering an established market, in which case your direction is clear. It’s hard to make a wrong decision at this stage, but very easy to stick to one.

And while we all know the horror stories about shipping your prototype to production, it’s actually not a bad plan at this stage. You don’t know what will or won’t work, so getting something out there quickly is exactly the right thing to do. And your developers probably have a base standard of maturity even for prototype projects, so you’ll have version control, some form of testing infrastructure and CI, external libraries for data storage, it won’t be a complete wild west.

Things go, loosely speaking, in one of two directions here. If you fail to find the right customers and the right product, you’re out of money, thanks for playing. If you find the right product for the right people, then congratulations! You get more money, either through revenue or a funding round, and you grow the company. Maybe the programmer you were paying before becomes the CTO, maybe one or two of the contractors you worked with come on as perms, and you get a couple of new people come on in return for an OK salary and the promise of the stock sometime being worth something. One of those people is even a QA!

Of course, the cash injection (particularly if it came as a lump through funding that can be drawn down as necessary) gives you the headroom to do things properly. Technologies are chosen, an architecture is designed (usually just by connecting the technologies with arrows), and an attempt is made to build the new thing, support the old thing, and continue adding new features to differentiate in the market. A key customer demographic is sold the promise of the new system (it being exciting and more capable, at least that is what the roadmap says), takes delivery of the old system (it being ready), then takes up time asking for the new features. You either divert resources from the new system to the old to add the features there, or invent some unholy hybrid where your existing thing makes calls to the new thing for the new features, with a load of data consistency glue binding the two together. We’ll call these customers “saps” for now. Also, whether you’ve caught up to your competitors or your competitors to you, you’re now having to maintain an edge.

Let’s take stock here. You have:

  1. Some saps, giving us actual money, on the old platform.
  2. Some hope that things will be easier once everything’s on the new platform.
  3. Pressure to stay ahead of/catch up to the competition in both places.

That’s more work! But it’s OK, you’ve got more people. But where you add to the old system (which pleases your saps) you take away from the new, so you tend to favour unintrusive patches rather than deeper changes there. Which makes it harder to understand, and harder to support, which is more work! OK, so hire more people! But now engineering costs more. OK, so sell the original thing (not the new thing, it isn’t ready yet) to a few more customers! But now there are more customers, demanding more features, and more support. OK, so hire more people!

Run through that cycle a few times and you end up in the place I described in the Labrary blurb. You’ve got a big team, filled with capable engineers, working hard, and delivering…not as much as anyone would like. The problem is that working on the software is pulling them away from working for the company. The other problem is that you’re measuring how much the software gets worked on.

April 02, 2020

Gridizen Property Management App

Gridizen is a full service property management app for tenants and landlords. The app is available on iOS, Android and the Web and features sections for landlords to manage their tenants and tenants to reach out to their wider community and landlords.

Gridizen’s business analyst Burhan reached out to me months before we actually started working together. He had a product that he and his team really believed in, but it just didn’t have the impact or user flow that they were happy with.

After a few months we finally got the chance to work together and started designing some of the onboarding for their main landlord product.

We quickly moved through other areas of the app, such as the iOS and Android screens shown here. We worked on their overall branding, and their front-end website also got a lick of paint. The goal was simple: create a product that was easy to use and, most importantly, useful for tenants and landlords.

I’m really happy with how this project turned out. It’s quite a large project, and I’m only showing a small percentage of it here.

If you like this design please share, and if you’re in the market to book a freelance UI designer, don’t hesitate to contact me.


April 01, 2020

It is 1 April. This is NOT a joke but it may not be real. It’s a science thought experiment.

There’s a tiny bit of physics in my distant past but I am not a real physicist. In the last few years I’ve noticed the models of information I’m investigating resemble the ‘many-worlds interpretation’ of quantum mechanics, which I have only heard about via radio, TV and Wikipedia. Quantum Mechanics is always rubbing up against Relativity, another idea about which I have a frustratingly inadequate understanding. I decided to investigate these worlds of weirdness, but I find my brain getting horribly scrambled when I try to visualise space-time. I can’t, and I don’t think anyone can. Our visual system has evolved for seeing a 3D world, and that world changes. Humanity had lived, before the early 20th century, in a non-relativistic world. Those who do ‘get modern physics’ seem to rely on a mathematical understanding of the concepts.

In my models of information, I’d been thinking about the lack of a time-dimension in most computation. We usually model change in the world as a series of states, where data about a new state replaces the previous states. This starts to cause difficulties when computers have parallel paths of computation, as current multi-core processing chips do. Software has begun to address the problems with ‘immutability’. In simple terms, instead of replacing a value, a new value is added to the end of a sequence, so historic values are retained. We have gone back before the memory-constrained computer age to learn from the Victorian hand-written ledger.
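As a toy illustration of the difference, in JavaScript (though the idea is language-agnostic):

// Mutable update: the old state is overwritten and lost.
let balance = 100;
balance = 75; // no record that it was ever 100

// Immutable update: append the new value and keep the history,
// like lines in a Victorian hand-written ledger.
const history = [100];
const updated = [...history, 75];         // history itself is unchanged
console.log(updated[updated.length - 1]); // current value: 75
console.log(updated);                     // every value it has held: [100, 75]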

I became aware of research work at Cambridge University to reconcile ‘QM and R’, which also modelled time as state-change, but I’ve found it very difficult to think about ‘the state change dimension of space’ as a sequence of events without falling back on ideas of time, speed and rate of change. The ‘idea of time’, which may be a cultural concept, is deeply woven into our current paradigm.

In trying to free my mind of time, I’ve been hanging out in ‘the difficult time questions’ corner of Quora. Someone gave me a breakthrough by describing a simple clock:
Imagine going into deep space where gravity and friction to movement can be assumed to be zero. Throw an object. The distance it has travelled is a clock.

An April Fool thought experiment of time:
Make the object a spacecraft containing a holographic camera. Time passed can now be measured as a distance. Let us assume we prepared by asking someone to invent a unit of distance and mark it repeatedly on a very long tape, alongside the path of our space-craft. We don’t know the size of that unit. Now retrieve the holographic camera and put the recording medium in a holographic projector and project it onto a screen.
Think of the ‘slide-show’ as being equivalent to you travelling along a sequence of equally spaced ‘holographic plates’. Consider changing the distance between plates in some regions of the recording (analogous to compressing time), or having instantaneously (enough) reversed the spacecraft during recording. Evidence of events would be passed in the opposite order but time would still be one-way. Time could appear to be reversed if the projector was modified to play the recording in reverse order, but that’s model hacking, not reality.

Stage 2 – imagine this model as a streamed live-view of the universe, with the universe interfering with distances, as Relativity tells us it does, and as has been observed. The problem is that a lot of the science assumes time is constant, and that’s difficult to disprove from inside the space-time paradigm. We can only observe a space/state-change view, and we may have invented time. Have fun with it.

My company does some cool stuff with our CI pipeline. When someone submits a merge request, we know that the branch may require some manual testing before it gets merged. We enable this with a button on the merge request that lets people spin up a Docker container running the proposed change, whack it onto our Kubernetes cluster, and point a URL at it.

If your branch is called new-feature, you’ll get a shiny copy of the app running just for you at https://new-feature.review.example.com.

If your branch is called super-amazing-cool-new-feature-that-i-love-and-you-will-too, you would expect to be able to test your feature at https://super-amazing-cool-new-feature-that-i-love-and-you-will-too.review.example.com

So, we hit the button. Things begin to churn. We ping off a request to AWS Certificate Manager and… it tells us off.

Oh.

Apparently, the first domain name on a certificate can’t exceed 64 octets in length (including periods). Ours is… somewhat longer than that.

I wish someone had told me that AWS was going to reject my certificate request before I created the branch and wrote the code and created the merge request and waited for the automated test suite to run and waited for the docker container to be built.

Enter git hooks.

git hooks let us execute a script before or after certain git actions. You can do lots of awesome stuff with them, like blocking commits that don’t match certain criteria, triggering deploys in different environments, and more.

In this case, we want to warn ourselves whenever we create a branch name that is going to result in a domain name that won’t pass AWS Certificate Manager’s checks.

We know that the base of our URL is .review.example.com, which is 19 characters long. This gives us (assuming that all of our branch names are plain ol’ UTF-8) 45 characters to play with.

Let’s drop the following into .git/hooks/post-checkout and make it executable:

#!/bin/sh
# ANSI escape codes: red for the warning, then reset back to no colour
RED='\033[0;31m'
NC='\033[0m'
# The first line of `git status` output is "On branch <name>", so field 3 is the branch name
branchname=$(git status | cut -d ' ' -f 3 | head -1)
# 64 octets for the certificate's first domain name, minus 19 for ".review.example.com"
if [ ${#branchname} -gt 45 ]
then
  printf "${RED}💩 This branch name is over 45 characters.\n"
  printf "If you proceed, you will be unable to generate a TLS certificate for your review app.${NC}\n"
fi

Now when we try to create a branch with a luxuriously long name, we get:

➜  hooks git:(master) git checkout -b super-amazing-cool-new-feature-that-i-love-and-you-will-too
Switched to a new branch 'super-amazing-cool-new-feature-that-i-love-and-you-will-too'
💩 This branch name is over 45 characters.
If you proceed, you will be unable to generate a TLS certificate for your review app.
➜  hooks git:(super-amazing-cool-new-feature-that-i-love-and-you-will-too)

Sadly there doesn’t seem to be any pre-checkout hook, so you can’t block a branch from being created (or confirm that it’s really what you want); you can only spit out a warning after the fact. But hey, it’s red!

March 28, 2020

As I’m self-isolating with my vegan daughter, I’ve been trying to cook healthy vegan meals. We had a couple of leeks in the fridge which needed to be used up, so I invented this leek and potato soup, which was pretty delicious.

Lovely soup

Ingredients

  • leeks
  • potatoes
  • a medium onion
  • a stick of celery
  • 2 veg stock cubes
  • Marmite
  • chilli flakes
  • dried herbs (mine was a mix of basil, oregano, parsley, marjoram, sage and thyme according to the packet)

Peel and chop the potatoes into cubes of around 1cm. Chop the leeks into chunks about 1cm wide. Chop the onion finely and fry it in a little oil until it’s translucent. Add the potatoes and fry for a few minutes. Dissolve the stock cubes in around 2 litres of boiling water. Add the liquid to the pan. Add the leeks. Let it bubble. Chop the celery and add it to the pot. Add a tablespoon of Marmite, 2-3 tablespoons of the mixed herbs and some chilli flakes. Stir it all up. Simmer for about 15 minutes. Enjoy!

March 27, 2020

In designing a relational database schema, many people will automatically create a column id integer primary key for every table, using their database’s automatic increment feature to assign a new value to each row they insert. My assertion is that this choice of primary key should be the last resort, not the first.

A database schema is a design artifact, describing the data we want to store and the relationships between records (rows) in those data. It is also meta-design, because the schema constrains us in designing the queries we use to work with the data. Using the same, minimal-effort primary key type for every table fails to communicate information about the structure and meaning of the data in that table, and imposes irrelevant features on the queries we design.

The fact that people use the name id for this autoincrementing integer field gives away its purpose: the primary key is used to identify a row in a database. The primary key for a table should ideally be the minimal subset of relevant information that uniquely identifies an individual record. If you have a single column, say name, with not null and unique constraints, that’s an indicator (though not a cast-iron guarantee) that this column may be the table’s primary key. Sometimes, the primary key can be a tuple of multiple columns. A glyph can be uniquely identified by the tuple (character, font, swash) for example (it can, regardless of whether this is how your particular favourite text system represents it, or whether you think that this is a weird way to store ligatures). The glyphs “(e, Times New Roman Regular 16pt, normal)” and “(ct, Irvin Demibold 24pt, fancy)” are more readily recognisable than the glyphs “146276” and “793651”, even if both are ways to refer to the same data. A music album is identified by the artist and the album name (he says, side-eyeing Led Zeppelin): “A Night at the Opera” is ambiguous while “(Blind Guardian, A Night at the Opera)” is definitely not “(Queen, A Night at the Opera)”.
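
As an illustrative sketch (the table and columns here are assumptions for the example, not from any real schema), the album case as SQL:

create table album (artist varchar not null, name varchar not null, primary key (artist, name));
-- both rows can coexist; the composite key tells them apart
insert into album (artist, name) values ('Blind Guardian', 'A Night at the Opera');
insert into album (artist, name) values ('Queen', 'A Night at the Opera');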

Use an integer identifier where there is no other way to uniquely identify rows in a table. Note: sometimes there is another, more meaningful way, even where that just means using somebody else’s unique identifier: different copies of the same book will have unique shelfmarks if they’re part of a library, for example. People in an organisation may have an employee number, or a single sign-on user name; though there may be privacy reasons not to use these.
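
A hedged sketch of that library example (the copy table and its columns are invented for illustration):

-- the shelfmark is somebody else's identifier, but it is meaningful and unique per copy
create table copy (shelfmark varchar primary key, title varchar not null, acquired date);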

A side-effect of using useful information to identify rows in a database is that it can simplify your queries, because where your foreign keys would otherwise be meaningless numbers, they now actually carry useful information. Here’s an example, from a real application, in which I’m sad to say I designed both the “before” and “after” schemata.

The app is a risk management tool. There are descriptions of risks (I’d like to believe that they all at least have a distinct description but I can’t be sure, so those will use integer id PKs), and for each risk there are people in certain roles who bring particular skills to bear on mitigating the risk. The same role can be applied to more than one risk, the same skill can be applied by more than one role, and one role may apply multiple skills, so there’s a three-way join to work out, for a given risk, what roles and skills are relevant.

The before schema:

create table risk (id integer primary key, description varchar not null, weight integer, severity integer, likelihood integer); -- many fields elided
create table role (id integer primary key, name varchar not null, unique(name)); -- ruh roh
create table skill (id integer primary key, name varchar not null, unique(name)); -- the same anti-pattern twice
create table risk_role_skill (id integer primary key, risk_id integer, role_id integer, skill_id integer, foreign key(risk_id) references risk(id), foreign key(role_id) references role(id), foreign key(skill_id) references skill(id));

In this application, we start by looking at a list of risks then inspect one to see what roles are relevant to mitigating it, and then what skills. So a valid question is: “given a risk, what roles are relevant to it?”

select distinct role.name from role inner join risk_role_skill on role.id = risk_role_skill.role_id where risk_role_skill.risk_id = ?;

But if we notice that the names of each role and skill are unique, then we can surmise that they are sufficient to identify a given role or skill. In fact, the only information we have about roles and skills is their names.

create table risk (id integer primary key, description varchar not null, weight integer, severity integer, likelihood integer); -- many fields elided
create table role (name varchar primary key); -- uhhh...
create table skill (name varchar primary key); -- this still looks weird...
create table risk_role_skill (id integer primary key, risk_id integer, role_name varchar, skill_name varchar, foreign key(risk_id) references risk(id), foreign key(role_name) references role(name), foreign key(skill_name) references skill(name));

Here’s the new query:

select distinct role_name from risk_role_skill where risk_id = ?;

We’ve removed the join completely!

Two remaining points:

  1. There’s literally no information carried in the role and skill tables now, other than their identifying names. Does that mean we need the tables at all? In this case no, but in general we need to think here. How are the names in the join table going to get populated otherwise? If there is a limited set of valid values to choose from, then keeping a table with the range of values and a foreign key constraint to that table may be a good way to express the intent that the column’s content be drawn from that range. As an example, a particular bookstore may have printed, ebook, and audiobook media, so could restrict the medium field in their stock table to one of those values; there’s a sketch of that just after this list.
  2. Why does the risk_role_skill table have an identifier at all? It is a collection of associations between values, so a row’s content is that row’s identity.
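
A sketch of that bookstore example (table and column names invented for illustration):

create table medium (name varchar primary key);
insert into medium (name) values ('printed'), ('ebook'), ('audiobook');
-- the foreign key expresses the intent that stock.medium is drawn from the valid range
create table stock (isbn varchar, medium varchar not null, quantity integer, primary key (isbn, medium), foreign key (medium) references medium(name));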

Here’s the after schema:

create table risk (id integer primary key, description varchar, weight integer, severity integer, likelihood integer); -- many fields elided
create table risk_role_skill (risk_id integer, role varchar, skill varchar, foreign key(risk_id) references risk(id), primary key(risk_id, role, skill));

And the after query:

select distinct role from risk_role_skill where risk_id = ?;

Two fewer tables, no joins, altogether a much simpler database to understand and work with.

Reading List 254 by Bruce Lawson (@brucel)

Now wash your hands.



A Most Unusual World Theatre Day

Today is World Theatre Day, and the theatres of the world are shut down, along with the concert halls, opera houses, museums, galleries and other venues that enrich our public lives.

I’ve been lucky to enjoy performances in famous venues all around the world, from Shakespeare’s Globe, to the Sydney Opera House, to the Metropolitan Opera.

I’m fortunate that my job often allows me the privilege of visiting some of these venues when no-one else is there. It is very special indeed to experience the pure silence of an empty Walt Disney Concert Hall, or the expanse of the stage at the David H. Koch Theater, home of the New York City Ballet, when the masking is flown out. The buildings themselves are an intrinsic part of the cultural experience. These places are supposed to be full of people, yet they still manage to fizzle with energy and potential when they’re empty. That should give us hope.

I’ve spent years working in and around theatres, as an administrator, technician, box office assistant, and as a consultant and supplier. I’ve also spent time working as a lighting designer.

Lighting design is true theatrical power and magic. It’s thrilling to plunge a 2,000-seat auditorium into darkness, or to use light to transform an old church into a den for Shakespeare. It makes you viscerally appreciate how a shared environment can alter your response to art and ideas, and why these public spaces are so important (and why, as well, they need to be accessible to everyone).

Our cultural institutions are stepping up in this unprecedented moment, and stepping into our homes. The Metropolitan Opera’s nightly streams have been reaching hundreds of thousands of people. The UK’s National Theatre is opening up its back catalogue of NT Live performances online for free. Around the world, countless artists and companies are finding new ways to work and reach audiences who can no longer gather together to experience art and culture together in a shared physical environment.

This is all very welcome, and speaks to the vital part that art and culture plays in our lives. 

But none of this takes away from the magic of experiencing these incredible venues in person, with their history and aura, and I can’t wait until the day that they’re full once again.

March 25, 2020



Power Up Your Communication During Powerless Times

During these exceptional and difficult times that we are all experiencing, how you communicate with your customers and patrons will leave a strong impression and, if done right, will have a positive impact when things finally start getting back to some sort of normality.

We wanted to help power up your communication efforts by offering our top tips and thoughts:

The power of Email:
If you can, don’t wait for your customers to contact you regarding cancellations or postponements. Make first contact. Email is the logical choice to achieve this.

Whichever ticketing or event management system you use, you should be able to export a list of the email addresses of everyone who has booked tickets for the event, to use as your mailing list. Make sure you keep this list separate from existing marketing lists; the last thing you want is to annoy already disappointed customers with unsolicited emails.

When creating your communication, make sure to set out the options that are open to customers clearly, with clear calls to action. If you are refunding all customers, make sure you set expectations of when this will be processed; the last thing you want is to communicate an incorrect timeframe, which will result in added pressure on systems and processes that will no doubt already be overloaded. If you are asking customers to perform any actions on your website or to call your customer services, it might be a good idea to stagger your send, to help spread the demand on resources.

If you are asking customers to visit your website, triple-check your links. It may sound obvious but if you are rushing to get a communication out, a mistake here can make all your efforts null and void. 

The Power of Search:
You will no doubt see an increase in website traffic as more and more customers search to see if their event is still going ahead. There are things you can do to communicate with customers within search engines, especially Google. The first is to make sure you take advantage of the new schema data tags which have been introduced as a result of the current pandemic. This means that you can tag events with statuses, such as cancelled and postponed, which will then be pulled into Google’s rich results. I would also recommend using Google My Business posts. These posts are similar to those you might see on Facebook or LinkedIn but appear in your brand search results. You can include images and links within these posts. The advantage of these is that they take up a prime position in results, especially on mobile.

Make sure to check your Google Search Console more regularly, as this will give you an insight into what your website visitors are searching for to reach your site and what information they are coming to find. You can also compare this to any site-search results, if you have that functionality on your site. This will help you work out your hierarchy of information, and will inform how you organise it on site along with how you prioritise new content.

If you are still running any paid search ads associated with brand keywords, make sure you include your main Covid-19 information page within your sitelinks; this will help your customers get to the information they require more efficiently.

The Power of Content:
Obviously you will have published, or will be publishing, posts via your social channels keeping customers up to date with developments, which is the right thing to do. This is really a note about communicating with current and potential customers while your current run is affected. The current situation will present opportunities: as more and more people are isolated, spending more time at home, they will be consuming more online content. At the same time, you may also find you have an opportunity to spend a little more time creating engaging and creative content of your own. Make sure you keep the audience in mind in what you create and share: think about their current situations, and pay attention to any current social trends, challenges and conversations; these will offer great opportunities to unleash your creativity.

Don’t neglect the content you already have. Many arts organisations have a fantastic back-catalogue of quality content from the archives, and many of the great arts organisations are now making it available online for free, such as the Met Opera, the Royal Albert Hall and the National Theatre. This not only gives your regular visitors a chance to continue to enjoy the arts but can also be a great opportunity to reach new audiences.

Remember: what you do now as an organisation will leave a long-lasting impression on your current customers, and things will return to normal, whatever that is.

We know each organisation will be facing its own challenges, and we are here to help. If you would like to discuss anything mentioned in this article further, we would be more than happy to help you get the right strategy.
hello@made.media

March 23, 2020

I’ve been working from home for over 5 years. Over that time, I’ve improved my home office and tweaked my work-day routines. Being productive while working from home takes practice and experience. I’ve gotten pretty good at it.

That was until the past few weeks. My anxiety and productivity are on opposite tracks: while my anxiety has shot up, my productivity has fallen off a cliff.

I know many of you are now working from home for the first time. If you’re finding it difficult, know it’s not just you. We’re living in a time of unprecedented uncertainty. It’s difficult for everyone, even those of us who have been working from home for years.

Everything is pointing towards things getting worse over the next few weeks. For the short term at least, this is the new normal. And so, I’m hitting reset. I’m using all the tips and tricks I’ve learnt over the past few years to stay as focused as I can be.

Here’s how I’m trying to stay focused this week:

  • As much as I can, I’m resisting the urge to read news during the day. I have plenty of time to catch up on things in the evening.
  • I keep my phone in a different room while I work.
  • I’m using the Focus app to block Twitter, Slack and news sites.
  • I’m using the Streaks app to track 40-minute blocks of uninterrupted work (aka the Pomodoro Technique).
  • I’m being as realistic as I can about what I can accomplish on any given day. My task list is 4-5 things, max.
  • I’m having lots of conversations with family and friends. I have a few lunch-time video calls lined up. Social distancing doesn’t have to mean social isolation.
  • I’ve given myself permission to take regular extended breaks whenever I feel like it.
  • I’m meditating every morning for 10 minutes. Sam Harris just released a good podcast on why meditation matters in an emergency.
  • Music is helping. I have it playing all day in the office.
  • I’m trying to be as helpful and generous as I can to my clients. We’re all in this together.
  • I’m trying not to be too hard on myself.

How are you getting on working from home? What things are helping you? Let me know – my email inbox is open: marc@16by9.net.

March 20, 2020

Jeremy Keith:

I’m quite certain that one positive outcome of The Situation will be a new-found appreciation for activities we don’t have to do. I’m looking forward to sitting in a pub with a friend or two, or going to see a band, or a play or a film, and just thinking “this is nice.”

March 16, 2020

To quote Wikipedia:

May you live in interesting times is an English expression which purports to be a translation of a traditional Chinese curse. While seemingly a blessing, the expression is normally used ironically; life is better in “uninteresting times” of peace and tranquility than in “interesting” ones, which are usually times of trouble.

Online shopping at the Co-op by Stuart Langridge (@sil)

On the Saturday just gone, I thought to myself: OK, better get some food in. The cupboards aren’t bare or anything, but my freezer was showing a distinct skew towards “things that go with the main bit of dinner” and away from “things that are the main bit of dinner”, which is a long way of saying: didn’t have any meat. So, off to online shopping!

I sorta alternate between Tesco and Sainsbury’s for grocery shopping; Tesco decided they wouldn’t deliver to the centre of the city for a little while, but they’re back on it now. Anyway, I was rather disillusioned to see that both of them had no delivery slots available for at least a week. It seems that not only are people panic-buying toilet roll, they’re panic-buying everything else too. I don’t want to wait a week. So, have a poke around some of the others… and they’re all the same. Asda, Morrisons, Ocado, Iceland… wait a week at least for a delivery. Amazon don’t do proper food in the UK — “Amazon Pantry” basically sells jars of sun-dried tomatoes and things, not actual food — and so I was a little stymied. Hm. What to do? And then I thought of the Co-op. Which turned out to be an enormously pleasant surprise.

Their online shopping thing is rather neat. There is considerably less selection than there is from the big supermarkets, it must be admitted. But the way you order shows quite a lot of thinking about user experience. You go to the Co-op quickshop and… put in your postcode. No signup required. And the delay is close to zero. It’s currently 2pm on Monday, and I fill in my postcode and it tells me that the next available slot is 4pm on Monday. Two hours from now. That’s flat-out impossible everywhere else; the big supermarkets will only have slots starting from tomorrow even in less trying times. You go through and add the things you want to buy and then fill in your card details to pay… and then a chap on a motorbike goes to the Co-op, picks up your order, and drives it to your place. I got a text message¹ when the motorbike chap set off, before he’d even got to the Co-op, giving me a URL by which I could track his progress. I got messages as he picked up the shopping and headed for mine. He arrived and gave me the stuff. All done.

It seemed very community-focused, very grass-roots. They don’t do their own deliveries; they use a courier, but a very local one. The stuff’s put into bags by your local Co-op and then delivered directly to you with very little notice. They’re open about the process and what’s going on. It seems so much more personal than the big supermarkets do… which I suppose is the Co-op’s whole shtick in the first place, and it’s commendable that they’ve managed to keep that the case even though they’ve moved online. And while the Co-op is a nationwide organisation, it’s also rather local and community-focused. I’ll be shopping there again; shame on me that I had to be pushed into it this first time.

  1. the company that they use as couriers is called Stuart. This was confusing!

March 13, 2020

Reading List 253 by Bruce Lawson (@brucel)

March 12, 2020

March 03, 2020

We live in a world where websites and apps mostly make people unhappy. Buying or ordering or interacting with anything at all online involves a thousand little unpleasant bumps in the road, a thousand tiny chips struck off the edges of your soul. “This website uses cookies: accept all?” Videos that appear over the thing you’re reading and start playing automatically. Grant this app access to your contacts? Grant this app access to your location? “Sign up for our newsletter”, with a second button saying “No, because I hate free things and also hate America”. Better buy quick — there’s only 2 tickets/beds/rooms/spaces left! Now now now!

This is not new news. Everyone already knows this. If you ask people — ordinary, real people, not techies — about their experiences of buying things online or reading things online and say, was this a pleasant thing to do? were you delighted by it? then you’re likely to get a series of wry headshakes. It’s not just that everyone knows this, everyone’s rather inured to it; the expectation is that it will be a bit annoying but you’ll muddle through. If you said, what’s it like for you when your internet connection goes down, or you want to change a flight, they will say, yeah, I’ll probably have to spend half an hour on hold, and the call might drop when I get to queue position 2 and I’ll have to call again, and they’ll give me the runaround; the person on the call will be helpful, but Computer will Say No. Decent customer service is no longer something that we expect to receive; it’s something unusual and weird. Even average non-hostile customer service is now so unusual that we’re a bit delighted when it happens; when the corporate body politic rouses itself to do something other than cram a live rattlesnake up your bottom in pursuit of faceless endless profit then that counts as an unexpected and pleasant surprise.

It’d be nice if the world wasn’t like that. But one thing we’re a bit short of is the vocabulary for talking about this; rather than the online experience being a largely grey miasma of unidentified minor divots, can we enumerate the specific things that make us unhappy? And for each one, look at how it could be done better and why it should be done better?

Trine Falbe, Kim Andersen, and Martin Michael Frederiksen think maybe we can, and have written The Ethical Design Handbook, published by Smashing Media. It’s written, as they say, for professionals — for the people building these experiences, to explain how and why to do better, rather than for consumers who have to endure them. And they define “ethical design” as businesses, products, and services that grow from a principle of fairness and fundamental respect towards everyone involved.

They start with some justifications for why ethical design is important, and I’ll come back to that later. But then there’s a neat segue into different types of unethical design, and this is fascinating. There’s nothing here that will come as a surprise to most people reading it, especially most tech professionals, but I’d not seen it enumerated quite this baldly before. They describe, and name, all sorts of dark patterns and unpleasant approaches which are out there right now: mass surveillance, behavioural change, promoting addiction, manipulative design, pushing the sense of urgency through scarcity and loss aversion, persuasive design patterns; all with real examples from real places you’ve heard of. Medium hiding email signup away so you’ll give them details of your social media account; Huel adding things to your basket which you need to remove; Viagogo adding countdown timers to rush you into making impulsive purchases; Amazon Prime’s “I don’t want my benefits” button, meaning “don’t subscribe”. Much of this research already existed — the authors did not necessarily invent these terms and their classifications — but having them all listed one after the other is both a useful resource and a rather terrifying indictment of our industry and the manipulative techniques it uses.

However, our industry does use these techniques, and it’s important to ask why. The book kinda-sorta addresses this, but it shies away a little from admitting the truth: companies do this stuff because it works. Is it unethical? Yeah. Does it make people unhappy? Yeah. (They quote a rather nice study suggesting that half of all people recognise these tricks and distrust sites that use them, and the majority of those go further and feel disgusted and contemptuous.) But, and this is the kicker… it doesn’t seem to hurt the bottom line. People feel disgusted or distrusting and then still buy stuff anyway. I’m sure a behavioural psychologist in the 1950s would have been baffled by this: if you do stuff that makes people not like you, they’ll go elsewhere, right? Which is, it seems, not the case. Much as it’s occasionally easy to imagine that companies do things because they’re actually evil and want to increase the amount of suffering in the world, they do not. There are no actual demons running companies. (Probably. Hail to Hastur, just in case.) Some of it is likely superstition — everyone else does this technique, so it’ll probably work for us — and some of it really should get more rigorous testing than it does get: when your company added an extra checkbox to the user journey saying “I would not dislike to not not not sign not up for the newsletter”, did purchases go up, or just newsletter signups? Did you really A/B test that? Or just assume that “more signups, even deceptive ones = more money” without checking? But they’re not all uninformed choices. Companies do test these dark patterns, and they do work. We might wish otherwise, but that’s not how the world is; you can’t elect a new population who are less susceptible to these tricks or more offended by them, even if you might wish to.

And thereby hangs, I think, my lack of satisfaction with the core message of this book. It’s not going to convince anyone who isn’t already convinced. This is where we come back to the justifications mentioned earlier. “[P]rivacy is important to [consumers], and it’s a growing concern”, says the book, and I wholeheartedly agree with this; I’ve written and delivered a whole talk on precisely this topic at a bunch of conferences. But I didn’t need to read this book to feel that manipulation of the audience is a bad thing: not because it costs money or goodwill, but just because it’s wrong, even if it earns you more money. It’s not me you’ve gotta convince: it’s the people who put ethics and goodwill on one side of the balance and an increased bottom line on the other side and the increased bottom line wins. The book says “It’s not good times to gamble all your hard work for quick wins at the costs of manipulation”, and “Surveillance capitalism is unethical by nature because at its core, it takes advantage of rich data to profile people and to understand their behaviour for the sole purpose of making money”, but the people doing this know this and don’t care. It in fact is good times to go for quick wins at the cost of manipulation; how else can you explain so many people doing it? And so the underlying message here is that the need for ethical design is asserted rather than demonstrated. Someone who already buys the argument (say, me) will nod their way through the book, agreeing at every turn, and finding useful examples to bolster arguments or flesh out approaches. Someone who doesn’t already buy the argument will see a bunch of descriptions of a bunch of things that are, by the book’s definition, unethical… and then simply write “but it makes us more money and that’s my job, so we’re doing it anyway” after every sentence and leave without changing anything.

It is, unfortunately, the same approach taken by other important but ignored technical influences, such as accessibility or open source or progressive enhancement. Or, outside the tech world, environmentalism or vegetarianism. You say: this thing you’re doing is bad, because just look at it, it is… and here’s all the people you’re letting down or excluding or disenfranchising by being bad people, so stop being bad people. It seems intuitively obvious to anyone who already believes: why would you build inaccessible sites and exclude everyone who isn’t able to read them? Why would you build unethical apps that manipulate people and leave them unhappy and disquieted? Why would you use plastic and drive petrol cars when the world is going to burn? But it doesn’t work. I wish it did. Much as the rightness and righteousness of our arguments ought to be convincing in themselves, they are not, and we’re not moving the needle by continually reiterating the reasons why someone should believe.

But then… maybe that’s why the book is named The Ethical Design Handbook and not The Ethical Design Manifesto. I went into reading this hoping that what the authors had written would be a thing to change the world, a convincer that one could hand to unethical designers or ethical designers with unethical bosses and which would make them change. It isn’t. They even explicitly disclaim that responsibility early on: “Designers from the dark side read other books, not this one, and let us leave it at that,” says the introduction. So this maybe isn’t the book that changes everyone’s minds; that’s someone else’s job. Instead, it’s a blueprint for how to build the better world once you’ve already been convinced to do so. If your customers keep coming back and saying that they find your approach distasteful, if you decide to prioritise delight over conversions at least a little bit, if you’re prepared to be a little less rich to be a lot more decent, then you’ll need a guidebook to explain what made your people unhappy and what to do about it. In that regard, The Ethical Design Handbook does a pretty good job, and if that’s what you need then it’s worth your time.

This is an important thing: there’s often the search for a silver bullet, for a thing which fixes the world. I was guilty of that here, hoping for something which would convince unethical designers to start being ethical. That’s not what this book is for. It’s for those who want to but don’t know how. And because of that, it’s full of useful advice. Take, for example, the best practices chapter: it specifically calls out some wisdom about cookie warnings. In particular, it calls out that you don’t need cookie warnings at all if you’re not being evil about what you plan to allow your third party advertisers to do with the data. This is pretty much the first place I’ve seen this written down, despite how it’s the truth. And this is useful in itself; to have something to show one’s boss or one’s business analyst. If the word has come down from on high to add cookie warnings to the site then pushback on that from design or development is likely to be ignored… but being able to present a published book backing up those words is potentially valuable. Similarly, the book goes to some effort to quantify what ethical design is, by giving scores to what you do or don’t do, and this too is a good structure on which to hang a new design and to use to feed into the next thing your team builds. So, don’t make the initial mistake I did, of thinking that this is a manifesto; this is a working book, filled with how to actually get the job done, not a philosophical thinkpiece. Grab it and point at it in design meetings and use it to bolster your team through their next project. It’s worth it.

February 28, 2020

Reading List 252 by Bruce Lawson (@brucel)

February 27, 2020

Yesterday, we observed that the goal of considering the go to statement harmful was so that a programmer could write a correct program and have done with it. We noticed that this is never how computering works: many programs are not even instantaneously correct because they represent an understanding of a domain captured at an earlier time, before the context was altered by both external changes and the introduction of the software itself.

Today, let’s look at the benefits of removing the go to statement. Dijkstra again:

My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible.

This makes sense! Our source code is a static model of our software system, which is itself (we hope) a model of a problem that somebody has along with tools to help with solving that problem. But our software system is a dynamic actor that absorbs, transforms, and emits data, reacting to and generating events in communication with other human and non-human actors. We need to ensure that the dynamic behaviour is evident in the static model, so that we can “reason about” the development of the system. How does Dijkstra’s removal of go to help us achieve that?

Let us now consider how we can characterize the progress of a process. (You may think about this question in a very concrete manner: suppose that a process, considered as a time succession of actions, is stopped after an arbitrary action, what data do we have to fix in order that we can redo the process until the very same point?) If the program text is a pure concatenation of, say, assignment statements (for the purpose of this discussion regarded as the descriptions of single actions) it is sufficient to point in the program text to a point between two successive action descriptions. (In the absence of go to statements I can permit myself the syntactic ambiguity in the last three words of the previous sentence: if we parse them as “successive (action descriptions)” we mean successive in text space; if we parse as “(successive action) descriptions” we mean successive in time.) Let us call such a pointer to a suitable place in the text a “textual index.”

[…]

Why do we need such independent coordinates? The reason is – and this seems to be inherent to sequential processes – that we can interpret the value of a variable only with respect to the progress of the process. If we wish to count the number, n say, of people in an initially empty room, we can achieve this by increasing n by one whenever we see someone entering the room. In the in-between moment that we have observed someone entering the room but have not yet performed the subsequent increase of n, its value equals the number of people in the room minus one!

So the value of go to-less programming is that I can start at the entry point, and track every change in the system between there and the point of interest. But I only need to do that once (well, once for each different condition, loop, procedure code path, etc.) and then I can write down “at this index, these things have happened”. Conversely, I can start with the known state at a known point, and run the program counter backwards, undoing the changes I observe (obviously encountering problems if any of those are assignments). With go to statements present, I cannot know the history of how the program counter came to be here, so I can’t make confident statements about the dynamic evolution of the system.

This isn’t the only way to ensure that we understand a software system’s dynamic behaviour, which is lucky because it’s not a particularly good one. In today’s parlance, it “doesn’t scale”. Imagine being given the bug report “I clicked in the timeline on a YouTube video during an advert and the comments disappeared”, and trying to build a view of the stateful evolution of the entire YouTube system (or even just the browser, if you like, and if it turns out you don’t need the rest of the information) between main() and the location of the program counter where the bug emerged. Even if we pretend that a browser isn’t multithreaded, you would not have a good time.

Another approach is to encapsulate parts of the program, so that the amount we need to comprehend in one go is smaller. When you do that, you don’t need to worry about where the global program counter is or how it got there. Donald Knuth demonstrated this in Structured Programming with go to Statements, and went on to say that removing all instances of go to is solving the wrong problem:

One thing we haven’t spelled out clearly, however, is what makes some go to’s bad and others acceptable. The reason is that we’ve really been directing our attention to the wrong issue, to the objective question of go to elimination instead of the important subjective question of program structure.

In the words of John Brown [here, Knuth cites an unpublished note], “The act of focusing our mightiest intellectual resources on the elusive goal of go to-less programs has helped us get our minds off all those really tough and possibly unresolvable problems and issues with which today’s professional programmer would otherwise have to grapple.”

Much has been written on structured programming, procedural programming, object-oriented programming, and functional programming, which all have the same goal: separate a program into “a thing which uses this little bit of software, according to its contract” and “the thing that you would like to believe implements this contract”. OOP and FP additionally make explicit the isolation of state changes, so that you don’t need to know the whole value of the computer’s memory to assert conformance to the contract. Instead, you just need to know the value of the little bit of memory in the fake standalone computer that runs this one object, or this one function (or indeed model the behaviour of the object or function without reference to computer details like memory).

Use or otherwise of go to statements in a thoughtfully-designed program (I admit that phrase opens a can of worms) is orthogonal to understanding the behaviour of the program. Let me type part of an Array class based on a linked list directly into my blog editor:

' Assumes a hand-rolled LinkedList(Of T) offering IsEmpty(), Next() and Element();
' this is not the standard System.Collections.Generic.LinkedList.
Public Class Array(Of ElementType)
  Private entries As LinkedList(Of ElementType)

  Public Function Count() As Integer
    ' Walk the list, counting nodes until we reach the empty tail.
    Dim list As LinkedList(Of ElementType) = entries
    Count = 0
  nonempty:
    If list.IsEmpty() Then GoTo empty
    Count = Count + 1
    list = list.Next()
    GoTo nonempty
  empty:
    Exit Function
  End Function

  Public Function At(index As Integer) As ElementType
    ' Walk the list, decrementing the cursor; the element where it reaches zero is the answer.
    Dim cursor As Integer = index
    Dim list As LinkedList(Of ElementType) = entries
  walk: ' "next" is a reserved word in VB, so it cannot be used as a label
    If list.IsEmpty() Then Throw New System.IndexOutOfRangeException()
    If cursor = 0 Then Return list.Element()
    list = list.Next()
    cursor = cursor - 1
    GoTo walk
  End Function
End Class

While this code sample uses go to statements, I would suggest it’s possible to explore the assertion “objects of class Array satisfy the contract for an array” without too much difficulty. As such, the behaviour of the program anywhere that uses arrays is simplified by the assumption “Array behaves according to the contract”, and the behaviour anywhere else is simplified by ignoring the Array code entirely.

Whatever harm the go to statement caused, it was not as much harm as trying to define a “correct” program by understanding all of the ways in which the program counter arrived at the current instruction.

February 26, 2020

Aquamarine by Bruce Lawson (@brucel)

Here’s a newly-recorded version of one of my favourite songs that I’ve written (a crappy cassette 4-track demo was previously posted). I was obsessed with TS Eliot’s poem Marina, a monologue inspired by Shakespeare’s Pericles. So I ripped that off, nicked a line or two from The Waste Land, pinched a bit of Shakespeare’s The Tempest and, while the literary store detective was looking the other way, ran off with a bit of Dylan Thomas too. The Shakespeare reading is by my friend Richard Crowest. Bass guitar and production are by Shez.

Aquamarine –
I’m a ship becalmed after stormy seas.
You’ve been silver and green;
I love you best now for your clarity.
You sing to me in sharpened keys.
You bring me emeralds and harmonies.

I will be here for you if you’ll be here for me
Sometimes, the tide turns
and everything is monochrome.

Aquamarine –
Your wet hair dries in the warm sea breeze.
Lie still and dream
Of the mountains – there you feel free.
Sail across still memories
Under sleep where all the waters meet.

I will be here for you if you’ll be here for me
Dry stones and white bones
and everything is monochrome.

Aquamarine –
This music crept upon the water to me
I’m a machine
Powered by your electricity.
You sail across still memories
You bring me emeralds and energy.

I will be here for you if you’ll be here for me
From the horizon
The world can turn monochrome.

What seas, what shores,
what great rocks?

Seize what’s yours;
What grey rocks?

What islands? What waters lap at the bow?

The sea’s daughter, you ebb and you flow;
The sea’s daughter, emerald green;
The sea’s daughter, Aquamarine.

Lie still, be calm, and dream.
Aquamarine.

Words and music © Bruce Lawson 1991, all rights reserved.

Dijkstra didn’t claim to consider the go to statement harmful, not in those words. The title of his letter to CACM was provided by the editor, Niklaus Wirth, who did such a great job that the entire industry knows that go to is “Considered Harmful”, and that you can quickly rack up the clicks by considering other things harmful.

A deeper reading of his short (~1,400-word) article raises some interesting points that have not yet received as much airing. Here, in the interests of writing an even shorter letter, is just one.

My first remark is that, although the programmer’s activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, for it is this process that has to accomplish the desired effect; it is this process that in its dynamic behavior has to satisfy the desired specifications. Yet, once the program has been made, the “making” of the corresponding process is delegated to the machine.

There are many difficulties with this statement, including the presumed gender of the programmer. Let us also consider the idea of a “correct” program, which does not exist for the majority of programmers. Eight years after Dijkstra’s letter was published, Belady and Lehman published the first law of program evolution dynamics:

Law of continuing change. A system that is used undergoes continuing change until it is judged more cost effective to freeze and recreate it. Software does not face the physical decay problems that hardware faces. But the power and logical flexibility of computing systems, the extending technology of computer applications, the ever-evolving hardware, and the pressures for the exploitation of new business opportunities all make demands. Manufacturers, therefore, encourage the continuous adaptation of programs to keep in step with increasing skill, insight, ambition, and opportunity. In addition to such external pressures for change, there is the constant need to repair system faults, whether they are errors that stem from faulty implementation or defects that relate to weaknesses in design or behavior. Thus a programming system undergoes continuous maintenance and development, driven by mutually stimulating changes in system capability and environmental usage. In fact, the evolution pattern of a large program is similar to that of any other complex system in that it stems from the closed-loop cyclic adaptation of environment to system changes and vice versa.

This model of programming looks much more familiar to me, when I reflect on my experience, than the Dijkstra model. If Dijkstra’s programmer stopped programming when they had “constructed a correct program”, then their system would fail as it didn’t adapt to “increasing skill, insight, ambition, and opportunity”.

The programmer who would thrive in this environment is more akin to Ward Cunningham’s opportunistic rewriter, based on his experience of the WyCash Portfolio Management System. That programmer rewrites every module they touch, to ensure that it represents the latest information they have. We recognise the genesis of Ward’s “technical debt” concept in this quote, and also perhaps what we would now call “refactoring”:

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

The traditional waterfall development cycle has endeavored to avoid programming catastrophe by working out a program in detail before programming begins. We watch with some interest as the community attempts to apply these techniques to objects. However, using our debt analogy, we recognize this amounts to preserving the concept of payment up-front and in-full. The modularity offered by objects and the practice of consolidation make the alternative, incremental growth, both feasible and desirable in the competitive financial software market.

Ward also doesn’t use go to statements; his programming environment doesn’t supply them. But it is not the ability of his team to avoid incorrect programs by using other control structures that he finds valuable; rather the willingness of his programmers to jettison old code and evolve their system with its context.

February 21, 2020

Reading List 251 by Bruce Lawson (@brucel)

February 19, 2020

I like having the news headlines on my phone’s home screen. (Well, on the screen to the right.) It helps me keep up with what’s going on in the world. But it’s hard to find a simple headline home screen widget which isn’t full of ads or extra frippery or images or tracking; I just want headlines, plain text, not unpleasantly formatted, and high-density. I don’t want to see three headlines; I’d rather see ten. I tried a whole bunch of news headline home screen widgets and they’re all terrible; not information-dense enough, or they are but they’re ugly, or they insist on putting pictures in, or they display a ton of other information I don’t want.

It occurred to me that I don’t really need a news reader per se; just an RSS reader, which I then point at Google’s “all news” feed (which they move around from time to time but which, at the time of writing in February 2020, is at https://news.google.com/news/rss). However, RSS reader widgets are also all terrible.

Finally, I thought: fine, I’ll just do it myself. But I really don’t want to write Java and set up Android Studio. So I installed Web Widget which just renders a web page to a home screen widget, and then wrote a simple web page and stuck it at the root of my phone’s storage. I can then point Web Widget at file:///sdcard/noos.html and it all works, and I can customise it how I like. Every one’s a winner. Nice simple way to create widgets that do what I want. They can’t be animated or anything, but if you want something which displays some external data and is happy to be polled every now and again to update, it’s perfectly fine. Sadly, there’s no continuity of storage (indexedDB exists but doesn’t persist and localStorage doesn’t exist at all), but it’s good for what I needed.
