Last updated: December 21, 2022 11:22 AM (All times are UTC.)
What is “accessibility”? For some, it’s about ensuring that your sites and apps don’t block people with disabilities from completing tasks. That’s the main part of it, but in my opinion it’s not all of the story. Accessibility, to me, means taking care to develop digital services that are as inclusive as possible. That means inclusive of people with disabilities, of people outside Euro-centric cultures, and of people who don’t have expensive, top-of-the-range hardware and always-on, cheap, fast networks.
In his closely argued post The Performance Inequality Gap, 2023, Alex Russell notes that “When digital is society’s default, slow is exclusionary”, and continues:
sites continue to send more script than is reasonable for 80+% of the world’s users, widening the gap between the haves and the have-nots. This is an ethical crisis for the frontend.
Big Al goes on to suggest that in order to reach interactivity in less than 5 seconds on first load, we should send no more than ~150KiB of HTML, CSS, images, and render-blocking font resources, and no more than ~300-350KiB of JavaScript. (If you want to know the reasoning behind this, Alex meticulously cites his sources in the article; read it!)
Now, I’m not saying this is impossible using modern frameworks and tooling (React, Next.js etc) that optimise for good “developer experience”. But it is a damned sight harder, because such tooling prioritises developer experience over user experience.
In January, I’ll be back on the jobs market (here’s my LinkTin resumé!) so I’ve been looking at what’s available. Today I saw a job for a Front End lead who will “write the first lines of front end code and set the tone for how the team approaches user-facing software development”. The job spec requires a “bias towards solving problems in simple, elegant ways”, and the candidate should be “confident building with…reliability and accessibility in mind”. Yet, weirdly, even though the first lines of code are yet to be written, it seems the tech stack is already decided upon: React and Next.js.
As Alex’s post shows, such tooling conspires against simplicity and elegance, and certainly against reliability and accessibility. To repeat his message:
When digital is society’s default, slow is exclusionary
Bad performance is bad accessibility.
Twitter currently has problems. Well, one specific problem, which is the bloke who bought it. My solution to this problem has been to move to Mastodon (@sil@mastodon.social if you want to do the same), but I’ve invested fifteen years of my life providing twitter.com with free content so I don’t really want it to go away. Since there’s a chance that the whole site might vanish, or that it continues on its current journey until I don’t even want my name associated with it any more, it makes sense to have a backup. And obviously, I don’t want all that lovely writing to disappear from the web (how would you all cope without me complaining about some random pub’s music in 2011?!), so I wanted to have that backup published somewhere I control… by which I mean my own website.
So, it would be nice to be able to download a list of all my tweets, and then turn that into some sort of website so it’s all still available and published by me.
Fortunately, Zach Leatherman came to save us by building a tool, Tweetback, which does a lot of the heavy lifting on this. Nice one, that man. Here I’ll describe how I used Tweetback to set up my own personal Twitter archive. This is unavoidably a bit of a developer-ish process, involving the Terminal and so on; if you’re not at least a little comfortable with doing that, this might not be for you.
This part is mandatory. Twitter graciously permit you to download a big list of all the tweets you’ve given them over the years, and you’ll need it for this. As they describe in their help page, go to your Twitter account settings and choose Your account > Download an archive of your data. You’ll have to confirm your identity and then say Request data. They then go away and start constructing an archive of all your Twitter stuff. This can take a couple of days; they send you an email when it’s done, and you can follow the link in that email to download a zip file. This is your Twitter backup; it contains all your tweets (and some other stuff). Stash it somewhere; you’ll need a file from it shortly.
You’ll need both node.js and git installed to do this. If you don’t have node.js, go to nodejs.org and follow their instructions for how to download and install it for your computer. (This process can be fiddly; sorry about that. I suspect that most people reading this will already have node installed, but if you don’t, hopefully you can manage it.) You’ll also need git installed: Github have some instructions on how to install git or Github Desktop, which should explain how to do this stuff if you don’t already have it set up.
Now, you need to clone the Tweetback repository from Github. On the command line, this looks like git clone https://github.com/tweetback/tweetback.git; if you’re using Github Desktop, follow their instructions to clone a repository. This should give you the Tweetback code, in a folder on your computer. Make a note of where that folder is.
Open a Terminal on your machine and cd into the Tweetback folder, wherever you put it. Now, run npm install to install all of Tweetback’s dependencies. Since you have node.js installed from above, this ought to just work. If it doesn’t… you get to debug a bit. Sorry about that. This should end up looking something like this:
$ npm install
npm WARN deprecated @npmcli/move-file@1.1.2: This functionality has been moved to @npmcli/fs
added 347 packages, and audited 348 packages in 30s
52 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
From here, you’re following Tweetback’s own README instructions: they’re online at https://github.com/tweetback/tweetback#usage and also are in the README file in your current directory.
Open up the zip file you downloaded from Twitter, and get the data/tweets.js file from it. Put that in the database folder in your Tweetback folder, then edit that file to change window.YTD.tweet.part0 on the first line to module.exports, as the README says. This means that your database/tweets.js file will now have the first couple of lines look like this:
module.exports = [
  {
    "tweet" : {
Now, run npm run import. This will go through your tweets.js file and load it all into a database, so it can be more easily read later on. You only need to do this step once. This will print a bunch of lines that look like { existingRecordsFound: 0, missingTweets: 122 }, and then a bunch of lines that look like Finished count { count: 116 }, and then it’ll finish. This should be relatively quick, but if you’ve got a lot of tweets (I have 68,000!) then it might take a little while. Get yourself a cup of tea and a couple of biscuits and it’ll be done when you’ve poured it.
If you’re setting up your own (sub)domain for your twitter archive, so it’ll be at the root of the website (so, https://twitter.example.com or whatever) then you can skip this step. However, if you’re going to put your archive in its own directory, so it’s not at the root (which I did, for example, at kryogenix.org/twitter), then you need to tell your setup about that.
To do this, edit the file eleventy.config.js, and at the end, before the closing }, add a new line, so the end of the file looks like this:
  eleventyConfig.addPlugin(EleventyHtmlBasePlugin);
  return {pathPrefix: "/twitter/"}
};
The string "/twitter/" should be whatever you want the path to the root of your Twitter archive to be, so if you’re going to put it at mywebsite.example.com/my-twitter-archive, set pathPrefix to be "/my-twitter-archive". This is only a path, not a full URL; you do not need to fill in the domain where you’ll be hosting this here.
As the Tweetback README describes, edit the file _data/metadata.js. You’ll want to change three values in here: username, homeLabel, and homeURL.
username is your Twitter username. Mine is sil: yours isn’t. Don’t include the @ at the beginning.
homeLabel is the thing that appears in the top corner of your Twitter archive once generated; it will be a link to your own homepage. (Note: not the homepage of this Twitter archive! This will be the text of a link which takes you out of the Twitter archive and to your own home.)
homeURL is the full URL to your homepage. (This is “https://kryogenix.org/” for me, for example. It is the URL that homeLabel links to.)

OK. Now you’ve done all the setup. This step actually takes all of that and builds a website from all your tweets.
Run npm run build.
If you’ve got a lot of tweets, this can take a long time. It took me a couple of hours, I think, the first time I ran it. Subsequent runs take a lot less time (a couple of minutes for me, maybe even shorter for you if you’re less mouthy on Twitter), but the first run takes ages because it has to fetch all the images for all the tweets you’ve ever written. You’ll want a second cup of tea here, and perhaps dinner.
It should look something like this:
$ npm run build
> tweetback@1.0.0 build
> npx @11ty/eleventy --quiet
[11ty] Copied 1868 files / Wrote 68158 files in 248.58 seconds (3.6ms each, v2.0.0-canary.18)
You may get errors in here about being unable to fetch URLs (Image request error Bad response for https://pbs.twimg.com/media/C1VJJUVXEAE3VGE.jpg (404): Not Found and the like); this is because some Tweets link to images that aren’t there any more. There’s not a lot you can do about this, but it doesn’t stop the rest of the site building.
Once this is all done, you should have a directory called _site, which is a website containing your Twitter archive! Hooray! Now you publish that directory, however you choose: copy it up to your website, push it to github pages or Netlify or whatever. You only need the contents of the _site directory; that’s your whole Twitter archive website, completely self-contained; all the other stuff is only used for generating the archive website, not for running it once it’s generated.
If you’re still using Twitter, you may post more Tweets after your downloadable archive was generated. If so, it’d be nice to update the archive with the contents of those tweets without having to request a full archive from Twitter and wait two days. Fortunately, this is possible. Unfortunately, you gotta do some hoop-jumping to get it.
You see, to do this, you need access to the Twitter API. In the old days, people built websites with an API because they wanted to encourage others to interact with that website programmatically as well as in a browser: you built an ecosystem, right? But Twitter are not like that; they don’t really want you to interact with their stuff unless they like what you’re doing. So you have to apply for permission to be a Twitter developer in order to use the API.
To do this, as the Tweetback readme says, you will need a Twitter bearer token. To get one of those, you need to be a Twitter developer, and to be that, you have to fill in a bunch of forms and ask for permission and be manually reviewed. Twitter’s documentation explains about bearer tokens, and explains that you need to sign up for a Twitter developer account to get them. Go ahead and do that. This is an annoying process where they ask a bunch of questions about what you plan to do with the Twitter API, and then you wait until someone manually reviews your answers and decides whether to grant you access or not, and possibly makes you clarify your answers to questions. I have no good suggestions here; go through the process and wait. Sorry.
Once you are a Twitter developer, create an app, and then get its bearer token. You only get this once, so be sure to make a note of it. In a clear allusion to the delight that this whole process brings to users, it probably will begin by screaming AAAAAAAAAAAAAAA and then look like a bunch of incomprehensible gibberish.
Now to pull in new data, run:
TWITTER_BEARER_TOKEN=AAAAAAAAAAAAAAAAAAq3874nh93q npm run fetch-new-data
(substituting in the value of your token, of course, which will be longer.)
This will fetch any tweets that aren’t in the database because you made them since! And then run npm run build again to rebuild the _site directory, and re-publish it all.
I personally run these steps (fetch-new-data, then build, then publish) daily in a cron job, which runs a script with contents (approximately):
#!/bin/bash
cd "$(dirname "$0")"
echo Begin publish at $(date)
echo Updating Twitter archive
echo ========================
TWITTER_BEARER_TOKEN=AAAAAAAAAAAAAA9mh8j9808jhey9w34cvj3g3 npm run fetch-new-data 2>&1
echo Updating site from archive
echo ==========================
npm run build 2>&1
echo Publishing site
echo ===============
rsync -e "ssh" -az _site/ kryogenix.org:public_html/twitter 2>&1
echo Finish publish at $(date)
but how you publish and rebuild, and how often you do that, is of course up to you.
What Tweetback actually does is use your twitter backup to build an 11ty static website. (This is not all that surprising, since 11ty is also Zach’s static site builder.) This means that if you’re into 11ty you could make the archive better and more comprehensive by adding stuff. There are already some neat graphs of most popular tweets, most recent tweets, the emoji you use a lot (sigh) and so on; if you find things that you wish that your Twitter archive contained, file an issue with Tweetback, or better still write the change and submit it back so everyone gets it!
Go to tweetback/tweetback-canonical and add yourself to the mapping.js file. What’s neat about this is that that file is used by tweetback itself. This means that if someone else with a Tweetback archive has a tweet which links to one of your Tweets, now their archive will link to your archive directly instead! It’s not just a bunch of separate sites, it’s a bunch of sites all of which are connected! Lots of connections between sites without any central authority! We could call this a collection of connections. Or a pile of connections. Or… a web!
That’s a good idea. Someone should do something with that concept.
You may, or may not, want to get off Twitter. Maybe you’re looking to get as far away as possible; maybe you just don’t want to lose the years of investment you’ve put in. But it’s never a bad thing to have your data under your control when you can. Tweetback helps make that happen. Cheers to Zach and the other contributors for creating it, so the rest of us didn’t have to. Tell them thank you.
Some posts are written so there’s an audience. Some are written to be informative, or amusing. And some are literally just documentation for me which nobody else will care about. This is one of those.
I’ve moved phone network. I’ve been with Three for years, but they came up with an extremely annoying new tactic, and so they must be punished. You see, I had an account with 4GB of data usage per month for about £15pm, and occasionally I’d go over that; a couple of times a year at most. That’s OK: I don’t mind buying some sort of “data booster” thing to give me an extra gig for the last few days before the next bill; seems reasonable.
But Three changed things. Now, you see, you can’t buy a booster to give yourself a bit of data until the end of the month. No, you have to buy a booster which gives you extra data every month, and then three days later when you’re in the new month, cancel it. There’s no way to just get the data for now.1
This is aggressively customer-hostile. There’s literally no reason to do this other than to screw people who forget to cancel it. Sure, have an option to buy a “permanent top-up”, no arguments with that. But there should also be an option to buy a temporary top-up, just once! There used to be!
I was vaguely annoyed with Three for general reasons anyway — they got rid of free EU roaming, they are unhelpful when you ask questions, etc — and so I was vaguely considering moving away regardless. But this was the straw that broke the camel’s back.2 So… time to look around.
I asked the Mastodon gang for suggestions, and I got lots, which is useful. Thank you for that, all.
The main three I got were Smarty, iD, and Giffgaff. Smarty are Three themselves in a posh frock, so that’s no good; the whole point of bailing is to leave Three. Giffgaff are great, and I’ve been hearing about their greatness for years, not least from popey, but they don’t do WiFi Calling, so they’re a no-no.3 And iD mobile looked pretty good. (All these new “MVNO” types of thing seem quite a lot cheaper than “traditional” phone operators. Don’t know why. Hooray, though.)
So off I went to iD, and signed up for a 30-day rolling SIM-only deal4. £7 per month. 12GB of data. I mean, crikey, that’s quite a lot better than before.
I need to keep my phone number, though, so I had to transfer it between networks. To do this, you need a “PAC” code from your old network, and you supply it to the new one. All my experience of dealing with phone operators is from the Old Days, and back then you had to grovel to get a PAC and your current phone network would do their best to talk you out of it. Fortunately, the iron hand of government regulation has put a stop to these sorts of shenanigans now (the UK has a good tech regulator, the Competition and Markets Authority5) and you can get a PAC, no questions asked, by sending an SMS with content “PAC” to 65075. Hooray. So, iD mobile sent me a new SIM in the post, and I got the PAC from Three, and then I told iD about the PAC (on the website: no person required), and they said (on the website), ok, we’ll do the switch in a couple of working days.
However, the SIM has some temporary number on it. Today, my Three account stopped working (indicating that Three had received and handled their end of the deal by turning off my account), and so I dutifully popped out the Three SIM from my phone6 and put in the new one.
But! Alas! My phone thought that it had the temporary number!
I think this is because Three process their (departing) end, there’s an interregnum, and then iD process their (arriving) end, and I was in the interregnum. I do not know what would have happened if someone rang my actual phone number during this period. Hopefully nobody did. I waited a few hours — the data connection worked fine on my phone, but it had the wrong number — and then I turned the phone off and left it off for ten minutes or so. Then… I turned it back on, and… pow! My proper number is mine again! Hooray!
That ought to have been the end of it. However, I have an Apple phone. So, in Settings > Phone > My Number, it was still reading the temporary number. Similarly, in Settings > Messages > iMessage > Send and Receive, it was also still reading the temporary number.
How inconvenient.
Some combination of the following fixed that. I’m not sure exactly what is required to fix it: I did all this, some more than once, in some random order, and now it seems OK:
- powering the phone off and on again
- disabling iMessage and re-enabling it
- disabling iMessage, waiting a few minutes, and then re-enabling it
- disabling iMessage, powering off the phone, powering it back on again, and re-enabling it
- editing the phone number in My Number (which didn’t seem to have any actual effect)
- doing a full network reset (Settings > General > Transfer or Reset Device > Reset > Reset Network Settings)
Hopefully that’ll help you too.
Finally, there was voicemail. Some years ago, I set up an account with Sipgate, where I get a phone number and voicemail. The thing I like about this is that when I get voicemail on that number, it emails me an mp3 of the voicemail. This is wholly brilliant, and phone companies don’t do it; I’m not interested in ringing some number and then pressing buttons to navigate the horrible menu, and “visual voicemail” never took off and never became an open standard thing anyway. So my sipgate thing is brilliant. But… how do I tell my phone to forward calls to my sipgate number if I don’t answer? I did this once, about 10 years ago, and I couldn’t remember how. A judicious bit of web searching later, and I have the answer.
One uses a few Secret Network Codes to do this. It’s called “call diversion” or “call forwarding”, and you do it by typing a magic number into your phone dialler, as though you were ringing it as a number. So, let’s say your sipgate number is 0121 496 0000. Open up the phone dialler, and dial *61*01214960000# and press dial. That magic code, *61, sets your number to divert if you don’t answer it. Do it again with *62 to also divert calls when your phone is switched off. You can also do it again with *67 to divert calls when your phone is engaged, but I don’t do that; I want those to come through where the phone can let me switch calls.
And that’s how I moved phone networks. Stuart, ten years from now when you read this again, now you know how to do it. You’re welcome.
I wrote a couple of short blog posts for Open Web Advocacy (of which I’m a founder member) on our progress in getting regulators to overturn the iOS browser ban and end Apple’s stranglehold over the use of Progressive Web Apps on iThings.
TL;DR: we’re winning.
Flying home. More in love.

Here’s a YouTube video of a talk I gave for the nerdearla conference, with Spanish subtitles. Basically, it’s about Safari being “the new IE”, and what we at Open Web Advocacy are doing to try to end Apple’s browser ban on iOS and iPads, so consumers can use a more capable browser, and developers can deliver non-hamstrung Progressive Web Apps to iThing users.
Since I gave this talk, the UK Competition and Markets Authority have opened a market investigation into Apple’s iThings browser restriction – read News from UK and EU for more.
My grandson has recently learned to count, so I made a set of cards we could ‘play numbers’ with.

We both played. I showed him that you could write ‘maths sentences’ with the ‘and’ and the ‘is’ cards. Next time I visited, he searched in my bag and found the box of numbers. He emptied them out onto the sofa and completely unprompted, ‘wrote’:

I was ‘quite surprised’. We wrote a few more equations using small integers until one added to 8, then he remembered he had a train track that could be made into a figure-of-8, so ‘Arithmetic Time’ was over but we squeezed in a bit of introductory set theory while tidying the numbers away.
From here on, I’m going to talk about computer programming. I won’t be explaining any jargon I use, so if you want to leave now, I won’t be offended.
I don’t want to take my grandson too far with mathematics in case it conflicts with what he will be taught at school. If schools teach computer programming, it will probably be Python and I gave up on Python.
Instead, I’ve been learning functional programming in the Clojure dialect of Lisp. I’ve been thinking for a while that it would be much easier to learn functional programming if you didn’t already know imperative programming. There’s a famous text, known as ‘SICP’ or ‘The Wizard Book’, that compares Lisps with magic. What if I took on a sourceror’s apprentice to give me an incentive to learn faster? I need to grab him “to the age of 5”, before the Pythonistas get him.
When I think about conventional programming, I make diagrams, and I’ve used Unified Modelling Language (UML) for business analysis, to model ‘data processing’ systems. An interesting feature of Lisps is that process is represented as functions, and functions are a special type of data. UML is designed for Object Oriented Programming; I haven’t found a way to make it work for Functional Programming (FP).
So, how can I introduce the ideas of FP to a child who can’t read yet?
There’s a mathematical convention to represent a function as a ‘black-box machine’ with a hopper at the top where you pour in the values and an outlet at the bottom where the answer value flows out. My first thought was to make an ‘add function’ machine but Clojure “treats functions as first-class citizens”, so I’m going to try passing “+” in as a function, along the dotted line labelled f(). Here’s my first prototype machine, passed 3 parameters: 2, 1 and the function +, to configure the black box as an adding machine.

In a Lisp, “2 + 1” is written “(+ 2 1)”.
The ‘parens’ are ‘the black box’.
Now, we’ve made our ‘black box’ an adder, we pass in the integers 2 and 1 and they are transformed by the function into the integer 3.

We can do the same thing in Clojure. Lisp parentheses provide ‘the black box’ and the first argument is the function to use. Other arguments are the numbers to add.
We’ll start the Clojure ‘Read Evaluate Print Loop’ (REPL) now. Clojure now runs well from the command line of a Raspberry Pi 4 or 400 running Raspberry Pi OS.
$ clj
Clojure 1.11.1
user=> (+ 2 1)
3
user=>
Clearly, we have a simple, working Functional Program, but another thing about functions is that they can be ‘composed’ into a ‘pipeline’, so we can set up a production line of functional machines, with the second function taking the output of the first function as one of its inputs. Using the only function we have so far:

In Clojure, we could write that explicitly as a pipeline, to work just like the diagram:
(-> (+ 1 2) (+ 1))
4
or use the more conventional Lisp format (start evaluation at the innermost parens):
(+ (+ 1 2) 1)
4
However, unlike the arithmetic “+” operator, the Clojure “+” function can add up more than 2 numbers, so we didn’t really need to compose the two “+” functions. This single function call would have got the job done:
(+ 1 2 1)
4
Similarly, we didn’t need to use 2 cardboard black-boxes. We could just pour all the values we wanted adding up into the hopper of the first.
Clojure can handle an infinite number of values, for as long as the computer can, but I don’t think I’ll tell my grandson about infinity until he’s at least 4.
I am writing this from 32,000 feet above Australia. Modern technology never ceases to amaze.

It’s called scarcity, and we can’t wait to see what you do with it.
Let’s start with the important bit. I think that over the last year, with acceleration toward the end of the year, I have heard of over 100,000 software engineers losing their jobs in some way. This is a tragedy. Each one of those people is a person, whose livelihood is at the whim of some capricious capitalist or board of same. Some had families, some were working their dream jobs, others had quiet quit and were just paying the bills. Each one of them was let down by a system that values the line going up more than it values their families, their dreams, and their bills.
While I am sad for those people, I am excited for the changes in software engineering that will come in the next decade. Why? Because everything I like about computers came from a place of scarcity in computering, and everything I dislike about computers came from a place of abundance in computering.
The old, waterfall-but-not-quite, measure-twice-and-cut-once approach to project management came from a place of abundance. It’s cheaper, so the idea goes, to have a department of developers sitting around waiting for a functional specification to be completed and signed off by senior management than for them to be writing working software: what if they get it wrong?
The team at Xerox PARC – 50 folks who were just told to get on with it – designed a way of thinking about computers that meant a single child (or, even better, a small group of children) could think about a problem and solve it in a computer themselves. Some of those 50 people also designed the computer they’d do it on, alongside a network and some peripherals.
This begat eXtreme Programming, which burst onto the scene in a time of scarcity (the original .com crash). People had been doing it for a while, but when everyone else ran out of money they started to listen: a small team of maybe 10 folks, left to get on with it, were running rings around departments of 200 people.
Speaking of the .com crash, this is the time when everyone realised how expensive those Oracle and Solaris licenses were. Especially if you compared them with the zero charged for GNU, Linux, and MySQL. The LAMP stack – the beginning of mainstream adoption for GNU and free software in general – is a software scarcity feature.
One of the early (earlier than the .com crash) wins for GNU and the Free Software Foundation was getting NeXT to open up their Objective-C compiler. NeXT was a small team taking off-the-shelf and free components, building a system that rivalled anything Microsoft, AT&T, HP, IBM, Sun, or Digital were doing – and that outlived almost all of them. Remember that the NeXT CEO wouldn’t become a billionaire until his other company released Toy Story, and that NeXT not only did the above, but also defined the first wave of dynamic websites and e-commerce: the best web technology was scarcity web technology.
What’s happened since those stories were enacted is that computerists have collectively forgotten how scarcity breeds innovation. You don’t need to know how 10 folks round a whiteboard can outsmart a 200 engineer department if your department hired 200 engineers _this month_: just put half of them on solving your problems, and half of them on the problems caused by the first half.
Thus we get SAFe and Scrumbut: frameworks for paying lip service to agile development while making sure that each group of 10 folks doesn’t do anything that wasn’t signed off across the other 350 groups of 10 folks.
Thus we get software engineering practices designed to make it easier to add code than to read it: what’s the point of reading the existing code if the one person who wrote it has already been replaced 175 times over, and has moved teams twice?
Thus we get not DevOps, but the DevOps department: why get your developers and ops folks to talk to each other if it’s cheaper to just hire another 200 folks to sit between them?
Thus we get the npm ecosystem: what’s the point of understanding your code if it’s cheaper just to randomly import somebody else’s and hire a team of 30 to deal with the fallout?
Thus we get corporate open source: what’s the point of software freedom when you can hire 100 people to push out code that doesn’t fulfil your needs but makes it easier to hire the next 500 people?
I am sad for the many people whose lives have been upended by the current downturn in the computering economy, but I am also sad for how little gets done within that economy. I look forward to the coming wave of innovation, and the ability to once again do more with less.
Not everything has to eat the world and the definition of success isn’t always
Life, people, and technology are all more complicated than that.
It’s hard to be attached to something and have it fade away, but that’s part of being a human being and existing in the flow of time. That’s table stakes. I have treasured memories of childhood friends who I haven’t heard from in 20 years. Internet communities that came and went. They weren’t less valuable because they didn’t last forever.
Let a thing just be what it is. So what if it doesn’t pan out the way you expected? If the value you derive from The Thing is reliant on its permanence, you’re setting yourself up for disappointment anyway.
Alternatively, abandon The Thing altogether and, I dunno, go watch a movie or something. The world is your oyster.
No points for figuring out which drama I’m referring to.
swyx wrote an interesting article on how he applied a personal help timeout after letting a problem at work drag on for too long. The help timeout he recommends is something I’ve also recently applied with some of my coworkers, so I thought I’d summarise and plug it here.
There can be a lot of pressure not to ask for help. You might think you’re bothering people, or worse, that they’ll think less of your abilities. It can be useful to counter-balance these thoughts by agreeing an explicit help timeout for your team.
If you’ve been stuck on a task with no progress for x minutes/hours/days, write up your problem and ask the team for help.
There are a few advantages to this:
Read swyx’s article here: https://www.swyx.io/help-timeouts
It’s been a very, very long time since I’ve released code to the open web. In fact, the only contributions I’ve made in the last 5 years have been to Chakra and a few other OSS libraries. So, in an attempt to try something new, I recently delved back in.
There have been a few things lately that I wanted to play with:
The app in question is not new, it’s not novel, it’s not even unique in the components it’s using. But it was quick (less than a weekend morning playing around on the sofa), it’s simple (with a minimal surface), and it scratches an itch.
GeoIP-lookup is a Go app that uses a local MaxMind City database file, providing a REST API that returns info about an IP address. That’s it.
The app itself is very simple. HTTP, tiny bit of routing, reading databases, and serving responses. GitHub makes it simple to then run an action that generates a new image and pushes this to GHCR. Unfortunately I couldn’t work out what was expected for the signing to work, but that can come later.
The big remaining thing is tests. I’ve got some basic examples in to test the harness, but not much more at the moment. I’ve also learnt how much I miss strict type systems, how much I hate the front-end world, and how good it feels to just get things done and out. Perfect is the enemy of done.
Anyway, it’s live and feels good.
I went to TechMids last week. One of the talks that had the most immediate impact on me was the talk from Jen Lambourne, one of the authors of Docs for Developers.
One of the ideas contained in the talk was the following:
You might have significantly more impact by curating existing resources than creating new ones.
Like a lot of people, when I started getting into technical writing, I started with a lot of entry level content. Stuff like Ten Tips for a Healthy Codebase and The Early Return Pattern and You. Consequently, there’s a proliferation of 101 level content on blogs like mine, often only lightly re-hashing the source documentation. This isn’t necessarily a bad thing, and I’m definitely not saying that people shouldn’t be writing these articles. You absolutely should be writing this sort of thing if you’re trying to get better at technical writing, and even if there are 100,000 articles on which HTTP verb to use for what, yours could be the one to make it click for someone.
But, I should be asking myself what my actual goal is. If my main priority is to become a better writer with helping people learn being a close second, then I ought to crack on with writing that 100,001st article. If I’m focused specifically on having an impact on learning outcomes, I should consider curating rather than creating.
Maybe the following would be a good start:
Finally, because I love Ruby and because this is a resource that deserves another signpost, I was recently alerted to The Ruby Learning Center and its resources page. I hope they continue to grow. Hooray for the signpost makers and the librarians.
Hear this talk performed (with appropriate background music):
Friends and enemies, attendees of Tech Mids 2022.
Don’t read off the screen.
If I could offer you only one piece of advice for why and how you should speak in public, don’t read off the screen would be it. Reading your slides out is guaranteed to make your talk boring, whereas the rest of my advice has no basis in fact other than my own experience, and the million great people who gave me thoughts on Twitter.
I shall dispense this advice… now.
Every meetup in every town is crying out for speakers, and your voice is valuable. Tell people your story. The way you see things is unique, just like everybody else.
Everybody gets nervous about speaking sometimes. Anybody claiming that they don’t is either lying, or trying to sell you something. If you’re nervous, consider that it’s a mark of wanting to do a good job.
Don’t start by planning what you want to say. Plan what you want people to hear. Then work backwards from there to find out what to say to make that happen.
You can do this. The audience are on your side.
Find your own style. Take bits and pieces from others and make them something of your own.
Slow down. Breathe. You’re going faster than it feels like you are.
Pee beforehand. If you have a trouser fly, check it.
If someone tells you why you should speak, take their words with a pinch of salt, me included. If they tell you how to speak, take two pinches. But small tips are born of someone else’s bad experience. When they say to use a lapel mic, or drink water, or to have a backup, then listen; they had their bad day so that you didn’t have to.
Don’t put up with rambling opinions from questioners. If they have a comment rather than a question, then they should have applied to do a talk themselves. You were asked to be here. Be proud of that.
Practice. And then practice again, and again. If you think you’ve rehearsed enough, you haven’t.
Speak inclusively, so that none of your audience feels that the talk wasn’t for them.
Making things look unrehearsed takes a lot of rehearsal.
Some people script their talks, some people don’t. Whether you prefer bullet points or a soliloquy is up to you. Whichever you choose, remember: don’t just read out your notes. Your talk is a performance, not a recital.
Nobody knows if you make a mistake. Carry on, and correct it when you can. But keep things simple. Someone drowning in information finds it hard to listen.
Live demos anger the gods of speaking. If you can avoid a live demo, do so. Record it in advance, or prep it so that it looks live. Nobody minds at all.
Don’t do a talk only once.
Acting can be useful, if that’s the style you like. Improv classes, stage presence, how you stand and what you do with your hands, all of this can be taught. But put your shoulders back and you’ve got about half of it.
Carry your own HDMI adapter and have a backup copy of your talk. Your technology will betray you if it gets a chance.
Record your practices and watch yourself back. It can be a humbling experience, but you are your own best teacher, if you’re willing to listen.
Try to have a star moment: something that people will remember about what you said and the way you said it. Whether that’s a surprising truth or an excellent joke or a weird gimmick, your goal is to have people walk away remembering what you said. Help them to do that.
Now, go do talks. I’m Stuart Langridge, and you aren’t. So do your talk, your way.
But trust me: don’t read off the screen.
I’m going to start blogging again. No reason why, no reason why-not. A lot has happened in the last twelve months; head, wife, job, decisions. Expect lots of random things.
For now, I’m in Saundersfoot enjoying the culmination of the World Rowing Beach Sprint Finals. Take care.

Recently, “Stinky” Taylar and I were evaluating some third party software for accessibility. One of the problems was their sign-up form.

This simple two-field form has at least three problems:
U Nagaharu was a Korean-Japanese botanist. Why shouldn’t he sign up to your site? In Burmese “U” is also a given name: painter Paw U Thet, actor Win U, historian Thant Myint U, and politicians Ba U and Tin Aung Myint U have this name. Note that for these Burmese people, their given names are not the “first name”; many Asian languages put the family name first, so their “first name” is actually their surname, not their given name.
Many Afghans have no surname. It is also common to have no surname in Bhutan, Indonesia, Myanmar, Tibet, Mongolia and South India. Javanese names traditionally are mononymic, especially among people of older generations; for example, ex-presidents Suharto and Sukarno, which are their full legal names.
Many other people go by one name. Can you imagine how grumpy Madonna, Bono and Cher would be if they tried to sign up to buy your widgets but they couldn’t? Actually, you don’t need to imagine, because I asked Stable Diffusion to draw “Bono, Madonna and Cher, looking very angrily at you”:

Imagine how angry your boss would be if these multi-millionaires couldn’t buy your thingie because you coded your web forms without questioning falsehoods programmers believe about names.
How did this happen? It’s pretty certain that these development teams don’t have an irrational hatred of Indonesians, South Indians, Koreans and Burmese people. It is, however, much more likely they despise Cher, Madonna, and Bono (whose name is “O’Nob” backwards).
What is far more likely is that no-one on these teams is from South East Asia, so they simply didn’t know that not all the world has American-style names. (Many mononymic immigrants to the USA might actually have been “given” or inherited the names “LNU” or “FNU”, which are acronyms of “Last name unknown” or “First name unknown”.)
This is why there is a strong and statistically significant correlation between the diversity of management teams and overall innovation and why companies with more diverse workforces perform better financially.
The W3C has a comprehensive look at Personal names around the world, written by their internationalisation expert, Richard Ishida. I prefer to ask for “Given name”, with no minimum or maximum length, and optional “family name or other names”.
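For illustration, here’s a minimal sketch of what that policy looks like as a validation rule. `validateName` is a hypothetical helper I’ve made up for this post, not anyone’s production code:

```javascript
// Hypothetical validator for a required "Given name" field plus an
// optional "Family name or other names" field, as suggested above.
// Deliberately no minimum or maximum length beyond "not empty".
function validateName(givenName, familyName = "") {
  const given = givenName.trim();
  // The only hard requirement: some given name was entered.
  if (given.length === 0) {
    return { ok: false, error: "Please enter your given name." };
  }
  // Mononyms (Suharto, Madonna, U) pass; family name stays optional.
  return { ok: true, given, family: familyName.trim() };
}
```

The point is what’s absent: no required surname and no length limits, so Suharto and Thant Myint U both get through.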
So take another look at your name input fields. Remember, not everyone has a name like “Chad Pancreas” or “Bobbii-Jo Musteemuff”.
Steve McLeod invited me on his podcast, Bootstrapped.fm, to discuss how I run a small web studio called 16by9.
Steve and I talk about what it’s like to start and build up this type of company, and how, with some careful thinking, you can avoid letting your business become something you never wanted it to be.
You can listen here.
I haven’t recorded many podcasts before but this was a blast. Massive thanks to Steve for inviting me on.
I’ve made a few content updates over the past week or so:
A few months back I also created an Unoffice Hours page. It’s one of the highlights of my week. If you fancy saying hello, book a call.
I’ve been documenting the various processes in my business over the past few months. This week, I’ve been thinking about the process of on-boarding new clients.
How do I ensure we’re a good fit? How do I go beyond what they’re asking for and really understand what they’re after? How do we transition from “we’ve never spoken before” to “I trust you enough to put down a deposit for this project”?
There’s another question that has occurred to me lately: have they commissioned a website before? And if so, how does this impact the expectations they have?
When I started building client websites – some 18 years ago(!) – the majority of people I worked with had never commissioned a website before.
These days when I speak to clients, they’re often on the 4th or 5th redesign of their website. Even if I’m asked to build a brand new website, most of the people I speak to have been through the process of having a website built numerous times before.
In other words: early in my career, most of the people I built websites for didn’t have any preconceived notions of how a website should be built or the process one goes through to create one. These days, they do.
Sometimes they have good experiences and work with talented freelancers or teams. But often I hear horror stories of how they’ve been burned through poor project planning or projects taking longer than expected and going over budget.
I’ve found it worthwhile to ask about these experiences. The quicker I can identify their previous experience and expectations, especially if they’re negative, the quicker I can reassure them that there’s a proven process that we’ll follow.
I was at the RSE conference in Newcastle, along with many people whom I have met, worked with, and enjoyed talking to in the past. Many more people whom I have met, worked with, and enjoyed talking to in the past were at an entirely different conference in Aberystwyth, and I am disappointed to have missed out there.
One of the keynote speakers at RSEcon22, Marlene Mhangami, talked about the idea of transcendence through community membership. They cited evidence that fans of soccer teams go through the same hormonal shifts at the same intensity during the match as the players themselves. Effectively the fans are on the pitch, playing the game, feeling the same feelings as their comrades on the team, even though they are in the stands or even at home watching on TV.
I do not know that I have felt that sense of transcendence, and believe I am probably missing out both on strong emotional connections with others and on an ability to contribute effectively to society (to a society, to any society) by lacking the strong motivation that comes from knowing that making other people happier makes me happier, because I am with them.
So, I made a game. It’s called Farmbound. It’s a puzzle; you get a sequence of farm things — seeds, crops, knives, water — and they combine to make better items and to give you points. Knives next to crops and fields continually harvest them for points; seeds combine to make crops which combine to make fields; water and manure grow a seed into a crop and a crop into a field. Think of it like a cross between a match-3 game and Little Alchemy. The wrinkle is that the sequence of items you get is the same for the whole day: if you play again, you’ll get the same things in the same order, so you can learn and refine your strategy. It’s rather fun: give it a try.
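For the curious, the combination rules described above can be sketched as a toy function. This is purely an illustration of the idea, not Farmbound’s actual code:

```javascript
// Toy sketch of the combination rules: seeds combine into crops,
// crops into fields, and water or manure grows an item a tier.
const UPGRADES = { seed: "crop", crop: "field" };

function combine(a, b) {
  // Two identical items merge into the next tier:
  // seed + seed -> crop, crop + crop -> field.
  if (a === b && UPGRADES[a]) return UPGRADES[a];
  // Water and manure grow a seed into a crop, a crop into a field.
  if ((a === "water" || a === "manure") && UPGRADES[b]) return UPGRADES[b];
  if ((b === "water" || b === "manure") && UPGRADES[a]) return UPGRADES[a];
  return null; // these two items don't combine
}
```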
It’s a web app. Works for everyone. And I thought it would be useful to explain why it is, why I think that’s the way to do things, and some of the interesting parts of building an app for everyone to play which is delivered over the web rather than via app stores and downloads.
Well, there are a bunch of practical reasons. You get completely immediate play with a web app; someone taps on a share link, and they’re playing. No installation, no platform detection, it Just Works (to coin a phrase which nobody has ever used before about apps ever in the history of technology). And for something like this, an app with platform-specific code isn’t needed: sure, if you’re talking to some hardware devices, or doing low-level device fiddling or operating system integration, you might need to build and deliver something separately to each platform. But Farmbound is not that. There is nothing that Farmbound needs that requires a native app (well, nearly nothing, and see later). So it isn’t one.
There are some benefits for me as the developer, too. Such things are less important; the people playing are the important ones. But if I can make things nicer for myself without making them worse for players, then I’m going to do it. Obviously there’s only one codebase. (For platform-specific apps that can be alleviated a little with cross-platform frameworks, some of which are OK these days.) One still needs to test across platforms, though, so that’s not a huge benefit. On the other hand, I don’t have to pay extra to distribute it (beyond it being on my website, which I’d be paying for anyway), and importantly I don’t have to keep paying in order to keep my game available for ever. There’s no annual tithe required. There’s no review process. I also get support for minority platforms by publishing on the web… and I’m not really talking about something in use by a half-dozen people here. I’m talking about desktop computers. How many people building a native app, even a relatively simple puzzle game like this, make a build for iOS and Android and Windows and Mac and Linux? Not many. The web gets me all that for minimal extra work, and if someone on FreeBSD or KaiOS wants to play, they can, as long as they’ve got a modern browser. (People saying “what about those without modern browsers”… see below.)
But from a less practical and more philosophical point of view… I shouldn’t need to build a platform-specific native app to make a game like this. We want a world where anyone can build and publish an app without having to ask permission, right? I shouldn’t need to go through a review process or be beholden to someone else deciding whether to publish my game. The web works. Would Wordle have become so popular if you had to download a Windows app or wait for review before an update happened? I doubt it. I used to say that if you’re building something complex like Photoshop then maybe go native, but in a world with Figma in it, that maybe doesn’t apply any more, and so Adobe listened to that and now Photoshop is on the web. Give people a thing which doesn’t need installation, gets them playing straight away, and works everywhere? Sounds good to me. Farmbound’s a web app.
Farmbound shouldn’t need its own domain, I don’t think. If people find out about it, it’ll likely be by shared links showing off how someone else did, which means they click the link. If it’s popular then it’ll be top hit for its own name (if it isn’t, the Google people need to have a serious talk with themselves), and if it isn’t popular then it doesn’t matter. And, like native app building, I don’t really want to be on the hook forever for paying for a domain; sure, it’s not much money, but it’s still annoying that I’m paying for a couple of ideas that I had a decade ago and which nobody cares about any more. I can’t drop them, because of course cool URIs don’t change, and I didn’t want to be thinking a decade from now, do I still need to pay for this?
In slightly more ego-driven terms, it being on my website means I get the credit, too. Plus, I quite like seeing things that are part of an existing site. This is what drove the (admittedly hipster-ish) rise of “tilde sites” again a few years ago; a bit of nostalgia for a long time ago. Fortunately, I’ve also got Cloudflare in front of my site, which alleviates worries I might have had about it dying under load, although check back with me again if that happens to see if it turns out to be true or not. (Also, I’m considering alternatives to Cloudflare at the moment too.)
Firstly, I separated the front and back ends and deployed them in different places. I’m not all that confident that my hosted site can cope with being hammered, if I’m honest. This is alleviated somewhat by cloud caching, and hopefully quite a bit more by having a service worker in place which caches almost everything (although see below about that), but a lot of this decision was driven by not wanting to incur a server hit for every visitor every time, as much as possible. This drove at least some of the architectural decisions. The front end is on my site and is plain HTML, CSS, and JavaScript. The back end is not touched when starting the game; it’s only touched when you finish a game, in order to submit your score and get back the best score that day to see if you beat that. That back end is written in Deno, and is hosted on fly.io, who seem pretty cool. (I did look at Deno Deploy, but they don’t do permanent storage.)
Part of the reason the back end is a bit of extra work is that it verifies your submitted game to check you aren’t cheating and lying about your score. This required me to completely reimplement the game code in Deno. Now, you may be saying “what? the front end game code is in JavaScript and so is the back end? why don’t they share a library?” and the answer is, because I didn’t think of it. So I wrote the front end first and didn’t separate out the core game management from all the “animate this stuff with CSS” bits, because it was a fun weekend project done as a proof of concept. Once I got a bit further into it and realised that I should have done that… I didn’t wanna, because that would have sucked all the fun out of the project like a vampire and meant that I’d have never done it. So, take this as a lesson: think about whether you want a thing to be popular up front. Not that you’ll listen to this advice, because I never do either.
Similarly, this means that there’s less in the way of analytics, so I don’t get information about users, or real-time monitoring of popularity. This is because I did not want to add Google Analytics or similar things. No personal data about you ever leaves your device. You’ll have noticed that there’s no awful pop-up cookie consent dialogue; this is because I don’t need one, because I don’t collect any analytics data about players at all! Guess what, people who find those dialogues annoying (i.e., everyone?) You can tell companies to stop collecting data about you and then they won’t need an annoying dialogue! And when they say no… well, then you’ll have learned something about how they view you as customers, perhaps. Similarly, when scores are submitted, there’s no personal information that goes with them. I don’t even know whether two scores were submitted by the same person; there’s no unique ID per person or per device or anything. (Technically, the IP is submitted to the server, of course, but I don’t record it or use it; you’ll have to take my word for that.)
This architecture split also partially explains why the game’s JavaScript-dependent. I know, right? Me, the bloke who wrote “Everyone has JavaScript, right?“, building a thing which requires JS to run? What am I doing? Well, honestly, I don’t want to incur repeated server hits is the thing. For a real project, something which was critical, then I absolutely would do that; I have the server game simulation, and I could relatively easily have the server pass back a game state along with the HTML which was then submitted. The page is set up to work this way: the board is a <form>, the things you click on are <button>s, and so on. But I’m frightened of it getting really popular and then me getting a large bill for cloud hosting. In this particular situation and this particular project, I’d rather the thing die than do that. That’s not how I’d build something more critical, but… Farmbound’s a puzzle game. I’m OK with it not working, and if I turn out to be wrong about that, I can change that implementation relatively quickly without it being a big problem. It’s not architected in a JS-dependent way; it’s just progressively enhanced that way.
I had a certain amount of hassle from iOS Safari. Some of this is pretty common — how do I stop a double-tap zooming in? How do I stop the page overscrolling? — but most of the “fixes” are a combination of experimentation, cargo culting ideas off Stack Overflow, and something akin to wishing on a star. That’s all pretty irritating, although Safari is hardly alone in this. But there is a separate thing which is iOS Safari specific, which is this: I can’t sensibly present an “add this to your home screen” hint in iOS browsers other than Safari itself. In iOS Safari, I can show a little hint to help people know that they can add Farmbound to their home screen (which of course is delayed until a second game is begun and then goes away for a month if you dismiss it, because hassling your own players is a foolish thing to do). But in non Safari iOS browsers (which, lest we forget, are still Safari under the covers; see Open Web Advocacy if this is a surprise to you or if you don’t like it), I can’t sensibly present that hint. Because those non-Safari iOS browsers aren’t allowed to add web apps to your home screen at all. I can’t even give people a convenient tap to open Farmbound in iOS Safari where they can add the app to their home screen, because there’s no way of doing that. So, apologies, Chrome iOS or Firefox iOS users and others: you’ll have to open Farmbound in Safari itself if you want an easy way to come back every day. At least for now.
And finally, and honestly most annoyingly, the service worker.
Building and debugging and testing a service worker is still so hard. Working out why this page is cached, or why it isn’t cached, or why it isn’t loading, is incredibly baffling and infuriating still, and I just don’t get it. I tried using “workbox”, but that doesn’t actually explain how to use it properly. In particular, for this use case, a completely static unchanging site, what I want is “cache this actual page and all its dependencies forever, unless there’s a change”. However, all the docs assume that I’m building an “app shell” which then uses fetch() to get data off the server repeatedly, and so won’t shut up about “network first” and “cache first, falling back” and so on rather than the “just cache it all because it’s static, and then shut up” methodology. And getting insight into why a thing loaded or didn’t is really hard! Sure, also having Cloudflare caching stuff and my browser caching stuff as well really doesn’t help here. But I am not even slightly convinced that I’ve done all this correctly, and I don’t really know how to be better. It’s too hard, still.
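For what it’s worth, here’s roughly the “just cache it all, then shut up” worker I was groping towards, with a made-up cache name and asset list standing in for the real ones; I’m still not certain this is the blessed way to do it:

```javascript
// Cache-first service worker sketch for a fully static site.
// CACHE and ASSETS are placeholders, not Farmbound's real file list.
const CACHE = "static-v1";
const ASSETS = ["/", "/index.html", "/style.css", "/app.js"];

// Bump CACHE ("static-v2", ...) when anything changes; the activate
// handler throws old caches away.
function isStale(cacheName) {
  return cacheName !== CACHE;
}

// Guarded so the helpers above can be exercised outside a worker.
if (typeof self !== "undefined" && self.addEventListener) {
  self.addEventListener("install", (event) => {
    // Precache every static asset up front.
    event.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS)));
  });

  self.addEventListener("activate", (event) => {
    // Delete caches left over from previous versions.
    event.waitUntil(
      caches.keys().then((keys) =>
        Promise.all(keys.filter(isStale).map((k) => caches.delete(k)))
      )
    );
  });

  self.addEventListener("fetch", (event) => {
    // Cache first, network only as a fallback: the site is static.
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```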
So that’s why Farmbound is the way it is. It’s been interesting to create, and I am very grateful to the Elite Farmbound Testing Team for a great deal of feedback and helping me refine the idea and the play: lots of love to popey, Roger, Simon, Martin, and Mark, as well as Handy Matt and my mum!
There are still some things I might do in the future (achievements? maybe), and I might change the design (I’m not great at visual design, as you can tell), and I really wish that I could have done all the animations with Shared Element Transitions because it would have been 312 times easier than the way I did it (a bunch of them add generated content and then web-animations-api move the ::before around, which I thought was quite neat but is also daft by comparison with SET). But I’m pleased with the implementation, and most importantly it’s actually fun to play. Getting over a thousand points is really good (although sometimes impossible, on some days), and I don’t really think the best strategies have been worked out yet. Is it better to make fields and tractors, or not go that far? Is water a boon or an annoyance? I’d be interested in your thoughts. Go play Farmbound, and share your results with me on Twitter!
My debut album is out, featuring 10 songs written while I was living in Thailand, India and Turkey. It’s quite a jumble of genres, as I like lots of different types of music, and not everyone will like it – I write the songs I want to hear, not for other people’s appetites.

You can buy it on Bandcamp for £2 or more, or (if you’re a cheapskate) you can stream it on Spotify or Apple Music. I am available for autographing breasts or buttocks.
I was reflecting on things that I know now, a couple of decades in to my career, that I wish I had been told at the beginning. Many things came to mind, but the most immediate from a technological perspective was Smalltalk’s image model.
It’s not even the technology of the Smalltalk image that’s relevant, but the model of thinking that works well with it. In Smalltalk, there are two (three) important files for a given machine: the VM is the machine that can run Smalltalk; the image is a snapshot of all of the Smalltalk objects on the machine(; and the sources are the source code for the classes and methods in that image).
This has weird implications for how you work that differ greatly from “compile this text stream” or “interpret this text stream” programming environments. People who have used the ENVY/Developer tool generally seem to wax lyrical and wonder why it was never reinvented, like the rest of software engineering is the beach with the ruins of the Statue of Liberty poking out from the end of the Planet of the Apes. But the bit I wish I had been told about: the image model puts the “personal” in “personal computer” as far as programming is concerned. Every piece of software you write is part of your image: a peer of the rest of the software you wrote, of the software that other people wrote that you added, and of the software that was already there when you first booted the machine.
I wish I had been told to think like that: that each tool or project is not a separate tool or project, but a cumulative addition to the image. To keep everything I wrote, so that the next time I needed something I might not need to write it. To make sure, when using new things, that I could integrate them with the image (it didn’t exist at the time, but TruffleSQUEAK is very much this idea). To give up asking “how can I write software to solve this problem”, and to start asking “how can I solve this problem with software, writing some if necessary”?
It would be the difference between twenty years of experience and one year of experience, twenty times over.
My Work Bezzie “Stinky” Taylar Bouwmeester and I take you on a wild, roller-coaster ride through the magical world of desktop screen readers. Who uses them? How can they help if developers use semantic HTML? How can you test your work with a desktop screen reader? (Parental discretion advised).
Last week I observed a blind screen reader user attempting to complete a legal document that had been emailed to them via DocuSign. This is a service that takes a PDF document, and turns it into a web page for a user to fill in and put an electronic signature to. The user struggled to complete a form because none of the fields had data labels, so whereas I could see the form said “Name”, “Address”, “Phone number”, “I accept the terms and conditions”, the blind user just heard “input, required. checkbox, required”.
Ordinarily, I’d dismiss the product as inaccessible, but DocuSign’s accessibility statement says “DocuSign’s eSignature Signing Experience conforms to and continually tests for Government Section 508 and WCAG 2.1 Level AA compliance. These products are accessible to our clients’ customers by supporting Common screen readers” and had been audited by The Paciello Group, whom I trust.
So I set about experimenting by signing up for a free trial and authoring a test document, using Google Docs and exporting as a PDF. I then imported this into DocuSign and began adding fields to it. I noticed that each input has a set of properties (required, optional etc) and one of these is ‘Data Label’. Aha! HTML fields have a <label> associated with them (or should do), so I duplicated the text and sent the form to my Work Bezzie, Stinky Taylar, to test.

No joy. The labels were not announced. (It seems that the ‘data label’ field actually becomes a column header in the management report screen.) So I set about adding text into the other fields, and through trial and error discovered how to force the front-end to have audible data labels:
I think DocuSign is missing a trick here. Given the importance of input labels for screen readers, a DocuSign author should be prompted for this information, with an explanation of why it’s needed. I don’t think it would be too hard to find the text immediately preceding the field (or immediately following it on the same line, in the case of radio/checkboxes) and prefilling the prompt, as that’s likely to be the relevant label. Why go to all the effort to make an accessible product, then make it so easy for your customers to get it wrong?
Another niggle: on the front end, there is an invisible link that is visually revealed when tabbed to, and says “Press enter or use the screen reader to access your document”. However, the tester I observed had navigated directly to the PDF document via headings, and hadn’t tabbed to the hidden link. The ‘screenreader mode’ seemed visually identical to the default ‘hunt for help, cripples!’ mode, so why not just have the accessible mode as the default?
All in all, it’s a good product, let down by poor usability and a ‘bolt-on’ approach. And, as we all know, built-in beats bolt-on. Bigly.
sequential-focus-navigation-starting-point

– Jennifer Daniel, the chair of the Unicode Consortium’s emoji subcommittee, asks “How can we reconcile the rapid ever changing way we communicate online with the formal methodical process of a standards body that digitizes written languages?” and introduces the Poopnado emoji

I can usually muddle through whatever programming task is put in front of me, but I can’t claim to have a great eye for design. I’m firmly in the conscious incompetence stage of making things look good.
The good news for me and people like me is that you can fake it. Sort of. I doubt I’ll ever compete with people who actually know what they’re doing, but I have found some resources for making something that doesn’t look like the dog’s dinner.
I’d like to add some more free resources to this, so hopefully I’ll get back to it.
The upcoming issue of the SICPers newsletter is all about phrases that were introduced to computing to mean one thing, but seem to get used in practice to mean another. This annoys purists, pedants, and historians: it also annoys the kind of software engineer who dives into the literature to see how ideas were discussed and used and finds that the discussions and usages were about something entirely different.
So should we just abandon all technical terminology in computing? Maybe. Here’s an irreverent guide.
Luckily the industry doesn’t really use this term any more so we can ignore the changed meaning. The small club of people who still care can use it correctly, everybody else can carry on not using it. Just be aware when diving through the history books that it might mean “extreme late binding of all things” or it might mean “modules, but using the word class” depending on the age of the text.
Nope, this one’s in the bin, I’m afraid. It used to mean “not waterfall” and now means “waterfall with a status meeting every day and an internal demo every two weeks”. We have to find a new way to discuss the idea that maybe we focus on the working software and not on the organisational bureaucracy, and that way does not involve the word…
If you can hire a “DevOps engineer” to fulfil a specific role on a software team then we have all lost at using the phrase DevOps.
This one used to mean “psychologist/neuroscientist developing computer models to understand how intelligence works” and now means “an algorithm pushed to production by a programmer who doesn’t understand it”. But there is a potential for confusion with the minor but common usage “actually a collection of if statements but last I checked AI wasn’t a protected term” which you have to be aware of. Probably OK, in fact you should use it more in your next grant bid.
Previously something very specific used in the context of financial technology development. Now means whatever anybody needs it to mean if they want their product owner to let them do some hobbyist programming on their line-of-business software, or else. Can definitely be retired.
Was originally the idea that maybe the things your software does should depend on the things the customers want it to do. Now means automated tests with some particular syntax. We need a different term to suggest that maybe the things your software does should depend on the things the customers want it to do, but I think we can carry on using BDD in the “I wrote some tests at some point!” sense.
Definitely another one for the bin. If Tony Hoare were not alive today he would be turning in his grave.
Regular readers will recall that the UK competition regulator, the CMA, investigated Apple and Google’s mobile ecosystems and concluded there is a need for regulation. They were initially looking mostly at native app stores, but quickly widened that to how Apple’s insistence on all browsers using WebKit on iOS is preventing Progressive Web Apps from competing against single-platform native apps.
The CMA has announced its intention to begin a market investigation specifically into the supply of mobile browsers and browser engines, and the distribution of cloud gaming services through app stores on mobile devices, and seeks your views. It doesn’t matter if you are not based in the UK; if you or your clients do business in the UK, your views matter too.
Steve Fenton has published his response, as has Alistair Shepherd; here is mine, in case you need something to crib from to write yours. Send your response to the CMA mailbox browsersandcloud@cma.gov.uk before July 22nd.
I am a UK-based web developer and accessibility consultant, specialising in ensuring web sites are inclusive for people with disabilities or who experience other barriers to access–such as living in poorer nations where mobile data is comparatively expensive, networks may be slow and unreliable and people are generally accessing the web on cheap, lower-specification devices. I write in a personal capacity, and am not speaking on behalf of any clients or employers, past or present. You have my permission to publish or quote from this document, with or without attribution.
Many of my clients would like to make apps that are Progressive Web Applications. These are apps that are websites, built with long-established open technologies that work across all operating systems and devices, and enhanced to be able to work offline and have the look and feel of an application. Examples of ‘look and feel’ might be to render full-screen; to be saved with their own icon onto a device’s home screen; to integrate with the device’s underlying platform (with the user’s permission) in order to capture images from the camera; use the microphone for video conferencing; to send push notifications to the user.
The benefits of PWAs are advantageous to both the developer (and the business they work for) and the end user. Because they are based on web technology, a competent developer need only make one app that will work on iOS and Android, as well as on desktop computers and tablets. This write-once approach has obvious benefits over developing a single-platform (“native”) app for iOS in addition to a single-platform app for Android and also a website. It greatly reduces costs because it greatly reduces the complexity of development, testing and deployment.
The benefits to the user are that the initial download is much smaller than that for a single-platform app from an app store. When an update to the web app is pushed by a developer to the server, the user only downloads the updated pages, not the whole application. For businesses looking to reach customers in growing markets such as India, Indonesia, Nigeria and Kenya, this is a competitive advantage.
In the case of users with accessibility needs due to a disability, the web is a mature platform on which accessibility is a solved problem.
However, many businesses are not able to offer a Progressive Web App, largely due to Apple’s anti-competitive policy of requiring all browsers on iOS and iPad to use its own engine, called WebKit. Whereas Google Chrome on Mac, Windows and Android uses its own engine (called Blink), and Firefox on non-iOS/iPad platforms uses its own rendering engine (called Gecko), Apple’s policy requires Firefox and Chrome on iOS/iPad to be branded skins over WebKit.
This “Apple browser ban” has the unfortunate effect of hamstringing Progressive Web Apps. Whereas Apple’s Safari browser allows web apps (such as Wordle) to be saved to the user’s home screen, Firefox and Chrome cannot do so–even though they all use WebKit. While single-platform iOS apps can send push notifications to the user, browsers are not permitted to. Push notifications are high on businesses’ priority lists because of how they can drive engagement. WebKit is also notably buggy and, with no competition on the iOS/iPad platform, there is little to incentivise Apple to invest more in its development.
Apple’s original vision for applications on iOS was Web Apps, and today they still claim Web Apps are a viable alternative to the App Store. Apple CEO Tim Cook made a similar claim last year in Congressional testimony when he suggested the web offers a viable alternative distribution channel to the iOS App Store. They have also claimed this during a court case in Australia with Epic.
Yet Apple’s own policies prevent Progressive Web Apps being a viable alternative. It’s time to regulate Apple into allowing other browser engines onto iOS/iPad and giving them full access to the underlying platform–just as they currently have on Apple’s macOS, and on Android, Windows and Linux. Therefore, I fully support your proposal to make a market investigation reference in relation to the supply of mobile browsers and cloud gaming in the UK, and the proposed terms of reference, and urge a swift remedy: Apple must be required to allow alternate browser engines on iOS, with access to all of the same APIs and device integrations that Safari and native iOS apps have access to.
Yours,
Bruce Lawson
If you’re looking for music to study to tonight, here’s Watering a Flower, by Haruomi Hosono. Originally recorded in 1984 to be in-store music for MUJI.
If you’re looking for a way to avoid studying, it’s the same link, but read the comments.
I’ve been maintaining websites in some form for a long time now, and here’s why maybe you should at least think about it.
You get almost total creative control.
The more content that gets generated inside the walled gardens of Twitter, Instagram, etc., the less weirdness, beauty and creativity we get on the web. When you post on someone else’s service, what you wanted to say is forced into a tiny rectangle, and you might find that rectangle getting smaller and more restrictive as time goes on.
It’ll last if you take care of it.
If you create your web page using the fundamental technologies, HTML and CSS, and resist the urge to jump onto the ever-turning wheel of more advanced technologies, you’ll have something that, ten years from now, you can be pretty sure you’ll be able to slap onto a server and show people. The oft-referenced Space Jam website is a great example.
It doesn’t really even have to be a website.
You know what’s easier than writing HTML? Writing plain text. You know what web servers are perfectly happy to serve? A plain text web site.
Hard things are often worth it.
Learning to develop and host a website is harder than registering a Twitter account and merrily posting away, but you develop a useful skill and a valuable creative outlet. A lot of people liken creating a personal website to gardening. You carefully water, prune, and dote, and what you get is something you can cherish.
Hosting a website isn’t that difficult.
Again, it’s harder than using a third party service, but there are plenty of places to put your site for free or cheap:
It doesn’t really matter if nobody reads it.
Sure, one good thing about the walled gardens is that they’re relatively convenient when it comes to showing your stuff to other people in the garden. However, someone seeing your post isn’t really a human connection. Someone hitting like on your post isn’t really a human connection.
I’ve come to favour fewer, deeper interactions over a larger number of shallower ones, even if those likes do feel good. I’m not writing this to make myself out as the wise person who’s transcended the shallowness of social media. I’m writing it because it takes a deliberate effort for me not to fall into those traps. There’s some effort to recreate “likes” in the IndieWeb, but at the moment I view the lack of likes as more of a feature than a bug.
I recently taught an introduction to Python course, to final-year undergraduate students. These students had little to zero programming experience, and were all expected to get set up with Python (using the Anaconda environment, which we had determined to be the easiest way to get a reasonable baseline configuration) on laptops they had brought themselves.
What follows is not a slight on these people, who were all motivated, intelligent, and capable. It is a slight on the world of programming in the current ages, if you are seeking to get started with putting a general-purpose computer to your own purposes and merely own a general-purpose computer.
One person had a laptop that, being a mere six (6) years old, was too old to run the current version of Anaconda Distribution. We had to crawl through the archives, guessing what older version might work (i.e. might both run on their computer and still permit them to follow the course).
Another had a modern laptop and the same version of Python and tools that everyone else was using, except that their IDE would crash if they tried to plot a graph in dark mode.
Another had, seemingly without having launched the shell while they owned their computer, got their profile into a state where none of the system binary folders were on their PATH. Hmm, python3 doesn’t work, let’s use which python to find out why not. Hmm, which doesn’t work, let’s use ls to find out why not. Hmmm…
Many, through not having used terminal emulators before, did not yet know that terminal emulators are modal. There are shell commands, which you must type when you see a $ (or a % or a >) and will not work when you can see a >>>. There are Python commands, which are the other way around. If you type a command that launches nano/pico, there are other rules.
By the way, conda and pip (and poetry, if you try to read anything online about setting up Python) are Python things, but you cannot use them as Python commands. They are shell commands.
By the other way, everyone writes those shell commands with a $ at the front. You do not write the $. Oh, and by the other other way: they don’t necessarily tell you to open the Terminal to do it.
Different environments—the shell, Visual Studio Code, Spyder, PyCharm—will do different things with respect to your “current working directory” when you run a script. They will not tell you that they have done this, nor that it is important, nor that it is why your script can’t find a data file that’s RIGHT THERE.
This is all way before we get to the dark art of comprehending exception traces.
When I were a lad and Silicon Valley were all fields, you turned a computer on and it was ready for some programming. I’m not suggesting returning to that time, computers were useless then. But I do think it is needlessly difficult to get started with “a programming language that lets you work quickly” in this time of ubiquitous programs.
Recently, the HTML spec changed to replace the current outline algorithm with one based on heading levels. So the idea that you could use <h1> for a generic heading across your documents, and the browser would “know” which level it actually should be from its nesting inside <section> and other related “sectioning elements”, is no more.
This has caused a bit of anguish in my Twitter timeline–why has this excellent method of making reusable components been taken away? Won’t that hurt accessibility, as documents marked up that way will now have a completely flat structure? We know that 85.7% of screen reader users find heading levels useful.
Here comes the shocker: it has never worked. No web browser has implemented that outlining algorithm. If you used <h1> across your documents, it has always had a flat structure. Nothing has been taken away; this part of the spec has always been a wish, but has never been reality.
One of the reasons I liked having a W3C versioned specification for HTML is that it would reflect the reality of what browsers did on the date at the top of the spec. A living standard often includes things that aren’t yet implemented. And the worst thing about having zombie stuff in a spec is that lots of developers believe (in good faith) that it accurately reflects what’s implemented today.
So it’s good that this is removed from the WHATWG specification (now that the W3C specs are dead). I wish that you could use one generic heading everywhere, and its computed level were communicated to assistive technology. Back in 1991, Sir Uncle Timbo himself wrote
I would in fact prefer, instead of <H1>, <H2> etc for headings [those come from the AAP DTD] to have a nestable <SECTION>..</SECTION> element, and a generic <H>..</H> which at any level within the sections would produce the required level of heading.
But browser vendors ignored both Sir Uncle Timbo, and me (the temerity!), and never implemented it, so removing this from the spec will actually improve accessibility.
(More detail and timeline in Adrian Roselli’s post There Is No Document Outline Algorithm.)
If there’s a common thread through tech workers, it’s having a drawer full of stickers, accumulated indiscriminately at conferences and meetups, but which one can never quite bring themselves to attach to anything.
There are very understandable human reasons for this. Once that sticker is stuck, you’ve committed. Your enjoyment of that sticker is now bound inextricably to the lifetime of whatever you’ve stuck it on. Getting rid of that thing means getting rid of that sticker and the memories that come with it. That sticker isn’t just a picture of a dog; it represents the memories of that time you went to Crufts or whatever. You might have stuck it on a laptop, which means you’ll probably only have that sticker for somewhere between four and eight more years. What a waste. Or you might have stuck it on one of your beautiful notebooks, which in practice means you’ll have it forever, as notebooks are another thing that most of us like to accumulate but balk at the idea of actually using.
So, like many of you, I kept my stickers in a little drawer to occasionally rifle through, smiling at the memories attached. Only, mathematically, I was wasting them.
Let’s say each sticker has a value l representing how much you like it. For convenience, we’ll give all stickers a fixed value of l=1.
Your enjoyment, e, of a sticker is then l * s where s is the total number of seconds for which you were looking at it. The success of your sticker strategy is measured by the sum of all e values.
Time for a worked example. Let’s say you have five stickers in your drawer and you look through the drawer once a month. You look at each sticker for a good 30 seconds before replacing it and moving on to the next one. You maintain this ritual for an admirable 60 years.
12 inspections a year for 60 years is 720 inspections. With a fixed l=1, each inspection gives you 30 * 5 * 1 = 150, so e=150 per inspection. Your lifetime e using the drawer strategy is a hefty 108,000.
Now imagine you take those five stickers and put them on the back of your desk, where all five remain in your line of sight while you work. Keeping our convenient l=1 for each sticker, you’re racking up a whopping 5e per second. At this rate, you’ll catch up with your drawer-using counterpart in 21,600 seconds, or 360 minutes, or six hours.
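If you’d rather not trust my mental arithmetic, the sums above are easy to check with a few lines of Ruby (all numbers taken straight from the worked example, with l=1 throughout):

```ruby
# The drawer strategy: 5 stickers, 30 seconds each, once a month, for 60 years.
stickers = 5
seconds_per_look = 30
inspections = 12 * 60                                # monthly, for 60 years

drawer_e = inspections * stickers * seconds_per_look # lifetime e in the drawer

# The desk strategy: all 5 stickers in view earns 5e per second.
desk_e_per_second = stickers
catch_up_seconds = drawer_e / desk_e_per_second
catch_up_hours = catch_up_seconds / 3600             # => 6
```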
In other words, in a little less than a work day minus lunch, I’ve enjoyed my stickers as much as I would have done over 60 years if I’d kept them safe in a drawer and just looked at them once per month.
Don’t be a drawer. Be a sticker.
There’s a lot you can do with Ruby’s concepts of object individuation. A lot you probably shouldn’t do.
Some objects just hoard too many methods. By applying a modest tax rate, you can reclaim memory from your most bloated objects back to the heap.
class Object
  def tax(tax_factor = 0.2)
    # shuffle once, so each levy removes a distinct method
    meths = self.class.instance_methods(false).shuffle
    tax_liability = (meths.length * tax_factor).ceil
    tax_liability.times do
      tax_payment = meths.shift
      instance_eval("undef #{tax_payment}")
    end
  end
end
class Citizen
  def car; end
  def house; end
  def dog; end
  def spouse; end
  def cash; end
end
c = Citizen.new
c.tax
c.house
# undefined method `house' for #<Citizen:0x00007fc342a866c0>
Write your code like you write your emails when you’re trying a little too hard to come across as friendly.
klasses = ObjectSpace.each_object(Class)
klasses.each do |klass|
  next if klass.frozen?
  klass.instance_methods.each do |meth|
    next if meth.to_s.end_with?('!')
    klass.class_eval do
      alias_method "#{meth}!".to_sym, meth
    end
  end
end
[1, 3, 2].max!
# 3
A fun game to play with your friends. Hides Wally in a randomly selected method, and you get to look for him!
klasses = ObjectSpace.each_object(Class)
# skip frozen classes, and classes with no methods to hide Wally in
klass = klasses.to_a.reject { |k| k.frozen? || k.instance_methods.empty? }.sample
method = klass.instance_methods.sample
klass.instance_eval do
  define_method(method) do |*args|
    puts "You found Wally!"
  end
end
Budgeting is important.
$available_calls = 1000

trace = TracePoint.new(:call) do |tp|
  if tp.defined_class.ancestors.include?(Unsustainable)
    raise StandardError.new, 'No more method calls. Go outside and play.' if $available_calls.zero?
    $available_calls -= 1
  end
end
trace.enable

module Unsustainable; end

class EndlessGrowth
  include Unsustainable
  def grow; end
end

1001.times { EndlessGrowth.new.grow }
# No more method calls. Go outside and play. (StandardError)
class Object
  def self.inherited(subclass)
    return if %w[Object].include?(self.name)
    raise StandardError.new, '🏴'
  end
end

class RulingClass; end
class UnderClass < RulingClass; end # raises StandardError
I was lucky enough to get one of the limited number of tickets for Brighton Ruby 2022, so off I trotted down to Brighton for a long weekend in the very comfortable Brighton Surf Guest House.
Joel Hawksley and his talk about getting GitHub’s 40k lines of custom CSS (sort of) under control with their design system, Primer.
Kelly Sutton–who I met at “breakfast”–talking about latency based Sidekiq queues. The idea is that you queue your jobs by expected latency (queue: :within_thirty_seconds) instead of priority which is ambiguous, and that way your auto-scaling and alerting can respond appropriately. A writeup will potentially be landing on the Gusto engineering blog some time in the near future. They also introduced me to the idea of having specific read-only queues for high throughput tasks that won’t overwhelm the primary database with writes.
Tom Stuart talked about Ruby 2.7+’s pattern matching, a powerful but under-hyped feature which to my understanding provides functionality similar to named regular expression captures, but for arbitrary objects. He included several examples of how this can be used to reduce the amount of code required to do certain things.
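For a flavour of what that looks like (my own toy example, not one from the talk; the hash shape is invented), Ruby’s case/in destructures a value and binds names in one step, much as named captures do for strings:

```ruby
# Match against the shape of a hash, binding `name` and `level` as we go.
case { name: "Alice", role: { title: "admin", level: 3 } }
in { name: String => name, role: { title: "admin", level: } }
  result = "#{name} is an admin at level #{level}"
in { name: String => name }
  result = "#{name} has no admin role"
end
# result => "Alice is an admin at level 3"
```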
Jemma Issroff presenting on object shapes and how the concept can potentially be applied to Ruby. This is the sort of “under the hood” improvement that I’m not so familiar with, and eager to learn more about. There’s an open issue on the Ruby tracker covering it in more detail.
Roberta Mataityte on the 9 rules of effective debugging, from the book Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems by David Agans. A really useful talk, given how easy it is to get lost in the woods while tracking down a problem. I can’t personally vouch for the book, but Roberta’s take on the material definitely made it sound like a worthwhile read, so I’ll probably grab a copy at some point.
Emma Barnes on legacy and the context in which we create and use our tools. Amazing talk, but you probably had to be there.
John Cinnamond on the maybe monad, the null object pattern, and how learning different perspectives helps us truly understand ourselves. Sometimes I suspect that people are using programming to trick me into learning deeper lessons about the human condition. Tricksy.
Naomi Freeman on her framework (Freemwork?) for building psychologically safe teams. Unfortunately, by this point I’d stopped taking notes on my phone because I didn’t want to appear uninterested, so I can’t remember the individual points.
Looking forward to next year.
Several years ago, I inherited a legacy application. It had multiple parts all logging back to a central ELK stack that had later been supplemented by additional Prometheus metrics. And every morning, I would wake up to a dozen alerts.
Obviously we tried to reduce the number of alerts sent. Where possible the system would self-heal, bringing up new nodes and killing off old ones. Some of the alerts were collected into weekly reports so they were still seen but, as they didn’t require immediate triage, could be held off.
But the damage was done.
No-one read the Slack alerts channel. Everyone had forwarded the emails to spam. The system cried wolf, but all the villagers had learnt to cover their ears.
With a newer project, I wanted an implicit rule: if an alert pops up, it is because it requires a human interaction. A service shut off because of a billing issue. A new deploy causing a regression in signups. These are things a human needs to step in and do something about (caveat emptor: there is wiggle room in this).
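That rule can be sketched in a few lines of Ruby (the event names and the Event struct are invented for illustration): anything that needs a human right now pages someone; everything else waits in the digest.

```ruby
# Route events by whether a human must act on them right now.
Event = Struct.new(:name, :needs_human_now)

def route(events)
  page, digest = events.partition(&:needs_human_now)
  { page: page.map(&:name), weekly_digest: digest.map(&:name) }
end

routed = route([
  Event.new("billing-suspended",              true),
  Event.new("signup-regression-after-deploy", true),
  Event.new("intermittent-null-pointer",      false),
])
# routed[:page] pages a human; routed[:weekly_digest] waits for review
```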
There are still warnings being flagged up, but developers can check in on these on their own time. Attention is precious, and being pulled out of it every hour because of a NullPointer is not a worthy trade-off, in my own experience.
A flood of false positives will make you blind to the real point of alerting: knowing when you’re needed.
There are seven basic story plots:
How many distinct software “plots” are there?
Let me know if you can think of any more.
This is particularly powerful when you transition between orders of magnitude and facilitate a positive feedback loop. Make my test suite 15% faster and I’ll thank you kindly. Make it 10x faster and I’ll love you forever. ↩
My talk at AppDevCon discussed the Requirements Trifecta but turned it into a Quadrinella: you need leadership vision, market feedback, and technical reality to all line up as listed in the trifecta, but I’ve since added a fourth component. You also need to be able to tell the people who might be interested in paying for this thing that you have it and it might be worth paying for. If you don’t have that then, if anybody has heard of you at all, it will be as a company that went out of business with a product “five years ahead of its time”: you were able to build it, it did something people could benefit from, in an innovative way, but nobody realised that they needed it.
A 45 minute presentation was enough to introduce that framework and describe it, but not to go into any detail. For example we say that we need “market feedback”, i.e. to know that the thing we are going to build is something that some customer somewhere will actually want. But how do we find out what they want? The simple answer, “we ask them”, turns out to uncover some surprising complexity and nuance.
At one end, you have the problem of mass-market scale: how do you ask a billion people what they want? It’s very expensive to do, and even more expensive to collate and analyse those billion answers. We can take some simplifying steps that reduce the cost and complexity, in return for finding less out. We can sample the population: instead of asking a billion people what they think, we can ask ten thousand people what they think and apply what we learn to all billion people.
We have to know that the way in which we select those 10,000 people is unbiased, otherwise we’re building for an exclusive portion of the target billion. Send a survey to people’s work email addresses on a Friday, and some will not pick it up until Sunday as their weekend is Fri-Sat. Others will be on holiday, or not checking their email that day, or feeling grumpy and inclined to answer with the opposite of their real thoughts, or getting everything done quickly before the weekend and disinclined to think about your questions at all.
Another technique we use is to simplify the questions—or at least the answers we’ll accept to those questions, to make it easier to combine and aggregate those answers. Now we have not asked “what do you think about this” at all; we have asked “which of these ways in which you might think about this do you agree with?” Because people are inclined to avoid conflict, they tend to agree with us. Ask “to what extent do you agree that spline reticulation is the biggest automation opportunity in widget frobnication?” and you’ll learn something different from the person who asked “to what extent do you agree that spline reticulation is the least important automation opportunity in widget frobnication?”
We’ll get richer information from deeper, qualitative interactions with people, and that tends to mean working with fewer people. At the extreme small end we have one person: an agent talks to their client about what that client would like to see. This is quite an easy case to deal with, because you have exactly one viewpoint to interpret.
Of course, that viewpoint could well be inconsistent. Someone can tell you that they get a lot of independence in how they work, then in describing their tasks list all the approvals and sign-offs they have to get. It can also be incomplete. A manager might not fully know all of the details of the work their reports do; someone may know their own work very well but not the full context of the process in which that work occurs. Additionally, someone may not think to tell you everything about their situation: many activities rely on tacit knowledge that’s hard to teach and hard to explain. So maybe we watch them work, rather than asking them how they work. Now, are they doing what they’re doing because that’s how they work, or because that’s how they behave when they’re being watched?
Their viewpoint could also be less than completely relevant: maybe the client is the person paying for the software, but are they the people who are going to use it? Or going to be impacted by the software’s outputs and decisions? I used the example in the talk of expenses software: very few people when asked “what’s the best software you’ve ever used” come up with the tool they use to submit receipts for expense claims. That’s because it’s written for the accounting department, not for the workers spending their own money.
So, we think to involve more people. Maybe we add people’s managers, or reports, or colleagues, from their own and from other departments. Or their customers, or suppliers. Now, how do we deal with all of these people? If we interview them each individually, then how do we resolve contradiction in the information they tell us? If we bring them together in a workshop or focus group, we potentially allow those contradictions to be explored and resolved by the group. But potentially they cause conflict. Or don’t get brought up at all, because the politics of the situation lead to one person becoming the “spokesperson” for their whole team, or the whole group.
People often think of the productiveness of a software team as the flow from a story being identified as “to do” to working software being released to production. I contend that many of the interesting and important decisions relating to the value and success of the software were made before that limited part of the process.
Many, many years ago I was an avid reader and writer of various fiction writing websites. There’s still links to them on this site, which shows a. how long they’ve been around and b. how out of date this site is. Recently I’ve been on a bit of binge, revisiting my past and re-reading these old stories. Which led me to a quest.
I built up a good rapport with several writers. PMs, and later emails, were traded back and forth, and I learned a bit more about the world as a young and naive kid. Some folks were on the far side of the world, others 30 miles down the road.
I’ve tried to get in touch with a few of these folks recently and hit the inevitable bit rot that seems to pervade the Internet nowadays. Dead emails, links to profiles on sites that no longer exist. It seems there’s no way to reach some folks.
Sleuthing through LinkedIn, Twitter, every open-source channel I can find has yielded no luck.
I guess what I’m trying to say is, it’s inherently unnerving and disheartening knowing there is someone, somewhere, out there in the world who I will probably never get to talk to again.
But you never know.
The field of software engineering doesn’t change particularly quickly. Tastes in software engineering change all the time: keeping up with them can quickly result in seasickness or even whiplash. For example, at the moment it’s popular to want to do server-side rendering of front end applications, and unpopular to do single-page web apps. Those of us who learned the LAMP stack or WebObjects are back in fashion without having to lift a finger!
Currently it’s fashionable to restate “don’t mock an interface you don’t own” as the more prescriptive, taste-driven statement “mocks are bad”. Rather than change my practice (my 2014 position, “I use mocks and I’m happy with that”, still stands), I’ll ask why this particular taste has arisen.
Mock objects let you focus on the ma, the interstices between objects. You can say “when my case controller receives this filter query, it asks the case store for cases satisfying this predicate”. You’re designing a conversation between independent programs, placing restrictions on the messages they use to communicate.
But many people don’t think about software that way, and so don’t design software that way either. They think about software as a system that holistically implements a goal. They want to say “when my case controller receives this filter query, it returns a 200 status and the JSON representation of cases matching that query”. Now, the mocks disappear, because you don’t design how the controller talks to the store, you design the outcome of the request which may well include whatever behaviour the store implements.
Of course, tests depending on the specific behaviour of collaborators are more fragile, and the more specific prescription “don’t mock what you don’t control” uses that fragility: if the behaviour of the thing you don’t control changes, you won’t notice because your mock carries on working the way it always did.
That problem is only a problem if you don’t have any other method of auditing your dependencies for fitness for purpose. If you’re relying on some other interface working in a particular way then you should probably also have contract tests, acceptance tests, or some other mechanism to verify that it does indeed work in that way. That would be independent of whether your reliance is captured in tests that use mock objects or some other design.
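The two styles described above can be sketched in a few lines. This is an illustrative example, not anyone’s production code: `CaseController` and `CaseStore` are stand-ins for the article’s “case controller” and “case store”, using Python’s `unittest.mock` for the interaction-style assertion.

```python
from unittest.mock import Mock

class CaseController:
    """Illustrative controller, named after the article's example."""
    def __init__(self, store):
        self.store = store

    def filter(self, predicate):
        # Ask the collaborating store for matching cases...
        cases = self.store.cases_matching(predicate)
        # ...and present the outcome as an HTTP-ish response.
        return {"status": 200, "cases": cases}

# Interaction style: assert on the conversation between the objects.
store = Mock()
store.cases_matching.return_value = ["case-1"]
controller = CaseController(store)
result = controller.filter("status = open")
store.cases_matching.assert_called_once_with("status = open")

# Outcome style: assert on the result; the store's behaviour leaks into the test.
assert result == {"status": 200, "cases": ["case-1"]}
```

The first assertion survives any change to how the store finds cases; the second breaks if the store’s behaviour changes, which is exactly the fragility (and the auditing value) discussed above.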
It’ll only be a short while before mock objects are cool again. Until then, this was an interesting diversion.
My professional other half at Babylon Health, Taylar Bouwmeester, and I invite you to join us on a rollercoaster ride through the merry world of keyboard accessibility. It stars Brad Pitt as me and Celine Dion (she’s Canadian, you know) as Taylar.
As has been mentioned here before, the UK regulator, the Competition and Markets Authority, are conducting an investigation into mobile phone software ecosystems, and they recently published the results of that investigation in the mobile ecosystems market study. They’re also focusing on two particular areas of concern: competition among mobile browsers, and in cloud gaming services. This is from their consultation document:
Mobile browsers are a key gateway for users and online content providers to access and distribute content and services over the internet. Both Apple and Google have very high shares of supply in mobile browsers, and their positions in mobile browser engines are even stronger. Our market study found the competitive constraints faced by Apple and Google from other mobile browsers and browser engines, as well as from desktop browsers and native apps, to be weak, and that there are significant barriers to competition. One of the key barriers to competition in mobile browser engines appears to be Apple’s requirement that other browsers on its iOS operating system use Apple’s WebKit browser engine. In addition, web compatibility limits browser engine competition on devices that use the Android operating system (where Google allows browser engine choice). These barriers also constitute a barrier to competition in mobile browsers, as they limit the extent of differentiation between browsers (given the importance of browser engines to browser functionality).
They go on to suggest things they could potentially do about it:
A non-exhaustive list of potential remedies that a market investigation could consider includes:
- removing Apple’s restrictions on competing browser engines on iOS devices;
- mandating access to certain functionality for browsers (including supporting web apps);
- requiring Apple and Google to provide equal access to functionality through APIs for rival browsers;
- requirements that make it more straightforward for users to change the default browser within their device settings;
- choice screens to overcome the distortive effects of pre-installation; and
- requiring Apple to remove its App Store restrictions on cloud gaming services.
But, importantly, they want to know what you think. I’ve now been part of direct and detailed discussions with the CMA a couple of times as part of OWA, and I’m pretty impressed with them as a group; they’re engaged and interested in the issues here, and knowledgeable. We’re not having to educate them in what the web is. The UK’s potential digital future is not all good (and some of the UK’s digital future looks like it could be rather bad indeed!) but the CMA’s work is a bright spot, and it’s important that we support the smart people in tech government, lest we get the other sort.
So, please, take a little time to write down what you think about all this. The CMA are governmental: they have plenty of access to windy bloviations about the philosophy of tech, or speculation about what might happen from “influencers”. What’s important, what they need, is real comments from real people actually affected by all this stuff in some way, either positively or negatively. Tell them whether you think they’ve got it right or wrong; what you think the remedies should be; which problems you’ve run into and how they affected your projects or your business. Earlier in this process we put out calls for people to send in their thoughts and many of you responded, and that was really helpful! We can do more this time, when it’s about browsers and the Web directly, I hope.
If you feel as I do then you may find OWA’s response to the CMA’s interim report to be useful reading, and also the whole OWA twitter thread on this, but the most important thing is that you send in your thoughts in your own words. Maybe you think everything is great as it is! It’s still worth speaking up. It is only a good thing if the CMA have more views from actual people on this, regardless of what those views are. The actions the CMA could take here could make a big difference to how competition on the Web proceeds, and I imagine everyone who builds for the web has thoughts on what they want to happen there. There will also be thoughts on what the web should be from quite a few people who use the web, which is to say: everybody. And everybody should put their thoughts in.
So here’s the quick guide:
Go to it. You have a month. It’s a nice sunny day in the UK… why not read the report over lunchtime and then have a think?
Bizarrely, the Guinness Book of World Records lists the “first microcomputer” as 1980’s Xenix. This doesn’t seem right to me: Xenix was Microsoft’s Unix-derived operating system, not a microcomputer.
When a programmer says that they are ‘self-taught’ or that they “taught themselves to code”, what do they mean by it?
Did they sit down at a computer, with reference to no other materials, and press buttons and click things until working programs started to emerge?
It’s unlikely that they learned to program this way. More probable is that our “self-taught” programmer had some instruction. But what? Did they use tutorials or reference content? Was the material online, printed, or hand written? Did it include audio or visual components? Was it static or dynamic?
What feedback did they get? Did their teaching material encourage reflection, assessment, or other checkpoints? Did they have access to a mentor or community of peers, experts, or teachers? How did they interact with that community? Could they ask questions, and if so what did they do with the answers?
What was it that they taught themselves? Text-based processing routines in Commodore BASIC, or the Software Engineering Body of Knowledge?
What were the gaps in their learning? Do they recognise those gaps? Do they acknowledge the gaps? Do they see value in the knowledge that they skipped?
And finally, why do they describe themselves as ‘self-taught’? Is it a badge of honour, or of shame? Does it act as a signal for some other quality? Why is that quality desirable?
Normally, I bang on endlessly about Web Accessibility, but occasionally branch out to bore about other things. For Global Accessibility Awareness Day last week, my employers at Babylon Health allowed me to publish a 30 min workshop I gave to our Accessibility Champions Network on how to make accessible business documents. Ok, that might sound dull, but according to I.M.U.S., for every external document an organisation publishes, it generates 739 for internal circulation. I’m using Google Docs in the talk, but the concepts are equally applicable to Microsoft Word, Apple Pages, and to authoring web content.
It’s introduced by my Professional Better Half, Taylar Bouwmeester –recipient of the coveted “Friendliest Canadian award” and winner of a gold medal for her record of 9 days of unbroken eye contact in the all-Canada Games– and then rapidly goes downhill thereafter. But you might enjoy watching me sneeze, sniff, and cough because I was under constant assault from spring foliage jizzing its pollen up my nostrils. Hence, it’s “R”-rated. Captions are available (obvz) – thanks Subly!
Ken Kocienda (unwrapped twitter thread, link to first tweet):
I see so many tweets about agile, epics, scrums, story points, etc. and none of it matters. We didn’t use any of that to ship the best products years ago at Apple.
Exactly none of the most common approaches I see tweeted about all the time helped us to make the original iPhone. Oh, and we didn’t even have product managers.
Do you know what worked?
A clear vision. Design-led development. Weekly demos to deciders who always made the call on what to do next. Clear communication between cross functional teams. Honest feedback. Managing risk as a function of the rate of actual progress toward goals.
I guess it’s tempting to lard on all sorts of processes on top of these simple ideas. My advice: don’t. Simple processes can work. The goal is to ship great products, not build the most complex processes. /end
We can talk about the good and the bad advice in this thread, and what we do or don’t want to take away, but it’s fundamentally not backed up by strong argument. Apple did not do this thing that is talked about now back in the day, and Apple is by-our-lady Apple, so you don’t need to do this thing that is talked about now.
There is lots that I can say here, but my secondary thing is to ask how much your organisation and problem look like Apple’s organisation and problem before adopting their solutions, technological or organisational.
My primary thing is that pets.com didn’t use epics, scrums, story points, etc. either. Pick your case studies carefully.
Here is my personal submission to the U.S. National Telecommunications and Information Administration’s report on Competition in the Mobile App Ecosystem. Feel free to steal from it and send yours before 11:59 p.m. Eastern Time on Monday May 23, 2022. I also contributed to the Open Web Advocacy’s response.
I am a UK-based web developer and accessibility consultant, specialising in ensuring web sites are inclusive for people with disabilities or who experience other barriers to access–such as living in poorer nations where mobile data is comparatively expensive, networks may be slow and unreliable and people are generally accessing the web on cheap, lower-specification devices.
Although I am UK-based, I have clients around the world, including the USA. And, of course, because the biggest mobile platforms are Android and iOS/iPad, I am affected by the regulatory regime that applies to Google and Apple. I write in a personal capacity, and am not speaking on behalf of any clients or employers, past or present. You have my permission to publish or quote from this document, with or without attribution.
Many of my clients would like to make apps that are Progressive Web Applications. These are apps that are websites, built with long-established open technologies that work across all operating systems and devices, and enhanced to be able to work offline and have the look and feel of an application. Examples of ‘look and feel’ might be to render full-screen; to be saved with their own icon onto a device’s home screen; to integrate with the device’s underlying platform (with the user’s permission) in order to capture images from the camera; use the microphone for video conferencing; to send push notifications to the user.
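As a concrete illustration, the “saved with their own icon onto a device’s home screen” and full-screen behaviours come from a web app manifest; a minimal sketch might look like this (the name, paths and icon are illustrative):

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

The page links to it with `<link rel="manifest" href="/manifest.webmanifest">`, and a service worker supplies the offline behaviour.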
The benefits of PWAs are advantageous to both the developer (and the business they work for) and the end user. Because they are based on web technology, a competent developer need only make one app that will work on iOS and Android, as well as on desktop computers and tablets. This write-once approach has obvious benefits over developing a single-platform (“native”) app for iOS in addition to a single-platform app for Android and also a website. It greatly reduces costs because it greatly reduces the complexity of development, testing and deployment.
The benefits to the user are that the initial download is much smaller than that for a single-platform app from an app store. When an update to the web app is pushed by a developer to the server, the user only downloads the updated pages, not the whole application. For businesses looking to reach customers in growing markets such as India, Indonesia, Nigeria and Kenya, this is a competitive advantage.
In the case of users with accessibility needs due to a disability, the web is a mature platform on which accessibility is a solved problem.
However, many businesses are not able to offer a Progressive Web App, largely due to Apple’s anti-competitive policy of requiring all browsers on iOS and iPad to use its own engine, called WebKit. Whereas Google Chrome on Mac, Windows and Android uses its own engine (called Blink), and Firefox on non-iOS/iPad platforms uses its own rendering engine (called Gecko), Apple’s policy requires Firefox and Chrome on iOS/iPad to be branded skins over WebKit.
This “Apple browser ban” has the unfortunate effect of hamstringing Progressive Web Apps. Whereas Apple’s Safari browser allows web apps (such as Wordle) to be saved to the user’s home screen, Firefox and Chrome cannot do so–even though they all use WebKit. While single-platform iOS apps can send push notifications to the user, browsers are not permitted to. Push notifications are high on businesses’ priority lists because of how they can drive engagement. WebKit is also notably buggy and, with no competition on the iOS/iPad platform, there is little to incentivise Apple to invest more in its development.
Apple’s original vision for applications on iOS was Web Apps, and today they still claim Web Apps are a viable alternative to the App Store. Apple CEO Tim Cook made a similar claim last year in Congressional testimony when he suggested the web offers a viable alternative distribution channel to the iOS App Store. They have also claimed this during a court case in Australia with Epic.
Yet Apple’s own policies prevent Progressive Web Apps from being a viable alternative. It’s time to regulate Apple into allowing other browser engines onto iOS/iPad and giving them full access to the underlying platform–just as they currently are on Apple’s macOS, and on Android, Windows and Linux.
Yours,
Bruce Lawson
There are some files that you just pretty much never want to commit to version control. Depending on your needs, this might be something like your node_modules folder or it could be the .DS_Store file created by MacOS that holds information about the containing folder.
You can configure git to take advantage of a global .gitignore file for these cases.
I call mine .gitignore_global and keep it in my home directory, then tell git about it with: git config --global core.excludesfile ~/.gitignore_global
You should now find that git will ignore any files mentioned in your global gitignore file as well as your project specific ones.
This is good for you, but it’s also good for the whole team. With each developer responsible for ignoring their own OS specific files or IDE preferences, the project .gitignore can be used to focus exclusively on project specific artefacts.
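Putting the steps above together, here’s a minimal sketch; the file name and the ignore patterns are conventions, not requirements:

```shell
# Put OS/tooling cruft patterns in a global ignore file...
printf '%s\n' '.DS_Store' 'node_modules/' > "$HOME/.gitignore_global"
# ...and tell git about it once:
git config --global core.excludesfile "$HOME/.gitignore_global"

# In any repository, those patterns are now ignored
# without touching the project's own .gitignore:
demo="$(mktemp -d)"
cd "$demo"
git init -q
touch .DS_Store
git check-ignore -q .DS_Store && echo 'ignored'
```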
Ok, so you’re making a React or React Native app. Don’t! Make a Progressive Web App. Sprinkle on some Trusted Web Activity goodness to put it in the Play Store, or wrap it with Capacitor.js if it needs push notifications or to go in the App Store (until the EU Digital Markets Act is ratified and Apple is required to allow more capable browsers on iOS).
But maybe you’re on a project that is already React Native, perhaps because some psycho manager flew in, demanded it and then returned to lurk in Hades. In which case, this might help you.
I like Expo (and wrote some random Expo tips). Expo Snacks are like ‘codepens’ for React Native.
Open Accessibility bugs – Facebook’s official list, and accompanying blog post.
All the Birmingham-flavoured tech content on this page is supplied by local bloggers: