Last updated: August 17, 2022 03:22 AM (All times are UTC.)

August 16, 2022

My Work Bezzie “Stinky” Taylar Bouwmeester and I take you on a wild, roller-coaster ride through the magical world of desktop screen readers. Who uses them? How can they help if developers use semantic HTML? How can you test your work with a desktop screen reader? (Parental discretion advised).

August 15, 2022

Last week I observed a blind screen reader user attempting to complete a legal document that had been emailed to them via DocuSign. This is a service that takes a PDF document, and turns it into a web page for a user to fill in and put an electronic signature to. The user struggled to complete a form because none of the fields had data labels, so whereas I could see the form said “Name”, “Address”, “Phone number”, “I accept the terms and conditions”, the blind user just heard “input, required. checkbox, required”.

Ordinarily, I’d dismiss the product as inaccessible, but DocuSign’s accessibility statement says “DocuSign’s eSignature Signing Experience conforms to and continually tests for Government Section 508 and WCAG 2.1 Level AA compliance. These products are accessible to our clients’ customers by supporting Common screen readers”, and the product had been audited by The Paciello Group, whom I trust.

So I set about experimenting by signing up for a free trial and authoring a test document, using Google Docs and exporting as a PDF. I then imported this into DocuSign and began adding fields to it. I noticed that each input has a set of properties (required, optional, etc.) and one of these is ‘Data Label’. Aha! HTML fields have a <label> associated with them (or should do), so I duplicated the text and sent the form to my Work Bezzie, Stinky Taylar, to test.

DocuSign's back-end to add fields to a PDF.

No joy. The labels were not announced. (It seems that the ‘data label’ field actually becomes a column header in the management report screen.) So I set about adding text into the other fields, and through trial and error discovered how to force the front-end to have audible data labels:

  • Text fields should have the visible label duplicated in the ‘tooltip’ field.
  • Radio buttons and checkboxes: type the question (e.g., what would be the <legend>) into the ‘group tooltip’ field.
  • Each individual checkbox or radio button’s label should be entered into the “checkbox/radio button value” field.

I think DocuSign is missing a trick here. Given the importance of input labels for screen readers, a DocuSign author should be prompted for this information, with an explanation of why it’s needed. I don’t think it would be too hard to find the text immediately preceding the field (or immediately following it on the same line, in the case of radio buttons/checkboxes) and prefill the prompt with it, as that’s likely to be the relevant label. Why go to all the effort to make an accessible product, then make it so easy for your customers to get it wrong?

Another niggle: on the front end, there is an invisible link that is visually revealed when tabbed to, and says “Press enter or use the screen reader to access your document”. However, the tester I observed had navigated directly to the PDF document via headings, and hadn’t tabbed to the hidden link. The ‘screen reader mode’ seemed visually identical to the default ‘hunt for help, cripples!’ mode, so why not just have the accessible mode as the default?

All in all, it’s a good product, let down by poor usability and a ‘bolt-on’ approach. And, as we all know, built-in beats bolt-on. Bigly.

August 05, 2022

Reading List 293 by Bruce Lawson (@brucel)

  • Northern Bloke talking about CSS on YouTube link of the month: Be the browser’s mentor, not its micromanager – Andy Bell on how we can hint the browser, rather than micromanage it, by leaning into progressive enhancement, CSS layout, fluid type & space and modern CSS capabilities to build resilient front-ends that look great for everyone, regardless of their device, connection speed or context.
  • Focus management still matters – Sarah Higley takes us on a magical mystery tour of sequential-focus-navigation-starting-point
  • Date and Time Pickers for All – “the release of the React Aria and React Spectrum date and time picker components… a full suite of fully featured components and hooks including calendars, date and time fields, and range pickers, all with a focus on internationalization and accessibility. It also includes @internationalized/date, a brand new framework-agnostic library for locale-aware date and time manipulation … All of our date and time picker components have been tested across desktop and mobile devices, and with many different input methods including mouse, touch, and keyboard. We have worked hard to ensure screen reader announcements are clear and consistent.”
  • Replace the outline algorithm with one based on heading levels – The HTML spec now reflects what actually happens in browsers, rather than what we wish happened. The outlining algorithm would have allowed what is effectively a generic heading element, with a level corresponding to its nesting in sectioning elements. But no browser implemented it, so the spec reverts to reality so that developers don’t mistakenly believe it is possible.
  • What is the best way to mark up an exclusive button group? by Lea Verou
  • Perceived affordances and the functionality mismatch – a companion piece by Léonie Watson
  • JSX in the browser by Chris Ferdinandi
  • An Accessibility-First Approach To Chart Visual Design
  • Bunny Fonts “is an open-source, privacy-first web font platform designed to put privacy back into the internet…with a zero-tracking and no-logging policy”, so an alternative to Google Fonts
  • Font Subsetting Strategies: Content-Based vs Alphabetical – “Font subsetting allows you to split a font’s characters (letters, numbers, symbols, etc.) into separate files so your visitors only download what they need. There are two main subsetting strategies that have different advantages depending on the type of site you’re building.”
  • Dragon versions and meeting accessibility guidelines “Dragon responds to the visible text label, the accessible name and the “name” attribute … Microsoft are in the process of buying Nuance, and it’s a measure of how unpopular Nuance is that most people think that a Microsoft takeover would be a very good thing.”
  • The Surprising Truth About Pixels and Accessibility – “Should I use pixels or ems/rems?!” by Josh W. Comeau
  • Android accessibility: roles and TalkBack by Graeme Coleman (Tetralogical)
  • Three Steps To Start Practicing Inclusive Product Development – “Product teams aren’t intentionally designing products that exclude users, but a lack of team diversity, specialized knowledge and access to feedback from people with disabilities results in users being left behind.”
  • Court OKs billion-dollar Play Store gouging suit against Google
  • The hidden history of screen readers – “For decades, blind programmers have been creating the tools their community needs”
  • Introducing: Emoji Kitchen 😗👌💕 – Jennifer Daniel, the chair of the Unicode Consortium’s emoji subcommittee, asks “How can we reconcile the rapid ever changing way we communicate online with the formal methodical process of a standards body that digitizes written languages?” and introduces the Poopnado emoji

I can usually muddle through whatever programming task is put in front of me, but I can’t claim to have a great eye for design. I’m firmly in the conscious incompetence stage of making things look good.

The good news for me and people like me is that you can fake it. Sort of. I doubt I’ll ever compete with people who actually know what they’re doing, but I have found some resources for making something that doesn’t look like the dog’s dinner.

I’d like to add some more free resources to this, so hopefully I’ll get back to it.

July 25, 2022

The upcoming issue of the SICPers newsletter is all about phrases that were introduced to computing to mean one thing, but seem to get used in practice to mean another. This annoys purists, pedants, and historians: it also annoys the kind of software engineer who dives into the literature to see how ideas were discussed and used and finds that the discussions and usages were about something entirely different.

So should we just abandon all technical terminology in computing? Maybe. Here’s an irreverent guide.

Object-Oriented Programming

Luckily the industry doesn’t really use this term any more so we can ignore the changed meaning. The small club of people who still care can use it correctly, everybody else can carry on not using it. Just be aware when diving through the history books that it might mean “extreme late binding of all things” or it might mean “modules, but using the word class” depending on the age of the text.

Agile

Nope, this one’s in the bin, I’m afraid. It used to mean “not waterfall” and now means “waterfall with a status meeting every day and an internal demo every two weeks”. We have to find a new way to discuss the idea that maybe we focus on the working software and not on the organisational bureaucracy, and that way does not involve the word…

DevOps

If you can hire a “DevOps engineer” to fulfil a specific role on a software team then we have all lost at using the phrase DevOps.

Artificial Intelligence

This one used to mean “psychologist/neuroscientist developing computer models to understand how intelligence works” and now means “an algorithm pushed to production by a programmer who doesn’t understand it”. But there is a potential for confusion with the minor but common usage “actually a collection of if statements but last I checked AI wasn’t a protected term” which you have to be aware of. Probably OK, in fact you should use it more in your next grant bid.

Technical Debt

Previously something very specific used in the context of financial technology development. Now means whatever anybody needs it to mean if they want their product owner to let them do some hobbyist programming on their line-of-business software, or else. Can definitely be retired.

Behaviour-Driven Development

Was originally the idea that maybe the things your software does should depend on the things the customers want it to do. Now means automated tests with some particular syntax. We need a different term to suggest that maybe the things your software does should depend on the things the customers want it to do, but I think we can carry on using BDD in the “I wrote some tests at some point!” sense.

Reasoning About Software

Definitely another one for the bin. If Tony Hoare were not alive today he would be turning in his grave.

July 18, 2022

Regular readers will recall that the UK competition regulator, the CMA, investigated Apple and Google’s mobile ecosystems and concluded there is a need for regulation. They were initially looking mostly at native app stores, but quickly widened that to how Apple’s insistence on all browsers using WebKit on iOS is preventing Progressive Web Apps from competing against single-platform native apps.

The CMA has announced its intention to begin a market investigation specifically into the supply of mobile browsers and browser engines, and the distribution of cloud gaming services through app stores on mobile devices, and seeks your views. It doesn’t matter if you are not based in the UK; if you or your clients do business in the UK, your views matter too.

Steve Fenton has published his response, as has Alistair Shepherd; here is mine, in case you need something to crib from to write yours. Send your response to the CMA mailbox browsersandcloud@cma.gov.uk before July 22nd.

I am a UK-based web developer and accessibility consultant, specialising in ensuring web sites are inclusive for people with disabilities or who experience other barriers to access–such as living in poorer nations where mobile data is comparatively expensive, networks may be slow and unreliable and people are generally accessing the web on cheap, lower-specification devices. I write in a personal capacity, and am not speaking on behalf of any clients or employers, past or present. You have my permission to publish or quote from this document, with or without attribution.

Many of my clients would like to make apps that are Progressive Web Applications. These are apps that are websites, built with long-established open technologies that work across all operating systems and devices, and enhanced to be able to work offline and have the look and feel of an application. Examples of ‘look and feel’ might be to render full-screen; to be saved with their own icon onto a device’s home screen; to integrate with the device’s underlying platform (with the user’s permission) in order to capture images from the camera; use the microphone for video conferencing; to send push notifications to the user.

The benefits of PWAs are advantageous to both the developer (and the business they work for) and the end user. Because they are based on web technology, a competent developer need only make one app that will work on iOS, Android, as well as desktop computers and tablets. This write-once approach has obvious benefits over developing a single-platform (“native”) app for iOS in addition to a single-platform app for Android and also a website. It greatly reduces costs because it greatly reduces complexity of development, testing and deploying.

The benefits to the user are that the initial download is much smaller than that for a single-platform app from an app store. When an update to the web app is pushed by a developer to the server, the user only downloads the updated pages, not the whole application. For businesses looking to reach customers in growing markets such as India, Indonesia, Nigeria and Kenya, this is a competitive advantage.

In the case of users with accessibility needs due to a disability, the web is a mature platform on which accessibility is a solved problem.

However, many businesses are not able to offer a Progressive Web App, largely due to Apple’s anti-competitive policy of requiring all browsers on iOS and iPad to use its own engine, called WebKit. Whereas Google Chrome on Mac, Windows and Android uses its own engine (called Blink), and Firefox on non-iOS/iPad platforms uses its own rendering engine (called Gecko), Apple’s policy requires Firefox and Chrome on iOS/iPad to be branded skins over WebKit.

This “Apple browser ban” has the unfortunate effect of hamstringing Progressive Web Apps. Whereas Apple’s Safari browser allows web apps (such as Wordle) to be saved to the user’s home screen, Firefox and Chrome cannot do so–even though they all use WebKit. While single-platform iOS apps can send push notifications to the user, browsers are not permitted to. Push notifications are high on businesses’ priority lists because of how they can drive engagement. WebKit is also notably buggy and, with no competition on the iOS/iPad platform, there is little to incentivise Apple to invest more in its development.

Apple’s original vision for applications on iOS was Web Apps, and today they still claim Web Apps are a viable alternative to the App Store. Apple CEO Tim Cook made a similar claim last year in Congressional testimony when he suggested the web offers a viable alternative distribution channel to the iOS App Store. They have also claimed this during a court case in Australia with Epic.

Yet Apple’s own policies prevent Progressive Web Apps from being a viable alternative. It’s time to regulate Apple into allowing other browser engines onto iOS/iPad and giving them full access to the underlying platform–just as they currently are on Apple’s own macOS, and on Android, Windows and Linux. Therefore, I fully support your proposal to make a reference in relation to the supply of mobile browsers and cloud gaming in the UK, the terms of reference, and urge a swift remedy: Apple must be required to allow alternate browser engines on iOS, with access to all of the same APIs and device integrations that Safari and native iOS apps have access to.

Yours,

Bruce Lawson

July 16, 2022

If you’re looking for music to study to tonight, here’s Watering a Flower, by Haruomi Hosono. Originally recorded in 1984 to be in-store music for MUJI.

If you’re looking for a way to avoid studying, it’s the same link, but read the comments.

I’ve been maintaining websites in some form for a long time now, and here’s why maybe you should at least think about having one of your own.

You get almost total creative control.

The more content that gets generated inside the walled gardens of Twitter, Instagram, etc., the less weirdness, beauty and creativity we get on the web. When you post on someone else’s service, what you wanted to say is forced into a tiny rectangle, and you might find that rectangle getting smaller and more restrictive as time goes on.

It’ll last if you take care of it.

If you create your web page using the fundamental technologies, HTML and CSS, and resist the urge to jump onto the ever-turning wheel of more advanced technologies, you’ll have something that, ten years from now, you can be pretty sure you’ll be able to slap onto a server and show people. The oft-referenced Space Jam website is a great example.

It doesn’t really even have to be a website.

You know what’s easier than writing HTML? Writing plain text. You know what web servers are perfectly happy to serve? A plain text web site.

Hard things are often worth it.

Learning to develop and host a website is harder than registering a Twitter account and merrily posting away, but you develop a useful skill and a valuable creative outlet. A lot of people liken creating a personal website to gardening. You carefully water, prune, and dote, and what you get is something you can cherish.

Hosting a website isn’t that difficult.

Again, it’s harder than using a third party service, but there are plenty of places to put your site for free or cheap:

It doesn’t really matter if nobody reads it.

Sure, one good thing about the walled gardens is that they’re relatively convenient when it comes to showing your stuff to other people in the garden. However, someone seeing your post isn’t really a human connection. Someone hitting like on your post isn’t really a human connection.

I’ve come to favour fewer, deeper interactions over a larger number of shallower ones, even if those likes do feel good. I’m not writing this to make myself out as the wise person who’s transcended the shallowness of social media. I’m writing it because it takes a deliberate effort for me not to fall into those traps. There’s some effort to recreate “likes” in the IndieWeb, but at the moment I view the lack of likes as more of a feature than a bug.

July 11, 2022

I recently taught an introduction to Python course, to final-year undergraduate students. These students had little to zero programming experience, and were all expected to get set up with Python (using the Anaconda environment, which we had determined to be the easiest way to get a reasonable baseline configuration) on laptops they had brought themselves.

What follows is not a slight on these people, who were all motivated, intelligent, and capable. It is a slight on the world of programming in the current ages, if you are seeking to get started with putting a general-purpose computer to your own purposes and merely own a general-purpose computer.

One person had a laptop that, being a mere six (6) years old, was too old to run the current version of Anaconda Distribution. We had to crawl through the archives, guessing what older version might work (i.e. might both run on their computer and still permit them to follow the course).

Another had a modern laptop and the same version of Python/tools that everyone else was using, except that their IDE would crash if they tried to plot a graph in dark mode.

Another had, seemingly without having launched the shell while they owned their computer, got their profile into a state where none of the system binary folders were on their PATH. Hmm, python3 doesn’t work, let’s use which python to find out why not. Hmm, which doesn’t work, let’s use ls to find out why not. Hmmm…

Many, through not having used terminal emulators before, did not yet know that terminal emulators are modal. There are shell commands, which you must type when you see a $ (or a % or a >) and will not work when you can see a >>>. There are Python commands, which are the other way around. If you type a command that launches nano/pico, there are other rules.

By the way, conda and pip (and poetry, if you try to read anything online about setting up Python) are Python things but you cannot use them as Python commands. They are shell commands.

By the other way, everyone writes those shell commands with a $ at the front. You do not write the $. Oh, and by the other other way: they don’t necessarily tell you to open the Terminal to do it.
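
Roughly, a session looks like this (a sketch of the two prompts; the $ and >>> are printed by the system, not typed by you, and pandas is only an example package):

$ python3              # a shell command, typed at the shell prompt
>>> 1 + 1              # a Python command, typed at the Python prompt
2
>>> exit()             # leaves Python and returns you to the shell
$ pip install pandas   # pip (and conda) are shell commands, not Python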

Different environments—the shell, Visual Studio Code, Spyder, PyCharm—will do different things with respect to your “current working directory” when you run a script. They will not tell you that they have done this, nor that it is important, nor that it is why your script can’t find a data file that’s RIGHT THERE.
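
One habit that sidesteps the problem (a minimal sketch; results.csv is just a stand-in filename): build paths from the script’s own location instead of trusting the current working directory.

from pathlib import Path

# The directory this script lives in, wherever it happens to be run from.
HERE = Path(__file__).resolve().parent

# Look for the data file next to the script, so it is found whether the
# script is launched from the shell, Spyder, VS Code, or PyCharm.
data_file = HERE / "results.csv"
print(data_file.read_text())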

This is all way before we get to the dark art of comprehending exception traces.

When I were a lad and Silicon Valley were all fields, you turned a computer on and it was ready for some programming. I’m not suggesting returning to that time, computers were useless then. But I do think it is needlessly difficult to get started with “a programming language that lets you work quickly” in this time of ubiquitous programs.

July 07, 2022

Recently, the HTML spec changed to replace the current outline algorithm with one based on heading levels. So the idea that you could use <h1> for a generic heading across your documents, and the browser would “know” which level it actually should be by its nesting inside <section> and other related “sectioning elements”, is no more.

This has caused a bit of anguish in my Twitter timeline–why has this excellent method of making reusable components been taken away? Won’t that hurt accessibility, as documents marked up that way will now have a completely flat structure? We know that 85.7% of screen reader users find heading levels useful.

Here comes the shocker: it has never worked. No web browser has implemented that outlining algorithm. If you used <h1> across your documents, it has always had a flat structure. Nothing has been taken away; this part of the spec has always been a wish, but has never been reality.

One of the reasons I liked having a W3C versioned specification for HTML is that it would reflect the reality of what browsers do on the date at the top of the spec. A living standard often includes things that aren’t yet implemented. And the worst thing about having zombie stuff in a spec is that lots of developers believe (in good faith) that it accurately reflects what’s implemented today.

So it’s good that this is removed from the WHATWG specification (now that the W3C specs are dead). I wish that you could use one generic heading everywhere, and that its computed level were communicated to assistive technology. Back in 1991, Sir Uncle Timbo himself wrote

I would in fact prefer, instead of <H1>, <H2> etc for headings [those come from the AAP DTD] to have a nestable <SECTION>..</SECTION> element, and a generic <H>..</H> which at any level within the sections would produce the required level of heading.

But browser vendors ignored both Sir Uncle Timbo, and me (the temerity!) and never implemented it, so removing this from the spec will actually improve accessibility.

(More detail and timeline in Adrian Roselli’s post There Is No Document Outline Algorithm.)

July 06, 2022

If there’s a common thread through tech workers, it’s having a drawer full of stickers, accumulated indiscriminately at conferences and meetups, but which they can never quite bring themselves to attach to anything.

There are very understandable human reasons for this. Once that sticker is stuck, you’ve committed. Your enjoyment of that sticker is now bound inextricably to the lifetime of whatever you’ve stuck it on. Getting rid of that thing means getting rid of that sticker and the memories that come with it. That sticker isn’t just a picture of a dog, it represents the memories of that time you went to Crufts or whatever. You might have stuck it on a laptop, which means you’ll probably only have that sticker for somewhere between four and eight more years. What a waste. Or you might have stuck it on one of your beautiful notebooks, which in practice means you’ll have it forever, as notebooks are another thing that most of us like to accumulate but balk at the idea of actually using.

So, like many of you, I kept my stickers in a little drawer to occasionally rifle through, smiling at the memories attached. Only, mathematically, I was wasting them.

Let’s say each sticker has a value l representing how much you like it. For convenience, we’ll give all stickers a fixed value of l=1.

Your enjoyment, e, of a sticker is then l * s where s is the total number of seconds for which you were looking at it. The success of your sticker strategy is measured by the sum of all e values.

Time for a worked example. Let’s say you have five stickers in your drawer and you look through the drawer once a month. You look at each sticker for a good 30 seconds before replacing it and moving on to the next one. You maintain this ritual for an admirable 60 years.

12 inspections a year for 60 years is 720 inspections. With a fixed l=1, each inspection gives you 30 * 5 * 1, for a total of e=150. Your lifetime e using the drawer strategy is a hefty 108,000.

Now imagine you take those five stickers and put them on the back of your desk, where all five remain in your line of sight while you work. Keeping our convenient l=1 for each sticker, you’re racking up a whopping 5e per second. At this rate, you’ll catch up with your drawer-using counterpart in 21,600 seconds, or 360 minutes, or six hours.

In other words, in a little less than a work day minus lunch, I’ve enjoyed my stickers as much as I would have done over 60 years if I’d kept them safe in a drawer and just looked at them once per month.
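
For the sceptical, here’s the same arithmetic as a quick sketch (same assumptions: l = 1, five stickers, 60 years):

l = 1            # how much you like each sticker (fixed, for convenience)
stickers = 5

# Drawer strategy: 30 seconds per sticker, once a month, for 60 years.
inspections = 12 * 60
e_drawer = inspections * 30 * stickers * l
print(e_drawer)                      # 108000

# Desk strategy: all five stickers in view, so e accrues at 5 per second.
seconds = e_drawer / (stickers * l)
print(seconds, seconds / 3600)       # 21600.0 6.0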

Don’t be a drawer. Be a sticker.

July 05, 2022

There’s a lot you can do with Ruby’s concepts of object individuation. A lot you probably shouldn’t do.

Taxation

Some objects just hoard too many methods. By applying a modest tax rate, you can reclaim memory from your most bloated objects back to the heap.

class Object
  def tax(tax_factor = 0.2)
    meths = self.class.instance_methods(false)
    tax_liability = (meths.length * tax_factor).ceil

    tax_liability.times do
      tax_payment = meths.delete(meths.sample) # pick a method at random and remove it from the pool
      instance_eval("undef #{tax_payment}")
    end
  end
end

class Citizen
  def car; end
  def house; end
  def dog; end
  def spouse; end
  def cash; end
end

c = Citizen.new
c.tax
c.house
# undefined method `house' for #<Citizen:0x00007fc342a866c0>

Excitability

Write your code like you write your emails when you’re trying a little too hard to come across as friendly.

klasses = ObjectSpace.each_object(Class)
klasses.each do |klass|
  next if klass.frozen?

  klass.instance_methods.each do |meth|
    next if meth.to_s.end_with?('!')

    klass.class_eval do
      alias_method "#{meth}!".to_sym, meth
    end
  end
end

[1, 3, 2].max!
# 3

Where’s Wally?

A fun game to play with your friends. Hides Wally in a randomly selected method, and you get to look for him!

klasses = ObjectSpace.each_object(Class)
klass = klasses.to_a.sample
method = klass.instance_methods.sample

klass.instance_eval do
  define_method(method) do |*args|
    puts "You found Wally!"
  end
end

Finite resources

Budgeting is important.

$availableCalls = 1000

trace = TracePoint.new(:call) do |tp|
  if tp.defined_class.ancestors.include?(Unsustainable)
    raise StandardError.new, 'No more method calls. Go outside and play.' if $availableCalls.zero?

    $availableCalls -= 1
  end
end
trace.enable

module Unsustainable; end

class EndlessGrowth
  include Unsustainable

  def grow; end
end

1001.times { EndlessGrowth.new.grow }
# No more method calls. Go outside and play. (StandardError)

Ruby for anarchists

class Object
  def self.inherited(subclass)
    return if %w[Object].include?(self.name)
    raise StandardError.new, '🏴'
  end
end

class RulingClass; end
class UnderClass < RulingClass; end # raises StandardError

July 04, 2022

I was lucky enough to get one of the limited number of tickets for Brighton Ruby 2022, so off I trotted down to Brighton for a long weekend in the very comfortable Brighton Surf Guest House.

Outside of the talks

  • Met some nice people and had some nice conversations. Thanks to Hans, Benjamin and Kelly for letting me awkwardly approach their table for some pre-talk chatter. All of them had travelled a lot further than I had to be there, so I’m glad they made it. There was also the guy I had a nice chat with about software as a medical device, the necessary paperwork that goes with it, and the joy of CE markings. And Mark, with whom I had a lovely chat about IndieWeb stuff, so it only feels appropriate to plug his site.
  • Andy Croll made a brief plea for the community to hire and mentor junior members of the Rails community. After all, seniors don’t grow on trees.
  • Got some exceptionally nice conference swag:

The talks

Encapsulation. React. Entropy.

Joel Hawksley and his talk about getting GitHub’s 40k lines of custom CSS (sort of) under control with their design system, Primer.

  • Bundle splitting & Chrome’s CSS coverage tools
  • “TDD but the T is Twitter”
  • Falling into the Pit of Success - Jeff Atwood
  • CSS linting as a CI step
  • Sending data to Datadog for custom dashboards

Billions. Redis. Efficiency.

Kelly Sutton–who I met at “breakfast”–talking about latency-based Sidekiq queues. The idea is that you queue your jobs by expected latency (queue: :within_thirty_seconds) instead of priority, which is ambiguous, and that way your auto-scaling and alerting can respond appropriately. A writeup will potentially be landing on the Gusto engineering blog some time in the near future. They also introduced me to the idea of having specific read-only queues for high-throughput tasks that won’t overwhelm the primary database with writes.

Patterns. Enthusiasm. Adoption.

Tom Stuart talked about Ruby 2.7+’s pattern matching, a powerful but under-hyped feature which, to my understanding, provides functionality similar to named regular expression captures, but for arbitrary objects. He included several examples of how this can be used to reduce the amount of code required to do certain things.

Instance. Shapes. Performance.

Jemma Issroff presenting on object shapes and how the concept can potentially be applied to Ruby. This is the sort of “under the hood” improvement that I’m not so familiar with, and eager to learn more about. There’s an open issue on the Ruby tracker covering it in more detail.

Elusive. Rules. Clarity.

Roberta Mataityte on the 9 rules of effective debugging, from the book Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems by David Agans. A really useful talk, given how easy it is to get lost in the woods while tracking down a problem. I can’t personally vouch for the book, but Roberta’s take on the material definitely made it sound like a worthwhile read, so I’ll probably grab a copy at some point.

Dust. Legacy. Story.

Emma Barnes on legacy and the context in which we create and use our tools. Amazing talk, but you probably had to be there.

Nil. Maybe. Same.

John Cinnamond on the maybe monad, the null object pattern, and how learning different perspectives helps us truly understand ourselves. Sometimes I suspect that people are using programming to trick me into learning deeper lessons about the human condition. Tricksy.

Reframe. Support. Embrace.

Naomi Freeman on her framework (Freemwork?) for building psychologically safe teams. Unfortunately by this point I’d stopped taking notes on my phone because I didn’t want to appear disinterested, so I can’t remember the individual points.

Looking forward to next year.

July 01, 2022

Several years ago, I inherited a legacy application. It had multiple parts all logging back to a central ELK stack that had later been supplemented by additional Prometheus metrics. And every morning, I would wake up to a dozen alerts.

Obviously we tried to reduce the number of alerts sent. Where possible the system would self-heal, bringing up new nodes and killing off old ones. Some of the alerts were collected into weekly reports so they were still seen but, as they didn’t require immediate triage, could be held off.

But the damage was done.

No-one read the Slack alerts channel. Everyone had forwarded the emails to spam. The system cried wolf, but all the villagers had learnt to cover their ears.

With a newer project, I wanted an implicit rule: if an alert pops up, it is because it requires human interaction. A service shut off because of a billing issue. A new deploy causing a regression in signups. These are things a human needs to step in and do something about (caveat emptor: there is wiggle room in this).

There are still warnings being flagged up, but developers can check in on these in their own time. Attention is precious, and being pulled out of it every hour because of a NullPointer is not a worthy trade-off, in my experience.

A flood of false positives will make you blind to the real need of alerting: knowing when you’re needed.

June 28, 2022

There are seven basic story plots:

  1. Overcoming the monster
  2. Rags to riches
  3. The quest
  4. Voyage and return
  5. Comedy
  6. Tragedy
  7. Rebirth

How many distinct software “plots” are there?

  1. Making an impossible thing possible
  2. Automating or eliminating a manual thing
  3. Making a slow thing faster [1]
  4. Making a manual thing easier - inline definitions/translations, note-taking software that allows you to easily group related notes
  5. Entertaining the user

Let me know if you can think of any more.

  [1] This is particularly powerful when you transition between orders of magnitude and facilitate a positive feedback loop. Make my test suite 15% faster and I’ll thank you kindly. Make it 10x faster and I’ll love you forever.

June 25, 2022

My talk at AppDevCon discussed the Requirements Trifecta but turned it into a Quadrinella: you need leadership vision, market feedback, and technical reality to all line up as listed in the trifecta, but I’ve since added a fourth component. You also need to be able to tell the people who might be interested in paying for this thing that you have it and it might be worth paying for. If you don’t have that then, if anybody has heard of you at all, it will be as a company that went out of business with a product “five years ahead of its time”: you were able to build it, it did something people could benefit from, in an innovative way, but nobody realised that they needed it.

A 45 minute presentation was enough to introduce that framework and describe it, but not to go into any detail. For example we say that we need “market feedback”, i.e. to know that the thing we are going to build is something that some customer somewhere will actually want. But how do we find out what they want? The simple answer, “we ask them”, turns out to uncover some surprising complexity and nuance.

At one end, you have the problem of mass-market scale: how do you ask a billion people what they want? It’s very expensive to do, and even more expensive to collate and analyse those billion answers. We can take some simplifying steps that reduce the cost and complexity, in return for finding less out. We can sample the population: instead of asking a billion people what they think, we can ask ten thousand people what they think and apply what we learn to all billion people.

We have to know that the way in which we select those 10,000 people is unbiased, otherwise we’re building for an exclusive portion of the target billion. Send a survey to people’s work email addresses on a Friday, and some will not pick it up until Sunday as their weekend is Fri-Sat. Others will be on holiday, or not checking their email that day, or feeling grumpy and inclined to answer with the opposite of their real thoughts, or getting everything done quickly before the weekend and disinclined to think about your questions at all.

Another technique we use is to simplify the questions—or at least the answers we’ll accept to those questions, to make it easier to combine and aggregate those answers. Now we have not asked “what do you think about this” at all; we have asked “which of these ways in which you might think about this do you agree with?” Because people are inclined to avoid conflict, they tend to agree with us. Ask “to what extent do you agree that spline reticulation is the biggest automation opportunity in widget frobnication?” and you’ll learn something different from the person who asked “to what extent do you agree that spline reticulation is the least important automation opportunity in widget frobnication?”

We’ll get richer information from deeper, qualitative interactions with people, and that tends to mean working with fewer people. At the extreme small end we have one person: an agent talks to their client about what that client would like to see. This is quite an easy case to deal with, because you have exactly one viewpoint to interpret.

Of course, that viewpoint could well be inconsistent. Someone can tell you that they get a lot of independence in how they work, then in describing their tasks list all the approvals and sign-offs they have to get. It can also be incomplete. A manager might not fully know all of the details of the work their reports do; someone may know their own work very well but not the full context of the process in which that work occurs. Additionally, someone may not think to tell you everything about their situation: many activities rely on tacit knowledge that’s hard to teach and hard to explain. So maybe we watch them work, rather than asking them how they work. Now, are they doing what they’re doing because that’s how they work, or because that’s how they behave when they’re being watched?

Their viewpoint could also be less than completely relevant: maybe the client is the person paying for the software, but are they the people who are going to use it? Or going to be impacted by the software’s outputs and decisions? I used the example in the talk of expenses software: very few people when asked “what’s the best software you’ve ever used” come up with the tool they use to submit receipts for expense claims. That’s because it’s written for the accounting department, not for the workers spending their own money.

So, we think to involve more people. Maybe we add people’s managers, or reports, or colleagues, from their own and from other departments. Or their customers, or suppliers. Now, how do we deal with all of these people? If we interview them each individually, then how do we resolve contradiction in the information they tell us? If we bring them together in a workshop or focus group, we potentially allow those contradictions to be explored and resolved by the group. But potentially they cause conflict. Or don’t get brought up at all, because the politics of the situation lead to one person becoming the “spokesperson” for their whole team, or the whole group.

People often think of the productiveness of a software team as the flow from a story being identified as “to do” to working software being released to production. I contend that many of the interesting and important decisions relating to the value and success of the software were made before that limited part of the process.

June 24, 2022

Reading List 292 by Bruce Lawson (@brucel)

June 22, 2022

Lookup by Luke Lanchester (@Dachande663)

Many, many years ago I was an avid reader and writer on various fiction writing websites. There are still links to them on this site, which shows (a) how long they’ve been around and (b) how out of date this site is. Recently I’ve been on a bit of a binge, revisiting my past and re-reading these old stories. Which led me to a quest.

I built up a good rapport with several writers. PMs, and later emails, were traded back and forth, and I learned a bit more about the world as a young and naive kid. Some folks were on the far side of the world, others 30 miles down the road.

I’ve tried to get in touch with a few of these folks recently and hit the inevitable bit rot that seems to pervade the Internet nowadays. Dead emails, links to profiles on sites that no longer exist. It seems there’s no way to reach some folks.

Sleuthing through LinkedIn, Twitter, every open-source channel I can find has yielded no luck.

I guess what I’m trying to say is, it’s inherently unnerving and disheartening knowing there is someone, somewhere, out there in the world who I will probably never get to talk to again.

But you never know.

The field of software engineering doesn’t change particularly quickly. Tastes in software engineering change all the time: keeping up with them can quickly result in seasickness or even whiplash. For example, at the moment it’s popular to want to do server-side rendering of front end applications, and unpopular to do single-page web apps. Those of us who learned the LAMP stack or WebObjects are back in fashion without having to lift a finger!

Currently it’s fashionable to restate “don’t mock an interface you don’t own” as the more prescriptive, taste-driven statement “mocks are bad”. Rather than change my practice (“I use mocks and I’m happy with that”, from 2014, still stands), I’ll ask why this particular taste has arisen.

Mock objects let you focus on the ma, the interstices between objects. You can say “when my case controller receives this filter query, it asks the case store for cases satisfying this predicate”. You’re designing a conversation between independent programs, making restrictions about the messages they use to communicate.

But many people don’t think about software that way, and so don’t design software that way either. They think about software as a system that holistically implements a goal. They want to say “when my case controller receives this filter query, it returns a 200 status and the JSON representation of cases matching that query”. Now, the mocks disappear, because you don’t design how the controller talks to the store, you design the outcome of the request which may well include whatever behaviour the store implements.
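
The difference is easier to see in code. Here is a minimal sketch in Python with unittest.mock (the filter_cases controller and the store are hypothetical, just to mirror the example above; this isn’t anybody’s real test suite):

from unittest.mock import Mock

def filter_cases(query, store):
    # A toy controller action: translate the query into a store call,
    # then wrap whatever the store returns in a response.
    predicate = {"status": query["status"]}
    return {"status_code": 200, "cases": store.find(predicate)}

# Interaction style: assert on the conversation between controller and store.
store = Mock()
filter_cases({"status": "open"}, store)
store.find.assert_called_once_with({"status": "open"})

# Outcome style: the test double only stubs the store's behaviour; the
# assertions are about the response the caller sees, not how the store was asked.
store = Mock()
store.find.return_value = [{"id": 1, "status": "open"}]
response = filter_cases({"status": "open"}, store)
assert response["status_code"] == 200
assert response["cases"] == [{"id": 1, "status": "open"}]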

Of course, tests depending on the specific behaviour of collaborators are more fragile, and the more specific prescription “don’t mock what you don’t control” uses that fragility: if the behaviour of the thing you don’t control changes, you won’t notice because your mock carries on working the way it always did.

That problem is only a problem if you don’t have any other method of auditing your dependencies for fitness for purpose. If you’re relying on some other interface working in a particular way then you should probably also have contract tests, acceptance tests, or some other mechanism to verify that it does indeed work in that way. That would be independent of whether your reliance is captured in tests that use mock objects or some other design.

It’ll only be a short while before mock objects are cool again. Until then, this was an interesting diversion.

My professional other half at Babylon Health, Taylar Bouwmeester, and I invite you to join us on a rollercoaster ride through the merry world of keyboard accessibility. It stars Brad Pitt as me and Celine Dion (she’s Canadian, you know) as Taylar.

June 17, 2022

Help the CMA help the Web by Stuart Langridge (@sil)

As has been mentioned here before, the UK regulator, the Competition and Markets Authority, are conducting an investigation into mobile phone software ecosystems, and they recently published the results of that investigation in the mobile ecosystems market study. They’re also focusing in on two particular areas of concern: competition among mobile browsers, and in cloud gaming services. This is from their consultation document:

Mobile browsers are a key gateway for users and online content providers to access and distribute content and services over the internet. Both Apple and Google have very high shares of supply in mobile browsers, and their positions in mobile browser engines are even stronger. Our market study found the competitive constraints faced by Apple and Google from other mobile browsers and browser engines, as well as from desktop browsers and native apps, to be weak, and that there are significant barriers to competition. One of the key barriers to competition in mobile browser engines appears to be Apple’s requirement that other browsers on its iOS operating system use Apple’s WebKit browser engine. In addition, web compatibility limits browser engine competition on devices that use the Android operating system (where Google allows browser engine choice). These barriers also constitute a barrier to competition in mobile browsers, as they limit the extent of differentiation between browsers (given the importance of browser engines to browser functionality).

They go on to suggest things they could potentially do about it:

A non-exhaustive list of potential remedies that a market investigation could consider includes:
  • removing Apple’s restrictions on competing browser engines on iOS devices;
  • mandating access to certain functionality for browsers (including supporting web apps);
  • requiring Apple and Google to provide equal access to functionality through APIs for rival browsers;
  • requirements that make it more straightforward for users to change the default browser within their device settings;
  • choice screens to overcome the distortive effects of pre-installation; and
  • requiring Apple to remove its App Store restrictions on cloud gaming services.

But, importantly, they want to know what you think. I’ve now been part of direct and detailed discussions with the CMA a couple of times as part of OWA, and I’m pretty impressed with them as a group; they’re engaged and interested in the issues here, and knowledgeable. We’re not having to educate them in what the web is. The UK’s potential digital future is not all good (and some of the UK’s digital future looks like it could be rather bad indeed!) but the CMA’s work is a bright spot, and it’s important that we support the smart people in tech government, lest we get the other sort.

So, please, take a little time to write down what you think about all this. The CMA are governmental: they have plenty of access to windy bloviations about the philosophy of tech, or speculation about what might happen from “influencers”. What’s important, what they need, is real comments from real people actually affected by all this stuff in some way, either positively or negatively. Tell them whether you think they’ve got it right or wrong; what you think the remedies should be; which problems you’ve run into and how they affected your projects or your business. Earlier in this process we put out calls for people to send in their thoughts and many of you responded, and that was really helpful! We can do more this time, when it’s about browsers and the Web directly, I hope.

If you feel as I do then you may find OWA’s response to the CMA’s interim report to be useful reading, and also the whole OWA twitter thread on this, but the most important thing is that you send in your thoughts in your own words. Maybe what you think is that everything is great as it is! It’s still worth speaking up. It is only a good thing if the CMA have more views from actual people on this, regardless of what those views are. These actions that the CMA could take here could make a big difference to how competition on the Web proceeds, and I imagine everyone who builds for the web has thoughts on what they want to happen there. Also there will be thoughts on what the web should be from quite a few people who use the web, which is to say: everybody. And everybody should put their thoughts in.

So here’s the quick guide:

  1. You only have until July 22nd
  2. Read Mobile browsers and cloud gaming from the CMA
  3. Decide for yourself:
    • How these issues have personally affected you or your business
    • How you think changes could affect the industry and consumers
    • What interventions you think are necessary
  4. Email your response to browsersandcloud@cma.gov.uk

Go to it. You have a month. It’s a nice sunny day in the UK… why not read the report over lunchtime and then have a think?

June 16, 2022

Bizarrely, the Guinness Book of World Records lists the “first microcomputer” as 1980’s Xenix. This doesn’t seem right to me:

  1. Xenix is an operating system, not a microcomputer.
  2. Xenix was announced in 1980 but not shipped until 1981.
  3. The first computer to be designed around a microprocessor is also the first computer to be described in patents and marketing materials as a “microcomputer”—the Micral N.

June 06, 2022

Posts by John Sear (@DiscoStu_UK)

June 01, 2022

Reading List 291 by Bruce Lawson (@brucel)

  • Link o’the week: A Management Maturity Model for Performance – “Despite advances in browser tooling, automated evaluation, lab tools, guidance, and runtimes, however, teams struggle to deliver even decent performance with today’s popular frameworks. This is not a technical problem per se — it’s a management issue, and one that teams can conquer with the right frame of mind and support” by Big Al Russell
  • The tech tool carousel by Andy Bell
  • Internet Explorer retires on June 15, 2022 – For a long time, it was my browser of choice for downloading Firefox.
  • React Native Accessibility – GAAD 2022 Update – React Native accessibility is dragging itself into the 21st century
  • HTML Sanitizer API – Chromium & Firefox intend to ship a new HTML Sanitizer API, which developers can use to remove content that may execute script from arbitrary, user-supplied HTML content. The goal is to make it easier to build XSS-free web applications.
  • W3C Ethical Web Principles – “The web should be a platform that helps people and provides a positive social benefit. As we continue to evolve the web platform, we must therefore consider the consequences of our work. The following document sets out ethical principles that will drive the W3C’s continuing work in this direction”
  • What’s new for the web platform – Jake and Una at Google i/o yesterday showing new web platform features. Highlights: native <dialog> element for popup dialogs that has a11y baked in, the ability to give accent colours in CSS to form controls, declarative lazy loading of off-screen/ less important images. This should allow us to remove hacky components from our web pages, so they’ll be faster (as they’re in the browser) and more likely to be secure and accessible. Can we kill crappy framework dialogs and other form components and replace them with native browser-based equivalents?
  • The page transition API uses CSS animations for highly customisable wiggly things to make your sites and PWAs feel more natively app-like. It’s really nice (don’t be put off by Jank Architect’s infomercial demeanour)
  • Debugging accessibility with Chrome DevTools – another Google i/o vid
  • WordPress’ market share is shrinking – “If WordPress wants to maintain its market share or better yet, grow it, it’ll have to get its act together. That means it should focus on the performance of these sites across the spectrums of site speed and SEO. The Full Site Editing project is simply taking far too long. That’s causing the rest of the platform to lag behind current web trends.”
  • Responsive layouts for large screen development – “More than 250 million large screen Android devices are currently in use, including tablets, foldables, and Chrome OS.”
  • The UK’s Digital Markets Unit: we’re not making any progress, but we promise we will “in due course” – “at least this document confirms that it is still Government policy to do it “in due course” and “when Parliamentary time allows””
  • Porting Zelda Classic to the Web – deep dive into the technical challenge to port an ancient game written in C++ to Web Assembly

On self-taught coders by Graham Lee

When a programmer says that they are ‘self-taught’ or that they “taught themselves to code”, what do they mean by it?

Did they sit down at a computer, with reference to no other materials, and press buttons and click things until working programs started to emerge?

It’s unlikely that they learned to program this way. More probable is that our “self-taught” programmer had some instruction. But what? Did they use tutorials or reference content? Was the material online, printed, or hand written? Did it include audio or visual components? Was it static or dynamic?

What feedback did they get? Did their teaching material encourage reflection, assessment, or other checkpoints? Did they have access to a mentor or community of peers, experts, or teachers? How did they interact with that community? Could they ask questions, and if so what did they do with the answers?

What was it that they taught themselves? Text-based processing routines in Commodore BASIC, or the Software Engineering Body of Knowledge?

What were the gaps in their learning? Do they recognise those gaps? Do they acknowledge the gaps? Do they see value in the knowledge that they skipped?

And finally, why do they describe themselves as ‘self-taught’? Is it a badge of honour, or of shame? Does it act as a signal for some other quality? Why is that quality desirable?

May 25, 2022

Normally, I bang on endlessly about Web Accessibility, but occasionally branch out to bore about other things. For Global Accessibility Awareness Day last week, my employers at Babylon Health allowed me to publish a 30 min workshop I gave to our Accessibility Champions Network on how to make accessible business documents. Ok, that might sound dull, but according to I.M.U.S., for every external document an organisation publishes, it generates 739 for internal circulation. I’m using Google Docs in the talk, but the concepts are equally applicable to Microsoft Word, Apple Pages, and to authoring web content.

It’s introduced by my Professional Better Half, Taylar Bouwmeester–recipient of the coveted “Friendliest Canadian award” and winner of a gold medal for her record of 9 days of unbroken eye contact in the all-Canada Games–and then rapidly goes downhill thereafter. But you might enjoy watching me sneeze, sniff, and cough because I was under constant assault from spring foliage jizzing its pollen up my nostrils. Hence, it’s “R”-rated. Captions are available (obvz) – thanks Subly!

Ken Kocienda (unwrapped twitter thread, link to first tweet):

I see so many tweets about agile, epics, scrums, story points, etc. and none of it matters. We didn’t use any of that to ship the best products years ago at Apple.

Exactly none of the most common approaches I see tweeted about all the time helped us to make the original iPhone. Oh, and we didn’t even have product managers.

Do you know what worked?

A clear vision. Design-led development. Weekly demos to deciders who always made the call on what to do next. Clear communication between cross functional teams. Honest feedback. Managing risk as a function of the rate of actual progress toward goals.

I guess it’s tempting to lard on all sorts of processes on top of these simple ideas. My advice: don’t. Simple processes can work. The goal is to ship great products, not build the most complex processes. /end

We can talk about the good and the bad advice in this thread, and what we do or don’t want to take away, but it’s fundamentally not backed up by strong argument. Apple did not do this thing that is talked about now back in the day, and Apple is by-our-lady Apple, so you don’t need to do this thing that is talked about now.

There is lots that I can say here, but my secondary thing is to ask how much your organisation and problem look like Apple’s organisation and problem before adopting their solutions, technological or organisational.

My primary thing is that pets.com didn’t use epics, scrums, story points, etc. either. Pick your case studies carefully.

May 21, 2022

Here is my personal submission to the U.S. National Telecommunications and Information Administration’s report on Competition in the Mobile App Ecosystem. Feel free to steal from it and send yours before 11:59 p.m. Eastern Time on Monday May 23, 2022. I also contributed to the Open Web Advocacy’s response.

I am a UK-based web developer and accessibility consultant, specialising in ensuring web sites are inclusive for people with disabilities or who experience other barriers to access–such as living in poorer nations where mobile data is comparatively expensive, networks may be slow and unreliable and people are generally accessing the web on cheap, lower-specification devices.

Although I am UK-based, I have clients around the world, including the USA. And, of course, because the biggest mobile platforms are Android and iOS/iPad, I am affected by the regulatory regime that applies to Google and Apple. I write in a personal capacity, and am not speaking on behalf of any clients or employers, past or present. You have my permission to publish or quote from this document, with or without attribution.

Many of my clients would like to make apps that are Progressive Web Applications. These are apps that are websites, built with long-established open technologies that work across all operating systems and devices, and enhanced to be able to work offline and have the look and feel of an application. Examples of ‘look and feel’ might be to render full-screen; to be saved with their own icon onto a device’s home screen; to integrate with the device’s underlying platform (with the user’s permission) in order to capture images from the camera; use the microphone for video conferencing; to send push notifications to the user.

The benefits of PWAs accrue to both the developer (and the business they work for) and the end user. Because they are based on web technology, a competent developer need only make one app that will work on iOS and Android, as well as on desktop computers and tablets. This write-once approach has obvious benefits over developing a single-platform (“native”) app for iOS in addition to a single-platform app for Android and also a website. It greatly reduces costs because it greatly reduces the complexity of development, testing and deployment.

The benefits to the user are that the initial download is much smaller than that for a single-platform app from an app store. When an update to the web app is pushed by a developer to the server, the user only downloads the updated pages, not the whole application. For businesses looking to reach customers in growing markets such as India, Indonesia, Nigeria and Kenya, this is a competitive advantage.

In the case of users with accessibility needs due to a disability, the web is a mature platform on which accessibility is a solved problem.

However, many businesses are not able to offer a Progressive Web App, largely due to Apple’s anti-competitive policy of requiring all browsers on iOS and iPad to use its own engine, called WebKit. Whereas Google Chrome on Mac, Windows and Android uses its own engine (called Blink), and Firefox on non-iOS/iPad platforms uses its own rendering engine (called Gecko), Apple’s policy requires Firefox and Chrome on iOS/iPad to be branded skins over WebKit.

This “Apple browser ban” has the unfortunate effect of hamstringing Progressive Web Apps. Whereas Apple’s Safari browser allows web apps (such as Wordle) to be saved to the user’s home screen, Firefox and Chrome cannot do so–even though they all use WebKit. While single-platform iOS apps can send push notifications to the user, browsers are not permitted to. Push notifications are high on businesses’ priority lists because of how they can drive engagement. WebKit is also notably buggy and, with no competition on the iOS/iPad platform, there is little to incentivise Apple to invest more in its development.

Apple’s original vision for applications on iOS was Web Apps, and today they still claim Web Apps are a viable alternative to the App Store. Apple CEO Tim Cook made a similar claim last year in Congressional testimony when he suggested the web offers a viable alternative distribution channel to the iOS App Store. They have also claimed this during a court case in Australia with Epic.

Yet Apple’s own policies prevent Progressive Web Apps being a viable alternative. It’s time to regulate Apple into allowing other browser engines onto iOS/iPad and giving them full access to the underlying platform–just as they currently are on Apple’s MacOS, Android, Windows and Linux.

Yours,

Bruce Lawson


May 20, 2022

There are some files that you just pretty much never want to commit to version control. Depending on your needs, this might be something like your node_modules folder or it could be the .DS_Store file created by MacOS that holds information about the containing folder.

You can configure git to take advantage of a global .gitignore file for these cases.

  1. Create your global ignore file somewhere on your computer. The name and location aren’t actually important, but I call mine .gitignore_global and keep it in my home directory.
  2. Configure your global git configuration to read it with the following command.
git config --global core.excludesfile ~/.gitignore_global

You should now find that git will ignore any files mentioned in your global gitignore file as well as your project specific ones.
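
For illustration, a global ignore file like this might contain nothing more exotic than a few OS and editor artefacts; these are just example patterns, not a prescription:

.DS_Store
Thumbs.db
*.swp
.idea/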

This is good for you, but it’s also good for the whole team. With each developer responsible for ignoring their own OS specific files or IDE preferences, the project .gitignore can be used to focus exclusively on project specific artefacts.

May 19, 2022


Ok, so you’re making a React or React Native app. Don’t! Make a Progressive Web App. Sprinkle some Trusted Web Activity goodness to put it in the Play store, or wrap it with Capacitor.js if it needs push notifications or needs to go in the App Store (until the EU Digital Markets Act is ratified so Apple is required to allow more capable browsers on iOS).

But maybe you’re on a project that is already React Native, perhaps because some psycho manager flew in, demanded it and then returned to lurk in Hades. In which case, this might help you.

Testing

I like Expo (and wrote some random Expo tips). Expo Snacks are like ‘codepens’ for React Native.

Bugs?

Open Accessibility bugs – Facebook’s official list, and accompanying blog post.

May 11, 2022

There is a difference between a generalist software engineer, and a polyglot programmer. What is that difference, and why did I smoosh the two together in yesterday’s post?

A polyglot programmer is a programmer who can use, or maybe has used, multiple programming languages. That doesn’t mean that they have relevant experience or skills in any other part of software engineering. They might, and so might a monoglot programmer. But (in a work context) they’re working as a programmer and paid for doing the programming.

A generalist software engineer has knowledge and experience relevant to the rest of the software engineering endeavour. Not just the things that programmers need to know beyond the programming, but the whole practice.

In my experience, a generalist software engineer is usually also a polyglot programmer, for a couple of reasons.

  • to get enough experience at all the bits of software engineering, the generalist’s career has probably survived multiple programming language hype cycles.
  • in getting that experience, they have probably had to work with different languages in different contexts: whatever their software products are made out of; scripts for deployment and operations; plugins for various tools; changes to existing components.

More importantly, the programming part of the process is a small (crucial, admittedly, small nonetheless) intersection with the generalist’s work. Knowing all of the gotchas of any one language is something they can ask the specialists about. Fred Brooks would have had a Chief Programmer who knew what the team was trying to do, and a language lawyer who specialised in the tricks of the programming language.

Probably the closest thing to the generalist software engineer that many organisations know how to pay for in modern times is the “agile consultant”. I don’t think it’s strictly a fair comparison. Strictly speaking, an agile consultant is something like a process expert, systems analyst, or cybernetician. They understand how the various parts of the software engineering endeavour affect each other, and can help a team to improve its overall effectiveness by knowing which part needs to change and how.

And, because they use the word agile in their title, they do this with an air of team empowerment and focus on delivery.

Knowing how the threat model impacts on automated testing enables process improvement, but does not necessarily make a software engineering generalist. An actually-general generalist could perhaps also do the threat modelling and create the automated tests. To avoid drawing distinctions between genuine and inauthentic inhabitants of Scotland let’s say that a generalist can work in ‘several’ parts of the software engineering field.

We come back to the problem described in yesterday’s post, but without yesterday’s programming-centric perspective. A manager knows that they need a security person and a test automation person, but does not necessarily know how to deal with a security-and-test-automation-and-maybe-other-things person.

This topic of moving from a narrow focus on programming to a broader understanding of software engineering is the subject of my newsletter: please consider signing up if this topic interests you.

May 10, 2022

After publishing podcast Episode 53: Specialism versus generality, Alan Francis raised a good point:

This could be very timely as I ponder my life as a generalist who has struggled when asked to fit in a neat box career wise.

https://twitter.com/PossiblyAlan/status/1523755064879632384

I had a note about hiring in my outline for the episode, and for some reason didn’t record a segment on it. As it came up, I’ll revisit that point.

It’s much easier to get a job as a specialist software engineer than as a generalist. I don’t think that’s because more people need specialists than need generalists, though I do think that people need more specialists than they need generalists.

For a start, it’s a lot easier to construct an interview for a specialist. Have you got any experience with this specialism? What have you done with it? Do you know the answers to these in-group trick questions?

Generalists won’t do well at that kind of question. Why bother remembering the answer to a trick question about some specific technology, when you know how to research trick answers about many technologies? But the interviewer hears “doesn’t even know the first trick answer” and wonders how do I know you can deliver a single pint of software on our stack if you can’t answer a question set by a junior our-stack-ist?

If you want to hire a generalist software engineer…ah. Yes. I think that maybe some people don’t want to, whether or not they know what the generalist would do. They seem to think it’s a “plural specialist”, and that a generalist would know all the trick questions from multiple specialisms.

This is the same thinking that yields “a senior developer is like a junior developer but faster”; it is born of trying to apply Taylorian management science to knowledge work: a junior Typescript programmer can sling a bushel of Typescript in a day. Therefore a senior Typescript programmer can sling ten gallons, and a generalist programmer can sling one peck of Typescript, two of Swift, and one of Ruby in the same time.

I think that the hiring managers who count contributions by the bushel are the ones who see software engineering as a solitary activity. “The frontend folks aren’t keeping pace with the backend folks, so let’s add another frontend dev and we’ll have four issues in progress instead of three.” I have written about the flawed logic behind one person per task before.

A generalist may be of the “I have solved problems using computers before, and can use computers to solve your problem” kind, in which case I might set aside an hour to pair with them on solving a problem. It would be interesting to learn both how they solve it and how they communicate and collaborate while doing so.

Or they may be of the “I will take everybody on the team out to lunch, then the team will become better” kind. In which case I would invite them to guest-facilitate a team ceremony, be it an architecture discussion, a retrospective, or something else, and see how they uncover problems and enable solutions.

In each case the key is the collaboration. A software engineering generalist might understand and get involved with the whole process using multiple technologies, but that does not mean that they do all of the work in isolation themselves. You don’t replace your whole team, but you do (hopefully) improve the cohesiveness of the whole team’s activity.

Of course, a generalist shouldn’t worry about trying to get hired despite having to sneak past the flawed reasoning of the bushel of trick questions interview. They should recognise that the flawed reasoning means that they won’t work well with that particular manager, and look elsewhere.

May 06, 2022

Reading List 290 by Bruce Lawson (@brucel)

USA readers: you have just over 2 weeks to tell the US regulator your thoughts on the Apple Browser Ban, whether you’re in favour of Apple allowing real browser choice on iOS by setting Safari free, or against it. You’re welcome to use Bringing Competition to Walled Gardens, our response to a similar investigation by the UK Competition and Markets Authority for inspiration/ cutting and pasting. Make your voice heard!

There are still many situations where it’s not feasible to stop a process, attach the debugger, and start futzing with memory. We can argue later over whether this is because the industry didn’t learn enough from the Pharo folks. For now, let’s pretend that’s axiomatic: a certain amount of debugging (and even testing) starts with asking the program to report on its state.

It’s common to do this with print statements, whatever they’re called in your language. The technique is even called “printf debugging” sometimes. Some teams even have lint rules to stop you merging changes that call System.out.println or console.error because they know you’re all doing it. I think you should carry on doing it, and you should be encouraged to commit the changes.

Just don’t call your print function. This isn’t for any performance/timing/buffer flushing reason, though those are sometimes relevant and in the relevant cases sometimes problematic and in the problematic cases sometimes important. It’s more because it’s overwhelming. Instead, call your syslog/OSLog/logger function, with an appropriate severity level (probably DEBUG) and some easily filterable preamble. Now commit those log messages.
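
As a minimal Ruby sketch of the idea (the LOG constant and the “[ResponseParser]” preamble are invented for illustration; use whatever logging facility your platform provides):

require 'logger'

LOG = Logger.new($stdout)
LOG.level = Logger::INFO # flip to Logger::DEBUG in the field when you need the detail

def parse_response(body)
  # committed alongside the code: DEBUG severity plus an easily filterable preamble
  LOG.debug("[ResponseParser] raw body: #{body.inspect}")
  # ... the actual parsing ...
end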

One benefit of doing this is that you capture the fact that you need this information to diagnose problems in this module/subsystem/class/whatever. Next time you have the problem, you already know at least some of the information that will help.

Another benefit is that you can enable this logging in the field without having to deploy a new version with different printf statements. Just change the log level or the capture filter and start getting useful information. Change it back when you’re done.

There are caveats here: if the information you need to log is potentially sensitive (personal information, crypto material) you may be better off not having any way to turn it on in production.

The third benefit is that you communicate to everybody else that this part of the code is hard to understand and needed careful inspection. This can be the motivation the team needs to discuss a redesign, or it can help other people find smoking guns when trying to diagnose other failures.

May 03, 2022

It’s hard to argue that any one approach to, well, anything in software is better or worse than any others, because very few people are collecting any data and even fewer are reporting what they’re trying. Hardest of all is understanding how requirements are understood, prioritised, and implemented. Companies can be very opaque when it comes to deciding what they do and when they do it, even if you’re on the inside.

This can be frustrating: if you can see what you think is the path between here and all the money, then it can be enervating to find the rest of the organisation is following a different path. Doubly so if you can’t see, and aren’t shown, any reason to follow that path.

What we do know is that the same things that work in small companies don’t tend to work in large ones: search for any company name X in the phrase “why doesn’t X listen to customer feedback” and you’ll find multiple complaints. Customers get proxied, or aggregated, or weighed against potential customers. We feel like they aren’t listening to us, and it’s because they aren’t. Not to us as individuals anyway. If a hundred thousand of us are experiencing kernel panics or two million of us have stopped engaging with feed ads, something will get done about it.

That said, there are some things I’ve seen at every scale that need to be in balance for an organisation to be pointing in the right direction. Three properties of how they work out what they’re doing, that need to be in synergy or at least balanced against one another. I now ask about these three things when I meet with a new (to me) company, and use these as a sense check for how I think the engagement is going to work out.

Leadership Vision

One third is that the company has some internal driving force telling it which way to go and what challenges to try to solve. Without this, it will make do with quick wins and whatever looks easy, potentially even straying a long way from its original mission as it takes the cheapest step in whatever direction looks profitable. That can lead to dissatisfied employees, who joined the company to change the world and find they aren’t working on what was promised.

On the other hand, too much focus on the vision can lead to not taking material reality into account. A company that goes bust because “our product was ten years ahead of its time, the customers weren’t ready” is deluding itself: it went bust because the thing they wanted to sell was not something enough other people wanted to buy.

Market Feedback

One third is that the company has some external input telling it what customers are trying to do, what problems people have, what people are willing to pay for those problems to go away, and what competitors are doing to address those problems. Without this, the company will make things that people don’t want to buy, lose out on sales opportunities because they don’t describe what they have in a way that makes people want it, or will find themselves outcompeted and losing to alternative vendors.

On the other hand, too much focus on market feedback can lead to a permanently unsatisfying death march. Sales folks will be sure that they can close the next big deal if only they had this one other feature, and always be one other feature from closing that deal. Customers can always find something to be unhappy about if sufficiently prodded, so there will always be more to do.

Technical Reality

The third third is that the company has some internal feedback mechanism telling it what is feasible with the assets it already has, what the costs and risks (including opportunity costs) are of going in any direction, and what work it needs to do now to enable things that are planned in the future. Without this, forward progress collapses under the weight of what is erroneously called technical debt. Nothing can be done, because doing anything takes far too long.

On the other hand, too much focus on technical desiderata can lead to the permanent rewrite. Technical folks always have a great idea for how to clean things up, and even if all of their suggestions are good ones there comes a time when a lumberjack has to accept that the next action is not to replace their axe sharpeners but to cut down a tree. Features are delayed until the Next Great Replatforming is complete, which never comes because the Subsequent Replatforming After That One gets greenlit.

The Trifecta

I don’t think it’s particularly MBA material to say that “a company should have a clear leadership vision moderated by marketing reality and internal capability”. But in software I see the three out of kilter often enough that I think it’s worth writing the need down in the hope that some people read it.

I was just changing error handling around net timeouts, and naturally I wanted to test it. There are plenty of good tools floating around for throttling your network connection to test under different speeds and conditions, but frankly I couldn’t be bothered with all that.

The Ruby:

require 'socket'

# Listen on port 9999, accept each connection, then just sit on it for a
# minute before closing, so the client's timeout handling gets exercised.
t = TCPServer.new(9999)

while s = t.accept
  puts 'accepted connection, snoozin :)'
  sleep 60
  s.close
end

The bash:

ruby -rsocket -e "t = TCPServer.new(9999); while s = t.accept; puts 'accepted connection, snoozin :)'; sleep 60; s.close; end"

April 29, 2022

Reading The Well Grounded Rubyist, I was reminded that among the operators you can override on an object is the unary bang.

This is the operator that you would normally use to find the logical negation of an object. Since Ruby gives us the concepts of truthiness and falsiness, this means that the logical negation of all values other than nil or false is false.

The result is that people often (inadvisably) use ! as a presence check, e.g.

book = Book.find_by(title: 'The Well Grounded Rubyist')
handle_missing_book if !book

In this case, overriding the unary bang for an object seems like a potential footgun. What if we override this method like so?

class Thing
  def !
    true # deliberately return something truthy instead of the expected false
  end
end

We end up with the following scenario:

my_thing = Thing.new

# Executes. Our object isn't nil or false, so we expected this.
do_thing if my_thing

# Doesn't execute. Still going to plan.
do_thing unless my_thing

# Both execute. What?!
do_thing if !my_thing
do_thing if not my_thing

This is confusing, and I immediately take two things from it:

  1. Don’t rely on truthiness to check presence; perform an explicit call to #nil? (or #blank? or #present? if you’re using Rails). See the one-liner after this list.
  2. You shouldn’t override the unary bang method.
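
As a one-liner illustrating the first takeaway (a sketch only), the earlier lookup becomes:

book = Book.find_by(title: 'The Well Grounded Rubyist')
handle_missing_book if book.nil?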

I’ll stand by the first one, but surely there must be exceptions to the second. It seems like such a bad idea that I really want there to be exceptions.

Please let me know if you’ve got any.

April 28, 2022

In the USA, the National Telecommunications and Information Administration (NTIA) is requesting comments on competition in the mobile application ecosystem after Biden signed Executive Order 14036 on Promoting Competition in the American Economy:

today a small number of dominant internet platforms use their power to exclude market entrants, to extract monopoly profits, and to gather intimate personal information that they can exploit for their own advantage. Too many small businesses across the economy depend on those platforms and a few online marketplaces for their survival

NTIA is looking for “concrete and specific information as to what app developers, organizations, and device (i.e.,phones; tablets) users experience, and any potential challenges or barriers that limit app distribution or user adoption”. Written comments must be received on or before 11:59 p.m. Eastern Time on May 23, 2022.

Several of its questions encompass Apple hamstringing Progressive Web Apps by requiring all iThing browsers use its own bundled WebKit framework, which has less power than Safari or single-platform iOS apps. Here are some of the questions:

  • How should web apps (browser-based) or other apps that operate on a mobile middleware layer be categorized?
  • What unique factors, including advantages and obstacles, are there generally for app development — especially start-ups — that are relevant for competition? 
  • Are there studies or specific examples of the costs or advantages for app developers to build apps for either, or both, of the main operating systems, iOS and Android (which have different requirements)? 
  • What other barriers (e.g.,legal, technical, market, pricing of interface access such as Application Programing Interfaces [APIs]) exist, if any, in fostering effective interoperability in this ecosystem?
  • How do policy decisions by firms that operate app stores, build operating systems, or design hardware impact app developers (e.g., terms of service for app developers)?
  • How do, or might, alternative app stores (other than Google Play or the Apple App Store), affect competition in the mobile app ecosystem?
  • What evidence is there to assess whether an app store model is necessary for mobile devices, instead of the general-purpose model used for desktop computing applications?
  • Is there evidence of legitimate apps being rejected from app stores or otherwise blocked from mobile devices? Is there evidence that this is a common occurrence or happens to significant numbers of apps?
  • Are there specific unnecessary (e.g., technical) constraints placed on this ability of app developers to make use of device capabilities, whether by device-makers, service providers or operating system providers, that impact competition?

I urge American developers to send comments to NTIA, whether you’re in favour of Apple allowing real browser choice on iOS by setting Safari free, or against it. You’re welcome to use Bringing Competition to Walled Gardens, our response to a similar investigation by the UK Competition and Markets Authority for inspiration/ cutting and pasting. Make your voice heard!

I was in a few interviews last year. One of the things to come up was the question of whether to share interview questions in advance. Opinions seemed split.

This isn’t a comprehensive post and I don’t have any iron-clad answers. If you happen to read this and have any compelling arguments either way, I’d love to hear them.

Let’s start with the pros.

You level the playing field. Not everyone is comfortable answering questions on the spot while under pressure. Plenty of people will have fantastic answers to the common question format of “tell me about a time when…” but freeze when the spotlight is on them. If the role in question seldom requires immediate answers to difficult questions, your process is filtering people out based on an irrelevant skill.

You get more relevant answers. If you ask me to tell you about a time I resolved a difficult disagreement with a coworker, I’m going to have a better answer if I have five minutes to think about it than if I feel the pressure to say something within 30 seconds or blow the interview.

And the cons?

The interview becomes a script reading exercise. There are a couple of ways to reduce this risk:

  • Send topics rather than verbatim questions.
  • Ask follow-up questions. There’s no reason you can’t use the candidate’s answer as a jumping off point to probe deeper.

The candidate might over-prepare. This concern was actually raised by potential candidates. This could be mitigated by stressing up-front that the questions are jumping off points.

The candidate might cheat. In theory, a candidate with prior knowledge of the questions can pull the wool over your eyes with brilliantly prepared examples based on totally fictional experience.

My only thought here is that it’s common for a company to have a probationary period in which you should be able to tell if someone has lied during an interview and take the appropriate action. If your company isn’t equipped to detect that someone isn’t qualified during that period, they probably didn’t need the unfair advantage in the interview.

Overall, I think giving candidates some opportunity to prepare in advance is going to result in higher quality interviews with happier candidates. How about you?

April 22, 2022

Reading List 289 by Bruce Lawson (@brucel)

April 14, 2022

On or Between by Graham Lee

The new way to model concurrency is with coroutines (1963), i.e. the async/await dance or (building upon) call-with-concurrent-continuation. The new new way is with actors (1973), and the old and busted ways are with threads (1966), and promises (1976).

As implemented in many programming languages, these ideas are annotations on a piece of logic, making the concurrency model an existential part of the software model. Would we say that a payroll budget is a serial queue, or that awarding a pay rise is a coroutine? We probably would not, but our software tools make us do so.

This is not necessary. For example, in Eiffel’s SCOOP (previously discussed on this very blog) the change that is made to the model is that objects indicate when their collaborators can have separate execution contexts. They do not need to: turn off the parallel execution engine and everything runs sequentially. Otherwise, the details of how, and if, objects are running in different contexts are not part of the solution model.

Various other solutions rely on the call as a binding point in software execution, to allow the asynchronous nature to be injected as a connector between parts of the solution model. So in Objective-C or Smalltalk you might inject futures or queue confinement via proxy objects. Then your solution model hasn’t got any noise about concurrent execution at all, but your implementation model still gets to choose where code gets run when one part of the solution sends a message to another part.
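
As a rough Ruby sketch of that connector idea (the names QueueConfinedProxy and PayrollAccount are invented for illustration, and this is my reading of the approach rather than anything from the original examples): a proxy forwards every message to its target on a single background thread, so the solution objects themselves say nothing about concurrency.

class QueueConfinedProxy
  def initialize(target)
    @target = target
    @queue  = Queue.new
    Thread.new { loop { @queue.pop.call } } # one confined execution context
  end

  # Forward any message to the target, but run it on the worker thread.
  def method_missing(name, *args, &block)
    @queue << -> { @target.public_send(name, *args, &block) }
    nil # fire-and-forget; a fuller sketch would hand back a future
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name, include_private)
  end
end

# payroll = QueueConfinedProxy.new(PayrollAccount.new)
# payroll.award_pay_rise(500) # the calling code carries no concurrency noise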

What’s the difference? To me it’s one of focus: if I want a clearly correct implementation of my solution model that might be an efficient program, then I would choose the connector approach: putting implementation details between my solution elements. If I want a clearly efficient program that might be a correct implementation of my solution model, then I would choose the annotation approach: putting implementation details on my solution elements. In my experience more of the software I’ve worked on has worked incorrectly than worked poorly, and when it’s worked poorly I haven’t wanted to change its correctness, so I would prefer to separate the two.

None of this is a complaint about any tools, or any libraries: you can build connectors when your tools give you annotations, by putting the annotations on things between your solution models. You can build annotations when your tools give you connectors, by hard-coding the use of connectors in the solution implementations. It’s simply a question of what your design makes obvious.

April 06, 2022

Apple have shared initial timings for this year’s WorldWide Developer Conference. In typical in-person years this would be the trigger for various “WWDC attendee tips” posts (don’t forget to drink water! Remember to sleep sometime through the week! Don’t go to the Moscone centre, they’ve moved the conference!) but that has not been the case through the pandemic. Instead WWDC has been fully online, so you just need to get the Developer app and watch the videos.

This year, it’s sort of hybrid, in that it appears the event will be online-first with a watch party of sorts on the first day. This happened at the fully in-person events anyway, at least at the Moscone: the keynote room filled up quickly and attendees were directed to other rooms to watch a stream. Other talks would be streamed to screens around the conference venue: I remember watching Crusty’s guide to protocol-oriented programming at an in-conference sports bar with a couple of good friends.

It’s also a great way to run a hybrid event: it’s much too easy (as those of us who worked remote in the pre-pandemic times will remember) for online attendees to be second-class citizens in the hybrid world. Making it clear that the event is an online event with the ability to engage from an on-site presence removes that distinction.

Some people will stay away, on the basis that travelling all the way to California to watch AppleTV is not a compelling use of resources. Honestly with this pandemic not being over anywhere except the minds of the politicians who need sacrifices to the line, that’s not a bad thing. Except that these people will miss out on networking, which is a bad thing.

Networking is such a big part of WWDC that plenty of people who didn’t have tickets to the for-realsies iterations would go anyway, maybe going to after parties, maybe going to AltConf (another opportunity to watch a stream of WWDC, alongside original talks by community members). But that was for a week of networking, not a day of watching TV.

That’s OK. Hopefully online watch parties, and local watch parties, will spring up, making the networking side of WWDC more accessible. Making WWDC truly world-wide.

April 05, 2022

I just heard someone using the phrase “first-class citizen” in a programming podcast, and that led me to ponder the use of that phrase. The podcast was Swift Package Manager SuperPowers from Empower Apps. Empower’s a great podcast, this is a great episode, and the idea of first-class citizenship comes up only in passing and is basically immaterial. But not to this post!

Whatever the situation that leads someone to say “first-class citizen”, there’s a value judgement being made: this thing is good, because it is more conformant to the (probably unspoken) rules of the road of the ecosystem, platform, or whatever the thing is supposed to be a citizen of.

Many of us in the privileged software engineering world live in societies that do not have overt “levels” of citizenship. That said, there still are multiple levels: nationals, resident aliens, temporary visitors, and prisoners may all have different rights and responsibilities. And in societies where there are still explicit or tacit class hierarchies, making reference to them is often somewhere between impolite and offensive. So this idea of “first-class citizenship” comes with a big side-wink: it’s a first-class citizen, we both know what I mean.

An obvious way for a technology to be a first-class citizen of something is for it to be made, distributed, or otherwise blessed by the maker of the something it’s a citizen of. That’s the context in this show: Swift Package Manager is a first-class citizen of the Apple software development platform because it’s a first-party component. Now, it’s a first-party component with the job of giving access to third-party components, so there’s a limit to the vendor ownership, but nonetheless it is there.

In this sense, first-class citizenship confers clear benefits. If someone is already playing in the Apple tools sandpit, they already have access to SwiftPM. They may not already have access to CocoaPods, so the one step “fetch all the packages” becomes two steps: fetch the package tool, then fetch all the packages.

That bit is easier, but it evidently isn’t sufficient. Is the other tool better at fetching packages correctly, or better for writing packages, or more secure, or easier to use? When we say “better”, better at what, and for whom?

It’s possible for something that is first-party to not be first-class. Mac OS X came with the Tcl language for a couple of decades but I can’t find evidence online that it was ever referred to as a “first-class citizen” of the Apple ecosystem. In 2022 you wouldn’t call OpenDoc or the Macintosh Runtime for Java first-class citizens either, because the vendor no longer supports them. Actually it’d be an interesting exercise to take an old Apple Developer Connection CD (the earliest I have will be from around 2005), and find out how much of that content is still supported, and of that subset how much you could get broad agreement for the first-class nature of. I’d be willing to bet that even though ObjC is still around, distributed, supported, and developed, a decent chunk of the community would think twice about calling it first-class.

But then, it’s also possible for things that are third-party to be first-class. Apparently, Java is a first-class citizen in a Docker ecosystem now. And Data should be a first-class citizen in the Enterprise (this is, by the way, a spoiler for the Star Trek: The Next Generation episode the measure of a man).

When third-party things are first-class, we’re saying that going the extra step of getting this thing you don’t already have is better than stopping here and using what you already own. Again we have the questions of better at what and for whom. Really any of this stuff lies on a continuum. Consider a database. You use a database because it’s cheaper, faster, and better to turn your stuff into the database’s model and use the database vendor’s structures and algorithms than it is to design your own efficient storage and retrieval format. If you could do that well (and that’s a big if) then your hand-rolled thing would probably be better for your application than the database, but also all maintenance work falls onto a very small community: you. On the other hand you could use whatever comes in the box, which has the exact opposite characteristics. They each have different types of first-classness.

And then there’s a (very old, but very common) definition of a data type being first-class or not depending on whether they can be represented by variables or expressions. So when Javascript developers say “functions are first-class citizens in Javascript”, they mean that JS has a feature that ALGOL did not.

April 01, 2022

Reading List 288 by Bruce Lawson (@brucel)

March 29, 2022

James Koppel tells us that software engineers keep using the word “abstraction” and that he does not think it means what they think it means. I believe that he is correct, and that the confusion over the term abstraction comes from thinking that programming is about abstraction.

Programming is refinement, not abstraction. You take your idea of what the software should be doing and you progressively refine that idea until what you get is so rote, so formulaic, so prescribed, that you can create a plan that a computer will follow reliably and accurately. Back in the day, that process used to be done literally in the order written as a sequence of distinct activities.

  • You take your idea of what the software should be doing: requirements gathering
  • refine that idea: software specification
  • create a plan that a computer will follow: construction
  • reliably and accurately: verification and validation

It doesn’t matter what paradigm you’re using, what tools, or whether you truly go from one step to the next as described here, what you’re doing is specifying the software you need, and getting so specific that you end up with instructions for a computer. Specific is the opposite of abstract: the process is the opposite of abstraction.

Thus Niklaus Wirth talks about Program Development by Stepwise Refinement. He describes a procedure for solving the 8 queens problem, then refines that procedure until it is entirely described in terms of Algol (more or less) instructions. He could have started by describing a function that turns the set of possible chess boards into the set of boards that solve 8 queens, or he could have started by describing the communication between the board and the queens that would cause them to solve 8 queens.

This is not to say that abstraction doesn’t appear in software development. Wirth’s starting point is an abstract procedure for solving a specific problem: you can follow that procedure to solve 8 queens, you just have to do a lot of colouring in yourself which a computer is incapable of. Maybe GPT-3 could follow that first procedure; maybe one of the intermediate ones.

And his end point is an abstract definition of the instructions the computer will do: you can say j<1 and the computer will do something along the lines of loading the word at the same address previously associated with j into an accumulator, subtracting one, checking the flags index, and conditionally modifying the program counter. And “loading the word at the same address” is itself an abstraction: what really happens might involve page faults, loading data from permanent storage, translation lookaside buffers, and other magic.

Abstractions in this view are a “meet in the middle” situation, not a goal: you can refine/specify your solution until you meet the thing that will do the rest of the work of refinement. Or sometimes a “meet off to the side” situation: if you can make your program’s data look like bags of tuples then you can use the relational model to do a lot of the storage and retrieval work, even if nothing in your problem description looks anything like bags of tuples.

Notice that Wirth’s last section is about generalisation, not abstraction: solving the “N queens” problem is not any less specific than solving the 8 queens problem.

March 21, 2022

There’s an idea doing the rounds that the “unit” in “unit test” means the unity of the test, rather than a test of a software unit. Moreover, that it originally meant this, and that anyone who says “unit test” to mean the test of a software unit is misguided.

Here’s the report of the 1968 NATO conference on software engineering. On their page 20 (as marked, not as in the PDF) is a diagram of a waterfall-esque system development process, featuring these three phases among others (emphasis mine):

  • unit design
  • unit development
  • unit test

“Unit” meaning software unit is used throughout.

March 15, 2022

Why are we like this? by Graham Lee

The recent post on addressing “technical debt” did the rounds of the usual technology forums, where it raised a reasonable question: why are people basing these decisions on balancing engineering-led with customer-led tasks on opinion? Why don’t engineers take an evidence-based approach to such choices?

The answer is complex but let’s start at the top: there’s too much money in software. There have been numerous crises in the global economy since software engineering has existed, but really the only one with any material effect on the software sector was the dot-com crash. The lesson there was “have a business plan”: plenty of companies had raised billions in speculative funding on the basis that they were on the internet but once the first couple started to fold, the investors pulled out en masse and the money was sucked from the room. This is the time that gave us Agile (constantly demonstrate that you’re delivering value to your customer), Lean Startup (demonstrate that you’re adding value with as little expenditure as possible), and Lean Software Development (eliminate all of the waste in your process).

Nobody ever demonstrated that Agile, Lean or whatever were better in some objective metric, what they did was tell convincing stories. Would you like to find out that you’ve built the wrong thing two years from now, or two weeks from now? Would you prefer to read an interim draft functional specification, or use working software? Let’s be clear though, nobody ever showed that what we were doing before that was better in any objective way either: software was written by defence contractors and electronics hardware companies, and they grandfathered in the processes used to procure and develop hardware. You can count the number of industry pundits advocating for a genuinely evidence-led approach to software cost control on two fingers (Barry Boehm and Watts Humphrey) and you can still raise valid questions about the validity of either of their systems.

Since then, software teams have become less fragile to economic shock. This was already happening in the 2007 credit crunch (the downturn at the beginning of the 2007-2008 global financial crisis). The CFO where I worked explained that bookings of their subscription-based software would go up during a recession. Why? Because people were not confident enough to buy outright or to enter relatively cheap, long-term arrangements like three year contracts. They would instead take the more expensive but more flexible shorter-term contracts so that they could cancel or move if their belts needed tightening. After the crisis, the adoption of subscription-based pricing models has only increased in software, and extended to adjacent fields like media and hardware.

All of this means that there is relative stability in software businesses, and there is still growing demand for software engineers. That has meant that there isn’t the need for systematic approaches to cost-reduction hawked by every single thinker in the “software crisis” era: note that there hasn’t been significant movement beyond Agile, Lean or whatever in the subsequent two decades. They’re good enough, and there is no impetus to find out what’s better. In fact both Agile with its short increments and Lean Startup with its pivoting are optimised for the “get out quickly at any cost” flexibility that also leads customers to choose short-term subscription pricing: when the customers for your VR pet grooming business dry up you can quickly pivot to online fraud detection.

With no need to find or implement better approaches there’s also no need to particularly require software engineers to have a systematic approach or a detailed understanding of the knowledge of their industry. Thus software engineering—particularly programming—remains a craft-based discipline where anyone with an interest can start out at the bottom, learn on the job through mentoring and self-study, and use a process of survivor bias to get along. Did anyone demonstrate in 2002 that there’s objective benefit to a single-page application? Did anyone demonstrate in 2008 that there’s objective benefit to a native mobile app? Did anyone demonstrate in 2016 that there’s objective benefit to a Dapp? Has anyone crunched the numbers to resolve whether DevOps or Site Reliability Engineering is the one true way to do operations? No, but it doesn’t matter: there’s more than enough money to put into these things. And indeed most of those choices listed above are immaterial to where the money comes from or goes, but would be the sorts of “technical debt” transitions that engineering teams struggle to pay for.

You might ask why I’m at all interested in taking a systematic approach to our work when I also think it’s not necessary. Even if it isn’t necessary for survival, it’s definitely professional, and justifiable. When the clients do come to reduce their expenditure, or even when they don’t but are deciding who to go with, the people who can demonstrate that they systematically maximise the output of their work will be the preferred choice.

March 13, 2022

I invented a solitaire card game. I was thinking about solo roleplaying, and the Carta SRD stuff I did for Borealis, and I was thinking about the idea of the cards forming the board you’re playing on and also being the randomness tool. Then I came up with the central mechanic of the branching tree, and the whole game sorta fell into place, and now I’ve been experimenting with it for a day so I want to write it down.

I’m not sure about the framing device; originally it was a dungeon crawl, but since it’s basically “have a big series of battles” I’m a bit uncomfortable with that story since it’s very murder-y. So maybe it’s a heist where you’re defeating many traps to steal the big diamond from a rich guy’s booby-trapped hoard? Not sure, suggestions welcomed.

The game

Required: a pack of cards. Regular 52-card deck. You don’t need jokers.

Setup

Remove the Ace of Spades from the deck and place it in front of you, face up, on the table. This is where you start; the entrance to the dungeon. Deal four cards face down vertically above it in a line. Above that, place the King of Diamonds, face up, so you have six cards in a vertical line, K♦ at the top, 4 face-down cards, A♠ at the bottom. The K♦ is the target, the thing you’re trying to get to (it is treasure, because it is the king, do you see, of diamonds! get it? it’s the biggest diamond). Now return the four face-down cards to the deck and shuffle the deck; this is to ensure that there are exactly four card lengths between A♠ and K♦. Deal one card, face up, over to the left. This is your SKILL score. Its suit is not relevant; its value is the important number. Ace is 1, JQK are 11, 12, 13. Higher is better.

The play: basic

In each round, you confront a monster. Deal one card face up, to the right, which represents the monster, and the monster’s SKILL score (A=1, 2=2, K=13).

Deal three cards face down: these are the monster’s attacks. Deal yourself three cards. Now turn the monster’s cards over. You now pair your cards against the monster’s, in whichever order you please, so there are three pairs. Your score in each pair is your SKILL plus your dealt card; the monster’s score in each pair is its SKILL plus its dealt card.

For each pair where your score is higher than the monster’s, you get a point; for each where the monster wins, you lose a point; a tie scores 0. This means that you will have a score between 3 and -3 for the round.

An example: imagine that you have a SKILL of 7, and you deal the 9♦ as the monster. This means this monster has a SKILL of 9. You then deal three cards for monster attacks; 4♥, J♥, 6♣. You deal yourself three cards: 2♣, 7♦, Q♣. So you elect to pair them as follows:

monster                  | you                      | result
4♥ + SKILL 9 = 13        | 7♦ + SKILL 7 = 14        | we win! +1
J♥ (11) + SKILL 9 = 20   | 2♣ + SKILL 7 = 9         | big loss: -1
6♣ + SKILL 9 = 15        | Q♣ (12) + SKILL 7 = 19   | we win! +1

So that’s an overall score of +1 this round: you won the round and defeated the monster!
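
If it helps to see that mechanic as code, here is a rough Ruby sketch of scoring a round (my reading of the rules above; it is not the simulation mentioned further down):

# Each of your three cards is paired against one of the monster's three cards;
# +1 for a win, -1 for a loss, 0 for a tie, summed over the three pairs.
def score_round(your_skill, monster_skill, your_cards, monster_cards)
  your_cards.zip(monster_cards).sum do |yours, theirs|
    (your_skill + yours) <=> (monster_skill + theirs)
  end
end

# The worked example: SKILL 7 against the 9♦ monster, with the pairing chosen above.
score_round(7, 9, [7, 2, 12], [4, 11, 6]) # => 1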

If your score for the round is positive, then next round when you deal the card for the monster, you can deal this many extra cards and choose the monster you want from them. (So since we won with +1, next round we deal one extra card, two in total, and choose the one we want to be the monster. The other card is returned to the pack, which is shuffled.) If your score is negative, then you have to remove that many cards from the dungeon (which will be explained shortly). (The Ace of Spades always stays in place.)

Return the monster attack and your attack cards to the deck and shuffle it ready for the next round. If your round score was negative or zero, or if the monster was a King, then put the monster card in the discard pile. If your score was positive, then add the monster card to the dungeon.

Adding to the dungeon

To add a card to the dungeon, it must be touching the last card that was added to the dungeon (or the Ace of Spades, if no card has yet been added). Rotate the card so that its orientation is the same as the card’s value, on a clock face. So a Queen (value 12) is placed vertically. A 3 is placed horizontally. An Ace is placed pointing diagonally up and to the right. This should be at 30 degrees, but you can eyeball it; don’t get the protractor out. Remember, it must be touching or overlapping the last card that was added. In this way, the path through the dungeon grows. The goal is to have the path reach the King of Diamonds; if there is a continuous path from A♠ to K♦ then you have won!

  1. The setup.
  2. A 3 is placed, touching the Ace of Spades, and pointing in the direction of a 3 on a clock.
  3. An 8 is placed, touching the 3 (because the 3 was the last added card). Note that it needs to point “downwards”, towards an 8 on a clock face.
  4. A Queen is placed, pointing upwards towards the 12.
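
If it is easier to think of the orientation rule numerically, here is a tiny sketch (purely illustrative), treating the angle as degrees clockwise from twelve o’clock:

# Degrees clockwise from 12 o'clock for a card value from 1 (Ace) to 12 (Queen).
# Kings don't fit on a clock face, which is why they never join the path.
def orientation_in_degrees(value)
  (value * 30) % 360
end

orientation_in_degrees(12) # => 0 (a Queen sits vertically)
orientation_in_degrees(3)  # => 90 (a 3 lies horizontally)
orientation_in_degrees(1)  # => 30 (an Ace points up and to the right)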

Optional rules, ramblings, and clarifications

That’s the game. Build the dungeon turn by turn, see if you can obtain the treasure, the king of diamonds. Here are some extra thoughts, possible extra rules, and questions that I have which I haven’t yet worked out answers for.

“Special” cards: armour and weapons

It would be nice to add a bit more skill and planning to the game; there’s not a huge amount of agency. So here’s the first optional rule, about armour and weapons. A monster card which is a Club is potentially a weapon. If you deal a monster card that’s a Club, then you can elect to treat it as a weapon instead and put it in your stash. When you have weapons in your stash, you can choose to add the stored weapon to any one attack, and it adds its rank to your score for that attack. (So if you have SKILL of 7, and you play a 3 for an attack, and you have a 4♣ stored as a weapon, then you can play that 4 as well for a total score of 14, not 10.) Similarly, a monster card that’s a Spade can be treated as armour. If you have armour, the next attack you lose will become a tie. So if your score for a round would be -2 (you lost two attacks and tied one) but you have an armour card then you would discard your armour card and that score becomes -1 (one loss, two ties). Clubs and Spades used this way go into the discard pile, not back into the pack.

This is an optional rule becuase I’m not sure about the balancing of it. In particular, when do you get to add a weapon card? Should you have to add a weapon before the monster attacks are turned over, so it’s a bit of a gamble? Or can you add it when you know whether it’ll win or not? (If yes, then everyone holds weapons until they know they’ll make the difference between a win and a loss, which doesn’t require any skill or judgement to do.)

The length of the dungeon

The distance of 4 cards is based on some rough simulations I ran which suggest that with a 4 card distance a player should win about 5% of the time, which feels about right for a difficult solitaire game; you want to not win that often, but not so infrequently that you doubt that winning is possible. But changing the distance to 3 cards may make a big difference there (it should give a win about one time in 10, in the simulation).

Removing cards

Question: should it be allowed to delete cards in the middle of the path if you lose, thus leaving a gap in the path? You shouldn’t be able to win by reaching the king of diamonds if there’s a gap, of course, but having gaps mid-game seems ok. However, then you have to be able to add cards to fill the gap up, which seems very difficult. This is because we have to require that newly added cards are added to the end of the path, otherwise everyone makes all “negative” cards simply build by touching the Ace of Spades and so we never actually go backwards.

Angle of cards

Cards are reversible. So an 8, which should be a negative card, is actually the same as a 2, which is positive. What’s the best way to enforce this? When considering the “clock face” for orientation, does the centre of the clock face have to be in the centre of the most recent card?

Also, kings not being able to add to the path seems a bit arbitrary. Problem is that there aren’t 13 hours on a clock. This can obviously be justified in-universe (maybe kings are boss monsters or something?) but it feels a bit of a wart.

And that’s it

That’s it. Game idea written down, which should hopefully get it out of my head. If anyone else plays it, or has thoughts on the rules, on improvements, or on a theme and setting, I’d love to hear them; @sil on Twitter is probably the easiest way.

The phrase “technical debt” appears in scare quotes here because, as observed in The Unreasonable Ineffectiveness of Considering Things Harmful, technical debt has quite a specific meaning and I’m talking about something broader here. Quoting Ward Cunningham:

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

Ward Cunningham, the Wycash Portfolio Management System

It’s not old code that’s technical debt, it’s lightly-designed code. That thing that seemed like the solution when you first thought of it. Yes, ship it, by all means, but be ready to very quickly rewrite it when you learn more.

Some of what people mean when they say “we need to bring our technical debt under control” is that kind of technical debt, struggling under the compound interest of if statement accrual as multiple developers have added behaviour without adding design. But there are other things. Cutting corners is not technical debt, it’s technical gambling. Updating external dependencies is not technical debt repayment, but it does still need to be done. Removing deprecated symbols is paying for somebody else’s technical debt, not yours: again you still have to do it. Replacing last month’s favoured npm modules with this month’s is not technical debt, it’s buying yourself a new toy.

But all of these things get done, and all of these things need to get done. It’s the cost of deploying a system into an evolving context (and as I’ve said before, even that act of deployment itself triggers evolution). So the question is when, how often, how much?

Some teams put their “engineering requirements”, their name for the evolution-coping tasks, onto the same backlog as the product requirements, then try to advocate for prioritising them alongside feature requests and bug fixes. Unfortunately this rarely works: the perceived benefit of the engineering activity is zero retained customers plus zero acquired customers = zero revenue, and yet it costs the same as fixing a handful of customer-reported bugs.

So, other groups just try to carve out time. Maybe it’s “20% of developer effort on the sprint is not for product tasks”. Maybe it’s “there is a week between delivering one iteration and starting the next”. Maybe it’s “whoever is on support rotation can pick up engineering tasks when there’s no fire to put out”. And the most anti- of all the patterns is the “hardening sprint”: once per quarter we’ll spend two weeks fixing the problems we’ve been making for ourselves in the intervening time. All of these have the benefit of giving a predictable cadence, though they still suffer a bit from that product envy problem: why are we paying for these engineers to do non-useful work when they could be steadily adding value?

The key point is that part about steadily adding value. We know the reason we need to do this: it’s to avoid being brought to Ward’s stand-still. We need to consolidate what we’ve learned, we need to evolve the system to adapt to evolutionary changes in its context, we need to fix past mistakes. And we need to do it constantly. Remember the quote: “Every minute spent on not-quite-right code counts as interest on that debt”.

Ultimately, these attempts to carve out time are requests to do our jobs properly, directed at people who don’t have the same motivations that we do. That’s not to say that their motivations are wrong. Like us, they only have a partial view of the overall picture. Unlike us, that view does not extend to an understanding of how expensive a liability our source code is.

When we ask for time between this iteration and the next to “service technical debt”, we are saying “I know that I’m doing a bad job, I know what I need to do to be doing a good job, and I would like to do a good job for four hours in a fortnight’s time on Friday afternoon, if that’s alright with you”. Ironically we do not end up doing a better job, we normalise doing a bad job for the next couple of weeks (and undoubtedly finding that some delivery/support/operations problem gets in the way for those four hours anyway).

I recommend to my mentees, reports, and any engineer who will listen that they avoid advocating for time-boxed good work. I propose building the trust relationship where the people who need the code written are happy that the code is being written, and being written well, without feeling the need to check over our shoulders to see how the sausage is made. Then we don’t need to justify doing a good job, and certainly don’t need to ask permission: we just do it as we go. When someone asks how long it’ll take to do something, the answer is how long it’ll take to do properly, with all the rewriting, testing, and everything else that doing it properly entails. What they get out at the end is something worth having: something that doesn’t need hardening, or 20% of the effort dedicated to patching it up.

And of course, what they get is something that will almost immediately need a rewrite.

March 10, 2022

Your chums at WebAIM report

The Wall Street Journal indicated that many companies are looking for personnel with accessibility skills and that they can’t find them easily…The number of job listings with ‘accessibility’ in the title grew 78% in the year ending in July [2021] from the previous 12 months, LinkedIn said.

Get ahead! Crush, grind, mash and maim lesser developers! Earn more cash! You can acquire better “accessibility skills” than 90% of the developers on the market by

(The first two are foundational skills for anyone whose code is allowed anywhere near a web browser. The latter is a foundational skill for ‘being a decent human being’)

March 09, 2022

Back in Apple Silicon, Xeon Phi, and Amigas I asked how Apple would scale the memory up in a hypothetical Mac Pro based on the M1. We still don’t know because there still isn’t one, although now we sort of do know.

The M1 Ultra uses a dedicated interconnect allowing two (maybe more, but definitely two) M1 Max to act as a single SoC. So in an M1 Ultra-powered Mac Studio, there’ll be two M1 packages connected together, acting as if the memory is unified.

It remains to be seen whether the interconnect is fast enough that the memory appears unified, or whether we’ll start to need thread affinity APIs to say “this memory is on die 0, so please run this thread on one of the cores in die 0”. But, as predicted, they’ve gone for the simplest approach that could possibly work.

BTW here’s my unpopular opinion on the Mac Studio: it’s exactly the same as the 2013 Mac Pro (the cylinder one). Speeds, particularly for external peripherals on USB and Thunderbolt, are much faster, so people are ready to accept that their peripherals should all be outside the box. But really the power was all in using the word Studio instead of Pro, so that people don’t think this is the post-cheesegrater Mac.

March 04, 2022

Reading List 287 by Bruce Lawson (@brucel)

  • Interop 2022 is a really exciting collaboration between Igalia, Bocoup, Webkit, Microsoft Edge and Google Chrome to enhance interoperability between browsers. Yay! I want to snog all of them.
  • A Complete Guide to CSS Cascade Layers by Miriam Suzanne (who wrote the spec, so she should know. Vadim and I interviewed her on the F-word episode 11.)
  • Hello, CSS Cascade Layers by Ahmad Shadeed
  • Are we live? – “If you have an interface where content is dynamically updated, and when the content is updated it does not receive focus, then you likely are going to need a live region.” Scott O’Hara does a deep dive into the fun quirks of live regions in real Assistive Tech.
  • Say Hello to selectmenu, a Fully Style-able select Element – Can’t wait to see this in browsers, given that 93.6% of React select components are literally vast lumps of carcinogenic walrus turds, forged in Mordor by Margaret Thatcher and Hitler.
  • What makes writing more readable? “An examination of translating text to make it as accessible as possible.” I found this fascinating, especially as each paragraph of the article has a translation next to it
  • Version 100 in Chrome and Firefox “Chrome and Firefox will reach version 100 in a couple of months. This has the potential to cause breakage on sites that rely on identifying the browser version to perform business logic. This post covers the timeline of events, the strategies that Chrome and Firefox are taking to mitigate the impact, and how you can help.”
  • PWA Haven – Really neat collection of utility apps, all implemented as PWAs and using powerful Project Fugu. “The goal is to have PWA’s replace as many simple native apps as possible” by ThaUnknown_
  • I’ve Built a Public World Atlas with 2,500 Datasets to Explore – Inspired by Encarta, built in Python, accessible at worldatlas.org

March 03, 2022

Having the right data by Graham Lee

In the beginning there was the relational database, and it was…OK, I guess. It was based on the relational model, and allowed operations that were within the relational algebra.

I mean it actually didn’t. The usual standard for relational databases is ISO 9075, or SQL. It doesn’t really implement the relational model, but something very similar to it. Still, there is a standard way for dealing with relational data, using a standard syntax to construct queries and statements that are mathematically provable.

I mean there actually isn’t. None of the “SQL databases” you can get hold of actually implement the SQL standard accurately or in its entirety. But it’s close enough.

At some point people realised that you couldn’t wake up the morning of your TechCrunch demo and code up your seed-round-winning prototype before your company logo hit the big screen, because it involved designing your data model. So the schemaless database became popular. These let you iterate quickly by storing any data of any shape in the database. If you realise you’re missing a field, you add the field. If you realise you need the data to be in a different form, you change its form. No pesky schemata to migrate, no validation.

I mean actually there is. It’s just that the schema and the validation are the responsibility of the application code: if you add a field, you need to know what to do when you see records without the field (equivalent to the field being null in a relational database). If you realise the data need to be in a different form, you need to validate whether the data are in that form and migrate the old data. And because everyone needs to do that and the database doesn’t offer those facilities, you end up with lots of wasteful, repeated, buggy code that sort of does it.

So the pendulum swings back, and we look for ways to get all of that safety back in an automatic way. Enter JSON schema. Here’s a sample of the schema (not the complete thing) for Covid-19 cases in Global.health:

{
  bsonType: 'object',
  additionalProperties: false,
  properties: {
    location: {
      bsonType: 'object',
      additionalProperties: false,
      properties: {
        country: { bsonType: 'string', maxLength: 2, minLength: 2 },
        administrativeAreaLevel1: { bsonType: 'string' },
        administrativeAreaLevel2: { bsonType: 'string' },
        administrativeAreaLevel3: { bsonType: 'string' },
        place: { bsonType: 'string' },
        name: { bsonType: 'string' },
        geoResolution: { bsonType: 'string' },
        query: { bsonType: 'string' },
        geometry: {
          bsonType: 'object',
          additionalProperties: false,
          required: [ 'latitude', 'longitude' ],
          properties: {
            latitude: { bsonType: 'number', minimum: -90, maximum: 90 },
            longitude: { bsonType: 'number', minimum: -180, maximum: 180 }
          }
        }
      }
    }
  }
}

This is just the bit that describes geographic locations, relevant to the falsehoods we believed about countries in an earlier post. This schema is stored as a validator in the database (you know, the database that’s easier to work with because it doesn’t have validators). But you can also validate objects in the application if you want. (Actually we currently have two shadow schemas: a Mongoose document description and an OpenAPI specification, in the application. It would be a good idea to normalise those: pull requests welcome!)
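
For illustration, here’s a minimal sketch of how a validator like this can be attached to a collection so the database enforces it on every write. It uses pymongo, a hypothetical cases collection in a hypothetical example database, and only the country/geometry fragment of the schema above; Global.health’s real setup will differ.

# Sketch: attach a $jsonSchema validator (MongoDB's bsonType dialect) with pymongo.
# The connection string, database name, and collection name are assumptions.
from pymongo import MongoClient
from pymongo.errors import WriteError

location_schema = {
    "bsonType": "object",
    "additionalProperties": False,
    "properties": {
        "country": {"bsonType": "string", "minLength": 2, "maxLength": 2},
        "geometry": {
            "bsonType": "object",
            "additionalProperties": False,
            "required": ["latitude", "longitude"],
            "properties": {
                "latitude": {"bsonType": "number", "minimum": -90, "maximum": 90},
                "longitude": {"bsonType": "number", "minimum": -180, "maximum": 180},
            },
        },
    },
}

db = MongoClient("mongodb://localhost:27017")["example"]

# collMod attaches (or replaces) the validator on an existing collection,
# so every subsequent insert or update is checked by the database itself.
db.command(
    "collMod",
    "cases",
    validator={"$jsonSchema": {"bsonType": "object",
                               "properties": {"location": location_schema}}},
    validationLevel="strict",
)

# A non-conforming document is now rejected at insert time.
try:
    db.cases.insert_one({"location": {"country": "United Kingdom"}})  # fails maxLength: 2
except WriteError as error:
    print("rejected:", error)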

February 23, 2022

Recently Dan North asked about the origin of the software design aphorism “make it work, make it right, make it fast”. Before delving into that story, it’s important to note that I had already heard this phrase. I don’t know where; it’s one of those things that’s absorbed into the psyche of some software engineers, like “goto considered harmful”, “adding people to a late project makes it later” or “premature optimisation is the root of all evil”.

My understanding of the quote was something akin to Greg Detre’s description: we want to build software to do the right thing, then make sure it does it correctly, then optimise.

Make it work. First of all, get it to compile, get it to run, make sure it spits out roughly the right kind of output.

Make it right. It’s time to test it, and make sure it behaves 100% correctly.

Make it fast. Now you can worry about speeding it up (if you need to). […]

When you write it down like this, everyone agrees it’s obvious.

Greg Detre, “Make it work, make it right, make it fast”

That isn’t what everybody thinks though, as Greg points out. For example, Henrique Bastos laments that some teams never give themselves the opportunity to “make it fast”. He interprets making it right as being about design, not about correctness.

Just after that, you’d probably discovered what kind of libraries you will use, how the rest of your code base interacts with this new behavior, etc. That’s when you go for refactoring and Make it Right. Now you dry things out and organize your code properly to have a good design and be easily maintainable.

Henrique Bastos, “The make it work, make it right, make it fast misconception”

We already see the problem with these little pithy aphorisms: the truth that they convey is interpreted by the reader. Software engineering is all about turning needs into instructions precise enough that a computer can accurately and reliably perform them, and yet our knowledge is communicated in soundbites that we can’t even agree on at the human level.

It wasn’t hard to find the source of that quote. There was a special issue of Byte magazine on the C programming language in August 1983. In it, Stephen C. Johnson and Brian W. Kernighan describe modelling systems processing tasks in C.

But the strategy is definitely: first make it work, then make it right, and, finally, make it fast.

Johnson and Kernighan, “The C Language and Models for Systems Programming”

This sentence comes at the end of a section on efficiency, which follows a section on “Higher-Level Models” in which the design of programs that use C structures to operate on problem models, rather than bits and words, is described. The efficiency section tells us that higher-level models can make a program less efficient, but that C gives people the tools to get close to the metal to speed up the 5% of the code that’s performance critical. That’s where they lead into this idea that making it fast comes last.

Within context, the “right” that they want us to make appears to be the design/model type of “right”, not the correctness kind of right. This seems to make sense: if the thing is not correct, in what sense are you suggesting that you have already “made it work”?

A second source, contemporary with that Byte article, seems to seal the deal. Butler Lampson’s hints deal with ideas from various different systems, including Unix but also the Xerox PARC systems, Control Data Corporation mainframes, and others. He doesn’t use the phrase we’re looking for but his Figure 1 does have “Does it work?” as a functionality problem, from which follow “Get it right” and “Make it fast” as interface design concerns (with making it fast following on from getting it right). Indeed “Get it right” is a bullet point and cautionary tale at the end of the section on designing simple interfaces and abstractions. Only after that do we get to making it fast, which is contextualised:

Make it fast, rather than general or powerful. If it’s fast, the client can program the function it wants, and another client can program some other function. It is much better to have basic operations executed quickly than more powerful ones that are slower (of course, a fast, powerful operation is best, if you know how to get it). The trouble with slow, powerful operations is that the client who doesn’t want the power pays more for the basic function. Usually it turns out that the powerful operation is not the right one.

Butler W. Lampson, Hints for Computer System Design

So actually it looks like I had the wrong idea all this time: you don’t somehow make working software then correct software then fast software, you make working software and some inputs into that are the abstractions in the interfaces you design and the performance they permit in use. And this isn’t the only aphorism of software engineering that leads us down dark paths. I’ve also already gone into why the “premature optimisation” quote is used in misguided ways, in mature optimisation. Note that the context is that 97% of code doesn’t need optimisation: very similar to the 95% in Johnson and Kernighan!

What about some others? How about the ones that don’t say anything at all? It used to be common in Perl and Cocoa communities to say “simple things simple; complex things possible”. Now the Cocoa folks think that the best way to distinguish value types from identity types is the words struct and class (not, say, value and identity) so maybe it’s no longer a goal. Anyway, what’s simple to you may well be complex to me, and what’s complex to you may well be simplifiable but if you stop at making it possible, nobody will get that benefit.

Or the ones where meanings shifted over time? I did a podcast episode on “working software over comprehensive documentation”. It used to be that the comprehensive documentation meant the project collateral: focus on building the thing for your customer, not appeasing the project office with TPS reports. Now it seems to mean any documentation: we don’t need comments, the code works!

The value in aphorisms is similar to the value in a pattern language: you can quickly communicate ideas and intents. The cost of aphorisms is similar to the cost in a pattern language: if people don’t understand the same meaning behind the terms, then the idea spoken is not the same as the idea received. It’s best with a lot of the aphorisms in software that are older than a lot of the people using them to assume that we don’t all have the same interpretation, and to share ideas more precisely.

(I try to do this for software engineers and for programmers who want to become software engineers, and I gather all of that work into my fortnightly newsletter. Please do consider signing up!)

February 18, 2022

I suppose if I’m going to have a tagline like “from programming to software engineering”, we ought to have some kind of shared understanding of what that journey entails. It would be particularly useful to agree on the destination.

The question “what is software engineering?” doesn’t have a single answer. Plenty of people have the job title “software engineer”, or work in the “engineering” department of a software company, so we could say that it’s whatever they do. But that’s not very constructive. It’s too broad: anyone who calls themselves a software engineer could be doing anything, and it would become software engineering. It’s also too narrow: the people with the job title “software engineer” are often the programmers, and there’s more to software engineering than programming. I wrote a whole book, APPropriate Behaviour, about the things programmers need to know beyond the programming; that only scratches the surface of what goes into software engineering.

Sometimes the definition of software engineering is a negative-space definition, telling us what it isn’t so that something else can fill that gap. In Software Craftsmanship: The New Imperative, Pete McBreen argued that “the software engineering approach” to building software is something that doesn’t work for many organisations, because it isn’t necessary. He comes close to telling us what that approach is when he says that it’s what Watts Humphrey is lamenting in Why Don’t They Practice What We Preach? But again, there are reasons not to like this definition. Firstly, it’s a No True Scotsman definition: software engineering is whatever people who don’t do it properly, the way software craftsmen do, do. Secondly, it’s just not particularly fruitful: two decades after his book was published, most software organisations aren’t using the craftsmanship model. Why don’t they practice what he preaches?

I want to use a values-oriented definition of software engineering: software engineering is not what you do, it’s why you do what you do, and how you go about doing it. No particular practice is or isn’t software engineering, but the way you evaluate those practices and whether or not to adopt them can adopt an engineering perspective. Similarly, this isn’t methodology consultancy: the problems your team has with Agile aren’t because you aren’t Agiling hard enough and need to hire more Agile trainers. But the ways in which you reflect on and adapt your processes can be informed by engineering.

I like Shari Lawrence Pfleeger’s definition, in her book Software Engineering: The Production of Quality Software:

There may be many ways to perform a particular task on a particular system, but some are better than others. One way may be more efficient, more precise, easier to modify, easier to use, or easier to understand. Consequently, Software Engineering is about designing and developing high-quality software.

There’s a bit of shorthand, or some missing steps here, that we could fill in. We understand that of the many ways to build a software system, we can call some of them “better”. We declare some attributes that contribute to this “betterness”: efficiency, precision, ease of adaptation, ease of use, ease of comprehension. This suggests that we know what the properties of the software are, which ones are relevant, and what values a desirable system would have for those properties. We understand what would be seen as a high-quality product, and we choose to build the software to optimise for that view of quality.

The Software Engineering degree course I teach on offers a similar definition:

Software Engineering is the application of scientific and engineering principles to the development of software systems—principles of design, analysis, and management—with the aim of:

  • developing software that meets its requirements, even when these requirements change;
  • completing the development on time, and within budget;
  • producing something of lasting value—easy to maintain, re-use, and re-deploy.

So again we have this idea that there are desirable qualities (the requirements, the lasting value, ease of maintenance, re-use, and re-deployment; and also the project-level qualities of understanding and controlling the schedule and the cost), and the idea that we are going to take a principled approach to understanding how our work supports these properties.

Let me summarise: software engineering is understanding the desired qualities of the software we build, and taking a systematic approach to our work that maximises those qualities.

February 15, 2022

My Python script for Global.health was not running in production, because it couldn’t find some imports. Now the real solution is to package up the imports with setuptools and install them at runtime (we manage the environments with poetry), but the quick solution is to fix up the path so that they get imported anyway. Or so I thought.

The deployment lifecycle of this script is that it gets packaged into a Docker image and published to Amazon Elastic Container Repository. An EventBridge event triggers a Batch job definition using that image to be queued. So to understand why the imports aren’t working, we need to understand the Docker image.

docker create --name broken_script sha256:blah gives me a container based on the image. Previously I would have started that container and launched an interactive shell to poke around, but this time I decided to try something else: docker export broken_script | tar tf - gives me the filesystem listing (and all I would’ve done with the running shell was various ls incantations, so that’s sufficient).

Indeed my image had various library files alongside the main script:

/app/clean_old_ingestion_source_files.py
/app/EventBridgeClient.py
/app/S3Client.py
/app/__init__.py

Those supporting files should be in a subfolder. My copy command was wrong in the Dockerfile:

COPY clean_old_ingestion_source_files.py aws_access ./

This copies the contents of aws_access into the current folder, but I wanted to copy the folder itself (and, recursively, its contents). Simple fix: break that line into two, putting the files in their correct destinations. Now rebuild the image, and verify that it is fixed. This time I didn’t export the whole filesystem from a container; I exported the layers from the image.

docker image save sha256:blah | tar xf -

This gives me a manifest.json showing each layer, and a tar file with the contents of that layer. Using this I could just get the table of contents for the layer containing my Python files, and confirm that they are now organised correctly.
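
If you’d rather script that check than eyeball tar listings, here’s a minimal sketch in Python that does the same thing. It assumes the image has first been written to a file (for example with docker image save sha256:blah -o image.tar; the filename is mine) and walks manifest.json to print the table of contents of every layer.

# Sketch: list the files in each layer of an image exported with
# `docker image save <image> -o image.tar` (image.tar is a name I made up).
import json
import tarfile

with tarfile.open("image.tar") as image:
    # manifest.json maps the image to its ordered list of layer archives.
    manifest = json.load(image.extractfile("manifest.json"))
    for layer_name in manifest[0]["Layers"]:
        print(f"== {layer_name}")
        # Each layer is itself a tar archive stored inside the image tar.
        with tarfile.open(fileobj=image.extractfile(layer_name)) as layer:
            for member in layer.getnames():
                print("   ", member)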

February 11, 2022

Reading List 286 by Bruce Lawson (@brucel)

February 08, 2022

Halloween is Over by Graham Lee

Back in 2016, I sent the following letter to Linux Voice, and it was published in issue 24 as the star letter. LV came to an end (and made all of their content available as Creative Commons) when they merged with Linux Magazine. The domain still exists, but the certificate expired years ago; you should search for it if you’re interested in back numbers for the magazine and willing to take the risk on their SSL.

I think my letter is still relevant, so I’m reproducing it. Here’s what I wrote:

LV issue 023 contained, as have prior numbers, many jabs at Microsoft as the natural enemy of the Free Software believer. It’s time to accept that the world has changed. Like many among your staff and readers, I remember that period when the infamous Halloween memos were leaked, and we realised joyfully that the Free Software movement was big enough to concern the biggest software company in the world.

I remember this not because it was recent, but because I am old: this happened in 1998. Large companies like Microsoft can be slow to change, so it is right that we remain sceptical of their intentions with Free and open source software, but we need to remember that if we define our movement as Anti-Microsoft, it will live or die by their fortunes alone.

While we jab at Azure for their plush Tux swag, Apple has become one of the largest companies on the planet. It has done this with its proprietary iPhone and iOS platforms, which lock in more first-party applications than 1990s Windows did when the antitrust cases started flying. You can download alternatives from its store (and its store alone), but the terms of business on that store prohibit copyleft software. The downloads obtained by Apple’s users are restricted by DRM to particular Apple accounts.

Meanwhile, Apple co-opts open source projects like Clang and LLVM to replace successful Free Software components like GCC. How does the availability of a cuddly Tux with Microsoft branding stack up to these actions in respect to the FSF’s four freedoms?

We celebrate Google for popularising the Linux kernel through its Android mobile OS, and companies like it, including Facebook and Twitter, for their contributions to open source software. However, these companies thrive by providing proprietary services from their own server farms. None has embraced the AGPL, a licence that extends freedom to remote users of a hosted service. Is it meaningful to have the freedom to use a browser or a mobile device for any purpose, if the available purposes involve using non-free services?

So yes, Microsoft is still important, and its proprietary Windows and Office products are still huge obstacles to the freedom of computer users everywhere. On the other hand, Microsoft is no longer the headline company defining the computing landscape for many people. If the Free Software movement is the “say no to Microsoft” movement, then we will not win. Rather we will become irrelevant at the same time as our nemesis in Redmond.

You may think that Steve Jobs is an unlikely role model for someone in my position, but I will end by paraphrasing his statement on his return to Apple. We need to get out of the mindset that for the Four Freedoms to win, Microsoft has to lose.

Graham Lee

Their deputy editor responded.

I had never stopped to consider this, but what you say makes 100% sense. In practice though, for most people Microsoft is still the embodiment of proprietary software. Apple is arguably a more serious threat, but Microsoft keeps shooting itself in the foot, so it’s an easier target for us. Apple at least makes a lot of good products along with its egregious attitudes towards compatibility, planned obsolescence and forced upgrades; Microsoft seems to be successful only by abusing its market position.

Andrew Gregory

Things have changed a bit since then: Apple have made minimal efforts to permit alternative apps in certain categories; Microsoft have embraced and extended more open source technologies; various SaaS companies have piled in on the “open source but only when it works in our favour” bandwagon; Facebook renamed and is less likely to be praised now than it was in 2016.

But also things have stayed the same. As my friend and stream co-host Steven Baker put it, there’s a reason there isn’t an M in FAANG. Microsoft isn’t where the investors are interested any more, and they shouldn’t be where Free Software’s deciding battles are conducted.

If you like my writing on software engineering please subscribe to my fortnightly newsletter where I aggregate it from across the web, as well as sharing the things I’ve been reading about software engineering!
