Last updated: December 07, 2019 02:22 PM (All times are UTC.)

December 06, 2019

Reading List 244 by Bruce Lawson (@brucel)

December 03, 2019

Accessibility made the UK national TV news yesterday, hot on the heels of a report, The business case for inclusive design, by the UK disability charity Scope, which shows that around 50% of disabled people couldn’t buy something online that they wanted.

An accessible site is therefore a huge business opportunity, given that the latest Purple Pound estimate is £274 billion. (The Purple Pound is a proxy for the purchasing power of the disabled community.)

Here’s a 4-minute interview on Channel 5 News to help persuade your bosses/colleagues of the business case for accessibility.

Most of the problems that Glen talks about are easy to diagnose and solve. In fact, last week I wrote a handy Checklist to avoid the most common accessibility errors. Use it and make more money while being a better person.

Free online course!

If you want to learn more, as it’s International Day of Persons with Disabilities, W3C launched an Introduction to Web Accessibility free online course in cooperation with UNESCO. Enrol from today; the course starts 28 January 2020.

November 30, 2019

I recently had one of those moments where some code failed, but not in the way I expected it to. To create a contrived example, the scenario was basically this:

class LocationsController < ApplicationController
  def update
    my_location = Location.find(params[:id])
    do_something_with(location)
  end
end

This code won’t do what I want it to because I have defined the variable my_location but then passed location to another method. What I initially expected upon finding the bug was a NameError complaining that the local variable location didn’t exist. But I didn’t get one. As far as I could tell, location existed, and it was nil.

It wasn’t referenced anywhere else in LocationsController, nor in ApplicationController. So what is it? Fortunately, Ruby provides a great way to find out.

class LocationsController < ApplicationController
  def update
    puts method(:location).source_location
    # "/usr/local/bundle/gems/actionpack-5.0.7.2/lib/action_controller/metal.rb"
    # 149
  end
end

That’s that mystery solved. There is a method called location implemented a layer or two up the inheritance hierarchy. We knew it had to come from somewhere, but source_location lets us determine the exact location.
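
If you just want to know which class or module defines the method, rather than the file and line it lives at, Method#owner is a handy companion. A quick sketch (the output comment is what I’d expect for this version of Rails, so treat it as illustrative):

class LocationsController < ApplicationController
  def update
    # Ask which module in the ancestor chain defines #location
    puts method(:location).owner
    # ActionController::Metal (illustrative)
  end
end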

November 27, 2019

Fitness & Motivation

With the year speeding towards an end I wanted to write something about 2019 and how fitness saved my freelance career.

Oh yeah… this article contains pictures of me with my boobs out, apologies in advance.

It’s no secret that I love and hate freelancing in equal measure. I’m the first to admit I’m not always the best at keeping a healthy attitude towards myself and my work. Ask any of my online buddies and they’ll probably tell you about at least one time I’ve complained to them about wanting to take a “proper job”.

Get a “proper” job??

Well, 2019 was the first year where I almost did get a proper job. I was talking to several startups about taking a role with them; at first it was a fun flirt with “what ifs”, but it quickly became real when I was asked for my salary expectations.

Before I could really accept any of these roles, I had to remind myself of the 11+ years I’d been working as my own boss and ask myself, “Why are you looking for a job?”

The answer was all about motivation. I had none. 11 years of freelancing had worn me down and, you know what, I wasn’t that 26-year-old who started freelancing either.

You’re too old!!

Another huge factor was my age. Things were changing: I was struggling with my skin, back pains, anxieties – all sorts.

I assumed taking a role would take away all the stresses of being your own boss and give me that buzz to go to work… I quickly realised I’d still be me, just in another role, so I had to dig deeper into the problem.

I read online that energy levels and motivation can easily be linked to how healthy you are… so I decided to try and get fit.

Let’s start running

My fitness journey started when we moved house. We had moved to lovely Shrewsbury, where the River Severn runs through. I had always wanted to run around the river, so I started slowly by doing the Couch to 5k. It was starting to work – I hit my 5k target early and even worked up to just under 8km, and then, BOOM. Injured.

That was mid-late 2018. For 3-6 months I did nothing about it. I had also hurt my back doing manual labour in the garden. I felt like I was falling apart!

So now I had these injuries, this boredom at work and no motivation to get anything done.

Let’s lift some weights

My wife had started going to a ‘Strong Girls’ club: 1-hour strength training sessions run by a lovely local couple, Simon and Suze. Their company Core Fitness / Studio Four also did men’s sessions, but I didn’t like the idea… at first.

It took my wife about 6 weeks of suggesting I go to the men’s sessions before I finally agreed – and I loved it.

It was nice to get out of the house and talk to people who weren’t my dog. I was working muscles I never knew I had, and then, BOOM. Injury; the bloody back went again.

Let’s see a professional

This time I took my back injury seriously: I booked three osteopath sessions (which didn’t really work for me) and three chiropractor sessions (which thankfully did work for me). The diagnosis was a strained butt (not even joking) and a strained neck.

After the three sessions I was back to feeling some level of normal and excited to get started on the fitness sessions again.

Let’s seek dietary advice

At the same time I spoke to Suze about my skin; as a nutritionist, she quickly got to the bottom of my skin woes. I was eating too much crap, snacking on sugar-based foods. It turns out 90% of my foods were bad for people like me with eczema. “Inflammatory!” I shouted.

Let’s eat clean-ish

I quit sugar, dairy and wheat, and my skin slowly returned to normal. Don’t worry gang, the weekends are fairly full of cheat meals, but I’ve never gone back to dairy or wheat.

Sugar is the one area I still struggle with, but even now I’ve reduced my sugar intake LOADS. I’m sure I’ll be rebooting my sugar levels very soon, don’t you worry.

Let’s train!

So back to the training. I’ve been going for a solid 6 months now – getting stronger each session. I’ve even started doing some PT with Simon; the hour a week is great for just focusing on my body and not worrying about work.

So currently Monday is PT, Tuesday is strength training, Thursday is circuits and Saturday is a home workout. I love it. I’m addicted to working out.

The results are incredible.

Regardless of how I look, I was back in the zone, firing on all cylinders. I was fitter than ever, and people were noticing that I was looking healthy. Even my mother in Australia noticed it via a blurry Skype session.

My productivity was back up, my confidence was up. Even the anxieties I was having before are fading away – leaving the house regularly is good for you… who knew??

The new me has since doubled down on freelancing and begun working with startups and tech businesses on a longer-term basis: less about the 5-day projects and more about the 6-12 month engagements.

My work has improved and I honestly enjoy getting to my desk in the mornings. Earnings are up and my accountant is happy.

To avoid future back problems, I also treated myself to a Herman Miller Aeron chair and am making more use of my standing desk.

So this is my journey in photos below. I wasn’t huge to begin with but you can see how unhealthy I looked.

So TL;DR – Exercising and eating better has helped me focus, gain confidence, reduce anxieties and find love for my job once more.

I’m fairly confident nobody is reading this far down the page but feel free to tweet me with your own fitness-freelance stories. #freelancefit


November 26, 2019

Last week I was moaning about the fact that 63% of developers surveyed don’t test accessibility, and banging on about editing a ‘learn HTML’ book that was riddled with basic accessibility errors, when Frederik replied, challenging me to stop whining and do something about it.

This isn’t a comprehensive guide to accessibility, but we’ll look at ways to avoid the most common accessibility errors identified by the WebAIM accessibility analysis of the top 1,000,000 home pages, and the HTTPArchive 2019 Web Almanac analysis of 5.8 million pages. I’m not going to get philosophical; if you’re reading this, I assume you care about why, and just want some tips on how. (But if you need to convince someone else, here’s the 4-minute business case for accessibility.)

Insufficient colour contrast

83% of homepages have low colour contrast. (For reference, WCAG AA requires a contrast ratio of at least 4.5:1 for body text and 3:1 for large text.) There are several ways to check this. I personally use Ada Rose Cannon’s handy Contrast Checker Widget, which sits in my bookmarks bar like a useful Clippy and goes through the current tab, highlighting where there isn’t enough contrast. Or you can use Firefox’s Accessibility Inspector in the devtools to check and tweak the CSS until you get a pass. To check a particular combination of colours, contrastchecker will give you AA and AAA ratings. whocanuse.com will tell you which particular types of visual impairment may have difficulty with your chosen colours.

Missing alternative text for images

A whopping 68% of homepages had images with missing alt text (NOT “alt tags”). Every <img> must have alternative text. Here are the basic rules:

  1. If the image is purely decorative, it must have empty alt text: alt="". But it should probably be in CSS, anyway.
  2. If an image is described in the body text, it should have an empty alt attribute (alt="") to avoid repetition. But be careful if it’s an <img> in a <figure> (hat-tip to Mallory).
  3. If an image is the content of a link (for example, your organisation’s logo can be clicked to go to the homepage), the alternative text should describe the destination of the link: for example, alt="home page". (See the markup sketch after this list.)
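
Here’s how those three rules might look in markup (the filenames are placeholders):

<!-- Rule 1: decorative flourish gets empty alt text -->
<img src="divider.png" alt="">

<!-- Rule 2: chart fully described by the surrounding body text -->
<img src="sales-chart.png" alt="">

<!-- Rule 3: linked logo: alt text describes the link's destination -->
<a href="/"><img src="logo.svg" alt="home page"></a>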

Heydon Pickering’s revenge.css bookmarklet does a quick and easy test to diagnose these, although I feel some of its other warnings are now outdated – I’ve filed an issue.

Empty links, empty buttons

I don’t know why anyone would do this, but apparently 58% of homepages tested had empty links, and 25% had empty buttons. I’m assuming this means they were empty of text, and contained only an image or an image of text. In the case of buttons, HTTPArchive Almanac says “often the reason this confusion occurs is due to the lack of a textual label. For example, a button displaying a left-pointing arrow icon to signify it’s the “Back” button, but containing no actual text”. (They found 75% of pages do this.) If that’s the case, the image needs alternate text that describes the function of the button or destination of the link. And don’t use icon fonts.

Use Heydon’s revenge.css bookmarklet to diagnose these.

Missing form input labels

52% of homepages had missing form labels. I prefer to wrap the label around its input like this:


<label>Email address: 
  <input type="email" />
</label>

I find it more robust than associating a label with an input using the for="id" pattern. If you can’t use an HTML label element, you can label an input for assistive technology using aria-label="useful instruction" or (less useful) a title attribute on the input. Use Heydon’s revenge.css bookmarklet to test these. WebAIM has more advanced form labelling techniques.
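
For comparison, here’s the for="id" pattern (the id value is arbitrary, but the two attributes must match exactly, which is the bit that tends to break):

<label for="email-field">Email address:</label>
<input id="email-field" type="email" />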

Missing document language

23% of homepages didn’t declare the human language of the document. This matters because (for instance) the word “six” is pronounced very differently by a French screen reader than by an English one. It’s simple to fix: <html lang="en"> tells assistive tech that the main language of this page is English. The codes are defined in BCP47.

Missing <main> elements

The HTTPArchive study of 5.8 million pages shows that only 26% of pages have a <main> element, and 8.06% of pages erroneously contain more than one main landmark, leaving screen reader users guessing which landmark contains the actual main content.

Solution: wrap your main content, that is, stuff that isn’t header, primary navigation or footer, in a <main> element. All browsers allow you to style it, and assistive technologies know what to do with it.

Happily, more than 50% of pages use <nav>, <footer> and <header>. In my opinion, <nav> should go around only your primary navigation (and can be nested inside <header> if that suits you). In its survey of screen reader users, WebAIM found that 26% of screen reader users frequently or always use these landmarks when navigating a page.
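
Putting those landmarks together, a minimal page skeleton might look like this (the contents are elided):

<body>
  <header>
    <a href="/"><img src="logo.svg" alt="home page"></a>
    <nav><!-- primary navigation only --></nav>
  </header>
  <main>
    <!-- everything that isn't header, navigation or footer -->
  </main>
  <footer><!-- small print and suchlike --></footer>
</body>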

Here’s a YouTube video of blind screen reader user Léonie Watson talking through how she navigates this site using the HTML semantics we’ve discussed.

There’s lots more to accessibility, especially if you have lots of JavaScript widgets and single-page application architecture. But my list will help you to avoid the most common accessibility errors and become a web superhero adored by millions. Moritz Gießmann has a nice single-page Accessibility Cheatsheet.

You can also make tagged accessible PDFs from your pages using Prince—it’s free for non-commercial use. If you’re one of the React Kool-Kidz™, I recommend using Tenon-UI: Tenon’s accessible React components library.

Buy me a pint when you see me next. xxx

Dive deeper?

The single source of truth is the Web Content Accessibility Guidelines (WCAG) 2.1 from the W3C. The UK’s Government Digital Service has a good, readable Understanding WCAG 2.1 guide. For advanced applications requiring ARIA, I find the W3C’s Using ARIA invaluable.

You want tools? The BBC has open-sourced its BBC Accessibility Standards Checker. Google Lighthouse and Tenon.io are also very good. Please note that no automated tool can be completely reliable, as the fun article Building the most inaccessible site possible with a perfect Lighthouse score demonstrates.

If you want a self-paced course, on International Day of Persons with Disabilities, W3C launched an Introduction to Web Accessibility free online course in cooperation with UNESCO. Enrol now; the course starts 28 January 2020.

It’s common in our cooler-than-Agile, post-Agile community to say that Agile teams who “didn’t get it” eschewed good existing practices in their rush to adopt new ways of thinking. We don’t need UML, we’re Agile! Working software over comprehensive documentation!

This short post is a reminder that it ran both ways, and that people used to the earlier ways of thinking also eschewed integrating their tools into the Agile methodology. We don’t need Agile, we’re Model-Driven! Here’s an excerpt from 2004’s UML 2 Toolkit:

Certain object-oriented methods on the market today, such as The Unified Process, might be considered processes. However, other lightweight methods, such as Extreme Programming (XP), are not robust enough to be considered processes in the way we use the term here. Although they provide a valuable set of interrelated techniques, they are often referred to as “small m methodologies” rather than as software-engineering processes.

This is in the context of saying that UML supports software-engineering processes in which a sequence of activities produce “documentation…about the system being built” and “a product that solves the initial problems is introduced and delivered”. So XP is not robust enough at doing those things for UML advocates to advocate UML in the context of XP.

So it’s not just that Agilistas abandoned existing practices, but there’s an extent to which existing practitioners abandoned Agilistas too.

November 22, 2019

Reading List 243 by Bruce Lawson (@brucel)

It’s been a busy week for one of the projects I’m involved with, along with my old chum Håkon Wium Lie (co-creator of CSS). Prince is a software package that produces beautiful, accessible PDFs from HTML, SVG and CSS. On Tuesday we released Prince 13 with support for CSS variables (aka custom properties), lots of goodies for non-Latin scripts like Arabic & Indic, & support for fragmenting single-column/row flex containers across multiple pages. Give it a whirl if you need to produce PDFs – it’s free for non-commercial use.

Then the next day, we open-sourced our Allsorts font parser, shaping engine, & subsetter for OpenType, WOFF, and WOFF2 under the Apache 2.0 license so everyone can have better Chinese, Japanese, Korean, and Indic scripts in PDF. Allsorts was extracted from Prince and is implemented in Rust.

November 20, 2019

A powerful platform for wealth managers

Multi-award-winning account aggregation software: moneyinfo is a client portal that gives clients secure access to their entire financial life in one place, all beautifully presented under your own brand.

I’ve been doing some consultancy work with moneyinfo for quite a while now. I started by giving their software a modern refresh and have since moved on to their app designs, desktop designs and plenty of other areas of the business that needed some UX and UI assistance.

The team at moneyinfo are really smart and never reject any of my ideas before I’ve presented them.

They are really open to bringing modern design ideas into their business.

The project can be tricky at times due to how much white labelling is needed within their brand. Each element needs to be considered for multiple colours, and components are highly customisable, which means no two screens will look the same.

Find out more about moneyinfo on their website.

If you like this design please share and if you’re in the market to book a freelance UI designer don’t hesitate to contact me.

A community of news, events and offers in your phone

PostBoard is a platform for the curation, sale and sponsorship of digital content linked by a common theme and often tied to a common geolocation.

PostBoard founder Saqib Malik came to me with a really interesting idea for a local community app; he wanted the product to really speak to small communities. The app focuses on three main areas: offers, news and events. It was designed to be highly searchable and to promote local businesses.

PostBoard UI Design

Saqib had a really great eye for design and knew what he wanted to see from the start.

We worked closely to design an app that was simple to use but had a smart, modern look.

Make sure you check out PostBoard if you’re in London, and sign up for more information on their website.

If you like this design please share and if you’re in the market to book a freelance UI designer don’t hesitate to contact me.

November 15, 2019

Imagine starting your day realising that someone found an outdated dependency in your project, upgraded it, opened a pull request with a detailed description, and the test suite had already been run. All that was left for you to do was carry out a code review and hit that merge button! That someone is Dependabot.

Why keep your dependencies up to date?

It is very important to keep your project’s dependencies up to date, for two reasons:

  • The latest version is usually the best one (new features, better security, improved performance and bug fixes)
  • Iterative improvements are better than big-bang changes

Plus it is really satisfying when the project is up to date.

How to keep your dependencies up to date

One way would be to regularly ask your package manager to list all the outdated dependencies, then upgrade them one by one, checking the changelogs and opening pull requests for each of them.
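
With npm, for example, that manual loop looks something like this (the package name is purely illustrative):

npm outdated                  # list dependencies with newer versions available
npm install lodash@latest     # upgrade one of them, after reading its changelog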

The better way is to have Dependabot do all of that work for you. Dependabot pulls down your dependency files and looks for any outdated or insecure packages. It then opens an individual pull request for each outdated or insecure dependency, complete with the changelog and release notes, and with the test suite already executed, leaving just a review for you to do before hitting merge. That seemed like such a good idea to us here at Talis that we’ve implemented this approach across many of our repositories.

Configuring Dependabot

Simply install the Dependabot app from the GitHub Marketplace and set up a plan (don’t worry, it’s free!). You then get a nice dashboard for configuring Dependabot’s settings. At Talis we use a configuration file placed in the root of the repo, .dependabot/config.yml, for Dependabot configuration.

Dependabot has a lot of configuration options. Some of the important and useful ones are:

  • package_manager: What package manager to use
  • update_schedule: How often to check for updates
  • default_reviewers: Reviewers to set on pull requests
  • allowed_updates: Limit which updates to allow, e.g. security updates only, or top-level dependencies only
  • version_requirement_updates: How to update your package manifest (e.g. package.json, Gemfile, etc.)

These options can be configured per repo. You can find all the options here.
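
As a rough sketch, a per-repo .dependabot/config.yml covering the options above might look like this (the reviewer name is a placeholder; check the schema against the current Dependabot docs):

version: 1
update_configs:
  - package_manager: "javascript"
    directory: "/"
    update_schedule: "weekly"
    default_reviewers:
      - "a-team-member"
    allowed_updates:
      - match:
          dependency_type: "direct"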

Then there are some account-level settings in the Dependabot app which are applied to all the repos. Some of the important ones are:

  • Automatically rebase PRs: We have that turned off, as we do not want Dependabot kicking off new builds all the time
  • PRs rate limit: A limit on the number of initial pull requests created for newly added projects. It’s a good idea to tweak this before adding Dependabot to any repo, so you aren’t overloaded by pull requests

This is what a PR created by Dependabot looks like:

[Screenshot: a pull request opened by Dependabot]

The PR includes a very pretty description of the changes included. Dependabot also aggregates everyone’s test results into a compatibility score, so you can gauge how likely a dependency update is to be backwards compatible and bug-free. There are also some commands which can be used to perform certain actions on the PR, e.g. rebase or recreate.

Dependabot and Github Security Alerts

GitHub also uses Dependabot for its security alerts. You get a “Create automated security fix” button against an alert in the security tab of your repo, which uses Dependabot to create a PR to upgrade the insecure dependency.

This comes with some caveats at the time of writing:

  • Creating an automated security fix from this section does not pick up the configuration you have defined in your repo for Dependabot
  • GitHub security alerts don’t seem to notify about sub-level dependencies for npm-shrinkwrap.json, whereas Dependabot does

To conclude: if you would like to automate your dependency updates, Dependabot is for you. And yes, we are on the right path to being taken over by bots.

November 12, 2019

Aldi (like Lidl) is a strange store. Not content with simply offering cut-price groceries, if you manage to walk out of the store without a brand new HD TV, a WiFi-extending double power socket, and an inflatable dinghy in your bag with your 17 tins of baked beans, you’ve got more resolve than I.

The middle aisle, as it is known, is a constantly rotating stock of goodies available at bargain prices. With everything from baby toys to inverter generators, it’s no surprise that it often has a wide range of tools available to buy under Aldi’s Workzone and Ferrex brands.

The problem with this is you can’t just pop to Aldi anytime to pick up the random orbit sander you want; instead you have to wait for it to reappear, and pick it up before they sell out. This has improved somewhat recently with the introduction of their e-commerce store, which lets you pre-order items and tends to have stock for longer than the brick-and-mortar stores, but it also operates on the principle of when it’s gone, it’s gone, so if you miss it, you’ll need to wait until they feature it again.

There’s also no guarantee that the drill they’re selling next week is the same one they sold last year, because Workzone and Ferrex are what’s known as private (or phantom) brands. These are brands owned not by a manufacturer or producer but by a retailer or supplier who gets its goods made by a contract manufacturer under its own label. As such, the Ferrex power screwdriver you purchased last week was likely manufactured by an entirely different company to the table saw you’re going to buy next week.

Oftentimes it’s easy to see who the manufacturer of a product is (this information is sometimes available in the small print on the box), and other times it’s obvious who it might be, so you’re able to do a direct price comparison and check the reviews of the product on sites like Amazon before you part with your cash.

But, are Workzone and Ferrex tools any good? In my opinion, the answer is yes.

As a new maker with a limited budget, I’ve purchased my fair share of tools from Aldi (13 at last count, as you can see from my tools list), and so far I’ve only had a problem with one of them.

The tool in question was a Ferrex band saw; the problem was that the saw blade refused to stay in place, slipping off as soon as it started to turn. So far as I can tell, I was simply unfortunate: I have used someone else’s Aldi band saw (which worked perfectly), and around nine out of ten reviews on their own site were positive, so I guess I just got a dodgy one.

But not to fear: in addition to their 3-year warranty, they offer a 30-day money-back guarantee, and they had no problem immediately refunding my money. I would have ordered another one with the refund, but seeing as I’d purchased it with money I didn’t really have, I decided I’d wait until next year.

Now, please understand that a £30 drill from Aldi isn’t going to compare with a £300 one from a brand like DeWalt. But they’re not really aimed at the same people, and for someone like me, who is a weekend woodworker and DIYer, I’m very happy to recommend Aldi’s Workzone and Ferrex products.

November 10, 2019

If you’re in a purely software business, your constraining resource is often (not always, not even necessarily in most cases, but often) the rate at which software gets changed. Well, specifically, the rate at which software gets changed in a direction your customers or potential customers are interested in. This means that the limiting factor on growth is likely to be the rate at which you can add features or fixes that attract new customers, or retain existing customers.

It’s common to see businesses where this constraint is not understood throughout management, particularly manifesting in sales. In a business-to-business context, symptoms include:

  • sales teams close deals with promises of features that don’t exist, and can’t exist soon.
  • there’s no time to fix bugs or otherwise clean up because of the new feature backlog.
  • new features get added to the backlog based on the size of the requesting customer, not the cost/benefit of the feature.
  • the product roadmap is “what we said we’d have, to whom, by when”, not “what we will have”.

As Eliyahu Goldratt says, you have to subordinate the whole process to the constraint. That means incentivising people to sell something a lot like what you have now, over selling a bigger number of things you don’t have now and won’t have soon.

November 08, 2019

Reading List 242 by Bruce Lawson (@brucel)

What I’ve Been Doing

Having had time to settle in to my new job, it’s become more apparent by the day that it was a good move. I have spent a lot of time enjoying the new set of challenges provided by my job, and being part of a robust code review culture does wonders for keeping you on your best behaviour. It also polishes your ego on the occasions where people don’t find much fault with your work, which I am striving to increase the frequency of.

What I’m Doing Now

I’ve enjoyed the opportunity to get a firmer grasp of AWS services beyond EC2. I’ve particularly enjoyed getting to grips with Kubernetes, as my previous ops experience mostly extended to SSH and hope. I’m aiming to pursue some AWS certifications to show off the cool new stuff I’m picking up. Does that let me put letters after my name? It had better.

Reading

November 07, 2019

Immutable changes by Graham Lee

The Fixed-Term Parliaments Act was supposed to bring about a culture change in the parliament and politics of the United Kingdom. Moving for the second reading of the bill that became this Act, Nick Clegg (then deputy prime minister, now member for Facebook Central) summarized that culture shift.

The Bill has a single, clear purpose: to introduce fixed-term Parliaments to the United Kingdom to remove the right of a Prime Minister to seek the Dissolution of Parliament for pure political gain. This simple constitutional innovation will none the less have a profound effect because for the first time in our history the timing of general elections will not be a plaything of Governments. There will be no more feverish speculation over the date of the next election, distracting politicians from getting on with running the country. Instead everyone will know how long a Parliament can be expected to last, bringing much greater stability to our political system. Crucially, if, for some reason, there is a need for Parliament to dissolve early, that will be up to the House of Commons to decide. Everyone knows the damage that is done when a Prime Minister dithers and hesitates over the election date, keeping the country guessing. We were subjected to that pantomime in 2007. All that happens is that the political parties end up in perpetual campaign mode, making it very difficult for Parliament to function effectively. The only way to stop that ever happening again is by the reforms contained in the Bill.

As we hammer out the detail of these reforms, I hope that we are all able to keep sight of the considerable consensus that already exists on the introduction of fixed-term Parliaments. They were in my party’s manifesto, they have been in Labour party manifestos since 1992, and although this was not an explicit Conservative election pledge, the Conservative manifesto did include a commitment to making the use of the royal prerogative subject to greater democratic control, ensuring that Parliament is properly involved in all big, national decisions—and there are few as big as the lifetime of Parliament and the frequency of general elections.

When a parliament is convened, the date of the next general election is automatically scheduled for the first Thursday in May, five years out. The Commons can vote, with a qualified majority, to hold an election earlier, and an election is automatically triggered if the government loses a no-confidence vote, but the prime minister cannot unilaterally declare an election date to suit their popularity with the electorate.

Observed behaviour shows that the Act has been followed to the letter, up to the current dissolution, which required a specific change to the rules. Has the spirit of the Act, the motivation presented above, survived intact? The dates of elections since the Act passed were:

  • 7 May 2015, the first Thursday in May at the end of a five-ish-year Parliament, chosen to bring the existing behaviour into sync with the planned behaviour.
  • 8 June 2017, after a qualified majority vote within the terms of the Act.
  • 12 December 2019, after the aforementioned Early Parliamentary General Election Act.

The reason for the disparity is that the intended goal—a predictable release schedule that makes it easier for everyone involved to prepare—doesn’t match the cultural drivers. The desire to release when we’re ready, and have the features that we want to see, remains immutable, and means that even though we’ve adopted the new rules, we aren’t really playing by them.

I was tempted to hit “publish” at this point and leave the software engineering analogy unspoken. I powered on: here are a few examples I’ve seen where the rule changes have been imposed but the cultural support for the new rules hasn’t been nurtured.

  • Regular releases, but the release is “internal only” or completely unreleased until all of the planned features are ready;
  • Short sprints, where everything that has gone from development into QA is declared “done”;
  • Sprint commitments, where the team also describe “stretch goals” that are expected to be delivered;
  • Sustainable pace, where the “velocity” is expected to increase monotonically;
  • Self-organizing teams, where the manager feeds back on everybody’s status update at the daily stand-up;
  • Continuous integration, where the team can disable or skip tests that fail.

All of these can be achieved without the attached sabotage, but that requires more radical changes than adding a practice to the team’s menu. Radical, because you have to go to the root of why you’re doing what you do. Ask what you’re even trying to achieve by having a software team working on your software, then compare how well your existing practice and your proposed practice support that value. If the proposed practice is better, then adopt it, but there’s going to be a transition period where you continually explain why you’re adopting it, show the results, and (constructively, politely, and firmly) guide people toward acceptance of and commitment to the new practice. Otherwise you end up with a new fixed-term parliament starting whenever people feel like it.

November 05, 2019

On exploding boilers by Graham Lee

Throughout our history, it has always been standardisation of components that has enabled creations of greater complexity.

This quote, from Simon Wardley’s finding a path, reminded me of the software industry’s relationship with interchangeable parts.

Brad Cox, in both Object-Oriented Programming: an Evolutionary Approach and Superdistribution, used physical manufacturing analogies (to integrated circuits, and to rifles) to invoke the concept of a “software industrial revolution” that would allow end users to assemble off-the-shelf parts to solve their problems. His “software ICs” built on ideas expressed at least as early as 1968 by Doug McIlroy. Joe Armstrong talked about a universal function registry, so that if someone writes sin/1 everybody else can use it.

Of course we have a lot of reusable components in software engineering now, and we can thank the Free Software movement at least as much as any paradigm of organising programming instructions. CTAN, CPAN, and later repositories act as the “component catalogues” that Cox discussed. If you want to make a computer do something, you can probably find an npm module or a Ruby gem that does most of the work for you. The vast majority of such components have free licenses; it’s rare to pay for a reusable component.

The extent to which they’re “standard parts”, on the model of interchangeable nuts and bolts or integrated circuits, is debatable. Let’s say that you download a package from NPM. We know that you use it by calling require (or maybe import)… but what does that give you? An object? A constructor? A regular function? Does it run anything as a result of calling require? Does it work in your node/ionic/electron/etc. context? Is it even a lump of regular JavaScript, or does it need transpiling, or access to a JVM, or some other niche requirement?

Whatever these “standard parts” are and however they’re used, you’re probably still doing a bunch of coding. These parts will do computery stuff, or maybe generic behaviour like authentication, date UIs, left-padding strings and the like. Usually we still have to develop our apps as “engineered” software projects with significant levels of custom coding, to make those “standard parts” actually solve a useful problem. There are still people working for retail companies maintaining online store applications across the four corners of the globe, despite the fact that globes don’t have corners, these things all work the same way, and the risks associated with getting them wrong are significant.

Perhaps this is because software is a distinct thing, and we can never treat it like industrial product manufacturing.

Perhaps this is because our ambition always runs out ahead of our capability. Whatever we can reproducibly build, we’d like to be building something greater.

Perhaps this is because we’re still in the cottage industry stage, where we don’t yet know whether or how to standardise the parts, and occasionally the boilers explode.

November 01, 2019

Reading List 241 by Bruce Lawson (@brucel)

October 31, 2019

Sprouts by Graham Lee

Having discussed reasons for change with a colleague on my team, we came up with the sprouts of change. Good software is antifragile in the face of changing:

  • Situation
  • People
  • Requirements
  • Organisation
  • Understanding
  • Technology
  • Society

Like any good acronym, it’s really tenuous.

October 30, 2019

Change by Graham Lee

I was just discussing software architecture and next steps with a team building a tool to help analyse MRI images of brains. Most of the questions we asked explored ways to proceed by focussing on change:

  • what if the budget for that commercial component shows up? How would that change the system?
  • what if you find this data source isn’t good enough? How would you find that out?
  • which of these capabilities does the customer find most important? When will they change their minds?

that sort of thing.

We have all sorts of words for planning for, and mitigating the risk of, changes in low-level software design. In fact a book on building maintainable software talks about nothing else, because maintainable software is antifragile software.

But it happened that I wasn’t reading that book at the time, I was reading about high-level design and software architecture. The guide I was reading talked a lot about capturing the requirements and constraints in your software architecture, and this is all important stuff. If someone’s paying for your thing, you need to ensure it can do the things they’re paying for it to do. After all, they’re probably paying to be able to do the things that your software lets them do; they aren’t paying to have some software. Software isn’t real.

However, most of the reason your development will slow down once you’ve got that first version out of the door is that the world (which might be real) changes in ways that it’s hard to adapt your software to. Most of the reason you’re not adding new features is that you’re fixing bugs, i.e. changing the behaviour of the software from one that matches the flawed conception you had of what it should do to one that matches the flawed conception you now have of what it should do.

A good architecture should identify, localise, and separate sources of change in the software system. And then it should probably do whatever you think the customers think they want.

October 28, 2019

With the rise of critical writing like Bertrand Meyer’s Agile! The Good, the Hype and the Ugly, Daniel Mezick’s Agile-Industrial Complex, and my own Fragile Manifesto, it’s easy to conclude that this Agile thing is getting tired. We’re comfortable enough now with the values and principles of the manifesto that, even if software has exited the perennial crisis (we still have problems), we’re willing to criticise our elders and betters rather than our own practices.

It’s perhaps hard to see from this distance, but the manifesto for Agile Software Development was revolutionary when it was published. Not, perhaps, among the people who had been “doing it and helping others to do it”.

Nor, indeed, would it have been seen as revolutionary by the people who were supposed to read it at the time. Of course we value working software over comprehensive documentation. Our three-stage signoff process for the functional specification before you even start writing any software is because we want working software. We need to control the software process so that non-working software doesn’t get made. Yes, of course working software is the primary measure of progress. The fact that we don’t know whether we have any working software until two thirds of the project duration has passed is just how good management works.

At one point, quite a few years after the manifesto was published and before everybody used the A-word to mean “the thing we do”, I worked at a company with a very Roycean waterfall process. The senior engineering management came from a hardware engineering background, where that approach to project management was popular and successful (and maybe helpful, but I’m not a hardware engineer). To those managers, Agile was an invitation for the inmates to take over the asylum.

Developers are notoriously fickle and hard to manage, and you want them to create their own self-organising team? Sounds like anarchy! We understand that you want to release a working increment every two to four weeks with a preference toward the shorter duration, but doesn’t that mean senior managers will spend their entire lives reviewing and signing off on functional specifications and test plans?

The managers who were open to new ideas were considering the Rational Unified Process, which by that time could be defined as Agile for the “nobody ever got fired for buying an IBM” crowd:

The Rational Unified Process. Image: wikimedia

That software engineering department now has different management and is Agile. They have releases at least every month (they already released daily, though those releases were of minimal scope). They respond to change rather than follow a plan (they already did this, though through hefty “change control” procedures). They meet daily to discuss progress (they already did this).

But, importantly, they do the things they do because it helps them release software, not because it helps them hit project milestones. The revolution really did land there.

October 24, 2019

In one part of the book Zen and the Art of Motorcycle Maintenance, which is neither about Zen nor motorcycle maintenance, there are two motorcycles and two riders. John Sutherland is a romanticist who appreciates the external qualities of his motorcycle: its aesthetics, and its use as a vehicle. The narrator is a classicist who appreciates the internal qualities of his motorcycle: its workings, parts, and mechanisms. When Sutherland has a problem with his bike he takes it to a mechanic. When the narrator does, he rationalises about the problem and attempts to discover a solution.

The book, which as its subtitle gives away is “an inquiry into values”, then follows the narrator’s exploration of a third way of considering quality that marries the romantic and classical notions holistically.

Now we come onto software. Software doesn’t exist. At some level, its abstractions and mathematics get translated into a sequence of states of an electronic machine that turns logic into procedure: but even that is a description that’s a few degrees abstracted from what software and computers really do.

Nonetheless, software has external and internal qualities. It has aesthetics and utility, and can be assessed romantically. A decidedly pedestrian word to describe the romanticist view of software is “requirements”, but it’s a common word in software engineering that means the right thing.

Software also has workings, parts, and mechanics. Words from software engineering to describe the classical view of software include architecture, design, clean code, SOLID…

…there are many more of these words! Unsurprisingly, the people who build software and who change software tend to take a classical view of the software, and have a lot more words to describe its internal qualities than its external qualities.

Typically, the people who are paying for software are interested in the romantic view. They want it to work to achieve some goal, and want someone else (us!) to care about what makes it work. Perhaps that’s why so many software teams phrase their requirements as “As a romantic, I want to task so that I can goal.”

Which is to say that making software professionally involves subordinating classical interpretations of quality to romantic interpretations. Which is not to say that a purely-classical viewpoint is without value. It’s just a different thing from teaching a computer somersaults for a paying audience.

And maybe that subordination of our classical view to the customer/gold owner’s romantic view is the source of the principles:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

and:

Working software is the primary measure of progress.

In fact, this second one is not quite true. It suggests that you could somehow “count software”, and the more (working) software you’ve delivered, the better you’re doing. In fact, romanticism shows us that people only want software in that it enables some process or business opportunity, or makes it more efficient, or reduces errors, or lets them enjoy some downtime, or helps them achieve some other goal. So really progress toward that goal is the primary measure of progress, and working software is a leading metric that we hope tells us how we’re working toward that goal.

So all of those code quality and software architecture things are in support of the external view of the software, which is itself in support of some other, probably non-software-related, goal. And that’s why the cleanliness, or architectural niceness, or whatever classical quality, of the code is not absolute, but depends on how those qualities support the romantic qualities of the code.

Real life comes at you fast, though. When you’re working on version 1, you want to do as little work, as quickly as possible, to get to the point where you can validate that there are enough customers who derive enough value to make the product worthwhile. But by the time you come to work on version 1.0.1, you wish you’d taken the time to make version 1 maintainable and easy to change. Most subsequent versions are a little from column A and a little from column B, as you try new things and iterate on the things that worked.

As fast as possible, but no faster, I guess.

October 18, 2019

Save time writing release notes!

Why are release notes important?

Here at Talis, we use release notes to communicate changes to the rest of the business. They consist of a high-level overview of the changes made, along with links to Github issues where the reader can find more details should they require it. They also contain data that can be important operationally, such as the date we deployed the release and the build number that generated the artifact. This small piece of information is vital should a bug be discovered in production and we need to trace it back to a specific change.

For example, let’s say we refactored part of our codebase, then a month later we discovered some documents were missing attributes. When the discovery is made that we are missing attributes on documents, we can find the earliest document that is missing attributes and then immediately find releases around that time. From there we can easily debug the cause of the problem, and then work on a fix as well as fixing all documents since that release.

Save time writing release notes!

As important as release notes are, don’t waste your time writing them. The process can easily be automated! For years we have had a bash script that would generate release notes. This bash script would go through all the commits since the last release and then output some markdown for you to copy and paste into the GitHub release. As you can imagine, it wasn’t much fun running a bash script and then copying and pasting the output into GitHub. But it did the job we needed it to do.

Moving to Release Drafter

One of our developers, Ben, spent his personal development time (here at Talis, developers can spend half a day a week learning new technologies or anything unrelated to their current theme of work) investigating better ways to automate the creation of our release notes, and subsequently gave a talk to the rest of the development team about Release Drafter. The main idea behind Release Drafter is that it creates a draft release containing all the changes in master since you last published a release; you can also categorise the contents based on the labels added to each pull request. Then, when you come to release, you just add the tag and the title and hit the publish button.

Shortly after the talk, we began trialling Release Drafter on several repositories with a custom configuration. The standout feature for us was the ability to parse commit messages with replacers, which meant we could extract data from commit messages to provide links to issues. After a week or so of trialling Release Drafter, we decided to roll it out across all our repositories. To do this with a custom configuration, we created a .github repo in the organisation and added our config to .github/release-drafter.yml as per the documentation. We then installed Release Drafter as an app for the organisation and enabled it for the majority of our repositories.

So, with a simple configuration, our release notes are now generated automatically. This does require that commits start with ISSUE-<issue_number> to correctly link back to the issue within GitHub.

_extends: .github
name-template: Release <TAG> on YYYY-MM-DD
replacers:
  - search: '/ISSUE-(\d+)/gi'
    replace: '[ISSUE-$1](https://github.com/talis/planning/issues/$1)'
categories:
  - title: 'Features'
    label: 'feature'
  - title: 'Bug Fixes'
    label: 'bug'
template: |
  $CHANGES
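
To illustrate with a hypothetical example: a merged pull request titled ISSUE-123 Fix pagination on search results would, given the replacer above, appear in the drafted notes as something like:

- [ISSUE-123](https://github.com/talis/planning/issues/123) Fix pagination on search results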

Thanks…

Now our process for creating release notes has been simplified from manually copying the output of a bash script to editing a precompiled release draft – and not a single developer misses the trusty old bash script we used to run. Thanks to the team at Release Drafter for helping to make our workflow that little bit easier.

October 16, 2019

A question of focus by Graham Lee

The problem with The Labrary is that I offer to do so many things – because I could do them, and do them well – that it can be hard to find the one thing I could do for you that would be most helpful:

  • Artificial Intelligence
  • Agile Development
  • Continuous Delivery
  • Software Architecture
  • Technical Writing
  • Developer Experience
  • Programmer Mentoring

Each of these supports the mission of “making it faster and easier to make high-quality software that respects privacy and freedom”, but the list as a whole is overwhelming. I have credentials and experience to back up each of them, but I probably don’t have the reputation as a general expert that someone like Dan North or Liz Keogh can use to have people ask them anything.

So I want to pick one. One thing, probably from that list, and pivot to focus on that. Or at least get in through the door that way, then have the conversations about the other things once you know how much faster and easier I make it for you to make high-quality software.

And I’d really value your suggestions. Which one thing do you know me for, above all others? Which one thing is the pain that the place you work, or places you’ve worked, most need fixing?

Comment here, have a chat, send an email. Thanks for helping me find out what I want to be when I grow up.

October 11, 2019

Reading List 240 by Bruce Lawson (@brucel)

Hello, you kawaii kumquat! Here’s this week’s lovely list o’ links for your reading pleasure. It’s been a while because I’ve been gallivanting around Japan, Romania and the Netherlands, so this is a bumper edition.

There won’t be a reading list next week as I’m going off the grid to read books and record music in a 400-year-old farmhouse in the countryside, far from WiFi and the bleeps and bloops of notifications. Until next time, hang loose and stay groovy.

October 09, 2019

A paraphrased conversation, the other day, between me and a customer of one of my customers:

Me: Are you experienced at working with my customer’s developer APIs?

Them: I always feel like a newbie, because there’s so much stuff. But I always end up finding the docs I’m looking for.

Me: I’m writing the docs.

Them: Well, thanks! :D

Whether you’re writing developer APIs or graphical user interfaces, quality documentation that’s easy to find and use when needed is the best way to turn customers from novices who find the complexity off-putting into novices who know they’ll be able to tackle whatever’s coming their way.

Quality documentation is also useful for improving the quality of the software itself.

Docs-driven development

If you already know about test-driven development, you know that a benefit of TDD as a design tool is that it encourages you to think about your code from the perspective of how it will be used. Rather than implementing an algorithm then exposing an API that you hope will be useful, you design the API that helps solve the problem then implement an algorithm to support the use of that API.

Documentation is another tool for encouraging empathy in design. For every point you have to explain, you get to ask:

  • “Why do I have to explain this?”
  • “Is there another way to design this such that I don’t need to tell people about this detail?”
  • “Is the thing that I’m telling people how to do, the thing that they would expect to want to do?”

Dev-driven documentation

The questions listed above can be most effectively answered if documentation is part of your iterative cycle of continuous improvement. Documentation can inform design and development, by pointing out cumbersome or difficult parts of the implementation. Development can inform documentation, by showing where the complexity lies and how to deal with it.

Documentation interacts with other activities, too. Test plans should ensure that they cover examples or walkthroughs from the documentation, so that you know the examples you’re giving to your customers actually work. Documenters should collaborate with testers to ensure a shared understanding of what the software is aiming to achieve.

Documents like API specifications, user manuals, or walkthrough videos should be versioned and built alongside the corresponding versions of the software.

Working software over comprehensive documentation

Throughout these activities, the point is not to generate documentation for its own sake. One office I worked in had a shelf containing several feet of documentation for a UNIX system that I never opened: the online documentation and a couple of cookbook-style books were sufficient.

The reason for putting effort into your software’s documentation is that this effort yields improvements in the software. A more empathetic design, a better-tested implementation, and more confident customers are all steps on the path to higher-quality software, easier and faster. And of course, the Labrary can help you with that.

October 03, 2019

As a software engineer, it’s easy to get work engineering software. Well, maybe not easy, but relatively so: that is the kind of work that comes along most. The kind of work that people are confident I can do. That they can’t do, so would like me to do for money.

It’s also usually the worst work available.

I don’t want to take your shopping list of features, give you a date and a cost, then make those features. Neither of us will be very happy, even if it goes well.

I want to get an understanding of your problem, and demonstrate how software can help in solving it. Maybe what we need to understand isn’t the problem you presented, but the worse problem that wasn’t on your mind. Or the opportunity that’s worth more than a solution to either problem.

Perhaps we ask a question, in solving your problem, to which the answer is that we don’t know, and now we have another problem.

You might not need me to build all of the features you thought of, just one of them. Perhaps that one works better if we don’t build it, but configure something that already exists. Or make it out of paper.

You understand your problem and its domain very well. I understand software very well. Let’s work together on combining that expertise, and both be happier in the process.

September 20, 2019

Recently I was looking at improving the performance of rendering a Space Syntax OpenMapping layer in MapServer. The openmapping_gb_v1 table is in a PostGIS database with an index on the geometry column. The original MapServer DATA configuration looked like this:

DATA "wkb_geometry from (select meridian_class_scale * 10 as line_width, * from openmapping …

September 19, 2019

[:ChangeLog
 (:v0.2
  "recognise that 'resolving dependencies' and 'build' are different operations."
  "Add conversion tools between project.clj & deps.edn")
 (:v0.3
  "Remove CLI tools described as a build tool comment.")]

WARNING
I still have much to learn about ClojureScript tooling but I thought I’d share what (I think) I’ve learned, as I have found it difficult to locate advice for beginners that is still current. This is very incomplete. It may stay that way or I may update it into a living document. I don’t actually have much advice to give and it’s only about the paths that have interested me.

Clojure development requires:

  • a text editor,

and optionally,

  • a REPL, for a dynamic coding environment
  • dependency and build tool(s).

The absolute minimum Clojure environment is a Java .jar file, containing the clojure.main/main entry point, which can be called with the name of your file.clj as a parameter, to read and run your code. I don’t think anyone does that, after they’ve run it once to check it flies.
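For the record, that bare-bones invocation looks something like this (a sketch, assuming a Clojure 1.8 jar, where a single jar was still self-contained; from 1.9 onwards the spec jars also need to be on the classpath):

$ java -cp clojure-1.8.0.jar clojure.main hello.clj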

Based on 2 books, ‘Clojure for the Brave & True’ and ‘Living Clojure’, my chosen tools are emacs for editing, with CIDER connecting a REPL, and Leiningen as dependency & build tool. ‘lein repl’ can also start a REPL.
Boot is available as an alternative to Leiningen but I got the impression it might be a bit too ‘exciting’ for a Clojure noob like me, so I haven’t used it yet.
CIDER provides a client-server link between an editor (I’m learning emacs) and a REPL.

If you use Leiningen, it comes with a free set of assumptions about development directory structure and the expectation that you will create a file, project.clj, in the root directory of each ‘project’, containing a :dependencies vector. Then magic happens. If you change the dependencies of your project, the config fairies work out everything else that needs changing.
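A minimal project.clj sketch (the project name and versions here are examples only):

;; project.clj, read by Leiningen from the project root
(defproject my-app "0.1.0-SNAPSHOT"
  :description "An example project"
  ;; change this vector and Leiningen fetches whatever else is needed
  :dependencies [[org.clojure/clojure "1.10.1"]])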

Next, I wanted to start using ClojureScript (CLJS.) I assumed that the same set of tools would extend. I was wrong to assume.
Unfortunately, CLJS tooling is less standardised and doesn’t seem to have reached such a stable state.

In ‘Living Clojure’, Carin Meier suggests using cljsbuild. It uses the lein-cljsbuild plugin and the command:

lein cljsbuild auto

to start a process which automatically re-compiles whenever a change is saved to the cljs source file. If the generated JavaScript is open in a browser, then the change will be shown in the browser window. This is enough to get you going. It is my current state.
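For reference, the plugin is configured in project.clj; a minimal sketch (the plugin version, paths and filenames are illustrative only):

;; in project.clj
:plugins [[lein-cljsbuild "1.1.7"]]
:cljsbuild {:builds [{:source-paths ["src-cljs"]
                      :compiler {:output-to "resources/public/js/main.js"
                                 :optimizations :whitespace
                                 :pretty-print true}}]}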

I’ve read that there are other tools such as Figwheel, now transitioning to ‘Figwheel Main’, which hot-load the transpiled code into the browser as you change it.
There is a lein-figwheel as well as a lein-cljsbuild, which at least sounds like a drop-in replacement. I suspect it isn’t that simple.

There are several REPLs, though there seems to be some standardisation around nrepl.
It was part of the Clojure project but now has its own nrepl/nrepl repository on Github. It is used by Clojure, ‘lein repl’ and by CIDER.

There is something called Piggieback which adds CLJS support to NREPL. There is a CIDER Piggieback and an NREPL Piggieback. I have NO IDEA! (yet.)
shadow-cljs exists. Sorry, that’s all I have there too.

At this point in my confusion, a dependency issue killed my tool-chain.
I think one of the config fairies was off sick that day. The fix was a re-install of an emacs module. This forced me to explore possible reasons. I discovered the Clojure ‘Getting Started’ page had changed (or I’d never read it.)
https://clojure.org/guides/getting_started

There are also (now?) ‘Deps and the CLI Tools’ https://clojure.org/guides/deps_and_cli and https://clojure.org/reference/deps_and_cli

I think these are new and I believe they are intended to be the beginners’ entry point into Clojure development, before you dive into the more complex tools. There are CLI commands: ‘clojure’ and a wrapper that provides line-editing, ‘clj’
and a file called ‘deps.edn’ which specifies the dependencies, much as the :dependencies vector in ‘project.clj’ does for Leiningen, but with a different syntax.
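A minimal deps.edn sketch (the version is just an example):

;; deps.edn, read by the clojure/clj CLI tools from the project root
{:deps {org.clojure/clojure {:mvn/version "1.10.1"}}}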

I’m less clear if these are also intended as new tools for experts, to be used by ‘higher order’ tools like Leiningen and Figwheel, or whether they will be adopted by those tools.

[ On the day I wrote this, I had a tip from didibus on clojureverse.org that there are plugins for Leiningen and Boot to use an existing deps.edn,

so perhaps this is the coming future standard for specifying & resolving dependencies, while lein and boot continue to provide additional build capabilities. deps.edn refers to Maven. I discovered elsewhere that Maven references existed but were hidden away inside Leiningen. It looks like I need to learn a little about Apache Maven. I didn’t come to Clojure via Java but I can see the advantages to Java practitioners of using their standard build tool. I may need to drop down into Java one day, so I guess I may as well learn about Java-land now.

Also via https://clojureverse.org/t/has-anyone-written-a-tool-to-spit-out-a-deps-edn-from-a-project-clj/2086, there is depify (https://github.com/hagmonk/depify), which ‘goes the other way’, trying its best to convert a project.clj to a deps.edn. Hopefully that would be a ‘one-off’? ]

I chose the Clojure language for its simplicity. The tooling journey has been longer than I expected, so I hope this information cuts some corners for you.

[ Please let me know if I’m wrong about any of this or if there are better, current documents that I should read. ]

September 13, 2019

Reading List 239 by Bruce Lawson (@brucel)

Hello, you cheeky strawberry! Here’s this week’s lovely list ‘o’ links for your reading pleasure.

There won’t be a reading list for a few weeks, as I’m writing this from a train to London, commencing a 3-week jaunt around conferences in Japan and Europe. Until next time, hang loose and stay groovy.

September 12, 2019

September 02, 2019

I use the free version of the excellent Mailchimp for WP plugin to allow visitors to this site to sign up for my Magical Brucie Mail and get my Reading List delivered to their inboxes.

When I did my regular accessibility check-up (a FastPass with the splendid Accessibility Insights for Web Chromium plugin by Microsoft) I noticed that the Mailchimp signup form fails two WCAG guidelines:

label: Form elements must have labels (1) WCAG 1.3.1, WCAG 3.3.2

This is because the out-of-the-box default form doesn’t associate its email input with a label:


<p>
	<label>Email address:</label>
	<input type="email" name="EMAIL"  required />	
</p>

I’ve raised an issue on GitHub. Update, 6 Sept: the change was turned down.

Luckily, the plugin allows you to customise the default form. So I’ve configured the plugin to associate the label and input by nesting the input inside the label. (This is more robust than using the IDref way because it’s not susceptible to Metadata partial copy-paste necrosis.) I also killed the placeholder attribute because I think it’s worthless on a single-input form.

You can do this by choosing “Mailchimp for WP” in your WordPress dashboard’s sidebar, choosing “Form” and then over-riding the default with this:


<p>
	<label>Email address: 
	<input type="email" name="EMAIL"  required />
	</label>
</p>

<p>
	<input type="submit" value="Sign up" />
</p>

And, yay!
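For comparison, the IDref way I decided against would look something like this (the id value here is invented, not Mailchimp’s):

<p>
	<label for="signup-email">Email address:</label>
	<input type="email" name="EMAIL" id="signup-email" required />
</p>

It works just as well for assistive technology, but the nested version survives partial copy-pasting better because there are no ids to lose.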

August 30, 2019

Reading List 238 by Bruce Lawson (@brucel)

Bit of a plug: I’m co-curating and MCing JSCamp – a one-day JavaScript conference in Bucharest, Romania on 24th of September. It’s the conference I want to attend – not full of frameworks and shinies, but full of funny, thought-provoking talks about making the Web better. The speaker line-up is cosmic, maaaan. Bucharest is a lovely city, based on Paris; accommodation and food are cheap and it’s very easy to get to from anywhere in Europe. Come along, or tell your friends! Or both! (And no, I’m not on a percentage!)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way.

August 28, 2019

I just had the misfortune to read the following:

“Would you accept a good PR for Hitler?” -> well, for me, if it’s good code, why not?

This was written by somebody deliberately invoking an extreme example in their argument that we ought to “leave politics at the door” when it comes to tech, but it’s a sentiment that I have seen repeated too many times over the past few days.

I can’t help but notice that pleas to leave politics at the door are almost invariably uttered by people like me: white, male, secure in our positions.

So yes, the person who wrote that was being deliberately extreme by using Hitler as an example, but let’s run with it.

Let’s be very generous and assume the best. Let’s say that the contributor in question, despite being a genocidal ghoul, is going to be on their best behaviour. No inciting violence in repository issues. No hateful rhetoric in code reviews. No fascist imagery in giant ASCII code comments.

Do you really not see how working alongside this person might make someone uncomfortable?

Put yourself in the shoes of someone Hitler doesn’t like very much. It’s a big list, so this step shouldn’t take a lot of imagination. Now imagine that you and Hitler are contributing to the same project. You check your e-mails and see their name. They’re reviewing your code. They’re talking to your friends. You have to trust them to be impartial and leave their attempts to harm you to outside of project hours.

Imagine that if you speak up about how horrible this feels, you get told to pipe down. Gosh, why do you have to make this so political?

Now imagine that the other people on the project are perfectly happy to work with them. He writes good code, so his presence doesn’t bother them. Boy, they sure are getting along. Just Hitler and other members of your community. Getting along like a house on fire.

The fact that he doesn’t view you as a person isn’t important because it’s not relevant to the project. Don’t bring it up. You’re making it political.

And that’s the best case, where your antagonist doesn’t use their influence in the project to harm you. The best case.

Must feel real good.

August 23, 2019

Reading List 237 by Bruce Lawson (@brucel)

Bit of a plug: I’m co-curating and MCing JSCamp – a one-day JavaScript conference in Bucharest, Romania on 24th of September. It’s the conference I want to attend – not full of frameworks and shinies, but full of funny, thought-provoking talks about making the Web better. The speaker line-up is cosmic, maaaan. Bucharest is a lovely city, based on Paris; accommodation and food are cheap and it’s very easy to get to from anywhere in Europe. Come along, or tell your friends! Or both! (And no, I’m not on a percentage!)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way.

August 21, 2019

In the beginning, there was the green field. The lead developer, who may have been the only developer, agreed with the product owner (or “the other member of the company” as they were known) what they would build for the first two weeks. Then File->New Project… happened, and they smashed it out of the park.

The amorphous and capricious “market” liked what they had to offer, at least enough to win some seed funding. The team grew, and kept the same cadence: see what we need to do for the next ten business days, do it, celebrate that we did it.

As the company, its customers, and its market mature, things start to slow down. It’s imperceptible at first, because velocity stays constant. The CTO can’t help but think that they get a lot less out of a 13-point story than they used to, but that isn’t a discussion they’re allowed to have. If you convert points into time then you’re doing old waterfall thinking, and we’re an agile team.

Initially the dysfunction manifests in other ways. Developers complain that they don’t get time to refactor, because “the business” doesn’t understand the benefits of clean code. Eventually time is carved out to clean things up, whether in “hardening sprints” or in effort allocated to “engineering stories”. We are getting as much done, as long as you ignore that less of it is being done for the customers.

Stories become task-sliced. Yes, it’s just adding a button, but we need to estimate the adding a component task, the binding the action task, the extending the reducer task, the analytics and management intelligence task. Yes we are getting as much done, as long as you ignore that less of it has observable outcomes.

Rework increases too, as the easy way to fit a feature into the code isn’t the way that customers want to use it. Once again, “the business” is at fault for not being clear about what they need. Customers who were previously flagship wins are now talked about as regressive laggards who don’t share the vision. Stories must have clearer acceptance criteria, the definition of done must be more explicit: but obviously we aren’t talking about a specification document because we’re an agile team. Yes we’re getting as much done, as long as you ignore that a lot of what we got done this fortnight was what we said we’d done last fortnight.

Eventually forward progress becomes near zero. It becomes hard to add new features, indeed hard even to keep up with the competitors. It’s only two years ago that we were five years ahead of them. People start demoing new ideas in separate apps, because there’s no point dreaming about adding them to our flagship project. File->New Project… and start all over again.

What happened to this team? Or really, to these teams, as I’ve seen this story repeated over and over. They misread “responding to change over following a plan” as “we don’t need no stinking plan”.

Even if you don’t know exactly where you are going at any time, you have a good idea where you think you’re going. It might be spread around the company, which is why we need the experts around the table. Some examples of where to find this information:

  • The product owner has a backlog of requested features that have yet to be built.
  • The sales team have a CRM indicating which prospects are hottest, and what they need to offer to close those deals.
  • The marketing director has a roadmap slide they’re presenting at a conference next month.
  • The CTO has budget projections for the next financial year, including headcount changes and how they plan to reorganise the team to incorporate these changes.
  • The CEO knows where they want to position the company in the market over the next two years, and knows which competitors, regulatory changes, and customer behaviours threaten that position and what about them makes them a threat.
  • Countless spreadsheets, databases, and “business intelligence” dashboards across multiple people and departments.

No, we don’t know the future, but we do know which futures are likely and of those, which are desirable. Part of embracing change is to make those futures easier to cope with. The failure mode of many teams is to ignore all futures because we aren’t in any of them yet.
We should be ready for the future we expect, and both humble and adaptable enough to get ready for a different future when things change. Our software should represent our current knowledge of our problem and its solution, including knowledge about likely developments (hey, maybe there’s a reason they call us developers!). Don’t add the things you aren’t going to need, but don’t exclude the possibility of adding them out of spite for a future that may well come to pass.

August 19, 2019

One of the principles behind the manifesto for Agile software development says:

Business people and developers must work
together daily throughout the project.

I don’t like this language. It sets up the distinction between “engineering” and “the business”, which is the least helpful language I frequently encounter when working in companies that make software. I probably visibly cringe when I hear “the business doesn’t understand” or “the business wants” or similar phrases, which make it clear that there are two competing teams involved in producing the software.

Neither team will win. “We” (usually the developers, and some/most others who report to the technology office) are trying to get through our backlogs, produce working software, and pay down technical debt. However “the business” get in the way with ridiculous requirements like responding to change, satisfying customers, working within budget, or demonstrating features to prospects.

While I’ve long pushed back on software people using the phrase “the business” (usually just by asking “oh, which business do you work for, then?”) I’ve never really had a replacement. Now I try to say “experts around the table”, leaving out the information about what expertise is required. This is more inclusive (we’re all experts, albeit in different fields, working together on our common goal), and more applicable (in research software engineering, there often is no “the business”). Importantly, it’s also more fluid, our self-organising team can identify lack of expertise in some area and bring in another expert.

August 17, 2019

Most of what I know about “the economy” is outdated (Adam Smith, Karl Marx, John Maynard Keynes) or incorrect (the news) so I decided to read a textbook. Basic Economics, 5th Edition by Thomas Sowell is clear, modern, and generally an argument against economic regulation, particularly centralised planning, tariffs, and price control. I still have questions.

The premise of market economics is that a free market efficiently uses prices to allocate scarce resources that have alternative uses, resulting in improved standard of living. But when results are compared, they are given in terms of economic metrics, like unemployment, growth, or GDP/GNP. The implication is that more consuming is correlated with a better standard of living. Is that true? Are there non-economic measurements of standard of living, and do they correlate with the economic measurements?

Even if an economy does yield “a better standard of living”, shouldn’t the spread of living standards and the accessibility of high standards across the population be measured, to determine whether the market economy is benefiting all participants or emulating feudalism?

Does Dr. Sowell arrive at his office at 9am and depart at 5pm? The common 40-hour work week is a result of labour unions and legislation, not supply and demand economics. Should we not be free to set our own working hours? Related: is “unemployment” such a bad thing, do we really need everybody to work their forty hours? If it is a bad thing, why not reduce the working week and have the same work done by more people?

Sowell’s argument allows that some expenses, notably defence, are better paid for centrally and collectively than individually. We all get the same benefit from national defence, but even those who are willing to pay would receive less benefit from a decentralised, individually-funded defence. Presumably the same argument can be applied to roads, too, or space races. But where are the boundaries? Why centralised military, say, and not centralised electricity supply, healthcare, mains water, housing, internet service, or food supply? Is there a good “grain size” for such centralising influences (it can’t be “the nation”, because nations vary so much in size and in centralisation/federation) and if so, does it match the “grain size” for a market economy?

The argument against a centralised, planned economy is that there’s too much information required too readily for central planners to make good judgements. Most attempts at a planned economy preceded broad access to the internet and AI, two technologies largely developed through centralised government funding. For example, the attempt to build a planned economy in Chile got as far as constructing a nationwide Telex network before being interrupted by the CIA-funded Pinochet coup. Is this argument still valid?

Companies themselves are centralised, planned economies that allocate scarce resources through a top-down bureaucracy. How big does a company need to get before it is not the market, but the company’s bureaucracy, that is the successful system for allocating resources?

August 16, 2019

Reading List 236 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way.

August 09, 2019

Introduction A Moment of Madness is a project that I’ve been collaborating on with Katie Day of The Other Way Works for over 5 years now!!!  In fact you can even see some of the earlier blog posts on it here: In 2014 back when it was still called ‘Agent in a Box’ In 2016 […]
Introduction This is a loooong overdue post about a collaborative, strategy game I built last summer (2018), SCOOT3.  ‘Super Computer Operated Orchestrations of Time 3’ is a hybrid board game / videogame with Escape Game and Strategy elements designed to be played in teams of up to 10 and takes ~45 mins.  It is one […]

Reading List 235 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way!

August 06, 2019

The first supper by Daniel Hollands (@limeblast)

Toward the back of our garden, just before the newly built workshop, we have a beautiful pergola that is intertwined with a wisteria tree. This provides cover to a patio area that is prime real estate for a dining table, so a few weeks ago I took up the challenge of building one.

The plans I used for the build were created by Shanty2Chic, and are special because they’re designed to be built out of stud timber 2x4s using nothing but a mitre saw and pocket hole jig.

Mitre saw mishap

I’d had the date of the build scheduled in my diary for around three weeks, and made sure I had everything I needed in time. The date of the build was important as we had a garden party scheduled for the following weekend – so imagine my frustration when just a week before, the base on my mitre saw cracked.

Thankfully, the saw has a three year warranty on it, and Evolution were happy to pick it up for repair – and with credit where credit is due, they got it back to me in time for the build – even if there was a slight cock-up when shipping it back to me (two saws got swapped during packing, meaning my saw went to someone else, and I received theirs), which Evolution were also quick to rectify.

What Evolution didn’t do, however, was calibrate the saw before sending it back to me, something I didn’t work out until after I’d built the two trestles. I considered scrapping them and starting again after I’d calibrated the saw, but decided they were stable enough, even if a little wonky.

At least I know how to calibrate the mitre saw now, and have learned a valuable lesson about being square.

Pocket hole jig

Apparently, pocket hole jigs are a bit divisive in the woodworking community, and are often viewed as “cheating” or “not real woodworking” by the elite. Steve Ramsey has recently put out a video highlighting this nonsense for what it is, which I’m really happy about, as I’d hate for someone to be shamed out of using a perfectly suitable tool for a job based on the idiotic opinions of the elite minority.

If it’s stupid but it works, it isn’t stupid.

Murphy’s Law

That said, other than briefly seeing one at The Building Block, and watching makers use them in YouTube videos, I’d never used one myself, which is why I asked my parents for one as a birthday present.

Kreg are the brand which seem most popular, and I very nearly asked for the K5 Master System, but after doing some research I decided to go for the Trend PH/JIG/AK. Partly because they’re a British company, but mostly because it’s made out of aluminium, rather than plastic like the Kreg jigs, which should make it more durable.

For obvious reasons, I can’t compare it with any other jigs, but I can say that the kit came with everything I needed, including a clamp, some extension bars to help hold longer pieces of wood, and a small sample of screws, all housed in a robust case. Because I’d need a lot more screws than it came supplied with, I also picked up the PH/SCW/PK1 Pocket Hole Screw Pack (one of the few things I didn’t buy from Amazon, as it’s listed for half the price at Trend Direct).

I found using the jig to be easy and it worked perfectly, even if drilling nine pieces of wood five times each became a little tedious. My only complaint is the square driver screws, which are apparently designed to avoid cam out, but cammed out a lot anyway. Maybe I was doing it wrong?

The build

Other than the calibration and squareness issues mentioned above, I think the build went well. I’m a lot more confident with my skills now than I was a year ago, although it’s obvious I’ve got a lot to learn.

Although the plans did have an accompanying video, it served as more of an overview and general build video, rather than what I was used to from The Weekend Woodworker (which features much more hands on instruction at each stage). But armed with the knowledge I’d gained in the past year, I felt able to step up to the challenge of reading the plans, and following the instructions myself.

I made some small variations to the plans for the top – specifically I decided to use the full 2.4 meter length of the studs, rather than cutting them down to the 1.9 meters as defined in the plans. This is because we had plenty of space under the pergola, and it would allow additional people to sit at the ends. I also decided to leave the breadboards off, as I think they’re purely decorative in this instance, and I decided it wasn’t worth the extra wood.

I painted it using Cuprinol Garden Shades; Silver Birch for the top, and Natural Stone for the base.

Initially I attempted to use the Cuprinol Spray & Brush unit that we’d picked up to paint our fence, but it didn’t work very well. I think this is because it’s designed to cover much larger surfaces with a lot more paint than I needed, so because I barely filled it with paint, it spluttered as air got into the pipe.

There’s a paint spray gun on sale in Aldi right now, which I think would have been much better suited to the task, but it’s a little bit more than I can afford right now.

Costs

All in all, the total cost was just under £200:

  • A hair under £100 for the lumber, which I got from Wickes. The plans called for 17 studs, so I ordered 20 (with the extra three acting as fuck-up insurance), and only ended up using 16 of them.
  • £85 for the chairs, which were 25% off at Homebase due to an end of season sale.
  • around £35 for the paint, screws and glue, etc.

This sounded like quite a lot to me at first, but after seeing that Homebase are selling a vaguely comparable table for £379 without the chairs, it doesn’t seem too bad after all.

Conclusion

I’m happy with how it turned out, and I think it looks great under the pergola. If I was to do it again, I’d make it slightly shorter than it is, or buy slightly taller chairs, but that’s a minor issue as it’s still perfectly usable – at the very least I had no complaints during the party. It seems to have impressed at least one person though, as I might have a commission to build one for someone else in the near future, which would be awesome.

July 30, 2019

I originally wrote this as an answer to a question on Quora but I’m increasingly concerned at the cost of higher education for young people from families that are not wealthy. I had parents who would have sacrificed anything for my education but I had clever friends who were not so fortunate. The system is bleeding talent into dead-end jobs. Below, I consider other models of training as I hope it might start a conversation in the technology community and the political infrastructure that trickles money down into it.

Through learning about ‘Agile’ software development, I became interested in the related ‘Lean’ thinking. It borrows from Japanese cultural ideas and the way the martial arts are taught. I think the idea is that first you do, then you learn, and finally you understand (as illustrated by the film ‘Karate Kid’). That requires a ‘master’ or ‘Sensei’ to guide and react to what s/he sees of each individual’s current practice. It seems a good model for programming too. There may be times when doing is easier if you gain some understanding before you ‘do’, and advice and assistance with problem-solving could be part of this. I’m not alone in thinking this way, as I see phrases like “kata” and “koans” appearing around software development.

I’ve also seen several analogies to woodworking craft which suggests that a master-apprentice relationship might be appropriate. There is even a ‘Software Craftsmanship’ movement. This could work as well in agile software development teams, as it did for weavers of mediaeval tapestries.

A female Scrum Master friend assures me that the word “master” is not gendered in either of these contexts. Of course, not all great individual crafts people make good teachers but teams with the best teachers would start to attract the best apprentices.

If any good programmers aren’t sure about spending their valuable developer’s time teaching, I recommend the “fable in novella form” Jonathan Livingston Seagull, written by Richard Bach, about a young seagull that wants to excel at flying.

Small software companies ‘have a lot on’ but how much would they need to be paid to take on an apprentice in their development teams, perhaps with weekly day-release to a local training organisation? I’d expect a sliding scale to employment as they became productive or were rejected back into the cold, hard world if they weren’t making the grade.

July 29, 2019

July Catchup by James Nutt (@jsrndoftime)

New Job

I started a new job. While my responsibilities and skills have changed a lot since I started my career in 2011, this is actually the first time I have moved company. Just shy of eight years in the same place. That’s basically a millennium in tech years. Long enough that I felt I was long overdue a change. A big change, with lots of big emotions attached.

While it had been on the cards for a fair while, the decision to pull the trigger on switching jobs ended up being something of an impulse. A friend prodded me about the opening on Slack, reckoning I might be a good fit, and the time between applying and signing the contract was really short. Short enough that it hadn’t really hit me until a good week or so afterwards what I had done.

Still, I’m massively enjoying the new job, the new team, the new toys, and the new tech.

New Rig

One of the many perks of this new job is that they’ve got me going on a fancy new MacBook Pro 2018. I’d never used a MacBook before, so the majority of my initial interactions with my new colleagues, who I am desperate to impress, were along the lines of “how do I change windows on this thing?” A stellar first impression. I definitely know how to perform basic computing tasks. Honestly.

First Conference

A Brighton Ruby branded keep cup

On the 5th July I took the day off work to go down to Brighton for my first programming conference, Brighton Ruby. Not much to say about it other than that it was a blast, I met some nice people, learned a few very cool things and hope to go back next year.

And there’s a street food market up the road from the Brighton Dome that does a mean jerk chicken.

New Bod

Just kidding. But I have started going to the gym again. You know, you reach a point where you wake up and your back already hurts and you just go “nah”.

New Gems

I’ve gotten to play with some great new Ruby gems lately that I think are worth sharing.

  • VCR - Record your test suite’s HTTP interactions and replay them during future test runs for fast, deterministic, accurate tests (see the sketch after this list).
  • Shoulda Matchers - Simple one-liner tests for common Rails functionality.
  • Byebug - Byebug is a simple to use and feature rich debugger for Ruby.
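As a taste of VCR, here’s a minimal sketch (the cassette name and URL are invented, and it assumes the webmock gem is installed alongside vcr):

require 'vcr'
require 'net/http'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes' # recorded responses live here
  c.hook_into :webmock                      # intercept HTTP requests via WebMock
end

# The first run hits the network and records a cassette;
# subsequent runs replay the recording: fast and deterministic.
VCR.use_cassette('example_forecast') do
  Net::HTTP.get(URI('http://example.com/forecast'))
end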

Good Reads

July 26, 2019

There are only so many pens, bottle openers, pin badges, tote bags, water bottles, usb drives or beer mats that anyone needs. And that threshold has long since been met and surpassed. Time for something more interesting.

July 25, 2019

My first rails app by Graham Lee

I know, right? I first learned how to rails back when Rails 3 was new, but didn’t end up using it (the backend of the project I was working on was indeed written in Rails, but by other people). Then when I worked at Big Nerd Ranch I picked up bits and pieces of knowledge from the former Highgroove folks, but again didn’t use it. The last time I worked on a real web app for real people, it was in node.js (and that was only really vending a React SPA, so it was really in React). The time before that: WebObjects.

The context of this project is that I had a few days to ninja out an end-to-end concept of a web application that’s going to be taken on by other members of my team to flesh out, so it had to be quick to write and easy to understand. My thought was that Rails is stable and trusted enough that however I wrote the app, with roughly no experience, it would not diverge far from how anyone else with roughly no experience would do it, so there wouldn’t be too many surprises. Also that the testing story for Rails is solid, and that websites in Rails are a well-understood problem.

Obviously I could’ve chosen any of a plethora of technologies and made my colleagues live with the choice, but that would potentially have sunk the project. Going overly hipster with BCHS, Seaside or Phoenix would have been enjoyable but left my team-mates with a much bigger challenge than “learn another C-like OOP language and the particular conventions of this three-tier framework”. Similarly, on the front end, I just wrote some raw JS that’s served by Rails’s asset pipeline, with no frameworks (though I did use Rails.ajax for async requests).

With a day and a half left, I’m done, and can land some bonus features to reduce the workload for my colleagues. Ruby is a joy to use, although it is starting to show some of the same warts that JS suffers from: compare the two ways to make a Ruby hash with the two ways to write JS functions. The inconsistency over brackets around message sends is annoying, too, but livable.
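For anyone who hasn’t hit the hash thing, a trivial sketch of the two spellings:

# Two ways to write the same Ruby hash:
old_style = { :name => 'Graham' } # hash-rocket syntax, works for any key type
new_style = { name: 'Graham' }    # shorthand, symbol keys only
old_style == new_style            # => true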

Weirdly testing in Rails seems to only be good for testing Ruby, not JS/Coffeescript/whatever you shove down the frontend. I ended up using the teaspoon gem to run Javascript tests using Jasmine, but it felt weird having to set all that up myself when Rails goes out of its way to make tests for you in Ruby-land. Yes, Rails is in Ruby. But Rails is a web framework, and JS is a necessary evil on the web.

Most of my other problems came from the incompatibility of Ruby versions (I quickly gave up on rvm and used Docker, writing a small wrapper script to run the CD pipeline and give other devs commands like ‘build’, ‘test’, ‘run’, ‘stop’, ‘migrate’) and the changes in the Rails API between versions 3–5. A lot of content on blogs[*] and Stack Overflow doesn’t specify the version of Rails or Ruby it’s talking about, so the recommendations may not work the same way.

[*] I found a lot of Rails blogs that just reiterate examples and usage of API that’s already present in the rdoc. I don’t know whether this is SEO poisoning, or people not knowing that the official documentation exists, or there being lots of low-quality blogs.

But overall, Railsing was fun and got me quickly to my destination.

July 23, 2019

Looming Hell by Daniel Hollands (@limeblast)

I don’t remember what I was doing when I first stumbled upon The Interlace Project – a “practice-based research project that combines the traditional manufacturing techniques of spinning and weaving with emergent e-textile technologies” – and ordinarily I wouldn’t have given it another thought, but as I’d had some exposure to weaving courtesy of my friend Emma, I figured I’d investigate further.

Keeping true to their premise of “Open Source Weaving”, they offer two loom designs for download, along with tutorials on how to build them and instructions on how to use them.

The Frame loom is a simple yet efficient design which lets you laser cut all the components you need out of a sheet of 3mm MDF measuring no more than 15x20cm. By contrast, the Rigid Heddle Loom is a more complex affair requiring more, though readily available, materials to build.

I sent the link to Emma, asking if it was something she’d be interested in, and to no one’s surprise she immediately responded that she’d love to have the Rigid Heddle loom. I countered with the offer of building the Frame loom instead.

Thanks to my membership of Cheltenham Hackspace I had access to a laser cutter, but even though I’ve used one before, I’d forgotten most of what I’d previously learned. Thankfully, everyone I’ve met at the space has been really nice and helpful, and James, one of the directors, was happy to spend a couple of hours one Wednesday evening showing me how it worked.

The design gets loaded into the laser cutter software, and modified to match the colours required for each of the three functions it’s capable of, red for vector cutting, blue for vector etching, and black for raster etching. Apparently vector etching isn’t very reliable, so it was recommended to avoid it if possible.

Unlike the last laser cutter I used, which was able to calculate the speed and intensity of the laser automatically based on the material settings you chose, this one needed you to set these values manually. Thankfully there was a chart of all the laser cutter compatible materials available and their relevant settings. There was also a chart of all the materials which must not be used in the laser cutter (did you know PVC emits chlorine gas when cut with a laser?)

I must admit, I pretty much just stood in awe as James configured everything on the computer, placed the sheet of MDF in the machine, aligned the laser head, and started the first of three runs. The cut was done in three runs, working from the innermost components outward, because an outer cut could cause the middle of the sheet to drop slightly, putting the laser out of focus and potentially ruining the remaining cuts.

All in all, the cutting process took about 16 minutes, and cost the princely sum of £3.60 (£2 for the 60x40cm sheet of MDF, the vast majority of which remained unused, and £1.60 for the laser time).

Much like the Inkle loom, I have no idea how this works, but Emma does, so I’ll send it to her shortly, and will post updates of her creations in the future.

July 19, 2019

Reading List 234 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way!

July 18, 2019

My Delicious Library collection just hit 1,000 books. That’s not so big, it’s only a fraction of the books I’ve read in my life. I only started cataloguing my books a few years ago.

What is alarming about that is that most of the books are in my house, and most are in physical form. I read a lot, and the majority of the time I’m reading something I own. The reason it’s worrying is that these books take up a lot of space, and cost a lot of money.

I’ve had an on-again, off-again relationship with ebooks. Of course they take up less space, and are more convenient when travelling. The problems with DRM and ownership mean that I tend to only use ebooks now for books from Project Gutenberg or the internet archive, and PDFs of scholarly papers.

And not even that second one, due to the lack of big enough readers. For a long time I owned and enjoyed a Kindle DX, with a screen big enough that a typical magazine page was legible without zooming in. Zooming in on a columnar page is horrific. It’s like watching a tennis match through a keyhole. But the Kindle DX broke, is no longer a thing, and has no competitors. I don’t enjoy reading on regular computer screens, so the option of using a multipurpose tablet is not a good one.

Ebooks also suffer from being out of sight and out of mind. I actually bought some bundle of UX/HCI/design books over a year ago, and have never read them. When I want to read, I look at my pile of unread books and my shelves. I don’t look in ~/Documents/ebooks.

I do listen to audiobooks when I commute, but only when I commute. It’d be nice to have some kind of multimodal reader, across a “printed” and “spoken” format. The Kindle text-to-speech was not that, when I tried it. Jeremy Northam does a much better job of reading out The Road to Wigan Pier than an automated speech synthesiser does.

The technique I’m trying at the moment involves heavy use of the library. I’m a member of both the local municipal library and a big university library. I subscribe to a literary review magazine, the London Review of Books. When an article in there intrigues me, I add the book to the reading list in the library app. When I get to it, I request the book.

That’s not necessarily earth-shattering news. Both public and subscription libraries have existed for centuries. What’s interesting is that for this dedicated reader and technology professional, the digital revolution has yet to usurp the library and its collection of bound books.

July 15, 2019

Love them or hate them, PDFs are a fact of life for many organisations. If you produce PDFs, you should make them accessible to people with disabilities. With Prince, it’s easy to produce accessible, tagged PDFs from semantic HTML, CSS and SVG.

It’s an enduring myth that PDF is an inaccessible format. In 2012, the PDF profile PDF/UA (for ‘Universal Accessibility’) was standardised. It’s the U.S. Library of Congress’ preferred format for page-oriented content and the International Standard for accessible PDF technology, ISO 14289.

Let’s look at how to make accessible PDFs with Prince. Even if you already have Prince installed, grab the latest build (think of it as a stable beta for the next version) and install it; there’s a free license for non-commercial use. Prince is available for Windows, Mac, Linux and FreeBSD desktops, and wrappers are available for Java, C#/.NET, ActiveX/COM, PHP, Ruby on Rails and Node/JavaScript for integrating Prince into websites and applications.

Here’s a trivial HTML file, which I’ve called prince1.html.

<!DOCTYPE html>
<html>
<meta charset=utf-8>
<title>My lovely PDF</title>
<style>
        h1 {color:red;}
        p {color:green;}
</style>
<h1>Lovely heading</h1>
<p>Marvellous paragraph!</p>
</html>

From the command line, type

$ prince prince1.html

Prince has produced prince1.pdf in the same folder. (There are many command line switches to choose the name of the output file, combine files into a single PDF etc., but that’s not relevant here. Windows fans can also use a GUI.)
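For instance, the -o switch chooses the output filename:

$ prince prince1.html -o lovely.pdf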

Using Adobe Acrobat Pro, I can inspect the tag structure of the PDF produced:

Acrobat screenshot: no tags available

As you can see, Acrobat reports “No Tags available”. This is because it’s perfectly legitimate to make inaccessible PDFs – documents intended only for printing, for example. So let’s tell Prince to make a tagged PDF:

$ prince prince1.html --tagged-pdf

Inspecting this file in Acrobat shows the tag structure:

Acrobat screenshot showing tags

Now we can see that under the <Document> tag (PDF’s equivalent of a <body> element), we have an <H1> and a <P>. Yes, PDF tags often —but not always— have the same name as their HTML counterparts. As Adobe says

PDF tags are similar to tags used in HTML to make Web pages more accessible. The World Wide Web Consortium (W3C) did pioneering work with HTML tags to incorporate the document structure that was needed for accessibility as the HTML standard evolved.

However, the fact that the PDF now has structural tags doesn’t mean it’s accessible. Let’s try making a PDF with the PDF/UA profile:

$ prince prince1.html --pdf-profile="PDF/UA-1"

Prince aborts, giving the error “prince: error: PDF/UA-1 requires language specification”. This is because our HTML page is missing the lang attribute on the HTML element, which tells assistive technologies which language the text is written in. This is very important to screen reader users, for example; the pronunciation of the word “six” is very different in English and French.

Unfortunately, this is a very common error on the Web; WebAIM recently analysed the accessibility of the top 1,000,000 home pages and discovered that a whopping 97.8% of home pages had detectable accessibility failures. A missing language specification was the fifth most common error, affecting 33% of sites.

Screenshot from WebAIM showing the most common accessibility errors on the top million home pages. Image courtesy of webaim.org, © WebAIM, used by kind permission

Let’s fix our web page by amending the HTML element to read <html lang=en>.

Now it princifies without errors. Inspecting it in Acrobat Pro, we see a new <Annot> tag has appeared. Right-clicking on it in the tag inspector reveals it to be the small Prince logo image (that all free licenses generate), with alternate text “This document was created with Prince, a great way of getting web content onto paper”:

Acrobat screenshot with annotation on the Prince logo added with free licenses

Generating the <Annot> with alternate text, together with checking that the document’s language is specified, allows us to produce a fully accessible PDF, which is why we generally advise using the --pdf-profile="PDF/UA-1" command line switch rather than --tagged-pdf.

Adobe maintains a list of Standard PDF tags, most of which can easily be mapped by Prince to HTML counterparts.

Customising Prince’s default mappings

Prince can’t always map HTML directly to PDF tags. This could be because there isn’t a direct counterpart in HTML, or it could be because the source markup has conflicting markup and styling.

Let’s look at the first scenario. HTML has a <main> element, which doesn’t have a one-to-one correspondence with a single PDF tag. On many sites, there is one article per document (a wikipedia entry, for example), and it’s wrapped by a <main> element, or some other element serving to wrap the main content.

Let’s look at the wikipedia article for stegosaurus, because it is the best dinosaur.

We can see from browser developer tools that this article’s content is wrapped with <div id="bodyContent">. We can tell Prince to map this to the PDF <Art> tag, defined as “Article element. A self-contained body of text considered to be a single narrative”, by adding a declaration in our stylesheet:

#bodyContent { prince-pdf-tag-type: Art; }

On another site, we might want to map the <main> element to <Art>. The same method applies:

main { prince-pdf-tag-type: Art; }

Different authors’ conventions over the years are one reason why Prince can’t necessarily map everything automatically (although, by default, HTML <article> gets mapped to <Art>).

Therefore, in this new build of Prince, much of the mapping of HTML elements to PDF tags has been moved out of Prince’s internal logic and into the default stylesheet html.css in the style sub-folder. This makes it clearer how Prince maps HTML elements to PDF tags, and allows the author to override or customise it if necessary.

Here is the relevant section of the default mappings:

article { prince-pdf-tag-type: Art }
section { prince-pdf-tag-type: Sect }
blockquote { prince-pdf-tag-type: BlockQuote }
h1 { prince-pdf-tag-type: H1 }
h2 { prince-pdf-tag-type: H2 }
h3 { prince-pdf-tag-type: H3 }
h4 { prince-pdf-tag-type: H4 }
h5 { prince-pdf-tag-type: H5 }
h6 { prince-pdf-tag-type: H6 }
ol { prince-pdf-tag-type: OL }
ul { prince-pdf-tag-type: UL }
li { prince-pdf-tag-type: LI }
dl { prince-pdf-tag-type: DL }
dl > div { prince-pdf-tag-type: DL-Div }
dt { prince-pdf-tag-type: DT }
dd { prince-pdf-tag-type: DD }
figure { prince-pdf-tag-type: Div } /* figure grouper */
figcaption { prince-pdf-tag-type: Caption }
p { prince-pdf-tag-type: P }
q { prince-pdf-tag-type: Quote }
code { prince-pdf-tag-type: Code }
img, input[type="image"] {
  prince-pdf-tag-type: Figure;
  prince-alt-text: attr(alt);
}
abbr, acronym {
  prince-expansion-text: attr(title);
}

There are also two new properties, prince-alt-text and prince-expansion-text, which can be overridden to support the relevant ARIA attributes.
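For example, to let an aria-label supply a figure’s alternate text, a sketch like this should work (the selector would override the default alt mapping for matching elements; I haven’t tested every combination):

img[aria-label] { prince-alt-text: attr(aria-label); }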

Uncle Håkon shouting at me last month in Paris

Taking our lead from wikipedia again, we might want to produce a PDF table of contents from the ‘Contents’ box. Here is the Contents for the entry about otters (which are the best non-dinosaurs):

screenshot of wikipedia's in-page table of contents

The box is wrapped in an unordered list inside a <div id="toc">. To make this into a PDF Table of Contents (<TOC>), I add these lines to Prince’s html.css (because obviously I can’t touch the wikipedia source files):

#toc ul {prince-pdf-tag-type: TOC;} /*Table of Contents */
#toc li {prince-pdf-tag-type: TOCI;} /* TOC item */

This produces the following tag structure:

Acrobat screenshot showing PDF table of contents based on the wikipedia table of contents

In one of my personal sites, I use HTML <nav> as the wrapper for my internal navigation, so would use these declarations instead:

nav ul {prince-pdf-tag-type: TOC;}
nav li {prince-pdf-tag-type: TOCI;}

Only internal links are appropriate for a PDF Table of Contents, which is why Prince can’t automatically map <nav> to <TOC> but makes it easy for you to do so, either by editing html.css directly, or by pulling in a supplementary stylesheet.

Mapping when semantics and styling conflict

There are a number of tricky questions when it comes to tagging when markup and style conflict. For example, consider this markup which is used to “fake” a bulleted list visually:


<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<title>My lovely PDF</title>
<style>
div div {display:list-item;
    list-style-type: disc;
    list-style-position: inside;}
</style>
<div>

    <div>One</div>
    <div>Two</div>
    <div>Three</div>

</div>

Browsers render it something like this:

what looks like a bulleted list in a browser

But this merely looks like a bulleted list — it isn’t structurally anything other than three meaningless <div>s. If you need this to be tagged in the output PDF as a list (so a screen reader user can use a keyboard shortcut to jump from list to list, for example), you can use these lines of CSS:

body>div {prince-pdf-tag-type: UL;}
div div {prince-pdf-tag-type: LI;}

Prince creates custom OL-L and UL-L tags which are role-mapped to PDF’s list structure tag <L>. Prince also sets the ListNumbering attribute when it can infer it.

Mapping ARIA roles

Often, developers supplement their HTML with ARIA roles. This can be particularly useful when retrofitting legacy markup to be accessible, especially when that markup contains few semantic elements — the usual example is adding role=button to a set of nested <div>s that are styled to look like a button.

Prince does not do anything special with ARIA roles, partly because, as webaim reports,

they are often used to override correct HTML semantics and thus present incorrect information or interactions to screen reader users

But by supplementing Prince’s html.css, an author can map elements with specific ARIA roles to PDF tags. For example, if your webpage has many <div role="article"> elements, you can map these to PDF <Art> tags thus:

div[role="article"] {prince-pdf-tag-type: Art;}

Conclusion

As with HTML, the more structured and semantic the markup is, the better the output will be. But of course, Prince cannot verify that alternate text is an accurate description of the function of an image. Ultimately, claiming that a document meets the PDF/UA-1 profile requires some human review, so Prince has to trust that the author has done their part in making the input intelligible. Using Prince, it’s very easy to turn long documents —even whole books— into accessible and attractive PDFs.
