Last updated: August 23, 2019 11:22 AM (All times are UTC.)

August 21, 2019

In the beginning, there was the green field. The lead developer, who may have been the only developer, agreed with the product owner (or “the other member of the company” as they were known) what they would build for the first two weeks. Then File->New Project… happened, and they smashed it out of the park.

The amorphous and capricious “market” liked what they had to offer, at least enough to win some seed funding. The team grew, and kept the same cadence: see what we need to do for the next ten business days, do it, celebrate that we did it.

As the company, its customers, and its market mature, things start to slow down. It’s imperceptible at first, because velocity stays constant. The CTO can’t help but think that they get a lot less out of a 13-point story than they used to, but that isn’t a discussion they’re allowed to have. If you convert points into time then you’re doing old waterfall thinking, and we’re an agile team.

Initially the dysfunction manifests in other ways. Developers complain that they don’t get time to refactor, because “the business” doesn’t understand the benefits of clean code. Eventually time is carved out to clean things up, whether in “hardening sprints” or in effort allocated to “engineering stories”. We are getting as much done, as long as you ignore that less of it is being done for the customers.

Stories become task-sliced. Yes, it’s just adding a button, but we need to estimate the adding a component task, the binding the action task, the extending the reducer task, the analytics and management intelligence task. Yes we are getting as much done, as long as you ignore that less of it has observable outcomes.

Rework increases too, as the easy way to fit a feature into the code isn’t the way that customers want to use it. Once again, “the business” is at fault for not being clear about what they need. Customers who were previously flagship wins are now talked about as regressive laggards who don’t share the vision. Stories must have clearer acceptance criteria, the definition of done must be more explicit: but obviously we aren’t talking about a specification document because we’re an agile team. Yes we’re getting as much done, as long as you ignore that a lot of what we got done this fortnight was what we said we’d done last fortnight.

Eventually forward progress becomes near zero. It becomes hard to add new features, indeed hard even to keep up with the competitors. It’s only two years ago that we were five years ahead of them. People start demoing new ideas in separate apps, because there’s no point dreaming about adding them to our flagship project. File->New Project… and start all over again.

What happened to this team? Or really, to these teams, as I’ve seen this story repeated over and over. They misread “responding to change over following a plan” as “we don’t need no stinking plan”.

Even if you don’t know exactly where you are going at any time, you have a good idea where you think you’re going. That knowledge might be spread around the company, which is why we need the experts around the table. Some examples of where to find it:

  • The product owner has a backlog of requested features that have yet to be built.
  • The sales team have a CRM indicating which prospects are hottest, and what they need to offer to close those deals.
  • The marketing director has a roadmap slide they’re presenting at a conference next month.
  • The CTO has budget projections for the next financial year, including headcount changes and how they plan to reorganise the team to incorporate these changes.
  • The CEO knows where they want to position the company in the market over the next two years, and knows which competitors, regulatory changes, and customer behaviours threaten that position and what about them makes them a threat.
  • Countless spreadsheets, databases, and “business intelligence” dashboards across multiple people and departments.

No, we don’t know the future, but we do know which futures are likely and of those, which are desirable. Part of embracing change is to make those futures easier to cope with. The failure mode of many teams is to ignore all futures because we aren’t in any of them yet.

We should be ready for the future we expect, and both humble and adaptable enough to get ready for a different future when things change. Our software should represent our current knowledge of our problem and its solution, including knowledge about likely developments (hey, maybe there’s a reason they call us developers!). Don’t add the things you aren’t going to need, but don’t exclude the possibility of adding them out of spite for a future that may well come to pass.

August 19, 2019

One of the principles behind the manifesto for Agile software development says:

Business people and developers must work
together daily throughout the project.

I don’t like this language. It sets up the distinction between “engineering” and “the business”, which is the least helpful language I frequently encounter when working in companies that make software. I probably visibly cringe when I hear “the business doesn’t understand” or “the business wants” or similar phrases, which make it clear that there are two competing teams involved in producing the software.

Neither team will win. “We” (usually the developers, and some/most others who report to the technology office) are trying to get through our backlogs, produce working software, and pay down technical debt. However “the business” get in the way with ridiculous requirements like responding to change, satisfying customers, working within budget, or demonstrating features to prospects.

While I’ve long pushed back on software people using the phrase “the business” (usually just by asking “oh, which business do you work for, then?”) I’ve never really had a replacement. Now I try to say “experts around the table”, leaving out the information about what expertise is required. This is more inclusive (we’re all experts, albeit in different fields, working together on our common goal), and more applicable (in research software engineering, there often is no “the business”). Importantly, it’s also more fluid, our self-organising team can identify lack of expertise in some area and bring in another expert.

August 17, 2019

Most of what I know about “the economy” is outdated (Adam Smith, Karl Marx, John Maynard Keynes) or incorrect (the news) so I decided to read a textbook. Basic Economics, 5th Edition by Thomas Sowell is clear, modern, and generally an argument against economic regulation, particularly centralised planning, tariffs, and price control. I still have questions.

The premise of market economics is that a free market efficiently uses prices to allocate scarce resources that have alternative uses, resulting in improved standard of living. But when results are compared, they are given in terms of economic metrics, like unemployment, growth, or GDP/GNP. The implication is that more consuming is correlated with a better standard of living. Is that true? Are there non-economic measurements of standard of living, and do they correlate with the economic measurements?

Even if an economy does yield “a better standard of living”, shouldn’t the spread of living standards and the accessibility of high standards across the population be measured, to determine whether the market economy is benefiting all participants or emulating feudalism?

Does Dr. Sowell arrive at his office at 9am and depart at 5pm? The common 40-hour work week is a result of labour unions and legislation, not supply and demand economics. Should we not be free to set our own working hours? Related: is “unemployment” such a bad thing, do we really need everybody to work their forty hours? If it is a bad thing, why not reduce the working week and have the same work done by more people?

Sowell’s argument allows that some expenses, notably defence, are better paid for centrally and collectively than individually. We all get the same benefit from national defence, but even those who are willing to pay would receive less benefit from a decentralised, individually-funded defence. Presumably the same argument can be applied to roads, too, or space races. But where are the boundaries? Why centralised military, say, and not centralised electricity supply, healthcare, mains water, housing, internet service, or food supply? Is there a good “grain size” for such centralising influences (it can’t be “the nation”, because nations vary so much in size and in centralisation/federation) and if so, does it match the “grain size” for a market economy?

The argument against a centralised, planned economy is that there’s too much information required too readily for central planners to make good judgements. Most attempts at a planned economy preceded broad access to the internet and AI, two technologies largely developed through centralised government funding. For example, the attempt to build a planned economy in Chile got as far as constructing a nationwide Telex network before being interrupted by the CIA-funded Pinochet coup. Is this argument still valid?

Companies themselves are centralised, planned economies that allocate scarce resources through a top-down bureaucracy. How big does a company need to get before it is not the market, but the company’s bureaucracy, that is the successful system for allocating resources?

August 16, 2019

Reading List 236 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way.

August 09, 2019

Introduction A Moment of Madness is a project that I’ve been collaborating on with Katie Day of The Other Way Works for over 5 years now!!! In fact you can even see some of the earlier blog posts on it here: In 2014 back when it was still called ‘Agent in a Box’ In 2016 […]

Introduction This is a loooong overdue post about a collaborative strategy game I built last summer (2018), SCOOT3. ‘Super Computer Operated Orchestrations of Time 3’ is a hybrid board game / videogame with Escape Game and Strategy elements designed to be played in teams of up to 10 and takes ~45 mins. It is one […]

Reading List 235 by Bruce Lawson (@brucel)


August 06, 2019

The first supper by Daniel Hollands (@limeblast)

Toward the back of our garden, just before the newly built workshop, we have a beautiful pergola intertwined with a wisteria tree. This provides cover to a patio area which is prime real estate for a dining table, so a few weeks ago I took up the challenge of building one.

The plans I used for the build were created by Shanty2Chic, and are special because they’re designed to be built out of stud timber 2x4s using nothing but a mitre saw and pocket hole jig.

Mitre saw mishap

I’d had the date of the build scheduled in my diary for around three weeks, and made sure I had everything I needed in time. The date of the build was important as we had a garden party scheduled for the following weekend – so imagine my frustration when just a week before, the base on my mitre saw cracked.

Thankfully, the saw has a three year warranty on it, and Evolution were happy to pick it up for repair – and with credit where credit is due, they got it back to me in time for the build – even if there was a slight cock-up when shipping it back to me (two saws got swapped during packing, meaning my saw went to someone else, and I received theirs), which Evolution were also quick to rectify.

What Evolution didn’t do, however, was calibrate the saw before sending it back to me, something I didn’t work out until after I’d built the two trestles. I considered scrapping them and starting again after I’d calibrated the saw, but decided they were stable enough, even if a little wonky.

At least I know how to calibrate the mitre saw now, and have learned a valuable lesson about being square.

Pocket hole jig

Apparently, pocket hole jigs are a bit divisive in the woodworking community, and are often viewed as “cheating” or “not real woodworking” by the elite. Steve Ramsey has recently put out a video highlighting this nonsense for what it is, which I’m really happy about, as I’d hate for someone to be shamed out of using a perfectly suitable tool for a job based on the idiotic opinions of the elite minority.

If it’s stupid but it works, it isn’t stupid.

Murphy’s Law

That said, other than briefly seeing one at The Building Block, and watching makers use them in YouTube videos, I’d never used one myself, which is why I asked my parents for one as a birthday present.

Kreg are the brand which seem most popular, and I very nearly asked for the K5 Master System, but after doing some research I decided to go for the Trend PH/JIG/AK. Partly because they’re a British company, but mostly because it’s made out of aluminium, rather than plastic like the Kreg jigs, which should make it more durable.

For obvious reasons, I can’t compare it with any other jigs, but I can say that the kit came with everything I needed, including a clamp, some extension bars to help hold longer pieces of wood, and a small sample of screws, all housed in a robust case. Because I’d need a lot more screws than it came supplied with, I also picked up the PH/SCW/PK1 Pocket Hole Screw Pack (one of the few things I didn’t buy from Amazon, as it’s listed for half the price at Trend Direct).

I found using the jig to be easy and it worked perfectly, even if drilling nine pieces of wood five times each became a little tedious. My only complaint is the square driver screws, which are apparently designed to avoid cam out, but cammed out a lot anyway. Maybe I was doing it wrong?

The build

Other than the calibration and squareness issues mentioned above, I think the build went well. I’m a lot more confident in my skills now than I was a year ago, although it’s obvious I’ve still got a lot to learn.

Although the plans did have an accompanying video, it served as more of an overview and general build video, rather than what I was used to from The Weekend Woodworker (which features much more hands on instruction at each stage). But armed with the knowledge I’d gained in the past year, I felt able to step up to the challenge of reading the plans, and following the instructions myself.

I made some small variations to the plans for the top – specifically I decided to use the full 2.4 metre length of the studs, rather than cutting them down to 1.9 metres as defined in the plans. This is because we had plenty of space under the pergola, and it would allow additional people to sit at the ends. I also decided to leave the breadboards off, as I think they’re purely decorative in this instance, and didn’t think they were worth the extra wood.

I painted it using Cuprinol Garden Shades; Silver Birch for the top, and Natural Stone for the base.

Initially I attempted to use the Cuprinol Spray & Brush unit that we’d picked up to paint our fence, but it didn’t work very well. I think this is because it’s designed to cover much larger surfaces with a lot more paint than I needed, so because I barely filled it with paint, it spluttered as air got into the pipe.

There’s a paint spray gun on sale in Aldi right now, which I think would have been much better suited to the task, but it’s a little bit more than I can afford right now.


All in all, the total cost was just under £200:

  • A hair under £100 for the lumber, which I got from Wickes. The plans called for 17 studs, so I ordered 20 (with the extra three acting as fuck-up insurance), and only ended up using 16 of them.
  • £85 for the chairs, which were 25% off at Homebase due to an end of season sale.
  • Around £35 for the paint, screws, glue, etc.

This sounded like quite a lot to me at first, but after seeing that Homebase are selling a vaguely comparable table for £379 without the chairs, it doesn’t seem too bad after all.


I’m happy with how it turned out, and I think it looks great under the pergola. If I were to do it again, I’d make it slightly shorter, or buy slightly taller chairs, but that’s a minor issue as it’s still perfectly usable – at the very least I had no complaints during the party. It seems to have impressed at least one person though, as I might have a commission to build one for someone else in the near future, which would be awesome.

July 30, 2019

I originally wrote this as an answer to a question on Quora but I’m increasingly concerned at the cost of higher education for young people from families that are not wealthy. I had parents who would have sacrificed anything for my education but I had clever friends who were not so fortunate. The system is bleeding talent into dead-end jobs. Below, I consider other models of training as I hope it might start a conversation in the technology community and the political infrastructure that trickles money down into it.

Through learning about ‘Agile’ software development, I became interested in related ‘Lean’ thinking. It borrows from Japanese cultural ideas and the way the martial arts are taught. I think the idea is that first you do, then you learn and finally you understand (as illustrated by the film ‘Karate Kid’.) That requires a ‘master’ or ‘Sensei’ to guide and react to what s/he sees about each individual’s current practice. It seems a good model for programming too. There may be times when doing is easier if you gain some understanding before you ‘do’ and advice and assistance with problem solving could be part of this. I’m not alone in thinking this way, as I see phrases like “kata” and “koans” appearing around software development.

I’ve also seen several analogies to woodworking craft, which suggest that a master-apprentice relationship might be appropriate. There is even a ‘Software Craftsmanship’ movement. This could work as well in agile software development teams as it did for weavers of mediaeval tapestries.

A female Scrum Master friend assures me that the word “master” is not gendered in either of these contexts. Of course, not all great individual crafts people make good teachers but teams with the best teachers would start to attract the best apprentices.

If any good programmers aren’t sure about spending their valuable developer’s time teaching, I recommend the “fable in novella form” Jonathan Livingston Seagull, written by Richard Bach, about a young seagull that wants to excel at flying.

Small software companies ‘have a lot on’ but how much would they need to be paid to take on an apprentice in their development teams, perhaps with weekly day-release to a local training organisation? I’d expect a sliding scale to employment as they became productive or were rejected back into the cold, hard world if they weren’t making the grade.

July 29, 2019

July Catchup by James Nutt (@jsrndoftime)

New Job

I started a new job. While my responsibilities and skills have changed a lot since I started my career in 2011, this is actually the first time I have moved company. Just shy of eight years in the same place. That’s basically a millennium in tech years. Long enough that I felt I was long overdue a change. A big change, with lots of big emotions attached.

While it had been on the cards for a fair while, the decision to pull the trigger on switching jobs ended up being something of an impulse. A friend prodded me about the opening on Slack, reckoning I might be a good fit, and the time between applying and signing the contract was really short. Short enough that it hadn’t really hit me until a good week or so afterwards what I had done.

Still, I’m massively enjoying the new job, the new team, the new toys, and the new tech.

New Rig

One of the many perks of this new job is that they’ve got me going on a fancy new MacBook Pro 2018. I’d never used a MacBook before, so the majority of my initial interactions with my new colleagues, who I am desperate to impress, were along the lines of “how do I change windows on this thing?” A stellar first impression. I definitely know how to perform basic computing tasks. Honestly.

First Conference

A Brighton Ruby branded keep cup

On the 5th July I took the day off work to go down to Brighton for my first programming conference, Brighton Ruby. Not much to say about it other than that it was a blast, I met some nice people, learned a few very cool things and hope to go back next year.

And there’s a street food market up the road from the Brighton Dome that does a mean jerk chicken.

New Bod

Just kidding. But I have started going to the gym again. You know, you reach a point where you wake up and your back already hurts and you just go “nah”.

New Gems

I’ve gotten to play with some great new Ruby gems lately that I think are worth sharing.

  • VCR - Record your test suite’s HTTP interactions and replay them during future test runs for fast, deterministic, accurate tests.
  • Shoulda Matchers - Simple one-liner tests for common Rails functionality.
  • Byebug - Byebug is a simple to use and feature rich debugger for Ruby.
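
The post doesn’t show how these gems get wired up, so as a sketch, here’s the sort of RSpec configuration they typically live in. The file name, cassette directory, and the WebMock hook are my assumptions, not taken from James’s project:

```ruby
# spec/spec_helper.rb (hypothetical) -- wiring VCR and Shoulda Matchers
# into an RSpec suite. Byebug needs no setup: drop `byebug` on any line
# to open an interactive debugger there.
require 'vcr'
require 'shoulda/matchers'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'  # recorded HTTP responses live here
  c.hook_into :webmock                       # intercept HTTP via the webmock gem
end

Shoulda::Matchers.configure do |config|
  config.integrate do |with|
    with.test_framework :rspec
    with.library :rails
  end
end
```

A spec then opts in with `VCR.use_cassette('some_api') { ... }`: the first run records real HTTP traffic, and later runs replay the cassette for fast, deterministic tests.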

Good Reads

July 26, 2019

There are only so many pens, bottle openers, pin badges, tote bags, water bottles, usb drives or beer mats that anyone needs. And that threshold has long since been met and surpassed. Time for something more interesting.

July 25, 2019

My first rails app by Graham Lee

I know, right? I first learned how to rails back when Rails 3 was new, but didn’t end up using it (the backend of the project I was working on was indeed written in Rails, but by other people). Then when I worked at Big Nerd Ranch I picked up bits and pieces of knowledge from the former Highgroove folks, but again didn’t use it. The last time I worked on a real web app for real people, it was in node.js (and that was only really vending a React SPA, so it was really in React). The time before that: WebObjects.

The context of this project is that I had a few days to ninja out an end-to-end concept of a web application that’s going to be taken on by other members of my team to flesh out, so it had to be quick to write and easy to understand. My thought was that Rails is stable and trusted enough that however I wrote the app, with roughly no experience, it would not diverge far from how anyone else with roughly no experience would write it, so there wouldn’t be too many surprises. That the testing story for Rails is solid, that websites in Rails are a well-understood problem.

Obviously I could’ve chosen any of a plethora of technologies and made my colleagues live with the choice, but that would potentially have sunk the project. Going overly hipster with BCHS, Seaside or Phoenix would have been enjoyable but left my team-mates with a much bigger challenge than “learn another C-like OOP language and the particular conventions of this three-tier framework”. Similarly, on the front end, I just wrote some raw JS that’s served by Rails’s asset pipeline, with no frameworks (though I did use Rails.ajax for async requests).

With a day and a half left, I’m done, and can land some bonus features to reduce the workload for my colleagues. Ruby is a joy to use, although it is starting to show some of the same warts that JS suffers from: compare the two ways to make a Ruby hash with the two ways to write JS functions. The inconsistency over brackets around message sends is annoying, too, but livable.
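
The hash comparison is concrete: Ruby 1.9 added a shorthand for symbol keys alongside the original hash-rocket syntax, so the same literal can be written two ways (the values here are my own invention):

```ruby
# Two syntaxes, one hash. The shorthand only works for symbol keys;
# string or arbitrary-object keys still need the hash rocket.
old_style = { :name => "pergola", :built => true }  # original hash-rocket syntax
new_style = { name: "pergola", built: true }        # 1.9+ shorthand for symbol keys

raise "both syntaxes should build the same hash" unless old_style == new_style
```

Much like JS’s `function` declarations versus arrow functions, both forms survive in real codebases, so readers have to know both.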

Weirdly, testing in Rails seems to only be good for testing Ruby, not the JS/CoffeeScript/whatever you shove down the frontend. I ended up using the teaspoon gem to run JavaScript tests using Jasmine, but it felt weird having to set all that up myself when Rails goes out of its way to make tests for you in Ruby-land. Yes, Rails is in Ruby. But Rails is a web framework, and JS is a necessary evil on the web.

Most of my other problems came from the incompatibility of Ruby versions (I quickly gave up on rvm and used Docker, writing a small wrapper script to run the CD pipeline and give other devs commands like ‘build’, ‘test’, ‘run’, ‘stop’, ‘migrate’) and the changes in the Rails API between versions 3 and 5. A lot of content on blogs[*] and Stack Overflow doesn’t specify the version of Rails or Ruby it’s talking about, so the recommendations may not work the same way.

[*] I found a lot of Rails blogs that just reiterate examples and usage of API that’s already present in the rdoc. I don’t know whether this is SEO poisoning, or people not knowing that the official documentation exists, or there being lots of low-quality blogs.
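The wrapper script itself isn’t shown in the post; a minimal sketch of what such a dev-facing wrapper might look like is below. The compose service name (“app”) and the exact docker commands are my assumptions, not taken from the original project:

```shell
#!/bin/sh
# Hypothetical dev wrapper around a Dockerised Rails app, mapping the
# subcommands the post mentions onto docker compose invocations.
compose_cmd() {
  case "$1" in
    build)   echo "docker compose build" ;;
    test)    echo "docker compose run --rm app bin/rails test" ;;
    run)     echo "docker compose up" ;;
    stop)    echo "docker compose down" ;;
    migrate) echo "docker compose run --rm app bin/rails db:migrate" ;;
    *)       echo "usage: $0 build|test|run|stop|migrate" >&2; return 1 ;;
  esac
}

# Here we only print the resolved command; a real wrapper would eval it.
if [ "$#" -gt 0 ]; then
  compose_cmd "$@"
fi
```

The point of the indirection is that new team members run `./dev test` without needing to remember (or even install) the right Ruby version locally.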

But overall, Railsing was fun and got me quickly to my destination.

July 23, 2019

‘Looming Hell by Daniel Hollands (@limeblast)

I don’t remember what I was doing when first I stumbled upon The Interlace Project – a “practice-based research project that combines the traditional manufacturing techniques of spinning and weaving with emergent e-textile technologies” – and ordinarily I wouldn’t have given it another thought, but as I’d had some exposure to weaving courtesy of my friend Emma, I figured I’d investigate further.

Keeping true to their premise of “Open Source Weaving”, they offer two loom designs for download, along with tutorials on how to build them and instructions on how to use them.

The Frame loom is a simple yet efficient design which lets you laser cut all the components you need out of a sheet of 3mm MDF measuring no more than 15x20cm. By contrast, the Rigid Heddle Loom is a more complex affair requiring more, though still readily available, materials to build.

I sent the link to Emma, asking if it was something she’d be interested in, and to no one’s surprise she immediately responded that she’d love to have the Rigid Heddle loom. I countered with the offer of building the Frame loom instead.

Thanks to my membership of Cheltenham Hackspace I had access to a laser cutter, but even though I’ve used one before I’d forgotten most of what I’d previously learned. Thankfully, everyone I’ve met at the space has been really nice and helpful, and James, one of the directors, was happy to spend a couple of hours one Wednesday evening showing me how it worked.

The design gets loaded into the laser cutter software, and modified to match the colours required for each of the three functions it’s capable of, red for vector cutting, blue for vector etching, and black for raster etching. Apparently vector etching isn’t very reliable, so it was recommended to avoid it if possible.

Unlike the last laser cutter I used, which was able to calculate the speed and intensity of the laser automatically based on the material settings you chose, this one needed you to set these values manually. Thankfully there was a chart of all the laser cutter compatible materials available and their relevant settings. There was also a chart of all the materials which must not be used in the laser cutter (did you know PVC emits chlorine gas when cut with a laser?)

I must admit, I pretty much just stood in awe as James configured everything on the computer, placed the sheet of MDF in the machine, aligned the laser head, and started the first of three runs. The cut was done in three runs, working from the inner components outward: an outer cut made first could cause the middle of the sheet to drop slightly, putting the laser out of focus and potentially spoiling the inner cuts.

All in all, the cutting process took about 16 minutes, and cost the princely sum of £3.60 (£2 for the 60x40cm sheet of MDF, the vast majority of which remained unused, and £1.60 for the laser time).

Much like the Inkle loom, I have no idea how this works, but Emma does, so I’ll send it to her shortly, and will post updates of her creations in the future.

July 19, 2019

Reading List 234 by Bruce Lawson (@brucel)


July 18, 2019

My Delicious Library collection just hit 1,000 books. That’s not so big, it’s only a fraction of the books I’ve read in my life. I only started cataloguing my books a few years ago.

What is alarming about that is that most of the books are in my house, and most are in physical form. I read a lot, and the majority of the time I’m reading something I own. The reason it’s worrying is that these books take up a lot of space, and cost a lot of money.

I’ve had an on-again, off-again relationship with ebooks. Of course they take up less space, and are more convenient when travelling. The problems with DRM and ownership mean that I tend to only use ebooks now for books from Project Gutenberg or the internet archive, and PDFs of scholarly papers.

And not even that second one, due to the lack of big enough readers. For a long time I owned and enjoyed a Kindle DX, with a screen big enough that a typical magazine page was legible without zooming in. Zooming in on a columnar page is horrific. It’s like watching a tennis match through a keyhole. But the Kindle DX broke, is no longer a thing, and has no competitors. I don’t enjoy reading on regular computer screens, so the option of using a multipurpose tablet is not a good one.

Ebooks also suffer from being out of sight and out of mind. I actually bought some bundle of UX/HCI/design books over a year ago, and have never read them. When I want to read, I look at my pile of unread books and my shelves. I don’t look in ~/Documents/ebooks.

I do listen to audiobooks when I commute, but only when I commute. It’d be nice to have some kind of multimodal reader, across a “printed” and “spoken” format. The Kindle text-to-speech was not that, when I tried it. Jeremy Northam does a much better job of reading out The Road to Wigan Pier than an automated speech synthesiser does.

The technique I’m trying at the moment involves heavy use of the library. I’m a member of both the local municipal library and a big university library. I subscribe to a literary review magazine, the London Review of Books. When an article in there intrigues me, I add the book to the reading list in the library app. When I get to it, I request the book.

That’s not necessarily earth-shattering news. Both public and subscription libraries have existed for centuries. What’s interesting is that for this dedicated reader and technology professional, the digital revolution has yet to usurp the library and its collection of bound books.

July 15, 2019

Love them or hate them, PDFs are a fact of life for many organisations. If you produce PDFs, you should make them accessible to people with disabilities. With Prince, it’s easy to produce accessible, tagged PDFs from semantic HTML, CSS and SVG.

It’s an enduring myth that PDF is an inaccessible format. In 2012, the PDF profile PDF/UA (for ‘Universal Accessibility’) was standardised. It’s the U.S. Library of Congress’ preferred format for page-oriented content and the International Standard for accessible PDF technology, ISO 14289.

Let’s look at how to make accessible PDFs with Prince. Even if you already have Prince installed, grab the latest build (think of it as a stable beta for the next version) and install it; it’s a free license for non-commercial use. Prince is available for Windows, Mac, Linux and FreeBSD desktops, and wrappers are available for Java, C#/.NET, ActiveX/COM, PHP, Ruby on Rails and Node/JavaScript for integrating Prince into websites and applications.

Here’s a trivial HTML file, which I’ve called prince1.html.

<!DOCTYPE html>
<meta charset=utf-8>
<title>My lovely PDF</title>
<style>
        h1 {color:red;}
        p {color:green;}
</style>
<h1>Lovely heading</h1>
<p>Marvellous paragraph!</p>

From the command line, type

$ prince prince1.html

Prince has produced prince1.pdf in the same folder. (There are many command line switches to choose the name of the output file, combine files into a single PDF etc., but that’s not relevant here. Windows fans can also use a GUI.)

Using Adobe Acrobat Pro, I can inspect the tag structure of the PDF produced:

Acrobat screenshot: no tags available

As you can see, Acrobat reports “No Tags available”. This is because it’s perfectly legitimate to make inaccessible PDFs – documents intended only for printing, for example. So let’s tell Prince to make a tagged PDF:

$ prince prince1.html --tagged-pdf

Inspecting this file in Acrobat shows the tag structure:

Acrobat screenshot showing tags

Now we can see that under the <Document> tag (PDF’s equivalent of a <body> element), we have an <H1> and a <P>. Yes, PDF tags often (but not always) have the same name as their HTML counterparts. As Adobe says:

PDF tags are similar to tags used in HTML to make Web pages more accessible. The World Wide Web Consortium (W3C) did pioneering work with HTML tags to incorporate the document structure that was needed for accessibility as the HTML standard evolved.

However, the fact that the PDF now has structural tags doesn’t mean it’s accessible. Let’s try making a PDF with the PDF/UA profile:

$ prince prince1.html --pdf-profile="PDF/UA-1"

Prince aborts, giving the error “prince: error: PDF/UA-1 requires language specification”. This is because our HTML page is missing the lang attribute on the HTML element, which tells assistive technologies which language the text is written in. This is very important to screen reader users, for example; the pronunciation of the word “six” is very different in English and French.

Unfortunately, this is a very common error on the Web; WebAIM recently analysed the accessibility of the top 1,000,000 home pages and discovered that a whopping 97.8% of home pages had detectable accessibility failures. A missing language specification was the fifth most common error, affecting 33% of sites.

Screenshot from WebAIM showing the most common accessibility errors on the top million home pages. Image © WebAIM, used by kind permission.

Let’s fix our web page by amending the HTML element to read <html lang=en>.

Now it princifies without errors. Inspecting it in Acrobat Pro, we see a new <Annot> tag has appeared. Right-clicking on it in the tag inspector reveals it to be the small Prince logo image (that all free licenses generate), with alternate text “This document was created with Prince, a great way of getting web content onto paper”:

Acrobat screenshot with annotation on the Prince logo added with free licenses

These checks — sorry, these two things — generating the <Annot> with alternate text, and verifying that the document’s language is specified, are what allow us to produce a fully accessible PDF; this is why we generally advise using the --pdf-profile="PDF/UA-1" command line switch rather than --tagged-pdf.

Adobe maintains a list of Standard PDF tags, most of which can easily be mapped by Prince to HTML counterparts.

Customising Prince’s default mappings

Prince can’t always map HTML directly to PDF tags, either because there is no direct counterpart in HTML, or because the source document’s markup and styling conflict.

Let’s look at the first scenario. HTML has a <main> element, which doesn’t have a one-to-one correspondence with a single PDF tag. On many sites, there is one article per document (a wikipedia entry, for example), and it’s wrapped by a <main> element, or some other element serving to wrap the main content.

Let’s look at the wikipedia article for stegosaurus, because it is the best dinosaur.

We can see from browser developer tools that this article’s content is wrapped with <div id="bodyContent">. We can tell Prince to map this to the PDF <Art> tag, defined as “Article element. A self-contained body of text considered to be a single narrative”, by adding a declaration in our stylesheet:

#bodyContent { prince-pdf-tag-type: Art; }

On another site, we might want to map the <main> element to <Art>. The same method applies:

main { prince-pdf-tag-type: Art; }

Different authors’ conventions over the years are one reason why Prince can’t necessarily map everything automatically (although, by default, HTML <article> is mapped to <Art>).

Therefore, in this new build of Prince, much of the mapping of HTML elements to PDF tags has been moved out of Prince’s internal logic and into the default stylesheet html.css in the style sub-folder. This makes it clearer how Prince maps HTML elements to PDF tags, and allows the author to override or customise it if necessary.

Here is the relevant section of the default mappings:

article { prince-pdf-tag-type: Art }
section { prince-pdf-tag-type: Sect }
blockquote { prince-pdf-tag-type: BlockQuote }
h1 { prince-pdf-tag-type: H1 }
h2 { prince-pdf-tag-type: H2 }
h3 { prince-pdf-tag-type: H3 }
h4 { prince-pdf-tag-type: H4 }
h5 { prince-pdf-tag-type: H5 }
h6 { prince-pdf-tag-type: H6 }
ol { prince-pdf-tag-type: OL }
ul { prince-pdf-tag-type: UL }
li { prince-pdf-tag-type: LI }
dl { prince-pdf-tag-type: DL }
dl > div { prince-pdf-tag-type: DL-Div }
dt { prince-pdf-tag-type: DT }
dd { prince-pdf-tag-type: DD }
figure { prince-pdf-tag-type: Div } /* figure grouper */
figcaption { prince-pdf-tag-type: Caption }
p { prince-pdf-tag-type: P }
q { prince-pdf-tag-type: Quote }
code { prince-pdf-tag-type: Code }
img, input[type="image"] {
    prince-pdf-tag-type: Figure;
    prince-alt-text: attr(alt);
}
abbr, acronym {
    prince-expansion-text: attr(title)
}

There are also two new properties, prince-alt-text and prince-expansion-text, which can be overridden to support the relevant ARIA attributes.

Uncle Håkon shouting at me last month in Paris

Taking our lead from wikipedia again, we might want to produce a PDF table of contents from the ‘Contents’ box. Here is the Contents for the entry about otters (which are the best non-dinosaurs):

screenshot of wikipedia's in-page table of contents

The box is wrapped in an unordered list inside a <div id="toc">. To make this into a PDF Table of Contents (<TOC>), I add these lines to Prince’s html.css (because obviously I can’t touch the wikipedia source files):

#toc ul {prince-pdf-tag-type: TOC;} /*Table of Contents */
#toc li {prince-pdf-tag-type: TOCI;} /* TOC item */

This produces the following tag structure:

Acrobat screenshot showing PDF table of contents based on the wikipedia table of contents

In one of my personal sites, I use HTML <nav> as the wrapper for my internal navigation, so I would use these declarations instead:

nav ul {prince-pdf-tag-type: TOC;}
nav li {prince-pdf-tag-type: TOCI;}

Only internal links are appropriate for a PDF Table of Contents, which is why Prince can’t automatically map <nav> to <TOC> but makes it easy for you to do so, either by editing html.css directly, or by pulling in a supplementary stylesheet.

Mapping when semantic and styling conflict

There are a number of tricky questions when it comes to tagging when markup and style conflict. For example, consider this markup which is used to “fake” a bulleted list visually:

<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<title>My lovely PDF</title>
<style>
div div {display:list-item;
    list-style-type: disc;
    list-style-position: inside;}
</style>
<div>
    <div>…</div>
    <div>…</div>
    <div>…</div>
</div>

Browsers render it something like this:

what looks like a bulleted list in a browser

But this merely looks like a bulleted list; structurally, it is nothing other than three meaningless <div>s. If you need this to be tagged in the output PDF as a list (so that a screen reader user can use a keyboard shortcut to jump from list to list, for example), you can use these lines of CSS:

body>div {prince-pdf-tag-type: UL;}
div div {prince-pdf-tag-type: LI;}

Prince creates custom OL-L and UL-L tags which are role-mapped to PDF’s list structure tag <L>. Prince also sets the ListNumbering attribute when it can infer it.

Mapping ARIA roles

Often, developers supplement their HTML with ARIA roles. This can be particularly useful when retrofitting legacy markup to be accessible, especially when that markup contains few semantic elements — the usual example is adding role=button to a set of nested <div>s that are styled to look like a button.

Prince does not do anything special with ARIA roles, partly because, as WebAIM reports,

they are often used to override correct HTML semantics and thus present incorrect information or interactions to screen reader users

But by supplementing Prince’s html.css, an author can map elements with specific ARIA roles to PDF tags. For example, if your webpage has many <div role="article"> elements, you can map these to PDF <Art> tags thus:

div[role="article"] {prince-pdf-tag-type: Art;}


As with HTML, the more structured and semantic the markup is, the better the output will be. Of course, Prince cannot verify that alternate text accurately describes the function of an image; claiming that a document meets the PDF/UA-1 profile ultimately requires some human review, so Prince has to trust that the author has done their part in making the input intelligible. Using Prince, it’s very easy to turn long documents, even whole books, into accessible and attractive PDFs.

July 12, 2019

July 09, 2019

I’ve just finished teaching a four-day course introducing software engineering for the first time. My plan is to refine the course (I’m teaching it again in October), and it will eventually become the basis for doctoral training programmes in research software engineering at Oxford, and part of a taught Master’s. My department already has an M.Sc. in Software Engineering for commercial engineers (in fact, I have that degree), and we want to do the same for software engineers in a research context.

Of course, I can also teach your team about software engineering!

Some challenges that came up:

  • I’m too comfortable with the command line to be good at getting people past the initial discomfort of the unfamiliar. From their perspective, command-line tools are all unusably hard. I’ve learnt from various sources to try foo --help, man foo, and other incantations. Others haven’t.

  • git, in particular, is decidedly unfriendly. What I want to do is commit my changes. What I have to do is stage my changes, then commit my staged changes. As a result, teaching git use takes a significant chunk of the available time, and still leaves confusion.

  • you need to either tell people how to set their core.editor, or how to quit vim.

  • similarly, there’s a world of difference between python and python3, and students aren’t going to interpret the sorts of errors you get if you choose the wrong one.

  • Introduce a tangent, and I run the risk of losing people to that tangent. I briefly mentioned UML while discussing diagrams of objects, as a particular syntax for those diagrams. In the subsequent lab, some people put significant time into making sure their diagrams were valid UML.

  • Finding the trade-off between presentation, tutorial, and self-directed exercise is difficult. I’m used to presentations and will happily talk on many topics, but even I get bored of listening to myself after the roughly 50% of this course’s time that I’ve spent speaking. It must be worse for the students. And there’s no substitute for practical experience, but that must be supported by guidance.

  • There are so many topics that I didn’t get to cover!

    • only having an hour for OOP is a sin
    • which means I didn’t even mention patterns or principles
    • similarly, other design techniques like functional programming got left off
    • principles like Agile Software Development, Software Craftsmanship, or Devops don’t get a mention
    • continuous integration and continuous delivery got left off. Even if they hadn’t, the amount of work involved in going from “I have a Python script” to “I run my tests whenever I change my script, and update my PyPI package whenever they pass” is too damn high.
    • forget databases, web servers, browsers, mobile apps, desktop apps, IoT, or anything that isn’t a command line script or a jupyter notebook
    • and machine learning tools
    • and concurrency, processes and process improvement, risk management, security, team dynamics, user experience, accessibility…

It’s only supposed to be a taster, but I have to trade off introducing everything against showing the value present in anything. What this shows, as I found when I wrote APPropriate Behaviour, is that a load goes into being a programmer that is not programming.

July 05, 2019

Reading List 233 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way!

July 04, 2019

This conference season I’ve spoken at some events for non-frontenders, suggesting that people invest time in learning the semantics of HTML. After all, there are only 120(ish) elements; the average two-year-old knows 100 words, and by the time a child is three they will have a vocabulary of over 300 words.

A few people asked me the difference between <article> and <section>. My reply: don’t worry. Simply don’t use <section>. Its only use is in the HTML Document Outline Algorithm, which isn’t implemented anywhere, and seemingly never will be. For the same reason, don’t worry about the <hgroup> element.

But do use <article>, and not just for blog posts and news stories. It’s not just for articles in the news sense; it’s for discrete, self-contained things. Think “article of clothing”, not “magazine article”. So a list of videos should have each one (and its description) wrapped in an <article>; a list of products, similarly. Consider adding microdata, as that will give you better search engine results and look better on Apple Watches.

And, of course, do use <main>, <nav>, <header> and <footer>. They’re really useful for screen reader users – see my article The practical value of semantic HTML.

Happy marking up!

June 28, 2019

The Fruit of my Loom by Daniel Hollands (@limeblast)

Following on from my post about the loom I built for my friend Emma, I have an update to share on her progress using it, delivered via the medium of WhatsApp images and the comments she put next to them.

June 27, 2019

For reasons that will become clear, I can’t structure this article as a “falsehoods programmers believe” article, much as that would add to the effect.

There are plenty of such articles in the world, so turn to your favourite search engine, type in “falsehoods programmers believe”, and orient yourself to this concept. You’ll see plenty of articles that list statements that challenge assumptions about a particular problem domain. Some of them list counterexamples, and a subset of those give suggestions of ways to account for the counterexamples.

As the sort of programmer who writes falsehoods programmers believe articles, my belief is that interesting challenges to my beliefs will trigger some curiosity, and lead me to research the counterexamples and solutions. Or at least, to file away the fact that counterexamples exist until I need it, or am otherwise more motivated to learn about it.

But that motivation is not universal. The fact that I treat it as universal turns it into a falsehood I believe about readers of falsehoods articles. Complaints abound that falsehoods articles do not lead directly to fish on the plate. Some readers want a clear breakdown from “thing you might think is true but isn’t true” to “JavaScript you can paste into your project to account for it not being true”. These people are not well-served by falsehoods articles.

June 26, 2019

Longer, fuller stacks by Graham Lee

Thinks to self: OK, this “full-stack” project is going to be fairly complex. I need:

  • a database. I don’t need it yet, I’ll defer that.
  • a thing that runs on the server, listens for HTTP requests from a browser, builds responses, and sends them to the browser.
  • a thing that runs on the browser, built out of bits assembled by the server thing, that the user can interact with.

What I actually got was:

  • a thing that runs on the server.
  • a thing that defines the environment for the server.
  • a thing that defines the environment on development machines so that you can run server-development tasks.
  • a thing that turns code that can’t run in a browser into code that can run in a browser.
  • a thing that turns code that can run in a browser into code that does run in real browsers.
  • a headless browser that lets me test browser code.
    • BTW, it doesn’t work with that server environment.
    • a thing that shows how Linux binaries are loaded, to work out how to fix the environment.
    • also BTW, it doesn’t run headless without setting some environment variable
    • a thing that is used for cross-platform native desktop apps, that I can configure to work headless.
  • a thing that builds the bits assembled by the server thing so that the test thing can see the code being tested.

And somehow, people argue that this is about reducing development costs.

June 24, 2019

Freelancing & Dogs

You may be a freelancer thinking about getting a dog. Well, YOU MUST, but with some considerations. Here is a post about Freelancing & Dogs.

Betsy, our Cockapoo, has been in the family for 3 years now and has added so much joy to my life, but when you work from home, owning a dog does come with its challenges.

Below is a list of random Pros and Cons about owning a dog while being a freelancer.

Pro: You’ll need to walk your dog, which means leaving your studio and getting lots of lovely fresh air and exercise. Turn the walk into a run and get your heart rate up; the dog will sleep forever afterwards too.

Con: Your dog will probably bark at anything or anyone that moves past your house, especially when you’re on an important call. Make sure you have your finger on the mute button.

Pro: When you’re having a tough day they will always be there to make you feel better. Without fail your K9 friend will make you laugh, or de-stress you with a cuddle or that much-needed attention we all crave.

Con: They want to play 24/7 and often trick you into playing. You’ll be busy working on that masterpiece of design, and then out of nowhere they’ll pretend they need a wee. You stop what you’re doing, and then they show up at the back door with a tennis ball in their mouth. Cheeky things.

Pro: Clients love dogs. You’ll find that if you get talking to a potential client, they have dogs of their own, and you’ll instantly have something in common to bond over. Bonding always helps land that new client; before you know it you’ll be holding hands walking the dogs.

Con: You can’t just take an in-house project without planning where the dog will stay for the day. Larger dogs can stay at home for long periods, but smaller dogs will need loo breaks and walks. This comes at an expense too if you don’t have family nearby; dog walkers are typically £10 an hour.

Pro: They’re really good company. Freelancing can be very lonely, with days passing without any meaningful interactions with your clients, especially if you work from home like me. A dog is a great daily companion.

Con: They want to eat all your food. I’m serious: if you have something nice, they want it. If you drop something on the floor, they’ll eat it. If you have a desk snack, they expect some too.

I’m fairly confident nobody is reading this, but feel free to tweet me with your own dog-freelance stories. #freelancedogs


June 21, 2019

What’s the worst that could happen?

The importance of releasing code early

As a developer, I think it’s great to be able to see the real-world impact of my work, so when I started at Talis I was very keen to get something into production as quickly as possible. Etsy have previously spoken about how they try to get new engineers to deploy to production on their first day. Within my first few days I had already conducted a bug investigation for a member of the product team, and shortly after made my first production release. Not only is releasing code in the first few days of a new job a great morale boost, but it also removes the fear associated with releasing code. After all, if a new starter can be trusted to release code in their first few days, how scary can it be?

To enable new starters to release in their first few days, we have to provide the appropriate tooling and resources.

Ramp up

New developers at Talis are given a series of ramp up stories. These are generally small bugs or improvements that don’t require a great deal of code change, but do require the set up of a local environment, going through our code review process, using our chat ops to release to an on demand environment before finally releasing to production. The main purpose of these ramp up stories is to get new starters comfortable with releasing code to production. In fact, over the last year, we made 54 releases to our main product in just 117 working days. This doesn’t include the numerous releases we made to the microservices that support our core product.

Specifying the work

Every team at Talis works in 2 week sprints, with the work broken down into multiple stories. The aim is that at the completion of each story we are able to release the work done for that story. In its simplest form, a story consists of a description of the bug or feature along with a proposed solution or desired outcomes. In the fortnightly sprint planning meetings, the team will go through each upcoming story to provide estimates and talk through the proposed solutions or outcomes. Even in my very first planning meeting, I was encouraged to ask questions and make suggestions about the stories we were discussing. Developers can then pick up stories for the current sprint and have all the details needed to complete the story.


The fun stuff

Now that we have a well-defined story with clear outcomes, we’re ready to get on with writing some code. Everyone is free to use the IDE of their choosing (as well as operating system), and we have pretty much every major IDE being used by someone in the office. All that matters is that you have a development environment that you are comfortable with. Pair Programming is commonplace within Talis and is encouraged. This is particularly useful whenever you start work on a project or area of the code you’ve not worked on before.

Peer review

Once I had reviewed my own work and got a green build on our CI server, it was time to ask for a peer review. I feel like this first review is always daunting for developers - after all, it’s the first time your new colleagues will see your work and you want to make a good impression. Thankfully any fears I had were soon alleviated when I received an approval, and got asked to release it…

Don’t worry about it

At Talis releasing is not a big deal. After all, by the time we get to releasing our code has already been reviewed by another member of the team and automated testing has been performed on our CI server. The first step in our release is to deploy our code to an on-demand environment which mimics our production environment. Creating an on-demand environment is made simple with our chat ops:

<service name> create ondemand <code version>

On our staging environment, we will perform some automated testing using New Relic Synthetics. Once everyone is satisfied that everything is working as expected, then we can deploy to production. This time the process is slightly different and another developer will be required to approve the request (this approval means that a second developer is available should any problems occur during or after the release):

<service name> deploy <code version>

Once this has been approved by a second developer, your code will be deployed to production. At this point, we monitor our throughput and error rates for any anomalies, and if we detect any at this point we can check the logs for more detail and, if required, we can rollback to the previous version by deploying that version using the same command. All this means that even if there is an issue with a release, we can return our systems to a stable state.

Further Reading

Our Engineering Handbook has more details on our general approach to Software Engineering.

June 20, 2019

I am writing a blog post, in which I intend to convince you of my case. A coherent argument must be created, in which the benefits of my view are enumerated. Paragraphs are introduced to separate the different parts of the argument.

The scene was set in the first sentence, so readers know that the actor in the following sentences must be me. Repeating that information would be redundant. Indeed, it was clearly me who set that scene, so no need to mention me at the start of this paragraph. An article in which each sentence is about the author, and not the article’s subject, could be perceived as a sign of arrogance. This perception is obviously performed by the reader of the article, so there is no need to explicitly call that out.

The important features of the remaining sentences in the first paragraph are those relating to the structure of the article. These structural elements are subjects upon which I act, so bringing them to the fore in my writing involves suppressing the object, the actor in the text. I can do this by choosing to use the passive voice.

Unfortunately, grammar checkers throughout the world of computing give the impression that the passive voice is always bad. Millions of people are shown underlining, highlighting, and inline tips explaining that their writing is wrong. Programmers have leaked the abstraction that everything in their world is either 1 or 0 into a world where that does not make sense. Sentences are marked either active (1), and therefore correct, or passive (0), and therefore incorrect.

Let us apply that to other fields of creative endeavour. Vincent: A Starry Night is not that brightly coloured. 0. You used too much paint on the canvas. 0. Stars are not that big. 0.

Emily: too many hyphens. 0. No need to capitalize “microscope”. 0. Sentence fragment. 0.

June 17, 2019

Last week, I was invited to address the annual conference of the UK Association for Accessible Formats. I found myself sitting next to a man with these two refreshable braille displays, so I asked him what the difference is.

Two similar refreshable braille displays, side by side

On the left is his old VarioUltra 20, which can connect to devices via USB or Bluetooth, and can take a 32GB SD card for offline use (reading a book, for example). It’s also a note-taker. He told me it cost around £2500. On the right is his new Orbit Reader 20, “the world’s most affordable Refreshable Braille Display”, with similar functionality, which costs £500.

As he wasn’t deaf-blind, I asked why he uses such expensive equipment, when devices have built-in free screen readers. One of his reasons was, in retrospect, so blazingly obvious, and so human.

He likes to read his kids bedtime stories. With the braille display, he can read without a synthesised voice in his ear. Therefore, he could do all the characters’ voices himself to entertain his children.

My take-home from this: Of course free screen readers are an enormous boon, but each person has their own reasons for choosing their assistive technologies. Accessibility isn’t a technological problem to be solved. It’s an essential part of the human condition: we all have different needs and abilities.

June 12, 2019

June 11, 2019

I’ve just returned from a fantastic week in Copenhagen at the 2019 Ecsite Conference – Pushing Boundaries hosted at The Experimentarium in Hellerup.  It was my 4th Ecsite, having contributed to previous Ecsite conferences in Graz, Porto and Geneva.  Here are some details from Ecsite 2017 in Porto where in 2 days we built an Audio […]

June 06, 2019

Since starting The Labrary late last year, I’ve been able to work with lots of different organisations and lots of different people. You too can hire The Labrary to make it easier and faster to create high-quality software that respects privacy and freedom, though not before January 2020 at the earliest.

In fact I’d already had a portfolio career before then, but a sequential one. A couple of years with this employer, a year with that, a phase as an indie, then back to another employer, and so on. At the moment I balance a 50% job with Labrary engagements.

The first thing to notice is that going part time starts with asking the employer. Whether it’s your current employer or an interviewer for a potential position, you need to start that conversation. When I first went from full-time to 80%, a few people said something like “I’d love to do that, but I doubt I’d be allowed”. I infer from this that they haven’t tried asking, which means it definitely isn’t about to happen.

My experience is that many employers didn’t even have the idea of part-time contracts in mind, so there’s no basis on which they can say yes. There isn’t really one for “no” either, except that it’s the status quo. Having a follow-up conversation to discuss their concerns both normalises the idea of part-time employees, and demonstrates that you’re working with them to find a satisfactory arrangement: a sign of a thoughtful employee who you want to keep around, even if only some of the time!

Job-swapping works for me because I like to see a lot of different contexts and form synthetic ideas across all of them. Working with different teams at the same time is really beneficial because I constantly get that sense of change and excitement. It’s Monday, so I’m not there any more, I’m here: what’s moved on in the last week?

It also makes it easier to deal with suboptimal working environments. I’m one of those people who likes being in an office and the social connections of talking to my team, and doesn’t get on well with working from home alone (particularly when separated from my colleagues by timezones and oceans). If I only have a week of that before I’m back in society, it’s bearable, so I can consider taking on engagements that otherwise wouldn’t work for me. I would expect that applies the other way around, for people who are natural hermits and would prefer not to be in shared work spaces.

However, have you ever experienced that feeling of dread when you come back from a week of holiday to discover that pile of unread emails, work-chat-app notifications, and meeting bookings you don’t know the context for? Imagine having that every week, and you know what job-hopping is like. I’m not great at time management anyway, and having to take extra care to ensure I know what project C is up to while I’m eyeballs-deep in project H work is difficult. This difficulty is compounded when clients restrict their work to their devices; a reasonable security requirement but one that has led to the point now where I have four different computers at home with different email accounts, VPN access, chat programs, etc.

Also, absent employee syndrome hits in two different ways. For some reason, the median lead time for setting up meetings seems to be a week. My guess is that this is because the timeslot you’re in now, while you’re all trying to set up the meeting, is definitely free. Anyway. Imagine I’m in now, and won’t be next week. There’s a good chance that the meeting goes ahead without me, because it’s best not to delay these things. Now imagine I’m not in now, but will be next week. There’s a good chance that the meeting goes ahead without me anyway, because nobody can see me when they book the meeting so don’t remember I might get involved.

That may seem like your idea of heaven: a guaranteed workaround to get out of all meetings :). But to me, the interesting software engineering happens in the discussion and it’s only the rote bits like coding that happen in isolation. So if I’m not in the room where the decisions are made, then I’m not really engineering the software.

Maybe there’s some other approach that ameliorates some of the downsides of this arrangement. But for me, so far, multiple workplaces is better than one, and helping many people by fulfilling the Labrary’s mission is better than helping a few.

Last week we had the pleasure of welcoming technology and game enthusiast, Alyssa, to our office. Here is what she wrote about her week of...


We are delighted to announce that we have won the award for ‘VR Product of the Year’ at the prestigious National Technology Awards. The NTA...


June 05, 2019

If you’ve used the Rails framework, you will probably recognise this:

class Comment < ApplicationRecord
  belongs_to :article
end

This snippet of code implies three things:

  1. We have a table of comments.
  2. We have a table of articles.
  3. Each comment is related to an article by some ID.

Rails users will take for granted that if they have an instance of the Comment class, they will be able to execute some_comment.article to obtain the article that the comment is related to.

This post will give you an extremely simplified look at how something like Rails’ ActiveRecord relations can be achieved. First, some groundwork.


Modules in Ruby can be used to extend the behaviour of a class, and there are three ways in which they can do this: include, prepend, and extend. The difference between the three? Where they fall in the method lookup chain.

class MyClass
  prepend PrependingModule
  include IncludingModule
  extend ExtendingModule
end

In the above example:

  • Methods from PrependingModule will be created as instance methods and override instance methods from MyClass.
  • Methods from IncludingModule will be created as instance methods but not override methods from MyClass.
  • Methods from ExtendingModule will be added as class methods on MyClass.
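A minimal, self-contained sketch (module and method names invented for illustration) shows where each mechanism lands in the lookup chain:

```ruby
module PrependingModule
  def greet; "from the prepended module"; end
end

module IncludingModule
  def greet; "from the included module"; end
  def farewell; "from the included module"; end
end

module ExtendingModule
  def class_greet; "from the extending module"; end
end

class MyClass
  prepend PrependingModule
  include IncludingModule
  extend ExtendingModule

  def greet; "from MyClass"; end
end

MyClass.new.greet          # => "from the prepended module" (prepend overrides the class)
MyClass.new.farewell       # => "from the included module"  (include fills in what the class lacks)
MyClass.class_greet        # => "from the extending module" (extend adds class methods)
MyClass.ancestors.first(3) # => [PrependingModule, MyClass, IncludingModule]
```

The `ancestors` call makes the ordering concrete: the prepended module sits in front of the class itself, and the included module sits behind it.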

We can do fun things with extend.

Executing Code During Interpretation Time

module Ownable
  def belongs_to(owner)
    puts "I belong to #{owner}!"
  end
end

class Item
  extend Ownable
  belongs_to :overlord
end

In the above code, we’re just defining a module and a class. No instance of the class is ever created. However, if you execute just this code in an IRB session, you will see “I belong to overlord!” as the output. Why? Because the code we write while defining a class is executed as that class definition is being interpreted.

What if we re-write our module to define a method using Ruby’s define_method?

module Ownable
  def belongs_to(owner)
    define_method(owner.to_sym) do
      puts self.object_id
    end
  end
end

Whatever we passed as the argument to belongs_to will become a method on instances of our Item class.

our_item = Item.new
our_item.overlord
#  => 70368441222580

Excellent. You may have heard this term before, but this is “metaprogramming”. Writing code that writes code. You just metaprogrammed.

Tying It Together

You might also notice that we’re getting closer to the behaviour that we would expect from Rails.

So let’s say we have our Item class, and we’re making a videogame, so we’re going to say that our item belongs to a player.

class Item
  extend Ownable
  belongs_to :player
end

Our Rails-like system could make some assumptions about this.

  1. There is a table in the database called players.
  2. There is a column in our items table called player_id.
  3. The player model is represented by the class Player.

Let’s return to our module and tweak it based on these assumptions.

module Ownable
  def belongs_to(owner)
    define_method(owner.to_sym) do
      # We need to get `Player` out of `:player`
      klass = owner.to_s.capitalize.constantize
      # We need to turn `:player` into `:player_id`
      foreign_key = "#{owner}_id".to_sym
      # We need to execute the actual query:
      # SELECT * FROM players WHERE id = :player_id LIMIT 1
      klass.find_by(id: self.send(foreign_key))
    end
  end
end

class Item
  extend Ownable
  belongs_to :player
end

my_item = Item.first
my_item.player
# SELECT * FROM players WHERE id = 12 LIMIT 1
# => #<Player id: 12>
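The finished module leans on ActiveSupport’s constantize and ActiveRecord’s find_by, so it won’t run outside Rails. For the curious, here is a plain-Ruby sketch of the same metaprogramming, with a hypothetical in-memory register standing in for the database:

```ruby
module Ownable
  def belongs_to(owner)
    define_method(owner) do
      # :player -> Player (Object.const_get stands in for constantize)
      klass = Object.const_get(owner.to_s.capitalize)
      # :player -> :player_id, then look the owner up by that foreign key
      klass.find(send("#{owner}_id"))
    end
  end
end

class Player
  attr_reader :id

  def initialize(id)
    @id = id
  end

  # Stand-ins for ActiveRecord's persistence methods
  def self.register(player)
    (@all ||= {})[player.id] = player
  end

  def self.find(id)
    (@all ||= {})[id]
  end
end

class Item
  extend Ownable
  attr_accessor :player_id
  belongs_to :player
end

Player.register(Player.new(12))
item = Item.new
item.player_id = 12
item.player.id # => 12
```

Same trick, no framework: `belongs_to :player` runs at class-definition time and defines a `player` instance method that resolves the class name and follows the foreign key.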


June 03, 2019

Ideas and solutions for tech founders on a tight budget

When it comes to building your first product or website you’ll quickly learn how cost is a huge factor in working with the right people – but that shouldn’t stop you from launching your product or website.

Most of my projects start at around £3,000 because of how complex product design can be; the bare fact is that good design takes time and costs money. So what happens if you have a much lower budget?

Here are a few suggestions on how to proceed on a small budget.

Adjust scope

The first thing you can do is adjust your scope: you don’t have to launch a feature-rich product! You can start with a basic MVP that will take less time to design and build. (You can even skip the build and work with a prototype – investors don’t care if it’s built as long as they can see potential.)

Don’t do too much too soon when you can launch and become successful with far less. So try adjusting your scope.

Buy a pre-designed theme / template

Another great option is to buy a pre-designed theme or template. There are thousands of these online.

A theme will help you get a basic look and feel for your product, and while you will have to revisit the design in your business’s future, a theme will be a great start and cost less than $100/£100.

There are lots of options available to you. For the web you can look at Wix and Squarespace, and most decent hosting packages include free web templates. WordPress has a wealth of excellent themes.

For app design you’ll have to be a little more technical, by downloading a GUI kit (a pre-designed set of app screens). You can download these for free (InVision have some amazing ones) or you can pay for more premium UI kits over at Creative Market or UI8.

Another great resource is to search for ‘GUI’ or ‘free UI kits’ – the community is very giving! 🙂

Find a cheaper / junior designer

Another great option is to search for a designer who’s new to the industry.

These will typically be college students or recent graduates. It’s easy to reach out to your local University and ask them to recommend someone for some work experience.

Another option is to head to sites like Fiverr, PeoplePerHour and Upwork and search for low-budget designers who have good reviews. Be careful, though: they could end up selling you a template or somebody else’s hard work. Be firm with your brief.


Learn to do it yourself

We’re really lucky to live in a world with so many excellent free online learning resources, so why not try to learn it yourself to get started?

Figma is free and excellent; Sketch and Framer have free trials; and Adobe XD is worth a look if you have a CC account. Download, install, and jump onto YouTube and follow someone like Pablo Stanley, who gives excellent tutorials.

Feeling like you can spend some cash on your learning? Try sites like TeamTreehouse, which have video courses that will walk you through the basics and get you designing in no time.

Find a tech business partner

When I launched my first startup I traded my design time for some developer time. Martin eventually became my co-founder and we managed to get Howler to a decent place before closing it last year.

Ask around; some designers or developers may have an opening and find your product interesting enough to give you some time. It’s worth going in with some investment leads, or at minimum a business plan, to hook their interest.

Stagger costs

If they don’t want to join the business, they may be open to staggering costs so you get the perfect product at an affordable monthly payment.

While this won’t suit everyone, it could work nicely for professionals who take on monthly retainers.

Ask your network for help

Everyone knows someone that’s looking for work, so don’t be afraid to ask for help. I’m always recommending designers and developers to people who’ve contacted me.

So if your own network comes up empty, ask some designers on Twitter if they can recommend someone. Typically we’re willing to help, give it a try!

Keep looking!

I’m a firm believer in ‘you get what you pay for’, but that doesn’t mean there isn’t a designer within your budget; you just have to keep looking.

I did a poll and the results were very interesting, take a look.

I hope this helps, I really do. It breaks my heart turning away enthusiastic passionate tech startup founders because of budget.

Go make something amazing.

Follow me on Twitter & thanks to everyone who contributed to this blog.

Photo by Marc Schäfer on Unsplash


June 01, 2019

I own a workshop! by Daniel Hollands (@limeblast)

I moved into my current house around a year ago, and as I mentioned at the time, I was super excited by the prospect of owning a shed. Fast forward a year, and much like a Pokémon, the shed has evolved into a workshop, thanks to a very generous donation by my parents.

I plan on doing a full post about the workshop (including a video tour of my setup) in the near future, but for now, here are a series of photos taken at the end of each day over the course of a week, as the fine folks at Sheds R Us laid the groundwork and constructed it.

May 31, 2019

Reading List 232 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way!

But we are hackers and hackers have black terminals with green font colors ~ John Nunemaker

This is the second in a series of posts on useful bash aliases and shell customisations that developers here at Talis use for their own personal productivity. In this post I describe how I configured my shell to automatically change my terminal theme when I connect to a remote machine in any of our AWS accounts.


As I’ve mentioned previously, at Talis, we run most of our infrastructure on AWS. This is spread over multiple accounts, which exist to separate our production infrastructure from development/staging infrastructure. Consequently we can find ourselves needing to SSH onto boxes across these various accounts. For me it is not uncommon to be connected to multiple machines across these accounts, and what I found myself needing was a way to quickly tell which of these were production boxes and which were servers in our development account.


All of my development work is done on a MacBook Pro running macOS. Several years ago I started using iTerm2 as my terminal emulator instead of the built-in Terminal, which has always felt particularly limited. Given these constraints, the solution I came up with was to implement a wrapper around the ssh command that would tell iTerm2 when to switch themes, so that we can use different colors for production environments versus development ones.

In order to work, the script requires you to create three profiles in iTerm2; for our purposes, each profile is essentially the theme you want to use. When creating a profile you can customise colors, fonts, etc. But crucially, for each of them you need to enter a value in the badge field. This tells iTerm2 what to set as the badge, which is displayed as a watermark on the terminal. In this case I wanted to use the host of the machine that I’ve connected to, which I specify as current_ssh_host in my script; therefore the value for the badge field needs to be set to \(user.current_ssh_host).

iTerm profile

When you’ve created the profiles you can add the following to your ~/.aliases file to ensure that the ssh wrapper script knows which profiles to use for the three themes it requires.

export SSH_DEFAULT_THEME=Spacemacs
export SSH_DANGER_THEME=Production
export SSH_WARNING_THEME=Staging

Once this is done you can use the wrapper script:

  • copy the contents of the script to /usr/local/bin/ssh (or anywhere, as long as it’s on your PATH ahead of the system ssh)
  • now, when you issue an ssh command in the terminal, the script captures the hostname of the machine that you are trying to connect to
  • it then uses awslookup to check which AWS account that host resides in
  • in my case, if the host is in the production account the script tells iTerm2 to switch to the SSH_DANGER_THEME, and if it’s in our development account it uses the SSH_WARNING_THEME
  • the terminal then switches to the corresponding theme
  • when you exit your ssh session the wrapper resets the theme back to your default
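The steps above can be sketched as a wrapper script. This is a hedged sketch in the spirit of the post, not the author’s actual script: it assumes the awslookup tool mentioned above (the way it is invoked here is hypothetical), the SSH_*_THEME variables from ~/.aliases, and iTerm2’s proprietary escape sequences for switching profiles and setting user variables.

```shell
#!/bin/bash
# Sketch of an ssh wrapper that switches iTerm2 themes per AWS account.

set_theme() {
  # Ask iTerm2 to switch to the named profile/theme.
  printf '\033]1337;SetProfile=%s\a' "$1"
}

set_badge_host() {
  # Publish the host as user.current_ssh_host, which the profile's
  # badge field (\(user.current_ssh_host)) displays as a watermark.
  printf '\033]1337;SetUserVar=current_ssh_host=%s\a' "$(printf '%s' "$1" | base64)"
}

theme_for_host() {
  # Map the AWS account a host lives in to a theme name.
  case "$(awslookup "$1")" in   # hypothetical: resolve host -> account name
    production)  echo "${SSH_DANGER_THEME:-Production}" ;;
    development) echo "${SSH_WARNING_THEME:-Staging}" ;;
    *)           echo "${SSH_DEFAULT_THEME:-Spacemacs}" ;;
  esac
}

if [ "$#" -gt 0 ]; then
  host="${@: -1}"                        # assume the host is the last argument
  set_badge_host "$host"
  set_theme "$(theme_for_host "$host")"
  command ssh "$@"                       # run the real ssh
  set_theme "${SSH_DEFAULT_THEME:-Spacemacs}"  # restore the default on exit
fi
```

Real ssh invocations can put the host anywhere among the options, so a production version would need proper argument parsing; this sketch just shows the shape of the trick.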

For example, when I ssh to a production server, my terminal automatically switches to this:

Danger Theme

And when I connect to a development server, it automatically changes to this:

Warning Theme

As soon as I exit the ssh session the terminal is restored to my default theme.

Whilst this is a very specific solution for macOS you can achieve similar results on Linux. Enjoy!

May 30, 2019

The Logical Fallacy by Graham Lee

Nary a week goes by without seeing a post by a programmer, for programmers, on the subject of logical fallacies in arguments. This week’s, courtesy of hacker news, is not egregious, enlightening, or indeed different in any way from the usual torrent. It is merely the one that prompted me into writing this article. The most frequent, and most severe, logical fallacy I encounter among programmers is this one:

  • basing your argument on logic.

Now, obviously, for a fallacy to be recognised it needs to have a Latin name, so I’m going to call this one argumentum ex logica.

Argumentum ex logica is the fallacious reasoning that the best course of action for a group of people is the one that can be arrived at by logical deduction. No need to consider the emotions of the people involved, or the aesthetic properties of any potential solutions. Just treat your workplace like your high school debating club, pick (seemingly arbitrarily) some axioms, and batter your way through to your preferred conclusion.

If people disagree with you on (unreasonable) emotional grounds, just name their logical fallacies so you can overrule their arguments, like you’re in an episode of Ally McBeal and want their comments stricken from the record. If people manage to find a flaw in the logic of your argument, just pick a new axiom you’d never mentioned before and carry on.

The application of the argumentum ex logica fallacy is frequently accompanied by descriptions of the actions of “the brain”, that strange impish character that sits inside each of us and causes us to divert from the true path of Syrran of Vulcan. Post hoc ergo propter hoc, we are told, is an easy mistake to make because “the brain” sees successive events as related.

Here’s the weird thing. We all have a “the brain” inside us, as an important part of our being. By writing off “the brain” as a mistaken and impure lump of wet fat, programmers are saying that they are not building their software for humans. There must be some other kind of machine, one that functions on purely logical grounds, for whom their software is intended. It should not be.

May 28, 2019

I’m doing some changes to this WordPress site and wanted to get out of the loop of FTPing a new version of my CSS to the live server and refreshing the browser. Rather than clone the site and set up a dev server, I wanted to host it on my local machine so the cycle of changing and testing would be faster and I could work offline.

Nice people on Twitter recommended Local by Flywheel, which was easy to install and get going (no dependency rabbit hole) and which allows you to locally host multiple sites. It also has a really intuitive UI.

To clone my live site, I installed the BackUpWordPress plugin, told it to back up the MySQL database and the files (e.g. the theme, plugins, etc.), and let it run. It exports a file that Local by Flywheel can easily ingest – simply drag and drop it onto Local’s start screen. (There’s a handy video that shows how to do it.)

For some reason, although I use the excellent Make Paths Relative plugin, the link to my main stylesheet uses the absolute path, so I edited my local header.php (in Users ▸ brucelawson ▸ Local Sites ▸ brucelawsoncouk1558709320complete201905241-clone ▸ app ▸ public ▸ wp-content ▸ themes ▸ HTML5) to point to the local copy of the CSS:

<link rel="stylesheet" href="http://brucelawson.local/wp-content/themes/HTML5/style.css" media="screen">

And that’s it – fire up Local, start the server, get coding!

If you’re having problems with the local wp-admin redirecting to your live site’s admin page, Flywheel engineers suggest:

  1. Go to the site in Local
  2. Go to Database » Adminer
  3. Go to the wp_XXXXX_options table (click the select button beside it in the sidebar)
  4. Make sure both the siteurl and home options are set to the appropriate local domain. If not, use the edit links to change them.

May 24, 2019

Reading List 231 by Bruce Lawson (@brucel)

Stickee Technology Ltd, Solihull, Birmingham, B90 4SB – (£18,000–£24,000) Overview We are looking for an admin account exec to join our existing digital team. This...


May 21, 2019

Domain-specific markup for fun and profit

It doesn’t come as a surprise to Dull Old Web Farts (DOWFs) like me to learn last month that Google gives a search boost to sites that use structured data (as well as rewarding sites for being performant and mobile-friendly). Google has brilliant heuristics for analysing the content of sites, but developers being explicit and marking up their content using subject-specific vocabularies means more robust results.

For the first time (to my knowledge), Google has published some numbers on how structured data affects business. The headlines:

  • Jobrapido’s overall organic traffic grew by 115%, and they have seen a 270% increase in new user registrations from organic traffic
  • After the launch of job posting structured data, Google organic traffic to ZipRecruiter job pages converted at a rate three times higher than organic traffic from other search engines. The Google organic conversion rate on job pages was also more than 4.5 times higher than it had been previously, and the bounce rate for Google visitors to job pages dropped by over 10%.
  • In the month following implementation, Eventbrite saw roughly a 100-percent increase in the typical year-over-year growth of traffic from Google Search
  • Traffic to all Rakuten Recipe pages from search engines soared 2.7 times, and the average session duration was now 1.5 times longer than before.

Impressive, indeed. So how do you do it? For this site, I chose a vocabulary from schema.org.

These vocabularies cover entities, relationships between entities and actions, and can easily be extended through a well-documented extension model. Over 10 million sites use schema.org to mark up their web pages and email messages. Many applications from Google, Microsoft, Pinterest, Yandex and others already use these vocabularies to power rich, extensible experiences.

Because this is a blog, I chose the BlogPosting schema, and I use the HTML5 microdata syntax. So each article is marked up like this:

<article itemscope itemtype="https://schema.org/BlogPosting">
  <h2 itemprop="headline" id="post-11378">The HTML Treasure Hunt</h2>
  <time itemprop="dateCreated pubdate datePublished" 
    datetime="2019-05-20">Monday 20 May 2019</time>

The values for the microdata attributes are specified in the schema.org vocabulary, except the pubdate value on itemprop, which isn’t from schema.org but is required by Apple for WatchOS because, well, Apple likes to be different.

And that’s basically it. All of this, of course, is taken care of by one WordPress template, so it’s automatic.

Metadata partial copy-paste necrosis for misery and loss

One thing puzzles me, however; Google documentation says that Google Search supports structured data in any of three formats: JSON-LD, RDFa and microdata formats, but notes “Google recommends using JSON-LD for structured data whenever possible”.

However, no reason is given for preferring JSON-LD except “Google can read JSON-LD data when it is dynamically injected into the page’s contents, such as by JavaScript code or embedded widgets in your content management system”. I guess this could be an advantage, but one of the other “features” of JSON-LD is, in my opinion, a bug:

The markup is not interleaved with the user-visible text

I strongly feel that metadata that is separated from the user-visible data associated with it is highly susceptible to metadata partial copy-paste necrosis. User-visible text is also developer-visible text. When devs copy/paste that, it’s very easy to forget to copy any associated metadata that isn’t interleaved, leading to errors. (And Google will penalise errors: structured data will not show up in search results if “The structured data is not representative of the main content of the page, or is potentially misleading”.)
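For comparison, a JSON-LD rendering of the article markup shown earlier would live in its own script element, completely detached from the visible headline and date (values here mirror the microdata example):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "The HTML Treasure Hunt",
  "datePublished": "2019-05-20"
}
</script>
```

If a dev copies the visible article but not this block, the copy and its metadata silently drift apart, exactly as described above.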

An example of metadata partial copy-paste necrosis can be seen in the commonly-recommended accessible form pattern:

<label for="my-input">Your name:</label>
<input id="my-input"/>

As Thomas Caspars wrote

I’ve contacted chums in Google to ask why JSON-LD is preferred, but had no reply. (I may go as far as trying to “reach out” next time.)

Andrew wrote

I’m pretty sure Google prefers JSON-LD over microdata because it’s easier for them to “borrow” the data for their own use in that format. When I was working on a screen-scraping project a few years ago, I found that to be the case. Since then, I’ve come to believe that schema.org is really about making it easier for the big guys to profit from data collection instead of helping site owners improve their SEO. But I’m probably just being a conspiracy theorist.

Speculation and conspiracy theories aside, until there’s a clear reason why I should use JSON-LD over interleaved microdata, I’m keeping it as it is.

Google replies

Updated 23 May: Dan Brickley, a Google employee who is Lord of schema.org, wrote this thread on Twitter:

May 20, 2019

The HTML Treasure Hunt by Bruce Lawson (@brucel)

Here are my slides for The HTML Treasure Hunt, my keynote at the International JavaScript Conference last week. They probably don’t make much sense on their own, unfortunately, as I use slides as pointers for me to ramble about the subject, but a video is coming soon, and I’ll post it here.

Update! Here’s the video! Starts at 18:08.

Given that one of my themes was “write less JS and more HTML”, feedback was great! Attendees gave me 4.8 out of 5 for “Quality of the presentation” (against a conference average of 4.0) and 4.9 for “Speaker’s knowledge of the subject” (against an average of 4.5). Comments included:

great talk! reminding of the basics we often forget.

amazing way to start the second day of this conference. inspiring to say the least. great job Bruce

very entertaining and great message. excellent speaker

Thanks, that was a talk a lot of us needed.

Remarkable presentation. Thought provoking, backed with statistics. Well presented.

Very experienced and inspiring speaker. I would really like to incorporate this new ideas (for me) in my code

I think there’s a room full of people going to re-learn HTML after that inspiring talk!

If you’d like me to talk at your event, give training at your organisation, or help CTO your next development project, get in touch!

May 16, 2019

Deprecating yarn by Graham Lee

In which I help Oxford University CS department with their threading issues.

We are looking for a content executive to join our existing digital marketing team. This is an excellent opportunity to join a dynamic and innovative...

May 13, 2019

Niche-audience topic time: if you’re in Oxford Uni, I’m giving a one-day course on collaborative software engineering with git and GitHub (the ideas apply to GitLab, Bitbucket etc. too) on 4th June, 10-3 at the Maths Institute. Look out for information from the OxfordRSE group with a sign-up link!

May 10, 2019

May 08, 2019

While Alan Turing is regarded by many as the grandfather of Artificial Intelligence, George Boole should be entitled to some claim to that epithet too. His Investigation of the Laws of Thought is nothing other than a systematisation of “those universal laws of thought which are the basis of all reasoning”. The regularisation of logic and probability into an algebraic form renders them amenable to the sort of computing that Turing was later to show could be just as well performed mechanically or electronically as with pencil and paper.

But when did people start talking about the logic of binary operations in computers as being due to Boole? Turing appears never to have mentioned his name: although he certainly did talk about the benefits of implementing a computer’s memory as a collection of 0s and 1s, and describe operations thereon, he did not call them Boolean or reference Boole.

In the ACM digital library, Symbolic synthesis of digital computers from 1952 is the earliest use of the word “Boolean”. Irving S. Reed describes a computer as “a Boolean machine” and “an automatic operational filing system” in its abstract. He cites his own technical report from 1951:

Equations (1.33) and (1.35) show that the simple Boolean system, given in (1.34) may be analysed physically by a machine consisting of N clocked flip flops for the dependent variables and suitable physical devices for producing the sum and product of the various variables. Such a machine will be called the simple Boolean machine.

The best examples of simple Boolean machines known to this author are the Maddidas and (or) universal computers being built or considered by Computer Research Corporation, Northrop Aircraft Inc, Hughes Aircraft, Cal. Tech., and others. It is this author’s belief that all the electronic and digital relay computers in existence today may be interpreted as simple Boolean machines if the various elements of these machines are regarded in an appropriate manner, but this has yet to be proved.

So at least in the USA, the correlation between digital computing and Boolean logic was being explored almost as soon as the computer was invented. Though not universally: the book “The Origins of Digital Computers” edited by Brian Randell, with articles from Charles Babbage, Grace Hopper, John Mauchly, and others, doesn’t mention Boole at all. Neither does Von Neumann’s famous “first draft” report on the EDVAC.

So, second question. Why do programmers spell Boole bool? Who first decided that five characters was too many, and that four was just right?

Some early programming languages, like Lisp, don’t have a logical data type at all. Lisp uses the empty list to mean “false” and anything else to mean true. Snobol is weird (he said, surprising nobody). It also doesn’t have a logical type, conditional execution being predicated on whether an operation signals failure. So the “less than” function can return the empty string if a<b, or it can fail.

Fortran has a LOGICAL type, logically. COBOL, being designed to be illogical wherever Fortran is logical, has a level 88 data type. Simula, Algol and Pascal use the word ‘boolean’, modulo capitalisation.

ML definitely has a bool type, but did it always? I can’t see whether it was introduced in Standard ML (1980s-1990), or earlier (1973+). Nonetheless, it does appear that ML is the source of misspelled Booles.

May 03, 2019

Reading List 230 by Bruce Lawson (@brucel)

May 01, 2019

April 29, 2019

Digital Declutter by Graham Lee

I’ve been reading and listening to various books about the attention/surveillance economy, the rise of fascism in the Anglosphere and beyond, and have decided to disconnect from the daily outrage and the impotent swiping of “social” “content”. The most immediately actionable advice came from Cal Newport’s Digital Minimalism. I will therefore be undertaking a digital declutter in May.

Specifically this means:

  • no social media. In fact I haven’t been on most of them all of April, so this is already in play. By continuing it into May, I intend to do a better job of choosing things to do when I’m not hitting refresh.
  • alerts on chat apps on for close friends and family only.
  • streaming TV only when watching with other people.
  • Email once per day.
  • no RSS.
  • audiobooks only while driving.
  • Slack once per day.
  • Web browsing only when it progresses a specific work, or non-computering, task.
  • at least one walk per day, of at least half an hour, with no technology.
  • Phone permanently in Do Not Disturb mode.

It’s possible that I end up blogging more, if that’s what I start thinking of when I’m not browsing the twitters. Or less. We’ll find out over the coming weeks.

My posts for De Programmatica Ipsum are written and scheduled, so service there is not interrupted. And I’m not becoming a hermit, just digitally decluttering. Arrange Office Hours, come to Brum AI, or find me somewhere else, if you want to chat!

My flat has terrible mobile coverage. It’s okaaaay-ish in the living room and dead in the bedrooms, and it’s infuriating. You might be thinking, but Stuart! you live right in the centre of Birmingham! surely coverage in a city centre would be amazing! at which point you will get a look like this


and then I will say, yeah, it’s something to do with the buildings or concrete or something, it’s fine when you’re outside. It’s an old building. Maybe they put copper in the walls like in the Rookery or something. Anyway, whatever, it never worked, regardless of which operator you’re on. So, when I moved in, I contacted my operator, Three, and said: this sucks, do something about it. They said, sure thing, install our Wi-Fi Calling app, Three in-Touch. Which I did, and managed for two years.

I say “managed” because that app is terrible. The UI is clunky, it doesn’t handle picture messages, there’s no way to mark a message read, the phone call quality cuts out and breaks all the time, and most annoyingly when you get an SMS it shows a notification but doesn’t play a sound, so I have no idea that I got a notification1. I’ve missed loads of SMSes over the last couple of years because of that.

Anyway, the Three In-Touch2 app popped up a little dialogue box last week:

We are updating our network. From 15 May 2019, the Three inTouch app will no longer be supported. Your call history and SMS sent or received will remain visible but you will no longer be able to make calls or send or receive SMS through the app. If you delete the app, your call and SMS history will be lost. WiFi calling is already built into most handsets these days and will continue to work in any enabled handset without need of the app

Ah, thought I. WiFi calling is already built in, is it?

You see, there’s a problem with that. On iOS it’s built in. On Android, on a lot of phones, it’s built in only if you bought the phone from your carrier, which I never do. It needs some special bit of config built into the firmware for wifi calling to work.3 So I thought, well, I’m screwed then. Because Three won’t update my Z5 Compact to have whatever firmware it needs to do wifi calling, they’ve killed the (execrable, but functional) app, and I’m not buying a new phone until the phone industry stops making them the size of a football pitch.4 I got on the live chat thing with Three, who (predictably) said, yeah, we’re not sending you the firmware, pull the other one, son, it’s got bells on.5

And then, a surprise. Because they said, howsabout we send you a Three HomeSignal device? It’s a box that plugs into your wifi and then your phone sees it as a phone antenna and you get good coverage.

And my brain went, what, a femtocell? A femtocell like I asked you about when I first started having coverage problems and you swore blind didn’t exist because everyone has wifi calling now? One of those femtocells?

Having been taught to never look a gift femtocell in the mouth, though, I didn’t say anything except “yes please I’d like that, cheers very much”. And so it arrived in the post two days later. Result.

However, the user guide leaves a bit to be desired.

The Three HomeSignal user guide, which says to plug in the ethernet cable and the power, and does not at all mention that it also needs a SIM card

I do not know why the user guide completely ignores that you need to also plug a SIM card into the HomeSignal device, but it does in fact ignore that. You should not ignore it: you need to do it, otherwise it doesn’t work; you’ll get an error, which is the LED flashing red five times, which means (in the only mention of a SIM anywhere in the user guide) “SIM card problem”. At which point you think: what bloody SIM card?

More confusingly still, Three include two SIM cards in the package. One is a Pay as you Go SIM. This is not the one you want. I don’t know why the hell they put that in; fine, if you wanna sell me on your products, go for it, but you made the HomeSignal process a billion times more confusing. The SIM card that goes into the HomeSignal device is on a special green card, and it says “HomeSignal device only” on it. Put that one in the HomeSignal box. The other one, the Pay as you Go one, you should sellotape it to a brick and then send it back to Three, postage on delivery.

Once you’ve done that, it works. So, Three, if you’re listening: one bonus point for finally deciding to update the awful Three in Touch app. Minus twenty points for not having a replacement for people who didn’t buy phones from you. Plus another twenty points for offering me the Three HomeSignal femtocell which fixes my problem. Minus a little bit for the bad instructions which don’t say that I have to put a SIM card in, and then minus quite a lot more for putting two SIM cards in the box when I do finally work that out! So, on balance, this is probably about neutral; they have fixed a problem I shouldn’t have and which is partially caused by phone manufacturers, using a technically very clever solution which was confusingly explained. Business as usual in the mobile phone world, I suppose, unless you’re using an Apple phone at which point everything works really smoothly until it suddenly doesn’t. One day someone will get all of this right, surely?

  1. They have had, looking at the internet, this reported to them about six hundred billion times
  2. they don’t seem to be anything like consistent about the spelling and punctuation of the name of this app, so I don’t see why I should be either
  3. It seems like this might be fixed in very recent versions of Android? It’s not at all clear. This problem is of course exacerbated by not getting system updates unless your phone is less than a week old.
  4. I saw a bloke on Twitter the other day say rather sneerily, I bet all the people who are mocking folding phones like the Samsung Fold now also mocked the Samsung Note until they realised they like big screens after all. No, sneery bloke. I mock the folding phones now for being a (terribly clever technical) solution in search of a problem, I hated the huge Note when it first came out, and I hate all huge phones now. For god’s sake, phone industry, make a small flagship phone. Just one. I suppose it’ll have to be an Xperia Compact again, but it’d be good if there were competition for it.
  5. They also said that they’re rebuilding the Three inTouch app to be good and work with 5G. Apparently. At some unknown point in the future. I carefully did not point out that if you’re building a replacement for something then there needs to be overlap between the two things rather than an interregnum during which every user is screwed, because that information needs to go to their project manager rather than their support team, but to be clear: to whichever project manager thinks this is an acceptable way to do deployments, I hope someone hides snakes in your car.

April 26, 2019

Reading List 229 by Bruce Lawson (@brucel)

April 22, 2019

Nokē UI Design

A smart security platform for your home

After completing my work with Prempoint I was put in touch with David Gengler, Co-founder and President of Nokē. He and his team had a growing product in need of some UI design and UX love.

Nokē is a powerful bluetooth lock platform with integrated, smart locking access control, automated key management and audit trails. Nokē is an essential part of any home and business lock management system.


My first task was to learn all about how their locks and platforms worked. I was sent some Bluetooth locks and got busy testing and working out how I could improve the UX of their platform.

The next task was to create some high-fidelity wireframes and talk David and his team through my ideas. I presented my wireframes via a screen share and made a list of feedback, which led to a Q&A session further exploring the possibilities and features.


I then took some time away from the team and designed some fresh UI designs for both their iOS app and Web App portal.

The designs were well received, which led me to start working with the team’s developers, making sure every detail was covered.


This project was a huge step forward for me and my career, as I was working not only on a cross-platform product but also on a physical product with its own limitations and possibilities.

The end product is a clean, smart, easy-to-use application that I’m very proud of. What’s even more exciting is that my design will be the basis of other areas of their business as they continue to grow and release new products.

“Working with Mike was a real pleasure for my team and me. His work is top notch and he moves quickly through understanding a project to initial designs and incorporates suggestions quickly and efficiently throughout the design process. We will continue to work with Mike on future projects and have already recommended him to others.”

David Gengler. Co-Founder & President Nokē

To find out more about Nokē click here

If you need UI/UX Design make sure you get in touch!

Please note final production designs may differ from those shown.


April 20, 2019

My SEO Story by Mike Hince (@zer0mike)

My SEO Story

A while back I wrote a blog about my 10 years as a freelancer and got some great reactions from my Twitter followers. This is the other side of the story (my SEO story!) on how Google helped me get to where I am today.

It started years ago when I was made redundant and began trading as Sans Deputy Creative, a name that everyone I ever told had trouble with. After a few years I decided to drop the name and went with a website that reflected me and used my name in the title.

I’m not very technical so I knew the new website had to be a solution I could update easily and landed on a WordPress theme called Goodspace. It ticked all the boxes and was exactly what I needed to get my portfolio live quickly.

Over the years the theme has been customised heavily not only by myself but some really talented developers. Sebastian Lenton, Day Media and Tier One have all helped me get my website to where it is today.

At the time of launching I knew I wanted to design apps and had been designing my own products for a number of years and posting them on Dribbble. I filled my portfolio with a few live products that I’d launched, some websites I’ve designed for clients and a good handful of my own app concepts.

I made use of the brilliant plugin Yoast – making sure each page had a green light for the SEO and BOOM I was off.

After a few months, remarkably, Google liked my site and boosted me high up the rankings for search terms like “Freelance UI Designer”, “Freelance UX Designer” and “Freelance App Designer”, and I started getting enquiries! In fact they poured in: in 2014/15 I was tracking them and stopped counting at 500 for the year.

That’s when it started to go wrong.

I got busy, I neglected my website and over the course of two very busy years the visits to my site dropped massively. Now I don’t think it’s as simple as Google falling out of love with my site, I think there are many factors.

One. Google changes its search algorithm all the time, and if you don’t know what they’ve tweaked you may suffer ranking drops.

Two. There was more competition: more talented designers started covering UI/UX, and the search results were stretched thinner.

Three. Location became a factor. Google favours sites that have location tags; I kept mine vague on purpose to attract remote clients.

Four. Most importantly… I stopped posting, so Google probably didn’t think my website was active.

So what did I do?

Firstly I contacted Zack Neary-Hayes and purchased a site audit package from him. He quickly identified areas where my site was struggling, including many fine details that I won’t go into here (you should purchase one from him instead!). He provided me with an action list of things to improve, and it was worth every penny.

I got to work fixing as many of the issues as I could myself, for the more technical problems I reached out to Mahtab at TierOne. He fixed errors and changed areas of my site that I’d redesigned to better attract SEO rankings (more internal linking to name one change).

I started blogging again, I posted new work to my portfolio section and shared my site on social media where possible (without spamming people).

I resubmitted my sitemap and then played the waiting game.

It’s taken about 6–9 months for Google to start picking up my website again, and as of today it’s showing signs of climbing to its former glory once more.

I’ve even started getting new enquiries again complimenting me on my SEO, so it must be working!

Moral of the story: keep posting! Don’t let your website go idle no matter how busy you are.

Thanks for reading!

Photo by Tom Grimbert on Unsplash
Photo by William Iven on Unsplash
Photo by Fancycrave on Unsplash


April 19, 2019

Fleet Management Software

AutoGear is a successful technology business based in Oslo, Norway. They provide a piece of technology that fits into your vehicle and tracks your mileage for personal and work usage. In Norway, laws prevent work vehicles from being used for personal reasons, and AutoGear helps businesses make sure those laws are followed.


AutoGear approached Mike to take creative lead for UI – UX design of their web dashboards.


The project was an extremely complicated one and I’m thrilled with the results.

complex UI design

Mobile Route Map UI

The post AutoGear UX – UI Design appeared first on .

April 18, 2019

I run a company, a mission-driven software consultancy that aims to make it easier and faster to make high-quality software that preserves privacy and freedom. On the homepage you’ll find Research Watch, where I talk about research papers I read. For example, the most recent article is Runtime verification in Erlang by using contracts, which was presented at a conference last year. Articles from the last few decades are discussed: most are from the last couple of years, and nothing yet is older than I am.

At de Programmatica Ipsum, I write on “individuals, interactions, and the true valuation of the things on the left” with Adrian Kosmaczewski and a glorious feast of guest writers. The most recent issue was on work, the upcoming issue is on programming history. You can subscribe or buy our back-catalogue to read all the issues.

Anyway, those are other places where you might want to read my writing. If people are interested I could publish their feeds here, but you may as well just check each out yourself :).

OneZone UI Design by Mike Hince (@zer0mike)

OneZone UI Design

A new way to discover your city

OneZone was a client of top-tier development studio Kanso, and as we’d worked together on various iPhone projects before, CEO Robin called me up and asked me to be involved in the UX and UI design process. Knowing the quality of work Kanso produce, I was happy to jump onboard.


OneZone founder Natasha Zone had a strong vision for what she wanted and had already designed a prototype herself which was the basis of my work on this project.

Myself, the team at Kanso and Natasha all got together in Cardiff to workshop through her ideas and formulate a plan of action. This included reviewing the prototype, user journeys, persona review and a lengthy lean canvas discussion.


Natasha has a great eye for detail and an excellent sense of clean design, using white space to its advantage.

This played to my strengths, and I was happy to work alongside her to create the screens shown here.

I was involved in refining the UX flow and adding the final touches to the UI Design.


This project was an interesting challenge and I’m really happy with the end results.

All that’s left for me to say is to go download her app on iOS or Android!

If you like this design please share and if you’re in the market to book a freelance UI designer don’t hesitate to contact me.


Half a bee by Graham Lee

When you’re writing Python tutorials, you have to use Monty Python references. It’s the law. On the 40th anniversary of the release of Monty Python’s Life of Brian, I wanted to share this example that I made for collections.defaultdict that doesn’t fit in the tutorial I’m writing. It comes as an homage to the single “Eric the Half a Bee”.

from collections import defaultdict

class HalfABee:
    def __init__(self):
        self.is_a_bee = False
    def value(self):
        self.is_a_bee = not self.is_a_bee
        return "Is a bee" if self.is_a_bee else "Not a bee"

>>> eric = defaultdict(HalfABee().value)
>>> print(eric['La di dee'])
Is a bee
>>> del eric['La di dee']  # defaultdict stores the first result, so evict it
>>> print(eric['La di dee'])
Not a bee

Dictionaries that can return different values for the same key are a fine example of Job Security-Driven Development.
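For contrast, it’s worth remembering why the trick needs the key to go missing at all: defaultdict inserts the factory’s result into the dictionary on the first miss, so in everyday use repeated lookups of one key are perfectly stable. A minimal sketch of that ordinary caching behaviour, using the classic counting idiom:

```python
from collections import defaultdict

# default_factory runs only when a key is missing; its result is then
# inserted, so every later lookup of that key sees the stored value.
counts = defaultdict(int)
for word in ["bee", "half", "bee"]:
    counts[word] += 1

print(counts["bee"])   # 2 — the stored value, updated in place
print(counts["half"])  # 1
```

No job security here: the same key reliably returns the same value unless you delete it and force the factory to run again.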
