Last updated: January 19, 2021 11:22 AM (All times are UTC.)

January 18, 2021

If I could string a thread through my childhood, the pins that hold the thread in place would be all the times I hit my head.

Me and my best friend (at the time) used to play a game called Dizzy Egg. It was a simple game. The object was to spin around as many times as we could and then try not to fall over. I usually fell over, and this usually meant hitting my head on the unforgiving concrete.

In the same playground, I ran—for no particular reason—head first into the white painted wall of one of the school buildings. Luckily, it stayed white.

I was part of a weekend football club. Football Fun. A better name for it might have been “Football Keeps Hitting Me In The Face.” I’m not sure what it was about that football or my face, but the two were inseparable. You couldn’t keep them apart.

I remember one final and dramatic incident. As I ran through a metal gate, the gate swung closed and tried to run through me. One minute we were running and chasing and laughing. The next I was on the floor, bleeding a lot and saying some words that weren’t suitable for the playground.

That one needed a trip to the hospital and I still have the scar.

January 15, 2021

Reading List 269 by Bruce Lawson (@brucel)

January 14, 2021

Here’s what I’ve been working on (with others, of course) since February.

January 12, 2021

I used to write my assertions like this:

assert user.active?
refute user.inactive?

Then I joined a team where I was encouraged to write this:

assert_equal true, user.active?
assert_equal false, user.inactive?

What? That doesn’t look very nice. That doesn’t feel very Ruby. It’s less elegant!

Here’s the thing, though: your unit tests aren’t about being elegant. They’re about guaranteeing correctness. You open the door to some insidious bugs when you test truthiness instead of truth.

  • You might perform an assignment rather than comparing equality.
def active?
  # Should be status == :active
  status = :active
end
  • You might check the presence of an array, rather than its length.
def has_users?
  # Should be user_list.any?
  user_list
end

def user_list
  []
end

Over time, I’ve gotten used to it. This style still chafes, but not as much as the bugs caused by returning the wrong value from a predicate method.

January 10, 2021

OpenUK Honours by Stuart Langridge (@sil)

So, I was awarded a medal.

OpenUK, who are a non-profit organisation supporting open source software, hardware, and data, and are run by Amanda Brock, have published the honours list for 2021 of what they call “100 top influencers across the UK’s open technology communities”. One of them is me, which is rather nice. One’s not supposed to blow one’s own trumpet at a time like this, but to borrow a line from Edmund Blackadder it’s nice to let people know that you have a trumpet.

There are a bunch of names on this list that I suspect anyone in a position to read this might recognise. Andrew Wafaa at ARM, Neil McGovern of GNOME, Ben Everard the journalist and Chris Lamb the DPL and Jonathan Riddell at KDE. Jeni Tennison and Jimmy Wales and Simon Wardley. There are people I’ve worked with or spoken alongside or had a pint with or all of those things — Mark Shuttleworth, Rob McQueen, Simon Phipps, Michael Meeks. And those I know as friends, which makes them doubly worthy: Alan Pope, Laura Czajkowski, Dave Walker, Joe Ressington, Martin Wimpress. And down near the bottom of the alphabetical list, there’s me, slotted in between Terence Eden and Sir Tim Berners-Lee. I’ll take that position and those neighbours, thank you very much, that’s lovely.

I like working on open source things. It’s been a strange quarter-of-a-century, and my views have changed a lot in that time, but I’m typing this right now on an open source desktop and you’re probably viewing it in an open source web rendering engine. Earlier this very week Alan Pope suggested an app idea to me and two days later we’d made Hushboard. It’s a trivial app, but the process of having made it is sorta emblematic in my head — I really like that we can go from idea to published Ubuntu app in a couple of days, and it’s all open-source while I’m doing it. I like that I got to go and have a curry with Colin Watson a little while ago, the bloke who introduced me to and inspired me with free software all those years ago, and he’s still doing it and inspiring me and I’m still doing it too. I crossed over some sort of Rubicon relatively recently where I’ve been doing open source for more of my life than I haven’t been doing it. I like that as well.

There are a lot of problems with the open source community. I spoke about divisiveness over “distros” in Linux a while back. It’s still not clear how to make open source software financially sustainable for developers of it. The open source development community is distinctly unwelcoming at best and actively harassing and toxic at worst to a lot of people who don’t look like me, because they don’t look like me. There’s way too much of a culture of opposing popularity because it is popularity and we don’t know how to not be underdogs who reflexively bite at the cool kids. Startups take venture capital and make a billion dollars when the bottom 90% of their stack is open source that they didn’t write, and then give none of it back. Products built with open source, especially on the web, assume (to use Bruce Lawson’s excellent phrasing) that you’re on the Wealthy Western Web. The list goes on and on and on and these are only the first few things on it. To the extent that I have any influence as one of the one hundred top influencers in open source in the UK, those are the sort of things I’d like to see change. I don’t know whether having a medal helps with that, but last year, 2020, was an extremely tough year for almost everyone. 2021 has started even worse: we’ve still got a pandemic, the fascism has gone from ten to eleven, and none of the problems I mentioned are close to being fixed. But I’m on a list with Tim Berners-Lee, so I feel a little bit warmer than I did. Thank you for that, OpenUK. I’ll try to share the warmth with others.

Yr hmbl crspndnt, wearing his medal

January 07, 2021

  1. Seeing in the new year in Barcelona
  2. This gift from my parents: a book telling the story we were told as kids, written by my dad and illustrated by my mum
  3. This gift from my wife: a watercolour painting that hangs in the home office
  4. Veganuary and Dry January
  5. Getting a puppy
  6. Finally getting a full night’s sleep after getting said puppy
  7. Zoom quizzes with family and friends
  8. Having a year theme (mine was the year of calm)
  9. The Waking Up meditation app
  10. Making carrot cake for the wife’s lockdown birthday
  11. My mastermind buddies Andy and Blair
  12. The Bosh cookbooks
  13. Celebrating my sister’s 30th with family
  14. Formula 1 putting on a fantastic season despite everything
  15. Not doomscrolling
  16. The inline-block community
  17. Playing Overcooked 2 with the wife
  18. Making jalapeno poppers with fresh allotment chillies
  19. The Mandalorian
  20. Hot summer days
  21. Cutting my own hair for the first time
  22. Islay whisky masterclass at HTFW with Tommy and Jonny
  23. Discovering Beck’s back catalogue
  24. The wife’s new Alfa Romeo Giulietta
  25. Listening to nerdy discussions on Accidental Tech Podcast
  26. Tommy Banks’ Food Box
  27. Switching to Nova, Panic’s new code editor
  28. The Economist’s Daily Espresso
  29. Upgrading to a Kindle Oasis
  30. Reading Search For A Whisky Bothie with a dram
  31. Watching James Acaster: Cold Lasagne Hate Myself 1999
  32. Rediscovering Metallica’s S&M
  33. Receiving an email from Derek Sivers about his new book and insta-buying
  34. Watching Formula 1 Esports
  35. Journalling in the Theme System Journal
  36. Spontaneous trips to our local bakery to buy bagels and donuts
  37. Long walks without headphones
  38. Book club calls with Andy, Matthew and Eddie
  39. The Off Menu Podcast
  40. Making a video of birthday messages from friends and family for the wife’s lockdown birthday
  41. Messing around on this website
  42. Dave Grohl and Nandi’s drum battle
  43. Eating fish and chips by the sea
  44. Watching SpaceX launch astronauts into space
  45. Using good quality tools
  46. Cotswold Cream Liqueur
  47. Last of Us 2
  48. Watching Great British Menu
  49. Working on my 16by9 rebrand with Jordan
  50. iOS 14’s widgets
  51. Long phone calls with family and friends
  52. Devin Townsend
  53. Putting up new wall art in the office
  54. Learning about zettelkasten
  55. Making great coffee with a V60
  56. Sapiens by Yuval Noah Harari
  57. Reading Monevator’s weekly commentary over a cuppa every Saturday morning
  58. Getting a dishwasher for the first time
  59. Playing online chess with Dad
  60. Michael McIntyre’s Netflix special
  61. Making and eating a Friday night curry
  62. Boris Johnson memes
  63. Lazy weekends
  64. Getting into rum
  65. Holidaying in St. Ives
  66. Dog walks on the beach
  67. Dining at the Oyster Club
  68. Watching the Silverstone Grand Prix at Caffeine and Machine
  69. Being offered a well paid full-time position and turning it down due to being happy where I am
  70. Reading How to be an Antiracist and trying to be more antiracist
  71. Afterlife Season 2
  72. Playing Firewatch for the first time
  73. The wife’s apple crumble made with apples from the garden
  74. Not doing much DIY
  75. Taking photos on my iPhone
  76. Making home-made pizza
  77. Lockdown beer box from local brewery Purity
  78. CGP Grey’s Spaceship You video
  79. Sum by David Eagleman
  80. Boarding the Tesla investing train
  81. Coconut milk chai lattes
  82. Working from my home office
  83. Matt Berninger’s Serpentine Prison
  84. Watching modern F1 cars race on old circuits such as Mugello and Imola
  85. Philps Cornish pasties
  86. Sam Harris’ voice of reason in an often unreasonable world
  87. MKBHD’s tech reviews
  88. Making my mum’s potato and leek soup recipe that was my favourite as a kid
  89. Nailing this year’s vegetarian Christmas dinner
  90. My carrot and orange birthday cake
  91. Dog cuddles
  92. The Dithering podcast
  93. Biffy Clyro’s live performance of ‘A Celebration Of Endings’
  94. Binge watching Ben Fogle: New Lives In The Wild
  95. Weekend mornings spent drinking coffee and reading RSS feeds
  96. Dog walks in the woods
  97. The sound of paired HomePods in the home office
  98. Mum’s curry
  99. Watching birds on the newly installed bird feeder in the garden
  100. Seeing the back of 2020

gets is seen in basically every introductory Ruby tutorial, but those tutorials rarely tell the whole story.

You’ll be told to write something like this:

#!/usr/bin/env ruby

puts "What is your name?"
your_name = gets.chomp
puts "Hi, #{your_name}!"

Confusingly, if this is in a script that takes additional command line arguments, you may not see “Hi, Janet!”

If you execute ./gets.rb 123 you will pretty quickly be greeted by the following error:

./gets.rb:4:in `gets': No such file or directory @ rb_sysopen - 123 (Errno::ENOENT)

The tutorial didn’t warn you about this. The tutorial is giving you a reduced view of things that may, if you’re like me, leave you scratching your head several years later.

gets doesn’t just read user input from $stdin. gets refers to Kernel#gets, and it behaves like so:

Returns (and assigns to $_) the next line from the list of files in ARGV (or $*), or from standard input if no files are present on the command line.

If you really, truly want to prompt the user for their input, you can call gets on $stdin directly. And who wouldn’t, with a user like you?

your_name = $stdin.gets.chomp
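
If you want to keep plain gets and still accept command line arguments, one workaround (a sketch, not from any tutorial) is to stash and clear ARGV before prompting, since Kernel#gets only falls back to standard input when ARGV is empty:

#!/usr/bin/env ruby

# Stash the command line arguments, then empty ARGV...
args = ARGV.dup
ARGV.clear

# ...so Kernel#gets falls back to reading standard input.
puts "What is your name?"
your_name = gets.chomp
puts "Hi, #{your_name}! (arguments were: #{args.inspect})"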

January 04, 2021

My coworker showed me something cool today. Like a lot of developers, there are certain machines that I find myself SSHing into repeatedly. Not all of them are directly accessible from the network I’m on. Some of them require me to connect via a jump host.

Ordinarily, I manually create a tunnel from a port on my local machine to my final destination via the jump host. This works, but it’s a bunch of steps to remember, especially if you’re creating your SSH tunnels by hand on the command line. Especially if you’re trying to remember a bunch of IP addresses.

Apparently, you can just add hosts to your ~/.ssh/config.

Host jump-host
    Hostname x.x.x.x
    IdentityFile /path/to/key/proxy.pem
    User ubuntu

Host destination
    Hostname y.y.y.y
    IdentityFile /path/to/key/destination.pem
    User ubuntu
    ProxyJump jump-host

With this in place, ssh destination gets you there with zero fuss.
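
If you only need the hop once and would rather not touch the config file, newer versions of OpenSSH (7.3 onwards, if memory serves) also accept the jump host on the command line with the -J flag:

ssh -J ubuntu@x.x.x.x ubuntu@y.y.y.y

The config approach is still nicer for hosts you visit every day, though, since it also remembers the usernames and key files for you.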

January 03, 2021

I’ve been reading On Writing Well by William Zinsser. One of the early instructions he gives is to cut out filler words.

Well, I have a confession to make. I’m particularly guilty of one habit that I just can’t seem to shake.

I start a lot of sentences with “well,” and, well, it’s something I’ve been trying to cut down on.

Out of interest, I just ran the following search in my employer’s Slack:

from:@james Well,

The results were not pleasing. I don’t want to say how many times I’ve done it within Slack’s recent memory, but I didn’t dare venture past page 1 of 36.

Well, that just won’t do.

I don’t even know why I do it. It’s not a hedging word, designed to protect me from any criticism I might incur from taking a firm position, as it does nothing to minimise the strength of the statement that follows it. It’s just a noise I make, involuntarily, like um or ah.

It’s five characters (and a space) I can do without.

January 02, 2021

Novel bean incoming by Graham Lee

You may remember in July I updated the open source Bean word processor to work with then-latest Xcode and macOS. Over the last couple of days I’ve added iCloud Drive support (obviously only if the app is associated with an App Store ID, but everyone gets the autosave changes), and made sure it works on Big Sur and Apple Silicon.

Alongside this, a few little internal housekeeping changes: there’s now a unit test target, the app uses base localisation, and for each file I had to edit, I cleaned up some warnings.

Developers can try this new version out from source. I haven’t created a new build yet, because I’m still in the process of removing James Hoover’s update-checking code which would replace Bean with the proprietary version from his website. I’ll create and host a Sparkle appcast for automatic updates before doing a binary release, which will support macOS 10.6-11.1.

I use Inline Class if a class is no longer pulling its weight and shouldn’t be around any more. Often this is the result of refactoring that moves other responsibilities out of the class so there is little left.

I recently finished reading Refactoring: Ruby Edition, and while there was a lot of talk about extracting logic into single purpose classes, there was also mention of removing classes that weren’t deemed to be “pulling their weight.”

This left me with a question. How little is too little? I’m assuming that when the author says there is little left in the class, they don’t mean there’s nothing left, so what sort of classes might exist that don’t deserve the mental space they occupy?

The example given in the book is that of a TelephoneNumber class which is quite close to simply being a value object, but without the immutability or comparison logic. Its only role is to put an area code and a phone number into a nicely formatted string. This logic is pulled into the Person class, to whom the phone number belongs.
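
As a rough sketch of what that inlining might look like (the Ruby below is illustrative, not lifted from the book):

# Before: TelephoneNumber barely pulls its weight.
class TelephoneNumber
  def initialize(area_code, number)
    @area_code = area_code
    @number = number
  end

  def to_s
    "(#{@area_code}) #{@number}"
  end
end

class Person
  def initialize(name, telephone_number)
    @name = name
    @telephone_number = telephone_number
  end

  def telephone_number
    @telephone_number.to_s
  end
end

# After: the formatting logic is inlined into Person and the extra class goes away.
class Person
  def initialize(name, area_code, number)
    @name = name
    @area_code = area_code
    @number = number
  end

  def telephone_number
    "(#{@area_code}) #{@number}"
  end
end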

So say you’ve got class B that you’re considering merging into class A. Some good reasons to make this merge might be:

  • B only operates on instances of A or its fields.
  • B is only referenced by A.
  • You struggle to differentiate between the domain concepts modeled by A and B.

I think it’s less a question of whether a class has enough responsibility, and more a question of whether a class has enough responsibility that rightfully belongs to it.

December 31, 2020

After a relatively lively month or two at work (during which we still managed to get a pretty major feature and some nice quality of life stuff ready to ship), it felt like time to disconnect a bit.

That’s why other than writing some brief notes on The Adapter Pattern, I spent the bulk of this month deliberately not doing much that could be construed as “work” outside of my regular work hours.

Instead, I spent a lot of time getting things ready for the first Christmas in which I visited neither my own family nor my in-laws, spent a lot of time sitting on the sofa reading, and just generally relaxed.

The new year is knocking, so it’s time to shake the dust off and get back to it.

Things I Read

  • Encapsulating Ruby on Rails views
  • How to avoid alert fatigue
  • A Promised Land - I’m about half way through this so far, and it’s an interesting behind-the-scenes view into Obama’s ascent to the presidency and some of the challenges faced up to and including the financial crisis that marked the beginning of his first term.
  • So Good They Can’t Ignore You - Cal Newport (author of Deep Work) presents the point of view that what he calls the Passion Hypothesis, that a fulfilling career comes from following your dreams, is at best misleading and at worst dangerous. Instead, he describes how work satisfaction comes from building rare and valuable skills, and strategically cashing in those skills for greater autonomy and purpose in one’s work.
  • A Thousand Splendid Suns - Set against the last few decades of Afghanistan’s history, this heartbreaking novel tells the story of the friendship between two women as their lives intersect.

December 28, 2020

This is the third of a series of posts on Jenkins and Unity.

In this post I’ll outline how I set up Jenkins to make reliable repeatable Unity builds, using a pipelines script (also called a jenkinsfile) that’s stored inside source control. I’ll outline the plugins I used and the reasons behind some of my choices. This is not a tutorial in Jenkins or pipeline scripts. It’s more of a tour.

I’m not an expert in Jenkins – most of this is pieced together from the Pipeline docs and Google. It may be that there are better ways to achieve the same results – I’d be keen to hear about them in the comments!

This post just deals with the jenkinsfile. In a future post I’ll deal with how I configured Jenkins to use it.

I am using Jenkins v2.257.

Why Pipelines?

You can set up Jenkins to do almost anything you want via the web interface. This is ok for experimenting, but it has drawbacks:

  • config changes aren’t easy to document
  • nor can they be rolled back
  • it’s hard to share common build setups between related Jenkins jobs

All these go away if we move to a pipelines script stored in a file (commonly called “the jenkinsfile”) within the project’s source control.

Supported jobs

In Jenkins, a “job” is a particular way of building your project. This one jenkinsfile will run multiple jobs for us:

  • VCS Push: every time anyone pushes to SVN, Jenkins will pull latest, run some quick automated tests, and make a build. If any step fails, Jenkins reports a failure. The build artifact is thrown away.
  • Health Check: on a schedule, multiple times a day, Jenkins will do some more time-consuming automated tests, and make a build. If any step fails, Jenkins reports a failure. The build artifact is thrown away.
  • Deploy: at 6am and 12pm, do everything Health Check does, but make the result available to QA. They’ll then smoke-test the build further and make it available to everyone else if it’s good.

The Health Check build exists because there’s a big gap between our 12pm build and the next day’s 6am build. If someone commits something at 1pm that would fail the slow automated tests, we might not know until the next morning, and the QA build will already be late. Now the Health Check build runs multiple times in the afternoon, and we can fix stuff before we log off for the day.

File structure

When you’re googling for pipelines info, you’ll discover it’s a concept that has undergone a few revisions. What we’re talking about here is a “declarative pipeline” (as opposed to the older “scripted pipeline”), which is (mostly) composed of a jenkinsfile written in a subset of the Groovy language, which runs on the JVM.

A declarative pipelines script is roughly defined as a series of “stages”, where each stage is a series of commands, or “steps”. Stages can be hierarchical and nest and run in parallel, but for our purposes, we’re going to stay pretty linear and flat. If any stage fails, the subsequent stages won’t run.

We’re going to have six stages:

  • Clean – we’ve all had corrupted library files, and sometimes you want to make really sure you’re starting from scratch.
  • Prewarm – getting things ready so we can start the builds
  • Editmode Tests – fast declarative-style tests running under the Unity Test Framework
  • Playmode Tests – slower tests that require running the game. See my recent post about running playmode tests on Jenkins.
  • Build – we always care if this succeeds, but depending on the job we may throw away any produced artifact. We want to capture those #if UNITY_EDITOR errors!
  • Deploy – putting the build artifact somewhere useful. This is somewhat project-dependent and, at the moment, confidential, so I will only give the briefest outline of this step.

Metadata

You’ll find the full jenkinsfile at the bottom of the page, but let’s break it down by starting at the top.

The pipeline itself doesn’t start until the pipeline keyword, but since this is a subset of Groovy, we can define constants at the top.

UNITY_PATH = "C:\\Program Files\\Unity\\Hub\\Editor\\2019.4.5f1\\Editor\\Unity.exe"

There’s probably a better service-orientated way of installing and locating Unity, but in this case we chose to just manually create a windows machine on ec2, remote desktop in, and install Unity. Ship it.

Next we start the pipeline, and define parameters. These appear as dropdowns when starting a job in Jenkins, and you access them further down the script using params.PARAMETER_NAME:

pipeline {
    parameters {
        choice(name: 'TYPE', choices: ['Debug', 'Release', 'Publisher'], description: 'Do you want cheats or speed?')
        choice(name: 'BUILD', choices: ['Fast', 'Deploy', 'HealthCheck'], description: 'Fast builds run minimal tests and make a build. HealthCheck builds run more automated tests and are slower. Deploy builds are HealthChecks + a deploy.')
        booleanParam(name: 'CLEAN', defaultValue: false, description: 'Tick to removed cached files - will take an eon')
        booleanParam(name: 'SKIP_PLAYMODE_TESTS', defaultValue: false, description: 'In an emergency, to allow Deploy builds to work with a failing playmode test')
    }

Here’s how they appear in Jenkins in the Build With Parameters tab:

More details on each param:

  • Type: we’ll pass this option directly to our Unity build function, which will set various scripting defines as required.
  • Build: this is used only within the jenkinsfile, to switch different parts of the pipeline in and out. It correlates with the job types mentioned above.
  • Clean: sometimes you gotta nuke the site from orbit
  • Skip Playmode Tests: a nod to practicality – I love automated tests, but very occasionally you know a test failure is manageable and you need a build right now. In practice our team rarely uses this option.

Here follows some boilerplate for Jenkins to understand how to run our job:

agent {
    node {
        label "My Project"
        // force everyone to the same space, so we can share a library file.
        customWorkspace 'workspace\\MyProject'
    }
}
options {
    timestamps()
    // as a failsafe. our builds tend to be around the 15min mark, so 45 would be excessive.
    timeout(time: 45, unit: 'MINUTES')
}

The node block is about finding an ec2 instance to run the job on. We’ll deal with ec2 setup in a future post. The customWorkspace setting is a cost-saving measure: on ec2, the size of persistent SSD storage is a significant part of our costs. We could save money by switching to spinning rust, but we want the build speed of an SSD. Instead, we’ll try to keep SSD size down by not having multiple versions of the same project checked out all over the drive. In practice, we mostly only work in trunk anyway, and when we build another branch it’s not massively divergent. We may revisit this over the course of the project.

Clean

Our first stage! It’s pretty simple. It only runs if the Clean parameter has been set, and it just runs some dos commands to clean out the library and temp folders:

stages {
    stage('Clean') {
        when {
            expression { return params.CLEAN }
        }
        steps {
            bat "if exist Library (rmdir Library /s /q)"
            bat "if exist Temp (rmdir Temp /s /q)"
        }
    }

(I’m surprised that using the boolean Clean param in a when block isn’t easier? I may have missed some better syntax)

Prewarm

Prewarm helps set the stage for the coming attractions. We’re going to use some script blocks here to drop into the old scripted pipeline format, which lets us do some more complex logic.

The first thing we want to do is figure out what branch we’re on. Jenkins will always run your pipeline with some predefined environment variables, including some which seem to imply they’ll contain your branch name, but try as I might they didn’t seem to work for us. Maybe it’s because we’re using SVN? So I had to figure it out for myself:

stage('Prewarm') {
    steps {
        script {
            // not easy to get jenkins to tell us the current branch! none of the built-in envs seem to work?
            // let's just ask svn directly.
            def get_branch_script = 'svn info | select-string "Relative URL: \\^\\/(.*)" | %{\$_.Matches.Groups[1].Value}'
            env.R7_SVN_BRANCH = powershell(returnStdout: true, script:get_branch_script).trim()
        }

We’ll call out to PowerShell because, urgh, grepping in DOS is esoteric. We’ll store the result in a new env variable, which will let us use this info in future stages.

Next we’ll set up the build name:

buildName "${BUILD_NUMBER} - ${TYPE}@${env.R7_SVN_BRANCH}/${SVN_REVISION}"
script {
    if (params.BUILD == 'Deploy') {
        buildName "${env.BUILD_DISPLAY_NAME} ^Deploy"
    }
    if (params.BUILD == 'HealthCheck') {
        buildName "Health Check of ${TYPE}@${env.R7_SVN_BRANCH}/${SVN_REVISION}"
    }
    if (params.BUILD == 'HealthCheck' || params.BUILD == 'Deploy') {
        // let's warn that a deploy build is in progress
        slackSend color: 'good',
            message: ":hourglass: build ${env.BUILD_DISPLAY_NAME} STARTED (<${env.BUILD_URL}|Open>)",
            channel: "project_builds"
    }
}

The buildName command lets you set your build name, and env.BUILD_DISPLAY_NAME contains the current version of that. The default Jenkins build name is just “#99” or whatever, which is less than helpful. Here, we’ll make sure it’s of the format “JenkinsBuildNumber – Type@Branch/Changelist [^Deploy]”. It’ll then be obvious at a glance in both the Jenkins dash and slack notifications what’s building and why.

We also send a slack notification of health check and deploy builds, since it’s useful to know they’ve started. It gives people a good sense of whether their commits have made it into the build or not. More on notifications below.

Next some more housekeeping for the automated tests, which communicate with Jenkins via xml files:

// clean tests
bat "if exist *tests.xml (del *tests.xml)"

Finally, we’ll open Unity once and close it. One persistent problem with Unity in automated systems is serialisation errors from out-of-date code working with new data. For instance, let’s assume you’ve got a bunch of existing scriptable assets, and your latest commit refactors them. On the build server, Unity will open, validate the assets with the pre-refactor code that it has from the last run, throw some errors because nothing looks right, then rebuild the code. Subsequent launches will succeed because both the data and the code are in sync. So, to keep those spurious errors out of real build logs, we’ll do this kind of ghost-open:

retry(count: 2) {
    bat "\"${UNITY_PATH}\" -nographics -buildTarget Win64 -quit -batchmode -projectPath . -executeMethod MyBuildFunction -MyBuildType \"${params.TYPE}\" -MyTestOnly -logFile"
}

This is the first time we’ve seen the Jenkinsfile talk to Unity! We’ll explain more in the next section, but just pretend you understand it for now. The important part is -MyTestOnly, which tells our build function to only set script defines, recompile, and quit.

We wrap the whole thing in a retry block because of a side effect of building multiple branches in one workspace. Sometimes, we get a “library corrupted” failure when switching. Running a second time makes it go away – no explicit Clean required. Lots of getting Unity running on Jenkins is just experimenting and making do with what works!

You also see some examples of Groovy’s string interpolation. I admit I’m no expert, but there seem to be about a dozen ways of doing string interp in Groovy, and not all approaches work in all locations. If one didn’t work, I went on to the next, and what you see here is the one that works here.

Talking to Unity

We need to convince Unity to do what we want, and we want any failures to produce useful output in the Jenkins dashboard. You can find more in the Unity docs but I found the best way to get output was to have -logFile last, with no path set.

To convince Unity to do what we want, we use the -executeMethod parameter. That will call a static c# function of your choice. How to make builds from within Unity is outside the scope of this blog post.

Here’s our next few stages, and the various ways they call to Unity:

stage ('Editmode Tests') {
    steps {
        bat "\"${UNITY_PATH}\" -nographics -batchmode -projectPath . -runTests -testResults editmodetests.xml -testPlatform editmode -logFile"
    }
}
stage ('Playmode Tests') {
    when {
        expression {
            return (params.BUILD == 'Deploy' || params.BUILD == 'HealthCheck') && !params.SKIP_PLAYMODE_TESTS
        }
    }
    steps {
        // no -nographics on playmode tests. they don't log right now? which is weird cuz the editmode tests do with almost the same setup.
        bat "\"${UNITY_PATH}\" -batchmode -projectPath . -runTests -testResults playmodetests.xml -testPlatform playmode -testCategory \"BuildServer\" -logFile"
    }
}
stage ('Build') {
    steps {
        bat "\"${UNITY_PATH}\" -nographics -buildTarget Win64 -quit -batchmode -projectPath . -executeMethod MyBuildFunction  -MyBuildType \"${params.TYPE}\" -logFile"
    }
}

Deployment

This is project and platform specific, so I won’t go into details, but let’s assume you’re zipping or packaging a build folder and sending it somewhere.

Here we’d be able to use the branch environment variables to maybe choose a destination folder. We’re also able to reuse the build name environment variables. We created both of those in Prewarm.

stage ('Deploy') {
    when {
        expression { return params.BUILD == 'Deploy' }
    }
    steps {
        // ... how to get a build to your platform of choice ...
        slackSend color: 'good', message: ":ship: build ${env.BUILD_DISPLAY_NAME} DEPLOYED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
    }
}

We also post that the build has been deployed. More on notifications below.

Notifications and wrap-up

The post section of the jenkinsfile contains blocks that will run after the main job, in different circumstances. We mostly use them to report progress to slack:

post {
    always {
        nunit testResultsPattern: '*tests.xml'
    }
    success {
        script {
            if (params.BUILD == 'HealthCheck') {
                slackSend color: 'good',
                    message: ":green_heart: build ${env.BUILD_DISPLAY_NAME} SUCCEEDED (<${env.BUILD_URL}|Open>)",
                    channel: "project_builds"
            }
        }
    }
    fixed {
        slackSend color: 'good',
            message: ":raised_hands: build ${env.BUILD_DISPLAY_NAME} FIXED (<${env.BUILD_URL}|Open>)",
            channel: "project_builds"
    }
    aborted {
        slackSend color: 'danger',
            message: ":warning: build ${env.BUILD_DISPLAY_NAME} ABORTED. Was it intentional? (<${env.BUILD_URL}|Open>)",
            channel: "project_builds"
    }
    failure {
        slackSend color: 'danger',
            message: ":red_circle: build ${env.BUILD_DISPLAY_NAME} FAILED (<${env.BUILD_URL}|Open>)",
            channel: "project_builds"
    }
}                    

The first step here is to always report the automated test results to Jenkins with the nunit plugin. Unity’s test reports are in the NUnit format, and this plugin converts them to the JUnit format that Jenkins expects.

We post all failures to the slack channel, and all fixed builds, but we don’t post all successes. With builds on every push that might make the build channel too noisy. We do however post when Health Checks succeed, since that’s good affirmation.

We use the Slack Notification plugin. The slack color attribute here doesn’t seem to work for us, so we use emojis to make it easy to scan what’s happening. Here’s an example from slack:

Porting an existing job to pipelines

Jenkins includes a snippet generator, which allows you to make freestyle blocks and see the generated pipeline script, which is very handy for porting freestyle jobs:

The full file

UNITY_PATH = "C:\\Program Files\\Unity\\Hub\\Editor\\2019.4.5f1\\Editor\\Unity.exe"
pipeline {
    parameters {
        choice(name: 'TYPE', choices: ['Debug', 'Release', 'Publisher'], description: 'Do you want cheats or speed?')
        choice(name: 'BUILD', choices: ['Fast', 'Deploy', 'HealthCheck'], description: 'Fast builds run minimal tests and make a build. HealthCheck builds run more automated tests and are slower. Deploy builds are HealthChecks + a deploy.')
        booleanParam(name: 'CLEAN', defaultValue: false, description: 'Tick to removed cached files - will take an eon')
        booleanParam(name: 'SKIP_PLAYMODE_TESTS', defaultValue: false, description: 'In an emergency, to allow Deploy builds to work with a failing playmode test')
    }
    agent {
        node {
            label "My Project"
            // force everyone to the same space, so we can share a library file.
            customWorkspace 'workspace\\MyProject'
        }
    }
    options {
        timestamps()
        // as a failsafe. our builds tend to be around the 15min mark, so 45 would be excessive.
        timeout(time: 45, unit: 'MINUTES')
    }
    // post stages only kick in once we definitely have a node
    stages {
        stage('Clean') {
            when {
                expression { return params.CLEAN }
            }
            steps {
                bat "if exist Library (rmdir Library /s /q)"
                bat "if exist Temp (rmdir Temp /s /q)"
            }
        }
        stage('Prewarm') {
            steps {
                script {
                    // not easy to get jenkins to tell us the current branch! none of the built-in envs seem to work?
                    // let's just ask svn directly.
                    def get_branch_script = 'svn info | select-string "Relative URL: \\^\\/(.*)" | %{\$_.Matches.Groups[1].Value}'
                    env.R7_SVN_BRANCH = powershell(returnStdout: true, script:get_branch_script).trim()
                }
                buildName "${BUILD_NUMBER} - ${TYPE}@${env.R7_SVN_BRANCH}/${SVN_REVISION}"
                script {
                    if (params.BUILD == 'Deploy') {
                        buildName "${env.BUILD_DISPLAY_NAME} ^Deploy"
                    }
                    if (params.BUILD == 'HealthCheck') {
                        buildName "Health Check of ${TYPE}@${env.R7_SVN_BRANCH}/${SVN_REVISION}"
                    }
                    if (params.BUILD == 'HealthCheck' || params.BUILD == 'Deploy') {
                        // let's warn that a deploy build is in progress
                        slackSend color: 'good',
                            message: ":hourglass: build ${env.BUILD_DISPLAY_NAME} STARTED (<${env.BUILD_URL}|Open>)",
                            channel: "project_builds"
                    }
                }
                // clean tests
                bat "if exist *tests.xml (del *tests.xml)"
                // need an initial open/close to clean out the serialisation. without this you can get things validating on old code!!
                // do it twice, to make it more tolerant of bad libraries when switching branches
                retry(count: 2) {
                    bat "\"${UNITY_PATH}\" -nographics -buildTarget Win64 -quit -batchmode -projectPath . -executeMethod MyBuildFunction -MyBuildType \"${params.TYPE}\" -MyTestOnly -logFile"
                }
            }
        }
        stage ('Editmode Tests') {
            steps {
                bat "\"${UNITY_PATH}\" -nographics -batchmode -projectPath . -runTests -testResults editmodetests.xml -testPlatform editmode -logFile"
            }
        }
        stage ('Playmode Tests') {
            when {
                expression {
                    return (params.BUILD == 'Deploy' || params.BUILD == 'HealthCheck') && !params.SKIP_PLAYMODE_TESTS
                }
            }
            steps {
                // no -nographics on playmode tests. they don't log right now? which is weird cuz the editmode tests do with almost the same setup.
                bat "\"${UNITY_PATH}\" -batchmode -projectPath . -runTests -testResults playmodetests.xml -testPlatform playmode -testCategory \"BuildServer\" -logFile"
            }
        }
        stage ('Build') {
            steps {
                bat "\"${UNITY_PATH}\" -nographics -buildTarget Win64 -quit -batchmode -projectPath . -executeMethod MyBuildFunction  -MyBuildType \"${params.TYPE}\" -logFile"
            }
        }
        stage ('Deploy') {
            when {
                expression { return params.BUILD == 'Deploy' }
            }
            steps {
                // ... how to get a build to your platform of choice ...
                slackSend color: 'good', message: ":ship: build ${env.BUILD_DISPLAY_NAME} DEPLOYED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
            }
        }        
    }
    post {
        always {
            nunit testResultsPattern: '*tests.xml'
        }
        success {
            script {
                if (params.BUILD == 'HealthCheck') {
                    slackSend color: 'good',
                        message: ":green_heart: build ${env.BUILD_DISPLAY_NAME} SUCCEEDED (<${env.BUILD_URL}|Open>)",
                        channel: "project_builds"
                }
            }
        }
        fixed {
            slackSend color: 'good',
                message: ":raised_hands: build ${env.BUILD_DISPLAY_NAME} FIXED (<${env.BUILD_URL}|Open>)",
                channel: "project_builds"
        }
        aborted {
            slackSend color: 'danger',
                message: ":warning: build ${env.BUILD_DISPLAY_NAME} ABORTED. Was it intentional? (<${env.BUILD_URL}|Open>)",
                channel: "project_builds"
        }
        failure {
            slackSend color: 'danger',
                message: ":red_circle: build ${env.BUILD_DISPLAY_NAME} FAILED (<${env.BUILD_URL}|Open>)",
                channel: "project_builds"
        }
    }                    
}

Congrats on reading to the end! Your prize is a nice fish: 🐟

December 19, 2020

Gratitude by Ben Paddock (@_pads)

Amid the turmoil of this challenging year, I’d like to reflect on 2020 and give my thanks to a lot of people who have made it positive and reminded me how much I have in life.  I keep a gratitude journal listing 3 things I am grateful for every day.  That’s become a pretty comprehensive list but below are some of the highlights, extended.

Thank you to my fellow Essex compatriot Matt for your support, counsel and especially the cookbooks and pictures of your amazing feasts!

A big shout out goes to Jim for the various virtual pub sessions we’ve had on Fridays after work.  You’ve patiently listened to me spout many a tale of woe for too long now.

The awesome members of the Warwick Lanterne Rouge Cycling Club.  Despite social restrictions, key members have worked tirelessly behind the scenes to keep social activities going.  In particular, I’d like to thank my Sunday group of Tommy, John, Matt, Martin and Adam for keeping me company in the lanes and providing great chat over coffee and cake in the Cotswolds.  You make my weekends.

My awesome neighbours in the block of flats I live in have helped keep me sane living on my own and relieved the cabin fever.  Matt and Sarah, who live above me – our chats on the 1st floor are a joy and your baking is exceptional, especially without a mixer!

To my support household of Wayne, Amanda and Sophie - what absolute heroes you are.  You have so much on your plate with your own challenges and yet you have given me so much of your time and company, expecting nothing in return.  You all deserve such a better 2021.

Special mention goes to the Thursday Night Gaming Group of Malc, Nad, Omar, Usman, Simon, Andy, Shaun, Nazeef and even Sleepy. The banter and smack talk always makes for an entertaining evening!

Whilst I’ve missed the good beer of the White Horse pub in Leamington, the Leam Geeks meetup has continued online and it’s been great to see faces of the regulars - Both Richs, Rob, Tim, Matt.

As always, my Colchester crew of Dan, Andy, Tony, Stu, Paul, Zena & Steff who somehow still talk to me after putting up with over a quarter of a century of nonsense from me.  It’s good to know you’re always around.

Also thanks to my various Nottingham uni groups of Oli, Jack, Andy, Rich, Dave, Craig, Sam, Jo and Jason who have kept in touch over WhatsApp.  There were a few plans we had this year that were scuppered but I’m sure we’ll get opportunities again soon.

My guitar tutor Alfie, who has amazing patience and is a highly gifted teacher.  Thanks for helping me find a creative outlet to give my brain a break from the day-to-day problem solving it normally gets caught up in.

My former work colleagues, of which there are many who have been great people to know and work with, but, in particular, Rich G, Rich T, Emma, Dan, Steve, Ash, Matt and Chris.  Thanks for your support and patience with me.  When I was melting down and causing more problems than I was solving, you were still there for me.

Of course, my family are my rock.  Mum, Dad, Matt & Grandma you are special people who love me unconditionally, always.

There are many more to mention, and no doubt if you are reading this and we’ve interacted this year, I’m really appreciative of the connection.  May everyone have a much more prosperous 2021.

December 18, 2020

Reading List 268 by Bruce Lawson (@brucel)

  • Link o’ The Week: Why it’s good for users that HTML, CSS and JS are separate languages by Hidde de Vries
  • Bonus Link o’ The Week: Web Histories – “The Web Histories project is an attempt to document the stories of those who have helped to shape the open web platform, with a focus on those people who are often underrepresented in the formal histories.” by Rachel Andrew
  • The F-word episode 7 – Leningrad Lothario, Russian roister-doister Vadim spared a moment from his dizzying social life to sit with me and discuss Chrome 88 and the latest web development Grand Unification Proposal (= make it all JSON)
  • A Comprehensive Guide to Accessible User Research – “Researchers often want to include people with access needs in their studies but don’t know where to begin. This three-part series covers the various considerations for adapting your practice to include people with disabilities.”
  • Under-Engineered Responsive Tables – “I am just going to show you the bare minimum you need to make a WCAG-compliant responsive HTML table” says Uncle Adrian Roselli: <div role="region" aria-labelledby="Caption01" tabindex="0"> <table>[…]</table> </div>
  • Resources for developing accessible cards/tiles
  • Accessible SVGs – I needed this again this week.
  • Alt vs figcaption – what’s the difference, when should you use which and how.
  • Weaving Web Accessibility With Usability gives actionable advice on usability testing with disabled people (and quotes me!)
  • Welcome to Your Bland New World “The VC community is an increasingly predictable and lookalike bunch that just seems to follow each other around from one trivial idea to another”. Excellent article!
  • Is Stencil a Better React? by Wix Engineering (a React contributor). “the same JSX as React & some of the same concepts… Compiles to standard web components with optimal performance…Stencil was faster compared to React… also faster compared to Svelte, at least within this use case.”
  • CPRA Passes, Further Bolstering Privacy Regulations and Requirements in California “agreement obtained through use of dark patterns does not constitute consent”
  • 2020 Affordability Report – “Over a billion people live in the 57 countries in our survey that are yet to meet the UN Broadband Commission’s ‘1 for 2’ affordability threshold. 1GB is the minimum that allows someone to use the internet effectively”
  • Clean Advertising – an interesting article by Jezza with a bonus useful tip about tigers which I’ll be trying tonight.
  • New UK tech regulator to limit power of Google and Facebook – “the dominance of just a few big tech companies is leading to less innovation, higher advertising prices and less choice and control for consumers”

November 23, 2020

Reading List 267 by Bruce Lawson (@brucel)

November 22, 2020

We built a one-hour Zoom-based Escape Game experience that allowed large groups of players (50+) to work in small teams to solve puzzles related to the Ballantine House and its history. The game has a single host (playing the character of maid or butler) who manages the entire experience.

November 20, 2020

I was thinking about the different ways I work with museums – free email support, consultancy, group workshops & bespoke experiences – and wanted to compare those by taking Museum Tour Guides as an example.

November 19, 2020

(This is really part 2 of a blog post about the ‘different ways I work with museums’. You can read this on its own or start with part 1 first.) For this project Outside Studios found me (I think via the free tutorials). They were doing a large redevelopment over the entire Workhouse site and […]

November 17, 2020

The Silent Network by Graham Lee

People say that the internet, or maybe specifically the web, holds the world’s information and makes it accessible. Maybe there was a time when that was true. But currently it’s not: probably not because the information is missing, but because the search engines think they know better than you what you want.

I recently had cause to look up an event that I know happened: at an early point in the iPod’s development, Steve Jobs disparaged MP3 players using NAND Flash storage. What were his exact words?

Jobs also disparaged the Adobe (formerly Macromedia) Flash Player media platform, in a widely-discussed blog post on his company website many years later. I knew that this would be a closely-connected story, so I crafted my search terms to exclude it.

Steve Jobs NAND Flash iPod. Steve Jobs Flash MP3 player. Steve Jobs NAND Flash -Adobe. Did any of these work? No, on multiple search engines. Having to try multiple search engines and getting the wrong results on all of them is a 1990s-era web experience. All of these search terms return lists of “Thoughts on Flash” (the Adobe player), reports on that article, later news about Flash Player linking subsequent outcomes to that article, hot takes on why Jobs was wrong in that article, and so on. None of them show me what I asked for.

Eventually I decided to search the archives of one particular blog, which didn’t make the search engines prefer relevant results but which did reduce the quantity of irrelevant results. Finally, on the second page of articles from Daring Fireball about “Steve Jobs NAND flash storage iPod”, I found Flash Gordon. I still don’t have the quote; I have an article about a later development citing a dead-link story that is itself interpreting the quote.

That’s the closest modern web searching tools would let me get.

November 13, 2020

The new M1 chip in the new Macs has 8-16GB of DRAM on the package, just like many mobile phones or single-board computers, but unlike most desktop, laptop or workstation computers (there are exceptions). In the first tranche of Macs using the chip, that’s all the addressable RAM they have (i.e. ignoring caches), just like many mobile phones or single-board computers. But what happens when they move the Apple Silicon chips up the scale, to computers like the iMac or Mac Pro?

It’s possible that these models would have a few GB of memory on-package and access to memory modules connected via a conventional controller, for example DDR4 RAM. They almost certainly would if you could deploy multiple M1 (or successor) packages on a single system. Such a Mac would be a non-uniform memory access architecture (NUMA), which (depending on how it’s configured) has implications for how software can be designed to best make use of the memory.

NUMA computing is of course not new. If you have a computer with a CPU and a discrete graphics processor, you have a NUMA computer: the GPU has access to RAM that the CPU doesn’t, and vice versa. Running GPU code involves copying data from CPU-memory to GPU-memory, doing GPU stuff, then copying the result from GPU-memory to CPU-memory.

A hypothetical NUMA-because-Apple-Silicon Mac would not be like that. The GPU shares access to the integrated RAM with the CPU, a little like an Amiga. The situation on Amiga was that there was “chip RAM” (which both the CPU and graphics and other peripheral chips could access), and “fast RAM” (only available to the CPU). The fast RAM was faster because the CPU didn’t have to wait for the coprocessors to use it, whereas they had to take turns accessing the chip RAM. Nonetheless, the CPU had access to all the RAM, and programmers had to tell `AllocMem` whether they wanted to use chip RAM, fast RAM, or didn’t care.

A NUMA Mac would not be like that, either. It would share the property that there’s a subset of the RAM available for sharing with the GPU, but this memory would be faster than the off-chip memory because of the closer integration and lack of (relatively) long communication bus. Apple has described the integrated RAM as “high bandwidth”, which probably means multiple access channels.

A better and more recent analogy to this setup is Intel’s discontinued supercomputer chip, Knight’s Landing (marketed as Xeon Phi). Like the M1, this chip has 16GB of on-die high bandwidth memory. Like my hypothetical Mac Pro, it can also access external memory modules. Unlike the M1, it has 64 or 72 identical cores rather than 4 big and 4 little cores.

There are three ways to configure a Xeon Phi computer. You can forgo external memory entirely, so the CPU uses only its on-package RAM. You can use a cache mode, where the software only “sees” the external memory and the high-bandwidth RAM is used as a cache. Or you can go full NUMA, where programmers have to explicitly request memory in the high-bandwidth region to access it, like with the Amiga allocator.

People rarely go full NUMA. It’s hard to work out what split of allocations between the high-bandwidth and regular RAM yields best performance, so people tend to just run with cached mode and hope that’s faster than not having any on-package memory at all.

And that makes me think that a Mac would either not go full NUMA, or would not have public API for it. Maybe Apple would let the kernel and some OS processes have exclusive access to the on-package RAM, but even that seems overly complex (particularly where you have more than one M1 in a computer, so you need to specify core affinity for your memory allocations in addition to memory type). My guess is that an early workstation Mac with 16GB of M1 RAM and 64GB of DDR4 RAM would look like it has 64GB of RAM, with the on-package memory used for the GPU and as cache. NUMA APIs, if they come at all, would come later.

November 03, 2020

In case you ever need it. If you’re searching for something like “deleted login shell Mac can’t open terminal”, this is the post for you.

I just deleted my login shell (because it was installed with homebrew, and I removed homebrew without remembering that I would lose my shell). That stopped me from opening a Terminal window, because it would immediately bomb out as it was unable to open the shell.

Unable to open a normal Terminal window, anyway. In the Shell menu, the “New Command…” item let me run /bin/bash -l, from which I got to a login-like bash shell. Then I could run this command:

chsh -s /bin/zsh

Enter my password, and then I have a normal shell again.

(So I could then install MacPorts, and then change my shell to /opt/local/bin/bash)

November 02, 2020

Thinking back over the last couple of years, I’ve had to know quite a bit about a few different topics to be able to write good software. Those topics:

  • Epidemiology
  • Architecture
  • Plant Sciences
  • Histology
  • Education

Not much knowledge in each field, though definitely expert-level knowledge: enough to have conversations as a peer with experts in those fields, to be able to follow the jargon, and to be able to make reasoned assessments of the experts’ suggestions in software designed for use by those experts. And, where I’ve wanted to propose alternate suggestions, enough expert-level knowledge to identify and justify a different design.

Going back over the rest of my career in software:

  • Pensions, investments, and saving
  • Mobile Telecommunications
  • Terry Pratchett’s Discworld
  • Macromolecular crystallography
  • Synchrotron accelerators
  • Home automation
  • Yoga
  • Soda drinks dispensers
  • Banks
  • Quite a bit of physics

In fact I’d estimate that I’ve spent less than 40% of my “professional career” in jobs where knowing software or even computers in general was the whole thing.

Working in software is about understanding a system to the point where you can use a computer to make a meaningful and beneficial contribution to that system. While the systems thinking community have great tools for mapping out the behaviour of systems, they are only really good for making initial models. In order to get good at it, we have to actually understand the system on its own terms, with the same ideas and biases that the people who interact regularly with the system use.

But of course, because we’re hired for our computering skills, we get to experience and work with all of these different systems. It’s perhaps one of the best domains in which to be a polymath. To navigate it effectively, we need to accept that we are not the experts. We’re visitors, who get to explore other people’s worlds.

We should take them up on that offer, though, if we’re going to be effective. If we maintain the distinction between “technical” and “non-technical” people, or between “engineering” and “the business”, then we deny ourselves the ability to learn about the problem we’re trying to solve, and to make a good solution.

October 31, 2020

My dad’s got a Brother DCP-7055W printer/scanner, and he wanted to be able to set it up as a network scanner to his Ubuntu machine. This was more fiddly than it should be, and involved a bunch of annoying terminal work, so I’m documenting it here so I don’t lose track of how to do it should I have to do it again. It would be nice if Brother made this easier, but I suppose that it working at all under Ubuntu is an improvement on nothing.

Anyway. First, go off to the Brother website and download the scanner software. At time of writing, https://www.brother.co.uk/support/dcp7055/downloads has the software, but if that’s not there when you read this, search the Brother site for DCP-7055 and choose Downloads, then Linux and Linux (deb), and get the Driver Installer Tool. That’ll get you a shell script; run it. This should give you two new commands in the Terminal: brsaneconfig4 and brscan-skey.

Next, teach the computer about the scanner. This is what brsaneconfig4 is for, and is all done in the Terminal. You need to know the scanner’s IP address; you can find this out from the scanner itself, or you can use avahi-resolve -v -a -r to search your network for it. This will dump out a whole load of stuff, some of which should look like this:

=  wlan0 IPv4 Brother DCP-7055W                             UNIX Printer         local
   hostname = [BRN008092CCEE10.local]
   address = [192.168.1.21]
   port = [515]
   txt = ["TBCP=F" "Transparent=T" "Binary=T" "PaperCustom=T" "Duplex=F" "Copies=T" "Color=F" "usb_MDL=DCP-7055W" "usb_MFG=Brother" "priority=75" "adminurl=http://BRN008092CCEE10.local./" "product=(Brother DCP-7055W)" "ty=Brother DCP-7055W" "rp=duerqxesz5090" "pdl=application/vnd.brother-hbp" "qtotal=1" "txtvers=1"]

That’s your Brother scanner. The thing you want from that is address, which in this case is 192.168.1.21.

Run brsaneconfig4 -a name="My7055WScanner" model="DCP-7055" ip=192.168.1.21. This should teach the computer about the scanner. You can test this with brsaneconfig4 -p which will ping the scanner, and brsaneconfig4 -q which will list all the scanner types it knows about and then list your added scanner at the end under Devices on network. (If your Brother scanner isn’t a DCP-7055W, you can find the other codenames for types it knows about with brsaneconfig4 -q and then use one of those as the model with brsaneconfig4 -a.)

You only need to add the scanner once, but you also need to have brscan-skey running always, because that’s what listens for network scan requests from the scanner itself. The easiest way to do this is to run it as a Startup Application; open Startup Applications from your launcher by searching for it, add a new application which runs the command brscan-skey, and restart the machine so that it’s running.
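(If you’d rather not click through the GUI, on a desktop that honours ~/.config/autostart – stock Ubuntu does – an autostart entry along these lines does the same job; save it as ~/.config/autostart/brscan-skey.desktop:)

[Desktop Entry]
Type=Application
Name=brscan-skey
Exec=brscan-skey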

If you don’t have the GIMP1 installed, you’ll need to install it.

On the scanner, you should now be able to press the Scan button and choose Scan to PC and then Scan Image, and it should work. What will happen is that your machine will pop up the GIMP with the image, which you will then need to export to a format of your choice.

This is quite annoying if you need to scan more than one thing, though, so there’s an optional extra step, which is to change things so that it doesn’t pop up the GIMP and instead just saves the scanned photo, which is much nicer. To do this, first install imagemagick, and then edit the file /opt/brother/scanner/brscan-skey/script/scantoimage-0.2.4-1.sh with sudo. Change the last line from

echo gimp -n $output_file 2>/dev/null \;rm -f $output_file | sh &

to

echo convert $output_file $output_file.jpg 2>/dev/null \;rm -f $output_file | sh &

Now, when you hit the Scan button on the scanner, it will quietly create a file named something like brscan.Hd83Kd.ppm.jpg in the brscan folder in your home folder and not show anything on screen, and this means that it’s a lot easier to scan a bunch of photos one after the other.
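(If you’d rather script that edit than open the file in an editor, a sed one-liner along these lines should do it – assuming the script really is at that exact path and version; the -i.bak keeps a backup of the original:)

sudo sed -i.bak 's/echo gimp -n \$output_file/echo convert \$output_file \$output_file.jpg/' /opt/brother/scanner/brscan-skey/script/scantoimage-0.2.4-1.sh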

  1. I hate this name. It makes us look like sniggering schoolboys. GNU Imp, maybe, or the new Glimpse fork, but the upstream developers don’t want to change it

Rake Is Awesome

I sat down to start compiling these notes, and of course got sidetracked putting together the Rakefile which can be found in the root of this repo.

Hacktoberfest

I was all gung-ho about getting started with Hacktoberfest this year, but I’m not sure I can muster the energy. I absolutely do not need any more t-shirts, and the negative energy around the event’s growing spam problem just turns me off participating entirely.

Regardless, I’m enjoying stewarding How Old Is It?, the only project I’ve had that’s gotten more than 10 stars on GitHub. I hope I can at least help some other non-spammy participants get their t-shirts.

What Even Is Typing?

Trying to get better at navigating around my code editor without using a mouse. This is motivated by the audiobook of The Pragmatic Programmer that I’m listening to, in which they discuss how efficiency can be improved by reducing the friction between your brain and your computer.

The issue isn’t that taking your hand off the keyboard, placing it on the mouse, clicking some stuff and then moving your hand back to the keyboard takes too much time; realistically the extra time added by mouse usage is going to be dwarfed by the time spent in meetings or making seven or eight cups of tea.

The issue is that it’s a distraction that takes your mind off what you’re writing.

Rails Test Assistant

There was some functionality in RubyMine that I missed, and wanted to replicate inside VSCode. Rather than use one of the existing plugins that more than adequately solve the problem, I decided to write my own. Because, y’know. Of course I would.

Hello, Rails Test Assistant.

Things I Read

October 26, 2020

October 19, 2020

If programmers were just more disciplined, more professional, they’d write better software. All they need is a code of conduct telling them how to work like those of us who’ve worked it out.

The above statement is true, which is a good thing for those of us interested in improving the state of software and in helping our fellow professionals to improve their craft. However, it’s also very difficult and inefficient to apply, in addition to being entirely unnecessary. In the common parlance of our industry, “discipline doesn’t scale”.

Consider the trajectory of object lifecycle management in the Objective-C programming language, particularly the NeXT dialect. Between 1989 and 1995, the dominant way to deal with the lifecycle of objects was to use the +new and -free methods, which work much like malloc/free in C or new/delete in C++. Of course it’s possible to design a complex object graph using this ownership model, it just needs discipline, that’s all. Learn the heuristics that the experts use, and the techniques to ensure correctness, and get it correct.

But you know what’s better? Not having to get that right. So around 1994 people introduced new tools to do it an easier way: reference counting. With NeXTSTEP Mach Kit’s NXReference protocol and OpenStep’s NSObject, developers no longer need to know when everybody in an app is done with an object to destroy it. They can indicate when a reference is taken and when it’s relinquished, and the object itself will see when it’s no longer used and free itself. Learn the heuristics and techniques around auto releasing and unretained references, and get it correct.

But you know what’s better? Not having to get that right. So a couple of other tools were introduced, so close together that they were probably developed in parallel[*]: Objective-C 2.0 garbage collection (2006) and Automatic Reference Counting (2008). ARC “won” in popular adoption so let’s focus there: developers no longer need to know exactly when to retain, release, or autorelease objects. Instead of describing the edges of the relationships, they describe the meanings of the relationships and the compiler will automatically take care of ownership tracking. Learn the heuristics and techniques around weak references and the “weak self” dance, and get it correct.

[*] I’m ignoring here the significantly earlier integration of the Boehm conservative GC with Objective-C, because so did everybody else. That in itself is an important part of the technology adoption story.

But you know what’s better? You get the idea. You see similar things happen in other contexts: for example C++’s move from new/delete to smart pointers follows a similar trajectory over a similar time. The reliance on an entire programming community getting some difficult rules right, when faced with the alternative of using different technology on the same computer that follows the rules for you, is a tough sell.

It seems so simple: computers exist to automate repetitive information-processing tasks. Requiring programmers who have access to computers to recall and follow repetitive information processes is wasteful, when the computer can do that. So give those tasks to the computers.

And yet, for some people the problem with software isn’t a lack of automation but a lack of discipline. Software would be better if only people knew the rules, honoured them, and slowed themselves down so that instead of cutting corners they just chose to ignore important business milestones. Back in my day, everybody knew “no Markdown around town” and “don’t code in an IDE after Labour Day”, but now the kids do whatever they want. The motivations seem different, and I’d like to sort them out.

Let’s start with hazing. A lot of the software industry suffers from “I had to go through this, you should too”. Look at software engineering interviews, for example. I’m not sure whether anybody actually believes “I had to deal with carefully ensuring NUL-termination to avoid buffer overrun errors so you should too”, but I do occasionally still hear people telling less-experienced developers that they should learn C to learn more about how their computer works. Your computer is not a fast PDP-11, all you will learn is how the C virtual machine works.

Just as Real Men Don’t Eat Quiche, so real programmers don’t use Pascal. Real Programmers use FORTRAN. This motivation for sorting discipline from rabble is based on the idea that if it isn’t at least as hard as it was when I did this, it isn’t hard enough. And that means that the goalposts are movable, based on the orator’s experience.

This is often related to the term of their experience: you don’t need TypeScript to write good React Native code, just Javascript and some discipline. You don’t need React Native to write good front-end code, just JQuery and some discipline. You don’t need JQuery…

But along with the term of experience goes the breadth. You see, the person who learned reference counting in 1995 and thinks that you can only really understand programming if you manually type out your own reference-changing events, presumably didn’t go on to use garbage collection in Java in 1996. The person who thinks you can only really write correct software if every case is accompanied by a unit test presumably didn’t learn Eiffel. The person who thinks that you can only really design systems if you use the Haskell type system may not have tried OCaml. And so on.

The conclusion is that for this variety of disciplinarian, the appropriate character and quantity of discipline is whatever they had to deal with at some specific point in their career. Probably a high point: after they’d got over the tricky bits and got productive, and after you kids came along and ruined everything.

Sometimes the reason for suggesting the disciplined approach is entomological in nature, as in the case of the eusocial insect the “performant” which, while not a real word, exists in greater quantities in older software than in newer software, apparently. The performant is capable of making software faster, or use less memory, or more concurrent, or less dependent on I/O: the specific characteristics of the performant depend heavily on context.

The performant is often not talked about in the same sentences as its usual companion species, the irrelevant. Yes, there may be opportunities to shave a few percent off the runtime of that algorithm by switching from the automatic tool to the manual, disciplined approach, but does that matter (yet, or at all)? There are software-construction domains where specific performance characteristics are desirable, indeed that’s true across a lot of software. But it’s typical to focus performance-enhancing techniques on the bits where they enhance performance that needs enhancing, not to adopt them across the whole system on the basis that it was better when everyone worked this way. You might save a few hundred cycles writing native software instead of using a VM for that UI method, but if it’s going to run after a network request completes over EDGE then trigger a 1/3s animation, nobody will notice the improvement.

Anyway, whatever the source, the problem with calls for discipline is that there’s no strong motivation to become more disciplined. I can use these tools, and my customer is this much satisfied, and my employer pays me this much. Or I can learn from you how I’m supposed to be doing it, which will slow me down, for…your satisfaction? So you know I’m doing it the way it’s supposed to be done? Or so that I can tell everyone else that they’re doing it wrong, too? Sounds like a great deal.

Therefore discipline doesn’t scale. Whenever you ask some people to slow down and think harder about what they’re doing, some fraction of them will. Some will wonder whether there’s some other way to get what you’re peddling, and may find it. Some more will not pay any attention. The dangerous ones are the ones who thought they were paying attention and yet still end up not doing the disciplined thing you asked for: they either torpedo your whole idea or turn it into not doing the thing (see OOP, Agile, Functional Programming). And still more people, by far the vast majority, just weren’t listening at all, and you’ll never reach them.

Let’s flip this around. Let’s look at where we need to be disciplined, and ask if there are gaps in the tool support for software engineers. Some people want us to always write a failing test and make it pass before adding any code (or want us to write a passing test and revert our changes if it accidentally fails): does that mean our tools should not let us write code for which there’s no test? Does the same apply for acceptance tests? Some want us to refactor mercilessly; does that mean our design tools should always propose more parsimonious alternatives for passing the same tests? Some say we should get into the discipline of writing code that always reveals its intent: should the tools make a crack at interpreting the intention of the code-as-prose?

October 16, 2020

Reading List 266 by Bruce Lawson (@brucel)

It’s been a while since the last Reading List! Since then, Vadim Makeev and I recorded episode 6 of The F-Word, our podcast, on Mozilla layoffs, modals and focus, AVIF, and the AdBlock Plus lawsuit. We also chatted with co-inventor of CSS, Håkon Wium Lie, and Brian Kardell of Igalia about the health of the web ecosystem. Anyway, enough about me. Here’s what I’ve been reading about the web since the last mail.

October 13, 2020

I had an item in OmniFocus to “write on why I wish I was still using my 2006 iBook”, and then Tim Sneath’s tweet on unboxing a G4 iMac sealed the deal. I wish I was still using my 2006 iBook. I had been using NeXTSTEP for a while, and Mac OS X for a short amount of time, by this point, but on borrowed hardware, mostly spares from the University computing lab.

My “up-to-date” setup was my then-girlfriend’s PowerBook G3 “Wall Street” model, which upon being handed down to me usually ran OpenDarwin, Rhapsody, or Mac OS X 10.2 Jaguar, which was the last release to boot properly on it. When I went to WWDC for the first time in 2005 I set up X Post Facto, a tool that would let me (precariously) install and run 10.3 Panther on it, so that I could ask about Cocoa Bindings in the labs. I didn’t get to run the Tiger developer seed we were given.

When the dizzying salary of my entry-level sysadmin job in the Uni finally made a dent in my graduate-level debts, I scraped together enough money for the entry-level 12” iBook G4 (which did run Tiger, and Leopard). I think it lasted four years until I finally switched to Intel, with an equivalent white acrylic 13” MacBook model. Not because I needed an upgrade, but because Apple forced my hand by making Snow Leopard (OS X 10.6) Intel-only. By this time I was working as a Mac developer so had bought in to the platform lock-in, to some extent.

The treadmill turns: the white MacBook was replaced by a mid-decade MacBook Air (for 64-bit support), which developed a case of “fruit juice on the GPU” so finally got replaced by the 2018 15” MacBook Pro I use to this day. Along the way, a couple of iMacs (both Intel, both aluminium, the second being an opportunistic upgrade: another hand-me-down) came and went, though the second is still used by a friend.

Had it not been for the CPU changes and my need to keep up, could I still use that iBook in 2020? Yes, absolutely. Its replaceable battery could be improved, its browser could be the modern TenFourFox, the hard drive could be replaced with an SSD, and then I’d have a fast, quiet computer that can compile my code and browse the modern Web.

Would that be a great 2020 computer? Not really. As Steven Baker pointed out when we discussed this, computers have got better in incremental ways that eventually add up: hardware AES support for transparent disk encryption. Better memory controllers and more RAM. HiDPI displays. If I replaced the 2018 MBP with the 2006 iBook today, I’d notice those things get worse way before I noticed that the software lacked features I needed.

On the other hand, the hardware lacks a certain emotional playfulness: the backlight shining through the Apple logo. The sighing LED indicating that the laptop is asleep. The reassuring clack of the keys.

Are those the reasons this 2006 computer speaks to me through the decades? They’re charming, but they aren’t the whole reason. Most of it comes down to an impression that that computer was mine and I understood it, whereas the MBP is Apple’s and I get to use it.

A significant input into that is my own mental health. Around 2014 I got into a big burnout, and stopped paying attention to the updates. As a developer, that was a bad time because it was when Apple introduced, and started rapidly iterating on, the Swift programming language. As an Objective-C and Python expert (I’ve published books on both), with limited emotional capacity, I didn’t feel the need to become an expert on yet another language. To this day, I feel like a foreign tourist in Swift and SwiftUI, able to communicate intent but not to fully immerse in the culture and understand its nuances.

A significant part of that is the change in Apple’s stance from “this is how these things work” to “this is how you use these things”. I don’t begrudge them that at all (I did in the Dark Times), because they are selling useful things that people want to use. But there is decidedly a change in tone, from the “Come in it’s open” logo on the front page of the developer website of yore to the limited, late open source drops of today. From the knowledge oriented programming guides of the “blue and white” documentation archive to the task oriented articles of today.

Again, I don’t begrudge this. Developers have work to do, and so want to complete their tasks. Task-oriented support is entirely expected and desirable. I might formulate an argument that it hinders “solutions architects” who need to understand the system in depth to design a sympathetic system for their clients’ needs, but modern software teams don’t have solutions architects. They have their choice of UI framework and a race to an MVP.

Of course, Apple’s adoption of machine learning and cloud systems also means that in many cases, the thing isn’t available to learn. What used to be an open source software component is now an XPC service that calls into a black box that makes a network request. If I wanted to understand why the spell checker on modern macOS or iOS is so weird, Apple would wave their figurative hands and say “neural engine”.

And a massive contribution is the increase in scale of Apple’s products in the intervening time. Bear in mind that at the time of the 2006 iBook, I had one of Apple’s four Mac models, access to an XServe and Airport base station, and a friend who had an iPod, and felt like I knew the whole widget. Now, I have the MBP (one of six models), an iPhone (not the latest model), an iPad (not latest, not Pro), the TV doohickey, no watch, no speaker, no home doohickey, no auto-unlock car, and I’m barely treading water.

Understanding a G4-vintage Mac meant understanding PPC, Mach, BSD Unix, launchd, a couple of directory services, Objective-C, Cocoa, I/O Kit, Carbon, AppleScript, the GNU tool chain and Jam, sqlite3, WebKit, and a few ancillary things like the Keychain and HFS+. You could throw in Perl, Python, and the server stuff like XSAN and XGrid, because why not?

Understanding a modern Mac means understanding that, minus PPC, plus x86_64, the LLVM tool chain, sandbox/seatbelt, Scheme, Swift, SwiftUI, UIKit, “modern” AppKit (with its combination of layer-backed, layer-hosting, cell-based and view-based views), APFS, JavaScript and its hellscape of ancillary tools, geocoding, machine learning, the T2, BridgeOS…

I’m trying to trust a computer I can’t mentally lift.

I was shoulder-surfing my coworker the other day when he did something that I imagine is common knowledge to everyone except me.

When I’m trying to do something like monitor how quickly a file is growing, it’s not uncommon to see a terminal window on my screen that looks like this:

➜ du -hs index.html
4.0K	index.html
➜ du -hs index.html
4.0K	index.html
➜ du -hs index.html
5.0K	index.html
➜ du -hs index.html
6.0K	index.html
➜ du -hs index.html
8.0K	index.html
➜ du -hs index.html
12.0K	index.html

Not only is this untidy, you hardly look impressive, sitting there jabbing wildly at your up and return keys.

This is why I found it somewhat revelatory when my coworker entered the command watch du -hs index.html and I saw something like the following:

Every 2.0s: du -hs index.html

4.0K    index.html

From the man pages:

NAME
watch - execute a program periodically, showing output fullscreen

SYNOPSIS
watch [options] command

DESCRIPTION
watch runs command repeatedly, displaying its output and errors (the first screenfull). This allows you to watch the program output change over time. By default, command is run every 2 seconds and watch will run until interrupted.

If you’re a macOS user like myself, this command is available via the Homebrew package watch.
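A couple of flags are worth knowing too: -n sets the polling interval in seconds, and -d highlights whatever changed between runs, so a growing file really jumps out. For example:

watch -n 1 -d du -hs index.html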

October 12, 2020

I had need to test an application built for Linux, and didn’t want to run a whole desktop in a window using Virtualbox. I found the bits I needed online in various forums, but nowhere was it all in one place. It is now!

Prerequisites: Docker and XQuartz. Both can be installed via Homebrew.

Create a Dockerfile:

FROM debian:latest

# Note: on newer Debian releases the browser package is firefox-esr; swap it in if iceweasel isn't found
RUN apt-get update && apt-get install -y iceweasel

# Create a user whose uid/gid match the default macOS user (501) and staff group (20),
# so files written into mounted folders keep sensible ownership
RUN export uid=501 gid=20 && \
    mkdir -p /home/user && \
    echo "user:x:${uid}:${gid}:User,,,:/home/user:/bin/bash" >> /etc/passwd && \
    echo "staff:x:${gid}:" >> /etc/group && \
    echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
    chmod 0440 /etc/sudoers && \
    chown ${uid}:${gid} -R /home/user

USER user
ENV HOME /home/user
CMD /usr/bin/iceweasel

It’s good to mount the Downloads folder within /home/user, or your Documents, or whatever. On Catalina or later you’ll get warnings asking whether you want to give Docker access to those folders.

The first time through, open XQuartz, go to Preferences > Security, check the option to allow connections from network clients, then quit XQuartz.

Now open XQuartz, and in the xterm type:

$ xhost + $YOUR_IP
$ docker build -f Dockerfile -t firefox .
$ docker run -it -e DISPLAY=$YOUR_IP:0 -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/Downloads:/home/user/Downloads firefox
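(If you’re not sure what to use for $YOUR_IP, on macOS something like the following – assuming en0 is your active network interface – sets it to your machine’s address:)

export YOUR_IP=$(ipconfig getifaddr en0)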

Enjoy firefox (or more likely, your custom app that you’re testing under Linux)!

Iceweasel on Debian on macOS

October 09, 2020

October 06, 2020

September 30, 2020

kubectl explain

This tweet by @ienmiell put me on to kubectl explain. It’s awesome. It tells you right there in your terminal what the stuff in your Kubernetes resources actually means.
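For example, you can drill into a nested field with a dotted path, or dump an entire subtree:

kubectl explain deployment.spec.strategy
kubectl explain pod.spec.containers --recursive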

Things I Read

September 28, 2020

September 27, 2020

(Read part 1, about creating parametric level smoke tests, here)

Now that we have some useful tests, we want our Jenkins install to run them frequently. That way, we get really quick feedback on failures. Here’s how we do this at Roll7.

Running tests from the command line

Let’s assume you already have a server to build your game. I won’t cover how to set up a Unity Jenkins build server here, except to say that the Unity3d plugin is very helpful when you have lots of Unity versions. (Although we couldn’t get the log parameter to be picked up unless we specified the path fully in our command line arguments!)

The command line to run a Unity Test Framework job is as follows:

-batchmode -projectPath "." -runTests -testResults playmodetests.xml -testPlatform playmode -logFile ${WORKSPACE}\playtestlog.txt

You can read more about command line params here, but let’s break that down:

  • -batchmode tells Unity not to open a GUI, which is vital for a build server. We don’t want it to hang on a dialog! You can also use Application.isBatchMode to test for this flag in your code.
  • -projectPath "." just tells Unity to load the project in our working directory
  • -runTests starts a Unity Test Framework job as soon as the editor loads. It’ll run some specified tests, spit out some output, and make sure test failures cause a non-zero return code.
  • -testPlatform playmode tells the UTF to run our playmode tests, which are the ones we care about for this blog post. You can also use editmode.
  • -testResults playmodetests.xml states where to spit out a report of the run, which will include failures as well as logs. The report is formatted as an NUnit test result XML. Jenkins has a plugin that can fail a job based on this file, and present a nice reporting UI.
  • -logFile ${WORKSPACE}\playtestlog.txt specifies where to write the editor log – by default it won’t stream into the console. The ${WORKSPACE} is a Jenkins environment variable, and we found specifying it was the only way to get the Unity3d plugin to find the log.

…And that’s enough to do a test run.
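For reference, a complete invocation from the build server ends up looking something like this – the editor path here is only an example, and will depend on your Unity version and install location; ${WORKSPACE} is left for Jenkins to expand:

"C:\Program Files\Unity\Hub\Editor\2019.4.16f1\Editor\Unity.exe" -batchmode -projectPath "." -runTests -testResults playmodetests.xml -testPlatform playmode -logFile ${WORKSPACE}\playtestlog.txt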

Playmode tests in batch

This page of the docs mentions that the WaitForEndOfFrame coroutine is a bad idea in batch mode. This is because “systems like animation, physics and timeline might not work correctly in the Editor”.

There’s not much more detail than this!

In practice, we’ve found any playmode tests that depend on movement, positioning or animation fail pretty reliably. We get around this by explicitly marking tests we know can run on the server with the [Category("BuildServer")] attribute. We can then use the -testCategory "BuildServer" parameter to only run these tests.
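Tagging a test is just NUnit’s Category attribute on an ordinary test method; a minimal sketch (the test name and body here are invented for illustration):

    [Test, Category("BuildServer")]
    public void EnemyConfigsHaveNames()
    {
        // ...assertions that don't depend on movement, physics or animation
    }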

This is pretty limiting! But there’s still plenty of value in just making sure your levels, enemies and weapons work without firing errors or warnings.

In the near future we’ll be experimenting with an EC2 instance that has a GPU, to let us run without batchmode, and also to allow us to run playmode tests on finished builds more easily.

Gotchas and practicalities

  • Currently, we find a full playmode test run is exceedingly slow, and we don’t yet know why. What takes a minute or two locally takes tens of minutes on the EC2 instance. It’s not a small instance, either! So we’re only scheduling a test run for our twice-daily Steam builds, instead of for every push.
  • WaitForEndOfFrame doesn’t work in batch mode, so beware of using that in your tests, or anything your tests depend on.
  • The vagaries of Unity’s compile step mean that sometimes you can get out-of-date OnValidate calls running on new data as you open the editor. Maybe this is fixable with properly defined assemblies, but we hackily get around it by doing a dummy build right at the start of the Jenkins job. It goes as far as setting the correct platform, waits for all the compiles, and quits. Compile errors still cause failures, which is good.
  • If you want editmode and playmode tests in the same job, just run the editor twice. We do this with two different testResults xmls, and we can use a wildcard in the nunit jenkins plugin to pick up both.
  • To test your command line in a DOS prompt, use the start command: start /wait "" "path to unity.exe" -batchmode ... The extra empty quotes are important if your Unity path has spaces in it too. To see the last command’s return code in DOS, use echo %ERRORLEVEL%.
  • These are running playmode tests in the editor on the build server. We haven’t yet got around to making playmode tests work in a build. That might end up as a follow-up post!

September 15, 2020

Self-organising teams by Graham Lee

In The Manifesto for Anarchic Software Development I noted that one of the agile manifesto principles is for self-organising teams, and that those tend not to exist in software development. What would a self-organising software team look like?

  1. Management hire a gang and set them a goal, and delegate all decisions on how to run the gang and who is in the gang to members of the gang.
  2. The “team lead” is not in charge of decision-making, but a consultant who can advise gang members on their work. The team lead probably works directly on the gang’s product too, unless the gang is too large.
  3. One member of the gang acts as a go-between for management and communicates openly with the rest of the gang.
  4. Any and all members of the gang are permitted to criticise the gang’s work and the management’s direction.
  5. The lead, the management rep, and the union rep are all posts, not people. The gang can recall any of them at any time and elect a replacement.
  6. Management track the outcomes of the gang, not the “productivity” of individuals.
  7. Management pay performance-related benefits like bonuses to the gang for the gang’s collective output, not to individuals.
  8. The gang hold meetings when they need, and organise their work how they want.

September 14, 2020

Someone has been trolling Apple’s Siri team hard on how they think numbers are pronounced. Today is the second day where I’ve missed a turn due to it. The first time because I didn’t understand the direction, the second because the pronunciation was so confusing I lost focus and drove straight when I should have turned.

The disembodied voice doesn’t even use a recognisable dialect or regional accent, it just gets road numbers wrong. In the UK, there’s a hierarchy of motorways (M roads, like M42), A roads (e.g. A34), B roads (e.g. B3400), and unclassified roads. It’s a little fluid around the edges, but generally you’d give someone the number of an M or A road if you’re giving them directions, and the name of a B road.

Apple Maps has always been a bit weird about this, mostly preferring classifications but using the transcontinental E route numbers which aren’t on signs in the UK and aren’t used colloquially, or even necessarily known. But now its voice directions pronounce the numbers incomprehensibly. That’s ok if you’re in a car and the situation is calm enough that you can study the CarPlay screen to work out what it meant. But on a motorbike, or if you’re concentrating on the road, it’s a problem.

“A” is pronounced “uh”, as if it’s saying “a forty-six” rather than “A46”. Except it also says “forrysix”. Today I got a bit lost going from the “uh foreforryfore” to the “bee forryaytoo” and ended up going in, not around, Coventry.

Entering Coventry should always be consensual.

I’ve been using Apple Maps since the first version which didn’t even know what my town was called, and showed a little village at the other end of the county if you searched for it by name. But with the successive apologies, replatformings, rewrites, and rereleases, it always seems like you take one step forward and then at the roundabout use the fourth exit to take two steps back.

September 12, 2020

Go on, read the manifesto again. You’ll see that it’s a manifesto for anarchism, for people coming together and contributing equally toward solving problems. From each according to their ability, to each according to their need.

The best architectures, requirements, and designs
emerge from self-organizing teams.

While new to software developers in the beginning of this millennium, this would not have been news to architects who noticed the same thing in 1962. A digression: this was more than a decade before architects made their other big contribution to software engineering, the design pattern. The RIBA report noticed two organisations of teams:

One was characterised by a procedure which began by the invention of a building shape and was followed by a moulding of the client’s needs to fit inside this three-dimensional preconception. The other began with an attempt to understand fully the needs of the people who were to use the building, around which, when they were clarified, the building would be fitted.

There were trade-offs between these styles, but the writers of the RIBA report clearly found some reason “to value individuals and interactions over processes and tools”:

The work takes longer and is often unprofitable to the architect, although the client may end up with a much cheaper building put up more quickly than he had expected. Many offices working in this way had found themselves better suited by a dispersed type of work organisation which can promote an informal atmosphere of free-flowing ideas.

Staff retention was higher in the dispersed culture, even though the self-organising nature of the teams meant that sometimes the senior architect was not the project lead, but found themselves reporting to a junior because ideas trumped length of tenure.

This description of self-organising teams in architecture makes me realise that I haven’t knowingly experienced a self-organising team in software, even when working on a team that claimed self-organisation. The idea is prevalent in software of a “platform shop”: a company that builds Rails websites, or Java micro services, or Swift native apps. This is software’s equivalent of beginning “by the invention of a building shape”, only more so: begin by the application of an existing building shape, no invention required.

As the RIBA report notes, this approach “clearly goes with rather autocratic forms of control”. By centralising the technology in the solution design, people can argue that experience with that technology stack (and more specifically, with the way it’s applied in this organisation) is the measure of success, and use that to impose or reinforce a hierarchy.

Clearly, length of tenure becomes a proxy measure for authority in such an organisation. The longer you’ve been in the company, the more experience you have contorting their one chosen solution to attempt to address a client’s problem. Never mind that there are other skills needed in designing a software product (not least of which is actually understanding the problem), and never mind that this “experience” is in repeated application of an unsuitable template: one year of experience ten times over, rather than ten years of experience.

You may be familiar with Unity’s Test Runner window, where you can execute tests and see results. This is the user-facing part of the Unity Test Framework, which is a very extensible system for running tests of any kind. At Roll7 I recently set up the test runner to automatically run simple smoke tests on every level of our (unannounced) game, and made Jenkins report on failures. In this post I’ll outline how I did the former, and in part two I’ll cover the latter.

Play mode tests, automatically generated for every level
Some of our [redacted] playmode and editmode tests, running on Jenkins

(I’m going to assume you have passing knowledge of how to write tests for the test runner)

(a lot of this post is based on this interesting talk about UTF from its creators at Unite 2019)

The UTF is built upon NUnit, a .net testing framework. That’s what provides all those [TestFixture] and [Test] attributes. One feature of NUnit that UTF also supports is [TestFixtureSource]. This attribute allows you to make a sort of “meta testfixture”, a template for how to make test fixtures for specific resources. If you’re familiar with parameterized tests, it’s like that but on a fixture level.

We’re going to make a TestFixtureSource provider that finds all level scenes in our project, and then the TestFixtureSource itself that loads a specific level and runs some generic smoke tests on it. The end result is that adding a new level will automatically add an entry for it to the play mode tests list.

There’s a few options for different source providers (see the NUnit docs for more), but we’re going to make an IEnumerable that finds all our level scenes. The results of this IEnumerable are what gets passed to our constructor – you could use any type here.

using System.Collections;
using System.Collections.Generic;
using UnityEditor;

class AllRequiredLevelsProvider : IEnumerable<string>
{
    IEnumerator<string> IEnumerable<string>.GetEnumerator()
    {
        // Find every scene asset under our levels folder and yield its path
        var allLevelGUIDs = AssetDatabase.FindAssets("t:Scene", new[] {"Assets/Scenes/Levels"} );
        foreach(var levelGUID in allLevelGUIDs)
        {
            var levelPath = AssetDatabase.GUIDToAssetPath(levelGUID);
            yield return levelPath;
        }
    }
    public IEnumerator GetEnumerator() => (this as IEnumerable<string>).GetEnumerator();
}

Our TestFixture looks like a regular fixture, except also with the source attribute linking to our provider. Its constructor takes a string that defines which level to load.

[TestFixtureSource(typeof(AllRequiredLevelsProvider))]
public class LevelSmokeTests
{
    private string m_levelToSmoke;
    public LevelSmokeTests(string levelToSmoke)
    {
        m_levelToSmoke = levelToSmoke;
    }

Now our fixture knows which level to test, but not how to load it. TestFixtures have a [SetUp] attribute which runs before each test, but loading the level fresh for each test would be slow and wasteful. Instead let’s use [OneTimeSetUp] (👀 at the inconsistent capitalisation) to load and unload our level once per fixture. This depends somewhat on your game implementation, but for now let’s go with UnityEngine.SceneManagement:

// class LevelSmokeTests {
    [OneTimeSetUp]
    public void LoadScene()
    {
        SceneManager.LoadScene(m_levelToSmoke);
    }

Finally, we need some tests that would work on any level we throw at it. The simplest approach is probably to just watch the console for errors as we load in, sit in the level, and then as we load out. Any console errors at any of these stages should fail the test.

UTF provides LogAssert to validate the output of the log, but at this time it only lets you prescribe what should appear. We don’t care about Debug.Log() output, but want to know if there was anything worse than that. Particularly, in our case, we’d like to fail for warnings as well as errors. Too many “benign” warnings can hide serious issues! So, here’s a little utility class called LogSeverityTracker that helps check for clean consoles. Check the comments for usage.
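(The class itself was linked from the original post; as a rough idea of the shape such a helper can take – this is a sketch of my own, not Roll7’s actual code – something like the following works. It listens to Unity’s Application.logMessageReceived and records anything noisier than a plain log:)

using System.Collections.Generic;
using NUnit.Framework;
using UnityEngine;

// Sketch of a console-watching helper (not the original LogSeverityTracker).
public class LogSeverityTracker
{
    private readonly List<string> m_problems = new List<string>();

    // Call from [OneTimeSetUp] to start listening to the console.
    public void StartTracking()
    {
        m_problems.Clear();
        Application.logMessageReceived += HandleLog;
    }

    // Call from [OneTimeTearDown] to stop listening.
    public void StopTracking()
    {
        Application.logMessageReceived -= HandleLog;
    }

    private void HandleLog(string condition, string stackTrace, LogType type)
    {
        // Anything noisier than Debug.Log - warnings included - counts as a problem.
        if (type != LogType.Log)
        {
            m_problems.Add($"{type}: {condition}");
        }
    }

    public void AssertCleanLog()
    {
        Assert.IsEmpty(m_problems, string.Join("\n", m_problems));
        m_problems.Clear();
    }
}

(The fixture would then hold a field such as private LogSeverityTracker m_logTracker = new LogSeverityTracker(); and call StartTracking() from the [OneTimeSetUp] method, which is what the m_logTracker calls below assume.)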

Our tests can use the [Order] attribute to ensure they happen in sequence:

// class LevelSmokeTests {
    [Test, Order(1)]
    public void LoadsCleanly()
    {
        m_logTracker.AssertCleanLog();
    }

    [UnityTest, Order(2)]
    public IEnumerator RunsCleanly()
    {
        // wait some arbitrary time
        yield return new WaitForSeconds(5);
        m_logTracker.AssertCleanLog();
    }

    [UnityTest, Order(3)]
    public IEnumerator UnloadsCleanly()
    {
        // how you unload is game-dependent 
        yield return SceneManager.LoadSceneAsync("mainmenu");
        m_logTracker.AssertCleanLog();
    }

Now we’re at the point where you can hit Run All in the Test Runner and see each of your levels load in turn, wait a while, then unload. You’ll get failed tests for console warnings or errors, and newly-added levels will get automatically-generated test fixtures.

More tests are undoubtedly more useful than fewer. Depending on the complexity and setup of your game, the next steps might be to get the player to dumbly walk around for a little bit. You can get a surprising amount of info from a dumb walk!

In part 2, I’ll outline how I added all this to jenkins. It’s not egregiously hard, but it can be a bit cryptic at times.

September 11, 2020

Reading List 265 by Bruce Lawson (@brucel)

September 09, 2020

Dos Amigans by Graham Lee

Tomorrow evening (for me; 1800UTC on 10th Sept) Steven R. Baker and I will twitch-stream our journey learning how to write Amiga software. Check out dosamigans.tv!

September 06, 2020

I finally got around to reading Cal Newport’s latest book: Digital Minimalism. Newport’s previous book, Deep Work, is one of my favourites so I had high expectations – and it delivered. Go read it if you haven’t already.

Newport makes the case that much of the technology that we use – in particular smartphones and social media – has a detrimental impact on our ability to live a deep life. Newport describes the deep life as “focusing with energetic intention on things that really matter – in work, at home, and in your soul – and not wasting too much attention on things that don’t.”

The antidote to the addiction that many of us have to our devices is to become a digital minimalist. Newport defines Digital Minimalism as, “a philosophy of technology use in which you focus your online time on a small number of carefully selected and optimized activities that strongly support things you value, and then happily miss out on everything else.”

The first step to becoming a digital minimalist is to do a thirty-day digital declutter, where you take a break from optional technologies in your life to rediscover more satisfying and meaningful pursuits.

There are three steps to the digital declutter process:

  1. Define your technology rules. Decide which technologies fall into the “optional” category. The heuristic Newport recommends is: “consider the technology optional unless its temporary removal would harm or significantly disrupt the daily operation of your professional or personal life”.
  2. Take a thirty-day break. During this break, explore and rediscover activities and behaviours that you find satisfying and meaningful.
  3. Reintroduce technology. Starting from a blank slate, slowly reintroduce technologies that add value to your life and determine how you will use them to maximise this value.

This was a timely read for me. I’ve slipped back into bad habits despite knowing full well the toll that social media and my smartphone can have on me.

September felt like a good time to hit reset and do my own digital declutter experiment so for the next thirty days I’ve committed to:

  • No Twitter use (deleted TweetBot from my phone and iPad, blocked access on Mac)
  • No Instagram use (deleted app from my phone)
  • No email on phone
  • No Trading 212 on my phone
  • Not wearing my Apple Watch
  • No news consumption (RSS and a brief check of the news in the morning is okay)

I’ve introduced a few rules that I’m doing my best to follow:

  • No screens in the bedroom
  • Leave my phone in another room while working
  • Run Focus on my Mac while doing 40 minute work sessions (this blocks email, Slack, and a slew of distracting websites)

I’m also tracking a few habits and metrics every day using the Theme System Journal:

  • 10 minutes+ of meditation
  • 30 minutes+ reading
  • 10k steps
  • No alcohol
  • Journaling
  • Whether I’ve completed my daily highlight
  • The number of sessions of deep work I’ve completed (I aim for 4 40-minute sessions per day)
  • Hours on my phone/pickups via iOS’s ScreenTime feature

I am convinced that a reduction in the time I spend on Twitter and Instagram will be beneficial. The Apple Watch is more interesting: I use the health and fitness tracking features which I find useful, but I am also convinced that it creates a low-level anxiety (have I closed my rings? what’s my heart rate? etc.). It’ll be interesting to see how I feel about the Apple Watch at the end of the month.

September 04, 2020

Free as in Water by Graham Lee

The whole “Free as in beer versus free as in freedom” thing confuses people. Or maybe it doesn’t, and it allows detractors to sow fear, uncertainty and doubt over free software by feigning confusion. Either way, people express confusion.

What is “free as in beer”? Beer is never free, it costs money. Oh, you mean when someone gives me free beer. So, like a round-ordering system, where there’s an expectation that I’ll reciprocate later? Or a promotional beer, where there’s a future expectation that I’ll buy more beer?

No, we mean the beer that a friend buys you when you’re out together and they say “let’s get a couple of beers”. There’s no financial tally kept, no expectation to reciprocate, because then it wouldn’t be friendship: it would be some exchange-mediated relationship that can be nullified by balancing the books. There’s no strings attached, just beer (or coffee, or orange squash, whatever you drink). You get the beer, you don’t pay: but you don’t get to make your own beer, or improve that beer. Gratuity, but no liberty.

Various extensions have been offered to the gratis-vs-libre discussions of freedom. One of the funniest, from a proprietary software vendor’s then-CEO, was Scott McNealy’s “free as in puppies”: implying that while the product may be gratis, there’s work to come afterwards.

I think another extension to help software producers like you and me understand the point of the rights conferred by free software is “free as in water”. In so-called developed societies, most of us pay for water, and most of us have a reasonable expectation of a right to access for water. In fact, we often don’t pay for water, we pay for the infrastructure that gets clean, fresh water to our houses and returns soiled water to the treatment centres. If we’re out of our houses, there are public water fountains in urban areas, and a requirement for refreshment businesses to supply fresh water at no cost.

Of course, none of this is to say that you can’t run a for-profit water business. Here in the UK, that infrastructure that gets the main water supply to our houses, offices and other buildings is run for profit, though there are certain expectations placed on the operators in line with the idea that access to water is a right to be enjoyed by all. And nothing stops you from paying directly for the product: you can of course go out and buy a bottle of Dasani. You’ll end up with water that’s indistinguishable from anybody else’s water, but you’ll pay for the marketing message that this water changes your life in satisfying ways.

When the expectation of the “freedom to use the water, for any purpose” is violated, people are justifiably incensed. You can claim that water isn’t a human right, and you can expect that view to be considered dehumanising.

Just as water is necessary to our biological life, so software has become necessary to our social and civic lives due to its eating the world. It’s entirely reasonable to want insight and control into that process, and to want software that’s free as in water.

In the spring of 2020, the GNOME project ran their Community Engagement Challenge in which teams proposed ideas that would “engage beginning coders with the free and open-source software community [and] connect the next generation of coders to the FOSS community and keep them involved for years to come.” I have a few thoughts on this topic, and so does Alan Pope, and so we got chatting and put together a proposal for a programming environment for making simple apps in a way that new developers could easily grasp. We were quite pleased with it as a concept, but: it didn’t get selected for further development. Oh well, never mind. But the ideas still seem good to us, so I think it’s worth publishing the proposal anyway so that someone else has the chance to be inspired by it, or decide they want it to happen. Here:

Cabin: Creating simple apps for Linux, by Stuart Langridge and Alan Pope

I’d be interested in your thoughts.

August 28, 2020

A surprising new creative hobby – Painting Portraits

I started painting at the start of lockdown and here’s my story.

It’s May, Boris Johnson has just told us all we’re not to leave the house and the nation stays indoors. Like many, I turned to surprising new areas of interest to keep me sane – for me it was painting portraits of famous people.

I’ve never picked up a paintbrush for artistic reasons and, if I’m honest, I’ve never really liked art. Yet as of August 2020 I have over 80 paintings to my name. So what happened?

Well, it all started with Grayson Perry’s Art Club – I actually missed the portrait episode but when I saw my wife was watching it I sat and watched with interest.

It was actually seeing Joe Lycett (from my home town of Solihull) painting Chris Whitty that made me think “Oh, I’d like to have a go at this, he looks like he’s having fun regardless of the finished product.”

My early acrylic portrait paintings – the first paintings I loved.

Thankfully for me the start-up was cheap, as my mother-in-law Rosy had loads of acrylic paints and an easel I could have. So I got painting, and although the first few were very rough I posted them to Facebook, and people really reacted to them, which encouraged me to keep painting.

It was the Richard Ayoade and Frank N’ Furter (Rocky Horror Show) paintings that made me feel I could actually take this further.

A whole load of my acrylic portrait paintings – my early works.

Setting up Portmates

At the time I was posting these paintings to my personal Instagram and Facebook pages, which had limited reach as my Facebook is pretty locked down, so it was natural to create a new page for my artwork.

I decided to call my brand Portmates, which is a mixture of Portraits and ‘Your mates’ as I was calling my paintings mates at the time. It’s a bit cheesy but it’s stuck.

I did a self portrait which my Facebook page followers LOVED, and that became my brand identity; I’ve even done versions with 80’s-style glasses for other art sets, but more on that later.

My self portrait became my brand ID.

Page Growth

I was loving it. For once in my career I was in control. I didn’t have to wait for developers or beg, borrow and steal time from people so I could launch products of my own. I just painted and used my background in product design to release things I could sell.

In the early days I was doing lots of ‘Win a portrait’ competitions and free commissions which really helped get my art in front of new people. I have a whole gallery of people with my paintings. It’s ace! I love seeing them out in the wild. One painting is even hanging in a local hairdresser’s.

Some of my art sales and commissions – out in the wild!

Facebook groups

Part of my enjoyment is sharing to various Facebook groups. I can’t thank Staceylee at the Portrait Artists UK group enough for the shares and kind words. It’s been a huge part of my journey and will continue to be so.

Stacey has often shared some of my live painting sessions and even purchased a set of cards from me; she has really pushed me forward, even if she’s unaware she’s been doing it.

The other group I had early success with was the Grayson Perry Art Club group; they also fed me some very lovely feedback in the early days.

Me, Mulder and Scully painting – I joined the FBI.

Developing my style

One of the things that struck me early on was how many of the established artists in these groups were praising my style and how I’d managed to find my artistic voice really quickly.

This led me to think about the paintings I’ve done before and how I could expand them, make them feel more like original pieces of art rather than perhaps fairly crude paintings.

A weekend of painting portraits – a weekend’s work.

I had already been using fairly heavy lines in my paintings, and once I upgraded some equipment (better paintbrushes, moving from paper to canvases) I was able to be more experimental with my paintings. (It’s far easier to make mistakes on canvas as you can paint over them without destroying the paper.)

I’m now adding far more colour and being braver with my art. Because I have a design background I’m leaning into that – I think my art is somewhere between graphic art and portrait art.

More of my portrait art

Selling art

To my surprise I was getting enquiries about buying some of my paintings. To this date I’ve sold three paintings (roughly one a month), which as a hobby ain’t bad at all!

I also started creating my own products such as Art Cards and Prints, which, if I’m honest, haven’t sold very well, but I have sold a few… and as it’s still early days I’m happy with having some stock, so when I do find new audiences they’ll have something to buy from me.

My 80’s action Art Cards.

I’ve experimented with Shopify and Etsy but settled on having a Big Cartel store. You can of course buy them here.

I’m going to be focusing on selling originals for the next few months before they take over my office and I have to use them as fuel.

I’ve been approached by an online gallery which is exciting, my work should be available to sell from them very soon.

Oh by the way, if you’re interested in the process of getting products from painting to printed items I can sell I did a tweet thread about how I did it (TL;DR, scan, touch up, print) – Read it here.

Art Cards Volume 1 – available for sale.

Benefits of painting

My skin has always sucked. I scratch myself senseless and it affects every element of my life. I don’t sleep well and it affects my mental health. Thankfully since I’ve started painting my skin has recovered MASSIVELY.

Although I still have a way to go before my skin is healthy, the painting has really helped me destress and focus my efforts on something other than work, fitness or TV.

When I paint there’s nothing else on my mind, just focusing on the art and getting it how I want it to look (which can take a while!)

(Fitness is still really important to me; I’m just less obsessed with how I look in a mirror.)

The future

The future is really exciting. I’m planning on expanding my collection of portraits, perhaps even aiming to do a self-funded exhibition, or even one at an established gallery.

I plan on releasing more prints in time for the Black Friday / Christmas rush. I know there’s demand for some of the legends, especially Elton and Freddie, so I’ll probably get some of those made ready for Christmas.

Some legends

I’m also going to be exploring other mediums, I’ve ordered two blank skateboards which I’m going to be painting some designs on!

But if I’m honest with myself, I don’t see art being my full-time focus any time soon, as I still really love my UI/UX app design work, so I’m mainly going to keep enjoying the new creative outlet I’ve found for myself.

I also need to remind myself I’m very new to this, and if any good things are on the horizon for my art they will happen in time. ❤

So TL;DR – I paint now.

Please follow me on all the socials, my Instagram is buzzing at the moment and my Facebook page is a real lovely community of people. Find all my links here.

I’m fairly confident nobody is reading this far down the page but feel free to tweet me with your own lockdown creative stories.

Paintings I did on holiday.

