Last updated: February 28, 2021 06:22 AM (All times are UTC.)
With my review of the Ooznest WorkBee CNC being published in Hackspace Magazine last week, I figured this would be a good time to run through the general process I follow when designing for, and running jobs on, the CNC.
I have to admit, I’m not entirely sure where the idea for the eventual design came from; I just know I wanted something personal, which could act as a portfolio piece for Monumental Me.
It didn’t start life as a fully formed concept, rather as a vague idea of two sets of footprints to represent Lucy and me.
I’m no artist, but I’m slowly getting better at pulling third party assets into (hopefully) cohesive designs. More often than not I find myself browsing VectorStock, which I’ve come to rely on as an indispensable resource, and a further source of inspiration.
This is where the idea of the cat prints going off on their own came from, as the asset already existed, and perfectly represented the headstrong nature of Apricat – so all I needed to do was import it and incorporate it into the design.
Affinity Designer is my tool of choice for producing the bulk of my designs. I’m barely scratching the surface of what it can do, but each time I boot it up, I learn something new.
For this particular design, I had an asset which featured something like 10 different pairs of feet, from which I chose the ones I liked best, being sure to choose different shaped feet for each of us, and then set about positioning each footprint individually.
The design gets exported from Affinity Designer as an SVG, which I import into Vectric VCarve. This powerful yet simple-to-use tool makes it easy to calculate the G-code toolpaths required by the CNC, all while providing an accurate 3D representation of what the final design will look like.
It’s not a cheap piece of software, but it does a hell of a lot of heavy lifting, meaning that even a complete novice such as myself can produce excellent results.
Back in the real world, after applying a layer of vinyl, I’ll secure the stock I’m using to the spoilboard on the CNC using some screws (I need a better clamping method, but this will do for now), then will run the job via my workshop laptop.
Ear and eye protection are of paramount importance, as is my attention to the job as it runs, to ensure nothing goes wrong. On more than one occasion I’ve had a small miscalculation cause the job to go askew, forcing me to perform an emergency stop. Thankfully, these are becoming more and more rare, as I learn from my mistakes.
This is where the layer of vinyl applied in the previous step helps by providing a mask over any parts of the display we don’t want paint on.
A healthy coat of sanding sealer is applied to the surface to help prevent the paint from bleeding into the grain, then I let loose with the spray paint, being sure to apply it from all angles to ensure the whole of the carved section is covered.
Once the paint is dry and the vinyl removed, I do a light sanding. I don’t want to be too aggressive at this point, or I’ll sand away some of the more intricate details of the carving.
This is followed by a coat of Danish Oil, which helps the natural beauty of the wood to show through, while providing a small amount of protection. You can apply up to three or four coats, but I typically leave it at one for anything which is designed for display purposes, rather than active use or handling.
No display is complete without the ability to hang it. I’ve tried using Command strips for mounting displays on the wall in the past, but the Danish oil stopped them from sticking very well, so I’ve started to use these sawtooth picture frame hangers.
Bang a nail in the wall, hang the display on it, step back, and enjoy your work.
You’re probably aware that between this blog, De Programmatica Ipsum, and various books, I write a lot about software engineering and software engineers.
You may know that I also present a podcast on software engineering topics, and co-host two live streams on Amiga programming and Objective-C programming.
I do all of this because I want to. I want to engage in conversations about software engineering; I want to help my colleagues and peers; I want to pass on my experience to others. Of course, this all takes rather a lot of time, and a not-insignificant amount of money. Mostly in hosting fees, but also a surprising chunk on library memberships, purchase of out-of-print materials on software engineering, and event attendance. More than my academic (i.e. not-for-profit) salary was designed to withstand. None of these projects is ad-supported, and that’s not about to change.
I’ve launched a Patreon page, where if you enjoy anything I write, say, or show, you can drop me a little bit of cash to say thanks. There’s no obligation: nothing I currently make freely available is going behind a paywall, and I’m not planning any “subscriber-only content” in the future. All I’m saying is if you’ve enjoyed what I’ve been producing, and having my voice in the software engineering fray, here’s another way in which you can say thank you.
Yesterday, I received my first Covid vaccine. I was expecting to be in the next group of people invited, as I have multiple sclerosis, which is a disease in which my own immune system tries to kill me, and many Covid deaths are caused by the body’s own immune system. My good chum Stuart Langridge wrote up his vaccination experience; here’s mine.
Out of the blue I received an SMS on Friday morning:
Our records show that you are eligible for your COVID vaccination. Appointments are now available at Villa Park and Millennium Point. Book here: https://www.birminghamandsolihullcovidvaccine.nhs.uk/book/
Your GP Surgery.
The website is on a legit domain, and linked to a booking system run by drdoctor.co.uk, which was a pretty crap experience (which I reported to them); top tip: you need to have your NHS number to book, and if you don’t, you might lose your chosen slot and have to start all over again. And that was that; a confirmation SMS came through:
Confirmation of your appointment: Sat 13 Feb at 4:10pm at Villa Park, B6 6HE. You appointment at Villa Park COVID Vaccination Clinic is confirmed at Villa Park, Holte Suite, Trinity Road, Birmingham, B6 6HE. https://www.avfc.co.uk/villa-park/travel-parking
Villa Park is the stadium for the worst Birmingham football team, so it was nice that something positive was going to happen there. As I approached in the car, there were plenty of temporary signposts to the Covid Vaccination Centre to help people find it.
I arrived 20 minutes early (I’m paranoid about missing appointments) and although the site had told me not to enter more than 10 minutes before my slot, it didn’t appear to be crowded so I went in. It was basically a big room with check-in desks around the perimeter and at least 20 vaccination stations in the centre. The bloke at the door told me to go up to check-in desk 12; the lady asked me for my reference number (I hadn’t been sent one), my NHS number (I hadn’t been told to bring it) and then my name and address.
After verifying that I had an appointment, she asked me to sit on one of the chairs placed 2 metres apart, facing her (so we weren’t all staring at people having their jabs while we waited, which was a thoughtful touch for those nervous of needles, like me).
A friend had been vaccinated the day before at a different vaccination hub, where a clerical error meant too many people had shown up and it took her 3 hours from entering to leaving, so I’d brought a book. But I only had time to take the selfie above before a man came up and asked me to follow him to a vaccination station where an assistant was finishing cleaning the chair. I sat down, confirmed my name, and rolled up my sleeve.
The syringe was bigger than a flu jab and while I honestly felt no pain at all as the needle went in, it was in my arm for a few seconds as there was presumably more vaccine in there than the flu jab, which is pretty much instantaneous. Then the syringe-wielder told me that I had to wait in another area for 15 minutes before driving, laughed when I asked if I could have a sticker, but gave me the best sticker I’ve ever received:
I asked which vaccine I’d received; it was the Oxford one. She gave me an info leaflet, a card with a URL and a phone number for booking the second jab and graciously accepted my gratitude. By 16:06, four minutes before my appointment, I was sitting in the waiting area, reading my book for 15 minutes.
The whole thing was brilliant; calm, professional, well-organised and reassuring. Today my arm has a slight soreness (just like my annual flu jab) but I feel fine. Actually, I feel better than fine. I feel optimistic, for the first time in a year.
Doubtless, the government will try to claim this as their triumph. It isn’t. It’s a triumph of science and socialised public sector medicine. The government gave billions to private sector cronies for a test-and-trace fiasco and for the last ten years have underfunded the National Health Service. Many leading Conservatives have openly called for its privatisation. Remember that when the next election comes around.
Thank you, Science; thank you, social health care.
Like every other thought-leader, I follow Mike Taylr on social media. Ever since Shingy left AOL, “Mikey” has moved to the top spot of everyone’s Twitter “Futurist Gurus” Twitter list. This morning I awoke to read Twitter abuzz with excitement over Mike’s latest Nolidge Bom:
new interview question: on a whiteboard, re-implement the following in React (using the marker color of your choice) pic.twitter.com/o6F1iMzhQY
— Mike Taylor (@miketaylr) February 10, 2021
Of course, like anyone who’s ever sat a maths exam and been told to “show your working out”, you know that the widely diverse interview panel of white 20-ish year old men is as interested in how you arrived at your answer as in the answer itself. Given Mikey’s standing in the industry and the efficiency of his personal branding consultants, this question will soon be common for those interviewing in Big Tech, as it’s an industry that prides itself on innovative disruption by blindly copying each other. So let’s analyse it.
It’s obvious that the real test is your choice of marker colour. So, how would you go about making the right decision? Obviously, that depends where you’re interviewing.
If you’re interviewing for Google or one of its wannabes, simply set up a series of focus groups to choose the correct shade of blue.
If you’re interviewing for Apple or its acolytes, sadly, white ink won’t work on a whiteboard, no matter how aesthetically satisfying that would be. So choose a boring metallic colour and confidently assert any answer you give with “I KNOW BEST”.
If you’re interviewing for Microsoft, the colour doesn’t matter; just chain the marker to the whiteboard and say “you can’t change the marker, it’s an integral part of the whiteboard”, even after it stops working.
If you’re interviewing for Facebook or one of its wannabes, trawl through previous posts by the panellists, cross reference it with those of their spouses, friends and their friends to find their favourite colours, factor in their Instagram posts, give a weighting to anything they’ve ever bought on a site they’ve signed in using Facebook, and use that colour while whispering “Earth is flat. Vaccines cause cancer. Trump is the saviour. Muslims are evil. Hire me” subliminally over and over again.
Good luck in the new job! May your stocks vest well.
The other day I was tearing my hair out wondering why an HTML form I was debugging wouldn’t focus on the form field when I was tapping on the associated label. The HTML was fine:
<label for="squonk">What's your name, ugly?</label>
<input id="squonk">
I asked my good chum Pat “Pattypoo” Lauke for help, and without even looking at the form or the code, he asked “Does it turn off pointer events in the CSS?”
second time i've seen this recently, so to save others a lengthy bug-hunt: if you have a properly associated <label>, but clicking/tapping it doesn't set focus to its correctly associated <input>/form element… check if for some reason the <label> has pointer-events:none in CSS
— patrick h. lauke #toryScum #clapForFlagWankers (@patrick_h_lauke) February 5, 2021
Lo and FFS, there it was: label {pointer-events:none;}! This daft bit of CSS breaks the browser default behaviour of an associated label, and makes the hit target smaller than it would otherwise be. Try clicking in the “What’s your name, ugly?” text:
I’m jolly lucky to have the editor of the Pointer Events spec as my chum. But why would anyone ever do this? (That line of CSS, I mean, not edit a W3C spec; you do the editing for the sex and the glory.)
Once again, Pat whipped out his code ouija board:
aha, now i remember when i first saw a few weeks ago – testing something based on material design for web https://t.co/YkEKXkU0To pic.twitter.com/31S74X1i4R
— patrick h. lauke #toryScum #clapForFlagWankers (@patrick_h_lauke) February 5, 2021
And, yes—the presentation had originally been Material Design floating labels, and this line of CSS had been cargo-culted into the new design. So don’t disable pointer events on forms—and, while you’re at it, Stop using Material Design text fields!
That nice Marcy Sutton asked me to test and give feedback about a new product she’s involved with called Evinced, which describes itself as an “Enterprise grade digital accessibility platform for modern software development teams”. Quite what “enterprise grade” means is beyond me, but it’s basically software that can crawl a website from a root domain and check its code against some rules and report back. There are similar tools on the market, and I’ve recently been working with a Client to integrate Tenon into their workflow, so wanted to compare them.
“Now hang on!”, I hear you say. “Automated tools are terrible!” Well, yes and no. Certainly, overlays etc that claim to automatically fix the problems are terrible, but tools that help you identify potential problems can be very useful.
It’s also true that automated tools can’t spot every single accessibility error; they can tell you if an image is missing alternative text, but not that <img src="dog.png" alt="a cat"> has useless alt text. Only a human can find all the errors.
However, many errors are machine-findable. The most-common errors on WebAIM’s survey of the top million homepages are low contrast text, missing alternative text, empty links, missing form input labels, empty buttons and missing document language, all of which were found by running automated tests on them (which, presumably, the developers never did before they were pushed to production).
I personally feel that a good automated scanner is a worthwhile investment for any large site to catch the “lowest hanging fruit”. While some things can’t be automatically tested, other things can, and other aspects live in a grey area depending on the rigour of the test.
For example, a naive colour contrast test might compare CSS color with background-color, and give a pass or fail; a less naive test will factor in any CSS opacity set on the text and ignore off-screen/ hidden text. A sophisticated contrast test could take a screenshot of text over an image or gradient and do a pixel-by-pixel analysis. To do a test on a screenshot would require actually rendering a page. Like Tenon, Evinced doesn’t just scan the HTML, but renders the pages in a headless browser, which allows the DOM to be tested (although I don’t believe it tests colour contrast in this way).
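To make the naive end of that spectrum concrete, here’s a minimal sketch (my own illustration, not code from Tenon or Evinced) of the WCAG 2.x contrast calculation such a checker would run once it has resolved a foreground and background colour to RGB values:

```ruby
# Sketch of a naive WCAG 2.x contrast check on two resolved RGB colours.
# Converts each sRGB channel to linear light, computes relative luminance,
# then the contrast ratio (lighter + 0.05) / (darker + 0.05).

def linearize(channel)
  c = channel / 255.0
  c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055)**2.4
end

def relative_luminance(r, g, b)
  0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)
end

def contrast_ratio(fg, bg)
  lighter, darker = [relative_luminance(*fg), relative_luminance(*bg)].sort.reverse
  (lighter + 0.05) / (darker + 0.05)
end

# Black text on a white background is the maximum possible ratio, 21:1.
puts contrast_ratio([0, 0, 0], [255, 255, 255]).round(2)  # => 21.0
```

A result below 4.5:1 fails WCAG AA for normal-sized text; the screenshot-based approach exists precisely because resolving those two input colours (through opacity, gradients and background images) is the hard part.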
Evinced uses axe-core, an open-source library also used by Google Lighthouse. It also contains other (presumably proprietary secret-source) tests so that “if interactable components are built with divs, spans, or images – Evinced will detect if they are broken”.
The proof of the pudding with automated site scanners is how well they report the errors they’ve found. It’s all very well reporting umpty-squillion errors, but if they’re not reported in any actionable way, it’s not helpful.
As with all the automated scanners I’ve tried, errors are grouped according to severity. However, if those categories correspond with WCAG A, AA and AAA violations, that’s not made clear anywhere.
It’s a fact of corporate life that most organisations will attempt to claim AA compliance, so need to know the errors by WCAG compliance.
One innovative and useful reporting method is by what Evinced calls component grouping: “Consolidates hundreds of issues to a handful of broken code components”.
With other scanners, it takes a trained eye to look through thousands of site-wide errors and realise that a good percentage of them are because of one dodgy piece of code that is repeated on every page. Evinced analyses pages and identifies these repeated components for you, so you know where to concentrate your efforts to get gross numbers down. (We all know that in the corporate world, a quick fix that reduces 10,000 errors to 5,000 errors buys you time to concentrate on the really gnarly remaining problems.)
There’s a vague suggestion that this grouping is done by Artificial Intelligence/ Machine Learning. The algorithm obviously has quite clever rules, and shows me a screenshot of areas on my pages it has identified as components. It’s unclear whether this is real Machine Learning, e.g. whether it will improve as its corpus of naughty pages gets larger.
I don’t recall signing anything to allow my data to be used to improve the corpus; perhaps a discount for not-for-profits/ time-limited demos could be offered to organisations allowing their data to be added to the training data, if indeed that’s how it “learns”.
Many of these site scanners are made by engineers for engineers, and have the high levels of UX one would expect from JavaScripters.
Tenon has some clunkers in its web interface (for example, it’s hard to re-run a previously defined scan) because it’s most commonly accessed via its API rather than its web back-end.
Evinced makes it easy to re-run a scan from the web interface, and also promises an API (pricing is not announced yet) but also suffers from its UI. For example, one of my pet peeves is pages telling me I have errors but not letting me easily click to see them, requiring me to hunt. The only link in this error page, for example, goes to the “knowledge base” that describes the generic error, but not to a list of scanned pages containing the error.
(After I gave feedback to the developers, they told me the info is there if you go to the components view. But that requires me to learn and remember that. Don’t make me think!)
There’s also terminology oddities. When setting up a new site for testing, the web interface requires a seed URL and to press a button marked “start mapping”, after which the term “mapping” is no longer used, and I’m told the system is “crawling”. Once the crawl was complete, I couldn’t see any results. It took a while for me to realise that “crawling” and “mapping” are the same thing (getting a list of candidate URLs) and after the mapping/ crawling stage, I need to then do a “scan”.
A major flaw is the lack of any ability to customise tests. In Tenon I can turn off tests on an ad-hoc basis if, for example, one particular test is giving me false positives, or I only want to test for level A failures. This is unavailable in Evinced’s web interface.
Another important but often-overlooked UI aspect of these “Enterprise” site scanners is the need to share results across the enterprise. While it’s common to sell “per-seat” licenses, it’s also necessary for the licensee to be able to share information with managers, bean-counters, legal eagles and the like. Downloading a CSV doesn’t really help; it’s much more useful to be able to share a link to the results of a run and let the recipient investigate the reports and issues, but not let them change any configuration or kick off any new scans. This is missing in both Evinced and Tenon.
The system is currently in Beta and definitely needs some proper usability testing with real target users and UI love. One niggle is the inaccuracy of its knowledge base (linked from the error reports). For example, about the footer element, Evinced says
Since the <footer> element includes information about the entire document, it should be at the document level (e.g., it should not be a child element of another landmark element). There should be no more than one <footer> element on the same document
This is directly contradicted by the HTML specification, which says
The footer element represents a footer for its nearest ancestor sectioning content or sectioning root element… Here is a page with two footers, one at the top and one at the bottom… Here is an example which shows the footer element being used both for a site-wide footer and for a section footer.
I saw no evidence of this incorrect assumption about the footer element in the tests, however.
All in all, the ability of Evinced to identify repeated ‘components’ and understand the intended use of some splodge of a JavaScripted labyrinth of divs is a welcome feature and its main selling point. It’s definitely one to watch when the UX is sleeker (presumably when it comes out of Beta).
Another day, another developer explaining that they don’t follow some popular practice. And their reason? Nothing more than because other people do the thing. “Best practices don’t exist,” they airily intone. “They’re really mediocre practices”.
In one sense, they’re correct. Best practices need to be evidence-based, and there’s precious little evidence in software engineering. In a regulated profession, you could avoid using accepted best practice, but if something went wrong and you ended up on the receiving end of a malpractice suit, you would lose.
So best practice as an argument in software engineering has two weaknesses: the first is that there’s no basis in evaluation of practice; and the second is that being a monetised hobby rather than a profession there’s no incentive to discover and adopt best practice anyway.
But those arguments mean that best practices are indistinguishable from alternative practices, not inherently worse. If a programmer discards a practice because they claim it’s considered best practice, they’re really just stamping their foot and shouting “I don’t wanna!”
They’re rejecting the remaining evidence in favour of the practice—that it’s survived scrutiny by a large cohort of their peers—in favour of making their monetised hobby look more like their headcanonical version of the hobby. “We are uncovering better ways of making software by doing it and by helping others to do it” be damned: I want to use this thing I read a substack post about yesterday!
Dig deeper, and you’ll find only platitudinous justification based on thought-terminating cliche: I’ve already covered “Reasoning about code”, and maybe some time I’ll cover “Right tool for the job”. This time, let’s look at “things won’t advance unless some of us try new ways of doing it”.
People tried new ways of making new steam engines all the time, during the industrial revolution. People tried new ways of making chimneys all the time, during the 15th and 16th centuries. A lot of factories and trains exploded, and a lot of buildings burnt down. If you live in a house with a chimney now, or you have ever taken a train, it’s significantly less likely to have self-immolated than at earlier times in history.
It’s not, for the most part, due to misunderstood lone geniuses rejecting what everybody else was doing, but a small amount of incremental development and a large amount of theoretical advance. It’s no coincidence that the field of thermodynamics advanced leaps and bounds during the steam age. Brad Cox makes this point about software too, in almost everything he wrote on the topic: you don’t get as much advance from random walks in the cottage industry as you do from standardisation, mass production, the division of labour, and interchangeable parts that can be evaluated on merit with reference to a strong theoretical underpinning.
Of course, the “reason about code” crowd try to stop this from happening, because if that advance happened then the code-reasoning would quickly disappear to be replaced with the problem-domain-reasoning that’s significantly harder and less of a hobby. Hence the sabotage of best practice: let’s put a stop to this before anybody realises it’s more than sufficient to the task at hand.
Alan Kay once referred to a LISP evaluator written in LISP as “the Maxwell’s Equations of software”. But what software needs before a James Clerk Maxwell are the Gibbs, Boltzmanns, Joules and Lavoisiers, the people who can stop us from blowing things up in production.
Starting next week: [objc retain]; in which Steven Baker and I live-code Objective-C on a modern free software platform. Wednesday, February 10th, 1900UTC. More info at objc-retain.com.
In these difficult times, Lawrence Vagner and I felt a solemn duty to heal the world with a hopeful message of love and cross-cultural unity to a disco beat. So here is our Eurovision entry: Saperlipopette!
Get your hotpants on & boogie for a better tomorrow.
“Saperlipopette” is a very dated French “swear word” translating to “goodness me” or “fiddlesticks”, the kind of thing you’d say if a child were in earshot. My chum Lawrence Vagner taught it to me when they invited me to speak at ParisWeb. I got a daft tune in my head and “Saperlipopette” fitted the melody. (The rest of the lyrics practically wrote themselves, and make a damned sight more sense than the 1968 song with the same title. In fact, I had to discard a couple of verses.) I invited Lawrence to duet with me, which was fun as they’d never sung before, and we had to do it remotely due to lockdown.
It’s made with Reason Studio, using the Reason Disco and Norman Cook refills as well as built-in instruments, and a French accordion sample I found. My chum Shez twiddled the knobs, Lawrence made the website, which is hosted by Netlify.
With Ruby 3.0, parts of the standard library are becoming default gems.
A Ruby installation has three parts. The standard library, default gems, and bundled gems.
The standard library is the core language and utilities. Default gems are gems that cannot be removed, and need not be declared as dependencies. Bundled gems are gems that come along with a Ruby installation, but must be explicitly declared as a dependency, and can be removed.
There is an ongoing effort to extract parts of the Ruby standard library to default gems. By keeping the standard library itself lean, you free its component parts from being unnecessarily tied to a larger development and release cycle, as well as making bits easier to remove or deprecate as time goes on.
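To see the distinction in practice, here’s a small sketch using csv (a default gem since Ruby 2.6, if I recall correctly): it ships with every Ruby installation and can’t be uninstalled, but unlike the core classes it still needs an explicit require before use:

```ruby
# csv ships with Ruby as a default gem: always present, cannot be
# uninstalled, but must still be required before the CSV class exists.
require "csv"

rows = CSV.parse("a,1\nb,2")
p rows  # => [["a", "1"], ["b", "2"]]
```

Under Bundler, a default gem like this needs no Gemfile entry; a bundled gem such as minitest does, and can be removed with gem uninstall.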
It’s my birthday!
This year, in the midst of a coronavirus lockdown, it’s been something of a quiet one. I got lots of nice best wishes from a bunch of people, which is terribly pleasing, and I had a nice conversation with the family over zoom. Plus, a really good Chinese takeaway delivered as a surprise from my mum and dad, and I suspect that if there were a video of them signing up for a Deliveroo account to do so it would probably be in the running for the Best Comedy BAFTA award.
Also I spent some time this afternoon doing the present from my daughter, which is the Enigmagram, an envelope of puzzles which unlock a secret message (which is how I discovered it was from my daughter). I like this sort of thing a lot; I’ve bought a couple of the Mysterious Package Company‘s experiences as presents and they’re great too. Must be a fun job to make these things; it’s like an ARG or something, which I’d also love to run at some point if I had loads of time.
I’ve just looked back at last year’s birthday post, and I should note that Gaby has excelled herself again with birthday card envelope drawing this year, but nothing will ever, ever exceed the amazing genius that is the bookshelf portal that she and Andy got me for Christmas. It is amazing. Go and watch the video immediately.
Time for bed. I have an electric blanket now, which I was mocked for, but hey; I’m allowed. It’s really cosy and warm. Shut up.
Another day, another post telling me to do something, or not do something, or adopt some technology, or not adopt some technology, or whatever it is that they want me to do, because it makes it easier to “reason about the code”.
It’s a scam.
More precisely, it’s a thought-terminating cliche. Ironic, as the phrase “reason about” is used as a highfalutin synonym for “think about”. The idea is that there’s nowhere to go from here. I want to do things one way, some random on medium dot com wants me to do things another way, their way makes it easier to reason about the code, therefore that’s the better approach.
It’s a scam.
Let’s start with the fact that people don’t think—sorry, reason—about things the same way. If we did, then there’d be little point to things like film review sites, code style guides, or democracy. We don’t know precisely what influences different people to think in different ways about different things, but we have some ideas. Some of the ideas just raise other questions: like if you say “it’s a cultural difference” then we have to ask “well, why is it normal for all of the people in that culture to think this way, and all of the people in this culture to think that way?”
This difference between modes of thought arises in computing. We know, for example, that you can basically use any programming language for basically any purpose, because back in the days when there were intellectual giants in computering they demonstrated that all of these languages are interchangeable. They did so before we’d designed the languages. So choice of programming language is arbitrary, unless motivated by external circumstances like which vendor your CTO plays squash with or whether you are by nature populist or contrarian.
Such differences arise elsewhere than in choice of language. Comprehension of paradigms, for example: the Smalltalk folks noticed it was easier to teach object-oriented programming to children than to professional programmers, because the professional programmers already had mental toolkits for comprehending programming that didn’t integrate with the object model. It’s easier for them to “reason about” imperative code than objects.
OK, so when someone says that something makes it easier to “reason about” the code, what they mean is that that person finds it easier to think about code in the presence of this property. I mean, assuming they do, and are not disingenuously proposing a suggestion that you do something when they’ve run out of reasons you should do it but still think it’d be a good idea. But wait.
It’s a scam.
Code is a particular representation of, at best, yesterday’s understanding of the problem you’re trying to solve. “Reasoning about code” is by necessity accidental complexity: it’s reflecting on and trying to understand a historical solution of the problem as you once thought it was. That’s effort that could better be focussed on checking whether your understanding of the problem is indeed correct, or needs updating. Or on capturing a solution to an up-to-the-minute model of the problem in executable form.
This points to a need for code to be deletable way faster than it needs to be thought about.
Reasoning about code is a scam.
Today I was vaccinated for Covid.
It occurred to me that people might have a question or two about the process, what it’s like, and what happens, and I think that’s reasonable.
I, along with many others, have written about the influence of Xerox PARC on Apple. The NeXT workstation was a great example of getting an approximation to the Smalltalk concept out using off-the-shelf parts, and Jobs often presaged iCloud with his discussion of NetInfo, NFS, and even the magneto-optical drive. He’d clearly been paying attention to PARC’s Ubiquitous Computing model. And of course the iPad with Siri is what you get if you marry the concept of the DynaBook with a desire to control the entire widget, not ceding that control to some sap whose only claim to fame is that they bought the thing.
Sorry, they licensed the thing.
There are some good signs that Apple are still following the ubicomp playbook, and that’s encouraging because it will make a lot of their products better integrated, and more useful. Particularly, the Apple Watch is clearly the most “me” of any of my things (it’s strapped to my arm, while everything else is potentially on a desk in a different room, stuck to my wall, or in my pocket or bag) so it makes sense that that’s the thing I identify with to everything else. Unlocking a Mac with my watch is great, and using my watch to tell my TV that I’m the one plugging away at a fitness workout is similarly helpful.
To continue along this route, the bigger screen devices (the “boards”, “pads”, and “tabs” of ubicomp; the TVs, Macs, iPads, and iPhones of Apple’s parlance) need to give up their identities as “mine”. This is tricky for the iPhone, because it’s got an attachment to a phone number and billing account that is certainly someone’s, but in general the idea should be that my watch tells a nearby screen that it’s me using it, and that it should have access to my documents and storage. And, by extension, not to somebody else’s.
A scene. A company is giving a presentation, with a small number of people in the room and more dialled in over FaceTime (work with me, here). It’s time for the CTO to present the architecture, so she uses the Keynote app on her watch to request control of the Apple TV on the wall. It offers a list of her presentations in iCloud, she picks the relevant one by scrolling the digital crown, and now has a slide remote on her wrist, and her slides on the screen.
This works well if the Apple TV isn’t “logged in” to an iCloud account or Apple ID, but instead “borrows” access from the watch. The watch is on my wrist, so it’s the thing that is most definably “mine”, unlike the Apple TV and the FaceTime call, which are “my employer’s”.
LIVEstep is a GNUstep desktop on a FreeBSD live CD, and it comes with the GNUstep developer tools including ProjectCenter. This video is a “Hello, World” walkthrough using ProjectCenter on LIVEstep. PC is much more influenced by the NeXT Project Builder than by Xcode, so it might look a little weird to younger eyes.
If I could string a thread through my childhood, the pins that hold the thread in place would be all the times I hit my head.
Me and my best friend (at the time) used to play a game called Dizzy Egg. It was a simple game. The object was to spin around as many times as we could and then try not to fall over. I usually fell over, and this usually meant hitting my head on the unforgiving concrete.
In the same playground, I ran—for no particular reason—head first into the white painted wall of one of the school buildings. Luckily, it stayed white.
I was part of a weekend football club. Football Fun. A better name for it might have been “Football Keeps Hitting Me In The Face.” I’m not sure what it was about that football or my face, but the two were inseparable. You couldn’t keep them apart.
I remember one final and dramatic incident. On running through a metal gate, the gate swung closed and tried to run through me. One minute we were running and chasing and laughing. The next I was on the floor, bleeding a lot and saying some words that weren’t suitable for the playground.
That one needed a trip to the hospital and I still have the scar.
Here’s what I’ve been working on (with others, of course) since February.
I used to write my assertions like this:
assert user.active?
refute user.inactive?
Then I joined a team where I was encouraged to write this:
assert_equal true, user.active?
assert_equal false, user.inactive?
What? That doesn’t look very nice. That doesn’t feel very Ruby. It’s less elegant!
Here’s the thing, though: your unit tests aren’t about being elegant. They’re about guaranteeing correctness. You open the door to some insidious bugs when you test truthiness instead of truth.
def active?
  # Should be status == :active
  status = :active
end

def has_users?
  # Should be user_list.any?
  user_list
end

def user_list
  []
end
Over time, I’ve gotten used to it. This style still chafes, but not as much as the bugs caused by returning the wrong value from a predicate method.
So, I was awarded a medal.
OpenUK, who are a non-profit organisation supporting open source software, hardware, and data, and are run by Amanda Brock, have published the honours list for 2021 of what they call “100 top influencers across the UK’s open technology communities”. One of them is me, which is rather nice. One’s not supposed to blow one’s own trumpet at a time like this, but to borrow a line from Edmund Blackadder it’s nice to let people know that you have a trumpet.
There are a bunch of names on this list that I suspect anyone in a position to read this might recognise. Andrew Wafaa at ARM, Neil McGovern of GNOME, Ben Everard the journalist and Chris Lamb the DPL and Jonathan Riddell at KDE. Jeni Tennison and Jimmy Wales and Simon Wardley. There are people I’ve worked with or spoken alongside or had a pint with or all of those things — Mark Shuttleworth, Rob McQueen, Simon Phipps, Michael Meeks. And those I know as friends, which makes them doubly worthy: Alan Pope, Laura Czajkowski, Dave Walker, Joe Ressington, Martin Wimpress. And down near the bottom of the alphabetical list, there’s me, slotted in between Terence Eden and Sir Tim Berners-Lee. I’ll take that position and those neighbours, thank you very much, that’s lovely.
I like working on open source things. It’s been a strange quarter-of-a-century, and my views have changed a lot in that time, but I’m typing this right now on an open source desktop and you’re probably viewing it in an open source web rendering engine. Earlier this very week Alan Pope suggested an app idea to me and two days later we’d made Hushboard. It’s a trivial app, but the process of having made it is sorta emblematic in my head — I really like that we can go from idea to published Ubuntu app in a couple of days, and it’s all open-source while I’m doing it. I like that I got to go and have a curry with Colin Watson a little while ago, the bloke who introduced me to and inspired me with free software all those years ago, and he’s still doing it and inspiring me and I’m still doing it too. I crossed over some sort of Rubicon relatively recently where I’ve been doing open source for more of my life than I haven’t been doing it. I like that as well.
There are a lot of problems with the open source community. I spoke about divisiveness over “distros” in Linux a while back. It’s still not clear how to make open source software financially sustainable for developers of it. The open source development community is distinctly unwelcoming at best and actively harassing and toxic at worst to a lot of people who don’t look like me, because they don’t look like me. There’s way too much of a culture of opposing popularity because it is popularity and we don’t know how to not be underdogs who reflexively bite at the cool kids. Startups take venture capital and make a billion dollars when the bottom 90% of their stack is open source that they didn’t write, and then give none of it back. Products built with open source, especially on the web, assume (to use Bruce Lawson’s excellent phrasing) that you’re on the Wealthy Western Web. The list goes on and on and on and these are only the first few things on it. To the extent that I have any influence as one of the one hundred top influencers in open source in the UK, those are the sort of things I’d like to see change. I don’t know whether having a medal helps with that, but last year, 2020, was an extremely tough year for almost everyone. 2021 has started even worse: we’ve still got a pandemic, the fascism has gone from ten to eleven, and none of the problems I mentioned are close to being fixed. But I’m on a list with Tim Berners-Lee, so I feel a little bit warmer than I did. Thank you for that, OpenUK. I’ll try to share the warmth with others.
gets is seen in basically every introductory Ruby tutorial, but those tutorials rarely tell the whole story.
You’ll be told to write something like this:
#!/usr/bin/env ruby
puts "What is your name?"
your_name = gets.chomp
puts "Hi, #{your_name}!"
Confusingly, if this is in a script that takes additional command line arguments, you may not see “Hi, Janet!”
If you execute ./gets.rb 123, you will pretty quickly be greeted by the following error:
./gets.rb:4:in `gets': No such file or directory @ rb_sysopen - 123 (Errno::ENOENT)
The tutorial didn’t warn you about this. The tutorial is giving you a reduced view of things that may, if you’re like me, leave you scratching your head several years later.
gets doesn’t just read user input from $stdin. gets refers to Kernel#gets, and it behaves like so:
Returns (and assigns to $_) the next line from the list of files in ARGV (or $*), or from standard input if no files are present on the command line.
If you really, truly want to prompt the user for their input, you can call gets on $stdin directly. And who wouldn’t, with a user like you?
your_name = $stdin.gets.chomp
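You can see both behaviours without a terminal. Here’s a sketch (using a throwaway Tempfile, and swapping $stdin for a StringIO) of Kernel#gets consulting ARGV while $stdin.gets does not:

```ruby
require "stringio"
require "tempfile"

# With a filename in ARGV, Kernel#gets reads from that file,
# exactly what happens when a script is run as `./gets.rb somefile`.
file = Tempfile.new("greeting")
file.puts("line from a file")
file.flush
ARGV.replace([file.path])
from_argv = gets.chomp # "line from a file"

# $stdin.gets ignores ARGV and always reads standard input
# (faked here by swapping in a StringIO).
ARGV.clear
$stdin = StringIO.new("Janet\n")
from_stdin = $stdin.gets.chomp # "Janet"
```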
My coworker showed me something cool today. Like a lot of developers, there are certain machines that I find myself SSHing into repeatedly. Not all of them are directly accessible from the network I’m on. Some of them require me to connect via a jump host.
Ordinarily, I manually create a tunnel from a port on my local machine to my final destination via the jump host. This is cool, but it’s a bunch of steps to remember, especially if you’re creating your SSH tunnels by hand on the command line, and especially if you’re trying to remember a bunch of IP addresses.
Apparently, you can just add hosts to your ~/.ssh/config.
Host jump-host
    Hostname x.x.x.x
    IdentityFile /path/to/key/proxy.pem
    User ubuntu

Host destination
    Hostname y.y.y.y
    IdentityFile /path/to/key/destination.pem
    User ubuntu
    ProxyJump jump-host
With this in place, ssh destination gets you there with zero fuss.
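For a one-off hop without editing the config file, OpenSSH (7.3 and later) also has a command-line equivalent: the -J (ProxyJump) flag. A sketch using the placeholder addresses and user from the config above (key lookup for the jump host relies on your agent or config):

```shell
# One-off equivalent of the ProxyJump config entry above.
# x.x.x.x is the jump host, y.y.y.y the destination, as in the config.
ssh -J ubuntu@x.x.x.x -i /path/to/key/destination.pem ubuntu@y.y.y.y
```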
I’ve been reading On Writing Well by William Zinsser. One of the early instructions he gives is to cut out filler words.
Well, I have a confession to make. I’m particularly guilty of one habit that I just can’t seem to shake.
I start a lot of sentences with “well,” and, well, it’s something I’ve been trying to cut down on.
Out of interest, I just ran the following search in my employer’s Slack:
from:@james Well,
The results were not pleasing. I don’t want to say how many times I’ve done it within Slack’s recent memory, but I didn’t dare venture past page 1 of 36.
Well, that just won’t do.
I don’t even know why I do it. It’s not a hedging word, designed to protect me from any criticism I might incur from taking a firm position, as it does nothing to minimise the strength of the statement that follows it. It’s just a noise I make, involuntarily, like um or ah.
It’s five characters (and a space) I can do without.
You may remember in July I updated the open source Bean word processor to work with then-latest Xcode and macOS. Over the last couple of days I’ve added iCloud Drive support (obviously only if the app is associated with an App Store ID, but everyone gets the autosave changes), and made sure it works on Big Sur and Apple Silicon.
Alongside this, a few little internal housekeeping changes: there’s now a unit test target, the app uses base localisation, and for each file I had to edit, I cleaned up some warnings.
Developers can try this new version out from source. I haven’t created a new build yet, because I’m still in the process of removing James Hoover’s update-checking code which would replace Bean with the proprietary version from his website. I’ll create and host a Sparkle appcast for automatic updates before doing a binary release, which will support macOS 10.6-11.1.
I use Inline Class if a class is no longer pulling its weight and shouldn’t be around any more. Often this is the result of refactoring that moves other responsibilities out of the class so there is little left.
I recently finished reading Refactoring: Ruby Edition, and while there was a lot of talk about extracting logic into single purpose classes, there was also mention of removing classes that weren’t deemed to be “pulling their weight.”
This left me with a question. How little is too little? I’m assuming that when the author says there is little left in the class, they don’t mean there’s nothing left, so what sort of classes might exist that don’t deserve the mental space they occupy?
The example given in the book is that of a TelephoneNumber class which is quite close to simply being a value object, but without the immutability or comparison logic. Its only role is to put an area code and a phone number into a nicely formatted string. This logic is pulled into the Person class, to whom the phone number belongs.
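As a sketch of that example (the class names are from the book; the fields and formatting are simplified guesses), the inline looks something like this:

```ruby
# Before: TelephoneNumber is barely more than a formatter,
# so it isn't pulling its weight.
class TelephoneNumber
  attr_accessor :area_code, :number

  def telephone_number
    "(#{area_code}) #{number}"
  end
end

# After: Person absorbs the formatting, and TelephoneNumber can be deleted.
class Person
  attr_accessor :name, :area_code, :number

  def telephone_number
    "(#{area_code}) #{number}"
  end
end

person = Person.new
person.area_code = "781"
person.number = "555-1212"
person.telephone_number # "(781) 555-1212"
```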
So say you’ve got class B that you’re considering merging into class A. Some good reasons to make this merge might be:
- B only operates on instances of A or its fields.
- B is only referenced by A.
- A and B.
I think it’s less a question of whether a class has enough responsibility, and more a question of whether a class has enough responsibility that rightfully belongs to it.
After a relatively lively month or two at work (during which we still managed to get a pretty major feature and some nice quality of life stuff ready to ship), it felt like time to disconnect a bit.
That’s why other than writing some brief notes on The Adapter Pattern, I spent the bulk of this month deliberately not doing much that could be construed as “work” outside of my regular work hours.
Instead, I spent a lot of time getting things ready for the first Christmas in which I visited neither my own family nor my in-laws, spent a lot of time sitting on the sofa reading, and just generally relaxed.
The new year is knocking, so it’s time to shake the dust off and get back to it.
This is the third of a series of posts on Jenkins and Unity.
In this post I’ll outline how I set up Jenkins to make reliable repeatable Unity builds, using a pipelines script (also called a jenkinsfile) that’s stored inside source control. I’ll outline the plugins I used and the reasons behind some of my choices. This is not a tutorial in Jenkins or pipeline scripts. It’s more of a tour.
I’m not an expert in Jenkins – most of this is pieced together from the Pipeline docs and Google. It may be that there are better ways to achieve the same results – I’d be keen to hear about them in the comments!
This post just deals with the jenkinsfile. In a future post I’ll deal with how I configured Jenkins to use it.
I am using Jenkins v2.257.
You can set up Jenkins to do almost anything you want via the web interface. This is ok for experimenting, but it has drawbacks:
All these go away if we move to a pipelines script stored in a file (commonly called “the jenkinsfile”) within the project’s source control.
In Jenkins, a “job” is a particular way of building your project. This one jenkinsfile will run multiple jobs for us:
The Health Check build exists because there’s a big gap between our 12pm build and the next day’s 6am build. If someone commits something at 1pm that would fail the slow automated tests, we might not know until the next morning, and the QA build will already be late. Now the Health Check build runs multiple times in the afternoon, and we can fix stuff before we log off for the day.
When you’re googling for pipelines info, you’ll discover it’s a concept that has undergone a few revisions. What we’re talking about here is a “declarative pipeline” (as opposed to the older “scripted pipeline”), which is (mostly) composed of a jenkinsfile written in a subset of the Groovy language, which runs on the JVM.
A declarative pipelines script is roughly defined as a series of “stages”, where each stage is a series of commands, or “steps”. Stages can be hierarchical and nest and run in parallel, but for our purposes, we’re going to stay pretty linear and flat. If any stage fails, the subsequent stages won’t run.
We’re going to have six stages:
You’ll find the full jenkinsfile at the bottom of the page, but let’s break it down by starting at the top.
The pipeline itself doesn’t start until the pipeline keyword, but since this is a subset of Groovy, we can define constants at the top.
UNITY_PATH = "C:\\Program Files\\Unity\\Hub\\Editor\\2019.4.5f1\\Editor\\Unity.exe"
There’s probably a better service-orientated way of installing and locating Unity, but in this case we chose to just manually create a windows machine on ec2, remote desktop in, and install Unity. Ship it.
Next we start the pipeline, and define parameters. These appear as dropdowns when starting a job in Jenkins, and you access them further down the script using params.PARAMETER_NAME:
pipeline {
    parameters {
        choice(name: 'TYPE', choices: ['Debug', 'Release', 'Publisher'], description: 'Do you want cheats or speed?')
        choice(name: 'BUILD', choices: ['Fast', 'Deploy', 'HealthCheck'], description: 'Fast builds run minimal tests and make a build. HealthCheck builds run more automated tests and are slower. Deploy builds are HealthChecks + a deploy.')
        booleanParam(name: 'CLEAN', defaultValue: false, description: 'Tick to remove cached files - will take an eon')
        booleanParam(name: 'SKIP_PLAYMODE_TESTS', defaultValue: false, description: 'In an emergency, to allow Deploy builds to work with a failing playmode test')
    }
Here’s how they appear in Jenkins in the Build With Parameters tab:
More details on each param:
Here follows some boilerplate for Jenkins to understand how to run our job:
    agent {
        node {
            label "My Project"
            // force everyone to the same space, so we can share a library file.
            customWorkspace 'workspace\\MyProject'
        }
    }
    options {
        timestamps()
        // as a failsafe. our builds tend around the 15min mark, so 45 would be excessive.
        timeout(time: 45, unit: 'MINUTES')
    }
The node block is about finding an ec2 instance to run the job on. We’ll deal with ec2 setup in a future post. The customWorkspace setting is a cost-saving measure: on ec2, the size of persistent SSD storage is a significant part of our costs. We could save money by switching to spinning rust, but we want the build speed of an SSD. Instead, we’ll try to keep SSD size down by not having multiple versions of the same project checked out all over the drive. In practice, we mostly only work in trunk anyway, and when we build another branch it’s not massively divergent. We may revisit this over the course of the project.
Our first stage! It’s pretty simple. It only runs if the Clean parameter has been set, and it just runs some dos commands to clean out the library and temp folders:
    stages {
        stage('Clean') {
            when { expression { return params.CLEAN } }
            steps {
                bat "if exist Library (rmdir Library /s /q)"
                bat "if exist Temp (rmdir Temp /s /q)"
            }
        }
(I’m surprised that using the boolean Clean param in a when block isn’t easier? I may have missed some better syntax)
Prewarm helps set the stage for the coming attractions. We’re going to use some script blocks here to drop into the old scripted pipeline format, which lets us do some more complex logic.
The first thing we want to do is figure out what branch we’re on. Jenkins will always run your pipeline with some predefined environment variables, including some which seem to imply they’ll contain your branch name, but try as I might they didn’t seem to work for us. Maybe it’s because we’re using SVN? So I had to figure it out for myself:
        stage('Prewarm') {
            steps {
                script {
                    // not easy to get jenkins to tell us the current branch! none of the built-in envs seem to work?
                    // let's just ask svn directly.
                    def get_branch_script = 'svn info | select-string "Relative URL: \\^\\/(.*)" | %{\$_.Matches.Groups[1].Value}'
                    env.R7_SVN_BRANCH = powershell(returnStdout: true, script: get_branch_script).trim()
                }
We’ll call out to powershell because urgh grepping in dos is esoteric. We’ll store it in a new env variable, which will let us use this info in future stages.
Next we’ll set up the build name:
                buildName "${BUILD_NUMBER} - ${TYPE}@${env.R7_SVN_BRANCH}/${SVN_REVISION}"
                script {
                    if (params.BUILD == 'Deploy') {
                        buildName "${env.BUILD_DISPLAY_NAME} ^Deploy"
                    }
                    if (params.BUILD == 'HealthCheck') {
                        buildName "Health Check of ${TYPE}@${env.R7_SVN_BRANCH}/${SVN_REVISION}"
                    }
                    if (params.BUILD == 'HealthCheck' || params.BUILD == 'Deploy') {
                        // let's warn that a deploy build is in progress
                        slackSend color: 'good', message: ":hourglass: build ${env.BUILD_DISPLAY_NAME} STARTED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
                    }
                }
The buildName command lets you set your build name, and env.BUILD_DISPLAY_NAME contains the current version of that. The default Jenkins build name is just “#99” or whatever, which is less than helpful. Here, we’ll make sure it’s of the format “JenkinsBuildNumber – Type@Branch/Changelist [^Deploy]”. It’ll then be obvious at a glance in both the Jenkins dash and slack notifications what’s building and why.
We also send a slack notification for health check and deploy builds, since it’s useful to know they’ve started. It gives people a good sense of whether their commits have made it into the build or not. More on notifications below.
Next some more housekeeping for the automated tests, which communicate with Jenkins via xml files:
                // clean tests
                bat "if exist *tests.xml (del *tests.xml)"
Finally, we’ll open Unity once and close it. One persistent problem with Unity in automated systems is of serialisation errors from out-of-date code working with new data. For instance, let’s assume you’ve got a bunch of existing scriptable assets, and your latest commit refactors them. On the build server, Unity will open, validate the assets with the pre-refactor code that it has from the last run, throw some errors because nothing looks right, then rebuild the code. Subsequent launches will succeed because both the data and the code are in sync. So, to keep those spurious errors out of real build logs, we’ll do this kind of ghost-open:
                retry(count: 2) {
                    bat "\"${UNITY_PATH}\" -nographics -buildTarget Win64 -quit -batchmode -projectPath . -executeMethod MyBuildFunction -MyBuildType \"${params.TYPE}\" -MyTestOnly -logFile"
                }
This is the first time we’ve seen the jenkinsfile talk to Unity! We’ll explain more in the next section, but just pretend you understand it for now. The important part is -MyTestOnly, which tells our build function to only set script defines, recompile, and quit.
We wrap the whole thing in a retry block as a side effect of building multiple branches in one workspace. Sometimes, we get a “library corrupted” failure when switching. Running a second time makes it go away – no explicit Clean required. Lots of getting Unity running on Jenkins is just experimenting and making do with what works!
You also see some examples of Groovy’s string interpolation. I admit I’m no expert, but there seem to be about a dozen ways of doing string interpolation in Groovy, and not all approaches work in all locations. If one didn’t work, I went on to the next, and what you see here is the one that works here.
We need to convince Unity to do what we want, and we want any failures to produce useful output in the Jenkins dashboard. You can find more in the Unity docs, but I found the best way to get output was to have -logFile last, with no path set.
To convince Unity to do what we want, we use the -executeMethod parameter. That will call a static C# function of your choice. How to make builds from within Unity is outside the scope of this blog post.
Here’s our next few stages, and the various ways they call to Unity:
        stage ('Editmode Tests') {
            steps {
                bat "\"${UNITY_PATH}\" -nographics -batchmode -projectPath . -runTests -testResults editmodetests.xml -testPlatform editmode -logFile"
            }
        }
        stage ('Playmode Tests') {
            when { expression { return (params.BUILD == 'Deploy' || params.BUILD == 'HealthCheck') && !params.SKIP_PLAYMODE_TESTS } }
            steps {
                // no -nographics on playmode tests. they don't log right now? which is weird cuz the editmode tests do with almost the same setup.
                bat "\"${UNITY_PATH}\" -batchmode -projectPath . -runTests -testResults playmodetests.xml -testPlatform playmode -testCategory \"BuildServer\" -logFile"
            }
        }
        stage ('Build') {
            steps {
                bat "\"${UNITY_PATH}\" -nographics -buildTarget Win64 -quit -batchmode -projectPath . -executeMethod MyBuildFunction -MyBuildType \"${params.TYPE}\" -logFile"
            }
        }
This is project and platform specific, so I won’t go into details, but let’s assume you’re zipping or packaging a build folder and sending somewhere.
Here we’d be able to use the branch environment variables to maybe choose a destination folder. We’re also able to reuse the build name environment variables. We created both of those in Prewarm.
        stage ('Deploy') {
            when { expression { return params.BUILD == 'Deploy' } }
            steps {
                // ... how to get a build to your platform of choice ...
                slackSend color: 'good', message: ":ship: build ${env.BUILD_DISPLAY_NAME} DEPLOYED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
            }
        }
We also post that the build has been deployed. More on notifications below.
The post section of the jenkinsfile contains blocks that will run after the main job, in different circumstances. We mostly use them to report progress to slack:
    post {
        always {
            nunit testResultsPattern: '*tests.xml'
        }
        success {
            script {
                if (params.BUILD == 'HealthCheck') {
                    slackSend color: 'good', message: ":green_heart: build ${env.BUILD_DISPLAY_NAME} SUCCEEDED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
                }
            }
        }
        fixed {
            slackSend color: 'good', message: ":raised_hands: build ${env.BUILD_DISPLAY_NAME} FIXED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
        }
        aborted {
            slackSend color: 'danger', message: ":warning: build ${env.BUILD_DISPLAY_NAME} ABORTED. Was it intentional? (<${env.BUILD_URL}|Open>)", channel: "project_builds"
        }
        failure {
            slackSend color: 'danger', message: ":red_circle: build ${env.BUILD_DISPLAY_NAME} FAILED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
        }
    }
The first step here is to always report the automated test results to Jenkins with the nunit plugin. Unity’s test reports are in the nunit format, and this plugin converts them to the junit format that Jenkins expects.
We post all failures to the slack channel, and all fixed builds, but we don’t post all successes. With builds on every push that might make the build channel too noisy. We do however post when Health Checks succeed, since that’s good affirmation.
We use the Slack Notification plugin. The slack color attribute here doesn’t seem to work for us? So we use emojis to make it easy to scan what’s happening. Here’s an example from slack:
Jenkins includes a snippet generator, which allows you to make freestyle blocks and see the generated pipeline script – very handy for porting freestyle jobs:
UNITY_PATH = "C:\\Program Files\\Unity\\Hub\\Editor\\2019.4.5f1\\Editor\\Unity.exe"

pipeline {
    parameters {
        choice(name: 'TYPE', choices: ['Debug', 'Release', 'Publisher'], description: 'Do you want cheats or speed?')
        choice(name: 'BUILD', choices: ['Fast', 'Deploy', 'HealthCheck'], description: 'Fast builds run minimal tests and make a build. HealthCheck builds run more automated tests and are slower. Deploy builds are HealthChecks + a deploy.')
        booleanParam(name: 'CLEAN', defaultValue: false, description: 'Tick to remove cached files - will take an eon')
        booleanParam(name: 'SKIP_PLAYMODE_TESTS', defaultValue: false, description: 'In an emergency, to allow Deploy builds to work with a failing playmode test')
    }
    agent {
        node {
            label "My Project"
            // force everyone to the same space, so we can share a library file.
            customWorkspace 'workspace\\MyProject'
        }
    }
    options {
        timestamps()
        // as a failsafe. our builds tend around the 15min mark, so 45 would be excessive.
        timeout(time: 45, unit: 'MINUTES')
    }
    // post stages only kick in once we definitely have a node
    stages {
        stage('Clean') {
            when { expression { return params.CLEAN } }
            steps {
                bat "if exist Library (rmdir Library /s /q)"
                bat "if exist Temp (rmdir Temp /s /q)"
            }
        }
        stage('Prewarm') {
            steps {
                script {
                    // not easy to get jenkins to tell us the current branch! none of the built-in envs seem to work?
                    // let's just ask svn directly.
                    def get_branch_script = 'svn info | select-string "Relative URL: \\^\\/(.*)" | %{\$_.Matches.Groups[1].Value}'
                    env.R7_SVN_BRANCH = powershell(returnStdout: true, script: get_branch_script).trim()
                }
                buildName "${BUILD_NUMBER} - ${TYPE}@${env.R7_SVN_BRANCH}/${SVN_REVISION}"
                script {
                    if (params.BUILD == 'Deploy') {
                        buildName "${env.BUILD_DISPLAY_NAME} ^Deploy"
                    }
                    if (params.BUILD == 'HealthCheck') {
                        buildName "Health Check of ${TYPE}@${env.R7_SVN_BRANCH}/${SVN_REVISION}"
                    }
                    if (params.BUILD == 'HealthCheck' || params.BUILD == 'Deploy') {
                        // let's warn that a deploy build is in progress
                        slackSend color: 'good', message: ":hourglass: build ${env.BUILD_DISPLAY_NAME} STARTED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
                    }
                }
                // clean tests
                bat "if exist *tests.xml (del *tests.xml)"
                // need an initial open/close to clean out the serialisation. without this you can get things validating on old code!!
                // do it twice, to make it more tolerant of bad libraries when switching branches
                retry(count: 2) {
                    bat "\"${UNITY_PATH}\" -nographics -buildTarget Win64 -quit -batchmode -projectPath . -executeMethod MyBuildFunction -MyBuildType \"${params.TYPE}\" -MyTestOnly -logFile"
                }
            }
        }
        stage ('Editmode Tests') {
            steps {
                bat "\"${UNITY_PATH}\" -nographics -batchmode -projectPath . -runTests -testResults editmodetests.xml -testPlatform editmode -logFile"
            }
        }
        stage ('Playmode Tests') {
            when { expression { return (params.BUILD == 'Deploy' || params.BUILD == 'HealthCheck') && !params.SKIP_PLAYMODE_TESTS } }
            steps {
                // no -nographics on playmode tests. they don't log right now? which is weird cuz the editmode tests do with almost the same setup.
                bat "\"${UNITY_PATH}\" -batchmode -projectPath . -runTests -testResults playmodetests.xml -testPlatform playmode -testCategory \"BuildServer\" -logFile"
            }
        }
        stage ('Build') {
            steps {
                bat "\"${UNITY_PATH}\" -nographics -buildTarget Win64 -quit -batchmode -projectPath . -executeMethod MyBuildFunction -MyBuildType \"${params.TYPE}\" -logFile"
            }
        }
        stage ('Deploy') {
            when { expression { return params.BUILD == 'Deploy' } }
            steps {
                // ... how to get a build to your platform of choice ...
                slackSend color: 'good', message: ":ship: build ${env.BUILD_DISPLAY_NAME} DEPLOYED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
            }
        }
    }
    post {
        always {
            nunit testResultsPattern: '*tests.xml'
        }
        success {
            script {
                if (params.BUILD == 'HealthCheck') {
                    slackSend color: 'good', message: ":green_heart: build ${env.BUILD_DISPLAY_NAME} SUCCEEDED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
                }
            }
        }
        fixed {
            slackSend color: 'good', message: ":raised_hands: build ${env.BUILD_DISPLAY_NAME} FIXED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
        }
        aborted {
            slackSend color: 'danger', message: ":warning: build ${env.BUILD_DISPLAY_NAME} ABORTED. Was it intentional? (<${env.BUILD_URL}|Open>)", channel: "project_builds"
        }
        failure {
            slackSend color: 'danger', message: ":red_circle: build ${env.BUILD_DISPLAY_NAME} FAILED (<${env.BUILD_URL}|Open>)", channel: "project_builds"
        }
    }
}
Congrats on reading to the end! Your prize is a nice fish:
Amid the turmoil of this challenging year, I’d like to reflect on 2020 and give my thanks to a lot of people who have made it positive and reminded me I have so much in life. I keep a gratitude journal listing 3 things I am grateful for every day. That’s become a pretty comprehensive list, but below are some of the highlights, extended.
Thank you to my fellow Essex compatriot Matt for your support, counsel and especially the cookbooks and pictures of your amazing feasts!
A big shout out goes to Jim for the various virtual pub sessions we’ve had on Fridays after work. You’ve patiently listened to me spout many a tale of woe for too long now.
The awesome members of the Warwick Lanterne Rouge Cycling Club. Despite social restrictions, key members have worked tirelessly behind the scenes to keep social activities going. In particular, I’d like to thank my Sunday group of Tommy, John, Matt, Martin and Adam for keeping me company in the lanes and providing great chat over coffee and cake in the Cotswolds. You make my weekends.
My awesome neighbours in the block of flats I live in have helped keep me sane living on my own and relieved the cabin fever. Matt and Sarah who live above me, our chats on the 1st floor are a joy and your baking is exceptional, especially without a mixer!
To my support household of Wayne, Amanda and Sophie - what absolute heroes you are. You have so much on your plate with your own challenges and yet you have given me so much of your time and company, expecting nothing in return. You all deserve such a better 2021.
Special mention goes to the Thursday Night Gaming Group of Malc, Nad, Omar, Usman, Simon, Andy, Shaun, Nazeef and even Sleepy. The banter and smack talk always makes for an entertaining evening!
Whilst I’ve missed the good beer of the White Horse pub in Leamington, the Leam Geeks meetup has continued online and it’s been great to see the faces of the regulars: both Richs, Rob, Tim, Matt.
As always, my Colchester crew of Dan, Andy, Tony, Stu, Paul, Zena & Steff who somehow still talk to me after putting up with over a quarter of a century of nonsense from me. It’s good to know you’re always around.
Also thanks to my various Nottingham uni groups of Oli, Jack, Andy, Rich, Dave, Craig, Sam, Jo and Jason who have kept in touch over WhatsApp. There were a few plans we had this year that were scuppered but I’m sure we’ll get opportunities again soon.
My guitar tutor Alfie, who has amazing patience and is a highly gifted teacher. Thanks for helping me find a creative outlet to give my brain a break from the day-to-day problem solving it normally gets caught up in.
My former work colleagues, of which there are many who have been great people to know and work with, but, in particular, Rich G, Rich T, Emma, Dan, Steve, Ash, Matt and Chris. Thanks for your support and patience with me. When I was melting down and causing more problems than I was solving, you were still there for me.
Of course, my family are my rock. Mum, Dad, Matt & Grandma you are special people who love me unconditionally, always.
There are many more to mention, and no doubt if you are reading this and we’ve interacted this year, I’m really appreciative for the connection. May everyone have a much more prosperous 2021.
People say that the internet, or maybe specifically the web, holds the world’s information and makes it accessible. Maybe there was a time when that was true. But currently it’s not: probably not because the information is missing, but because the search engines think they know better than you what you want.
I recently had cause to look up an event that I know happened: at an early point in the iPod’s development, Steve Jobs disparaged MP3 players using NAND Flash storage. What were his exact words?
Jobs also disparaged the Adobe (formerly Macromedia) Flash Player media platform, in a widely-discussed blog post on his company website many years later. I knew that this would be a closely-connected story, so I crafted my search terms to exclude it.
Steve Jobs NAND Flash iPod. Steve Jobs Flash MP3 player. Steve Jobs NAND Flash -Adobe. Did any of these work? No, on multiple search engines. Having to try multiple search engines and getting the wrong results from all of them is a 1990s-era web experience. All of these search terms return lists of “Thoughts on Flash” (the Adobe player), reports on that article, later news about Flash Player linking subsequent outcomes to that article, hot takes on why Jobs was wrong in that article, and so on. None of them show me what I asked for.
Eventually I decided to search the archives of one particular blog, which didn’t make the search engines prefer relevant results but which did reduce the quantity of irrelevant results. Finally, on the second page of articles from Daring Fireball about “Steve Jobs NAND flash storage iPod”, I found Flash Gordon. I still don’t have the quote; I have an article about a later development, citing a now-dead link to a story that is itself interpreting the quote.
That’s the closest modern web searching tools would let me get.
The new M1 chip in the new Macs has 8-16GB of DRAM on the package, just like many mobile phones or single-board computers, but unlike most desktop, laptop or workstation computers (there are exceptions). In the first tranche of Macs using the chip, that’s all the addressable RAM they have (i.e. ignoring caches), just like many mobile phones or single-board computers. But what happens when they move the Apple Silicon chips up the scale, to computers like the iMac or Mac Pro?
It’s possible that these models would have a few GB of memory on-package and access to memory modules connected via a conventional controller, for example DDR4 RAM. They almost certainly would if you could deploy multiple M1 (or successor) packages on a single system. Such a Mac would be a non-uniform memory access architecture (NUMA), which (depending on how it’s configured) has implications for how software can be designed to best make use of the memory.
NUMA computing is of course not new. If you have a computer with a CPU and a discrete graphics processor, you have a NUMA computer: the GPU has access to RAM that the CPU doesn’t, and vice versa. Running GPU code involves copying data from CPU-memory to GPU-memory, doing GPU stuff, then copying the result from GPU-memory to CPU-memory.
A hypothetical NUMA-because-Apple-Silicon Mac would not be like that. The GPU shares access to the integrated RAM with the CPU, a little like an Amiga. The situation on Amiga was that there was “chip RAM” (which both the CPU and graphics and other peripheral chips could access), and “fast RAM” (only available to the CPU). The fast RAM was faster because the CPU didn’t have to wait for the coprocessors to use it, whereas they had to take turns accessing the chip RAM. Nonetheless, the CPU had access to all the RAM, and programmers had to tell `AllocMem` whether they wanted to use chip RAM, fast RAM, or didn’t care.
A NUMA Mac would not be like that, either. It would share the property that there’s a subset of the RAM available for sharing with the GPU, but this memory would be faster than the off-chip memory because of the closer integration and lack of (relatively) long communication bus. Apple has described the integrated RAM as “high bandwidth”, which probably means multiple access channels.
A better and more recent analogy to this setup is Intel’s discontinued supercomputer chip, Knight’s Landing (marketed as Xeon Phi). Like the M1, this chip has 16GB of on-die high bandwidth memory. Like my hypothetical Mac Pro, it can also access external memory modules. Unlike the M1, it has 64 or 72 identical cores rather than 4 big and 4 little cores.
There are three ways to configure a Xeon Phi computer. You can use no external memory at all, so the CPU uses only its on-package RAM. You can use a cache mode, where the software only “sees” the external memory and the high-bandwidth RAM is used as a cache. Or you can go full NUMA, where programmers have to explicitly request memory in the high-bandwidth region to access it, like with the Amiga allocator.
People rarely go full NUMA. It’s hard to work out what split of allocations between the high-bandwidth and regular RAM yields best performance, so people tend to just run with cached mode and hope that’s faster than not having any on-package memory at all.
And that makes me think that a Mac would either not go full NUMA, or would not have public API for it. Maybe Apple would let the kernel and some OS processes have exclusive access to the on-package RAM, but even that seems overly complex (particularly where you have more than one M1 in a computer, so you need to specify core affinity for your memory allocations in addition to memory type). My guess is that an early workstation Mac with 16GB of M1 RAM and 64GB of DDR4 RAM would look like it has 64GB of RAM, with the on-package memory used for the GPU and as cache. NUMA APIs, if they come at all, would come later.
In case you ever need it. If you’re searching for something like “deleted login shell Mac can’t open terminal”, this is the post for you.
I just deleted my login shell (because it was installed with homebrew, and I removed homebrew without remembering that I would lose my shell). That stopped me from opening a Terminal window, because it would immediately bomb out as it was unable to open the shell.
Unable to open a normal Terminal window, anyway. In the Shell menu, the “New Command…” item let me run /bin/bash -l, from which I got to a login-like bash shell. Then I could run this command:
chsh -s /bin/zsh
Enter my password, and then I have a normal shell again.
(So I could then install MacPorts, and then change my shell to /opt/local/bin/bash)
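Incidentally, chsh will generally refuse a shell that isn’t listed in /etc/shells, so it’s worth a quick check before switching. A sketch, assuming the usual file locations:

```shell
# chsh only accepts shells listed in /etc/shells; see what's available.
cat /etc/shells

# Confirm the shell you want is on the list before running chsh -s:
target=/bin/bash
if grep -qx "$target" /etc/shells; then
  echo "$target is a valid login shell"
else
  echo "$target is not listed in /etc/shells"
fi
```

If you install a shell via Homebrew or MacPorts, you may need to add its path to /etc/shells yourself before chsh will accept it.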
Thinking back over the last couple of years, I’ve had to know quite a bit about a few different topics to be able to write good software. Those topics:
Not deep knowledge in each field, though definitely enough: enough to have conversations as a peer with experts in those fields, to be able to follow the jargon, and to be able to make reasoned assessments of the experts’ suggestions about software designed for use by those experts. And, where I’ve wanted to propose alternate suggestions, enough knowledge to identify and justify a different design.
Going back over the rest of my career in software:
In fact I’d estimate that I’ve spent less than 40% of my “professional career” in jobs where knowing software or even computers in general was the whole thing.
Working in software is about understanding a system to the point where you can use a computer to make a meaningful and beneficial contribution to that system. While the systems thinking community have great tools for mapping out the behaviour of systems, they are only really good for making initial models. In order to get good at it, we have to actually understand the system on its own terms, with the same ideas and biases that the people who interact regularly with the system use.
But of course, because we’re hired for our computering skills, we get to experience and work with all of these different systems. It’s perhaps one of the best domains in which to be a polymath. To navigate it effectively, we need to accept that we are not the experts. We’re visitors, who get to explore other people’s worlds.
We should take them up on that offer, though, if we’re going to be effective. If we maintain the distinction between “technical” and “non-technical” people, or between “engineering” and “the business”, then we deny ourselves the ability to learn about the problem we’re trying to solve, and to make a good solution.
My dad’s got a Brother DCP-7055W printer/scanner, and he wanted to be able to set it up as a network scanner to his Ubuntu machine. This was more fiddly than it should be, and involved a bunch of annoying terminal work, so I’m documenting it here so I don’t lose track of how to do it should I have to do it again. It would be nice if Brother made this easier, but I suppose that it working at all under Ubuntu is an improvement on nothing.
Anyway. First, go off to the Brother website and download the scanner software. At time of writing, https://www.brother.co.uk/support/dcp7055/downloads has the software, but if that’s not there when you read this, search the Brother site for DCP-7055 and choose Downloads, then Linux and Linux (deb), and get the Driver Installer Tool. That’ll get you a shell script; run it. This should give you two new commands in the Terminal: brsaneconfig4 and brscan-skey.
Next, teach the computer about the scanner. This is what brsaneconfig4 is for, and it’s all done in the Terminal. You need to know the scanner’s IP address; you can find this out from the scanner itself, or you can use avahi-resolve -v -a -r to search your network for it. This will dump out a whole load of stuff, some of which should look like this:
= wlan0 IPv4 Brother DCP-7055W UNIX Printer local
hostname = [BRN008092CCEE10.local]
address = [192.168.1.21]
port = [515]
txt = ["TBCP=F" "Transparent=T" "Binary=T" "PaperCustom=T" "Duplex=F" "Copies=T" "Color=F" "usb_MDL=DCP-7055W" "usb_MFG=Brother" "priority=75" "adminurl=http://BRN008092CCEE10.local./" "product=(Brother DCP-7055W)" "ty=Brother DCP-7055W" "rp=duerqxesz5090" "pdl=application/vnd.brother-hbp" "qtotal=1" "txtvers=1"]
That’s your Brother scanner. The thing you want from that is address, which in this case is 192.168.1.21.
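If you’d rather script the lookup than eyeball it, the address can be pulled out of that output with sed. A sketch, run here against a saved copy of the output shown above:

```shell
# Save the interesting lines of the avahi output (as shown above) to a file...
cat > /tmp/avahi-output.txt <<'EOF'
hostname = [BRN008092CCEE10.local]
   address = [192.168.1.21]
   port = [515]
EOF

# ...then extract whatever is inside the brackets on the "address" line.
sed -n 's/.*address = \[\(.*\)\].*/\1/p' /tmp/avahi-output.txt
# → 192.168.1.21
```

In practice you’d pipe the avahi command straight into the sed instead of going via a file.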
Run brsaneconfig4 -a name="My7055WScanner" model="DCP-7055" ip=192.168.1.21. This should teach the computer about the scanner. You can test this with brsaneconfig4 -p, which will ping the scanner, and brsaneconfig4 -q, which will list all the scanner types it knows about and then list your added scanner at the end under Devices on network. (If your Brother scanner isn’t a DCP-7055W, you can find the other codenames for types it knows about with brsaneconfig4 -q and then use one of those as the model with brsaneconfig4 -a.)
You only need to add the scanner once, but you also need to have brscan-skey running always, because that’s what listens for network scan requests from the scanner itself. The easiest way to do this is to run it as a Startup Application; open Startup Applications from your launcher by searching for it, add a new application which runs the command brscan-skey, and restart the machine so that it’s running.
If you don’t have the GIMP installed, you’ll need to install it.
On the scanner, you should now be able to press the Scan button and choose Scan to PC and then Scan Image, and it should work. What will happen is that your machine will pop up the GIMP with the image, which you will then need to export to a format of your choice.
This is quite annoying if you need to scan more than one thing, though, so there’s an optional extra step, which is to change things so that it doesn’t pop up the GIMP and instead just saves the scanned photo, which is much nicer. To do this, first install imagemagick, and then edit the file /opt/brother/scanner/brscan-skey/script/scantoimage-0.2.4-1.sh with sudo. Change the last line from
echo gimp -n $output_file 2>/dev/null \;rm -f $output_file | sh &
to
echo convert $output_file $output_file.jpg 2>/dev/null \;rm -f $output_file | sh &
Now, when you hit the Scan button on the scanner, it will quietly create a file named something like brscan.Hd83Kd.ppm.jpg in the brscan folder in your home folder and not show anything on screen, and this means that it’s a lot easier to scan a bunch of photos one after the other.
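That one-line edit can also be scripted with sed, if you prefer. A sketch that rehearses the substitution on a throwaway copy first; run the same sed, with sudo, against the real script path from the post once you’re happy it does the right thing:

```shell
# Recreate the line the Brother script ships with, in a throwaway file:
printf '%s\n' 'echo gimp -n $output_file 2>/dev/null \;rm -f $output_file | sh &' > /tmp/scantoimage-test.sh

# Swap the gimp invocation for imagemagick's convert (GNU sed):
sed -i 's|echo gimp -n \$output_file|echo convert $output_file $output_file.jpg|' /tmp/scantoimage-test.sh

cat /tmp/scantoimage-test.sh
# Then run the same sed, with sudo, on
# /opt/brother/scanner/brscan-skey/script/scantoimage-0.2.4-1.sh
```

Keeping a backup copy of the original script before editing is a good idea, since a Brother driver update may overwrite or depend on it.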
I sat down to start compiling these notes, and of course got sidetracked putting together the Rakefile which can be found in the root of this repo.
I was all gung-ho about getting started with Hacktoberfest this year, but I’m not sure I can muster the energy. I absolutely do not need any more t-shirts, and the negative energy around the event’s growing spam problem just turns me off participating entirely.
Regardless, I’m enjoying stewarding How Old Is It?, the only project I’ve had that’s gotten more than 10 stars on GitHub. I hope I can at least help some other non-spammy participants get their t-shirts.
Trying to get better at navigating around my code editor without using a mouse. This is motivated by the audiobook of The Pragmatic Programmer that I’m listening to, in which they discuss how efficiency can be improved by reducing the friction between your brain and your computer.
The issue isn’t that taking your hand off the keyboard, placing it on the mouse, clicking some stuff and then moving your hand back to the keyboard takes too much time; realistically the extra time added by mouse usage is going to be dwarfed by the time spent in meetings or making seven or eight cups of tea.
The issue is that it’s a distraction that takes your mind off what you’re writing.
There was some functionality in RubyMine that I missed, and wanted to replicate inside VSCode. Rather than use one of the existing plugins that more than adequately solve the problem, I decided to write my own. Because, y’know. Of course I would.
Hello, Rails Test Assistant.
If programmers were just more disciplined, more professional, they’d write better software. All they need is a code of conduct telling them how to work like those of us who’ve worked it out.
The above statement is true, which is a good thing for those of us interested in improving the state of software and in helping our fellow professionals to improve their craft. However, it’s also very difficult and inefficient to apply, in addition to being entirely unnecessary. In the common parlance of our industry, “discipline doesn’t scale”.
Consider the trajectory of object lifecycle management in the Objective-C programming language, particularly the NeXT dialect. Between 1989 and 1995, the dominant way to deal with the lifecycle of objects was to use the +new and -free methods, which work much like malloc/free in C or new/delete in C++. Of course it’s possible to design a complex object graph using this ownership model, it just needs discipline, that’s all. Learn the heuristics that the experts use, and the techniques to ensure correctness, and get it correct.
But you know what’s better? Not having to get that right. So around 1994 people introduced new tools to do it an easier way: reference counting. With NeXTSTEP Mach Kit’s NXReference protocol and OpenStep’s NSObject, developers no longer need to know when everybody in an app is done with an object to destroy it. They can indicate when a reference is taken and when it’s relinquished, and the object itself will see when it’s no longer used and free itself. Learn the heuristics and techniques around autoreleasing and unretained references, and get it correct.
But you know what’s better? Not having to get that right. So a couple of other tools were introduced, so close together that they were probably developed in parallel[*]: Objective-C 2.0 garbage collection (2006) and Automatic Reference Counting (2008). ARC “won” in popular adoption so let’s focus there: developers no longer need to know exactly when to retain, release, or autorelease objects. Instead of describing the edges of the relationships, they describe the meanings of the relationships and the compiler will automatically take care of ownership tracking. Learn the heuristics and techniques around weak references and the “weak self” dance, and get it correct.
[*] I’m ignoring here the significantly earlier integration of the Boehm conservative GC with Objective-C, because so did everybody else. That in itself is an important part of the technology adoption story.
But you know what’s better? You get the idea. You see similar things happen in other contexts: for example C++’s move from new/delete to smart pointers follows a similar trajectory over a similar time. The reliance on an entire programming community getting some difficult rules right, when faced with the alternative of using different technology on the same computer that follows the rules for you, is a tough sell.
It seems so simple: computers exist to automate repetitive information-processing tasks. Requiring programmers who have access to computers to recall and follow repetitive information processes is wasteful, when the computer can do that. So give those tasks to the computers.
And yet, for some people the problem with software isn’t a lack of automation but a lack of discipline. Software would be better if only people knew the rules, honoured them, and slowed themselves down so that instead of cutting corners they just chose to miss important business milestones. Back in my day, everybody knew “no Markdown around town” and “don’t code in an IDE after Labour Day”, but now the kids do whatever they want. The motivations seem different, and I’d like to sort them out.
Let’s start with hazing. A lot of the software industry suffers from “I had to go through this, you should too”. Look at software engineering interviews, for example. I’m not sure whether anybody actually believes “I had to deal with carefully ensuring NUL-termination to avoid buffer overrun errors so you should too”, but I do occasionally still hear people telling less-experienced developers that they should learn C to learn more about how their computer works. Your computer is not a fast PDP-11, all you will learn is how the C virtual machine works.
Just as Real Men Don’t Eat Quiche, so real programmers don’t use Pascal. Real Programmers use FORTRAN. This motivation for sorting discipline from rabble is based on the idea that if it isn’t at least as hard as it was when I did this, it isn’t hard enough. And that means that the goalposts are movable, based on the orator’s experience.
This is often related to the term of their experience: you don’t need TypeScript to write good React Native code, just Javascript and some discipline. You don’t need React Native to write good front-end code, just JQuery and some discipline. You don’t need JQuery…
But along with the term of experience goes the breadth. You see, the person who learned reference counting in 1995 and thinks that you can only really understand programming if you manually type out your own reference-changing events, presumably didn’t go on to use garbage collection in Java in 1996. The person who thinks you can only really write correct software if every case is accompanied by a unit test presumably didn’t learn Eiffel. The person who thinks that you can only really design systems if you use the Haskell type system may not have tried OCaml. And so on.
The conclusion is that for this variety of disciplinarian, the appropriate character and quantity of discipline is whatever they had to deal with at some specific point in their career. Probably a high point: after they’d got over the tricky bits and got productive, and after you kids came along and ruined everything.
Sometimes the reason for suggesting the disciplined approach is entomological in nature, as in the case of the eusocial insect the “performant” which, while not a real word, exists in greater quantities in older software than in newer software, apparently. The performant is capable of making software faster, or use less memory, or more concurrent, or less dependent on I/O: the specific characteristics of the performant depend heavily on context.
The performant is often not talked about in the same sentences as its usual companion species, the irrelevant. Yes, there may be opportunities to shave a few percent off the runtime of that algorithm by switching from the automatic tool to the manual, disciplined approach, but does that matter (yet, or at all)? There are software-construction domains where specific performance characteristics are desirable, indeed that’s true across a lot of software. But it’s typical to focus performance-enhancing techniques on the bits where they enhance performance that needs enhancing, not to adopt them across the whole system on the basis that it was better when everyone worked this way. You might save a few hundred cycles writing native software instead of using a VM for that UI method, but if it’s going to run after a network request completes over EDGE then trigger a 1/3s animation, nobody will notice the improvement.
Anyway, whatever the source, the problem with calls for discipline is that there’s no strong motivation to become more disciplined. I can use these tools, and my customer is this much satisfied, and my employer pays me this much. Or I can learn from you how I’m supposed to be doing it, which will slow me down, for…your satisfaction? So you know I’m doing it the way it’s supposed to be done? Or so that I can tell everyone else that they’re doing it wrong, too? Sounds like a great deal.
Therefore discipline doesn’t scale. Whenever you ask some people to slow down and think harder about what they’re doing, some fraction of them will. Some will wonder whether there’s some other way to get what you’re peddling, and may find it. Some more will not pay any attention. The dangerous ones are the ones who thought they were paying attention and yet still end up not doing the disciplined thing you asked for: they either torpedo your whole idea or turn it into not doing the thing (see OOP, Agile, Functional Programming). And still more people, by far the vast majority, just weren’t listening at all, and you’ll never reach them.
Let’s flip this around. Let’s look at where we need to be disciplined, and ask if there are gaps in the tool support for software engineers. Some people want us to always write a failing test and make it pass before adding any code (or want us to write a passing test and revert our changes if it accidentally fails): does that mean our tools should not let us write code for which there’s no test? Does the same apply for acceptance tests? Some want us to refactor mercilessly; does that mean our design tools should always propose more parsimonious alternatives for passing the same tests? Some say we should get into the discipline of writing code that always reveals its intent: should the tools make a crack at interpreting the intention of the code-as-prose?
It’s been a while since the last Reading List! Since then, Vadim Makeev and I recorded episode 6 of The F-Word, our podcast, covering the Mozilla layoffs, modals and focus, AVIF, and the AdBlock Plus lawsuit. We also chatted with co-inventor of CSS, Håkon Wium Lie, and Brian Kardell of Igalia about the health of the web ecosystem. Anyway, enough about me. Here’s what I’ve been reading about the web since the last mail.
I had an item in OmniFocus to “write on why I wish I was still using my 2006 iBook”, and then Tim Sneath’s tweet on unboxing a G4 iMac sealed the deal. I wish I was still using my 2006 iBook. I had been using NeXTSTEP for a while, and Mac OS X for a short amount of time, by this point, but on borrowed hardware, mostly spares from the University computing lab.
My “up-to-date” setup was my then-girlfriend’s PowerBook G3 “Wall Street” model, which upon being handed down to me usually ran OpenDarwin, Rhapsody, or Mac OS X 10.2 Jaguar, which was the last release to boot properly on it. When I went to WWDC for the first time in 2005 I set up X Post Facto, a tool that would let me (precariously) install and run 10.3 Panther on it, so that I could ask about Cocoa Bindings in the labs. I didn’t get to run the Tiger developer seed we were given.
When the dizzying salary of my entry-level sysadmin job in the Uni finally made a dent in my graduate-level debts, I scraped together enough money for the entry-level 12” iBook G4 (which did run Tiger, and Leopard). I think it lasted four years until I finally switched to Intel, with an equivalent white acrylic 13” MacBook model. Not because I needed an upgrade, but because Apple forced my hand by making Snow Leopard (OS X 10.6) Intel-only. By this time I was working as a Mac developer so had bought in to the platform lock-in, to some extent.
The treadmill turns: the white MacBook was replaced by a mid-decade MacBook Air (for 64-bit support), which developed a case of “fruit juice on the GPU” so finally got replaced by the 2018 15” MacBook Pro I use to this day. Along the way, a couple of iMacs (both Intel, both aluminium, the second being an opportunistic upgrade: another hand-me-down) came and went, though the second is still used by a friend.
Had it not been for the CPU changes and my need to keep up, could I still use that iBook in 2020? Yes, absolutely. Its replaceable battery could be improved, its browser could be the modern TenFourFox, the hard drive could be replaced with an SSD, and then I’d have a fast, quiet computer that can compile my code and browse the modern Web.
Would that be a great 2020 computer? Not really. As Steven Baker pointed out when we discussed this, computers have got better in incremental ways that eventually add up: hardware AES support for transparent disk encryption. Better memory controllers and more RAM. HiDPI displays. If I replaced the 2018 MBP with the 2006 iBook today, I’d notice those things get worse way before I noticed that the software lacked features I needed.
On the other hand, the hardware lacks a certain emotional playfulness: the backlight shining through the Apple logo. The sighing LED indicating that the laptop is asleep. The reassuring clack of the keys.
Are those the reasons this 2006 computer speaks to me through the decades? They’re charming, but they aren’t the whole reason. Most of it comes down to an impression that that computer was mine and I understood it, whereas the MBP is Apple’s and I get to use it.
A significant input into that is my own mental health. Around 2014 I got into a big burnout, and stopped paying attention to the updates. As a developer, that was a bad time because it was when Apple introduced, and started rapidly iterating on, the Swift programming language. As an Objective-C and Python expert (I’ve published books on both), with limited emotional capacity, I didn’t feel the need to become an expert on yet another language. To this day, I feel like a foreign tourist in Swift and SwiftUI, able to communicate intent but not to fully immerse in the culture and understand its nuances.
A significant part of that is the change in Apple’s stance from “this is how these things work” to “this is how you use these things”. I don’t begrudge them that at all (I did in the Dark Times), because they are selling useful things that people want to use. But there is decidedly a change in tone, from the “Come in it’s open” logo on the front page of the developer website of yore to the limited, late open source drops of today. From the knowledge oriented programming guides of the “blue and white” documentation archive to the task oriented articles of today.
Again, I don’t begrudge this. Developers have work to do, and so want to complete their tasks. Task-oriented support is entirely expected and desirable. I might formulate an argument that it hinders “solutions architects” who need to understand the system in depth to design a sympathetic system for their clients’ needs, but modern software teams don’t have solutions architects. They have their choice of UI framework and a race to an MVP.
Of course, Apple’s adoption of machine learning and cloud systems also means that in many cases, the thing isn’t available to learn. What used to be an open source software component is now an XPC service that calls into a black box that makes a network request. If I wanted to understand why the spell checker on modern macOS or iOS is so weird, Apple would wave their figurative hands and say “neural engine”.
And a massive contribution is the increase in scale of Apple’s products in the intervening time. Bear in mind that at the time of the 2006 iBook, I had one of Apple’s four Mac models, access to an XServe and Airport base station, and a friend who had an iPod, and felt like I knew the whole widget. Now, I have the MBP (one of six models), an iPhone (not the latest model), an iPad (not latest, not Pro), the TV doohickey, no watch, no speaker, no home doohickey, no auto-unlock car, and I’m barely treading water.
Understanding a G4-vintage Mac meant understanding PPC, Mach, BSD Unix, launchd, a couple of directory services, Objective-C, Cocoa, I/O Kit, Carbon, AppleScript, the GNU tool chain and Jam, sqlite3, WebKit, and a few ancillary things like the Keychain and HFS+. You could throw in Perl, Python, and the server stuff like XSAN and XGrid, because why not?
Understanding a modern Mac means understanding that, minus PPC, plus x86_64, the LLVM tool chain, sandbox/seatbelt, Scheme, Swift, SwiftUI, UIKit, “modern” AppKit (with its combination of layer-backed, layer-hosting, cell-based and view-based views), APFS, JavaScript and its hellscape of ancillary tools, geocoding, machine learning, the T2, BridgeOS…
I’m trying to trust a computer I can’t mentally lift.
I was shoulder-surfing my coworker the other day when he did something that I imagine is common knowledge to everyone except me.
When I’m trying to do something like monitor how quickly a file is growing, it’s not uncommon to see a terminal window on my screen that looks like this:
➜ du -hs index.html
4.0K index.html
➜ du -hs index.html
4.0K index.html
➜ du -hs index.html
5.0K index.html
➜ du -hs index.html
6.0K index.html
➜ du -hs index.html
8.0K index.html
➜ du -hs index.html
12.0K index.html
Not only is this untidy, you hardly look impressive, sitting there jabbing wildly at your up and return keys.
This is why I found it somewhat revelatory when my coworker entered the command watch du -hs index.html, and I saw something like the following:
Every 2.0s: du -hs index.html
4.0K index.html
From the man pages:
NAME
       watch - execute a program periodically, showing output fullscreen

SYNOPSIS
       watch [options] command

DESCRIPTION
       watch runs command repeatedly, displaying its output and errors (the first screenfull). This allows you to watch the program output change over time. By default, command is run every 2 seconds and watch will run until interrupted.
If you’re a macOS user like myself, this command is available via the Homebrew package watch.
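If watch isn’t installed (stock macOS without Homebrew, say), a rough stand-in is a shell loop. The following poll helper is a hypothetical sketch of what watch does by default, not part of the real tool:

```shell
# Hypothetical stand-in for watch(1) on systems without it:
# re-run the given command every 2 seconds, clearing the screen
# each time, until interrupted with Ctrl-C.
poll() {
  while true; do
    clear
    "$@"
    sleep 2
  done
}

# Usage: poll du -hs index.html
```

The real watch adds niceties this sketch lacks, such as -n to change the interval and -d to highlight differences between successive runs.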
I had need to test an application built for Linux, and didn’t want to run a whole desktop in a window using Virtualbox. I found the bits I needed online in various forums, but nowhere was it all in one place. It is now!
Prerequisites: Docker and XQuartz. Both can be installed via Homebrew.
Create a Dockerfile:
FROM debian:latest
RUN apt-get update && apt-get install -y iceweasel
RUN export uid=501 gid=20 && \
    mkdir -p /home/user && \
    echo "user:x:${uid}:${gid}:User,,,:/home/user:/bin/bash" >> /etc/passwd && \
    echo "staff:x:${uid}:" >> /etc/group && \
    echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
    chmod 0440 /etc/sudoers && \
    chown ${uid}:${gid} -R /home/user
USER user
ENV HOME /home/user
CMD /usr/bin/iceweasel
It’s good to mount the Downloads folder within /home/user, or your Documents, or whatever. On Catalina or later you’ll get warnings asking whether you want to give Docker access to those folders.
The first time through, open XQuartz, go to Preferences > Security, check the option to allow connections from network clients, then quit XQuartz.
Now open XQuartz, and in the xterm type:
$ xhost + $YOUR_IP
$ docker build -f Dockerfile -t firefox .
$ docker run -it -e DISPLAY=$YOUR_IP:0 -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/Downloads:/home/user/Downloads firefox
Enjoy Firefox (or, more likely, the custom app you’re testing under Linux)!
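Since the same address appears in both the xhost and docker run steps, it can help to keep it in one place. This is a sketch of my own, not from the original post; $YOUR_IP stands for your Mac’s LAN address (ipconfig getifaddr en0 usually prints it on macOS), and 192.168.1.10 is a placeholder:

```shell
# Sketch: hold the display address in one variable so the xhost and
# docker run steps can't drift out of sync.
YOUR_IP="192.168.1.10"        # placeholder; substitute your own LAN IP
DISPLAY_ADDR="$YOUR_IP:0"

# The two xterm commands, parameterised (echoed here as a dry run;
# drop the echo to execute them for real):
echo "xhost + $YOUR_IP"
echo "docker run -it -e DISPLAY=$DISPLAY_ADDR \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v $HOME/Downloads:/home/user/Downloads firefox"
```

When you’re finished testing, xhost - $YOUR_IP revokes the access you granted earlier.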
All the Birmingham-flavoured tech content on this page is supplied by local bloggers: