Last updated: August 03, 2021 08:22 AM (All times are UTC.)

July 30, 2021

Reading List 280 by Bruce Lawson (@brucel)

  • Link ‘o the week: Safari isn’t protecting the web, it’s killing it – also featuring “Safari is killing the web through show-stopping bugs”, e.g. IndexedDB, localStorage, 100vh, Fetch requests skipping the service worker, and all your other favourites. Sort it out, NotSteve.
  • Do App Store Rules Matter? – “half of the revenue comes from just 0.5% of the users … last year, Apple made $7-8bn in commission revenue from perhaps 5m people spending over $450 a quarter on in-app purchases in games. Apple has started talking a lot about helping you use your iPhone in more healthy ways, but that doesn’t seem to extend to the app store.”
  • URL protocol handler registration for PWAs – Chromium 92 will let installed PWAs handle links that use a specific protocol for a more integrated experience (see the manifest sketch after this list).
  • Xiaomi knocks Apple off #2 spot – but will it become the next Huawei? – “Chinese manufacturer Xiaomi is now the second largest smartphone maker in the world. In last quarter’s results it had a 17% share of global smartphone shipments, ahead of Apple’s 14% and behind Samsung’s 19%.”
  • One-offs and low-expectations with Safari – “Jen Simmons solicited some open feedback about WebKit, asking what they might “need to add / change / fix / invent to help you?” … If I could ask for anything, it’d be that Apple loosen the purse strings and let Webkit be that warehouse for web innovation that it was a decade ago.” by Dave Rupert
  • Concern trolls and power grabs: Inside Big Tech’s angry, geeky, often petty war for your privacy – “Inside the World Wide Web Consortium, where the world’s top engineers battle over the future of your data.”
  • Intent to Ship: CSS Overflow scrollbar-gutter – Tab Atkins explains “the space that a scrollbar *would* take up is always reserved regardless, so it doesn’t matter whether the scrollbar pops in or not. You can also get the same space reserved on the other side for visual symmetry if you want.”
  • Homepage UX – 8 Common Pitfalls – starring: 75% of sites with carousels implement them badly (TL;DR: don’t bother); 22% hide the Search field; 35% don’t implement country and language selection correctly; 43% don’t style clickable interface elements effectively.
  • WVG – Web Vector Graphics – a proof of concept spec from yer man @Hixie on a vector graphics format for Flutter (whatever that is). Interesting discussions on design decisions and trade-offs.
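
For the protocol handler item above: the registration lives in the web app manifest. A minimal sketch (the scheme and URL are illustrative, not from the linked article):

{
  "protocol_handlers": [
    { "protocol": "web+coffee", "url": "/order?type=%s" }
  ]
}

Custom schemes must begin with web+ (or be on the browser’s safelist), and %s is replaced with the URL that was clicked.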

July 13, 2021

$ docker container list -a

I noticed I had a large number of Docker containers that weren’t being used. Fortunately, Docker makes it easy to single out a container from this list and remove all containers that were created before it.

docker ps -aq -f "before=widgets_web_run_32cb6f5d2d71" | xargs docker container rm

There are a bunch of handy filters that can be applied to docker ps. Nice for a bit of spring cleaning.
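
A few more examples, as untested sketches (the container name is the one from the command above):

# remove all stopped containers in one go
docker container prune

# list only containers that have exited
docker ps -aq -f "status=exited"

# list containers created after a given container
docker ps -aq -f "since=widgets_web_run_32cb6f5d2d71"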

July 09, 2021

Reading List 279 by Bruce Lawson (@brucel)

I’ve had a number of conversations about what “we” in the “free software community” “need” to do to combat the growth in proprietary, user-hostile and customer-hostile business models like cloud user-generated content hosts, social media platforms, hosted payment platforms, videoconferencing services etc. Questions can often be summarised as “what can we do to get everyone off of Facebook groups”, “how do we get businesses to adopt Jitsi Meet instead of Teams” or “how do we convince everyone that Mattermost is better for community chat than Slack”.

My answer is “we don’t”, which is very different from “we do nothing about those things”. Scaled software platforms introduce all sorts of problems that are only caused by trying to operate the software at scale, and the reason the big Silicon Valley companies are that big is that they have to spend a load of resources just to tread water because they’ve made everything so complex for themselves.

This scale problem has two related effects: firstly the companies are hyper-concerned about “growth” because when you’ve got a billion users, your shareholders want to know where the next hundred million are coming from, not the next twenty. Secondly the companies are overly-focused on lowest common denominator solutions, because Jennifer Miggins from South Shields is a rounding error and anything that’s good enough for Scott Zablowski from Los Angeles will have to be good enough for her too, and the millions of people on the flight path between them.

Growth hacking and lowest common denominator experiences are their problems, so we should avoid making them our problems, too. We already have various tools for enabling growth: the freedom to use the software for any purpose being one of the most powerful. We can go the other way and provide deeply-specific experiences that solve a small collection of problems incredibly well for a small number of people. Then those people become super-committed fans because no other thing works as well for them as our thing, and they tell their small number of friends, who can not only use this great thing but have the freedom to study how the program works, and change it so it does their computing as they wish—or to get someone to change it for them. Thus the snowball turns into an avalanche.

Each of these massive corporations with their non-free platforms that we’re trying to displace started as a small corporation solving a small problem for a small number of people. Facebook was internal to one university. Apple sold 500 computers to a single reseller. Google was a research project for one supervisor. This is a view of the world that’s been heavily skewed by the seemingly ready access to millions of dollars in venture capital for disruptive platforms, but many endeavours don’t have access to that capital and many that do don’t succeed. It is ludicrous to try and compete on the same terms without the same resources, so throw Marc Andreessen’s rulebook away and write a different one.

We get freedom to a billion people a handful at a time. That reddit-killing distributed self-hosted tool you’re building probably won’t kill reddit, sorry. Design for that one farmer’s cooperative in Skåne, and other farmers and other cooperatives will notice. Design for that one town government in Nordrhein-Westfalen, and other towns and other governments will notice. Design for that one biochemistry research group in Brasilia, and other biochemists and other researchers will notice. Make something personal for a dozen people, because that’s the one thing those massive vendors will never do and never even understand that they could do.

July 02, 2021

All the omens were there: a comet in the sky; birds fell silent; dogs howled; unquiet spirits walked the forests and the dark, empty places. Yes, it was a meeting request from Marketing to discuss a new product page with animations that are triggered on scroll.

Much as a priest grasps his crucifix when facing a vampire, I immediately reached for Intersection Observer to avoid the browser grinding to a halt when watching to see if something is scrolled into view. And, like an exorcist sprinkling holy water on a demon, I also cleansed the code with a prefers-reduced-motion media query.

prefers-reduced-motion looks at the user’s operating system settings to see if they wish to suppress animations (for performance reasons, or because animations make them nauseous, perhaps because they have a vestibular disorder). A responsible developer will check whether the user has indicated that they prefer reduced motion, and use or avoid animations accordingly.
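
If you’re wiring this up in JavaScript, a minimal sketch might look like the following (the class names are invented for illustration):

// Only hook up scroll-triggered animation if the user hasn’t
// asked their operating system to reduce motion.
const prefersReducedMotion =
  window.matchMedia('(prefers-reduced-motion: reduce)').matches;

if (!prefersReducedMotion) {
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        entry.target.classList.add('animate-in'); // the CSS does the animating
        observer.unobserve(entry.target);         // trigger once per element
      }
    });
  });
  document.querySelectorAll('.reveal-on-scroll')
    .forEach((el) => observer.observe(el));
}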

Content images can be made to switch between animated and static using the <picture> element (so brilliantly brought to you by a team of gorgeous hunks). Bradbury Frosticle gives this example:

<picture>
  <source srcset="no-motion.jpg" media="(prefers-reduced-motion: reduce)">
  <img srcset="animated.gif" alt="brick wall">
</picture>

Now, obviously you are a responsible developer. You use alt text (and don’t call it “alt tags”). But unfortunately, not everyone is like you. There are also 639 squillion websites out there that were made before prefers-reduced-motion was a thing, and only 3 of them are actively maintained.

It seems to me that browsers could do more to protect their users. Browsers are, after all, user agents that protect the visitor from pop-ups, malicious sites, autoplaying videos and other denizens of the underworld. They should also protect users against nausea and migraines, regardless of whether the developer thought to (or had the tools available to).

So, I propose that browsers should never respect scroll-behavior: smooth; if a user prefers reduced motion, regardless of whether a developer has set the media query.

Animated GIFs (and their animated counterparts in more modern image formats) are somewhat more complex. Since 2018, the Chromium-based Vivaldi browser has allowed users to deactivate animated GIFs right from the Status Bar, but this isn’t useful in all circumstances.

My initial thought was that the browser should only show the first five seconds because WCAG Success Criterion 2.2.2 says

For any moving, blinking or scrolling information that (1) starts automatically, (2) lasts more than five seconds, and (3) is presented in parallel with other content, there is a mechanism for the user to pause, stop, or hide it

But this wouldn’t allow a user to choose to see more than five seconds. The essential aspect of this success criterion is, I think, control. Melanie Richards, who works on accessibility for the Microsoft Edge browser, noted

IE had a neat keyboard shortcut that would pause any GIFs on command. I’d be curious to get your take on the relative usefulness of a browser setting such as you described, and/or a more contextual command.

I’m doubtful about discoverability of keyboard commands, but Travis Leithead (also Microsoft) suggested

We can use ML + your mic (with permission of course) to detect your frustration at forever looping Gifs and stop just the right one in real time. Or maybe just a play/pause control would help… +1 customer request!

And yes, a play/pause mechanism gives maximum control to the user. This would also solve the problem of looping GIFs in marketing emails, which are commonly displayed in browsers. There is an added complexity of what to do if the animated image is a link, but I’m not paid a massive salary to make web browsers, so I leave that to mightier brains than I possess.

In the meantime, until all browser vendors do my bidding (it only took 5 years for the <picture> element), please consider adding this to your CSS (by Thomas Steiner)


@media (prefers-reduced-motion: reduce) {
  *, ::before, ::after {
    animation-delay: -1ms !important;
    animation-duration: 1ms !important;
    animation-iteration-count: 1 !important;
    background-attachment: initial !important;
    scroll-behavior: auto !important;
    transition-duration: 0s !important;
    transition-delay: 0s !important;
  }
}

I am not an MBA, but it seems to me that making your users puke isn’t the best beginning to a successful business relationship.

June 18, 2021

Reading List 278 by Bruce Lawson (@brucel)

June 12, 2021

The latest in a series of posts on getting Unity Test Framework tests running in Jenkins in the cloud, because this seems entirely undocumented.

To recap, we had got as far as running our playmode tests on the cloud machine, but we had to use the -batchmode -nographics command line parameters. If we don’t, we get tons of errors about non-interactive window sessions. But if we do, we can no longer rely on animation, physics, or some coroutines during our tests! This limits us to basic lifecycle and validation tests, which isn’t great.
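
For reference, the kind of invocation we’re talking about looks roughly like this (paths and the project folder are placeholders, not our real ones):

Unity.exe -runTests -projectPath C:\Workspace\widgets -testPlatform PlayMode -testResults C:\Workspace\results.xml -batchmode -nographics

Dropping -batchmode -nographics from that line is exactly what the rest of this post works towards.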

We need our cloud machine to pretend there’s a monitor attached, so Unity can run its renderer and physics.

First, we’re going to need to make sure we have enough grunt in our cloud machine to run the game at a solid framerate. We use EC2, with the g4dn.xlarge machine (which has a decent GPU) and the https://aws.amazon.com/marketplace/pp/prodview-xrrke4dwueqv6?ref=cns_srchrow#pdp-overview AMI, which pre-installs the right GPU drivers.

To do this, we’re going to set up a non-admin Windows account on our cloud machine (because that’s just good practice), get it to auto-login on boot, and ask it to connect to Jenkins under this account. Read on for more details.

First, set up your new Windows account by remoting into the admin account of the cloud machine:

  • Type “add user” in the Windows Start menu to get started adding your user. I call mine simply “jenkins”. Remember to save the password somewhere safe!
  • We need to be able to remote into the new user, so go to System Properties, and on the Remote tab click Select Users, and add your jenkins user.
  • If Jenkins has already run on this machine, you’ll want to give the new jenkins user rights to modify the c:\Workspace folder.
  • You’ll also want to go into the Services app, find the Jenkins service, and disable it.
  • Next, download Autologon (https://docs.microsoft.com/en-us/sysinternals/downloads/autologon), uncompress it somewhere sensible, then run it:
    • Enter your new jenkins account details
    • Click Enable
    • Close the dialog

Now, log out of the admin account, and you should be able to remote desktop into the new account using the credentials you saved.

Now we need to make this new account register the computer with your Jenkins server once it comes online. More details are at https://wiki.jenkins.io/display/JENKINS/Distributed+builds#Distributedbuilds-Agenttomasterconnections, and it may be a bit different for you depending on your setup, but here’s what we do:

  • From the remote desktop of the jenkins user account, open a browser and log into your Jenkins server
  • Go to the node page for your new machine, and configure the Launch Type to be Launch Agent By Connecting It To The Master
  • Switch to the node’s status tab and you should have an orange button to download the agent jnlp file
  • Put this file in the %userprofile%\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup folder
  • Change the Launch Type back to whatever you need (we use the Slave Setup plugin, despite the icky name: https://plugins.jenkins.io/slave-setup/); it doesn’t need to stay as Launch Agent By Connecting It To The Master.

We’re done. Log out of remote desktop and reboot the machine. You should see it come alive in the Jenkins server after a few minutes. If you remove the -batchmode and -nographics options from your Unity commands, you should see the tests start to run with full physics and animation!

June 10, 2021

I’ve talked before about the non-team team dynamic that is “one person per task”, where the management and engineers collude to push the organisation beyond a sustainable pace by making sure that, at all times, each individual is kept busy and collaboration is minimised.

I talked about the deleterious effect on collaboration, particularly that code review becomes a burden resolved with a quick “LGTM”. People quickly develop specialisations and fiefdoms: oh there’s a CUDA story, better give it to Yevgeny as he worked on the last CUDA story.

The organisation quickly adapts to this balkanisation and optimises for it. Is there a CUDA story in the next sprint? We need something for Yevgeny to do. This is Conway’s Corollary: the most efficient way to develop software is when the structure matches the org chart.

Basically all forms of collaboration become a slog when there’s no “us” in team. Unfortunately, the contradiction at the heart of this 19th century approach to division of labour is that, when applied to knowledge work, the value to each participant of being in meetings is minimised, while the necessity for each participant to be in a meeting is maximised.

The value is minimised because each person has their personal task within their personal fiefdom to work on. Attending a meeting takes away from the individual productivity that the process is optimising for. Additionally, it increases the likelihood that the meeting content will be mostly irrelevant: why should I want to discuss backend work when Sophie takes all the backend tasks?

The meetings are necessary, though, because nobody owns the whole widget. No-one can see the impact of any workflow change, or dependency adoption, or clean up task, because nobody understands more than a 1/N part of the whole system. Every little thing needs to be run by Sophie and Yevgeny and all the others because no-one is in a position to make a decision without their input.

This might sound radically democratic, and not the sort of thing you’d expect from a business: nobody can make a decision without consulting all the workers! Power to the people! In fact it’s just entirely progress-destroying: nobody can make a decision at all until they’ve got every single person on board, and that’s so much work that a lot of decisions will be defaulted. Nothing changes.

And there’s no way within this paradigm to avoid that. Have fewer meetings, and each individual is happier because they get to maximise progress time spent on their individual tasks. But the work will eventually grind to a halt, as the architecture reflects N different opinions, and the N! different interfaces (which have each fallen into the unowned gaps between the individual contributors) become harder to work with.

Have more meetings, and people will grumble that there are too many meetings. And that Piotr is trying to land-grab from other fiefdoms by pushing for decisions that cross into Sophie’s domain.

The answer is to reconstitute the team – preferably along self-organising principles – into a cybernetic organism that makes use of its constituent individuals as they can best be applied, but in pursuit of the team’s goals, not N individual goals. This means radical democracy for some issues, (agreed) tyranny for others, and collective ignorance of yet others.

It means in some cases giving anyone the autonomy to make some choices, but giving someone with more expertise the autonomy to override those choices. In some cases, all decisions get made locally; in others, they must be run past an agreed arbiter. In some cases, it means having one task per team, or even no tasks per team if the team needs to do something more important before it can take on another task.

June 04, 2021

…and this time it’s online! Back in 2019, we toured ‘A Moment of Madness’ to a number of UK festivals. AMoM is an immersive experience with live actors and Escape Game-style puzzles where the players are on a stake-out in a multi-storey car park tracking politician Michael Makerson (just another politician with a closet full […]

Random Expo.io tips by Bruce Lawson (@brucel)

I’m doing some accessibility testing on a React Native codebase that uses Expo during development. If you’ve done something seriously wrong in a previous life and karma has condemned you to using React Native rather than making a Progressive Web App, Expo is jolly useful. It gives you a QR code that you can scan with Android or iOS to ‘install’ the app on your device and you get live reload of any changes. It’s like sinking in a quagmire of shit but someone nice is bringing you beer and playing Abba while it happens. (Beats me why that isn’t their corporate strapline.)

Anyway, I struggled a bit to set it up so here are some random tips that I learned the hard way:

  • If your terminal yells “Error: EMFILE: too many open files, watch at FSEvent.FSWatcher._handle.onchange (internal/fs/watchers.js:178:28) error Command failed with exit code 1” when you start Expo, stop it, do brew install watchman and re-start Expo. Why? No idea. Someone from StackOverflow told me. Most npm stuff is voodoo magic: just install all the things, hope none of them were made in Moscow by Vladimir Evilovich of KGB Enterprises, ignore all the deprecation warnings and cross your fingers.
  • If you want to open your app in an iOS simulator, you need Xcode and you need to install the Xcode command line tools or it’ll just hang.
  • Scrolling in the iOS simulator is weird. Press the trackpad with one hand and scroll with the other hand. Or enable three-finger drag and have one hand free for coffee, smoking or whatever other filthy habits you’ve developed while working from home.
  • If you don’t want it, you can turn off the debugging menu overlay.
  • If you like CSS, under no circumstances view source of the app running in the browser. It is, however, full of lots of ARIA nourishment thanks to React Native for Web.

Who knows? One day, Apple may decide not to hamstring PWAs on iOS and we can all use the web to run on any device and any browser, just as Sir Uncle Timbo intended.

June 03, 2021

AMA by Graham Lee

It was requested on twitter that I start answering community questions on the podcast. I’ve got a few to get the ball rolling, but what would you like to ask? Comment here, or reach me wherever you know I hang out.

June 01, 2021

Having looked at hopefully modern views on Object-Oriented analysis and design, it’s time to look at what happened to Object-Oriented Programming. This is an opinionated, ideologically-motivated history, that in no way reflects reality: a real history of OOP would require time and skills that I lack, and would undoubtedly be almost as inaccurate. But in telling this version we get to learn more about the subject, and almost certainly more about the author too.
They always say that history is written by the victors, and it’s hard to argue that OOP was anything other than victorious. When people explain how they prefer to write functional programs because it helps them to “reason about” code, all the reasoning that was done about the code on which they depend was done in terms of objects. The ramda or lodash or Elm-using functional programmer writes functions in JavaScript on an engine written in C++. Swift Combine uses a functional pipeline to glue UIKit objects to Core Data objects, all in an Objective-C IDE and – again – a C++ compiler.

Maybe there’s something in that. Maybe the functional thing works well if you’re transforming data from one system to another, and our current phase of VC-backed disruption needs that. Perhaps we’re at the expansion phase, applying the existing systems to broader domains, and a later consolidation or contraction phase will demand yet another paradigm.
Anyway, Object-Oriented Programming famously (and incorrectly, remember) grew out of the first phase of functional programming: the one that arose when it wasn’t even clear whether computers existed, or if they did whether they could be made fast enough or complex enough to evaluate a function. Smalltalk may have borrowed a few ideas from Simula, but it spoke with a distinct Lisp.

We’ll fast forward through that interesting bit of history when all the research happened, to that boring bit where all the development happened. The Xerox team famously diluted the purity of their vision in the hot-air balloon issue of Byte magazine, and a whole complex of consultants, trainers and methodologists jumped in to turn a system learnable by children into one that couldn’t be mastered by professional programmers.
Actually, that’s not fair: the system already couldn’t be mastered by professional programmers, a breed famous for assuming that they are correct and that any evidence to the contrary is flawed. It was designed to be learnable by children, not by those who think they already know better.

The result was the ramda/lodash/Elm/Clojure/F# of OOP: tools that let you tell your local user group that you’ve adopted this whole Objects thing without, y’know, having to do that. Languages called Object-*, Object*, Objective-*, O* added keywords like “class” to existing programming languages so you could carry on writing software as you already had been, but maybe change the word you used to declare modules.

Eventually, the jig was up, and people cottoned on to the observation that Object-Intercal is just Intercal no matter how you spell come.from(). So the next step was to change the naming scheme to make it a little more opaque. C++ is just C with Classes. So is Java, so is C#. Visual BASIC.net is little better.

Meanwhile, some people who had been using Smalltalk and getting on well with fast development of prototypes that they could edit while running into a deployable system had an idea. Why not tell everyone else how great it is to develop prototypes fast and edit them while running into the deployable system? The full story of that will have to wait for the Imagined History of Agile, but the TL;DR is that whatever they said, everybody heard “carry on doing what we’re already doing but plus Jira”.

Well, that’s what they heard about the practices. What they heard about the principles was “principles? We don’t need no stinking principles, that sounds like Big Thinking Up Front urgh” so decided to stop thinking about anything as long as the next two weeks of work would be paid for. Yes, iterative, incremental programming introduced the idea of a project the same length as the gap between pay checks, thus paving the way for fire and rehire.

And thus we arrive at the ideological void of today’s computering. A phase in which what you don’t do is more important than what you do: #NoEstimates, #NoSQL, #NoProject, #NoManagers…#NoAdvances.

Something will fill that void. It won’t be soon. Functional programming is a loose collection of academic ideas with negation principles – #NoSideEffects and #NoMutableState – but doesn’t encourage anything new. As I said earlier, it may be that we don’t need anything new at the moment: there’s enough money in applying the old things to new businesses and funnelling more money to the cloud providers.

But presumably that will end soon. The promised and perpetually interrupted parallel computing paradigm we were guaranteed upon the death of Moore’s Law in the (1990s, 2000s, 2010s) will eventually meet the observation that every object larger than a grain of rice has a decent ARM CPU in, leading to a revolution in distributed consensus computing. Watch out for the blockchain folks saying that means they were right all along: in a very specific way, they were.

Or maybe the impressive capability but limited applicability of AI will meet the limited capability but impressive applicability of intentional programming in some hybrid paradigm. Or maybe if we wait long enough, quantum computing will bring both its new ideas and some reason to use them.

But that’s the imagined future of programming, and we’re not in that article yet.

May 31, 2021

We left off in the last post with an idea of how Object-Oriented Analysis works: if you’re thinking that it used around a thousand words to depict the idea “turn nouns from the problem domain into objects and verbs into methods” then you’re right, and the only reason to go into so much detail is that the idea still seems to confuse people to this day.

Similarly, Object-Oriented Design – refining the overall problem description found during analysis into a network of objects that can simulate the problem and provide solutions – can be glibly summed up in a single sentence. Treat any object uncovered in the analysis as a whole, standalone computer program (this is called encapsulation), and do the object-oriented analysis stuff again at this level of abstraction.

You could treat it as turtles all the way down, but just as Physics becomes quantised once you zoom in far enough, the things you need an object to do become small handfuls of machine instructions and there’s no point going any further. Once again, the simulation has become the deployment: this time because the small standalone computer program you’re pretending is the heart of any object is a small standalone computer program.

I mean, that’s literally it. Just as the behaviour of the overall system could be specified in a script and used to test the implementation, so the behaviour of these units can be specified in a script and used to test the implementation. Indeed some people only do this at the unit level, even though the unit level is identical to the levels above and below it.

Though up and down are not the only directions we can move in, and it sometimes makes more sense to think about in and out. Given our idea of a User who can put things in a Cart, we might ask questions like “how does a User remember what they’ve bought in the past” to move in towards a Ledger or PurchaseHistory, from where we move out (of our problem) into the realm of data storage and retrieval.

Or we can move out directly from the User, asking “how do we show our real User out there in the world the activities of this simulated User” and again leave our simulation behind to enter the realm of the user interface or network API. In each case, we find a need to adapt from our simulation of our problem to someone’s (probably not ours, in 2021) simulation of what any problem in data storage or user interfaces is; this idea sits behind Cockburn’s Ports and Adapters.
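
A minimal sketch of that Ports and Adapters shape (JavaScript, all names invented): the domain object defines the port it needs in the problem’s own terms, and an adapter implements that port out in the storage realm.

// The domain object names a port: anything with a load() method will do.
class PurchaseHistory {
  constructor(ledgerStore) {
    this.ledgerStore = ledgerStore; // the port
  }
  purchasesFor(user) {
    return this.ledgerStore.load(user.id);
  }
}

// One adapter among many possible: an in-memory store, which keeps
// the simulation runnable on its own and is handy in tests.
class InMemoryLedgerStore {
  constructor() { this.records = new Map(); }
  load(userId) { return this.records.get(userId) ?? []; }
  save(userId, purchases) { this.records.set(userId, purchases); }
}

Swapping InMemoryLedgerStore for a database-backed adapter changes the storage realm without touching the simulation.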

Moving in either direction, we are likely to encounter problems that have been solved before. The trick is knowing that they have been solved before, which means being able to identify the commonalities between what we’re trying to achieve and previous solutions, which may be solutions to entirely different problems but which nonetheless have a common shape.

The trick object-oriented designers came up with to address this discovery is the Pattern Language (OK, it was architect Christopher Alexander’s idea: great artists steal and all that), in which shared solutions are given common names and descriptions so that you can explore whether your unique problem can be cast in terms of this shared description. In practice, the idea of a pattern language has fared incredibly well in software development: whenever someone says “we use container orchestration” or “my user interface is a functional reactive program” or “we deploy microservices brokered by a message queue” they are relying on the success of the pattern language idea introduced by object-oriented designers.

Meanwhile, in theory, the concept of a pattern language failed, and if you ask a random programmer about design patterns they will list Singleton and maybe a couple of other 1990s-era implementation patterns before telling you that they don’t use patterns.

And that, pretty much, is all they wrote. You can ask your questions about what each object needs to do spatially (“who else does this object need to talk to in order to answer this question?”), temporally (“what will this object do when it has received this message?”), or physically (“what executable running on what computer will this object be stored in?”). But really, we’re done, at least for OOD.

Because the remaining thing to do (which isn’t to say the last thing in a sequence, just the last thing we have yet to talk about) is to build the methods that respond to those messages and pass those tests, and that finally is Object-Oriented Programming. If we start with OOP then we lose out, because we try to build software without an idea of what our software is trying to be. If we finish with OOP then we lose out, because we designed our software without using knowledge of what that software would turn into.

May 30, 2021

I’ve made a lot over the years, including in my book Object-Oriented Programming the Easy Way, of my assertion that one reason people are turned off from Object-Oriented Programming is that they weren’t doing Object-Oriented Design. Smalltalk was conceived as a system for letting children explore computers by writing simulations of other problems, and if you haven’t got the bit where you’re creating simulations of other problems, then you’re just writing with no goal in mind.

Taken on its own, Object-Oriented Programming is a bit of a weird approach to modularity where a package’s code is accessed through references to that package’s record structures. You’ve got polymorphism, but no guidance as to what any of the individual morphs are supposed to be, let alone a strategy for combining them effectively. You’ve got inheritance, but inheritance of what, by what? If you don’t know what the modules are supposed to look like, then deciding which of them is a submodule of other modules is definitely tricky. Similarly with encapsulation: knowing that you treat the “inside” and “outside” of modules differently doesn’t help when you don’t know where that side ought to go.

So let’s put OOP back where it belongs: embedded in an object-oriented process for simulating a problem. The process will produce as its output an executable model of the problem that produces solutions desired by…

Well, desired by whom? The answer that Extreme Programming and Agile Software Development, approaches to thinking about how to create software that were born out of a time when OOP was ascendant, would say “by the customer”: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”, they say.

Yes, if we’re working for money then we have to satisfy the customer. They’re the ones with the money in the first place, and if they aren’t then we have bigger problems than whether our software is satisfactory. But they aren’t the only people we have to satisfy, and Object-Oriented Design helps us to understand that.

If you’ve ever seen someone who was fully bought into the Unified Modelling Language (this is true for other ways of capturing the results of Object-Oriented Analysis but let’s stick to thinking about the UML for now), then you’ll have seen a Use Case diagram. This shows you “the system” as an abstract square, with stick figures for actors connected to “the system” via arrows annotated with use cases – descriptions of what the people are trying to do with the system.

We’ve already moved past the idea that the software and the customer are the only two things in the world, indeed the customer may not be represented in this use case diagram at all! What would that mean? That the customer is paying for the software so that they can charge somebody else for use of the software: that satisfying the customer is an indirect outcome, achieved by the action of satisfying the identified actors.

We’ve also got a great idea of what the scope will be. We know who the actors are, and what they’re going to try to do; therefore we know what information they bring to the system and what sort of information they will want to get out. We also see that some of the actors are other computer systems, and we get an idea of what we have to build versus what we can outsource to others.

In principle, it doesn’t really matter how this information is gathered: the fashion used to be for prolix use case documents but these days, shorter user stories (designed to act as placeholders for a conversation where the details can be worked out) are preferred. In practice the latter is better, because it takes into account that the act of doing some work changes the understanding of the work to be done. The later decisions can be made, the more you know at the point of deciding.

On the other hand, the benefit of that UML approach where it’s all stuck on one diagram is that it makes commonalities and patterns clearer. It’s too easy to take six user stories, write them on six GitHub issues, then get six engineers to write six implementations, and hope that some kind of convergent design will assert itself during code review: or worse, that you’ll decide what the common design should have been in the retrospective, and add an “engineering story” to the backlog to be deferred forever more.

The system, at this level of abstraction, is a black box. It’s the opaque “context” of the context diagram at the highest level of the C4 model. But that needn’t stop us thinking about how to code it! Indeed, Behaviour-Driven Development has us do exactly that. Once we’ve come to agreement over what some user story or use case means, we can encapsulate (there’s that word again) our understanding in executable form.

And now, the system-as-simulation nature of OOP finally becomes important. Because we can use those specifications-as-scripts to talk about what we need to do with the customer (“so you’re saying that given a new customer, when they put a product in their cart, they are shown a subtotal that is equal to the cost of that product?”), refining the understanding by demonstrating that understanding in executable form and inviting them to use a simulation of the final product. But because the output is an executable program that does what they want, that simulation can be the final product itself.

A common step when implementing Behaviour-Driven Development is to introduce a translation layer between “the specs” and “the production code”. So “a new user” turns into User.new.save!, and someone has to come along and write those methods (or at least inherit from ActiveRecord). Or worse, “a new user” turns into PUT /api/v0/internal/users, and someone has to both implement that and the system that backs it.

This translation step isn’t strictly necessary. “A new user” can be a statement in your software implementation; your specs can be both a simulation of what the system does and the system itself, and you save yourself a lot of typing and a potentially faulty translation.
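
As a sketch of what that looks like with the translation layer removed (JavaScript with Jest-style specs; all names invented), the spec drives the domain objects directly, so the specification exercises the very code that ships:

// Hypothetical domain module from the simulation.
const { User, Product } = require('./shop');

describe('a new customer', () => {
  it('sees a subtotal equal to the cost of one product', () => {
    const user = new User();                 // “a new user”, no PUT /users
    const product = new Product({ price: 500 });
    user.cart.add(product);                  // “puts a product in their cart”
    expect(user.cart.subtotal()).toBe(500);  // “is shown a subtotal…”
  });
});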

There’s still a lot to Object-Oriented Analysis, but it roughly follows the well-worn path of Domain-Driven Design. Where we said “a new user”, everybody on the team should agree on what a “user” is, and there should be a concept (i.e. class) in the software domain that encapsulates (that word again!) this shared understanding and models it in executable form. Where we said the user “puts a product in their cart”, we all understand that products and carts are things both in the problem and in the software, and that a user can do a thing called “putting” which involves products, and carts, in a particular way.

If it’s not clear what all those things are or how they go together, we may wish to roleplay the objects in the program (which, because the program is a simulation of the things in the problem, means roleplaying those things). Someone is a user, someone is a product, someone is a cart. What does it mean for the user to add a product to their cart? What is “their” cart? How do these things communicate, and how are they changed by the experience?

We’re starting to peer inside the black box of the “system”, so in a future post we’ll take a proper look at Object-Oriented Design.

May 25, 2021

Make Good Grow Volunteering Platform

I was involved in designing concepts for the MGG volunteering platform; the project ranged from initial wireframes to full UI/UX prototyping for investors and stakeholders to review.

Make Good Grow was a complex platform built as a web app for multiple platforms. We worked hard on the user flow and on making all volunteering opportunities easily accessible. We wanted the design to have character and be full of colour and excitement.

The platform also featured a fully embedded event builder where clients could set up their volunteering opportunities. The event builder would walk you step by step through location, categories, skills required and much more. It was designed to be simple, flexible and powerful.

The web version of the site was designed to work with a WordPress theme. If you like this design, please share; and if you’re in the market to book a freelance UI designer, don’t hesitate to contact me.

May 23, 2021

Kangen water is a trademark for machine-electrolysed water, owned by Enagic. Similar water is also variously known as Electrolyzed Reduced Alkaline Water, Alkaline Water, Structured Water, or Hexagonal Water. It’s popular among new-age influencers and their followers, because it raises the vibrational frequency of their chakras and kundalinis, thereby energising the Qi so that Gemini shines in their quantum auras, or something. Of course, none of these claims are verifiable, because they’re all fictional.

Enagic itself is careful not to over-promise; their website says that their water can be used for preparing food, making coffee, watering plants, washing your pets, face, hair – who knew?! It rather confusingly tells you “Take your medicine with this water” (Archived web page):


while their FAQ tells you

Kangen Water® is intended for everyday drinking and cooking rather than for drinking with medication. The Japanese Ministry of Health, Labor and Welfare gives the following directions: Do not take medication with machine-produced water. (Archived page)

Alkaline water’s health benefits

As for health benefits, Enagic simply claims that Kangen water is

perfect for drinking and healthy cooking. This electrolytically-reduced, hydrogen-rich water works to restore your body to a more alkaline state, which is optimal for good health.

The mechanism by which it makes your body more alkaline is not explained, nor why a more alkaline state should be “optimal for good health”. In its article Is alkaline water a miracle cure – or BS? The science is in, The Guardian cites Dr Tanis Fenton, an adjunct professor at the University of Calgary and an evidence analyst for Dietitians of Canada:

Fenton stresses, you simply can’t change the pH of your body by drinking alkaline water. “Your body regulates its [blood] pH in a very narrow range because all our enzymes are designed to work at pH 7.4. If our pH varied too much we wouldn’t survive.”

Masaru Emoto and Water memory

The swamps of social media are full of people selling magic water to each other, making all sorts of outlandish unscientific claims, such as that water has “memory”:

water holds memory

This hokum was popularised by Masaru Emoto, a graduate in International Relations who became a “Doctor” of Alternative Medicine at the Open International University for Alternative Medicine in India, a diploma mill which targeted quacks to sell its degrees and was later shut down.

(The idea that water holds memory is often cited by fans of homeopathy. They agree: there is none of the actual substance remaining in this water because it’s been diluted out of existence, but it does its magical healing because the water *remembers*. Cosmic, maaan.)

Mr Emoto’s results have never been reproduced; his methodology was unscientific, sloppy and subjective. Mr Emoto was invited to participate in the James Randi Educational Foundation’s Million Dollar Challenge, in which pseudo-scientists are invited to replicate their results under properly controlled scientific conditions, and get paid $1 million if they succeed. For some inexplicable reason, he didn’t take up the challenge.

Hexagonal water hydrates better

It’s also claimed that this magical water forms hexagonal “clusters” that somehow make it more hydrating because it’s more readily absorbed by cells. This is nonsense: although water clusters have been observed experimentally, they have a very short lifetime: the hydrogen bonds are continually breaking and reforming at timescales shorter than 200 femtoseconds.

In any case, water molecules enter cells through a structure called an aquaporin, in single file. If they were clustered, they would be too big to enter the cell.

Electrolyzed Reduced Alkaline Water and science

Let’s look at what real scientists say. From Systematic review of the association between dietary acid load, alkaline water and cancer:

Despite the promotion of the alkaline diet and alkaline water by the media and salespeople, there is almost no actual research to either support or disprove these ideas. This systematic review of the literature revealed a lack of evidence for or against diet acid load and/or alkaline water for the initiation or treatment of cancer. Promotion of alkaline diet and alkaline water to the public for cancer prevention or treatment is not justified.

In their paper Physico-Chemical, Biological and Therapeutic Characteristics of Electrolyzed Reduced Alkaline Water (ERAW), Marc Henry and Jacques Chambron of the University of Strasbourg wrote

It was demonstrated that degradation of the electrodes during functioning of the device releases very reactive nanoparticles of platinum, the toxicity of which has not yet been clearly proven. This report recommends alerting health authorities of the uncontrolled availability of these devices used as health products, but which generate drug substances and should therefore be sold according to regulatory requirements.

In short: machine-produced magic water has no scientifically demonstrable health benefits, and may actually be harmful.

Living structured water?

So why do people make such nonsensical claims about magic water? Some do it for profit, of course. Enagic sells via a multi-level marketing scheme, similar to Amway but for Spiritual People, Lightworkers, Starseeds and Truth-Seakers. (Typo intentional; new-age bliss ninnies love to misspell “sister” as “SeaStar”, for example, because the Cosmos reveals itself through weak puns, and —of course— the Cosmos only speaks English. You sea?)

Doubtless many customers and sellers genuinely believe in magic water. It would be tempting to think of this as just some harmless fad for airheads who enjoy sunning their perineums (it was Metaphysical Meagan’s Instagram that first introduced me to magic water machines; of course, she has a “water business”).

But it’s not harmless. The same people who promote magic water also bang on about “sacred earth”, claim Trump won the 2020 election and that Covid is a “plandemic”, and show a disturbing tendency towards far-right politics. The excellent article The New Age Racket and the Left (from 2004 but still worth reading in full) sums it up brilliantly:

At best, then, New Age is a lucrative side venture of neoliberalism, lining the pockets of those crafty enough to package spiritual fulfillment as a marketable product while leaving the spiritually hungry as unsated as ever. At worst, though, it is the expression of something altogether more sinister. Rootedness in the earth, a return to pure and authentic folkways, the embrace of irrationalism, the conviction that there is an authentic way of being beyond politics, the uncritical substitution of group-identification for self-knowledge, are all of them basic features of right-wing ideology…

Many New Agers seem to feel not just secure in but altogether self-righteous about the benevolence of their world-view, pointing to the fact, for example, that it ‘celebrates’ the native cultures that global capitalism would plow over. To this one might respond, first of all, that celebration of native cultures is itself big business. Starbucks does it. So, in its rhetoric, does the Southeast Asian sex-tourism industry. Second, the simple fact that New Age is by its own lights multicultural and syncretistic is by no means a guarantee that it is safe from the accusation of being, at best, permissive of, and, at worst, itself an expression of, right-wing ideology. The Nazis, to return to a tried and true example, were no less obsessed with Indian spirituality than was George Harrison.

Conclusion

Kangen water and its non-branded magical siblings are useless nonsense that wastes resources, as it requires electricity. Enagic gives contradictory advice about whether it’s safe to take (real) medicines with it. Water machines might make the source water worse, because the device releases very reactive nanoparticles of platinum. Most people who extol its virtues want your money, and they might also be anti-scientific quasi-fascists.

May 21, 2021

Reading List 277 by Bruce Lawson (@brucel)

May 20, 2021

TL;DR: I’ve forked the splendid but dated Tota11y accessibility visualisation toolkit, added some extra stuff and corrected a bug, and my employer has let me release it for Global Accessibility Awareness Day.

screenshot of Tota11y on this blog, showing heading structure

A while ago, my very nice client (and now employer), Babylon Health, asked me to help them with their accessibility testing. One plan of attack is an automated scan integrated with the CI system to catch errors during development (more about this when it’s finished). But for small content changes made by marketing folks, this isn’t appropriate.

We tried lots of things like Wave, which are great but rather overwhelming for non-technical people because they tend to cover the page being analysed with arcane symbols, many of which are beyond a CMS content editor’s control anyway. There are lots of excellent accessibility checks in Microsoft Edge devtools, but to a non-technical user, this is how devtools look:

an incredibly elaborate 1980s-style control panel for a nuclear power station

Then I remembered something I’d used a while ago to demo heading structures, a tool called Tota11y from Khan Academy, which is MIT licensed. Note, this is not designed to check everything. Tota11y is a simple tool to visualise the most widespread web accessibility errors in a way that isn’t overwhelmingly techy. It aims to give content authors and editors insights into things they can control. It’s not a cure-all.

There were a few things I wanted to change, specifically for Babylon’s web sites. I wanted the contrast analyser to ignore content that was visually hidden using the common clip pattern and to correct a bug whereby it didn’t calculate contrast properly on large text, and reported an error where there isn’t one. False positives encourage people to ignore the output and this erodes trust in tools. The fix uses code from Firefox devtools; thank you to the Mozilla people who helped me find it. There’s loads of other small changes.
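
For reference, the clip pattern in question is the widely used visually-hidden utility class, something like this:

.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  white-space: nowrap;
}

Content styled this way is announced by screen readers but never painted, so its colour contrast is irrelevant.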

Khan Academy seemed to have abandoned the tool, so I forked it. Here it is, if you want to try it for Global Accessibility Awareness Day. Drag the attractive link to your bookmarks bar, then activate it and inspect your page. The code is on GitHub; don’t laugh at my crap JavaScript. It was also an “interesting” experience learning about npm, LESS, Handlebars and all the stuff I’d managed to avoid so far.

Feel free to use it if it helps you. Pull requests will be gratefully received (as long as they don’t unnecessarily rewrite it in React and Tailwind), and I’ll be making a few enhancements too. Thanks to Khan Academy for releasing the initial project, to my colleagues for testing it, to Jack Roles for making it look pretty, and to Babylon for letting me release the work they were nice enough to pay me for.

Update: Version 1.1 released

17 June 2021: I’ve made some tweaks to the Tota11y UI. A new naming convention replaces “adjective + animal I’ve never eaten”. Versions 1+ are “adjective + musical instrument I’ve never tried to play”. V1.1 is “Rusty Trombone”.

May 15, 2021

Graham Lee Uses This by Graham Lee

I’ve never been famous enough in tech circles to warrant a post on uses this, but the joy of running your own blog is that you get to indulge any narcissistic tendencies with no filter. So here we go!

The current setup

This is the desktop setup. The idea behind this is that it’s a semi-permanent configuration so I can get really comfortable, using the best components I have access to in order to provide a setup I’ll enjoy using. The main features are a Herman Miller Aeron chair, which I got at a steep discount by going for a second-generation fire sale chair, and an M1 Mac Mini coupled to a 24″ Samsung curved monitor, a Matias Tactile Pro 4 keyboard and a Logitech G502 mouse. There’s a Sandberg USB camera (which isn’t great, but works well enough if I use it via OBS’s virtual camera) and a Blue Yeti mic too. The headphones are Marshall Major III, and the Philips FX-10 is used as a Bluetooth stereo.

I do all my streaming (both Dos Amigans and [objc retain];) from this desk too, so all the other hardware you see is related to that. There are two Intel NUC devices (though one is mounted behind one of the monitors), one running FreeBSD (for GNUstep) and one Windows 10 (with WinUAE/Amiga Forever). The Ducky Shine 6 keyboard and Glorious Model O mouse are used to drive whichever box I’m streaming from, which connects to the other Samsung monitor via an AVerMedia HDMI capture device.

The laptop setup is on a variable-height desk (Ikea SKARSTA), and this laptop is actually provided by my employer. It’s a 12″ MacBook Pro (Intel). The idea is that it should be possible to work here, and in fact at the moment I spend most of my work time at it; but it should also be very easy to grab the laptop and take it away. To that end, the stuff plugged into the USB hub is mostly charge cables, and the peripheral hardware is mostly wireless: Apple Magic Mouse and Keyboard, and a Corsair headset. A desk-mounted stand and a music-style stand hold the tablets I need for developing a cross-platform app at work.

And it happens that there’s an Amiga CD32 with its own mouse, keyboard, and joypad alongside: that mostly gets used for casual gaming.

The general principle

Believe it or not, the pattern I’m trying to conform to here is “one desktop, one laptop”. All those streaming and gaming things are appliances for specific tasks, they aren’t a part of my regular computering setup. I’ve been lucky to be able to keep to the “one desktop, one laptop” pattern since around 2004, usually using a combination of personal and work-supplied equipment, or purchased and handed-down. For example, the 2004-2006 setup was a “rescued from the trash” PowerMac 9600 and a handed-down G3 Wallstreet; both very old computers at that time, but readily affordable to a fresh graduate on an academic support staff salary.

The concept is that the desktop setup should be the one that is most immediate and comfortable, that if I need to spend a few hours computering I will be able to get on very well with. The laptop setup should make it possible to work, and I should be able to easily pick it up and take it with me when I need to do so.

For a long time, this meant something like “I can put my current Xcode project and a conference presentation on a USB stick, copy it to the laptop, then go to a conference to deliver my talk and hack on a project in the hotel room”. These days, ubiquitous wi-fi and cloud sync products remove some of the friction, and I can usually rely on my projects being available on the laptop at time of use (or being a small number of steps away).

I’ve never been a single-platform person. Sometimes “my desktop” is a Linux PC, sometimes a Mac, it’s even been a NeXT Turbo Station and a Sun Ultra workstation before. Sometimes “my laptop” is a Linux PC, sometimes a Mac, the most outré was that G3 which ran OpenDarwin for a time. The biggest ramification of that is that I’ve never got particularly deep into configuring my tools. It’s better for me to be able to find my way around a new vanilla system than it is to have a deep custom configuration that I understand really well but is difficult to port.

When Mac OS X had the csh shell as default, I used that. Then with 10.3 I switched to bash. Then with 10.15 I switched to zsh. My dotfiles repo has a git config, and a little .emacs that enables some org-mode plugins. But that’s it.

Since Stuart Langridge and I released Which Three Birdies, it has taken the web by storm, and we’ve been inundated with requests from prestigious institutions to give lectures on how we accomplished this paradigm shift in non-arbitrary co-ordinates-to-mnemonic mapping. Unfortunately, the global pandemic and the terms of Stuart’s parole prevent us from travelling, so we’re writing it here instead.

The name

A significant advance on its predecessors was achievable because Bruce has a proper degree (English Language and Literature with Drama) and has trained as an English Language teacher. “Which” is an interrogative pronoun, used in questions about alternatives. This might sound pedantic, but if a service can’t make the right choice from a very limited set of interrogative pronouns, how can you trust it to choose the correct three mnemonics? Establishing trust is vital when launching a tool that is destined to become an essential part of the very infrastructure of cartography.

The APIs

The mechanics of how the service locates and maps to three birds is extensively documented. Further documentation has been provided at the request of the Nobel Prize committee and will be published in due course.

Accessibility

The Web is for everyone and anyone who makes sites that are inaccessible is, quite simply, not a proper developer and quite possibly a criminal or even a fascist. Therefore, W3B offers users the chance to hear the calls of the most prevalent birds in their location, and also provides a transcript of those calls.

screenshot of transcripts

Given that there are 18,043 species of birds worldwide, transcribing each one by hand was impractical, so we decided to utilise – nay, leverage – the power of Machine Learning.

Birdsong to Text via Machine Learning

Stuart is, by choice, a Python programmer. Unfortunately, we learned that pythons eat birds and, out of a sense of solidarity with our feathered friends, we decided not to progress with such a barbaric language, so we sought an alternative.

Stuart hit upon the answer, due to a fortuitous coincidence. He has a prison tattoo of a puffin on his left buttock (don’t ask) and we remembered that the trendy “R” language is named after the call of a puffin (usually transcribed as “arr-arr-arr”).

Stuart set about learning R, but then we hit another snag: we couldn’t use the actual sounds of birds to train our AI, for copyright reasons.

Luckily, Bruce is also a musician with an extensive collection of instruments, including the actual kazoo that John Cale used to record the weird bits of Venus In Furs. Here it is in its Sotheby’s presentation case:

a kazoo in a presentation box

Whereas the kazoo is ideal for duplicating the mellifluous squawk of a corncrake, it is less suitable for mimicking the euphonious peep of an osprey. Stuart listens to AC/DC and therefore has no musical sense at all, so he wasn’t given an instrument. Instead he took the task of inhaling helium out of children’s balloons in order to replicate the higher registers of birdsong. Here’s a photo of the flame-haired Adonis preparing to imitate the melodious lament of the screech owl:

Pennywise the clown from IT, with a red balloon

After a few evenings re-creating a representative sample of birdsong, we had enough avian phonemes in the bag to run a rigorous programme of principal components analysis, cluster analysis and (of course) multilinear subspace learning algorithms to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors.

All known birds can now reliably be transcribed with 94% accuracy, except for the Crested Anatolian Otter-Catcher. We suspect that the reason for this is the confusion introduced by the Turkish vowel harmony and final-obstruent devoicing. In practice, however, this exception doesn’t affect the utility of the system, because the Crested Anatolian Otter-Catcher is now very rare due to its being extensively hunted in the 19th century. (Fun fact: the bird was once so famous and prevalent that the whole region was known as the Otter-munch Empire.)

Hopefully, this in-depth breakdown of how Which Three Birdies? works will encourage other authors of revolutionary new utilities to open-source their work as we have done for the betterment of all humanity. We’d like to thank the nice people at 51Degrees for commissioning Which Three Birdies?, giving us free rein, and paying us for it. The nutters.

May 12, 2021

Saturday 8 May was the 18th birthday of the famous CSS Zen Garden. To quote the Web Design Museum:

The project offered a simple HTML template to be downloaded, the graphic design of which could be customized by any web designer, but only with the help of cascading styles and one’s own pictures. The goal of the project was to demonstrate the various possibilities of CSS in creating visual web design. The CSS Zen Garden gallery exhibited hundreds of examples of diverse web design, all based on a single template containing the same HTML code.

I too designed a theme, which never made it to creator Dave Shea’s official list, but did the rounds on blogs in its day. For a long time, it languished rather broken, because Dave had rejigged the HTML to use HTML5 elements and changed the names of the classes and IDs that I had used as selectors for styling. To mark the occasion, I spent a while reconstructing it. You can enjoy its glory at Geocities 1996 (Seizure warning!). (You might want to use the Vivaldi browser, which allows you to turn off animated GIFs.)

After I tweeted the link on Saturday to celebrate CSS Zen Garden’s birthday, a number of people noted that the site isn’t responsive, so looks broken on mobile. This is because there weren’t any mobile devices when I wrote it in 2003! Sure, I could rewrite the CSS and use Grid and all the modern cool stuff, but that wasn’t my intent. Apart from class and ID names, the only thing I changed was the mechanism of hiding text that’s replaced by images: it’s now color:transparent as opposed to floating h3 span off screen, as Dave got rid of the spans. The * html hacks for IE6 are still there. (If you don’t know what that means, lucky you. It was my preferred way to target IE6, which believed there was an unnamed selector above the html element in the tree, so * html #whatever would select that ID only in IE6. I liked it because it was valid CSS, albeit nonsensical.)

It’s there as a working artefact of web development in the early 2000s, in the same way as its “exuberant” design is a fond homage to the early web aesthetic that I first discovered in 1996. And what better accolade can there be than this:

Dave’s project opened the eyes of many designers and sent a message across our then-small community of Web Standards wonks that CSS was ready for prime-time. I’m told by many that the Zen Garden is still used by educators, 18 years later. Thank you, Dave Shea!

May 11, 2021

May 07, 2021

Reading List 276 by Bruce Lawson (@brucel)


May 06, 2021

I’m signed up to the e-mail newsletter of a local walkway.

Exciting, I know. This monthly newsletter boasts a circulation of roughly 250 keen readers and it comes full of pictures of local flowers, book reviews, and that sort of thing.

This newsletter comes to me as a PDF from the personal email account of the group secretary. My email address is tucked away in the BCC field. As it happens, Gmail lets you send an email to up to 500 people at once.

The point of this is that there’s no obsession over things like…

  • what happens when I have ten million subscribers?
  • why hasn’t jsrn opened one of my emails in a while?
  • am I using the right newsletter provider? Is it one of the cool ones?

I can only imagine the list of subscribers is kept in a spreadsheet somewhere on the secretary’s computer.

I hope he has a backup.

May 05, 2021

On industry malaise by Graham Lee

Robert Atkins linked to his post on industry malaise:

All over the place I see people who got their start programming with “view source” in the 2000s looking around at the state of web application development and thinking, “Hey wait a minute, this is a mess” […] On the native platform side, there’s no joy either.

This is a post from 2019, but shared in a “this is still valid” sense. To be honest, I think it is. I recognise those doldrums myself; Robert shared the post in reply to my own toot:

Honestly jealous of people who are still excited by new developments in software and have gone through wondering why, then through how to get that excitement back, now wondering if it’s possible that I ever will.

I’ve spent long enough moving from thinking that it’s the industry that’s at fault to thinking it’s me that’s at fault, and now that I know others feel the same way, I can expand that from “me” to “us”.

I recognise the pattern. The idea that “we” used to do good work with computers until “we” somehow lost “our” way with “our” focus on trivialities like functional reactive programming or declarative UI technology, or actively hostile activities like adtech, blockchain, and cloud computing.

Yes, those things are all hostile, but are they unique to the current times? Were the bygone days with their shrink-wrapped “breaking this seal means agreeing to the license printed on the paper inside the sealed box” EULAs and their “can’t contact flexlm, quitting now” really so much better than the today times? Did we not get bogged down in trivialities like object-relational mapping and rewriting the world in PerlPHPPython?

It is true that “the kids today” haven’t learned all the classic lessons of software engineering. We didn’t either, and there will soon be a century’s worth of skipped classes to catch up on. That stuff doesn’t need ramming into every software engineer’s brain, like they’re Alex from A Clockwork Orange. It needs contextualising.

A clear generational difference in today’s software engineering is what we think of Agile. Those of us who lived through—or near—the transition remember the autonomy we gained, and the liberation from heavyweight, management-centric processes that were all about producing collateral for executive sign-off and not at all about producing working software that our customers valued. People today think it’s about having heavyweight processes with daily status meetings that suck the life out of the team. Fine, things change, it’s time to move forward. But contextualise the Agile movement, so that people understand at least what moving backward would look like.

So some of this malaise will be purely generational. Some of us have aged/grown/tired out of being excited about every new technology, and see people being excited about every new technology as irrelevant or immature. Maybe it is irrelevant, but if so it probably was when we were doing it too: nothing about the tools we grew up with was any more timeless than today’s.

Some of it will also be generational, but for very different reasons. Some fraction of us who were junior engineers a decade or two ago will be leads, principals, heads of division or whatever now, and responsible for the big picture, and not willing to get caught up in the minutiae of whether this buggy VC-backed database that some junior heard about at code club will get sunset before that one. We’d rather use postgres, because we knew it back then and know it now. Well, if you’re in that boat, congratulations on the career progression, but it’s now your job to make those big picture decisions, make them compelling, and convince your whole org to side with you. It’s hard, but you’re paid way more than you used to get and that’s how this whole charade works.

Some of it is also frustration. I certainly sense this one. I can pretend I understood my 2006-vintage iBook. I didn’t understand the half of it, but I understood enough to claim some kind of system-level comfort. I had (and read: that was a long flight) the Internals book so I understood the kernel. The Unix stuff is a Unix system, I know this! And if you ignore Classic, Carbon, and a bunch of programming languages that came out of the box, I knew the frameworks and developer tools too. I understood how to do security on that computer well enough that Apple told you to consider reading my excellent book. But it turns out that they just wouldn’t fucking sit still for a decade, and I no longer understand all of that technology. I don’t understand my M1 Mac Mini. That’s frustrating, and makes me feel stupid.

So yes, there is widespread malaise, and yes, people are doing dumb, irrelevant, or evil things in the name of computering. But mostly it’s just us.

As the kids these days say, please like and subscribe.

May 04, 2021

Naming things by Graham Lee

My current host name scheme at home is characters from the film Tron. So I have:

Laptop: flynn (programmer, formerly at Encom, and arcade owner)

Desktop: yori (programmer at Encom)

TV box: dumont (runs the I/O terminal)

Watch: bit (a bit)

Windows computer: dillinger (the evil corporate suit)

May 01, 2021

You don’t change the world by sitting around being a good person. You change the world by shipping products and making money.

As I wrote in my seminal management book Listen to me because I’m rich, white and clever, IBM wouldn’t have made a shitload of money in wartime Europe if they’d engaged in endless navel-gazing about politics. Their leadership told the staff to Stop Running in Circles and Ship Work that Matters, and get on with compiling a list of people with funny names like “Cohen” or “Levi”.

So here at Brucecamp, we’ve decided that it’s best if our productbots (formerly: employees) do not discuss the sausage machine while we push them into the sausage machine. As I wrote in our other book It doesn’t have to be full of whimpering Woke retards at work, “if you don’t like it, well, there’s the door. Enjoy poverty!”. And that’s all we have to say on the matter. Until the next blogpost. Or book.

In other news, Apple are wankers and I bought a sauna.

April 28, 2021

On UML by Graham Lee

A little context: I got introduced to UML in around 2008, at an employer who had a site licence for Enterprise Architect. I was sent on a training course run by a company that no longer exists called Sun Microsystems: every day for a week I would get on a coach to Marble Arch, then take the Central line over to Shoreditch (they were very much ahead of their time, Sun Microsystems) and learn how to decompose systems into objects and represent the static and dynamic properties of these objects on class diagrams, activity diagrams, state diagrams, you name it.

I got a bye on some of the uses of Enterprise Architect at work. Our Unix team was keeping its UML diagrams in configuration management, round-tripping between the diagrams and C++ code to make sure everything was in sync. Because Enterprise Architect didn’t have round-trip support for Objective-C (it still doesn’t), and I was the tech lead for the Mac team, I wasn’t expected to do this.

This freed me from some of the more restrictive constraints imposed on other UML-using teams. My map could be a map, not the territory. Whereas the Unix folks had to show how every single IThing had a ThingImpl so that their diagrams correctly generated the PImpl pattern, my diagrams were free to show only the information relevant to their use as diagrams. Because the diagrams didn’t need to be in configuration management alongside the source, I was free to draw them on whiteboards if I was by a whiteboard and not by the desktop computer that had my installation of Enterprise Architect.

Even though I’d been working with Objective-C for somewhere around six years at this point, this training course along with the experience with EA was the thing that finally made the idea of objects click. I had been fine before with what would now be called the Massive View Controller pattern but then was the Massive App Delegate pattern (MAD software design, if you’ll excuse the ableism).

Apple’s sample code all puts the outlets and actions on the app delegate, so why can’t I? The Big Nerd Ranch book does it, so why can’t I? Oh, the three of us all editing that file has got unwieldy? OK, well let’s make App Delegate categories for the different features in the app and move the methods there.

Engaging with object-oriented design, as distinct from object-oriented programming, let me move past that, and helped me to understand why I might want to define my own classes, and objects that collaborate with each other. It helped me to understand what those classes and objects could be used for (and re-used for).

Of course, these days UML has fallen out of fashion (I still use it, though I’m more likely to have PlantUML installed than EA). In these threads and the linked posts two extreme opinions—along with quite a few in between—are found.

The first is that UML (well not UML specifically, but OO Analysis and Design) represents some pre-lapsarian school of thought from back when programmers used to think, and weren’t just shitting javascript into containers at ever-higher velocities. In this school, UML is something “we” lost along the way when we stopped doing software engineering properly, in the name of agile.

The second is that UML is necessarily part of the heavyweight waterfall go-for-two-years-then-fail-to-ship project management paradigm that The Blessed Cunningham did away with in the Four Commandments of the agile manifesto. Thou shalt not make unto yourselves craven comprehensive documentation!

Neither is true. “We” had a necessary (due to the dot-com recession) refocus on the idea that this software is probably supposed to be for something, that the person we should ask about what it’s for is probably the person who’s paying for it, and we should probably show them something worth their money sooner rather than later. Many people who weren’t using UML before the fall/revelation still aren’t. Many who were doing it at management behest are no longer. Many who were because they liked it still are.

But when I last taught object-oriented analysis and design (as distinct, remember, from object-oriented programming), which was in March 2020, the tool to reach for was the UML (we used Visual Paradigm, not plantuml or EA). It is perhaps not embarrassing that the newest tool for the job is from the 1990s (after all, people still teach Functional Programming which is even older). It is perhaps unfortunate that no design (sorry, “emergent” design) is the best practice that has replaced it.

On the other hand, by many quantitative metrics, software is still doing fine, and the whole UML exercise was only a minority pursuit at its peak.

April 27, 2021

15 by Luke Lanchester (@Dachande663)

Something happened in February that I didn’t give enough attention to at the time. This site, HybridLogic, turned 15. A lot has changed in that time. Both on-line, and off.

Services have come and gone. FictionPress gave way to Twitter. Twitter to Reddit. Reddit to Medium and then back. WordPress has stayed ever-present. This blog has run some incarnation of that venerable PHP app since its earliest days.

Hardware has seen a slow march through Apple’s line-up. I’ve dipped a toe back into Windows, always returning due to the fractious nature that OS seems to have with its “users”. On servers in The Cloud and in the bedroom, I’ve mostly stuck with Debian, often Ubuntu. Simple, reliable. My fingers know exactly where to cd to, to get to where I want.

And this blog has remained through it all. Maybe it’ll change. A lick of paint. A new back-end. More content, links, opinions, reviews, guides. Me. For 15 years, hybridlogic.co.uk has been my little corner of the web.

Hopefully it’ll make it to another 15.

April 24, 2021

Or rather, I do use version control when I’m writing, and it isn’t helpful.

I’m currently studying a PhD, and I have around 113k words of notes in a git repository. I also have countless words of notes in a Zotero database and a reMarkable tablet. I don’t particularly miss git when I’m not storing notes in my repository.

A lot of the commit messages in that repository aren’t particularly informative. “Update literature search”, “meeting notes from today”, “meeting notes”, “rewrite introduction”. So unlike in software, where I have changes like “create the ubiquitous documents folder if it doesn’t already exist” and “fix type mismatch in document delegate conformance”, I don’t really have atomic changes in long-form writing.

Indeed, that’s not how I write. I usually set out either to produce an argument, or to improve an existing one. Not to add a specific point that I hadn’t thought of before, not to improve layout or structure in any specific way, not to fix particular problems. So I’m not “adding features” or “fixing bugs” in the same atomic way that I would in software, and don’t end up with a revision history comprising multiple atomic commits.

Some of my articles—this one included—have no checkpoints in their history at all. Others, including posts on De Programmatica Ipsum and journal articles, have a dozen or more checkpoints, but only because I “saved a draft” when I stepped away from the computer, not because there were meaningful atomic increments. I would never revert a change in an article when I’m writing, I’d always fix forward. I’d never introduce a new idea on a branch, there’s always a linear flow.

April 23, 2021

Reading List 275 by Bruce Lawson (@brucel)

April 21, 2021

April 20, 2021

We are uncovering better ways of developing
software by doing it and helping others do it.

It’s been 20 years since those words were published in the manifesto for agile software development, and capital-A Agile methods haven’t really been supplanted. Despite another two decades of doing it and helping others do it.

That seems problematic.

April 19, 2021

April 16, 2021

I’ve spent about a year working on an app for a group in the University where I work, that needed to be available on both Android and iOS. I’ve got a bit of experience working with the Apple-supplied SDKs on iOS, and a teensy amount of experience working with the Google-supplied SDKs on Android. Writing two apps is obviously an option, but not one I took very seriously. The other thing I’ve reached for before in this situation is React Native, where I’ve got a little experience but quite a bit of understanding having worked with React some.

Anyway, this project was a mobile companion for a desktop app written in C# and Windows Forms, and the client was going to have to pick up development at the end of my engagement. So I decided that the best approach for the client was to learn how to do it in Xamarin.Forms, and give them a C# project they could understand at the end. I also hoped there’d be an opportunity to share some code from the desktop software in the mobile project, though this didn’t pan out in the end.

It took a while to understand the centrality of the Model-View-ViewModel idea and how to get it to work with the code I was writing, rather than bludgeoning it into what I was trying to do. Ultimately lots of X.F works with data bindings, where you say “this thing and that thing are connected”, and so your view needs a that thing so it can display this thing. If the that thing isn’t in the right shape, is derived somehow, or shouldn’t be committed to the model until some other things are done, the ViewModel sits in the middle and separates the two.
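As a minimal sketch of what that connection looks like in Xamarin.Forms (the page here is my own invention for illustration, binding to the view model shown further down):

using Xamarin.Forms;

// A page whose Label displays the view model's Value, re-rendering
// whenever the view model announces that Value has changed.
public class ThingPage : ContentPage
{
  public ThingPage()
  {
    var label = new Label();
    // "This thing and that thing are connected": the Label's Text
    // follows the Value property of whatever the BindingContext is.
    label.SetBinding(Label.TextProperty, "Value");
    BindingContext = new MyThingViewModel(); // the class shown below
    Content = label;
  }
}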

I’m used to this model in a couple of contexts, and I’ll give Objective-C examples of each because that’s how old I am. In web applications, you can use data bindings to fill in parts of an HTML document based on values from a server-side object. WebObjects works this way (Rails doesn’t, it uses code to fill in parts of etc). The difference between web app data bindings and mobile app data bindings is one of lifecycle. Your value needs to be read once when the page (or XHR) is rendered, and stored once when the user posts the changes. This is so simple that you can use straightforward accessor methods for it. It also happens at a time when loading new content is happening, so any timing constraints are likely to come from elsewhere.

You can also do it in what, because I’m that old, I’ll call rich client applications, like Xamarin.Forms mobile apps or Cocoa Bindings desktop apps. Here, anything could be happening at any time: a worker thread could be updating the model, and the user could interact with the UI, all at the same time, potentially multiple times while a UI element is live. So you can’t just wait until the Submit button is pressed to update everything, you need to track and reflect updates when they happen.

Given a dynamic language like Objective-C, you can say “bind this thing to that thing with these options” and the binding library can rewrite your accessors for this thing and that thing to update the other when changes happen, and avoid circular updates. You can’t do that in C# because apparently more typing is easier to reason about, so you end up replicating the below pattern rather a lot.

using System.ComponentModel;

public class MyThingViewModel : INotifyPropertyChanged
{
  public event PropertyChangedEventHandler PropertyChanged;
  // ...
  private string _value;
  public string Value
  {
    get => _value;
    set
    {
      _value = value;
      // Tell any bound views that Value just changed.
      PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Value)));
    }
  }
}

And when I say “rather a lot”, I mean that in this one app the boilerplate appears at least 126 times. Even that undercounts, because despite being public, the PropertyChanged event can only be invoked by instances of the declaring class; so if a subclass adds any properties or any change points, you’re going to write protected helper methods to be able to invoke the event from the subclass.
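The usual workaround is a base class along these lines (a sketch with names of my own choosing, not code from the app):

using System.ComponentModel;
using System.Runtime.CompilerServices;

public abstract class ViewModelBase : INotifyPropertyChanged
{
  public event PropertyChangedEventHandler PropertyChanged;

  // Protected, so subclasses can raise the event for their own properties.
  // [CallerMemberName] fills in the property name automatically when a
  // setter calls OnPropertyChanged() with no argument.
  protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
  {
    PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
  }
}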

Let’s pivot to investigating another question: why is Cocoa Bindings a desktop-only thing from Apple? I’ve encountered two problems in using it on Xamarin: thread confinement and performance. Thread confinement is a problem anywhere but the performance things are more sensitive on mobile, particularly on 2007-era mobile when this decision was made, and I can imagine a choice was made between “give developers the tools to identify and fix this issue” and “don’t give developers the chance to encounter this issue” back when UIKit was designed. Neither X.F nor UIKit is wrong in their particular choice, they’ve just chosen differently.

UI updates have to happen on the UI thread, probably because UIKit is Cocoa, Cocoa is AppKit, and AppKit ran on an OS that didn’t give you an easy way to do multiple threads per task. The same constraint exists on Android too, and there are performance reasons besides. Anyway, theoretically any of those 126 invocations of PropertyChanged that could be bound to a view (so all of them, because separation of concerns) should be MainThread.BeginInvokeOnMainThread(() => {PropertyChanged?.Invoke(...)});, because the value might be updated in an async method or a Task. Otherwise, something between a crash and unexpected behaviour will happen.
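Concretely, the thread-safe version of the earlier setter ends up looking something like this (a sketch assuming the MainThread helper from Xamarin.Essentials):

using System.ComponentModel;
using Xamarin.Essentials;

public class SafeThingViewModel : INotifyPropertyChanged
{
  public event PropertyChangedEventHandler PropertyChanged;

  private string _value;
  public string Value
  {
    get => _value;
    set
    {
      _value = value;
      // The setter may run on a worker thread (async method, Task, etc.),
      // but bound views must only be touched from the main thread, so
      // marshal the notification across.
      MainThread.BeginInvokeOnMainThread(() =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Value))));
    }
  }
}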

The performance problem is this: any change to a property can easily cause an unknown amount of code to run, quite often on the UI thread. For example, my app has a data grid (i.e. spreadsheet-like table view) with a “selection” column containing switches. There’s a “select all” button, and a report of the number of selected objects, outside the grid. Pressing “select all” selects all of the objects. Each one notifies observers that its IsSelected property has changed, which is watched by the list view model to update the selection count, and by the data grid to update the switches. So if there’s one row in the grid, selecting all causes two main-thread UI updates. If there are 500 rows, then 1000 updates need to run on the main thread in response to that one button action.

That can get slow :). Before I understood how to fix this, some UI actions would block the UI for tens of seconds as they computed the update. I asked about this in some forums and was told the answer is “your users shouldn’t have that much data in a mobile app; design an app with less data”, which is not that helpful. But luckily the folks over at SyncFusion were much more empathetic, and told me the real solution is to design your views and view models such that you can turn off updates while you’re doing some big change, then turn them back on and recalculate the state at the end.
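The shape of that fix, with hypothetical names rather than Syncfusion’s actual API, is something like this:

using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;

public class GridViewModel : INotifyPropertyChanged
{
  public event PropertyChangedEventHandler PropertyChanged;

  private bool _updatesSuspended;

  public List<bool> Selections { get; } = new List<bool>();
  public int SelectedCount => Selections.Count(s => s);

  private void Notify(string name)
  {
    if (_updatesSuspended) return; // swallow notifications mid-batch
    PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
  }

  public void SelectAll()
  {
    _updatesSuspended = true;          // updates off,
    for (int i = 0; i < Selections.Count; i++)
    {
      Selections[i] = true;
      Notify(nameof(SelectedCount));   // so each of these is now a no-op,
    }
    _updatesSuspended = false;         // updates back on,
    Notify(nameof(SelectedCount));     // and one recalculation at the end.
  }
}

One notification for the whole batch, instead of one per row, is what gets those tens of seconds back.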

Like I say, it’s likely that someone at Apple already knew this from the Cocoa Bindings times and decided “here’s a great technology, and here’s how to turn it off because it will get in your way” wasn’t a cool story.

April 15, 2021
