Last updated: March 21, 2023 11:22 AM (All times are UTC.)
You might be trying to start your Rails server and getting something like the following, with no accompanying stack trace to tell you exactly what it is that’s gone wrong.
[10] Early termination of worker
[8] Early termination of worker
This is a note-to-self after encountering this when trying to upgrade a Rails app from Ruby 2.7 to 3.2. What helped me was a comment from this Stack Overflow post.
If rails s or bundle exec puma is telling you the workers are being terminated but not telling you why, try:
rackup config.ru
There was recently a discussion on Hacker News around application logging on a budget. At work I’ve been trying to keep things lean, not to the point of absurdity, but also not using a $100 or $1000/month setup, when a $10 one will suffice for now. We settled on a homegrown Clickhouse + PHP solution that has performed admirably for two years now. Like everything, this is all about tradeoffs, so here’s a top-level breakdown of how Clog (Clickhouse + Log) works.
Creation
We have one main app and several smaller apps (you might call them microservices) spread across a few Digital Ocean instances. These generate logs from requests, queries performed, exceptions encountered, remote service calls, etc. We use monolog in PHP and just a standard file writer elsewhere to write newline-delimited JSON to log files.
In this way, there is no dependency between applications and the final logs. Everything that follows could fail, and the apps are still generating logs ready for later collection (or recollection).
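As a rough sketch (not the actual Clog code; the channel name, file path, and log fields here are made up), the Monolog side of that can be as small as this:
<?php
// Minimal sketch: write newline-delimited JSON logs with Monolog.
require 'vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Formatter\JsonFormatter;

$handler = new StreamHandler('/var/log/myapp/app.log'); // hypothetical path
$handler->setFormatter(new JsonFormatter());            // one JSON object per line

$log = new Logger('myapp');
$log->pushHandler($handler);

// Each call appends one JSON line, ready for filebeat to collect later.
$log->info('http_response', [
    'event_type' => 'http_response',
    'http_route' => 'api.example',
]);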
Collection
On each server, we run a copy of filebeat. I love this little thing. One binary, a basic YAML file, and we have something that watches our log files, adds a few bits of extra data (the app, host, environment etc), and then pushes each line into a redis queue. This way our central logging instance doesn’t need any knowledge of the individual instances, which can come and go.
(Weirdly, filebeat is part of elastic so can be used as part of your normal ELK stack, meaning if we wanted to change systems later, we have a natural inflection point.)
There are definitely bits we could change here. Checking queue length, managing backpressure, etc. But do you know what? In 24 months of running this in production, ingesting between 750K and 1M logs a day, none of that has actually been a problem. Will it be a problem when we hit 10M or 100M logs a day? Sure. But then we’ll have a different set of resources to hand.
Ingesting
We now have a redis queue of JSON log lines. Originally this was a redis server running on the Clog instance, but we later started using a managed redis server for other things, so we migrated this there too. Our actual Clog instance is a 4GB DO instance. That’s it. Initially it was a 2GB instance (which was $10), so I don’t think we’re too far off the linked HN discussion.
The app to read the queue and add to Clickhouse is… simple. Brutally simple. Written in PHP using the PHP Redis extension in an afternoon, it runs BLPOP in an infinite loop to take an entry, run some very basic input processing (see next), and insert it into Clickhouse.
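A minimal sketch of that loop, assuming a queue key of clog:logs, a logs table, and Clickhouse’s HTTP interface on the same box (none of these names are from the real Clog code, and splitKeysAndValues is the hypothetical processing step sketched further down):
<?php
// Simplified ingest worker: block on the Redis queue, process one entry, insert it.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

while (true) {
    // blPop blocks until something arrives and returns [queueKey, value].
    $item = $redis->blPop(['clog:logs'], 0);
    if (empty($item)) {
        continue;
    }

    $log = json_decode($item[1], true);
    if (!is_array($log)) {
        continue; // skip malformed lines rather than crash the worker
    }

    $row = splitKeysAndValues($log); // hypothetical helper; see the splitting sketch below

    // Clickhouse's HTTP interface accepts one JSON object per row with JSONEachRow.
    $ch = curl_init('http://127.0.0.1:8123/?query=' . rawurlencode('INSERT INTO logs FORMAT JSONEachRow'));
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($row));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    curl_close($ch);
}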
That processing is the key to how this system stays (fairly) speedy and is 100% not my idea. Uber is one of the first I could find who detailed how splitting up log keys from log values can make querying much more efficient. Combined with materialized views, we can get something very robust that will handle 90% of the things we throw at it.
Say we have a JSON log like so:
{
"created": "2022-12-25T13:37:00.12345Z",
"event_type": "http_response",
"http_route": "api.example"
}
This is turned into a set of keys and values based on type:
"datetime_keys": ["created"],
"datetime_values": [DateTime(2022-12-25T13:37:00.12345Z)],
"string_keys": ["event_type", "http_route"],
"string_values": ["http_response", "api.example"]
Our Clickhouse logs table then includes materialized columns such as:
matcol_event_type String MATERIALIZED string_values[indexOf(string_keys, 'event_type')]
This pulls out the value of event_type and creates a stored virtual column, which makes queries against these columns much quicker.
This isn’t perfect. Not by a long shot. But it means we’ve been able to store our logs and just… not worry about costs spiralling out of control. A combination of a short retention time, Clickhouse’s in-built compression, and just realising that most people aren’t going to be generating TBs of logs a day, means we’ve flown by with this system.
Querying & Analysing
Querying is, again, very simple. Clickhouse offers packages for most languages, but also supports MySQL (and other) interfaces. We already have a back-office tool (in my experience, one of the first things you should work on), that makes it drop-dead simple to add a new screen and connect it to Clickhouse.
From there we can list logs with basic filters and facets. The big advantage I’ve found here over other log-specific tools is we can be a bit smart and link back into the application. For example, if a log includes an “auth_user_id” or “requested_entity_id”, we can link this to an existing information page in our back-office automatically.
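To give a feel for how little glue that back-office screen needs, here is roughly what a query could look like over Clickhouse’s MySQL-compatible interface from PHP; the port, credentials, and exact columns are assumptions on my part, not details from Clog:
<?php
// Clickhouse speaks the MySQL wire protocol (port 9004 by default),
// so plain PDO is enough for a simple log-listing screen.
$pdo = new PDO('mysql:host=127.0.0.1;port=9004;dbname=default', 'default', '');

$rows = $pdo->query(
    "SELECT
         datetime_values[indexOf(datetime_keys, 'created')] AS created,
         matcol_event_type,
         string_values[indexOf(string_keys, 'auth_user_id')] AS auth_user_id
     FROM logs
     WHERE matcol_event_type = 'http_response'
     ORDER BY created DESC
     LIMIT 100"
)->fetchAll(PDO::FETCH_ASSOC);

foreach ($rows as $row) {
    // auth_user_id is what lets the back-office link straight to the user's admin page.
    echo $row['created'] . ' ' . $row['auth_user_id'] . PHP_EOL;
}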
Conclusions
There are definitely rough edges in Clog. A big one is that it’s simply an internal tool which means existing knowledge of other tools is lost. Some of the querying and filtering can definitely use some UX love. The alerts are hard-coded. And more.
But, in the two-plus years we’ve been using Clog, it has cost us a couple of hundred dollars and, all told, a day or two of my time, and in return it has saved us an order of magnitude more when pricing up hosted cloud options. This has given us a much longer runway.
I 100% wouldn’t recommend DIY NIH options for everything, but Clog has paid off for what we needed.
My research touches on the professionalisation (or otherwise) of software engineering, and particularly the association (or not) of software engineers with a professional body, or with each other (or not) through a professional body. So what’s that about?
In Engagement Motivations in Professional Associations, Mark Hager uses a model that separates incentives to belong to a professional association into public incentives (i.e. those good for the profession as a whole, or for society as a whole) and private incentives (i.e. those good for an individual practitioner). Various people have tried to argue that people are only motivated by the private incentives (i.e. de Tocqueville’s “enlightened self-interest”).
Below, I give a quick survey of the incentives in this model, and informally how I see them enacted in computing. If there’s a summary, it’s that any idea of professionalism has been enclosed by the corporate actors.
My dude, software engineering has this in spades. Whether it’s a newsletter encouraging people to apply formal methods to their work, a strike force evangelising the latest programming language, or a consultant explaining that the reason their methodology is failing is that you don’t methodologise hard enough, it’s not incredibly clear that you can be in computering unless you’re telling everyone else what’s wrong with the way they computer. This is rarely done through formal associations though: while the SWEBOK does exist, I’d wager that the fraction of software engineers who refer to it in their work is 0 in whatever floating point representation you’re using.
Ironically, the software craftsmanship movement suggests that a better way to promote good practice than professional associations is through medieval-style craft guilds, when professional associations are craft guilds that survived into the 20th century, with all the gatekeeping and back-scratching that entails.
If this happens, it seems to mostly be left to the marketing departments of large companies. The last I saw about augmented reality in the mainstream media was an advert for a product.
Again, you’ll find a lot of this done in the policy departments of large companies. The large professional societies also get involved in lobbying work, but either explicitly walk back from discussions of regulation (ACM) or limit themselves to questions of research funding. Smaller organisations lobby on single-issue platforms (e.g. FSF Europe and the “public money, public code” campaign; the Documentation Foundation’s advocacy for open standards).
It’s not like computering is devoid of ethics issues: artificial intelligence and the world of work; responsibility for loss of life, injury, or property damage caused by defective software; intellectual property and ownership; personal liberty, privacy, and data sovereignty; the list goes on. The professional societies, particularly those derived from or modelled on the engineering associations (ACM, IEEE, BCS), do have codes of ethics. Other smaller groups and individuals try to propose particular ethical codes, but there’s a network effect in play here. A code of ethics needs to be popular enough that clients of computerists and the public know about it and know to look out for it, with the far extreme being 100% coverage: either you have committed to the Hippocratic Oath, or you are not a practitioner of medicine.
If you’re early-career, you want a job board to find someone who’s hiring early career roles. If you’re mid or senior career, you want a network where you can find out about opportunities and whether they’re worth pursuing. I don’t know if you’ve read the news lately, but staying employed in computering isn’t going great at the moment.
How do you get that mid-career role? By showing that you can lead a project, team, or have some other influence. What early-career role gives you those opportunities? crickets Ad hoc networking based on open source seems to fill in for professional association here: rather than doing voluntary work contributing to Communications of the ACM, people are depositing npm modules onto the web.
Rather than reading Communications of the ACM, we’re all looking for task-oriented information at the time we have a task to complete: the Q&A websites, technology-specific podcasts and video channels are filling in for any clearing house of professional advancement (to the point where even for-profit examples like publishing companies aren’t filling the gaps: what was the last attempt at an equivalent to Code Complete, 2nd Edition you can name?). This leads to a sort of balkanisation where anyone can quickly get up to speed on the technology they’re using, and generalising from that or building a holistic view is incredibly difficult. Certain blogs try to fill that gap, but again are individually published and not typically associated with any professional body.
We have degree programs, and indeed those usually have accredited curricula (the ACM has traditionally been very active in that field, and the BCS in the UK). But many of the degrees are Computer Science rather than Software Engineering, and do they teach QA, or systems administration, or project management, or human-computer interaction? Are there vocational courses in those topics? Are they well-regarded: by potential students, by potential employers, by the public?
And then there are vendor certifications.
Wow, 300 Reading Lists!
The National Telecommunications and Information Administration (part of US Dept of Commerce) Mobile App Competition Report came out yesterday (1 Feb). Along with fellow Open Web Advocacy chums, I briefed them and also personally commented.
Consumers largely can’t get apps outside of the app store model, controlled by Apple and Google. This means innovators have very limited avenues for reaching consumers.
Apple and Google create hurdles for developers to compete for consumers by imposing technical limits, such as restricting how apps can function or requiring developers to go through slow and opaque review processes.
I’m very glad that, as with other regulators, we’ve helped them understand that there’s not a binary choice between single-platform iOS and Android “native” apps, but that Web Apps (AKA “PWAs”, Progressive Web Apps) offer a cheaper-to-produce alternative, as they use universal, mature web technologies:
Web apps can be optimized for design, with artfully crafted animations and widgets; they can also be optimized for unique connectivity constraints, offering users either a download-as-you go experience for low-bandwidth environments, or an offline mode if needed.
However, the mobile duopoly restrict the installation of web apps, favouring their own default browsers:
commenters contend that the major mobile operating system platforms—both of which derive revenue from native app downloads through their mobile app stores & whose own browsers derive significant advertising revenue—have acted to stifle implementation & distribution of web apps
NTIA recognises that this is a deliberate choice by Apple and (to a lesser extent) by Google:
developers face significant hurdles to get a chance to compete for users in the ecosystem, and these hurdles are due to corporate choices rather than technical necessities
NTIA explicitly calls out the Apple Browser Ban:
any web browser downloaded from Apple’s mobile app store runs on WebKit. This means that the browsers that users recognize elsewhere—on Android and on desktop computers—do not have the same functionality that they do on those other platforms.
It notes that WebKit has implemented fewer of the features that allow Web Apps to have capabilities similar to Apple’s single-platform iOS apps, and lists some of the most important along with the dates they became available in Gecko and Blink, continuing
According to commenters, lack of support for these features would be a more acceptable condition if Apple allowed other, more robust, and full-featured browser engines on its operating system. Then, iOS users would be free to choose between Safari’s less feature-rich experience (which might have other benefits, such as privacy and security features), and the broader capabilities of competing browsers (which might have other costs, such as greater drain on system resources and need to adjust more settings). Instead, iOS users are never given the opportunity to choose meaningfully differentiated browsers and experience features that are common for Android users—some of which have been available for over a decade.
Regardless of Apple’s claims that the Apple Browser Ban is to protect their users,
Multiple commenters note that the only obvious beneficiary of Apple’s WebKit restrictions is Apple itself, which derives significant revenue from its mobile app store commissions
The report concludes that
Congress should enact laws and relevant agencies should consider measures [aimed at] Getting platforms to allow installation and full functionality of third-party web browsers. To allow web browsers to be competitive, as discussed above, the platforms would need to allow installation and full functionality of the third-party web browsers. This would require platforms to permit third-party browsers a comparable level of integration with device and operating system functionality. As with other measures, it would be important to construct this to allow platform providers to implement reasonable restrictions in order to protect user privacy, security, and safety.
The NTIA joins the Australian, EU, and UK regulators in suggesting that the Apple Browser Ban stifles competition and must be curtailed.
The question now is whether Apple will do the right thing, or seek to hurl lawyers with procedural arguments at it instead, as they’re doing in the UK now. It’s rumoured that Apple might be contemplating about thinking about speculating about considering opening up iOS to alternate browsers for when the EU Digital Markets Act comes into force in 2024. But for every month they delay, they earn a fortune; it’s estimated that Google pays Apple $20 Billion to be the default search engine in Safari, and the App Store earned Apple $72.3 Billion in 2020 – sums which easily pay for snazzy lawyers, iPads for influencers, salaries for Safari shills, and Kool Aid for WebKit wafflers.
Place your bets!
In 1701, Asano Naganori, a feudal lord in Japan, was summoned to the shogun’s court in Edo, the town now called Tokyo. He was a provincial chieftain, and knew little about court etiquette, and the etiquette master of the court, Kira Kozuke-no-Suke, took offence. It’s not exactly clear why; it’s suggested that Asano didn’t bribe Kira sufficiently or at all, or that Kira felt that Asano should have shown more deference. Whatever the reasoning, Kira ridiculed Asano in the shogun’s presence, and Asano defended his honour by attacking Kira with a dagger.
Baring steel in the shogun’s castle was a grievous offence, and the shogun commanded Asano to atone through suicide. Asano obeyed, faithful to his overlord. The shogun further commanded that Asano’s retainers, over 300 samurai, were to be dispossessed and made leaderless, and forbade those retainers from taking revenge on Kira so as to prevent an escalating cycle of bloodshed. The leader of those samurai offered to divide Asano’s wealth between all of them, but this was a test. Those who took him up on the offer were paid and told to leave. Forty-seven refused this offer, knowing it to be honourless, and those remaining 47 reported to the shogun that they disavowed any loyalty to their dead lord. The shogun made them rōnin, masterless samurai, and required that they disperse. Before they did, they swore a secret oath among themselves that one day they would return and avenge their master. Then each went their separate ways. These 47 rōnin immersed themselves into the population, seemingly forgoing any desire for revenge, and acting without honour to indicate that they no longer followed their code. The shogun sent spies to monitor the actions of the rōnin, to ensure that their unworthy behaviour was not a trick, but their dishonour continued for a month, two, three. For a year and a half each acted dissolutely, appallingly; drunkards and criminals all, as their swords went to rust and their reputations the same.
A year and a half later, the forty-seven rōnin gathered together again. They subdued, killed, or wounded Kira’s guards, they found a secret passage hidden behind a scroll, and in the hidden courtyard they found Kira and demanded that he die by suicide to satisfy their lord’s honour. When the etiquette master refused, the rōnin cut off Kira’s head and laid it on Asano’s grave. Then they came to the shogun, surrounded by a public in awe of their actions, and confessed. The shogun considered having them executed as criminals but instead required that they too die by suicide, and the rōnin obeyed. They were buried, all except one who was not present and who lived on, in front of the tomb of their master. The tombs are a place to be visited even today, and the story of the 47 rōnin is a famous one both inside and outside Japan.
You might think: why have I been told this story? Well, there were 47 of them. 47 is a good number. It’s the atomic number of silver, which is interesting stuff; the most electrically conductive metal. (During World War II, the Manhattan Project couldn’t get enough copper for the miles of wiring they needed because it was going elsewhere for the war effort, so they took all the silver out of Fort Knox and melted it down to make wire instead.) It’s strictly non-palindromic, which means that it’s not only not a palindrome, it remains not a palindrome in any base smaller than itself. And it’s how old I am today.
Yes! It’s my birthday! Hooray!
I have had a good birthday this year. The family and I had delightful Greek dinner at Mythos in the Arcadian, and then yesterday a bunch of us went to the pub and had an absolute whale of an afternoon and evening, during which I became heartily intoxicated and wore a bag on my head like Lord Farrow, among other things. And I got a picture of the Solvay Conference from Bruce.
This year is shaping up well; I have some interesting projects coming up, including one will-be-public thing that I’ve been working on and which I’ll be revealing more about in due course, a much-delayed family thing is very near its end (finally!), and in general it’s just gotta be better than the ongoing car crash that the last few years have been. Fingers crossed; ask me again in twelve months, anyway. I’ve been writing these little posts for 21 years now (last year has more links) and there have been ups and downs, but this year I feel quite hopeful about the future for the first time in a while. This is good news. Happy birthday, me.
I got myself a little present for Christmas. The PICO-8. The PICO-8 is a fantasy console, which is an emulator for a console that doesn’t exist. The PICO-8 comes with its own development and runtime environment, packaged into a single slick application with a beautiful aesthetic.
It also comes with a pretty strict set of constraints in which to work your magic.
Display: 128x128 16 colours
Cartridge size: 32k
Sound: 4 channel chip blerps (I assume this is an industry term)
Code: P8 Lua
CPU: 4M vm insts/sec
Sprites: 256 8x8 sprites
Map: 128x32 tiles
The constraints are appealing. Modern development at big companies sometimes seems like being at an all-you-can-eat buffet with the company credit card. Run out of CPU? Your boss can fix that with whatever the best new MacBook is. Webserver process eating RAM like candy? Doesn’t matter, that’s what automatic load balancers and infinite horizontal scaling is for.
With the PICO-8, there appears to be no such negotiation. There’s something liberating about this. By putting firm limits on the scope of what you can create, you know when to stop. If you hit the limit, you know you have to either admit that the project is as done as it’s going to get, or you need to refine or remove something that’s already there. Infinite potential is both a luxury and a curse.
What you get is what you get, and what you get is enough for a wide community of enthusiasts to create some beautiful and entertaining games that you can play directly in the browser, in your own copy of PICO-8, or on one of several fan-made hardware solutions.
My favourite feature is actually secondary to the main function of the console. Cartridges can be exported as PNG files, with game data steganographically hidden within. Each one of the below files is a playable cartridge that can be loaded into the PICO-8 console.
There’s something tactile that didn’t fully transfer from cartridge to CD and definitely didn’t transfer from CD to digital download. You can’t quite argue that a folder of PNGs isn’t a digital download, but somewhere in the dusty corners of my memory, I recall the sound of plastic rattling against plastic and a long day of zero responsibility ahead.
Regular readers of this chucklefest will recall that I’ve been involved with briefing competition regulators in the UK, US, Australia, Japan and EU about the Apple Browser Ban – Apple’s anti-competitive requirement that anything that can browse the web on iOS/iPad must use its WebKit engine. This allows Apple to stop web apps becoming as feature-rich as its iOS apps, for which it can charge a massive fee for listing in its monopoly App Store.
The UK’s Competition and Markets Authority recently announced a market investigation reference (MIR) into the markets for mobile browsers (particularly browser engines). The CMA may decide to make a MIR when it has reasonable grounds for suspecting that a feature or combination of features of a market or markets in the UK prevents, restricts, or distorts competition (PDF).
You would imagine that Apple would welcome this opportunity to be scrutinised, given that Apple told CMA (PDF) that
By integrating WebKit into iOS, Apple is able to guarantee robust user privacy protections for every browsing experience on iOS device… . WebKit has also been carefully designed and optimized for use on iOS devices. This allows iOS devices to outperform competitors on web-based browsing benchmarks… Mandating Apple to allow apps to use third-party rendering engines on iOS, as proposed by the IR, would break the integrated security model of iOS devices, reduce their privacy and performance, and ultimately harm competition between iOS and Android devices.
Yet despite Apple’s assertion that it is simply the best, better than all the rest, it is weirdly reluctant to see the CMA investigate it. You would assume that Apple are confident that it would be vindicated by CMA as better than anyone, anyone they’ve ever met. Yet Apple applied to the Competition Appeal Tribunal (PDF, of course), seeking
1. An Order that the MIR Decision is quashed.
2. A declaration that the MIR Decision and market investigation purportedly launched by
reference to it are invalid and of no legal effect.
In its Notice of Application, Apple also seeks interim relief in the form of a stay of the market investigation pending determination of the application.
Why would this be? I don’t know (I seem no longer to be on not-Steve’s Xmas card list). But it’s interesting to note that a CMA Market Investigation can have real teeth. It has previously forced the sale of airports and hospitals (gosh! A PDF) in other sectors.
A market investigation lowers the hurdle for the CMA: it doesn’t have to prove wrongdoing, just adverse effects on competition (abbreviated as AEC, which in other antitrust jurisdictions, however, stands for “as efficient competitor”) and has greater powers to impose remedies. Otherwise a conventional antitrust investigation of Apple’s conduct would be required, and Apple would have to be shown to have abused a dominant position in the relevant market. Apple would like to deprive the CMA of its more powerful tool, and essentially argues that the CMA has deprived itself of that tool by failing to abide by the applicable statute.
It’s rumoured that Apple might be contemplating about thinking about speculating about considering opening up iOS to alternate browsers for when the EU Digital Markets Act comes into force in 2024. But for every month they delay, they earn a fortune; it’s estimated that Google pays Apple $20 Billion to be the default search engine in Safari, and the App Store earned Apple $72.3 Billion in 2020 – sums which easily pay for snazzy lawyers, iPads for influencers, salaries for Safari shills, and Kool Aid for WebKit wafflers.
Low-stakes conspiracy theory: -ise spellings were invented by word processing marketers to justify spell-check features that weren’t necessary.
Evidence: the Oxford English Dictionary (Oxford being in Britain) entry for “-ise” suffix’s first sense is “A frequent spelling of -ize suffix, suffix forming verbs, which see.” So in a British dictionary, -ize is preferred. But in a computer, I have to change my whole hecking country to be able to write that!
Due to annoyances in the economy (thanks, Putin, and Liz Truss) I find myself once again on the jobs market. Read my LinkTin C.V. thingie, then hire me to make your digital products more accessible, faster and full of standardsy goodness!
In March, I shall be keynoting at axe-con with a talk called Whose web is it, anyway?. It’s free.
Hotlinking, in the context I want to discuss here, is the act of using a resource on your website by linking to it on someone else’s website. This might be any resource: a script, an image, anything that is referenced by URL.
It’s a bit of an anti-social practice, to be honest. Essentially, you’re offloading the responsibility for the bandwidth of serving that resource to someone else, but it’s your site and your users who get the benefit of that. That’s not all that nice.
Now, if the “other person’s website” is a CDN — that is, a site deliberately set up in order to serve resources to someone else — then that’s different. There are many CDNs, and using resources served from them is not a bad thing. That’s not what I’m talking about. But if you’re including something direct from someone else’s not-a-CDN site, then… what, if anything, should the owner of that site do about it?
I’ve got a fairly popular, small piece of JavaScript: sorttable.js, which makes an HTML table sortable by clicking on the headers. It’s existed for a long time now (the very first version was written twenty years ago!) and I get an email about it once a week or so from people looking to customise how it works or ask questions about how to do a thing they want. It’s open source, and I encourage people to use it; it’s deliberately designed to be simple1, because the target audience is really people who aren’t hugely experienced with web development and who can add sortability to their HTML tables with a couple of lines of code.
The instructions for sorttable are pretty clear: download the library, then put it in your web space and include it. However, some sites skip that first step, and instead just link directly to the copy on my website with a <script> element. Having looked at my bandwidth usage recently, this happens quite a lot2, and on some quite high-profile sites. I’m not going to name and shame anyone3, but I’d quite like to encourage people to not do that, if there’s a way to do it. So I’ve been thinking about ways that I might discourage hotlinking the script directly, while doing so in a reasonable and humane fashion. I’m also interested in suggestions: hit me up on Mastodon at @sil@mastodon.social or Twitter4 as @sil.
This is the obvious thing to do: I move the script and update my page to link to the new location, so anyone coming to my page to get the script will be wholly unaffected and unaware I did it. I do not want to do this, for two big reasons: it’s kicking the can down the road, and it’s unfriendly.
It’s can-kicking because it doesn’t actually solve the problem: if I do nothing else to discourage the practice of hotlinking, then a few years from now I’ll have people hotlinking to the new location and I’ll have to do it again. OK, that’s not exactly a lot of work, but it’s still not a great answer.
But more importantly, it’s unfriendly. If I do that, I’ll be deliberately breaking everyone who’s hotlinking the script. You might think that they deserve it, but it’s not actually them who feel the effect; it’s their users. And their users didn’t do it. One of the big motives behind the web’s general underlying principle of “don’t break the web” is that it’s not reasonable to punish a site’s users for the bad actions of the site’s creators. This applies to browsers, to libraries, to websites, the whole lot. I would like to find a less harsh method than this.
That is: do the above, but link to a URL which changes automatically every month or every minute or something. The reason that I don’t want to do this (apart from the unfriendly one from above, which still applies even though this fixes the can-kicking) is that this requires server collusion; I’d need to make my main page be dynamic in some way, so that links to the script also update along with the script name change. This involves faffery with cron jobs, or turning the existing static HTML page into a server-generated page, both of which are annoying. I know how to do this, but it feels like an inelegant solution; this isn’t really a technical problem, it’s a social one, where developers are doing an anti-social thing. Attempting to solve social problems with technical measures is pretty much always a bad idea, and so it is in this case.
I’m leaning in this direction. I’m OK with smaller sites hotlinking (well, I’m not really, but I’m prepared to handwave it; I made the script and made it easy to use exactly to help people, and if a small part of that general donation to the universe includes me providing bandwidth for it, then I can live with that). The issue here is that it’s not always easy to tell who those heavy-bandwidth-consuming sites are. It relies on the referrer being provided, which it isn’t always. It’s also a bit more work on my part, because I would want to send an email saying “hey, Site X developers, you’re hotlinking my script as you can see on page sitex.example.com/sometable.html and it would be nice if you didn’t do that”, but I have no good way of identifying those pages; the document referrer isn’t always that specific. If I send an email saying “you’re hotlinking my script somewhere, who knows where, please don’t do that” then the site developers are quite likely to put this request at the very bottom of their list, and I don’t blame them.
This is: I move the script somewhere else and update my links, and then I change the previous URL to be the same script but it does something like barf a complaint into the console log, or (in extreme cases based on suggestions I’ve had) pops up an alert box or does something equally obnoxious. Obviously, I don’t wanna do this.
That is: contact the highest profile users, but instead of being conciliatory, be threatening. “You’re hotlinking this, stop doing it, or pay the Hotlink Licence Fee which is one cent per user per day” or similar. I think the people who suggest this sort of thing (and the previous malicious approach) must have had another website do something terrible to them in a previous life or something and now are out for revenge. I liked John Wick as much as the next poorly-socialised revenge-fantasy tech nerd, but he’s not a good model for collaborative software development, y’know?
I could put the site behind Cloudflare (or perhaps a better, less troubling CDN) and then not worry about it; it’s not my bandwidth then, it’s theirs, and they’re fine with it. This used to be the case, but recently I moved web hosts5 and stepped away from Cloudflare in so doing. While this would work… it feels like giving up, a bit. I’m not actually solving the problem, I’m just giving it to someone else who is OK with it.
This isn’t overrunning my bandwidth allocation or anything. I’m not actually affected by this. My complaint isn’t important; it’s more a sort of distaste for the process. I’d like to make this better, rather than ignoring it, even if ignoring it doesn’t mean much, as long as I’m not put to more inconvenience by fixing it. We want things to be better, after all, not simply tolerable.
So… what do you think, gentle reader? What would you do about it? Answers on a postcard.
What is “accessibility”? For some, it’s about ensuring that your sites and apps don’t block people with disabilities from completing tasks. That’s the main part of it, but in my opinion it’s not all of the story. Accessibility, to me, means taking care to develop digital services that are as inclusive as possible. That means inclusive of people with disabilities, of people outside Euro-centric cultures, and of people who don’t have expensive, top-of-the-range hardware and always-on, cheap, fast networks.
In his closely argued post The Performance Inequality Gap, 2023, Alex Russell notes that “When digital is society’s default, slow is exclusionary”, and continues
sites continue to send more script than is reasonable for 80+% of the world’s users, widening the gap between the haves and the have-nots. This is an ethical crisis for the frontend.
Big Al goes on to suggest that in order to reach interactivity in less than 5 seconds on first load, we should send no more than ~150KiB of HTML, CSS, images, and render-blocking font resources, and no more than ~300-350KiB of JavaScript. (If you want to know the reasoning behind this, Alex meticulously cites his sources in the article; read it!)
Now, I’m not saying this is impossible using modern frameworks and tooling (React, Next.js etc) that optimise for good “developer experience”. But it is a damned sight harder, because such tooling prioritises developer experience over user experience.
In January, I’ll be back on the jobs market (here’s my LinkTin resumé!) so I’ve been looking at what’s available. Today I saw a job for a Front End lead who will “write the first lines of front end code and set the tone for how the team approaches user-facing software development”. The job spec requires a “bias towards solving problems in simple, elegant ways”, and the candidate should be “confident building with…reliability and accessibility in mind”. Yet, weirdly, even though the first lines of code are yet to be written, it seems the tech stack is already decided upon: React and Next.js.
As Alex’s post shows, such tooling conspires against simplicity and elegance, and certainly against reliability and accessibility. To repeat his message:
When digital is society’s default, slow is exclusionary
Bad performance is bad accessibility.
Twitter currently has problems. Well, one specific problem, which is the bloke who bought it. My solution to this problem has been to move to Mastodon (@sil@mastodon.social if you want to do the same), but I’ve invested fifteen years of my life providing twitter.com with free content so I don’t really want it to go away. Since there’s a chance that the whole site might vanish, or that it continues on its current journey until I don’t even want my name associated with it any more, it makes sense to have a backup. And obviously, I don’t want all that lovely writing to disappear from the web (how would you all cope without me complaining about some random pub’s music in 2011?!), so I wanted to have that backup published somewhere I control… by which I mean my own website.
So, it would be nice to be able to download a list of all my tweets, and then turn that into some sort of website so it’s all still available and published by me.
Fortunately, Zach Leatherman came to save us by building a tool, Tweetback, which does a lot of the heavy lifting on this. Nice one, that man. Here I’ll describe how I used Tweetback to set up my own personal Twitter archive. This is unavoidably a bit of a developer-ish process, involving the Terminal and so on; if you’re not at least a little comfortable with doing that, this might not be for you.
This part is mandatory. Twitter graciously permit you to download a big list of all the tweets you’ve given them over the years, and you’ll need it for this. As they describe in their help page, go to your Twitter account settings and choose Your account > Download an archive of your data. You’ll have to confirm your identity and then say Request data. They then go away and start constructing an archive of all your Twitter stuff. This can take a couple of days; they send you an email when it’s done, and you can follow the link in that email to download a zip file. This is your Twitter backup; it contains all your tweets (and some other stuff). Stash it somewhere; you’ll need a file from it shortly.
You’ll need both node.js and git installed to do this. If you don’t have node.js, go to nodejs.org and follow their instructions for how to download and install it for your computer. (This process can be fiddly; sorry about that. I suspect that most people reading this will already have node installed, but if you don’t, hopefully you can manage it.) You’ll also need git installed: Github have some instructions on how to install git or Github Desktop, which should explain how to do this stuff if you don’t already have it set up.
Now, you need to clone the Tweetback repository from Github. On the command line, this looks like git clone https://github.com/tweetback/tweetback.git; if you’re using Github Desktop, follow their instructions to clone a repository. This should give you the Tweetback code, in a folder on your computer. Make a note of where that folder is.
Open a Terminal on your machine and cd into the Tweetback folder, wherever you put it. Now, run npm install to install all of Tweetback’s dependencies. Since you have node.js installed from above, this ought to just work. If it doesn’t… you get to debug a bit. Sorry about that. This should end up looking something like this:
$ npm install
npm WARN deprecated @npmcli/move-file@1.1.2: This functionality has been moved to @npmcli/fs
added 347 packages, and audited 348 packages in 30s
52 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
From here, you’re following Tweetback’s own README instructions: they’re online at https://github.com/tweetback/tweetback#usage and also are in the README file in your current directory.
Open up the zip file you downloaded from Twitter, and get the data/tweets.js file from it. Put that in the database folder in your Tweetback folder, then edit that file to change window.YTD.tweet.part0 on the first line to module.exports, as the README says. This means that your database/tweets.js file will now have the first couple of lines look like this:
module.exports = [
{
"tweet" : {
Now, run npm run import. This will go through your tweets.js file and load it all into a database, so it can be more easily read later on. You only need to do this step once. This will print a bunch of lines that look like { existingRecordsFound: 0, missingTweets: 122 }, and then a bunch of lines that look like Finished count { count: 116 }, and then it’ll finish. This should be relatively quick, but if you’ve got a lot of tweets (I have 68,000!) then it might take a little while. Get yourself a cup of tea and a couple of biscuits and it’ll be done when you’ve poured it.
If you’re setting up your own (sub)domain for your twitter archive, so it’ll be at the root of the website (so, https://twitter.example.com or whatever), then you can skip this step. However, if you’re going to put your archive in its own directory, so it’s not at the root (which I did, for example, at kryogenix.org/twitter), then you need to tell your setup about that.
To do this, edit the file eleventy.config.js, and at the end, before the closing }, add a new line, so the end of the file looks like this:
eleventyConfig.addPlugin(EleventyHtmlBasePlugin);
return {pathPrefix: "/twitter/"}
};
The string "/twitter/" should be whatever you want the path to the root of your Twitter archive to be, so if you’re going to put it at mywebsite.example.com/my-twitter-archive, set pathPrefix to be "/my-twitter-archive". This is only a path, not a full URL; you do not need to fill in the domain where you’ll be hosting this here.
As the Tweetback README describes, edit the file _data/metadata.js. You’ll want to change three values in here: username, homeLabel, and homeURL.
username is your Twitter username. Mine is sil: yours isn’t. Don’t include the @ at the beginning.
homeLabel is the thing that appears in the top corner of your Twitter archive once generated; it will be a link to your own homepage. (Note: not the homepage of this Twitter archive! This will be the text of a link which takes you out of the Twitter archive and to your own home.)
homeURL is the full URL to your homepage. (This is “https://kryogenix.org/” for me, for example. It is the URL that homeLabel links to.)
OK. Now you’ve done all the setup. This step actually takes all of that and builds a website from all your tweets.
Run npm run build.
If you’ve got a lot of tweets, this can take a long time. It took me a couple of hours, I think, the first time I ran it. Subsequent runs take a lot less time (a couple of minutes for me, maybe even shorter for you if you’re less mouthy on Twitter), but the first run takes ages because it has to fetch all the images for all the tweets you’ve ever written. You’ll want a second cup of tea here, and perhaps dinner.
It should look something like this:
$ npm run build
> tweetback@1.0.0 build
> npx @11ty/eleventy --quiet
[11ty] Copied 1868 files / Wrote 68158 files in 248.58 seconds (3.6ms each, v2.0.0-canary.18)
You may get errors in here about being unable to fetch URLs (Image request error Bad response for https://pbs.twimg.com/media/C1VJJUVXEAE3VGE.jpg (404): Not Found and the like); this is because some Tweets link to images that aren’t there any more. There’s not a lot you can do about this, but it doesn’t stop the rest of the site building.
Once this is all done, you should have a directory called _site, which is a website containing your Twitter archive! Hooray! Now you publish that directory, however you choose: copy it up to your website, push it to github pages or Netlify or whatever. You only need the contents of the _site directory; that’s your whole Twitter archive website, completely self-contained; all the other stuff is only used for generating the archive website, not for running it once it’s generated.
If you’re still using Twitter, you may post more Tweets after your downloadable archive was generated. If so, it’d be nice to update the archive with the contents of those tweets without having to request a full archive from Twitter and wait two days. Fortunately, this is possible. Unfortunately, you gotta do some hoop-jumping to get it.
You see, to do this, you need access to the Twitter API. In the old days, people built websites with an API because they wanted to encourage others to interact with that website programmatically as well as in a browser: you built an ecosystem, right? But Twitter are not like that; they don’t really want you to interact with their stuff unless they like what you’re doing. So you have to apply for permission to be a Twitter developer in order to use the API.
To do this, as the Tweetback readme says, you will need a Twitter bearer token. To get one of those, you need to be a Twitter developer, and to be that, you have to fill in a bunch of forms and ask for permission and be manually reviewed. Twitter’s documentation explains about bearer tokens, and explains that you need to sign up for a Twitter developer account to get them. Go ahead and do that. This is an annoying process where they ask a bunch of questions about what you plan to do with the Twitter API, and then you wait until someone manually reviews your answers and decides whether to grant you access or not, and possibly makes you clarify your answers to questions. I have no good suggestions here; go through the process and wait. Sorry.
Once you are a Twitter developer, create an app, and then get its bearer token. You only get this once, so be sure to make a note of it. In a clear allusion to the delight that this whole process brings to users, it probably will begin by screaming AAAAAAAAAAAAAAA and then look like a bunch of incomprehensible gibberish.
Now to pull in new data, run:
TWITTER_BEARER_TOKEN=AAAAAAAAAAAAAAAAAAq3874nh93q npm run fetch-new-data
(substituting in the value of your token, of course, which will be longer.)
This will fetch any tweets that aren’t in the database because you made them since! And then run npm run build again to rebuild the _site directory, and re-publish it all.
I personally run these steps (fetch-new-data, then build, then publish) daily in a cron job, which runs a script with contents (approximately):
#!/bin/bash
cd "$(dirname "$0")"
echo Begin publish at $(date)
echo Updating Twitter archive
echo ========================
TWITTER_BEARER_TOKEN=AAAAAAAAAAAAAA9mh8j9808jhey9w34cvj3g3 npm run fetch-new-data 2>&1
echo Updating site from archive
echo ==========================
npm run build 2>&1
echo Publishing site
echo ===============
rsync -e "ssh" -az _site/ kryogenix.org:public_html/twitter 2>&1
echo Finish publish at $(date)
but how you publish and rebuild, and how often you do that, is of course up to you.
What Tweetback actually does is use your twitter backup to build an 11ty static website. (This is not all that surprising, since 11ty is also Zach’s static site builder.) This means that if you’re into 11ty you could make the archive better and more comprehensive by adding stuff. There are already some neat graphs of most popular tweets, most recent tweets, the emoji you use a lot (sigh) and so on; if you find things that you wish that your Twitter archive contained, file an issue with Tweetback, or better still write the change and submit it back so everyone gets it!
Go to tweetback/tweetback-canonical and add yourself to the mapping.js file. What’s neat about this is that that file is used by tweetback itself. This means that if someone else with a Tweetback archive has a tweet which links to one of your Tweets, now their archive will link to your archive directly instead! It’s not just a bunch of separate sites, it’s a bunch of sites all of which are connected! Lots of connections between sites without any central authority! We could call this a collection of connections. Or a pile of connections. Or… a web!
That’s a good idea. Someone should do something with that concept.
You may, or may not, want to get off Twitter. Maybe you’re looking to get as far away as possible; maybe you just don’t want to lose the years of investment you’ve put in. But it’s never a bad thing to have your data under your control when you can. Tweetback helps make that happen. Cheers to Zach and the other contributors for creating it, so the rest of us didn’t have to. Tell them thank you.
Some posts are written so there’s an audience. Some are written to be informative, or amusing. And some are literally just documentation for me which nobody else will care about. This is one of those.
I’ve moved phone network. I’ve been with Three for years, but they came up with an extremely annoying new tactic, and so they must be punished. You see, I had an account with 4GB of data usage per month for about £15pm, and occasionally I’d go over that; a couple of times a year at most. That’s OK: I don’t mind buying some sort of “data booster” thing to give me an extra gig for the last few days before the next bill; seems reasonable.
But Three changed things. Now, you see, you can’t buy a booster to give yourself a bit of data until the end of the month. No, you have to buy a booster which gives you extra data every month, and then three days later when you’re in the new month, cancel it. There’s no way to just get the data for now.1
This is aggressively customer-hostile. There’s literally no reason to do this other than to screw people who forget to cancel it. Sure, have an option to buy a “permanent top-up”, no arguments with that. But there should also be an option to buy a temporary top-up, just once! There used to be!
I was vaguely annoyed with Three for general reasons anyway — they got rid of free EU roaming, they are unhelpful when you ask questions, etc — and so I was vaguely considering moving away regardless. But this was the straw that broke the camel’s back.2 So… time to look around.
I asked the Mastodon gang for suggestions, and I got lots, which is useful. Thank you for that, all.
The main three I got were Smarty, iD, and Giffgaff. Smarty are Three themselves in a posh frock, so that’s no good; the whole point of bailing is to leave Three. Giffgaff are great, and I’ve been hearing about their greatness for years, not least from popey, but they don’t do WiFi Calling, so they’re a no-no.3 And iD mobile looked pretty good. (All these new “MVNO” types of thing seem quite a lot cheaper than “traditional” phone operators. Don’t know why. Hooray, though.)
So off I went to iD, and signed up for a 30-day rolling SIM-only deal4. £7 per month. 12GB of data. I mean, crikey, that’s quite a lot better than before.
I need to keep my phone number, though, so I had to transfer it between networks. To do this, you need a “PAC” code from your old network, and you supply it to the new one. All my experience of dealing with phone operators is from the Old Days, and back then you had to grovel to get a PAC and your current phone network would do their best to talk you out of it. Fortunately, the iron hand of government regulation has put a stop to these sorts of shenanigans now (the UK has a good tech regulator, the Competition and Markets Authority5) and you can get a PAC, no questions asked, by sending an SMS with content “PAC” to 65075. Hooray. So, iD mobile sent me a new SIM in the post, and I got the PAC from Three, and then I told iD about the PAC (on the website: no person required), and they said (on the website), ok, we’ll do the switch in a couple of working days.
However, the SIM has some temporary number on it. Today, my Three account stopped working (indicating that Three had received and handled their end of the deal by turning off my account), and so I dutifully popped out the Three SIM from my phone6 and put in the new one.
But! Alas! My phone thought that it had the temporary number!
I think this is because Three process their (departing) end, there’s an interregnum, and then iD process their (arriving) end, and I was in the interregnum. I do not know what would have happened if someone rang my actual phone number during this period. Hopefully nobody did. I waited a few hours — the data connection worked fine on my phone, but it had the wrong number — and then I turned the phone off and left it off for ten minutes or so. Then… I turned it back on, and… pow! My proper number is mine again! Hooray!
That ought to have been the end of it. However, I have an Apple phone. So, in Settings > Phone > My Number, it was still reading the temporary number. Similarly, in Settings > Messages > iMessage > Send and Receive, it was also still reading the temporary number.
How inconvenient.
Some combination of the following fixed that. I’m not sure exactly what is required to fix it: I did all this, some more than once, in some random order, and now it seems OK: powering the phone off and on again; disabling iMessage and re-enabling it; disabling iMessage, waiting a few minutes, and then re-enabling it; disabling iMessage, powering off the phone, powering it back on again, and re-enabling it; editing the phone number in My Number (which didn’t seem to have any actual effect); doing a full network reset (Settings > General > Transfer or Reset Device > Reset > Reset Network Settings). Hopefully that’ll help you too.
Finally, there was voicemail. Some years ago, I set up an account with Sipgate, where I get a phone number and voicemail. The thing I like about this is that when I get voicemail on that number, it emails me an mp3 of the voicemail. This is wholly brilliant, and phone companies don’t do it; I’m not interested in ringing some number and then pressing buttons to navigate the horrible menu, and “visual voicemail” never took off and never became an open standard thing anyway. So my sipgate thing is brilliant. But… how do I tell my phone to forward calls to my sipgate number if I don’t answer? I did this once, about 10 years ago, and I couldn’t remember how. A judicious bit of web searching later, and I have the answer.
One uses a few Secret Network Codes to do this. It’s called “call diversion” or “call forwarding”, and you do it by typing a magic number into your phone dialler, as though you were ringing it as a number. So, let’s say your sipgate number is 0121 496 0000. Open up the phone dialler, and dial *61*01214960000# and press dial. That magic code, *61, sets your number to divert if you don’t answer it. Do it again with *62 to also divert calls when your phone is switched off. You can also do it again with *67 to divert calls when your phone is engaged, but I don’t do that; I want those to come through where the phone can let me switch calls.
And that’s how I moved phone networks. Stuart, ten years from now when you read this again, now you know how to do it. You’re welcome.
I wrote a couple of short blog posts for Open Web Advocacy (of which I’m a founder member) on our progress in getting regulators to overturn the iOS browser ban and end Apple’s stranglehold over the use of Progressive Web Apps on iThings.
TL;DR: we’re winning.
Flying home. More in love.
Here’s a YouTube video of a talk I gave for the nerdearla conference, with Spanish subtitles. Basically, it’s about Safari being “the new IE”, and what we at Open Web Advocacy are doing to try to end Apple’s browser ban on iOS and iPads, so consumers can use a more capable browser, and developers can deliver non-hamstrung Progressive Web Apps to iThing users.
Since I gave this talk, the UK Competition and Markets Authority have opened a market investigation into Apple’s iThings browser restriction – read News from UK and EU for more.
My grandson has recently learned to count, so I made a set of cards we could ‘play numbers’ with.
We both played. I showed him that you could write ‘maths sentences’ with the ‘and’ and the ‘is’ cards. Next time I visited, he searched in my bag and found the box of numbers. He emptied them out onto the sofa and completely unprompted, ‘wrote’:
I was ‘quite surprised’. We wrote a few more equations using small integers until one added up to 8, then he remembered he had a train track that could be made into a figure-of-8, so ‘Arithmetic Time’ was over but we squeezed in a bit of introductory set theory while tidying the numbers away.
From here on, I’m going to talk about computer programming. I won’t be explaining any jargon I use, so if you want to leave now, I won’t be offended.
I don’t want to take my grandson too far with mathematics in case it conflicts with what he will be taught at school. If schools teach computer programming, it will probably be Python and I gave up on Python.
Instead, I’ve been learning functional programming in the Clojure dialect of Lisp. I’ve been thinking for a while that it would be much easier to learn functional programming if you didn’t already know imperative programming. There’s a famous text, known as ‘SICP’ or ‘The Wizard Book’, that compares Lisps with magic. What if I took on a sourceror’s apprentice to give me an incentive to learn faster? I need to grab him “to the age of 5”, before the Pythonistas get him.
When I think about conventional programming, I make diagrams, and I’ve used Unified Modelling Language (UML) for business analysis, to model ‘data processing’ systems. An interesting feature of Lisps is that process is represented as functions, and functions are a special type of data. UML is designed for Object Oriented Programming. I haven’t found a way to make it work for Functional Programming (FP).
So, how can I introduce the ideas of FP to a child who can’t read yet?
There’s a mathematical convention to represent a function as a ‘black-box machine’ with a hopper at the top where you pour in the values and an outlet at the bottom where the answer value flows out. My first thought was to make an ‘add function’ machine but Clojure “treats functions as first-class citizens”, so I’m going to try passing “+” in as a function, along the dotted line labelled f(). Here’s my first prototype machine, passed 3 parameters: 2, 1 and the function +, to configure the black box as an adding machine.
In a Lisp, “2 + 1” is written “(+ 2 1)”.
The ‘parens’ are ‘the black box’.
Now that we’ve made our ‘black box’ an adder, we pass in the integers 2 and 1 and they are transformed by the function into the integer 3.
We can do the same thing in Clojure. Lisp parentheses provide ‘the black box’ and the first argument is the function to use. Other arguments are the numbers to add.
We’ll start the Clojure ‘Read Evaluate Print Loop’ (REPL) now. Clojure now runs well from the command line of a Raspberry Pi 4 or 400 running Raspberry Pi OS.
$ clj
Clojure 1.11.1
user=> (+ 2 1)
3
user=>
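If we want the ‘black box’ itself to exist as code, a rough sketch (the name black-box is mine, purely for illustration) is a higher-order function that takes the function card and the number cards:

(defn black-box
  "Pour the function f and some values into the hopper; the result comes out of the outlet."
  [f & values]
  (apply f values))

(black-box + 2 1)
; => 3

Hand it * instead of + and the same box becomes a multiplying machine, which is the whole point of functions being first-class values.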
Clearly, we have a simple, working Functional Program, but another thing about functions is that they can be ‘composed’ into a ‘pipeline’, so we can set up a production line of functional machines, with the second function taking the output of the first function as one of its inputs. Using the only function we have so far:
[Photo: two ‘black box’ machines composed into a pipeline, the output of the first feeding the second]
In Clojure, we could write that explicitly as a pipeline, to work just like the diagram
(-> (+ 1 2) (+ 1))
4
or use the more conventional Lisp format (start evaluation at the innermost parens)
(+ (+ 1 2) 1)
4
However, unlike the arithmetic “+” operator, the Clojure “+” function can add up more than 2 numbers, so we didn’t really need to compose the two “+” functions. This single function call would have got the job done:
(+ 1 2 1)
4
Similarly, we didn’t need to use 2 cardboard black-boxes. We could just pour all the values we wanted adding up into the hopper of the first.
Clojure can handle an infinite number of values, for as long as the computer can, but I don’t think I’ll tell my grandson about infinity until he’s at least 4.
I am writing this from 32,000 feet above Australia. Modern technology never ceases to amaze.
It’s called scarcity, and we can’t wait to see what you do with it.
Let’s start with the important bit. I think that over the last year, with acceleration toward the end of the year, I have heard of over 100,000 software engineers losing their jobs in some way. This is a tragedy. Each one of those people is a person, whose livelihood is at the whim of some capricious capitalist or board of same. Some had families, some were working their dream jobs, others had quiet quit and were just paying the bills. Each one of them was let down by a system that values the line going up more than it values their families, their dreams, and their bills.
While I am sad for those people, I am excited for the changes in software engineering that will come in the next decade. Why? Because everything I like about computers came from a place of scarcity in computering, and everything I dislike about computers came from a place of abundance in computering.
The old, waterfall-but-not-quite, measure-twice-and-cut-once approach to project management came from a place of abundance. It’s cheaper, so the idea goes, to have a department of developers sitting around waiting for a functional specification to be completed and signed off by senior management than for them to be writing working software: what if they get it wrong?
The team at Xerox PARC – 50 folks who were just told to get on with it – designed a way of thinking about computers that meant a single child (or, even better, a small group of children) could think about a problem and solve it in a computer themselves. Some of those 50 people also designed the computer they’d do it on, alongside a network and some peripherals.
This begat eXtreme Programming, which burst onto the scene in a time of scarcity (the original .com crash). People had been doing it for a while, but when everyone else ran out of money they started to listen: a small team of maybe 10 folks, left to get on with it, were running rings around departments of 200 people.
Speaking of the .com crash, this is the time when everyone realised how expensive those Oracle and Solaris licenses were. Especially if you compared them with the zero charged for GNU, Linux, and MySQL. The LAMP stack – the beginning of mainstream adoption for GNU and free software in general – is a software scarcity feature.
One of the early (earlier than the .com crash) wins for GNU and the Free Software Foundation was getting NeXT to open up their Objective-C compiler. NeXT was a small team taking off-the-shelf and free components, building a system that rivalled anything Microsoft, AT&T, HP, IBM, Sun, or Digital were doing – and that outlived almost all of them. Remember that the NeXT CEO wouldn’t become a billionaire until his other company released Toy Story, and that NeXT not only did the above, but also defined the first wave of dynamic websites and e-commerce: the best web technology was scarcity web technology.
What’s happened since those stories were enacted is that computerists have collectively forgotten how scarcity breeds innovation. You don’t need to know how 10 folks round a whiteboard can outsmart a 200 engineer department if your department hired 200 engineers _this month_: just put half of them on solving your problems, and half of them on the problems caused by the first half.
Thus we get SAFe and Scrumbut: frameworks for paying lip service to agile development while making sure that each group of 10 folks doesn’t do anything that wasn’t signed off across the other 350 groups of 10 folks.
Thus we get software engineering practices designed to make it easier to add code than to read it: what’s the point of reading the existing code if the one person who wrote it has already been replaced 175 times over, and has moved teams twice?
Thus we get not DevOps, but the DevOps department: why get your developers and ops folks to talk to each other if it’s cheaper to just hire another 200 folks to sit between them?
Thus we get the npm ecosystem: what’s the point of understanding your code if it’s cheaper just to randomly import somebody else’s and hire a team of 30 to deal with the fallout?
Thus we get corporate open source: what’s the point of software freedom when you can hire 100 people to push out code that doesn’t fulfil your needs but makes it easier to hire the next 500 people?
I am sad for the many people whose lives have been upended by the current downturn in the computering economy, but I am also sad for how little gets done within that economy. I look forward to the coming wave of innovation, and the ability to once again do more with less.
Not everything has to eat the world and the definition of success isn’t always
Life, people, and technology are all more complicated than that.
It’s hard to be attached to something and have it fade away, but that’s part of being a human being and existing in the flow of time. That’s table stakes. I have treasured memories of childhood friends who I haven’t heard from in 20 years. Internet communities that came and went. They weren’t less valuable because they didn’t last forever.
Let a thing just be what it is. So what if it doesn’t pan out the way you expected? If the value you derive from The Thing is reliant on its permanence, you’re setting yourself up for disappointment anyway.
Alternatively, abandon The Thing altogether and, I dunno, go watch a movie or something. The world is your oyster.
No points for figuring out which drama I’m referring to.
swyx wrote an interesting article on how he applied a personal help timeout after letting a problem at work drag on for too long. The help timeout he recommends is something I’ve also recently applied with some of my coworkers, so I thought I’d summarise and plug it here.
There can be a lot of pressure not to ask for help. You might think you’re bothering people, or worse, that they’ll think less of your abilities. It can be useful to counter-balance these thoughts by agreeing an explicit help timeout for your team.
If you’ve been stuck on a task with no progress for x minutes/hours/days, write up your problem and ask the team for help.
There are a few advantages to this:
Read swyx’s article here: https://www.swyx.io/help-timeouts
It’s been a very, very long time since I’ve released code to the open web. In fact, the only contributions I’ve made in the last 5 years have been to Chakra and a few other OSS libraries. So, in an attempt to try something new, I recently delved into the world anew.
There have been a few things lately that I wanted to play with:
The app in question is not new, it’s not novel, it’s not even unique in the components it’s using. But it was quick (less than a weekend morning playing around on the sofa), it’s simple (with a minimal surface), and scratches an itch.
GeoIP-lookup is a Go app that uses a local MaxMind City database file, providing a REST API that returns info about an IP address. That’s it.
The app itself is very simple. HTTP, tiny bit of routing, reading databases, and serving responses. GitHub makes it simple to then run an action that generates a new image and pushes this to GHCR. Unfortunately I couldn’t work out what was expected for the signing to work, but that can come later.
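For flavour, here’s a rough sketch of the shape of such a service. It is not the actual GeoIP-lookup code: it assumes the oschwald/geoip2-golang reader, a local GeoLite2-City.mmdb file, and a made-up /lookup/ route.

package main

import (
	"encoding/json"
	"log"
	"net"
	"net/http"
	"strings"

	"github.com/oschwald/geoip2-golang"
)

func main() {
	// Open the local MaxMind City database once at startup.
	db, err := geoip2.Open("GeoLite2-City.mmdb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	http.HandleFunc("/lookup/", func(w http.ResponseWriter, r *http.Request) {
		ip := net.ParseIP(strings.TrimPrefix(r.URL.Path, "/lookup/"))
		if ip == nil {
			http.Error(w, "invalid IP", http.StatusBadRequest)
			return
		}
		record, err := db.City(ip)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		// Return a small JSON summary of the city record.
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]any{
			"city":    record.City.Names["en"],
			"country": record.Country.IsoCode,
			"lat":     record.Location.Latitude,
			"lon":     record.Location.Longitude,
		})
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}

With that running, something like curl localhost:8080/lookup/8.8.8.8 would return the city, country code and coordinates MaxMind has on file for that address.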
The big remaining thing is tests. I’ve got some basic examples in to test the harness, but not much more at the moment. I’ve also learnt how much I miss strict type systems, how much I hate the front-end world, and how good it feels to just get things done and out. Perfect is the enemy of done.
Anyway, it’s live and feels good.
I went to TechMids last week. One of the talks that had the most immediate impact on me was the talk from Jen Lambourne, one of the authors of Docs for Developers.
One of the ideas contained in the talk was the following:
You might have significantly more impact by curating existing resources than creating new ones.
Like a lot of people, when I started getting into technical writing, I started with a lot of entry level content. Stuff like Ten Tips for a Healthy Codebase and The Early Return Pattern and You. Consequently, there’s a proliferation of 101 level content on blogs like mine, often only lightly re-hashing the source documentation. This isn’t necessarily a bad thing, and I’m definitely not saying that people shouldn’t be writing these articles. You absolutely should be writing this sort of thing if you’re trying to get better at technical writing, and even if there are 100,000 articles on which HTTP verb to use for what, yours could be the one to make it click for someone.
But, I should be asking myself what my actual goal is. If my main priority is to become a better writer with helping people learn being a close second, then I ought to crack on with writing that 100,001st article. If I’m focused specifically on having an impact on learning outcomes, I should consider curating rather than creating.
Maybe the following would be a good start:
Finally, because I love Ruby and because this is a resource that deserves another signpost, I was recently alerted to The Ruby Learning Center and its resources page. I hope they continue to grow. Hooray for the signpost makers and the librarians.
Hear this talk performed (with appropriate background music):
Friends and enemies, attendees of Tech Mids 2022.
Don’t read off the screen.
If I could offer you only one piece of advice for why and how you should speak in public, don’t read off the screen would be it. Reading your slides out is guaranteed to make your talk boring, whereas the rest of my advice has no basis in fact other than my own experience, and the million great people who gave me thoughts on Twitter.
I shall dispense this advice… now.
Every meetup in every town is crying out for speakers, and your voice is valuable. Tell people your story. The way you see things is unique, just like everybody else.
Everybody gets nervous about speaking sometimes. Anybody claiming that they don’t is either lying, or trying to sell you something. If you’re nervous, consider that it’s a mark of wanting to do a good job.
Don’t start by planning what you want to say. Plan what you want people to hear. Then work backwards from there to find out what to say to make that happen.
You can do this. The audience are on your side.
Find your own style. Take bits and pieces from others and make them something of your own.
Slow down. Breathe. You’re going faster than it feels like you are.
Pee beforehand. If you have a trouser fly, check it.
If someone tells you why you should speak, take their words with a pinch of salt, me included. If they tell you how to speak, take two pinches. But small tips are born of someone else’s bad experience. When they say to use a lapel mic, or drink water, or to have a backup, then listen; they had their bad day so that you didn’t have to.
Don’t put up with rambling opinions from questioners. If they have a comment rather than a question, then they should have applied to do a talk themselves. You were asked to be here. Be proud of that.
Practice. And then practice again, and again. If you think you’ve rehearsed enough, you haven’t.
Speak inclusively, so that none of your audience feels that the talk wasn’t for them.
Making things look unrehearsed takes a lot of rehearsal.
Some people script their talks, some people don’t. Whether you prefer bullet points or a soliloquy is up to you. Whichever you choose, remember: don’t just read out your notes. Your talk is a performance, not a recital.
Nobody knows if you make a mistake. Carry on, and correct it when you can. But keep things simple. Someone drowning in information finds it hard to listen.
Live demos anger the gods of speaking. If you can avoid a live demo, do so. Record it in advance, or prep it so that it looks live. Nobody minds at all.
Don’t do a talk only once.
Acting can be useful, if that’s the style you like. Improv classes, stage presence, how you stand and what you do with your hands, all of this can be taught. But put your shoulders back and you’ve got about half of it.
Carry your own HDMI adapter and have a backup copy of your talk. Your technology will betray you if it gets a chance.
Record your practices and watch yourself back. It can be a humbling experience, but you are your own best teacher, if you’re willing to listen.
Try to have a star moment: something that people will remember about what you said and the way you said it. Whether that’s a surprising truth or an excellent joke or a weird gimmick, your goal is to have people walk away remembering what you said. Help them to do that.
Now, go do talks. I’m Stuart Langridge, and you aren’t. So do your talk, your way.
But trust me: don’t read off the screen.
I’m going to start blogging again. No reason why, no reason why-not. A lot has happened in the last twelve months; head, wife, job, decisions. Expect lots of random things.
For now, I’m in Saundersfoot enjoying the culmination of the World Rowing Beach Sprint Finals. Take care.
Recently, “Stinky” Taylar and I were evaluating some third party software for accessibility. One of the problems was their sign-up form.
This simple two-field form has at least three problems:
U Nagaharu was a Korean-Japanese botanist. Why shouldn’t he sign up to your site? In Burmese, “U” is also a given name: painter Paw U Thet, actor Win U, historian Thant Myint U, and politicians Ba U and Tin Aung Myint U have this name. Note that for these Burmese people, their given names are not the “first name”; many Asian languages put the family name first, so their “first name” is actually their surname, not their given name.
Many Afghans have no surname. It is also common to have no surname in Bhutan, Indonesia, Myanmar, Tibet, Mongolia and South India. Javanese names traditionally are mononymic, especially among people of older generations; for example, ex-presidents Suharto and Sukarno, whose mononyms were their full legal names.
Many other people go by one name. Can you imagine how grumpy Madonna, Bono and Cher would be if they tried to sign up to buy your widgets but they couldn’t? Actually, you don’t need to imagine, because I asked Stable Diffusion to draw “Bono, Madonna and Cher, looking very angrily at you”:
Imagine how angry your boss would be if these multi-millionaires couldn’t buy your thingie because you coded your web forms without questioning falsehoods programmers believe about names.
How did this happen? It’s pretty certain that these development teams don’t have an irrational hatred of Indonesians, South Indians, Koreans and Burmese people. It is, however, much more likely they despise Cher, Madonna, and Bono (whose name is “O’Nob” backwards).
What is far more likely is that no-one on these teams is from South East Asia, so they simply didn’t know that not all the world has American-style names. (Many mononymic immigrants to the USA might actually have been “given” or inherited the names “LNU” or “FNU”, which are acronyms of “Last name unknown” or “First name unknown”.)
This is why there is a strong and statistically significant correlation between the diversity of management teams and overall innovation and why companies with more diverse workforces perform better financially.
The W3C has a comprehensive look at Personal names around the world, written by their internationalisation expert, Richard Ishida. I prefer to ask for “Given name”, with no minimum or maximum length, and optional “family name or other names”.
So take another look at your name input fields. Remember, not everyone has a name like “Chad Pancreas” or “Bobbii-Jo Musteemuff”.
Steve McLeod invited me on his podcast, Bootstrapped.fm, to discuss how I run a small web studio called 16by9.
Marc and I talk about what it’s like to start and build up this type of company, and how, with some careful thinking, you can avoid letting your business become something you never wanted it to be.
You can listen here.
I haven’t recorded many podcasts before but this was a blast. Massive thanks to Steve for inviting me on.
I’ve made a few content updates over the past week or so:
A few months back I also created an Unoffice Hours page. It’s one of the highlights of my week. If you fancy saying hello, book a call.
I’ve been documenting the various processes in my business over the past few months. This week, I’ve been thinking about the process of on-boarding new clients.
How do I ensure we’re a good fit? How do I go beyond what they’re asking for and really understand what they’re after? How do we transition from “we’ve never spoken before” to “I trust you enough to put down a deposit for this project”?
There’s another question that has occurred to me lately: have they commissioned a website before? And if so, how does this impact the expectations they have?
When I started building client websites – some 18 years ago(!) – the majority of people I worked with had never commissioned a website before.
These days when I speak to clients, they’re often on the 4th or 5th redesign of their website. Even if I’m asked to build a brand new website, most of the people I speak to have been through the process of having a website built numerous times before.
In other words: early in my career, most of the people I built websites for didn’t have any preconceived notions of how a website should be built or the process one goes through to create one. These days, they do.
Sometimes they have good experiences and work with talented freelancers or teams. But often I hear horror stories of how they’ve been burned through poor project planning or projects taking longer than expected and going over budget.
I’ve found it worthwhile to ask about these experiences. The quicker I can identify their previous experience and expectations, especially if they’re negative, the quicker I can reassure them that there’s a proven process that we’ll follow.
I was at the RSE conference in Newcastle, along with many people whom I have met, worked with, and enjoyed talking to in the past. Many more people whom I have met, worked with, and enjoyed talking to in the past were at an entirely different conference in Aberystwyth, and I am disappointed to have missed out there.
One of the keynote speakers at RSEcon22, Marlene Manghami, talked about the idea of transcendence through community membership. They cited evidence that fans of soccer teams go through the same hormonal shifts at the same intensity during the match as the players themselves. Effectively the fans are on the pitch, playing the game, feeling the same feelings as their comrades on the team, even though they are in the stands or even at home watching on TV.
I do not know that I have felt that sense of transcendence, and believe I am probably missing out both on strong emotional connections with others and on an ability to contribute effectively to society (to a society, to any society) by lacking the strong motivation that comes from knowing that making other people happier makes me happier, because I am with them.
So, I made a game. It’s called Farmbound. It’s a puzzle; you get a sequence of farm things — seeds, crops, knives, water — and they combine to make better items and to give you points. Knives next to crops and fields continually harvest them for points; seeds combine to make crops which combine to make fields; water and manure grow a seed into a crop and a crop into a field. Think of it like a cross between a match-3 game and Little Alchemy. The wrinkle is that the sequence of items you get is the same for the whole day: if you play again, you’ll get the same things in the same order, so you can learn and refine your strategy. It’s rather fun: give it a try.
It’s a web app. Works for everyone. And I thought it would be useful to explain why it is, why I think that’s the way to do things, and some of the interesting parts of building an app for everyone to play which is delivered over the web rather than via app stores and downloads.
Well, there are a bunch of practical reasons. You get completely immediate play with a web app; someone taps on a share link, and they’re playing. No installation, no platform detection, it Just Works (to coin a phrase which nobody has ever used before about apps ever in the history of technology). And for something like this, an app with platform-specific code isn’t needed: sure, if you’re talking to some hardware devices, or doing low-level device fiddling or operating system integration, you might need to build and deliver something separately to each platform. But Farmbound is not that. There is nothing that Farmbound needs that requires a native app (well, nearly nothing, and see later). So it isn’t one.
There are some benefits for me as the developer, too. Such things are less important; the people playing are the important ones. But if I can make things nicer for myself without making them worse for players, then I’m going to do it. Obviously there’s only one codebase. (For platform-specific apps that can be alleviated a little with cross-platform frameworks, some of which are OK these days.) One still needs to test across platforms, though, so that’s not a huge benefit. On the other hand, I don’t have to pay extra to distribute it (beyond it being on my website, which I’d be paying for anyway), and importantly I don’t have to keep paying in order to keep my game available for ever. There’s no annual tithe required. There’s no review process. I also get support for minority platforms by publishing on the web… and I’m not really talking about something in use by a half-dozen people here. I’m talking about desktop computers. How many people building a native app, even a relatively simple puzzle game like this, make a build for iOS and Android and Windows and Mac and Linux? Not many. The web gets me all that for minimal extra work, and if someone on FreeBSD or KaiOS wants to play, they can, as long as they’ve got a modern browser. (People saying “what about those without modern browsers”… see below.)
But from a less practical and more philosophical point of view… I shouldn’t need to build a platform-specific native app to make a game like this. We want a world where anyone can build and publish an app without having to ask permission, right? I shouldn’t need to go through a review process or be beholden to someone else deciding whether to publish my game. The web works. Would Wordle have become so popular if you had to download a Windows app or wait for review before an update happened? I doubt it. I used to say that if you’re building something complex like Photoshop then maybe go native, but in a world with Figma in it, that maybe doesn’t apply any more, and so Adobe listened to that and now Photoshop is on the web. Give people a thing which doesn’t need installation, gets them playing straight away, and works everywhere? Sounds good to me. Farmbound’s a web app.
Farmbound shouldn’t need its own domain, I don’t think. If people find out about it, it’ll likely be by shared links showing off how someone else did, which means they click the link. If it’s popular then it’ll be top hit for its own name (if it isn’t, the Google people need to have a serious talk with themselves), and if it isn’t popular then it doesn’t matter. And, like native app building, I don’t really want to be on the hook forever for paying for a domain; sure, it’s not much money, but it’s still annoying that I’m paying for a couple of ideas that I had a decade ago and which nobody cares about any more. I can’t drop them, because of course cool URIs don’t change, and I didn’t want to be thinking a decade from now, do I still need to pay for this?
In slightly more ego-driven terms, it being on my website means I get the credit, too. Plus, I quite like seeing things that are part of an existing site. This is what drove the (admittedly hipster-ish) rise of “tilde sites” again a few years ago; a bit of nostalgia for a long time ago. Fortunately, I’ve also got Cloudflare in front of my site, which alleviates worries I might have had about it dying under load, although check back with me again if that happens to see if it turns out to be true or not. (Also, I’m considering alternatives to Cloudflare at the moment too.)
Firstly, I separated the front and back ends and deployed them in different places. I’m not all that confident that my hosted site can cope with being hammered, if I’m honest. This is alleviated somewhat by cloud caching, and hopefully quite a bit more by having a service worker in place which caches almost everything (although see below about that), but a lot of this decision was driven by not wanting to incur a server hit for every visitor every time, as much as possible. This drove at least some of the architectural decisions. The front end is on my site and is plain HTML, CSS, and JavaScript. The back end is not touched when starting the game; it’s only touched when you finish a game, in order to submit your score and get back the best score that day to see if you beat that. That back end is written in Deno, and is hosted on fly.io, who seem pretty cool. (I did look at Deno Deploy, but they don’t do permanent storage.)
Part of the reason the back end is a bit of extra work is that it verifies your submitted game to check you aren’t cheating and lying about your score. This required me to completely reimplement the game code in Deno. Now, you may be saying “what? the front end game code is in JavaScript and so is the back end? why don’t they share a library?” and the answer is, because I didn’t think of it. So I wrote the front end first and didn’t separate out the core game management from all the “animate this stuff with CSS” bits, because it was a fun weekend project done as a proof of concept. Once I got a bit further into it and realised that I should have done that… I didn’t wanna, because that would have sucked all the fun out of the project like a vampire and meant that I’d have never done it. So, take this as a lesson: think about whether you want a thing to be popular up front. Not that you’ll listen to this advice, because I never do either.
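To give a sense of how small that back end can be, here is a hedged sketch. It is not the real Farmbound code: verifyGame and the in-memory daily best are stand-ins for the actual game replay and for persistent storage on fly.io.

// Sketch only: the real back end replays the submitted moves server-side
// and persists the daily best; both are stubbed here.
type Submission = { day: string; moves: number[]; score: number };

const bestByDay = new Map<string, number>();

function verifyGame(sub: Submission): boolean {
  // Hypothetical stand-in: re-run the day's fixed sequence against the
  // submitted moves and check the claimed score matches.
  return sub.score >= 0;
}

Deno.serve(async (req) => {
  if (req.method !== "POST") return new Response("nope", { status: 405 });
  const sub: Submission = await req.json();
  if (!verifyGame(sub)) return new Response("nice try", { status: 400 });

  const best = Math.max(bestByDay.get(sub.day) ?? 0, sub.score);
  bestByDay.set(sub.day, best);
  // Hand back the day's best so the front end can tell you if you beat it.
  return Response.json({ best });
});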
Similarly, this means that there’s less in the way of analytics, so I don’t get information about users, or real-time monitoring of popularity. This is because I did not want to add Google Analytics or similar things. No personal data about you ever leaves your device. You’ll have noticed that there’s no awful pop-up cookie consent dialogue; this is because I don’t need one, because I don’t collect any analytics data about players at all! Guess what, people who find those dialogues annoying (i.e., everyone?) You can tell companies to stop collecting data about you and then they won’t need an annoying dialogue! And when they say no… well, then you’ll have learned something about how they view you as customers, perhaps. Similarly, when scores are submitted, there’s no personal information that goes with them. I don’t even know whether two scores were submitted by the same person; there’s no unique ID per person or per device or anything. (Technically, the IP is submitted to the server, of course, but I don’t record it or use it; you’ll have to take my word for that.)
This architecture split also partially explains why the game’s JavaScript-dependent. I know, right? Me, the bloke who wrote “Everyone has JavaScript, right?”, building a thing which requires JS to run? What am I doing? Well, honestly, I don’t want to incur repeated server hits is the thing. For a real project, something which was critical, then I absolutely would do that; I have the server game simulation, and I could relatively easily have the server pass back a game state along with the HTML which was then submitted. The page is set up to work this way: the board is a <form>, the things you click on are <button>s, and so on. But I’m frightened of it getting really popular and then me getting a large bill for cloud hosting. In this particular situation and this particular project, I’d rather the thing die than do that. That’s not how I’d build something more critical, but… Farmbound’s a puzzle game. I’m OK with it not working, and if I turn out to be wrong about that, I can change that implementation relatively quickly without it being a big problem. It’s not architected in a JS-dependent way; it’s just progressively enhanced that way.
I had a certain amount of hassle from iOS Safari. Some of this is pretty common — how do I stop a double-tap zooming in? How do I stop the page overscrolling? — but most of the “fixes” are a combination of experimentation, cargo culting ideas off Stack Overflow, and something akin to wishing on a star. That’s all pretty irritating, although Safari is hardly alone in this. But there is a separate thing which is iOS Safari specific, which is this: I can’t sensibly present an “add this to your home screen” hint in iOS browsers other than Safari itself. In iOS Safari, I can show a little hint to help people know that they can add Farmbound to their home screen (which of course is delayed until a second game is begun and then goes away for a month if you dismiss it, because hassling your own players is a foolish thing to do). But in non Safari iOS browsers (which, lest we forget, are still Safari under the covers; see Open Web Advocacy if this is a surprise to you or if you don’t like it), I can’t sensibly present that hint. Because those non-Safari iOS browsers aren’t allowed to add web apps to your home screen at all. I can’t even give people a convenient tap to open Farmbound in iOS Safari where they can add the app to their home screen, because there’s no way of doing that. So, apologies, Chrome iOS or Firefox iOS users and others: you’ll have to open Farmbound in Safari itself if you want an easy way to come back every day. At least for now.
And finally, and honestly most annoyingly, the service worker.
Building and debugging and testing a service worker is still so hard. Working out why this page is cached, or why it isn’t cached, or why it isn’t loading, is incredibly baffling and infuriating still, and I just don’t get it. I tried using “workbox”, but that doesn’t actually explain how to use it properly. In particular, for this use case, a completely static unchanging site, what I want is “cache this actual page and all its dependencies forever, unless there’s a change”. However, all the docs assume that I’m building an “app shell” which then uses fetch() to get data off the server repeatedly, and so won’t shut up about “network first” and “cache first, falling back” and so on rather than the “just cache it all because it’s static, and then shut up” methodology. And getting insight into why a thing loaded or didn’t is really hard! Sure, also having Cloudflare caching stuff and my browser caching stuff as well really doesn’t help here. But I am not even slightly convinced that I’ve done all this correctly, and I don’t really know how to be better. It’s too hard, still.
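For the record, the “just cache it all because it’s static” approach I was groping for boils down to something like this sketch. The cache name and asset list are invented, and I’m not claiming this is exactly what Farmbound ships.

const CACHE = "farmbound-v1"; // bump this string when anything changes
const ASSETS = ["/farmbound/", "/farmbound/app.js", "/farmbound/style.css"]; // made-up paths

addEventListener("install", (event) => {
  // Cache the page and all its dependencies up front.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

addEventListener("activate", (event) => {
  // Throw away caches left over from older versions.
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(keys.filter((key) => key !== CACHE).map((key) => caches.delete(key)))
    )
  );
});

addEventListener("fetch", (event) => {
  // Cache first; only touch the network if something was missed.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});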
So that’s why Farmbound is the way it is. It’s been interesting to create, and I am very grateful to the Elite Farmbound Testing Team for a great deal of feedback and helping me refine the idea and the play: lots of love to popey, Roger, Simon, Martin, and Mark, as well as Handy Matt and my mum!
There are still some things I might do in the future (achievements? maybe), and I might change the design (I’m not great at visual design, as you can tell), and I really wish that I could have done all the animations with Shared Element Transitions because it would have been 312 times easier than the way I did it (a bunch of them add generated content and then web-animations-api move the ::before around, which I thought was quite neat but is also daft by comparison with SET). But I’m pleased with the implementation, and most importantly it’s actually fun to play. Getting over a thousand points is really good (although sometimes impossible, on some days), and I don’t really think the best strategies have been worked out yet. Is it better to make fields and tractors, or not go that far? Is water a boon or an annoyance? I’d be interested in your thoughts. Go play Farmbound, and share your results with me on Twitter!
My debut album is out, featuring 10 songs written while I was living in Thailand, India and Turkey. It’s quite a jumble of genres, as I like lots of different types of music, and not everyone will like it – I write the songs I want to hear, not for other people’s appetites.
You can buy it on Bandcamp for £2 or more, or (if you’re a cheapskate) you can stream it on Spotify or Apple Music. I am available for autographing breasts or buttocks.
I was reflecting on things that I know now, a couple of decades in to my career, that I wish I had been told at the beginning. Many things came to mind, but the most immediate from a technological perspective was Smalltalk’s image model.
It’s not even the technology of the Smalltalk image that’s relevant, but the model of thinking that works well with it. In Smalltalk, there are two (three) important files for a given machine: the VM is the machine that can run Smalltalk; the image is a snapshot of all of the Smalltalk objects on the machine(; and the sources are the source code for the classes and methods in that image).
This has weird implications for how you work that differ greatly from “compile this text stream” or “interpret this text stream” programming environments. People who have used the ENVY/Developer tool generally seem to wax lyrical and wonder why it was never reinvented, like the rest of software engineering is the beach with the ruins of the Statue of Liberty poking out from the end of the Planet of the Apes. But the bit I wish I had been told about: the image model puts the “personal” in “personal computer” as far as programming is concerned. Every piece of software you write is part of your image: a peer of the rest of the software you wrote, of the software that other people wrote that you added, and of the software that was already there when you first booted the machine.
I wish I had been told to think like that: that each tool or project is not a separate tool or project, but a cumulative addition to the image. To keep everything I wrote, so that the next time I needed something I might not need to write it. To make sure, when using new things, that I could integrate them with the image (it didn’t exist at the time, but TruffleSQUEAK is very much this idea). To give up asking “how can I write software to solve this problem”, and to start asking “how can I solve this problem with software, writing some if necessary”?
It would be the difference between twenty years of experience and one year of experience, twenty times over.
My Work Bezzie “Stinky” Taylar Bouwmeester and I take you on a wild, roller-coaster ride through the magical world of desktop screen readers. Who uses them? How can they help if developers use semantic HTML? How can you test your work with a desktop screen reader? (Parental discretion advised).
Last week I observed a blind screen reader user attempting to complete a legal document that had been emailed to them via DocuSign. This is a service that takes a PDF document, and turns it into a web page for a user to fill in and put an electronic signature to. The user struggled to complete a form because none of the fields had data labels, so whereas I could see the form said “Name”, “Address”, “Phone number”, “I accept the terms and conditions”, the blind user just heard “input, required. checkbox, required”.
Ordinarily, I’d dismiss the product as inaccessible, but DocuSign’s accessibility statement says “DocuSign’s eSignature Signing Experience conforms to and continually tests for Government Section 508 and WCAG 2.1 Level AA compliance. These products are accessible to our clients’ customers by supporting Common screen readers” and had been audited by The Paciello Group, whom I trust.
So I set about experimenting by signing up for a free trial and authoring a test document, using Google Docs and exporting as a PDF. I then imported this into DocuSign and began adding fields to it. I noticed that each input has a set of properties (required, optional etc) and one of these is ‘Data Label’. Aha! HTML fields have a <label> associated with them (or should do), so I duplicated the text and sent the form to my Work Bezzie, Stinky Taylar, to test.
No joy. The labels were not announced. (It seems that the ‘data label’ field actually becomes a column header in the management report screen.) So I set about adding text into the other fields, and through trial and error discovered how to force the front-end to have audible data labels:
I think DocuSign is missing a trick here. Given the importance of input labels for screen readers, a DocuSign author should be prompted for this information, with an explanation of why it’s needed. I don’t think it would be too hard to find the text immediately preceding the field (or immediately following it on the same line, in the case of radio/checkboxes) and prefilling the prompt, as that’s likely to be the relevant label. Why go to all the effort to make an accessible product, then make it so easy for your customers to get it wrong?
Another niggle: on the front end, there is an invisible link that is visually revealed when tabbed to, and says “Press enter or use the screen reader to access your document”. However, the tester I observed had navigated directly to the PDF document via headings, and hadn’t tabbed to the hidden link. The ‘screenreader mode’ seemed visually identical to the default ‘hunt for help, cripples!’ mode, so why not just have the accessible mode as the default?
All in all, it’s a good product, let down by poor usability and a ‘bolt-on’ approach. And, as we all know, built-in beats bolt-on. Bigly.
I can usually muddle through whatever programming task is put in front of me, but I can’t claim to have a great eye for design. I’m firmly in the conscious incompetence stage of making things look good.
The good news for me and people like me is that you can fake it. Sort of. I doubt I’ll ever compete with people who actually know what they’re doing, but I have found some resources for making something that doesn’t look like the dog’s dinner.
I’d like to add some more free resources to this, so hopefully I’ll get back to it.
The upcoming issue of the SICPers newsletter is all about phrases that were introduced to computing to mean one thing, but seem to get used in practice to mean another. This annoys purists, pedants, and historians: it also annoys the kind of software engineer who dives into the literature to see how ideas were discussed and used and finds that the discussions and usages were about something entirely different.
So should we just abandon all technical terminology in computing? Maybe. Here’s an irreverent guide.
Luckily the industry doesn’t really use this term any more so we can ignore the changed meaning. The small club of people who still care can use it correctly, everybody else can carry on not using it. Just be aware when diving through the history books that it might mean “extreme late binding of all things” or it might mean “modules, but using the word class” depending on the age of the text.
Nope, this one’s in the bin, I’m afraid. It used to mean “not waterfall” and now means “waterfall with a status meeting every day and an internal demo every two weeks”. We have to find a new way to discuss the idea that maybe we focus on the working software and not on the organisational bureaucracy, and that way does not involve the word…
If you can hire a “DevOps engineer” to fulfil a specific role on a software team then we have all lost at using the phrase DevOps.
This one used to mean “psychologist/neuroscientist developing computer models to understand how intelligence works” and now means “an algorithm pushed to production by a programmer who doesn’t understand it”. But there is a potential for confusion with the minor but common usage “actually a collection of if statements but last I checked AI wasn’t a protected term” which you have to be aware of. Probably OK, in fact you should use it more in your next grant bid.
Previously something very specific used in the context of financial technology development. Now means whatever anybody needs it to mean if they want their product owner to let them do some hobbyist programming on their line-of-business software, or else. Can definitely be retired.
Was originally the idea that maybe the things your software does should depend on the things the customers want it to do. Now means automated tests with some particular syntax. We need a different term to suggest that maybe the things your software does should depend on the things the customers want it to do, but I think we can carry on using BDD in the “I wrote some tests at some point!” sense.
Definitely another one for the bin. If Tony Hoare were not alive today he would be turning in his grave.
Regular readers will recall that the UK competition regulator, the CMA, investigated Apple and Google’s mobile ecosystems and concluded there is a need for regulation. They were initially looking mostly at native app stores, but quickly widened that to looking at how Apple’s insistence on all browsers using WebKit on iOS is preventing Progressive Web Apps from competing against single-platform native apps.
The CMA has announced its intention to begin a market investigation specifically into the supply of mobile browsers and browser engines, and the distribution of cloud gaming services through app stores on mobile devices, and seeks your views. It doesn’t matter if you are not based in the UK; if you or your clients do business in the UK, your views matter too.
Steve Fenton has published his response, as has Alistair Shepherd; here is mine, in case you need something to crib from to write yours. Send your response to the CMA mailbox browsersandcloud@cma.gov.uk before July 22nd.
I am a UK-based web developer and accessibility consultant, specialising in ensuring web sites are inclusive for people with disabilities or who experience other barriers to access–such as living in poorer nations where mobile data is comparatively expensive, networks may be slow and unreliable and people are generally accessing the web on cheap, lower-specification devices. I write in a personal capacity, and am not speaking on behalf of any clients or employers, past or present. You have my permission to publish or quote from this document, with or without attribution.
Many of my clients would like to make apps that are Progressive Web Applications. These are apps that are websites, built with long-established open technologies that work across all operating systems and devices, and enhanced to be able to work offline and have the look and feel of an application. Examples of ‘look and feel’ might be to render full-screen; to be saved with their own icon onto a device’s home screen; to integrate with the device’s underlying platform (with the user’s permission) in order to capture images from the camera; use the microphone for video conferencing; to send push notifications to the user.
The benefits of PWAs accrue to both the developer (and the business they work for) and the end user. Because they are based on web technology, a competent developer need only make one app that will work on iOS and Android, as well as on desktop computers and tablets. This write-once approach has obvious benefits over developing a single-platform (“native”) app for iOS in addition to a single-platform app for Android and also a website. It greatly reduces costs because it greatly reduces the complexity of development, testing and deployment.
The benefits to the user are that the initial download is much smaller than that for a single-platform app from an app store. When an update to the web app is pushed by a developer to the server, the user only downloads the updated pages, not the whole application. For businesses looking to reach customers in growing markets such as India, Indonesia, Nigeria and Kenya, this is a competitive advantage.
In the case of users with accessibility needs due to a disability, the web is a mature platform on which accessibility is a solved problem.
However, many businesses are not able to offer a Progressive Web App, largely due to Apple’s anti-competitive policy of requiring all browsers on iOS and iPad to use its own engine, called WebKit. Whereas Google Chrome on Mac, Windows and Android uses its own engine (called Blink), and Firefox on non-iOS/iPad platforms uses its own rendering engine (called Gecko), Apple’s policy requires Firefox and Chrome on iOS/iPad to be branded skins over WebKit.
This “Apple browser ban” has the unfortunate effect of hamstringing Progressive Web Apps. Whereas Apple’s Safari browser allows web apps (such as Wordle) to be saved to the user’s home screen, Firefox and Chrome cannot do so–even though they all use WebKit. While single-platform iOS apps can send push notifications to the user, browsers are not permitted to. Push notifications are high on businesses’ priority lists because of how they can drive engagement. WebKit is also notably buggy and, with no competition on the iOS/iPad platform, there is little to incentivise Apple to invest more in its development.
Apple’s original vision for applications on iOS was Web Apps, and today they still claim Web Apps are a viable alternative to the App Store. Apple CEO Tim Cook made a similar claim last year in Congressional testimony when he suggested the web offers a viable alternative distribution channel to the iOS App Store. They have also claimed this during a court case in Australia with Epic.
Yet Apple’s own policies prevent Progressive Web Apps being a viable alternative. It’s time to regulate Apple into allowing other browser engines onto iOS/iPad and giving them full access to the underlying platform–just as they currently are on Apple’s MacOS, Android, Windows and Linux. Therefore, I fully support your proposal to make a reference in relation to the supply of mobile browsers and cloud gaming in the UK, the terms of reference, and urge a swift remedy: Apple must be required to allow alternate browser engines on iOS, with access to all of the same APIs and device integrations that Safari and Native iOS have access to.
Yours,
Bruce Lawson
If you’re looking for music to study to tonight, here’s Watering a Flower, by Haruomi Hosono. Originally recorded in 1984 to be in-store music for MUJI.
If you’re looking for a way to avoid studying, it’s the same link, but read the comments.
I’ve been maintaining websites in some form for a long time now, and here’s why maybe you should at least think about it.
You get almost total creative control.
The more content that gets generated inside the walled gardens of Twitter, Instagram, etc., the less weirdness, beauty and creativity we get on the web. When you post on someone else’s service, what you wanted to say is forced into a tiny rectangle, and you might find that rectangle getting smaller and more restrictive as time goes on.
It’ll last if you take care of it.
If you create your web page using the fundamental technologies, HTML and CSS, and resist the urge to jump onto the ever-turning wheel of more advanced technologies, you’ll have something that in ten years from now you can be pretty sure you’ll be able to slap onto a server and show people. The oft-referenced Space Jam website is a great example.
It doesn’t really even have to be a website.
You know what’s easier than writing HTML? Writing plain text. You know what web servers are perfectly happy to serve? A plain text web site.
Hard things are often worth it.
Learning to develop and host a website is harder than registering a Twitter account and merrily posting away, but you develop a useful skill and a valuable creative outlet. A lot of people liken creating a personal website to gardening. You carefully water, prune, and dote, and what you get is something you can cherish.
Hosting a website isn’t that difficult.
Again, it’s harder than using a third party service, but there are plenty of places to put your site for free or cheap:
It doesn’t really matter if nobody reads it.
Sure, one good thing about the walled gardens is that they’re relatively convenient when it comes to showing your stuff to other people in the garden. However, someone seeing your post isn’t really a human connection. Someone hitting like on your post isn’t really a human connection.
I’ve come to favour fewer, deeper interactions over a larger number of shallower ones, even if those likes do feel good. I’m not writing this to make myself out as the wise person who’s transcended the shallowness of social media. I’m writing it because it takes a deliberate effort for me not to fall into those traps. There’s some effort to recreate “likes” in the IndieWeb, but at the moment I view the lack of likes as more of a feature than a bug.
I recently taught an introduction to Python course, to final-year undergraduate students. These students had little to zero programming experience, and were all expected to get set up with Python (using the Anaconda environment, which we had determined to be the easiest way to get a reasonable baseline configuration) on laptops they had brought themselves.
What follows is not a slight on these people, who were all motivated, intelligent, and capable. It is a slight on the world of programming in the current ages, if you are seeking to get started with putting a general-purpose computer to your own purposes and merely own a general-purpose computer.
One person had a laptop that, being a mere six (6) years old, was too old to run the current version of Anaconda Distribution. We had to crawl through the archives, guessing what older version might work (i.e. might both run on their computer and still permit them to follow the course).
Another had a modern laptop and the same version of Python/tools that everyone else was using, except that their IDE would crash if they tried to plot a graph in dark mode.
Another had, seemingly without having launched the shell while they owned their computer, got their profile into a state where none of the system binary folders were on their PATH. Hmm, python3 doesn’t work, let’s use which python to find out why not. Hmm, which doesn’t work, let’s use ls to find out why not. Hmmm…
Many, through not having used terminal emulators before, did not yet know that terminal emulators are modal. There are shell commands, which you must type when you see a $ (or a % or a >) and will not work when you can see a >>>. There are Python commands, which are the other way around. If you type a command that launches nano/pico, there are other rules.
By the way, conda and pip (and poetry, if you try to read anything online about setting up Python) are Python things but you cannot use them as Python commands. They are shell commands.
By the other way, everyone writes those shell commands with a $ at the front. You do not write the $. Oh, and by the other other way: they don’t necessarily tell you to open the Terminal to do it.
Different environments—the shell, Visual Studio Code, Spyder, PyCharm—will do different things with respect to your “current working directory” when you run a script. They will not tell you that they have done this, nor that it is important, nor that it is why your script can’t find a data file that’s RIGHT THERE.
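One way to sidestep the working-directory confusion (a sketch, with a made-up file name, not something any of these tools will suggest to you) is to resolve data files relative to the script itself:

from pathlib import Path

# __file__ is the script's own location, so this finds the data file no matter
# which "current working directory" the shell, Spyder, or PyCharm picked.
HERE = Path(__file__).resolve().parent
data_file = HERE / "grades.csv"

with data_file.open() as f:
    print(f.readline())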
This is all way before we get to the dark art of comprehending exception traces.
When I were a lad and Silicon Valley were all fields, you turned a computer on and it was ready for some programming. I’m not suggesting returning to that time, computers were useless then. But I do think it is needlessly difficult to get started with “a programming language that lets you work quickly” in this time of ubiquitous programs.
Recently, the HTML spec changed to replace the current outline algorithm with one based on heading levels. So the idea that you could use <h1> for a generic heading across your documents, and the browser would “know” which level it actually should be by its nesting inside <section> and other related “sectioning elements”, is no more.
This has caused a bit of anguish in my Twitter timeline–why has this excellent method of making reusable components been taken away? Won’t that hurt accessibility, as documents marked up that way will now have a completely flat structure? We know that 85.7% of screen reader users find heading levels useful.
Here comes the shocker: it has never worked. No web browser has implemented that outlining algorithm. If you used <h1> across your documents, it has always had a flat structure. Nothing has been taken away; this part of the spec has always been a wish, but has never been reality.
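To make that concrete, here is a sketch (the headings are invented). Markup like the first block has always been exposed to assistive technology as two level-one headings, so if you want a subheading you have to say so yourself:

<section>
  <h1>My travels</h1>          <!-- announced as a level 1 heading -->
  <section>
    <h1>Chapter 1: Bruges</h1> <!-- also level 1: no browser ever computed level 2 -->
  </section>
</section>

<!-- what conveys structure in practice -->
<h1>My travels</h1>
<h2>Chapter 1: Bruges</h2>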
One of the reasons I liked having a W3C versioned specification for HTML is that it would reflect the reality of what browsers do on the date at the top of the spec. A living standard often includes things that aren’t yet implemented. And the worst thing about having zombie stuff in a spec is that lots of developers believe (in good faith) that it accurately reflects what’s implemented today.
So it’s good that this is removed from the WHATWG specification (now that the W3C specs are dead). I wish that you could use one generic heading everywhere, and its computed level were communicated to assistive technology. Back in 1991, Sir Uncle Timbo himself wrote
I would in fact prefer, instead of <H1>, <H2> etc for headings [those come from the AAP DTD] to have a nestable <SECTION>..</SECTION> element, and a generic <H>..</H> which at any level within the sections would produce the required level of heading.
But browser vendors ignored both Sir Uncle Timbo, and me (the temerity!), and never implemented it, so removing this from the spec will actually improve accessibility.
(More detail and timeline in Adrian Roselli’s post There Is No Document Outline Algorithm.)
If there’s a common thread through tech workers, it’s having a drawer full of stickers, accumulated indiscriminately at conferences and meetups, but which one can never quite bring themselves to attach to anything.
There are very understandable human reasons for this. Once that sticker is stuck, you’ve committed. Your enjoyment of that sticker is now bound inextricably to the lifetime of whatever you’ve stuck it on. Getting rid of that thing means getting rid of that sticker and the memories that come with it. That sticker isn’t just a picture of a dog, it represents the memories of that time you went to Crufts or whatever. You might have stuck it on a laptop, which means you’ll probably only have that sticker for somewhere between four and eight more years. What a waste. Or you might have stuck it on one of your beautiful notebooks, which in practice means you’ll have it forever, as notebooks are another thing that most of us like to accumulate but balk at the idea of actually using.
So, like many of you, I kept my stickers in a little drawer to occasionally rifle through, smiling at the memories attached. Only, mathematically, I was wasting them.
Let’s say each sticker has a value l representing how much you like it. For convenience, we’ll give all stickers a fixed value of l = 1. Your enjoyment, e, of a sticker is then l * s, where s is the total number of seconds for which you were looking at it. The success of your sticker strategy is measured by the sum of all e values.
Time for a worked example. Let’s say you have five stickers in your drawer and you look through the drawer once a month. You look at each sticker for a good 30 seconds before replacing it and moving on to the next one. You maintain this ritual for an admirable 60 years.
12 inspections a year for 60 years is 720 inspections. With a fixed l = 1, each inspection gives you 30 * 5 * 1, for a total of e = 150. Your lifetime e using the drawer strategy is a hefty 720 * 150 = 108,000.
Now imagine you take those five stickers and put them on the back of your desk, where all five remain in your line of sight while you work. Keeping our convenient l = 1 for each sticker, you’re racking up a whopping 5 e per second. At this rate, you’ll catch up with your drawer-using counterpart in 108,000 / 5 = 21,600 seconds, or 360 minutes, or six hours.
In other words, in a little less than a work day minus lunch, I’ve enjoyed my stickers as much as I would have done over 60 years if I’d kept them safe in a drawer and just looked at them once per month.
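If you prefer your sticker economics executable, here’s the same arithmetic as a few lines of Ruby:

stickers         = 5
l                = 1         # how much you like each sticker
seconds_per_look = 30
inspections      = 12 * 60   # monthly, for 60 years

drawer_e          = inspections * stickers * l * seconds_per_look
desk_e_per_second = stickers * l

puts drawer_e                        # => 108000
puts drawer_e / desk_e_per_second    # => 21600 seconds, i.e. six hours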
Don’t be a drawer. Be a sticker.
There’s a lot you can do with Ruby’s concepts of object individuation. A lot you probably shouldn’t do.
Some objects just hoard too many methods. By applying a modest tax rate, you can reclaim memory from your most bloated objects back to the heap.
class Object
  # Undefine a random selection of an object's methods, proportional to how
  # many its class defines. Defaults to a modest 20% rate.
  def tax(tax_factor = 0.2)
    meths = self.class.instance_methods(false)
    tax_liability = (meths.length * tax_factor).ceil
    tax_liability.times do
      # Remove the chosen method from the pool so the same one isn't taxed twice.
      tax_payment = meths.delete(meths.sample)
      instance_eval("undef #{tax_payment}")
    end
  end
end
class Citizen
  def car; end
  def house; end
  def dog; end
  def spouse; end
  def cash; end
end
c = Citizen.new
c.tax
c.house
# undefined method `house' for #<Citizen:0x00007fc342a866c0>
Write your code like you write your emails when you’re trying a little too hard to come across as friendly.
# Give every (unfrozen) class a cheerful bang! alias for each of its methods.
klasses = ObjectSpace.each_object(Class)
klasses.each do |klass|
  next if klass.frozen?

  klass.instance_methods.each do |meth|
    next if meth.to_s.end_with?('!') # already friendly enough

    klass.class_eval do
      alias_method "#{meth}!".to_sym, meth
    end
  end
end
[1, 3, 2].max!
# 3
A fun game to play with your friends. It hides Wally in a randomly selected method, and you get to look for him!
# Pick a random (unfrozen) class and a random method on it, then hide Wally
# inside. You find him whenever something calls that method.
klasses = ObjectSpace.each_object(Class).reject(&:frozen?)
klass = klasses.sample
method = klass.instance_methods.sample

klass.class_eval do
  define_method(method) do |*args|
    puts "You found Wally!"
  end
end
Budgeting is important.
# Define the marker module first so the TracePoint block can reference it.
module Unsustainable; end

$available_calls = 1000

trace = TracePoint.new(:call) do |tp|
  if tp.defined_class.ancestors.include?(Unsustainable)
    raise StandardError, 'No more method calls. Go outside and play.' if $available_calls.zero?

    $available_calls -= 1
  end
end
trace.enable

class EndlessGrowth
  include Unsustainable

  def grow; end
end

1001.times { EndlessGrowth.new.grow }
# No more method calls. Go outside and play. (StandardError)
class Object
  def self.inherited(subclass)
    # Only direct subclasses of Object are permitted; hierarchy is abolished.
    return if %w[Object].include?(self.name)

    raise StandardError, '🏴'
  end
end

class RulingClass; end
class UnderClass < RulingClass; end # raises StandardError
I was lucky enough to get one of the limited number of tickets for Brighton Ruby 2022, so off I trotted down to Brighton for a long weekend in the very comfortable Brighton Surf Guest House.
Joel Hawksley and his talk about getting GitHub’s 40k lines of custom CSS (sort of) under control with their design system, Primer.
Kelly Sutton–who I met at “breakfast”–talking about latency-based Sidekiq queues. The idea is that you queue your jobs by expected latency (queue: :within_thirty_seconds) instead of priority, which is ambiguous, and that way your auto-scaling and alerting can respond appropriately. A writeup will potentially be landing on the Gusto engineering blog some time in the near future. They also introduced me to the idea of having specific read-only queues for high-throughput tasks that won’t overwhelm the primary database with writes.
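I haven’t seen the writeup yet, so this is only a rough sketch of how I picture it using stock Sidekiq options; the job and queue names here are invented:

class SendReceiptJob
  include Sidekiq::Job

  # The queue name states a latency promise rather than a vague priority.
  sidekiq_options queue: :within_thirty_seconds

  def perform(order_id)
    # ... send the receipt ...
  end
end

Each Sidekiq process then only works the queues whose promises it can keep, and (presumably) alerting keys off queue latency rather than queue depth.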
Tom Stuart talked about Ruby 2.7+’s pattern matching, a powerful but under-hyped feature which, to my understanding, provides functionality similar to named regular expression captures, but for arbitrary objects. He included several examples of how this can be used to reduce the amount of code required to do certain things.
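For anyone who hasn’t played with it yet, here’s a minimal, made-up example of the case/in syntax:

order = { status: "shipped", customer: { name: "Ada", country: "UK" } }

case order
in { status: "shipped", customer: { name: String => name, country: "UK" } }
  puts "#{name}'s order is on its way"
in { status: "pending" }
  puts "Still waiting"
end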
Jemma Issroff presenting on object shapes and how the concept can potentially be applied to Ruby. This is the sort of “under the hood” improvement that I’m not so familiar with, but am eager to learn more about. There’s an open issue on the Ruby tracker covering it in more detail.
Roberta Mataityte on the 9 rules of effective debugging, from the book Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems by David Agans. A really useful talk, given how easy it is to get lost in the woods while tracking down a problem. I can’t personally vouch for the book, but Roberta’s take on the material definitely made it sound like a worthwhile read, so I’ll probably grab a copy at some point.
Emma Barnes on legacy and the context in which we create and use our tools. Amazing talk, but you probably had to be there.
John Cinnamond on the maybe monad, the null object pattern, and how learning different perspectives helps us truly understand ourselves. Sometimes I suspect that people are using programming to trick me into learning deeper lessons about the human condition. Tricksy.
Naomi Freeman on her framework (Freemwork?) for building psychologically safe teams. Unfortunately by this point I’d stopped taking notes on my phone because I didn’t want to appear disinterested, so I can’t remember the individual points.
Looking forward to next year.
Several years ago, I inherited a legacy application. It had multiple parts all logging back to a central ELK stack that had later been supplemented by additional Prometheus metrics. And every morning, I would wake up to a dozen alerts.
Obviously we tried to reduce the number of alerts sent. Where possible the system would self-heal, bringing up new nodes and killing off old ones. Some of the alerts were collected into weekly reports so they were still seen but, as they didn’t require immediate triage, could be held off.
But the damage was done.
No-one read the Slack alerts channel. Everyone had forwarded the emails to spam. The system cried wolf, but all the villagers had learnt to cover their ears.
With a newer project, I wanted an implicit rule: if an alert pops up, it is because it requires human interaction. A service shut off because of a billing issue. A new deploy causing a regression in signups. These are things a human needs to step in and do something about (caveat emptor: there is wiggle room in this).
There are still warnings being flagged up, but developers can check in on these in their own time. Attention is precious, and being pulled out of it every hour because of a NullPointer is not a worthy trade-off, in my experience.
A flood of false positives will make you blind to the real need of alerting: knowing when you’re needed.
There are seven basic story plots:
How many distinct software “plots” are there?
Let me know if you can think of any more.
This is particularly powerful when you transition between orders of magnitude and facilitate a positive feedback loop. Make my test suite 15% faster and I’ll thank you kindly. Make it 10x faster and I’ll love you forever.
My talk at AppDevCon discussed the Requirements Trifecta but turned it into a Quadrinella: you need leadership vision, market feedback, and technical reality to all line up as listed in the trifecta, but I’ve since added a fourth component. You also need to be able to tell the people who might be interested in paying for this thing that you have it and it might be worth paying for. If you don’t have that then, if anybody has heard of you at all, it will be as a company that went out of business with a product “five years ahead of its time”: you were able to build it, it did something people could benefit from, in an innovative way, but nobody realised that they needed it.
A 45 minute presentation was enough to introduce that framework and describe it, but not to go into any detail. For example we say that we need “market feedback”, i.e. to know that the thing we are going to build is something that some customer somewhere will actually want. But how do we find out what they want? The simple answer, “we ask them”, turns out to uncover some surprising complexity and nuance.
At one end, you have the problem of mass-market scale: how do you ask a billion people what they want? It’s very expensive to do, and even more expensive to collate and analyse those billion answers. We can take some simplifying steps that reduce the cost and complexity, in return for finding less out. We can sample the population: instead of asking a billion people what they think, we can ask ten thousand people what they think and apply what we learn to all billion people.
We have to know that the way in which we select those 10,000 people is unbiased, otherwise we’re building for an exclusive portion of the target billion. Send a survey to people’s work email addresses on a Friday, and some will not pick it up until Sunday as their weekend is Fri-Sat. Others will be on holiday, or not checking their email that day, or feeling grumpy and inclined to answer with the opposite of their real thoughts, or getting everything done quickly before the weekend and disinclined to think about your questions at all.
Another technique we use is to simplify the questions (or at least the answers we’ll accept to those questions) to make it easier to combine and aggregate those answers. Now we have not asked “what do you think about this” at all; we have asked “which of these ways in which you might think about this do you agree with?” Because people are inclined to avoid conflict, they tend to agree with us. Ask “to what extent do you agree that spline reticulation is the biggest automation opportunity in widget frobnication?” and you’ll learn something different from the person who asked “to what extent do you agree that spline reticulation is the least important automation opportunity in widget frobnication?”
We’ll get richer information from deeper, qualitative interactions with people, and that tends to mean working with fewer people. At the extreme small end we have one person: an agent talks to their client about what that client would like to see. This is quite an easy case to deal with, because you have exactly one viewpoint to interpret.
Of course, that viewpoint could well be inconsistent. Someone can tell you that they get a lot of independence in how they work, then in describing their tasks list all the approvals and sign-offs they have to get. It can also be incomplete. A manager might not fully know all of the details of the work their reports do; someone may know their own work very well but not the full context of the process in which that work occurs. Additionally, someone may not think to tell you everything about their situation: many activities rely on tacit knowledge that’s hard to teach and hard to explain. So maybe we watch them work, rather than asking them how they work. Now, are they doing what they’re doing because that’s how they work, or because that’s how they behave when they’re being watched?
Their viewpoint could also be less than completely relevant: maybe the client is the person paying for the software, but are they the people who are going to use it? Or going to be impacted by the software’s outputs and decisions? I used the example in the talk of expenses software: very few people when asked “what’s the best software you’ve ever used” come up with the tool they use to submit receipts for expense claims. That’s because it’s written for the accounting department, not for the workers spending their own money.
So, we think to involve more people. Maybe we add people’s managers, or reports, or colleagues, from their own and from other departments. Or their customers, or suppliers. Now, how do we deal with all of these people? If we interview them each individually, then how do we resolve contradiction in the information they tell us? If we bring them together in a workshop or focus group, we potentially allow those contradictions to be explored and resolved by the group. But potentially they cause conflict. Or don’t get brought up at all, because the politics of the situation lead to one person becoming the “spokesperson” for their whole team, or the whole group.
People often think of the productiveness of a software team as the flow from a story being identified as “to do” to working software being released to production. I contend that many of the interesting and important decisions relating to the value and success of the software were made before that limited part of the process.
Many, many years ago I was an avid reader and writer on various fiction writing websites. There are still links to them on this site, which shows (a) how long they’ve been around and (b) how out of date this site is. Recently I’ve been on a bit of a binge, revisiting my past and re-reading these old stories. Which led me to a quest.
I built up a good rapport with several writers. PMs, and later emails, were traded back and forth, and I learned a bit more about the world as a young and naive kid. Some folks were on the far side of the world, others 30 miles down the road.
I’ve tried to get in touch with a few of these folks recently and hit the inevitable bit rot that seems to pervade the Internet nowadays. Dead emails, links to profiles on sites that no longer exist. It seems there’s no way to reach some folks.
Sleuthing through LinkedIn, Twitter, every open-source channel I can find has yielded no luck.
I guess what I’m trying to say is: it’s inherently unnerving and disheartening knowing there is someone, somewhere, out there in the world who I will probably never get to talk to again.
But you never know.
The field of software engineering doesn’t change particularly quickly. Tastes in software engineering change all the time: keeping up with them can quickly result in seasickness or even whiplash. For example, at the moment it’s popular to want to do server-side rendering of front end applications, and unpopular to do single-page web apps. Those of us who learned the LAMP stack or WebObjects are back in fashion without having to lift a finger!
Currently it’s fashionable to restate “don’t mock an interface you don’t own” as the more prescriptive, taste-driven statement “mocks are bad”. Rather than change my practice (my 2014 position, “I use mocks and I’m happy with that”, still stands), I’ll ask why this particular taste has arisen.
Mock objects let you focus on the ma, the interstices between objects. You can say “when my case controller receives this filter query, it asks the case store for cases satisfying this predicate”. You’re designing a conversation between independent programs, making restrictions about the messages they use to communicate.
But many people don’t think about software that way, and so don’t design software that way either. They think about software as a system that holistically implements a goal. They want to say “when my case controller receives this filter query, it returns a 200 status and the JSON representation of cases matching that query”. Now, the mocks disappear, because you don’t design how the controller talks to the store, you design the outcome of the request which may well include whatever behaviour the store implements.
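To make the contrast concrete, here’s a rough RSpec-flavoured sketch; the CaseController and CaseStore names are just the hypothetical ones from the paragraphs above, not any real API.

# Designing the conversation: the controller must send this message to the store.
it "asks the case store for cases matching the filter" do
  store = instance_double("CaseStore")
  expect(store).to receive(:matching).with(status: "open").and_return([])

  CaseController.new(store: store).filter(status: "open")
end

# Designing the outcome: only the observable result matters, however it is produced.
it "returns matching cases as JSON" do
  get "/cases?status=open"

  expect(response.status).to eq(200)
  expect(JSON.parse(response.body)).to all(include("status" => "open"))
end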
Of course, tests depending on the specific behaviour of collaborators are more fragile, and the more specific prescription “don’t mock what you don’t control” uses that fragility: if the behaviour of the thing you don’t control changes, you won’t notice because your mock carries on working the way it always did.
That problem is only a problem if you don’t have any other method of auditing your dependencies for fitness for purpose. If you’re relying on some other interface working in a particular way then you should probably also have contract tests, acceptance tests, or some other mechanism to verify that it does indeed work in that way. That would be independent of whether your reliance is captured in tests that use mock objects or some other design.
It’ll only be a short while before mock objects are cool again. Until then, this was an interesting diversion.