Last updated: August 03, 2020 08:22 PM (All times are UTC.)

August 03, 2020

NeXT marketed their workstations by letting Sun convince people they wanted a workstation, then trying to convince customers (who were already impressed by Sun) that their workstation was better.

As part of this, they showed how much better the development tools were, in this very long reality TV infomercial.

If you don’t know the name Igalia, you’ve still certainly used their code. Igalia is “an open source consultancy specialized in the development of innovative projects and solutions”, which tells you very little, but they’ve been involved in adding many features to the open-source browser engines (which now underpin all browsers), such as MathML and CSS Grid.

One of their new initiatives is very exciting, called Open Prioritisation (I refuse to mis-spell it with a “z”). The successful campaign to support Yoav Weiss adding the <picture> element and friends to Blink and WebKit showed that web developers would contribute towards crowdfunding new browser features, so Igalia are running an experiment to get the diverse interests and needs of the web development community to prioritise which new features should be added to the web platform.

They’ve identified some possible new features that are “shovel-ready”—that is, they’re fully specified and ready to be worked on, and the Powers That Be who decide what gets upstreamed and shipped are supportive of their inclusion. Igalia says,

Igalia is offering 6 possible items which we could do to improve the commons with open community support, as well as what that would cost. All you need to do is pledge to the ‘pledged collective’ stating that if we ran such a campaign you’re likely to contribute some dollars. If one of these reaches its goal in pledges, we will announce that and begin the second stage. The second stage is about actually running the funding campaign as a collective investment.

I think this is a brilliant idea and will be pledging some of my pounds (if they are worth anything after Brexit). Can I humbly suggest that you consider doing so, too? If you can’t, don’t fret (these are uncertain times) but please spread the word. Or if your employer has some sort of Corporate Social Responsibility program, perhaps you might ask them to contribute? After all, the web is a common resource and could use some nurturing by the businesses it enables.

If you’d like to know more, Uncle Brian Kardell explains in a short video. Or (shameless plug!) you can hear Brian discuss it with Vadim Makeev and me in the fifth episode of our podcast, The F-Word (transcript available, naturally). All the information on the potential features, some FAQs and no photos of Brian are to be found on Igalia’s Open Prioritization page.

August 02, 2020

Apollo Accelerators make the Vampire, the fastest Motorola 680x0-compatible accelerator for the Amiga around. Actually, they claim that with the Sheepsaver emulator to trap ROM calls, it’s the fastest m68k-compatible Mac around too.

The Vampire Standalone V4 is basically that accelerator, without the hassle of attaching it to an Amiga. They replicated the whole chipset in FPGA, and ship with the AROS ROMs and OS for an open-source equivalent to the real Amiga experience.

I had a little bit of trouble setting mine up (this is not surprising, as they’re very early in development of the standalone variant and are iterating quickly). Here’s what I found, much of it from advice gratefully received from the team in the Apollo Slack. I replicate it here to make it easier to discover.

You absolutely want to stick to a supported keyboard and mouse; I ended up getting the cheapest compatible ones from Amazon for about £20. You need to connect the mouse to the USB port next to the DB-9 sockets, and the keyboard to the other one. On boot, you’ll need to unplug and replug the mouse to get the pointer to work.

The Vampire wiki has a very scary-looking page about SD cards. You don’t need to worry about any of that with the AROS image shipped on the V4. Insert your SD card, then in the CLI type:

mount sd0:

When you’re done:

assign sd0: dismount
assign sd0: remove

The last two are the commands to unmount and eject the disk in AmigaDOS. Unfortunately I currently find that while dismounting works, removing doesn’t; and then subsequent attempts to re-mount sd0: also fail. I don’t know if this is a bug or if I’m holding it wrong.

The CF card with the bootable AROS image has two partitions, System: and Work:. These take up around 200MB, which means you’ve got a lot of unused space on the CF card. To access it, you should get the fixhddsize tool. UnLHA it, run it, enter ata.device as your device, and let it fix things for you.

Now launch System:Tools/HDToolBox and click “Add Entry”. In the Devices dialog, enter ata.device. Now click that device in the “Changed Name” list, then double-click on the entry that appears (for me, it’s SDCFXS-0 32G...). You’ll see two entries, UDH0: (that’s your System: partition) and UDH1: (Work:). Add an entry here, selecting the unused space. When you’ve done that, save changes, close HDToolBox, and reboot. You’ll see your new drive appear in Workbench, as something like UDH2:NDOS. Right-click that, choose Format, then Quick Format. Boom.

My last tip is that AROS doesn’t launch the IP stack by default. If you want networking, go to System:Prefs/Network, choose net0 and click Save. Optionally, enable “Start networking during system boot”.

August 01, 2020

6502 by Graham Lee

On the topic of the Apple II, remember that MOS was owned by Commodore Business Machines, a competitor of Apple’s, throughout the lifetime of the computer. Something to bear in mind while waiting to see where ARM Holdings lands.

Obsolescence by Graham Lee

An eight-year-old model of iPad is now considered vintage and obsolete. For comparison, the Apple ][ was made from 1977-1993 (16 years) and the January 1983 Apple //e would’ve had exactly the same software support as the final model sold in November 1993, or the compatibility cards for Macintosh sold from 1991 onwards.

July 31, 2020

Some programming languages have a final keyword, making types closed for extension and open for modification.

July 30, 2020

The Nineteen Nineties by Graham Lee

I’ve been playing a lot of CD32, and would just like to mention how gloriously 90s it is. This is the startup chime. For comparison, the Interstellar News chime from Babylon 5.

Sure beats these.

App Structure 2020 by Luke Lanchester (@Dachande663)

Five years ago I wrote about how I structured applications. A lot has changed in five years. An old saying states you should be able to look back on your work of yesterday and wonder what you were thinking. So here is how I structure apps in 2020, what’s changed and what’s stayed the same.

Without further ado, here are the most common classes created for a fictional Movie type.

App\Http\Controllers\ApiMovieController
App\Modules\Movie\Entities\Movie
App\Modules\Movie\Commands\CreateMovieCommand
App\Modules\Movie\CommandHandlers\CreateMovieCommandHandler
App\Modules\Movie\Policies\MoviePolicy
App\Modules\Movie\Readers\MovieReader
App\Modules\Movie\Repositories\MovieRepository
App\Modules\Movie\Repositories\MovieRepositoryInterface
App\Modules\Movie\Resources\MovieResource

So what’s changed? Well, the first thing is that entities are now split into modules, with each module containing its own repositories, commands, resources et al. This makes it much simpler to identify all the parts of a specific module, even if there do tend to be strong links between modules (for instance, the User module is referenced elsewhere).

The other big change is the move to Command Query Responsibility Segregation (CQRS). In 2015, I was handling a lot of the logic within the Controller. This was fine when the controller was the sole owner of an action. But as systems have grown, there are more and more events. Bundling authentication, logging, retries etc. into the controllers caused them to become unwieldy, especially in classes with many methods.

With CQRS, read requests go through a Reader. This is responsible for accepting query information, validating the input (and the user), and then returning data. All data is transformed through one or more Resources.

Any actions that may be performed, from creating a movie to submitting a vote on that entity, are now encapsulated as Commands. Each command contains all of the data needed for it to work including the user performing the action, the entity under control and any inputs. Looking at a command, you can see exactly what’s needed!

Commands are then routed through an event bus. This allows for logging of all actions, addition of transactions and retry controls, and authentication, all without needing to touch the actual Command Handler that does the final work!
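
The class names above follow PHP/Laravel conventions, but the shape is the same in any language. Here is a minimal sketch in JavaScript, with every name made up purely for illustration:

// Hypothetical names for illustration only; the real classes are the PHP ones listed above.
class CreateMovieCommand {
  constructor(user, title) {
    this.user = user;   // who is performing the action
    this.title = title; // everything the handler needs travels with the command
  }
}

class CreateMovieCommandHandler {
  constructor(repository) { this.repository = repository; }
  handle(command) {
    return this.repository.create({ title: command.title, createdBy: command.user.id });
  }
}

// A tiny command bus: middleware (logging, transactions, auth, retries...)
// wraps the handler without the handler knowing anything about it.
class CommandBus {
  constructor() { this.handlers = new Map(); this.middleware = []; }
  register(commandName, handler) { this.handlers.set(commandName, handler); }
  use(fn) { this.middleware.push(fn); }
  dispatch(command) {
    const handler = this.handlers.get(command.constructor.name);
    const run = () => handler.handle(command);
    // fold the middleware around the handler call, outermost first
    return this.middleware.reduceRight((next, mw) => () => mw(command, next), run)();
  }
}

// Usage sketch (movieRepository and currentUser are assumed to exist)
const bus = new CommandBus();
bus.use((command, next) => { console.log('dispatching', command.constructor.name); return next(); });
bus.register('CreateMovieCommand', new CreateMovieCommandHandler(movieRepository));
bus.dispatch(new CreateMovieCommand(currentUser, 'Some Movie'));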

The system isn’t perfect, but it strikes a good balance between being self-documenting/protecting and being fast for rapid development.

July 24, 2020

Reading List 263 by Bruce Lawson (@brucel)

July 17, 2020

Live Regions resources by Bruce Lawson (@brucel)

Yesterday I asked “What’s the most up-to-date info on aria-live regions (and <output>) support in AT?” for some client work I’m doing. As usual, Twitter was responsive and helpful.

Heydon replied

Should be fine, support is good for live regions. Not sure about output, though … Oh, you’re adding the p _with_ the other XHR content? That will have mixed results in my experience.

Brennan said

I’ve seen some failed announcements with live-regions on VoiceOver, especially with iframes. (Announcement of the title seems to kill any pending live content.) output has surprisingly good support but (IIRC) is not live by default on at least one browser (IE, I think).
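
For anyone following along, the pattern under discussion is roughly this. A made-up sketch, not the client work; it assumes a <div id="status" aria-live="polite"> and an <output id="result"> already in the page:

const status = document.getElementById('status');
const result = document.getElementById('result');

// Updating the text of an aria-live="polite" element should cause screen
// readers to announce the new content without moving focus.
function announce(message) {
  status.textContent = message;
}

fetch('/search.json')  // stand-in for the XHR content mentioned above
  .then(response => response.json())
  .then(data => {
    // output has an implicit live-region role in most browsers, per the thread above
    result.textContent = data.length + ' results loaded';
    announce('Content updated');
  });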

Some more resources people pointed me to:

July 16, 2020

A while ago I developed a little demo app that generates an interactive 3D visualisation – a primitive landscape of sorts – from a number sequence generated by a Perlin Noise algorithm (don’t worry if that means nothing to you!). You can check out the app right here.

The 3D graphics are drawn in the browser DOM, using thousands of divs transformed via CSS. It turned out pretty well, but I found that the frame rate absolutely tanked when adding divs to increase the visualisation’s resolution. Browsers aren’t well equipped when it comes to transforming thousands of divs in 3D, so in pursuit of a better frame rate I decided to use this as an excuse to finally dip into WebGL, by rewriting the app using a WebGL-based renderer.

So, what is WebGL, and what’s Three.js? WebGL is a hardware-accelerated API for drawing graphics in a web browser, enabling 2D and 3D visuals to be drawn with a far higher level of performance than what you might get when using the DOM or an HTML5 canvas. In the right hands, the visual output can be more akin to a modern PC or console video game than the more simplistic animated graphics you might usually see around the web. However, WebGL can be somewhat impenetrable to newcomers, requiring an intimate knowledge of graphics programming and mathematics, so many developers add a library or framework on top to handle the heavy lifting.

And that’s where Three.js steps in, providing a relatively simple API for developing WebGL apps. There are other options of course, but I chose Three.js as there’s a wealth of demos and documentation for it out there, plus it seems like the one I see in the wild the most.

TL;DR: view the new Three.js version of my app in the embed below, or right here on Codepen.

See the Pen “Three.js/WebGL: Interactive Perlin Noise Landscape Visualisation” by Sebastian Lenton (@sebastianlenton) on CodePen.

Breaking it Down

Rather than diving straight into rebuilding the application in one go, I decided to break it down into various chunks of essential functionality required to complete the final product. Once I’d developed them all, I would then put the various components together and develop the final app. The items on my list were as follows:

Hello World: the objective was to get something simple onscreen. While researching this I discovered an amazing learning resource in the form of the tutorials at Three.js Fundamentals. I couldn’t have completed this project without it; it really was incredibly useful. I followed their Hello World tutorial and got a cube rotating onscreen pretty quickly.
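
For anyone who hasn’t seen it, the Hello World boils down to something like this (a minimal sketch along the lines of the tutorial, not the exact code):

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer({ antialias: true });
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, 2, 0.1, 100); // fov, aspect, near, far
camera.position.z = 3;

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial() // colours faces by their normals, so no lights are needed
);
scene.add(cube);

function render(time) {
  cube.rotation.x = time * 0.001; // time is in milliseconds
  cube.rotation.y = time * 0.001;
  renderer.render(scene, camera);
  requestAnimationFrame(render);
}
requestAnimationFrame(render);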

Resizable canvas: by default, Three.js outputs to a 300x150px canvas element that does not resize. I needed the canvas not only to resize itself to fit the browser window, but also for its contents to always occupy a square area in the centre of the canvas, regardless of the window’s aspect ratio. This is so that the entire visualisation would always be visible.

The “Responsive Design” article on Three.js Fundamentals got me most of the way there, but when viewed in a mobile-esque portrait aspect the sides of the demo spinning cube would be clipped, whereas I wanted the content to resize itself to always fit within that central square area so it would always be visible. After reading the following GitHub thread I realised I had to update the Three.js camera’s field of view (FOV) upon window resize, and after some mathematical trial and error I managed to get it working. Check out my example here: the red square represents any visible content that might be present in the scene.
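
The gist of the fix, simplified (a sketch with placeholder numbers, assuming a renderer and perspective camera like the Hello World above):

const baseFov = 45; // vertical field of view that frames the square content in landscape

function onResize() {
  const width = window.innerWidth;
  const height = window.innerHeight;
  renderer.setSize(width, height);

  camera.aspect = width / height;
  if (camera.aspect >= 1) {
    // landscape or square: the vertical FOV already bounds the central square area
    camera.fov = baseFov;
  } else {
    // portrait: widen the vertical FOV so the same horizontal extent stays in view
    const halfBase = THREE.MathUtils.degToRad(baseFov) / 2;
    camera.fov = THREE.MathUtils.radToDeg(2 * Math.atan(Math.tan(halfBase) / camera.aspect));
  }
  camera.updateProjectionMatrix();
}

window.addEventListener('resize', onResize);
onResize();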

Draggable rotating plane: I needed to replicate the ability to click and drag to change the viewing angle. Once I’d got something onscreen it was pretty easy to retrofit the code from the previous version of my application into a Three.js context – check it out here.

However, shortly afterwards I discovered that Three.js includes an extension called OrbitControls, which provides an orbiting camera with momentum, dollying (being able to move the camera closer to or further from the target), auto rotation and more, with virtually no setup required beyond some simple configuration. As such it was a no-brainer to use it rather than my own code. You can view an OrbitControls example here.
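
The setup really is minimal; something along these lines (a sketch, using the standard examples module):

import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

const controls = new OrbitControls(camera, renderer.domElement);
controls.enableDamping = true; // gives the drag some momentum
controls.autoRotate = true;    // slowly orbits when the user isn't interacting

function render() {
  controls.update();           // required when damping or auto-rotate are enabled
  renderer.render(scene, camera);
  requestAnimationFrame(render);
}
requestAnimationFrame(render);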

Vertex colours: the original version uses a CSS gradient background on each transformed div as a texture. To achieve the same effect with WebGL I figured I could either use texture mapping or vertex colours, but given that there would potentially be thousands of child elements all needing a distinct texture I assumed that doing it with vertex colours would be easier and perhaps more performant (might be wrong about that though!).

BTW if you’re wondering what “vertex colours” are: a “vertex” is a point in a 3D model, such as a corner of a square. Each vertex can be given its own colour, and the renderer will interpolate a 3D model’s colour from one point to another – e.g. a cylinder with blue vertices at one end and red ones at the other would have a blue-to-red gradient texture.

It was a bit tricky finding concrete info and working examples for this (a lot of the content about Three.js out there has been obsoleted by changes in its codebase), but I managed to get there after some trial and error: see my example here.
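
In recent versions of Three.js (this API is one of the things that has changed over time), the gist is to attach a colour attribute to the geometry and tell the material to use it. A rough sketch:

// One RGB triple per vertex, then tell the material to use them.
const geometry = new THREE.BoxGeometry(1, 1, 1).toNonIndexed(); // one vertex per triangle corner
const positions = geometry.getAttribute('position');
const colors = [];
const color = new THREE.Color();

for (let i = 0; i < positions.count; i++) {
  const t = positions.getY(i) + 0.5;   // 0..1 over the height of a unit cube centred on the origin
  color.setRGB(t, 0.2, 1 - t);         // blue at the bottom fading to red at the top
  colors.push(color.r, color.g, color.b);
}
geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));

const material = new THREE.MeshBasicMaterial({ vertexColors: true });
scene.add(new THREE.Mesh(geometry, material));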

Parent and child Meshes: I needed to be able to parent the child objects that make up the landscape to the square base underneath, so that the children would rotate with the base. Three.js Fundamentals once again came to the rescue, with their Scene Graph tutorial explaining how to parent objects to other objects. The tutorial was easy to follow, plus there’s some bonus content at the end about making a tank that moves along a track.

Creating and deleting elements: in the original DOM-based version of the application, the quantity and positions of child objects would change when changing the landscape resolution setting in the controls. At the time it was easiest to simply delete all the child objects and recreate however many were required for the new resolution value. However with this new version I felt it would be more performant to implement a basic object pool.

Instead of repeatedly deleting and recreating objects, with an object pool you create an instance of every object that will ever be needed at startup. Then, you display and modify the ones that are required while deactivating the ones that aren’t. This approach improves performance, since regularly creating and deleting objects often has a heavier cost than modifying objects that are already present in the scene. The drawback is slightly longer startup time, but that isn’t a problem here. In this instance the max resolution of the visualisation is 100, so the total objects created at startup is 10,000.
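
A bare-bones version of the idea (a sketch, not the code from the pen; boxGeometry and boxMaterial are assumed to be shared):

const MAX_RESOLUTION = 100;
const pool = [];

// Create everything up front...
for (let i = 0; i < MAX_RESOLUTION * MAX_RESOLUTION; i++) {
  const mesh = new THREE.Mesh(boxGeometry, boxMaterial);
  mesh.visible = false; // inactive until needed
  scene.add(mesh);
  pool.push(mesh);
}

// ...then toggle visibility instead of creating and destroying objects.
function setResolution(resolution) {
  const needed = resolution * resolution;
  pool.forEach((mesh, i) => {
    mesh.visible = i < needed;
    if (mesh.visible) {
      // reposition and rescale the active meshes for the new grid here
    }
  });
}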

You can find my basic Object Pool implementation here: enter the number of objects required into the field in the top-right. However, in the case of this application there are probably better ways to achieve this – rather than creating 10,000 individual meshes, it might be possible to use instancing or some sort of particle system to implement this more efficiently. Still got a lot of learning to do…

Modifying meshes in realtime: I needed to work out how to modify a mesh’s size and colours in realtime, since child objects would need to be modified when someone uses the app’s controls. You can see a simple example of scaling and changing vertex colours here. When changing vertex colours, don’t forget to set myGeometry.colorsNeedUpdate = true; whenever the colours need to change.
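
colorsNeedUpdate is part of the older Geometry API; with the newer BufferGeometry you flag the colour attribute instead. A rough sketch of both, with placeholder values:

// Scaling a mesh is just a matter of setting its scale vector.
mesh.scale.set(1, newHeight, 1);

// Older Geometry API: change the face/vertex colours, then
//   mesh.geometry.colorsNeedUpdate = true;
// Newer BufferGeometry: update the colour attribute and flag it instead.
const colorAttribute = mesh.geometry.getAttribute('color');
colorAttribute.setXYZ(vertexIndex, r, g, b);
colorAttribute.needsUpdate = true;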

Putting it all together: I managed to complete all of the above items without too much effort. Once that was done, I assembled everything together and ended up with a version that looked more or less the same as the original DOM-based version, but with far better performance! Once I’d gotten used to the different syntax it wasn’t much more difficult than developing the original version.

Enhancements

Since building something like this with WebGL adds a lot more possibilities in terms of what you can do, I added some enhancements to the original version:

Fog: a basic fog effect can be added to a Three.js scene very easily – and of course, there’s a chapter about it on Three.js Fundamentals. I added a very subtle fog effect, more as a test than anything else. The only problem I encountered was that the OrbitControls “dolly” effect didn’t work well with fog, since the fog effect’s start and end points are relative to the camera position – so dollying the camera towards or away from the object would plunge it entirely into or out of the effect. To get around this I replaced camera dolly with zoom.
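
The fog itself is more or less a one-liner (a sketch; the colour and distances are placeholders, and the distances are measured from the camera):

scene.fog = new THREE.Fog(0xbfd1e5, 10, 60); // colour, near, far
// THREE.FogExp2(colour, density) gives an exponential falloff instead of a linear one.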

Removal of loading indicators: the original version was slow enough that changing a control would freeze the visualisation until the update had completed. To communicate to the user that something was happening I added loading indicators. This new WebGL-based version was so much faster that I felt I could simply remove these and have all updates happening in realtime! It still gets a bit slow when the “resolution” slider is at a high value, but I’m not sure what I can do about that – more research needed. (Also, when you tweak the sliders it looks awesome).

Editable colours: you can change the land, sky and fog colours in this version. The colour picker provided by the library I used for the controls, dat.GUI, made this easy to implement.

3D Sky: the original version had a simple CSS gradient for its background – a flat image that doesn’t react to the rest of the visualisation. I decided that a 3D “sky” would be a good enhancement. Many videogames simulate a 3D horizon by surrounding the play area with a massive cube, textured with a cubemap (a special texture that will make the cube look like a realistic panoramic horizon), but in my case I wanted the sky’s colours to be a linear gradient, so felt a sphere would be a better fit for this.

This was a bit tricky to get right: at first I tried manipulating a sphere’s vertex colours to produce the gradient effect, by reverse-engineering this example code that more or less does exactly that. But for some reason there were slight visual artefacts at certain polygon edges that spoiled the effect slightly, so I tried a different approach.

This involved drawing a gradient to an unseen HTML5 canvas, then using that as a source texture for my sphere, and that worked more or less perfectly. Then with the sphere positioned at 0, 0, 0 in my scene I set it to resize itself on the window resize event in order to completely cover the background at any viewport size, set its material to not be affected by fog (or else it would be barely visible, if at all) and also set it to render the backs of its polygons only, so it wouldn’t obscure the landscape in front.

The procedural canvas texture gets redrawn when its colours are changed via the GUI, and any changes are applied to the sphere without needing to do anything so long as the sphere’s material.map.needsUpdate flag is set to true on each frame.
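
Roughly, the sky works like this (a sketch with placeholder sizes and colours, not the exact code):

// Draw the gradient to an offscreen canvas...
const skyCanvas = document.createElement('canvas');
skyCanvas.width = 2;
skyCanvas.height = 256; // a thin vertical strip is enough for a vertical gradient
const ctx = skyCanvas.getContext('2d');

function drawSky(topColour, bottomColour) {
  const gradient = ctx.createLinearGradient(0, 0, 0, skyCanvas.height);
  gradient.addColorStop(0, topColour);
  gradient.addColorStop(1, bottomColour);
  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, skyCanvas.width, skyCanvas.height);
}
drawSky('#2b3a67', '#f9a03f');

// ...then use it as the texture on an inside-out sphere.
const skyTexture = new THREE.CanvasTexture(skyCanvas);
const sky = new THREE.Mesh(
  new THREE.SphereGeometry(500, 32, 32),
  new THREE.MeshBasicMaterial({
    map: skyTexture,
    side: THREE.BackSide, // render the inside of the sphere
    fog: false            // don't let the scene fog wash out the sky
  })
);
scene.add(sky);

// When the GUI changes a colour: redraw the canvas and flag the texture.
drawSky('#1d2d50', '#fcd581');
skyTexture.needsUpdate = true;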

Post-processing effects: with WebGL you can add post-processing effects such as blur, noise or colour changes. This is somewhat similar to CSS filters, except that you can write your own effects, so there are no limits on what you can do. I added some simple effects using Three.js Fundamentals’ article on post processing – some bloom and visual noise – but in this instance I removed the effects from the final version as they made it look worse! Effects like these could be a good fit for a future project though.

Rendering on demand: more of a finishing touch than an enhancement, a frame will only be rendered if there is some movement in the scene, whether by user interaction or the camera moving on its own. If movement stops then rendering does too, using less power on the user’s device.
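
The pattern, along the lines of the Three.js Fundamentals article on rendering on demand, looks roughly like this:

let renderRequested = false;

function render() {
  renderRequested = false;
  controls.update();  // if damping moves the camera, OrbitControls fires another 'change'
  renderer.render(scene, camera);
}

function requestRender() {
  // coalesce multiple change events into a single frame
  if (!renderRequested) {
    renderRequested = true;
    requestAnimationFrame(render);
  }
}

controls.addEventListener('change', requestRender); // fired whenever the camera moves
window.addEventListener('resize', requestRender);   // also redraw after the window resizes
requestRender();                                    // draw the first frame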

The Results

After all that, the Three.js version of my app was complete… and was it worth all the effort? Definitely, I’d say – the new version has a far higher level of performance, which means a better frame rate, higher maximum landscape resolution and no “loading” indicators. Plus, it was a great way of dipping my toe into the Three.js world. The possibilities really are endless, and I’m looking forward to learning more.

However, I’m also mindful of the drawbacks of using a library like Three.js. The library is quite large, with a minified version weighing in at around 500k, plus GPU-accelerated graphics can be heavy on battery life. Maybe stick to using the DOM or HTML5 canvas when developing simple or frivolous animations.

Check out the final product on Codepen– give it some love if you like it or find the code useful!

The post Adventures With WebGL & Three.js appeared first on Sebastian Lenton.

My chum and co-author Remington Sharp tweeted

We need a universally recognised icon/image/logo for "works offline".

Like the PWA or HTML5 logo. We need to be able to signal to visitors that our URLs are always available.

To the consumer, the terms Progressive Web App or Service Worker are meaningless. So I applied my legendary branding, PR and design skills to come up with something that will really resonate with a web user: the fact that this app works online, offline – anywhere.

So the new logo is a riff on the HTML5 logo, because this is purely web technologies. It has the shield, a wifi symbol on one side and a crossed out wifi symbol on the other, and a happy smile below to show that it’s happy both on and offline. Above it is the acronym “wank” which, of course, stands for “Works anywhere—no kidding!”

Take it to use on your sites. I give the fruits of my labour and creativity free, as a gift to humanity.

July 10, 2020

Reading List 262 by Bruce Lawson (@brucel)

July 09, 2020

Many parts of a modern software stack have been around for a long time. That has trade-offs, but in terms of user experience is a great thing: software can be incrementally improved, providing customers with familiarity and stability. No need to learn an entirely new thing, because your existing thing just keeps on working.

It’s also great for developers, because it means we don’t have to play red queen, always running just to stand still. We can focus on improving that customer experience, knowing that everything we wrote to date still works. And it does still work. Cocoa, for example, has a continuous history back to 2001, and there’s code written to use Cocoa APIs going back to 1994. Let’s port some old Cocoa software, to see how little effort it is to stay up to date.

Bean is a free word processor for macOS. It’s written in Objective-C, using mostly Cocoa (but some Carbon) APIs, and uses the Cocoa Text system. The current version, Bean 3.3.0, is free, and supports macOS 10.14-10.15. The open source (GPL2) version, Bean 2.4.5, supports 10.4-10.5 on Intel and PowerPC. What would it take to make that a modern Cocoa app? Not much—a couple of hours work gave me a fully-working Bean 2.4.5 on Catalina. And a lot of that was unnecessary side-questing.

Step 1: Make Xcode happy

Bean 2.4.5 was built using the OS X 10.5 SDK, so probably needed Xcode 3. Xcode 11 doesn’t have the OS X 10.5 SDK, so let’s build with the macOS 10.15 SDK instead. While I was here, I also accepted whatever suggested updated settings Xcode showed. That enabled the -fobjc-weak flag (not using automatic reference counting), which we can now just turn off because the deployment target won’t support it. So now we just build and run, right?

Not quite.

Step 2: Remove references to NeXT Objective-C runtime

Bean uses some “method swizzling” (i.e. swapping method implementations at runtime), mostly to work around differences in API behaviour between Tiger (10.4) and Leopard (10.5). That code no longer compiles:

/Users/leeg/Projects/Bean-2-4-5/ApplicationDelegate.m:66:23: error: incomplete
      definition of type 'struct objc_method'
                        temp1 = orig_method->method_types;
                                ~~~~~~~~~~~^
In file included from /Users/leeg/Projects/Bean-2-4-5/ApplicationDelegate.m:31:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/objc/runtime.h:44:16: note: 
      forward declaration of 'struct objc_method'
typedef struct objc_method *Method;
               ^

The reason is that when Apple introduced the Objective-C 2.0 runtime in Leopard, they made it impossible to directly access the data structures used by the runtime. Those structures stayed in the headers for a couple of releases, but they’re long gone now. My first thought (and first fix) was just to delete this code, but I eventually relented and wrapped it in #if !__OBJC2__ so that my project should still build back to 10.4, assuming you update the SDK setting. It now builds cleanly, using clang and Xcode 11.5 (it builds in the beta of Xcode 12 too, in fact).

OK, ship it, right?

Diagnose a stack smash

No, I launched it, but it crashed straight away. The stack trace looks like this:

* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x7ffeef3fffc8)
  * frame #0: 0x00000001001ef576 libMainThreadChecker.dylib`checker_c + 49
    frame #1: 0x00000001001ee7c4 libMainThreadChecker.dylib`trampoline_c + 67
    frame #2: 0x00000001001c66fc libMainThreadChecker.dylib`handler_start + 144
    frame #3: 0x00007fff36ac5d36 AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 132
    frame #4: 0x00007fff36ac5e6d AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 443
[...]
    frame #40240: 0x00007fff36ac5e6d AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 443
    frame #40241: 0x00007fff36ac5e6d AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 443
    frame #40242: 0x00007fff36a6d98c AppKit`-[NSTextView(NSPrivate) _viewDidDrawInLayer:inContext:] + 328
[...]

That’s, um. Well, it’s definitely not good. All of the backtrace is in API code, except for main() at the top. Has NSTextView really changed so much that it gets into an infinite loop when it tries to draw the cursor?

No. Actually one of the many patches to AppKit in this app is not swizzled: it’s a category on NSTextView that replaces the two methods you can see in that stack trace. I could change those into swizzled methods and see if there’s a way to make them work, but for now I’ll remove them.

Side quest: rationalise some version checks

Everything works now. An app that was built for PowerPC Mac OS X and ported at some early point to 32-bit Intel runs, with just a couple of changes, on x86_64 macOS.

I want to fix one more thing. This message appears on launch and I would like to get rid of it:

2020-07-09 21:15:28.032817+0100 Bean[4051:113348] WARNING: The Gestalt selector gestaltSystemVersion
is returning 10.9.5 instead of 10.15.5. This is not a bug in Gestalt -- it is a documented limitation.
Use NSProcessInfo's operatingSystemVersion property to get correct system version number.

Call location:
2020-07-09 21:15:28.033531+0100 Bean[4051:113348] 0   CarbonCore                          0x00007fff3aa89f22 ___Gestalt_SystemVersion_block_invoke + 112
2020-07-09 21:15:28.033599+0100 Bean[4051:113348] 1   libdispatch.dylib                   0x0000000100362826 _dispatch_client_callout + 8
2020-07-09 21:15:28.033645+0100 Bean[4051:113348] 2   libdispatch.dylib                   0x0000000100363f87 _dispatch_once_callout + 87
2020-07-09 21:15:28.033685+0100 Bean[4051:113348] 3   CarbonCore                          0x00007fff3aa2bdb8 _Gestalt_SystemVersion + 945
2020-07-09 21:15:28.033725+0100 Bean[4051:113348] 4   CarbonCore                          0x00007fff3aa2b9cd Gestalt + 149
2020-07-09 21:15:28.033764+0100 Bean[4051:113348] 5   Bean                                0x0000000100015d6f -[JHDocumentController init] + 414
2020-07-09 21:15:28.033802+0100 Bean[4051:113348] 6   AppKit                              0x00007fff36877834 -[NSCustomObject nibInstantiate] + 413

A little history, here. Back in classic Mac OS, Gestalt was used like Unix programmers use sysctl and soda drink makers use high fructose corn syrup. Want to expose some information? Add a gestalt! Not bloated enough? Drink more gestalt!

It’s an API that takes a selector, and a pointer to some memory. What gets written to the memory depends on the selector. The gestaltSystemVersion selector makes it write the OS version number to the memory, but not very well. It only uses 32 bits. This turned out to be fine, because Apple didn’t release many operating systems. They used two digits for the major version number and one each for the minor and patch numbers, so Mac OS 8.5.1 was represented as 0x0851.

When Mac OS X came along, Gestalt was part of the Carbon API, and the versions were reported as if the major release had bumped up to 16: 0x1000 was the first version, 0x1028 was a patch level release 10.2.8 of Jaguar, and so on.

At some point, someone at Apple realised that if they ever did sixteen patch releases or sixteen minor releases, this would break. So they capped each of the patch/minor numbers at 9, and just told you to stop using gestaltSystemVersion. I would like to stop using it here, too.

There are lots of version number checks all over Bean. I’ve put them all in one place, and given it two ways to check the version: if -[NSProcessInfo isOperatingSystemAtLeastVersion:] is available, we use that. Actually that will never be relevant, because the tests are all for versions between 10.3 and 10.6, and that API was added in 10.10. So we then fall back to Gestalt again, but with the separate gestaltSystemVersionMajor/Minor selectors. These exist back to 10.4, which is perfect: if that fails, you’re on 10.3, which is the earliest OS Bean “runs” on. Actually it tells you it won’t run, and quits: Apple added a minimum-system check to Launch Services so you could use Info.plist to say whether your app works, and that mechanism isn’t supported in 10.3.

Ship it!

We’re done!

Bean 2.4.5 on macOS Catalina

Haha, just kidding, we’re not done. Launching the thing isn’t enough, we’d better test it too.

Bean 2.4.5 on macOS Catalina in dark mode

Dark mode wasn’t a thing in Tiger, but it is a thing now. Bean assumes that if it doesn’t set the background of an NSTextView, then it’ll be white. We’ll explicitly set that.

Actually ship it!

You can see the source on Github, and particularly how few changes are needed to make a 2006-era Cocoa application work on 2020-era macOS, despite a couple of CPU family switches and a bunch of changes to the UI.



Repackaging for Uncertainty

It’s not news that the global covid-19 pandemic has severely impacted cultural institutions around the world. Here in New York, what was initially announced as a 1-month closure for Broadway was recently extended to the end of 2020, forcing theaters to go dark for at least 9 months. For single ticket purchases, sales can be painfully pushed back, shows rescheduled, credits offered, and emails blasted. But at this point, organizations are now looking at a 20/21 season that has been reduced by 50-70%. So while they ponder what a physically and financially safe reopening looks like, they’re also having to turn their attentions to another key portion of their audience: subscribers.

Every company will tell you they want to cultivate a relationship with their customers. From a brand perspective, you always want to have a key base of loyal consumers, and financially these are the patrons that will consistently return to make purchases. Arts and cultural institutions use annual packages to leverage their reliable volume and quality of programming, allowing them to build relationships with their patrons which span over decades, and sometimes generations. Rather than buying several tickets to individual shows, a package is a bundle deal. The benefits vary from org to org, early access to show dates, choice of seats, and discounted prices on the shows being some of the most common, but one thing is constant: in order to reap any of these benefits, patrons commit to multiple shows over the course of the year. As a relationship tool, packages are very effective. Theatre Communications Group (TCG) in New York City produces an annual fiscal analysis for national trends in the non-profit theater sector by gathering data from all across the country. In 2018 they gathered data from 177 theaters, and found that, on average, subscribers renewed 74% of the time.

This is about more than brand loyalty though: these packages represent a source of consistent income for theaters. Beyond the face value of individual tickets purchased by subscribers, which according to TCG’s research tally up to an average of $835k a year (with the highest earning tier of theaters surveyed pulling in an average of 2.5 million dollars), subscribers are much more likely to convert their attendance into other kinds of support for the organization. These patrons can be marketed to with low risk (of annoyance) and high reward (in literal dollars). Thus, the value of subscriber relationships goes well beyond their sheer presence in the venue, making them one of the most consistently considered portions of the audience. At a time when uncertainty is the name of the game, a set of dependable patrons might seem like the perfect audience slice to reach out to right now.

However, since the rewards for packages typically revolve around early access, and require a multi-show commitment, subscription purchases and renewals usually receive a big push very early in the season sales cycle, putting them in a particularly vulnerable position at the start of the covid-19 pandemic. Furthermore, the extension of venue closures has not only shifted back sales dates, but also the seasons themselves, leaving patrons with fewer shows to choose from, and fewer to commit to. This is forcing everyone to figure out how to salvage the potential income while still providing a valuable, and fair, experience to patrons. Thus far, we’re seeing three different answers to this question emerge.

Like-for-like

One straightforward approach is to roll with the punches and create mixed packages between this season (Season A), and next season (Season B). This works particularly well for orgs with consistent types of content every year. For example if an organization has a package of 5 Jazz concerts in Season A, but 3 of them are cancelled due to the pandemic, you just take 3 shows from Season B to replace them. Behind the scenes, this strategy does require a lot of hands-on adjustments if shows continue to be cancelled, but with the benefit of preserving the structure of the original package. Patrons also have a transparent view into how the value of their package is being preserved; however, they are still tied to a specific show date with no knowledge of what the situation will be at that point.

Voucher Exchange

Another solution which has started to arise is the idea of a voucher system. Rather than trying to reschedule after a show in a package is cancelled, patrons are given vouchers which can be redeemed for a ticket to a future performance. For organizations, this option puts a lot of the workload at the front end, as it requires detailed business consideration: do vouchers expire? If so, how far in the future? Do the vouchers have a dollar value, or can they be exchanged 1:1 for a production? What happens if prices change between now and reopening, or if a user wants a ticket of a different value? (You get the gist). All that being said, once those business rules are set, it has the potential to put the other choices in the hands of the patron. Consequently, for patrons this option takes off some of the pressure: they don’t need to commit to another uncertain date in the future, instead they can be assured they are receiving the value of their package at a time that they feel comfortable.

Pay It Forward

A third option is to push the guesswork entirely to the future and allow users to purchase a set bundle of shows as normal, but with no mention of dates or seats. Instead of setting a calendar for the year, patrons are committing to content: 5 shows, rather than 5 dates. Some organizations have had this in place for early renewals for years, and find it to be an easy way to service patrons who are loyal to the organization through thick and thin. Ultimately though, this allows both parties to make more informed decisions about their theatergoing habits closer to the show itself, rather than 3 months ahead of an unknown future. That being said, this solution requires a lot of upfront discussions within the organization, and to the patron, about what might happen if patrons cannot attend the dates they are assigned either due to conflicts or continued safety concerns.

Anyone who’s remotely involved in the arts & culture sector will not be surprised that there is no one-size-fits-all solution. Some organizations will enjoy the straight-forwardness of mixing packages, others will want to allow for uncertainty and opt for the voucher system or the pay-it-forward option, still others will come up with an infinite number of alternate approaches to this issue. And of course, these solutions are all dependent on an optimistic future which is still a huge question mark: some areas are opening up, others are extending their closures, previously bankable organizations are filing for bankruptcy, and for every positive trend in cases there’s a spike somewhere else. In the game of whack-a-mole that is covid-19, the path towards reopening, and specifically a positive subscriber experience, is a tightrope: business rules will need to be clearly defined, messaging carefully considered, and customer service well briefed on the new practices. No matter what solution organizations opt for, it will need to be tailor fitted so that the patron relationships which will keep theater alive beyond this pandemic can be cultivated. Otherwise they run the risk of patrons feeling milked for money, and lemming marched into the theater.

July 08, 2020

In Part One, I explored the time of transition from Mac OS 8 to Mac OS X (not a typo: Mac OS 9 came out during the transition period). From a software development perspective, this included the Carbon and Cocoa UI frameworks. I mooted the possibility that Apple’s plan was “erm, actually, Java” and that this failed to come about not because Apple didn’t try, but because developers and users didn’t care. The approach described by Steve Jobs, of working out what the customer wants and working backwards to the technology, allowed them to be flexible about their technology roadmap and adapt to a situation where Cocoa on Objective-C, and Carbon on C and C++, were the tools of choice.[*]

So this time, we want to understand what the plan is. The technology choices available are, in the simplistic version: SwiftUI, Swift Catalyst/UIKit, ObjC Catalyst/UIKit, Swift AppKit, ObjC AppKit. In the extended edition, we see that Apple still supports the “sweet solution” of Javascript on the web, and despite trying previously to block them still permits various third-party developer systems: Javascript in React Native, Ionic, Electron, or whatever’s new this week; Xamarin.Forms, JavaFX, Qt, etc.

What the Carbon/Cocoa plan tells us is that this isn’t solely Apple’s plan to implement. They can have whatever roadmap they want, but if developers aren’t on it it doesn’t mean much. This is a good thing: if Apple had sufficient market dominance not to be reasonably affected by competitive forces or market trends, then society would have a problem and the US DOJ or the EU Directorate-General for Competition would have to weigh in. If we don’t want to use Java, we won’t use Java. If enough of us are still using Catalyst for our apps, then they’re supporting Catalyst.

Let’s put this into the context of #heygate.

These apps do not offer in-app purchase — and, consequently, have not contributed any revenue to the App Store over the last eight years.

— Rejection letter from Apple, Inc. to Basecamp

When Steve Jobs talked about canning OpenDoc, it was in the context of a “consistent vision” that he could take to customers to motivate “eight billion, maybe ten billion dollars” of sales. It now takes Apple about five days to make that sort of money, so they’re probably looking for something more than that. We could go as far as to say that any technology that contributes to non-revenue-generating apps is an anti-goal for Apple, unless they can conclusively point to a halo effect (it probably costs Apple quite a bit to look after Facebook, but not having it would be platform suicide, for example).

From Tim Cook’s and Craig Federighi’s height, these questions about “which GUI framework should we promote” probably don’t even show up on the radar. Undoubtedly SwiftUI came up with the SLT before its release, but the conversation probably looked a lot like “developers say they can iterate on UIs really quickly with React, so I’ve got a TPM with a team of ten people working on how we counter that.” “OK, cool.” A fraction of a percent of the engineering budget to nullify a gap between the native tools and the cross-platform things that work on your stack anyway? OK, cool.

And, by the way, it’s a fraction of a percent of the engineering budget because Apple is so gosh-darned big these days. To say that “Apple” has a “UI frameworks plan” is a bit like saying that the US navy has a fast destroyers plan: sure, bits of it have many of them.

At the senior level, the “plan” is likely to be “us” versus “not us”, where all of the technologies you hear of in somewhere like ATP count as “us”. The Java thing didn’t pan out, Sun went sideways in the financial crisis of 2007, how do we make sure that doesn’t happen again?

And even then, it’s probably more like “preferably us” versus “not us, but better with us”: if people want to use cross-platform tools, and they want to do it on a Mac, then they’re still buying Macs. If they support Sign In With Apple, and Apple Pay, then they still “contribute any revenue to the App Store”, even if they’re written in Haskell.

Apple made the Mac a preeminent development and deployment platform for Java technology. One year at WWDC I met some Perl hackers in a breakout room, then went to the Presidio to watch a brown bag session by Python creator Guido van Rossum. When Rails became big, everyone bought a Mac Laptop and a Textmate license, to edit their files for their Linux web apps.

Apple lives in an ecosystem, and it needs help from other partners, it needs to help other partners. And relationships that are destructive don’t help anybody in this industry as it is today. … We have to let go of this notion that for Apple to win, Microsoft has to lose, OK? We have to embrace the notion that for Apple to win, Apple has to do a really good job.

— Steve Jobs, 1997


[*] even this is simplistic. I don’t want to go overboard here, but definitely would point out that Apple put effort into supporting Swing with native-esque controls on Java, language bridges for Perl, Python, Ruby, an entire new runtime for Ruby, in addition to AppleScript, Automator, and a bunch of other programming environments for other technologies like I/O Kit. Like the man said, sometimes the wrong choice is taken, but that’s good because at least it means someone was making a decision.

No CEO dominated a market without a plan, but no market was dominated by following the plan.

— I made this quote up. Let’s say it was Rockefeller or someone.

In Accidental Tech Podcast 385: Temporal Smear, John Siracusa muses on what “the plan” for Apple’s various GUI frameworks might be. In summary, and I hope this is a fair representation, he says that SwiftUI is modern, works everywhere but isn’t fully-featured, UIKit (including Mac Catalyst) is not as modern, not as portable, but has good feature coverage, and AppKit is old, works only on Mac, but is the gold standard for capability in Mac applications.

He compares the situation now with the situation in the first few years of Mac OS X’s existence, when Cocoa (works everywhere, designed in mid-80s, not fully-featured) and Carbon (works everywhere, designed in slightly earlier mid-80s, gold standard for Mac apps) were the two technologies for building Mac software. Clearly “the plan” was to drop Carbon, but Apple couldn’t tell us that, or wouldn’t tell us that, while important partners were still writing software using the library.

This is going to be a two-parter. In part one, I’ll flesh out some more details of the Carbon-to-Cocoa transition to show that it was never this clear-cut. Part two will take this model and apply it to the AppKit-to-SwiftUI transition.

A lot of “the future” in Next-merger-era Apple was based on neither C with Toolbox/Carbon nor Objective-C with OPENSTEP/Yellow Box/Cocoa but on Java. NeXT had only released WebObjects a few months before the merger announcement in December 1996, but around merger time they released WO 3.1 with very limited Java support. A year later, WO 3.5 with full Java Support (on Yellow Box for Windows, anyway). By May 2001, a few weeks after the GM release of Mac OS X 10.0, WebObjects 5 was released and had been completely rewritten in Java.

Meanwhile, Java was also important on the client. A January 1997 joint statement by NeXT and Apple mentions ObjC 0 times, and Java 5 times. Apple released the Mac Run Time for Java on that day, as well as committing to “make both Mac OS and Rhapsody preeminent development and deployment platforms for Java technology”—Rhapsody was the code-name-but-public for NeXT’s OS at Apple.

The statement also says “Apple plans to carry forward key technologies such as OpenDoc”, which clearly didn’t happen, and led to this exchange which is important for this story:

One of the things I’ve always found is that you’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re gonna try to sell it. And I’ve made this mistake probably more than anybody else in this room, and I’ve got the scar tissue to prove it.

Notice that the problem here is that Apple told us the plan, did something else, and made a guy with a microphone very unhappy. He’s unhappy not at Gil Amelio for making a promise he couldn’t keep, but at Steve Jobs for doing something different.

OpenDoc didn’t carry on, but Java did. In the Rhapsody developer releases, several apps (including TextEdit, which was sample code in many releases of OpenStep, Rhapsody and Mac OS X) were written in Yellow Box Java. In Mac OS X 10.0 and 10.1, several apps were shipped using Cocoa-Java. Apple successfully made Mac OS and Rhapsody (a single) preeminent development and deployment platform for Java technology.

I do most of my engineering on my PowerBook … it’s got fully-functional, high-end software development tools.

— James Gosling, creator of Java

But while people carried on using Macs, and people carried on using Java, and people carried on using Macs to use Java, few people carried on using Java to make software for Macs. It did happen, but not much. Importantly, tastemakers in the NeXT developer ecosystem who were perfectly happy with Objective-C thank you carried on being perfectly happy with Objective-C, and taught others how to be happy with it too. People turned up to WWDC in t-shirts saying [objc retain];. Important books on learning Cocoa said things like:

The Cocoa frameworks were written in and for Objective-C. If you are going to write Cocoa applications, use Objective-C, C, and C++. Your application will launch faster and take up less memory than if it were written in Java. The quirks of the Java bridge will not get in the way of your development. Also, your project will compile much faster.

If you are going to write Java applications, use Swing. Swing, although not as wonderful as Cocoa, was written from the ground up for Java.

— Aaron Hillegass, Cocoa Programming for Mac OS X

Meanwhile, WebObjects for Java was not going great guns either. It still had customers, but didn’t pick up new customers particularly well. Big companies who wanted to pay for a product with the word Enterprise in the title didn’t really think of Apple as an Enterprise-ish company, when Sun or IBM still had people in suits. By the way, one of Sun’s people in suits was Jonathan Schwartz, who had run a company that made Cocoa applications in Objective-C back in the 1990s. Small customers, who couldn’t afford to use products with Enterprise in the title and who had no access to funding after the dot-bomb, were discovering the open source LAMP (Linux, Apache, MySQL, PHP/Perl) stack.

OK, so that’s Cocoa, what about Carbon? It’s not really the Classic Mac OS Toolbox APIs on Mac OS X; it’s some other APIs that are like those APIs. Carbon was available for both Mac OS 8.1+ (as an add-on library) and Mac OS X. Developers who had classic Mac apps still had to work to “carbonise” their software before it would work on both versions.

It took significant engineering effort to create Carbon, effectively rewriting a lot of Cocoa to depend on an intermediate C layer that could also support the Carbon APIs. Apple did this not because it had been part of their plan all along, but because developers looked at Rhapsody with its Cocoa (ObjC and Java) and its Blue Box VM for “classic” apps and said that they were unhappy and wouldn’t port their applications soon. Remember that “you’ve got to start with the customer experience and work backwards to the technology”, and if your customer experience is “I want to use Eudora, Word, Excel, and Photoshop” then that’s what you give ’em.

With this view, Cocoa and Carbon are actually the same age. Cocoa is OpenStep minus Display PostScript (Quartz 2D/Core Graphics taking its place) and with the changes necessary to be compatible with Carbon. Carbon is some MacOS Toolbox-like things that are adapted to be compatible with Cocoa. Both are new to Mac developers in 2001, and neither is furthering the stated goal of making Mac OS a preeminent development and deployment environment for Java.

To the extent that Apple had a technology roadmap, it couldn’t survive contact with their customers and developers—and it was a good thing that they didn’t try to force it. To the extent that they had a CEO-level plan, it was “make things that people want to buy more than they wanted to buy our 1997 products”, and in 2001 Apple released the technology that would settle them on that path. It was a Carbonised app called iTunes.

July 06, 2020

Anti-lock brakes by Graham Lee

Chances are, if you bought a new car or even a new motorcycle within the last few years, you didn’t even get an option on ABS. It came as standard, and in your car was legally mandated. Anti-lock brakes work by measuring the rotational acceleration of the wheels, or comparing their rotational velocities. If one wheel is rotating very much slower than the others, or suddenly decelerates, it’s probably about to lock, so the ABS backs off the pressure on the brake for that wheel.

ABS turns everyone into a pretty capable brake operator, in most circumstances. This is great, because many people are not pretty capable at operating brakes, even when they think they are, and ABS makes them better at it. Of course, some people are very capable at it, but ABS levels them too, making them merely pretty capable.

But even a highly capable brake operator can panic, or make mistakes. When that happens, ABS means that the worst effect of their mistake is that they are merely pretty capable.

In some circumstances, having ABS is strictly worse than not having it. An ABS car will take longer to stop on a gravel surface or on snow than a non-ABS car. Cars with ABS tend to hit each other much less often than those without, but tend to run off the road more often than those without. But for most vehicles, the ABS is always-on, even in situations where it will get in your way. Bring up that it is getting in your way, and someone will tell you how much safer it is than not having it. Which is true, in the other situations.

Of course the great thing about anti-lock brakes is that the user experience is the same as what most sub-pretty-capable drivers had before. No need to learn a different paradigm or plan your route differently. When you want to stop, press the thing that makes the car stop very hard.

Something, something, programming languages.

July 04, 2020

Let’s look at other software on the desktop, to understand why there isn’t (as a broad, popular platform) Linux on the desktop, then how there could be.

Over on De Programmatica Ipsum I discussed the difference between the platform business model, and the technology platform. In the platform model, the business acts as a matchmaking agent, connecting customers to vendors. An agricultural market is a platform, where stud farmers can meet dairy farmers to sell cattle, for example.

Meanwhile, when a technology platform is created by a business, it enables a two-sided business model. The (technology) platform vendor sells their APIs to developers as a way of making applications. They sell their technology to consumers with the fringe benefit that these third-party applications are available to augment the experience. The part of the business that is truly a platform model is the App Store, but those came late as an effort to capture a share of the (existing) developer-consumer sales revenue, and don’t really make the vendors platform businesses.

In fact, I’m going to drop the word platform now, as it has these two different meanings. I’ll say “store” or “App Store” when I’m talking about a platform business in software, and “stack” or “software stack” when I’m talking about a platform technology model.

Stack vendors have previously been very protective of their stack, trying to fend off alternative technologies that allow consumers to take their business elsewhere. Microsoft famously “poisoned” Java, an early and capable cross-platform application API, by bundling their own runtime that made Java applications deliberately run poorly. Apple famously added a clause to their store rules that forbade any applications made using off-stack technology.

Both of these situations are now in the past: Microsoft have even embraced some cross-platform technology options, making heavy use of Electron in their own applications and even integrating the Chromium rendering engine into their own browser to increase compatibility with cross-platform technology and reduce the cost of supporting those websites and applications made with Javascript. Apple have abandoned that “only” clause in their rules, replacing it with a collection of “but also” rules: yes you can make your applications out of whatever you want, but they have to support sign-in and payment mechanisms unique to their stack. So a cross-stack app is de jure better integrated in Apple’s sandpit.

These actions show us how these stack vendors expect people to switch stacks: they find a compelling application, they use it, they discover that this application works better or is better integrated on another stack, and so they change to it. If you’re worried about that, then you block those applications so that your customers can’t discover them. If you’re not worried about that, then you allow the technologies, and rely on the fact that applications are commodities and nobody is going to find a “killer app” that makes them switch.

Allowing third-party software on your own platform (cross-stack or otherwise) comes with a risk, that people are only buying your technology as an incidental choice to run something else, and that if it disappears from your stack, those customers might go away to somewhere that it is available. Microsoft have pulled that threat out of their briefcase before, settling a legal suit with Apple after suggesting that they would remove Word and Excel from the Mac stack.

That model of switching explains why companies that are otherwise competitors seem willing to support one another by releasing their own applications on each others’ stacks. When Apple and Microsoft are in competition, we’ve already seen that Microsoft’s applications give them leverage over Apple: they also allow Apple customers to be fringe players in the Microsoft sandpit, which may lead them to switch (for example when they see how much easier it is for their Windows-using colleagues to use all of the Microsoft collaboration tools their employers use). But Apple’s applications also give them leverage over Microsoft: the famed “halo effect” of Mac sales being driven by the iPod fits this model: you buy an iPod because it’s cool, and you use iTunes for Windows. Then you see how much better iTunes for Mac works, and your next computer is a Mac. The application is a gateway to the stack.

What has all of this got to do with desktop Linux? Absolutely nothing, and that’s my point. There’s never been a “halo effect” for the Free Software world because there’s never been a nucleus around which that halo can form. The bazaar model does a lot to ensure that. Let’s take a specific example: for many people, Thunderbird is the best email client you can possibly get. It also exists on multiple stacks, so it has the potential to be a “gateway” to desktop Linux.

But it won’t be. The particular bazaar hawkers working on Thunderbird don’t have any particular commitment to the rest of the desktop Linux stack: they’re not necessarily against it, but they’re not necessarily for it either. If there’s an opportunity to make Thunderbird better on Windows, anybody can contribute to exploit that opportunity. At best, Thunderbird on desktop Linux will be as good as Thunderbird anywhere else. Similarly, the people in the Nautilus file manager area of the bazaar have no particular commitment to tighter integration with Thunderbird, because their users might be using GNUMail or Evolution.

At one extreme, the licences of software in the bazaar dissuade switching, too. Let’s say that CUPS, the common UNIX printing subsystem, is the best way to do printing on any platform. Does that mean that, say, Mac users with paper-centric workflows or lifestyles will be motivated to switch to desktop Linux, to get access to CUPS? No, it means Apple will take advantage of the CUPS licence to integrate it into their stack, giving them access to the technology.

The only thing the three big stack vendors seem to agree on when it comes to free software licensing is that the GPL version 3 family of licences is incompatible with their risk appetites, particularly their weaponised patent portfolios. So that points to a way to avoid the second of these problems blocking a desktop Linux “halo effect”. Were there a GPL3 killer app, the stack vendors probably wouldn’t pick it up and integrate it. Of course, with no software patent protection, they’d be able to reimplement it without problem.

But even with that dissuasion, we still find that the app likely wouldn’t be a better experience on a desktop Linux stack than on Mac, or on Windows. There would be no halo, and there would be no switchers. Well, not no switchers, but probably no more switchers.

Am I minimising the efforts of consistency and integration made by the big free software desktop projects, KDE and GNOME? I don’t think so. I’ve used both over the years, and I’ve used other desktop environments for UNIX-like systems (please may we all remember CDE so that we never repeat it). They are good, they are tightly integrated, and thanks to the collaboration on specifications in the Free Desktop Project they’re also largely interoperable. What they aren’t is breakout. Where Thunderbird is a nucleus without a halo, Evolution is a halo without a nucleus: it works well with the other GNOME tools, but it isn’t a lighthouse attracting users from, say, Windows, to ditch the rest of their stack for GNOME on Linux.

Desktop Linux is a really good desktop stack. So is, say, the Mac. You could get on well with either, but unless you’ve got a particular interest in free software, or a particular frustration with Apple, there’s no reason to switch. Many people do not have that interest or frustration.

July 03, 2020

July 02, 2020

Every week, I sit down with a coffee and work through this list. It takes anywhere between 20 and 60 minutes, depending on how much time I have.

Process #

  • Email inbox
  • Things inbox (Things is my task manager of choice)
  • FreeAgent (invoicing, expenses)
  • Trello (sales pipeline, active projects)
  • Check backups

Review #

  • Bank accounts
  • Quarterly goals
  • Yearly theme
  • Calendar
  • Things
    • Ensure all active projects are relevant and have a next action
    • Review someday projects
    • Check tasks have relevant tags

Plan #

  • Write out this week’s goals & intentions
  • Answer Mastermind Automatic Check-ins
    • Did you read, watch or listen to anything that you think the rest of the group might find useful or interesting?
    • What did you accomplish or learn this week?
    • What are your plans for the week?

This is one of the checklists I use to run my business.

If you have feedback, get in touch – I’d love to hear it.

These are the questions I ask a client to help guide our working engagement. It helps me understand the client’s needs and scope out a solution. I pick and choose the items relevant to the engagement at hand.

  • Describe your company and the products or services your business provides
  • When does the project need to be finished? What is driving that?
  • Who are the main contacts for this project and what are their roles?
  • What are the objectives for this project? (e.g. increase traffic, improve conversion rate, reduce time searching for information)
  • How will we know if this project has been a success? (e.g. 20% increase in sales, 20% increase in traffic)
  • If this project fails, what is most likely to have derailed us?
  • How do you intend to measure this?
  • What is your monthly traffic/conversions/revenues?
  • How have you calculated the value of this project internally?
  • What budget have you set for this project?
  • Who are your closest competitors?

This is one of the checklists I use to run my business. I run these answers past my Minimum Level of Engagement.

If you have feedback, get in touch – I’d love to hear it.

July 01, 2020



Leadership changes at Made

It’s been an odd few months for us here at Made Media, and I’m sure it has for all of you as well. Our clients all around the world have temporarily closed their doors to the public, and have gone through waves of grief, hope, and exhaustion. At the same time, our team has been busy working through the final stages of some big new projects, that are all due to launch in the next month. Tough times, new beginnings.

While all of us in the arts and cultural sector are struggling with the impact of Covid-19, we have also seen greatly increased demand for our Crowdhandler Virtual Waiting Room product. As a result of this we have made the decision to spin out this product from our core agency business in order to give it the focus and investment it deserves. Effective 1 July, we are setting up a new Crowdhandler company, majority-owned by Made, and we are making the following leadership changes:

I will become CEO of the new Crowdhandler company. It’s with a mixture of excitement and nostalgia that I am stepping down as CEO of Made Media today, after eighteen years, taking up the new role of non-executive Chair. A team of developers and cloud engineers from Made Media are moving with me to join the new Crowdhandler company.

James Baggaley, currently Managing Director, becomes CEO of Made Media. James joined Made in 2017 as Strategy Director, and since then has worked with Made clients around the world on some of our most involved projects. He has a long background of working in, with, and for arts and cultural organisations, with a focus on digital, e-commerce and ticketing strategy.

Meanwhile the Made team is on hand to help our clients as they navigate the coming months and beyond, and we can’t wait for all of these brilliant organisations to start welcoming audiences again.

June 30, 2020

Accessible data tables by Bruce Lawson (@brucel)

I’ve been working on a client project and one of the tasks was remediating some data tables. As I began researching the subject, it became obvious that most of the practical, tested advice comes from my old mates Steve Faulkner and Adrian Roselli.

I’ve collated them here so they’re in one place when I need to do this again, and in case you’re doing similar. But all the glory belongs with them. Buy them a beer when you next see them.

June 29, 2020



Announcing CultureCast

The rise of Covid-19 has forced cultural and arts organisations around the world to rapidly move their events online. In response to that, we have developed an easy to use, cloud-based paywall application to control access to your online video content.

It’s called CultureCast and it lets you control who has access to your video content, and sell access on a pay-per-view basis. It’s a standalone application, and you don’t have to be an existing Made client to start using it!

Tessitura integration

CultureCast is currently available exclusively for Tessitura users, because it integrates with the Tessitura REST API in order to authenticate and identify users. You can sell video access via ticket purchases in Tessitura, and control access to videos via constituency codes.

Customisable

You can control the look and feel of CultureCast to match your brand, for no extra cost. You can also mask the URL with a dedicated subdomain (e.g., ondemand.yourvenue.org)  for a one-off setup fee.

Video embed

You can embed video content into CultureCast from any online video provider that supports oEmbed. Vimeo and Brightcove are supported out of the box, and we’re working on built-in support for further providers.

Stripe payments

Customer payments are straightforward. You can process in-app payments via Stripe (with support for Apple Pay and other mobile payment methods), with a Stripe account in your name and with the money automatically paid out to you on a rolling basis. You can also control the look and feel of the confirmation emails within the Stripe dashboard.

Pricing

There are no set-up costs to start using CultureCast.

You simply pay a service fee, set as a percentage of the revenue taken through the app. You are also responsible for paying Stripe payment processing fees, and for the cost of your video hosting platform. You don’t need to pay anything to get started, although there is a minimum charge once you reach over 1,000 video views per month via CultureCast.

This is just the beginning: we’re excited to continue development of CultureCast over the coming months and for that we need your feedback - so please try it out and let us know what you think!

If you’d like to get more details, check out our website at https://culturecast.tv, or drop us an email at culturecast@made.media.

June 28, 2020

These are the questions I ask myself to determine if a client will be a good fit to work with me.

  • Do they have a sufficient budget?
  • Is it a project I can add value to?
  • Do they have reasonable expectations?
  • Do they understand what’s involved (both their end and my end)?
  • Will this help me grow my skills and experience?
  • Will this help me with future sales?
  • Do I think this business/idea will succeed?
  • Do I have enough time to do my best work under their deadline?
  • Do they have clear goals for the project?
  • How organised are they?

This is one of the checklists I use to run my business.

If you have feedback, get in touch – I’d love to hear it.

The following are the checklists I use to run my business.

Each checklist has a few simple criteria:

  • It should be clear what the checklist is for and when it should be used
  • The checklist should be as short as possible
  • The wording should be simple and precise

Why use a checklist? #

I’ve previously written about the importance of a checklist. I summarised by saying:

The problem is that our jobs are too complex to be carried out by memory alone. The humble checklist provides defence against our own limitations in more tasks than we might realise.

David Perell in The Checklist Habit:

The less you have to think, the better. When you’re fully dependent on your memory, consider making a checklist. Don’t under-estimate the compounding benefits of fixing small operational leaks. Externalizing your processes is the best way to see them with clear eyes. Inefficiencies stand out like a lone tree on a desert plateau. Once the checklist is made, almost anybody in the organization can perform that operation.

The Checklist Manifesto is a great book on the subject.

Pre-project checklists #

Minimum level of engagement
Questions to determine if a client will be a good fit.

Questions to ask before a client engagement
Questions to run through before engaging on a project.

Project Starter Checklist (coming soon)
Things required before starting a project.

Website launch #

Pre-launch Checklist (coming soon)
Run through this checklist before launching a website

Post-launch Checklist (coming soon)
Run through this checklist after launching a website.

End of project #

Project Feedback (coming soon)
Questions once the project has been completed.

Testimonials (coming soon)
A series of questions to ask to get good testimonials.

Reviews #

Weekly review

Quarterly Review (coming soon)

Annual Review (coming soon)

June 27, 2020

WWDC2020 was the first WWDC I’ve been to in, what, five years? Whenever I last went, it was in San Francisco. There’s no way I could’ve got my employer to expense it this year had I needed to go to San Jose, nor would I have personally been able to cover the costs of physically going. So I wouldn’t even have entered the ticket lottery.

Lots of people are saying that it’s “not the same” as physically being there, and that’s true. It’s much more accessible than physically being there.

For the last couple at least, Apple have done a great job of putting the presentations on the developer site with very short lag. But remotely attending has still felt like being the remote worker on an office-based team: you know you’re missing most of the conversations and decisions.

This time, everything is remote-first: conversations happen on social media, or in the watch party sites, or wherever your community is. The bundling of sessions released once per day means there’s less of a time zone penalty to being in the UK, NZ, or India than in California or Washington state. Any of us who participated are as much of a WWDC attendee as those within a few blocks of the McEnery or Moscone convention centres.

June 26, 2020

Reading List 261 by Bruce Lawson (@brucel)

June 25, 2020

June 24, 2020

I sometimes get asked to review, or “comment on”, the architecture for an app. Often the app already exists, and the architecture documentation consists of nothing more than the source code and the folder structure. Sometimes the app doesn’t exist, and the architecture is a collection of hopes and dreams expressed on a whiteboard. Very, very rarely, both exist.

To effectively review an architecture and make recommendations for improving it, we need much more information than that. We need to know what we’re aiming for, so that we can tell whether the architecture is going to support or hinder those goals.

We start by asking about the functional requirements of the application. Who is using this, what are they using it for, how do they do that? Does the architecture make it easy for the programmers to implement those things, for the testers to validate those things, for whoever deploys and maintains the software to provide those things?

If you see an “architecture” that promotes the choice of technical implementation pattern over the functionality of the system, it’s getting in the way. I don’t need to know that you have three folders of Models, Views and Controllers, or of Actions, Components, and Containers. I need to know that you let people book children’s weather forecasters for wild atmospheric physics parties.

We can say the same about non-functional requirements. When I ask what the architecture is supposed to be for, a frequent response is “we need it to scale”. How? Do you want to scale the development team? By doing more things in parallel, or by doing the same things faster, or by requiring more people to achieve the same results? Hold on, did you want to scale the team up or down?

Or did you want to scale the number of concurrent users? Have you tried… y’know, selling the product to people? Many startups in particular need to learn that a CRM is a better tool for scaling their web app than Kubernetes. But anyway, I digress. If you’ve got a plan for getting to a million users, and it’s a realistic plan, does your architecture allow you to do that? Does it show us how to keep that property as we make changes?

Those important things that you want your system to do: the architecture should protect and promote them. It should make it easy to do the right thing, and difficult to regress. It should prevent going off into the weeds, or doing work that counters those goals.

That means that the system’s architecture isn’t really about the technology, it’s about the goals. If you show me a list of npm packages in response to questions about your architecture, you’re not showing me your architecture. Yes, I could build your system using those technologies. But I could probably build anything else, too.

June 22, 2020

I had planned to add anchor links to headings on this site when I came across On Adding IDs to Headings by Chris Coyier.

The Gutenberg heading block allows us to manually add IDs for each heading but, like Chris, I want IDs to be automatically added to each heading.

To see an example, navigate to my /uses page and hover over a title. You’ll see a # that you can click on which will jump directly to that heading.

Chris initially used jQuery to add IDs to headings but after that stopped working he started using a plugin called Add Anchor Links.

The plugin does the job but as someone who 1) likes to have as few plugins as possible installed and 2) loves to tinker, I thought I’d try and come up with a solution that doesn’t involve installing a plugin.

Here’s the code:

// Filter every block's rendered markup (priority 10, passing 2 arguments: the content and the block).
add_filter( 'render_block', 'origin_add_id_to_heading_block', 10, 2 );
function origin_add_id_to_heading_block( $block_content, $block ) {
	// Only touch heading blocks; leave all other block types alone.
	if ( 'core/heading' == $block['blockName'] ) {
		// Match <h1>…</h1> through <h6>…</h6> and hand each one to the callback below.
		$block_content = preg_replace_callback("#<(h[1-6])>(.*?)</\\1>#", "origin_add_id_to_heading", $block_content);
	}
	return $block_content;
}

function origin_add_id_to_heading($match) {
	// $match[1] is the tag name (e.g. h2), $match[2] is the heading text.
	list(, $heading, $title) = $match;
	// Slugify the heading text, e.g. "Hello World" becomes "hello-world".
	$id = sanitize_title_with_dashes($title);
	$anchor = '<a href="#'.$id.'" aria-hidden="true" class="anchor" id="'.$id.'" title="Anchor for '.$title.'">#</a>';
	return "<$heading id='$id'>$title $anchor</$heading>";
}

To get this working, you’ll need to add this code to functions.php in your theme.

Here’s how the code snippet above works:

  • We’re using the render_block filter to modify the markup of the block before it is rendered
  • The render_block filter calls the origin_add_id_to_heading_block function which then checks to make sure we’re only updating the heading block
  • We then use the preg_replace_callback function to detect h1, h2, h3 etc using regex before calling the origin_add_id_to_heading function
  • The origin_add_id_to_heading function builds the ID by taking the heading text and replacing spaces with dashes using WordPress’s sanitize_title_with_dashes function (so a heading titled “Hello World” gets the ID hello-world), then returns the heading markup with the ID and an anchor link added

Let me know if you have any suggestions or improvements.

Update (24/06/20): This solution only works on posts/pages that use Gutenberg. The Add Anchor Links plugin works on all content, so use that plugin if not all of your content has been converted to Gutenberg.



Live and on-demand video integration for the Detroit Symphony Orchestra

Even before a global pandemic forced arts audiences to seek out their cultural activities from their homes, Detroit Symphony Orchestra had an accessibility-first mission to bring world class programming to audiences locally and around the world. When Made took on the challenge to create a web experience worthy of that mission, one of the key features was integrating with their live and on-demand video programming.

Prior to the planned launch of the new dso.org, the orchestra was switching to Vimeo OTT as a host for their DSO Live, DSO Replay, and DSO Classroom Edition channels. Some videos were to be open to the public, while others required viewers to make an annual donation to the orchestra before they could access them.

To accomplish this within the new website, we built a viewing experience using the Vimeo OTT API, paired with the Tessitura API for user credentials and access. When a user on dso.org goes to watch a DSO Replay video, they are prompted to login or make a donation to continue. Once they’ve been successfully identified as a donor, they then have access to the full back catalogue of DSO Replay content.

In order for Vimeo OTT to grant access to videos directly, accounts need to be created on their platform, based on the user’s email address. In order for Tessitura to grant video access, a specific constituency code must be active.

The first part of this implementation is to add the constituency when someone makes a contribution online. This allows them to watch the video directly and immediately through the DSO website. Additionally, an account is created in Vimeo OTT with the customer’s email address, so that future visits directly to the hosted platform do not require a website login. This also means that these customer credentials will work for any non-web deployments of Vimeo OTT, including via TV apps.

The other piece of this puzzle was how to provide access to customers who donate in person, over the phone, or by mail. We worked with the Vimeo team and DSO’s technology team on a scheduled SQL job, creating credentialed accounts within the OTT platform for any new offline donors.

Collectively, these pieces created a seamless integration with a third party OTT system, which gives their donors the ability to view videos either on the DSO site or on the hosted platform. Free versus paid access at the video level is controlled entirely in Vimeo, while which specific users can access paid content is controlled in Tessitura.

Like many companies, stickee has traded office antics for lockdown life to keep everyone safe during the pandemic. While we may have swapped out desks...

The post stickee in lockdown appeared first on stickee.

June 18, 2020

My friend Andy Henson wrote an excellent primer on Roam Research, a tool I’ve been using a lot recently.

With Roam, you have to break some of your built-in expectations and assumptions. The atomic unit of a note is no longer a page. If you have only been used to the concept of a note being a page of text with a title, filled with one or more paragraphs you may be immediately put off with the bullets. This is part of its secret sauce. Each bullet, or block in Roam’s parlance, is its own thing. Typically a complete thought. It’s a note in itself. You can indent blocks infinitely, much like an outliner. But then, each word or phrases within it can be turned into references and given its own page. At which point, you can add more blocks of thoughts about that idea or concept, and collect further references to it.

I’ve had a sneak peek of Andy’s upcoming Roam Course and it’s going to be incredibly useful. If you have any interest in writing to improve your thinking, sign up to be notified when it launches.

June 16, 2020

Forearmed by Graham Lee

In researching my piece for the upcoming de Programmatica Ipsum issue on cloud computing, I had thoughts about Apple, arm, and any upcoming transition that didn’t fit in the context of that article. So here’s a different post, about that. I’ve worked at both companies so don’t have a neutral point of view, but I’ve also been in bits of the companies far enough from their missions that I don’t have any insider insight into this transition.

So, let’s dismiss the Mac transition part of this thread straight away: it probably will happen, for the same reasons that the PowerPC->Intel transition happened (the things Apple needed from the parts – mostly lower power consumption for similar performance – weren’t the same things that the suppliers needed, and the business Apple brought wasn’t big enough to make the suppliers change their mind), and it probably will be easier, because Apple put the groundwork in to make third-party devs aware of porting issues during the Intel transition, and encourage devs to use high-level frameworks and languages.

Whether you think the point is convergence (now your Catalyst apps are literally iPad IPAs that run on a Mac), or cost (Apple buy arm chipset licences, but have to buy whole chips from Intel, and don’t get the discount everybody else does for sticking the Intel Inside holographic sticker on the case), or just “betterer”, the arm CPUs can certainly provide. On the “betterer” argument, I don’t predict that will be a straightforward case of tomorrow’s arm Mac being faster than today’s Intel Mac. Partly because compilers: gcc certainly has better optimisations on Intel and I wouldn’t be surprised to find that llvm does too. Partly because workload, as iOS/watchOS/tvOS all keep the platform on guard rails that make the energy use/computing need expectations more predictable, and those guard rails are only slowly being added to macOS now.

On the other hand, it’s long been the case that computers have controller chips in for interfacing with the hardware, and that those chips are often things that could be considered CPUs for systems in their own right. Your Mac certainly already has arm chips in if you bought it recently: you know what’s running the OS for the Touch Bar? Or the T2 security chip? (Actually, if you have an off-brand PC with an Intel-compatible-but-not-Intel chip, that’s probably an arm core running the x86-64 instructions in microcode). If you beef one of those up so that it runs the OS too, then take a whole bunch of other chips and circuits off the board, you both reduce the power consumption and put more space in for batteries. And Apple do love talking battery life when they sell you a computer.

OK, so that’s the Apple transition done. But now back to arm. They’re a great business, and they’ve only been expanding of late, but it’s currently coming at a cost. We don’t have up to date financial information on Arm Holdings themselves since they went private, but that year they lost ¥31bn (I think about $300M). Since then, their corporate parent Softbank Group has been doing well, but massive losses from their Vision Fund have led to questions about their direction and particularly Masayoshi Son’s judgement and vision.

arm (that’s how they style it) have, mostly through their partner network, fingers in many computing pies. From the server and supercomputer chips from manufacturers like Marvell to smart lightbulbs powered by Nordic Semiconductor, arm have tentacles everywhere. But their current interest is squarely on the IoT side. When I worked in their HPC group in 2017, Simon Segars described their traditional chip IP business as the “legacy engine” that would fund the “disruptive unit” he was really interested in, the new Internet of Things Business Unit. Now arm’s mission is to “enable a trillion connected devices”, and you can bet there isn’t a world market for a trillion Macs or Mac-like computers.

If some random software engineer on the internet can work this out, you can bet Apple’s exec team have worked it out, too. It seems apparent that (assuming it happens) Apple are transitioning the Mac platform to arm at the start of the (long, slow) exit arm are making from the traditional computing market, and have still chosen to do it. This suggests they have something else in mind (after all, Apple already designs its chips in-house, so why not have them design RISC-V or MIPS chips, or something entirely different?). A quick timetable of Mac CPU instruction sets:

  • m68k 1984 – 1996, 12 years (I exclude the Lisa)
  • ppc 1994 – 2006, 12 years
  • x86 and x86-64 2006 – 2021?, 15 years?
  • arm 2020? – 203x?, 1x years?

I think it likely that the Mac will wind down with arm’s interest in traditional computing, and therefore arm will be the last ever CPU/SoC architecture for computers called Macs. That the plan for the next decade is that Apple is still at the centre of a services-based, privacy-focused consumer electronics experience, but that what they sell you is not a computer.

June 13, 2020

Reading List 260 by Bruce Lawson (@brucel)

June 07, 2020

Bella’s in the witch elm
Regal in a taffeta gown
Queen of her forest realm
Autumn leaves form her crown

Bella’s in the witch elm
She won’t say how she came there
She’s still as seasons turn
And winter winds wind her hair

Beneath the cries of the weeping willow
You can hear the sighs from the witch elm’s hollow

Bella’s in the witch elm
wearing her gold wedding ring
She’s silent at the coming of Spring
At the maypole, children dance and sing

Bella’s in the witch elm
Wearing just one summer shoe
She will never tell
Someone put her there – but who?

Who put Bella where she can’t see
Who put Bella in the Witch Elm Tree? Who?

Words and music © Bruce Lawson 2020, all rights reserved.
Production, drums and bass guitar: Shez of @silverlakemusic.

June 05, 2020

June 04, 2020

I’m not outside by Stuart Langridge (@sil)

I’m not outside.

Right now, a mass of people are in Centenary Square in Birmingham.

They’ll currently be chanting. Then there’s music and speeches and poetry and a lie-down. I’m not there. I wish I was there.

This is part of the Black Lives Matter protests going on around the world, because again a black man was murdered by police. His name was George Floyd. That was in Minneapolis; a couple of months ago Breonna Taylor, a black woman, was shot eight times by police in Louisville. Here in the UK, black and minority ethnicity people die in police custody at twice the rate of others.

It’s 31 years to the day since the Tiananmen Square protests in China in which a man stood in front of a tank, and then he disappeared. Nobody even knows his name, or what happened to him.

The protests in Birmingham today won’t miss one individual voice, mine. And the world doesn’t need the opinion of one more white guy on what should be done about all this, about the world crashing down around our ears; better that I listen and support. I can’t go outside, because I’m immunocompromised. The government seems to flip-flop on whether it’s OK for shielding people to go out or not, but in a world where there are more UK deaths from the virus than the rest of the EU put together, where as of today nearly forty thousand people have died in this country — not been inconvenienced, not caught the flu and recovered, died, a count over half that of UK civilian deaths in World War 2 except this happened in half a year — in that world, I’m frightened of being in a large crowd, masks and social distancing or not. But the crowd are right. The city is right. When some Birmingham council worker painted over an I Can’t Breathe emblem, causing the council to claim there was no political motive behind that (tip for you: I’m sure there’s no council policy to do it, and they’ve unreservedly apologised, but whichever worker did it sure as hell had a political motive), that emblem was back in 24 hours, and in three other places around the city too. Good one, Birmingham.

Protestors in Centenary Square today

There are apparently two thousand of you. I can hear the noise from the window, and it’s wonderful. Shout for me too. Wish I could be there with you.

June 03, 2020

Here at stickee we are proud of the variety of specialised talent packed into our office every day. This week, we are pleased to welcome...

The post Mike Wilson joins the stickee team appeared first on stickee.

June 02, 2020

June 01, 2020

I enjoyed CGP Grey’s Lockdown Productivity video. I found the reminder about keeping strict boundaries on physical spaces particularly useful. And I love the sentiment: come back better than before.

May 31, 2020

AWS Certification Progress

In March, I achieved the AWS Cloud Practitioner certification. In May, I decided to go for the Associate Solutions Architect certification. It’s a lot more in depth than Cloud Practitioner, but I’m looking forward to coming out the other side.

To prepare, I bought the AWS Certified Solutions Architect - Associate 2020 course on Udemy. Because it was on sale, and I’m cheap.

What Motivates Me in My Job?

One of the tasks in Apprenticeship Patterns was to come up with 15 things that motivate you in your career. In no particular order:

  1. Making things, and doing it well.
  2. The money is good. Might be tacky to say it out loud, but I’m not complaining.
  3. Working on things that will hopefully have a positive impact on the world.
  4. I’m very lucky to have found work that is very close to play.
  5. Working on something as a team.
  6. Getting the chance to teach people things and elevate them to your level.
  7. Creating an ordered system out of a mess of ideas and requirements.
  8. The cool programmer aesthetic.
  9. Being able to (largely) plan out my own days, and come up with my own solutions to things.
  10. Flexing creative muscles.
  11. The tools! Software is cool.
  12. The hardware.
  13. Being able to surround myself with people much smarter than I am.
  14. Having an opportunity to write regularly.
  15. There’s always something to learn.

And then it asked for five more.

  1. It’s a respectable profession.
  2. Software is unlikely to be replaced or eliminated any time soon.
  3. The community has some amazing people in it.
  4. I get to contribute positively to my part of the community.
  5. I have flexibility to work remotely, or at odd hours.

Things I Read

May 30, 2020

Amiga-Smalltalk now has continuous integration. I don’t know if it’s the first Amiga program ever to have CI, but it’s definitely the first I know of. Let me tell you about it.

I’ve long been using AROS, the AROS Research Operating System (formerly the A stood for Amiga) as a convenient place to (manually) test Amiga-Smalltalk. AROS will boot natively on PC but can also be “hosted” as a user-space process on Linux, Windows or macOS. So it’s handy to build a program like Amiga-Smalltalk in the AROS source tree, then launch AROS and check that my program works properly. Because AROS is source compatible with Amiga OS (and binary compatible too, on m68k), I can be confident that things work on real Amigas.

My original plan for Amiga-Smalltalk was to build a Docker image containing AROS, add my test program to S:User-startup (the script on Amiga that runs at the end of the OS boot sequence), then look to see how it fared. But when I discussed it on the aros-exec forums, AROS developer deadwood had a better idea.

He’s created AxRuntime, a library that lets Linux processes access the AROS APIs directly without having to be hosted in AROS as a sub-OS. So that’s what I’m using. You can look at my GitHub workflow to see how it works, but in a nutshell:

  1. check out source.
  2. install libaxrt. I’ve checked the packages into ./vendor (along with a patched library, which fixes clean termination of the Amiga process) to avoid making network calls in my CI. The upstream source is deadwood’s repo.
  3. launch Xvfb. This lets the process run “headless” on the CI box.
  4. build and run ast_tests, my test runner. The Makefile shows how it’s compiled.
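
If it helps to picture that, here’s a minimal sketch of what such a workflow can look like, with the four steps above marked as comments. It’s illustrative rather than the actual workflow file in the repository: the action version, the *.deb filename pattern under ./vendor, and the make invocation are assumptions.

name: CI
on: [push]

jobs:
  amiga-smalltalk-tests:
    runs-on: ubuntu-latest
    steps:
      # 1. Check out the source.
      - uses: actions/checkout@v2
      # 2. Install the vendored AxRuntime packages from ./vendor
      #    (the *.deb filename pattern is an assumption).
      - run: sudo dpkg -i vendor/*.deb
      # 3 & 4. Start a virtual X server so the process can run headless, then
      #    build and run the ast_tests runner (the exact make invocation is an
      #    assumption; the project's Makefile has the real recipe).
      - run: |
          sudo apt-get update
          sudo apt-get install -y xvfb
          Xvfb :99 &
          export DISPLAY=:99
          make
          ./ast_tests

Keeping the Xvfb launch and the test run in a single step means the DISPLAY environment variable is guaranteed to be set for the test process.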

That’s it! All there is to running your Amiga binaries in CI.

May 29, 2020

Reading List 259 by Bruce Lawson (@brucel)

May 28, 2020

As you may have noticed, I moved this site to the newfangled static site generator Eleventy, using the Hylia starter kit as a base.

By default this uses Netlify, but I wasn't interested in the 3rd party CMS bit, so opted for a simple GitHub Action for deploying. There's an existing action available for plain Eleventy apps over here. However, it doesn't include the Sass build part of the Hylia setup that's part of its npm scripts.

A quick bit of hacking about with one of the standard node actions and I built the following action to deploy instead:

name: Hylia Build
on: [push]

jobs:
  build_deploy:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@master
      - uses: actions/setup-node@v1
        with:
          node-version: '10.x'
      - run: npm install
      - run: npm run production
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v1.1.0
        env:
          PUBLISH_DIR: dist
          PUBLISH_BRANCH: gh-pages
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Which hopefully is useful to somebody else.

Oh, and you'll need to add a passthrough copy of the CNAME file to the build if you are using a custom domain name. Add the following to your eleventy build:

  config.addPassthroughCopy('CNAME');

And add your domain's CNAME file to the main source. Otherwise, every time you push, it'll get removed from the GitHub Pages config of the output.

May 27, 2020

Mature Optimization by Graham Lee

This comment on why NetNewsWire is fast brings up one of the famous tropes of computer science:

The line between [performance considerations pervading software design] and premature optimization isn’t clearly defined.

If only someone had written a whole paper about premature optimization, we’d have a bit more information. …wait, they did! The idea that premature optimization is the root of all evil comes from Donald Knuth’s Structured Programming with go to Statements. Knuth attributes it to C.A.R. Hoare in The Errors of TeX, though Hoare denied that he had coined the phrase.

Anyway, the pithy phrase “premature optimization is the root of all evil”, which has been interpreted as “optimization before the thing is already running too slow is to be avoided”, actually appears in this context:

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, [they] will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgements about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.

Indeed this whole subsection on efficiency opens with Knuth explaining that he does put a lot of effort into optimizing the critical parts of his code.

I now look with an extremely jaundiced eye at every operation in a critical inner loop, seeking to modify my program and data structure […] so that some of the operations can be eliminated. The reasons for this approach are that: a) it doesn’t take long, since the inner loop is short; b) the payoff is real; and c) I can then afford to be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged. Tools are being developed to make this critical-loop identification job easy (see for example [Dan Ingalls, The execution time profile as a programming tool] and [E. H. Satterthwaite, Debugging tools for high level languages]).

So yes, optimize your code, but optimize the bits that benefit from optimization. NetNewsWire is a Mac application, and Apple’s own documentation on improving your app’s performance describes an iterative approach for finding underperforming characteristics (note: not “what is next to optimize”, but “what are users encountering that needs improvement”), making changes, and verifying that the changes led to an improvement:

Plan and implement performance improvements by approaching the problem scientifically:

  1. Gather information about the problems your users are seeing.
  2. Measure your app’s behavior to find the causes of the problems.
  3. Plan one change to improve the situation.
  4. Implement the change.
  5. Observe whether the app’s performance improves.

I doubt that this post will change the “any optimization is the root of all evil” narrative, because there isn’t a similarly-trite epithet for the “optimize the parts that need it” school of thought, but at least I’ve tried.

May 26, 2020

This post is to encourage you to go and play a museum-themed online Escape Game I built. So, you can skip the rest of this article and head straight here to play! Now, you may have already seen that I have a brand new tutorial which shows you how to create your own online Audio […]

An interesting writeup by Brian Kardell on web engine diversity and ecosystem health, in which he puts forward a thesis that we currently have the most healthy and open web ecosystem ever, because we’ve got three major rendering engines (WebKit, Blink, and Gecko), they’re all cross-platform, and they’re all open source. This is, I think, true. Brian’s argument is that this paints a better picture of the web than a lot of the doom-saying we get about how there are only a few large companies in control of the web. This is… well, I think there’s truth to both sides of that. Brian’s right, and what he says is often overlooked. But I don’t think it’s the whole story.

You see, diversity of rendering engines isn’t actually in itself the point. What’s really important is diversity of influence: who has the ability to make decisions which shape the web in particular ways, and do they make those decisions for good reasons or not so good? Historically, when each company had one browser, and each browser had its own rendering engine, these three layers were good proxies for one another: if one company’s browser achieved a lot of dominance, then that automatically meant dominance for that browser’s rendering engine, and also for that browser’s creator. Each was isolated; a separate codebase with separate developers and separate strategic priorities. Now, though, as Brian says, that’s not the case. Basically every device that can see the web and isn’t a desktop computer and isn’t explicitly running Chrome is a WebKit browser; it’s not just “iOS Safari’s engine”. A whole bunch of long-tail browsers are essentially a rethemed Chrome and thus Blink: Brave and Edge are high up among them.

However, engines being open source doesn’t change who can influence the direction; it just allows others to contribute to the implementation. Pick something uncontroversial which seems like a good idea: say, AVIF image format support, which at time of writing (May 2020) has no support in browsers yet. (Firefox has an in-progress implementation.) I don’t think anyone particularly objects to this format; it’s just not at the top of anyone’s list yet. So, if you were mad keen on AVIF support being in browsers everywhere, then you’re in a really good position to make that happen right now, and this is exactly the benefit of having an open ecosystem. You could build that support for Gecko, WebKit, and Blink, contribute it upstream, and (assuming you didn’t do anything weird), it’d get accepted. If you can’t build that yourself then you ring up a firm, such as Igalia, whose raison d’etre is doing exactly this sort of thing and they write it for you in exchange for payment of some kind. Hooray! We’ve basically never been in this position before: currently, for the first time in the history of the web, a dedicated outsider can contribute to essentially every browser available. How good is that? Very good, is how good it is.

Obviously, this only applies to things that everyone agrees on. If you show up with a patchset that provides support for the <stuart> element, you will be told: go away and get this standardised first. And that’s absolutely correct.

But it doesn’t let you influence the strategic direction, and this is where the notions of diversity in rendering engines and diversity in influence begins to break down. If you show up to the Blink repository with a patchset that wires an adblocker directly into the rendering engine, it is, frankly, not gonna show up in Chrome. If you go to WebKit with a complete implementation of service worker support, or web payments, it’s not gonna show up in iOS Safari. The companies who make the browsers maintain private forks of the open codebase, into which they add proprietary things and from which they remove open source things they don’t want. It’s not actually clear to me whether such changes would even be accepted into the open source codebases or whether they’d be blocked by the companies who are the primary sponsors of those open source codebases, but leave that to one side. The key point here is that the open ecosystem is only actually open to non-controversial change. The ability to make, or to refuse, controversial changes is reserved to the major browser vendors alone: they can make changes and don’t have to ask your permission, and you’re not in the same position. And sure, that’s how the world works, and there’s an awful lot of ingratitude out there from people who demand that large companies dedicate billions of pounds to a project and then have limited say over what it’s spent on, which is pretty galling from time to time.

Brian references Jeremy Keith’s Unity in which Jeremy says: “But then I think of situations where complete unity isn’t necessarily a good thing. Take political systems, for example. If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!” This is true, but again the nuance is different, because what this is about is influence. If one party wins a large majority, then it doesn’t matter whether they’re opposed by one other party or fifty, because they don’t have to listen to the opposition. (And Jeremy makes this point.) This was the problem with Internet Explorer: it was dominant enough that MS didn’t have to give a damn what anyone else thought, and so they didn’t. Now, this problem does eventually correct itself in both browsers and political systems, but it takes an awfully long time; a dominant thing has a lot of inertia, and explaining to a peasant in 250AD that the Roman Empire will go away eventually is about as useful as explaining to a web developer in 2000AD that CSS is coming soon, i.e., cold comfort at best and double-plus-frustrating at worst.

So, a qualified hooray, I suppose. I concur with Brian that “things are better and healthier because we continue to find better ways to work together. And when we do, everyone does better.” There is a bunch of stuff that is uncontroversial, and does make the web better, and it is wonderful that we’re not limited to begging browser vendors to care about it to get it. But I think that definition excludes a bunch of “things” that we’re not allowed, for reasons we can only speculate about.

May 19, 2020

Kindness by Ben Paddock (@_pads)

It’s mental health awareness week and the theme is kindness.  

One of my more recent fond stories of kindness was a trip to Colchester Zoo. There was a long queue for tickets, but my friends all had passes to get in. I would have been waiting on my own for quite some time, but a person at the front of the queue bought me a ticket so I could skip it (I did pay him). I don’t know the guy’s name, but that made my day.

Just one small, random act of kindness.
