Last updated: September 22, 2020 01:22 AM (All times are UTC.)

September 15, 2020

Self-organising teams by Graham Lee

In The Manifesto for Anarchic Software Development I noted that one of the agile manifesto principles is for self-organising teams, and that those tend not to exist in software development. What would a self-organising software team look like?

  1. Management hire a gang and set them a goal, and delegate all decisions on how to run the gang and who is in the gang to members of the gang.
  2. The “team lead” is not in charge of decision-making, but a consultant who can advise gang members on their work. The team lead probably works directly on the gang’s product too, unless the gang is too large.
  3. One member of the gang acts as a go-between for management and communicates openly with the rest of the gang.
  4. Any and all members of the gang are permitted to criticise the gang’s work and the management’s direction.
  5. The lead, the management rep, and the union rep are all posts, not people. The gang can recall any of them at any time and elect a replacement.
  6. Management track the outcomes of the gang, not the “productivity” of individuals.
  7. Management pay performance-related benefits like bonuses to the gang for the gang’s collective output, not to individuals.
  8. The gang hold meetings when they need, and organise their work how they want.

September 14, 2020

Someone has been trolling Apple’s Siri team hard on how they think numbers are pronounced. Today is the second day running that I’ve missed a turn because of it. The first time because I didn’t understand the direction; the second because the pronunciation was so confusing I lost focus and drove straight on when I should have turned.

The disembodied voice doesn’t even use a recognisable dialect or regional accent, it just gets road numbers wrong. In the UK, there’s a hierarchy of motorways (M roads, like M42), A roads (e.g. A34), B roads (e.g. B3400), and unclassified roads. It’s a little fluid around the edges, but generally you’d give someone the number of an M or A road if you’re giving them directions, and the name of a B road.

Apple Maps has always been a bit weird about this, mostly preferring classifications but using the transcontinental E route numbers which aren’t on signs in the UK and aren’t used colloquially, or even necessarily known. But now its voice directions pronounce the numbers incomprehensibly. That’s ok if you’re in a car and the situation is calm enough that you can study the CarPlay screen to work out what it meant. But on a motorbike, or if you’re concentrating on the road, it’s a problem.

“A” is pronounced “uh”, as if it’s saying “a forty-six” rather than “A46”. Except it also says “forrysix”. Today I got a bit lost going from the “uh foreforryfore” to the “bee forryaytoo” and ended up going in, not around, Coventry.

Entering Coventry should always be consensual.

I’ve been using Apple Maps since the first version which didn’t even know what my town was called, and showed a little village at the other end of the county if you searched for it by name. But with the successive apologies, replatformings, rewrites, and rereleases, it always seems like you take one step forward and then at the roundabout use the fourth exit to take two steps back.

September 12, 2020

Go on, read the manifesto again. You’ll see that it’s a manifesto for anarchism, for people coming together and contributing equally toward solving problems. From each according to their ability, to each according to their need.

The best architectures, requirements, and designs
emerge from self-organizing teams.

While new to software developers at the beginning of this millennium, this would not have been news to architects, who noticed the same thing in 1962. A digression: this was more than a decade before architects made their other big contribution to software engineering, the design pattern. The RIBA report identified two modes of team organisation:

One was characterised by a procedure which began by the invention of a building shape and was followed by a moulding of the client’s needs to fit inside this three-dimensional preconception. The other began with an attempt to understand fully the needs of the people who were to use the building, around which, when they were clarified, the building would be fitted.

There were trade-offs between these styles, but the writers of the RIBA report clearly found some reason “to value individuals and interactions over processes and tools”:

The work takes longer and is often unprofitable to the architect, although the client may end up with a much cheaper building put up more quickly than he had expected. Many offices working in this way had found themselves better suited by a dispersed type of work organisation which can promote an informal atmosphere of free-flowing ideas.

Staff retention was higher in the dispersed culture, even though the self-organising nature of the teams meant that sometimes the senior architect was not the project lead, but found themselves reporting to a junior because ideas trumped length of tenure.

This description of self-organising teams in architecture makes me realise that I haven’t knowingly experienced a self-organising team in software, even when working on a team that claimed self-organisation. The idea of a “platform shop” is prevalent in software: a company that builds Rails websites, or Java microservices, or Swift native apps. This is software’s equivalent of beginning “by the invention of a building shape”, only more so: begin by the application of an existing building shape, no invention required.

As the RIBA report notes, this approach “clearly goes with rather autocratic forms of control”. By centralising the technology in the solution design, people can argue that experience with that technology stack (and more specifically, with the way it’s applied in this organisation) is the measure of success, and use that to impose or reinforce a hierarchy.

Clearly, length of tenure becomes a proxy measure for authority in such an organisation. The longer you’ve been in the company, the more experience you have contorting their one chosen solution to attempt to address a client’s problem. Never mind that there are other skills needed in designing a software product (not least of which is actually understanding the problem), and never mind that this “experience” is in repeated application of an unsuitable template: one year of experience ten times over, rather than ten years of experience.

You may be familiar with Unity’s Test Runner window, where you can execute tests and see results. This is the user-facing part of the Unity Test Framework (UTF), a very extensible system for running tests of any kind. At Roll7 I recently set up the test runner to automatically run simple smoke tests on every level of our (unannounced) game, and made Jenkins report on failures. In this post I’ll outline how I did the former, and in part two I’ll cover the latter.

Play mode tests, automatically generated for every level
Some of our [redacted] playmode and editmode tests, running on Jenkins

(I’m going to assume you have passing knowledge of how to write tests for the test runner)

(a lot of this post is based on this interesting talk about UTF from its creators at Unite 2019)

The UTF is built upon NUnit, a .net testing framework. That’s what provides all those [TestFixture] and [Test] attributes. One feature of NUnit that UTF also supports is [TestFixtureSource]. This attribute allows you to make a sort of “meta testfixture”, a template for how to make test fixtures for specific resources. If you’re familiar with parameterized tests, it’s like that but on a fixture level.

We’re going to make a TestFixtureSource provider that finds all level scenes in our project, and then the fixture itself that loads a specific level and runs some generic smoke tests on it. The end result is that adding a new level will automatically add an entry for it to the play mode tests list.

There are a few options for different source providers (see the NUnit docs for more), but we’re going to make an IEnumerable<string> that finds all our level scenes. The results of this IEnumerable are what get passed to our fixture’s constructor – you could use any type here.

using System.Collections;
using System.Collections.Generic;
using UnityEditor;

class AllRequiredLevelsProvider : IEnumerable<string>
{
    // Find every scene asset under Assets/Scenes/Levels and yield its path.
    IEnumerator<string> IEnumerable<string>.GetEnumerator()
    {
        var allLevelGUIDs = AssetDatabase.FindAssets("t:Scene", new[] {"Assets/Scenes/Levels"});
        foreach(var levelGUID in allLevelGUIDs)
        {
            var levelPath = AssetDatabase.GUIDToAssetPath(levelGUID);
            yield return levelPath;
        }
    }
    public IEnumerator GetEnumerator() => (this as IEnumerable<string>).GetEnumerator();
}

Our TestFixture looks like a regular fixture, except also with the source attribute linking to our provider. Its constructor takes a string that defines which level to load.

[TestFixtureSource(typeof(AllRequiredLevelsProvider))]
public class LevelSmokeTests
{
    private string m_levelToSmoke;
    public LevelSmokeTests(string levelToSmoke)
    {
        m_levelToSmoke = levelToSmoke;
    }

Now our fixture knows which level to test, but not how to load it. TestFixtures have a [SetUp] attribute which runs before each test, but loading the level fresh for each test would be slow and wasteful. Instead let’s use [OneTimeSetUp] (👀 at the inconsistent capitalisation) to load our level once per fixture. This depends somewhat on your game implementation, but for now let’s go with UnityEngine.SceneManagement:

// class LevelSmokeTests {
    [OneTimeSetUp]
    public void LoadScene()
    {
        SceneManager.LoadScene(m_levelToSmoke);
    }

Finally, we need some tests that would work on any level we throw at them. The simplest approach is probably to just watch the console for errors as we load in, sit in the level, and then load out. Any console errors at any of these stages should fail the test.

UTF provides LogAssert to validate log output, but at the time of writing it only lets you prescribe what should appear. We don’t care about Debug.Log() output, but we do want to know if there was anything worse than that. In particular, in our case, we’d like to fail for warnings as well as errors: too many “benign” warnings can hide serious issues! So, here’s a little utility class called LogSeverityTracker that helps check for clean consoles. Check the comments for usage.
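
The class itself isn’t reproduced here, but a minimal sketch of what such a tracker could look like follows, built on Unity’s Application.logMessageReceived callback. Only AssertCleanLog is named in the text; StartTracking/StopTracking and the internals are my assumptions, so treat this as an illustration rather than the original class:

```csharp
// Hedged sketch of a LogSeverityTracker: collects any warning-or-worse
// console entries so a test can assert the console stayed clean.
using System.Collections.Generic;
using NUnit.Framework;
using UnityEngine;

public class LogSeverityTracker
{
    readonly List<string> m_problems = new List<string>();

    // Call from [OneTimeSetUp], before loading the level.
    public void StartTracking() => Application.logMessageReceived += HandleLog;

    // Call from [OneTimeTearDown] so the fixture stops listening.
    public void StopTracking() => Application.logMessageReceived -= HandleLog;

    void HandleLog(string message, string stackTrace, LogType type)
    {
        // LogType.Log is benign chatter; Warning, Error, Assert and
        // Exception all count as a dirty console.
        if (type != LogType.Log)
            m_problems.Add($"{type}: {message}");
    }

    // Fails the current test if anything warning-or-worse was logged, then
    // resets so later tests only report new problems.
    public void AssertCleanLog()
    {
        var report = string.Join("\n", m_problems);
        m_problems.Clear();
        Assert.That(report, Is.Empty, "Expected a clean console but got:\n" + report);
    }
}
```

The fixture would then hold one of these in the m_logTracker field the tests below use (e.g. a `private LogSeverityTracker m_logTracker = new LogSeverityTracker();` member, with StartTracking() called from LoadScene()).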

Our tests can use the [Order] attribute to ensure they happen in sequence:

// class LevelSmokeTests {
    [Test, Order(1)]
    public void LoadsCleanly()
    {
        m_logTracker.AssertCleanLog();
    }

    [UnityTest, Order(2)]
    public IEnumerator RunsCleanly()
    {
        // wait some arbitrary time
        yield return new WaitForSeconds(5);
        m_logTracker.AssertCleanLog();
    }

    [UnityTest, Order(3)]
    public IEnumerator UnloadsCleanly()
    {
        // how you unload is game-dependent 
        yield return SceneManager.LoadSceneAsync("mainmenu");
        m_logTracker.AssertCleanLog();
    }

Now we’re at the point where you can hit Run All in the Test Runner and see each of your levels load in turn, wait a while, then unload. You’ll get failed tests for console warnings or errors, and newly-added levels will get automatically-generated test fixtures.

More tests are undoubtedly more useful than fewer. Depending on the complexity and setup of your game, the next steps might be to get the player to dumbly walk around for a little bit. You can get a surprising amount of info from a dumb walk!
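
If your game has a movable player, that follow-up test might look something like this sketch. PlayerController and its Move() method are placeholders for whatever input API your game actually exposes, so this won’t compile as-is:

```csharp
// class LevelSmokeTests {
    // Hypothetical "dumb walk" test: nudge the player in random directions
    // for a few seconds, then check the console stayed clean.
    [UnityTest, Order(4)]
    public IEnumerator SurvivesDumbWalk()
    {
        var player = Object.FindObjectOfType<PlayerController>();
        Assert.IsNotNull(player, "expected a player in the level");

        var random = new System.Random(42); // fixed seed keeps runs reproducible
        for (var i = 0; i < 50; i++)
        {
            var direction = new Vector2(
                (float)(random.NextDouble() * 2 - 1),
                (float)(random.NextDouble() * 2 - 1));
            player.Move(direction);
            yield return new WaitForSeconds(0.1f);
        }
        m_logTracker.AssertCleanLog();
    }
```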

In part 2, I’ll outline how I added all this to Jenkins. It’s not egregiously hard, but it can be a bit cryptic at times.

September 11, 2020

Reading List 265 by Bruce Lawson (@brucel)

September 09, 2020

Dos Amigans by Graham Lee

Tomorrow evening (for me; 18:00 UTC on 10th Sept) Steven R. Baker and I will twitch-stream our journey learning how to write Amiga software. Check out dosamigans.tv!

September 06, 2020

I finally got around to reading Cal Newport’s latest book: Digital Minimalism. Newport’s previous book, Deep Work, is one of my favourites so I had high expectations – and it delivered. Go read it if you haven’t already.

Newport makes the case that much of the technology that we use – in particular smartphones and social media – has a detrimental impact on our ability to live a deep life. Newport describes the deep life as “focusing with energetic intention on things that really matter – in work, at home, and in your soul – and not wasting too much attention on things that don’t.”

The antidote to the addiction that many of us have to our devices is to become a digital minimalist. Newport defines Digital Minimalism as, “a philosophy of technology use in which you focus your online time on a small number of carefully selected and optimized activities that strongly support things you value, and then happily miss out on everything else.”

The first step to becoming a digital minimalist is to do a thirty-day digital declutter, where you take a break from optional technologies in your life to rediscover more satisfying and meaningful pursuits.

There are three steps to the digital declutter process:

  1. Define your technology rules. Decide which technologies fall into the “optional” category. The heuristic Newport recommends is: “consider the technology optional unless its temporary removal would harm or significantly disrupt the daily operation of your professional or personal life”.
  2. Take a thirty-day break. During this break, explore and rediscover activities and behaviours that you find satisfying and meaningful.
  3. Reintroduce technology. Starting from a blank slate, slowly reintroduce technologies that add value to your life and determine how you will use them to maximise this value.

This was a timely read for me. I’ve slipped back into bad habits despite knowing full well the toll that social media and my smartphone can have on me.

September felt like a good time to hit reset and do my own digital declutter experiment so for the next thirty days I’ve committed to:

  • No Twitter use (deleted TweetBot from my phone and iPad, blocked access on Mac)
  • No Instagram use (deleted app from my phone)
  • No email on phone
  • No Trading 212 on my phone
  • Not wearing my Apple Watch
  • No news consumption (RSS and a brief check of the news in the morning is okay)

I’ve introduced a few rules that I’m doing my best to follow:

  • No screens in the bedroom
  • Leave my phone in another room while working
  • Run Focus on my Mac while doing 40 minute work sessions (this blocks email, Slack, and a slew of distracting websites)

I’m also tracking a few habits and metrics every day using the Theme System Journal:

  • 10 minutes+ of meditation
  • 30 minutes+ reading
  • 10k steps
  • No alcohol
  • Journaling
  • Whether I’ve completed my daily highlight
  • The number of sessions of deep work I’ve completed (I aim for 4 40-minute sessions per day)
  • Hours on my phone/pickups via iOS’s ScreenTime feature

I am convinced that a reduction in the time I spend on Twitter and Instagram will be beneficial. The Apple Watch is more interesting: I use the health and fitness tracking features which I find useful, but I am also convinced that it creates a low-level anxiety (have I closed my rings? what’s my heart rate? etc.). It’ll be interesting to see how I feel about the Apple Watch at the end of the month.

September 04, 2020

Free as in Water by Graham Lee

The whole “Free as in beer versus free as in freedom” thing confuses people. Or maybe it doesn’t, and it allows detractors to sow fear, uncertainty and doubt over free software by feigning confusion. Either way, people express confusion.

What is “free as in beer”? Beer is never free, it costs money. Oh, you mean when someone gives me free beer. So, like a round-ordering system, where there’s an expectation that I’ll reciprocate later? Or a promotional beer, where there’s a future expectation that I’ll buy more beer?

No, we mean the beer that a friend buys you when you’re out together and they say “let’s get a couple of beers”. There’s no financial tally kept, no expectation to reciprocate, because then it wouldn’t be friendship: it would be some exchange-mediated relationship that can be nullified by balancing the books. There’s no strings attached, just beer (or coffee, or orange squash, whatever you drink). You get the beer, you don’t pay: but you don’t get to make your own beer, or improve that beer. Gratuity, but no liberty.

Various extensions have been offered to the gratis-vs-libre discussion of freedom. One of the funniest came from a proprietary software vendor’s then-CEO: Scott McNealy’s “free as in puppies”, implying that while the product may be gratis, there’s work to come afterwards.

I think another extension to help software producers like you and me understand the point of the rights conferred by free software is “free as in water”. In so-called developed societies, most of us pay for water, and most of us have a reasonable expectation of a right of access to water. In fact, we often don’t pay for the water itself: we pay for the infrastructure that gets clean, fresh water to our houses and returns soiled water to the treatment centres. If we’re out of our houses, there are public water fountains in urban areas, and a requirement for refreshment businesses to supply fresh water at no cost.

Of course, none of this is to say that you can’t run a for-profit water business. Here in the UK, that infrastructure that gets the main water supply to our houses, offices and other buildings is run for profit, though there are certain expectations placed on the operators in line with the idea that access to water is a right to be enjoyed by all. And nothing stops you from paying directly for the product: you can of course go out and buy a bottle of Dasani. You’ll end up with water that’s indistinguishable from anybody else’s water, but you’ll pay for the marketing message that this water changes your life in satisfying ways.

When the expectation of the “freedom to use the water, for any purpose” is violated, people are justifiably incensed. You can claim that water isn’t a human right, and you can expect that view to be considered dehumanising.

Just as water is necessary to our biological life, so software has become necessary to our social and civic lives due to its eating the world. It’s entirely reasonable to want insight and control into that process, and to want software that’s free as in water.

In the spring of 2020, the GNOME project ran their Community Engagement Challenge in which teams proposed ideas that would “engage beginning coders with the free and open-source software community [and] connect the next generation of coders to the FOSS community and keep them involved for years to come.” I have a few thoughts on this topic, and so does Alan Pope, and so we got chatting and put together a proposal for a programming environment for making simple apps in a way that new developers could easily grasp. We were quite pleased with it as a concept, but: it didn’t get selected for further development. Oh well, never mind. But the ideas still seem good to us, so I think it’s worth publishing the proposal anyway so that someone else has the chance to be inspired by it, or decide they want it to happen. Here:

Cabin: Creating simple apps for Linux, by Stuart Langridge and Alan Pope

I’d be interested in your thoughts.

August 28, 2020

A surprising new creative hobby – Painting Portraits

I started painting at the start of lockdown and here’s my story.

It’s May, Boris Johnson has just told us all we’re not to leave the house and the nation stays indoors. Like many, I turned to surprising new areas of interests to keep me sane – for me it was painting portraits of famous people.

I’ve never picked up a paintbrush for artistic reasons and, if I’m honest, I’ve never really liked art. Yet as of August 2020 I have over 80 paintings to my name. So what happened?

Well, it all started with Grayson Perry’s Art Club – I actually missed the portrait episode, but when I saw my wife was watching it I sat and watched with interest.

It was actually seeing Joe Lycett (from my home town of Solihull) painting Chris Whitty that made me think “Oh, I’d like to have a go at this, he looks like he’s having fun regardless of the finished product”.

My early acrylic portrait paintings. First paintings I loved.

Thankfully for me the start-up costs were low, as my mother-in-law Rosy had loads of acrylic paints and an easel I could have. So I got painting, and although the first few were very rough I posted them to Facebook and people really reacted to them, which encouraged me to keep painting.

It was the Richard Ayoade and Frank N’ Furter (Rocky Horror Show) paintings that made me feel I could actually take this further.

A whole load of my acrylic portrait paintings. My early works.

Setting up Portmates

At the time I was posting these paintings to my personal Instagram and Facebook pages, which had limited reach as my Facebook is pretty locked down, so it was natural to create a new page for my artwork.

I decided to call my brand Portmates, which is a mixture of “Portraits” and “Your mates”, as I was calling my paintings mates at the time. It’s a bit cheesy, but it’s stuck.

I did a self portrait which my Facebook page followers LOVED, and that became my brand identity. I even did versions with 80’s-style glasses for other art sets, but more on that later.

My branding based on my self portrait. My self portrait became my brand ID.

Page Growth

I was loving it. For once in my career I was in control. I didn’t have to wait for developers or beg, borrow and steal time from people so I could launch products of my own. I just painted and used my background in product design to release things I could sell.

In the early days I was doing lots of “Win a portrait” competitions and free commissions, which really helped get my art in front of new people. I have a whole gallery of people with my paintings. It’s ace! I love seeing them out in the wild. One painting is even hanging in a local hairdresser’s.

Some of my art sales and commissions. Out in the wild!

Facebook groups

Part of my enjoyment is sharing to various Facebook groups. I can’t thank Staceylee at the Portrait Artists UK group enough for the shares and kind words. It’s been a huge part of my journey and will continue to be.

Stacey has often shared some of my live painting sessions and even purchased a set of cards from me. She has really pushed me forward, even if she’s unaware she’s been doing it.

The other group I had early success with was the Grayson Perry Art Club group; they also gave me some very lovely feedback in the early days.

Me, Mulder and Scully painting. I joined the FBI.

Developing my style

One of the things that struck me early on was how many of the established artists in these groups were praising my style and how I’d managed to find my artistic voice really quickly.

This led me to think about the paintings I’d done before and how I could expand on them, making them feel more like original pieces of art rather than fairly crude paintings.

A weekend of painting portraits. A weekend’s work.

I had already been using fairly heavy lines in my paintings, and once I upgraded some equipment (better paint brushes, moving from paper to canvases) I was able to be more experimental. (It’s far easier to recover from mistakes on canvas, as you can paint over them; on paper, corrections destroy the surface.)

I’m now adding far more colour and being braver with my art. Because I have a design background I’m leaning into that – I think my art is somewhere between graphic art and portrait art.

More of my portrait art

Selling art

To my surprise I was getting enquiries about buying some of my paintings. To date I’ve sold three paintings (roughly one a month), which as a hobby ain’t bad at all!

I also started creating my own products, such as Art Cards and Prints, which, if I’m honest, haven’t sold very well – but I have sold a few, and as it’s still early days I’m happy to have some stock so that when I do find new audiences they’ll have something to buy from me.

80’s action art collection. My 80’s action Art Cards.

I’ve experimented with Shopify and Etsy, but settled on having a Big Cartel store. You can of course buy them here.

I’m going to be focusing on selling originals for the next few months before they take over my office and I have to use them as fuel.

I’ve been approached by an online gallery, which is exciting; my work should be available to buy through them very soon.

Oh, by the way: if you’re interested in the process of getting products from a painting to printed items I can sell, I did a tweet thread about how I did it (TL;DR: scan, touch up, print) – read it here.

My art cards volume one, available for sale. Art Cards Volume 1.

Benefits of painting

My skin has always sucked. I scratch myself senseless and it affects every element of my life. I don’t sleep well and it affects my mental health. Thankfully, since I’ve started painting my skin has recovered MASSIVELY.

Although I still have a way to go before my skin is healthy, the painting has really helped me destress and focus my efforts on something other than work, fitness or TV.

When I paint there’s nothing else on my mind; I’m just focusing on the art and getting it how I want it to look (which can take a while!).

(Fitness is still really important to me I’m just less obsessed with how I look in a mirror)

The future

The future is really exciting. I’m planning on expanding my collection of portraits, perhaps even aiming for a self-funded exhibition, or even one at an established gallery.

I plan on releasing more prints in time for the Black Friday/Christmas rush. I know there’s a demand for some of the legends, especially Elton and Freddie, so I’ll probably get some of those made ready for Christmas.

Some legends

I’m also going to be exploring other mediums: I’ve ordered two blank skateboards which I’m going to paint some designs on!

But if I’m honest with myself, I don’t see art being my full-time focus any time soon, as I still really love my UI/UX app design work, so I’m mainly going to keep enjoying the new creative outlet I’ve found for myself.

I also need to remind myself I’m very new to this and if any good things are on my horizon with my art it will happen in time. ❤

So TL;DR – I paint now.

Please follow me on all the socials, my Instagram is buzzing at the moment and my Facebook page is a real lovely community of people. Find all my links here.

I’m fairly confident nobody is reading this far down the page but feel free to tweet me with your own lockdown creative stories.

Paintings I did on holiday.



August 18, 2020

One of the projects I’m working on involves creating a little device which you talk to from your phone. So, I thought, I’ll do this properly. No “cloud service” that you don’t need; no native app that you don’t need; you’ll just send data from your phone to it, locally, and if the owners go bust it won’t brick all your devices. I think a lot of people want their devices to live on beyond the company that sold them, and they want their devices to be under their own control, and they want to be able to do all this from any device of their choosing; their phone, their laptop, whatever. An awful lot of devices don’t do some or all of that, and perhaps we can do better. That is, here’s the summary of that as a sort of guiding principle, which we’re going to try to do:

You should be able to communicate a few hundred KB of data to the device locally, without needing a cloud service, by using a web app rather than a native app, from an Android phone.

Here’s why that doesn’t work. Android and Chrome, I am very disappointed in you.

Bluetooth LE

The first reaction here is to use Bluetooth LE. This is what it’s for; it’s easy to use, phones support it, Chrome on Android has Web Bluetooth, everything’s gravy, right?

No, sadly. Because of the “a few hundred KB of data” requirement. This is, honestly, not a lot of data; a few hundred kilobytes at most. However… that’s too much for poor old Bluetooth LE. An excellent article from AIM Consulting goes into this in a little detail and there’s a much more detailed article from Novelbits, but transferring tens or hundreds of KB of data over BLE just isn’t practical. Maybe you can get speeds of a few hundred kilobits per second in theory, but in practice it’s nothing like that; I was getting speeds of twenty bytes per second, which is utterly unhelpful. Sure, maybe it can be more efficient than that, but it’s just never going to be fast enough: nobody’s going to want to send a 40KB image and wait three minutes for it to arrive. BLE’s good for small amounts of data, not for even medium amounts.

WiFi to your local AP

The next idea, therefore, is to connect the device to the wifi router in your house. This is how most IoT devices work; you teach them about your wifi network and they connect to it. But… how do you teach them that? Normally, you put them in some sort of “setup” mode and the device creates its own wifi network, and then you connect your phone to that, teach it about your wifi network, and then it stops its own AP and connects to yours instead. This is maybe OK if the device never moves from your house and it only has one wifi network to connect to; it’s terrible if it’s something that moves around to different places. But you still need to connect to its private AP first to do that setup, and so let’s talk about that.

WiFi to the device

The device creates its own WiFi network; it becomes a wifi router. You then connect your phone to it, and then you can talk to it. The device can even be a web server, so you can load the controlling web app from the device itself. This is ideal; exactly what I planned.

Except it doesn’t work, and as far as I can tell it’s Android’s fault. Bah humbug.

You see, the device’s wifi network obviously doesn’t have a route to the internet. So, when you connect your phone to it, Android says “hey! there’s no route to the internet here! this wifi network sucks and clearly you don’t want to be connected to it!” and, after ten seconds or so, disconnects you. Boom. You have no chance to use the web app on the device to configure the device, because Android (10, at least) disconnects you from the device’s wifi network before you can do so.

Now, there is the concept of a “captive portal”. This is the thing you get in hotels and airports and so on, where you have to fill in some details or pay some money or something to be able to use the wifi; what happens is that all web accesses get redirected to the captive portal page where you do or pay whatever’s necessary and then the network suddenly becomes able to access the internet. Android will helpfully detect these networks and show you that captive portal login page so you can sign in. Can we have our device be a captive portal?

No. Well, we can, but it doesn’t help.

You see, Android shows you the captive portal login page in a special cut-down “browser”. This captive portal browser (Apple calls it a CNA, for Captive Network Assistant, so I shall too… but we’re not talking about iOS here, which is an entirely different kettle of fish for a different article), this CNA isn’t really a browser. Obviously, our IoT device can’t provide a route to the internet; it’s not that it has one but won’t let you see it, like a hotel; it doesn’t have one at all. So you can’t fill anything into the CNA that will make that happen. If you try to switch back to the real browser in order to access the website being served from the device, Android says “aha, you closed the CNA and there’s still no route to the internet!” and disconnects you from the device wifi. That doesn’t work.

You can’t open a page in the real browser from the CNA, either. You used to be able to do some shenanigans with a link pointing to an intent:// URL but that doesn’t work any more.

Maybe we can run the whole web app inside the CNA? I mean, it’s a web browser, right? Not an ideal user experience, but it might be OK.

Nope. The CNA is a browser, but half of the features are turned off. There are a bunch of JavaScript APIs you don’t have access to, but the key thing for our purposes is that <input type="file"> elements don’t work; you can’t open a file picker to allow someone to choose a file to upload to the device. So that’s a non-starter too.

So, what do we do?

Unfortunately, it seems that the plan:

communicate a few hundred KB of data to the device, locally, without needing a cloud service, by using a web app rather than a native app, from an Android phone

isn’t possible. It could be, but it isn’t; there are roadblocks in the way. So building the sort of IoT device which ought to exist isn’t actually possible, thanks very much Android. Thandroid. We have to compromise on one of the key points.

If you’re only communicating small amounts of data, then you can use Bluetooth LE for this. Sadly, this is not something you can really choose to compromise on; if your device plan only needs small volumes, great, but if it needs more then likely you can’t say “we just won’t send that data”. So that’s a no-go.

You can use a cloud service. That is: you teach the device about the local wifi network and then it talks to your cloud servers, and so does your phone; all data is round-tripped through those cloud servers. This is stupid: if the cloud servers go away, the device is a brick. Yes, lots of companies do this, but part of the reason they do it is that they want to be able to control whether you can access a device you’ve bought by running the connection via their own servers, so they can charge you subscription money for it. If you’re not doing that, then the servers are a constant ongoing cost and you can’t ever shut them down. And it’s a poor model, and aggressively consumer-hostile, to require someone to continue paying you to use a thing they purchased. Not doing that. Communication should be local; the device is in my house, I’m in my house, why the hell should talking to it require going via a server on the other side of the world?

You can use a native app. Native apps can avoid the whole “this wifi network has no internet access so I will disconnect you from it for your own good” approach by calling various native APIs in the connectivity manager. A web app can’t do this. So you’re somewhat forced into using a native app even though you really shouldn’t have to.

Or you can use something other than Android; iOS, it seems, has a workaround although it’s a bit dodgy.

None of these are good answers. Currently I’m looking at building native apps, which I really don’t think I should have to do; this is exactly the sort of thing that the web should be good at, and is available on every platform and to everyone, and I can’t use the web for it because a bunch of decisions have been taken to prevent that. There are good reasons for those decisions, certainly; I want my phone to be helpful when I’m on some stupid hotel wifi with a signin. But it’s also breaking a perfectly legitimate use case and forcing me to use native apps rather than the web.

Unless I’m wrong? If I am… this is where you tell me how to do it. Something with a pleasant user experience, that non-technical people can easily do. If it doesn’t match that, I ain’t doin’ it, just to warn you. But if you know how this can be done to meet my list of criteria, I’m happy to listen.

Six Colours by Graham Lee

Apple has, in my opinion, some of the best general-purpose computing technology on the market right now, and has had some of the best for all of this millennium. However, their business practices are increasingly punitive, designed to extract greater amounts of rental income from their existing customers (“want to, um, read the newspaper? $9.99/mo, and we get to choose which newspaper!”), with rules that punish those who aren’t interested in helping them extract that rent.

Throughout the iPhone era, Apple has dealt arbitrary hands to people who try to work with them: removing the I Am Rich app without explanation; giving News Corp. a special arrangement to allow in-app subscriptions when nobody else could do it; allowing Netflix to carry on operating rent-free while disallowing others.

People put up with this for the justifiable reason that the Apple technology platform is pleasant and easy to use, well-integrated across multiple contexts including desktop, mobile, wearable and home. None of Apple’s competitors are even playing the same game: you could build some passable simulacrum using multiple vendor technology (for example Windows, Android, Dropbox; or Ubuntu, Replicant, Nextcloud) but no single outlet is going to sell you the “it just works” version of that setup. Not even any vendor consortium works together to provide the same ease and integration: you can’t buy a Windows PC from Samsung, for example, that’ll work out of the box with your Galaxy phone. Even if you get your Chromebook and your Pixel phone from Google, you’ve got some work ahead of you to get everything synced up.

And then, of course, since the failure of their banner ad business, Apple have successfully positioned themselves as the non-ad, non-data-gathering company. Sure, you could get everything we’re doing cheaper elsewhere: but at what cost?

My view is that the one fact—the high-quality technology—doesn’t excuse the other—the rent-extracting business model and capricious heavy-handed application of “the rules” with anyone who tries to work with them. People try to work with them because of the good technology, and get frustrated, enervated, or shut down because of the power imbalance in business. It is OK to criticise Apple for those things they’re not doing well or fairly; it’s a grown-up company worth trillions of dollars, it’ll probably weather the storm. If enough people on the inside learn about and address the criticisms, they may even get better, which will be good for a massive global network of Apple’s employees, suppliers, and customers.

It seems to me that some people (and I’m explicitly talking about people outside Apple now, obviously employees are expected to abide by whatever internal rules the company has and it takes a special kind of person to blow the whistle) will brook none of that. There are people who will defend the two-trillion dollar corporation blocking some small indie’s business; “they’re just applying their rules” (the rules that say I’ll know it when I see it, indicating explicitly that capricious application is to be expected).

It seems weird that a Person On The Internet would feel the need to rush to the defence of The World’s Biggest Company, and so to my mind it seems like they aren’t. It seems like they’re rushing to the defence of 1990s Beleaguered Apple, the company with three weeks of salary money in the bank that’s running on the memory of having the best computers and the hope that maybe the twenty-first model we release this month will be the one that sells. The Apple with its six-coloured logo, where you have to explain that actually the one-button mouse doesn’t make it a toy and you can do real work with it, but could you please send that document in Word 6 format as my Word can’t open Word 97 files thank you. The Apple where actually if you specced up a PC to match this it would probably cost about the same, it’s just that PCs also cover the lower end. The Apple where every friend or colleague you convinced to at least try it out meant a blow to the evil monolith megacorporation bringing computing to the dark side with its nefarious, monopolistic practices and arbitrary treatment of its partners.

That company no longer needs defending. It would be glib to say “that Apple ceased trading on February 7, 1997”, the date that NeXT, Inc. finally disappeared as an independent business. But that’s not what happened. That company slowly boiled as the temperature around it rose. The iMac, iBook, iPod, Mac OS X, iPhone, iPad: all of these things came out of that company. Admittedly, so did iTools, .Mac, and MobileMe, but eventually along came iCloud. Obviously 2020 Apple is a continuation of the spirit and culture of 1997 Apple, 1984 Apple, 1976 Apple. It has some of the same people, and plenty of people who learned from the people who are and were the same people. But it is also entirely different. Through a continuum of changes, but no deliberate “OK, time to rip off the mask” conversion, Apple is now the IBM that fans booed in 1984, or the Microsoft that fans booed in 1997.

It’s OK to not like that, to not defend it, but to still want something good to come out of their great technology. We have to let go of this notion that for Apple to win, everyone else has to lose.

August 14, 2020

Reading List 264 by Bruce Lawson (@brucel)

August 09, 2020

Nvidia and ARM by Graham Lee

Nvidia’s ambitions are scarcely hidden. Once it owns Arm it will withdraw its licensing agreements from its competitors, notably Intel and Huawei, and after July next year take the rump of Arm to Silicon Valley

This tech giant up for sale is a homegrown miracle – it must be saved for Britain

August 08, 2020

Fairness by Graham Lee

There are two different questions of fairness when it comes to the App Store rules. Apple always spin it to mean “these rules are applied fairly”, which is certainly not true. Putting aside questions of why Netflix get to do things Hey don’t, it’s pretty obvious that the rules include “don’t make apps in these spaces where Apple has apps” that don’t apply to Apple. It’s also clear that nobody in the App Store team is rules lawyering Apple apps on the rest of the rules, either.

But all of that ignores the larger question, “are these rules fair?”

August 07, 2020

grotag by Graham Lee

Lots of Amiga documentation was in the AmigaGuide format. These are simple ASCII documents with some rudimentary markup to turn them into hypertext, working something like Texinfo manuals. Think more like a markdown-enabled Gopher than the web though: you can link out to an image, video, or any other media (you could once you had AmigaOS 3, anyway) but you can’t display it inline.

Unfortunately choices for modern readers are somewhat limited. Many links are only now found on the Internet Archive, and many of those don’t go to downloads you can actually download. I found a link to an old grotag binary, but it was PowerPC-only.

…and it was on Sourceforge, so I cloned the project and updated the build. I haven’t created a new package yet, but it runs well enough out of Idea. I need to work out how you package Java Swing apps, then do that. It’ll be worth fixing a couple of deprecations, putting assets like the CSS file in the JAR, and maybe learning enough JavaFX to port the UI:

grotag AmigaGuide viewer

To use it, alongside your Guide file you also need a grotag.xml that maps Amiga volume links onto local filesystem paths, so that grotag can find nodes linked to other files. There’s an example of one in the git repo.

Concrete freedoms by Graham Lee

Discussions about free software or open source software can always seem a bit abstract. Who cares if I’ve got the source code, if I’m never going to read it or change it? Why would I want “free” versions of my apps when there are already plenty of zero-cost (i.e. as free as I need) alternatives in my App Store?

This has been the year in which Apple tightened the screws on the App Store, making it clear that anyone who isn’t giving them their 30% or 15% cut or isn’t Netflix or Spotify is on thin ice. In an Orwellian move, they remote-killed Charlie Monroe’s apps and told users that they couldn’t run apps they’d paid for, because the apps would damage their computers.

At the base of the definition of free and open source software are the four freedoms. The first:

The freedom to run the program as you wish, for any purpose (freedom 0).

This is freedom “zero” not just because of the C-style zero indexing pun, but because it was added into the space preceding freedom one after the other three were written. The Free Software Foundation didn’t think it needed explicitly stating, but apparently it does.

In a free software world, YOU are free to run software as you wish, for any purpose. Some trillion-dollar online company pretending to be a bricks-and-mortar retailer equivalent isn’t going to come along and say “sorry, we’ve decided we don’t want you running that, and rather than explain why we’re just going to say it’s for your own good.” They aren’t going to stop developers from sharing or selling their software, on the basis that they haven’t paid enough of a tithe to the mothership.

These four freedoms may seem abstract, but they have real and material consequences. So does their absence.

August 03, 2020

NeXT marketed their workstations by letting Sun convince people they wanted a workstation, then trying to convince customers (who were already impressed by Sun) that their workstation was better.

As part of this, they showed how much better the development tools were, in this very long reality TV infomercial.

If you don’t know the name Igalia, you’ve still certainly used their code. Igalia is “an open source consultancy specialized in the development of innovative projects and solutions”, which tells you very little, but they’ve been involved in adding many features to the open-source browsers (which is now all browsers) such as MathML and CSS Grid.

One of their new initiatives is very exciting: Open Prioritisation (I refuse to mis-spell it with a “z”). The successful campaign to support Yoav Weiss adding the <picture> element and friends to Blink and WebKit showed that web developers would contribute towards crowdfunding new browser features, so Igalia are running an experiment to get the diverse interests and needs of the web development community to prioritise which new features should be added to the web platform.

They’ve identified some possible new features that are “shovel-ready”—that is, they’re fully specified and ready to be worked on, and the Powers That Be who decide what gets upstreamed and shipped are supportive of their inclusion. Igalia says,

Igalia is offering 6 possible items which we could do to improve the commons with open community support, as well as what that would cost. All you need to do is pledge to the ‘pledged collective’ stating that if we ran such a campaign you’re likely to contribute some dollars. If one of these reaches its goal in pledges, we will announce that and begin the second stage. The second stage is about actually running the funding campaign as a collective investment.

I think this is a brilliant idea and will be pledging some of my pounds (if they are worth anything after Brexit). Can I humbly suggest that you consider doing so, too? If you can’t, don’t fret (these are uncertain times) but please spread the word. Or if your employer has some sort of Corporate Social Responsibility program, perhaps you might ask them to contribute? After all, the web is a common resource and could use some nurturing by the businesses it enables.

If you’d like to know more, Uncle Brian Kardell explains in a short video. Or (shameless plug!) you can hear Brian discuss it with Vadim Makeev and me in the fifth episode of our podcast, The F-Word (transcript available, naturally). All the information on the potential features, some FAQs and no photos of Brian are to be found on Igalia’s Open Prioritization page.

YouTube video

Read More

August 02, 2020

Apollo Accelerators make the Vampire, the fastest Motorola 680x0-compatible accelerator for the Amiga around. Actually, they claim that with the Sheepsaver emulator to trap ROM calls, it’s the fastest m68k-compatible Mac around too.

The Vampire Standalone V4 is basically that accelerator, without the hassle of attaching it to an Amiga. They replicated the whole chipset in FPGA, and ship with the AROS ROMs and OS for an open-source equivalent to the real Amiga experience.

I had a little bit of trouble setting mine up (this is not surprising, as they’re very early in development of the standalone variant and are iterating quickly). Here’s what I found, much of it from advice gratefully received from the team in the Apollo Slack. I replicate it here to make it easier to discover.

You absolutely want to stick to a supported keyboard and mouse; I ended up getting the cheapest compatible set from Amazon for about £20. You need to connect the mouse to the USB port next to the DB-9 sockets, and the keyboard to the other one. On boot, you’ll need to unplug and replug the mouse to get the pointer to work.

The Vampire wiki has a very scary-looking page about SD cards. You don’t need to worry about any of that with the AROS image shipped on the V4. Insert your SD card, then in the CLI type:

mount sd0:

When you’re done:

assign sd0: dismount
assign sd0: remove

The last two are the commands to unmount and eject the disk in AmigaDOS. Unfortunately I currently find that while dismounting works, removing doesn’t; and then subsequent attempts to re-mount sd0: also fail. I don’t know if this is a bug or if I’m holding it wrong.

The CF card with the bootable AROS image has two partitions, System: and Work:. These take up around 200MB, which means you’ve got a lot of unused space on the CF card. To access it, you should get the fixhddsize tool. UnLHA it, run it, enter ata.device as your device, and let it fix things for you.

Now launch System:Tools/HDToolBox and click “Add Entry”. In the Devices dialog, enter ata.device. Now click that device in the “Changed Name” list, then double-click on the entry that appears (for me, it’s SDCFXS-0 32G...). You’ll see two entries, UDH0: (that’s your System: partition) and UDH1: (Work:). Add an entry here, selecting the unused space. When you’ve done that, save changes, close HDToolBox, and reboot. You’ll see your new drive appear in Workbench, as something like UDH2:NDOS. Right click that, choose Format, then Quick Format. Boom.

My last tip is that AROS doesn’t launch the IP stack by default. If you want networking, go to System:Prefs/Network, choose net0 and click Save. Optionally, enable “Start networking during system boot”.

August 01, 2020

6502 by Graham Lee

On the topic of the Apple II, remember that MOS was owned by Commodore Business Machines, a competitor of Apple’s, throughout the lifetime of the computer. Something to bear in mind while waiting to see where ARM Holdings lands.

Obsolescence by Graham Lee

An eight-year-old model of iPad is now considered vintage and obsolete. For comparison, the Apple ][ was made from 1977-1993 (16 years) and the January 1983 Apple //e would’ve had exactly the same software support as the final model sold in November 1993, or the compatibility cards for Macintosh sold from 1991 onwards.

July 31, 2020

Some programming languages have a final keyword, making types closed for extension and open for modification.

July 30, 2020

The Nineteen Nineties by Graham Lee

I’ve been playing a lot of CD32, and would just like to mention how gloriously 90s it is. This is the startup chime. For comparison, the Interstellar News chime from Babylon 5.

Sure beats these.

App Structure 2020 by Luke Lanchester (@Dachande663)

Five years ago I wrote about how I structured applications. A lot has changed in five years. An old saying states you should be able to look back on your work of yesterday and wonder what you were thinking. So here is how I structure apps in 2020, what’s changed and what’s stayed the same.

Without further ado, here are the most common classes created for a fictional Movie type.

App\Http\Controllers\ApiMovieController
App\Modules\Movie\Entities\Movie
App\Modules\Movie\Commands\CreateMovieCommand
App\Modules\Movie\CommandHandlers\CreateMovieCommandHandler
App\Modules\Movie\Policies\MoviePolicy
App\Modules\Movie\Readers\MovieReader
App\Modules\Movie\Repositories\MovieRepository
App\Modules\Movie\Repositories\MovieRepositoryInterface
App\Modules\Movie\Resources\MovieResource

So what’s changed? Well, the first thing is that entities are now split into modules, with each module containing its own repositories, commands, resources et al. This makes it much simpler to identify all the parts of a specific module, even if there do tend to be strong links between modules (for instance the User module is referenced elsewhere).

The other big change is the move to Command Query Responsibility Segregation (CQRS). In 2015, I was handling a lot of the logic within the Controller. This was fine when the controller was the sole owner of an action. But as systems have grown, there are more and more events. Bundling authentication, logging, retries, etc. into the controller caused controllers to become unwieldy, especially in classes with many methods.

With CQRS, read requests go through a Reader. This is responsible for accepting query information, validating the input (and the user), and then returning data. All data is transformed through one or more Resources.

Any actions that may be performed, from creating a movie to submitting a vote on that entity, are now encapsulated as Commands. Each command contains all of the data needed for it to work including the user performing the action, the entity under control and any inputs. Looking at a command, you can see exactly what’s needed!

Commands are then routed through an event bus. This allows for logging of all actions, addition of transactions and retry controls, and authentication all without needing to touch the actual Command Handler that does the final work!
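
To make that flow concrete, here’s a minimal sketch of a command, handler, and bus in JavaScript (the real app’s classes are PHP; every name here is an illustrative stand-in, and a production bus would also layer in the transaction and retry middleware described above):

```javascript
// A command is pure data: everything the handler needs, nothing more.
class CreateMovieCommand {
  constructor(userId, title) {
    this.userId = userId;
    this.title = title;
  }
}

// The handler does the actual work, and knows nothing about logging,
// transactions, or retries.
class CreateMovieCommandHandler {
  handle(command) {
    // In the real app this would persist via the repository.
    return { id: 1, title: command.title, createdBy: command.userId };
  }
}

// The bus routes commands to handlers, and is the single place to hang
// cross-cutting concerns such as logging.
class CommandBus {
  constructor() { this.handlers = new Map(); }
  register(commandType, handler) { this.handlers.set(commandType, handler); }
  dispatch(command) {
    const handler = this.handlers.get(command.constructor);
    console.log(`Dispatching ${command.constructor.name}`); // logging lives here
    return handler.handle(command);
  }
}

const bus = new CommandBus();
bus.register(CreateMovieCommand, new CreateMovieCommandHandler());
const movie = bus.dispatch(new CreateMovieCommand(42, "Alien"));
console.log(movie.title); // "Alien"
```

Because every action passes through `dispatch`, adding (say) an audit log is one change in one place, not one per controller method.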

The system isn’t perfect, but it strikes a good balance between being self-documenting/protecting and fast for rapid development.

July 27, 2020

July 24, 2020

Reading List 263 by Bruce Lawson (@brucel)

July 23, 2020

July 17, 2020

Live Regions resources by Bruce Lawson (@brucel)

Yesterday I asked “What’s the most up-to-date info on aria-live regions (and <output>) support in AT?” for some client work I’m doing. As usual, Twitter was responsive and helpful.

Heydon replied

Should be fine, support is good for live regions. Not sure about output, though … Oh, you’re adding the p _with_ the other XHR content? That will have mixed results in my experience.

Brennan said

I’ve seen some failed announcements with live-regions on VoiceOver, especially with iframes. (Announcement of the title seems to kill any pending live content.) output has surprisingly good support but (IIRC) is not live by default on at least one browser (IE, I think).
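
For reference, the basic pattern these replies assume: the live region exists in the DOM, empty, from page load, and announcements are just content changes. A minimal sketch (the helper name is mine, not a standard API):

```javascript
// Assumes the page already contains an empty element marked up as a live
// region from load, e.g.:
//   <p id="status" role="status" aria-live="polite"></p>
function announce(region, message) {
  // Clear first: some screen readers won't re-announce an identical string
  // (in practice a short delay between the two writes can help too).
  region.textContent = "";
  region.textContent = message; // the content change is what AT announces
}

// Usage: announce(document.getElementById("status"), "3 results loaded");
```

Injecting the region itself at announcement time is what tends to produce the mixed results Heydon mentions.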

Some more resources people pointed me to:

July 16, 2020

A while ago I developed a little demo app that generates an interactive 3D visualisation – a primitive landscape of sorts – from a number sequence generated by a Perlin Noise algorithm (don’t worry if that means nothing to you!). You can check out the app right here.

The 3D graphics are drawn in the browser DOM, using thousands of divs transformed via CSS. It turned out pretty well, but I found that the frame rate absolutely tanked when adding divs to increase the visualisation’s resolution. Browsers aren’t well equipped when it comes to transforming thousands of divs in 3D, so in pursuit of a better frame rate I decided to use this as an excuse to finally dip into WebGL, by rewriting the app using a WebGL-based renderer.

So, what is WebGL, and what’s Three.js? WebGL is a hardware-accelerated API for drawing graphics in a web browser, enabling 2D and 3D visuals to be drawn with far higher performance than you might get when using the DOM or the 2D canvas API. In the right hands, the visual output can be more akin to a modern PC or console video game than the more simplistic animated graphics you might usually see around the web. However, WebGL can be somewhat impenetrable to newcomers, requiring an intimate knowledge of graphics programming and mathematics, so many developers add a library or framework on top to handle the heavy lifting.

And that’s where Three.js steps in, providing a relatively simple API for developing WebGL apps. There are other options of course, but I chose Three.js as there’s a wealth of demos and documentation for it out there, plus it seems like the one I see in the wild the most.

TL;DR: view the new Three.js version of my app in the embed below, or right here on Codepen.

See the Pen
Three.js/WebGL: Interactive Perlin Noise Landscape Visualisation
by Sebastian Lenton (@sebastianlenton)
on CodePen.

Breaking it Down

Rather than diving straight into rebuilding the application in one go, I decided to break it down into various chunks of essential functionality required to complete the final product. Once I’d developed them all, I would then put the various components together and develop the final app. The items on my list were as follows:

Hello World: the objective was to get something simple onscreen. While researching this I discovered an amazing learning resource in the form of the tutorials at Three.js Fundamentals. I couldn’t have completed this project without it; it really was incredibly useful. I followed their Hello World tutorial and got a cube rotating onscreen pretty quickly.

Resizable canvas: by default, Three.js outputs to a 300x150px canvas element that does not resize. I needed the canvas not only to resize itself to fit the browser window, but also to keep its contents within a square area in the centre of the canvas, so that the entire visualisation would be visible regardless of the window’s aspect ratio.

The “Responsive Design” article on Three.js Fundamentals got me most of the way there, but when viewed in a mobile-esque portrait aspect the sides of the demo spinning cube would be clipped, whereas I wanted the content to resize itself to always fit within that central square area so it would always be visible. After reading the following GitHub thread I realised I had to update the Three.js camera’s field of view (FOV) upon window resize, and after some mathematical trial and error I managed to get it working. Check out my example here; the red square represents any visible content that might be present in the scene.
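
The maths boils down to a bit of trigonometry: Three.js cameras take a vertical FOV, so in portrait windows you widen it until the horizontal view still covers the central square. Something like this (a sketch of the approach, with an illustrative function name; `baseFov` is whatever FOV you designed for a square viewport):

```javascript
// Recompute a vertical field of view so a central square region stays fully
// visible at any aspect ratio (aspect = width / height).
function fovForAspect(baseFov, aspect) {
  if (aspect >= 1) return baseFov; // landscape: vertical FOV already fits
  // Portrait: widen the vertical FOV so the *horizontal* FOV equals baseFov.
  const baseRad = (baseFov * Math.PI) / 180;
  const vRad = 2 * Math.atan(Math.tan(baseRad / 2) / aspect);
  return (vRad * 180) / Math.PI;
}

console.log(fovForAspect(60, 2));              // 60: wide window, no change
console.log(fovForAspect(60, 0.5).toFixed(1)); // "98.2": widened for a tall window
```

On each resize you’d then set `camera.fov` and `camera.aspect` and call `camera.updateProjectionMatrix()`.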

Draggable rotating plane: I needed to replicate the ability to click and drag to change the viewing angle. Once I’d got something onscreen it was pretty easy to retrofit the code from the previous version of my application into a Three.js context; check it out here.

However, shortly afterwards I discovered that Three.js includes an extension called OrbitControls, which provides an orbiting camera with momentum, dollying (being able to move the camera closer to or further from the target), auto rotation and more, with virtually no setup required beyond some simple configuration. As such it was a no-brainer to use it rather than my own code. You can view an OrbitControls example here.

Vertex colours: the original version uses a CSS gradient background on each transformed div as a texture. To achieve the same effect with WebGL I figured I could either use texture mapping or vertex colours, but given that there would potentially be thousands of child elements all needing a distinct texture I assumed that doing it with vertex colours would be easier and perhaps more performant (might be wrong about that though!).

BTW if you’re wondering what “vertex colours” are: a “vertex” is a point in a 3D model, such as a corner of a square. Each vertex can be given its own colour, and the renderer will interpolate a 3D model’s colour from one point to another; e.g. a cylinder with blue vertices at one end and red ones at the other would have a blue-to-red gradient texture.

It was a bit tricky finding concrete info and working examples for this (a lot of the content about Three.js out there has been obsoleted by changes in its codebase), but I managed to get there after some trial and error: see my example here.
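
If it helps, the interpolation the renderer performs is just a per-channel linear blend between the endpoint colours; you supply the vertex colours and the GPU does this across each face for you:

```javascript
// Linear interpolation between two RGB vertex colours, as the rasteriser
// does across a face. t = 0 gives colour a, t = 1 gives colour b.
function lerpColor(a, b, t) {
  return {
    r: a.r + (b.r - a.r) * t,
    g: a.g + (b.g - a.g) * t,
    b: a.b + (b.b - a.b) * t,
  };
}

const blue = { r: 0, g: 0, b: 1 };
const red  = { r: 1, g: 0, b: 0 };
console.log(lerpColor(blue, red, 0.5)); // { r: 0.5, g: 0, b: 0.5 }: the midpoint purple
```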

Parent and child Meshes: I needed to be able to parent the child objects that make up the landscape to the square base underneath, so that the children would rotate with the base. Three.js Fundamentals once again came to the rescue, with their Scene Graph tutorial explaining how to parent objects to other objects. The tutorial was easy to follow, plus there’s some bonus content at the end about making a tank that moves along a track.

Creating and deleting elements: in the original DOM-based version of the application, the quantity and positions of child objects would change when changing the landscape resolution setting in the controls. At the time it was easiest to simply delete all the child objects and recreate however many were required for the new resolution value. However with this new version I felt it would be more performant to implement a basic object pool.

Instead of repeatedly deleting and recreating objects, with an object pool you create an instance of every object that will ever be needed at startup. Then, you display and modify the ones that are required while deactivating the ones that aren’t. This approach improves performance, since regularly creating and deleting objects often has a heavier cost than modifying objects that are already present in the scene. The drawback is slightly longer startup time, but that isn’t a problem here. In this instance the max resolution of the visualisation is 100, so the total objects created at startup is 10,000.

You can find my basic Object Pool implementation here: enter the number of objects required into the field in the top-right. However, in the case of this application there are probably better ways to achieve this: rather than creating 10,000 individual meshes, it might be possible to use instancing or some sort of particle system to implement this more efficiently. Still got a lot of learning to do…
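
The pool itself is only a few lines. A stripped-down sketch (plain objects standing in for Three.js meshes, whose visibility flag is what the real app toggles):

```javascript
// Minimal object pool: create everything up front, then acquire/release
// instead of constructing and destroying mid-frame.
class ObjectPool {
  constructor(size, factory) {
    this.free = [];
    for (let i = 0; i < size; i++) this.free.push(factory(i));
  }
  acquire() {
    const obj = this.free.pop();  // reuse an idle object...
    if (obj) obj.active = true;   // ...and mark it live (visible)
    return obj;                   // undefined if the pool is exhausted
  }
  release(obj) {
    obj.active = false;           // hide it rather than destroying it
    this.free.push(obj);
  }
}

const pool = new ObjectPool(10000, (i) => ({ id: i, active: false }));
const mesh = pool.acquire();
console.log(pool.free.length); // 9999
pool.release(mesh);
console.log(pool.free.length); // 10000
```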

Modifying meshes in realtime: I needed to work out how to modify a mesh’s size and colours in realtime, since child objects would need to be modified when someone uses the app’s controls. You can see a simple example of scaling and changing vertex colours here. When changing vertex colours, don’t forget to set myGeometry.colorsNeedUpdate = true; whenever the colours need to change.

Putting it all together: I managed to complete all of the above items without too much effort. Once that was done, I assembled everything together and ended up with a version that looked more or less the same as the original DOM-based version, but with far better performance! Once I’d gotten used to the different syntax it wasn’t much more difficult than developing the original version.

Enhancements

Since building something like this with WebGL adds a lot more possibilities in terms of what you can do, I added some enhancements to the original version:

Fog: a basic fog effect can be added to a Three.js scene very easily, and of course there’s a chapter about it on Three.js Fundamentals. I added a very subtle fog effect, more as a test than anything else. The only problem I encountered was that the OrbitControls “dolly” effect didn’t work well with fog, since the fog effect’s start and end points are relative to the camera position: dollying the camera towards or away from the object would plunge it entirely into or out of the effect. To get around this I replaced camera dolly with zoom.

Removal of loading indicators: the original version was slow enough that changing a control would freeze the visualisation until the update had completed. To communicate to the user that something was happening I added loading indicators. This new WebGL-based version was so much faster that I felt I could simply remove these and have all updates happening in realtime! It still gets a bit slow when the “resolution” slider is at a high value, but I’m not sure what I can do about that; more research needed. (Also, when you tweak the sliders it looks awesome).

Editable colours: you can change the land, sky and fog colours in this version. The colour picker provided by the library I used for the controls, dat.GUI, made this easy to implement.

3D Sky: the original version had a simple CSS gradient for its background, a flat image that doesn’t react to the rest of the visualisation. I decided that a 3D “sky” would be a good enhancement. Many videogames simulate a 3D horizon by surrounding the play area with a massive cube, textured with a cubemap (a special texture that makes the cube look like a realistic panoramic horizon), but in my case I wanted the sky’s colours to be a linear gradient, so I felt a sphere would be a better fit.

This was a bit tricky to get right: at first I tried manipulating a sphere’s vertex colours to produce the gradient effect, by reverse-engineering this example code that more or less does exactly that. But for some reason there were slight visual artefacts at certain polygon edges that spoiled the effect slightly, so I tried a different approach.

This involved drawing a gradient to an unseen HTML5 canvas, then using that as a source texture for my sphere, which worked more or less perfectly. Then, with the sphere positioned at 0, 0, 0 in my scene, I set it to resize itself on the window resize event so that it completely covers the background at any viewport size, set its material to be unaffected by fog (otherwise the sphere would be barely visible, if at all), and set it to render only the backs of its polygons so it wouldn’t obscure the landscape in front.
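The gradient itself is just a linear interpolation between two colours: the real version draws it with the canvas 2D API, but the colour ramp can be sketched as a pure function (function names here are illustrative, not from the actual app):

```javascript
// Interpolate between two hex colours, channel by channel.
function lerpHexColour(fromHex, toHex, t) {
  const a = parseInt(fromHex.slice(1), 16);
  const b = parseInt(toHex.slice(1), 16);
  const channel = (shift) => {
    const x = (a >> shift) & 0xff;
    const y = (b >> shift) & 0xff;
    return Math.round(x + (y - x) * t);
  };
  const rgb = (channel(16) << 16) | (channel(8) << 8) | channel(0);
  return '#' + rgb.toString(16).padStart(6, '0');
}

// Produce an n-row vertical ramp from sky-top to horizon colour.
function gradientRamp(topHex, bottomHex, rows) {
  return Array.from({ length: rows }, (_, i) =>
    lerpHexColour(topHex, bottomHex, i / (rows - 1)));
}

gradientRamp('#000000', '#ff8040', 3); // ['#000000', '#804020', '#ff8040']
```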

The procedural canvas texture gets redrawn when its colours are changed via the GUI, and any changes are applied to the sphere without needing to do anything so long as the sphere’s material.map.needsUpdate flag is set to true on each frame.

Post-processing effects: with WebGL you can add post-processing effects such as blur, noise or colour changes. This is somewhat similar to CSS filters except that unlike CSS filters you can write your own effects, so there are no limits on what you can do. I added some simple effects, using Three.js Fundamentals’ article on post processing – some bloom and visual noise – but in this instance I removed the effects from the final version as it made it look worse! Effects like these could be a good fit for a future project though.

Rendering on demand: more of a finishing touch than an enhancement, a frame will only be rendered if there is some movement in the scene, whether by user interaction or the camera moving on its own. If movement stops then rendering does too, using less power on the user’s device.
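The usual shape of this is a coalescing “request render” helper. This is a rough sketch of the pattern rather than the app’s actual code, with `scheduleFrame` standing in for `requestAnimationFrame`:

```javascript
// Coalesce many change events in one tick into a single rendered frame;
// when nothing requests a render, no frames are scheduled at all.
function makeOnDemandRenderer(render, scheduleFrame) {
  let requested = false;
  function frame() {
    requested = false; // reset first, so the frame itself may request another
    render();
  }
  return function requestRender() {
    if (!requested) {
      requested = true;
      scheduleFrame(frame);
    }
  };
}

// Usage sketch: controls.addEventListener('change', requestRender);
```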

The Results

After all that, the Three.js version of my app was complete… and was it worth all the effort? Definitely, I’d say: the new version has a far higher level of performance, which means a better frame rate, higher maximum landscape resolution and no “loading” indicators. Plus, it was a great way of dipping my toe into the Three.js world. The possibilities really are endless, and I’m looking forward to learning more.

However, I’m also mindful of the drawbacks of using a library like Three.js. The library is quite large, with a minified version weighing in at around 500k, and GPU-accelerated graphics can be heavy on battery life. Maybe stick to the DOM or HTML5 canvas when developing simple or frivolous animations.

Check out the final product on Codepen: give it some love if you like it or find the code useful!

The post Adventures With WebGL & Three.js appeared first on Sebastian Lenton.

My chum and co-author Remington Sharp tweeted

We need a universally recognised icon/image/logo for "works offline".

Like the PWA or HTML5 logo. We need to be able to signal to visitors that our URLs are always available.

To the consumer, the terms Progressive Web App or Service Worker are meaningless. So I applied my legendary branding, PR and design skills to come up with something that will really resonate with a web user: the fact that this app works online, offline – anywhere.

So the new logo is a riff on the HTML5 logo, because this is purely web technologies. It has the shield, a wifi symbol on one side and a crossed out wifi symbol on the other, and a happy smile below to show that it’s happy both on and offline. Above it is the acronym “wank” which, of course, stands for “Works anywhere—no kidding!”

Take it to use on your sites. I give the fruits of my labour and creativity free, as a gift to humanity.

July 10, 2020

Reading List 262 by Bruce Lawson (@brucel)

July 09, 2020

Many parts of a modern software stack have been around for a long time. That has trade-offs, but in terms of user experience is a great thing: software can be incrementally improved, providing customers with familiarity and stability. No need to learn an entirely new thing, because your existing thing just keeps on working.

It’s also great for developers, because it means we don’t have to play red queen, always running just to stand still. We can focus on improving that customer experience, knowing that everything we wrote to date still works. And it does still work. Cocoa, for example, has a continuous history back to 2001, and there’s code written to use Cocoa APIs going back to 1994. Let’s port some old Cocoa software, to see how little effort it is to stay up to date.

Bean is a free word processor for macOS. It’s written in Objective-C, using mostly Cocoa (but some Carbon) APIs, and uses the Cocoa Text system. The current version, Bean 3.3.0, is free, and supports macOS 10.14-10.15. The open source (GPL2) version, Bean 2.4.5, supports 10.4-10.5 on Intel and PowerPC. What would it take to make that a modern Cocoa app? Not much—a couple of hours work gave me a fully-working Bean 2.4.5 on Catalina. And a lot of that was unnecessary side-questing.

Step 1: Make Xcode happy

Bean 2.4.5 was built using the OS X 10.5 SDK, so probably needed Xcode 3. Xcode 11 doesn’t have the OS X 10.5 SDK, so let’s build with the macOS 10.15 SDK instead. While I was here, I also accepted whatever suggested updated settings Xcode showed. That enabled the -fobjc-weak flag (not using automatic reference counting), which we can now just turn off because the deployment target won’t support it. So now we just build and run, right?

Not quite.

Step 2: Remove references to NeXT Objective-C runtime

Bean uses some “method swizzling” (i.e. swapping method implementations at runtime), mostly to work around differences in API behaviour between Tiger (10.4) and Leopard (10.5). That code no longer compiles:

/Users/leeg/Projects/Bean-2-4-5/ApplicationDelegate.m:66:23: error: incomplete
      definition of type 'struct objc_method'
                        temp1 = orig_method->method_types;
                                ~~~~~~~~~~~^
In file included from /Users/leeg/Projects/Bean-2-4-5/ApplicationDelegate.m:31:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/objc/runtime.h:44:16: note: 
      forward declaration of 'struct objc_method'
typedef struct objc_method *Method;
               ^

The reason is that when Apple introduced the Objective-C 2.0 runtime in Leopard, they made it impossible to directly access the data structures used by the runtime. Those structures stayed in the headers for a couple of releases, but they’re long gone now. My first thought (and first fix) was just to delete this code, but I eventually relented and wrapped it in #if !__OBJC2__ so that my project should still build back to 10.4, assuming you update the SDK setting. It now builds cleanly, using clang and Xcode 11.5 (it builds in the beta of Xcode 12 too, in fact).
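Swizzling itself is a simple idea: exchange two method implementations at runtime. A rough JavaScript analogy of what the Objective-C runtime calls achieve (illustrative only, nothing like Bean’s actual code):

```javascript
// Swap two method implementations on a prototype at runtime,
// analogous to exchanging Method IMPs in the Objective-C runtime.
function swizzle(proto, nameA, nameB) {
  const original = proto[nameA];
  proto[nameA] = proto[nameB];
  proto[nameB] = original;
}

class Greeter {
  hello() { return 'hello'; }
  goodbye() { return 'goodbye'; }
}

swizzle(Greeter.prototype, 'hello', 'goodbye');
new Greeter().hello(); // 'goodbye'
```

In the Objective-C 2.0 runtime you do this through functions like `method_exchangeImplementations` rather than by poking the (now-opaque) `struct objc_method` directly, which is exactly why Bean’s old code stopped compiling.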

OK, ship it, right?

Diagnose a stack smash

No, I launched it, but it crashed straight away. The stack trace looks like this:

* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x7ffeef3fffc8)
  * frame #0: 0x00000001001ef576 libMainThreadChecker.dylib`checker_c + 49
    frame #1: 0x00000001001ee7c4 libMainThreadChecker.dylib`trampoline_c + 67
    frame #2: 0x00000001001c66fc libMainThreadChecker.dylib`handler_start + 144
    frame #3: 0x00007fff36ac5d36 AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 132
    frame #4: 0x00007fff36ac5e6d AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 443
[...]
    frame #40240: 0x00007fff36ac5e6d AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 443
    frame #40241: 0x00007fff36ac5e6d AppKit`-[NSTextView drawInsertionPointInRect:color:turnedOn:] + 443
    frame #40242: 0x00007fff36a6d98c AppKit`-[NSTextView(NSPrivate) _viewDidDrawInLayer:inContext:] + 328
[...]

That’s, um. Well, it’s definitely not good. All of the backtrace is in API code, except for main() at the top. Has NSTextView really changed so much that it gets into an infinite loop when it tries to draw the cursor?

No. Actually, one of the many patches to AppKit in this app is not swizzled; it’s a category on NSTextView that replaces the two methods you can see in that stack trace. I could change those into swizzled methods and see if there’s a way to make them work, but for now I’ll remove them.

Side quest: rationalise some version checks

Everything works now. An app that was built for PowerPC Mac OS X and ported at some early point to 32-bit Intel runs, with just a couple of changes, on x86_64 macOS.

I want to fix one more thing. This message appears on launch and I would like to get rid of it:

2020-07-09 21:15:28.032817+0100 Bean[4051:113348] WARNING: The Gestalt selector gestaltSystemVersion
is returning 10.9.5 instead of 10.15.5. This is not a bug in Gestalt -- it is a documented limitation.
Use NSProcessInfo's operatingSystemVersion property to get correct system version number.

Call location:
2020-07-09 21:15:28.033531+0100 Bean[4051:113348] 0   CarbonCore                          0x00007fff3aa89f22 ___Gestalt_SystemVersion_block_invoke + 112
2020-07-09 21:15:28.033599+0100 Bean[4051:113348] 1   libdispatch.dylib                   0x0000000100362826 _dispatch_client_callout + 8
2020-07-09 21:15:28.033645+0100 Bean[4051:113348] 2   libdispatch.dylib                   0x0000000100363f87 _dispatch_once_callout + 87
2020-07-09 21:15:28.033685+0100 Bean[4051:113348] 3   CarbonCore                          0x00007fff3aa2bdb8 _Gestalt_SystemVersion + 945
2020-07-09 21:15:28.033725+0100 Bean[4051:113348] 4   CarbonCore                          0x00007fff3aa2b9cd Gestalt + 149
2020-07-09 21:15:28.033764+0100 Bean[4051:113348] 5   Bean                                0x0000000100015d6f -[JHDocumentController init] + 414
2020-07-09 21:15:28.033802+0100 Bean[4051:113348] 6   AppKit                              0x00007fff36877834 -[NSCustomObject nibInstantiate] + 413

A little history, here. Back in classic Mac OS, Gestalt was used like Unix programmers use sysctl and soda drink makers use high fructose corn syrup. Want to expose some information? Add a gestalt! Not bloated enough? Drink more gestalt!

It’s an API that takes a selector, and a pointer to some memory. What gets written to the memory depends on the selector. The gestaltSystemVersion selector makes it write the OS version number to the memory, but not very well: the version is packed into just four hexadecimal digits as binary-coded decimal. This turned out to be fine, because Apple didn’t release many operating systems. They used two digits for the major number and one each for the minor and patch release numbers, so macOS 8.5.1 was represented as 0x0851.

When Mac OS X came along, Gestalt was part of the Carbon API, and the versions were reported as if the major release had bumped up to 16: 0x1000 was the first version, 0x1028 was a patch level release 10.2.8 of Jaguar, and so on.
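The packing is easy to sketch (illustrative JavaScript, not the Carbon API): two binary-coded-decimal digits of major version, then one digit each for minor and patch.

```javascript
// Decode a gestaltSystemVersion value: 0x0851 -> [8, 5, 1].
function decodeGestaltVersion(packed) {
  const majorBCD = (packed >> 8) & 0xff;                    // two BCD digits
  const major = ((majorBCD >> 4) & 0xf) * 10 + (majorBCD & 0xf);
  const minor = (packed >> 4) & 0xf;                        // one digit
  const patch = packed & 0xf;                               // one digit
  return [major, minor, patch];
}

decodeGestaltVersion(0x0851); // [8, 5, 1]  — Mac OS 8.5.1
decodeGestaltVersion(0x1028); // [10, 2, 8] — Mac OS X 10.2.8
// The documented limitation: 10.15.5 can't fit a "15", so Gestalt reports 0x1095.
decodeGestaltVersion(0x1095); // [10, 9, 5]
```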

At some point, someone at Apple realised that if they ever did sixteen patch releases or sixteen minor releases, this would break. So they capped each of the patch/minor numbers at 9, and just told you to stop using gestaltSystemVersion. I would like to stop using it here, too.

There are lots of version number checks all over Bean. I’ve put them all in one place, and given it two ways to check the version: if -[NSProcessInfo isOperatingSystemAtLeastVersion:] is available, we use that. Actually that will never be relevant, because the tests are all for versions between 10.3 and 10.6, and that API was added in 10.10. So we then fall back to Gestalt again, but with the separate gestaltSystemVersionMajor/Minor selectors. These exist back to 10.4, which is perfect: if that fails, you’re on 10.3, which is the earliest OS Bean “runs” on. Actually it tells you it won’t run, and quits: Apple added a minimum-system check to Launch Services so you could use Info.plist to say whether your app works, and that mechanism isn’t supported in 10.3.

Ship it!

We’re done!

Bean 2.4.5 on macOS Catalina

Haha, just kidding, we’re not done. Launching the thing isn’t enough, we’d better test it too.

Bean 2.4.5 on macOS Catalina in dark mode

Dark mode wasn’t a thing in Tiger, but it is a thing now. Bean assumes that if it doesn’t set the background of an NSTextView, it’ll be white. We’ll explicitly set that.

Actually ship it!

You can see the source on GitHub, and in particular how few changes are needed to make a 2006-era Cocoa application work on 2020-era macOS, despite a couple of CPU family switches and a bunch of changes to the UI.



Repackaging for Uncertainty

It’s not news that the global covid-19 pandemic has severely impacted cultural institutions around the world. Here in New York, what was initially announced as a 1-month closure for Broadway was recently extended to the end of 2020, forcing theaters to go dark for at least 9 months. For single ticket purchases, sales can be painfully pushed back, shows rescheduled, credits offered, and emails blasted. But at this point, organizations are now looking at a 20/21 season that has been reduced by 50-70%. So while they ponder what a physically and financially safe reopening looks like, they’re also having to turn their attentions to another key portion of their audience: subscribers.

Every company will tell you they want to cultivate a relationship with their customers. From a brand perspective, you always want to have a key base of loyal consumers, and financially these are the patrons that will consistently return to make purchases. Arts and cultural institutions use annual packages to leverage their reliable volume and quality of programming, allowing them to build relationships with their patrons which span over decades, and sometimes generations. Rather than buying several tickets to individual shows, a package is a bundle deal. The benefits vary from org to org, early access to show dates, choice of seats, and discounted prices on the shows being some of the most common, but one thing is constant: in order to reap any of these benefits, patrons commit to multiple shows over the course of the year. As a relationship tool, packages are very effective. Theatre Communications Group (TCG) in New York City produces an annual fiscal analysis for national trends in the non-profit theater sector by gathering data from all across the country. In 2018 they gathered data from 177 theaters, and found that, on average, subscribers renewed 74% of the time.

This is about more than brand loyalty though: these packages represent a source of consistent income for theaters. Beyond the face value of individual tickets purchased by subscribers, which according to TCG’s research tallies up to an average of $835k a year (with the highest-earning tier of theaters surveyed pulling in an average of $2.5 million), subscribers are much more likely to convert their attendance into other kinds of support for the organization. These patrons can be marketed to with low risk (of annoyance) and high reward (in literal dollars). Thus, the value of subscriber relationships goes well beyond their sheer presence in the venue, making them one of the most consistently considered portions of the audience. At a time when uncertainty is the name of the game, a set of dependable patrons might seem like the perfect audience slice to reach out to right now.

However, since the rewards for packages typically revolve around early access, and require a multi-show commitment, subscription purchases and renewals usually receive a big push very early in the season sales cycle, which put them in a particularly vulnerable position at the start of the covid-19 pandemic. Furthermore, the extension of venue closures has not only pushed back sales dates, but also the seasons themselves, leaving patrons with fewer shows to choose from and fewer to commit to. This is forcing everyone to figure out how to salvage the potential income while still providing a valuable, and fair, experience to patrons. Thus far, we’re seeing three different answers to this question emerge.

Like-for-like

One straightforward approach is to roll with the punches and create mixed packages between this season (Season A) and next season (Season B). This works particularly well for orgs with consistent types of content every year. For example, if an organization has a package of 5 Jazz concerts in Season A, but 3 of them are cancelled due to the pandemic, you just take 3 shows from Season B to replace them. Behind the scenes, this strategy does require a lot of hands-on adjustments if shows continue to be cancelled, but with the benefit of preserving the structure of the original package. Patrons also have a transparent view into how the value of their package is being preserved; however, they are still tied to a specific show date with no knowledge of what the situation will be at that point.

Voucher Exchange

Another solution which has started to arise is the idea of a voucher system. Rather than trying to reschedule after a show in a package is cancelled, patrons are given vouchers which can be redeemed for a ticket to a future performance. For organizations, this option puts a lot of the workload at the front end, as it requires detailed business consideration: do vouchers expire? If so, how far in the future? Do the vouchers have a dollar value, or can they be exchanged 1:1 for a production? What happens if prices change between now and reopening, or if a user wants a ticket of a different value? (You get the gist). All that being said, once those business rules are set, it has the potential to put the other choices in the hands of the patron. Consequently, for patrons this option takes off some of the pressure: they don’t need to commit to another uncertain date in the future, instead they can be assured they are receiving the value of their package at a time that they feel comfortable.

Pay It Forward

A third option is to push the guesswork entirely to the future and allow users to purchase a set bundle of shows as normal, but with no mention of dates or seats. Instead of setting a calendar for the year, patrons are committing to content: 5 shows, rather than 5 dates. Some organizations have had this in place for early renewals for years, and find it an easy way to serve patrons who are loyal to the organization through thick and thin. Ultimately, this allows both parties to make more informed decisions about their theatergoing habits closer to the show itself, rather than 3 months ahead of an unknown future. That being said, this solution requires a lot of upfront discussion within the organization, and with patrons, about what might happen if patrons cannot attend the dates they are assigned, whether due to conflicts or continued safety concerns.

Anyone who’s remotely involved in the arts & culture sector will not be surprised that there is no one-size-fits-all solution. Some organizations will enjoy the straightforwardness of mixing packages, others will want to allow for uncertainty and opt for the voucher system or the pay-it-forward option, and still others will come up with any number of alternate approaches to this issue. And of course, these solutions are all dependent on an optimistic future which is still a huge question mark: some areas are opening up, others are extending their closures, previously bankable organizations are filing for bankruptcy, and for every positive trend in cases there’s a spike somewhere else. In the game of whack-a-mole that is covid-19, the path towards reopening, and specifically a positive subscriber experience, is a tightrope: business rules will need to be clearly defined, messaging carefully considered, and customer service well briefed on the new practices. No matter what solution organizations opt for, it will need to be tailor-fitted so that the patron relationships which will keep theater alive beyond this pandemic can be cultivated. Otherwise they run the risk of patrons feeling milked for money, and lemming-marched into the theater.

July 08, 2020

In Part One, I explored the time of transition from Mac OS 8 to Mac OS X (not a typo: Mac OS 9 came out during the transition period). From a software development perspective, this included the Carbon and Cocoa UI frameworks. I mooted the possibility that Apple’s plan was “erm, actually, Java” and that this failed to come about not because Apple didn’t try, but because developers and users didn’t care. The approach described by Steve Jobs, of working out what the customer wants and working backwards to the technology, allowed them to be flexible about their technology roadmap and adapt to a situation where Cocoa on Objective-C, and Carbon on C and C++, were the tools of choice.[*]

So this time, we want to understand what the plan is. The technology choices available are, in the simplistic version: SwiftUI, Swift Catalyst/UIKit, ObjC Catalyst/UIKit, Swift AppKit, ObjC AppKit. In the extended edition, we see that Apple still supports the “sweet solution” of Javascript on the web, and despite trying previously to block them still permits various third-party developer systems: Javascript in React Native, Ionic, Electron, or whatever’s new this week; Xamarin.Forms, JavaFX, Qt, etc.

What the Carbon/Cocoa plan tells us is that this isn’t solely Apple’s plan to implement. They can have whatever roadmap they want, but if developers aren’t on it it doesn’t mean much. This is a good thing: if Apple had sufficient market dominance not to be reasonably affected by competitive forces or market trends, then society would have a problem and the US DOJ or the EU Directorate-General for Competition would have to weigh in. If we don’t want to use Java, we won’t use Java. If enough of us are still using Catalyst for our apps, then they’re supporting Catalyst.

Let’s put this into the context of #heygate.

These apps do not offer in-app purchase — and, consequently, have not contributed any revenue to the App Store over the last eight years.

— Rejection letter from Apple, Inc. to Basecamp

When Steve Jobs talked about canning OpenDoc, it was in the context of a “consistent vision” that he could take to customers to motivate “eight billion, maybe ten billion dollars” of sales. It now takes Apple about five days to make that sort of money, so they’re probably looking for something more than that. We could go as far as to say that any technology that contributes to non-revenue-generating apps is an anti-goal for Apple, unless they can conclusively point to a halo effect (it probably costs Apple quite a bit to look after Facebook, but not having it would be platform suicide, for example).

From Tim Cook’s and Craig Federighi’s height, these questions about “which GUI framework should we promote” probably don’t even show up on the radar. Undoubtedly SwiftUI came up with the SLT before its release, but the conversation probably looked a lot like “developers say they can iterate on UIs really quickly with React, so I’ve got a TPM with a team of ten people working on how we counter that.” “OK, cool.” A fraction of a percent of the engineering budget to nullify a gap between the native tools and the cross-platform things that work on your stack anyway? OK, cool.

And, by the way, it’s a fraction of a percent of the engineering budget because Apple is so gosh-darned big these days. To say that “Apple” has a “UI frameworks plan” is a bit like saying that the US navy has a fast destroyers plan: sure, bits of it have many of them.

At the senior level, the “plan” is likely to be “us” versus “not us”, where all of the technologies you hear of in somewhere like ATP count as “us”. The Java thing didn’t pan out, Sun went sideways in the financial crisis of 2007, how do we make sure that doesn’t happen again?

And even then, it’s probably more like “preferably us” versus “not us, but better with us”: if people want to use cross-platform tools, and they want to do it on a Mac, then they’re still buying Macs. If they support Sign In With Apple, and Apple Pay, then they still “contribute any revenue to the App Store”, even if they’re written in Haskell.

Apple made the Mac a preeminent development and deployment platform for Java technology. One year at WWDC I met some Perl hackers in a breakout room, then went to the Presidio to watch a brown-bag session by Python creator Guido van Rossum. When Rails became big, everyone bought a Mac laptop and a TextMate license, to edit their files for their Linux web apps.

Apple lives in an ecosystem, and it needs help from other partners, it needs to help other partners. And relationships that are destructive don’t help anybody in this industry as it is today. … We have to let go of this notion that for Apple to win, Microsoft has to lose, OK? We have to embrace the notion that for Apple to win, Apple has to do a really good job.

— Steve Jobs, 1997


[*] even this is simplistic. I don’t want to go overboard here, but definitely would point out that Apple put effort into supporting Swing with native-esque controls on Java, language bridges for Perl, Python, Ruby, an entire new runtime for Ruby, in addition to AppleScript, Automator, and a bunch of other programming environments for other technologies like I/O Kit. Like the man said, sometimes the wrong choice is taken, but that’s good because at least it means someone was making a decision.

Tweets by substrakt

No CEO dominated a market without a plan, but no market was dominated by following the plan.

— I made this quote up. Let’s say it was Rockefeller or someone.

In Accidental Tech Podcast 385: Temporal Smear, John Siracusa muses on what “the plan” for Apple’s various GUI frameworks might be. In summary, and I hope this is a fair representation, he says that SwiftUI is modern, works everywhere but isn’t fully-featured, UIKit (including Mac Catalyst) is not as modern, not as portable, but has good feature coverage, and AppKit is old, works only on Mac, but is the gold standard for capability in Mac applications.

He compares the situation now with the situation in the first few years of Mac OS X’s existence, when Cocoa (works everywhere, designed in mid-80s, not fully-featured) and Carbon (works everywhere, designed in slightly earlier mid-80s, gold standard for Mac apps) were the two technologies for building Mac software. Clearly “the plan” was to drop Carbon, but Apple couldn’t tell us that, or wouldn’t tell us that, while important partners were still writing software using the library.

This is going to be a two-parter. In part one, I’ll flesh out some more details of the Carbon-to-Cocoa transition to show that it was never this clear-cut. Part two will take this model and apply it to the AppKit-to-SwiftUI transition.

A lot of “the future” in NeXT-merger-era Apple was based on neither C with Toolbox/Carbon nor Objective-C with OPENSTEP/Yellow Box/Cocoa, but on Java. NeXT had only released WebObjects a few months before the merger announcement in December 1996, but around merger time they released WO 3.1 with very limited Java support. A year later came WO 3.5 with full Java support (on Yellow Box for Windows, anyway). By May 2001, a few weeks after the GM release of Mac OS X 10.0, WebObjects 5 was released, having been completely rewritten in Java.

Meanwhile, Java was also important on the client. A January 1997 joint statement by NeXT and Apple mentions ObjC 0 times, and Java 5 times. Apple released the Mac Run Time for Java on that day, as well as committing to “make both Mac OS and Rhapsody preeminent development and deployment platforms for Java technology”—Rhapsody was the code-name-but-public for NeXT’s OS at Apple.

The statement also says “Apple plans to carry forward key technologies such as OpenDoc”, which clearly didn’t happen, and led to this exchange which is important for this story:

One of the things I’ve always found is that you’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re gonna try to sell it. And I’ve made this mistake probably more than anybody else in this room, and I’ve got the scar tissue to prove it.

Notice that the problem here is that Apple told us the plan, did something else, and made a guy with a microphone very unhappy. He’s unhappy not at Gil Amelio for making a promise he couldn’t keep, but at Steve Jobs for doing something different.

OpenDoc didn’t carry on, but Java did. In the Rhapsody developer releases, several apps (including TextEdit, which was sample code in many releases of OpenStep, Rhapsody and Mac OS X) were written in Yellow Box Java. In Mac OS X 10.0 and 10.1, several apps were shipped using Cocoa-Java. Apple successfully made Mac OS and Rhapsody (a single) preeminent development and deployment platform for Java technology.

I do most of my engineering on my PowerBook … it’s got fully-functional, high-end software development tools.

— James Gosling, creator of Java

But while people carried on using Macs, and people carried on using Java, and people carried on using Macs to use Java, few people carried on using Java to make software for Macs. It did happen, but not much. Importantly, tastemakers in the NeXT developer ecosystem who were perfectly happy with Objective-C thank you carried on being perfectly happy with Objective-C, and taught others how to be happy with it too. People turned up to WWDC in t-shirts saying [objc retain];. Important books on learning Cocoa said things like:

The Cocoa frameworks were written in and for Objective-C. If you are going to write Cocoa applications, use Objective-C, C, and C++. Your application will launch faster and take up less memory than if it were written in Java. The quirks of the Java bridge will not get in the way of your development. Also, your project will compile much faster.

If you are going to write Java applications, use Swing. Swing, although not as wonderful as Cocoa, was written from the ground up for Java.

— Aaron Hillegass, Cocoa Programming for Mac OS X

Meanwhile, WebObjects for Java was not going great guns either. It still had customers, but didn’t pick up new customers particularly well. Big companies who wanted to pay for a product with the word Enterprise in the title didn’t really think of Apple as an Enterprise-ish company, when Sun or IBM still had people in suits. By the way, one of Sun’s people in suits was Jonathan Schwartz, who had run a company that made Cocoa applications in Objective-C back in the 1990s. Small customers, who couldn’t afford to use products with Enterprise in the title and who had no access to funding after the dot-bomb, were discovering the open source LAMP (Linux, Apache, MySQL, PHP/Perl) stack.

OK, so that’s Cocoa, what about Carbon? It’s not really the Classic Mac OS Toolbox APIs on Mac OS X, it’s some other APIs that are like those APIs. Carbon was available for both Mac OS 8.1+ (as an add-on library) and Mac OS X. Developers who had classic Mac apps still had to work to “carbonise” their software before it would work on both versions.

It took significant engineering effort to create Carbon, effectively rewriting a lot of Cocoa to depend on an intermediate C layer that could also support the Carbon APIs. Apple did this not because it had been part of their plan all along, but because developers looked at Rhapsody with its Cocoa (ObjC and Java) and its Blue Box VM for “classic” apps and said that they were unhappy and wouldn’t port their applications soon. Remember that “you’ve got to start with the customer experience and work backwards to the technology”, and if your customer experience is “I want to use Eudora, Word, Excel, and Photoshop” then that’s what you give ’em.

With this view, Cocoa and Carbon are actually the same age. Cocoa is OpenStep minus Display PostScript (Quartz 2D/Core Graphics taking its place) and with the changes necessary to be compatible with Carbon. Carbon is some MacOS Toolbox-like things that are adapted to be compatible with Cocoa. Both are new to Mac developers in 2001, and neither is furthering the stated goal of making Mac OS a preeminent development and deployment environment for Java.

To the extent that Apple had a technology roadmap, it couldn’t survive contact with their customers and developers—and it was a good thing that they didn’t try to force it. To the extent that they had a CEO-level plan, it was “make things that people want to buy more than they wanted to buy our 1997 products”, and in 2001 Apple released the technology that would settle them on that path. It was a Carbonised app called iTunes.

July 06, 2020

Anti-lock brakes by Graham Lee

Chances are, if you bought a new car or even a new motorcycle within the last few years, you didn’t even get a choice about ABS. It came as standard, and in cars it is legally mandated. Anti-lock brakes work by measuring the rotational deceleration of the wheels, or comparing their rotational velocities. If one wheel is rotating very much slower than the others, or suddenly decelerates, it’s probably about to lock, so the ABS backs off the pressure on the brake for that wheel.
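The decision loop described above can be sketched in a few lines. This is a toy model, not how a real ABS controller works, and every threshold in it is made up:

```python
def abs_adjust(speeds, prev_speeds, pressures, dt=0.01,
               slip_ratio=0.7, max_decel=150.0):
    """Crude ABS step: if a wheel is rotating much slower than the fastest
    wheel, or decelerating abruptly, back off its brake pressure.
    All thresholds are illustrative, not real calibration values."""
    fastest = max(speeds)
    adjusted = []
    for v, v_prev, p in zip(speeds, prev_speeds, pressures):
        decel = (v_prev - v) / dt  # positive means the wheel is slowing
        locking = (fastest > 0 and v < slip_ratio * fastest) or decel > max_decel
        adjusted.append(p * 0.5 if locking else p)  # release half the pressure
    return adjusted
```

A real controller runs this loop many times a second and modulates pressure far more finely than halving it, but the shape of the decision is the same.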

ABS turns everyone into a pretty capable brake operator, in most circumstances. This is great, because many people are not pretty capable at operating brakes, even when they think they are, and ABS makes them better at it. Of course, some people are very capable at it, but ABS levels them too, making them merely pretty capable.

But even a highly capable brake operator can panic, or make mistakes. When that happens, ABS means that the worst effect of their mistake is that they are merely pretty capable.

In some circumstances, having ABS is strictly worse than not having it. An ABS car will take longer to stop on a gravel surface or on snow than a non-ABS car. Cars with ABS tend to hit each other much less often than those without, but tend to run off the road more often than those without. But for most vehicles, the ABS is always-on, even in situations where it will get in your way. Bring up that it is getting in your way, and someone will tell you how much safer it is than not having it. Which is true, in the other situations.

Of course the great thing about anti-lock brakes is that the user experience is the same as what most sub-pretty-capable drivers had before. No need to learn a different paradigm or plan your route differently. When you want to stop, press the thing that makes the car stop very hard.

Something, something, programming languages.

July 04, 2020

Let’s look at other software on the desktop, to understand why there isn’t (as a broad, popular platform) Linux on the desktop, then how there could be.

Over on De Programmatica Ipsum I discussed the difference between the platform business model, and the technology platform. In the platform model, the business acts as a matchmaking agent, connecting customers to vendors. An agricultural market is a platform, where stud farmers can meet dairy farmers to sell cattle, for example.

Meanwhile, when a technology platform is created by a business, it enables a two-sided business model. The (technology) platform vendor sells their APIs to developers as a way of making applications. They sell their technology to consumers with the fringe benefit that these third-party applications are available to augment the experience. The part of the business that is truly a platform model is the App Store, but those came late as an effort to capture a share of the (existing) developer-consumer sales revenue, and don’t really make the vendors platform businesses.

In fact, I’m going to drop the word platform now, as it has these two different meanings. I’ll say “store” or “App Store” when I’m talking about a platform business in software, and “stack” or “software stack” when I’m talking about a platform technology model.

Stack vendors have previously been very protective of their stack, trying to fend off alternative technologies that allow consumers to take their business elsewhere. Microsoft famously “poisoned” Java, an early and capable cross-platform application API, by bundling their own runtime that made Java applications deliberately run poorly. Apple famously added a clause to their store rules that forbade any applications made using off-stack technology.

Both of these situations are now in the past: Microsoft have even embraced some cross-platform technology options, making heavy use of Electron in their own applications and even integrating the Chromium rendering engine into their own browser to increase compatibility with cross-platform technology and reduce the cost of supporting those websites and applications made with Javascript. Apple have abandoned that “only” clause in their rules, replacing it with a collection of “but also” rules: yes you can make your applications out of whatever you want, but they have to support sign-in and payment mechanisms unique to their stack. So a cross-stack app is de jure better integrated in Apple’s sandpit.

These actions show us how these stack vendors expect people to switch stacks: they find a compelling application, they use it, they discover that this application works better or is better integrated on another stack, and so they change to it. If you’re worried about that, then you block those applications so that your customers can’t discover them. If you’re not worried about that, then you allow the technologies, and rely on the fact that applications are commodities and nobody is going to find a “killer app” that makes them switch.

Allowing third-party software on your own stack (cross-stack or otherwise) comes with a risk, that people are only buying your technology as an incidental choice to run something else, and that if it disappears from your stack, those customers might go away to somewhere that it is available. Microsoft have pulled that threat out of their briefcase before, settling a legal suit with Apple after suggesting that they would remove Word and Excel from the Mac stack.

That model of switching explains why companies that are otherwise competitors seem willing to support one another by releasing their own applications on each others’ stacks. When Apple and Microsoft are in competition, we’ve already seen that Microsoft’s applications give them leverage over Apple: they also allow Apple customers to be fringe players in the Microsoft sandpit, which may lead them to switch (for example when they see how much easier it is for their Windows-using colleagues to use all of the Microsoft collaboration tools their employers use). But Apple’s applications also give them leverage over Microsoft: the famed “halo effect” of Mac sales being driven by the iPod fits this model: you buy an iPod because it’s cool, and you use iTunes for Windows. Then you see how much better iTunes for Mac works, and your next computer is a Mac. The application is a gateway to the stack.

What has all of this got to do with desktop Linux? Absolutely nothing, and that’s my point. There’s never been a “halo effect” for the Free Software world because there’s never been a nucleus around which that halo can form. The bazaar model does a lot to ensure that. Let’s take a specific example: for many people, Thunderbird is the best email client you can possibly get. It also exists on multiple stacks, so it has the potential to be a “gateway” to desktop Linux.

But it won’t be. The particular bazaar hawkers working on Thunderbird don’t have any particular commitment to the rest of the desktop Linux stack: they’re not necessarily against it, but they’re not necessarily for it either. If there’s an opportunity to make Thunderbird better on Windows, anybody can contribute to exploit that opportunity. At best, Thunderbird on desktop Linux will be as good as Thunderbird anywhere else. Similarly, the people in the Nautilus file manager area of the bazaar have no particular commitment to tighter integration with Thunderbird, because their users might be using GNUMail or Evolution.

At one extreme, the licences of software in the bazaar dissuade switching, too. Let’s say that CUPS, the common UNIX printing subsystem, is the best way to do printing on any platform. Does that mean that, say, Mac users with paper-centric workflows or lifestyles will be motivated to switch to desktop Linux, to get access to CUPS? No, it means Apple will take advantage of the CUPS licence to integrate it into their stack, giving them access to the technology.

The only thing the three big stack vendors seem to agree on when it comes to free software licensing is that the GPL version 3 family of licences is incompatible with their risk appetites, particularly their weaponised patent portfolios. So that points to a way to avoid the second of these problems blocking a desktop Linux “halo effect”. Were there a GPL3 killer app, the stack vendors probably wouldn’t pick it up and integrate it. Of course, with no software patent protection, they’d be able to reimplement it without problem.

But even with that dissuasion, we still find that the app likely wouldn’t be a better experience on a desktop Linux stack than on Mac, or on Windows. There would be no halo, and there would be no switchers. Well, not no switchers, but probably no more switchers.

Am I minimising the efforts of consistency and integration made by the big free software desktop projects, KDE and GNOME? I don’t think so. I’ve used both over the years, and I’ve used other desktop environments for UNIX-like systems (please may we all remember CDE so that we never repeat it). They are good, they are tightly integrated, and thanks to the collaboration on specifications in the Free Desktop Project they’re also largely interoperable. What they aren’t is breakout. Where Thunderbird is a nucleus without a halo, Evolution is a halo without a nucleus: it works well with the other GNOME tools, but it isn’t a lighthouse attracting users from, say, Windows, to ditch the rest of their stack for GNOME on Linux.

Desktop Linux is a really good desktop stack. So is, say, the Mac. You could get on well with either, but unless you’ve got a particular interest in free software, or a particular frustration with Apple, there’s no reason to switch. Many people do not have that interest or frustration.

July 03, 2020

July 02, 2020

Every week, I sit down with a coffee and work through this list. It takes anywhere between 20 and 60 minutes, depending on how much time I have.

Process

  • Email inbox
  • Things inbox (Things is my task manager of choice)
  • FreeAgent (invoicing, expenses)
  • Trello (sales pipeline, active projects)
  • Check backups

Review

  • Bank accounts
  • Quarterly goals
  • Yearly theme
  • Calendar
  • Things
    • Ensure all active projects are relevant and have a next action
    • Review someday projects
    • Check tasks have relevant tags

Plan

  • Write out this week’s goals & intentions
  • Answer Mastermind Automatic Check-ins
    • Did you read, watch or listen to anything that you think the rest of the group might find useful or interesting?
    • What did you accomplish or learn this week?
    • What are your plans for the week?

This is one of the checklists I use to run my business.

If you have feedback, get in touch – I’d love to hear it.

These are the questions I ask a client to help guide our working engagement. It helps me understand the client’s needs and scope out a solution. I pick and choose the items relevant to the engagement at hand.

  • Describe your company and the products or services your business provides
  • When does the project need to be finished? What is driving that?
  • Who are the main contacts for this project and what are their roles?
  • What are the objectives for this project? (e.g. increase traffic, improve conversion rate, reduce time searching for information)
  • How will we know if this project has been a success? (e.g. 20% increase in sales, 20% increase in traffic)
  • If this project fails, what is most likely to have derailed us?
  • How do you intend to measure this?
  • What is your monthly traffic/conversions/revenues?
  • How have you calculated the value of this project internally?
  • What budget have you set for this project?
  • Who are your closest competitors?

This is one of the checklists I use to run my business. I run these answers past my Minimum Level of Engagement.

If you have feedback, get in touch – I’d love to hear it.

July 01, 2020



Leadership changes at Made

It’s been an odd few months for us here at Made Media, and I’m sure it has for all of you as well. Our clients all around the world have temporarily closed their doors to the public, and have gone through waves of grief, hope, and exhaustion. At the same time, our team has been busy working through the final stages of some big new projects, that are all due to launch in the next month. Tough times, new beginnings.

While all of us in the arts and cultural sector are struggling with the impact of Covid-19, we have also seen greatly increased demand for our Crowdhandler Virtual Waiting Room product. As a result of this we have made the decision to spin this product out from our core agency business in order to give it the focus and investment it deserves. Effective 1 July, we are setting up a new Crowdhandler company, majority-owned by Made, and we are making the following leadership changes:

I will become CEO of the new Crowdhandler company. It’s with a mixture of excitement and nostalgia that I am stepping down as CEO of Made Media today, after eighteen years, taking up the new role of non-executive Chair. A team of developers and cloud engineers from Made Media are moving with me to join the new Crowdhandler company.

James Baggaley, currently Managing Director, becomes CEO of Made Media. James joined Made in 2017 as Strategy Director, and since then has worked with Made clients around the world on some of our most involved projects. He has a long background of working in, with, and for arts and cultural organisations, with a focus on digital, e-commerce and ticketing strategy.

Meanwhile the Made team is on hand to help our clients as they navigate the coming months and beyond, and we can’t wait for all of these brilliant organisations to start welcoming audiences again.

June 30, 2020

Accessible data tables by Bruce Lawson (@brucel)

I’ve been working on a client project and one of the tasks was remediating some data tables. As I began researching the subject, it became obvious that most of the practical, tested advice comes from my old mates Steve Faulkner and Adrian Roselli.

I’ve collated them here so they’re in one place when I need to do this again, and in case you’re doing similar. But all the glory belongs with them. Buy them a beer when you next see them.

Keeping Politics Out of Tech

It’s uncomfortable to think about oppression. It’s extremely uncomfortable to consider the ways in which you might be complicit in that oppression.

However, everything you say and everything you do is a vote for the kind of person you want to be and the kind of world you want to live in. Silence and inaction are a vote for the status quo.

I don’t want to be the kind of person who is content to tell people to suffer in silence because I have the privilege to treat the politics that have tangible, negative effects on their lives as a thought experiment.

Things I Read

June 29, 2020



Announcing CultureCast

The rise of Covid-19 has forced cultural and arts organisations around the world to rapidly move their events online. In response to that, we have developed an easy to use, cloud-based paywall application to control access to your online video content.

It’s called CultureCast and it lets you control who has access to your video content, and sell access on a pay-per-view basis. It’s a standalone application, and you don’t have to be an existing Made client to start using it!

Tessitura integration

CultureCast is currently available exclusively for Tessitura users, because it integrates with the Tessitura REST API in order to authenticate and identify users. You can sell video access via ticket purchases in Tessitura, and control access to videos via constituency codes.

Customisable

You can control the look and feel of CultureCast to match your brand, for no extra cost. You can also mask the URL with a dedicated subdomain (e.g., ondemand.yourvenue.org) for a one-off setup fee.

Video embed

You can embed video content into CultureCast from any online video provider that supports oEmbed. Vimeo and Brightcove are supported out of the box, and we’re working on built-in support for further providers.
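oEmbed works by querying a provider’s endpoint with the video’s URL and getting JSON (including an embeddable `html` snippet) back. As a sketch of that first step, here is how a consumer might build the request URL using Vimeo’s documented public endpoint; how CultureCast does this internally is not something the post describes:

```python
from urllib.parse import urlencode

# Vimeo's oEmbed endpoint is publicly documented; support for any other
# provider would need its endpoint added here.
OEMBED_ENDPOINTS = {
    "vimeo.com": "https://vimeo.com/api/oembed.json",
}

def oembed_request_url(video_url, maxwidth=None):
    """Build the oEmbed request URL for a known provider."""
    for domain, endpoint in OEMBED_ENDPOINTS.items():
        if domain in video_url:
            params = {"url": video_url}
            if maxwidth:
                params["maxwidth"] = maxwidth
            return endpoint + "?" + urlencode(params)
    raise ValueError("no known oEmbed endpoint for " + video_url)
```

Fetching that URL returns JSON whose `html` field is the embed markup to drop into the page.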

Stripe payments

Customer payments are straightforward. You can process in-app payments via Stripe (with support for ApplePay and other mobile payment methods), with a Stripe account in your name and with the money automatically paid out to you on a rolling basis. You can also control the look and feel of the confirmation emails within the Stripe dashboard.

Pricing

There are no set-up costs to start using CultureCast.

You simply pay a service fee, set as a percentage of the revenue taken through the app. You are also responsible for paying Stripe payment processing fees, and for the cost of your video hosting platform. You don’t need to pay anything to get started, although there is a minimum charge once you reach over 1,000 video views per month via CultureCast.
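As a back-of-envelope illustration of how that fee structure composes (every rate and threshold below is an invented placeholder, not CultureCast’s or Stripe’s actual pricing):

```python
def monthly_cost(revenue, views, transactions=0, service_rate=0.05,
                 stripe_rate=0.029, stripe_fixed=0.20, minimum=50.0):
    """Sketch of a revenue-share fee with a minimum charge above a
    views threshold, plus per-transaction payment processing fees.
    All numbers are hypothetical."""
    service_fee = revenue * service_rate
    if views > 1000:  # minimum charge only kicks in past 1,000 views/month
        service_fee = max(service_fee, minimum)
    stripe_fees = revenue * stripe_rate + transactions * stripe_fixed
    return round(service_fee + stripe_fees, 2)
```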

This is just the beginning: we’re excited to continue development of CultureCast over the coming months, and for that we need your feedback. So please try it out and let us know what you think!

If you’d like to get more details, check out our website at https://culturecast.tv, or drop us an email at culturecast@made.media.

June 28, 2020

These are the questions I ask myself to determine if a client will be a good fit to work with me.

  • Do they have a sufficient budget?
  • Is it a project I can add value to?
  • Do they have reasonable expectations?
  • Do they understand what’s involved (both their end and my end)?
  • Will this help me grow my skills and experience?
  • Will this help me with future sales?
  • Do I think this business/idea will succeed?
  • Do I have enough time to do my best work under their deadline?
  • Do they have clear goals for the project?
  • How organised are they?

This is one of the checklists I use to run my business.

If you have feedback, get in touch – I’d love to hear it.

The following are the checklists I use to run my business.

Each checklist has a few simple criteria:

  • It should be clear what the checklist is for and when it should be used
  • The checklist should be as short as possible
  • The wording should be simple and precise

Why use a checklist?

I’ve previously written about the importance of a checklist. I summarised by saying:

The problem is that our jobs are too complex to be carried out by memory alone. The humble checklist provides defence against our own limitations in more tasks than we might realise.

David Perell in The Checklist Habit:

The less you have to think, the better. When you’re fully dependent on your memory, consider making a checklist. Don’t under-estimate the compounding benefits of fixing small operational leaks. Externalizing your processes is the best way to see them with clear eyes. Inefficiencies stand out like a lone tree on a desert plateau. Once the checklist is made, almost anybody in the organization can perform that operation.

The Checklist Manifesto is a great book on the subject.

Pre-project checklists

Minimum level of engagement
Questions to determine if a client will be a good fit.

Questions to ask before a client engagement
Questions to run through before engaging on a project.

Project Starter Checklist (coming soon)
Things required before starting a project.

Website launch

Pre-launch Checklist (coming soon)
Run through this checklist before launching a website.

Post-launch Checklist (coming soon)
Run through this checklist after launching a website.

End of project

Project Feedback (coming soon)
Questions once the project has been completed.

Testimonials (coming soon)
A series of questions to ask to get good testimonials.

Reviews

Weekly review

Quarterly Review (coming soon)

Annual Review (coming soon)

June 27, 2020

WWDC2020 was the first WWDC I’ve been to in, what, five years? Whenever I last went, it was in San Francisco. There’s no way I could’ve got my employer to expense it this year had I needed to go to San Jose, nor would I have personally been able to cover the costs of physically going. So I wouldn’t even have entered the ticket lottery.

Lots of people are saying that it’s “not the same” as physically being there, and that’s true. It’s much more accessible than physically being there.

For the last couple at least, Apple have done a great job of putting the presentations on the developer site with very short lag. But remotely attending has still felt like being the remote worker on an office-based team: you know you’re missing most of the conversations and decisions.

This time, everything is remote-first: conversations happen on social media, or in the watch party sites, or wherever your community is. The bundling of sessions released once per day means there’s less of a time zone penalty to being in the UK, NZ, or India than in California or Washington state. Any of us who participated are as much of a WWDC attendee as those within a few blocks of the McEnery or Moscone convention centres.

June 26, 2020

Reading List 261 by Bruce Lawson (@brucel)

June 25, 2020

June 24, 2020

I sometimes get asked to review, or “comment on”, the architecture for an app. Often the app already exists, and the architecture documentation consists of nothing more than the source code and the folder structure. Sometimes the app doesn’t exist, and the architecture is a collection of hopes and dreams expressed on a whiteboard. Very, very rarely, both exist.

To effectively review an architecture and make recommendations for improving it, we need much more information than that. We need to know what we’re aiming for, so that we can tell whether the architecture is going to support or hinder those goals.

We start by asking about the functional requirements of the application. Who is using this, what are they using it for, how do they do that? Does the architecture make it easy for the programmers to implement those things, for the testers to validate those things, for whoever deploys and maintains the software to provide those things?

If you see an “architecture” that promotes the choice of technical implementation pattern over the functionality of the system, it’s getting in the way. I don’t need to know that you have three folders of Models, Views and Controllers, or of Actions, Components, and Containers. I need to know that you let people book children’s weather forecasters for wild atmospheric physics parties.

We can say the same about non-functional requirements. When I ask what the architecture is supposed to be for, a frequent response is “we need it to scale”. How? Do you want to scale the development team? By doing more things in parallel, or by doing the same things faster, or by requiring more people to achieve the same results? Hold on, did you want to scale the team up or down?

Or did you want to scale the number of concurrent users? Have you tried… y’know, selling the product to people? Many startups in particular need to learn that a CRM is a better tool for scaling their web app than Kubernetes. But anyway, I digress. If you’ve got a plan for getting to a million users, and it’s a realistic plan, does your architecture allow you to do that? Does it show us how to keep that property as we make changes?

Those important things that you want your system to do. The architecture should protect and promote them. It should make it easy to do the right thing, and difficult to regress. It should prevent going off into the weeds, or doing work that counters those goals.

That means that the system’s architecture isn’t really about the technology, it’s about the goals. If you show me a list of npm packages in response to questions about your architecture, you’re not showing me your architecture. Yes, I could build your system using those technologies. But I could probably build anything else, too.
