Last updated: December 18, 2018 03:22 PM (All times are UTC.)

December 18, 2018

Cleaner Code by Graham Lee

Readers of OOP the easy way will be familiar with the distinction between object-oriented programming and procedural programming. You will have read, in that book, about how what gets called OOP in the sentence “OOP has failed” is actually procedural programming: imperative code that you could write in Pascal or C, with the word “class” used to introduce modularity.

Here’s an example of procedural-masquerading-as-OOP, from Robert C. Martin’s blog post FP vs. OO List Processing:

void updateHits(World world){
  nextShot:
  for (shot : world.shots) {
    for (klingon : world.klingons) {
      if (distance(shot, klingon) <= type.proximity) {
        world.shots.remove(shot);
        world.explosions.add(new Explosion(shot));
        klingon.hits.add(new Hit(shot));
        break nextShot;
      }
    }
  }
}

The first clue that this is a procedure, not a method, is that it isn’t attached to an object. The first change on the road to object-orientation is to make this a method. Its parameter is an instance of World, so maybe it wants to live there.

public class World {
  //...

  public void updateHits(){
    nextShot:
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingons) {
        if (distance(shot, klingon) <= shot.getProximity()) {
          this.shots.remove(shot);
          this.explosions.add(new Explosion(shot));
          klingon.hits.add(new Hit(shot));
          break nextShot;
        }
      }
    }
  }
}

The next non-object-oriented feature is this free distance procedure floating about in the global namespace. Let’s give the Shot the responsibility of knowing how its proximity fuze works, and the World the knowledge of where the Klingons are.

public class World {
  //...

  private Set<Klingon> klingonsWithin(Region influence) {
    //...
  }

  public void updateHits(){
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingonsWithin(shot.getProximity())) {
        this.shots.remove(shot);
        this.explosions.add(new Explosion(shot));
        klingon.hits.add(new Hit(shot));
      }
    }
  }
}

Cool, we’ve got rid of that spaghetti code label (“That’s the first time I’ve ever been tempted to use one of those,” says Martin). Incidentally, we’ve also turned “loop over all shots and all Klingons” into “loop over all shots and nearby Klingons”. The World can maintain an index of the Klingons by location, using a k-dimensional tree, so that searching for nearby Klingons is logarithmic in the number of Klingons rather than linear.
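
To make that concrete, here is a minimal sketch of such an index (all names here are hypothetical, not from the post; a real World would also need to update the index as Klingons move):

```java
import java.util.ArrayList;
import java.util.List;

// A minimal 2-D k-d tree, sketching how a World could index Klingons by
// position so that klingonsWithin() is logarithmic rather than linear.
final class KdTree {
    private static final class Node {
        final double x, y;
        Node left, right;
        Node(double x, double y) { this.x = x; this.y = y; }
    }

    private Node root;

    public void insert(double x, double y) {
        root = insert(root, x, y, 0);
    }

    private Node insert(Node n, double x, double y, int depth) {
        if (n == null) return new Node(x, y);
        // Alternate the splitting axis at each level of the tree.
        boolean goLeft = (depth % 2 == 0) ? x < n.x : y < n.y;
        if (goLeft) n.left = insert(n.left, x, y, depth + 1);
        else        n.right = insert(n.right, x, y, depth + 1);
        return n;
    }

    // Collect all points within `radius` of (cx, cy), pruning any subtree
    // whose splitting plane lies further away than the radius.
    public List<double[]> within(double cx, double cy, double radius) {
        List<double[]> hits = new ArrayList<>();
        search(root, cx, cy, radius, 0, hits);
        return hits;
    }

    private void search(Node n, double cx, double cy, double r, int depth,
                        List<double[]> hits) {
        if (n == null) return;
        double dx = n.x - cx, dy = n.y - cy;
        if (dx * dx + dy * dy <= r * r) hits.add(new double[]{n.x, n.y});
        double planeDist = (depth % 2 == 0) ? dx : dy;
        if (planeDist > -r) search(n.left, cx, cy, r, depth + 1, hits);
        if (planeDist < r)  search(n.right, cx, cy, r, depth + 1, hits);
    }
}
```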

By the way, was it weird that a Shot would hit whichever Klingon we found first near it, then disappear, without damaging other Klingons? That’s not how Explosions work, I don’t think. As it stands, we now have a related problem: a Shot will disappear n times if it hits n Klingons. I’ll leave that as it is, carry on tidying up, and make a note to ask someone what should really happen when we’ve discovered the correct abstractions. We may want to make removing a Shot an idempotent operation, so that we can damage multiple Klingons and only end up with a Shot being removed once.
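
Idempotent removal could be sketched like this (names hypothetical): the collection remembers whether the shot is still live, so spending it a second time has no further effect.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// A sketch of idempotent Shot removal: removing the same shot twice has no
// further effect, so hitting several Klingons in one pass only ever spends
// the shot, and spawns its Explosion, once.
final class Shots {
    private final Set<String> live = new LinkedHashSet<>();
    private int explosions = 0;

    void fire(String shot) { live.add(shot); }

    // Set.remove returns false the second time around, so the explosion
    // side effect happens at most once per shot.
    void spend(String shot) {
        if (live.remove(shot)) {
            explosions++;
        }
    }

    int explosionCount() { return explosions; }
    boolean isLive(String shot) { return live.contains(shot); }
}
```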

There’s a Law of Demeter violation, in that the World knows how a Klingon copes with being hit. This unreasonably couples the implementations of these two classes, so let’s make it our responsibility to tell the Klingon that it was hit.

public class World {
  //...

  private Set<Klingon> klingonsWithin(Region influence) {
    //...
  }

  public void updateHits(){
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingonsWithin(shot.getProximity())) {
        this.shots.remove(shot);
        this.explosions.add(new Explosion(shot));
        klingon.hit(shot);
      }
    }
  }
}

No, better idea! Let’s make the Shot hit the Klingon. Also, make the Shot responsible for knowing whether it disappeared (how many episodes of Star Trek are there where photon torpedoes get stuck in the hull of a ship?), and whether/how it explodes. Now we will be in a position to deal with the question we had earlier, because we can ask it in the domain language: “when a Shot might hit multiple Klingons, what happens?”. But I have a new question: does a Shot hit a Klingon, or does a Shot explode and the Explosion hit the Klingon? I hope this starship has a business analyst among its complement!

We end up with this World:

public class World {
  //...

  public void updateHits(){
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingonsWithin(shot.getProximity())) {
        shot.hit(klingon);
      }
    }
  }
}

But didn’t I say that the shot understood the workings of its proximity fuze? Maybe it should search the World for nearby targets.

public class World {
  //...

  public void updateHits(){
    for (Shot shot : this.shots) {
      shot.hitNearbyTargets();
    }
  }
}
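
For illustration, here is one hedged guess at how the pieces might now fit together, with the Shot owning its fuze and asking the World for targets in range (every name below is hypothetical, not from the post):

```java
import java.util.List;

// A sketch of the final shape: the Shot, not the World, decides what
// "nearby" means, and the World is reduced to a queryable index of targets.
final class Sketch {
    interface Target { void hit(double damage); }

    interface TargetIndex {
        List<Target> targetsWithin(double x, double y, double radius);
    }

    static final class Shot {
        final double x, y, proximity, yield;
        Shot(double x, double y, double proximity, double yield) {
            this.x = x; this.y = y; this.proximity = proximity; this.yield = yield;
        }
        // The Shot owns its proximity fuze and hits everything in range.
        void hitNearbyTargets(TargetIndex world) {
            for (Target t : world.targetsWithin(x, y, proximity)) {
                t.hit(yield);
            }
        }
    }
}
```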

As described in the book, OOP is not about adding the word “class” to procedural code. It’s a different way of working, in which you think about the entities you need to model to solve your problem, and give them agency. Obviously the idea of “clean code” is subjective, so I leave it to you to decide whether the end state of this method is “cleaner” than the initial state. I’m happy with one fewer loop, no conditions, and no Demeter-breaking coupling. But I’m also happy that the “OO” example is now object-oriented. It’s now looking a lot less like enterprise software, and a lot more like Enterprise software.

December 17, 2018

Woah, too many products. Let me explain. No, it will take too long, let me summarise.

Sometimes, people running software organisations call their teams “product teams”, and organise them around particular “products”. I do not believe that this is a good idea. Because we typically aren’t making products, we’re solving problems.

The difference is that a product is “done”. If you have a “product team”, they probably have a “definition of done”, and then release software that has satisfied that definition. Even where that’s iterative and incremental, it leads to there being a “product”. The thing that’s live represents as much of the product as has been done.

The implications of there being a “product” that is partially done include optimising for getting more “done”. Particularly, we will prioritise adding new stuff (getting more “done”) over fixing old stuff (shuffling the deckchairs). We will target productish metrics, like number of daily actives and time spent.

Let me propose an alternative: we are not making products, we are solving problems. And, as much out of honesty as job preservation, let me assure you that the problems are very difficult to solve. They are problems in cybernetics, in other words in communication and control in a complex system. The system is composed of three identifiable, interacting subsystems:

  1. The people who had the problem;
  2. The people who are trying to solve the problem;
  3. The software created to present the current understanding of the solution.

In this formulation, we don’t want “amount of product” to be a goal, we want “sufficiency of solution” to be a goal. We accept that the software does not represent the part of the “product” that has been “done”. The software represents our best effort to date at modelling our understanding of the solution as we comprehend it to date.

We therefore accept that adding more stuff (extending the solution) is one approach we could consider, along with fixing old stuff (reflecting new understanding in our work). We accept that introducing the software can itself change the problem, and that more people using it isn’t necessarily a goal: maybe we’ve helped people to understand that they didn’t actually need that problem solved all along.

Now our goals can be more interesting than bushels of software shovelled onto the runtime furnace: they can be about sufficiency of the solution, empowerment of the people who had the problem, and improvements to their quality of life.

December 14, 2018

Reading List by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my underwear so I can spend time reading stuff.

Despite the theory that everything can be done in software (and of course, anything that can’t be done could in principle be approximated using numerical methods, or fudged using machine learning), software engineering itself, the business of writing software, seems to be full of tools that are accepted as de facto standards but, nonetheless, begrudgingly accepted by many teams. What’s going on? Why, if software is eating the world, hasn’t it yet found an appealing taste for the part of the world that makes software?

Let’s take a look at some examples. Jira is very popular among many people. I found a blog post literally called Why I Love Jira. And yet, other people say that Jira is an anti-pattern, a sentiment that gets reasonable levels of community support.

Jenkins is almost certainly the (“market”, though it’s free) leader among continuous delivery tools, a position it has occupied since ousting Hudson, from which it was forked. Again, it’s possible to find people extolling the virtues and people hating on it.

Lastly, for some quantitative input, we can find that according to the Stack Overflow 2018 survey, most respondents (78.9%) love Rust, but most people use JavaScript (69.8%). From this we draw the interesting conclusion that the most popular tool in the programming language realm is not, actually, the one that wins the popularity contest.

So, weird question, why does everybody do this to themselves? And then more specifically, why is your team doing it to yourselves, and what can you do about it?

My hypothesis is that all of these tools succeed because they are highly configurable. I mean, JavaScript is basically a configuration language for Chromium (don’t @ me) to solve/cause your problem. Jira’s workflows are ridiculously configurable, and if Jenkins doesn’t do what you want then you can find a plugin to do it, write a plugin to do it or make a Groovy script that will do it.

This appeals to the desire among software engineers to find generalisations. “Look,” we say, “Jenkins is popular, it can definitely be made to do what we want, so let’s start there and configure it to our needs”.

Let’s take the opposing view for the moment. I’m going to drop the programming language example of JS/Rust, because all programming languages are, roughly speaking, entirely interchangeable. The detail is in the roughness. The argument below still applies, but requires more exposition which will inevitably lead to dissatisfaction that I didn’t cover some weird case. So, for the moment, let’s look at other tools like Jira and Jenkins.

The exact opposing view is that our project is distinct, because it caters to the needs of our customers and their (or these days, probably our) environment, and is understood and worked on by our people with our processes, which is not true for any other project. So rather than pretend that some other tool fits our needs or can be bent into shape, why don’t we build our own?

And, for our examples, building such a tool doesn’t appear to be a big deal. Using the expansive software engineering term “just”, a CD tool is “just” a way to run each step in the deployment pipeline and tell someone when a step fails. A development-tracking tool is “just” a way to list the things the team is or could be working on.
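
To make that “just” concrete, a pipeline runner really is only a few lines; everything a real CD tool adds on top (retries, logs, credentials, agents, a UI) is what you would be signing up to maintain. A hedged sketch, with invented names:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

// The "just" made concrete: a pipeline is an ordered list of named steps;
// run each in turn and report the first one that fails.
final class Pipeline {
    private final Map<String, BooleanSupplier> steps = new LinkedHashMap<>();

    Pipeline step(String name, BooleanSupplier action) {
        steps.put(name, action);
        return this;
    }

    // Returns the name of the first failing step, or null if all passed:
    // "tell someone when a step fails".
    String run() {
        for (Map.Entry<String, BooleanSupplier> e : steps.entrySet()) {
            if (!e.getValue().getAsBoolean()) {
                return e.getKey();
            }
        }
        return null;
    }
}
```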

This is more or less a standard “build or buy” question, with just one level of indirection: both building and buying are actually measured in terms of time. How long would it take the team to write a new CD tool, and to maintain it? How long would it take the team to configure Jenkins, and to maintain it?

The answer should be fairly easy to consider. Let’s look at the map:

We are at x, of course. We are a short way from the Path of Parsimony, the happy path along which the generic tools work out of the box; that distance is marked on the map.

Think about how you would measure that distance for your team. You would consider the expectations of the out-of-the-box tool. You would consider the expectations of your team, and of your project. You would look at how those expectations differ, and try to quantify the result.

This tells you something about the gap between what the tool provides by default and what you need, which will help you quantify the amount of customisation needed (the cost of building a spur out from the Path of Parsimony to x). You can then compare that with the cost of building a tool that supports your position directly (the cost of building your own path, running through x).
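
As a toy illustration of that comparison (every number and name below is invented purely to show the shape of the decision), both options reduce to an up-front cost plus ongoing upkeep, measured in team-days:

```java
// A toy build-or-buy comparison in team-days. The figures are made up;
// the point is only that both paths are an up-front cost plus upkeep.
final class BuildOrBuy {
    // Cost of the spur out to x: initial configuration plus maintaining it.
    static double configure(double gapDays, double yearlyUpkeep, double years) {
        return gapDays + yearlyUpkeep * years;
    }
    // Cost of your own path through x: building the tool plus maintaining it.
    static double build(double buildDays, double yearlyUpkeep, double years) {
        return buildDays + yearlyUpkeep * years;
    }
}
```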

But the map also suggests another option: why don’t we move x closer to the path, and make the distance smaller? Which of our distinct assumptions are incidental and can be abandoned, which are essential and need to be supported, and which are historical and could be revised? Is there a way to change the context so that adopting the popular tool is cheaper?

[Left out of the map but just as important is the related question: has somebody else already charted a different path, and how far are we from that? In other words, is there a different off-the-shelf product which needs less configuration than the one we’ve picked, so the total migration-plus-configuration cost is less than sticking where we are?]

My impression is that these questions tend to get asked once at the start of a project or initiative, then not again until the team is so far from the Path of Parsimony that they are starting to get tangled and stung by the Weeds of Woe. Teams that change tooling, such as their issue tracker or CD pipeline, tend to do it once the existing way is already hurting too much and the route back to the path is no longer clear.

December 12, 2018

December 11, 2018

The Magpie team at Stickee builds clever things (software, mostly) for some of the world’s biggest companies. Following a strong 2018, 2019 is going to...

The post PHP Software Developers for the Magpie team appeared first on stickee.

December 10, 2018

Christmas Gifts for Designers

It’s that time of year where you feel the urge to buy your friends and family gifts, sometimes it’s easy and sometimes it can be a tough old slog. Well, if they’re a designer try buying them some of these cool products.

Panobook Notebook


Panobook is a panoramic notebook for your desk and, eventually, your shelf. Made of quality materials with thoughtful details, it includes a slipcase for archiving when you’re finished. It features a dot grid with device guides and is a seriously high-quality product.

Prices start from $19 each, but it’s worth buying the three-pack for $57.

Learn More >

 

Socks in a Box Subscription

Socksinabox Subscriptions have socks to suit everybody – simply select the range of socks that most suits the person you are buying for. Perfect for the sock lover.

Subscriptions start from £19.99

Learn More > 

 

Pantone Candles

Pantone Scented Candles release a selection of fragrances to suit any mood or taste. They are presented in coloured glass vessels, the colours of which were inspired by the standardised colours of the world-renowned Pantone Colour Matching System book.

Small candles start at around £6

Learn More >

 

UI Stencils Starter Pack

Get your sketch on. This starter pack comes with everything you need to jump-start your prototyping chops. It features your choice of Web, iPhone, or Android Stencil Kits, an accompanying Sketch Pad, a UI Stencils Case and a Pentel P207 Drafting Pencil.

This set will cost you £67  

Learn More >

 

Wireframe Deck

The Wireframe Deck includes 80 2×2″ double-sided cards of common website and UI elements, lo-fi on one side and hi-fi on the other. A great way to get creative away from the computer.

You’ll pay $19 for this.

Learn More >

 

Dot Grid Journal Pack

The Dotgrid.co Journal Pack contains one A5 and one A6 dot grid journal. It also comes with a Staedtler Fineliner pen in a Dotgrid.co branded bag and is available in six fun colours. A great bundled gift for any designer friend.

A bargain at £28!

Learn More >

 

Logi Wireless Charger

Logitech’s Powered wireless charging stand delivers convenient hands-free wireless charging for your iPhone. Its thoughtful design lets you charge your iPhone while still making calls — without ever cutting power. And a U-shaped cradle ensures effortless charging placement every time you rest your iPhone inside. Side-note, great for unlocking with FaceID without touching it.

This is a pretty £60 but will transform your desk.

Learn More >

 

Pixel Ruler

The ultimate tool for responsive screen-size sketching: a heavy-gauge stainless steel ruler with pixel increments, and markers for mobile, tablet and widescreen (laptop) sizes.

If you want to pay $36 for a ruler, this one is for you.

Learn More > 

There are a few ideas for you. Send me any other cool designer-related gifts for future posts.

I cover lots of things on my blog, perhaps you’ll enjoy reading ‘Be a better freelance designer’


December 09, 2018

I frequently meet software teams who describe themselves as “high velocity” (they even have graphs coming from Jira to prove it), and yet their ability to ship great software, to delight their customers, or even to attract customers, doesn’t meet their expectations. A little bit of sleuthing usually uncovers the underlying problem.

Firstly, let’s take a look at that word, “velocity”. I, like Kevlin Henney, have a background in Physics, and therefore I agree with him that Velocity is a vector, and has a direction. But “agile” velocity only measures amount of stuff done to the system over time, not the direction in which it takes the system. That story may be “5 points” when measured in terms of heft, but is that five points of increasing existing customer satisfaction? Five points of new capability that will be demoed at next month’s trade show? Five points of attractiveness to prospects in the sales funnel?

Or is it five points of making it harder for a flagship customer to get their work done? Five points of adding thirty-five points of technical debt work later? Five points of integrating the lead engineer’s pet technology?

All of these things look the same in this model: they all look like five points. And that means that for a “high-velocity” (but really low-velocity, high-speed) team, the natural inclination is to jump on it, get it done, and get those five points under their belt and onto the burn-down chart. The faster they burn everything down, the better they look.
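
The scalar/vector distinction can be made concrete with a toy model (all names and numbers below are invented): give each story a direction as well as a magnitude, and the same fifteen points of “speed” can amount to far less actual velocity.

```java
// A toy model of velocity-as-vector: each story is {points, direction},
// where direction is +1 for "towards where the product should go" and -1
// for "away from it". Speed ignores direction; velocity does not.
final class Velocity {
    static int speed(int[][] stories) {
        int total = 0;
        for (int[] s : stories) total += s[0];          // points only
        return total;
    }
    static int velocity(int[][] stories) {
        int total = 0;
        for (int[] s : stories) total += s[0] * s[1];   // points * direction
        return total;
    }
}
```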

Some of the presenting symptoms of a high-speed, low-velocity team are listed below. If you recognise these in your team, book yourself in for office hours and we’ll see if we can get you unstuck.

  • “The Business”: othering the rest of the company. The team believes that their responsibility is to build the thing that they were asked for, and “the business” needs to tell them what to build, and to sell it.
  • Work to rule: we build exactly what was asked for, no more, no less. If the tech debt is piling up it’s because “the business” (q.v.) doesn’t give us time to fix it. If we built the wrong thing it’s because “the business” put it at the top of the backlog. If we built the thing wrong it’s because the acceptance criteria weren’t made clear before we started.
  • Nearly done == done: look, we know our rolling average velocity is 20 bushels of software, and we only have 14 furlongs and two femtocandela of software to show at this demo. But look over here! These 12 lumens and 4 millitesla of software are in QA, which is nearly done, so we’ve actually been working really hard. The fact that you can’t use any of that stuff is unimportant.
  • Mini-waterfall: related to work to rule (q.v.), this is the requirement that everyone do their bit of the process in order, so that the software team can optimise for requirements in -> software out and get that sweet velocity up. We don’t want to be doing discovery in engineering, because that means uncertainty, uncertainty means rework, and rework means lower velocity.
  • Punitive estimation: we’re going to rename “ambiguity” to “risk”, and then punish our product owner for giving us risky stories by boosting their estimates to account for the “risk”. Such stories will never get scheduled, because we’ll never be asked to do that one risky thing when we can get ten straightforward things done in what we are saying is the same time.
  • Story per dev: as a team, our goal is to shovel as much software onto the runtime furnace as possible. Therefore we are going to fan out the tasks to every individual. We are each capable of wielding our own shovel, and very rarely do we accidentally hit each other in the face while shovelling.

December 07, 2018

Reading List by Bruce Lawson (@brucel)


December 06, 2018

I received an invitation reading

Acme Inc is organizing and sponsoring Timbuktu’s biggest annual developer conference called “Timbuk-Toot!”. Past speakers include Richard Stallman, Morgan Freeman, Humphrey Bogart and Cruella de Vil.

Would you like to give a workshop or a talk at Timbuk-Toot! in February 2019?

Let me know!

And that was it. It’s not nearly enough information to make a decision, so, unless it’s from someone I know or I’ve always wanted to visit Timbuktu, I park it, thinking “I’ll email back and ask them for more information”. Then, of course, the rest of the day happens and I forget about it.

So, if you’re emailing someone to ask them to speak, at a minimum I’d need this information:

  • what do you want me to talk about? What’s the theme of the conference?
  • How long is a talk? How long is a workshop?
  • Are the talks recorded? Are they available to all, e.g. over YouTube?
  • How many people attend? Who attends?
  • You mention Acme Inc; is it an internal Acme staff conference, or open to the public?
  • What are the dates? How can I answer when I’m only told “February 2019”?
  • Are you paying for my flights/accommodation? I suspect not, as you don’t mention it.
  • If you are covering travel expenses, am I expected to pay them up front and claim them back? If so, how many days will it take for me to get my money back? (At the moment, over Xmas, I’m giving a two-month interest-free loan of €1000 to a conference.)
  • If I’m a freelancer, are you paying a fee for my time? I suspect not, as you don’t mention it.
  • Is there a code of conduct? Where is it? Is the speaker line-up inclusive? (Probably not, if you’re not paying expenses or a speaker fee.)

You need to tell me why I should care about it. You might be huge in the Timbuktu web industry, but I don’t know you. So far, all you have done is raise questions, which means I have to reply to you and ask them, on top of all the other things I have to do today.

Don’t make me think!

Donald Knuth is pretty cool. One of the books he wrote that I own and have actually read[*] is Literate Programming, in which he describes (among other things) weaving program text and documentation together in a single narrative.

Two of his books that I own and have sort of dipped into here and there are TeX: the Program, and METAFONT: the Program. These are literate programs, created from webs in which Human text and Computer text are interleaved to tell the story of what the program does.

Human text and computer text, but not images. If you want pictures, you have to carry them around separately. Even though we are highly visual organisms, and many of the programs we produce have significant graphical components, very few programming environments treat images as anything other than external files that can be looked at and maybe previewed. The only programming environment I know of that lets you include images in program source is TempleOS.

I decided to extend the idea of the Literate web to the realm of Figurative Programming. A gloom (graphical loom) web can contain human text, computer text, and image descriptions (e.g. graphviz, plantuml, GLE…) which get included in the human-readable document as figures.

The result is gloom. It’s written in itself, so the easiest way to get started is with the Xcode project at gloomstrap which can extract the proper gloom sources from the gloom web. Alternatively, you can dive in and read the PDF it made about itself.

Because I built gloomstrap first, gloom is really a retelling of that program in a Figurative Programming web, rather than a program that was designed figuratively. Because of that, I don’t really have experience yet of trying to design a system in gloom. My observation was that the class hierarchy I came up with in building gloomstrap didn’t always lend itself to a linear storytelling for inclusion in a web. I expect that were I to have designed it in noweb rather than Xcode, I would have had a different hierarchy or even no classes at all.

Similarly, I didn’t try test-firsting in gloom, and nor did I port the tests that I did write into the web. Instinct tells me that it would be a faff, but I will try it and find out. I think richer expressions of program intention can only be a good thing, and if Figurative Programming is not the way in which that can be done, then at least we will find out something about what to do instead.

[*] Coming up in January’s De Programmatica Ipsum: The Art of _The Art of Computer Programming_, an article about a book that I have _definitely_ read _quite a few bits here and there_ of.

December 03, 2018

Two books by Graham Lee

A member of a mailing list I’m on recently asked: what two books should be on every engineer’s bookshelf? Here’s my answer.

Many software engineers, the ones described toward the end of Code Complete 2, would benefit most from Donald Knuth’s The Art of Computer Programming and Computers and Typesetting. It is truly astounding that one man has contributed so comprehensively to the art of variable-height monitor configurations.

If, to misquote Bill Hicks, “you’ve got yourself a reader”, then my picks are coloured by the fact that I’ve been trying to rehabilitate Object-Oriented Design for the last few years, by re-introducing a couple of concepts that got put aside over the recent decades:

  1. Object orientation; and
  2. Design.

With that in mind, my two recommendations are the early material from that field that I think shows the biggest divergence in thinking. Readers should be asking themselves “are these two authors really writing about the same topic?”, “where is the user of the software system in this book?”, “who are the users of the software system in this book?”, and “do I really need to choose one or other of these models, why not both or bits of both?”

  1. “Object-Oriented Programming: an evolutionary approach” by Brad Cox (there is another edition with Andrew Novobilski as a co-author). Cox’s model is the npm/CPAN model: programmers make objects (“software ICs”), describe their characteristics in a data sheet, and publish them in a catalogue. Integrators choose likely-looking objects from the catalogue and assemble an application out of them.

  2. “Object-Oriented Software Construction” by Bertrand Meyer. Meyer’s model is the “software engineering” model: work out what the system should do, partition that into “classes” based on where the data naturally lives, and design and build those classes. In designing the classes, pay particular attention to the expectations governing how they communicate: the ma, as Alan Kay called the gaps between the objects.

November 30, 2018

InVision Studio. First Impressions.

I made yogamoji in around 3 hours in InVision Studio. Check out the prototype here & Upvote it on Dribbble for me!

InVision Studio

Why now?

I downloaded InVision Studio all the way back in April and left it sitting in my applications folder like a pack of Pot Noodles you’re too embarrassed to eat. I’ve been a dedicated Sketch user for years, so much so that over 12 months ago I stopped paying for Adobe CC, and I have no regrets.

I’ve also been using InVision + Craft + Sketch for years, so I didn’t have much desire to open up Studio and start learning something new. For my UI design needs, Sketch + Craft did everything I wanted. I could make nice designs and prototypes in no time whatsoever, so why would I switch?

Fast forward to today, and UI animations are becoming such an important part of any designer’s workflow that I thought to myself “Heck, I’m going to get left behind here…”. I started my journey by giving Framer X a trial, and while I have a high opinion of Framer X, I needed something to compare it to… so that’s why I picked up InVision Studio.

Here’s my first impressions of InVision Studio 

Familiar


If you’re a Sketch user you’ll be able to pick InVision Studio up in no time: the layout is similar and, most importantly, so are the shortcuts.

I found myself forgetting I wasn’t in Sketch, which was a good thing; Framer X, by contrast, had more of a learning curve, more errors, and more frustration.

I liked that Studio’s interactions were more powerful than Craft’s in Sketch. Studio feels like Sketch with benefits.

Pretty


Studio is beautiful, every pixel has been expertly designed.

Sketch, despite its frequent updates, can at times feel a bit dated in its choice of UI, while Studio feels clean, sharp, crisp and functional.

Everything is tucked out of the way; they’re aware their users are typically clever creative folks, so they won’t mind if a menu is hidden, especially if it’s not used much.

Talking of menus, they’re small and inoffensive. We don’t need a merge button that’s 80 pixels wide, do we? We just need to get to it, and Studio makes access really easy.

Powerful


As I mentioned before, InVision Studio feels like Sketch with benefits, and that brings some serious prototyping power. I’ve been used to basic animations and they served me well, but Studio opens up a whole new dimension of possibilities for how my future prototypes could work.

I can now go and create scrolling elements, animate popups, and move elements around the screen as you interact with other elements… you don’t need me to tell you all this, but it feeds the imagination.

My creative senses are tingling at the prospect of more animation in my projects.

Pains


Sadly, my Mac didn’t enjoy the animation process as much as I did; my poor machine struggled with some of the more complicated animations.

Without getting into it too deeply: you have to make sure both artboards you’re animating between contain the same elements. So if you have a long, image-heavy list, it needs to be on both pages at 0% alpha, which can get in the way and, in my case, caused memory issues.

Side note: I was recording the screen at the same time, so perhaps this was the cause of the mini meltdown.

I missed having all my Sketch plugins, but I’m sure in time Studio will catch up.

But as pains go, pretty minimal.

Conclusion


I don’t really scratch the surface in this post, and many of you have been using it for months and know all this, but my verdict is that Studio is a great alternative to Sketch, especially for those projects that require more animation.

Will it replace Sketch for me? Unsure at this point. Possibly.
Is it better than Framer X? Yes and No. It’s different.
Adobe XD? No idea. Probably will never be in my workflow because of Adobe’s cost.

My next steps would be to use Studio on a professional project and see how I get on, if you want to hire me to design you something, please do so 🙂

Before I sign off… I’ve written about InVision before, so why not read that too?

The post InVision Studio – First Impressions appeared first on .

November 27, 2018

Spotify Playlists to help you work

If you’re like me, you need music or a podcast on in the background to get you working. I had a job years ago that had a no-music rule, and as a result it was like working in a library. That job didn’t keep me for very long.

Over the years I’ve collected a fair few Spotify playlists that I like to throw on to help me work. My music tastes are fairly alternative but hopefully there will be something for everyone here, enjoy.

Alternative 70’s

alternative70s

This playlist is awesome. It’s a full decade before I was even born but contains some of my favourite songs. You’ll find bands like The Cure, Joy Division, Undertones, Iggy Pop and loads more.

Listen to ‘Alternative 70s’ here>

Billy’s Playlist

billys playlist

Around 4,000 alternative songs that are every 6 Music listener’s dream. A mix of Arcade Fire, Arctic Monkeys, Underworld, Kanye… all sorts.

Listen to ‘Billy’s Playlist’ here>

Last Man on Earth Soundtrack

lastmanonearth

A fan of the awesome TV show has painstakingly collected all the great songs from this comedy series. You’ll find old classics and new tunes: Kinks, Foo Fighters, Buddy Holly, Cat Power…

Listen to ‘Last Man on Earth Soundtrack’ here>

Classic Acoustic

classic acoustic

Another Spotify-managed playlist full of beautiful acoustic songs, many classics and some you may not have heard of. Simon and Garfunkel, Beatles, Bob Dylan, Elton John, Nick Drake. This playlist will make your heart ache, but it’s brilliant.

Listen to ‘Classic Acoustic’ here>

Hip Hop Evolution

hiphopevolution

If Hip Hop is your jam then you have no choice but to give this playlist your time. It’s the soundtrack from the amazing HBO doc-series Hip Hop Evolution and contains all the classics from 2-Pac to Kurtis Blow.

Listen to ‘Hip-Hop Evolution’ here>

Almost Famous

Almost Famous

Another collection of songs put together from a fan of the excellent film of the same name, this playlist has all the classics from Elton John to David Bowie.

Listen to ‘Almost Famous’ here>

Best of Alt

bestofalt

And finally it wouldn’t be right without sharing a playlist I’ve been adding to for years, Best of Alt. I know some friends have been playing this in their offices over the last few years to mixed results. You can expect a wide range of everything, from heavy metal to pop. If I like the song (or once did) it’s in there. You’ll find Smashing Pumpkins, Deftones, Arctic Monkeys, Jeff Buckley, Idlewild etc etc

Listen to ‘Best of Alt’ here>

Thanks for reading and make sure you check out my other Blog posts.

The post Spotify Playlists to Help you Work appeared first on .

Packaging software by Graham Lee

I’ve been learning about Debian Packaging. I’ve built OS X packages, RPMs, Dockerfiles, JARs, and others, but never dpkgs, so I thought I’d give it a go.

My goal is to make a suite of GNUstep packages for Debian. There already are some in the base distribution, and while they’re pretty up to date, they are based on a lot of “default” choices. So they use gcc and the GNU Objective-C runtime, which means no blocks and none of the modern Objective-C features. The packages I’m making are, mostly thanks to overriding some choices in building gnustep-make, built using clang, the next-generation Objective-C runtime, libdispatch, etc.

The Debian packaging tools are very powerful, very well documented, and somewhat easy to use. But what really impressed me along this journey was CPack. I’ve used cmake before, on a team building a pretty big C++/Qt project. It’s great. It’s easier to understand than autoconf, easier to build correct build rules over a large tree than make, and can generate fast builds using ninja or IDE-compatible projects for Xcode, IntelliJ and (to some extent) Eclipse.

What cpack adds is the ability to generate packages, of various flavours (Darwin bundles, disk images, RPMs, DEBs, NullSoft installers, and more) from the information about the build targets. That’s really powerful.

Packaging software is a really important part of the customer experience: what is an “App Store” other than a package selection and distribution mechanism? It irks me that packaging systems are frequently either coupled to the target environment (Debian packages and OpenBSD ports are great, but only work in those systems), or baroque (indeed autoconf may have gone full-on rococo). Package builder-builders give distributors a useful respite, using a single tool to build packages that work for any of their customers.

It’s important that a CD pipeline produces the same artefacts that your customers use, and also that it consumes them: you can’t make a binary, test it, see that it works, then bundle it and ship it. You have to make a binary, bundle it, ship it, install it, then test it and see that it works. (Obviously the tests I’m talking about here are the “end-to-end”, or “customer environment” tests. You don’t wait until your thing is built to see whether your micro-tests pass, you wait until your micro-tests pass to decide whether it’s worth building the thing.)

I know that there are other build tools that also include packaging capabilities. The point is, using one makes things easier for you and for your customers. And, it turns out, CMake is quite a good choice for one.
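To make that concrete, here’s a minimal sketch of what opting into CPack can look like (the project name, source file, and contact address are invented for illustration):

```cmake
cmake_minimum_required(VERSION 3.10)
project(hello VERSION 1.0)

add_executable(hello main.c)
install(TARGETS hello DESTINATION bin)

# Everything below is the packaging story: choose the flavours you want,
# set the CPack variables, then include the CPack module last.
set(CPACK_GENERATOR "DEB;RPM;TGZ")
set(CPACK_PACKAGE_CONTACT "maintainer@example.com")  # required for DEB
include(CPack)
```

After a normal build, running `cpack` in the build directory (or `cpack -G DEB` for a single flavour) produces the packages from the same install rules the build already declares.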

November 24, 2018

Long ago, I took 4/9ths of an undergraduate physics degree, along with 4/9ths computer science and 1/9th mathematics. Having had little or no contact with physics in the intervening years, I’ve started to do some light reading about relativity in the last couple of years. This week, I came across a tip on Quora * to a fellow traveller in space-time: “stop thinking of the speed of light as a number”. Erm… WHAT?! As every school child knows, the maximum speed of light (or any other form of electromagnetic radiation) in a vacuum is about 300,000 km/s. That sounds like a number to me. The problem with speed in Einstein’s relativistic model of reality, though, is that distance and time get very weird. That makes them hard to think about, so the advice was to ignore what we think we know and look at things in a new way.
[* – I’ll add an acknowledgement to the author of the comment on Quora, if I ever find it again. It took me a while to understand what I’d read. ]

I’m not sure I was entirely paying attention when I studied physics last time. I don’t remember anyone explaining the precise nature of the scientific method, or indeed what physics actually is; that’s metaphysics. This time around I see science as the process of understanding how nature works, using evidence rather than guessing and then arguing the case for your beliefs; that is philosophy, or religion. Physics, in particular, is about observing reality and working out what the rules are. It is NOT about saying why things happen. As science was becoming formalised, it was known as ‘natural philosophy’, i.e. philosophy that refers to evidence from nature.

Einstein’s Theories of Relativity say that matter and energy are equivalent. His mass-energy equivalence equation records the relationship between the mass and energy forms of matter. It is a very well known equation, even among people who have no idea what it refers to.

The form of the equation we are most familiar with is

E = mc²

E is the concentrated energy contained in a mass, m. E is a much bigger number than m because we know that c is a big number AND it’s squared.

This equation can be re-arranged to a form I don’t remember seeing or taking note of before:

c = √(E/m)

This new (at least to me) way of looking at this century-old theory says that c is related to the ratio between the energy and mass of an object. This ratio stays the same, even as space-time expands or contracts, according to the General Theory of Relativity. The recent confirmation by the LIGO project of gravitational waves, which also travel at c and were also predicted, gives extra weight to the theory.
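As a quick numerical sanity check (a sketch using SI values, with one kilogram as an arbitrary example mass), the rearranged form really does hand back c:

```python
import math

c = 299_792_458.0   # speed of light in m/s

m = 1.0             # one kilogram of matter
E = m * c ** 2      # its energy equivalent: roughly 9e16 joules

# c recovered as the square root of the energy-to-mass ratio
assert math.isclose(math.sqrt(E / m), c)
```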

I’ve realised that physics often relates things to each other, without saying which is the ‘fundamental thing’. Does gravity bend space-time or is the curvature of space-time what causes gravity? The equations work either way.

John Archibald Wheeler said it a different way: “Space-time tells matter how to move; matter tells space-time how to curve” and matter can be converted to energy, energy to matter.

November 22, 2018

Workaround App for iPhone

Book flexible workspace in hotels around Germany

Workaround is the project of German entrepreneur Maxim Streletzki. The idea is simple. You pack your laptop and rent workspace in some of Germany’s most luxurious hotels. You pay by the hour and get free access to Wi-Fi. It solves an issue many hotel chains have: lots of dead space. This was perfect for them, not only to look busy but to make extra money from their lobby.

Workaround App for iPhone

Maxim first approached me to design him some concepts for his app; this quickly grew into creating a full interactive prototype that he could show his investors and business partners. The prototype took you from logging in all the way to paying for your time at the flexible workspace.

The app was designed to be simple to use and make use of lots of beautiful whitespace. We wanted to get away from the traditional ways apps were being designed, and dropped list views in favour of purely location-based views. You can then click through to the hotel and view its photos, amenities and how to get there.

Workaround App for iPhone

This project played to my strengths: lots of creativity and white space! It made use of clean, functional design as a feature, and I’m really pleased with how it turned out.

Workaround App for iPhone

If you have an app you want me to design feel free to drop me a message and we can get something in the diary.

I write lots of blogs about design and freelancing so why not head over and have a read.

The post Workaround App for iPhone appeared first on .

The Weekend Woodworker course review

As anyone that follows this blog knows, I've slowly been working my way through the Weekend Woodworker course by Steve Ramsey.

I found the course enlightening and educational, as well as entertaining, so when I was given the opportunity to write a review of it for HackSpace magazine, I jumped at it - and today that review has been published in issue 13.

If you're not a subscriber to the magazine, you can download the PDF edition for free from their site, although I personally find the dead tree edition much better, having been a subscriber since the first issue.

I hope you enjoy the review - feel free to let me know what you think -  and if you want more content like it, subscribe to the blog.

The Weekend Woodworker course review

Sling App for iOS / iPhone

A handy way to send your friends a drink for any occasion.

Australian entrepreneurs (and siblings) Phil and Jane Lawlor approached me for a UI design and brand for their new product Sling. The idea behind Sling is simple: you send your loved ones or friends a drink.

The drink is already paid for; all they have to do is go to the bar and pick it up.

Sling App for iOS / iPhone

Sling uses your friends’ checked-in locations to alert you when they’re at a bar or restaurant. You can then buy them a drink via the app wherever you are in the world. This then alerts your friend to go to the bar and collect their free drink. Who doesn’t want to pick up a free Margarita?

The app produces an exclusive code that is redeemed via the bar’s purchase software; this then transfers payment to the vendor and your friend gets the drink.

Sling App for iOS / iPhone

The nice thing about Sling is you don’t even need to have any friends out. If you’re out, or have been somewhere recently and tried a nice cocktail, you can buy that same drink for your friend. “You must try this cocktail, have one on me!”. It’s a great way of sharing experiences.

This will then send a notification to your friend that gives them a set period of time to claim the drink. If the time expires, the user’s card is refunded.

Sling App for iOS / iPhone

Part of this project was branding. The logo below is made up of two shapes; if you look closely, each is the top of a wine glass turned on its side. They are then cut together to form boomerang shapes, to suggest you’re slinging something (and the founders being Australian also came to mind).

Sling App for iOS / iPhone

I really enjoyed working on this project. It was a complicated one, and I found it challenging due to the amount of features we wanted to get into the app, but I’m really happy with the results. If you need a beautiful app design please contact me to talk.

“Professionalism and skill are hard to find in consultants for startups. Mike has both. He provided high quality designs, to spec, on time and within budget. I could not ask for more.”

Phil Lawlor, Founder Sling App.

I write lots on my design blog. Try reading “It’s OK Not to Chase Constant Growth With Your Freelance Business”

The post Sling App for iPhone appeared first on .

November 21, 2018

Representing concurrency in an object-oriented system has been a long-standing problem. Encapsulating the concurrency primitives via objects and methods is easy enough, but doesn’t get us anywhere. We still end up composing our programs out of threads and mutexes and semaphores, which is still hard.

Prior Art

It’s worth skimming the things that I’ve written about here: I’ve put quite a lot of time into this concurrency problem and have written different models as my understanding changed.

Clearly, this is a broad problem. It’s one I’ve spent a lot of time on, and have a few satisfactory answers to, if no definitive answer. I’m not the only one. Over at codeotaku they’ve concluded that Fibers are the right solution, though apparently on the basis of performance.

HPC programs are often based on concurrent execution through message passing, though common patterns keep it to a minimum: the batch processor starts all of the processes, each process finds its node number, node 0 divvies up the work to all of the nodes (a message send), then they each run through their part of the work on their own. Eventually they get their answer and send a message back to node 0, and when it has gathered all of the results everything is done. So really, the HPC people solve this problem by avoiding it.

You’re braining it wrong, Graham

Many of these designs try to solve for concurrency in quite a general way. Serial execution is a special case, where you only have one object or you don’t submit commands to the bus. The problem with this design approach, as described by Bertrand Meyer in his webinar on concurrent Object-Oriented Programming, is that serial execution is the only version we really understand. So designing for the general, and hard-to-understand, case means that generally we won’t understand what’s going on.

The reason he says this is so is that we’re better at understanding static relationships between things than the dynamic evolution of a system. As soon as you have mutexes and condition locks and so on, you are forced to understand the dynamic behaviour of the system (is this lock available? Is this condition met?). Worse: you have to understand it holistically (can anything that’s going on at the moment have changed this value?).

Enter SCOOP

Meyer’s proposal is that as serial programs are much easier to understand (solved, one might say, if one has read Dijkstra’s A Discipline of Programming) we should make our model as close to serial programming as possible. Anything that adds concurrency should be unsurprising, and not violate any expectations we had if we tried to understand our program as a sequential process.

He introduced SCOOP (Simple Concurrent Object-Oriented Programming) developed by the Concurrency Made Easy group at ETH Zürich and part of Eiffel. Some of the design decisions he presented:

  • a processor is an abstraction representing sequential execution
  • there is a many-to-one mapping of objects to processors (this means that an object’s execution is always serial, and that all objects are effectively mutexes)
  • where an object messages another on a different processor, commands will be asynchronous (but executed in order) and queries will be synchronous
  • processors are created dynamically and opportunistically (i.e. whenever you create an object in a “separate” and as-yet unpopulated domain)

An implementation of this concurrency model in Objective-C is really easy. A proxy object representing the domain separation intercepts messages, determines whether they are commands or queries and arranges for them to be run on the processor. It inspects the objects returned from methods, introducing proxies to tie them to the relevant processor. In this implementation a “processor” is a serial operation queue, but it could equivalently be a dedicated thread, a thread pulled from a pool, a dedicated CPU, or anything else that can run one thing at a time.
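The same shape can be sketched in a few lines of Python, purely as an illustration: the names Processor, Separate, and Counter are mine, not SCOOP’s, and here a “processor” is one worker thread draining a FIFO queue.

```python
import queue
import threading

class Processor:
    """A SCOOP-style 'processor': one worker running jobs strictly in order."""
    def __init__(self):
        self._jobs = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            self._jobs.get()()      # execute the next queued job

    def submit(self, job):
        self._jobs.put(job)

class Separate:
    """Proxy tying an object to a processor: commands are queued
    asynchronously (but executed in order); queries block for a result."""
    def __init__(self, obj, processor):
        self._obj, self._proc = obj, processor

    def command(self, name, *args):
        self._proc.submit(lambda: getattr(self._obj, name)(*args))

    def query(self, name, *args):
        result = queue.Queue(maxsize=1)
        self._proc.submit(lambda: result.put(getattr(self._obj, name)(*args)))
        return result.get()         # wait for the processor to reach this job

class Counter:
    def __init__(self):
        self.n = 0
    def increment(self):
        self.n += 1
    def value(self):
        return self.n

counter = Separate(Counter(), Processor())
for _ in range(1000):
    counter.command("increment")    # asynchronous, executed in order
print(counter.query("value"))       # synchronous: waits its turn
```

Because the query is queued behind every command on the same processor, it cannot observe a partially updated object: serial reasoning still applies.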

This implementation does not yield all of the stated benefits of SCOOP. Two in particular:

  1. The interaction of SCOOP with the Eiffel type system is such that while a local (to this processor) object can be referred to through a “separate” variable (i.e. one that potentially could be on a different processor), it is an error to try to use a “separate” object directly as if it were local. I do not see a way, in either Swift’s type system or ObjC’s, to maintain that property. It looks like this proposal, were it to cover generic or associated types, would address that deficiency.

  2. SCOOP turns Eiffel’s correctness preconditions into wait conditions. A serial program will fail if it tries to send a message without satisfying preconditions. When the message is sent to a “separate” object, this instead turns into a requirement to wait for the precondition to be true before execution.

Conclusions

Meyer is right: concurrent programming is difficult, because we are bad at considering all of the different combinations of states that a concurrent system can be in. A concurrent design can best be understood if it is constrained to be mostly like a serial one, and not require lots of scary non-local comprehension to understand the program’s behaviour. SCOOP is a really nice tool for realising such designs.

This is something I can help your team with! As you can see, I’ve spent actual years understanding and thinking about software concurrency, and while I’m not arrogant enough to claim I have solved it I can certainly provide a fresh perspective to your team’s architects and developers. Book an office hours appointment and let’s take a (free!) hour to look at your concurrency problems.

November 19, 2018

Wildcat is live (and angry)

This is the third post in the Wildcat saga, so if this is your first time reading this blog it might be worth catching up - to find out more about the antagonist of our story, start with Introducing Apricat, and to understand our approach to solving the problem (and one of the challenges I've encountered on the way), read Project Wildcat... has been delayed.

All caught up? Good. Because I'm happy to reveal that Wildcat is live.

Wildcat is live (and angry)

I spoke a little about the technical hardware and general software approach in my last post, so instead of rehashing that I'm simply going to supply a link to the git repo for the project which I'm hoping, combined with the diagram below, will be self explanatory:

Wildcat is live (and angry)

For this post I'm more interested in talking about the fabrication of the non-technical parts of the project, most of which were designed around a cat ear frame we found in HomeSense.

The front panel was designed in about an hour (based on some initial measurements of the frame and LEDs) using Tinkercad, then after 3D printing and realising that nothing fit, slightly refined into what you see below:

This second print fit perfectly, so was primed using Rust-Oleum Surface Primer, expertly painted by Lucy using acrylics, then glued into the frame using Gorilla glue.

The next problem was housing the electronics. Thankfully Adafruit provide a really cool system for printing modular Feather cases, of which I used a tall variation, allowing the stacking headers on my Huzzah to fit.

Before attaching the board into the case with some bolts, I removed the plastic housing from the female ends of a couple of jumper wires (so they'd also fit in the case), slipped them into the pins of the header, wrapped them around the internal case mounts (to help stop them being tugged out) and led them out of the hole in the side, ready to be attached to the button.

Wildcat is live (and angry)

I could have soldered the jumper wires to the pins, which I'm sure would make them more secure, but one of my guiding principles with this build was the ability to disassemble everything easily if I wanted to salvage any of the electronics.

I wanted the button to be hidden but easily accessible, so using two strips of copper tape and some glue, I attached it to the back of the left ear, soldering the contacts to one end of the strips and the jumper wires to the other.

Wildcat is live (and angry)

The final bit of assembly was sticking the lid from the case module to the back of the front panel. Due to the size of the LED matrix there was a small gap between them when dry fitted, but half a Command Strip either side not only filled this gap nicely, but will allow me to easily separate the two if I need to.

Wildcat is live (and angry)

And that's the build. I asked Apricat what she thought of the project, and think the look she gave me summed it up:

Wildcat is live (and angry)

In truth, she had calmed down a lot before I even started making this project, and I think she's truly feeling at home with us. So of course this project isn't designed for punishment, instead it's just a bit of fun and an excuse to build something, because we really love Apricat, and know that when she does bite us, it's not aggressive, but instead just her playful way of showing she loves us.


November 16, 2018

A little challenge by Graham Lee

A little challenge today: create a JS function that turns its arguments into a list of pairs. Actually, the brief was “using Ramda” but I ended up not doing that:

function basePairwise(xs) {
  if (xs.length == 0) return [];
  if (xs.length == 1) return [[xs[0], undefined]];
  return [[xs[0], xs[1]]].concat(basePairwise(xs.slice(2)));
}

function pairwise(...xs) {
  return basePairwise(xs);
}

One of the nice things about JavaScript (I won’t say that often, so note the date) is the introspective nature: the fact that I get to just look at the way a function was called, rather than saying “you must use exactly these types and this arity always”. Here, I’ve done that with the JS spread operator: I could have used the arguments pseudo-list for only a little more work.
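For completeness, here’s what that slightly-more-work `arguments` version might look like (my sketch, not part of the original brief):

```javascript
function basePairwise(xs) {
  if (xs.length === 0) return [];
  if (xs.length === 1) return [[xs[0], undefined]];
  return [[xs[0], xs[1]]].concat(basePairwise(xs.slice(2)));
}

// Same entry point, but reading the arguments pseudo-list instead of
// using a rest parameter: it must be converted to a real array first.
function pairwise() {
  return basePairwise(Array.from(arguments));
}

console.log(pairwise(1, 2, 3, 4, 5)); // [ [ 1, 2 ], [ 3, 4 ], [ 5, undefined ] ]
```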

Reading List by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my underwear so I can spend time reading stuff.

November 13, 2018

I already wrote about the Ultimate Programmer Super Stack, a huge bundle of books and courses on a range of technologies: Python, JS, Ruby, Java, HTML, node, Aurelia… and APPropriate Behaviour, my book on everything that goes into being a programmer that isn’t programming.

Today is the last day of the bundle. Check it out here, it won’t be available for long.

November 12, 2018

I was on a “Leadership in Architecture” panel organised by RP International recently, and was asked about problems we face using new techniques like Microservices, serverless and machine learning in the financial technology sector. The biggest blocker I see is the RFP (Request for Proposals), RFI (Request for Information), the MSA (Master Service Agreement), any document with a three-letter acronym. We would do better if they disappeared.

I’m fully paid up in my tithe to the church of “customer collaboration over contract negotiation”, and I believe that this needs to extend beyond the company boundary. If we’re going to spend a few months going back and forth over waving our certifications about, deciding who gets to contact whom within what time, and whether the question they asked constitutes a “bug report” or a “feature request”, then I don’t think it matters whether the development team use two-week sprints or not. We’ve already lost.

We’ve lost because we know that the interactions between the people involved are going to be restricted to the terms agreed during that negotiation. No longer are people concerned about whether the thing we’re making is valuable; they’re concerned with making sure their professional indemnity insurance is up to date before sending an email to the DRI (Definitely Responsibility-free Inbox).

We’ve lost because we had a team sitting on its hands during the negotiation, and used that time “productively” by designing the product, putting epics and stories in a backlog, grooming that backlog, making wireframes, and all of those other things that aren’t working software.

We’ve lost because each incompatibility between the expectation and our intention is a chance to put even more contract negotiation in place, instead of getting on with making the working software. When your RFI asks which firewall ports you need to open into your DMZ, and our answer is none because the software runs outside of your network on a cloud platform, we’re not going to get into discussions of continuous delivery and whether we both read the Phoenix Project. We’re going to get into discussions of whether I personally will warrant against Amazon outages. But here’s the thing: we don’t need the software to be 100% up yet, we don’t even know whether it’s useful yet.

Here’s an alternative.

  1. We, collectively, notice that the software we make solves the problem you have.
  2. We, collectively, agree that you can use the software we have now for a couple of weeks.
  3. We, collectively, discuss the things that would make the software better at solving the problem.
  4. We, collectively, get those things done.
  5. We, collectively, GO TO 2.

Notice that you may have to pay for steps 2-4.

November 10, 2018

Project Wildcat... has been delayed

It is with no small amount of sadness that I have to report that, in the final stretches of the build of Wildcat, I've inadvertently managed to set my progress back for a few days.

Project Wildcat... has been delayed

It is with no small amount of sadness that I have to report that, in the final stretches of the build of Wildcat, I've inadvertently managed to set my progress back for a few days.

As mentioned in my previous post, the addition of a new member of the family hasn't been without its issues, with Apricat being somewhat liberal in her use of tooth and claw to show her distrust of us.

To help combat this I devised the idea of an incident board, such as you'll see in factories showing how long it's been since the last accident, but designed to show how long it's been since Apricat last misbehaved.


The plan was to use an Adafruit Feather Huzzah and a 16x8 LED matrix, connected with a physical button, and a super cute cat photo frame we found in Homesense, to build a ticker which used data persisted on Adafruit IO to calculate and display how long it's been since the button was last pressed.
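
The maths at the heart of that ticker is simple: persist the timestamp of the last button press, then on each refresh work out the elapsed time to show. A rough sketch of that calculation (in Java rather than Arduino C++, and with invented names, so purely illustrative) might look like:

```java
import java.time.Duration;
import java.time.Instant;

public class IncidentTicker {
    // Given the persisted timestamp of the last button press, work out
    // what the LED matrix should display. Everything here is illustrative,
    // not code from the actual project.
    static String timeSince(Instant lastPress, Instant now) {
        Duration elapsed = Duration.between(lastPress, now);
        long days = elapsed.toDays();
        long hours = elapsed.minusDays(days).toHours();
        return days + "d " + hours + "h";
    }

    public static void main(String[] args) {
        Instant lastPress = Instant.parse("2018-11-01T09:00:00Z");
        Instant now = Instant.parse("2018-11-03T15:00:00Z");
        // 2 days and 6 hours later -> "2d 6h"
        System.out.println(timeSince(lastPress, now));
    }
}
```

On the real device the "last press" value would come from Adafruit IO rather than a local variable, but the display logic is the same shape.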

Arguably, rather than relying on the cloud for something trivial like this, I could have done the entire project locally, using any number of development boards that support non-volatile memory with a real time clock - but I don't have any of those things.

OK, so that's a bit of a lie.

I could easily have based the project on a Raspberry Pi Zero, which would have solved the persistence issue, with the addition of an RTC PiZero addon solving the clock problem, but that would mean writing the project using Python, and figuring out how to make it start properly on boot - and honestly, do we really need an entire Debian-based operating system for something as simple as recording a time and powering an LED display? It just feels like overkill.

Alternatively, the use of the Feather M0 Adalogger I have, combined with a DS3231 Precision RTC FeatherWing, would also solve these problems and let me use the Arduino platform - but that would prevent me from doing fancy things like Alexa integration...

Not that I have any plans to do that, but I can if I want.

Truth is, I wanted more experience logging data to the cloud from remote devices like this to help with a project I have planned for the future, and, well, I'm a web guy - I'd never live it down if I didn't make it smart.

Anyway, back to this morning, and I'm in the final stages of assembly, having just soldered the button onto the back of the frame. I needed to test it with a power unit that wasn't my laptop, so I grabbed the first USB charger I had spare and plugged it in.

Little did I know that by this point, it was already too late.

I had a couple of flashing LEDs on the board, but nothing else. Confused as to why it wasn't working, I tried plugging it back into my laptop. While it booted the display, it wouldn't stay running for more than a couple of seconds, the Arduino software didn't recognise it was connected, and something on the board was getting very hot.

After a few minutes searching I found a warning on the power management page for the Huzzah, cautioning against using high-current USB power supplies such as the 2.5 amp CanaKit adaptor.

The power unit I'd chosen at random was the official Raspberry Pi 3 power adaptor, one which supplied the same 2.5 amps as the CanaKit one in the warning.

Basically, I fried the unit.

This sucks, especially when I was in the final stretch of assembly - but at least I've learnt that not all USB power supplies are the same, and I'll be far more selective about how I power my projects in the future.

Anyway, thanks to a generous donation by Lucy I've ordered another Feather Huzzah, so I'm hoping to be back on track soon. Once I've finally completed the assembly, and have proven to myself that it all works, I'll make another post explaining some of the more technical aspects of it.

Until then I guess we'll all have to sit here frustrated at being so close, but being denied success due to something really dumb. Oh well.

UPDATE: Wildcat is live!

In OOP the Easy Way, I make the argument that microservices are a rare instance of OOP done well:

Microservice adopters are able to implement different services in different technologies, to think about changes to a given service only in terms of how they satisfy the message contract, and to independently replace individual services without disrupting the whole system. This […] sounds a lot like OOP.

Microservices are an idea from service-oriented architecture (SOA) in which each application—each microservice—represents a distinct bounded context in the problem domain. If you’re a movie theatre complex, then selling tickets to people is a very different thing from showing movies in theatres; the two are loosely coupled at the point where a ticket represents the right to a given seat in a given theatre at a given showing. So you might have a microservice that can tell people what showings there are at what times and where, and another microservice that can sell people tickets.
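
A sketch of that split (all names invented here, not from any real system): each service owns its own data, and the only vocabulary they share is a showing identifier.

```java
import java.util.List;

// Two bounded contexts, coupled only by a showing identifier.
interface ShowingsService {
    List<String> showingsFor(String movie);
}

interface TicketsService {
    String sellTicket(String showingId, String seat);
}

public class Theatre {
    // Trivial in-memory stand-ins for what would be separately deployed services.
    static final ShowingsService showings = movie -> List.of(movie + "/screen1/19:30");
    static final TicketsService tickets =
        (showingId, seat) -> "ticket for " + seat + " at " + showingId;

    public static void main(String[] args) {
        String showing = showings.showingsFor("Arrival").get(0);
        System.out.println(tickets.sellTicket(showing, "J9"));
    }
}
```

Neither service knows anything about the other's internals; a showing id is the whole contract between them.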

People who want to write scalable systems like microservices, because they can scale different parts of their application separately. Maybe each franchisee in the theatre chain needs one instance of one service, but another should scale as demand grows, sharing a central resource pool.

Never mind all of that. The real benefit of microservices is that they make boundary-crossing more obvious, maybe even more costly, and as a result developers think about where the boundaries should be. The “problem” with monolithic (single-process) applications was never, really, that the deployment cost too much: one corollary of scale is that you have more customers. It was that there was no real enforcement of separate parts of the problem domain. If you’ve got a thing over here that needs that data over there, it’s easy to just change its visibility modifier and grab it. Now this thing and that thing are coupled, whoops!

When this thing and that thing are in separate services, you’re going to have to expose a new endpoint to get that data out. That’s going to make it part of that thing’s public commitment: a slightly stronger signal that you’re going down a complex path.

It’s possible to take the microservices idea and use it in other contexts than “the backend”. In one Cocoa app I’m working on, I’ve taken the model (the representation in objects of the problem I’m solving) and put it into an XPC Plugin. XPC is a lot like old-style Distributed Objects or even CORBA or DCOM, with the exception that there are more safety checks, and everything is asynchronous. In my case, the model is in Objective-C in the plugin, and the application is in Swift in the host process.

“Everything is asynchronous” is a great reminder that the application and the model are communicating in an arm’s-reach fashion. My model is a program that represents the domain problem, as mentioned before. All it can do is react to events in the problem domain and represent the changes in that domain. My application is a reification of the Cocoa framework to expose a user interface. All it can do is draw stuff to the screen, and react to events in the user interface. The app and the model have to collaborate, because the stuff that gets drawn should be related to the problem, and the UI events should be related to desired changes in the domain. But they are restricted to collaborating over the published interface of the XPC service: a single protocol.
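
XPC specifics aside, the shape of that collaboration (a single asynchronous protocol between app and model, with no other access) can be sketched in any language; the names below are invented, not from my app or from XPC itself.

```java
import java.util.function.Consumer;

// The entire public commitment between app and model: one asynchronous
// protocol. The app sends domain events and receives presentable state;
// it can never reach into the model's internals.
interface ModelService {
    void handle(String domainEvent, Consumer<String> presentableState);
}

public class App {
    public static void main(String[] args) {
        // A stand-in model living "at arm's reach": replies arrive via
        // callback, as they would over an XPC connection.
        ModelService model = (event, reply) -> reply.accept("domain state after " + event);
        model.handle("addItem", state -> System.out.println(state));
    }
}
```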

XPC was designed for factoring applications, separating the security contexts of different components and giving the host application the chance to stay alive when parts of the system fail. Those are valid and valuable benefits: the XPC service hosting the model only needs to do computation and allocate memory. Drawing (i.e. messaging the window server) is done elsewhere. So is saving and loading. And that helps enforce the contract, because if I ever find myself wanting to put drawing in the model I’m going to cross a service boundary, and I’m going to need to think long and hard about whether that is correct.

If you want to talk more about microservices, XPC services, and how they’re different or the same, and how I can help your team get the most out of them, you’re in luck! I’ve recently launched the Labrary—the intersection of the library and the laboratory—for exactly that purpose.

November 09, 2018

Introducing Apricat by Daniel Hollands (@limeblast)

It was around 7:30 in the morning when Lucy awoke. At first she was confused as to why her body would even consider consciousness at this early hour on a weekend.

She thought to herself "Are we going on holiday today?" before remembering "No, we've only just got back from Cheddar".

So why was she awake?

She turned to Daniel, and as she uttered that oh-so frequent request/demand of "Coffee", a smile spread across her face.

Today, she remembered, was the day we were adopting the cat.


The delightfully named Apricat is a beautiful ginger and white domestic short-hair that's believed to be around three years old.

She was a pregnant stray picked up by the RSPCA in Birmingham, and after giving birth to three kittens (all of whom were promptly adopted) she was initially denied the chance of adoption herself due to health issues. Thankfully she made a full recovery, and shortly after returning to the adoption centre she met two very strange humans (that would be us).

The RSPCA operate a traffic light classification for all the animals in their care, and upon our first introduction she was classified as green (to indicate that she was friendly), but by our second meeting a week later, she had been downgraded to amber.

Undeterred by this, we adopted her on 6th October 2018, and she's been slowly making herself at home since.

This hasn't been without its issues, however: in the early days and weeks of her life with us, a rebellious streak would flare up, resulting in many bites and scratches when we interacted with her.

It's this behaviour which led to the creation of Wildcat, a small Arduino project designed to keep track of how long it's been since she last acted up.

(I'm going to talk more about this project in my next post).

Thankfully, since these early issues, she's chilled out a lot. Whereas before she'd spend all her time in the hallway, just outside the room we occupied, she's now found the confidence to come sit on the sofa next to us.

Allowing us to stroke her is still a privilege, not a right, but she's affording this privilege on a far more frequent basis. She's very vocal when it's time for dinner, and if we leave the bedroom door open, she'll inform us she wants feeding at 4:30 in the morning by jumping on the bed and wriggling.

We keep the door closed now.

Anyway, we're letting her outside for the first time tomorrow. So provided she comes back, I'll post about Project Wildcat next.


UPDATE: She came back in, but then disaster struck.

November 08, 2018

This isn’t a full tutorial, and I’m not an expert, but I noticed this knowledge wasn’t really collected together anywhere, so I’m putting something together here. Please shout if there are any holes or mistakes.

The CharacterMovementComponent that comes with the third-person starter kit has several movement modes you can switch between using the Set Movement Mode node. Walking, falling, swimming, and flying are all supported out-of-the-box, but there’s also a “Custom” option in the dropdown. How do you implement a new custom movement mode?

First, limitations: I’ve not found a way to make networkable custom movement modes via blueprint. I think I need to be reusing the input from Add Movement Input, but I’m not yet sure how. Without doing that, the server has no idea how to do your movement.

When you set a new movement mode, the OnMovementModeChanged event (which is undocumented??) gets fired.

At this point you can toggle state or meshes, zero velocity, and other things you might want to do when entering and leaving your custom mode.

The (also undocumented) UpdateCustomMovement event will fire when you need to do movement.

From here you can read your input axis and implement your behaviours. You can just use the delta and Set Actor Location, but there’s also the Calc Velocity node which can help implement friction and deceleration for you.
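
Whatever nodes you use, the core of a hand-rolled movement update is integrating velocity by the frame delta, so speed stays constant whatever the frame rate. The arithmetic, sketched outside Blueprint (Java here purely for illustration; names invented):

```java
// One movement step: newLocation = location + velocity * deltaSeconds.
// Scaling by the delta is what makes movement frame-rate independent.
public class CustomMovement {
    static double[] step(double[] location, double[] velocity, double deltaSeconds) {
        double[] next = new double[3];
        for (int i = 0; i < 3; i++) {
            next[i] = location[i] + velocity[i] * deltaSeconds;
        }
        return next;
    }

    public static void main(String[] args) {
        // 300 units/sec forward over one 16ms frame: about 4.8 units in X.
        double[] next = step(new double[]{0, 0, 100}, new double[]{300, 0, 0}, 0.016);
        System.out.println(next[0] + ", " + next[1] + ", " + next[2]);
    }
}
```

Calc Velocity effectively produces the `velocity` input for you, applying acceleration, friction and deceleration before this integration happens.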

To return to normal movement again, I’ve found it’s safest to use Set Movement Mode to enter Falling, and let the component sort itself out, but ymmv there.

Hope someone finds this helpful.

November 06, 2018

There’s a great bundle of polyglot learning taking place over at the Ultimate Programmer Super Stack. My book, APPropriate Behaviour – the things every programmer needs to know that aren’t programming – is featured alongside content on Python, Ruby, Java, JS, Aurelia, Node, startups, and more.

The bundle is just up for a week, but please do check it out: for not much more than you’d probably pay for APPropriate Behaviour you’ll get a whole heap of stuff that should keep you entertained for a while :).

November 05, 2018

Is it that a month in the laboratory will save an hour in the library, or the other way around? A little more conversation, a little less action?

There are things to learn from both the library and the laboratory, and that’s why I’m launching the Labrary, providing consulting detective and training services to software teams who need to solve problems, and to great engineers who want to become great lead engineers, principal engineers and architects.

The Labrary is also the home to my books and other projects to come. So if you want to find out what a consulting detective can do for your team, follow the @labrarian on Mastodon or book office hours to talk things over.

November 02, 2018

Reading List by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my underwear so I can spend time reading stuff.

November 01, 2018

At the start of October, I headed out to Sussex for our annual mastermind retreat. It’s amazing how spending a few days away with other like-minded business owners can really focus the mind and get you fired up.

Why host a mastermind retreat?

I’ve written about mastermind groups before. This is how I described a mastermind group:

A mastermind group is a group of like-minded people who share a common goal. They meet (physically or virtually) on a regular basis to discuss what they’re working on and what problems they’re facing.

You could think of it as a circle of professional friends. They might not be people you know to start with but over time you will become friends. You’ll help each other move forwards by holding each other accountable and by providing support.

A mastermind retreat is spending time with your mastermind group in an inspiring location for a few days.

Here are some of the benefits of a mastermind retreat:

Time to step away and think

A retreat is a great excuse to get away from the day-to-day grind. I don’t do any client work and I’ll only check emails once or twice a day. As far as my clients are concerned, I’m off-the-grid on vacation. This frees up the time and space to think about the bigger picture. (Note to self: you should really get away from the day-to-day grind more often.)

You get valuable feedback

Most of us, especially those of us that work remotely, find getting honest feedback a challenge. A retreat fixes that problem: it’s time to put your ideas on the line. The group will ask questions and offer new ideas and perspectives. Talking through your ideas with people you trust and respect is something we should all do more often.

You build deeper connections

Chatting to your mastermind group on Skype is all well and good, but meeting in person is different. Even Matt Mullenweg, CEO of the fully-remote company Automattic, has recently written about the importance of meeting in person. Spending a few days together gives you time to get to know each other, and helps you better understand each other’s businesses.

Rekindle the group’s energy and motivation

Our mastermind group has been meeting on Skype every 2 weeks for a few years but over time things can become repetitive. A retreat is a way to bring everyone together again. It’s an incredibly motivating place to be. You come away wanting to execute on your ideas.

What’s more, a retreat is easy to organise. Here are a few things to consider:

The logistics of a mastermind retreat

People

It’s important to have the right group of people on the retreat. For this reason, you’ll likely want to make it application or invite-only.

You’ll want everyone to be in roughly the same position. On our retreat, we were all self-employed and had a focus on moving our businesses to the next level. We also had a good cross-section of skills, which meant we could all contribute different ideas to the discussion.

You don’t have to know each other’s businesses before attending to get a huge amount of value. You have enough time to learn about each other while you’re there.

You’ll want to keep the group relatively small. There were 5 of us on our retreat, which worked out great. There were just 3 of us on the first retreat I attended. I could imagine groups of up to 8 working just as well. Any bigger and you risk not having enough time to chat to everyone.

Location

Finding the right location is key. We used Airbnb to book a lovely 8-bedroom home in Sussex. It was likely overkill for just 5 of us, but the extra space is often worth it.

Once you split the cost with everyone, it doesn’t work out too expensive. All in, for accommodation and food, it worked out to just under £400 each. You could, of course, run a retreat for much less. Airbnb also lets you split the cost with other attendees, so you don’t have to foot the bill yourself.

When looking at a venue, here are a few things to consider:

  • Does it have a well-equipped kitchen so you can prepare meals?
  • Is there a table big enough for you all to sit around and run sessions from?
  • Are there enough bedrooms/beds to get a decent night’s sleep?
  • Does it have a garden with a seating area?
  • Are there things to do and places to walk in the surrounding area?

Food

You’ll need to plan food ahead of time. Good quality food that everyone can enjoy is essential to running a good retreat.

Here’s a few things we did:

  • We went out for a pub meal on the first evening
  • We used Gousto for two evening meals (which made it easy to organise and cook)
  • For lunch, we prepared salads and pizza
  • We created a shared shopping list for additional things like tea, snacks, etc.

Schedule

You’ll also want to plan the schedule before arriving. You don’t want to be thinking about what to do when you arrive.

We arrived Tuesday afternoon and left on Friday morning, leaving 2 full days. Here was our itinerary:

Arrival day (PM): Pick bedrooms. Round of introductions. Unpack. Pub meal for dinner.
Day 1: Sessions in the morning and after lunch, leaving a space for an activity in the afternoon and evening.
Day 2: Same format as day 1.
Final day: Pack. Check-out. Breakfast at a café to wrap up.

The first day looked something like this:

Pre 9am: wake, shower and breakfast
9am: Session 1 – Darren
10am: Session 2 – Francesca
11am: Session 3 – Andy
12pm: Lunch
1pm: Session 4 – Nazz
2pm: Session 5 – Marc
3pm: Afternoon activities
6pm: Evening meal

Each session was roughly 45 minutes with a 15 minute break. For the session, we each presented something we were working on or thinking about for 10 minutes. The next half an hour was spent in group discussion. Have someone with a timer to keep things on track.

Plan the list of sessions prior to the retreat. This will give everyone time to prepare for their session.

Andy Henson shared the document template we used to organise our retreat in his write-up on mastermind retreats.

Tips for getting the most out of a retreat

  • Keep it small. 3-8 people feels like the sweet spot. Any bigger and you risk changing the dynamic of the group and it becomes more difficult to organise.
  • A retreat is for work. A solo retreat might be about rest and recovery, but a mastermind retreat is about getting and providing as much value as possible. It should be intense.
  • No client/project work. Set an out of office on your email. Let your clients know you’re away. Client/project work is out-of-bounds. It’s time to focus on your business.
  • Know what you want to get out of the retreat. Do your homework before attending. Know what you want the group to help you with. The better you can come prepared, the more you’ll get from it.
  • Plan in advance. Have an itinerary and meal plan ready before you arrive at the retreat.
  • Set up a Slack group. Get to know each other first. A Slack group is a great way to plan and chat before and after the retreat.
  • Hold an event wrap-up. Share what you’ve learned and what your next moves are. Keep everyone updated on your progress.

And a few final tips, courtesy of Seth Godin:

  • Must be off site, with no access to electronic interruption
  • Don’t serve boring food
  • Here’s the goal: new friends… here’s the output: a new and better to-do list

On Blue Agile by Graham Lee

Ron Jeffries has some interesting posts lately on Dark Scrum, the idea that poor programmers are being chained to the code face in the software mines, forced to unthinkingly crank out features under Agile-sequel banners like “velocity” and “embracing change”.

In one such post, he refutes the notion of a shadowy Agile Indu$trial Complex. The AIC is a hypothetical network of consultants, tool vendors and project managers who collectively profit from every story point, every Jira “story”, and every grooming session.

Here’s the thing, though. You don’t need fnord filters and Masonic handshakes to explain Dark Scrum. You need to understand that Scrum is doing exactly as intended, and that it’s orthogonal to—not opposite to—the intentions of Agile software development.

Alan Kay had this analogy of two perpendicular planes that he used to explain the difference between Object-Oriented Programming and the thing programmers do in Java. The pink plane contains existing ideas and processes. You can get better at your craft by advancing along the pink plane.

Every so often, an idea comes along (OOP, says Kay, is one such idea) that is not better, it is different. It takes you out of the pink plane altogether into the orthogonal blue plane. You can’t ask “is this idea better” because it doesn’t make sense. It’s different. It has different qualities, so your question about what makes things better no longer makes sense.

Back when we were all allegedly doing waterfall software development, we were delivering software that satisfied requirements defined for some project. We can ask “how good are we at delivering software that satisfies requirements”, and define “better” answers to that question.

Scrum is a process improvement framework for delivering products. As such it provides a “good” baseline for software delivery, and tools to help us get “better” at it. Scrum is the pink plane.

The Agile crowd, on the other hand, stopped asking how much software we are delivering, and started asking how valuable our interactions with our customers are. It’s in the blue plane. The questions we had in the pink plane are no longer relevant, so if we’re still asking them, we will get nonsensical answers.

It is possible, even likely, that the rise of Scrum is due to trying to apply pink plane thinking to the Agile idea, in the way that the rise of C++ or SOLID is due to pink plane thinking about OOP. You could imagine someone who manages a software team seeing a lot of software coming out of an XP team. They read about the XP practices and conclude “I need to make my team adopt these practices, and we’ll make more software”.

But perhaps the XP team weren’t worried about making more software, and don’t even understand why making more software would be a goal.

That doesn’t make Scrum “bad”, indeed looked at along the pink plane it’s better than its predecessors. It just makes it unexpected, and disappointing as a result.

October 31, 2018

International Tinkering Education Conference

I’ve been fortunate to be involved in the fantastic GEEK play & games festival 3 times now. Once at Dreamland in Margate, last summer at the Singapore Science Centre in (you’ve guessed it) Singapore and I’ve just come back from GEEK in Beijing! This time we were part of the inaugural […]

There have been a couple of blog posts recently about text-level HTML semantics and assistive technology; for example, You’re using <em> wrong and Accessibility: Bold vs. Strong and Italic vs. Emphasis, the latter of which says

The Strong tag, <strong>, and the Emphasis tag, <em>, are considered Semantic Markup that allows for added meaning to your content. It serves as an indication to a screen reader of how something should be understood.

Whenever I read “some browsers” or “some screenreaders”, I always ask “but which ones?”, as did Ilya Streltsyn, who asked me “what is the current state of the text-level semantics in HTML reality?”

Léonie Watson to the rescue! Over Twitter, Watson wrote

Most are capable of reporting these things on demand, but do not as standard. So you don’t hear the text/font characteristics being announced as you read, but you can query a character/word etc. to discover its characteristics. This is based on the visual presentation of the text though, rather than through any recognition of the elements themselves (which as @SelenIT2 notes, are not mapped to the acc API).

Ilya (@SelenIT2) noted that “almost no text-level semantic element has a direct mapping to any accessible object”, linking to HTML Accessibility API Mappings 1.0 to demonstrate. This means that even if a screenreader vendor wanted to pass that information to a user, they can’t, because the browsers don’t expose the information to the Accessibility Tree that assistive technology hooks into.

Ilya also pointed me to a GitHub issue on the NVDA screenreader, “Semantic support (not just style support) for del and ins on web pages”, in which the developers pose an interesting usability conundrum:

While I normally push for semantics over style, I’ve always found elements like this to be tricky. Strong and em, for example, don’t really mean anything to most people, even though they have more semantic meaning than bold or italic. That said, I think ins and del would mean more to most users semantically speaking…

It’s worth noting that we do support strike, super and sub. We just don’t report them by default. Also, while you make valid points, the reality is that we must always consider the concerns of our users over those of authors. If users find that it causes excessive verbosity, that is reason enough for this not to be a default…

Having emphasis reported by default has been extremely unpopular with users and resulted in a lot of complaints about NVDA 2015.4. The unfortunate reality is that emphasis is very much over-used in the wild. I had serious misgivings that this would be the result when we implemented this and it seems these unfortunately turned out to be quite warranted. As such, we’ve now disabled this by default, though the option is still there for those that want it.

So, should we stop using text-level semantics? Well, <strong>no</strong>. They continue to add meaning for sighted users, and as Watson says, some AT users can benefit from them. But don’t over-use them. Like adding title attributes to all your links, there’s such a thing as too much accessibility.

If I wanted to do a table view data source in ObjC, it would look like this:

- tableView:aTableView objectValueForTableColumn:aColumn row:(NSInteger)row {
  return [representedObject.collection[row] valueForKey:[aColumn identifier]];
}

When I do it in Swift, it ends up looking like this:

func tableView(_ tableView: NSTableView, objectValueFor tableColumn: NSTableColumn?, row: Int) -> Any? {
    guard let identifier = tableColumn?.identifier else {
        assertionFailure("No table column")
        return nil
    }
    guard let obj = (self.representedObject as? ModelType)?.collection(at:row) else {
        assertionFailure("Can't find model object at \(row)")
        return nil
    }
    switch identifier {
    case NSUserInterfaceItemIdentifier(rawValue:"column1"):
        return obj.field1
    case NSUserInterfaceItemIdentifier(rawValue:"column2"):
        return obj.field2
    //...
    default:
        assertionFailure("Unknown table column \(tableColumn?.identifier ?? NSUserInterfaceItemIdentifier(rawValue: "unknown"))")
        return nil
    }
}

I can’t help feeling I’m doing it wrong.
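For what it’s worth, one way to shrink the switch is to build the column-to-value mapping once and reduce the data-source body to a lookup. This is only a sketch with plain stand-in types rather than real AppKit classes: Row, field1 and field2 are all invented for illustration.

```swift
// Stand-in model: in the real data source this would be the element
// type of the represented object's collection.
struct Row {
    let field1: String
    let field2: Int
}

// Build the identifier-to-value mapping once, instead of writing a
// switch in every data-source callback.
let columns: [String: (Row) -> Any] = [
    "column1": { $0.field1 },
    "column2": { $0.field2 },
]

// The body of tableView(_:objectValueFor:row:) then becomes one lookup.
func objectValue(for identifier: String, row: Row) -> Any? {
    return columns[identifier]?(row)
}

let row = Row(field1: "Enterprise", field2: 1701)
let value = objectValue(for: "column1", row: row)
```

The trade-off is keeping the dictionary in sync with the table’s columns, but unknown identifiers fall out naturally as nil instead of needing a default case.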

Rant and Rave UI Design

A bold striking dashboard for a CX platform

To celebrate the news of Rant & Rave being acquired by Upland Software I thought it would be a good time to share some of the UI Design work I’ve done for R&R over the last few years.

I’ve done lots of different projects for Nigel and his team over the years, ranging from dashboards to mobile experiences. I’ve had to consider their user and client base when designing screens. I’ve had to learn lots of new terminology in the CX space and understand how their detailed and advanced software works in order to create something new and easy to use.

Rant and Rave UI Design

I really enjoyed the challenge of working on an advanced bit of software and hearing about the impact the changes we designed had on their users.

Here is just a handful of screens that have been created for them over the years.

Rant and Rave UI Design

 

Rant and Rave UI Design

Did you read Reflecting on 10 years as a freelance designer?

The post Rant & Rave UI Design appeared first on .

October 30, 2018

How much for an App?

You wouldn’t believe how many email enquiries I get simply asking this question, “How much for an App?”

If you’re a freelance designer working in the technology sector, you’ve probably had this email too. It’s frustrating and, in my opinion, lazy.

Photo by JESHOOTS.COM on Unsplash


So before you fire off that email asking a professional to give you a cost for a project, try thinking about these things…

  1. What does the app do?

  2. Describe your app’s functionality in detail

  3. What platforms are you looking at?

  4. Who’s your target audience?

  5. What’s your budget?

  6. Do you have competitors? If so, who are they?

  7. Do you have an engineering team / developers ready?

  8. What limitations do you have from a technical point of view?

  9. Do you require advanced features such as admin tools?

  10. Do you have a prototype or beta?

The BEST way to approach a designer or developer about your project is to send them a detailed brief, listing requirements and breaking down everything covered above.

Don’t worry, there are lots of resources online to help you. Start with Canva’s “Writing an effective design brief: Awesome examples and a free template to get you started”

Side note. It’s OK not to know the answers to some of these questions, but please try to explain what you want a freelancer to hand over to you, as guesswork can be costly for everyone.

I write lots about freelancing, this post about how to ‘Speak basic UI & UX’ will help when you next approach a designer.

The post How much for an App? appeared first on .

October 29, 2018

Fitness Group UI

A beautiful new way to get fit with others.

This project is a personal one that aims to bring groups of like-minded people together with one goal: getting fit together.

Whether you’re in a new city, travelling, or in your very own town, it can be tough to find people to exercise with.

This app is designed to bring people together: share your location, see who’s near you, and find any events you’d like to join.

Fitness Group UI Design

The app will be purely fitness focused, so if you’re into yoga you can tailor the app to only feature yoga people, or if you like to try everything then you can access that too.

Plus! You can manage who comes to your events, so you have ultimate control over who you invite to your group training.

Fitness Group UI

While this is a work in progress, I’m excited to show off a few key screens.

More soon!

Pssst.. Find out how Twitter has improved me as a freelance designer

 

The post Fitness Activity UI Design appeared first on .

October 28, 2018

App UI design quiz

Challenge yourself & win Big Cash

PROVEIT is the first US app that lets you play daily trivia against your friends for cash prizes. Whether you love Seinfeld, Science, or the Super Bowl, someone’s waiting for you to PROVEIT.

I love it when I get recommended to work with awesome tech teams all over the world. This recommendation came from Matt at Prempoint suggesting I needed to work with his friends Nate and Prem on their new trivia quiz game.

Once introductions were made I was quick to jump on board and work on this exciting project.

I started by designing some concepts for the new app. We wanted to focus on how the game could look several versions down the line, including licensed sections, promoted content and huge tournament challenges. The concepts sparked lots of excitement and debate; the design needed to tick a lot of boxes – not only visually but legally. We felt the best direction was to make it as social as possible, which led to showing who was playing the games, how many people were playing, and easy-to-access friend challenges.

App UI design quiz

I was given lots of creative freedom and designed the whole UX flow, from sign-up and onboarding to winning your first game. It was a great experience and I’m proud of the final product.

 

App UI design quiz

It’s only available in the US for the moment, but keep an eye on this team as they’re doing some great things in the quiz space.

App UI design quiz

 

Interested in puzzle games? Try some of these.

The post PROVEIT Game Design appeared first on .

October 21, 2018

Back in 2011, I was speaking at QCon London at the invitation of my friend and de Programmatica Ipsum co-conspirator akosma, and one of the conference’s community events was an iOS developer meet-up hosted in the conference centre. I think we had a speaker panel of the conference mobile track speakers: regardless, there was a panel, and I was on it.

This was when Steve Jobs’ analogy of PCs as trucks, iPads as cars was still fresh in everybody’s mind. Consensus in the room was that this made sense, that the iPad was an everyday computer where a Mac is “for pros”, and you couldn’t do a pro app, say Photoshop, for the iPad.

I was angry that a bunch of people who say that they are clever at making computers do things could so easily reject the idea that a computer could do a thing, particularly when it was a thing computers could already do. In a huff, I stomped out of the room, only to stomp back in a few minutes later carrying a flipchart. I turned to the first page and drew a big black rounded rectangle. “OK”, I said, “we’re going to design Photoshop for iPad. Go.”

Unsurprisingly, the room designed Photoshop for iPad. Nothing changed about Photoshop, or about the iPad, or about these people, except that previously they had been told by no less a person than Apple’s CEO that iPads should not be thought of as a computer for doing computer things. I had told them that it could be used for computer things and that they were the people who could make it happen, and, lo and behold, it happened. I don’t remember whether I had even used an iPad at that time; nonetheless, I led a team of designers who designed Photoshop for iPad.

What Apple were really saying with the trucks metaphor was “this is a new platform, please have low expectations”. “No Photoshop. No Office. Lame.” was not the review they wanted to see, and by controlling the narrative around what you should expect from an iPad, they controlled whether it lived up to expectations.

I think the point of my post is “we usually expect marketing to hype things up, beware of marketers hyping down your imagination”. I’m not quite ready to finish yet, though.

This year, of course, there are iPad Pros, for Pros, that do the kind of truck stuff we were told iPads are not for, such as Photoshop. This was inevitable. Unless Apple or Adobe went out of business, or the iPad or Photoshop really tanked, there was going to be Photoshop for iPad.

I’m wondering who wrote it, though.

I have no doubt that the developers at Adobe are capable of doing it. I also have no doubt that it’s strategically important for Apple in their new “iPad Pros are trucks” world, that there should be Pro apps for iPad Pros. I know that all of the platform vendors are happy to write ports of apps they want to see on their platforms, and give them to the app vendors to release under their own brands. To me, the story “Adobe realised this was a valuable addition for Creative Cloud customers” and the story “Apple realised this was a valuable addition for iPad Pro perception” are both convincing.

October 20, 2018

Beginner thoughts by Graham Lee

Back story: my period of walkabout, in which I went to see the rest of the computing world beyond Apple land, started in November 2014. This was shortly after Swift’s introduction at WWDC 2014. It ended in October 2018, by which time the language had evolved considerably, its position in the community had advanced greatly, and SourceKitService had stopped crashing.

I have previously written on the learning phases I encountered on exposure to Haskell, now what about Swift? I have the opportunity to reflect on how I react as a beginner, and share that so that we all learn how we (well, I) learn, and maybe discover how we can teach.

About the project

I’m writing a tool that I want, which takes files in one format (RSS) and writes them out in another format (Maildir). You can follow along. The reasons for mentioning this here are twofold:

  • I do not know what I’m doing, but I’m willing to share that.
  • To let you understand the (limited, I think) complexity of the thing I’m trying to build.

Thinks: This should not be that hard

Swift often makes me feel like an idiot. This is because my expectations are too high: I know the platform fairly well. I know the Foundation framework pretty well. I know Xcode pretty well. I understand my problem to some extent. It should just be the programming language that’s different.

And I do different programming languages all the time. What’s another one going to do?

But of course it’s not just the programming language that changed. It changed the conventions for things like naming methods or raising errors, and that means that the framework methods have changed, which means that things I used to know have changed, which means that I do not know as much as I assume. It introduced a new library, which I also don’t know.

Thinks: That unimportant thing was really frustrating

Two such convention changes are correlated: classes that used to be Foundation and are now standard library (or maybe are Foundation still but have been renamed on being bridged, I’m not sure) are renamed from NSThing to Thing. That means that the name of NSURL is now URL.

That means that if you have a variable that represents a URL, you can’t follow the Cocoa convention of leaving the abbreviation uppercased and calling it URL, because now it’s got the same name as the type URL. So the new convention is to call it url.

Objectively, that’s not a big deal. Subjectively, this stuff is baked in pretty deep, and changing it is hard.

Thinks: Even learning something is frustrating

The last event to make me get up and walk around a field was actually discovering something new about Swift, which should be the point, but nonetheless made me feel bad.

I have discovered that when it comes to working with optionals, the language syntax means that There Is More Than One Way To Do It. When I learned about if let and guard let, I was confused by the fact that the thing on the right needed to be an optional, not unwrap one: surely if my rvalue is an optional, then my lvalue should be, too?

Then, when I learned about the ?. and subsequently ?? operators, I thought “there’s no way I would ever have thought to type that, or known how to search for those things”. And even though they only make things shorter, not different, I still felt frustration at the fact that I’d gone through typing things out the long way.
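For reference, a minimal sketch of the constructs described above, with invented values:

```swift
// Int("42") returns Int?, because parsing can fail.
let maybeNumber: Int? = Int("42")

// if let binds the unwrapped value: the right-hand side is the
// optional; the name bound on the left is not.
if let number = maybeNumber {
    print("got \(number)")
}

// ?. chains through an optional; ?? supplies a default.
// Both are shorthand for a longer if let / else dance.
let missing: String? = nil
let length = missing?.count ?? 0
```

Here `length` ends up as 0, because `missing` is nil and `??` falls back to the default.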

Thinks: There’s More Than One Way Not To Do It

One of the Broken Expectations™ is that I know how to use Strings. Way back when, NeXT apps used char * as their string type. Then Enterprise Objects Framework came along with its Foundation library of data types, including a new-fangled Unicode string class, NSString. Then, well, that was it for absolute ages.

So, when I had a String and I wanted to take the substring to an index, I was familiar with -substringToIndex: and tried to apply that. That method is deprecated, so I didn’t want to use it. OK, well, I can write string[0..<N]. Apparently not: integer subscripting is not allowed, and the error message tells me to read a code comment to understand why. I wish it told me where that code comment was, or just showed it to me, instead!

Eventually I found that there’s a .prefix(N) method, again this is the sort of thing that makes me think: what’s wrong with me? I’ve been programming for years, I’ve been programming on this platform for years, I should be able to get this.
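A minimal sketch of that discovery, with an invented string:

```swift
let greeting = "Hello, world"

// The old instincts: -substringToIndex: (deprecated) or
// greeting[0..<5], which Swift rejects because String is not
// integer-subscriptable. The modern spelling:
let head = String(greeting.prefix(5))
```

Note that `prefix(_:)` returns a `Substring`, so it’s wrapped in `String(_:)` wherever an actual String is needed.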

Conclusion: Read a Book

I had expected that my knowledge of the Mac, Xcode, and Cocoa would be sufficient to carry me through a four-year gap on picking up a language, particularly with the occasional observation of a conference talk about the Swift language (I’ve even given one!). I had expected that picking up a project to build a thing would give me a chance to get acquainted.

I was reflecting on my early experiences with writing NeXT and Mac applications in Objective-C. I had my copy of the NeXT Developer Documentation, or Cocoa in a Nutshell, open on the desk, looking at the methods available and thinking “I have this, I want that, can I find one of these that gets me there?” I had expected that auto-complete in Xcode would be my modern equivalent of working that way.

Evidently not. Picking up the new standard library things, and the new operators, will require deliberate learning. I’ve got some videos lined up, but I think my next action is to find a good book to read.

October 19, 2018

Reading List by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my underwear so I can spend time reading stuff.

October 15, 2018

Jeremy Bicha wrote up an unknown Ubuntu feature: “printing” direct to a Google Drive PDF. I rather wanted this, but I don’t run the Gnome desktop, so I thought I might be out of luck. But no! It works fine on my Ubuntu MATE desktop too. A couple of extra tweaks are required, though. This is unfortunately a bit technical, but it should only need setting up once.

You need the Gnome Control Centre and Gnome Online Accounts installed, if you don’t have them already, as well as the Google Cloud Print extension that Jeremy mentions. From a terminal, run sudo apt install gnome-control-center gnome-online-accounts cpdb-backend-gcp.

Next, you need to launch the Control Centre, but it doesn’t like you if you’re not running the Gnome desktop. So, we lie to it. In that terminal, run XDG_CURRENT_DESKTOP=GNOME gnome-control-center online-accounts. This should correctly start the Control Centre, showing the online accounts. Sign in to your Google account using that window. (I only have Files and Printers selected; you don’t need Mail and Calendars and so on to get this printing working.)

Then… it all works. From now on, when you go to print something, the print dialogue will, after a couple of seconds, show a new entry: “Save to Google Drive”. Choose that, and your document will “print” to a PDF stored in Google Drive. Easy peasy. Nice one Jeremy for the write-up. It’d be neat if Ubuntu MATE could integrate this a little more tightly.

Over the years I’ve discovered several remote job sites that specialise in roles for the technology industry. Here are some sites to help you find that perfect remote role.

Dribbble

Dribbble Jobs

Dribbble Jobs is an excellent resource for job hunters. It’s not dedicated to remote working, but they do have a filter that refines the roles to remote-only; you’ll find it on the right-hand side. Great for designers (duh).

http://dribbble.com/jobs

Remotive

Remotive Jobs

Remotive is a really nice site aimed at helping job seekers find remote jobs; they claim to have helped 25,000 people find their dream job. Great for developers and engineers.

https://remotive.io

Honest.work

Honest Work Jobs

Honest.work is a great resource for those who like to work from home. They make sure that salaries and rates are clear and introduce you directly to companies looking to hire. Great for developers, designers and Marketing professionals.

https://honest.work

RemoteOK

RemoteOK Jobs

RemoteOK has some really amazing roles; you’re more likely to find big brands hiring on this site. Similar to honest.work and Remotive, this site is ideal for creatives and engineers.

https://remoteok.io

Angel.co

Angel.co Jobs

Angel is the place to be if you’re hunting for the hottest new startup roles. Like Dribbble, Angel.co isn’t solely for remote workers but does have a filter. This site is ideal for creatives and engineers.

https://angel.co/jobs

BONUS

Product Hunt

ProductHunt is the darling of the startup world, they list awesome products, startups and companies. They also list some great remote jobs. You’ll find some great startups here, ideal for developers.

https://www.producthunt.com/jobs

Any other places?

Drop me a message on Twitter (@zer0mike) about any sites I’ve missed and I’ll add them to this list.

Please check out my work – I’m proud of it all and would love you to view it.

While I have you, read about how I’ve been freelancing for 10 years.

The post 5 places to find remote jobs online appeared first on .

October 13, 2018

Update

The information below is mostly redundant. After filing a bug report with Apple, their engineers determined that the Xcode-detected set of macro actions (find a text field, double-click, enter text) wasn’t working because the double-click action wasn’t editing the text field. It is possible to use UIAutomation tests; you just have to carefully review the UI actions and confirm that they have the expected effect, particularly after letting Xcode record UI macros.

Original Post

Unfortunately my work to organise UIAutomation tests has hit the stumbling block that the UI Automation runner doesn’t use the main thread for main-thread-only APIs.

In Xcode 9 and High Sierra, the authors of that post I just linked found that it was possible to turn off the main thread checker in the Test configuration of the build scheme and get working tests anyway. That doesn’t work for me in Xcode 10 and Mojave: the main thread checker isn’t killing the app; the TSM subsystem is just refusing to do its thing. So my tests can’t do straightforward things like write text into a text field. Unfortunately this is an “it’s not me, it’s you” moment, and I don’t think I can carry on using Xcode’s UI tests for my goals.

However, I still want to be able to write “end-to-end” level tests to drive my development. I have (at least) three ways to proceed:

  • I could find a third party library and discover whether it has the main thread problem. Calabash doesn’t support Mac apps, and the other examples I can find (Cucumberish and TABTestKit) both rely on UI Automation so presumably don’t address the main thread problem.
  • I could write the tests in AppleScript. That would be a good way to build up the AppleScript UI for the app, but it doesn’t represent an end-to-end test of the GUI.
  • I could write the tests using NSApplication.sendEvent(_:) to simulate clicks, scrolls and text entry, and use the unit test runner to host them. That could work, but I have my doubts (I would guess that the runner is synchronous and stalls the main thread).

I discovered that it is possible to write the test “at the UI level” but in the unit runner, using a combination of key events and AppKit API like sendAction(_:to:). The trade-offs of this approach:

  • it takes longer, as the abstractions needed to easily find and use AppKit controls don’t (currently) exist
  • it doesn’t use the Accessibility interface so isn’t an accessibility audit at the same time as a correctness test
  • you don’t hit the same problems as with the UI Automation runner
  • it’s much faster

This may be the best approach for now, though I’d welcome other views.

October 12, 2018

Reading List by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my underwear so I can spend time reading stuff.

Content UX Design

Clean & Elegant Content Management System.

Q Content are a UK based technology business in the creative content industry.

The Q platform had already been live for a number of years but was showing its age. I was approached to help Q add some skilled design to their platform, and I was excited to get involved.

Content UX Design

Q knew their platform inside out. They knew what worked and what needed some UX TLC; they had done extensive research with their regular users and had a list of challenges for me.

I love a challenge. I’m really happy with the results.

Content UX Design

Did you read 5 Reasons why startups make great clients?

Kobu Agency

The post Q Content – UI & UX Design appeared first on .

Citystasher

Drop your bags and enjoy the city.

A while ago I was asked to work on a new project for the team over at Citystasher (Now Stasher), a London based startup aiming to disrupt the travel / baggage market.

This sounded like a solution to a problem I’ve struggled with before: where do you keep your large bags when you’re in a huge city? They have a simple answer: in many places very near to you – hotels, stores and other trusted places.

Citystasher UI Design

CityStasher is a convenient, safe and affordable way of storing your bags across the UK, without having to pay expensive airport fees.

Citystasher UI Design

While their business has gone through many changes in the year or so since I worked on the project, and their site doesn’t fully reflect what I worked on, I still think I created some nice work and wanted to share it.

Citystasher UI Design

While I have you. Give in to time off!

Thomas Quaritsch

The post Citystasher UI & UX Design appeared first on .

October 11, 2018

It’s OK Not to Chase Constant Growth With Your Freelance Business

All too often within business, we chase the idea of more – more work, more responsibility, more money. While an ambitious growth mindset can be great for a business with the resources to back it up, it can be exhausting for an independent freelancer.

Chasing Growth

Photo by Markus Spiske on Unsplash

Sometimes, when running a freelance business, it’s more realistic and less stressful to be content with what you have. Freelancing is notorious for the feast and famine cycle that generates unstable income, so simply generating a constant, predictable amount month-to-month, year-to-year, is actually a big achievement.

This post came about after speaking to fellow freelancer Zack Neary-Hayes and realising that we had a lot in common, mainly due to how we view our businesses and our ambitions for them.

The rest of this post has been written by Zack and is an overview of a conversation we had back in August 2018.

How much is freedom to work your way worth?

freelance work

When Mike and I were talking, we realised that even though the services we offered to our clients were different, there were quite a few similarities between our businesses:

  1. We both run service driven businesses direct to clients, and there was a lot of overlap between the things we liked and disliked about this.
  2. The freelance and remote working lifestyles are extremely important to both of us. Neither of us felt particularly driven to try and target significant growth within our businesses. We’re both content with how things are going.
  3. We both agreed that we prioritised having the freedom to live a lifestyle that we choose, to work on projects we find interesting, and ultimately, to earn a living on our own terms.
  4. We’ve both recognised that we have skills that businesses need, so we sell those skills directly into businesses, which grants us a level of freedom we generally wouldn’t have through normal employment.

Do you want a growth or lifestyle business?

Work how you want to

Photo by Alvin Balemesa on Unsplash

For me and Mike, the idea of having a lifestyle business is much more important than chasing growth. It’s important for us to produce great work, have good relationships with clients, but also to enjoy as much time outside of our freelance work as possible.

And with this mindset, it becomes much easier to be OK with not being obsessed about aggressively growing your business. Work can become a healthy process that enables us to dive in and enjoy hobbies, interests, and passions outside of work.

I find it relatively easy to build a ‘church and state’ like separation between work and my personal life. I know the reasons why I work – why I freelance – and I keep this motivation front and centre of everything I do.

I think this is key: you don’t need to chase growth to be a successful freelancer. Maybe this is why both Mike and I feel content with our businesses the way they are; we’re not driven by growth targets and financial forecasts, so we don’t feel the need to dig in and grind with work unnecessarily.

Work should be a good fit for your lifestyle

Freelancing the right way

Photo by Nicole Honeywill on Unsplash

If you can look at your work and feel truly content with how it’s going, then that’s great. You’re doing something you enjoy and making money in the process, a win-win situation.

If you’re in this situation and you’re earning enough to comfortably cover all of your living, business, and lifestyle costs, then do you really need to aim for more next year? Sure, you may put up your prices as you gain another year of experience and have reduced capacity, but do you actively need to set your ambitions on growing more? And would it be deemed a failure if you missed those targets?

Freelancing is a two-way street. It needs to work for your clients – you need to be able to confidently deliver your service – but more importantly, work needs to be a good fit for you.

As long as you can draw some sense of fulfilment from your work, and have plenty of things you enjoy outside of work, then you’re on the right path for a balanced lifestyle. Money may motivate you. Working reduced hours may motivate you. Having as little stress at work as possible may be your prime goal. The trick is to hit whatever goals make you feel comfortable and content. These goals may not always be financial.

And what works for others in this situation may not be the right fit for you. If the grinding, hustling, getting up at 4.30am lifestyle isn’t for you, no problem, forge your own path and work on your own terms. Don’t feel pressured by how other people present their work and freelance businesses.

Remember: you’re in control of your own business, and the most important thing is that the business works for you.

Freelance Growth

Photo by Devin Edwards on Unsplash

Thanks to Zack for the post. I’ve also written lots about freelancing over the years; you may also enjoy reading ‘Be a better freelance designer’, ‘Reflecting on 10 years as a Freelancer’ and ‘Better ways to manage your design leads’.

The post The Freelance Business Dilemma: Do You Really Need to Chase Growth appeared first on .

October 10, 2018

(Russian translation: Почему мы не добавим в HTML элемент <чудесный>?)

Yesterday, there was an interesting conversation, started by Sara Soueidan:

Now, before people start to think “but colour isn’t content, it’s presentation!”, Sara was talking of pages showing colour swatches. In this case, the colours are the content. It seems like a good candidate for a semantic element, because it has meaning.

In my capacity as Ancient Old Fogey Of The Web, I sat and thought about this.

Rembrandt painting of Philosopher in Meditation

HTML: The mis-spent youth

The first iteration of HTML was a small set of tags noted down in an email from Sir Uncle Timbo in October 1991, and added to in November 1992. By the time HTML2 came around, some tags had changed names, and a few tags were added that showed HTML’s primary use as a language for mathematics and computer geeks: <var>, <samp>, <code>, <pre>, <kbd> as well as the now-defunct <xmp>, <dir> and <listing>.

At this point, we only had three presentational elements: <tt>, <b> and <i> (and arguably, <i> isn’t presentational—the spec says “If a specific rendering is necessary — for example, when referring to a specific text attribute as in “The italic parts are mandatory” — a typographic element can be used to ensure that the intended typography is used where possible.”)

Further generations of HTML reflected the changing uses of the Web; it was no longer the read/write medium that Sir Uncle Timbo had envisaged, so we needed a way of sending information back to sites – thus, a whole form-related markup evolved, which subsequently has served eCommerce brilliantly. Tables were added to show data, extending the web’s original use-case of showing and sharing mathematical papers. This had the side-effect of allowing creative people to (mis)-use tables to make great-looking sites, which meant ever more consumer-friendly sites (and a menagerie of presentational markup which is deprecated now we have CSS).

By the time HTML5 came around, we added a whole slew of elements to demarcate landmarks in common web page designs – <nav>, <header>, <article>, <main> and the like, which has improved the experience for assistive technology users.

By my count, we now have 124 HTML elements, many of which are unknown to many web authors, or regularly confused with each other—for example, the difference between <article> and <section>. This suggests to me that the cognitive load of learning all these different elements is getting too much.

HTML: comfortable middle-age

There’s loads of stuff we don’t have elements for in HTML. For ages I wanted a <location> element for geo information and a <person> element (<person givenname="Bruce" familyname="Lawson" nickname="Awesome" honorific="Mr."> etc.)

But here are some of the main reasons why we probably won’t get these (or Sara’s <color> element):

The 80/20 rule

The Web exists to share all possible human knowledge. Thus, the list of possible things that we could have a semantic for is infinite. We’re already getting overload on learning or remembering our current list of elements, their semantics and their attributes. So we (hopefully) have a set of elements that express the most commonly-used semantics (ignoring historical artefacts which browsers must continue to support because we can’t break the web).

Fourteen years ago (!) Matthew Thomas wrote

The more complex a markup language, the fewer people understand it, the less conformant the average article will be, so the less useful the Web’s semantics will be.

Testing

Browsers are sophisticated beasts. I’d wager it’s the most complex software running on your device right now. As someone who used to work for a browser vendor, I know there’s a lot of resistance to adding new elements to the language – it adds even more testing to be done and boosts the chances of regressions. As Mat Marquis wrote in his recent history of Responsive Images,

Most important of all, though, it meant we didn’t have to recreate all of the features of img on a brand-new element: because picture didn’t render anything in and of itself

What’s the use-case?

The most important question: if there were a <person>, <location> or <color> element, what would the browser do with it?

Matthew Thomas suggested that new elements need to have some form of User Interface to make them easier for authors to choose the right one:

One way of improving this situation would be to reduce the number of new elements — forget about <article> and <footer>, for example.

Another way would be to recommend more distinct default presentation for each of the elements — for example, default <article> to having a drop cap, default <sidebar> to floating right, default <header>, <footer>, and <navigation> to having a slightly darker background than their parent element, and default <header>…<li> and <footer>…<li> to inline presentation. This would make authors more likely to choose the appropriate element.

As Robin Berjon wrote

Pretty much everyone in the Web community agrees that “semantics are yummy, and will get you cookies”, and that’s probably true. But once you start digging a little bit further, it becomes clear that very few people can actually articulate a reason why.

So before we all go another round on this, I have to ask: what’s it you wanna do with them darn semantics?

The general answer is “to repurpose content”. That’s fine on the surface, but you quickly reach a point where you have to ask “repurpose to what?”. For instance, if you want to render pages to a small screen (a form of repurposing) then <nav> or <footer> tell you that those bits aren’t content, and can be folded away; but if you’re looking into legal issues digging inside <footer> with some heuristics won’t help much.

I think HTML should add only elements that either expose functionality that would be pretty much meaningless otherwise (e.g. <canvas>) or that provide semantics that help repurpose *for Web browsing uses*.

So what can we do?

Luckily, HTML already has a little-known element you can use to wrap data to make it machine readable: the <data> element:

The element can be used for several purposes.

When combined with microformats or microdata, the element serves to provide both a machine-readable value for the purposes of data processors, and a human-readable value for the purposes of rendering in a Web browser. In this case, the format to be used in the value attribute is determined by the microformats or microdata vocabulary in use.

Manuel Strehl mocked up a quick example of Sara’s colour swatch using the <data> element. You could add more semantics to this using microdata and the schema.org color property.
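A minimal sketch of the idea (the class name, hex value and the Product wrapper are my own illustration, not Manuel’s exact markup):

```html
<!-- The browser renders "Forest Green"; data processors and scripts
     read the machine-readable value attribute -->
<p>Swatch: <data class="color" value="#1c7c54">Forest Green</data></p>

<!-- With microdata, tying the value to schema.org's color property -->
<p itemscope itemtype="https://schema.org/Product">
  <span itemprop="name">Garden chair</span>, available in
  <data itemprop="color" value="#1c7c54">Forest Green</data>
</p>
```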

Some schema.org vocabularies do pass Robin’s and Matthew’s “browser UI test” (kinda-sorta). We know that Google’s Rich Snippets search results make use of some microdata, as does Apple’s watchOS, which is why I use it to mark up publication dates on this blog:


<article itemscope itemtype="http://schema.org/BlogPosting">
  <header>
    <h2 itemprop="title">
      <a href="https://www.brucelawson.co.uk/2018/reading-list-201/">Reading List</a></h2>
    <time itemprop="dateCreated pubdate datePublished"
      datetime="2018-06-29">Friday 29 June 2018</time>
  </header>
  <p>Some marvellous, trustworthy content</p>
  <p><strong>Update: <time itemprop="dateModified"
    datetime="2018-06-30">Saturday 30 June 2018</time></strong> Updated content</p>
</article>

Google says

You can add additional schema.org structured data elements to your pages to help Google understand the purpose and content of the page. Structured data can help Google properly classify your page in search results, and also make your page eligible for future search result features.

This is pretty vague (Google’s secret algorithms, etc.) but I don’t believe it can hurt. What’s that you say? It adds dozens of extra bytes of markup to your page? Go and check your kilobytes of jQuery and React, and your hero images, before you start worrying about the download overhead of nourishing semantics.

What about Custom Elements?

Custom Elements are Coming Soon™ in Edge, behind a flag in Gecko and already in Blink. These allow you to make your own new tags, which must contain a hyphen – e.g., <lovely-bruce>. However, they’re primarily a way of composing and sharing discrete lumps of functionality (“Components”) and don’t add any semantics.
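For illustration, here’s a hypothetical custom element (the tag name and behaviour are my own sketch, not a real component):

```html
<!-- The name must contain a hyphen; the browser attaches no
     semantics to it, only the behaviour the script supplies -->
<color-swatch value="#1c7c54">Forest Green</color-swatch>

<script>
  class ColorSwatch extends HTMLElement {
    connectedCallback() {
      // Paint a block of the colour next to its human-readable name
      this.style.borderLeft = `1em solid ${this.getAttribute('value')}`;
    }
  }
  customElements.define('color-swatch', ColorSwatch);
</script>
```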

Conclusion

So that’s why we don’t add lots of new semantics to HTML (but feel free to propose some if there’s a real use case). However, you can do a lot using existing semantics, generic containers like <data> and extensibility hooks. Happy marking-up!

October 08, 2018

I started writing a new Mac app, and I started doing it by driving the implementation through Xcode UI Automation tests. But then it turned out I was driving the test infrastructure as much as the tests, and it’s that infrastructure I want to talk about.

Given, When, Then

My (complete, Xcode UI Automation) test looks like this:

func testAddingANoteResultsInANoteBeingAdded() {
    given("An empty notebook")
    when("I add a note to the notebook")
    then("There is a note in the notebook")
}

The test case class has an object called a World, which holds, well, the test’s world. There are two parts to this.

The World holds regular expressions associated with blocks, where each block does some part of the test if its associated regular expression matched the description of the test. As an example, my test fixture sets up this association:

try world.then(matchingExpectation: "^There is a note in the notebook$",
                work: { _, world in
                guard let notebook:LabraryNotebook
                  = world.getFromState("TheNotebook") as? LabraryNotebook else {
                    XCTFail("No notebook to test")
                    return
                }
                XCTAssertEqual(notebook.countOfNotes(), 1,
                  "There should be one row in the notes table")
})

We’ll get back to how that block is implemented later. For the moment, I want to make it clear that this is a way to organise a UI test (or, indeed, any other functional test) using XCTest: it is not a new test framework. The test case class still subclasses XCTestCase, and assertions are still made with the XCTAssert* macros/functions. That’s just all wrapped up in this given/when/then structure.

Let’s look at the block’s two parameters: the first is an array of the regular expression’s capture groups so that you can find out information about the test specification, should you want.

The other argument is a reference to the World, which enables the second feature of the World: as state storage so that each part of the test can communicate with later parts. Notice that the when clause in my test says it adds a note to “the notebook”, and the then clause checks that there is a note in “the notebook”. How do they both use the same notebook object? The when clause stores it on the World using world.storeInState(), and the then clause retrieves it with world.getFromState().
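As a sketch of what the matching when clause’s registration might look like (this is my hypothetical reconstruction: I’m assuming a world.when() mirroring the then example above, that LabraryNotebook is a page object driving the UI, and a key-value signature for storeInState()):

```swift
try world.when(matchingExpectation: "^I add a note to the notebook$",
               work: { _, world in
               // Drive the UI via the page object, then store the
               // object on the World so the "then" clause can find it
               let notebook = LabraryNotebook()
               notebook.addANote()
               world.storeInState(notebook, key: "TheNotebook")
})
```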

Page Objects

Rather than putting XCUIElement goop directly in my test blocks, I use an abstraction called the Page Object pattern, popular among people writing browser tests in Selenium. This puts an adapter between my tests and my UI controls, so the test says (for example) app.newDocument() and the Application page object knows that that means finding the “File” menu, clicking it, then clicking the “New” menu item.
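A sketch of such an adapter (the type and its layout here are illustrative, not my actual code):

```swift
// Hypothetical Application page object: the test states its intent,
// and this adapter knows how today's menus are driven.
struct ApplicationPage {
    let app = XCUIApplication()

    func newDocument() {
        let fileMenu = app.menuBars.menuBarItems["File"]
        fileMenu.click()
        fileMenu.menuItems["New"].click()
    }
}
```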

The way to create a new document in a Cocoa app has not changed since 1987 and may not change soon. But the details of my own UI surely will, and will change at a different rate than the goals of the people using it. While someone may want to add a note for the rest of time, there may not always be an “Add Note” button. So my test can continue to say:

when("I add a note to the notebook")

but the page object for a document can change from:

func addANote() {
    let app = XCUIApplication()
    let window = app.windows[documentName]
    let control = window.buttons["Add Note"]
    control.click()
}

to whatever will find and drive the interface in my redesigned application.

Would you like this?

I’m happy to package the given/when/then organisation up and release it under an open source licence so that you can use it in your own apps. As I’ve only just written the code, I’ve yet to do that, but it’s coming! I’m aware that there are multiple ways of getting/using Swift libraries, so if you’re interested please let me know whether you would expect to use an Xcode project that builds a framework, a Swift PM package, a CocoaPod or a Carthage…cart… so I can support you using the software in your way.

Jack-o'-lantern out of reclaimed wood

Unlike the last time I tried building something ready for Halloween, I'm somewhat proud of the fact that I was able to have an idea for a project on a Thursday, and have it finished by the Sunday.

The idea was inspired by a wooden Jack-o'-lantern I spotted in a shop window while on holiday in Cheddar Gorge - as I saw it, I thought to myself "I can make that", and as soon as we got back to the hotel room, I did some research and found this awesome video, which served as the basis for my own build:

Now, if you were to ask Lucy what percentage of the projects born out of I can make that moments I've actually built, she'd no doubt tell you it's close to 0% - but unlike all the other times I've said this, wherein there'd always be an asterisk stating something like "...if I had a laser cutter", this time I knew I already had everything I needed.

The wood used was reclaimed from a couple of pallets I obtained a few weeks ago via a posting on Gumtree. I'd never responded to such an ad before, so I didn't really know what I was doing - and there I was, with Lucy for moral support, at a building site in Malvern trying to figure out the best way to fit the pallets into my car.

The hardest part about using pallets for a project is that they're a bugger to break apart. Even armed with a pallet buster I found it hard work trying to pry them apart without breaking the planks.

Anyway, the build took about an hour and a half, following the Just Make It design closely. Much like in the video, I stuck to using a jigsaw for all the cuts. This resulted in wonky edges and imperfect corners, but as suggested in the video, this just adds to the rustic charm - and I'm honestly really happy with it. I think I even prefer it to the one I saw in the shop window.

There's no finish on it right now, because I really like the burnt look and I'm concerned a shiny varnish might take away from this. I'm going to repeat the burning technique on a piece of scrap wood for a varnish test, so if it doesn't look too bad, I can add varnish to it later. That said, I'm not planning on putting it outside, and it's not something that would be actively handled, so I can't see there being any real issues leaving it as is.

The last thing I did was use the Circuit Playground Express I got via my Hackspace magazine subscription, and coded up (aka copy/pasted) a simulated candle. I love this effect, and think it looks much better than the crappy £1 LED tea lights I got from Tiger.

Wooden Jack-o'-lantern

Anyway, my next post will be about the 3rd project in the Weekend Woodworker course. I built it last weekend, but need to do a ton more sanding on it before I apply a finish. Expect that post before the start of next week.
