Last updated: June 19, 2019 09:22 PM (All times are UTC.)

June 17, 2019

Last week, I was invited to address the annual conference of the UK Association for Accessible Formats. I found myself sitting next to a man with these two refreshable braille displays, so I asked him what the difference is.

Two similar refreshable braille displays, side by side

On the left is his old VarioUltra 20, which can connect to devices via USB or Bluetooth, and can take a 32GB SD card for offline use (reading a book, for example). It’s also a note-taker. He told me it cost around £2500. On the right is his new Orbit Reader 20, “the world’s most affordable Refreshable Braille Display”, with similar functionality, which costs £500.

As he wasn’t deaf-blind, I asked why he uses such expensive equipment, when devices have built-in free screen readers. One of his reasons was, in retrospect, so blazingly obvious, and so human.

He likes to read his kids bedtime stories. With the braille display, he can read without a synthesised voice in his ear, so he can do all the characters’ voices himself to entertain his children.

My take-home from this: Of course free screen readers are an enormous boon, but each person has their own reasons for choosing their assistive technologies. Accessibility isn’t a technological problem to be solved. It’s an essential part of the human condition: we all have different needs and abilities.

June 12, 2019

June 11, 2019

I’ve just returned from a fantastic week in Copenhagen at the 2019 Ecsite Conference – Pushing Boundaries, hosted at The Experimentarium in Hellerup. It was my 4th Ecsite, having contributed to previous Ecsite conferences in Graz, Porto and Geneva. Here are some details from Ecsite 2017 in Porto where in 2 days we built an Audio […]

June 06, 2019

Since starting The Labrary late last year, I’ve been able to work with lots of different organisations and lots of different people. You too can hire The Labrary to make it easier and faster to create high-quality software that respects privacy and freedom, though not before January 2020 at the earliest.

In fact I’d already had a portfolio career before then, but a sequential one. A couple of years with this employer, a year with that, a phase as an indie, then back to another employer, and so on. At the moment I balance a 50% job with Labrary engagements.

The first thing to notice is that going part time starts with asking the employer. Whether it’s your current employer or an interviewer for a potential position, you need to start that conversation. When I first went from full-time to 80%, a few people said something like “I’d love to do that, but I doubt I’d be allowed”. I infer from this that they haven’t tried asking, which means it definitely isn’t about to happen.

My experience is that many employers didn’t even have the idea of part-time contracts in mind, so there’s no basis on which they can say yes. There isn’t really one for “no” either, except that it’s the status quo. Having a follow-up conversation to discuss their concerns both normalises the idea of part-time employees, and demonstrates that you’re working with them to find a satisfactory arrangement: a sign of a thoughtful employee who you want to keep around, even if only some of the time!

Job-swapping works for me because I like to see a lot of different contexts and form synthetic ideas across all of them. Working with different teams at the same time is really beneficial because I constantly get that sense of change and excitement. It’s Monday, so I’m not there any more, I’m here: what’s moved on in the last week?

It also makes it easier to deal with suboptimal working environments. I’m one of those people who likes being in an office and the social connections of talking to my team, and doesn’t get on well with working from home alone (particularly when separated from my colleagues by timezones and oceans). If I only have a week of that before I’m back in society, it’s bearable, so I can consider taking on engagements that otherwise wouldn’t work for me. I would expect that applies the other way around, for people who are natural hermits and would prefer not to be in shared work spaces.

However, have you ever experienced that feeling of dread when you come back from a week of holiday to discover that pile of unread emails, work-chat-app notifications, and meeting bookings you don’t know the context for? Imagine having that every week, and you know what job-hopping is like. I’m not great at time management anyway, and having to take extra care to ensure I know what project C is up to while I’m eyeballs-deep in project H work is difficult. This difficulty is compounded when clients restrict their work to their devices; a reasonable security requirement but one that has led to the point now where I have four different computers at home with different email accounts, VPN access, chat programs, etc.

Also, absent employee syndrome hits in two different ways. For some reason, the median lead time for setting up meetings seems to be a week. My guess is that this is because the timeslot you’re in now, while you’re all trying to set up the meeting, is definitely free. Anyway. Imagine I’m in now, and won’t be next week. There’s a good chance that the meeting goes ahead without me, because it’s best not to delay these things. Now imagine I’m not in now, but will be next week. There’s a good chance that the meeting goes ahead without me anyway, because nobody can see me when they book the meeting, so they don’t remember I might get involved.

That may seem like your idea of heaven: a guaranteed workaround to get out of all meetings :). But to me, the interesting software engineering happens in the discussion and it’s only the rote bits like coding that happen in isolation. So if I’m not in the room where the decisions are made, then I’m not really engineering the software.

Maybe there’s some other approach that ameliorates some of the downsides of this arrangement. But for me, so far, multiple workplaces is better than one, and helping many people by fulfilling the Labrary’s mission is better than helping a few.

Last week we had the pleasure of welcoming technology and game enthusiast, Alyssa, to our office. Here is what she wrote about her week of...


We are delighted to announce that we have won the award for ‘VR Product of the Year’ at the prestigious National Technology Awards. The NTA...


June 05, 2019

If you’ve used the Rails framework, you will probably recognise this:

class Comment < ApplicationRecord
  belongs_to :article
end

This snippet of code implies three things:

  1. We have a table of comments.
  2. We have a table of articles.
  3. Each comment is related to an article by some ID.

Rails users will take for granted that if they have an instance of the Comment class, they will be able to execute some_comment.article to obtain the article that the comment is related to.

This post will give you an extremely simplified look at how something like Rails’ ActiveRecord relations can be achieved. First, some groundwork.

Modules

Modules in Ruby can be used to extend the behaviour of a class, and there are three ways in which they can do this: include, prepend, and extend. The difference between the three? Where they fall in the method lookup chain.

class MyClass
  prepend PrependingModule
  include IncludingModule
  extend ExtendingModule
end

In the above example:

  • Methods from PrependingModule will be created as instance methods and override instance methods from MyClass.
  • Methods from IncludingModule will be created as instance methods but not override methods from MyClass.
  • Methods from ExtendingModule will be added as class methods on MyClass.
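
A quick way to see this ordering is to give each module a method and inspect the lookup chain. (A minimal sketch; the method names are just placeholders.)

module PrependingModule
  def greet; "from PrependingModule"; end
end

module IncludingModule
  def greet; "from IncludingModule"; end
end

module ExtendingModule
  def banner; "a class method"; end
end

class MyClass
  prepend PrependingModule
  include IncludingModule
  extend ExtendingModule

  def greet; "from MyClass"; end
end

MyClass.ancestors
# => [PrependingModule, MyClass, IncludingModule, Object, Kernel, BasicObject]
MyClass.new.greet  # => "from PrependingModule" (prepend wins the lookup)
MyClass.banner     # => "a class method"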

We can do fun things with extend.

Executing Code During Interpretation Time

module Ownable
  def belongs_to(owner)
    puts "I belong to #{owner}!"
  end
end

class Item
  extend Ownable
  belongs_to :overlord
end

In the above code, we’re just defining a module and a class. No instance of the class is ever created. However, when we execute just this code in an IRB session, you will see “I belong to overlord!” as the output. Why? Because the code we write while defining a class is executed as that class definition is being interpreted.

What if we re-write our module to define a method using Ruby’s define_method?

module Ownable
  def belongs_to(owner)
    define_method(owner.to_sym) do
      puts self.object_id
    end
  end
end

Whatever we passed as the argument to belongs_to will become a method on instances of our Item class.

our_item = Item.new
our_item.overlord
# 70368441222580 (our_item's object_id)

Excellent. You may have heard this term before, but this is “metaprogramming”. Writing code that writes code. You just metaprogrammed.

Tying It Together

You might also notice that we’re getting closer to the behaviour that we would expect from Rails.

So let’s say we have our Item class, and we’re making a videogame, so we’re going to say that our item belongs to a player.

class Item
  extend Ownable
  belongs_to :player
end

Our Rails-like system could make some assumptions about this.

  1. There is a table in the database called players.
  2. There is a column in our items table called player_id.
  3. The player model is represented by the class Player.

Let’s return to our module and tweak it based on these assumptions.

module Ownable
  def belongs_to(owner)
    define_method(owner.to_sym) do
      # We need to get `Player` out of `:player`
      klass = owner.to_s.capitalize.constantize
      # We need to turn `:player` into `:player_id`
      foreign_key = "#{owner}_id".to_sym
      # We need to execute the actual query
      klass.find_by(id: self.send(foreign_key))
      # SELECT * FROM players WHERE id = :player_id LIMIT 1
    end
  end
end

class Item
  extend Ownable
  belongs_to :player
end

my_item = Item.first
my_item.player
# SELECT * FROM players WHERE id = 1 LIMIT 1
# => #<Player id: 12>

Neat.
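
One caveat if you adapt this sketch: String#capitalize only copes with single-word names, and constantize comes from ActiveSupport rather than core Ruby. Rails itself leans on camelize (via classify) for multi-word models:

require "active_support/core_ext/string"  # where constantize and camelize live

:player.to_s.capitalize     # => "Player"    (fine)
:blog_post.to_s.capitalize  # => "Blog_post" (not a valid constant name)
:blog_post.to_s.camelize    # => "BlogPost"  (what Rails actually uses)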

June 03, 2019

Ideas and solutions for tech founders on a tight budget

When it comes to building your first product or website you’ll quickly learn how cost is a huge factor in working with the right people – but that shouldn’t stop you from launching your product or website.

Most of my projects start at around £3,000 because of how complex product design can be; the bare fact is that good design takes time and costs money. So what happens if you have a much lower budget?

Here are a few suggestions on how to proceed on a small budget.

Adjust scope

The first thing you can do is adjust your scope: you don’t have to launch a feature-rich product! You can start with a basic MVP that will take less time to design and build. (You can even skip the build and work with a prototype – investors don’t care if it’s built as long as they can see potential.)

Don’t do too much too soon when actually you can launch and become successful with far less. So try adjusting your scope.

Buy a pre-designed theme / template

Another great option is to buy a pre-designed theme or template. There are thousands of these online.

A theme will help you get a basic look and feel for your product, and while you will have to revisit the design in your business’s future, a theme is a great start and costs less than $100/£100.

https://twitter.com/8obbyanderson/status/1126391995709231105

There are lots of options available to you. For the web you can look at Wix and Squarespace, and most decent hosting packages include free web templates. WordPress has a wealth of excellent themes.

For app design you’ll have to be a little more technical and download a GUI kit (pre-designed app screens). You can download these for free (InVision have some amazing ones – look here) or pay for more premium UI kits over at Creative Market or UI8.

Another great resource is Dribbble.com – search for ‘GUI’ or ‘Free UI Kits’. The community is very giving! 🙂

Find a cheaper / junior designer

Another great option is to search for a designer who’s new to the industry.

These will typically be college students or recent graduates. It’s easy to reach out to your local University and ask them to recommend someone for some work experience.

Another option is to head to sites like Fiverr, Peopleperhour and Upwork and search for low-budget designers who have good reviews. Be careful: they could end up selling you a template or somebody else’s hard work. Be firm with your brief.

Learn/DIY

We’re really lucky to live in a world with so many excellent online free learning resources so why not try and learn it yourself to get started?

Figma is free and excellent, Sketch and Framer have free trials, and Adobe XD is worth looking at if you have a CC account. Download, install, then jump onto YouTube and follow someone like Pablo Stanley, who gives excellent tutorials.

Feeling like you can spend some cash on your learning? Try TeamTreehouse or Lynda.com, which have video courses that will walk you through the basics and get you designing in no time.

Find a tech business partner

When I launched my first startup I traded my design time for some developer time. Martin eventually became my co-founder and we managed to get Howler to a decent place before closing it last year.

Ask around: some designers/developers may have an opening and find your product interesting enough to give you some time. It’s worth going in with some investment leads or, at minimum, a business plan to hook their interest.

Stagger costs

If they don’t want to join the business they may be open to staggering costs so you get the perfect product but at an affordable monthly payment.

While this might not fly with most, it could work nicely for professionals who take on monthly retainers.

Ask your network for help

Everyone knows someone that’s looking for work, so don’t be afraid to ask for help. I’m always recommending designers and developers to people who’ve contacted me.

https://twitter.com/danwht/status/1126442249972330497

So if your own network comes up empty, ask some designers on Twitter if they can recommend someone. Typically we’re willing to help – give it a try!

Keep looking!

I’m a firm believer in ‘you get what you pay for’, but that doesn’t mean there won’t be a designer within your budget; you just have to keep looking.

I did a poll and the results were very interesting, take a look.

I hope this helps, I really do. It breaks my heart turning away enthusiastic passionate tech startup founders because of budget.

Go make something amazing.

Follow me on Twitter & thanks to everyone who contributed to this blog.

Photo by Marc Schäfer on Unsplash


May 31, 2019

Reading List 232 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way!

But we are hackers and hackers have black terminals with green font colors ~ John Nunemaker

This is the second in a series of posts on useful bash aliases and shell customisations that developers here at Talis use for their own personal productivity. In this post I describe how I configured my shell to automatically change my terminal theme when I connect to a remote machine in any of our AWS accounts.


Background

As I’ve mentioned previously, at Talis, we run most of our infrastructure on AWS. This is spread over multiple accounts, which exist to separate our production infrastructure from development/staging infrastructure. Consequently we can find ourselves needing to SSH onto boxes across these various accounts. For me it is not uncommon to be connected to multiple machines across these accounts, and what I found myself needing was a way to quickly tell which of these were production boxes and which were servers in our development account.

Solution

All of my development work is done on a MacBook Pro running macOS. Several years ago I started using iTerm2 as my terminal emulator instead of the built-in Terminal, which has always felt particularly limited. Given these constraints, the solution I came up with was to implement a wrapper around the ssh command that tells iTerm2 when to switch themes, so that production environments use different colors from development ones.

In order to work, it requires you to create three profiles in iTerm2; for our purposes each profile is essentially a theme you want to use. When creating a profile you can customise colors, fonts and so on, but crucially for each of them you need to enter a value in the badge field. This tells iTerm2 what to set as the badge, which is displayed as a watermark on the terminal. In this case I wanted to use the host of the machine that I’ve connected to, which my script exposes as the user variable current_ssh_host; therefore the value for the badge field needs to be set to \(user.current_ssh_host).

iTerm profile

When you’ve created the profiles you can add the following to your ~/.aliases file to ensure that the ssh wrapper script knows which profiles to use for the three themes it requires.

export SSH_DEFAULT_THEME=Spacemacs
export SSH_DANGER_THEME=Production
export SSH_WARNING_THEME=Staging

Once this is done you can use the wrapper script. Copy the contents of the script to /usr/local/bin/ssh (or anywhere, as long as it’s on your PATH ahead of the real ssh). From then on, when you issue an ssh command in the terminal:

  • the script captures the hostname of the machine that you are trying to connect to;
  • it uses awslookup to check which AWS account that host resides in;
  • in my case, if it’s in the production account it tells iTerm2 to switch to the SSH_DANGER_THEME, and if it’s in our development account it uses the SSH_WARNING_THEME;
  • the terminal then switches to the corresponding theme;
  • when you exit your ssh session the wrapper resets the theme back to your default.
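
For reference, here’s a minimal sketch of what such a wrapper can look like. The iTerm2 SetProfile and SetUserVar escape sequences are real; the way awslookup is invoked here, and the account names it returns, are assumptions based on the description above, so adapt them to your own tooling:

#!/usr/bin/env bash
# Sketch of an ssh wrapper that switches iTerm2 profiles by AWS account.
# Assumes SSH_DEFAULT_THEME, SSH_DANGER_THEME and SSH_WARNING_THEME are
# exported (see ~/.aliases above), and that `awslookup` prints the account
# a host belongs to -- substitute your own lookup if yours differs.

host="${@: -1}"  # naive: treat the last argument as the hostname

set_profile() {
  printf '\033]50;SetProfile=%s\a' "$1"
}

# Publish the host as an iTerm2 user variable so the profile badge
# \(user.current_ssh_host) can display it as a watermark.
printf '\033]1337;SetUserVar=current_ssh_host=%s\a' "$(printf '%s' "$host" | base64)"

case "$(awslookup "$host" 2>/dev/null)" in
  production)  set_profile "$SSH_DANGER_THEME"  ;;
  development) set_profile "$SSH_WARNING_THEME" ;;
esac

command ssh "$@"

# Restore the default theme when the session ends.
set_profile "$SSH_DEFAULT_THEME"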

For example, when I ssh to a production server, my terminal automatically switches to this:

Danger Theme

And when I connect to a development server, it automatically changes to this:

Warning Theme

As soon as I exit the ssh session the terminal is restored to my default theme.

Whilst this is a very specific solution for macOS, you can achieve similar results on Linux. Enjoy!

May 30, 2019

The Logical Fallacy by Graham Lee

Nary a week goes by without seeing a post by a programmer, for programmers, on the subject of logical fallacies in arguments. This week’s, courtesy of hacker news, is not egregious, enlightening, or indeed different in any way from the usual torrent. It is merely the one that prompted me into writing this article. The most frequent, and most severe, logical fallacy I encounter among programmers is this one:

  • basing your argument on logic.

Now, obviously, for a fallacy to be recognised it needs to have a Latin name, so I’m going to call this one argumentum ex logica.

Argumentum ex logica is the fallacious reasoning that the best course of action for a group of people is the one that can be arrived at by logical deduction. No need to consider the emotions of the people involved, or the aesthetic properties of any potential solutions. Just treat your workplace like your high school debating club, pick (seemingly arbitrarily) some axioms, and batter your way through to your preferred conclusion.

If people disagree with you on (unreasonable) emotional grounds, just name their logical fallacies so you can overrule their arguments, like you’re in an episode of Ally McBeal and want their comments stricken from the record. If people manage to find a flaw in the logic of your argument, just pick a new axiom you’d never mentioned before and carry on.

The application of the argumentum ex logica fallacy is frequently accompanied by descriptions of the actions of “the brain”, that strange impish character that sits inside each of us and causes us to divert from the true path of Syrran of Vulcan. Post hoc ergo propter hoc, we are told, is an easy mistake to make because “the brain” sees successive events as related.

Here’s the weird thing. We all have a “the brain” inside us, as an important part of our being. By writing off “the brain” as a mistaken and impure lump of wet fat, programmers are saying that they are building their software not for humans. There must be some other kind of machine that functions on purely logical grounds, for whom their software is intended. It should not be.

May 28, 2019

I’m doing some changes to this WordPress site and wanted to get out of the loop of FTPing a new version of my CSS to the live server and refreshing the browser. Rather than clone the site and set up a dev server, I wanted to host it on my local machine so the cycle of changing and testing would be faster and I could work offline.

Nice people on Twitter recommended Local By Flywheel which was easy to install and get going (no dependency rabbit hole) and which allows you to locally host multiple sites. It also has a really intuitive UI.

To clone my live site, I installed the BackUpWordPress plugin, told it to back up the MySQL database and the files (e.g. the theme, plugins, etc.) and let it run. It exports a file that Local by Flywheel can easily ingest – simply drag and drop it onto Local’s start screen. (There’s a handy video that shows how to do it.)

For some reason, although I use the excellent Make Paths Relative plugin, the link to my main stylesheet uses the absolute path, so I edited my local header.php (in Users ▸ brucelawson ▸ Local Sites ▸ brucelawsoncouk1558709320complete201905241-clone ▸ app ▸ public ▸ wp-content ▸ themes ▸ HTML5) to point to the local copy of the CSS:

<link rel="stylesheet" href="http://brucelawson.local/wp-content/themes/HTML5/style.css" media="screen">

And that’s it – fire up Local, start the server, get coding!

If you’re having problems with the local wp-admin redirecting to your live site’s admin page, Flywheel engineers suggest:

  1. Go to the site in Local
  2. Go to Database » Adminer
  3. Go to the wp_XXXXX_options table (click the select button beside it in the sidebar)
  4. Make sure both the siteurl and home options are set to the appropriate local domain. If not, use the edit links to change them.
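
If you’d rather run a query in Adminer than click the edit links, something like this does the same job (the table prefix and local domain here are placeholders; use your own):

UPDATE wp_XXXXX_options
SET option_value = 'http://yoursite.local'
WHERE option_name IN ('siteurl', 'home');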

May 24, 2019

Reading List 231 by Bruce Lawson (@brucel)

Stickee Technology Ltd, Solihull, Birmingham, B90 4SB – (£18,000–£24,000) Overview We are looking for an admin account exec to join our existing digital team. This...


May 21, 2019

Domain-specific markup for fun and profit

It doesn’t come as a surprise to Dull Old Web Farts (DOWFs) like me to learn last month that Google gives a search boost to sites that use structured data (as well as rewarding sites for being performant and mobile-friendly). Google has brilliant heuristics for analysing the content of sites, but developers being explicit and marking up their content using subject-specific vocabularies means more robust results.

For the first time (to my knowledge), Google has published some numbers on how structured data affects business. The headlines:

  • Jobrapido’s overall organic traffic grew by 115%, and they have seen a 270% increase in new user registrations from organic traffic
  • After the launch of job posting structured data, Google organic traffic to ZipRecruiter job pages converted at a rate three times higher than organic traffic from other search engines. The Google organic conversion rate on job pages was also more than 4.5 times higher than it had been previously, and the bounce rate for Google visitors to job pages dropped by over 10%.
  • In the month following implementation, Eventbrite saw roughly a 100-percent increase in the typical year-over-year growth of traffic from Google Search
  • Traffic to all Rakuten Recipe pages from search engines soared 2.7 times, and the average session duration was now 1.5 times longer than before.

Impressive, indeed. So how do you do it? For this site, I chose a vocabulary from schema.org:

These vocabularies cover entities, relationships between entities and actions, and can easily be extended through a well-documented extension model. Over 10 million sites use Schema.org to markup their web pages and email messages. Many applications from Google, Microsoft, Pinterest, Yandex and others already use these vocabularies to power rich, extensible experiences.

Because this is a blog, I chose the BlogPosting schema, and I use the HTML5 microdata syntax. So each article is marked up like this:

<article itemscope itemtype="http://schema.org/BlogPosting">
  <header>
  <h2 itemprop="headline" id="post-11378">The HTML Treasure Hunt</h2>
  <time itemprop="dateCreated pubdate datePublished" 
    datetime="2019-05-20">Monday 20 May 2019</time>
  </header>
    ...
</article>

The values for the microdata attributes are specified in the schema vocabulary, except the pubdate value on itemprop which isn’t from schema.org, but is required by Apple for WatchOS because, well, Apple likes to be different.

And that’s basically it. All of this, of course, is taken care of by one WordPress template, so it’s automatic.

Metadata partial copy-paste necrosis for misery and loss

One thing puzzles me, however: Google documentation says that Google Search supports structured data in any of three formats (JSON-LD, RDFa and microdata), but notes “Google recommends using JSON-LD for structured data whenever possible”.

However, no reason is given for preferring JSON-LD except “Google can read JSON-LD data when it is dynamically injected into the page’s contents, such as by JavaScript code or embedded widgets in your content management system”. I guess this could be an advantage, but one of the other “features” of JSON-LD is, in my opinion, a bug:

The markup is not interleaved with the user-visible text
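
For contrast, a JSON-LD rendering of the BlogPosting example above sits in its own script element, away from the visible article (a hand-written sketch, not output from my template):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "The HTML Treasure Hunt",
  "datePublished": "2019-05-20"
}
</script>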

I strongly feel that metadata that is separated from the user-visible data associated with it is highly susceptible to metadata partial copy-paste necrosis. User-visible text is also developer-visible text. When devs copy/paste that, it’s very easy to forget to copy any associated metadata that’s not interleaved, leading to errors. (And Google will penalise errors: structured data will not show up in search results if “The structured data is not representative of the main content of the page, or is potentially misleading”.)

An example of metadata partial copy-paste necrosis can be seen in the commonly-recommended accessible form pattern:

<label for="my-input">Your name:</label>
<input id="my-input"/>

As Thomas Caspars wrote

I’ve contacted chums in Google to ask why JSON-LD is preferred, but had no reply. (I may go as far as trying to “reach out” next time.)

Andrew wrote

I’m pretty sure Google prefers JSON-LD over microdata because it’s easier for them to steal, er, borrow the data for their own use in that format. When I was working on a screen-scraping project a few years ago, I found that to be the case. Since then, I’ve come to believe that schema.org is really about making it easier for the big guys to profit from data collection instead of helping site owners improve their SEO. But I’m probably just being a conspiracy theorist.

Speculation and conspiracy theories aside, until there’s a clear reason why I should use JSON-LD over interleaved microdata, I’m keeping it as it is.

Google replies

Updated 23 May: Dan Brickley, a Google employee who is Lord of Schema.org, wrote this thread on Twitter:

May 20, 2019

The HTML Treasure Hunt by Bruce Lawson (@brucel)

Here are my slides for The HTML Treasure Hunt, my keynote at the International JavaScript Conference last week. They probably don’t make much sense on their own, unfortunately, as I use slides as pointers for me to ramble about the subject, but a video is coming soon, and I’ll post it here.

Update! Here’s the video! Starts at 18:08.

Given that one of my themes was “write less JS and more HTML”, feedback was great! Attendees gave me 4.8 out of 5 for “Quality of the presentation” (against a conference average of 4.0) and 4.9 for “Speaker’s knowledge of the subject” (against an average of 4.5). Comments included:

great talk! reminding of the basics we often forget.

amazing way to start the second day of this conference. inspiring to say the least. great job Bruce

very entertaining and great message. excellent speaker

Thanks, that was a talk a lot of us needed.

Remarkable presentation. Thought provoking, backed with statistics. Well presented.

Very experienced and inspiring speaker. I would really like to incorporate this new ideas (for me) in my code

I think there’s a room full of people going to re-learn HTML after that inspiring talk!

If you’d like me to talk at your event, give training at your organisation, or help CTO your next development project, get in touch!

May 16, 2019

Deprecating yarn by Graham Lee

In which I help Oxford University CS department with their threading issues.

We are looking for a content executive to join our existing digital marketing team. This is an excellent opportunity to join a dynamic and innovative...


May 13, 2019

Niche-audience topic time: if you’re in Oxford Uni, I’m giving a one-day course on collaborative software engineering with git and GitHub (the ideas apply to GitLab, Bitbucket etc. too) on 4th June, 10-3 at the Maths Institute. Look out for information from the OxfordRSE group with a sign-up link!

May 10, 2019

May 08, 2019

While Alan Turing is regarded by many as the grandfather of Artificial Intelligence, George Boole should be entitled to some claim to that epithet too. His Investigation of the Laws of Thought is nothing other than a systematisation of “those universal laws of thought which are the basis of all reasoning”. The regularisation of logic and probability into an algebraic form renders them amenable to the sort of computing that Turing was later to show could be just as well performed mechanically or electronically as with pencil and paper.

But when did people start talking about the logic of binary operations in computers as being due to Boole? Turing appears never to have mentioned his name: although he certainly did talk about the benefits of implementing a computer’s memory as a collection of 0s and 1s, and describe operations thereon, he did not call them Boolean or reference Boole.

In the ACM digital library, Symbolic synthesis of digital computers from 1952 is the earliest use of the word “Boolean”. Irving S. Reed describes a computer as “a Boolean machine” and “an automatic operational filing system” in its abstract. He cites his own technical report from 1951:

Equations (1.33) and (1.35) show that the simple Boolean system, given in (1.34) may be analysed physically by a machine consisting of N clocked flip flops for the dependent variables and suitable physical devices for producing the sum and product of the various variables. Such a machine will be called the simple Boolean machine.

The best examples of simple Boolean machines known to this author are the Maddidas and (or) universal computers being built or considered by Computer Research Corporation, Northrop Aircraft Inc, Hughes Aircraft, Cal. Tech., and others. It is this author’s belief that all the electronic and digital relay computers in existence today may be interpreted as simple Boolean machines if the various elements of these machines are regarded in an appropriate manner, but this has yet to be proved.

So at least in the USA, the correlation between digital computing and Boolean logic was being explored almost as soon as the computer was invented. Though not universally: the book “The Origins of Digital Computers” edited by Brian Randell, with articles from Charles Babbage, Grace Hopper, John Mauchly, and others, doesn’t mention Boole at all. Neither does Von Neumann’s famous “first draft” report on the EDVAC.

So, second question. Why do programmers spell Boole bool? Who first decided that five characters was too many, and that four was just right?

Some early programming languages, like Lisp, don’t have a logical data type at all. Lisp uses the empty list to mean “false” and anything else to mean true. Snobol is weird (he said, surprising nobody). It also doesn’t have a logical type, conditional execution being predicated on whether an operation signals failure. So the “less than” function can return the empty string if a<b, or it can fail.

Fortran has a LOGICAL type, logically. COBOL, being designed to be illogical wherever Fortran is logical, has a level 88 data type. Simula, Algol and Pascal use the word ‘boolean’, modulo capitalisation.

ML definitely has a bool type, but did it always? I can’t see whether it was introduced in Standard ML (1980s-1990), or earlier (1973+). Nonetheless, it does appear that ML is the source of misspelled Booles.

May 03, 2019

Reading List 230 by Bruce Lawson (@brucel)

May 01, 2019

April 29, 2019

Digital Declutter by Graham Lee

I’ve been reading and listening to various books about the attention/surveillance economy, the rise of fascism in the Anglosphere and beyond, and have decided to disconnect from the daily outrage and the impotent swiping of “social” “content”. The most immediately actionable advice came from Cal Newport’s Digital Minimalism. I will therefore be undertaking a digital declutter in May.

Specifically this means:

  • no social media. In fact I haven’t been on most of them all of April, so this is already in play. By continuing it into May, I intend to do a better job of choosing things to do when I’m not hitting refresh.
  • alerts on chat apps on for close friends and family only.
  • streaming TV only when watching with other people.
  • email once per day.
  • no RSS.
  • audiobooks only while driving.
  • Slack once per day.
  • web browsing only when it progresses a specific work, or non-computering, task.
  • at least one walk per day, of at least half an hour, with no technology.
  • phone permanently in Do Not Disturb mode.

It’s possible that I end up blogging more, if that’s what I start thinking of when I’m not browsing the twitters. Or less. We’ll find out over the coming weeks.

My posts for De Programmatica Ipsum are written and scheduled, so service there is not interrupted. And I’m not becoming a hermit, just digitally decluttering. Arrange Office Hours, come to Brum AI, or find me somewhere else, if you want to chat!

My flat has terrible mobile coverage. It’s okaaaay-ish in the living room and dead in the bedrooms, and it’s infuriating. You might be thinking, but Stuart! you live right in the centre of Birmingham! surely coverage in a city centre would be amazing! at which point you will get a look like this

:|

and then I will say, yeah, it’s something to do with the buildings or concrete or something, it’s fine when you’re outside. It’s an old building. Maybe they put copper in the walls like in the Rookery or something. Anyway, whatever, it never worked, regardless of which operator you’re on. So, when I moved in, I contacted my operator, Three, and said: this sucks, do something about it. They said, sure thing, install our Wi-Fi Calling app, Three in-Touch. Which I did, and managed for two years.

I say “managed” because that app is terrible. The UI is clunky, it doesn’t handle picture messages, there’s no way to mark a message read, the phone call quality cuts out and breaks all the time, and most annoyingly when you get an SMS it shows a notification but doesn’t play a sound, so I have no idea that I got a notification.[1] I’ve missed loads of SMSes over the last couple of years because of that.

Anyway, the Three In-Touch[2] app popped up a little dialogue box last week:

We are updating our network. From 15 May 2019, the Three inTouch app will no longer be supported. Your call history and SMS sent or received will remain visible but you will no longer be able to make calls or send or receive SMS through the app. If you delete the app, your call and SMS history will be lost. WiFi calling is already built into most handsets these days and will continue to work in any enabled handset without need of the app

Ah, thought I. WiFi calling is already built in, is it?

You see, there’s a problem with that. On iOS it’s built in. On Android, on a lot of phones, it’s built in only if you bought the phone from your carrier, which I never do. It needs some special bit of config built into the firmware for wifi calling to work.[3] So I thought, well, I’m screwed then. Because Three won’t update my Z5 Compact to have whatever firmware it needs to do wifi calling, they’ve killed the (execrable, but functional) app, and I’m not buying a new phone until the phone industry stops making them the size of a football pitch.[4] I got on the live chat thing with Three, who (predictably) said, yeah, we’re not sending you the firmware, pull the other one, son, it’s got bells on.[5]

And then, a surprise. Because they said, howsabout we send you a Three HomeSignal device? It’s a box that plugs into your wifi and then your phone sees it as a phone antenna and you get good coverage.

And my brain went, what, a femtocell? A femtocell like I asked you about when I first started having coverage problems and you swore blind didn’t exist because everyone has wifi calling now? One of those femtocells?

Having been taught to never look a gift femtocell in the mouth, though, I didn’t say anything except “yes please I’d like that, cheers very much”. And so it arrived in the post two days later. Result.

However, the user guide leaves a bit to be desired.

The Three HomeSignal user guide, which says to plug in the ethernet cable and the power, and does not at all mention that it also needs a SIM card

I do not know why the user guide completely ignores that you need to also plug a SIM card into the HomeSignal device, but it does in fact ignore that. You should not ignore it: you need to do it, otherwise it doesn’t work; you’ll get an error, which is the LED flashing red five times, which means (in the only mention of a SIM anywhere in the user guide) “SIM card problem”. At which point you think: what bloody SIM card?

More confusingly still, Three include two SIM cards in the package. One is a Pay as you Go SIM. This is not the one you want. I don’t know why the hell they put that in; fine, if you wanna sell me on your products, go for it, but you made the HomeSignal process a billion times more confusing. The SIM card that goes into the HomeSignal device is on a special green card, and it says “HomeSignal device only” on it. Put that one in the HomeSignal box. The other one, the Pay as you Go one, you should sellotape it to a brick and then send it back to Three, postage on delivery.

Once you’ve done that, it works. So, Three, if you’re listening: one bonus point for finally deciding to update the awful Three in Touch app. Minus twenty points for not having a replacement for people who didn’t buy phones from you. Plus another twenty points for offering me the Three HomeSignal femtocell which fixes my problem. Minus a little bit for the bad instructions which don’t say that I have to put a SIM card in, and then minus quite a lot more for putting two SIM cards in the box when I do finally work that out! So, on balance, this is probably about neutral; they have fixed a problem I shouldn’t have and which is partially caused by phone manufacturers, using a technically very clever solution which was confusingly explained. Business as usual in the mobile phone world, I suppose, unless you’re using an Apple phone at which point everything works really smoothly until it suddenly doesn’t. One day someone will get all of this right, surely?

  1. They have had, looking at the internet, this reported to them about six hundred billion times
  2. they don’t seem to be anything like consistent about the spelling and punctuation of the name of this app, so I don’t see why I should be either
  3. It seems like this might be fixed in very recent versions of Android? It’s not at all clear. This problem is of course exacerbated by not getting system updates unless your phone is less than a week old.
  4. I saw a bloke on Twitter the other day say rather sneerily, I bet all the people who are mocking folding phones like the Samsung Fold now also mocked the Samsung Note until they realised they like big screens after all. No, sneery bloke. I mock the folding phones now for being a (terribly clever technical) solution in search of a problem, I hated the huge Note when it first came out, and I hate all huge phones now. For god’s sake, phone industry, make a small flagship phone. Just one. I suppose it’ll have to be an Xperia Compact again, but it’d be good if there were competition for it.
  5. They also said that they’re rebuilding the Three inTouch app to be good and work with 5G. Apparently. At some unknown point in the future. I carefully did not point out that if you’re building a replacement for something then there needs to be overlap between the two things rather than an interregnum during which every user is screwed, because that information needs to go to their project manager rather than their support team, but to be clear: to whichever project manager thinks this is an acceptable way to do deployments, I hope someone hides snakes in your car.

April 26, 2019

Reading List 229 by Bruce Lawson (@brucel)

April 22, 2019

Nokē UI Design

A smart security platform for your home

After completing my work with Prempoint I was put in touch with Co-founder and President David Gengler of Nokē. He and his team had a growing product that was in need of some UI design and UX love.

Nokē is a powerful Bluetooth lock platform with integrated, smart locking access control, automated key management and audit trails. Nokē is an essential part of any home and business lock management system.

Nokē UI Design

My first task was to learn all about how their locks and platforms worked. I was sent some Bluetooth locks and got busy testing and working out how I could improve the UX of their platform.

The next task was to create some high fidelity wireframes and talk David and his team through my ideas. I presented my wireframes via a screen share and made a list of feedback, which led to a Q&A session, further exploring the possibilities and features.

Nokē UI Design

I then took some time away from the team and designed some fresh UI designs for both their iOS app and Web App portal.

The designs were well received, which led to me working with the team’s developers, making sure every detail was covered.

Nokē UI Design

This project was a huge step forward for me and my career, as not only was I working on a product that was cross-platform, but also a physical product that had its own limitations and possibilities.

The end product is a clean, easy to use and smart application that I’m very proud of. What’s even more exciting is that my design will be the basis of other areas of their business as they continue to grow and release new products.

“Working with Mike was a real pleasure for my team and me. His work is top notch and he moves quickly through understanding a project to initial designs and incorporates suggestions quickly and efficiently throughout the design process. We will continue to work with Mike on future projects and have already recommended him to others.”

David Gengler. Co-Founder & President Nokē

To find out more about Nokē click here

If you need UI/UX Design make sure you get in touch!

Please note final production designs may differ from those shown.


April 20, 2019

My SEO Story by Mike Hince (@zer0mike)

My SEO Story

A while back I wrote a blog about my 10 years as a freelancer and got some great reactions from my Twitter followers. This is the other side of the story (my SEO story!) on how Google helped me get to where I am today.

It started years ago when I was made redundant and began trading as Sans Deputy Creative, a title that everyone I ever told had trouble with. After a few years I decided to drop the name and went with a website that reflected me and used my name in the title: mikehince.com.

I’m not very technical so I knew the new website had to be a solution I could update easily and landed on a WordPress theme called Goodspace. It ticked all the boxes and was exactly what I needed to get my portfolio live quickly.

Over the years the theme has been customised heavily not only by myself but some really talented developers. Sebastian Lenton, Day Media and Tier One have all helped me get my website to where it is today.

At the time of launching I knew I wanted to design apps and had been designing my own products for a number of years and posting them on Dribbble. I filled my portfolio with a few live products that I’d launched, some websites I’ve designed for clients and a good handful of my own app concepts.

I made use of the brilliant plugin Yoast – making sure each page had a green light for SEO – and BOOM, I was off.

After a few months, remarkably, Google liked my site and boosted me high up the rankings for search terms like “Freelance UI Designer”, “Freelance UX Designer” and “Freelance App Designer”, and I started getting enquiries! In fact they poured in: in 2014/15 I was tracking them and stopped counting at 500 for the year.

That’s when it started to go wrong.

I got busy, I neglected my website, and over the course of two very busy years the visits to my site dropped massively. Now I don’t think it’s as simple as Google falling out of love with my site; I think there are many factors.

One. Google changes its search algorithm all the time, and if you don’t know what they’ve tweaked you may suffer page drops.

Two. There was more competition: more talented designers started covering UI/UX, and the search results were stretched thinner.

Three. Location became a factor: Google favours sites that have location tags, and I kept mine vague on purpose to attract remote clients.

Four. Most importantly… I stopped posting, so Google probably didn’t think my website was active.

So what did I do?

Firstly I contacted Zack Neary-Hayes and purchased a site audit package from him. He quickly identified areas where my site was struggling, including many fine details that I won’t go into here (you should purchase one from him instead!). He provided me with an action list of things to improve, and it was worth every penny.

I got to work fixing as many of the issues as I could myself; for the more technical problems I reached out to Mahtab at TierOne. He fixed errors and changed areas of my site that I’d redesigned to better attract SEO rankings (more internal linking, to name one change).

I started blogging again, I posted new work to my portfolio section and shared my site on social media where possible (without spamming people).

I resubmitted my sitemap and then played the waiting game.

It’s taken about 6–9 months for Google to start picking up my website again, and as of today it’s showing signs of climbing to its former glory once more.

I’ve even started getting new enquiries again complimenting me on my SEO, so it must be working!

Moral of the story: keep posting! Don’t let your website go idle no matter how busy you are.

Thanks for reading!

Credits:
Photo by Tom Grimbert on Unsplash
Photo by William Iven on Unsplash
Photo by Fancycrave on Unsplash


April 18, 2019

I run a company, a mission-driven software consultancy that aims to make it easier and faster to make high-quality software that preserves privacy and freedom. On the homepage you’ll find Research Watch, where I talk about research papers I read. For example, the most recent article is Runtime verification in Erlang by using contracts, which was presented at a conference last year. Articles from the last few decades are discussed: most is from the last couple of years, nothing yet is older than I am.

At de Programmatica Ipsum, I write on “individuals, interactions, and the true valuation of the things on the left” with Adrian Kosmaczewski and a glorious feast of guest writers. The most recent issue was on work, the upcoming issue is on programming history. You can subscribe or buy our back-catalogue to read all the issues.

Anyway, those are other places where you might want to read my writing. If people are interested I could publish their feeds here, but you may as well just check each out yourself :).

OneZone UI Design by Mike Hince (@zer0mike)

OneZone UI Design

A new way to discover your city

OneZone was a client of top-tier development studio Kanso, and as we’ve worked together on various iPhone projects before, CEO Robin called me up and asked me to be involved in the UX and UI design process. Knowing the quality of work Kanso produce, I was happy to jump on board.

OneZone UI Design

OneZone founder Natasha Zone had a strong vision for what she wanted and had already designed a prototype herself, which was the basis of my work on this project.

The team at Kanso, Natasha and I all got together in Cardiff to work through her ideas and formulate a plan of action. This included reviewing the prototype and user journeys, a persona review, and a lengthy lean canvas discussion.

OneZone UI Design

Natasha has a great eye for detail and an excellent sense of clean design, using white space to its advantage.

This played to my strengths, and I was happy to work alongside her to create the screens shown here.

I was involved in refining the UX flow and adding the final touches to the UI Design.

OneZone UI Design

This project was an interesting challenge and I’m really happy with the end results.

All that’s left for me to say is to go download her app on iOS or Android!

If you like this design please share and if you’re in the market to book a freelance UI designer don’t hesitate to contact me.


Half a bee by Graham Lee

When you’re writing Python tutorials, you have to use Monty Python references. It’s the law. On the 40th anniversary of the release of Monty Python’s Life of Brian, I wanted to share this example that I made for collections.defaultdict that doesn’t fit in the tutorial I’m writing. It comes as an homage to the single “Eric the Half a Bee”.

from collections import defaultdict

class HalfABee:
    def __init__(self):
        self.is_a_bee = False
    def value(self):
        self.is_a_bee = not self.is_a_bee
        return "Is a bee" if self.is_a_bee else "Not a bee"

>>> eric = defaultdict(HalfABee().value, {})
>>> print(eric['La di dee'])
Is a bee
>>> del eric['La di dee']  # defaultdict stores the value it generates, so evict it
>>> print(eric['La di dee'])
Not a bee

Dictionaries that can return different values for the same key are a fine example of Job Security-Driven Development.

April 16, 2019

My first commission by Daniel Hollands (@limeblast)

One thing which has been consistent among the projects I've worked on since starting my journey as a maker is that they've all been chosen by me. That changed two weeks ago when I got my first commission.

After sharing my last blog post with a group of my friends and family, one of my oldest friends, Emma, piped up with: “Awesome… I’d like an 8ft x 2ft adjustable frame pin loom and an inkle loom please! X”

The rest of the conversation consisted of comparing Budweiser to tap water

I’d never heard of either of these things before, so set about doing some research, and ultimately settled on building the inkle loom - partly because the pin loom she wanted was huge and would require far more precision than I think I’m currently capable of, and partly because I found a tutorial on how to build the inkle loom, which would be simple enough to complete quickly across a couple of sessions at The Building Block.

Lap Joints

The first challenge was cutting the lap joints to secure the two uprights. In theory, cutting lap joints using a mitre saw is simple enough, but it’s clearly something which requires practice and patience to get right - neither of which I was afforded as a queue of people waiting to use the single mitre saw quickly formed behind me.

My attempt during the session resulted in cuts which were too wide and too deep, but nothing a little (or a lot) of glue and some brad nails couldn’t fix.

Too wide…
…and too deep

I tried again on some scraps the following weekend using my own mitre saw, and feel that I was more successful as I had more time to concentrate, but I still had less than desirable results, partly because there’s too much give in the stop block built into my mitre saw, meaning it was too easy to press down a bit too hard and go deeper than intended.

This is a technique that I’m keen on perfecting, as I can see myself using it a lot, but think I’d have more success using a table saw instead, and plan on practicing that particular technique as soon as I’m able to.

Tension rod

Next up was cutting a slot for the tension rod to fit into. There were multiple techniques I could have used for this, such as drilling two holes at either end of the slot, and using a jigsaw to join them together, but the instructor suggested using a plunge router with a guard fitted to guide it along the correct path.

Although I have a plunge router (which I purchased from Aldi a couple of years ago), I’d only ever used it the once, and even then only to round over some edges, meaning I’d never taken advantage of the plunge functionality to start a cut in the middle of the wood.

The key to this was to take several passes, each going no deeper than about 5mm beyond the previous level. This meant it took something like ten passes to get all the way through the wood, but I was somewhat pleased with the outcome.

A solid seven out of ten

Affixing the dowels

Lastly I needed to use a spade bit to cut holes for the dowels using a hand drill. This was relatively easy, making sure to use a scrap of waste wood underneath to (try to) avoid splinters.

Some were more successful than others

The next time I need a large hole, I think I’ll try using a Forstner bit, preferably with a drill press.

The assembly

All that remained was to glue everything together, making sure the joints held tightly with the liberal use of brad nails while the glue dried. I love the speed of the brad nailer, but from an aesthetics point of view I don’t think you can top the look of screws.

Mind you, depending on the application of a project, it may be preferable to hide all fasteners of this nature - but as this is just a functional tool, and the first time I’ve ever built something of this nature, I’m sure I’ll be forgiven some visible brads.

Still to do

Just about the only thing currently missing from the build is the tension rod which goes into the slot. I have a suitable piece of wood for this, and Emma is going to bring some bolts, washers, and wing nuts with her, so a quick hole in the end of the wood to accept the head of the bolt, affixed with some gorilla glue, should work nicely.

See update below.

Conclusion

All in all, I think this is the shoddiest thing I’ve ever built. This is partly because I’ve never used a lot of these techniques before, but also because there’s only so much you can do within a two hour window once a week, and as Emma is coming to collect it en route to Wales tomorrow, it was more important to get it finished.

Throughout this post I’ve mentioned things that I’d do differently. In addition to these, I think I’d scale everything down slightly, so the functional size of the device is the same, but made from thinner pieces of wood, with thinner dowels - although that will largely depend on the feedback I get from Emma as she learns to weave using it.

That’s right, she’s never used an inkle loom before - so I’m not sure if her learning experience on my build is the best introduction - but there you go.

In any case, I plan on practicing the techniques learned during the build, taking her feedback into account, and making a second, far nicer one, at some point within the next year or so.

In the meantime, Emma has promised some photos of the loom in action, so while we wait for them, I’m going to enjoy drinking the craft ale she’s promised me ;)

Update

Emma has been and gone, collecting the loom and leaving me with a box of nuts and bolts, with which I have since completed the tension rod.

I drilled a hole in the end of my dowel using a forstner bit, and using masking tape as a makeshift clamp, glued the bolt into position with some Gorilla glue. 24 hours later, upon the removal of the masking tape, I found what looked like a cheerio where the glue had expanded.

Looks good enough to eat

My Dremel made short work of this expansion, but this left a cavity around the base of the bolt, which I wasn’t entirely confident would be strong enough. To resolve this issue I decided to use some epoxy resin to fill the cavity and seal off the end with one of the washers.

I’d not used epoxy before, so was quite taken aback by the smell, but 60 seconds of stirring the two parts together, followed by another 60-odd seconds of gingerly trying to dribble resin into the cavity before sliding the washer down the bolt, resulted in something that didn’t look half bad.

Another solid seven out of ten

Although I’ve been unable to test this, it should be possible to simply slide the bolt through the slot, and use another washer in conjunction with a wing nut to tighten it into position.

So while she waits for this final component to arrive in the mail, I’ll be sure to enjoy my reward:

Who am I kidding, I’ve already drunk them all

New Swift hardware by Graham Lee

A nesting tower for swifts

The Swift Tower is an artificial nesting structure, installed in Oxford University parks. That or a very blatant sponsorship deal.

April 12, 2019

Reading List 228 by Bruce Lawson (@brucel)

April 11, 2019

Why is it we’re not allowed to call the Apple guy “Tim Apple” when everybody calls the O’Reilly guy “Tim O’Reilly”?

April 05, 2019

Pythonicity by Graham Lee

The same community that says:

There should be one– and preferably only one –obvious way to do it.

Also says:

So essentially when someone says something is unpythonic, they are saying that the code could be re-written in a way that is a better fit for pythons coding style.

April 03, 2019

Figma – First Impressions

Over the last few months I’ve been exploring Sketch alternatives for UI/UX design. I’ve looked at InVision Studio and FramerX, and thought it was about time I tried Figma - a browser-based UI design tool.


Figma is very similar to Sketch, FramerX and InVision Studio, but with some seriously powerful differences - the main one being that it’s primarily browser-based, which is a blessing for PC users.

Figma does have Mac and PC desktop apps too, and from my testing they work very well. I found the browser app to be far more memory intensive; when I switched to the desktop app those issues went away.

Of course, it could just be Safari being an idiot (yes, yes, I have Chrome, I just forget to use it) and, as a disclaimer, I was hammering YouTube at the same time.



Figma is a real-time piece of software, meaning teams of UI designers, UX designers and product managers can all work on the same document at the same time.

This is incredibly powerful. Imagine a copywriter signing in and just tinkering with your words, without the need for version control or handing over complicated files… oh, and it’s in the browser, so it works for EVERYONE.

Like InVision and Framer you can also invite your developer buddies to take a look at the code the app produces. Again, it’s right there in the browser.



Figma also has the stuff you’d expect, like vector editing, prototyping, colour management, styles and commenting, but it also boasts built-in design libraries - again, brilliant for teams - no need for shared drives or additional tools.

Figma has some incredible teams using their software: Slack, Microsoft, Zoom and Intercom, to name a few.



As a long-time Sketch user, Figma just felt really fresh and exciting. The possibilities are endless, so much so that I was promoting it on a new client call just hours after getting the hang of it. I love the idea of giving access to a live document that clients, developers and team members can follow along with.

Design is less final these days – you have to iterate.

I have to say, Figma is really exciting to me right now and the closest product yet to make me consider moving away from Sketch.

The thing is… Sketch just raised a $20 million Series A round, which no doubt will see them bringing in some of these missing features, so we’ll just have to wait and see.


April 02, 2019

I made a box by Daniel Hollands (@limeblast)

Last night I attended the first of five evening woodworking classes at The Building Block, a local community center in Worcester.

As anyone that’s read my review in Hackspace magazine knows, I’m currently learning woodworking via an online course, so when I discovered there was a locally run evening class I jumped at the chance to attend (dragging my friend Kathryn along for the ride).

After the prerequisite health and safety bits (including being issued with safety glasses), and a brief introduction on how to use the tools available, we were challenged with making a box.

It was at this point that we were given a top tip - apparently tongue & groove flooring planks are cheaper to buy than regular planks of wood, meaning all you need to do is remove the aforementioned tongue and groove using the table saw, and you’ve got yourself a perfectly good plank of wood. Personally I’m skeptical of this, but flooring wood is what we were given to work with, so off we set ripping it down.

This is the first time I’ve ever used a table saw. I almost got to use one back in December, when I was given access by a member of the Cheltenham Hackspace to build some wall-mounted bottle openers, but on that occasion the wood was cut for me. The funny thing is that I’ve actually owned a table saw for around three weeks now (purchased in a flash sale from Amazon), but haven’t had the space to do anything more than check it spins.

Which brings me to one of the main benefits of a local evening class versus anything online - they’ll generally supply everything you need. The £140 it cost for the five sessions of evening classes is considerably less money than I’ve spent tooling up for the online one, and while the tools I’ve obtained are now mine to keep - it’s an expensive outlay for a hobby that you’re just dipping your toe into.

Anyway, in addition to the table saw, we had access to drills and drivers, a miter saw, a router, a brad nailer, and a pocket hole jig, the latter two of which I’d also never used before. The box was a simple affair, consisting of four sides cut on the miter saw, glued together on the edges, with the brad nails holding it all together while the glue dried.

Not the most elegant thing, but a good project to use the tools for, and one which you won’t worry too much about getting wrong.

My box, in all its glory

While we’re free to continue building boxes for the remainder of the sessions, the general idea is that we choose a project to work on. For my own part, I think I’m interested in learning to make frames, but I also spied a lathe in the corner, which I’m super keen to play with, as it was watching turning videos on YouTube by people like Peter Brown that piqued my interest in woodworking in the first place.

Watch this space, and I’ll post more updates on my progress in the coming weeks.

April 01, 2019

The ray-traced pictures by Stuart Langridge (@sil)

A two-decade-long search is over.

A couple of years ago I wrote up the efforts required to recover a tiny sound demo for the Archimedes computer. In there, I made brief mention of a sample from an Arc sound editor named Armadeus, of a man saying “the mask, the ray-traced pictures, and finally the wire-frame city”. That wasn’t an idle comment. Bill and I have been looking for that sample for twenty years.

You’re thinking, I bet it’s not been twenty years. And you would be wrong. Here’s Bill posting to comp.sys.acorn in 2003, for a start.

My daughter knows about this sample. Jono knows about it. I use the phrase as a microphone test sentence, the same way other people use “testing, testing, 1, 2”. It’s lived in my head since I was in middle school, there on the Arc machines in CL0, which was Computer Lab Zero. (There was a CL1, which had the BBC Micros in it, on the first floor. I never did know whether CL0, which was on the ground floor, was a subtle joke about floor levels and computers’ zero-based numbering schemes, or if Mr Irons the teacher was just weird. He might have been weird. He thought the name for an exclamation mark was “pling”.)

Anyway, we got to talking about it again, and Bill said: to hell with this, I’ll just buy Armadeus. This act of selfless heroism earns him a gold medal. And a pint, yes it does. I’ll let him tell the story about that, and the mindblowing worthlessness of modern floppy drives, in his own time. But today it arrived, and now I have an mp3!

Interestingly, I thought it was in the other order. The sample actually says: “The ray-traced pictures. The mask. And finally, the wire-frame city.” I thought “the mask” was first, and it isn’t. Still, memory plays tricks on you after this many years. Apparently it’s from a Clares sound and music demo originally (Clares were the (defunct) company that made Armadeus. The name appears to have risen from the dead a couple of times since. No idea who they are now, if anyone.) Anyway, I don’t care; I’ve got it now. And it’s on my website, which means it will never go away. We found it once before and then lost the sample; I’m not making the same mistake again. Is this how Indy felt when he found the Ark?

Also, a shout out to arcem, which is an Archimedes emulator which runs fine on Ubuntu. It’s a pain in the bum to set up — you have to compile it, find ROMs, turn on sound support, use weird keypresses, set up hard drives in an incomprehensible text file, etc; someone ought to snap it or something so it’s easier — but it’s been rather nice revisiting a lot of the Arc software that’s still collected and around for download. Desktop sillies. Someone should bring desktop sillies back to modern desktops. And reconnecting to Arcade BBS, who need to fix all their FTP links to point to telnet.arcade-bbs.net rather than the now-dead arcade.demon.co.uk. I got to watch !DeskDuck again, which was a small mallard duck that swam up and down along the top of your icon bar. And a bunch of old demos from BIA and Arcangel and so on. I’d forgotten a bit how much I liked RISC OS, and in particular that it’s so fast to start up. Bring that back.

Nice one Bill. Time for a pint.

It turns out that Docker has an internal Domain Name Service (DNS). Did you know? It’s new to me too! I learnt the hard way while using a VPN.


This is the error that I found within a container:

persona_1_8a84f3f41190 | 2019/02/25 08:51:14.138610 [WARN] (view) kv.list(global):
Get http://consul:8500/v1/kv/global?recurse=&stale=&wait=60000ms: dial tcp: lookup
consul on 127.0.0.11:53: no such host (retry attempt 12 after "1m0s")

The error states that the domain name consul, which is the name of one of my containers, couldn’t be found by using the DNS at 127.0.0.11 on port 53. But I don’t run a DNS at 127.0.0.11?

Searching for 127.0.0.11:53 led me to Docker - Configure DNS, which states: “Note: The DNS server is always at 127.0.0.11”. Huh, OK, I guess Docker has an internal DNS. And sure enough, I can connect to the service from within a container.
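
For example, you can ask the embedded DNS to resolve a container name from inside a container (assuming the image has nslookup installed, which not every image does):

# ask Docker's embedded DNS at 127.0.0.11 to resolve the container name consul
docker exec -it persona_1_8a84f3f41190 nslookup consul 127.0.0.11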

The error seems to go away if I stop the VPN and recycle the containers, how odd.

The reason I was using the VPN was to be able to SSH into an AWS EC2 instance, which is only accessible through the VPN. I wonder if the VPN alters my host’s DNS settings?

cat /etc/resolv.conf
nameserver 172.16.0.23
nameserver 127.0.0.53

I have two DNS entries, one for my local Stubby DNS 127.0.0.53, and 172.16.0.23. Who is 172.16.0.23?

A quick search shows that AWS uses 172.16.0.23 as their internal DNS. The reason we would want to use their DNS is to resolve internal domain names. 172.16.x.x addresses are in a private range, which here is only accessible through the VPN.

This leaves two questions: which upstream DNS does Docker use to resolve queries, and why would my VPN configuration affect it?

By default, a container inherits the DNS settings of the Docker daemon, including the /etc/hosts and /etc/resolv.conf. You can override these settings on a per-container basis.

This would mean that any DNS queries to 127.0.0.11 would be forwarded to the private AWS DNS 172.16.0.23 that my VPN configured, which would result in a timeout, as Docker’s traffic isn’t routed through the VPN.

The solution suggested in Fix Docker Networking DNS is to override the DNS servers the daemon uses:

/etc/docker/daemon.json

{
    "dns": ["8.8.8.8", "8.8.4.4"]
}

By setting the DNS IP addresses to public DNS addresses, we avoid inheriting an address which is unreachable because its traffic is not routed through the VPN.
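
As an aside, if changing the daemon-wide configuration feels heavy-handed, the same override can be applied to a single container using Docker’s --dns flag (the image name below is just a placeholder):

# per-container override: resolve external names via Google's public DNS
docker run --dns 8.8.8.8 --dns 8.8.4.4 my-image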

The configuration above resolves the timeout issue. It would seem the internal Docker DNS stops attempting to resolve a request (even for internal names, such as container names) if any of the supplied DNS servers fails with a timeout.

We are still early in the journey of applying Docker to our microservice architecture. In the future we will follow up with more articles covering different subjects regarding Docker and our next adventure, Kubernetes.

March 29, 2019

Reading List 227 by Bruce Lawson (@brucel)

March 28, 2019

There’s more to it by Graham Lee

We saw in Apple’s latest media event a lot of focus on privacy. They run machine learning inferences locally so they can avoid uploading photos to the cloud (though Photo Stream means they’ll get there sooner or later anyway). My Twitter stream frequently features adverts from Apple, saying “we don’t sell your data”.

Of course, none of the companies that Apple are having a dig at “sell your data”, either. That’s an old-world way of understanding advertising, when unscrupulous magazine publishers may have sold their mailing lists to bulk mail senders.

These days, it’s more like the postal service says “we know which people we deliver National Geographic to, so give us your bulk mail and we’ll make sure it gets to the best people”. Only in addition to National Geographic, they’re looking at kids’ comics, past due demands, royalty cheques, postcards from holiday destinations, and of course photos back from the developers.

To truly break the surveillance capitalism economy and give me control of my data, Apple can’t merely give me a private phone. But that is all they can do, hence the focus.

Going back to the analogy of postal advertising, Apple offer a secure PO Box service where nobody knows what mail I’ve got. But the surveillance-industrial complex still knows what mail they deliver to that box, and what mail gets picked up from there. To go full thermonuclear war, as promised, we would need to get applications (including web apps) onto privacy-supporting backend platforms.

But Apple stopped selling Xserve, Mac Mini Server, and Mac Pro Server years ago. Mojave Server no longer contains: well, frankly, it no longer contains the server bits. And because they don’t have a server solution, they can’t tell you how to do your server solution. They can’t say “don’t use Google cloud, it means you’re giving your customers’ data to the surveillance-industrial complex”, because that’s anticompetitive.

At the Labrary, I run my own Nextcloud for file sharing, contacts, calendars, tasks etc. I host code on my own gitlab. I run my own mail service. That’s all work that other companies wouldn’t take on, expertise that’s not core to my business. But it does mean I know where all company-related data is, and that it’s not being shared with the surveillance-industrial complex. Not by me, anyway.

There’s more to Apple’s thermonuclear war on the surveillance-industrial complex than selling privacy-supporting edge devices. That small part of the overall problem supports a trillion-dollar company.

It seems like there’s a lot that could be interesting in the gap.

March 22, 2019

Reading List 226 by Bruce Lawson (@brucel)

You’ve branched off a branch. Unfortunately the original branch is not ready to be merged and you need to promote your branch to a first class citizen, branched off of master.

However, when you raise the pull request for the second branch, you get all the changes from the first! Of course it does - you’ve branched off a branch!

You could wait till the original branch is ready, but alternatively, you can use git rebase to help you.


This is how your branch setup might look:

--master
        \
         branchA
                \
                 branchB

And you want it to look like this:

--master
        \
         branchB

Here is a useful (and, I’ve found, much under-documented) CLI command that you might want to try out:

git rebase --onto master branchA branchB

This moves branchB off branchA and onto master. You could do this with other branches too - but that could become really complicated!

You’ll also need to do a git push -f origin branchB to force the update - and now when you raise a pull request, you will only pick up the changes from branchB.
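
Before force-pushing, it’s worth sanity-checking the rewritten history. Something like this (plain git, shown as a sketch) should show branchB’s commits sitting directly on top of master, with none of branchA’s in between:

git log --oneline --graph master branchB

As an aside, git push --force-with-lease origin branchB is a slightly safer alternative to -f, as it refuses to overwrite any commits on the remote that you haven’t already seen.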

When might this be useful? One example might be when you have long lived branches and you need to move subsequent branches from it and onto master for merging.

Do you have any git tips you can share? If you do, let us know your “Git Tip of the Day” via Twitter.

March 20, 2019

We were promised a bicycle for our minds. What we got was more like a highly-efficient, privately run mass transit tunnel. It takes us where it’s going, assuming we pay the owner. Want to go somewhere else? Tough. Can’t afford to take part? Tough.

Bicycles have a complicated place in society. Right outside this building is one of London’s cycle superhighways, designed to make it easier and safer to cycle across London. However, as Amsterdam found, you also need to change the people if you want to make cycling safer.

Changing the people is, perhaps, where the wheels fell off the computing bicycle. Imagine that you have some lofty goal, say, to organise the world’s information and make it universally accessible and useful. Then you discover how expensive that is. Then you discover that people will pay you to tell people that their information is more universally accessible and useful than some other information. Then you discover that if you just quickly give people information that’s engaging, rather than accessible and useful, they come back for more. Then you discover that the people who were paying you will pay you to tell people that their information is more engaging.

Then you don’t have a bicycle for the mind any more, you have a hyperloop for the mind. And that’s depressing. But where there’s a problem, there’s an opportunity: you can also buy your mindfulness meditation directly from your mind-hyperloop, with of course a suitable share of the subscription fee going straight to the platform vendor. No point using a computer to fix a problem if a trillion-dollar multinational isn’t going to profit (and of course transmit, collect, maintain, process, and use all associated information, including passing it to their subsidiaries and service partners) from it!

It’s commonplace for people to look backward at this point. The “bicycle for our minds” quote comes from 1990, so maybe we need to recapture some of the computing magic from 1990? Maybe. What’s more important is that we accept that “forward” doesn’t necessarily mean continuing in the direction we took to get to here. There are those who say that denying the rights of surveillance capitalists and other trillion-dollar multinationals to their (pie minus tiny slice that trickles down to us) is modern-day Luddism.

It’s a better analogy than they realise. Luddites, and contemporary protestors, were not anti-technology. Many were technologists, skilled machine workers at the forefront of the industrial revolution. What they protested against was the use of machines to circumvent labour laws and to produce low-quality goods that were not reflective of their crafts. The gig economies, zero-hours contracts, and engagement drivers of their day.

We don’t need to recall the heyday of the microcomputer: they really were devices of limited capability that gave a limited share of the population an insight into what computers could do, one day, if they were highly willing to work at it. Penny farthings for middle-class minds, maybe. But we do need to say hold on, these machines are being used to circumvent labour laws, or democracy, or individual expression, or human intellect, and we can put the machinery to better use. Don’t smash the machines, smash the systems that made the machines.

March 15, 2019

Reading List 225 by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my lacy red manties so I can spend time reading stuff.

March 13, 2019

Ratio by Graham Lee

The web has a weird history with comments. I have a book called Zero Comments, a critique of blog culture from 2008. It opens by quoting from a 2005 post from a now defunct website, stodge.org. The Wayback Machine does not capture the original post, so here is the quote as lifted from the book:

In the world of blogging ‘0 Comments’ is an unambiguous statistic that means absolutely nobody cares. The awful truth about blogging is that there are far more people who write blogs than who actually read blogs.

Hmm. If somebody comments on your blog, it means that they care about what you’re saying. What’s the correct thing to do to people who care about your output? In 2011, the answer was to push them away:

It’s been a very difficult decision (I love reading comments on my articles, and they’re almost unfailingly insightful and valuable), but I’ve finally switched comments off.

I experimented with Comments Off, then ultimately turned them back on in 2014:

having comments switched off dilutes the experience for those people who did want to see what people were talking about. There’d be some chat over on twitter (some of which mentions me, some of which doesn’t), and some over on the blog’s Facebook page. Then people will mention the posts on their favourite forums like Reddit, and a different conversation would happen over there. None of that will stop with comments on, and I wouldn’t want to stop it. Having comments here should guide people, without forcing them, to comment where everyone can see them.

This analysis still holds. People comment on my posts over at Hacker News and similar sites, whether I post them there or not. The sorts of comments that you would expect from Hacker News commenters, therefore, rarely appear here. They appear there. I can’t stop that. I can’t discourage it. I can merely offer an alternative.

In 2019 people talk about the Ratio:

While opinions on the exact numerical specifications of The Ratio vary, in short, it goes something like this: If the number of replies to a tweet vastly outpaces its engagement in terms of likes and retweets, then something has gone horribly wrong.

So now saying something that people want to talk about, which in 2005 was a sign that they cared, is a sign that you messed up. The goal is to say things that people don’t care about, but will uncritically share or like. If too many people comment, you’ve been ratioed.

I don’t really have a “solution”: there may be human solutions to technical problems, but there aren’t technical solutions to human problems. And it seems that the humans on the web have a problem that we want an indication that people are interested in what we say, but not too much of an indication, or too much interest.

March 12, 2019

March 10, 2019

David Heinemeier Hansson and Jason Fried of Basecamp and Signal v. Noise lay out how they achieve calm at Basecamp and how other companies can make the choice to do the same.

They point to the long hours, stolen weekends and barbed perks that run rampant in tech and say “it doesn’t have to be this way!”

Growth at all costs. Companies that demand the world of your time, and then steal it away with meetings and interruptions. Companies that coerce or bribe you to spend most of your waking hours with your nose to the grindstone (because after all, this company is like a family, right?)

It Doesn’t Have To Be Crazy At Work discusses these problems and proposes a better way of doing things. DHH and Jason Fried share the solutions they’ve found to work at Basecamp, tackling issues from big-picture ambitions to lower-level project management, and from hiring to perks and payroll - but they make no decrees as to how your company ought to be run. That’s for you to iterate on with your own team.

Most of the stuff I have read about taming work feels very prescriptive. Do this. Do that. Try these processes. Add enough kanban boards and everything will click into place and you will finally be able to breathe. It Doesn’t Have To Be Crazy At Work takes the opposite approach and encourages you to look at the things that you don’t need to do. Make time for the important things by stripping away the inessential.

It’s a pretty short read, and worth a look for anyone who feels like work is a little too hectic.

March 08, 2019

I write this with absolutely zero hyperbole when I say that March couldn’t have come soon enough. February was a great month for stickee, don’t...

The post Stepping up to the charity challenge appeared first on stickee.

Reading List 224 by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my lacy red manties so I can spend time reading stuff.

This post delves into some of the pitfalls encountered when working with JavaScript promises, suggests some improvements and looks at a brief glimpse of the future of asynchronous programming in JavaScript.

This was first presented at one of our internal tech team talks and encouraged our CTO to rethink the coding of our latest JavaScript project here at Talis.


Some Background

Asynchronous workflows can be difficult to read and maintain. Callbacks suffer from these pitfalls because each step in the workflow adds another level of indentation, which exacerbates the problem. Nesting is best left to birds! It’s also possible to forget to call a callback, meaning that a process is left hanging. Additionally, what happens when you want to pass an error back instead of data?

Promises are an improvement on this predicament. They return a handle on an asynchronous event that will eventually complete or fail, and they also enforce the separation of success and error handling code. However, you can still fall into the same pitfalls of nesting and forgetting to fulfill or reject a promise.

The introduction of async/await to the ECMAScript 2017 edition of JavaScript takes a big stride in improving asynchronous workflow management by allowing code to be written to make the workflow appear synchronous. There is also the added bonus of them being interoperable with promises.

Whilst these improvements are welcome, we’re not all in the position of being on the latest and greatest version of JavaScript. Many of us also have to maintain codebases written some time in the past. This blog post provides some insight on our journey from callbacks to async/await, by first getting our promises in order.

Broken Promises

It’s still possible to get into similar trouble with promises as we can with callbacks. Consider the following example of a request -> response workflow in a node controller class:

process(request) {
    const tenantCode = request.path.match(/\w*/g)[1];

    return new Promise((resolve, reject) => {
        this.deserialiseRequest(request).then((model) => {
            this.validateRequest(tenantCode, model).then(() => {
                this.persistData(model).then((persisted) => {
                    this.serialiseResponse(tenantCode, persisted).then((response) => {
                        resolve(response);
                    });
                }).catch((persistError) => {
                    this.serialiseResponse(persistError);
                });
            }).catch((validationError) => {
                reject(validationError);
            });
        }).catch((deserialisationError) => {
            this.serialiseResponse(deserialisationError);
        });
    });
}

There are a few things going on here. A request is being deserialised into a model, then validated, then persisted, then the result of that persistence is being serialised into a response. Each class method is returning a promise that can either be resolved or rejected (triggering then and catch respectively).

Along the way, there are at least three error scenarios. One thing that is immediately striking is that we still have nesting, just like callbacks! It’s also hard to spot what error can be thrown where.

For example, if the validationError catch block was removed, what do you think would happen when a validation error did occur? The catch block below will not come to the rescue here. The validateRequest promise rejection will not be handled, and this is bad news.

Fatal Rejections

Handling errors in promises is a big deal. We noticed this deprecation warning appear in Node 4:

UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch()

DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

It’s clear from this that knowing how and where your promise workflow handles errors is key to avoiding unexpected problems at runtime.

Thankfully, there is a safety net you can use by listening to the unhandledRejection and rejectionHandled process events. A minimal sketch, with purely illustrative handler bodies:
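
process.on('unhandledRejection', (reason, promise) => {
    // a promise rejected and nothing has handled the rejection yet
    console.error('Unhandled rejection:', reason);
});

process.on('rejectionHandled', (promise) => {
    // a rejection previously reported as unhandled has now been handled late
    console.info('Rejection handled after the fact');
});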

Rose-linted Glasses

Using a linting tool such as ESLint combined with a promises plugin can help indicate to a developer that they’re not writing promises in the most appropriate way. The example above reveals the following, often repeated, violations:

[eslint] Each then() should return a value or throw [promise/always-return]
[eslint] Avoid nesting promises. [promise/no-nesting]
[eslint] Avoid nesting promises. [promise/no-nesting]
[eslint] Each then() should return a value or throw [promise/always-return]
[eslint] Avoid nesting promises. [promise/no-nesting]
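
Wiring this up is a small amount of configuration. A sketch of an .eslintrc.json, assuming the eslint-plugin-promise package:

{
    "plugins": ["promise"],
    "extends": ["plugin:promise/recommended"]
}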

The Flat-chaining pattern

In order to remove the nesting and allow the workflow logic to be seen more easily, we have started to introduce flat promise-chaining. The idea is that each promise takes an input, does something with it, and then passes it on to the next promise.

Now, the request -> response workflow looks like this:

process(request) {
  return this
    .deserialiseRequest(request)
    .then(this.validateRequest)
    .then(this.persistData.bind(this))
    .then(this.serialiseResponse);
}

Each promise can be seen as a step in the workflow. It should only do one small thing, to keep the step simple - there could be many steps involved, after all. When you resolve a promise it only takes a single argument, so you can use an object to capture the data as it is passed along the chain. The spread operator and destructuring syntax make this easier to achieve:

validateRequest(input) {
    return new Promise((resolve, reject) => {
        const { model, tenantCode } = input;
        // ... do something with input
        return resolve({ ...input, additionalData: true });
    });
}

If any promise in the chain encounters a problem, it can throw an error and a single catch function can handle any of these at the end of the chain:

process(request).catch(serialiseError);

The Future

The future looks better thanks to async/await. An example request -> response workflow can now look like this, bringing greater legibility to the code:

const modelWithTenantCode = await this.deserialiseRequest(request);
await this.validateRequest(modelWithTenantCode);
const persisted = await this.persistData(modelWithTenantCode);
return await this.serialiseResponse(persisted);

You can surround this block with a single try/catch as well, to handle any errors.
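
For instance, reusing the serialiseError helper from earlier, the whole workflow might be wrapped like this (a sketch rather than the exact project code):

async process(request) {
    try {
        const modelWithTenantCode = await this.deserialiseRequest(request);
        await this.validateRequest(modelWithTenantCode);
        const persisted = await this.persistData(modelWithTenantCode);
        return await this.serialiseResponse(persisted);
    } catch (error) {
        // any rejection from any of the awaited steps lands here
        return serialiseError(error);
    }
}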

If you are using a version of Node that supports this, then what are you waiting for! Our CTO converted a project full of callbacks to async/await and finds reading and reasoning with the code far easier as a result.

If you’re running an older version of Node then you’ll be encouraged to know that async/await is interoperable with promises. This means the promises you write now will work in async/await workflows without any effort, making the upgrade path far more manageable.

The main takeaway from this journey has been that asynchronous programming can be hard to read and reason with. We think through workflows in linear terms but they don’t always turn out this way in JavaScript. Thankfully the standard is improving and lessons are being learned.

All of the examples in the blog post are available in full on GitHub

March 06, 2019

It was around 6:58pm when I stood up. The room only had eight people in it, but they were all looking at me, expectantly. Fuelled by the pint of stout I’d just polished off, I began to talk.

When I’m not building websites or playing with wood, I run a local meetup group called Worcester Source. On the 4th Wednesday of each month, the members of the group gather at The Paul Pry pub, and after eating some pizza and drinking some beer, they stop and listen to a presentation given by a speaker on a technical subject of interest.

Generally, the speaker in question is a guest from a neighbouring community or a developer evangelist from this or that company, and they talk about some interesting piece of technology or maybe a new tool that has hit the market - but on this dark February night there was no guest, there was only me - and I was about to present a talk on achieving your goals.

I gave the talk, and received lots of positive feedback, so I wanted to share it here.

The slides for the talk are available here, with the speaker notes acting as my script. You can navigate the slides by clicking the down arrow each time it’s shown, then clicking the right arrow when you reach the bottom.

I hope you enjoy the presentation, even if you didn’t get to see me present it live. If you’re interested, you can read the post that inspired the talk here.

The word ‘demagogue’

refers to someone who may be charismatic and often bombastic, and is able to use his oratorical skills to appeal to the baser, more negative side of people’s feelings

It’s an apt word for the right-wingest of the Brexiteers who are marching to leave. The way they speak about the European negotiators is simply rude: “You must live in Narnia, Michel Barnier!” and “Get back in your bunker, Jean-Claude Juncker!”, both on the March to Leave home page.

Silly people.

In production mode, Rails 4 uses the uglifier gem to compress your JavaScript code into something nice and small to send over the wire. However, when you’ve got a syntax error, it can fail rather… ungracefully.

You get a big stack trace that tells you that you’ve got a syntax error. It even helpfully tells you what the error was! Unexpected punctuation. There’s a comma, somewhere. Where? Who knows. It doesn’t tell you.

If you have a large project, or have multiple people making large changes, it can be a little time consuming checking the diffs of recent commits to figure out where the error was introduced. Here is how you can use the rails console to at least find the offending file:

Since the uglifier doesn’t run in development, let’s drop into the rails console in production mode:

RAILS_ENV=production rails c

Now we can use the following snippet of code to find our errors:

# Take each javascript file...
Dir["app/assets/javascripts/**/*.js"].each do |file_name|
  begin
    # ... run it through the uglifier on its own...
    print "."
    Uglifier.compile(File.read(file_name))
  rescue StandardError => e
    # ... and tell us when there's a problem!
    puts "\nError compiling #{file_name}: #{e}\n"
  end
end

This saved me a little time recently, so I hope it will be handy to someone else.

March 04, 2019

Freelance Tip: Follow your Gut

Some things aren’t taught they are learned. Your college or university lecturer wouldn’t (couldn’t!) teach you things you have to learn for yourself. If I was to create a list of things I’ve learned the hard way it would be a whole blog series.

designers: follow your gut

When you’re new to freelancing you chase every project going, which has its problems. If you say yes to everything you may find yourself struggling for inspiration, which then affects your productivity.

TL;DR: When you’re seeking a new project make sure you follow your gut. It may save you time and money.

Advice: It’s OK to accept shitty jobs. We all have to at some point. Just don’t make it a regular occurrence.

There is one thing that isn’t taught in design lessons that is really simple to follow, and it will make your working life a whole lot easier.

Follow your gut.

Sounds simple, right? But what are the factors?

One. You may not get along with the clients you’re talking to.

Two. The project might be out of your comfort zone.

Three. You may be OK with pushing your comfort zone, but the risk of failing is too high.

Four. The client wants a bargain.

Five. You don’t have time to do the project justice.

So it’s much more complicated than this, and your gut gets stronger over time; experience is critical to letting your gut learn.

Side note: Read about my experience. 10 years as a Freelance Designer.

Advice: It’s OK to make mistakes, just make sure you learn from them and be honest with your clients.

frustration

To round this blog off, here are some tips from my Twitter network. If you want to contribute connect with me on Twitter.

Thanks to the contributions from everyone above.

Featured Photo by Helena Lopes on Unsplash
Egg Photo by KS KYUNG on Unsplash


March 01, 2019

The balloon goes up by Graham Lee

To this day, many Smalltalk projects have a hot air balloon in their logo. These reference the cover of the issue of Byte Magazine in which Smalltalk-80 was shared with the wider programming community.

A hot air balloon bearing the word "Smalltalk" sails over a castle on a small island.

Modern Smalltalks all have a lot in common with Smalltalk-80. Why? If you compare Smalltalk-72 with Smalltalk-80 there’s a huge amount of evolution. So why does Cincom Smalltalk or Amber Smalltalk or Squeak or even Pharo still look quite a lot like Smalltalk-80?

My answer is because they are used. Actually, Alan’s answer too:

Basically what happened is this vehicle became more and more a programmer’s vehicle and less and less a children’s vehicle—the version that got put out, Smalltalk ’80, I don’t think it was ever programmed by a child. I don’t think it could have been programmed by a child because it had lost some of its amenities, even as it gained pragmatic power.

So the death of Smalltalk in a way came as soon as it got recognized by real programmers as being something useful; they made it into more of their own image, and it started losing its nice end-user features.

I think there are two different things you want from a programming language (well, programming environment, but let’s not split tree trunks). Referencing the ivory tower on the Byte cover, let’s call them “academic” and “industrial”, these two schools.

The industrial ones are out there, being used to solve problems. They need to be stable (some of these problems haven’t changed much in decades), easy to understand (the people have changed), and they don’t need to be exciting, they just need to work. Cobol and Fortran are great in this regard, as is C and to some extent C++: you take code written a bajillion years ago, build it, and it works.

The academic ones are where the new ideas get tried out. They should enable experiment and excitement first, and maybe easy to understand (but if you need to be an expert in the idea you’re trying out, that’s not so bad).

So the industrial and academic languages have conflicting goals. There’s going to be bad feedback set up if we try to achieve both goals in one place:

  • the people who have used the language as a tool to solve problems won’t appreciate it if new ideas come along that mean they have to work to get their solution building or running correctly, again.
  • the people who have used the language as a tool to explore new ideas won’t appreciate it if backwards compatibility hamstrings the ability to extend in new directions.

Unfortunately at the moment a lot of languages are used for both, which leads to them being mediocre at either. The new “we’ve done C but betterer” languages like Go, Rust etc. feature people wanting to add new features, and people wanting not to have existing stuff broken. Javascript is a mess of transpilation, shims, polyfills, and other words that mean “try to use a language, bearing in mind that nobody agrees how it’s supposed to work”.

Here are some patterns for managing the distinction that have been tried in the past:

  • metaprogramming. Lisp in particular is great at having a little language that you can use to solve your problems, and that you can also use to make new languages or make the world work differently to see how that would work. Of course, if you can change the world then you can break the world, and Lisp isn’t great at making it clear that there’s a line between following the rules and writing new ones.
  • pragmas. Haskell in particular is great at having a core language that people understand and use to write software, and a zillion flags that enable different features that one person pursued in their PhD that one time. Not all of the flag combinations may be that great, and it might be hard to know which things work well and which worked well enough to get a dissertation out of. But these are basically the “enable academic mode” settings, anyway.
  • versions. Perl and Python both ran for years in which version x was the safe, stable, industrial language, and version y (it’s not x+1: Python’s parallel versions were 2 and 3000) in which people could explore extensions, removals, or other changes in potentially breaking ways. At some point, each project got to the point where they were happy with the choices, and declared the new version “ready” and available for industrial use. This involved some translation from version x, which wasn’t necessarily straightforward (though in the case of Python was commonly overblown, so people avoided going from 2 to 3 even when it was easy). People being what they are, they put a lot of store in version numbers. So some people didn’t like that folks were recommending to use x when there was this clearly newer y available.
  • FFIs. You can call industrial C89 code (which still works after three decades) from pretty much any academic language you care to invent. If you build a JVM language, it can do what it wants, and still call Java code.

Anyway, I wonder whether that distinction between academic and industrial might be a good one to strengthen. If you make a new programming language project and try to get “users” too soon, you could lose the ability to take the language where you want it to go. And based on the experience of Smalltalk, too soon might be within the first decade.

Fitness Group UI

A beautiful new way to get fit with others.

This project is a personal one, aimed at bringing groups of like-minded people together with one goal: get fit together.

Sometimes, whether you’re in a new city, away travelling, or just in your very own town, it can be tough to find people to exercise with.

This app is designed to bring people together: by sharing your location, you can see who’s near you and whether any events are on that you’d like to join.

Fitness Group UI Design

The app will be purely fitness focused, so if you’re into yoga you can tailor the app to only feature yoga people, or if you like to try everything then you can access that too.

Plus! You can manage and control who comes to your events, so you have ultimate control over the people you invite to your group training.

Fitness Group UI

While this is a work in progress, I’m excited to show off a few key screens.

More soon!

Pssst.. Find out how Twitter has improved me as a freelance designer


Image by Graham Lee

I love my Testsphere deck, from Ministry of Testing. I’ve twice seen Riskstorming in action, and the first time that I took part I bought a deck of these cards as soon as I got back to my desk.

I’m not really a tester, though I have really been a tester in the past. I still fall into the trap of thinking that I set out to make this thing do a thing, I have made it do a thing, therefore I am done. I’m painfully aware when metacognating that I am definitely not done at that point, but back “in the zone” I get carried away by success.

One of the reasons I got interested in Design by Contract was the false sense of “done” I feel when TDDing. I thought of a test that this thing works. I made it pass the test. Therefore this thing works? Well, no: how can I keep the same workflow and speed of progress, but improve the confidence in the statement?

The Testsphere cards are like a collection of mnemonics for testers, and for people who otherwise find themselves wondering whether this software really works. Sometimes I cut the deck, look at the card I’ve found, and think about what it means for my software. It might make me think about new ways to test the code. It might make me think about criticising the design. It might make me question the whole approach I’m taking. This is all good: I need these cues.

I just cut the deck and found the “Image” card, which is in the Heuristics section of the deck. It says that it’s a consistency heuristic:

Is your product true to the image and reputation you or your app’s company wishes to project?

That’s really interesting. How would I test for that? OK, I need to know what success is, which means I need to know “the image and reputation [we wish] to project”. That sounds very much like a marketing thing. Back when I ran the mobile track at QCon London, Jaimee Newberry gave a great talk about finding the voice for your product. She suggested identifying a celebrity whose personality embodies the values you want to project, then thinking about your interactions with your customers as if that personality were speaking to them.

It also sounds like there’s a significant user or customer experience part to this definition. Maybe marketing can tell me what voice, tone, or image we want to suggest to our customers, but what does it mean to say that a touchscreen interface works like Lady Gaga? Is that swipe gesture the correct amount of quirky, unexpected, subversive, yet still accessible? Do the features we have built shout “Poker Face”?

We’ll be looking at user interface design, too. Graphic design. Sound design. Copyediting. The frequency of posts on the email list, and the level of engagement needed. Pricing, too: it’s no good the brochure projecting Fortnum & Mason if the menu says Five Guys.

This doesn’t seem like something I’m going to get from red to green in a few minutes in Emacs. And it’s one of a hundred cards.
