Last updated: April 22, 2019 12:22 PM (All times are UTC.)

April 20, 2019

My SEO Story by Mike Hince (@zer0mike)

My SEO Story

A while back I wrote a blog about my 10 years as a freelancer and got some great reactions from my Twitter followers. This is the other side of the story (my SEO story!) on how Google helped me get to where I am today.

It started years ago when I was made redundant and began trading as Sans Deputy Creative, a title that everyone I ever told had trouble with. After a few years I decided to drop the name and went with a website that reflected me and used my name in the title: mikehince.com

I’m not very technical so I knew the new website had to be a solution I could update easily and landed on a WordPress theme called Goodspace. It ticked all the boxes and was exactly what I needed to get my portfolio live quickly.

Over the years the theme has been customised heavily not only by myself but some really talented developers. Sebastian Lenton, Day Media and Tier One have all helped me get my website to where it is today.

At the time of launching I knew I wanted to design apps and had been designing my own products for a number of years and posting them on Dribbble. I filled my portfolio with a few live products that I’d launched, some websites I’d designed for clients and a good handful of my own app concepts.

I made use of the brilliant plugin Yoast – making sure each page had a green light for SEO – and BOOM, I was off.

After a few months, remarkably, Google liked my site and boosted me high up the rankings for search terms like “Freelance UI Designer”, “Freelance UX Designer” and “Freelance App Designer”, and I started getting enquiries! In fact they poured in: in 2014/15 I was tracking them and stopped counting at 500 for the year.

That’s when it started to go wrong.

I got busy, I neglected my website and over the course of two very busy years the visits to my site dropped massively. Now I don’t think it’s as simple as Google falling out of love with my site, I think there are many factors.

One. Google changes its search algorithm all the time, and if you don’t know what they’ve tweaked you may suffer ranking drops.

Two. There was more competition: more talented designers started covering UI/UX, and the search results were stretched thinner.

Three. Location became a factor: Google favours sites that have location tags, but I kept mine vague on purpose to attract remote clients.

Four. Most importantly… I stopped posting, so Google probably didn’t think my website was active.

So what did I do?

Firstly I contacted Zack Neary-Hayes and purchased a site audit package from him. He quickly identified areas where my site was struggling, including many fine details that I won’t go into here (you should purchase one from him instead!). He provided me with an action list of things to improve and was worth every penny.

I got to work fixing as many of the issues as I could myself; for the more technical problems I reached out to Mahtab at TierOne. He fixed errors and changed areas of my site that I’d redesigned to better attract search rankings (more internal linking, to name one change).

I started blogging again, I posted new work to my portfolio section and shared my site on social media where possible (without spamming people).

I resubmitted my sitemap and then played the waiting game.

It’s taken about 6-9 months for Google to start picking up my website again, and as of today it’s showing signs of climbing to its former glory once more.

I’ve even started getting new enquiries again complimenting me on my SEO, so it must be working!

Moral of the story: keep posting! Don’t let your website go idle no matter how busy you are.

Thanks for reading!

Credits:
Photo by Tom Grimbert on Unsplash
Photo by William Iven on Unsplash
Photo by Fancycrave on Unsplash


April 18, 2019

I run a company, a mission-driven software consultancy that aims to make it easier and faster to make high-quality software that preserves privacy and freedom. On the homepage you’ll find Research Watch, where I talk about research papers I read. For example, the most recent article is Runtime verification in Erlang by using contracts, which was presented at a conference last year. Articles from the last few decades are discussed: most are from the last couple of years, and nothing yet is older than I am.

At de Programmatica Ipsum, I write on “individuals, interactions, and the true valuation of the things on the left” with Adrian Kosmaczewski and a glorious feast of guest writers. The most recent issue was on work, the upcoming issue is on programming history. You can subscribe or buy our back-catalogue to read all the issues.

Anyway, those are other places where you might want to read my writing. If people are interested I could publish their feeds here, but you may as well just check each out yourself :).

OneZone UI Design by Mike Hince (@zer0mike)

OneZone UI Design

A new way to discover your city

OneZone was a client of top-tier development studio Kanso, and as we’d worked together on various iPhone projects before, CEO Robin called me up and asked me to be involved in the UX and UI design process. Knowing the quality of work Kanso produce, I was happy to jump onboard.


OneZone founder Natasha Zone had a strong vision for what she wanted and had already designed a prototype herself which was the basis of my work on this project.

Myself, the team at Kanso and Natasha all got together in Cardiff to workshop through her ideas and formulate a plan of action. This included reviewing the prototype, user journeys, persona review and a lengthy lean canvas discussion.


Natasha has a great eye for detail and an excellent sense of clean design, using white space to its advantage.

This played to my strengths, and I was happy to work alongside her to create the screens shown here.

I was involved in refining the UX flow and adding the final touches to the UI Design.


This project was an interesting challenge and I’m really happy with the end results.

All that’s left for me to say is to go download her app on iOS or Android!

If you like this design please share and if you’re in the market to book a freelance UI designer don’t hesitate to contact me.


Half a bee by Graham Lee

When you’re writing Python tutorials, you have to use Monty Python references. It’s the law. On the 40th anniversary of the release of Monty Python’s Life of Brian, I wanted to share this example that I made for collections.defaultdict that doesn’t fit in the tutorial I’m writing. It comes as an homage to the single Eric the Half a Bee.

from collections import defaultdict

class HalfABee:
    def __init__(self):
        self.is_a_bee = False
    def value(self):
        self.is_a_bee = not self.is_a_bee
        return "Is a bee" if self.is_a_bee else "Not a bee"

>>> eric = defaultdict(HalfABee().value)
>>> print(eric['La di dee'])
Is a bee
>>> del eric['La di dee']  # defaultdict stores the generated value, so evict it
>>> print(eric['La di dee'])
Not a bee

Dictionaries that can return different values for the same key are a fine example of Job Security-Driven Development.
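As it happens, defaultdict stores the first value the factory produces, so on its own you only get one flip per key. If you want every lookup of a missing key to toggle, one sketch is a dict subclass whose __missing__ deliberately skips the caching (HalfABeeDict is my name for it, not part of the standard library):

```python
# HalfABee as defined above
class HalfABee:
    def __init__(self):
        self.is_a_bee = False
    def value(self):
        self.is_a_bee = not self.is_a_bee
        return "Is a bee" if self.is_a_bee else "Not a bee"

class HalfABeeDict(dict):
    """Like defaultdict, but __missing__ does not store the value,
    so every lookup of an absent key calls the factory afresh."""
    def __init__(self, factory):
        super().__init__()
        self.factory = factory
    def __missing__(self, key):
        return self.factory()

eric = HalfABeeDict(HalfABee().value)
print(eric['La di dee'])  # Is a bee
print(eric['La di dee'])  # Not a bee
```

Because nothing is ever stored, the key stays absent and the half-bee keeps flip-flopping forever.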

April 16, 2019

My first commission by Daniel Hollands (@limeblast)

One thing which has been consistent among the projects I've worked on since starting my journey as a maker, is they've all been chosen by me. That changed two weeks ago when I got my first commission.

After sharing my last blog post with a group of my friends and family, one of my oldest friends, Emma, piped up with: “Awesome… I’d like an 8ft x 2ft adjustable frame pin loom and an inkle loom please! X”

The rest of the conversation consisted of comparing Budweiser to tap water.

I’d never heard of either of these things before, so set about doing some research, and ultimately settled on building the inkle loom - partly because the pin loom she wanted was huge and would require far more precision than I think I’m currently capable of, and partly because I found a tutorial on how to build the inkle loom, which would be simple enough to complete quickly across a couple sessions at The Building Block.

Lap Joints

The first challenge was cutting the lap joints to secure the two uprights. In theory, cutting lap joints using a mitre saw is simple enough, but it’s clearly something which requires practice and patience to get right - neither of which I was afforded as a queue of people waiting to use the single mitre saw quickly formed behind me.

My attempt during the session resulted in cuts which were too wide and too deep, but nothing a little (or a lot) of glue and some brad nails couldn’t fix.

Too wide…
…and too deep

I tried again on some scraps the following weekend using my own mitre saw, and feel I was more successful as I had more time to concentrate, but I still had less than desirable results, partly because there’s too much give in the stop block built into my mitre saw, meaning it was too easy to press down a bit too hard and go deeper than intended.

This is a technique that I’m keen on perfecting, as I can see myself using it a lot, but think I’d have more success using a table saw instead, and plan on practicing that particular technique as soon as I’m able to.

Tension rod

Next up was cutting a slot for the tension rod to fit into. There were multiple techniques I could have used for this, such as drilling two holes at either end of the slot, and using a jigsaw to join them together, but the instructor suggested using a plunge router with a guard fitted to guide it along the correct path.

Although I have a plunge router (which I purchased from Aldi a couple of years ago), I’d only ever used it the once, and even then only to round over some edges, meaning I’d never taken advantage of the plunge functionality to start a cut in the middle of the wood.

The key to this was to take several passes, each going no deeper than about 5mm beyond the previous level. This meant it took something like ten passes to get all the way through the wood, but I was somewhat pleased with the outcome.

A solid seven out of ten

Affixing the dowels

Lastly I needed to use a spade bit and a hand drill to cut holes for the dowels. This was relatively easy, making sure to use a scrap of waste wood underneath to (try to) avoid splinters.

Some were more successful than others

The next time I need a large hole, I think I’ll try using a Forstner bit, preferably with a drill press.

The assembly

All that remained was to glue everything together, making sure the joints held tightly with the liberal use of brad nails while the glue dried. I love the speed of the brad nailer, but from an aesthetics point of view I don’t think you can top the look of screws.

Mind you, depending on the application of a project, it may be preferable to hide all fasteners of this nature - but as this is just a functional tool, and the first time I’ve ever built something of this nature, I’m sure I’ll be forgiven some visible brads.

Still to do

Just about the only thing currently missing from the build is the tension rod which goes into the slot. I have a suitable piece of wood for this, and Emma is going to bring some bolts, washers, and wing nuts with her, so a quick hole in the end of the wood to accept the head of the bolt, affixed with some gorilla glue, should work nicely.

Conclusion

All in all, I think this is the shoddiest thing I’ve ever built. This is partly because I’ve never used a lot of these techniques before, but also because there’s only so much you can do within a two hour window once a week, and as Emma is coming to collect it en route to Wales tomorrow, it was more important to get it finished.

Throughout this post I’ve mentioned things that I’d do differently. In addition to these, I think I’d scale everything down slightly, so the functional size of the device is the same, but made from thinner pieces of wood, with thinner dowels – although that will largely depend on the feedback I get from Emma as she learns to weave using it.

That’s right, she’s never used an inkle loom before - so I’m not sure if her learning experience on my build is the best introduction - but there you go.

In any case, I plan on practicing the techniques learned during the build, taking her feedback into account, and making a second, far nicer one at some point within the next year or so.

In the meantime, Emma has promised some photos of the loom in action, so while we wait for them, I’m going to enjoy drinking the craft ale she’s promised me ;)

New Swift hardware by Graham Lee

A nesting tower for swifts

The Swift Tower is an artificial nesting structure, installed in Oxford University parks. That or a very blatant sponsorship deal.

April 12, 2019

Reading List 228 by Bruce Lawson (@brucel)

April 11, 2019

Why is it we’re not allowed to call the Apple guy “Tim Apple” when everybody calls the O’Reilly guy “Tim O’Reilly”?

April 05, 2019

Pythonicity by Graham Lee

The same community that says:

There should be one– and preferably only one –obvious way to do it.

Also says:

So essentially when someone says something is unpythonic, they are saying that the code could be re-written in a way that is a better fit for pythons coding style.

April 03, 2019

Figma – First Impressions

Over the last few months I’ve been exploring Sketch alternatives for UI/UX design. I’ve looked at InVision Studio and FramerX, and thought it was about time I tried Figma – a browser-based UI design tool.


Figma is very similar to Sketch, FramerX and InVision Studio but with some seriously powerful differences, the main one being that it’s primarily browser based, which is a blessing for PC users.

Figma does have Mac and PC desktop apps too, and from my testing they work very well. I found the browser version far more memory intensive; those issues went away when I switched to the desktop app.

Of course, it could just be Safari being an idiot (Yes, yes I have Chrome, I just forget to use it) and disclaimer I was hammering YouTube at the same time.



Figma is a real-time piece of software, meaning teams of UI Designers, UX Designers and Product Managers can all work at the same time.

This is incredibly powerful. Imagine a copywriter signing in and just tinkering with your words without the need for version control or handing over complicated files… oh and it’s in the browser so it works for EVERYONE.

Like InVision and Framer you can also invite your developer buddies to take a look at the code the app produces. Again, it’s right there in the browser.



Figma also has the stuff you’d expect like vector editing, prototyping, colour management, styles and commenting, but it also boasts built-in design libraries – again brilliant for teams, with no need for shared drives or additional tools.

Figma has some incredible teams using their software – Slack, Microsoft, Zoom and Intercom, to name a few.



As a long time Sketch user – Figma just felt really fresh and exciting. The possibilities are endless, so much so I was promoting it on a new client call just hours after getting the hang of it. I love the idea of giving access to a live document that clients, developers and team members can follow along.

Design is less final these days – you have to iterate.

I have to say, Figma is really exciting to me right now and the closest product yet to make me consider moving away from Sketch.

The thing is… Sketch just raised a $20 million Series A round, which will no doubt see them bringing in some of these missing features, so we’ll just have to wait and see.


April 02, 2019

I made a box by Daniel Hollands (@limeblast)

Last night I attended the first of five evening woodworking classes at The Building Block, a local community center in Worcester.

As anyone that’s read my review in Hackspace magazine knows, I’m currently learning woodworking via an online course, so when I discovered there was a locally run evening class I jumped at the chance to attend (dragging my friend Kathryn along for the ride).

After the prerequisite health and safety bits (including being issued with safety glasses), and a brief introduction on how to use the tools available, we were challenged with making a box.

It was at this point that we were given a top tip - apparently tongue & groove flooring planks are cheaper to buy than regular planks of wood, meaning all you need to do is remove the aforementioned tongue and groove using the table saw, and you’ve got yourself a perfectly good plank of wood. Personally I’m skeptical of this, but flooring wood is what we were given to work with, so off we set ripping it down.

This is the first time I’ve ever used a table saw. I almost got to use one back in December, when I was given access by a member of the Cheltenham Hackspace to build some wall mounted bottle openers, but on that occasion the wood was cut for me. The funny thing is that I’ve actually owned a table saw for around three weeks now (purchased in a flash sale from Amazon), but haven’t had the space to do anything more than check it spins.

Which brings me to one of the main benefits of a local evening class versus anything online – they’ll generally supply everything you need. The £140 it cost for the five sessions of evening classes is considerably less money than I’ve spent tooling up for the online one, and while the tools I’ve obtained are now mine to keep, it’s an expensive outlay for a hobby that you’re just dipping your toe into.

Anyway, in addition to the table saw, we had access to drills and drivers, a miter saw, a router, a brad nailer, and a pocket hole jig, the latter two of which I’d also never used before. The box was a simple affair, consisting of four sides cut on the miter saw, glued together on the edges, with the brad nails holding it all together while the glue dried.

Not the most elegant thing, but a good project to use the tools for, and one which you won’t worry too much about getting wrong.

My box, in all its glory

While we’re free to continue building boxes for the remainder of the sessions, the general idea is that we choose a project to work on. For my own part, I think I’m interested in learning to make frames, but I also spied a lathe in the corner, which I’m super keen on playing with as it was watching turning videos on YouTube by people like Peter Brown which piqued my interest in woodworking in the first place.

Watch this space, and I’ll post more updates on my progress in the coming weeks.

April 01, 2019

The ray-traced pictures by Stuart Langridge (@sil)

A two-decade-long search is over.

A couple of years ago I wrote up the efforts required to recover a tiny sound demo for the Archimedes computer. In there, I made brief mention of a sample from an Arc sound editor named Armadeus, of a man saying “the mask, the ray-traced pictures, and finally the wire-frame city”. That wasn’t an idle comment. Bill and I have been looking for that sample for twenty years.

You’re thinking, I bet it’s not been twenty years. And you would be wrong. Here’s Bill posting to comp.sys.acorn in 2003, for a start.

My daughter knows about this sample. Jono knows about it. I use the phrase as a microphone test sentence, the same way other people use “testing, testing, 1, 2”. It’s lived in my head since I was in middle school, there on the Arc machines in CL0, which was Computer Lab Zero. (There was a CL1, which had the BBC Micros in it, on the first floor. I never did know whether CL0, which was on the ground floor, was a subtle joke about floor levels and computers’ zero-based numbering schemes, or if Mr Irons the teacher was just weird. He might have been weird. He thought the name for an exclamation mark was “pling”.)

Anyway, we got to talking about it again, and Bill said: to hell with this, I’ll just buy Armadeus. This act of selfless heroism earns him a gold medal. And a pint, yes it does. I’ll let him tell the story about that, and the mindblowing worthlessness of modern floppy drives, in his own time. But today it arrived, and now I have an mp3!

Interestingly, I thought it was in the other order. The sample actually says: “The ray-traced pictures. The mask. And finally, the wire-frame city.” I thought “the mask” was first, and it isn’t. Still, memory plays tricks on you after this many years. Apparently it’s from a Clares sound and music demo originally (Clares were the (defunct) company that made Armadeus. The name appears to have risen from the dead a couple of times since. No idea who they are now, if anyone.) Anyway, I don’t care; I’ve got it now. And it’s on my website, which means it will never go away. We found it once before and then lost the sample; I’m not making the same mistake again. Is this how Indy felt when he found the Ark?

Also, a shout out to arcem, which is an Archimedes emulator which runs fine on Ubuntu. It’s a pain in the bum to set up — you have to compile it, find ROMs, turn on sound support, use weird keypresses, set up hard drives in an incomprehensible text file, etc; someone ought to snap it or something so it’s easier — but it’s been rather nice revisiting a lot of the Arc software that’s still collected and around for download. Desktop sillies. Someone should bring desktop sillies back to modern desktops. And reconnecting to Arcade BBS, who need to fix all their FTP links to point to telnet.arcade-bbs.net rather than the now-dead arcade.demon.co.uk. I got to watch !DeskDuck again, which was a small mallard duck that swam up and down along the top of your icon bar. And a bunch of old demos from BIA and Arcangel and so on. I’d forgotten a bit how much I liked RISC OS, and in particular that it’s so fast to start up. Bring that back.

Nice one Bill. Time for a pint.

It turns out that Docker has an internal Domain Name Service (DNS). Did you know? It’s new to me too! I learnt the hard way while using a VPN.


This is the error that I found within a container:

persona_1_8a84f3f41190 | 2019/02/25 08:51:14.138610 [WARN] (view) kv.list(global):
Get http://consul:8500/v1/kv/global?recurse=&stale=&wait=60000ms: dial tcp: lookup
consul on 127.0.0.11:53: no such host (retry attempt 12 after "1m0s")

The error states that the domain name consul, which is the name of one of my containers, couldn’t be found using the DNS server at 127.0.0.11 on port 53. But I don’t run a DNS server at 127.0.0.11!

Searching for 127.0.0.11:53 led me to Docker - Configure DNS, which states: “Note: The DNS server is always at 127.0.0.11”. Huh, OK, I guess Docker has an internal DNS. And sure enough, I can connect to the service from within a container.

The error seems to go away if I stop the VPN and recycle the containers, how odd.

The reason I was using the VPN was to be able to SSH into an AWS EC2 instance, which is only accessible through the VPN. I wonder if the VPN alters my host’s DNS settings?

cat /etc/resolv.conf
nameserver 172.16.0.23
nameserver 127.0.0.53

I have two DNS entries: one for my local Stubby DNS at 127.0.0.53, and 172.16.0.23. Who is 172.16.0.23?

A quick search shows that AWS uses 172.16.0.23 as their internal DNS. The reason we would want to use their DNS is to resolve internal domain names. 172.16.0.0/12 is a private IP range, which here is only accessible through the VPN.

This leaves two questions: which public DNS does Docker use to resolve DNS queries, and why would my VPN configuration affect it?

By default, a container inherits the DNS settings of the Docker daemon, including the /etc/hosts and /etc/resolv.conf. You can override these settings on a per-container basis.

This would mean that any DNS queries to 127.0.0.11 would be forwarded to the private AWS DNS 172.16.0.23 that my VPN configured, which results in a timeout because Docker’s own traffic isn’t routed through the VPN.

The solution Fix Docker Networking DNS suggests overriding the DNS to use:

/etc/docker/daemon.json

{
    "dns": ["8.8.8.8", "8.8.4.4"]
}

By setting the DNS IP addresses to a public DNS address we avoid the issue of inheriting a DNS address which is not accessible due to the traffic not being routed through the VPN.

The configuration above resolves the timeout issue. It would seem as if the internal Docker DNS will stop attempting to resolve the request (even the internal DNS names such as the container name) if any of the supplied DNS entries fail with a timeout.
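As the Docker documentation quoted above notes, you can also override DNS per container rather than daemon-wide. In Compose that might look something like this (the service name and image here are illustrative, not from my actual setup):

```yaml
# docker-compose.yml fragment: per-service DNS override
version: "3"
services:
  persona:
    image: my-app:latest
    dns:
      - 8.8.8.8
      - 8.8.4.4
```

The daemon.json approach applies to every container on the host, while the per-service override limits the change to the containers that actually need it.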

We are still early in the journey of applying Docker to our microservice architecture. In the future we will follow up with more articles covering different subjects regarding Docker and our next adventure, Kubernetes.

March 29, 2019

Reading List 227 by Bruce Lawson (@brucel)

March 28, 2019

There’s more to it by Graham Lee

We saw in Apple’s latest media event a lot of focus on privacy. They run machine learning inferences locally so they can avoid uploading photos to the cloud (though Photo Stream means they’ll get there sooner or later anyway). My Twitter stream frequently features adverts from Apple, saying “we don’t sell your data”.

Of course, none of the companies that Apple are having a dig at “sell your data”, either. That’s an old-world way of understanding advertising, when unscrupulous magazine publishers may have sold their mailing lists to bulk mail senders.

These days, it’s more like the postal service says “we know which people we deliver National Geographic to, so give us your bulk mail and we’ll make sure it gets to the best people”. Only in addition to National Geographic, they’re looking at kids’ comics, past due demands, royalty cheques, postcards from holiday destinations, and of course photos back from the developers.

To truly break the surveillance capitalism economy and give me control of my data, Apple can’t merely give me a private phone. But that is all they can do, hence the focus.

Going back to the analogy of postal advertising, Apple offer a secure PO Box service where nobody knows what mail I’ve got. But the surveillance-industrial complex still knows what mail they deliver to that box, and what mail gets picked up from there. To go full thermonuclear war, as promised, we would need to get applications (including web apps) onto privacy-supporting backend platforms.

But Apple stopped selling Xserve, Mac Mini Server, and Mac Pro Server years ago. Mojave Server no longer contains: well, frankly, it no longer contains the server bits. And because they don’t have a server solution, they can’t tell you how to do your server solution. They can’t say “don’t use Google cloud, it means you’re giving your customers’ data to the surveillance-industrial complex”, because that’s anticompetitive.

At the Labrary, I run my own Nextcloud for file sharing, contacts, calendars, tasks etc. I host code on my own gitlab. I run my own mail service. That’s all work that other companies wouldn’t take on, expertise that’s not core to my business. But it does mean I know where all company-related data is, and that it’s not being shared with the surveillance-industrial complex. Not by me, anyway.

There’s more to Apple’s thermonuclear war on the surveillance-industrial complex than selling privacy-supporting edge devices. That small part of the overall problem supports a trillion-dollar company.

It seems like there’s a lot that could be interesting in the gap.

March 22, 2019

Reading List 226 by Bruce Lawson (@brucel)

You’ve branched off a branch. Unfortunately the original branch is not ready to be merged, and you need to promote your branch to a first-class citizen, branched off master.

However, when you raise the pull request for the second branch, you get all the changes from the first! Of course it does - you’ve branched off a branch!

You could wait till the original branch is ready, but alternatively, you can use git rebase to help you.


This is what your branch setup might look like:

--master
        \
         branchA
                \
                 branchB

And you want it to look like this:

--master
        \
         branchB

Here is a useful (and, I’ve found, much under-documented) CLI command that you might want to try out:

git rebase --onto master branchA branchB

This moves branchB from branchA and onto master. You could also do this with other branches too - but then that could become really complicated!

You’ll also need to do a git push -f origin branchB to force the update - and now when you raise a pull request, you will only pick up the changes from branchB.

When might this be useful? One example might be when you have long lived branches and you need to move subsequent branches from it and onto master for merging.
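To see it in action, here’s a self-contained sketch that builds the branch-off-a-branch scenario in a throwaway repo and then performs the rebase (the file names and commit messages are made up for illustration):

```shell
# Build the branch-off-a-branch scenario in a throwaway repo,
# then move branchB onto master with rebase --onto.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo

echo base > file.txt
git add file.txt
git commit -qm "base"
git branch -M master           # make sure the default branch is called master

git checkout -qb branchA
echo A >> file.txt
git commit -qam "A work"

git checkout -qb branchB       # branched off branchA
echo B > other.txt
git add other.txt
git commit -qm "B work"

# Replay the commits in branchA..branchB (just "B work") onto master
git rebase --onto master branchA branchB

git log --oneline master..branchB   # now contains only "B work"
```

After the rebase, branchB’s history no longer includes branchA’s commits, so a pull request against master picks up only branchB’s changes.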

Do you have any git tips you can share? If you do, let us know your “Git Tip of the Day” via Twitter.

March 20, 2019

We were promised a bicycle for our minds. What we got was more like a highly-efficient, privately run mass transit tunnel. It takes us where it’s going, assuming we pay the owner. Want to go somewhere else? Tough. Can’t afford to take part? Tough.

Bicycles have a complicated place in society. Right outside this building is one of London’s cycle superhighways, designed to make it easier and safer to cycle across London. However, as Amsterdam found, you also need to change the people if you want to make cycling safer.

Changing the people is, perhaps, where the wheels fell off the computing bicycle. Imagine that you have some lofty goal, say, to organise the world’s information and make it universally accessible and useful. Then you discover how expensive that is. Then you discover that people will pay you to tell people that their information is more universally accessible and useful than some other information. Then you discover that if you just quickly give people information that’s engaging, rather than accessible and useful, they come back for more. Then you discover that the people who were paying you will pay you to tell people that their information is more engaging.

Then you don’t have a bicycle for the mind any more, you have a hyperloop for the mind. And that’s depressing. But where there’s a problem, there’s an opportunity: you can also buy your mindfulness meditation directly from your mind-hyperloop, with of course a suitable share of the subscription fee going straight to the platform vendor. No point using a computer to fix a problem if a trillion-dollar multinational isn’t going to profit (and of course transmit, collect, maintain, process, and use all associated information, including passing it to their subsidiaries and service partners) from it!

It’s commonplace for people to look backward at this point. The “bicycle for our minds” quote comes from 1990, so maybe we need to recapture some of the computing magic from 1990? Maybe. What’s more important is that we accept that “forward” doesn’t necessarily mean continuing in the direction we took to get to here. There are those who say that denying the rights of surveillance capitalists and other trillion-dollar multinationals to their (pie minus tiny slice that trickles down to us) is modern-day Luddism.

It’s a better analogy than they realise. Luddites, and contemporary protestors, were not anti-technology. Many were technologists, skilled machine workers at the forefront of the industrial revolution. What they protested against was the use of machines to circumvent labour laws and to produce low-quality goods that were not reflective of their crafts. The gig economies, zero-hours contracts, and engagement drivers of their day.

We don’t need to recall the heyday of the microcomputer: they really were devices of limited capability that gave a limited share of the population an insight into what computers could do, one day, if they were highly willing to work at it. Penny farthings for middle-class minds, maybe. But we do need to say hold on, these machines are being used to circumvent labour laws, or democracy, or individual expression, or human intellect, and we can put the machinery to better use. Don’t smash the machines, smash the systems that made the machines.

March 15, 2019

Reading List 225 by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my lacy red manties so I can spend time reading stuff.

March 13, 2019

Ratio by Graham Lee

The web has a weird history with comments. I have a book called Zero Comments, a critique of blog culture from 2008. It opens by quoting from a 2005 post from a now defunct website, stodge.org. The Wayback Machine does not capture the original post, so here is the quote as lifted from the book:

In the world of blogging ‘0 Comments’ is an unambiguous statistic that means absolutely nobody cares. The awful truth about blogging is that there are far more people who write blogs than who actually read blogs.

Hmm. If somebody comments on your blog, it means that they care about what you’re saying. What’s the correct thing to do to people who care about your output? In 2011, the answer was to push them away:

It’s been a very difficult decision (I love reading comments on my articles, and they’re almost unfailingly insightful and valuable), but I’ve finally switched comments off.

I experimented with Comments Off, then ultimately turned them back on in 2014:

having comments switched off dilutes the experience for those people who did want to see what people were talking about. There’d be some chat over on twitter (some of which mentions me, some of which doesn’t), and some over on the blog’s Facebook page. Then people will mention the posts on their favourite forums like Reddit, and a different conversation would happen over there. None of that will stop with comments on, and I wouldn’t want to stop it. Having comments here should guide people, without forcing them, to comment where everyone can see them.

This analysis still holds. People comment on my posts over at Hacker News and similar sites, whether I post them there or not. The sorts of comments that you would expect from Hacker News commenters, therefore, rarely appear here. They appear there. I can’t stop that. I can’t discourage it. I can merely offer an alternative.

In 2019 people talk about the Ratio:

While opinions on the exact numerical specifications of The Ratio vary, in short, it goes something like this: If the number of replies to a tweet vastly outpaces its engagement in terms of likes and retweets, then something has gone horribly wrong.

So now saying something that people want to talk about, which in 2005 was a sign that they cared, is a sign that you messed up. The goal is to say things that people don’t care about, but will uncritically share or like. If too many people comment, you’ve been ratioed.

I don’t really have a “solution”: there may be human solutions to technical problems, but there aren’t technical solutions to human problems. And it seems that the humans on the web have a problem that we want an indication that people are interested in what we say, but not too much of an indication, or too much interest.

March 12, 2019

March 10, 2019

David Heinemeier Hansson and Jason Fried of Basecamp and Signal v. Noise lay out how they achieve calm at Basecamp and how other companies can make the choice to do the same.

They point to the long hours, stolen weekends and barbed perks that run rampant in tech and say “it doesn’t have to be this way!”

Growth at all costs. Companies that demand the world of your time, and then steal it away with meetings and interruptions. Companies that coerce or bribe you to spend most of your waking hours with your nose to the grindstone (because after all, this company is like a family, right?)

It Doesn’t Have To Be Crazy At Work discusses these problems and proposes a better way of doing things. DHH and Jason Fried walk through the solutions they’ve found to work at Basecamp, tackling issues from big-picture ambitions to lower-level project management, and from hiring to perks and payroll. They make no decrees as to how your company ought to be run, though; that’s for you to iterate on with your own team.

Most of the stuff I have read about taming work feels very prescriptive. Do this. Do that. Try these processes. Add enough kanban boards and everything will click into place and you will finally be able to breathe. It Doesn’t Have To Be Crazy At Work takes the opposite approach and encourages you to look at the things that you don’t need to do. Make time for the important things by stripping away the inessential.

It’s a pretty short read, and worth a look for anyone who feels like work is a little too hectic.

March 08, 2019

I write this with absolutely zero hyperbole when I say that March couldn’t have come soon enough. February was a great month for stickee, don’t...

The post Stepping up to the charity challenge appeared first on stickee.

Reading List 224 by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my lacy red manties so I can spend time reading stuff.

This post delves into some of the pitfalls encountered when working with JavaScript promises, suggests some improvements and looks at a brief glimpse of the future of asynchronous programming in JavaScript.

This was first presented at one of our internal tech team talks and encouraged our CTO to rethink the coding of our latest JavaScript project here at Talis.


Some Background

Asynchronous workflows can be difficult to read and maintain. Callbacks make this worse: each step in the workflow adds another level of indentation, which exacerbates the problem. Nesting is best left to birds! It’s also possible to forget to call a callback, leaving a process hanging. And what happens when you want to pass an error back instead of data?
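To make the problem concrete, here is a hypothetical callback version of a workflow like the one discussed below. The helper functions are invented stand-ins for illustration, not code from our codebase:

```javascript
// Invented stand-in helpers, each following the Node error-first callback style.
function deserialiseRequest(request, cb) { cb(null, { body: request.body }); }
function validateRequest(model, cb) { cb(model.body ? null : new Error('empty body')); }
function persistData(model, cb) { cb(null, { ...model, id: 1 }); }

// Each step nests one level deeper, and every step must forward errors by hand.
function processRequest(request, callback) {
  deserialiseRequest(request, (deserialiseError, model) => {
    if (deserialiseError) return callback(deserialiseError);
    validateRequest(model, (validationError) => {
      if (validationError) return callback(validationError);
      persistData(model, (persistError, persisted) => {
        if (persistError) return callback(persistError);
        // Forget this call and the caller hangs forever.
        callback(null, persisted);
      });
    });
  });
}
```

Even with only three steps, the indentation marches rightward and the error handling is repeated at every level.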

Promises improve on this predicament. A promise is a handle on an asynchronous operation that will eventually complete or fail, and promises enforce the separation of success and error handling code. However, you can still fall into the same pitfalls of nesting and forgetting to fulfil or reject a promise.

The introduction of async/await to the ECMAScript 2017 edition of JavaScript takes a big stride in improving asynchronous workflow management by allowing code to be written to make the workflow appear synchronous. There is also the added bonus of them being interoperable with promises.

Whilst these improvements are welcome, we’re not all in the position of being on the latest and greatest version of JavaScript. Many of us also have to maintain codebases written some time ago. This blog post provides some insight into our journey from callbacks to async/await, by first getting our promises in order.

Broken Promises

It’s still possible to get into similar trouble with promises as we can with callbacks. Consider the following example of a request -> response workflow in a node controller class:

process(request) {
    const tenantCode = request.path.match(/\w*/g)[1];

    return new Promise((resolve, reject) => {
        this.deserialiseRequest(request).then((model) => {
            this.validateRequest(tenantCode, model).then(() => {
                this.persistData(model).then((persisted) => {
                    this.serialiseResponse(tenantCode, persisted).then((response) => {
                        resolve(response);
                    });
                }).catch((persistError) => {
                    this.serialiseResponse(persistError);
                });
            }).catch((validationError) => {
                reject(validationError);
            });
        }).catch((deserialisationError) => {
            this.serialiseResponse(deserialisationError);
        });
    });
}

There are a few things going on here. A request is being deserialised into a model, then validated, then persisted, then the result of that persistence is being serialised into a response. Each class method returns a promise that can either be resolved or rejected (triggering then and catch respectively).

Along the way, there are at least three error scenarios. One thing that is immediately striking is that we still have nesting, just like callbacks! It’s also hard to spot what error can be thrown where.

For example, if the validationError catch block was removed, what do you think would happen when a validation error did occur? The catch block below will not come to the rescue here. The validateRequest promise rejection will not be handled, and this is bad news.

Fatal Rejections

Handling errors in promises is a big deal. We noticed this deprecation warning appear in Node 4:

UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch()

DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

It’s clear from this that knowing how and where your promise workflow handles errors is key to avoiding unexpected problems at runtime.

Thankfully, there is a safety net: you can listen to the unhandledRejection and rejectionHandled process events.
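As a sketch, registering these listeners in Node looks like this. The logging here is illustrative; what you actually do in the handlers (log, alert, exit) is up to you:

```javascript
// Process-level safety net for promise rejections in Node.js.
// 'unhandledRejection' fires when a rejected promise has no handler
// attached by the end of the current event-loop turn.
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled rejection:', reason);
});

// 'rejectionHandled' fires if a handler is attached to such a
// rejection later than one turn of the event loop.
process.on('rejectionHandled', (promise) => {
  console.log('A previously unhandled rejection was handled late');
});
```

This shouldn’t replace proper catch handling in your promise chains, but it does ensure nothing falls through silently.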

Rose-linted Glasses

Using a linting tool such as ESLint combined with a promises plugin can help indicate to a developer that they’re not writing promises in the most appropriate way. The example above reveals the following, often repeated, violations:

[eslint] Each then() should return a value or throw [promise/always-return]
[eslint] Avoid nesting promises. [promise/no-nesting]
[eslint] Avoid nesting promises. [promise/no-nesting]
[eslint] Each then() should return a value or throw [promise/always-return]
[eslint] Avoid nesting promises. [promise/no-nesting]

The Flat-chaining pattern

In order to remove the nesting and make the workflow logic easier to see, we have started to introduce flat promise-chaining. The idea is that each promise takes an input, does something with it, and then passes it on to the next promise.

Now, the request -> response workflow looks like this:

process(request) {
  return this
    .deserialiseRequest(request)
    .then(this.validateRequest)
    .then(this.persistData.bind(this))
    .then(this.serialiseResponse);
}

Each promise can be seen as a step in the workflow. It should do only one small thing to keep the step simple; there could be many steps involved, after all. A promise resolves with a single value, so you can use an object to capture the data as it is passed along the chain. The object spread operator and destructuring syntax make this easier to achieve:

validateRequest(input) {
    return new Promise((resolve, reject) => {
        const { model, tenantCode } = input;
        // ... do something with input
        return resolve({ ...input, additionalData: true });
    });
}

If any promise in the chain encounters a problem, it can throw an error and a single catch function can handle any of these at the end of the chain:

process(request).catch(serialiseError);

The Future

The future looks better thanks to async/await. An example request -> response workflow can now look like this, bringing greater legibility to the code:

async process(request) {
  const modelWithTenantCode = await this.deserialiseRequest(request);
  await this.validateRequest(modelWithTenantCode);
  const persisted = await this.persistData(modelWithTenantCode);
  return this.serialiseResponse(persisted);
}

You can surround this block with a single try/catch as well, to handle any errors.
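For example, here is a minimal, self-contained sketch of such a workflow wrapped in a single try/catch. The Controller methods are invented stand-ins, and serialiseError is a hypothetical error handler rather than anything from the original example:

```javascript
// A sketch: one try/catch catches a rejection from any step.
class Controller {
  // Stand-in steps; each returns a promise because it is async.
  async deserialiseRequest(request) { return { body: request.body }; }
  async validateRequest(model) {
    if (!model.body) throw new Error('empty body');
  }
  async persistData(model) { return { ...model, id: 1 }; }
  async serialiseResponse(persisted) { return { status: 200, persisted }; }
  async serialiseError(error) { return { status: 400, message: error.message }; }

  async process(request) {
    try {
      const model = await this.deserialiseRequest(request);
      await this.validateRequest(model);
      const persisted = await this.persistData(model);
      return this.serialiseResponse(persisted);
    } catch (workflowError) {
      // Any rejection or throw from the steps above lands here.
      return this.serialiseError(workflowError);
    }
  }
}
```

Compare this with the nested promise example earlier: the error handling lives in one place instead of being scattered across catch blocks at different depths.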

If you are using a version of Node that supports this, then what are you waiting for? Our CTO converted a project full of callbacks to async/await and finds reading and reasoning about the code far easier as a result.

If you’re running an older version of Node then you’ll be encouraged to know that async/await is interoperable with promises. This means the promises you write now will work in async/await workflows without any effort, making the upgrade path far more manageable.

The main takeaway from this journey has been that asynchronous programming can be hard to read and reason with. We think through workflows in linear terms but they don’t always turn out this way in JavaScript. Thankfully the standard is improving and lessons are being learned.

All of the examples in this blog post are available in full on GitHub.

March 06, 2019

It was around 6:58pm when I stood up. The room only had eight people in it, but they were all looking at me, expectantly. Fuelled by the pint of stout I'd just polished off, I began to talk.

When I’m not building websites or playing with wood, I run a local meetup group called Worcester Source. On the 4th Wednesday of each month, the members of the group gather at The Paul Pry pub, and after eating some pizza and drinking some beer, they stop and listen to a presentation given by a speaker on a technical subject of interest.

Generally, the speaker in question is a guest from a neighbouring community or a developer evangelist from this or that company, and they talk about some interesting piece of technology or maybe a new tool that has hit the market - but on this dark February night there was no guest, there was only me - and I was about to present a talk on achieving your goals.

I gave the talk, and received lots of positive feedback, so I wanted to share it here.

The slides for the talk are available here, with the speaker notes acting as my script. You can navigate the slides by clicking the down arrow each time it’s shown, then clicking the right arrow when you reach the bottom.

I hope you enjoy the presentation, even if you didn’t get to see me present it live. If you’re interested, you can read the post that inspired the talk here.

The word ‘demagogue’

refers to someone who may be charismatic and often bombastic, and is able to use his oratorical skills to appeal to the baser, more negative side of people’s feelings

It’s an apt word for the right-wingest of the Brexiteers who are marching to leave. The way they speak about the European negotiators is simply rude: “You must live in Narnia, Michel Barnier!” and “Get back in your bunker, Jean-Claude Juncker!”, both on the march to leave home page.

Silly people.

In production mode, Rails 4 uses the uglifier gem to compress your JavaScript code into something nice and small to send over the wire. However, when you’ve got a syntax error, it can fail rather… ungracefully.

You get a big stack trace that tells you that you’ve got a syntax error. It even helpfully tells you what the error was! Unexpected punctuation. There’s a comma, somewhere. Where? Who knows. It doesn’t tell you.

If you have a large project, or have multiple people making large changes, it can be a little time consuming checking the diffs of recent commits to figure out where the error was introduced. Here is how you can use the rails console to at least find the offending file:

Since the uglifier doesn’t run in development, let’s drop into the rails console in production mode:

RAILS_ENV=production rails c

Now we can use the following snippet of code to find our errors:

# Take each javascript file...
Dir["app/assets/javascripts/**/*.js"].each do |file_name|
  begin
    # ... run it through the uglifier on its own...
    print "."
    Uglifier.compile(File.read(file_name))
  rescue StandardError => e
    # ... and tell us when there's a problem!
    puts "\nError compiling #{file_name}: #{e}\n"
  end
end

This saved me a little time recently, so I hope it will be handy to someone else.

March 04, 2019

Freelance Tip: Follow your Gut

Some things aren’t taught; they are learned. Your college or university lecturer wouldn’t (couldn’t!) teach you the things you have to learn for yourself. If I were to create a list of things I’ve learned the hard way, it would be a whole blog series.

designers: follow your gut

When you’re new to freelancing you chase every project going, which has its problems. If you say yes to everything you may find yourself struggling for inspiration, which then affects your productivity.

TL;DR: When you’re seeking a new project make sure you follow your gut. It may save you time and money.

Advice: It’s OK to accept shitty jobs. We all have to at some point. Just don’t make it a regular occurrence.

There is one thing that isn’t taught in design lessons that is really simple to follow, and it will make your working life a whole lot easier.

Follow your gut.

Sounds simple, right? But what are the factors?

One. You may not get along with the clients you’re talking to.

Two. The project might be out of your comfort zone.

Three. You may be OK with pushing your comfort zone, but the risk of failing is too high.

Four. The client wants a bargain.

Five. You don’t have time to do the project justice.

So it’s much more complicated than this, and your gut gets stronger over time; experience is critical to letting your gut learn.

Side note: Read about my experience. 10 years as a Freelance Designer.

Advice: It’s OK to make mistakes, just make sure you learn from them and be honest with your clients.


To round this blog off, here are some tips from my Twitter network. If you want to contribute connect with me on Twitter.

Thanks to the contributions from everyone above.

Featured Photo by Helena Lopes on Unsplash
Egg Photo by KS KYUNG on Unsplash


March 01, 2019

The balloon goes up by Graham Lee

To this day, many Smalltalk projects have a hot air balloon in their logo. These reference the cover of the issue of Byte Magazine in which Smalltalk-80 was shared with the wider programming community.

A hot air balloon bearing the word "Smalltalk" sails over a castle on a small island.

Modern Smalltalks all have a lot in common with Smalltalk-80. Why? If you compare Smalltalk-72 with Smalltalk-80 there’s a huge amount of evolution. So why does Cincom Smalltalk or Amber Smalltalk or Squeak or even Pharo still look quite a lot like Smalltalk-80?

My answer is because they are used. Actually, Alan’s answer too:

Basically what happened is this vehicle became more and more a programmer’s vehicle and less and less a children’s vehicle—the version that got put out, Smalltalk ’80, I don’t think it was ever programmed by a child. I don’t think it could have been programmed by a child because it had lost some of its amenities, even as it gained pragmatic power.

So the death of Smalltalk in a way came as soon as it got recognized by real programmers as being something useful; they made it into more of their own image, and it started losing its nice end-user features.

I think there are two different things you want from a programming language (well, programming environment, but let’s not split tree trunks). Referencing the ivory tower on the Byte cover, let’s call these two schools “academic” and “industrial”.

The industrial ones are out there, being used to solve problems. They need to be stable (some of these problems haven’t changed much in decades), easy to understand (the people have changed), and they don’t need to be exciting, they just need to work. Cobol and Fortran are great in this regard, as is C and to some extent C++: you take code written a bajillion years ago, build it, and it works.

The academic ones are where the new ideas get tried out. They should enable experiment and excitement first, and maybe easy to understand (but if you need to be an expert in the idea you’re trying out, that’s not so bad).

So the industrial and academic languages have conflicting goals. There’s going to be bad feedback set up if we try to achieve both goals in one place:

  • the people who have used the language as a tool to solve problems won’t appreciate it if new ideas come along that mean they have to work to get their solution building or running correctly, again.
  • the people who have used the language as a tool to explore new ideas won’t appreciate it if backwards compatibility hamstrings the ability to extend in new directions.

Unfortunately, at the moment a lot of languages are used for both, which leads to them being mediocre at either. The new “we’ve done C but betterer” languages like Go, Rust etc. attract both people who want to add new features and people who want existing stuff not to break. JavaScript is a mess of transpilation, shims, polyfills, and other words that mean “try to use a language, bearing in mind that nobody agrees how it’s supposed to work”.

Here are some patterns for managing the distinction that have been tried in the past:

  • metaprogramming. Lisp in particular is great at having a little language that you can use to solve your problems, and that you can also use to make new languages or make the world work differently to see how that would work. Of course, if you can change the world then you can break the world, and Lisp isn’t great at making it clear that there’s a line between following the rules and writing new ones.
  • pragmas. Haskell in particular is great at having a core language that people understand and use to write software, and a zillion flags that enable different features that one person pursued in their PhD that one time. Not all of the flag combinations may be that great, and it might be hard to know which things work well and which worked well enough to get a dissertation out of. But these are basically the “enable academic mode” settings, anyway.
  • versions. Perl and Python both ran for years in which version x was the safe, stable, industrial language, and version y (it’s not x+1: Python’s parallel versions were 2 and 3000) in which people could explore extensions, removals, or other changes in potentially breaking ways. At some point, each project got to the point where they were happy with the choices, and declared the new version “ready” and available for industrial use. This involved some translation from version x, which wasn’t necessarily straightforward (though in the case of Python was commonly overblown, so people avoided going from 2 to 3 even when it was easy). People being what they are, they put a lot of store in version numbers. So some people didn’t like that folks were recommending to use x when there was this clearly newer y available.
  • FFIs. You can call industrial C89 code (which still works after three decades) from pretty much any academic language you care to invent. If you build a JVM language, it can do what it wants, and still call Java code.

Anyway, I wonder whether that distinction between academic and industrial might be a good one to strengthen. If you make a new programming language project and try to get “users” too soon, you could lose the ability to take the language where you want it to go. And based on the experience of Smalltalk, too soon might be within the first decade.

Fitness Group Ui

A beautiful new way to get fit with others.

This is a personal project that aims to bring groups of like-minded people together with one goal: getting fit together.

Whether you’re in a new city, away travelling, or in your very own town, it can be tough to find people to exercise with.

This app is designed to bring people together: by sharing your location you can see who’s near you and whether any events are on that you’d like to join.

Fitness Group UI Design

The app will be purely fitness focused, so if you’re into yoga you can tailor the app to only feature yoga people, or if you like to try everything then you can access that too.

Plus! You can manage and control who comes to your events, so you have ultimate control over the people you invite to your group training.

Fitness Group Ui

While this is a work in progress, I’m excited to show off a few key screens.

More soon!

Pssst.. Find out how Twitter has improved me as a freelance designer

 


Image by Graham Lee

I love my Testsphere deck, from Ministry of Testing. I’ve twice seen Riskstorming in action, and the first time that I took part I bought a deck of these cards as soon as I got back to my desk.

I’m not really a tester, though I have really been a tester in the past. I still fall into the trap of thinking that I set out to make this thing do a thing, I have made it do a thing, therefore I am done. I’m painfully aware when metacognating that I am definitely not done at that point, but back “in the zone” I get carried away by success.

One of the reasons I got interested in Design by Contract was the false sense of “done” I feel when TDDing. I thought of a test that this thing works. I made it pass the test. Therefore this thing works? Well, no: how can I keep the same workflow, and speed of progress but improve the confidence in the statement?

The Testsphere cards are like a collection of mnemonics for testers, and for people who otherwise find themselves wondering whether this software really works. Sometimes I cut the deck, look at the card I’ve found, and think about what it means for my software. It might make me think about new ways to test the code. It might make me think about criticising the design. It might make me question the whole approach I’m taking. This is all good: I need these cues.

I just cut the deck and found the “Image” card, which is in the Heuristics section of the deck. It says that it’s a consistency heuristic:

Is your product true to the image and reputation you or your app’s company wishes to project?

That’s really interesting. How would I test for that? OK, I need to know what success is, which means I need to know “the image and reputation [we wish] to project”. That sounds very much like a marketing thing. Back when I ran the mobile track at QCon London, Jaimee Newberry gave a great talk about finding the voice for your product. She suggested identifying a celebrity whose personality embodies the values you want to project, then thinking about your interactions with your customers as if that personality were speaking to them.

It also sounds like there’s a significant user or customer experience part to this definition. Maybe marketing can tell me what voice, tone, or image we want to suggest to our customers, but what does it mean to say that a touchscreen interface works like Lady Gaga? Is that swipe gesture the correct amount of quirky, unexpected, subversive, yet still accessible? Do the features we have built shout “Poker Face”?

We’ll be looking at user interface design, too. Graphic design. Sound design. Copyediting. The frequency of posts on the email list, and the level of engagement needed. Pricing, too: it’s no good the brochure projecting Fortnum & Mason if the menu says Five Guys.

This doesn’t seem like something I’m going to get from red to green in a few minutes in Emacs. And it’s one of a hundred cards.

But we are hackers and hackers have black terminals with green font colors ~ John Nunemaker

This post is the first in a series on useful bash aliases and shell customisations that developers here at Talis use for their own personal productivity. In this post I introduce an alias I wrote called awslookup that allows me to retrieve information about EC2 instances across multiple AWS accounts.


Background

At Talis, we run most of our infrastructure on AWS. This is spread over multiple accounts, which exist to separate our production infrastructure from our development/staging infrastructure. When we release new versions of our software, we bring in new instances and retire old ones as the new ones accept traffic. We also use auto scaling to bring in more instances during peak periods. This means that if we ever need to jump onto an instance, we first have to look up its details. In the past I’ve used tools like ElasticFox, a Firefox plugin, which listed your EC2 instances as well as providing lots of other useful functionality:

ElasticFox

Sadly, ElasticFox has been defunct for some time, and though it has inspired many other similar graphical user interfaces, none of them really appealed to me.

As I spend most of my time looking at a terminal window, what I really wanted was a way to query any of our AWS accounts for information about our EC2 infrastructure from within my terminal window, and in such a way that I could take the output and combine it with other commands to do interesting things. I wanted something lightweight, that did one thing, and did it well.

awslookup

The AWS command line interface can do this, but composing queries with it can be quite difficult. I created a small bash function/alias called awslookup as a wrapper around an AWS CLI query that lets me list information about instances based, specifically, on their tag:Name.

For example, running the command awslookup staging '*web*' lists all the instances in our staging/development account that have web anywhere in their name tag:

Awslookup Table Output

Where this really becomes useful is that I can format the output as plain, tab delimited text, and when I do so I can pipe the output from awslookup to any number of other tools. For example here’s the plain text output:

Awslookup Text Output

and now I can extract just the hostname (using awk) for each instance and pipe this to other tools:

Awslookup Pipe Output

Next up …

In the next post in this series, I’ll show you how I used the output from awslookup to solve a completely different problem: automatically changing my terminal theme when I connect to a remote machine in any of our AWS accounts.

February 28, 2019

February 27, 2019

Why 80? by Graham Lee

80 characters per line is a standard worth sticking to, even today. OK, why?

Well, back up. Let’s examine the axioms. Is 80 characters per line a standard? Not really, it’s a convention. IBM cards (which weren’t just made by IBM or read by IBM machines) were certainly 80 characters wide, as were DEC video terminals, which Macs etc. emulate. Actually, that’s not even true. The DEC VT-05 could display 72 characters per line, their later VT-50 and successor models introduced 80 characters. The VT-100 could display 132 characters per line, the same quantity as a line printer (including the ones made by IBM). Other video terminals had 40 or 64 character lines. Teletypewriters typically had shorter lines, like 70 characters.

Typewriters were typically limited to \((\mathrm{width\ of\ page} - 2 \times \mathrm{margin\ width}) \times \mathrm{character\ density}\) characters per line. With wide margins and narrow US paper, you might get 50 characters; with narrow margins and wide A4 paper, maybe 100.

IBM were not the only people to make cards, punches, and readers. Other manufacturers did, with other numbers of characters per card. IBM themselves made 40, 45 and 96 column cards. Remington Rand made cards with 45 or 90 columns.

So, axiom one modified, “80 characters per line is a particular convention out of many worth sticking to, even today.” Is it worth sticking to?

Hints are that it isn’t. The effects of line length on reading online news explored screen-reading with different line lengths: 35, 55, 75 and 95 cpl. They found, from the abstract:

Results showed that passages formatted with 95 cpl resulted in faster reading speed. No effects of line length were found for comprehension or satisfaction, however, users indicated a strong preference for either the short or long line lengths.

However that isn’t a clear slam dunk. Quoting their reference to prior work:

Research investigating line length for online text has been inconclusive. Several studies found that longer line lengths (80 – 100 cpl) were read faster than short line lengths (Duchnicky and Kolers, 1983; Dyson and Kipping, 1998). Contrary to these findings, other research suggests the use of shorter line lengths. Dyson and Haselgrove (2001) found that 55 characters per line were read faster than either 100 cpl or 25 cpl conditions. Similarly, a line length of 45-60 characters was recommended by Grabinger and Osman-Jouchoux (1996) based on user preferences. Bernard, Fernandez, Hull, and Chaparro (2003) found that adults preferred medium line length (76 cpl) and children preferred shorter line lengths (45 cpl) when compared to 132 characters per line.

So, long lines are read faster than short lines, except when they aren’t. They also found that most people preferred the longest or shortest lines the most, but also that everybody preferred the shortest or longest lines the least.

But is 95 cpl a magic number? What about 105 cpl, or 115 cpl? What about 273 cpl, which is what I get if I leave my Terminal font settings alone and maximise the window in my larger monitor? Does it even make sense for programmers who don’t have to line up the comment markers in Fortran-77 code to be using monospaced fonts, or would we be better off with proportional fonts?

And that article was about online news articles, a particular and terse form of prose, being read by Americans. Does it generalise to code? How about the observation that children and adults prefer different lengths, what causes that change? Does this apply to people from other countries? Well, who knows?

Buse and Weimer found that “average line length” was “strongly negatively correlated” with perceived readability. So maybe we should be aiming for one-character lines! Or we can offset the occasional 1,000 character line by having lots and lots of one-character lines:

}
}
}
}
}
}

It sounds like there’s information missing from their analysis. What was the actual shape of the data? What were the maximum and minimum line lengths considered, what distribution of line lengths was there?
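For concreteness, here’s a sketch (my own, not anything from the paper) of the kind of summary you’d want before trusting an “average line length” figure. Note how a few one-character lines plus one enormous line produce a wildly misleading mean:

```python
from collections import Counter
from statistics import mean, median

def line_length_stats(lines):
    lengths = [len(line.rstrip("\n")) for line in lines]
    return {
        "mean": mean(lengths),
        "median": median(lengths),
        "min": min(lengths),
        "max": max(lengths),
        # Count of lines per 10-character bucket: the distribution's shape.
        "histogram": Counter(n // 10 * 10 for n in lengths),
    }

# Three tiny lines and one 1,000-character monster:
stats = line_length_stats(["}\n", "}\n", "    return x\n", "x" * 1000 + "\n"])
print(stats["mean"], stats["median"])   # 253.5 6.5
```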

We’re in a good place to rewrite the title from the beginning of the post: 80 characters per line is a particular convention out of many that we know literally nothing about the benefit or cost of, even today. Maybe our developer environments need a bit of that UX thing we keep imposing on everybody else.

There are a few complaints that are often made about the internet by those of us who spend too much time on it.

1) It Sucks Now

The first is that the internet isn’t fun any more. There’s no exploration. Everything is centralised in a few places. Everybody is trying to build a brand. Nobody does anything just for the fun of it. You go on Twitter, get mad, and log off. There’s nowhere else to go.

2) The Whole Thing Is Owned By Like Three People

The second complaint, and it’s very much related, is that the majority of the content that we produce is aggregated in a few places to the benefit of companies. Twitter. Facebook. Yahoo Answers.

I was reading an AskReddit thread a while ago. “What sites do you use when you aren’t on reddit?” I didn’t know. Everything is in one place, and it’s filtered through algorithms, trends, bots and social manipulation. We generate content, but we don’t give it directly to each other. We give it to a third party who flavours it and squeezes it into our waiting mouths like a sort of digital nutrient paste.

The solution has been staring us in the face. It’s RSS.
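And the format really is simple: everything a link aggregator needs from an RSS 2.0 feed is a couple of elements per item. A minimal sketch using only the standard library, with a made-up example feed:

```python
import xml.etree.ElementTree as ET

def parse_feed(rss_xml, limit=5):
    # Pull (title, link) pairs out of an RSS 2.0 document.
    root = ET.fromstring(rss_xml)
    items = root.findall("./channel/item")[:limit]
    return [(i.findtext("title"), i.findtext("link")) for i in items]

feed = """<rss version="2.0"><channel><title>Example</title>
<item><title>Hello</title><link>https://example.com/hello</link></item>
</channel></rss>"""
print(parse_feed(feed))   # [('Hello', 'https://example.com/hello')]
```

Fetching the XML in the first place is one `urllib.request.urlopen` away.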

Link pages. Remember link pages?

“A link page is a type of web page found on some websites. The page contains a list of links the web page owner, a person or organization, finds notable to mention. Often this concerns an enumeration of partner organizations, clients, friends or related projects.” - Wikipedia, the only good counter-example to my rant.

People still have personal websites. A lot of us techies still have personal websites. But it’s an exercise in brand building. Look how clever I am. Hire me. Be my friend.

We need to bring link pages back. In a big way.

Advantages:

  • Make friends. Become popular.
  • Cool graphics.
  • Won’t disappear when Jack Dorsey dismantles the Twitter servers to salvage parts for Mark Zuckerberg’s future body.

Disadvantages:

  • Literally none.

I will put my money (just kidding, I use GitHub pages so I don’t pay squat) where my mouth is here and say that if you run a tech site, don’t represent a company, and send me a small (ideally 88 x 31, as per the web badge guidelines) image, I will include it on a special page on my website that gets no traffic.

My dream is that one day our sites will once again be connected by an intricate network of links. A web. A World Wide Web.

Answers To Anticipated Comments

James, the web badges you mention are generally used for web standards, not linking to other sites!

Yeah, I know. Aren’t they cool though?

Are you trying to be funny?

Depends whether it worked or not.

This is a great idea. Can I link to your site?

Sure. Here:

my badge

February 25, 2019

Remember this? Today is the last day for the stack. Buy within the next six hours for $47.95 and get my APPropriate Behaviour, along with books on running software businesses, building test-driven developers, and all sorts of software stacks including Ruby, Node, Laravel, Python, Java, Kotlin…

I recently bought myself a copy of Refactoring UI by Adam Wathan and Steve Schoger.

I opted for the much cheaper ($79 USD for a book and three videos) package, forgoing the large amount of extras I could have gotten for another $70 USD: icons, colour palettes, an inspiration gallery and a rather comprehensive-sounding font showcase.

While I can’t say for sure, not having bought them, my guess is that they would easily provide more than the extra $70 of value to any designer or aspiring designer.

So, on to the book and videos, from which I learned many things:

The Book

1) Design From Above

  • How to focus on what the product needs to do before worrying about the layout.
  • How to prototype in low fidelity.
  • How to prioritise designing the core features without getting bogged down in hypothetical extensions.
  • How to figure out who you’re designing for.
  • How to design systems to constrain your choices.

2) Design With Hierarchy

  • How to use font sizes and colours to convey hierarchy.
  • How to separate visual hierarchy from document hierarchy.
  • How to balance semantic importance with position in the hierarchy.

3) Layout

  • How to use white space effectively (and how not to).
  • How to build good space relationships.

4) Text

  • How to establish good font size relationships.
  • How to choose and pair good fonts for different parts of the interface.
  • How to design text for prose.

5) Colour

  • How to choose a good colour palette.
  • How to define your colours in code.
  • How to pair colours and make effective use of contrast.
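The “define your colours in code” idea is easy to sketch. The hex values and naming scheme below are my own invention, not the book’s palette:

```python
# Colours as named tokens with numbered shades, rather than one-off
# hex values scattered through the codebase.
GREY = {50: "#f9fafb", 100: "#f3f4f6", 700: "#374151", 900: "#111827"}
BLUE = {100: "#dbeafe", 500: "#3b82f6", 900: "#1e3a8a"}

def text_colour_for(background_shade):
    # Dark text on light backgrounds, light text on dark ones;
    # a crude stand-in for a real contrast-ratio check.
    return GREY[900] if background_shade <= 100 else GREY[50]

print(text_colour_for(100))   # #111827
```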

6) Depth

  • How to use shadows to convey a sense of “realness” in your interface.
  • How to position and overlap elements effectively.

7) Images

  • How to choose good images.
  • How to pair text and images when contrast becomes a problem.

8) Finishing Touches

  • How to use a lot of simple tips and tricks to make your product look more polished. These aren’t huge wins for accessibility or performance, but things you can do to elevate your design with minimal investment.

The Videos

Both packages come with three screencast-style videos in which they improve the design of a realistic interface, explaining their process as they work. Being able to watch the interface transform as they talk you through it, and then be presented with the before and after, makes a powerful impression as to the value of the concepts they discuss in the book.

Conclusion

This might seem like I’m simply regurgitating the table of contents and I suppose I am, but my aim is to convey the value I gained from this book in my own words as a developer who wants to be good at design (and who isn’t making any money from singing the book’s praises).

The examples are compellingly presented with before and after states, as well as having the important information such as colour and font choices annotated. Weighing in at a relatively lean 218 pages, Refactoring UI doesn’t delve too deeply into colour theory and the psychology of design, but provides countless rules of thumb and visual examples, showing even someone with the most rudimentary design skills (me) how to elevate their work to the next level.

To get a further sense of the value the book provides, I suggest you check out this series of hot tips from author Steve Schoger.

February 22, 2019

Reading List 223 by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my lacy red manties so I can spend time reading stuff.

Authors need editors to catch mistakes. It is human nature that one cannot adequately proof-read one’s own work. Authors of software need the same assistance as authors of novels to achieve the goals of the software organization. ~ Smarter Collaborator

As the statement above suggests, humans are not all that good at self-reviewing their work. It doesn’t matter whether it’s written as code or plain English: a review by a peer will at the very least raise questions, and more than likely identify errors.

Before I dive into Talis’ approach to code reviews, I should first briefly introduce what a code review is, and why it’s beneficial in the development lifecycle.


What’s A Code Review?

A code review, as the name suggests, is the process of checking (reviewing) code. It usually takes the form of reading through the code and assessing whether it is optimal, meets the requirements and is sufficiently tested.

Code reviews are also often referred to as Peer Reviews, because they are performed by someone other than the author of the code. The reviewers are usually colleagues in the same team who work closely on the same code base, and therefore have a good understanding of the context the code has been written in.

When performing a code review the reviewer should be asking questions of the code along the lines of:

  • Does this code meet the requirements?

  • Are the company’s style guides adhered to?

  • Does it cause any regression failures?

  • Is there a way this could be improved:

    • Could this code be made more efficient?
    • Could this code be refactored to make it easier to maintain?
    • Is there a design pattern that can be applied?
    • Is there sufficient testing?
    • Have best practices been followed?
    • Does the approach taken make it maintainable?

What’s The Benefit Of A Code Review?

There are many obvious, practical benefits that a code review offers such as helping to ensure the code is fit for purpose, identifying errors before they make it to production and improving the quality of the code. However, in my opinion the primary benefit is the knowledge transfer that takes place.

We work in small teams here at Talis and typically everyone in the team will be assigned as a reviewer to a code review. This means that everyone who’s actively working in the code base is aware of new or modified code, resulting in shared understanding before the code is merged into the master branch. This makes maintenance easier, because hopefully there isn’t a single point of expertise in any given area.


The Talis Approach To Code Reviews

Here at Talis we use GitHub for our code reviews. All code produced here must go through a review before it’s allowed to be merged to the master branch. To trigger the review process a code author will create a Pull Request on GitHub and assign Reviewers to it.

When do we Code Review?

Typically, code reviews happen once an author has completed the item of work. However, here at Talis we think this is often too late! So we have adopted the approach of being able to trigger code reviews at 30% and 90% complete.

The 30% Review

Early feedback on the outline plan of the changes you are going to make is great! It allows for discussion, clarification, reassurance and often modification before you’ve invested lots of time getting the code into a production ready state. The review is not focused on the details because at this stage of development, the code won’t be polished. It’s purely focused on the approach: does it meet the requirements in the best way?

It also makes suggestions for ‘big change’ to the approach less confrontational. The author has invested less time in their solution and can change direction more easily and quickly. We believe that if you were to wait until the work is complete, the time and cost to make a significant change is too great.

The 90% Review

The 90% code review is perhaps what would be considered more traditional. It occurs at the end of the feature’s development. The code will be more or less production ready; it should adhere to coding standards and have all tests written. This type of code review is more detailed.

The reason we call it a 90% review is because it’s normally the case that a change is required, and that’s a good thing! It’s the purpose of a review; if no changes were ever made the code review would be redundant.

Self Review

Something else we do here at Talis is Self Review. This can be done at any point; however, it is often carried out just before requesting reviewers on a Pull Request. The purpose is twofold. First, it allows you, the author, to identify errors. You wouldn’t believe how the change of context from your IDE to a highlighted webpage makes mistakes pop out!

Second, it allows us to annotate our code changes, providing a commentary to help guide the reviewer through the decision process the author went through when writing the code. We like to think this makes our code reviews better because reviewers also get some context around the code changes they are looking at.


I concentrated throughout on the critique aspect of a code review. However, a review isn’t limited to ‘…this could be done better like this’, or ‘you have a typo there’. Reviewers often also leave positive feedback. A particular favourite of ours is the virtual ‘Thumbs Up’ (👍), to give the author a pat on the back for a job well done.

As mentioned at the beginning, humans aren’t great at self review. To make sure our blogs don’t fall short of our exacting standards even they get the code review treatment. We’re sure they’re the better for it!

Thanks for reading!

February 21, 2019

Sailing Ship by Bruce Lawson (@brucel)

A Swing/ Big Band song! Many thanks to Chris Taylor – a Big Band scholar – for advice on what’s authentic and what was out of place. Produced by Shez

Leave these stupid people.
The time’s exactly right.
I have got a sailing ship,
let’s sail away tonight.

You can be the captain
I will be the crew.
I have got a sailing ship
so I can sail away with you.

Help me weigh the anchor
I’ll look out while you steer.
Come aboard my sailing ship
Let’s sail away from here.

We’ll sing along with mermaids
who will help us chart our course.
be guided by dolphins
from these weary, dreary shores.

The map says “here be dragons”
but perhaps it’s Shangri-La.
Let’s go and see what lies beyond
this half-life where we are.

Through the doldrums, ice and tempests
we’ll cross the oceans deep.
We’ll make all pirates walk the plank
and croon the Kraken back to sleep.

My ship is called “Clarita”,
her sails are big and bright.
The wind is up and the tide is high
let’s sail away tonight.

Words and music © Bruce Lawson 2018, all rights reserved.

Remember remember the cough 6th of November, when APPropriate Behaviour joined a wealth of other learning material for software engineers in a super-discounted bundle called the Ultimate Programmer Super Stack?

It’s happening again! This is a five-day flash sale, with all the same material on levelling up as a programmer, running a startup, and learning new technologies like Aurelia, Node, Python and more. The link at the top of this paragraph goes to the sales page, and you’ve got until Monday, when it’s gone for good.

February 18, 2019

The Fragile Manifesto by Graham Lee

A lot of what I’ve been reading and thinking about of late is about the agile backlash. More speed, lower velocity reflects on IT teams pursuing “deliver more/newer IT” at the cost of “help the company achieve its mission”. Grooming the Backfog is about one dysfunction that arises as a result: (mis)managing a never-ending road of small changes rather than looking at the big picture and finding a path toward the destination. Our products are not our products attempts to address this problem by recasting teams not as makers of product, but as solvers of problems.

Here’s the latest: UK wasting £37 billion a year on failed agile IT projects. Some people will say that this is a result of not Agiling enough: if you were all Lean and MVP and whatever you’d not get to waste all of that money. I don’t necessarily agree with that: I think there are actually things to learn by, y’know, reading the article.

The truth is that, despite the hype, Agile development doesn’t always work in practice.

True enough, but not a helpful statement, because “Agile” now means a lot of different things to different people. If we take it to mean the values, principles and practices written by the people who came up with the term, then I can readily believe that it wouldn’t work in practice for people whose context is different from those who came up with the ideas in 2001. Which may well be everyone.

I’m also very confident that it doesn’t mean that. I met a team recently who said they did “Agile”, and discussed their standups and two-week iterations. They also described how they were considering whether to go from an annual to a biannual release.

Almost three quarters (73%) of CIOs think Agile IT has now become an industry in its own right while half (50%) say they now think of Agile as “an IT fad”.

The Agile-Industrial Complex is well-documented. You know what isn’t well-documented? Your software.

The report revealed that 44% of Agile IT projects that fail do so because of a failure to produce enough (or any) documentation.

The survey found that 34% of failed Agile projects failed because of a lack of upfront and ongoing planning. Planning is a casualty of today’s interpretation of the Agile Manifesto[…]

68% of CIOs agree that agile teams require more Architects. From defining strategy, to championing technical requirements (such as performance and security) to ensuring development teams stick to the rules of the game, the role of the Architect is sorely missed in the agile space. It must be reintroduced.

A bit near the top of the front page of the manifesto for agile software development is a sentence fragment that says:

Working software over comprehensive documentation

Before we discuss that fragment, I’d just like to quote the end of the sentence. It’s a long way further down the page, so it’s possible that some readers have missed it.

That is, while there is value in the items on the right, we value the items on the left more.

Refactor -> Inline Reference:

That is, while there is value in comprehensive documentation, we value working software more.

Refactor -> Extract Statement:

There is value in comprehensive documentation.

Now I want to apply the same set of transforms to another of the sentence fragments:

There is value in following a plan.

Nobody ever said don’t have a plan. You should have a plan. You should be willing to amend the plan. I was recently asked what I’d do if I found that my understanding of the “requirements” of a system differ from the customer’s understanding. It depends a lot on context but if there truly is a “the customer” and they want something that I’m not expecting to offer them, it’s time for me to either throw away my version or find a different customer.

Similarly, nobody said don’t have comprehensive documentation. I have been on a very “by-the-book” Agile team, where a developer team lead gave feedback that they couldn’t work out where a change would go to enable a particular feature. That’s architecture! What they wanted was an architectural plan of the system. Except that they couldn’t explicitly want that, because software architecture is so, ugh, 1990s and Rational Rose. Wanting an architecture diagram is like wanting to use CORBA, urrr.

Once you get past that bizarre emotional response, give me a call.

February 15, 2019

Good design is a language, and when everyone is speaking the same language, that’s when things get done. ~ Arjun Narayanan


What is a design system?

Also known as a design language, or UI library, a design system is really an extension of what you might know as a style or brand guide, into a series of design components for you and your team to reuse.

The Talis design system lives at http://design.talis.com/ and we developers refer to it often when implementing changes to our products.

Pain points

The hard part is actually sticking to your system, even as your product evolves and changes. Multiple teams are writing new code and designing new features, maybe in different programming languages and using different UI frameworks - at Talis we have many on the go!

It’s tempting to think that the handover from a designer to a developer should be a uni-directional flow of pixel-perfect mockups. But that just isn’t realistic or advisable, because the design process should involve feedback from those implementing. This is both so that the development team get sight of the design roadmap before it’s set in stone, to enable longer-term technical planning, and also so that problems, concerns, or the effect on work and time estimates can be voiced early.

We talk about doing reviews of code at 30% and 90% in our Engineering Handbook, but we also try to follow that pattern for reviewing design and product work. We have found this goes a long way to reducing the friction of design handover, but it doesn’t completely alleviate the tendency for our products to drift from what’s set out in our design system.

A problem shared …

In October last year, I was given a scholarship to attend the Design Tools Hackathon 2018. (Thank you to Sketch who sponsored me!) There were around 80 participants from 15 different countries; a mix of designers and developers. The conference consisted of half a day of talks and then a day and a half of hacking. I had worried that as a developer I’d be out of my depth, given I hadn’t used many of the design tools involved, but it turned out that at a hackathon centred around extending design tools, developers who could hack together working prototypes were in high demand! If you’re interested, check out the recap blog post, which has recordings of all the talks, including on topics like “Design Engineering” and how to roll out a design system in a remote-only company. Bonus points if you spot me in the hack demos :)

Through brainstorming hack ideas, I found that many other developers and designers also confronted this issue frequently, and the tooling they had wasn’t really helping them stick to their shared language. I also discovered new tools I’d never used before and found some really powerful workflows that I felt were just scratching the surface of the common pitfalls in modern development teams.

One such tool is Zeplin, a connected space for product teams to hand off designs and style guides with accurate specs, assets and code snippets. The idea is that the designer uploads their designs from Sketch into Zeplin to make them available for developers to review and implement seamlessly. The app provides the developer with a list of CSS properties to style each element in the design. You can give it a go for free at https://zeplin.io/

While a list of CSS properties goes some way to helping you implement a design in code, it won’t necessarily correlate to your existing design system. What follows is a fiddly process of mapping the components and styles in the design, as generated by the handover tool, to ones that already exist in your repository. Identifying any deviations from the agreed style guide at this point is crucial. Maybe there are new paradigms that need adding to your design system. Maybe there’s a change in the design that has gone unnoticed until now. As the person implementing the design, the developer is in a prime position to validate it against what has gone before and catch any potential design debt.

Ultimately, the goal is DRY code (Don’t Repeat Yourself). In existing code bases with well established stylesheets, there are often many pre-built helper classes that are used to customise and style elements. Instead of adding more bulk to the stylesheet by creating entirely new classes every time you add a new element to a page, you want to build your element using existing CSS classes.

…is a great jump off point for a hackathon

We theorised there was a better way! What if a tool like Zeplin could tell you exactly what components from your style guide were in use in a given design? What if Zeplin could tell you off the bat the classes you’d need to implement it? And which bits of the style fell entirely outside of your design system altogether?

This led to our MVP: a Zeplin extension that, given a CSS stylesheet, compares the CSS properties in the stylesheet with the CSS generated by Zeplin and outputs a list of matching helper classes. Magic! You can check out our code on GitHub. Currently it requires you to bundle your stylesheet in with the extension, but once the Zeplin API is extended to allow it, we’d like to give users a way to upload their own stylesheet and let the extension take care of the rest.
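The core of that comparison can be sketched in a few lines. This is an illustration of the idea, not the extension’s actual code, and the class names and properties below are made up:

```python
def matching_helpers(generated_css, helper_classes):
    # A helper class matches when every property it sets appears,
    # with the same value, in the CSS generated for the element.
    matches = []
    for name, props in helper_classes.items():
        if set(props.items()) <= set(generated_css.items()):
            matches.append(name)
    return matches

helpers = {
    ".text-bold": {"font-weight": "700"},
    ".text-muted": {"color": "#666"},
    ".mt-2": {"margin-top": "8px"},
}
generated = {"font-weight": "700", "color": "#666", "font-size": "14px"}
print(matching_helpers(generated, helpers))   # ['.text-bold', '.text-muted']
```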

For more info on building Zeplin extensions, see the Zeplin extension documentation.

Systems are only as good as the people in them

Of course, tools don’t really solve problems - they optimise processes. The real solution comes down to the relationship between your design team and your development team and the quality of your communication. Getting buy-in can be hard, but I hope I’ve convinced you there might be ways of dismantling barriers to seamless developer/designer collaboration.

February 13, 2019

Here’s a talk I did at a lovely inclusive, anarchic, friendly conference last month called Monki Gras. It was great; a low ticket-price, proper food, craft beers and melted cheese snacks, a diverse group of speakers and a diverse audience. I made loads of new friends and heard loads of new perspectives.

February 12, 2019

February 10, 2019

Solve the problem then make the app.

I’ve been guilty of designing and launching products that I’ve personally wanted to see on the market and guess what, most of them failed.

Understanding why they fail is important to any aspiring startup founder. Learn from it and make your next project awesome.

In this blog I talk about how your next project should solve a problem, or take an existing problem and put a unique spin on it.

Mike Cappucci, a colleague and good friend in the US, and I often talk about new product ideas, and he’s constantly reminding me to find the problem before I get too excited about my new idea.

Here’s where I fail. I think of an idea, I get excited and I want to design it. That’s just how I work.

There’s nothing wrong with this approach, but if I could take back the hours and hours of design time that went nowhere I would. I’d probably use those hours elsewhere with more value to my business.

It’s important to have someone within your professional network who will tell you honestly that your idea is bad. It will save you time. You’ll find that new ideas grow from bad ones; it’s just part of the process.

So let’s start with a basic rule. Find a problem and work with it.

The problem doesn’t need to be big or have world changing effects, it just needs to be there. It needs to be something that you think could be solved with a simple technical solution.

I emphasised simple because ideas can be complex, but the solution should be simple. Uber didn’t become the go-to taxi app by being confusing, it just works — even your grandma could use it.

Now you have your problem, workshop it. Ask everyone you know about the problem and how they tackle it; you may be surprised by their solutions. Next, run it by someone you trust in the startup game – someone who will tell you the truth, as quite often people tell you what you want to hear.

Problem identified, research done – get together an MVP. It doesn’t need to be much, it just needs to showcase your solution – after that… well, that’s another blog entirely.

Solution already exists but I have a unique spin on the idea.

The problem doesn’t need to be original; it just needs to exist. You may think that you could do what the Deliveroo app does, but for every meal you sell you plant a tree. Go for it. The world needs green thinkers like you. Build it.

Make sure your solution ticks new boxes. If you just want to be Deliveroo and you launch… people will just want to use Deliveroo as it’s established and they have an account already active.

BUT if you want to be the green version of Deliveroo, and you have an audience you can target with those differences, you have yourself a unique market opportunity.

Although brief, this blog should highlight one important thing.

Find the problem first, then make the app.

If you’ve enjoyed this blog, I have many more. Plus you can follow me on Twitter for more thoughts.

Automated Farming Equipment Platform

Late last year I was working with US startup Yellowbox, who manufacture agricultural devices to automate farming equipment. As part of their expansion they needed a good-looking and functional app, which is where I stepped in.

I worked closely with founder Neil Mylet to understand the problems he was solving so I could translate them into an easy-to-use UX design. The app needed quite a few features – projects, weather, health, news – so the first challenge was how to fit it all in without the app looking too busy.

I designed their iPhone app and worked on their new branding. Product launches very soon.

The post Yellowbox UX & UI Design appeared first on .

February 09, 2019

Future tech thoughts: e-sports for fitness

A month ago I purchased a smart trainer for my bike. I liked the idea of having an indoor activity that would link up to some cool apps to track and monitor my fitness progress.

I like it. I go to the garage early in the morning, spin up for an hour, sweat loads and come out of it feeling energised and ready for the day.

I’ve also been testing some of the main smart trainer apps on the market (Zwift, TrainerRoad, Rouvy, FulGaz & The SufferFest to name a few). Perhaps I’ll write a separate blog on those as they range in quality and price.

Zwift + Smart Trainer

This got me thinking, what does the future of smart training look like and how do they make money from it?

We all know gaming e-sports is huge, with many tournaments having multi-million-dollar prizes (the record is for DOTA 2 at $24.6 million). e-sports isn’t a new term; it’s been growing at crazy rates for years and will only keep growing.

Sponsors throw money at these tournaments, some tournaments are like Olympic games with huge stadiums! It really is exciting but why stop at gaming?

DOTA 2 Tournament

There’s huge potential for other indoor training types to become e-sports. Ultra marathons via your treadmill. Rowing championships via your machine in the basement.

You could apply the same gaming formula to loads of low-cost smart products that are sweeping the market. It just takes some vision and lots of funding.

Zwift + Treadmill

My smart trainer lets me ride alongside thousands of other cyclists all over the world, and while there are some prize-based online tournaments now… in the future this could be worth billions.

We’re likely to see a huge surge in smart fitness products in the coming years. We’ll probably see ordinary products like trainers get smarter.

Anything you can do at home or without any professional gym setting could be weaponised into a smart product.

I really believe that smart tournaments will attract huge sponsors and million dollar prizes for those who take part.

Wattbike Atom via Bike Radar.

Imagine the scene. A stage with 10 smart trainers, with Manic Street Preachers-level stadium rock lights beaming down onto them. The gun goes and we’re off: ten world-class cyclists race to the finish line while users at home try to keep up.

The victor stands on the stage panting while holding a cheque for $40 million; cut to Cat Deeley, who’s live via video link with the person who finished top of the home racers. “Congratulations, you’ve also won a million dollars”. Cut to commercials.

So what do we call it? Is it e-sports, or is it f-sports, f for fitness? Perhaps s-sports, for smart. One way or another, I believe we’ll be reading lots more about smart tournaments from 2019 onwards.

Enjoyed this blog? Follow me on Twitter

The post Future tech thoughts: e-sports for fitness appeared first on .

February 08, 2019

In no particular order, here are some of my favourite reads from 2018:

Atomic Habits Atomic Habits by James Clear

Of all the books I read last year, this had the biggest impact on me. James Clear shares a framework for building good habits, which can be reversed to help stop bad habits. It’s packed full of practical advice. A book I’ll be revisiting often.

The quality of our lives often depends on the quality of our habits.

Every action you take is a vote for the type of person you wish to become.

It is so easy to overestimate the importance of one defining moment and underestimate the value of making small improvements on a daily basis.

Making Websites Win Making Websites Win by Karl Blanks and Ben Jesson

I build websites for a living, so I was obviously excited to pick this up when I heard about it. The market is flooded with books about how to design and build websites, but very few teach how to make effective websites. This is that book. At the time of writing, the digital edition is £1.99/$1.95. It’s crazy how much value you’ll get for less than a cup of coffee.

If you do make your website more beautiful, ensure your designs are minimalist—visually and technically. Keep them elegantly simple and easy to update. And don’t forget that—like the Stanley hammer—good functional design has a beauty of its own.

The top companies make frequent, incremental changes, and rarely (if ever) have huge website redesigns.

The best marketers create funnels that counter each objection at the exact moment that the visitors are thinking it. And the only way to do that is to understand the visitors well.

Lying Lying by Sam Harris

A thought-provoking quick read (it’ll only take about an hour) on the damage that can be caused by lying. Harris makes a clear argument that few of us are ever in a situation where lying is necessary or beneficial.

Honesty is a gift we can give to others.

Unlike statements of fact, which require no further work on our part, lies must be continually protected from collisions with reality.

Lies are the social equivalent of toxic waste: Everyone is potentially harmed by their spread.

How To Be Right: … in a world gone wrong How To Be Right: … in a world gone wrong by James O’Brien

James O’B – radio presenter on LBC and host of one of my favourite podcasts, Mystery Hour – has spoken to more disgruntled people than most of us. In this book, he dissects the “faulty opinions” from callers to his phone-in show. This book reinforced my belief that most people aren’t inherently bad, but have been misled by politicians and media organisations.

The challenge is to distinguish sharply between the people who told lies and the people whose only offence was to believe them.

Hardly anyone is asked to explain their opinions these days; to outline not just what they believe, but why.

I believe it boils down to a simpler truth than many of us are prepared to admit to: some people are determined to believe in the fundamental badness of others. They choose to.

The Obstacle Is the Way The Obstacle Is the Way by Ryan Holiday

I’ve read Obstacle Is The Way twice now, and I got just as much from it on the second pass. This book serves as a practical introduction to stoicism. If you’re new to the subject of stoic thinking, there’s much to be gleaned from these pages.

Today, most of our obstacles are internal, not external.

Think progress, not perfection.

If an emotion can’t change the condition or the situation you’re dealing with, it is likely an unhelpful emotion. Or, quite possibly, a destructive one.

When When by Daniel H. Pink

Daniel Pink, in this well-researched book, shares his findings on the science of when things should be done. You might be pleased to hear that he doesn’t advocate waking up at 4am to work out (as seems to be the craze in the productivity space). Instead, reading this book will make you more aware of your own patterns and routines, and help you make the most of them. As the author says, “I used to believe in ignoring the waves of the day. Now I believe in surfing them.”

First, our cognitive abilities do not remain static over the course of a day. During the sixteen or so hours we’re awake, they change—often in a regular, foreseeable manner. We are smarter, faster, dimmer, slower, more creative, and less creative in some parts of the day than others.

Ericsson found that elite performers have something in common: They’re really good at taking breaks.

I used to believe in ignoring the waves of the day. Now I believe in surfing them.

Daring Greatly Daring Greatly by Brené Brown

I read this book as part of the book club I’m in. After the first chapter, I wasn’t sure where the book was heading or whether I was going to connect with the message, and honestly, if it weren’t for the book club, I’d have put the book down and moved on. I’m glad I kept reading, because this is a wonderfully warm and tender book about vulnerability. It’s a book I needed to read.

We are a culture of people who’ve bought into the idea that if we stay busy enough, the truth of our lives won’t catch up with us.

Connection is why we’re here. We are hardwired to connect with others, it’s what gives purpose and meaning to our lives, and without it there is suffering.

Vulnerability is the core of all emotions and feelings. To feel is to be vulnerable. To believe vulnerability is weakness is to believe that feeling is weakness.

It Doesn't Have to be Crazy at Work It Doesn’t Have to be Crazy at Work by Jason Fried and DHH

The founders of Basecamp have written a much-needed manifesto for running a calm business. Social media is full of workaholics touting that you need to pull all-nighters to be successful, but this book shows that it is possible to run a successful business by working sensible hours, taking vacations, spending time with family and getting a good night’s sleep.

The answer isn’t more hours, it’s less bullshit. Less waste, not more production. And far fewer distractions, less always-on anxiety, and avoiding stress.

Cutting back when times are great is the luxury of a calm, profitable, and independent company.

A business is a collection of choices. Every day is a new chance to make a new choice, a different choice.

Notes on a Nervous Planet Notes on a Nervous Planet by Matt Haig

Following up from Reasons to Stay Alive, Matt Haig has written a manual on how to live a sane life in a world that is trying to make us crazy. He covers a breadth of topics that cause anxiety and unhappiness: consumerism, social media, technology, and even reading or watching the news.

The thing with mental turmoil is that so many things that make you feel better in the short term make you feel worse in the long term. You distract yourself, when what you really need is to know yourself.

There is no shame in not watching news. There is no shame in not going on Twitter. There is no shame in disconnecting.

The whole of consumerism is based on us wanting the next thing rather than the present thing we already have. This is an almost perfect recipe for unhappiness.

The Entrepreneur’s Guide to Keeping Your Shit Together The Entrepreneur’s Guide to Keeping Your Shit Together by Sherry Walling

Sherry Walling, co-host of the Zen Founder podcast, has written an important book on staying healthy while dealing with the stresses of running a business. The premise is loud and clear: “the reality is that without a healthy founder, it is impossible to have a healthy business.”

When we lose deep connection with others, our health suffers and our businesses suffer.

Freedom and anxiety. Ingenuity and failure. Adventure and instability. Meaning and isolation. These stand in stark contrast to each other and represent the great risk of being an entrepreneur.

As an entrepreneur, one of the smartest things you can do is to make sure you are in a community—not only with your family and friends, but also in a community with other entrepreneurs.

Profit First Profit First by Mike Michalowicz

While I found the author annoying at times (it’s another business book that should have been 100 pages or fewer), there’s a lot to learn from this book. In fact, it changed the way I run the finances in my own business. Recommended if you run a small business.

Putting your nose to the grindstone is a really easy way to cover up an unhealthy business.

At the end of the day, the start of a new day, and every second in between, cash is all that counts. It is the lifeblood of your business.

Eliminating unnecessary expenses will bring more health to your business than you can ever imagine.

The Simple Path to Wealth The Simple Path to Wealth by J.L. Collins

This book is a mix of advice on investments and general financial health, and surprisingly isn’t as dry or boring as you might expect. There’s plenty of wisdom to be found. I loved the concept of ‘F-You money’, which means having enough savings to give you control and autonomy.

It’s your money and no one will care for it better than you.

There are many things money can buy, but the most valuable of all is freedom. Freedom to do what you want and to work for whom you respect.

It’s a big beautiful world out there. Money is a small part of it. But F-You Money buys you the freedom, resources and time to explore it on your own terms.

Psychopath Test The Psychopath Test by Jon Ronson

After reading So You’ve Been Publicly Shamed, and more recently The Psychopath Test, Jon Ronson has quickly become one of my favourite authors. This book follows Ronson’s journey into psychopathy and how dangerous it can be to misdiagnose people. I also found it interesting that “a disproportionate number of psychopaths can be found in high places”, such as CEOs or politicians. Makes sense when you think about current world affairs.

I wondered if sometimes the difference between a psychopath in Broadmoor and a psychopath on Wall Street was the luck of being born into a stable, rich family.

‘Sociopaths love power. They love winning. If you take loving kindness out of the human brain there’s not much left except the will to win.’ ‘Which means you’ll find a preponderance of them at the top of the tree?’ I said.

If you’re after more book recommendations, I also made a list of the best books I read in 2017. I’d love to hear your book recommendations too! Shoot me a tweet.

Reading List 222 by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my lacy red manties so I can spend time reading stuff.

February 05, 2019

The beginning of 2019 was mostly characterised by letting go of things. Some of the things I let go of were physical items I had outgrown. Some were items that I only thought I wanted. Some were items I was given and felt obligated to hold on to. Objects represent your feelings towards the person who gave them to you. They represent who you wanted to be when you got them.

These things are the ones that are very hard to get rid of.

”<high pitched noise>” ~ Marie Kondo

That’s right. I’ve been watching the Netflix show “Tidying Up With Marie Kondo” after it exploded all over Twitter. In my defence, I liked KonMari before it was cool.

The Digital Purge

“Is your cucumber bitter? Throw it away. Are there briars in your path? Turn aside. That is enough.” - Marcus Aurelius

Goodbye, weekly newsletters that I never read. Goodbye, promotional emails from companies whose trial I used for about a week four years ago. Goodbye, statistics from the Medium blog that I don’t have any posts on.

I un-followed a lot of Twitter accounts in that first month, and I am continuing to do so with the very cool Tokimeki Unfollow by Julius Tarng.

The Physical Purge

“thank u, next.” - Ariana Grande

My living space is not large enough for the amount of stuff I had. I got rid of a whole bunch of it through many, many trips to the charity shop. This has made me happier in a lot of ways: more room for new stuff; less clutter to make me feel cramped and cause anxiety; re-connecting with some of the stuff that I do actually like, but which was buried under a mountain of gubbins and bumph.

The Great Kondo-ification of my living space went on and goes on.

The Emotional Purge

“If the rule you followed brought you to this, of what use was the rule?” - Anton Chigurh

I use the great Todoist to organise the things I have to get done. It was a graveyard. Stuff I once wanted to do, but have lost interest in. Things that I felt obligated to do that weren’t actually necessary or urgent.

Tasks that progressed me towards a goal I no longer held close.

Deleted them all.

The end result is that my list of tasks now much more closely represents the things I need to do and the things I want to do.

February 01, 2019

Fleet Management Software

AutoGear is a successful technology business based in Oslo, Norway. They provide a piece of technology that fits into your vehicle and tracks your mileage for personal and work usage. In Norway there are laws preventing work vehicles from being used for personal reasons, and AutoGear helps businesses make sure this law is followed.

autogear-ui-ux-design

AutoGear approached Mike to take the creative lead on UI and UX design for their web dashboards.


The project was an extremely complicated one and I’m thrilled with the results.

complex UI design

Mobile Route Map UI

The post AutoGear UX – UI Design appeared first on .

January 31, 2019

Model Software by Andy Wootton (@WooTube)

Today, I had to stop myself writing “solving the problem” about developing software. Why do we say that? Why do software people call any bounded area of reality “the problem domain”?

My change of mind has been fermenting for a while, due to modelling business processes, learning about incremental, agile software development and more recently writing and learning functional programming. In the shower this morning, I finally concluded that I think software is primarily a modelling medium. We solve problems using the models we build.

Wanting to create another first-person shooter game or to model the fluids in a thermo-nuclear reactor are challenges, not problems. We build models of systems we have defined and the systems don’t even have to be real. I read a couple of days ago that a famous modern philosopher said our world is made of both reality and our ideas. Assuming the computer hardware is real, the software can model either reality or our imagination; our chosen narrative.

‘Digital’ gets everyone working with software models instead of reality. Once everyone lives inside the shared model, when does it become our reality?

Or when did it?

January 30, 2019

Chicken McNuggets by Stuart Langridge (@sil)

Back in the old days, when things made sense, you could buy Chicken McNuggets in boxes of 6, 9, and 20.¹ So if you were really hungry, you could buy, for example, 30: two boxes of 9 and two boxes of 6. If you weren’t that hungry you were a bit scuppered; there’s no combination of 6, 9, and 20 which adds up to, say, 14. What if you were spectacularly hungry, but also wanted to annoy the McDonald’s people? What’s the largest order of Chicken McNuggets which they could not fulfil?

Well, that’s how old I am today. Happy birthday to me.
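For anyone who wants to check my arithmetic, the largest unfulfillable order can be brute-forced in a few lines — a quick sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_buy(n):
    """True if exactly n nuggets can be made from boxes of 6, 9 and 20."""
    if n == 0:
        return True
    # n is buyable if removing one box size leaves a buyable remainder.
    return any(can_buy(n - box) for box in (6, 9, 20) if n >= box)

# Once six consecutive sizes are buyable, every larger order is too
# (just add more boxes of 6), so searching up to 100 is plenty.
largest = max(n for n in range(1, 100) if not can_buy(n))
print(largest)  # 43
```

Which agrees with the classic answer: 43 is the largest number you can’t make from 6s, 9s and 20s.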

Tomorrow I’m delivering my talk about The UX of Text at BrumPHP, and there may be a birthday drink or two afterwards. So if you’re in the area, do drop by.

Best to not talk about politics right now. It was bad two years ago and it’s worse now. We’re currently in the teeth of Brexit. I thought this below from Jon Worth was a useful summary of what the next steps are, but this is no long-term thing; this shows what might happen in the next few days, which is as far out as can be planned. I have no idea what I’ll be thinking when writing my birthday post next year. I’m pretty worried.

Right, back to work. I’d rather be planning a D&D campaign, but putting together a group to do that is harder than it looks.

  1. yes, yes, now you can get four in a Happy Meal, but that’s just daft. Who only wants four chicken nuggets?

January 29, 2019

Grooming the Backfog by Graham Lee

This is “Pub Walks in Warwickshire”. NEW EDITION, it tells me! This particular EDITION was actually NEW back in 2008. It’s no longer in print.

Pub Walks in Warwickshire

Each chapter is a separate short walk, starting and finishing at a pub with a map and instructions to find your way around the walk. Some of the instructions are broken: a farmer has put a barbed wire fence across a field, or a gate has been replaced or removed. You find when you get there that it’s impossible to follow the instructions, and you have to invent a new route to get back on track. You did bring a different map, didn’t you? If not, you’ll be relying on good old-fashioned trial and error.

Other problems are more catastrophic. The Crown at Napton-on-the-hill seems to have closed in about 2013, so an attempt to do a circular walk ending with a pint there is going to run into significant difficulties, and come to an unsatisfactory conclusion. The world has moved on, and those directions are no longer relevant. You might want to start/end at the Folly, but you’ll have to make up a route that joins to the bits described here.

This morning, a friend told me of a team that he’d heard of who were pulling 25 people in to a three-hour backlog grooming session. That sounds like they’re going to write the NEW EDITION of “Pub Walks in Warwickshire” for their software, and that by the time they come around to walking the route they’ll find some of the paths are fenced over and the pubs closed.

Decomposing the Analogy

A lengthy, detailed backlog is not any different from having a complete project plan in advance of starting work, and comes with the same problems. Just like the pub walks book, you may find that some details need to change when you get to tackling them, therefore there was no value in spending the time constructing all of those details in the first place. These sorts of changes happen when assumptions about the organisation or architecture of the system are invalidated. Yes, you want this feature, but you can no longer put it in the Accounts module because you found that customers think about that when they’re sorting their bills, not their accounts. Or you need to put more effort into handling input from an external data source, because the way it really works isn’t quite the same as the documentation.

Or you find that a part of the landscape is no longer present and there’s no value in being over there. This happens when the introduction of your system, or a competitors’, means that people no longer worry about the problem they had back at the start. Or when changes in what people are trying to do mean they no longer want or need to solve that problem at all.

A book of maps and directions is a snapshot in time of ways to navigate the landscape. If it takes long enough to follow all of the directions, you will find that the details on the ground no longer match the approximation provided by the book.

A backlog of product features and stories is a snapshot in time of ways to develop the product. If it takes long enough to implement all of the features, you will find that the details in the environment no longer match the approximation provided by the backlog.

A Feeling of Confidence

We need to accept that people are probably producing this hefty backlog because they feel good about doing it, and replace it with something else to feel good about. Otherwise, we’re just making people feel bad about what they’re doing, or making them feel bad by no longer doing it.

What people seem to get from detailed plans is confidence. If what they’re confident in is “the process as documented says I need a backlog, and I feel confident that I have done that” then there’s not much we can do other than try to change the process documentation. But reality probably isn’t that facile. The confidence comes from knowing where they’re trying to go, and having a plan to get there.

We can substitute that confidence with frequent feedback: confidence that the direction they’re going in now is the best one given current knowledge, and that it’s really easy to get updates and course corrections. Replace the confidence of a detailed map with the confidence of live navigation.

On the Backfog

A software team should still have an idea of where it’s going. It helps to situate today’s development in the context of where we think (but do not know) we will be soon, to organise the system into a logical architecture, to see which bits of flexibility Ya [Probably] Ain’t Gonna Need and which bits Ya [Probably] Are. It also helps to have the discussion with people who might buy our stuff, because we can say “we think we’re going to do these things in the coming months” and they can say “I will give you a wheelbarrow full of money if you do this one first” or “actually I don’t need that thing so I hope it doesn’t get in my way”.

But we don’t need to know the detailed steps and directions to get there, because building those details now will be wasted effort if things change by the time we are ready to tackle all of the pieces. Those discussions we’re having with the people who might buy our stuff? They might, and indeed probably should, change that high-level direction.

Think of it like trying to navigate an unknown landscape in fog. You know that where you’re trying to get to is over there somewhere, but you can’t clearly see the whole path from here. You probably wouldn’t just take a compass bearing and head toward the destination. You’d look at what you can see around, and what paths there are. You’d check a map, sure, but you’d probably compare it with what you can see. You’d phone ahead to the destination, and check that they expect to be open when you expect to get there. You’d find out if there are any fruitful places to stop along the way.

So yes, share the high-level direction, it’s helpful. But share the uncertainty too. The thing we’re doing next should definitely be known, the thing we’re doing later should definitely be guesswork. Get confidence not from colouring in the plan all the way up to the edges, but by knowing how ready and able you are to update the plan.

January 27, 2019

App Ui design quiz

Challenge yourself & win Big Cash

PROVEIT is the first US app that lets you play daily trivia against your friends for cash prizes. Whether you love Seinfeld, Science, or the Super Bowl, someone’s waiting for you to PROVEIT.

I love it when I get recommended to work with awesome tech teams all over the world. This recommendation came from Matt at Prempoint suggesting I needed to work with his friends Nate and Prem on their new trivia quiz game.

Once introductions were made I was quick to jump onboard and work on this exciting project.

I started by designing some concepts for the new app. We wanted to focus on how the game could look several versions down the line, including licensed sections, promoted content and huge tournament challenges. The concepts sparked lots of excitement and debate; the design needed to tick a lot of boxes – not only visually but legally. We felt the best direction was to make it as social as possible, which led to showing who was playing the games, how many people were playing, and easy-to-access friend challenges.


I was given lots of creative freedom and designed the whole UX flow, from sign-up and on-boarding to winning your first game. It was a great experience and I’m proud of the final product.

It’s only available in the US for the moment, but keep an eye on this team as they’re doing some great things in the quiz space.

Interested in puzzle games? Try some of these

The post PROVEIT Game Design appeared first on .

January 22, 2019


Improve Your README

For a lot of code bases, the README is the first thing that people are likely to see. The quality of your README is going to set an important first impression. How do I use this? How do I install it? What does it even do?

If I’m looking for a new tool, those are some pretty important questions. At a minimum, I think a README ought to clearly answer:

  • What is it? What’s it for?
  • How do I install it in order to use it?
  • How do I install it in order to develop it?
  • How do I use it? Provide an example or two or ten of the most common usage scenarios.
  • How do I ask questions about it? Issue tracker? Twitter?
  • How can I contribute to it?

I’m guilty of missing most or all of these on pretty much any tool I’ve ever published. Most of them are useless, so I’m not too worried, but my work project has users. It has other team members. We’re accountable to each other for any bad documentation.

Our on-boarding process has done a lot to improve our README. We don’t hire often, but when we do, our documentation is the primary reference for getting new people up to speed. A lot of the time they’ll ask a question or struggle with something, and we’ll go “wow, we really should have written that down.” There’s a lot of institutional knowledge floating about and it’s difficult to know what should be written down until it’s approached by a totally fresh pair of eyes. Developer on-boarding is a great way to test your documentation.

A common theme in the responses to the challenge thread was the difficulty of keeping the README up to date. Since there’s no way I know of to have a bad README fail a build, I can’t think of any good way to mitigate this other than encouraging diligence and maybe making it someone’s (or many persons’, or everybody’s) responsibility to keep it up to date.
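One way to at least partly automate that diligence — a rough sketch, where the required section names are assumptions taken from the checklist above — is a small script your CI can run to flag a README that has silently lost whole sections:

```python
# check_readme.py — a rough CI guard that flags missing README sections.
# The REQUIRED names below are assumptions; adjust them to your project.
import pathlib

REQUIRED = ["What is it", "Install", "Usage", "Contributing"]

def missing_sections(text, required=REQUIRED):
    """Return the required section names not present in the README text."""
    lowered = text.lower()
    return [name for name in required if name.lower() not in lowered]

def check(path="README.md"):
    """Exit code for CI: 0 when every section is present, 1 otherwise."""
    gaps = missing_sections(pathlib.Path(path).read_text(encoding="utf-8"))
    for name in gaps:
        print(f"README is missing a section: {name}")
    return 1 if gaps else 0
```

In CI you’d call `sys.exit(check())`. It’s crude — a heading can exist and still be useless — but it’s one way to make a bad README at least nudge the build.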

The Takeaway

  • Whether you intend it to be or not, your README is your first impression. There’s a relatively small set of things that people expect to find in a README, and including them can improve everybody’s experience.
  • Documentation will be bad if nobody uses it. Use your documentation as a first point of reference, and you will have an incentive to maintain it. Run your project by oral tradition, and your written documentation will get out of date fast.
  • “Work not documented is work not done.” If you’ve done or changed something, but you haven’t written anything about it in the docs, will it take someone by surprise or cause unnecessary confusion? If so, you haven’t finished working.

Nuke Those TODOs

Why do this? Because code is a lousy place to track todos. When a todo lives in your code, it can’t be prioritized or scheduled, and tends to get forgotten.

This definitely matches my experience. Across the project, there were 11 TODO comments. I was slightly embarrassed to find that they were all by me, and I don’t remember ever personally resolving a single one. Uh oh. I guess TODO comments really do tend to get forgotten.

The oldest offender has been haunting the code base since the 20th May, 2015. Only two of them were from this year. After cleaning them up, we ended up with:

  • Three immediate changes to the code. These issues had been hidden in a comment for over a year, and resolving them on the spot was actually less effort than finding a way to describe them in a ticket.
  • Two comments that were no longer needed at all.
  • Six new issues properly logged for the whole team to see.

If you’ve got an old code base, I encourage you to do something similar: go through it and figure out what the oldest TODO is. Is it something you’re aware of? Does it even apply any more?
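Taking that inventory is easy to script — a sketch that just scans Python files (swap the glob pattern for your own language):

```python
import pathlib
import re

# Matches the usual suspects; extend the pattern for HACK, XXX, etc.
TODO = re.compile(r"\b(TODO|FIXME)\b")

def find_todos(root=".", glob="*.py"):
    """Yield (path, line_number, text) for every TODO/FIXME comment."""
    for path in sorted(pathlib.Path(root).rglob(glob)):
        for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), start=1):
            if TODO.search(line):
                yield path, lineno, line.strip()
```

Running `git blame` on each reported line then tells you exactly how long it has been haunting the code base.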

Get Rid Of A Warning

Deprecation warnings in a console

Deprecation warnings are funny. I read them, and nod along and say “Gee, I hope it’s not going to be too painful for Future James to deal with that.”

After a while, they get tuned out. It’s just a warning, right? It’s not a full blown issue just yet. Ignoring the big block of spewed out warnings becomes routine. They’re not causing any harm, and they become invisible.

But it could turn into an issue. It could conceal an issue. I was ignoring them because, hey, the tests still passed, but it’s easy to see how another, more serious message could get lost in that huge scrawl of text.

On top of that, you have to contend with the broken window theory. If your code is kicking out a lot of warnings, then people are more likely to let new ones slide. You may inadvertently cultivate a collective attitude of carelessness, and that is when the more serious issues are going to creep in.

Remove Unused Code

As a project grows, it’s natural that bits of code fall out of use. There are different types of unused code, each causing their own problems for developers.

Commented Code

Large blocks of commented code are usually there because a developer concluded that the code was not needed, but did not want to delete it out of concern that the code would be useful in the future.

If you’re using version control, you will always be able to leaf through old versions of the code if you want to see how it changed over time. The unlikely benefit of having that old code on hand is outweighed by the extra cognitive load of having it there.

Unused Classes

If you’re familiar with a code base, having unused classes floating about seems like it’s not really a problem. Since you’re familiar with the classes that you do need to use, the ones you don’t might be out of sight, out of mind.

The problem arises when someone is trying to understand how your code works. Mental energy is spent understanding a class that isn’t used or needed. Energy is spent trying to understand how that class interacts with the rest of the system when it doesn’t.

Unused Local Variables

Variables that are created but never used are quite harmful but extremely easy to clean up. Especially if your method is long and complex (which is a problem for another day), it might not be obvious that a variable isn’t being used. Not only does it cause extra cognitive strain, but it can affect your application’s performance, too.

I recommend you configure your editor to lint your code and show you your unused variables in real time.
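Most linters (pyflakes, ESLint and friends) do exactly this for you. As a toy illustration of what they’re checking, here’s a rough sketch that flags names a Python function assigns but never reads — real linters handle far more cases:

```python
import ast

def unused_locals(source):
    """Rough sketch: names assigned in a function but never read."""
    unused = []
    for fn in ast.walk(ast.parse(source)):
        if not isinstance(fn, ast.FunctionDef):
            continue
        # Name nodes with a Store context are assignments...
        assigned = {n.id for n in ast.walk(fn)
                    if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
        # ...and those with a Load context are reads.
        read = {n.id for n in ast.walk(fn)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
        unused.extend(sorted(assigned - read))
    return unused

print(unused_locals("def f():\n    a = 1\n    b = 2\n    return a\n"))  # ['b']
```

In practice, just turn the real linter on in your editor and let it do this in real time.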

Trim Unused Branches

Remove stale remote-tracking references to branches that no longer exist on origin:

git remote prune origin

List the branches on origin, then delete old or unused ones with:

git ls-remote --heads origin
git push origin --delete <some branch>

This is quick and easy to do, but the effect is worthwhile. By curating your branches, you can paint a picture of everything that’s going on with the code base. To stick to the tree analogy, you get to see the directions in which your application is growing.

January 18, 2019

Reading List by Bruce Lawson (@brucel)

A (usually) weekly round-up of interesting links I’ve tweeted. Sponsored by Smashing Magazine who slip banknotes into my lacy red manties so I can spend time reading stuff.
