Last updated: July 20, 2018 03:22 PM (All times are UTC.)

July 20, 2018

Reading List by Bruce Lawson (@brucel)

A weekly (-ish) dump of links to interesting things I’ve read and shared on Twitter. Sponsored by those nice folks at Wix Engineering who hurl money at me to read stuff.

July 17, 2018

Story points as described represent an attempt to abstract estimation away from “amount of stuff done per unit time”, because we’re bad at doing that and people were traditionally using that to make us look bad. So we introduce an intermediate value, flip a ratio, and get the story point: the [meaningless value] per amount of stuff. Then we also get the velocity, which is the [meaningless value] per unit time, and…

…and we’re back where we started. Except we’re not, we’re slower than we were before, because it used to be that people asked “how much do you think you can get done over the next couple of weeks”, and we’d tell them, and we’d be wrong. But now they ask “how big is this stuff”, then they ask “how much capacity for stuff is there over the next couple of weeks”, and we tell them both of those things, and we get both wrong, so we still have a wrong answer to the original question but answered two distinct questions incorrectly to get there.
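To see how answering two questions wrongly compounds, here's a back-of-envelope sketch in Python; every error factor below is invented for illustration, not taken from any real team:

```python
# If a direct estimate is off by one error factor, the story-point route
# multiplies two error factors together: one for sizing, one for velocity.
# All numbers below are invented for illustration.

def schedule_error(sizing_error: float, velocity_error: float) -> float:
    """Combined multiplicative error when the schedule answer is derived
    from a sizing estimate and a velocity (capacity) estimate."""
    return sizing_error * velocity_error

# Direct question: "what can you get done in two weeks?" -- one optimistic answer.
direct = 1.25                            # 25% optimistic
# Two-step: sizes underestimated by 25%, capacity overestimated by 20%.
two_step = schedule_error(1.25, 1.20)

print(f"direct answer off by {direct:.2f}x")
print(f"points + velocity off by {two_step:.2f}x")  # 1.50x
```

The point being: unless the two errors happen to cancel, the derived answer is worse than the direct one, and optimism pushes both errors in the same direction.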

There’s no real way out of that. The idea that velocity will converge over time is flawed, both because the team, the context, and the problem are all changing at once, and because the problem with estimation is not that we’re Gaussian bad at it, but that we’re optimistic bad at it. Consistently, monotonically, “oh I think this will just mean editing some config, call it a one-pointer”-ingly, we fail to see complexity way more than we fail to see simplicity. The idea that even if velocity did converge over time, we would then have reliable tools for planning and estimation is flawed, because what people want is not convergence but growth.

Give people 40 points per sprint for 20 sprints and you’ll be asked not how you became so great at estimation, but why your people aren’t getting any better. Give them about 40 points per sprint for 20 sprints, and they’ll applaud the 44s and frown at the 36s.

The assumption that goes into agile, lean, kanban, lean startup, and similar ideas is that you’re already doing well enough that you only need to worry about local optima, so you may as well take out a load of planning overhead and chase those optima without working out your three-sprint rolling average local optimisation rate.

July 15, 2018

July 13, 2018

Reading List by Bruce Lawson (@brucel)

A weekly (mostly) dump of links to interesting things I’ve read and shared on Twitter. Sponsored by those nice folks at Wix Engineering who hurl money at me to read stuff.

July 11, 2018

OOP the Easy Way by Graham Lee

It’s still very much a work in progress, but OOP the Easy Way is now available to purchase from Leanpub (a free sample is also available from the book’s Leanpub page). Following the theme of my conference talks and blog posts over the last few years, OOP the Easy Way starts with an Antithesis, examining the accidental complexity that has been accumulating under the banner of Object-Oriented Programming for nearly four decades. This will be followed by a Thesis, constructing a model of software objects from the essential core, then a Synthesis, investigating problems that remain unsolved and directions that remain unexplored.

At this early stage, your feedback on the book is very much welcome, and will help you and your fellow readers get more from the book. You will automatically receive updates for free as they are published through Leanpub.

I hope you enjoy OOP the Easy Way!

July 09, 2018

Lady Merchant by Bruce Lawson (@brucel)

My Lady Merchant, although Fortune favours folly,
She won’t smile on me for the things I tell you now.
When you calculate all your losses and your profits,
you’ll know that I did everything that you would allow.

My Lady Merchant, although you fear Time is flying,
there’s no-one you can bribe to clip his chariot wings for you.
When you balance up all your selling and your buying
You’ll know that I did everything that you asked me to.
You’ll know that I did everything that you would let me do.

Words and music © Bruce Lawson 2018, all rights reserved.

July 06, 2018

Reading List by Bruce Lawson (@brucel)

July 05, 2018

CSS-in-JS: FTW || WTF? by Bruce Lawson (@brucel)

Here’s the video of my closing keynote at CSSDay.nl. Trigger warning: swearing, potatoes, Rembrandt and naked feet. And here are the slides, so you can follow along at home.

July 04, 2018

I'm starting to learn woodworking

For a craft where the principal material literally grows on trees, it's surprising just how expensive it is to get started in woodworking.

I've had an interest in learning woodworking for the past 2 or so years, born out of watching maker videos on YouTube by people like Peter Brown and Steve Ramsey (among many other quality channels).

I found it relaxing to watch a block of wood get turned on the lathe into a bowl, or a stack of lumber cut, assembled, glued, and sanded into a side table. I also found it empowering to know that even these masters of their craft made mistakes, and rather than shying away from what they did wrong, they share their mistakes with the viewers and teach us how to overcome them.

So when the aforementioned Steve Ramsey had a father's day sale on his Weekend Woodworker course a few weeks ago, I jumped at the opportunity to take advantage of my newly gained space, and signed up.

The course is designed to teach you the fundamentals of woodworking over the course of 6 weekend build projects, each consisting of two videos (one for Saturday, one for Sunday) showing, step by step, everything you need to do. I've watched a couple of the videos, to get an idea of what to expect, but before I can get started doing it myself I need some equipment.

One of the main selling points of the course is how cheap it is to buy all the equipment needed to complete it. You're presented with a list of the actual equipment Steve uses in the course, and altogether it totalled just under $1000.

Now maybe $1000 is cheap for a beginner woodworker, but it's a touch more than I was expecting, especially when you factor in the cost of the timber itself (which is a whole other issue, one which I'll post about another time) - but if this is the cost of admission, so be it.

I have a Bosch Power For All 18v drill, and love the idea of working unconstrained by cords, so while I'm very possibly contradicting myself here, I figure it's worth the extra expense of expanding my Bosch collection with the cordless variants of their circular saw, impact driver, and random orbital sander. I also already have a jigsaw and shop vac from Aldi (and router, although this isn't required for the course), and was generously gifted an Evolution mitre saw for my birthday.

So throw in some clamps and other bits and pieces and I'll have enough to complete the first three projects, which I aim to do before it gets too cold to work outside.

So watch this space, as I hope it won't be long before I complete the introduction project - the Basic Mobile Workbench.

Dear everyone who writes content: please put publication date (and last updated, if applicable) right at the top of your article.

I’ve been bitten so often by out-of-date content (that’s still highly ranked by search engines) that now I look for a date before I start reading. And scrolling to the end of an article to find it, and then back up to start reading, is a pain in the gonads.

As Gerry McGovern writes,

On the Web, nothing is more damaging to your organization’s reputation and brand than out of date content.

and if you don’t have a prominent date on your content, it might as well be out of date. How can I trust it if I don’t know how current it is?

It’s good if the date is baked into the URL (I configured WordPress to show the year in the URL) but that’s not enough, because some browsers (especially mobile) don’t show full URLs or the address bar all the time. Simply have it in good old-fashioned plain text, near the article’s title.

I mark mine up with microdata as suggested by schema.org (founded and used by Google, Microsoft, Yahoo and Yandex) in the hope that their search engines will prioritise newer content. pubdate and title are required by Apple’s WatchOS because, well, being needlessly different makes web development more fun.

Here’s the relevant markup:


<article itemscope itemtype="http://schema.org/BlogPosting">
<header>
<h2 itemprop="title">
<a href="https://www.brucelawson.co.uk/2018/reading-list-201/">Reading List</a></h2>
<time itemprop="dateCreated pubdate datePublished"
datetime="2018-06-29">Friday 29 June 2018</time>
</header>
<p>Some marvellous, trustworthy content</p>
<p><strong>Update: <time itemprop="dateModified"
datetime="2018-06-30">Saturday 30 June 2018</time></strong> Updated content</p>
</article>

Whether your content is technical, financial, news or a list of schools closed because of snow (yes, one year I kept my kids at home because a three year old article surfaced at the top of a Google search!), reassuring me that your content is current is, to me at least, just as important as serving it over HTTPS.

Thank you for sharing your marvellous stuff! Please encourage me to read it by establishing my trust in it.

kthxbai.

July 03, 2018

Dear Siôn,

Thank you for your comments on Twitter welcoming my feedback on the EU’s proposed copyright reform. I’d like to discuss in particular Article 13, “Use of protected content by information society service providers storing and giving access to large amounts of works and other subject-matter uploaded …

July 02, 2018

We’re looking for a passionate, knowledgeable senior PHP developer to join us in our Solihull office, to help us build and release cool things faster,...

The post Senior PHP Developer appeared first on stickee.

The post PHP Developer appeared first on stickee.

The post Front End Developer appeared first on stickee.

We’re looking for an enthusiastic Junior PHP Developer to join us in our Solihull office, to help us release cool things faster, better and smarter....

The post Junior PHP Developer appeared first on stickee.

June 29, 2018

I think I’ve ‘got’ for the first time what the “DIGITAL” thing is.

I’ve been searching to find the meaning of the phrase “digital transformation”, which I assumed encompassed a change from ‘analogue’ to ‘digital’. I finally understood yesterday – that’s not what it’s really about.

The transformation happened slowly to me, over most of my life. My first programming was planned on paper, then character boxes were filled in with a graphite pencil on cards. They were shipped by road to a punch machine that punched the binary codes onto the cards, which were then fed into a computer by operators I never saw. A week later I got some printout back, usually telling me what had gone wrong.

Soon after arriving at university, I had access to GEORGE 3’s Multiple On-line Programming system: a terminal. I used a line editor to create a card-image file which was stored on disk then later submitted to the batch queue. Undergraduates were only allocated space to store one program at a time. There wasn’t room to keep things permanently on-line because of the price of disk space. Some of the research students still walked around with boxes of cards. It was easy to copy a card-stack on one of the card punches and keep it in a safe place. They could probably store more code that way.

I’ve been mostly digital since the 1970s but I saw my digital world as a binary virtualisation of a physical medium. I moved very slowly from dependence on physical to online-only artifacts which had always been representations of digital data.

I realised yesterday that most people have only recently moved their business objects: files, documents, photographs, drawings, 3D-models and social network connection information into the digital realm – from atoms to bits. That frees those objects from their bindings at a single, fixed physical location, leaving them to roam in more than the 3 dimensions of our visualisable reality. This paradigm shift has suddenly hit many without warning, like a revolution, whereas I experienced it as a series of small increments. I’ve been greatly underestimating how disorienting it has been for other industries to reluctantly release their tight grip on physical objects and how worrying it may be for those still facing the cultural adjustment.

I remembered the other day that I used to jump off a shed roof at 5 years old. I could see the spot where I would land. I can’t imagine throwing myself out of a plane into free-fall and that’s why there are ‘digital coaches’. My empathy has been retrieved from an old backup tape. I’m sorry if my lack of understanding ever inconvenienced anyone.

Reading List by Bruce Lawson (@brucel)

A mostly-weekly dump of links to interesting things I’ve read and shared on Twitter. Supported by those nice folks at Wix Engineering who shower me with high-denomination banknotes to reward me for reading this stuff.

Note: For some time I’ve been meaning to write some notes up about how there are all these different gaming communities. Each one is busy innovating in their own little world but not always fully aware of what is / has happened in the other communities. Some of these include: LARP, Megagames, BoardGames, UrbanGames, Alternative Reality […]

June 27, 2018

Moving house is stressful, especially when you run a business from your home. Here’s my guide on how to deal with it.

This year marks my 10th year of being a freelance UI & UXer. In that time I’ve moved house three times, but this was the first time my wife Chrissy and I have sold and bought at the same time, so it needed some extra preparation.

All of these things are probably pretty obvious to you, but figured it was worth sharing 🙂

Photo by Dardan on Unsplash

So, how do I move house while working from home with the least amount of stress?

1. Tell your clients early.

Let’s face it, moving house never goes to plan, and with our recent move we only completed on a Wednesday and moved out on the Friday, which isn’t ideal!

Luckily I’d been warning my clients for several months that I’d be having time off “pretty soon”, and apologised for any inconvenience this might cause down the line. Thankfully my clients are awesome and very supportive; some were even more excited than us!

2. Keep your work stuff separate.

It’s really easy to just throw everything into a box and hope that you find what you need when you need it, but it’s hell! What you need to do is make sure you either a) take your work stuff with you in a separate vehicle, or b) work out beforehand where your home office will be and label the boxes clearly to go into that room.

Photo by Luca Bravo on Unsplash

3. Setup your office first

You work from home, right? You need to work to pay the mortgage on this new home… it made sense to me to set my office up first, so I had somewhere to work away from all the madness of the house move. Even if, in the early days of the move, working only involves sending a few emails, at least you won’t be doing it on your phone or on top of a pile of boxes.

4. Plan what goes where.

We planned the move in such fine detail that we were unpacked in record time. We made sure every box was labelled clearly, and asked the removal firm (if you opted to do it yourself, this still applies) to place the boxes in the right rooms. We labelled the room doors the same as the boxes so they knew where to put them! We even drew plans of each room beforehand to make sure everything would fit (but that might be a little OTT for some!). Doing this just made it quicker to unpack, which meant it was quicker to get back to work.

5. Get mobile broadband!

Fibre broadband can take weeks to get installed; that’s weeks of tethering to your phone and burning data. You might even be in an area like mine where the broadband is shockingly poor… if you find yourself in this situation, I highly recommend getting a 30-day contract with a mobile broadband provider like EE. I paid £99 for a mobile router and £35 a month for 50GB of data. Because it’s only a 30-day contract, I can cancel at any time. This helped me get online faster. Some providers throttle 4G tethering from your phone; I found this a great alternative.

Photo by Bonnie Kittle on Unsplash

6. Have a cash buffer.

Make sure, before you move, that you can afford a few weeks off the radar, not earning money. I found the move itself fairly quick, but getting back into the routine of working in a new environment, with the distractions of delivery men and paperwork, very hard. It took me a good few weeks before I was back in the zone of working for myself. I prepared by ensuring I had money in the bank, so worrying about earning didn’t add to the stress of moving house.

7. Get some exercise.

I’m new to this exercising lark. I’ve been lazy for so many years it was a killer to get off my butt and get running. I’ve just completed the Couch to 5k course and I’m feeling much healthier than I ever have. The reason I’ve included this here is mental health. Sometimes you can let it all get on top of you. I found running three times a week just cleared my mind.

Photo by aquachara on Unsplash

OK, that’s it. Probably my most niche post yet!

Follow me on Twitter

Cover Photo by Erda Estremera on Unsplash

The post Moving house while working at home, a freelancer’s guide appeared first on .

June 26, 2018

The team at stickee are thrilled to share that we were awarded with ‘Best Use of VR’ at the ‘Computing Tech Marketing & Innovation” awards....

The post stickee’s VR Game Wins Best Use of VR Award appeared first on stickee.

June 21, 2018

I own a shed! by Daniel Hollands (@limeblast)

When I first started this blog I was living in a small two-bedroom flat in the centre of Birmingham. It was great for transport, shopping, and restaurants, but terrible for project work.

The main problems I faced were the lack of space to work, and my inability to make any changes to the property. As much as I might have wanted to put a sensor on the door to check for intruders (read: my flatmate), or drill some holes to put up a display - I wasn't allowed.

This means that while I was able to temporarily sit my solar panel on the balcony to experiment with [1, 2, 3] I wasn't able to use it for anything practical, which resulted in an ever-growing collection of project components that I needed to store somewhere.

This is why I'm so happy that now, just over two years after I first started the blog, I finally have a house in Worcester to call my own, along with a basement, a large garden, and my very own shed.

It didn't take long for me to dust off the aforementioned solar panel and jerry-rig together a basic control panel:

My original battery bit the bullet a long time ago (I think I ran it down too much, as I wasn't able to charge it again) so I got myself a small 12 volt, 1.3Ah one from Maplin (RIP), which is hidden behind the shelf guard. It doesn't have much capacity, but it was the only one they had in stock, and at 75% off it was a bargain.

As explained in the book, the multimeter is in-line between the charge controller and battery, and shows how much power is being used (or stored) at that time.

I've proven that it works by running a USB lamp off the provided socket (and allowed a friend to charge his phone via it), but the initial plan is to power these 12V LEDs from it, to provide the shed with light (see below), with a longer term plan of revisiting the other chapters in the zombies book, and implementing them in a more permanent fashion (that's right, my shed is going to have a zombie detector).

What does this mean going forward? Well, a combination of:

  • a house which is crying out for smart technologies
  • a long-running interest in learning woodworking
  • a newly developed interest in house plants and gardening
  • and a bunch more space in which to work

...all contribute to more maker projects, which means more blog posts.

Update - I've added the lights

In among the other tasks this weekend (such as stripping the wallpaper in our lounge), I managed to find the hour or so it took to install the lights.

The pack I ordered had 20 modules, so I figured it was worth installing them all. The sticky pad on the back made it super easy to attach to the wood, then some small screws held them in place.

After wiring up the first strip, it was clear they provided more than enough illumination alone, but my sense of completeness was lacking, so I wired up the second strip anyway.

This may end up being a foolish move, as twice the number of LEDs means twice the energy drain, but I'm not planning on using them for long stretches of time (the shed really is only suitable for storage; it's not workshop-worthy), and if I forget to turn them off, the charge controller will do it for me when needed, to prevent the battery from being over-discharged.
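For a rough sense of what "twice the energy drain" means in hours, here's a back-of-envelope sketch. The battery figures (12V, 1.3Ah) come from the post above, but the per-module wattage is purely my assumption:

```python
# Rough runtime estimate for the shed lights. Battery figures are from
# the post (12 V, 1.3 Ah); the 0.5 W per LED module is an assumed value.

def runtime_hours(battery_v: float, battery_ah: float,
                  module_watts: float, modules: int) -> float:
    """Hours of light from a fully charged battery, ignoring conversion
    losses and the charge controller's low-voltage cutoff."""
    capacity_wh = battery_v * battery_ah   # 12 V * 1.3 Ah = 15.6 Wh
    load_w = module_watts * modules
    return capacity_wh / load_w

one_strip = runtime_hours(12, 1.3, 0.5, 10)
two_strips = runtime_hours(12, 1.3, 0.5, 20)

print(f"one strip: {one_strip:.1f} h")     # ~3.1 h
print(f"both strips: {two_strips:.1f} h")  # ~1.6 h
```

Whatever the real module wattage turns out to be, the ratio holds: doubling the modules halves the runtime.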

June 16, 2018

June 12, 2018

Little community conferences by Stuart Langridge (@sil)

This last weekend I was at FOSS Talk Live 2018. It was fun. And it led me into various thoughts of how I’d like there to be more of this sort of fun in and around the tech community, and how my feelings on success have changed a bit …

June 11, 2018

FOSS Talk Live 2018 by Stuart Langridge (@sil)

The poster for FOSS Talk Live 2018, in situ in the window of the Harrison

Saturday 9th June 2018 marked FOSS Talk Live 2018, an evening of Linux UK podcasts on stage at The Harrison pub near Kings Cross, London. It’s in its third year now, and each year has improved on the last. This year there were four live shows: Late Night Linux …

June 08, 2018

How (and Why) Developers Use the Dynamic Features of Programming Languages: The Case of Smalltalk is an interesting analysis of the reality of dynamic programming in Smalltalk (Squeak and Pharo, really). Taking the 1,000 largest projects on SqueakSource, the authors quantitatively examine the use of dynamic features in projects and qualitatively consider why they were adopted.

The quantitative analysis is interesting: unsurprisingly a small number (under 1.8%) of methods use dynamic features, but they are spread across a large number of projects. Applications make up a large majority of the projects analysed, but only a small majority of the uses of dynamic features. The kinds of dynamic features most commonly used are those that are also supplied in “static” languages like Java (although one of the most common is live compilation).

The qualitative analysis comes from a position of extreme bias: the poor people who use dynamic features of Smalltalk are forced to do so through lack of alternatives, and pity the even poorer toolsmiths and implementors whose static analysis, optimisation and refactoring tools are broken by dynamic program behaviour! Maybe we should forget that the HotSpot optimisation tools in Java come from the Smalltalk-ish Self environment, or that the very idea of a “refactoring browser” was first explored in Smalltalk.

This quote exemplifies the authors’ distaste for dynamic coding:

Even if Smalltalk is a language where these features are comparatively easier to access than most programming languages, developers should only use them when they have no viable alternatives, as they significantly obfuscate the control flow of the program, and add implicit dependencies between program entities that are hard to track.

One of the features of using Object-Oriented design is that you don’t have to consider the control flow of a program holistically; you have objects that do particular things, and interesting emergent behaviour coming from the network of collaboration and messages passed between the objects. Putting “comprehensible control flow” at the top of the priority list is the concern of the structured programmer, and in that situation it is indeed convenient to avoid dynamic rewriting of the program flow.

I have indeed used dynamic features in software I’ve written, and rather than bewailing the obfuscation of the control flow I’ve welcomed the simplicity of the solution. Looking at a project I currently have open, I have a table data source that uses the column identifier to find or set a property on the model object at a particular row. I have a menu validation method that builds a validation selector from the menu item’s action selector. No, a static analysis tool can’t work out easily where the program counter is going, but I can, and I’m more likely to need to know.
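The table-data-source trick described above is Objective-C/Smalltalk-flavoured; a minimal Python analogue (all names here are invented for illustration, not taken from the project) of keying property access off the column identifier might look like:

```python
# A table data source that maps column identifiers straight onto model
# attributes, instead of a hand-written if/elif branch per column.

class Person:
    def __init__(self, name: str, age: int):
        self.name = name
        self.age = age

class TableDataSource:
    """Column identifiers double as attribute names on the model rows."""
    def __init__(self, rows):
        self.rows = rows

    def value_for(self, row: int, column_id: str):
        # Dynamic read: no per-column code needed.
        return getattr(self.rows[row], column_id)

    def set_value(self, row: int, column_id: str, value):
        # Dynamic write, same trick.
        setattr(self.rows[row], column_id, value)

source = TableDataSource([Person("Ada", 36), Person("Alan", 41)])
print(source.value_for(0, "name"))   # Ada
source.set_value(1, "age", 42)
print(source.value_for(1, "age"))    # 42
```

A static analyser can't see which attribute each call touches, but adding a column is now a one-line change to the column list rather than a new branch in every accessor.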

Wrytr Creative App Design

I’ve recently completed the creative design work for Wrytr, a social writing challenge app designed to connect people by tasking them to submit creative answers to user questions. The app is feed-based and has user voting to push the most popular responses to the top. These replies can be easily shared.

Wrytr uses flat colours and minimalistic icons to create a clean design. You can read more about Wrytr here

The app is currently in development, check back or keep an eye on the App Store!

 

The post Wrytr Creative App Design appeared first on .

Reading List by Bruce Lawson (@brucel)

WOOOOH!! It’s my 200th reading list! A mostly-weekly dump of links to interesting things I’ve read and shared on Twitter. Supported by those nice folks at Wix Engineering who shower me with high-denomination banknotes to reward me for reading this stuff.

June 05, 2018

June 04, 2018

On version 12 by Graham Lee

Reflecting on another WWDC keynote reminded me of this bit in Tron:Legacy, which I’ve undoubtedly not remembered with 100% accuracy:

We’re charging children and schools so much for this, what’s so great about the new version?

Well, there’s a 12 in it.

As I’m going to MCE tomorrow, tonight I’m going to my first WWDC keynote event since 2015. I doubt it’ll quite meet the high note of “dissecting” software design issues in the sports lounge at Moscone with Daniel Steinberg and Bill Dudney, but it’s always better with friends.

As I mentioned recently, almost everything I use on a Mac is a cross-platform application, or a LinuxKit container. I find it much quicker to write a GUI in React with my one year of experience than Cocoa with my couple of decades of experience. Rather than making stuff more featured, Apple need to make it relevant.

June 02, 2018

The hardest thing by Graham Lee

I now have to make the hardest decision in programming. It has nothing to do with naming things or invalidating caches: rather, it is which *nix to install on a computer. NextBSD and MidnightBSD both have goals that are relevant to my interests, but both seem pretty quiet.

June 01, 2018

Reading List by Bruce Lawson (@brucel)

Swift by Graham Lee

Speaking of Swift, what idiot called it swift-evolution and not “A Modest Proposal”?

Eating the bubble by Graham Lee

How far back do you want to go to find people telling you that JavaScript is eating the world? Last year? Two years ago? Three? Five?

It’s a slow digestion process, if that’s what is happening. Five years ago, there was no such thing as Swift. For the last four years, I’ve been told at mobile dev conferences that Swift is eating the world, too. It seems like the clear, unambiguous direction being taken by software is different depending on which room you’re in.

It’s time to leave the room. It looks sunny outside, but there are a few clouds in the sky. I pull out my phone and check the weather forecast, and a huge distributed system of C, Fortran, Java, and CUDA tells me that I’m probably going to be lucky and stay dry. That means I’m likely to go out to the Olimpick Games this evening, so I make sure to grab some cash. A huge distributed system of C, COBOL and Java rumbles into action to give me my money, and tell my bank that they owe a little more money to the bank that operates the ATM.

It seems like quite a lot of the world is safe from whichever bubble is being eaten.

May 29, 2018

Netscape won by Graham Lee

Back when AOL was a standalone company and Sun Microsystems existed at all, Netscape said that they wanted Windows to be a buggy collection of device drivers that people used to access the web, which would be the real platform.

It took long enough that Netscape no longer exists, but they won. I have three computers that I regularly use:

  • my work Mac has one Mac-only, Mac-native app open during the day[*]. Everything else is on the web, or is cross-platform. It doesn’t particularly matter what _technology_ the cross-platform stuff is made out of because the fact that it’s cross-platform means the platform is irrelevant, and the technology is just a choice of how the vendors spend their money. I know that quite a bit of it is Electron, wrapped web, or Java.
  • my home Windows PC has some emulators for playing (old) platform-specific games, and otherwise only runs cross-platform apps[*] and accesses the web.
  • my home Linux laptop has the tools I need to write the native application I’m writing as a side project, and everything else is cross-platform or on the web.

[*] I’m ignoring the built-in file browsers, which are forced upon me but I don’t use.

May 25, 2018

Reading List by Bruce Lawson (@brucel)

May 23, 2018

As occasionally happens, I’ve been reevaluating my relationships with social media. The last time I did this I received emails asking whether I was dead, so let me assure you that such rumours are greatly exaggerated.

Long time readers will remember that I joined twitter about a billion years ago as ‘iamleeg’, a name with a convoluted history that I won’t bore you with but that made people think that I was called Ian. So I changed to secboffin, as I had held the job title Security Boffin through a number of employers. After about nine months in which I didn’t interact with twitter at all, I deleted my account: hence people checking I wasn’t dead.

This time, here’s a heads up: I don’t use twitter any more, but it definitely uses me. When I decided I didn’t want a facebook account any longer, I just stopped using it, then deactivated my account. Done. For some reason when I stop using my twitter account, I sneak back in later, probably for the Skinnerian pleasure of seeing the likes and RTs for posts about new articles here. Then come the asinine replies and tepid takes, and eventually I’m sinking serious time into being meaningless on Twitter.

I’d like to take back my meaninglessness for myself, thank you very much. This digital Maoism which encourages me, and others like me, to engage with the system with only the reward of more engagement, is not for me any more.

And let me make an aside here on federation and digital sharecropping. Yes, the current system is not to my favour, and yes, it would be possible to make one I would find more favourable. I actually have an account on one of the Free Software microblogging things, but mindlessly wasting time there is no better than mindlessly wasting time on Twitter. And besides, they don’t have twoptwips.

The ideal of the fediverse is flawed, anyway. The technology used by the instance where I have an account is by and large blocked from syncing with a section of the fediverse that uses a different technology, because some sites that allow content that is welcome in one nation’s culture and forbidden in another’s also use that technology, even though the site of which I am a member doesn’t include that content. Such blanket bans are not how federation is supposed to work, but are how it does work, because actually building n! individual relationships is hard, particularly when you work to the flawed assumption that n should be everyone.

And let’s not pretend that I’m somehow “taking back control” of my information by only publishing here. This domain is effectively rented from the registry on my behalf by an agent, the VPS that the blog runs on is rented, the network access is rented…few of the moving parts here are “mine”. Such would be true if this were a blog hosted on Blogger, or Medium, or Twitter, and it’s true here, too.

Anyway, enough about the hollow promises of the fediverse. The point is, while I’m paying for it, you can see my posts here. You can see feeds of the posts here. You can write comments. You can write me emails.

I ATEN’T DEAD.

May 22, 2018

Wow – it’s just 2 weeks until Ecsite 2018 in Geneva. Ecsite is the largest science communication conference in Europe and is where the world’s Science Centre professionals gather to sharpen their critical mind, recharge their batteries and let off steam on the dance floor – their words not mine 🙂 I’ll be returning for […]

Understanding where your opportunities lie within the customer journey will reveal what is working and what isn’t. Unfortunately whilst many look at a conversion funnel...

The post Managing the conversion funnel – where are the opportunities? appeared first on stickee.

May 19, 2018

A fantastic project to share real-world game making with students and museums in Warwickshire.  You’ll have seen me (on here, twitter & at talks) ‘banging on’ about how successful Escape Room-inspired games would be in museums and here was an opportunity to create them in just a couple of days and then share with the […]

May 16, 2018

May 13, 2018

On null by Graham Lee

I’ve had an interesting conversation on the topic of null over the last few days, spurred by the logical disaster of null. I disagreed with the statement in the post that:

Logically-speaking, there is no such thing as Null

This is true in some logics, but not in all logics. Boolean logic, as described in An Investigation of the Laws of Thought, admits only two values:

instead of determining the measure of formal agreement of the symbols of Logic with those of Number generally, it is more immediately suggested to us to compare them with symbols of quantity admitting only of the values 0 and 1.

…and in a later chapter he goes on to introduce probabilities. Anyway. A statement either does hold (it has value 1, with some probability p) or it does not (it has value 0, with some probability 1-p; the Law of the Excluded Middle means that there is no probability that the statement has any other value).

Let’s look at an example. Let x be the lemma “All people are mortal”, and y be the conclusion “Socrates is mortal”. What is the value of y? It isn’t true, because we only know that people are mortal and do not know that Socrates is a person. On the other hand, we don’t know that Socrates is not a person, so it isn’t false either. We need another value, one that means “this cannot be decided given the current state of our knowledge”.

In SQL, we find a logic that encodes true, false, and “unknown/undecided” as three different outcomes of a predicate, with the third state being given the name null. If we had a table linking Identity to Class and a table listing the Mortality of different Classes of things, then we could join those two tables on their Class and ask “what is the Mortality of the Class of which Socrates is a member”, and find the answer null.
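That join can be seen in miniature with SQLite from Python’s standard library. This is an illustrative sketch: the table and column names (identity, mortality, class, mortal) are invented for the Socrates example, not taken from any real schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE identity (name TEXT, class TEXT);
    CREATE TABLE mortality (class TEXT, mortal INTEGER);
    INSERT INTO identity VALUES ('Socrates', 'person');
    INSERT INTO mortality VALUES ('god', 0);  -- nothing recorded for 'person'
""")

# "What is the Mortality of the Class of which Socrates is a member?"
row = conn.execute("""
    SELECT m.mortal
    FROM identity i LEFT JOIN mortality m ON i.class = m.class
    WHERE i.name = 'Socrates'
""").fetchone()
print(row)  # (None,): SQL's NULL, the "undecided" value, surfaces as Python's None
```

The LEFT JOIN is what lets the “I don’t know” answer through: an inner join would silently drop the row instead.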

But there’s a different mathematics behind relational databases, the Relational Calculus, of which SQL is an imperfect imitation. In the relational calculus predicates can only be true or false, there is no “undecided” state. Now that doesn’t mean that the answer to the above question is either true or false, it means that that question cannot be asked. We must ask a different question.

“What is the set of all Mortality values m in the set of tuples (m, c) where c is any of the values of Class that appear in the set of tuples (x, c) where x is Socrates?”

Whew! It’s long-winded, but we can ask it, and the answer has a value: the empty set. By extension, we could always change any question we don’t yet know the answer to into a question of the form “what is the set of known answers to this question”. If we know that the set has a maximum cardinality of 1, then we have reinvented the Optional/Maybe type: it either contains a value or it does not. You get its possible value to do something by sending it a foreach message.
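The “set with maximum cardinality 1” reading of Optional/Maybe, including the foreach message, can be sketched in a few lines of Python. The class and method names here are hypothetical, chosen to mirror the prose rather than any particular library.

```python
class Maybe:
    """A sketch of Optional/Maybe as a set with maximum cardinality 1."""

    def __init__(self, *values):
        if len(values) > 1:
            raise ValueError("a Maybe holds at most one value")
        self._values = list(values)

    def foreach(self, action):
        # Runs once if a value is present, zero times if not:
        # the caller never has to write a null check.
        for value in self._values:
            action(value)

collected = []
Maybe("Socrates is mortal").foreach(collected.append)  # one value: runs
Maybe().foreach(collected.append)                      # empty set: skipped
print(collected)  # ['Socrates is mortal']
```

The empty Maybe simply contributes nothing to the loop, which is exactly the empty-set answer from the relational question above.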

And so we ask whether we would rather model our problem using a binary logic, where we have to consider each question asked in the problem to decide whether it needs to be rewritten as a set membership test, or a ternary logic, where we have to consider that the answer to any question may be the “I don’t know” value.

Implementationally-speaking, there are too many damn nulls

We’ve chosen a design, and now we get to implement it. In an implementation language like Java, Objective-C or Ruby, a null value is supplied as a bottom type, which is to say that there is a magic null or nil keyword whose value acts as a subtype of all other types in the system. Good: we get “I don’t know” behaviour for free anywhere we might want it. Bad: we get that behaviour everywhere else too, so we must think carefully to be sure that, wherever “I don’t know” is not an acceptable answer, our implementation upholds that invariant; those of us who don’t like thinking have to pepper our programs with defensive checks instead.

I picked those three languages as examples, by the way, because their implementations of null are totally different, which ruins the “you should never use a language with null because X” trope.

  • Java nulls are terminal: if you see a null, you blew it.
  • Objective-C nulls are viral: if you see a null, you say a null.[*]
  • Ruby nulls are whatever you want: monkey-patch NilClass until it does your thing.

[*] Objective-C really secretly has _objc_setNilReceiver(id), but I didn’t tell you that.
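Python’s None, incidentally, behaves like the Java case: the first message you send it raises. But the Objective-C “viral” behaviour can be imitated with a toy proxy. This is purely illustrative; the Nil class and its chained calls are invented for this sketch.

```python
class Nil:
    """Toy imitation of Objective-C's viral nil: every message answers nil."""

    def __getattr__(self, name):
        # Whatever message arrives, reply with a callable that returns nil.
        return lambda *args, **kwargs: self

    def __repr__(self):
        return "nil"

nil = Nil()
# Chained calls never raise; "if you see a null, you say a null".
print(nil.account().balance().formatted())  # nil
# By contrast, Python's own None is terminal: None.account() would raise
# AttributeError at the first send.
```

Which behaviour you prefer is the design question from the previous section in miniature: fail loudly at the first unexpected “I don’t know”, or let it flow through the computation.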

Languages like Haskell don’t have an empty bottom, so anywhere you might want a null you are going to need to build a thing that represents a null. On the other hand, anywhere you do not want a null you are not going to get one, because you didn’t tell your program how to build one.

Either approach will work. There may be others.

May 11, 2018

Affiliate marketing is often viewed as a method to acquire new customers – which it does – but it’s not the only thing it’s good...

The post Where’s the loyalty? Should retailers pay affiliates commission for existing customers? appeared first on stickee.

May 10, 2018

2018 is the year of GDPR. The new legislation is shaking up how businesses use, store and understand the management of personal data. When the publication...

The post GDPR and tracking – what does it mean for you? appeared first on stickee.

May 08, 2018

The 20th anniversary of the iMac reminded me that while many people capitalise the word “iMac” as Apple would like, including John “I never capitalise trademarks the way companies like” Gruber, nobody uses the article-less form that Apple does:

So you can do everything you love to do on iMac.

I, like many other people, would insert ‘an’ in there, and Apple have lost that battle. There’s probably somebody in Elephant who has chosen that hill to die on.

April 30, 2018

You think your code is self-documenting. That it doesn’t need comments or Doxygen or little diagrams, because it’s clear from the code what it does.

I do not think that that is true.

Even if your reader has at least as much knowledge of the programming language you’ve used as you have, and at least as much knowledge of the libraries you’ve used as you have, there is still no way that your code is self-documenting.

How long have you been doing your job? How long have you been talking to experts in the problem domain, solving similar problems, creating software in this region? The likelihood is, whoever you are, that the new person on your team has never done that, and that your code contains all of the jargon terms and assumptions that go with however-much-experience-you-have experience at solving those problems.

How long were you working on that story, or fixing that bug? How long have you spent researching that specific change that you made? However long it is, everybody else on your team has not spent that long. You are the world expert at that chunk of code, and it’s self-documenting to you as the world expert. But not to anybody else.

We were told about “working software over comprehensive documentation”, and that’s true, but nobody said anything about avoiding sufficient documentation. And nobody else has invested the time that you did to understand the code you just wrote, so the only person for whom your code is self-documenting is you.

Help us other programmer folks out, think about us when avoiding documentation.

April 29, 2018

My iPad-drawn graphics in Rethinking OOD at App Builders 2018 were not very good, so here are the ink-and-paper versions. Please have them to hand when viewing the talk (which is the first of a two-parter, though I haven’t pitched part two anywhere yet).

Some ideas based on feedback to the Why inheritance never made any sense:

Feedback: Subtypes are necessary

The only one of these that is practically workable is behaviour inheritance <=> subtype inheritance: I’m sorry that you were exposed to Java at such an impressionable age. The compilers of languages like Java enable subclass = subtype, by automatically assuming that a subclass instance is a valid value for variable binding, for example. However, they do nothing to ensure subclass = subtype. This is valid C#, a language very like Java for this discussion:

namespace QuickTestThing
{
    class Class1
    {
        public override string ToString()
        {
            return "Class1";
        }
    }

    // The compiler happily binds a Class2 wherever a Class1 is expected...
    class Class2 : Class1
    {
        public override string ToString()
        {
            // ...even though it breaks Class1's behavioural contract.
            throw new Exception();
        }
    }
}

Now is Class2 a subtype of Class1? Does the compiler let you pretend that it is?
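The same point survives translation into Python with type hints; the class and function names below are hypothetical, mirroring the C# sketch. A type checker accepts the subclass anywhere the superclass is expected, but nothing checks that it behaves as one.

```python
class Class1:
    def to_string(self) -> str:
        return "Class1"

class Class2(Class1):
    def to_string(self) -> str:
        # A subclass of Class1 in the type system, but it violates
        # Class1's behavioural contract.
        raise RuntimeError("surprise!")

def describe(obj: Class1) -> str:
    # Any type checker accepts a Class2 here: subclass = assumed subtype.
    return obj.to_string()

print(describe(Class1()))  # Class1
try:
    describe(Class2())     # type-correct call...
except RuntimeError:
    print("...but not a behavioural subtype")
```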

You don’t even need inheritance

As discussed in the original post, the whole “favour composition over inheritance” movement gets by fine with no inheritance. Composition and delegation (I don’t know about this message, I’ll forward it to someone who does) let you get the same behaviour.
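That forwarding move can be made concrete in a few lines. This is a minimal sketch, with invented class names, of getting another object’s behaviour by delegation rather than inheritance.

```python
class Greeter:
    def greet(self, name):
        return f"Hello, {name}"

class Host:
    """Gets greeting behaviour by delegation, not by inheriting Greeter."""

    def __init__(self, helper):
        self._helper = helper

    def __getattr__(self, message):
        # "I don't know about this message, I'll forward it to
        # someone who does."
        return getattr(self._helper, message)

host = Host(Greeter())
print(host.greet("Socrates"))  # Hello, Socrates
```

Host is not a Greeter in any type hierarchy, yet it responds to the same messages, which is all the “behaviour inheritance” use case actually needed.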

Feedback: build it yourself

Can I demonstrate a language that has all three of subtype inheritance, behaviour inheritance, and categorical inheritance as distinct language features? Yes, but I would need to learn Racket first. I’m on it.

But in the meantime, re-read the “you don’t even need inheritance” paragraph and think about how you would build each of those three ideas out of delegation.

April 26, 2018

Here’s a song that I finally recorded, about 27 years after writing it. It was originally written for my music partner Alison to sing, so it’s in a very high key. However, when I recorded the guide vocals I was surprised to hear that I could actually reach the notes, so I kept the first take.

It’s a bit of a power ballad, but hopefully the dirty guitar goes some way to de-Celine Dioning it.

Thanks to Shez of Silverlake for bass guitar, advice, mixing and mastering.

Now my facts melt into fiction
while I watch this leave-taking performed.
I have to hear your valediction –
you choose to go, and you forbid me to mourn.

If I could scream
I’d scream your name;
If I could dream
I’d dream you cancer and pain;
If I could hunt
I’d capture you – make you tame;
be captive again.

You wanted me to proclaim your brain and beauty;
wanted me to sustain your sense of worth.
It was my pleasure, my delight (not duty)
and now I see your self-esteem feeding off my hurt.

I don’t know
if you ever felt what I felt;
I don’t know
if you ever feel at all;
Now you go
to your room full of bird calls
All I know
is I am very sad and small

Who are you
to go leaving me?
Who are you
to stop needing me?
Who are you
to come, and go?
To empty me and fill my hollows
with your shadow?

You want me to play the role you’ve written;
You want me to applaud your exit bow.
You want my blessing and my permission.
You don’t want me – I doubt you ever did, now.

Without you, I will be random
Instead of you, I will love nothing at all.
What you create, you shape and then abandon –
To your Lear I played such a mediocre Fool.

Who are you
to go leaving me?
Who are you
to stop needing me?
Who are you
to come, and go?
To empty me and fill my hollows
with your shadow?

Your perfect shadow.
Now I’m full-to-burst with sorrow.
So why am I so fucking hollow?

Who are you?

Words and music © Bruce Lawson 2018, all rights reserved.

April 25, 2018

Subatomic Chocolate by Graham Lee

This started out as a toot thread, but “threaded tooting is tedious for everybody involved” so here’s the single post that thread should have been.

The “Electron vs. native” debate doesn’t make much sense. I feel like I’ve been here before:

Somehow those of us who had chosen a different programming language knew that we were better at writing software; much better than those clowns who just made the most successful office suite ever, the most successful picture editing app ever, or the most successful video player ever. Because we’d taken advice on how to write software from a company that was 90 days away from bankruptcy and had proven incapable of executing on software development, we were awesome and the people who were making the shittons of money on the most popular software of all time were clueless idiots.

Some things to ponder but avoid for the moment:

  • why are those the only choices? If I write a Java SWT app with Windows native components on Windows, and Mac native components on Mac, is that native because I’m using the native widget toolkit or not, because I’m using Java? If it is not, is it “Electron”?
  • where is the boundary of native? AppKit is written in Objective-C, so am I using some unholy abomination of an RMI bridge if I write AppKit software using AppKit APIs but a different programming language, like Swift?

It seems clear that people who believe there is a correct answer to “Electron vs. native” are either native app developers or Electron app developers. We can therefore expect them to have some emotional investment (I have decided to build my career doing this, please do not tell me that I’m wrong) and to be seeking truths that support their existing positions. Indeed, if you are on one side of this debate then the other side is not even wrong because the two positions compare incompatible facts.

Pro-Electron: the tools are better/easier/more familiar/JavaScript

Most to all of these things are true, in a lot of cases. As a seasoned “native” app developer, with some tiny amount of JavaScript experience, I can build a thing very quickly in JS (with a GUI in React or React Native, I haven’t tried Electron) that still takes me a long time in a “native” toolkit, both the ones I’m comfortable with and the ones I’m unfamiliar with but download and try things out in.

Now that should be disturbing to any company who builds a “native” platform, and who thinks that developers are key to their success. If someone with nearly two decades of using your thing can be faster at using someone else’s thing within under a year of learning that thing, there is something you need to be learning very quickly about the way the other thing works and how to bring that advantage to your thing, otherwise everything will be made out of the other thing soon and you’d better hope they keep making it work on your thing.

Actually, having said that this argument is true, it’s not true at all. The tools in JS-land are execrable. Bear in mind that the JSVM (we used to call it a “browser”) is a high-performance code environment with live code loading, reflection and self-modifying capabilities; it’s disappointing that the popular developer environments are text editors with syntax highlighting and an integrated terminal window. “Live” code loading is replaced with using Watchman to wait for some files to change, then kicking off some baroque house of cards that turns those files from my house blend of JS into your house blend of JS, then reloading the whole shebang.

Actually, having said that this argument is true and false, it’s not even relevant at all. The developers are the highly-paid people whose job it is to solve the problems for everybody else, why are we making their lives easier, not everybody else’s?

Pro-“native”: the apps are more efficient/consistent

Both of these things are true, in a lot of cases. A “native” application just needs to link the system widget set (which, if your platform supports efficient memory management, is loaded anyway by some first-party tool) and run its code. It will automatically get things that look and behave like the rest of the applications on the platform.

Actually, having said that this argument is true, it’s not true at all. The “native” tools are based on a lot of low-level abstractions (like threads or operations) that are hard to use correctly; rather than rely on an existing solution (remember there’s no npm for “native”, and the supposed equivalent has nowhere near as much coverage), developers are likely to try building their own use of these primitives, with inefficiencies resulting. The “native” look and feel of the components can and will be readily customised to fit branding guidelines, and besides, as the look and feel is the platform vendor’s key differentiator, they’ve moved things around every release, so an app that behaved “consistently” on the last version looks out of place (deliberately, so that developers are “encouraged” to adopt the new platform features) this year.

Actually, having said that this argument is true and false, it’s not even relevant at all. The computer is there as a substrate for a thing that solves somebody’s problem, so as long as the problem is solved and the solution fits on their computer, isn’t the problem solved? And as for “consistency”, the basic tenets of these desktop “native” experiences were carved out three decades ago, before almost all experience with and research into desktop computer interaction. Why aim for consistency with an approach that was decided before we knew what did or didn’t work properly?

Ensuring that your methods and functions always return a value of the same type is great step towards making your applications robust and easy to maintain. In the Substrakt developer team, we endeavour to use the same return type for both the truthy and the falsey responses in our methods and functions. Consistently returning the same type […]

April 20, 2018

Reading List by Bruce Lawson (@brucel)

April 13, 2018

As a technology company who are passionate about all things tech, we love the opportunity to help develop the skills of young talent in this...

The post Children’s coding development at stickee appeared first on stickee.

April 10, 2018

If you’ve established an online brand, monetising your website and earning a passive income is a perfect way to turn your online platform into a...

The post Why monetise your site with a White Label? appeared first on stickee.

The team at stickee are thrilled to share that we are shortlisted for five awards this year already! Spanning from our RnD department to our...

The post stickee shortlisted for 5 awards appeared first on stickee.

April 08, 2018

Many software libraries are released with version “numbers” that follow a scheme called Semantic Versioning. A semantic version is three numbers separated by dots, of the form x.y.z, where:

  • if x is zero, all bets are off. Otherwise;
  • z increments “if only backwards compatible bug fixes are introduced. A bug fix is defined as an internal change that fixes incorrect behavior.”

Problem one: there is no such thing as an “internal change that fixes incorrect behavior” that is “backwards compatible”. If a library has a function f() in its public API, I could be relying on any observable behaviour of f() (potentially but pathologically including its running time or memory use, but here I’ll only consider return values or environment changes for given inputs).

If they “fix” “incorrect” behaviour, the library maintainers may have broken the package for me. I would need a comprehensive collection of contract or integration tests to know that I can still use version x.y.z' if version x.y.z was working for me. This is the worst situation, because the API looks like it hasn’t changed: all of the places where I call functions or create objects still do something, they just might not do the right thing any more.
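A contract test of the sort mentioned above can be tiny. In this sketch, dep_round stands in for a hypothetical dependency’s public function; the point is to pin the observable behaviour you actually rely on, whatever upstream calls it.

```python
def dep_round(x):
    # Stand-in for a dependency's function; this is the behaviour we
    # observed in version x.y.z: halves round up.
    return int(x + 0.5)

def test_rounding_contract():
    # Pin the behaviour we depend on. If a "backwards compatible bug
    # fix" in x.y.z' changes it, this fails loudly here rather than
    # silently in production.
    assert dep_round(2.5) == 3
    assert dep_round(2.4) == 2

test_rounding_contract()
print("contract holds")
```

A suite of such tests, one per observable behaviour you depend on, is what lets you trust a z increment at all.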

Problem two: as I relaxed the dependency on running time or memory use, a refactoring could represent a non-breaking change. Semver has nowhere to record truly backwards compatible changes, because bug fixes are erroneously considered backwards compatible.

  • y increments “if new, backwards compatible functionality is introduced to the public API”.

This is fine. I get new stuff that I’m not (currently) using, but you haven’t broken anything I do use.

Problem three: an increment to y “MAY include patch level changes”. So I can’t just quietly take in the new functionality and decide whether I need it on my own time, because the library maintainers have rolled in all of their supposedly-backwards-compatible-but-not-really changes so I still don’t know whether this version works for me.

  • x increments “if any backwards incompatible changes are introduced to the public API”.

Problem four: I’m not looking at the same library any more. It has the same name, but it could be completely rewritten, have any number of internal behaviour changes, and any number of external interface changes. It might not do what I want any more, or might do it in a way that doesn’t suit the needs of my application.

On the plus side

The dots are fine. I’m happy with the dots. Please do not feel the need to leave a comment if you are unhappy with the dots or can come up with some contrived reason why “dots are harmful”, as I don’t care.

Better: meaningful versioning

I would prefer to use a version scheme that looks like z.w.y:

  • y has the meaning it does in semver, except that it MUST NOT include patch level changes. If a package maintainer has added new things or deprecated (but not removed) old things, then I can use the package still.
  • z has the meaning it does in semver, except that we stop pretending that bug fixes can be backwards compatible.
  • w is incremented if non-behavioural changes are implemented; for example if internals are refactored, caches are introduced or removed, or private data structures are changed. These are changes that probably mean I can use the package still, but if I needed particular performance attributes from the library then it is on me to discover whether the new version still meets my needs.

There is no room for x in this scheme. If a maintainer wants to write a new, incompatible library, they can use a new name.
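An upgrade policy under this hypothetical z.w.y scheme is simple enough to sketch; the function names and the “presumed safe” rule are mine, invented to illustrate the scheme, not part of any real tool.

```python
def parse(version):
    """'3.1.4' -> (3, 1, 4); assumes the dotted z.w.y form described above."""
    return tuple(int(part) for part in version.split("."))

def presumed_safe(current, candidate):
    # Hypothetical rule for z.w.y: w bumps (non-behavioural change) and
    # y bumps (additions/deprecations only) are presumed safe to take.
    # Any change to z means behaviour changed: stop and read the change log.
    (cz, cw, cy), (nz, nw, ny) = parse(current), parse(candidate)
    return nz == cz and (nw, ny) >= (cw, cy)

print(presumed_safe("3.1.4", "3.2.0"))  # True: internals changed, API grew
print(presumed_safe("3.1.4", "4.0.0"))  # False: a bug fix changed behaviour
```

Contrast with semver, where no component can be taken without re-auditing, because patch-level changes may hide anywhere.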

Different: don’t use versions

This is more work for me, but less work for the package maintainer. If they are maintaining a change log (which they are, as they are using version control) and perhaps a medium for announcing important changes, including security fixes, bug fixes and new features, then I can pick the commit that I discover does what I need. I can maintain my own tree (and should be anyway, in case the maintainer decides to delete their upstream repo) and can cherry-pick the changes that are useful for me, leaving out the ones that are harmful for me.

This is more work for me than the z.w.y scheme because now I have to understand the impact of each change. It is the same amount of work as the semver x.y.z scheme, because then I had to understand the impact of each change too, as changes to any of the three version components could potentially include supposedly-backwards-compatible-but-not-really changes.
