Last updated: May 23, 2018 10:02 PM (All times are UTC.)

May 23, 2018

As occasionally happens, I’ve been reevaluating my relationships with social media. The last time I did this I received emails asking whether I was dead, so let me assure you that such rumours are greatly exaggerated.

Long-time readers will remember that I joined Twitter about a billion years ago as ‘iamleeg’, a name with a convoluted history that I won’t bore you with, but one that made people think I was called Ian. So I changed to secboffin, as I had held the job title Security Boffin at a number of employers. After about nine months in which I didn’t interact with Twitter at all, I deleted my account: hence people checking I wasn’t dead.

This time, here’s a heads up: I don’t use twitter any more, but it definitely uses me. When I decided I didn’t want a facebook account any longer, I just stopped using it, then deactivated my account. Done. For some reason when I stop using my twitter account, I sneak back in later, probably for the Skinnerian pleasure of seeing the likes and RTs for posts about new articles here. Then come the asinine replies and tepid takes, and eventually I’m sinking serious time into being meaningless on Twitter.

I’d like to take back my meaninglessness for myself, thank you very much. This digital Maoism which encourages me, and others like me, to engage with the system with only the reward of more engagement, is not for me any more.

And let me make an aside here on federation and digital sharecropping. Yes, the current system is not in my favour, and yes, it would be possible to make one I would find more favourable. I actually have an account on one of the Free Software microblogging things, but mindlessly wasting time there is no better than mindlessly wasting time on Twitter. And besides, they don’t have twoptwips.

The ideal of the fediverse is flawed, anyway. The technology used on the instance where I have an account is by and large blocked from syncing with a section of the fediverse that uses a different technology, because some sites that allow content that is welcome in one nation’s culture and forbidden in another nation’s culture also use that technology, even though the site of which I am a member doesn’t include that content. Such blanket bans are not how federation is supposed to work, but are how it does work, because actually building n! individual relationships is hard, particularly when you work to the flawed assumption that n should be everyone.

And let’s not pretend that I’m somehow “taking back control” of my information by only publishing here. This domain is effectively rented from the registry on my behalf by an agent, the VPS that the blog runs on is rented, the network access is rented…very few of the moving parts here are “mine”. Such would be true if this were a blog hosted on Blogger, or Medium, or Twitter, and it’s true here, too.

Anyway, enough about the hollow promises of the fediverse. The point is, while I’m paying for it, you can see my posts here. You can see feeds of the posts here. You can write comments. You can write me emails.

I ATEN’T DEAD.

With the costs of breaches escalating, it’s more important than ever to have...

May 22, 2018

For the majority of people, calling insurance companies isn’t a favourite activity. The insurance industry is one of the least innovative areas for customer experience. This is changing, however, and artificial intelligence (AI) is now playing a major role in that change. AI technology has the potential to disrupt the insurance industry and revolutionise the consumer experience.

Intel on Monday acknowledged that its processors are vulnerable to another Spectre-like speculative...
Wow – it’s just 2 weeks until Ecsite 2018 in Geneva.  Ecsite is the largest science communication conference in Europe and is where the world’s Science Centre professionals gather to sharpen their critical mind, recharge their batteries and let off steam on the dance floor – their words not mine 🙂 I’ll be returning for […]

Countdown to GDPR by Serviceteam IT (@serviceteamit)

Anybody who is involved in cyber security or data protection will be acutely...

Understanding where your opportunities lie within the customer journey will reveal what is working and what isn’t. Unfortunately whilst many look at a conversion funnel...

The post Managing the conversion funnel – where are the opportunities? appeared first on stickee.

May 21, 2018

The code is integrated with at least three exploits that target unpatched IoT...

May 19, 2018

A fantastic project to share real-world game making with students and museums in Warwickshire.  You’ll have seen me (on here, twitter & at talks) ‘banging on’ about how successful Escape Room-inspired games would be in museums and here was an opportunity to create them in just a couple of days and then share with the […]

May 17, 2018

This guidance describes a set of technical security outcomes that are considered to...
The company urges customers to patch three vulnerabilities that received the highest severity...
Threatpost talked to several security researchers about what's changed in the past year....

May 16, 2018

The vulnerability allows an attacker to execute malware or other payloads on...
Business Email: This is the first part in the email deletion series and concerns B2B relationships. The GDPR text is ambiguous as to whether a distinction can be drawn between corporate email addresses and individual email addresses. Is it still possible to opt out with a corporate email address?

May 13, 2018

On null by Graham Lee

I’ve had an interesting conversation on the topic of null over the last few days, spurred by the logical disaster of null. I disagreed with the statement in the post that:

Logically-speaking, there is no such thing as Null

This is true in some logics, but not in all logics. Boolean logic, as described in An Investigation of the Laws of Thought, admits only two values:

instead of determining the measure of formal agreement of the symbols of Logic with those of Number generally, it is more immediately suggested to us to compare them with symbols of quantity admitting only of the values 0 and 1.

…and in a later chapter he goes on to introduce probabilities. Anyway. A statement either does hold (it has value 1, with some probability p) or it does not (it has value 0, with some probability 1-p; the Law of the Excluded Middle means that there is no probability that the statement has any other value).

Let’s look at an example. Let x be the lemma “All people are mortal”, and y be the conclusion “Socrates is mortal”. What is the value of y? It isn’t true, because we only know that people are mortal and do not know that Socrates is a person. On the other hand, we don’t know that Socrates is not a person, so it isn’t false either. We need another value, that means “this cannot be decided given the current state of our knowledge”.

In SQL, we find a logic that encodes true, false, and “unknown/undecided” as three different outcomes of a predicate, with the third state being given the name null. If we had a table linking Identity to Class and a table listing the Mortality of different Classes of things, then we could join those two tables on their Class and ask “what is the Mortality of the Class of which Socrates is a member”, and find the answer null.
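To make that three-valued logic concrete, here is a minimal sketch (not from the original post; the class and method names are my own invention) of SQL-style "unknown-tainting" conjunction:

```java
public class TernaryLogic {
    enum Ternary { TRUE, FALSE, UNKNOWN }

    // SQL-style (Kleene) AND: FALSE wins outright, because a conjunction
    // with a false conjunct is false regardless of the unknown; otherwise
    // any UNKNOWN taints the result.
    static Ternary and(Ternary a, Ternary b) {
        if (a == Ternary.FALSE || b == Ternary.FALSE) return Ternary.FALSE;
        if (a == Ternary.UNKNOWN || b == Ternary.UNKNOWN) return Ternary.UNKNOWN;
        return Ternary.TRUE;
    }

    public static void main(String[] args) {
        // "All people are mortal" AND "Socrates is a person (unknown)"
        System.out.println(and(Ternary.TRUE, Ternary.UNKNOWN)); // UNKNOWN
    }
}
```

Note how `UNKNOWN` behaves exactly like SQL's null in a `WHERE` clause: it propagates unless the other operand already decides the outcome.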

But there’s a different mathematics behind relational databases, the Relational Calculus, of which SQL is an imperfect imitation. In the relational calculus predicates can only be true or false, there is no “undecided” state. Now that doesn’t mean that the answer to the above question is either true or false, it means that that question cannot be asked. We must ask a different question.

“What is the set of all Mortality values m in the set of tuples (m, c) where c is any of the values of Class that appear in the set of tuples (x, c) where x is Socrates?”

Whew! It’s long-winded, but we can ask it, and the answer has a value: the empty set. By extension, we could always change any question we don’t yet know the answer to into a question of the form “what is the set of known answers to this question”. If we know that the set has a maximum cardinality of 1, then we have reinvented the Optional/Maybe type: it either contains a value or it does not. You get its possible value to do something by sending it a foreach message.
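The empty-set answer maps directly onto the Optional/Maybe type. Here is a sketch in Java using `java.util.Optional` (the tables are toy data invented for the example; Socrates is deliberately missing from the Identity table):

```java
import java.util.Map;
import java.util.Optional;

public class Mortality {
    // Identity -> Class; Socrates's Class is deliberately absent.
    static final Map<String, String> classOf = Map.of("Plato", "person");
    // Class -> Mortality.
    static final Map<String, Boolean> mortalityOf = Map.of("person", true);

    // "What is the set of known answers?": the result is an empty
    // Optional rather than a null, so there is no third truth value.
    static Optional<Boolean> isMortal(String identity) {
        return Optional.ofNullable(classOf.get(identity))
                       .flatMap(c -> Optional.ofNullable(mortalityOf.get(c)));
    }

    public static void main(String[] args) {
        // The "foreach message": the body runs only if a value is present.
        isMortal("Socrates").ifPresent(m -> System.out.println("mortal: " + m));
        System.out.println(isMortal("Socrates").isEmpty()); // true: empty set
    }
}
```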

And so we ask whether we would rather model our problem using a binary logic, where we have to consider each question asked in the problem to decide whether it needs to be rewritten as a set membership test, or a ternary logic, where we have to consider that the answer to any question may be the “I don’t know” value.

Implementationally-speaking, there are too many damn nulls

We’ve chosen a design, and now we get to implement it. In an implementation language like Java, Objective-C or Ruby, a null value is supplied as a bottom type, which is to say that there is a magic null or nil keyword whose value acts as a subtype of all other types in the system. Good: we get “I don’t know” behaviour for free anywhere we might want it. Bad: we get that behaviour everywhere else too, so we need to think carefully to be sure that, in all the places where “I don’t know” is not an answer, that invariant holds in our implementation; or, for those of us who don’t like thinking, we have to pepper our programs with defensive checks.

I picked those three languages as examples, by the way, because their implementations of null are totally different, and so they ruin the “you should never use a language with null because X” trope.

  • Java nulls are terminal: if you see a null, you blew it.
  • Objective-C nulls are viral: if you see a null, you say a null.[*]
  • Ruby nulls are whatever you want: monkey-patch NilClass until it does your thing.

[*] Objective-C really secretly has _objc_setNilReceiver(id), but I didn’t tell you that.

Languages like Haskell don’t have an empty bottom, so anywhere you might want a null you are going to need to build a thing that represents a null. On the other hand, anywhere you do not want a null you are not going to get one, because you didn’t tell your program how to build one.
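To make the difference between the terminal and viral styles concrete, here is a sketch in Java (names are invented, and `java.util.Optional` stands in for the Haskell-style "build your own null" approach):

```java
import java.util.Optional;

public class NilStyles {
    // Java-style: a null receiver is terminal; calling this with null
    // throws NullPointerException ("if you see a null, you blew it").
    static int terminalLength(String s) {
        return s.length();
    }

    // Objective-C-style "viral" nil, rebuilt with Optional: messaging an
    // absent value quietly yields another absent value instead of crashing
    // ("if you see a null, you say a null").
    static Optional<Integer> viralLength(Optional<String> s) {
        return s.map(String::length);
    }

    public static void main(String[] args) {
        System.out.println(viralLength(Optional.empty())); // Optional.empty
        System.out.println(viralLength(Optional.of("hi"))); // Optional[2]
    }
}
```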

Either approach will work. There may be others.

May 11, 2018

Affiliate marketing is often viewed as a method to acquire new customers – which it does – but it’s not the only thing it’s good...

The post Where’s the loyalty? Should retailers pay affiliates commission for existing customers? appeared first on stickee.

May 10, 2018

2018 is the year of GDPR. The new legislation is shaking up how businesses use, store and understand the management of personal data. When the publication...

The post GDPR and tracking – what does it mean for you? appeared first on stickee.

May 08, 2018

The 20th anniversary of the iMac reminded me that while many people capitalise the word “iMac” as Apple would like, including John “I never capitalise trademarks the way companies like” Gruber, nobody uses the article-less form that Apple does:

So you can do everything you love to do on iMac.

I, like many other people, would insert ‘an’ in there, and Apple have lost that battle. There’s probably somebody in Elephant who has chosen that hill to die on.

April 30, 2018

You think your code is self-documenting. That it doesn’t need comments or Doxygen or little diagrams, because it’s clear from the code what it does.

I do not think that that is true.

Even if your reader has at least as much knowledge of the programming language you’ve used as you have, and at least as much knowledge of the libraries you’ve used as you have, there is still no way that your code is self-documenting.

How long have you been doing your job? How long have you been talking to experts in the problem domain, solving similar problems, creating software in this region? The likelihood is, whoever you are, that the new person on your team has never done that, and that your code contains all of the jargon terms and assumptions that go with however-much-experience-you-have experience at solving those problems.

How long were you working on that story, or fixing that bug? How long have you spent researching that specific change that you made? However long it is, everybody else on your team has not spent that long. You are the world expert at that chunk of code, and it’s self-documenting to you as the world expert. But not to anybody else.

We were told about “working software over comprehensive documentation”, and that’s true, but nobody said anything about avoiding sufficient documentation. And nobody else has invested the time that you did to understand the code you just wrote, so the only person for whom your code is self-documenting is you.

Help us other programmer folks out, think about us when avoiding documentation.

April 29, 2018

My iPad-drawn graphics in Rethinking OOD at App Builders 2018 were not very good, so here are the ink-and-paper versions. Please have them to hand when viewing the talk (which is the first of a two-parter, though I haven’t pitched part two anywhere yet).

Some ideas based on feedback to the Why inheritance never made any sense:

Feedback: Subtypes are necessary

The only one of these that is practically workable is behaviour inheritance <=> subtype inheritance: I’m sorry that you were exposed to Java at such an impressionable age. The compilers of languages like Java enable subclass = subtype, by automatically assuming that a subclass is a valid value for variable binding, for example. However they do nothing to ensure subclass = subtype. This is valid C#, a language very like Java for this discussion:

namespace QuickTestThing
{
    class Class1
    {
        // Honours the usual contract of ToString(): returns a string.
        override public string ToString()
        {
            return "Class1";
        }
    }
    class Class2 : Class1
    {
        // A perfectly legal override as far as the compiler is concerned,
        // but it breaks Class1's behavioural contract: callers expecting
        // a string get an exception instead.
        public override string ToString()
        {
            throw new Exception();
        }
    }
}

Now is Class2 a subtype of Class1? Does the compiler let you pretend that it is?

You don’t even need inheritance

As discussed in the original post, the whole “favour composition over inheritance” movement gets by fine with no inheritance. Composition and delegation (I don’t know about this message, I’ll forward it to someone who does) let you get the same behaviour.
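As a hedged illustration (the names here are my own invention, not from the talk), here is how composition and delegation recover inherited-looking behaviour in Java without any subclassing:

```java
public class Delegation {
    interface Greeter { String greet(String name); }

    static class FormalGreeter implements Greeter {
        public String greet(String name) { return "Good day, " + name + "."; }
    }

    // Instead of inheriting from FormalGreeter, compose one and forward:
    // "I don't know about this message, I'll forward it to someone who does."
    static class LoggingGreeter implements Greeter {
        private final Greeter delegate;
        LoggingGreeter(Greeter delegate) { this.delegate = delegate; }
        public String greet(String name) {
            System.out.println("greeting " + name); // added behaviour
            return delegate.greet(name);            // delegated behaviour
        }
    }

    public static void main(String[] args) {
        Greeter g = new LoggingGreeter(new FormalGreeter());
        System.out.println(g.greet("Socrates")); // Good day, Socrates.
    }
}
```

The delegate can be swapped at runtime, which is precisely the flexibility that a compile-time superclass relationship does not give you.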

Feedback: build it yourself

Can I demonstrate a language that has all three of subtype inheritance, behaviour inheritance, and categorical inheritance as distinct language features? Yes, but I would need to learn Racket first. I’m on it.

But in the meantime, re-read the “you don’t even need inheritance” paragraph and think about how you would build each of those three ideas out of delegation.

April 26, 2018

Here’s a song that I finally recorded, about 27 years after writing it. It was originally written for my music partner Alison to sing, so it’s in a very high key. However, when I recorded the guide vocals I was surprised to hear that I could actually reach the notes, so kept the first take.

It’s a bit of a power ballad, but hopefully the dirty guitar goes some way to de-Celine Dioning it.

Thanks to Shez of Silverlake for bass guitar, advice, mixing and mastering.

Now my facts melt into fiction
while I watch this leave-taking performed.
I have to hear your valediction –
you choose to go, and you forbid me to mourn.

If I could scream
I’d scream your name;
If I could dream
I’d dream you cancer and pain;
If I could hunt
I’d capture you – make you tame;
be captive again.

You wanted me to proclaim your brain and beauty;
wanted me to sustain your sense of worth.
It was my pleasure, my delight (not duty)
and now I see your self-esteem feeding off my hurt.

I don’t know
if you ever felt what I felt;
I don’t know
if you ever feel at all;
Now you go
to your room full of bird calls
All I know
is I am very sad and small

Who are you
to go leaving me?
Who are you
to stop needing me?
Who are you
to come, and go?
To empty me and fill my hollows
with your shadow?

You want me to play the role you’ve written;
You want me to applaud your exit bow.
You want my blessing and my permission.
You don’t want me – I doubt you ever did, now.

Without you, I will be random
Instead of you, I will love nothing at all.
What you create, you shape and then abandon –
To your Lear I played such a mediocre Fool.

Who are you
to go leaving me?
Who are you
to stop needing me?
Who are you
to come, and go?
To empty me and fill my hollows
with your shadow?

Your perfect shadow.
Now I’m full-to-burst with sorrow.
So why am I so fucking hollow?

Who are you?

Words and music © Bruce Lawson 2018, all rights reserved.

April 25, 2018

Subatomic Chocolate by Graham Lee

This started out as a toot thread, but “threaded tooting is tedious for everybody involved” so here’s the single post that thread should have been.

The “Electron vs. native” debate doesn’t make much sense. I feel like I’ve been here before:

Somehow those of us who had chosen a different programming language knew that we were better at writing software; much better than those clowns who just made the most successful office suite ever, the most successful picture editing app ever, or the most successful video player ever. Because we’d taken advice on how to write software from a company that was 90 days away from bankruptcy and had proven incapable of executing on software development, we were awesome and the people who were making the shittons of money on the most popular software of all time were clueless idiots.

Some things to ponder but avoid for the moment:

  • why are those the only choices? If I write a Java SWT app with Windows native components on Windows, and Mac native components on Mac, is that native because I’m using the native widget toolkit or not, because I’m using Java? If it is not, is it “Electron”?
  • where is the boundary of native? AppKit is written in Objective-C, so am I using some unholy abomination of an RMI bridge if I write AppKit software using AppKit APIs but a different programming language, like Swift?

It seems clear that people who believe there is a correct answer to “Electron vs. native” are either native app developers or Electron app developers. We can therefore expect them to have some emotional investment (I have decided to build my career doing this, please do not tell me that I’m wrong) and to be seeking truths that support their existing positions. Indeed, if you are on one side of this debate then the other side is not even wrong because the two positions compare incompatible facts.

Pro-Electron: the tools are better/easier/more familiar/JavaScript

Most to all of these things are true, in a lot of cases. As a seasoned “native” app developer, with some tiny amount of JavaScript experience, I can build a thing very quickly in JS (with a GUI in React or React Native, I haven’t tried Electron) that still takes me a long time in a “native” toolkit, both the ones I’m comfortable with and the ones I’m unfamiliar with but download and try things out in.

Now that should be disturbing to any company who builds a “native” platform and who thinks that developers are key to their success. If someone with nearly two decades of experience using your thing can be faster with someone else’s thing within under a year of learning it, there is something you need to learn, very quickly, about the way the other thing works and how to bring that advantage to your thing; otherwise everything will soon be made out of the other thing, and you’d better hope they keep making it work on yours.

Actually, having said that this argument is true, it’s not true at all. The tools in JS-land are execrable. Bear in mind that the JSVM (we used to call it a “browser”) is a high-performance code environment with live code loading, reflection and self-modifying capabilities; it’s disappointing that the popular developer environments are text editors with syntax highlighting and an integrated terminal window. “Live” code loading is replaced with using Watchman to wait for some files to change, then kicking off some baroque house of cards that turns those files from my house blend of JS into your house blend of JS, then reloading the whole shebang.

Actually, having said that this argument is true and false, it’s not even relevant at all. The developers are the highly-paid people whose job it is to solve the problems for everybody else, why are we making their lives easier, not everybody else’s?

Pro-“native”: the apps are more efficient/consistent

Both of these things are true, in a lot of cases. A “native” application just needs to link the system widget set (which, if your platform supports efficient memory management, is loaded anyway by some first-party tool) and run its code. It will automatically get things that look and behave like the rest of the applications on the platform.

Actually, having said that this argument is true, it’s not true at all. The “native” tools are based on a lot of low-level abstractions (like threads or operations), that are hard to use correctly; rather than rely on an existing solution (remember there’s no npm for “native”, and the supposed equivalent has nowhere near as much coverage) developers are likely to try building their own use of these primitives, with inefficiencies resulting. The “native” look and feel of the components can and will be readily customised to fit branding guidelines, and besides as the look and feel is the platform vendor’s key differentiator they’ve moved things around every release so an app that behaved “consistently” on the last version looks out of place (deliberately, so that developers are “encouraged” to adopt the new platform features) this year.

Actually, having said that this argument is true and false, it’s not even relevant at all. The computer is there as a substrate for a thing that solves somebody’s problem, so as long as the problem is solved and the solution fits on their computer, isn’t the problem solved? And as for “consistency”, the basic tenets of these desktop “native” experiences were carved out three decades ago, before almost all experience with and research into desktop computer interaction. Why aim for consistency with an approach that was decided before we knew what did or didn’t work properly?

Ensuring that your methods and functions always return a value of the same type is great step towards making your applications robust and easy to maintain. In the Substrakt developer team, we endeavour to use the same return type for both the truthy and the falsey responses in our methods and functions. Consistently returning the same type […]

April 20, 2018

Reading List by Bruce Lawson (@brucel)

April 13, 2018

Reading List by Bruce Lawson (@brucel)

As a technology company who are passionate about all things tech, we love the opportunity to help develop the skills of young talent in this...

The post Children’s coding development at stickee appeared first on stickee.

April 10, 2018

If you’ve established an online brand, monetising your website and earning a passive income is a perfect way to turn your online platform into a...

The post Why monetise your site with a White Label? appeared first on stickee.

The team at stickee are thrilled to share that we are shortlisted for five awards this year already! Spanning from our RnD department to our...

The post stickee shortlisted for 5 awards appeared first on stickee.

April 08, 2018

Many software libraries are released with version “numbers” that follow a scheme called Semantic Versioning. A semantic version is three numbers separated by dots, of the form x.y.z, where:

  • if x is zero, all bets are off. Otherwise;
  • z increments “if only backwards compatible bug fixes are introduced. A bug fix is defined as an internal change that fixes incorrect behavior.”

Problem one: there is no such thing as an “internal change that fixes incorrect behavior” that is “backwards compatible”. If a library has a function f() in its public API, I could be relying on any observable behaviour of f() (potentially but pathologically including its running time or memory use, but here I’ll only consider return values or environment changes for given inputs).

If they “fix” “incorrect” behaviour, the library maintainers may have broken the package for me. I would need a comprehensive collection of contract or integration tests to know that I can still use version x.y.z' if version x.y.z was working for me. This is the worst situation, because the API looks like it hasn’t changed: all of the places where I call functions or create objects still do something, they just might not do the right thing any more.
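A sketch of the kind of contract test I mean (the function `f` and its behaviour are entirely hypothetical, standing in for whatever library API my code depends on):

```java
public class ContractTests {
    // Stand-in for a library function f() whose observable behaviour we
    // rely on; name and behaviour are invented for illustration.
    static int f(int x) {
        return x * 2;
    }

    // The contract test pins down the behaviour we actually depend on,
    // so a "backwards compatible bug fix" in version x.y.z' that changes
    // it is caught before we upgrade, rather than in production.
    static boolean contractHolds() {
        return f(0) == 0 && f(21) == 42;
    }

    public static void main(String[] args) {
        System.out.println(contractHolds());
    }
}
```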

Problem two: as I relaxed the dependency on running time or memory use, a refactoring could represent a non-breaking change. Semver has nowhere to record truly backwards compatible changes, because bugfixes are erroneously considered backwards compatible.

  • y increments “if new, backwards compatible functionality is introduced to the public API”.

This is fine. I get new stuff that I’m not (currently) using, but you haven’t broken anything I do use.

Problem three: an increment to y “MAY include patch level changes”. So I can’t just quietly take in the new functionality and decide whether I need it on my own time, because the library maintainers have rolled in all of their supposedly-backwards-compatible-but-not-really changes so I still don’t know whether this version works for me.

  • x increments “if any backwards incompatible changes are introduced to the public API”.

Problem four: I’m not looking at the same library any more. It has the same name, but it could be completely rewritten, have any number of internal behaviour changes, and any number of external interface changes. It might not do what I want any more, or might do it in a way that doesn’t suit the needs of my application.

On the plus side

The dots are fine. I’m happy with the dots. Please do not feel the need to leave a comment if you are unhappy with the dots or can come up with some contrived reason why “dots are harmful”, as I don’t care.

Better: meaningful versioning

I would prefer to use a version scheme that looks like z.w.y:

  • y has the meaning it does in semver, except that it MUST NOT include patch level changes. If a package maintainer has added new things or deprecated (but not removed) old things, then I can use the package still.
  • z has the meaning it does in semver, except that we stop pretending that bug fixes can be backwards compatible.
  • w is incremented if non-behavioural changes are implemented; for example if internals are refactored, caches are introduced or removed, or private data structures are changed. These are changes that probably mean I can use the package still, but if I needed particular performance attributes from the library then it is on me to discover whether the new version still meets my needs.

There is no room for x in this scheme. If a maintainer wants to write a new, incompatible library, they can use a new name.
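A minimal sketch of how a consumer might gate automatic upgrades under the z.w.y scheme (the class and method names are my own invention, not part of any real versioning tool):

```java
public class MeaningfulVersion {
    final int z, w, y; // bug fixes, non-behavioural changes, additions

    MeaningfulVersion(String version) {
        String[] parts = version.split("\\.");
        z = Integer.parseInt(parts[0]);
        w = Integer.parseInt(parts[1]);
        y = Integer.parseInt(parts[2]);
    }

    // Safe to adopt without re-testing only if nothing but additions
    // changed: same z (no behaviour "fixes") and same w (no internal
    // reworking that could affect performance characteristics).
    boolean safeUpgradeFrom(MeaningfulVersion old) {
        return z == old.z && w == old.w && y >= old.y;
    }

    public static void main(String[] args) {
        MeaningfulVersion current = new MeaningfulVersion("3.1.4");
        System.out.println(current.safeUpgradeFrom(new MeaningfulVersion("3.1.2"))); // true
    }
}
```

Any bump to z or w drops out of the automatic path and back to human judgment, which is the point of the scheme.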

Different: don’t use versions

This is more work for me, but less work for the package maintainer. If they are maintaining a change log (which they are, as they are using version control) and perhaps a medium for announcing important changes including security and bug fixes and new features, then I can pick the commit that I discover does what I need. I can maintain my own tree (and should be anyway, in case the maintainer decides to delete their upstream repo) and can cherry-pick the changes that are useful for me, leaving out the ones that are harmful for me.

This is more work for me than the z.w.y scheme because now I have to understand the impact of each change. It is the same amount of work as the semver x.y.z scheme, because then I had to understand the impact of each change too, as changes to any of the three version components could potentially include supposedly-backwards-compatible-but-not-really changes.

April 06, 2018

Reading List by Bruce Lawson (@brucel)

A mostly-weekly dump of links to interesting things I’ve read and shared on Twitter. Sponsored by those nice folks at Wix Engineering who shower me with high-denomination banknotes to reward me for reading this stuff.

April 02, 2018

In What is to be done?: Burning Questions of our Movement, Lenin lists four roles who contribute to fomenting revolution – the theoreticians, the propagandists, the agitators, and the organisers:

The theoreticians write research works on tariff policy, with the “call”, say, to struggle for commercial treaties and for Free Trade. The propagandist does the same thing in the periodical press, and the agitator in public speeches. At the present time [1901], the “concrete action” of the masses takes the form of signing petitions to the Reichstag against raising the corn duties. The call for this action comes indirectly from the theoreticians, the propagandists, and the agitators, and, directly, from the workers who take the petition lists to the factories and to private homes for the gathering of signatures.

Then later:

We said that a Social Democrat, if he really believes it necessary to develop comprehensively the political consciousness of the proletariat, must “go among all classes of the population”. This gives rise to the questions: how is this to be done? have we enough forces to do this? is there a basis for such work among all the other classes? will this not mean a retreat, or lead to a retreat, from the class point of view? Let us deal with these questions.

We must “go among all classes of the population” as theoreticians, as propagandists, as agitators, and as organisers.

Side note for Humpty-Dumpties: In this post I’m going to use “propaganda” in its current dictionary meaning as a collection of messages intended to influence opinions or behaviour. I do not mean the pejorative interpretation, somebody else’s propaganda that I disagree with. Some of the messages and calls below I agree with, others I do not.

Given this tool for understanding a movement, we can see it at work in the software industry. We can see, for example, that the Free Software Foundation has a core of theoreticians, a Campaigns Team that builds propaganda for distribution, and an annual conference at which agitators talk, and organisers network. In this example, we discover that a single person can take on multiple roles: that RMS is a theoretician, a some-time propagandist, and an agitator. But we also find the movement big enough to support a person taking a single role: the FSF staff roster lists people who are purely propagandists or purely theoreticians.

A corporate marketing machine is not too dissimilar from a social movement: the theory behind, say, Microsoft’s engine is that Microsoft products will be advantageous for you to use. The “call” is that you should buy into their platform. The propaganda is the MSDN, their ads, their blogs, case studies and white papers and so on. The agitators are developer relations, executives, external MVPs and partners who go on the conference circuit, executive briefing days, tech tours and so on. The organisers are the account managers, the CTOs who convince their teams into making the switch, the developers who make proofs-of-concept to get their peers to adopt the technology, and so on. Substitute “Microsoft” for any other successful technology company and the same holds there.

We can also look to (real or perceived) dysfunction in a movement and see whether our model helps us to see what is wrong. A keen interest of mine is in identifying software movements where “as practised” differs from “as described”. We can now see that this means the action being taken (and led by the organisers) is disconnected from the actions laid out by the theorists.

I have already written that the case with OOP is that the theory changed; “thinking about your software in this way will help you model larger systems and understand your solutions” was turned by the object technologists into “buying our object technology is an easy way to achieve buzzword compliance”. We can see similar things happening now, with “machine learning” and “serverless” being hollowed out to fill with product.

On the other hand, while OOP and machine learning have mutated theories, the Agile movement seems to suffer from a theory gap. Everybody wants to be Agile or to do Agile, all of the change agents and consultants want to tell us to be Agile or to do Agile, but why does this now mean Dark Scrum? A clue from Ron Jeffries’ post:

But there is a connection between the 17 old men who had a meeting in Snowbird, and the poor devils working in the code mines of insurance companies in Ohio, suffering under the heel of the boot of the draconian sons of expletives who imposed a bastardized version of something called Scrum on them. We started this thing and we should at least feel sad that it has sometimes gone so far off the rails. And we should do what we can to keep it from going more off the rails, and to help some people get back on the rails.

Imagine if Karl Marx had written Capital: A Critique of Political Economy, then waited eighty years, then said “oh hi, that thing Josef Stalin is doing with the gulags and the exterminations and silencing the opposition, that’s not what I had in mind, and I feel sad”. Well, Agile has not gone so far off the rails as that, and has only had twenty years to do it, but the analogy is in the theory being “baked” at some moment, and the world continuing to change. Who are the current theorists advancing Agile “as practised” (or at least the version “as described” that a movement is taking out to change the practice)? Where are the theoreticians who are themselves Embracing Change? It seems to me that we had the formation of the theory in XP, the crystallisation (pardon the pun) of the theory and the call to action in the Agile manifesto, then the project management bit got firmed up in the Declaration of Interdependence, and now Agile is going round in circles with its tiller still set on the Project Management setting.

Well, one post-Agile more-Agile-than-thou movement for the avocado on toast generation is the Software Craft[person]ship movement, which definitely has theory and a call to action (Software Craftsmanship: the New Imperative, which is only a scratch newer than the Agile Manifesto), definitely has vocal propagandists and agitators, and yet still doesn’t seem to be sweeping the industry. Maybe it is, and I just don’t see it. Maybe there’s no clear role for organisers. Maybe the call to action isn’t one that people care about. Maybe the propaganda is not very engaging.

Anyway, Lenin gave me an interesting model.

April 01, 2018

Frozen Crunchie by Bruce Lawson (@brucel)

Public service announcement: it’s good to eat Mars bars straight from the freezer, but don’t try it with a Crunchie: freezing makes them totally brittle and they turn into dust. I ate all the dust I could scoop off my shirt, but had to sweep 50% of it off the floor.

My friend Jooly informs me

Caramac will be fine … I’d be very careful with Turkish Delight though and only a fool would try to freeze a large bar of Dairy Milk Marvellous Creations Jelly Popping Candy. You better know yourself if you’re going to mess with that.

March 29, 2018

At the reportedly-excellent PerfMatters Conference on Tuesday, our Stylable — The Musical music video was unleashed in its world premiere. For those of you who missed this epoch-defining event, here it is!

We had great fun making it. It started out one Friday when I was failing miserably to do some important Git/NPM/Yarn/Jekyll stuff. To cheer myself up, I decided to do something I know I’m good at, so fired up my music software and began recording a little ditty I’d been working on. (Old chums will know I occasionally make Web Standards-based reinterpretations of classic songs, such as Like A Rounded Corner and Living Standard.)

I sent a roughly-mixed SoundCloud link to three members of the Wix Engineering team I work most closely with, who played it to the wider team. The next day I was told that the song had been played during the annual product presentation to Wix’s senior management.

The incomparable Estelle Weyl tweeted that if we made a music video, she would play it at PerfMatters Conference which she was organising. I mentioned this to the team, and suddenly a professional director and crew had been engaged. One night in early January, I drank a bottle of Tempranillo wine and wrote the script, and then flew out to Tel Aviv to make the video.

Shooting took all day, in our team office, and at sunset on the roof of the Wix HQ building on the same street. I think that it properly captures the fun and enthusiasm of the Stylable team, while being professionally lit, shot and edited. I’d be willing to bet that we’re the first open-source project to launch with our own music video.

Big thanks are owed to Danielle Kanish of Wix Academy, who co-ordinated with the outside contractors; Maya Alon, Queen of Wix Academy and 14th incarnation of Parvati, for finding the budget; to director Yoav Gertner and his crew, who took my somewhat odd brief and made it happen; to Tal, Iftach, Tom, Uri, Benita, Kieran, Barak, Avi, Arnon, Hadar, Ido and Nadav from the Stylable team for being such good sports and being willing to make fools of themselves on video; to Estelle Weyl for giving me the idea, and to Alessio Carone for his help and advice on the karaoke subtitles.

I’m very lucky to work with an organisation that would sanction and fund such a daft project. Thanks, Wix! And if they fire me, I shall be offering bespoke dance tuition — but book soon; there’s a long line of people wanting to be able to move as seductively as I do in this video.

March 28, 2018

Early last month, our beloved CTO, Karl Binder, was invited to give a talk at the University of Wolverhampton’s Visual Communications department, speaking to students...

The post Karl talks tech careers at Wolverhampton University appeared first on stickee.

March 26, 2018

Last week we had the pleasure of welcoming technology enthusiast, Hadley, to our office. Currently in Year 10 at Arden Academy, he wanted to experience what...

The post Hadley’s Work Experience at stickee appeared first on stickee.

March 25, 2018

Squares and prettier graphs by Stuart Langridge (@sil)

The Futility Closet people recently posted “A Square Circle”, in which they showed:

49² + 73² = 7730
77² + 30² = 6829
68² + 29² = 5465
54² + 65² = 7141
71² + 41² = 6722
67² + 22² = 4973

which is a nice little result. I like this sort of recreational maths, so I spent a little time wondering whether this was the only such cycle, or the longest, or whether there were longer ones. A brief bit of Python scripting later, and the truth is revealed: it’s not the only cycle, but it is the longest one, with six entries.

There are no other 6-cycles; there’s a 5-cycle (start from 68²+50²=7124), a 4-cycle (47²+56²=5345) and interestingly two 1-cycles, numbers which lead to themselves: 12²+33²=1233 and 88²+33²=8833. That’s rather cool.

I did wonder whether there are also interesting cycles with more numbers, so I tried out adding the squares of 3-digit numbers, but sadly they’re really boring; there’s a 2-cycle (137²+461²=231290, 231²+290²=137461), another 1-cycle (990²+100²=990100) and that’s it. Nonetheless, quite an interesting little property to fiddle around with.

Prettier graphs

Originally I was going to make my script count the lengths of the cycles and show the largest one and so on, but I realised that that was annoying and fiddly and what I ought to do is just display a nice picture of them and that’d be clear to my eyes immediately and take no code at all. My go-to tool for this sort of thing, where I’m drawing graphs (in the mathematical nodes-and-edges sense) programmatically, is Graphviz, because it’s really easy; you basically write out your graph as obvious simple words with arrows:

digraph {
    "get up" -> "go to work";
    "go to work" -> "come home again";
    "come home again" -> "go to sleep";
    "go to sleep" -> "get up";
}

and then you can make it a graph with one command: dot -Tpng simple.dot > output.png:

A basic graphviz graph of the above code; plain black and white, and not pretty

That looks pretty terrible, though; plain black and white, ugly. I tweaked my graph above to look a bit nicer, with some colours, and that’s really easy; you just add a few extra properties to the nodes (the things to do) and edges (the arrows) in your graph specification:

digraph {

    node[shape="rectangle" style="rounded,filled" gradientangle="270" 
        fillcolor="#990033:#f5404f" color="#991111" 
        fontcolor="#ffffff" fontname="Arial"]

    edge [color="#006699" len=2.5]


    "get up" -> "go to work";
    "go to work" -> "come home again";
    "come home again" -> "go to sleep";
    "go to sleep" -> "get up";
}

and then you get something a bit nicer:

Same graph, but with a little colour and niceness

Now, I am no graphic artist. I’m not good at this stuff. If you’re thinking “that looks rubbish; I could make it look loads nicer” then great! Please, please do so! I would very much like one of the many graphic artists involved in the open source world to put together a “theme” for graphviz that just makes graphs look a bit nicer and classier, by default. Seriously, if you’ve got an artistic eye this is the sort of thing that’d probably take you a lunchtime to do. Just pick some nice colours, line widths, arrow shapes, node shapes, and you’re done. Write a blog post saying “these are the six lines to add to the top of your graphviz .dot files” and that’s the job complete; that would be a small but measurable improvement to the universe that you’ve made, there, with not much effort at all.

The graphviz people are pretty open to the idea of even including such a thing in their releases, maybe even by default. I asked on Twitter whether someone could do, or had already done, the thing I’m asking for, and one of the people who responded was Stephen North, who’s part of the graphviz team, saying that they’d be happy to include and publicise such a thing.

To be clear, this is not a complaint about the graphviz team themselves; their job is mostly to think very hard about layout algorithms, which they indeed do a good job of. But I think it’s really important, not just that open source stuff can be made to look pretty if you know what you’re doing, but also that it already does look pretty by default where it can. It turns people off your software, no matter how powerful it is, if some less-powerful alternative puts out more attractive output. There are some things where this would take a lot of work; rejigging the entire UI of a complex programme is difficult and time-consuming, absolutely. But I really feel like someone with a decent artistic eye (i.e., not me) could put together a simple set of colours and font choices and line widths that would make graphviz look much nicer either by default or by specifying --pretty or something, and it wouldn’t take long at all. I’d certainly be way happier if that happened. Maybe that person is you, gentle reader?

March 22, 2018

stickee are thrilled to share that our Competitor Price Monitoring software, Magpie, is shortlisted for another award. The Computing Big Data Excellence Awards celebrates the top...

The post stickee shortlisted for Computing Big Data Excellence Awards appeared first on stickee.

March 16, 2018

There are three different types of inheritance going on.

  1. Ontological inheritance is about specialisation: this thing is a specific variety of that thing (a football is a sphere and it has this radius)
  2. Abstract data type inheritance is about substitution: this thing behaves in all the ways that thing does and has this behaviour (this is the Liskov substitution principle)
  3. Implementation inheritance is about code sharing: this thing takes some of the properties of that thing and overrides or augments them in this way. The inheritance in my post On Inheritance is this type and only this type of inheritance.

These are three different, and frequently irreconcilable, relationships. Supporting any of them, or even all of them, presents no difficulty. However, requiring that one mechanism support two or more of them is asking for trouble.

A common counterexample to OO inheritance is the relationship between a square and a rectangle. Geometrically, a square is a specialisation of a rectangle: every square is a rectangle, not every rectangle is a square. For all s in Squares, s is a Rectangle and width of s is equal to height of s. As a type, this relationship is reversed: you can use a rectangle everywhere you can use a square (by having a rectangle with the same width and height), but you cannot use a square everywhere you can use a rectangle (for example, you can’t give it a different width and height).

Notice that this is incompatibility between the inheritance directions of the geometric properties and the abstract data type properties of squares and rectangles; two dimensions which are completely unrelated to each other and indeed to any form of software implementation. We have so far said nothing about implementation inheritance, so haven’t even considered writing software.
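A small sketch (mine, not from the post) makes the type-direction failure concrete: follow the geometric direction and subclass a mutable Rectangle with Square, and an operation that is valid for any rectangle silently breaks the square’s invariant:

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    # Geometrically sound: every square is a rectangle.
    def __init__(self, side):
        super().__init__(side, side)

def double_width(rect):
    # Perfectly reasonable for any mutable Rectangle...
    rect.width *= 2

sq = Square(3)
double_width(sq)
# ...but now sq.width == 6 while sq.height == 3. The object still
# claims to be a Square, yet the square invariant is gone, so Square
# is not a behavioural subtype of mutable Rectangle.
```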

Smalltalk and many later languages use single inheritance for implementation inheritance, because multiple inheritance is incompatible with the goal of implementation inheritance due to the diamond problem (traits provide a reliable way for the incompatibility to manifest, and leave resolution as an exercise to the reader). On the other hand, single inheritance is incompatible with ontological inheritance, as a square is both a rectangle and an equilateral polygon.

The Smalltalk blue book describes inheritance solely in terms of implementation inheritance:

A subclass specifies that its instances will be the same as instances of another class, called its superclass, except for the differences that are explicitly stated.

Notice what is missing: no mention that a subclass instance must be able to replace a superclass instance everywhere in a program; no mention that a subclass instance must satisfy all conceptual tests for an instance of its superclass.

Inheritance was never a problem: trying to use the same tree for three different concepts was the problem.

“Favour composition over inheritance” is basically giving up on implementation inheritance. We can’t work out how to make it work, so we’ll avoid it: get implementation sharing by delegation instead of by subclassing.
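A hedged sketch of what that trade looks like (the class names are mine, purely illustrative): both versions share the Engine implementation, but only the subclassing version also asserts, whether we wanted it or not, that a car can be used as an engine:

```python
class Engine:
    def start(self):
        return "engine started"

# Implementation inheritance: shares Engine's code, but also drags in
# the (unwanted) subtype claim that a car *is* an engine.
class InheritingCar(Engine):
    pass

# Composition: a car *has* an engine and forwards to it, sharing the
# implementation while making no subtype claim at all.
class ComposedCar:
    def __init__(self):
        self._engine = Engine()

    def start(self):
        return self._engine.start()

assert InheritingCar().start() == ComposedCar().start() == "engine started"
```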

Eiffel, and particular disciplined approaches to using languages like Java, tighten up the “inheritance is subtyping” relationship by relaxing the “inheritance is re-use” relationship (if the same method appears twice in unrelated parts of the tree, you have to live with it, in order to retain the property that every subclass is a subtype of its parent). This is fine, as long as you don’t try to also model the problem domain using the inheritance tree, but much of the OO literature recommends that you do by talking about domain-driven design.

Traits approaches tighten up the “inheritance is specialisation” relationship by relaxing the “inheritance is re-use” relationship (if two super categories both provide the same property of an instance of a category, neither is provided and you have to write it yourself). This is fine, as long as you don’t try to also treat subclasses as covariant subtypes of their superclasses, but much of the OO literature recommends that you do by talking about Liskov Substitution Principle and how a type in a method signature means that type or any subclass.

What the literature should do, I believe, is say “here are the three types of inheritance, focus on any one of them at a time”. I also believe that the languages should support that (obviously Smalltalk, Ruby and friends do support that by not having any type constraints).

  • If I’m using inheritance as a code sharing tool, it should not be assumed that my subclasses are also subtypes.
  • If I am using subtypes to tighten up interface contracts, I should be not only allowed to mark a class anywhere in the tree as a subtype of another class anywhere in the tree, but required to do so: once again, it should not be assumed that my subclasses are also subtypes.
  • If I need to indicate conceptual specialisation via classes, this should also not be assumed to follow the inheritance tree. I should be not only allowed to mark a class anywhere in the tree as a subset of another class, but required to do so: once again, it should not be assumed that my subclasses are also specialisations.
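Some of this separation already exists. Python’s abc module, for example, lets you declare the subtype relationship explicitly, between classes anywhere in the tree, without any implementation inheritance coming along for the ride (though, true to the language’s lack of type constraints, nothing checks that the registered class actually honours the interface):

```python
import abc

class Sized(abc.ABC):
    @abc.abstractmethod
    def size(self): ...

class Box:
    # No subclass relationship with Sized whatsoever.
    def size(self):
        return 42

# Declare "Box is a subtype of Sized" explicitly, independently of
# the inheritance tree.
Sized.register(Box)

assert issubclass(Box, Sized)    # the declared subtype relation holds...
assert Sized not in Box.__mro__  # ...with no implementation inheritance
```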

Your domain model is not your object model. Your domain model is not your abstract data type model. Your object model is not your abstract data type model.

Now inheritance is easy again.

March 07, 2018

Every year, stickee welcomes talented students from Holland to join our team for a unique work placement opportunity. Working in partnership with Deltion College, our...

The post Meet our Interns from Holland appeared first on stickee.

March 01, 2018

Another day another awards shortlist for the talented team at stickee. We’re excited to share that we are shortlisted as finalists for the Performance Marketing...

The post stickee shortlisted for Performance Marketing Awards appeared first on stickee.

I’ve always enjoyed reading about the tools and gear other people use. I stumbled on Matt Mullenweg’s “What’s in my bag” post in 2014 and I’ve followed his updates since. But it’s only in the past year or so—as I’ve been working more frequently from coffee shops, co-working spaces and client offices—that I’ve begun to invest in my own travel setup.

What follows is my everyday bag. I keep it permanently packed (minus the laptop), so it’s always ready to go when I head out.

My bag and gear

Here’s a breakdown of what I carry:

  1. MacBook Pro 13”. My main machine is an rMBP early 2013 (which I plug into two Dell UltraSharp U2414H monitors on my desk). I’m probably due an upgrade, but this machine is still going strong and I’m hopeful it’ll last another year or two. I keep a Transcend JetDrive 128GB Expansion Card permanently in the SD card slot which gives me a little extra storage and is useful as a backup. It fits flush so I never notice it’s there.
  2. BAGSMART Cable Organiser Bag, which I use to store: 2x lightning cables, EarPods with lightning connector (spare headphones for Phone), EarPods with 3.5mm headphone plug (spare headphones for Mac), lightning to 3.5mm headphone jack adapter, micro USB cable, SD card, USB memory stick, MacBook Pro power adaptor extension cable and a lightning to SD Card card reader.
  3. Apple Magic Keyboard and Magic Mouse, which I keep in an RKM carrying case.
  4. Roost laptop stand, which also stays in the RKM carrying case above. The roost laptop stand is lightweight and portable. I’ve used this for several years now and I love it.
  5. Kindle Paperwhite. I pretty much exclusively read ebooks these days. The Kindle is rubbish at everything except displaying text, which makes it the perfect reading device.
  6. Zeiss Lens Wipes. Although designed for camera lenses, these work great on any screen. Handy for removing those grubby finger marks.
  7. A Rotring 600 Mechanical Pencil and a Uni-Ball Jetstream Pen (I keep a second Jetstream Pen in the Cable Organiser Bag). Both are good quality, comfortable and great to write or draw with.
  8. Field Notes. I mostly use Bear on my phone for taking notes, but these come in handy for quick sketches or times when I can’t use my phone for whatever reason.
  9. A cheap LED Torch. I don’t use this often as I’ll usually just use the light on my phone, but it’s small and light and doesn’t take up much space.
  10. Rucksack Rain Cover. Handy for the British weather.
  11. Business cards. I only hand out 2 or 3 per year, but still useful to have.
  12. AirPods. My favourite tech purchase last year. I love these things. I use these all the time, mostly for podcasts.
  13. MacBook Pro power adapter.
  14. Anker PowerCore 13000 Power Bank. Means I never have to worry about my phone running out of juice. In the pouch, I keep a micro-USB and lightning cable.
  15. Dash Slim Wallet 5.0. Backed this on Kickstarter and have used for a few years now. It has held up well, but I am due a replacement. Would only buy a minimal/slim wallet these days.
  16. A selection of microfibre lens cleaning cloths, including this microfibre cloth that comes with a little pouch that I keep fastened to my bag.
  17. Tablets (Ibuprofen and hayfever tablets).
  18. Mints.
  19. Prescription glasses and sunglasses (including cases, not pictured).
  20. iPhone 7. Works great and haven’t felt the need to upgrade yet.
  21. North Face Borealis (Urban Navy). I’m really enjoying this backpack. It’s big enough to fit all my gear (but not too big). The straps are well padded and the sternum strap makes it comfortable to carry heavier loads. And there are plenty of pockets to store things.

Additional items

These are items that don’t always stay in my bag:

  1. Pack & Smooch iPad Pro 10.5″ Case Sleeve Cover. Really nice quality case. The only downside is the Apple Pencil loop doesn’t hold the Pencil particularly well.
  2. Sigma 35mm 1.4 Art Lens. A stunningly sharp lens.
  3. 10.5″ iPad Pro and Apple Pencil.
  4. USB Charger Anker 27W 4-Port USB Wall Charger.
  5. Stanley Outdoor Bottle. If I’m travelling long distance or staying over, I’ll often take this awesome little hip flask filled with a good tipple.
  6. Canon 760D with Canon 24mm pancake lens. My usual walk-around setup. Decent lens and really small/light. Will swap the 24mm for the Sigma 35mm if I want something a little sharper.

February 28, 2018

I read 23 books last year. In no particular order, here are my favourites:

So You’ve Been Publicly Shamed by Jon Ronson

The book opens with the story of Justine Sacco, whose life was ruined after tweeting an offensive and careless joke. Ronson continues to interview other people who have been publicly shamed after posting offensive, insensitive, or just plain stupid tweets or Facebook statuses.

Out of all the books I read last year, this had the biggest impact on me. It’ll likely make you use social media differently: both in how you interact with others and how you share yourself online.

If you use social media, you should read this book.

I had enough. I quit Twitter. The world outside Twitter was GREAT. I read books. I reconnected with people I knew from real life and met them for drinks in person.

We see ourselves as non-conformist, but I think all of this is creating a more conformist, conservative age. ‘Look!’ we’re saying. ‘We’re normal! This is the average!’ We are defining the boundaries of normality by tearing apart the people outside of it.

The great thing about social media was how it gave a voice to voiceless people. Let’s not turn it into a world where the smartest way to survive is to go back to being voiceless.

Shoe Dog by Phil Knight

Shoe Dog is a gripping page-turner by the founder of Nike. He covers the early years of Nike, from its humble beginnings as Blue Ribbon Sports, all the way to the company we all know today. It’s a story of struggle, lawsuits, hard work and triumph.

It’s also full of wisdom:

Like it or not, life is a game. Whoever denies that truth, whoever simply refuses to play, gets left on the sidelines.

One lesson I took from all my home-schooling about heroes was that they didn’t say much. None was a blabbermouth. None micromanaged. Don’t tell people how to do things, tell them what to do and let them surprise you with their results.

Luck plays a big role. Yes, I’d like to publicly acknowledge the power of luck. Athletes get lucky, poets get lucky, businesses get lucky. Hard work is critical, a good team is essential, brains and determination are invaluable, but luck may decide the outcome.

Tribe of Mentors by Tim Ferriss

I eagerly awaited Tim’s previous book: Tools of Titans. I found it to be disappointing. Don’t get me wrong, Tools of Titans is full of gems. But it felt like reading Tim’s unorganised notes, which meant it wasn’t particularly readable or enjoyable.

Tribe of Mentors, however, is what Tools of Titans should have been. Tim does what he does best and asks cleverly constructed questions, and then gets out of the way. This makes the book far more readable. I’m not even going to quote any take-aways, as there are far too many. I was scribbling notes from every other page.

Scientific Advertising by Claude Hopkins

Written in 1920, this is a book about writing advertising copy. Despite its age, it stands up remarkably well. If you work on the web or in marketing, I recommend picking up a copy. It’s the shortest book on this list at only 127 pages.

When you plan or prepare an advertisement, keep before you a typical buyer. Your subject, your headline has gained his or her attention. Then in everything be guided by what you would do if you met the buyer face-to-face.

Successful salesmen are rarely good speechmakers. They have few oratorical graces. They are plain and sincere men who know their customers and know their lines. So it is in ad writing.

Don’t think that those millions will read your ads to find out if your product interests. They will decide at a glance—by your headline or your pictures. Address the people you seek, and them only.

Masters of Doom by David Kushner

A fun and easy read about how John Carmack and John Romero started id Software and came to dominate an industry as they created iconic games such as Doom and Quake. I found it inspiring: the raw energy and passion of the two Johns, and their inherent differences, make for a thrilling read. If you’re a gamer (especially if you played Doom or Quake), or you’re a developer, then you’ll love this book.

The War of Art by Steven Pressfield

This book is about the universal forces that act against creativity and how we might overcome them. Pressfield calls this force the “Resistance”. If you’re a creator – a writer, musician, artist, freelancer, etc. – then you owe it to yourself to read The War of Art.

There’s a secret that real writers know that wannabe writers don’t, and the secret is this: It’s not the writing part that’s hard. What’s hard is sitting down to write. What keeps us from sitting down is Resistance.

Resistance cannot be seen, touched, heard, or smelled. But it can be felt. We experience it as an energy field radiating from a work-in-potential. It’s a repelling force. It’s negative. Its aim is to shove us away, distract us, prevent us from doing our work.

Rule of thumb: The more important a call or action is to our soul’s evolution, the more Resistance we will feel toward pursuing it.

Creativity, Inc. by Ed Catmull

Ed Catmull is the co-founder of Pixar, an animation film studio who have created some of my favourite movies. I was looking forward to this book to get a glimpse at their creative process. And it did not disappoint. Throughout the book, Catmull is honest and sincere as he examines his own mistakes in an attempt to build a great creative culture.

If you identify too closely with your ideas, you will take offence when they are challenged.

Creative people must accept that challenges never cease, failure can’t be avoided, and “vision” is often an illusion.

It’s Ok for leaders to change their minds and say “Okay, I was wrong, it’s this way.” As long as you commit to a destination and drive toward it with all your might, people will accept when you correct course.

Dance of the Possible by Scott Berkun

I loved Scott Berkun’s take on the creative process. It’s a quick read, at only 250 pages, and I finished it in a few sittings.

It’s the kind of book that makes you want to get to work.

The word create is a verb. It’s an action. Creativity is best thought of in the same way.

You must learn to love your mind, to nurture it by feeding it quality ideas and thoughts, and give it time to prove what it can do.

The ability to see an idea, or a thing, from many different perspectives is among the greatest assets a thinking person can have.

Happy: Why More or Less Everything is Absolutely Fine by Derren Brown

This book explores—you guessed it—what it is to be happy: the history of how religion and society have shaped happiness, why positive thinking doesn’t actually make us happy, and how we can live happier lives largely through the principles of Stoicism. Stoicism is a philosophy that dates back thousands of years but, regardless of its age, it stands up remarkably well to modern day living.

The vital changes to our happiness do not come from outside circumstances, however appealing they might seem.

We are missing out if we feel that happiness is a result of lucky circumstance rather than something rooted immovably in us.

Our daily employment does not need to be our identity. It’s a wonderful bonus to do what one enjoys, but it’s not necessary.

The Subtle Art of Not Giving a Fuck by Mark Manson

Mark Manson likes to say “fuck” a lot. A quick search for the word “fuck” in the Kindle version returns 171 results. It felt obnoxious to start with, but stick with it as there is a lot of wisdom to be found in this book. It’s full of counterintuitive advice. Stop trying to be positive all the time. Pleasure is a bad value. Be in search of more uncertainty and doubt in your life. Be wrong.

The key to a good life is giving a fuck about less, giving a fuck about what is true and immediate and important.

Wanting positive experience is a negative experience; accepting negative experience is a positive experience.

What determines your success isn’t, “What do you want to enjoy?” The relevant question is, “What pain do you want to sustain?”

Bonus (favourite fiction books)

All the books above are non-fiction, but I also read some great fiction books:

The Fault in Our Stars by John Green

I was recommended this book by a friend and I read it without any knowledge of what it was about. I didn’t even read the blurb. And I’m glad I didn’t, as there’s a fair chance I would have skipped this book if I had. It made me laugh and cry—often on the same page. As one reviewer put it, it’s “a novel of life and death and the people caught in between.”

Wonder by R.J. Palacio

Another recommendation, another book I wouldn’t usually read, and another book I’m glad I read. Wonder is the heart-warming story of August Pullman, a fifth-grader who was born with a terrible facial abnormality. It follows his journey as he gets sent to school for the first time after being home-schooled by his parents.

A Man Called Ove by Fredrik Backman

This was my favourite fiction book of the year. It’s the story of Ove, a grumpy old man who wants nothing more than solitude from his neighbours. That doesn’t sound like the setting of a heart-warming story of friendship and love, but it is. One I’ll be revisiting for sure.

The Rosie Project by Graeme Simsion

I picked up this recommendation from Bill Gates, of all people. On his blog, he said about the book:

Anyone who occasionally gets overly logical will identify with the hero, a genetics professor with Asperger’s Syndrome who goes looking for a wife. (Melinda thought I would appreciate the parts where he’s a little too obsessed with optimizing his schedule. She was right.) It’s an extraordinarily clever, funny, and moving book about being comfortable with who you are and what you’re good at.

That’s my list. I’d love to hear about the books you’ve enjoyed recently—Twitter or email are the best places to do so.

February 26, 2018

The team at stickee are delighted to announce that we have been shortlisted for not one, but two awards at the prestigious National Technology Awards....

The post stickee finalists for two National Technology Awards appeared first on stickee.

February 20, 2018

Lots of companies want to collect data about their users. This is a good thing, generally; being data-driven is important, and it’s jolly hard to know where best to focus your efforts if you don’t know what your people are like. However, this sort of data collection also gives people a sense of disquiet; what are you going to do with that data about me? How do I get you to stop using it? What conclusions are you drawing from it? I’ve spoken about this sense of disquiet in the past, and you can watch (or read) that talk for a lot more detail about how and why people don’t like it.

So, what can we do about it? As I said, being data-driven is a good thing, and you can’t be data-driven if you haven’t got any data to be driven by. How do we enable people to collect data about you without compromising your privacy?

Well, there are some ways. Before I dive into them, though, a couple of brief asides: there are some people who believe that you shouldn’t be allowed to collect any data on your users whatsoever; that the mere act of wanting to do so is in itself a compromise of privacy. This is not addressed to those people. What I want is a way that both sides can get what they want: companies and projects can be data-driven, and users don’t get their privacy compromised. If what you want is that companies are banned from collecting anything… this is not for you. Most people are basically OK with the idea of data collection, they just don’t want to be victimised by it, now or in the future, and it’s that property that we want to protect.

Similarly, if you’re a company who wants to know everything about each individual one of your users so you can sell that data for money, or exploit it on a user-by-user basis, this isn’t for you either. Stop doing that.

Aggregation

The key point here is that, if you’re collecting data about a load of users, you’re usually doing so in order to look at it in aggregate; to draw conclusions about the general trends and the general distribution of your user base. And it’s possible to do that data collection in ways that maintain the aggregate properties of it while making it hard or impossible for the company to use it to target individual users. That’s what we want here: some way that the company can still draw correct conclusions from all the data when collected together, while preventing them from targeting individuals or knowing what a specific person said.

In the 1960s, Warner (and later Greenberg and colleagues) developed the randomised response technique for social science interviews. Basically, the idea here is that if you want to ask people questions about sensitive topics — have they committed a crime? what are their sexual preferences? — then you need to be able to draw aggregate conclusions about what percentage of people have done various things, but any one individual’s ballot shouldn’t be a confession that can be used against them. The technique varies a lot in exactly how it’s applied, but the basic concept is that for any question, there’s a random chance that the answerer should lie in their response. If some people lie in one direction (saying that they did a thing when they didn’t), and the same proportion of people lie in the other direction (saying they didn’t do the thing when they did), then with enough answerers the lies pretty much cancel out. So your aggregate statistics are still accurate — you know that X percent of people did the thing — but no individual person’s response is incriminating, because they might have been lying. This gives us the privacy protection people need, while preserving the aggregate properties that allow the survey-analysers to draw accurate conclusions.
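As a sketch of how that inversion works in code (the 75% truth probability and the 30% true rate here are made-up numbers for illustration): because the collector knows the probability of a lie, they can undo it in aggregate, even though no single answer can be trusted.

```python
import random

random.seed(42)

P_TRUTH = 0.75  # each answerer tells the truth with this probability

def randomised_response(truth):
    """Answer truthfully with probability P_TRUTH, otherwise lie."""
    return truth if random.random() < P_TRUTH else not truth

def estimate_true_rate(reports):
    """Invert the known lie probability to recover the population rate.

    observed = p*true + (1-p)*(1-true), so true = (observed - (1-p)) / (2p-1).
    """
    observed = sum(reports) / len(reports)
    return (observed - (1 - P_TRUTH)) / (2 * P_TRUTH - 1)

# Simulate 100,000 people, 30% of whom really did the sensitive thing.
population = [random.random() < 0.30 for _ in range(100_000)]
reports = [randomised_response(answer) for answer in population]
print(round(estimate_true_rate(reports), 3))  # close to 0.3
```

Any individual report is wrong a quarter of the time, so it proves nothing about that person; only the aggregate estimate is meaningful.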

It’s something like whether you’ll find a ticket inspector on a train. Train companies realised a long time ago that you don’t need to put a ticket inspector on every single train. Instead, you can put inspectors on enough trains that the chance of fare-dodgers being caught is high enough that they don’t want to take the risk. This randomised response is similar; if you get a ballot from someone saying that they smoked marijuana, then you can’t know whether they were one of those who were randomly selected to lie about their answer, and therefore that answer isn’t incriminating, but the overall percentage of people who say they smoked will be roughly equal to the percentage of people who actually did.

A worked example

Let’s imagine you’re, say, an operating system vendor. You’d like to know what sorts of machines your users are installing on (Ubuntu are looking to do this as most other OSes already do), and so how much RAM those machines have would be a useful figure to know. (Lots of other stats would also be useful, of course, but we’ll just look at one for now while we’re explaining the process. And remember this all applies to any statistic you want to collect; it’s not particular to OS vendors, or RAM. If you want to know how often your users open your app, or what country they’re in, this process works too.)

So, we assume that the actual truth about how much RAM the users’ computers have looks something like this graph. Remember, the company does not know this. They want to know it, but they currently don’t.

So, how can they collect data to know this graph, without being able to tell how much RAM any one specific user has?

As described above, the way to do this is to randomise the responses. Let’s say that we tell 20% of users to lie about their answer, one category up or down. So if you’ve really got 8GB of RAM, then there’s an 80% chance you tell the truth, and a 20% chance you lie; 10% of users lie in a “downwards” direction, so they claim to have 4GB of RAM when they’ve actually got 8GB, and 10% of users lie in an “upwards” direction and claim to have 16GB. Obviously, we wouldn’t actually have the users lie — the software that collects this info would randomly either produce the correct information or not with the above probabilities, and people wouldn’t even know it was doing it; the deliberately incorrect data is only provided to the survey. (Your computer doesn’t lie to you about how much RAM it’s got, just the company.) What does that do to the graph data?
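A sketch of that perturbation in code (the RAM categories and population counts are invented for illustration, and clamping lies at the ends of the scale is my assumption, since the scheme above doesn’t say how edges are handled):

```python
import random
from collections import Counter

random.seed(1)

RAM_SIZES = [1, 2, 4, 8, 16, 32]  # categories, in GB

def perturb(index, p_lie=0.20):
    """Report the true category 80% of the time; otherwise shift the
    report one category down (10%) or up (10%), clamped at the ends."""
    r = random.random()
    if r < p_lie / 2:
        index -= 1
    elif r < p_lie:
        index += 1
    return min(max(index, 0), len(RAM_SIZES) - 1)

# A made-up "true" distribution, peaking at 8GB.
true_counts = {1: 50, 2: 150, 4: 400, 8: 600, 16: 250, 32: 50}
reported = Counter()
for size, count in true_counts.items():
    for _ in range(count):
        reported[RAM_SIZES[perturb(RAM_SIZES.index(size))]] += 1

for size in RAM_SIZES:
    print(f"{size:>2}GB  true={true_counts[size]:>3}  reported={reported[size]:>3}")
```

The reported histogram keeps the same overall shape (the 8GB peak survives) even though any single submission may be a lie.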

This graph shows users who gave accurate information in green, and lies in red. And the graph looks pretty much the same! Any one given user’s answers are unreliable and can’t be trusted, but the overall shape of the graph is pretty similar to the actual truth. There are still peaks at the most popular points, and still troughs at the unpopular ones. Each bar in the graph is reasonably accurate (accuracy figures are shown below each bar, and they’ll normally be around 90-95%, although because it’s random they may fluctuate a little for you). So our company can draw conclusions from this data, and they’ll be generally correct. They’ll have to take those conclusions with a small pinch of salt, because we’ve deliberately introduced inaccuracy into them, but the trends and the overall shape of the data will be good.

The key point here is that, although you can see in the graph which answers are truth and which are incorrect, the company can’t. They don’t get told whether an answer is truth or lies; they just get the information and no indication of how true it is. They’ll know the percentage chance that an answer is untrue, but they won’t know whether any one given answer is.
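In fact, because those percentages are public, the company can correct for the bias they introduce. A sketch using the same illustrative 20% scheme (the matrix below assumes lies are clamped at the ends of the scale, which is my assumption about edge handling):

```python
import numpy as np

P_LIE = 0.20
N = 6  # number of RAM categories

# M[j, i] is the probability that a machine truly in category i reports
# category j: 80% truth, 10% one step down, 10% one step up, with lies
# clamped at the ends of the scale.
M = np.zeros((N, N))
for i in range(N):
    M[i, i] += 1 - P_LIE
    M[max(i - 1, 0), i] += P_LIE / 2
    M[min(i + 1, N - 1), i] += P_LIE / 2

true_counts = np.array([50, 150, 400, 600, 250, 50], dtype=float)
reported = M @ true_counts  # the expected reported histogram

# Knowing M, the company can undo the distortion in aggregate,
# without learning anything more about any individual submission.
recovered = np.linalg.solve(M, reported)
print(np.round(recovered))  # matches true_counts
```

The correction only works on the histogram as a whole; applied to one submission it tells you nothing, which is the whole point.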

Can we be more inaccurate? Well, here’s a graph to play with. You can adjust what percentage of users’ computers lie about their survey results by dragging the slider, and see what that does to the data.

[Interactive graph: a slider from 0% to 100% controls what percentage of submissions are deliberately incorrect; the default is 20%.]

Even if you make every single user lie about their values, the graph shape isn’t too bad. Lying tends to “flatten out” the graph; it makes the tall peaks less tall and the troughs less deep, and making every single person lie probably flattens things out so much that the conclusions you draw will be wrong. But you can see from this that it ought to be possible to run the numbers and come up with a “lie” percentage which balances the company’s need for accurate information against the user’s need not to provide it.

It is of course critical to this whole procedure that the lies cancel out, which means that they need to be evenly distributed. If everyone just makes up random answers then obviously this doesn’t work; answers have to start with the truth and then (maybe) lie in one direction or another.

This is a fairly simple description of this whole process of introducing noise into the data, and data scientists would be able to bring much more learning to bear on this. For example, how much does it affect accuracy if user information can lie by more than one “step” in every direction? Do we make it so instead of n% truth and 100-n% lies, we distribute the lies normally across the graph with the centrepoint being the truth? Is it possible to do this data collection without flattening out the graph to such an extent? And the state of the data art has moved on since the 1960s, too: Dwork wrote an influential 2006 paper on differential privacy which goes into this in more detail. Obviously we’ll be collecting data on more than one number — someone looking for data on computers on which their OS is installed will want for example version info, network connectivity, lots of hardware stats, device vendor, and so on. And that’s OK, because it’s safe to collect this data now… so how do our accuracy figures change when there are lots of stats and not just one? There will be better statistical ways to quantify how inaccurate the results are than my simple single-bar percentage measure, and how to tweak the percentage-of-lying to give the best results for everyone. This whole topic seems like something that data scientists in various communities could really get their teeth into and provide great suggestions and help to companies who want to collect data in a responsible way.
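For a flavour of what Dwork’s formalism looks like in practice: the canonical mechanism from that line of work releases a count after adding Laplace noise with scale 1/ε, where a smaller ε means stronger privacy and a noisier answer. A sketch (the count of 1000 and the ε values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def private_count(true_count, epsilon):
    """Release a count with Laplace(1/epsilon) noise added.

    A counting query has sensitivity 1 (adding or removing one person
    changes it by at most 1), which is what makes this scale of noise
    sufficient for epsilon-differential privacy."""
    return true_count + rng.laplace(scale=1.0 / epsilon)

# Smaller epsilon: stronger privacy, noisier answers.
for eps in (0.1, 1.0, 10.0):
    samples = [round(private_count(1000, eps), 1) for _ in range(5)]
    print(f"epsilon={eps}: {samples}")
```

As with randomised response, repeated noisy answers average out to the truth, so the aggregate stays useful while any single release stays deniable.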

Of course, this applies to any data you want to collect. Do you want analytics on how often your users open your app? What times of day they do that? Which OS version they’re on? How long they spend using it? All your data still works in aggregate, but the things you’re collecting aren’t so personally invasive, because you don’t know whether a user’s records are lies. This needs careful thought — there has been plenty of research on deanonymising data, and the EFF’s Panopticlick project shows how a combination of data points can be cross-referenced, which needs protecting against too — but that’s what data science is for: to tune the parameters used here so that individual privacy isn’t compromised while aggregate properties are preserved.

If a company is collecting info about you and they’re not actually interested in tying your submitted records to you (see the previous point about how this doesn’t apply to companies who do want to do that, who are a whole different problem), then this in theory isn’t needed. They don’t have to collect IP addresses or usernames and record them against each submission, and indeed if they don’t want that information then they probably don’t. But there’s always a concern: what if they’re really doing that and lying about it? Well, this is how we alleviate that problem. Even if a company actually are trying to collect personally-identifiable data and they’re lying to us about doing that, it doesn’t matter, because we protect ourselves by — with a specific probability — lying back to them. And then everyone gets what they want. There’s a certain sense of justice in that.

February 14, 2018

I have a love/hate relationship with Clojure tooling.

I wanted to learn functional programming (FP) in a Lisp because:
1. There’s a thing I want to do and it felt like the right way to do it
N.B. ‘right’ is not always ‘easiest’
2. Lisp once beat me up quite badly. I’m bigger now. I wanted to go back and punch it on the nose
3. I won’t really ‘get’ immutability until I do some, with no option to cheat.
4. I find homoiconic functional programming conceptually elegant…

…then you try using the Clojure development environment and discover there isn’t one that everyone agrees on, and the one used by experts requires you to learn a new language of keyboard hieroglyphs first.

I’ve done just enough Lisping to think that we are being as irresponsible teaching kids only object oriented programming as the BBC were in teaching them BASIC, but it would be actual child abuse to introduce them to FP via emacs and a language dependent on the underlying Object-Oriented Java Virtual Machine for its connection to libraries & reality.

This morning I realised I hadn’t plugged my Raspberry Pi in for ages. If you were a child who started coding with Scratch and a few drum loops on Sonic Pi, then maybe a bit of Squeak Smalltalk (Scratch is written in Squeak) or Python in a nice IDE, imagine being handed the ancient emacs scrolls and sent into a corner for a week to learn spells before you could even start to learn to function. FP is going to initially make soup of your flabby imperative, ‘place is state’-damaged brain anyway. There is no need to make it harder. Teaching languages don’t have to train you for a job, only to think.

If we are going to raise functional children, we need a gentle slope up to Clojure. Clojure is probably the best practical language but it’s too hard. I’ve seen a few people suggest starting with Racket, which is a version of the Scheme Lisp dialect. This morning I poured DrRacket onto my Pi.

There was a minor hiccup with DrRacket not knowing what language I wanted it to read. Kids would need to be protected from that. I wanted “Determine language from source”, not to tell it I was a ‘beginning student’.

Because DrRacket is multi-lingual, the first line of the source code tells it what language to read:
#lang racket

You type in code then press the Run button. This is not your grandfather’s emacs REPL. Children should have no natural fear of their (parens). Ooh look, cats!

#lang racket

(define (extract str)
  (substring str 4 8))

(extract "the cats out of the bag")

;;-> "cats"

February 08, 2018

Sorry Henry by Stuart Langridge (@sil)

I think I found a bug in a Henry Dudeney book.

Dudeney was a really famous puzzle creator in Victorian/Edwardian times. For Americans: Sam Loyd was sort of an American knock-off of Dudeney, except that Loyd stole half his puzzles from other people and HD didn’t. Dudeney got so annoyed by this theft that he eventually ended up comparing Loyd to the Devil, which was tough talk in 1910.

Anyway, he wrote a number of puzzle books, and at least some are available on Project Gutenberg, so well done the PG people. If you like puzzles, maths, or are a thinking sort, then there are a few good collections (and there are nicer-to-read versions at the Internet Archive too). The Canterbury Puzzles is his most famous work, but I’ve been reading Amusements in Mathematics. In there he presents the following puzzle:

81.—THE NINE COUNTERS.

158    79
×23    ×46

I have nine counters, each bearing one of the nine digits, 1, 2, 3, 4, 5, 6, 7, 8 and 9. I arranged them on the table in two groups, as shown in the illustration, so as to form two multiplication sums, and found that both sums gave the same product. You will find that 158 multiplied by 23 is 3,634, and that 79 multiplied by 46 is also 3,634. Now, the puzzle I propose is to rearrange the counters so as to get as large a product as possible. What is the best way of placing them? Remember both groups must multiply to the same amount, and there must be three counters multiplied by two in one case, and two multiplied by two counters in the other, just as at present.

81. ANSWER

In this case a certain amount of mere “trial” is unavoidable. But there are two kinds of “trials”—those that are purely haphazard, and those that are methodical. The true puzzle lover is never satisfied with mere haphazard trials. The reader will find that by just reversing the figures in 23 and 46 (making the multipliers 32 and 64) both products will be 5,056. This is an improvement, but it is not the correct answer. We can get as large a product as 5,568 if we multiply 174 by 32 and 96 by 58, but this solution is not to be found without the exercise of some judgment and patience.


But, you know what? I don’t think he’s right. Now, I appreciate that he probably had to spend hours or days trying out possibilities with a piece of paper and a fountain pen, and I just wrote the following 15 lines of Python in five minutes, but hey, he didn’t have to bear with his government trying to ban encryption, so let’s call it even.

from itertools import permutations
nums = [1,2,3,4,5,6,7,8,9]
values = []
for p in permutations(nums, 9):
    # split the nine digits into abc*de on one side and fg*hi on the other
    one   = p[0]*100 + p[1]*10 + p[2]
    two   = p[3]*10 + p[4]
    three = p[5]*10 + p[6]
    four  = p[7]*10 + p[8]
    if four > three: continue # or we'll see fg*hi and hi*fg as different
    if one*two == three*four:
        expression = "%s*%s = %s*%s = %s" % (
            one, two, three, four, one*two)
        values.append((expression, one*two))
values.sort(key=lambda x:x[1])
print("Solution for 1-9")
print("\n".join([x[0] for x in values]))

The key point here is this: the little programme above indeed recognises his proposed solutions (158*32 = 79*64 = 5056 and 174*32 = 96*58 = 5568) but it also finds two larger ones: 584*12 = 96*73 = 7008 and 532*14 = 98*76 = 7448. Did I miss something about the puzzle? Or am I actually in the rare position of finding an error in a Dudeney book? And all it took was seventy years of computer technology advancement to put me in that position. Maths, eh? Tch.
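For the avoidance of doubt, the arithmetic of those solutions checks out, and each one really does use the nine counters exactly once:

```python
# Dudeney's proposed best, and the two larger products the search found.
assert 174 * 32 == 96 * 58 == 5568
assert 584 * 12 == 96 * 73 == 7008
assert 532 * 14 == 98 * 76 == 7448

# Each solution uses the digits 1-9 exactly once.
for solution in ("174329658", "584129673", "532149876"):
    assert sorted(solution) == list("123456789")

print("all checks pass")
```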

It’s an interesting book. There are lots of money puzzles, in which I have to carefully remember that ha’pennies and farthings are a thing (a farthing is a quarter of a penny), there are 12 pennies in a shilling, and twenty shillings in a pound. There are some rather racist portrayals of comic-opera Chinese characters in a few of the puzzles. And my heart sank when I read a puzzle about husbands and wives crossing a river in a boat, where no man would permit his wife to be in the boat with another man without him, because I assumed that the solution would also say something like “and of course the women cannot be expected to row the boat”. I was then pleasantly surprised to discover that this was not the case: indeed, they were described as probably being capable oarswomen, and it was likely their boat to begin with! Writings from another time. But still as good as any puzzle book today, if not better.

February 01, 2018

Another year complete and another year in review post (see also my reviews from 2015 and 2016). I’m publishing this much later than I would have liked, but better late than never.

The usual disclaimer: these reviews are written primarily for myself, so sorry if it’s a little self-indulgent!

What went well

Highlights

Here are some of my highlights from 2017:

  • Went to my first Whisky Master Class at HTFW
  • Travelled to Swansea, Wales for a short break where we did some lovely coastal walks and discovered just how beautiful Rhossili Bay is
  • Saw Jack Whitehall, Jon Richardson and Russell Howard live
  • Rocked out at Download Festival (been on my bucket list for years)
  • Saw the classic FW14 driven round Silverstone at Williams’ 40th anniversary event (I’m a big Williams fan as I started watching F1 just as Damon Hill was fighting for the championship)
  • Attended Patterns Day in Brighton, which was a great choice seeing as it was my only conference of the year
  • Worked in Bournemouth for the best part of a week (would love to travel while working more)
  • I purchased a T300 RS GT Edition racing wheel and Playseat Challenge and played a load of racing games
  • I also purchased an iPad Pro 10.5″ and Apple Pencil and it’s a joy to use
  • Went on a tour at Cotswolds Distillery which is well worth visiting if you like gin or whisky
  • We had lunch at Man Behind the Curtain which was an amazing experience
  • Spent New Year’s in Edinburgh for Hogmanay (which mostly involved drinking and playing rook)
  • Ju (my wife) travelled to South Africa for 2 weeks to work with cheetahs (a dream of hers since she was young)

Mastermind retreat

The mastermind group I’m part of is still going strong (I’ve written about the benefits of mastermind groups). In October, we held our first mastermind retreat. We booked an Airbnb in Gloucestershire which served as a great place to work, plan, goal set, and chat about our businesses. I left the retreat feeling revitalised and with a greater clarity on my goals. I’m really excited about doing this again. I’d also love to do a solo retreat.

Gave my first public talk

In October, I gave my first public talk at SWM. The talk was called “How to run a freelance business without going crazy.” It was well received and I enjoyed the experience, despite being insanely nervous beforehand. Shout out to my friend Dave Redfern for asking and encouraging me to speak.

I’d thought about public speaking for years but had always put it off when I was asked. This feels like a big weight off my shoulders. It’s given me confidence that I can do public speaking.

Reading

I read 23 books in 2017, just missing out on my goal of 24. Most of the books I read I really enjoyed, and I’ll write up a post of my favourites. 24 books per year – 2 per month – feels about the right pace for me at the moment, so I’ll stick to that goal for this year. I do want to make an effort to read more fiction, though.

inline-block & Geek+Food

inline-block is a Slack community I started back in 2016. It’s now grown to over 230 folks. While it’s not crazy active (I like it that way), there is some really great discussion taking place. I also started a small meetup with my friend Dave Redfern called Geek+Food. The premise is super simple: every month we pick a new restaurant in Birmingham and then talk shop while eating good food (Birmingham has many great places to eat). It’s been fun.

Breaks from social media

Over the years, I’ve developed an addiction to Twitter. Launching Tweetbot with Alfred is muscle memory at this point. A slight lapse in concentration and BAM: Twitter is in front of me before I know it. So, in December, I went completely dark on social media and avoided Twitter, Facebook and Instagram for the month. It was easier than I expected and it felt great, like I’d kicked the addiction. I’m back on the socials now, but I don’t check in nearly as often. I’ll be taking regular breaks from social media again this year. It has had a huge impact on my productivity, that’s for sure.

Productivity and organisation

I felt more organised and in-control for most of 2017 than I had in previous years. I made a few small changes which made a big impact last year:

Weekly reviews. I had tried to do weekly reviews in the past, but I never really stuck to them. Last year I made it a goal to perform a weekly review every week. Often this takes place on a Sunday, but occasionally on a Friday or a Monday. It’s something I look forward to: get a coffee, sit down with my laptop (or iPad), and spend 30 minutes reviewing and going through things for the following week. I’ll write more about my weekly reviews, but suffice to say it has really helped me focus on what’s important and stay on top of the little things that often fall through the cracks or build up over time.

Introduced Sanebox for email. I’ve always been pretty good at email in the sense that I respond quickly and follow “inbox zero”. But it was taking up too much time. Around April time, I introduced Sanebox and it has been a game-changer. Sanebox works by learning which emails you’d like to appear in the inbox and which emails should be sent to a folder to review later. I use the SaneBox digest feature to go through my emails in the evening, while all the important stuff is sent straight to my inbox. It’s really reduced the amount of time I spend dealing with email.

Home office improvements

In February, we redecorated both bedrooms – one of which is my office. The main new addition to the office is a standing desk from Ikea which has been fantastic. I usually stand in the morning and sit in the afternoon. My office space feels great and I love working here (which isn’t always a great thing when my commute is 10 steps).

What went badly

One of the benefits of writing yearly reviews is that it allows you to look back and spot patterns or trends. There are a few recurring things in the “what went badly” list in particular that I want to address this year.

Writing

I sent out 4 newsletters and published 12 blog posts in 2017. I didn’t write anywhere near as much as I’d like to. About halfway through the year I stopped getting up early which disrupted my morning routine and completely wiped out my writing time. One of my priorities for 2018 is to get back into my morning routine.

Working from home

This is another recurring theme from my yearly reviews. Although I enjoy working from home and the solitude that comes with it, spending too much time at home alone can quickly drive you crazy. And to make matters worse, we had to say goodbye to our dog midway through the year which made working from home feel even more isolating.

So the goal is the same as in 2015: more coffee shops, more co-working spaces, more changes of scenery. Possibly even a hot desk, if I can find a suitable place.

Health and diet

I put on a little weight steadily over the year. Nothing major, but not heading in the direction I want to go. I did play plenty of squash throughout the year and my general level of fitness increased, but I want to shed a few pounds this year.

Didn’t launch a product

I really wanted to build and ship a product but that didn’t materialise. No excuses really, I just didn’t do it. Yes, I was busy with client work but I could have carved out a month for building something if I had wanted to.

Travel

Although I did visit Edinburgh, Swansea, and plenty of places in England, I didn’t travel as much as I wanted to this year. I didn’t go abroad and that’s something I’d like to fix this year. We’d also love to be more spontaneous and just go away for a weekend when we feel like it (we’re meticulous planners and so travel is always planned well in advance).

Learning

This isn’t something I expected to include on my list as I consider myself an avid learner. I’m always reading non-fiction books or blog posts about self-improvement or something I don’t know much about. One of the problems of being a freelancer is that you often work alone and don’t get the chance to learn from others around you. And I didn’t really schedule the time to learn new things. There’s plenty of web-based things I want to get a better handle on, such as CSS Grid Layout.

 

How I did against my 2017 goals:

  • Launch first product. Nope, that didn’t happen last year but it’s still something I’m working towards.
  • Write to my mailing list every two weeks. This didn’t happen, but I did restart my newsletter in a new monthly format towards the end of the year.
  • Publish 12 book notes. I read 23 books but only published book notes for 3 of them.
  • Save 4 months’ salary. Done. I started saving more last year.
  • One movie night per week. We did okay at this. It started off well, we then started missing date nights in the summer, but picked up again throughout winter.

Things I’m thinking about for 2018

Normally I’d set my goals for the year, but these days I prefer to set quarterly goals (which I wrote about in a recent post). The problem with setting yearly goals is that things often change. By working in 12 week blocks, you can change course more frequently if you need to.

So with that said, these aren’t goals per se, but just things I’m thinking about for 2018:

Website growth

Harry Roberts, on his website reaching 10 years of age:

“Having this website changed and shaped my career. If you don’t have a blog, I urge you, start working on one this weekend. Your own blog, with your own content, at your own domain. It might just change your life.”

It’s a sentiment I agree with. This website is my base. It attracts new clients. If I wanted to get a job, I can use this website as a portfolio. If I want to sell a product, my website is where I’ll do it. So investing in my website is a thing that will be useful for years to come.

Business growth

You may have noticed that “business” was in neither my good nor my bad list this year. It could have been in either. It was good in that it was my most profitable year to date and I was busy for most of it, but it was bad because I haven’t been building real assets. The problem I face with my business right now is that if I take some time off, or if I’m ill, then the business makes no money. This is why I want to explore building and selling products.

More writing

I want to get back into the groove of publishing regularly. I plan on continuing to send out a monthly newsletter, along with 2 or so posts per month.

Social media fasting

I don’t want to go whole hog and give up social media. I enjoy it and get a lot of value from it. But I also recognise the downsides: the huge amount of time it can take up, the distraction it creates, and the occasional negativity that can come from it. This year I want to keep my social media usage in check. That means giving it up for a week or a month at a time now and then, and monitoring my overall activity.

More travelling and photography

This year I’d really like to get back into my hobby of photography. And it’s a good excuse to travel more. I’d love to visit the Lake District this year and we have a few other trips planned.

Weekly gratitude journalling

I’ve tried journalling in different forms and it’s never really stuck. However, gratitude journalling is something I enjoy. So, each week, I plan to write down a couple of things that were awesome, exciting, or that I was grateful for. Every year passes by so quickly and life becomes a blur. It can be hard to remember what happened. Hopefully this gratitude journal is a small step in fixing that, and should make next year’s review easier!

That’s it from me. I hope you have a great 2018.

January 30, 2018

And Everything by Stuart Langridge (@sil)

“Rule Forty-two. All persons more than a mile high to leave the court.”
Everybody looked at Alice.
“I’m not a mile high,” said Alice.
“You are,” said the King.
“Nearly two miles high,” added the Queen.
“Well, I shan’t go, at any rate,” said Alice: “besides, that’s not a regular rule: you invented it just now.”
“It’s the oldest rule in the book,” said the King.
“Then it ought to be Number One,” said Alice.

It’s my birthday (again.........). I’m 21, for the second time around. Hooray!

So far it’s been a nice day, with lots of people wishing me happy birthday from midnight last night (including a rather lovely thing from Jono). I got a cool shirt off mum and dad, which I shall be wearing for this evening’s venture to Ghetto Golf, a sort of weird crazy golf place which is all neon and skulls and graffiti, with cocktails.

Dinner with Niamh this afternoon, too, which is cool. I’m still as worried about the future and the world as I was this time last year, but I can have a day off for my birthday. And I have friends. This helps. So I can do nice things; write some code, maybe publish the talk I did at Hackference, solve a problem or two. Eat biscuits. You know. Nice things. No ironing.

Many happy returns, me.

SEO is important. I know it, you know it, your pet goldfish knows it. So, there’s no need to lecture you on the crucialness of...

The post 5 SEO Myths that Need Debunking in 2018 appeared first on stickee.

January 29, 2018

January 26, 2018
