Last updated: March 26, 2017 02:22 PM (All times are UTC.)

March 24, 2017

Thanks to an existing customer, BanaBay Limited, giving a whole-hearted recommendation of how we at Serviceteam IT support them, we’ve added another fantastic new customer. We provide BanaBay with IP Telephony, Continuity, Office 365, IT Support and of course Fibre Internet. BanaBay were very gracious in allowing their potential next-door neighbours to assess the quality of the leased line services provided by Serviceteam IT, by […]

(Read more...)

March 22, 2017

I gave a talk to my team at ARM today on Working Effectively with Legacy Code by Michael Feathers. Here are some notes I made in preparation, which are somewhat related to the talk I gave.

This may be the most important book a software developer can
read. Why? Because if you don’t, then you’re part of the problem.

It’s obviously a lot easier and a lot more enjoyable to work on
greenfield projects all the time. You get to choose this week’s
favourite technologies and tools, put things together in the ways that
suit you now, and make progress because, well anything is progress
when there’s nothing there already. But throwing away an existing
system and starting from scratch makes it easy to throw away the
lessons learned in developing that system. It may be ugly, and patched
up all over the place, but that’s because each of those patches was
needed. They each represent something we learned about the product
after we thought we were done.

The new system is much more likely to look good from the developer’s point of view, but what about the users’? Do they want to pay again
for development of a new system when they already have one that mostly
works? Do they want to learn again how to use the software? We have
this strange introspective notion that professionalism in software
development means things that make code look good to other coders:
Clean Code, “well-crafted” code. But we should also have some
responsibility to those people who depend on us and who pay our way,
and that might mean taking the decision to fix the mostly-working
system we already have, rather than replace it.

A digression: Lehman’s Laws

Manny Lehman identified three different categories of software system:
those that are exactly specified, those that implement
well-understood procedures, and those that are influenced by the
environment in which they run. Most software (including ours) comes
into that last category, and as the environment changes so must the
software, even if there were no (known) problems with it at an earlier
point in its evolution.

He expressed Laws governing the evolution of software systems,
which describe how the requirements for new development are in conflict
with the forces that slow down maintenance of existing systems. I’ll
not reproduce the full list here, but for example on the one hand the
functionality of the system must grow over time to provide user
satisfaction, while at the same time the complexity will increase and
perceived quality will decline unless it is actively maintained.

Legacy Code

Michael Feathers’ definition of legacy code is code without tests. I’m
going to be a bit picky here: rather than saying that legacy code is
code with no tests, I’m going to say that it’s code with
insufficient tests. If I make a change, can I be confident that I’ll
discover the ramifications of that change?

If not, then it’ll slow me down. I even sometimes discard changes
entirely, because I decide the cost of working out whether my change
has broken anything outweighs the interest I have in seeing the change
make it into the codebase.

Feathers refers to the tests as a “software vice”. They clamp the
software into place, so that you can have more control when you’re
working on it. Tests aren’t the only tools that do this: assertions
(and particularly Design by Contract) also help pin down the software.
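Assertions can act as that vice even without a test harness. A minimal Python sketch with contract-style pre- and postconditions (the `transfer` function is invented for illustration, not taken from the book):

```python
def transfer(accounts, src, dst, amount):
    """Move money between two accounts, pinned down by contract-style
    assertions. If a later change violates an invariant, it fails
    loudly at the point of breakage rather than corrupting data."""
    # Preconditions
    assert amount > 0, "amount must be positive"
    assert accounts[src] >= amount, "insufficient funds"

    total_before = accounts[src] + accounts[dst]

    accounts[src] -= amount
    accounts[dst] += amount

    # Postcondition: money is conserved across the transfer
    assert accounts[src] + accounts[dst] == total_before
    return accounts

balances = {"alice": 100, "bob": 50}
transfer(balances, "alice", "bob", 30)
assert balances == {"alice": 70, "bob": 80}
```

The contracts don’t replace tests, but they clamp the behaviour in place for anyone who later edits the function.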

How do I test untested code?

The apparent way forward then when dealing with legacy code is to
understand its behaviour and encapsulate that in a collection of unit
tests. Unfortunately, it’s likely to be difficult to write unit tests
for legacy code, because it’s all tightly coupled, has weird and
unexpected dependencies, and is hard to understand. So there’s a
catch-22: I need to make tests before I make changes, but I need to
make changes before I can make tests.


Almost the entire book is about resolving that dilemma, and contains a
collection of patterns and techniques to help you make low-risk
changes to make the code more testable, so you can introduce the tests
that will help you make the high-risk changes. His algorithm is:

  1. identify the “change points”, the things that need modifying to
    make the change you have to make.
  2. find the “test points”, the places around the change points where
    you need to add tests.
  3. break dependencies.
  4. write the tests.
  5. make the changes.
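Step 4 is typically done with characterization tests: rather than asserting what the code should do, you assert what it currently does, so any behavioural change shows up immediately. A minimal Python sketch (the `legacy_price` function is invented to stand in for real legacy code):

```python
def legacy_price(quantity, customer_type):
    """Imagine this is tangled legacy code whose intent is unclear."""
    price = quantity * 9.99
    if customer_type == "trade":
        price *= 0.9
    if quantity > 100:
        price -= 50
    return price

def test_characterization():
    # Record what the code *does* today, not what we think it should
    # do. If a later change breaks one of these, we find out at once.
    assert abs(legacy_price(10, "retail") - 99.90) < 1e-6
    assert abs(legacy_price(10, "trade") - 89.91) < 1e-6
    assert abs(legacy_price(200, "retail") - 1948.00) < 1e-6

test_characterization()
```

The expected values are simply whatever the code produced when the test was written; the point is to preserve existing behaviour, bugs and all, until you deliberately decide to change it.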

The overarching model for breaking dependencies is the “seam”. It’s a
place where you can change the behaviour of some code you want to
test, without having to change the code under test itself. Some examples:

  • you could introduce a constructor argument to inject an object
    rather than using a global variable
  • you could add a layer of indirection between a method and a
    framework class it uses, to replace that framework class with a
    test double
  • you could use the C preprocessor to redefine a function call to use
    a different function
  • you can break an uncohesive class into two classes that collaborate
    over an interface, to replace one of the classes in your tests
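The first of those seams, injecting a collaborator through a constructor argument, can be sketched in Python (the `Greeter`, `Clock` and `FixedClock` classes are invented for illustration):

```python
import datetime

class Clock:
    """Production dependency: real wall-clock time."""
    def now(self):
        return datetime.datetime.now()

class Greeter:
    def __init__(self, clock=None):
        # Seam: the collaborator is injected through the constructor,
        # so tests can change this behaviour without editing Greeter.
        self.clock = clock if clock is not None else Clock()

    def greeting(self):
        hour = self.clock.now().hour
        return "Good morning" if hour < 12 else "Good afternoon"

class FixedClock:
    """Test double that always reports the same time."""
    def __init__(self, hour):
        self._hour = hour

    def now(self):
        return datetime.datetime(2017, 3, 22, self._hour, 0)

# In a test, exploit the seam; in production, omit the argument.
assert Greeter(FixedClock(9)).greeting() == "Good morning"
assert Greeter(FixedClock(15)).greeting() == "Good afternoon"
```

The code under test (`Greeter.greeting`) is exercised unchanged; only the enabling point of the seam, the constructor argument, differs between test and production.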

Understanding the code

The important point is that whatever you, or someone else, think
the behaviour of the code should be, your customers have paid
for the behaviour that’s actually there, and so that (modulo bugs) is
the thing you should preserve.

The book contains techniques to help you understand the existing code
so that you can get those tests written in the first place, and even
find the change points. Scratch refactoring is one technique: look
at the code, change it, move bits out that you think represent
cohesive functions, delete code that’s probably unused, make notes in
comments…then just discard all of those changes. This is like Fred
Brooks’s recommendation to “plan to throw one away”: you can take what
you learned from those notes and refactorings and go in again with a
more structured approach.

Sketching is another technique recommended in the book. You can draw
diagrams of how different modules or objects collaborate, and
particularly draw networks of what parts of the system will be
affected by changes in the part you’re looking at.

Cheerity UX & UI Design

Cheerity is a new startup based in New York, run by a team of people who have worked with charities their whole lives. Cheerity is the missing link between viral charity content and the cause. In many cases people joined in with viral campaigns without knowing what the cause was; Cheerity solves this problem.

I’ve been working with Cheerity’s team in New York creating UX & UI designs for their platform, which is currently in beta. They have already raised thousands of dollars for excellent charities and I can’t wait to see how they grow.

The post Cheerity UX & UI Design appeared first on .

March 21, 2017

I get very annoyed about politicians being held to account for admitting they were wrong, rather than forcefully challenged when they were wrong in the first place. Unless they lied, if someone was wrong and admits it, they should be congratulated. They have grown as a human being.

I am about to do something very similar. I’m going to start confessing some wrong things I used to think, that the world has come to agree with me about. I feel I should congratulate you all.

You can’t design a Database without knowing how it will be used

I was taught at university that you could create a single abstract data model of an organisation’s data. “The word database has no plural”, I was told. I tried to create a model of all street furniture (signs and lighting) in Staffordshire, in my second job. I couldn’t do it. I concluded that it was impossible to know which things were entities and which were attributes. I now know this is because models are always created for a purpose. If you aren’t yet aware of that purpose, you can’t design for it. My suspicion was confirmed in a talk at Wolverhampton University by Michael ‘JSD’ Jackson. The revelation seemed a big shock to the large team from the Inland Revenue. I guess they had made unconscious assumptions about likely processes.

Relations don’t understand time

(They would probably say the same about me.) A transaction acting across multiple tables is assumed to be instantaneous. This worried me. A complex calculation requiring reads could not be guaranteed to be consistent unless all accessed tables are locked against writes, throughout the transaction. Jackson also confirmed that the Relational Model has no concept of time. A dirty fix is data warehousing which achieves consistency without locking by the trade-off of guaranteeing the data is old.

The Object Model doesn’t generalise

I’d stopped developing software by the time I heard about the Object Oriented Programming paradigm. I could see a lot of sense in OOP for simulating real-world objects. Software could be designed to be more modular when the data structures representing the state of a real-world object and the code which handled state-change were kept in a black box with a sign on it that said “Beware of the leopard”. I couldn’t grasp how people filled the space between the objects with imaginary software objects that followed the same restrictions, or why they needed to.

A new wave of Functional Programming has introduced immutable data structures. I have recently learned through Clojure author Rich Hickey’s videos that reflecting state-change by mutating the value of variables is now a sin punishable by a career in Java programming. Functional Programmers have apparently always agreed with me that not all data structures belong in an object.

There are others I’m still waiting for everyone to catch up on:

The Writable Web is a bad idea

The Web wasn’t designed for this and isn’t very good at it. Throwing complexity bombs at an over-simplified model rarely helps.

Rich Hickey’s Datomic doesn’t appear to have fixed my entity:attribute issue

Maybe that one is impossible.

Agility vs Momentum by Andy Wootton (@WooTube)

[ This post is aimed at readers with at least basic understanding of agile product development. It doesn’t explain some of the concepts discussed.]

We often talk of software development as movement across a difficult terrain, to a destination. Organisational change projects are seen as a lightning attack on an organisation, though in reality they have historically proved much slower than the speed of light. Large projects often force through regime change for ‘a leader’. Conventionally, this leader has been unlikely to travel with the team. Someone needs to “hold the fort”. There may be casualties due to friendly firings.

Project Managers make ‘plans’ of a proposed ‘change journey’ from one system state to another, between points in ‘change space’, via the straightest line possible, whilst ignoring the passage of time which makes change possible. Time is seen as distance and its corollary, cost. The language of projects is “setting-off”, “pushing on past obstacles” and “blockers” such as “difficult customers”, along a fixed route, “applying pressure” to “overcome resistance”. A project team is an army on the march, smashing their way through to a target, hoping it hasn’t been moved. Someone must pay for the “boots on the ground” and their travel costs. This mind-set leads to managers who perceive a need to “build momentum” to avoid “getting bogged down”.

Now let us think about the physics:

  •  momentum = mass x velocity, conventionally abbreviated to p = mv.
    At this point it may also be worth pointing out Newton’s Second Law of Motion:
  • force = mass x acceleration, or F = ma
    (Interpreted by Project Managers as “if it gets stuck, whack it hard with something heavy.”)

What about “agile software developments”? There is a broad range of opinion on precisely what those words mean but there is much greater consensus on what agility isn’t.

People outside the field are frequently bemused by the words chosen as Agile jargon, particularly in the Scrum framework:
  • A Scrum is not held only when a product development is stuck in the mud.
  • A Scrum Master doesn’t tell people what to do.
  • Sprints are conducted at a sustainable pace.
  • Agility is not the same as speed. Arguably, in agile environments, speed isn’t the same thing as velocity either.

Many teams measure velocity, a crude metric of progress that is only useful for estimating how much work should be scheduled for the next iteration. It is often guessed in ‘story-points’, representing relative ‘size’, but in agile environments everything is optional and subject to change, including the length of the journey.

If agility isn’t speed, what is it? It is lots of things but the one that concerns us here is the ability to change direction quickly, when necessary. Agile teams set off in a direction, possibly with a destination in mind but aware that it might change. If the journey throws up unexpected new knowledge, the customer may wish to use the travelling time to reach a destination now considered more valuable. The route is not one straight line but a sequence of lines. It could end anywhere in change-space, including where it started (either through failing fast or the value of the journey being exploration rather than transportation.) Velocity is therefore current progress along a potentially windy road of variable length, not average speed through change-space to a destination. An agile development is really an experiment to test a series of hypotheses about an organisational value proposition, not a journey. Agile’s greatest cost savings come from ‘wrong work not done’.

Agility is lightweight, particularly on up-front planning. Agile teams are small and aim to carry everything they need to get the job done. This enables them to set off sooner, at a sensible pace and, if they are going to fail, to fail fast, at low cost. Agility delivers value as soon as possible and it front-loads value. If we measured velocity in terms of value instead of distance, agile projects would be seen to decelerate until they stop. If you are light, immovable objects can be avoided rather than smashed through. Agile teams neither need nor want momentum, in case they decide to turn fast.

March 17, 2017

I gave a talk at this year’s Ticketing Professionals conference – which advertises itself as “The Place Where Professionals Talk Ticketing”. It was rather vaguely titled ‘The API – what next for our 3 favourite letters?’ which gave me a pleasingly large target to aim at. This post is a write-up of that talk, it is not exactly the same as the […]

March 15, 2017

Take Smalltalk. Do I have an object in my image? Yes? Well I can use it. Does it need to do some compilation or something? I have no idea, it just runs my Smalltalk.

Take Python. Do I have the python code? Yes? Well I can use it. Does it need to do some compilation or something? I have no idea, it just runs my Python.

Take C.

Oh my God.

C is portable, and there are portable operating system interface specifications for the system behaviour accessible from C, but in practice you need C sources that are specific to the platform you’re building for. So you have a tool like autoconf or cmake that tests how to edit your sources to make them actually work on this platform, and performs those changes. The outputs from them are then fed into a thing that takes C sources and constructs the software.

What you want is the ability to take some C and use it on a computer. What C programmers think you want is a graph of the actions that must be taken to get from something that’s nearly C source to a program you can use on a computer. What they’re likely to give you is a collection of things, each of which encapsulates part of the graph, and not necessarily all that well. Like autoconf and cmake, mentioned above, which do some of the transformations, but not all of them, and leave it to some other tool (in the case of cmake, your choice of some other tool) to do the rest.

Or look at make, which is actually entirely capable of doing the graph thing well, but frequently not used as such, so that make all works but making any particular target depends on whether you’ve already done other things.

Now take every other programming language. Thanks to the ubiquity of the C run time and the command shell, every programming language needs its own build system named [a-z]+ake that is written in that language, and supplies a subset of make’s capabilities but makes it easier to do whatever it is that needs to be done by that language’s tools.

When all you want is to use the software.

March 14, 2017

In the previous post, we covered the reasons why you should include an email signature, and what information you should include. However, Office 365 email signature management is made possible with the application of mail flow rules. This enables the functionality to append email signatures at the server side, meaning that you get consistently applied […]

(Read more...)

March 13, 2017

Yuval Harari, author of the fantastic book Sapiens (which I’ve started and still need to finish), was a recent guest on The James Altucher Show. Go listen, it’s a great interview.

One of my favourite parts was Yuval’s brief thoughts on meditation. He explained that he starts and finishes every work day with one hour’s meditation. He explains:

“(Meditation) gives me balance, peace, and calmness and the ability to find myself.”

He continues:

“The idea of meditation is to forget about all the stories in your mind. Just observe reality as it is. What is actually happening right here, right now? You start with very simple things like observing the breath coming in and out of your nostrils or you observe the sensations in your body. This is reality.

For all of history, people have given more and more importance to imaginary stories and they’ve been losing the ability to tell the difference between fiction and reality. Meditation is one of the best ways to regain this ability and really tell the difference between what is real and what is a fiction in my mind.”

I’m a big fan of thought experiments. I like science but I’m too lazy to do real experiments. Why do something when you can think about doing it?

I’ve been observing the political manoeuvring around Brexit and 2nd referendums. I think people are saying things they don’t really believe in order to get an outcome they believe to be right and people are saying things which sound good, to hide the evil swirling beneath the surface.

I asked myself: Which is the greater wrong: doing a good thing for a bad reason or a bad thing for a good reason?

I thought:

‘A good thing’ is highly subjective, depending on your personal values and consequent belief in what is fair. A comparison of ‘bad things’ is probably even more fluid. I see it in terms of balance between good and harm to self and others. It’s complex.

‘Good’ and ‘bad’ reasons also depend on your personal targets and motivations along with another subjective moral evaluation of those.

An individual may see a good thing as a positive value and a bad thing as a negative value and believe that as long as the sum is positive, so is the whole package. People call this “pragmatism”. They also tell me it is easier to ask for forgiveness than permission. These people get things done and, generally, only hurt other people.

‘A reason’ sounds like dressing up something you feel you want in logic. Is that always reasonable?

We need to balance what we want and our chances of success against the risks and uncertainty of what we might lose or fail to achieve. To measure success objectively, we need to have specified some targets before we start.

Brexit didn’t have either a plan or targets. It appears to be driven by things that people don’t want. How will we know if it has succeeded or failed? We are told the strategy and tactics must be kept secret or the plan will fail and targets will be missed. If this was a project I was working on, I’d be reading the jobs pages every lunch time. I’ve stopped worrying about the thought experiment.

March 09, 2017

The first thing I did yesterday, on International Women’s Day 2017, was retweet a picture of Margaret Hamilton, allegedly the first person in the world to have the job title ‘Software Engineer’. The tweet claimed the pile of printout she was standing beside, as tall as her, was all the tweets asking “Why isn’t there an International Men’s Day?” (There is. It’s November 19th, the first day of snowflake season.) The listings were actually the source code which her team wrote to make the Apollo moon mission possible. She was the first virtual woman on the Moon.

I followed up with a link to a graph showing the disastrous decline of women working in software development since 1985, by way of an explanation of why equal opportunities aren’t yet a done deal. I immediately received a reply from a man, saying there had been plenty of advances in computer hardware and software since 1985, so perhaps that wasn’t a coincidence. This post is dedicated to him.

I believe that the decade 1975 – 1985, when the number of women in computing was still growing fast, was the most productive since the first, starting in the late 1830s, when Dame Ada Lovelace made up precisely 50% of the computer software workforce worldwide. It also happens to coincide approximately with my own time in computing: I first encountered it in about 1974 and stopped writing software in about 1986.

1975 – 1985:
As I entered: Punched cards then a teletype, connected to a 24-bit ICL 1900-series mainframe via 300 Baud acoustic coupler and phone line. A trendy new teaching language called BASIC, complete with GOTOs.

As I left: Terminals containing a ‘microprocessor’, screen addressable via ANSI escape sequences or bit-mapped graphics terminals, connected to 32-bit super-minis, enabling ‘design’. I used a programming language-agnostic environment with a standard run-time library and a symbolic debugger. BBC Micros were in schools. The X windowing system was about to standardise graphics. Unix and ‘C’ were breaking out of the universities along with Free and Open culture, functional and declarative programming and AI. The danger of the limits of physics and the need for parallelism loomed out of the mist.

So, what was this remarkable progress in the 30 years from 1986 to 2016?


  • Parallel processing research provided Communicating Sequential Processes and the Inmos Transputer.
  • Declarative, non-functional languages that led to ‘expert systems’. Lower expectations got AI moving.
  • Functional languages got immutable data.
  • Scripting languages like Python & Ruby for Rails, leading to the death of BASIC in schools.
  • Wider access to the Internet.
  • The read-only Web.
  • The idea of social media.
  • Lean and agile thinking. The decline of the software project religion.
  • The GNU GPL and Linux.
  • Open, distributed platforms like git, free from service monopolies.
  • The Raspberry Pi and computer science in schools.

Only looked good:

  • The rise of PCs to undercut Unix workstations and break the Data Processing department control. Microsoft took control instead.
  • Reduced Instruction Set Computers were invented, providing us with a free 30-year window to work out the problem of parallelism but meaning we didn’t bother.
  • By 1980, Alan Kay had invented Smalltalk and the Object Oriented paradigm of computing, allowing complex real-world objects to be simulated and everything else to be modelled as though it was a simulation of objects, even if you had to invent them. Smalltalk did no great harm but in 1983 Bjarne Stroustrup left the lab door open and C++ escaped into the wild. By 1985, objects had become uncontrollable. They were EVERYWHERE.
  • Software Engineering. Because writing software is exactly like building a house, despite the lack of gravity.
  • Java, a mutant C++, forms the largely unrelated brand-hybrid JavaScript.
  • Microsoft re-invents DEC’s VMS and Sun’s Java, as 32-bit Windows NT, .NET and C#, then destroys all the evidence.
  • The reality of social media.
  • The writeable Web.
  • Multi-core processors for speed (don’t panic, functions can save us.)

Why did women stop seeing computing as a sensible career choice in 1985 when “mine is bigger than yours” PCs arrived and reconsider when everyone at school uses the same Raspberry Pi and multi-tasking is becoming important again? Probably that famous ‘female intuition’. They can see the world of computing needs real functioning humans again.

We’re thrilled to announce the launch of the new CDI World website, after they came to us with the idea of building a new website. They needed a safe and secure site, one which would protect their data – and that of their clients. While stickee already hosted CDI World’s old site, this new project enabled […]

The post Launch of the brand new CDI World website appeared first on stickee - technology that sticks.

March 08, 2017

The United States is losing on the cyber-battlefield and faces a bleak threat landscape, according to DHS chairman Michael McCaul. But, he says, there is still hope to turn things around. Source: Cloud Security DHS Chairman Paints Bleak US Cybersecurity Picture

(Read more...)

March 07, 2017

SEO is important. I know it, you know it, your pet goldfish knows it. So, there’s no need to lecture you on the crucial importance of SEO because you’ve probably heard it a thousand times. However, what you might not be aware of is the deceitful myths bouncing around the technosphere regarding SEO. If you believe […]

The post 5 SEO myths that need debunking in 2017 appeared first on stickee - technology that sticks.

Office 365 email signature management for company-wide consistency is made possible with mail flow rules. There are a number of reasons why you might want to append an email signature to your emails. The foremost reason is making it easier for customers to contact you. An Office 365 signature looks professional and consistent, distinguishes your organisation, […]

(Read more...)

March 03, 2017

Because (I like to think) I’m human, I make models of the world around me. Because I’m a computer scientist/a bit weird, I write them down or draw pictures of them. Since I got interested in why some intelligent people have different political views to me, a couple of years ago, I’ve been trying to model the values which underlie people’s belief systems, which I believe determine their political views.

My working model for the values of Left-Right politics (I’m a fluffy compromise near the middle of this scale but I have other scales, upon which I weigh myself a dangerous radical) has been that The Left believe in Equality and The Right in Selfishness. As a radical liberal, I obviously think both extremes are the preserve of drivelling idiots – compromise is all. The flies in my ointment have been the selfishness of the Far Left and the suicidal economic tendencies of working class nationalists in wanting to #Brexit. My model clearly had flaws.

This morning I was amusing myself with a #UKIP fan who countered being told by a woman that it was best to have O type blood (presumably because it is the universal donor) by saying it was best to be AB, so he could receive any blood (a universal recipient.) On the surface this seems to confirm the selfishness theory but I made an intuitive leap that he thought he was too special to lose, which was far from the conclusion I’d arrived at, during our discussion.

My new, modified theory is that the Left think ‘no-one should get special treatment’ and the Right think ‘My DNA is special. I deserve more’. This belief that “I am/am not special” has almost no correlation with the evidence, or even with class. I have no evidence of whether the characteristic is inherited or learned but Michael Gove and members of the BNP clearly decided that they were special and deserve to be treated better than other people. Tony Benn, on the other hand, argued himself out of believing that he had a God-given right to a place in the House of Lords. Please let me know why I’m wrong.

Had you told me 3 months ago that I would be working for a digital agency, I would have laughed you all the way to the nearest cat cafe. Although, if you told me 6 years ago that I would spend the next 5 and a bit years working for a ticketing company, I would […]

March 02, 2017

Cisco is warning of a flaw that creates conditions susceptible to a DoS attack in its NetFlow Generation Appliance. Source: Cloud Security Cisco Warns of High Severity Bug in NetFlow Appliance

(Read more...)

A proof of concept bypass of Google’s CAPTCHA verification system uses Google’s own web-based tools to pull off the skirting of the system. Source: Cloud Security Google reCaptcha Bypass Technique Uses Google’s Own Tools

(Read more...)

March 01, 2017

IOActive Labs released a report Wednesday warning that consumer, industrial, and service robots in use today have serious security vulnerabilities. Source: Cloud Security Robots Rife With Cybersecurity Holes

(Read more...)

February 27, 2017

Hybrid Vigour by Andy Wootton (@WooTube)

Having been forced by Mrs. Woo to take a week of holiday from what she usually refers to as “staying at home, doing nothing”, I found myself on the Snowdrop Trail tour in the garden at Sunnycroft. I was quite surprised to discover that the leaflet I was handed had photographs of 17 of the forty-odd different varieties of snowdrop in the garden. Later in the day I was talking to our guide’s long-suffering wife, who explained that there were over 400 varieties in his garden, of the approximately four and a half thousand types currently known to be in existence.

People with a passion often interest me. Quite why these smallish, mostly white flowers that I would previously have assumed were all the same had become the focus of this man’s life was not clear, but his obvious fascination was infectious. It almost made me wish that I was a little less promiscuous in my obsessions. For our second snowdrop trip of the week, we visited another garden to see snowdrops at scale, lining the woodland floor. They all looked the same. I have much to learn.

My favourite Sunnycroft snowdrop fact was that when chasing the holy grail of a large ‘double’ snowdrop, gardeners had used the advantage of ‘hybrid vigour’ to mix and match the desirable characteristics of different species and create a giant amongst snowdrops. Take that, racial supremacists: you’re all snowflakes, by comparison!

February 25, 2017

Tsundoku by Graham Lee

I only have the word of the internet to tell me that Tsundoku is the condition of acquiring new books without reading them. My metric for this condition is my list of books I own but have yet to read:

  • the last three parts of Christopher Tolkien’s Histories of Middle-Earth
  • Strategic Information Management: Challenges and Strategies in Managing Information Systems
  • Hume’s Enquiries Concerning the Human Understanding
  • Europe in the Central Middle Ages, 962-1154
  • England in the Later Middle Ages
  • Bertrand Russell’s The Problems of Philosophy
  • John Stuart Mill’s Utilitarianism and On Liberty (two copies, different editions, because I buy and read books at different rates)
  • A Song of Stone by Iain Banks
  • Digital Typography by Knuth
  • Merchant and Craft Guilds: A History of the Aberdeen Incorporated Trades
  • The Indisputable Existence of Santa Claus
  • Margaret Atwood’s The Handmaid’s Tale

And those are only the ones I want to read and own (and I think that list is incomplete – I bought a book on online communities a few weeks ago and currently can’t find it). Never mind the ones I don’t own.

And this is only about books. What about those side projects, businesses, hobbies, blog posts and other interests I “would do if I got around to it” and never do? Thinking clearly about what to do next and keeping expectations consistent with what I can do is an important skill, and one I seem to lack.

Let’s Encrypt and TSOHost by Stuart Langridge (@sil)

This site used Cloudflare, because SSL is hard and my host wanted to charge me money for a certificate. However, that host, TSOHost, have now integrated Let’s Encrypt, and you can turn it on with two clicks. So, especially given the recent Cloudflare bug… I did. Hooray. SSL for me, being done by my host. Now I’ll never have to go to Cloudflare and purge the cache ever again. Good work TSOHost.

Fruit machine hacking by Stuart Langridge (@sil)

From a Wired article (warning: adblocker-blocker, in-browser popups) via LWN:

[T]he operatives use their phones to record about two dozen spins on a [slot machine] they aim to cheat. They upload that footage to a technical staff in St. Petersburg, who analyze the video and calculate the machine’s pattern based on what they know about the model’s pseudorandom number generator. Finally, the St. Petersburg team transmits a list of timing markers to a custom app on the operative’s phone; those markers cause the handset to vibrate roughly 0.25 seconds before the operative should press the spin button.

The timed spins are not always successful, but they result in far more payouts than a machine normally awards: Individual scammers typically win more than $10,000 per day.

From Scarne’s Complete Guide to Gambling, published 1974, on the “Rhythm Boys” scam from 1949:

During 1949 a couple of thousand rhythm players, most of whom were women, were beating the slots all over Nevada and various other sections of the country. Hundreds were barred from the slot rooms, and slander suits (which were later dropped) were filed by some of the barred players. My findings show that national slot machine revenue took a real nose dive, dropping from the 1948 figure of $700 million to a rockbottom low of $200 million in 1949. The rhythm players beat the slots during 1949 for half a billion dollars.

How did the original mysterious stranger happen to come up with his bright idea? And who was he? I did some further detective work and discovered that he was an Idaho farmer who, during his spare time, had been helping a slot-mechanic friend repair out-of-order machines. He discovered that the three wheels in certain makes of machine made exactly the same number of revolutions when the handle was pulled. He studied the clock fan which controls the length of time the reels spin and found that on some machines the clock went dead from seven to eight seconds after the reels stopped spinning. He also memorized the position of each symbol on each reel. In actual play, since he knew the relative positions on the reel of all the symbols, he could deduce from the nine visible symbols he could see through the window just where all the others were. Then, by timing himself to pull the machine’s lever at precisely the right instant and before the clock gear went dead, he found that he could manipulate the desired symbols onto the pay line. Most of the rhythm players who learned the system later could, as a rule, control only the cherry on the first reel, but even that was good enough to empty the payoff coin tube; it just took a little longer.

Everything old is new again. Including exploiting insufficiently-random random-number-generators to make money in Vegas.

February 24, 2017

A standard password change API by Stuart Langridge (@sil)

Wouldn’t it be nice if there were some sort of standard password-change API that websites all implemented? Then when there’s some sort of breach and you have to change a bunch of passwords1 you could just download a list of domains that need fixing and give it to your password manager, and then your password manager would use the standard password-change API on each of those sites to change your password to something else. Literally one click; instead of reading stern concerned messages from everyone on Twitter saying “you should change your passwords now!” one can just click one button and, bish bash bosh, job done. That’d be lovely. Maybe Chrome’s password manager would build it in and automatically fetch sites that need updating from a central list and then I’d be secured without even knowing about it!2

Obvious questions:

What about people without a password manager?

Yeah, they’re no better off under this plan. But they’re no worse off. And they were likely already using guessable passwords. This problem needs fixing, certainly (have people use password managers, make them easier to use, eliminate passwords entirely, many other suggestions) but fixing it is not the goal of this plan.

How does the password manager know where to look?

Put a file in /.well-known which describes the location of the endpoint and the parameters that need to be passed to it or something. That seems an easy problem to solve; your password manager knows the domain, so it just hits https://domain/.well-known/password-change.json and gets {location: '/std-pwchange', required_parameters: {username: "username", password: "password"}} or something. The detail here can be worked out.
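As a sketch of what a password manager might do with that discovery document (all field names, URLs, and the helper function here are invented for illustration, matching the made-up example above; a real spec would nail these down):

```python
import json

# Hypothetical discovery document, as might be served from
# https://example.com/.well-known/password-change.json
discovery = json.loads(
    '{"location": "/std-pwchange",'
    ' "required_parameters": {"username": "username", "password": "password"}}'
)

def build_change_request(domain, username, new_password, doc):
    """Work out the URL and form body a password manager would POST."""
    url = "https://" + domain + doc["location"]
    params = doc["required_parameters"]
    body = {
        params["username"]: username,
        params["password"]: new_password,
    }
    return url, body

url, body = build_change_request("example.com", "stuart", "n3w-s3cret", discovery)
# url  → "https://example.com/std-pwchange"
# body → {"username": "stuart", "password": "n3w-s3cret"}
```

The point of the indirection through required_parameters is that sites keep their existing form field names; the manager just maps its values onto whatever names the site declares.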

Doesn’t this make compromising people’s accounts easier?

I don’t think so, but I might be wrong. At the moment, if I discover your master password I can’t do anything with it without access to your password manager’s database; if I’ve got both your master password and access to your passwords database then I can manually go and steal all your accounts everywhere and change all your passwords. Having this doesn’t make it more likely; it just makes it less drudge-work for an attacker to do.

What about sites that require two-factor auth?

Yeah, this won’t work for them. Then again, if the site requires two-factor auth, having your password potentially compromised in a breach is not as big a deal, right? So the endpoint can return needs-manual-update and then your password manager pops up a box saying “you have to manually update your password on the following sites: (list of links)”. Which is what it would do for sites that inexplicably have not adopted this idea anyway.

Why would anyone adopt this?

Same reason anyone adopts anything: it seems a good plan, or everyone else is doing it. This would certainly make life easier for users of password managers3, and both sites and password managers can advertise “we make your life easier when this happens” as a feature.

Have I missed a reason why this would be a bad idea? It’d need speccing out in detail, obviously, but the concept to me seems good…

Update: there’s been a suggestion of one possible spec for such an API, for anyone who wants to check it out.

  1. this week it was Cloudflare, but there’ll be another next week no doubt
  2. and a bunch of people would turn this off or never turn it on, but that’s fine, and they’re probably using some different manager already anyway
  3. I’m told that LastPass actually already supports this auto-password-change idea for lots of sites. Presumably they’re doing a little bit of custom code for each site to know where the password change form is? This would just standardise that and allow a password manager to do it automatically without any work at all, which would be obviously lovely for all concerned

stickee are proud to be sponsoring the renowned student Hackathon hosted by the University of Birmingham. At stickee, we love all things tech. With creativity at the core of everything we do, the opportunity to help students nurture their creative minds was a chance we couldn’t miss. In supporting the event, Development Director at stickee, […]

The post The BrumHack Hackathon appeared first on stickee - technology that sticks.

Three years ago, I had an extension built on the back of my house. I went through the usual process of getting quotes from a selection of builders and then chose one that was competitive and who I liked. Easy peasy lemonade. But everyone I knew who had gone through the same process told me […]

The post 8 tips for choosing a web development agency appeared first on stickee - technology that sticks.

February 20, 2017

Everyone running their own business except me probably already knows this. But, three years in, I think I’ve finally actually understood in my own mind the difference between a dividend and a director withdrawal. My accountant, Crunch1, have me record both of them when I take money out of the company, and I didn’t really get why until recently. When I finally got it, I wrote myself a note that I could go back to and read when I get confused again, and I thought I’d publish that here so others can see it too.

(Important note: this is not financial advice. If my understanding here differs from your understanding, trust yourself, or your accountant. I’m also likely glossing over many subtleties, etc, etc. If you think this is downright wrong, I’d be interested in hearing. If you think it’s over-simplified, you’re doubtless correct.)

A dividend is a promise to pay you X money.

A director withdrawal is you taking that money out.

So when a pound comes in, you can create a dividend to say: we’ll pay Stuart 80p.

When you take the money out, you record a director withdrawal of 80p.

Dividends are IOUs. Withdrawals are you cashing the IOU in.

So when the “director’s loan account is overdrawn”, that means: you have recorded dividends of N but have recorded director withdrawals of more than N, i.e., you’ve taken out more than the company wants to pay you. This may be because you are owed the amount you took, and recorded director withdrawals for all that but forgot to do a dividend for it, or because you’ve taken more than you’re allowed.
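The bookkeeping above reduces to a toy model (just a sketch of the arithmetic, not financial advice; all names invented):

```python
# Dividends are IOUs; withdrawals cash them in.
dividends_declared = 0.80   # "we'll pay Stuart 80p"
money_withdrawn = 1.00      # what was actually taken out

# "Money owed to Stuart": promised but not yet taken (negative here).
owed_to_director = dividends_declared - money_withdrawn

# The director's loan account is overdrawn when more has been taken
# than was promised -- either a dividend was forgotten, or too much
# was taken.
loan_account_overdrawn = money_withdrawn > dividends_declared  # True here
```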

When creating a new dividend (in Crunch) it will (usefully) say what the maximum dividend you can take is; that should be the maximum takeable while still leaving enough money in the account to pay the tax bill.

In the Pay Yourself dashboard (in Crunch) it’ll say “money owed to Stuart”; that’s money that’s been promised with a dividend but not taken out with a withdrawal. (Note: this may be because you forgot to do a withdrawal for money you’ve taken! In theory it would mean money promised with a dividend but not taken, but maybe you took it and just didn’t do a withdrawal to record that you took it. Check.)

  1. who are really handy, online, and are happy to receive emails in which I ask stupid questions over and over again: if you need an accountant too, this referral link will get us both some money off

February 19, 2017

The Quiet Voice by Stuart Langridge (@sil)

It’s harder to find news these days. On the one hand, there’s news everywhere you turn. Shrieking at you. On the other, we’re each in a bubble. Articles are rushed out to get clicks; everything’s got a political slant in one direction or another. This is not new. But it does feel like it’s getting worse.

It’s being recognised, though. Buzzfeed have just launched a thing called “Outside Your Bubble”, an admirable effort to “give our audience a glimpse at what’s happening outside their own social media spaces”; basically, it’s a list of links to views for and against at the bottom of certain articles. Boris Smus just wrote up an idea to add easily-digestible sparkline graphs to news articles which provide context to the numbers quoted. There have long been services like Channel 4’s FactCheck and AllSides which try to correct errors in published articles or give a balanced view of the news. Matt Kiser’s WTF Just Happened Today tries to summarise, and there are others.

(Aside: I am bloody sure that there’s an xkcd or similar about the idea of the quiet voice, where when someone uses a statistic on telly, the quiet voice says “that’s actually only 2% higher than it was under the last president” or something. But I cannot for the life of me find it. Help.)

So here’s what I’d like.

I want a thing I can install. A browser extension or something. And when I view an article, I get context and viewpoint on it. If the article says “Trump’s approval rating is 38%”, the extension highlights it and says “other sources say it’s 45% (link)” and “here’s a list of other presidents’ approval ratings at this point in their terms” and “here’s a link to an argument on why it’s this number”. When the article says “the UK doesn’t have enough trade negotiators to set up trade deals” there’s a link to an article claiming that that isn’t a problem and explaining why. If it says “NHS wait times are now longer than they’ve ever been” there’s a graph showing what the response times are, and linking to a study showing that NHS funding is dropping faster than response times are. An article saying that X billion is spent on foreign aid gets a note on how much that costs each taxpayer, what proportion of the budget it is, how much people think it is. It provides context, views from outside your bubble, left and right. You get to see what other people think of this and how they contextualise it; you get to see what quoted numbers mean and understand the background. It’s not political one way or the other; it’s like a wise aunt commentator, the quiet voice that says “OK, here’s what this means” so you’re better informed about how it’s relevant to you and what people outside your bubble think.

Now, here’s why it won’t work.

It won’t work because it’s a hysterical amount of effort and nobody has a motive to do it. It has to be almost instant; there’s little point in brilliantly annotating an article three days after it’s written when everyone’s already read it. It’d be really difficult for it to be non-partisan, and it’d be even more difficult to make people believe it was non-partisan even if it was. There’s no money in it — it’s explicitly not a thing that people go to, but lives on other people’s sites. And there aren’t browser extensions on mobile. The Washington Post offer something like this with their service to annotate Trump’s tweets, but extending it to all news articles everywhere is a huge amount of work. Organisations with a remit to do this sort of thing — the newly-spun-off Open News from Mozilla and the Knight Foundation, say — don’t have the resources to do anything even approaching this. And it’s no good if you have to pay for it. People don’t really want opposing views, thoughts from outside their bubble, graphs and context; that’s what’s caused this thing to need to exist in the first place! So it has to be trivial to add; if you demand money nobody will buy it. So I can’t see how you pay the army of fact checkers and linkers you need to run this. It can’t be crowdsourced; if it were then it wouldn’t be a reliable annotation source, it’d be reddit, which would be disastrous. But it’d be so useful. And once it exists they can produce a thing which generates printable PDF annotations and I can staple them inside my parents’ copy of the Daily Mail.

February 18, 2017

Postgres contains a wealth of functions that provide information about a database and the objects within it. The System Information Functions section of the official documentation provides a full list. There are a huge number of functions covering everything from the current database session to privileges and function properties.


Find an object's oid

A lot of the info functions accept the Object Identifier Type for objects in the database. This can be obtained by casting to regclass (also described in the oid docs) then to oid:

select 'schema_name.relation_name'::regclass::oid;

Where relation_name is a table, view, index, etc.

View definition

select pg_get_viewdef('schema_name.view_name'::regclass::oid);

Or in psql you can use one of the built in commands:

\d+ schema_name.view_name

Function definition

Returns the function definition for a given function. Many built-in functions don't reveal much because they aren't written in SQL, but for those that are you'll get the complete create function statement. For example, to view the definition of the PostGIS st_colormap function:

select pg_get_functiondef('st_colormap(raster, integer, text, text)'::regprocedure);


Privileges

A whole host of functions exist to determine privileges for schemas, tables, functions, etc. Some examples:

Determine if the current user can select from a table:

select has_table_privilege('schema_name.relation_name', 'select');

Note: The docs state that "multiple privilege types can be listed separated by commas, in which case the result will be true if any of the listed privileges is held". This means that in order to test a number of privileges it is normally better to test each privilege individually, as select has_table_privilege('schema_name.relation_name', 'select,update'); would return t even if only select is granted.
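So, to verify that a user holds both privileges (sketch; substitute your own relation name), check each one on its own rather than in a combined call:

```sql
-- Combined check: true if ANY listed privilege is held,
-- which is usually not what you want.
select has_table_privilege('schema_name.relation_name', 'select,update');

-- Better: test each privilege individually and require both to be t.
select has_table_privilege('schema_name.relation_name', 'select');
select has_table_privilege('schema_name.relation_name', 'update');
```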

Determine if a user can use a schema:

select has_schema_privilege('schema_name', 'usage');

February 17, 2017

In episode 44 of Cortex, Myke and Grey discussed time tracking. I have a love/hate relationship with time tracking. As an employee, I hated it. It made no sense to track 7.5 hours per day (because who does that much productive work in a day?). But as someone who is self-employed, it makes total sense (and I can see why I was made to do it as an employee).

As Grey says in the episode: if you care about how you’re spending your time, track your time.

Myke and Grey talk about the revelations they had while tracking their time, which match my own:

  • Your brain has no idea how much time you’re spending on stuff. You can’t trust yourself to have any sense of how long it takes to do things.
  • You think you’re working way more than you actually are.
  • You’ll spot patterns. You’ll notice that those busy periods will catch up with you.

It’s worth a listen. And FWIW, I track my time using Freckle.

With the UK economy becoming increasingly digital, cyber security is central to national security and continues to play an important role in ensuring the UK is a safe place to do business. There is a critical need to increase the availability of cyber security skills, so the Department for Culture, Media and Sport (DCMS) is […]

(Read more...)

According to Marcus Sachs, CSO with the North American Electric Reliability Corporation, doomsday fears of a cyberattack against the U.S. electric grid are overblown. Source: Infrastructure Security Squirrels, Not Hackers, Pose Biggest Threat to Electric Grid

(Read more...)

February 16, 2017

One of the services we offer here at stickee is a white label mobile phone comparison service. We are in the process of rebuilding and modernising this service so that all the websites can run on one back end and off of a single code base. That means, multiple websites will need to run on […]

The post Analytics for multiple store fronts with Google Tag Manager appeared first on stickee - technology that sticks.

February 15, 2017

A RSA Conference panel tackles the difficulty in defining cyberwar. Source: Infrastructure Security Setting Expectations Between States on Cyberwar

(Read more...)

Interface design matters. A lot. A user-friendly, visually pleasing and responsive website will generate more traffic, and retain interest and customers, than a badly designed one. That’s just common sense! User experience (UX) and user interface (UI) are often used interchangeably, however they represent different aspects of website design. UX describes ‘the process of development […]

The post Let’s talk about UX and UI appeared first on stickee - technology that sticks.

February 14, 2017

Having run a digital agency for over 10 years, the one thing I do know about is the importance of a good team (especially when you can’t design or develop a website yourself!). It is essential that we all stay passionate, motivated and on top of our game at Substrakt, otherwise quality suffers, the enjoyment […]

Picking a domain name for your new website is both exciting and terrifying. A bit like picking a name for your first born. What if they get bullied? Is there a potential nickname you haven’t thought of? Do the initials spell out something rude? Children have at least one advantage over website URLs: you don’t […]

The post The strength of your domain name appeared first on stickee - technology that sticks.

February 13, 2017

We always like to shout about our customer success, and we are pleased to announce that on Friday night at the Edgbaston Cricket Ground, Arun Luther of Genesis Innovations picked up the 2017 Signature Award for Excellence in Finance. The Signature Awards celebrate the best in those involved in the wealth creation process and with a […]

(Read more...)

A sudden wave of attacks against insecure databases resulting in ransom demands points to wave of data hijacking attacks. Source: Cloud Security Open Databases a Juicy Extortion Target

(Read more...)

February 10, 2017

Bootstrapping Perl by Alastair McGowan-Douglas (@Altreus)

This blog post shows a simple, hands-off, automated way to get yourself a Perl environment in user land. If you already know enough about all of this to do it the hard way, and you prefer that, then this post is not aimed at you.

Here's what we are going to achieve:

  • Set up a Perl 5.24 installation
  • Set up your environment so you can install modules
  • Set up your project so you can install its dependencies

These are the things people seem to struggle with a lot, and the instructions are piecemeal all over the internet. Here they are all standing in a row.

Perlbrew your Perl 5.24

As this blog post becomes older, that number will get bigger, so make sure to alter it if you copy this from the future.

Do this as root:

apt-get install perlbrew

If your system doesn't package perlbrew, the perlbrew site suggests piping the installer straight into a shell:

fetch -o- | sh

Whatever else:

curl -L | bash

Haha, yeah, right.

Once you've installed perlbrew, log out of root and init it as your user. Then install a perl. This will take a while.

perlbrew init
perlbrew install perl-5.24.0

There, you now have a Perl 5.24.0 installation in your home folder. However, which perl will still say /usr/bin/perl, so change that:

perlbrew switch perl-5.24.0

It will have already told you that you need to alter your .bashrc or suchlike, with something like this:

source $HOME/perl5/perlbrew/etc/bashrc

You should do that.

Perlbrew does other stuff - see the perlbrew documentation for details.


Install cpanm

You want to be able to install modules against your new perl.

You will have to reinstall modules under every perl you have if you want to use the same modules under different versions. This is because of reasons.1

perlbrew install-cpanm

Now you can use cpanm to install modules. If you install a new Perl with perlbrew, you will have to

perlbrew switch $your_new_perl
perlbrew install-cpanm

All over again. If you're dealing with multiple Perl versions for a reason, you've probably already read the docs enough that you know which commands to use.


The cpanfile

A cpanfile is a file in your project that lists the dependencies it requires. The purpose of this file is for when you are developing a project, and thus you haven't actually installed it. It looks like this:

requires "Moose";
requires "DBIx::Class" => "0.082840";
requires "perl" => "5.24";

test_requires "Test::More";

You use it like this:

cpanm --installdeps .

The . refers to the current directory, of course, so you run this from a place that has a cpanfile in it.

The full syntax is on CPAN.

Purpose of cpanfile

A "project" here refers to basically anything you might put on CPAN - a distribution. It might be a module, or just some scripts, or a whole suite of both of those things.

The point is it's a unit, and it has dependencies, and you can't run the code without satisfying those dependencies.

If you install this distribution with cpanm then it will automatically install the dependencies because the author set up the makefile correctly so that cpanm knew what the dependencies were. cpanm also puts the modules in $PERL5LIB and the scripts in $PATH so that you can use them.

If you have the source code, either you are the author, or at least you're a contributor; you don't want to run the makefile just to install the dependencies, because this will install the development version of the module too. Nor do you want to require your contributors to install the whole of dzil just to contribute to your module. So, you provide a cpanfile that lists the dependencies they require to run or develop your module or scripts.

1 The primary reason is that every Perl version has a slightly different set of development headers, so any modules written in C will be incompatible. It's too much effort to separate them and disk space is cheap; so we just keep separate libraries and avoid the problem.

February 09, 2017

My Halloween box isn't four months late, it's eight months early

Back in September 2016, inspired by a bunch of cool-looking projects I found on Instructables, along with access to a laser cutter and some stock art, I decided to fully embrace this whole maker thing by making a project of my own.

The art on the front is based on a stock image of a silhouetted monster with a girl. I figured if I were to swap the polarity of the image, so everything other than the monster was a silhouette, it would look really cool to have the monster glow different colours.

If you'll allow me to be all pretentious for just a moment, I always imagined the monster as the girl's imaginary friend. I could go on to talk about his non-existence being represented by the fact he's transparent, and that his changing colour represents his ability to adapt to her needs - but frankly, that's all bollocks. I just figured it would look nice.

Anyway, the only reason this project exists at all is because of community access to the maker equipment at the Barclays Eagle Lab in Birmingham on Friday afternoons. At first, I started playing with their 3D printers and was successful in fabricating a smart watch stand and a tentacle phone holder for my flatmate - although my attempt at printing a Make bot failed.


But it was their laser cutter that I was most in awe of. The guy running the lab, Dan, said the laser cutter was quite popular as it's something which let people really explore their creative sides. Sure, 3D printers give a lot more options, but to do anything unique with them (rather than just printing models from Thingiverse) you need to learn 3D modelling software. The laser cutter is far more accessible, however, as it feeds off 2D images, which are a lot easier to produce.

My project consisted of three main parts:

  • An opaque acrylic front panel, with the transparent monster design carved into it.
  • A matrix of multicoloured LEDs to shine through the transparent design.
  • A box to put it all into.

The box turned out to be the easiest part of the project. I was able to design a laser cutter compatible template with MakerCase and convert it into the required CorelDRAW file. The design included a hole in the front through which you'll be able to see the acrylic panel.

I don't think the laser cutter was configured correctly, as it took several passes for the laser to fully penetrate the wood, which resulted in a lot of burn marks, but this didn't bother me too much as it was going to be painted anyway. Plus, mmmmmmm - burning wood smell.

After throwing a frame clamp set and some wood glue into the mix, I soon assembled five of the six sides of the box. I left the back panel off as this was my access point for inserting the front panel and LEDs.

Talking of LEDs, they're next. Although I had played with Neopixels before, this didn't require anything nearly so complex, so I went for some regular 12v RGB LED strips. I mounted these in a grid on the back panel (a process which required a lot of soldering and hot glue).

The last bit was the acrylic front panel, which turned out to be a massive pain in the arse.

It's made out of a material called TroLase Reverse, so called because:

[It] comprises of a transparent acrylic fascia with a coloured coating on the reverse side. By reverse engraving your image into the coloured layer you expose clear text/image which can either be infilled with your choice of acrylic paint or backlit for an effective contrast.

The first issue I had was obtaining the stuff, which, for one reason or another, took a very long time to arrive in the lab, and is why I've only just finished the project - four months after the original Halloween 2016 deadline.

The second issue was the design itself. As such a large area of the acrylic needed to be etched away, it was impossible to do so without the heat of the laser warping the material. It took three attempts to get something halfway decent, but even then it was still massively warped, and had to sit in my oven for a couple of minutes to flatten out.

Anyway, some further assembly, paint, and glue later, I'm happy to say the project is finished. It's far from perfect, and very obviously produced by someone that was making it up as he went along, but I am proud that I was able to finish it, and seeing as it's something I've never done before, I think it's turned out pretty good.

That said, if I were to do it again (which is something I really would like to do at some point), there are a few things I would change.

In the first instance, I think I'd opt to simply buy a shadow box rather than build my own. Before I started this project, I didn't even know they were a thing, but now I know they exist, I can see myself using them for a lot of projects.

One other thing I'd change is to use a CNC router to fabricate the front panel design, rather than a laser cutter. At this point, I don't even know if that's a realistic prospect. What I do know is that whatever awe I once had for laser cutters has been replaced with an awe for CNC routers, and I really hope I can own one one day.

Finally, I think I'd find a way of dispersing the light from the LEDs better. It's very obvious there are multiple LEDs behind the acrylic, and I'm finding it quite distracting from the detail of the monster design. Maybe some strategically placed tissue paper would do the job?

Anyway, as much as I would like to provide a full visual history of the build, I'm nowhere near that organised, but I might be able to gather together a gallery of the work in progress images that I sent to my friends over the past 6 months. Watch this space.

I’ve been eagerly awaiting the release of AirPods since they were announced in September last year. Mine finally arrived after ordering in late December.

Here are my initial thoughts after just over a day’s use:

  • The charging case is great. It fits nicely in a trouser or jacket pocket. The “click” of the magnetic lid closing is incredibly satisfying, as is the way the AirPods slide into the case.
  • Syncing to my iPhone took seconds. It was the perfect first-use experience. Open packaging, remove AirPods from charging case, click Connect on my iPhone, start using them.
  • They fit snugly in my ears. AirPods are the same shape as EarPods, so if EarPods fit you, these will too.
  • They haven’t fallen out of my ears yet (although that’s with light use: walking, house chores, etc). They sit better than EarPods, I assume because they don’t have the added weight of a cable.
  • They do look a little ridiculous. But I don’t care.
  • Sound quality is noticeably better than EarPods. Good enough for casual listening and podcasts. Yes, audio nerds, I know there are better sounding headphones for the price point.
  • Recharging the AirPods in the case is quick. 10 minutes and they’ve got another few hours of use.
  • The range is surprisingly good. I can have my phone charging at the opposite end of the house, although it does occasionally stutter at this distance.
  • The lack of volume control and skip buttons is annoying, but not reason enough for me to stop using them.
  • And the biggest weakness: Siri. Double tapping on an ear bud will invoke Siri, but Siri is still slower and less accurate than many of its competitors.

February 08, 2017

Noted by Marc Jenkins (@marcjenkins)

There’s a whole bunch of stuff – thoughts, ideas, links, quotes, etc. – that I don’t publish. I’ve created, without realising it, expectations for what I deem to be a worthy post. An idea has to be fully-formed before it gets published. Which is just daft.

So, in order to share stuff I wouldn’t normally, I’ve created a new section I’ve ingeniously named “Notes”.

Those expectations no longer apply. I now have a place to share thoughts (that’s not Twitter), no matter how short, silly, or trivial they are.

I’ve been using Analytics in one way or another for 10 years now. In that time new features have been added at such a rate that keeping up has been a job in itself. You may well have missed this one, I did. But back in September Google added a feature I’d long since given […]

The post Migrating properties in google analytics appeared first on stickee - technology that sticks.

Used on 72% of all websites, it’s pretty safe to assume that most web developers will run into JavaScript from time to time. Albeit in a variety of different shapes and sizes, such as jQuery, React, Vue.js or AngularJS, there’s no avoiding it. Yet, somehow, I did. I was a professional LAMP developer for seven […]

Full Stack by Graham Lee

A full-stack software engineer is someone who is comfortable working at any layer, from code and systems through team members to customers.

February 06, 2017

So now 2017 is bedded in and more than just a hangover from Christmas and New Year we can take a deep breath and consider, just for a minute, what we’re up to here in our Research and Development wing of stickee. So then, 2017 started with the launch of the new brand and new […]

The post RnD Update for February 2017 appeared first on stickee - technology that sticks.

Loyalty is dead and buried. British consumers are switching banks, energy suppliers, broadband, TV and insurance providers, more and more often – and the main driving factor is price. These super-savvy consumers are ruthless, and they’ll stop at nothing to save a few pounds, which is why price comparison websites like and have […]

The post You’re not always your customers’ first choice appeared first on stickee - technology that sticks.

A recent batch of vulnerabilities in Honeywell building automation system software epitomizes the lingering security issues around SCADA and industrial control systems. Source: Infrastructure Security ICS, SCADA Security Woes Linger On

(Read more...)


Creating a bespoke illustration can be a lengthy process, and once it’s done there is often no record of the steps that were taken to create the image. This time lapse video was recorded within the iPad app ‘Procreate’, which was used to create this cricketing batsman illustration that is used to promote our […]

The post Time lapse of a poster being created appeared first on stickee - technology that sticks.

FOSDEM by Graham Lee

My current record of FOSDEM attendance sees me there once per decade: my first visit was in 2007 and I’m having breakfast in my hotel at the end of my second trip. I should probably get here more often.

Unlike a lot of the corporate conferences I’ve been to in other fields, FOSDEM is completely free and completely organised by its community. An interesting effect of this is that while there’s no explicit corporate presence, you’ll see companies represented if they actually support free and open source software as much as they claim. Red Hat doesn’t have a stand, but you can pick up business cards from the folks at CentOS, Fedora, GNOME, ManageIQ…

When it comes to free software, I’m a jack of many trades and a master of none. I have drive-by commits in a few different projects including FreeBSD and clang, and recently launched the GNUstep developer guide to add some necessary documentation, but am an expert nowhere.

That makes FOSDEM an exciting selection box of new things to learn, many of which I know nothing or little about. That’s a great situation to be in; it’s also unsurprising that I know so little as I’ve only been working with free software (indeed, any software) for a little over a decade.

February 05, 2017

Coercion over configuration.

February 04, 2017

There was no need to build a package management system since CPAN, and yet npm is the best.
Wait, what?

Every time a new programming language or framework is released, people seem to decide that:

  1. It needs its own package manager.

  2. Simple algorithms need to be rewritten from scratch in “pure” $language/framework and distributed as packages in this package manager.

This is not actually true. Many programming languages – particularly many of the trendy ones – have a way to call C functions, and a way to expose their own routines as C functions. Even C++ has this feature. This means that you don’t need any new packaging system: if you can deploy packages that expose C functions (whatever the implementation language), then you can use existing code, and you don’t need to rewrite everything.
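As a minimal sketch of this point, here’s Python’s standard `ctypes` module calling a function from the C standard library already installed on the system. Nothing here is specific to Python; most of the trendy languages have an equivalent foreign-function mechanism, which is exactly why a wrapped C library can serve every one of them without a rewrite.

```python
import ctypes
import ctypes.util

# Locate and load the system's C standard library.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the signature of the existing C function we want to reuse:
# size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# Call decades-old, already-deployed C code directly --
# no reimplementation in "pure" Python, no new package.
length = libc.strlen(b"hello, world")
print(length)  # 12
```

The same `strlen` symbol could just as easily be bound from Ruby’s FFI gem, Node’s N-API, or Rust’s `extern "C"` blocks, which is the heart of the argument: the C ABI is the packaging system we already had.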

So there hasn’t been a need for a packaging system since at least CPAN, maybe earlier.

On the other hand, npm is the best packaging system ever because people actually consume existing code with it. It’s huge, there are tons of libraries, and so people actually think about whether this thing they’re doing needs new code or the adoption of existing code. It’s the realisation of the OO dream, in which folks like Brad Cox said we’d have data sheets of available components and we’d pull the components we need and bind them together in our applications.

Developers who use npm are just gluing components together into applications, and that’s great for software.

February 03, 2017

Email is critical. Learn how to protect email with simple, and old hat, methods in tandem with Exchange Online for email continuity & email disaster recovery. I’m not sure if it’s just me, but there does seem to be an incessant whine regarding the demise of email. Usually from some thought leader or other, who quite […]

(Read more...)

Back to Top