Last updated: April 28, 2017 11:23 PM (All times are UTC.)
Charles Roper wrote a great piece called Kaizen in which he talks about breaking down big things into small steps:
In taking tiny steps, we become accustomed to the change. We get used to it little by little. As time goes on, we increase our exposure to the change. Habits form, routines solidify, and, as if by magic, we find we’ve grown. We’ve made progress. This is how we defeat the fear. We practice. We do something so easy it seems stupid. That’s the trick.
Charles asks “what is the smallest thing I can do?” in the context of writing daily:
Five minutes of writing per day, perhaps? That sounds reasonable. How about one sentence? That’s pretty Kaizen. Or a tweet’s worth? Ideal. So I’ll aim to write just a tweet’s worth of something every day for 30 days. A paragraph, maybe. Or just a sentence or two. It’ll take around five minutes at most.
That’s a great approach to daily writing. But it applies to much more than just writing. If there’s a habit you want to build, ask: what is the smallest step you can take?
display: flow-root, which Rachel Andrew describes in The end of the clearfix hack?.
I just want to point out that even the best of us aren’t doing what we expect the makers of acne creams to do.
That trash can Mac Pro that hasn’t been updated in years? It’s too hard to write software for.
Now, let’s be clear, there are any number of abstractions that have been created to help programmers parallelise their thing, from the process onward. If you’ve got a loop and can add the words #pragma omp parallel for to your code, then your loop can be run in parallel over as many threads as you like. It’s not hard.
Making sure that the loop body can run concurrently with itself is hard, but there are some rules to follow that either make it easy or tell you when to avoid trying. But you’re still only using the CPU, and there’s that whole dedicated GPU to look after as well.
Even with interfaces like OpenCL, it’s difficult to get this business right. If you’ve been thinking about your problem as objects, then each object has its own little part of the data – but now you need to get that information into a layout that’ll be efficient for doing the GPU work, then actually do the copy, then copy the results back from the GPU memory…is doing all of that worth it?
For almost all applications, the answer is no. For almost no applications, the answer is occasionally. For a tiny number of applications, the answer is most of the time, but if you’re writing one of those then you’re a scientist or a data “scientist” and probably not going to get much value out of a deskside workstation anyway.
What’s needed for that middle tier of applications is the tools – by which I mostly mean the libraries – to deal with this problem when it makes sense. You don’t need visualisations that say “hey, if you learned a different programming language and technique and then applied it to this little inner loop you could get a little speed boost for the couple of seconds that one percent of users will use this feature every week” – you need implementations that notice that and get on with it anyway.
The Mac Pro is, in that sense, the exact opposite of the Macintosh. Back in the 1980s, the Smalltalk software was ready well before there was any hardware that could run it well, and the Macintosh was a thing that took this environment that could be seen to have value, and made it kind of work on real hardware. Conversely, the Mac Pro was ready well before there was any software that could make use of it, and that’s a harder sell. The fact that, four years later, this is still true, makes it evident that it’s either difficult or not worth the effort to try to push the kind of tools and techniques necessary to efficiently use Mac Pro-style hardware into “the developer ecosystem”. Yes, there are niches that make very good use of them, but everybody else doesn’t and probably can’t.
If you have a website where you want people to take an action – like buy something, subscribe, or click through to an affiliate – then goal funnels are one of the most useful tools in Google Analytics. Goal funnels show you how many people complete each stage of a conversion process. This can provide really valuable insight […]
The stickee team is made up of enthusiastic individuals who are experts in their field of work. One example of this is our Director of Business Development, Dan Richards. With years of experience in marketing, SEO and Digital Media, Dan has strengthened his technical knowledge whilst accumulating valuable experience over the years. Due to his expertise […]
My first reading list after 3 months backpacking around India. Some of the links were in draft since before I went, so you may have seen them before. Om Boomshanka.
Here are the filters I currently use in Todoist:
(today | overdue) & p1 | p2
MITs, or Most Important Tasks, are a concept I got from Zen to Done. ZTD, written by Leo Babauta of Zen Habits fame, is a simplified and more practical take on Getting Things Done. It’s a short book and much more approachable than GTD.
Each morning – or if I’m really organised, the evening before – I decide on my MITs for the day. I try to choose about three things, which I do by assigning priority 1 or priority 2 to those tasks.
This filter then shows any tasks that are marked as priority 1 or priority 2 and are either overdue or due today.
There are usually around 10 tasks in my Today view, but the MITs are what I focus on. I’ll try to get them done early in the day if possible so that other things don’t get in the way.
@errands | @groceries
The Errands filter shows any tasks that are labelled @Errands or @Groceries. I add things that I need to Todoist with either of those two labels. That way, when I’m in a supermarket or need to run errands, I just use this filter so that I won’t forget the milk or batteries or whatever else it is I might need.
(overdue | today) & @Batch
A few times a week, I’ll use this filter to help clear out the small tasks that I need to get done. First, I go through Todoist and add the label @Batch to any small tasks that will take around 10 minutes or less to complete.
I then sit down for around an hour and blast through this filter. Tasks that I might batch include updating a client on a project, sending or keeping on top of invoices, making a quick phone call, small updates to a client website, and so on. I can often get through 4-10 tasks in just an hour by using this filter.
Similar to the filter above, this shows all tasks labelled @Batch, regardless of due date. If I have a spare 30 minutes and my tasks for the day are complete, I’ll often work from this list.
7 days & @Waiting
If I’m waiting on something from someone, I’ll add it as a task with a label of @Waiting. This filter then shows all the things I’m waiting on over the next week. I review this filter at the start of the week so I can plan accordingly. It’s also useful as a reminder to follow up if necessary.
created before: -90 days
The filters I have marked in grey are part of my monthly review.
This filter shows all tasks that were created over 90 days ago. Tasks that appear in this filter are often tasks that have slipped through the cracks and need scheduling or are no longer important and can be deleted.
This filter shows all tasks that are due in the next 30 days. During the review, I look over what needs to be done in the next month. One of the things I like about this filter is that it helps spot particularly busy days so I can move tasks around accordingly.
no due date
As you’d expect, this filter lists all tasks without a due date. Like the 90 days old filter, this helps surface tasks that I may have forgotten about or are no longer important and can be removed.
I have a Routines project with Daily, Weekly, Monthly and Periodic recurring tasks. This filter shows them all as well as any recurring tasks in other projects. I like to review them on a monthly basis because too many recurring tasks can easily clog up the system.
This shows all the tasks in my Todoist account, ordered by project. I don’t use this filter very often but it is handy for major reviews.
I like the way the Ubuntu Unity desktop works. However, a while ago I switched over to Gnome Shell to see what it was like, and it seemed good so I stuck around. But I’ve added a few extensions to it so it feels a bit more like the parts of the Unity experience that I liked. In light of the news from Canonical that they’ll be shipping the Gnome desktop in the next LTS in 2018, and in light of much hand-wringing from people who like Unity as much as I do about how they don’t want to lose the desktop they prefer, I thought I’d write down what I did, so others can try it too.
As you can see, that looks like Unity. It feels like Unity too.
The main bit of customisation here is extensions. From the Gnome shell extensions web app, install Better Volume Indicator, Dash to Dock, No Topleft Hot Corner, and TopIcons Plus. That gives you the Launcher on the left, indicators in the top right, and a volume indicator that you can roll the mouse wheel on. Choose a theme; I use the stock Gnome theme, “Adwaita”, but I turned it to “dark mode” (in “Tweak Tool”, turn on “Global Dark Theme”), and set icons to “Ubuntu-mono-dark” in the same place.
Most of the stuff you’re familiar with in Unity actually carries through to Gnome Shell pretty much unchanged — they’re actually very similar, especially with the excellent Dash to Dock extension! One neat trick is that the window spread and the search have been combined; if you hit the Super key (the “Windows” key), it opens up the window spread and lets you search, so the normal way of launching an app by name (hit Super, type the name, hit Enter) is completely unchanged. Similarly, you can launch apps from the Launcher on the left with Super+1 or Super+2 and so on, just like Unity.
There are a whole bunch of other extensions to customise bits of how Gnome Shell works; if there are some that make it more like a Unity feature you like and I haven’t listed, I’d be happy to hear about them. Meanwhile, I’ve still got the feel I like, on the desktop I’ll be using. Hooray for that.
For the past few months, Todoist has been my task manager of choice. I’ve invested a lot of time in tweaking it to find a setup I’m happy with. This is the first post in a mini series I’ll be doing on Todoist.
First, I wanted to share my custom start page. In Todoist, the start page is what you see when you click on the logo in the top left corner. You can customise what appears on this page by going to Settings and selecting from the “start page” drop down.
I started by using complex queries to show my various priorities and projects and labels. But honestly, I didn’t find that particularly useful. Now I’ve opted for an incredibly simple start page and I use it multiple times a day.
Before I dive into the start page I use, I want to explain the problem I’m trying to solve. I start by searching through my Today view or perhaps use a Filter to find a task I want to work on next. I then start working on that task. But before I know it, I’ve been distracted and I’m no longer working on that task. It might be an incoming call, a knock at the door, or an interruption of my own making (Twitter or Slack most likely). I then forget what I was working on and start on something else. It’s not until later when I review Todoist again that I have that realisation: “Oh yeah, that’s what I was working on.”
This is a problem when you have a large degree of autonomy with your time. You can work on what you want, when you want. But jumping around tasks like this is hugely inefficient.
To get around this, I find the task I want to work on and then apply a label of “Now”. My start page uses a custom query to show the @Now label like this:

@Now
Now, when I lose track of what I’m working on, I jump straight to the start page which shows me the task I was working on. Once that task is done, I check it off, find another task to work on, add the @Now label, and off I go.
I’m not sure if this is useful to anyone else but it seems to work for me.
We are delighted to announce that our very own Development Director James Nestoruk has been shortlisted for this year’s Birmingham Young Professional of the Year (BYPY) Award. Birmingham Future’s flagship event recognises the city’s finest talent across six award categories, with an additional award for the prestigious Overall BYPY Winner. […]
My alarm went off at 6am. After meditating and making tea, I was at my desk to write by 6:30am.
It’s the first time I’ve had an early start during the challenge and it felt good.
I like to write early when the world around me is asleep. Twitter is slow, my email inbox is dead, and Slack is quiet. It’s the perfect time to create.
I use an app called Focus to prevent further distractions. When I hit the menu bar icon, it enables the Focus timer for 25 minutes. During this time, it blocks distracting websites (such as Facebook, Twitter, Amazon) as well as distracting apps (Slack, Tweetbot, Spark).
One of the best productivity tips I can offer is this: don’t rely on willpower. Willpower is finite. You have to set up your environment to facilitate focus. While the Focus timer is running, I can’t get on Twitter even if I want to. And for the most part, it stops me from even trying. My only option is to sit here and write.
Yesterday I wrote about my problems nailing the premise of the book. My goal today was to explore that. I took what I wrote yesterday and turned it into a chapter. What I wrote was pretty good and it helped me think deeper about the problem.
I wrapped up my writing by 8am and went about my day, casually thinking about the book in free moments. You know how it is when you’re working on a big project – you can’t help but think about it. It was much later, when I was stuck in traffic – driving home from a client meeting – that I had a small epiphany.
I had given the book a title that I liked. It was short and snappy, but it made an assumption about the premise of the book. It wasn’t until that car journey that I realised I didn’t agree with that premise. I know I’m being a little abstract, but it felt like an important breakthrough.
When I got home, I wrote another 500 words to capture my thoughts. And that’s where I’ll kick off tomorrow morning. I’m excited to sit at my keyboard again and see where this path leads.
Words written: 1667
Total words: 3458
Aaron alerts me to the recent initiative of sharing one’s favourite podcasts with the hashtag #trypod. That sounds like fun, speaking as a podcast listener and performer. So, here’s the stuff I listen to, at the moment (it is April 2017).
If you’re sitting about waiting for the next Harry Dresden book, you’ll like this. An excellent example of fandom; to someone outside the club they just ceaselessly hash over the books, but I think that’s good fun both to do and to listen to. And I’ve learned about quite a bit that I missed, as well.
Rocket’s great. Very tech-focused, and the presenters skew a little more to the journalism side than other tech podcasts which tend to be more developer or sysadmin-based. If you like the stuff that I do, you’ll probably like this. Look here for commentary on what’s going on in the tech world, plus quite a lot of laughing at one another’s jokes.
Admittedly made up of friends of mine, but that’s not the point. Amusing commentary on tech and open source stuff, with an Ubuntu slant, but these days they end up talking more about weird little hardware projects and Ubuntu MATE than what Canonical are up to. It’s fun; they try for a “family-friendly” sort of vibe as well. While on-air, at least.
An episode-by-episode relisten-to and discussion-of The West Wing, the TV programme. This is fandom stuff, much like Dresden above, but one of the two presenters is Joshua Malina, who played Will Bailey in the show, and so they tend to have lots of interviews with members of the cast, the writers, the directors, and so on.
This covers elementary, the OS. They tend to go fairly in-depth on specific aspects or projects from the elementary team, although not always (they had me on once and were berated about various things, which I’m grateful for the chance to have done). Worth listening to if you’re part of the elementary community.
The legendary and legendarily infrequent HTTP 203 podcast. Web development, right up at the cutting edge (sometimes considerably past the cutting edge and into the field on the other side of the road) from Jake Archibald and Paul Lewis of the Google Chrome developer relations team. About 20% amazing insight on what the web is and where it’s going, another 20% them describing interesting web things they’ve been up to, and the remainder off-colour stories and arsing about, which is marvellous stuff. If you’re a web dev and you’re not listening to this I don’t know what’s wrong with you.
From the BBC. A short but frequent programme in which they do a deep-dive into some quoted or reported bit of statistics and explain whether it’s right and what it means. I’ve learned quite a lot from this!
From the editorial team of the late, great Linux Voice magazine. They’re amusing to listen to, and they’ve often got up to quite a bit. Notable for “Voice of the Masses”, which involves various audience polls, and “Finds”: random things and bits of software they’ve discovered this week and want to mention.
Mark Steadman and his guest on Brum Radio do an extended hour-long interview and also play a sort of Choose Your Own Adventure game; something like a very constrained role-playing session. When I was on it I won and didn’t die! Yay! So that’s encouraging.
Traditional-style Linux podcast, but a good laugh; Joe, Ikey, Félim, and Jesse kick around some ideas, Ikey goes on about the distro he builds, they look at news and goings-on. For Linux people only, pretty much, but if you are then it’s one of the better ones.
There is Bad Voltage.
Historically, teams I've worked with have taken a few different approaches when designing tests against acceptance criteria. One is to have the business define the feature while the team helps define the acceptance criteria. Ultimately the business gets the final say, and further acceptance criteria are either added or removed. The strength of this approach is that everyone is involved with the process, so nothing is missed or misunderstood. The biggest flaw with this style is that the documentation produced is often verbose, using wordy Given-When-Then scenarios. From this, a test plan is then created, mapping tests to acceptance criteria.
An alternative approach is to have the business define both the feature and the acceptance criteria while the team comes up with a corresponding test strategy. This more technical approach allows for a separation of testing activities and test categories. Finally, the test plan is played back to the business and correlated against the acceptance criteria. A negative of this approach is that not everyone is involved with the task at the same time, which means there can be some disconnect with what the business is actually asking for. Both approaches work, though they can yield mixed results on a case-by-case basis.
I've recently been introduced to the concept of a testing/QA matrix, which is a far more condensed and simplified solution. It has the benefit of the whole team being engaged, while producing nothing more than a simple table that can fit comfortably on an A4 page. The left-hand column includes each condition of acceptance (COA), while the other columns should have a mark to indicate the type of test that will cover this functionality. An example is below.
        Unit    Integration    Acceptance    Contract    Manual
COA      X           X
The beauty of this matrix is that at a glance you can see where your testing efforts lie. If too much occurs on the right of the matrix, you may need to reconsider and question your approach. Is there a way to limit the more expensive style of tests and still gain confidence? Other questions can arise around test coverage and whether higher-level tests are needed.
When producing this matrix the whole team including the business should be involved. By having everyone together, decisions can be made quickly with everyone in agreement. Additionally it allows debate and discussion around how each feature should be tested.
The higher-level tests can be directly translated into automated tests, while the lower-level tests need to be confirmed at a later date once the code is complete.
Alongside the QA matrix, it may be worthwhile adding a simple diagram of the components that will be involved, such as web servers, databases and so on. This can aid discussion and highlight hot spots for changes or tests.
Finally, for demonstration to the business, the matrix can be used as a form of contract for signing off functionality. Once the feature is complete, it is simply a case of finding the corresponding tests, confirming their existence and making a note of the commit that included them.
Today, I got up early to play squash; rushed home to shower and change, grabbed my camera gear and headed off to my nephew’s birthday party; got back and had a 30 minute nap on the sofa (because I’m getting old); sat down at my desk to do my weekly review; then had the idea to write a journal entry for each day of the writing challenge so I wrote my first entry; then headed out for a game of Badminton with the family, before finally sitting down to eat dinner with my wife.
All that to say that it was 7pm and I hadn’t written a word towards my writing goal. So I poured myself a whisky – which is what I imagine my favourite authors doing – and sat down in front of my keyboard.
I’ve been thinking about the book I’m writing all day. There’s a big problem I’ve been grappling with for a while. I know roughly what I want to say and I have some ideas that I’ve cobbled together over the past few months, but I don’t yet have clarity on the idea. The book isn’t crystal clear in my mind.
A few friends have asked what the book is about and my answer is often waffly. I haven’t nailed the elevator pitch.
The problem, I think, is because I’m over-complicating things. I need to strip back the idea and simplify. I need to unpick the core message and ignore the ideas on the periphery.
I wasn’t feeling any of the chapter titles I’d written down yesterday, so, rather than writing another chapter, I created an empty document with the title “scratchpad”.
I wrote a few paragraphs on who the book is for and how it would help them. This helped free me up.
Next I wrote about the core premise of the book and a few examples to back up the premise. I think those examples will end up being chapters. There’s a lot of expounding to do.
So, not the most productive day, and I still have a ton of work in figuring out the core message of the book, but I’m glad I sat in the chair and did some writing.
I need to sleep on it. The alarm is set for 6am. See you tomorrow.
Words written: 960
Total words: 1791
Last year, I started a 30 day writing challenge and invited others to join me. The idea was to write consecutively for 30 days to build a writing habit. I managed to write 30 blog posts. It was a challenge to write, edit and publish a post in a single day. It gave me a lot of respect for people who do that day in, day out.
This year, my goal is different. I plan to write at least 500 words a day towards a book I’m working on. I’m not ready to announce what the book is about just yet (partly because it may change). Let’s just say it’s in the self-help/productivity genre.
My goal isn’t to finish the book during April. I should have over 15,000 words at the end of the challenge. Some of those words will hopefully make sense. The rest will be rewritten or binned.
Yesterday was the first day of the challenge. I’m writing this a day late because honestly the idea to journal about the challenge only just occurred to me. The Slack channel is busy with people sharing what they’re writing. It’s incredibly inspiring to watch. But I felt bad. As the guy who set this challenge up, I felt like I should at least be contributing. So these journal entries are my small contribution.
The first day of the challenge just so happened to be a Saturday. I have a morning routine that I try to stick to, but I have a much more laid-back approach to Saturday and Sunday mornings. This was the case yesterday, too. I arrived at my desk to write at about 8:00am.
I started by arranging some chapter titles I had written down and formed a rough outline. I then picked the most interesting topic and started writing. It just so happened to be chapter 1, but I don’t plan on writing the book in order.
It went fairly smoothly. I’m in writing mode. I’m not worried about editing. I fix typos and sentences when I spot them, but otherwise, it’s rough work. The plan is to edit at a later stage, but for now it’s production mode.
So that’s day 1 done. 29 days to go.
Words written: 831
Total words: 831
So, this evening, there were beers with Dan and Ebz and Charles and after a brief flirtation with the Alchemist (where we didn’t go, since the bar was three deep and they might be good at cocktails but they’re really, really slow at cocktails and so I didn’t fancy waiting fifteen minutes just to get to the bar) and the Bureau (who are also good at cocktails, and sell an interesting Czech lager with a picture of the brewery on the glass (and while I’m on the subject, why does every bar feel it necessary to sell me Beer X in a Beer X-branded glass these days? I really don’t care, bar staff of the world. Don’t feel like you have to)), we descended upon the Indian Brewery in their new place under the arches. Honestly, up to now, I’d tried their Birmingham Lager in cans (perfectly nice), and that was it; I’d never been to their bar. And it’s fabulous. I was vaguely peckish before getting there but I probably wouldn’t have bothered with anything; I flirted with the menu in Bureau but was basically ungrabbed by whatever it is that grabs me about menus. And then we piled out of the cab (which we’d got in to save Ebz and her tottery heels; obviously Sport Dan would have walked the distance to get there but I was quietly glad of not having to) into Indian and the delectable smell of the place completely turned me around. Twenty feet from the place I was mildly thinking about food; six feet from the door I was ready to eat a horse, and one of my companions, and possibly a road sign. This is a place that knows how to capitalise on the weaknesses of their punters. On the way in the door we were interrupted by a chap who, to put it tactfully, hasn’t skipped many lunches, who asked us: are you eating? Yes, we all said, salivating. And this helpful chap cleared away a small end of a table — the Indian Brewery interior is organised as a batch of wooden benches, like the ones you get in a pub garden.
So if you’re not eating, he’ll find you a space to sit in between others; if you are, he’ll find you a slightly larger block, possibly by elbowing those already there. And the friendliness doesn’t end there. Everything, from the Bollywood posters on the wall to the attitude of the staff and the casual, no-frills nature of the layout, exudes friendliness; it feels welcoming, like someone took the concept of welcomingness and distilled it out of the air into an atmosphere that pervades the whole place, and stepping into it feels like a hot bath. And the Fat Naan is utterly delightful. “Are you sure you want a whole fat naan to yourself?” asked Dan. Yes. Yes, I was sure. And I was right. It was bloody delectable. Honestly, I can’t recommend it enough. Good beers (and a good selection of beers, moreover, which I wasn’t expecting), great Indian food, friendly staff. If you want more from a place, I can’t see what it is that you want.
If you use Ubuntu MATE you may have found that it’s really difficult to resize windows; you can move your mouse pointer over the edge of a window to drag the window bigger or smaller, but the “grab area” is only one pixel wide. This is alarmingly irritating.
It’s a long-standing bug (people were complaining about this in 2010, seven years ago!), which has been fixed in Ubuntu proper for a long time, but has resurfaced in Ubuntu MATE. Anyway, to fix resizing of windows, you need to enable Compiz in Ubuntu MATE; that replaces Ubuntu MATE’s standard “window manager”, which is named “Marco”, with the Compiz window manager, and Compiz doesn’t have this daft problem. Yes, it is annoying that this doesn’t just work, and apparently they’re working on fixing it so it doesn’t become our problem as users to fix the deficiencies, but in the interim you can at least fix it for yourself even though you shouldn’t have to.
So, open MATE Tweak, which is in the System menu (Preferences > Look and Feel > MATE Tweak), and under Windows, choose Compiz (Advanced GPU accelerated desktop effects) under Window Manager.
Technically, Compiz requires a 3D-accelerated graphics card. However, unless your machine is very, very, very old indeed, it will have enough 3D acceleration for this; it’s not like playing games or similar. My ancient Dell laptop copes with it fine, so it shouldn’t be a worry.
Ubuntu MATE will then switch to Compiz (you don’t need to reboot or anything) and will show you a window saying “Keep this window manager?” If you see that window, you can click “Yes, OK” in it. (If for some reason this hasn’t worked, then you won’t see that window and so it will automatically switch back, so your computer isn’t broken.) Now, resizing Ubuntu MATE windows should be a lot easier, because the resize grab area will not be one single pixel.
Comparison websites’ adverts are a bit like Marmite – you either love them and consider yourself to be sooooo MoneySuperMarket, or you hate them and everything meerkats stand for… Man, that’s a lot of British cultural references in one opening line. We’ve lost the Americans already. But whether you like them or not, these adverts […]
There are just a few days left before the 30 Day Writing Challenge begins, which means there’s still time to sign up.
Perhaps you have a blog that’s gathering dust or there’s a book you’ve been meaning to write. Either way, the challenge is a great way to build a writing habit.
I know the challenge sounds daunting, so I wanted to directly address some objections I’ve heard:
I don’t have time to write
You’ll want to put aside around 30 minutes per day to get the most out of the challenge. That time should be spent writing without distraction. You might end up writing less on some days. Even writing for 10 minutes is better than nothing.
You may have to give something up during the challenge. Perhaps you could use that time you’d usually spend in front of the TV at the end of the day. Or perhaps you could get up an hour earlier. Either way, the time you put into this challenge is an investment in your future self.
I won’t be able to write every day during April
Want to take weekends off? Go ahead. Have a week-long holiday booked? No problem. While ideally you would write every day, the real goal of the challenge is to get you writing more than you would otherwise. If you can’t write every day during April, don’t let that put you off. Write what you can.
Do I have to publish what I write every day?
No. In fact, I’d advise against it. I published daily last year and it was intense. The goal is to turn up and write. Some days, what you write will be great. Other days, it will be garbage. That’s just part of the creative process. Turning up to write is more important than what you write.
I don’t know what to write about
Tell a story that made an impression on you recently. Talk about your workflow or process. Write about a solution to a problem you’ve recently encountered. Write a review. Interview someone in the industry. Or write about what’s on your mind.
Open up a notebook and scribble down some ideas. As soon as you have another idea, write it down. Always write down your ideas. You’ll soon have plenty of topics to write about.
Sound good? Click here to find out more about the challenge and to sign up. Be sure to join the Slack group, too.
stickee were proud to sponsor the renowned student hackathon BrumHack at the University of Birmingham over the weekend. Now in its sixth installment, the 24-hour non-stop hackathon ran from March 25th-26th, with students from across the UK attending. Over the weekend, the students assembled into teams and began building a project from scratch, ready for […]
I gave a talk to my team at ARM today on Working Effectively with Legacy Code by Michael Feathers. Here are some notes I made in preparation, which are somewhat related to the talk I gave.
This may be the most important book a software developer can
read. Why? Because if you don’t, then you’re part of the problem.
It’s obviously a lot easier and a lot more enjoyable to work on
greenfield projects all the time. You get to choose this week’s
favourite technologies and tools, put things together in the ways that
suit you now, and make progress because, well, anything is progress
when there’s nothing there already. But throwing away an existing
system and starting from scratch makes it easy to throw away the
lessons learned in developing that system. It may be ugly, and patched
up all over the place, but that’s because each of those patches was
needed. They each represent something we learned about the product
after we thought we were done.
The new system is much more likely to look good from the developer’s
perspective, but what about the users’? Do they want to pay again
for development of a new system when they already have one that mostly
works? Do they want to learn again how to use the software? We have
this strange introspective notion that professionalism in software
development means things that make code look good to other coders:
Clean Code, “well-crafted” code. But we should also have some
responsibility to those people who depend on us and who pay our way,
and that might mean taking the decision to fix the mostly-working
system we already have, rather than replace it.
Manny Lehman identified three different categories of software system:
those that are exactly specified, those that implement
well-understood procedures, and those that are influenced by the
environment in which they run. Most software (including ours) comes
into that last category, and as the environment changes so must the
software, even if there were no (known) problems with it at an earlier
point in its evolution.
Lehman also proposed laws governing the evolution of software systems,
which govern how the requirements for new development are in conflict
with the forces that slow down maintenance of existing systems. I’ll
not reproduce the full list here, but for example on the one hand the
functionality of the system must grow over time to provide user
satisfaction, while at the same time the complexity will increase and
perceived quality will decline unless it is actively maintained.
Michael Feathers’s definition of legacy code is code without tests. I’m
going to be a bit picky here: rather than saying that legacy code is
code with no tests, I’m going to say that it’s code with
insufficient tests. If I make a change, can I be confident that I’ll
discover the ramifications of that change?
If not, then it’ll slow me down. I even sometimes discard changes
entirely, because I decide the cost of working out whether my change
has broken anything outweighs the interest I have in seeing the change
make it into the codebase.
Feathers refers to the tests as a “software vice”. They clamp the
software into place, so that you can have more control when you’re
working on it. Tests aren’t the only tools that do this: assertions
(and particularly Design by Contract) also help pin down the software.
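A minimal Python sketch of that “vice” idea, using plain assertions as Design-by-Contract-style checks (the function and its contract are invented for illustration):

```python
def apply_discount(price, rate):
    """Return price reduced by rate, with contract checks clamping behaviour."""
    # Precondition: pin down the inputs this code was written to handle.
    assert price >= 0, "price must be non-negative"
    assert 0 <= rate <= 1, "rate must be a fraction between 0 and 1"

    discounted = price * (1 - rate)

    # Postcondition: any change that breaks this invariant fails immediately,
    # rather than silently corrupting results downstream.
    assert 0 <= discounted <= price, "discount must not increase the price"
    return discounted
```

Like a test, the assertions don’t make the code correct; they hold it still, so that a change which violates the existing behaviour is noticed at the point it happens.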
The apparent way forward then when dealing with legacy code is to
understand its behaviour and encapsulate that in a collection of unit
tests. Unfortunately, it’s likely to be difficult to write unit tests
for legacy code, because it’s all tightly coupled, has weird and
unexpected dependencies, and is hard to understand. So there’s a
catch-22: I need to make tests before I make changes, but I need to
make changes before I can make tests.
Almost the entire book is about resolving that dilemma, and contains a
collection of patterns and techniques to help you make low-risk
changes to make the code more testable, so you can introduce the tests
that will help you make the high-risk changes. His algorithm is:
1. Identify change points.
2. Find test points.
3. Break dependencies.
4. Write tests.
5. Make changes and refactor.
The overarching model for breaking dependencies is the “seam”. It’s a
place where you can change the behaviour of some code you want to
test, without having to change the code under test itself. Some examples: a preprocessor seam (redefining a macro), a link seam (substituting a different implementation at link time), or an object seam (overriding a method in a subclass).
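The most common kind in object-oriented code is the object seam, where a test subclass overrides a method to change behaviour without editing the code under test. A hypothetical Python sketch (the class and method names are invented):

```python
class InvoiceProcessor:
    """Imagine this is legacy code we must not edit yet."""

    def process(self, invoice):
        total = sum(invoice["lines"])
        self.send_receipt(f"Invoice total: {total}")  # the seam
        return total

    def send_receipt(self, message):
        # In the real system this talks to an SMTP server,
        # which is why the class is hard to unit test as-is.
        raise RuntimeError("would send real email")


# Enabling point: a subclass overrides the method at the seam, changing
# behaviour in the test without touching the code under test.
class TestableInvoiceProcessor(InvoiceProcessor):
    def __init__(self):
        self.sent = []

    def send_receipt(self, message):
        self.sent.append(message)


processor = TestableInvoiceProcessor()
assert processor.process({"lines": [10, 20, 5]}) == 35
assert processor.sent == ["Invoice total: 35"]
```

The seam is the call site of `send_receipt`; the enabling point is the subclass. Nothing in `process` changed, which is exactly the property we need before tests exist.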
The important point is that whatever you, or someone else, thinks
the behaviour of the code should be, actually your customers have paid
for the behaviour that’s actually there and so that (modulo bugs) is
the thing you should preserve.
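A characterization test captures exactly that: it asserts what the code actually does today, not what anyone thinks it should do. A minimal Python sketch (the function is invented; its trailing comma stands in for a quirk customers may depend on):

```python
import unittest


def format_name(first, last):
    # Imagine this is legacy code: the trailing comma looks like a bug,
    # but paying customers may rely on it, so it is the behaviour to preserve.
    return f"{last.upper()}, {first},"


class CharacterizationTest(unittest.TestCase):
    """Pin down current behaviour, warts and all."""

    def test_current_behaviour(self):
        # Workflow: run the code, observe the real output,
        # then write that observed output into the assertion.
        self.assertEqual(format_name("Ada", "Lovelace"), "LOVELACE, Ada,")


if __name__ == "__main__":
    unittest.main()
```

If the quirk later turns out to be a genuine bug, the test documents the change of mind: you alter the assertion deliberately, rather than breaking customers by accident.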
The book contains techniques to help you understand the existing code
so that you can get those tests written in the first place, and even
find the change points. Scratch refactoring is one technique: look
at the code, change it, move bits out that you think represent
cohesive functions, delete code that’s probably unused, make notes in
comments…then just discard all of those changes. This is like Fred
Brooks’s recommendation to “plan to throw one away”: you can take what
you learned from those notes and refactorings and go in again with a
more structured approach.
Sketching is another technique recommended in the book. You can draw
diagrams of how different modules or objects collaborate, and
particularly draw networks of what parts of the system will be
affected by changes in the part you’re looking at.
Cheerity is a new startup based in New York, run by a team of people who have worked with charities their whole lives. Cheerity is the missing link between viral charity content and the cause. In many cases, people joined in with viral campaigns without knowing what the cause was; Cheerity solves this problem.
I’ve been working with Cheerity’s team in New York, creating UX & UI designs for their platform, which is currently in beta. They have already raised thousands of dollars for excellent charities and I can’t wait to see how they grow.
I get very annoyed about politicians being held to account for admitting they were wrong, rather than forcefully challenged when they were wrong in the first place. Unless they lied, if someone was wrong and admits it, they should be congratulated. They have grown as a human being.
I am about to do something very similar. I’m going to start confessing some wrong things I used to think, that the world has come to agree with me about. I feel I should congratulate you all.
You can’t design a Database without knowing how it will be used
I was taught at university that you could create a single abstract data model of an organisation’s data. “The word database has no plural”, I was told. In my second job, I tried to create a model of all street furniture (signs and lighting) in Staffordshire. I couldn’t do it. I concluded that it was impossible to know which things were entities and which were attributes. I now know this is because models are always created for a purpose. If you aren’t yet aware of that purpose, you can’t design for it. My suspicion was confirmed in a talk at Wolverhampton University by Michael ‘JSD’ Jackson. The revelation seemed to come as a big shock to the large team from the Inland Revenue. I guess they had made unconscious assumptions about likely processes.
Relations don’t understand time
(They would probably say the same about me.) A transaction acting across multiple tables is assumed to be instantaneous. This worried me. A complex calculation requiring multiple reads could not be guaranteed to be consistent unless all accessed tables were locked against writes throughout the transaction. Jackson also confirmed that the Relational Model has no concept of time. A dirty fix is data warehousing, which achieves consistency without locking, at the cost of guaranteeing the data is old.
The Object Model doesn’t generalise
I’d stopped developing software by the time I heard about the Object Oriented Programming paradigm. I could see a lot of sense in OOP for simulating real-world objects. Software could be designed to be more modular when the data structures representing the state of a real-world object and the code which handled state-change were kept in a black box with a sign on that said “Beware of the leopard”. I couldn’t grasp how people filled the space between the objects with imaginary software objects that followed the same restrictions, or why they needed to.
A new wave of Functional Programming has introduced immutable data structures. I have recently learned through Clojure author Rich Hickey’s videos that reflecting state-change by mutating the value of variables is now a sin punishable by a career in Java programming. Functional Programmers have apparently always agreed with me that not all data structures belong in an object.
There are others I’m still waiting for everyone to catch up on:
The Writable Web is a bad idea
The Web wasn’t designed for this and isn’t very good at it. Throwing complexity bombs at an over-simplified model rarely helps.
Rich Hickey’s Datomic doesn’t appear to have fixed my entity:attribute issue
Maybe that one is impossible.
[This post is aimed at readers with at least a basic understanding of agile product development. It doesn’t explain some of the concepts discussed.]
We often talk of software development as movement across a difficult terrain, to a destination. Organisational change projects are seen as a lightning attack on an organisation, though in reality they have historically proved much slower than the speed of light. Large projects often force through regime change for ‘a leader’. Conventionally, this leader has been unlikely to travel with the team. Someone needs to “hold the fort”. There may be casualties due to friendly firings.
Project Managers make ‘plans’ of a proposed ‘change journey’ from one system state to another, between points in ‘change space’, via the straightest line possible, whilst ignoring the passage of time which makes change possible. Time is seen as distance and its corollary, cost. The language of projects is “setting-off”, “pushing on past obstacles” and “blockers” such as “difficult customers”, along a fixed route, “applying pressure” to “overcome resistance”. A project team is an army on the march, smashing their way through to a target, hoping it hasn’t been moved. Someone must pay for the “boots on the ground” and their travel costs. This mind-set leads to managers who perceive a need to “build momentum” to avoid “getting bogged down”.
Now let us think about the physics:
What about “agile software developments”? There is a broad range of opinion on precisely what those words mean but there is much greater consensus on what agility isn’t.
People outside the field are frequently bemused by the words chosen as Agile jargon, particularly in the Scrum framework:
A Scrum is not held only when a product development is stuck in the mud.
A Scrum Master doesn’t tell people what to do.
Sprints are conducted at a sustainable pace.
Agility is not the same as speed. Arguably, in agile environments, speed isn’t the same thing as velocity either.
Many teams measure velocity, a crude metric of progress that is only useful for estimating how much work should be scheduled for the next iteration, often guessed in ‘story-points’ representing relative ‘size’. But in agile environments, everything is optional and subject to change, including the length of the journey.
If agility isn’t speed, what is it? It is lots of things but the one that concerns us here is the ability to change direction quickly, when necessary. Agile teams set off in a direction, possibly with a destination in mind but aware that it might change. If the journey throws up unexpected new knowledge, the customer may wish to use the travelling time to reach a destination now considered more valuable. The route is not one straight line but a sequence of lines. It could end anywhere in change-space, including where it started (either through failing fast or the value of the journey being exploration rather than transportation.) Velocity is therefore current progress along a potentially winding road of variable length, not average speed through change-space to a destination. An agile development is really an experiment to test a series of hypotheses about an organisational value proposition, not a journey. Agile’s greatest cost savings come from ‘wrong work not done’.
Agility is lightweight, particularly on up-front planning. Agile teams are small and aim to carry everything they need to get the job done. This enables them to set off sooner, at a sensible pace and, if they are going to fail, to fail fast, at low cost. Agility delivers value as soon as possible and it front-loads value. If we measured velocity in terms of value instead of distance, agile projects would be seen to decelerate until they stop. If you are light, immovable objects can be avoided rather than smashed through. Agile teams neither need nor want momentum, in case they decide to turn fast.
Take Smalltalk. Do I have an object in my image? Yes? Well I can use it. Does it need to do some compilation or something? I have no idea, it just runs my Smalltalk.
Take Python. Do I have the python code? Yes? Well I can use it. Does it need to do some compilation or something? I have no idea, it just runs my Python.
Oh my God.
C is portable in principle, and there are portable operating system interface specifications for the system behaviour accessible from C, but in practice you still need C sources that are specific to the platform you’re building for. So you have a tool like autoconf or cmake that tests how to edit your sources to make them actually work on this platform, and performs those changes. The outputs from them are then fed into a thing that takes C sources and constructs the software.
What you want is the ability to take some C and use it on a computer. What C programmers think you want is a graph of the actions that must be taken to get from something that’s nearly C source to a program you can use on a computer. What they’re likely to give you is a collection of things, each of which encapsulates part of the graph, and not necessarily all that well. Like autoconf and cmake, mentioned above, which do some of the transformations, but not all of them, and leave it to some other tool (in the case of cmake, your choice of some other tool) to do the rest.
Or look at make, which is actually entirely capable of doing the graph thing well, but frequently not used as such, so that
make all works but making any particular target depends on whether you’ve already done other things.
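Used as intended, make really is a declared dependency graph: each target lists exactly what it depends on, so building any individual target works regardless of what you have already built. A minimal sketch for a hypothetical three-file C program (all the file names are invented):

```make
# Each rule states its full dependencies, so `make program`,
# `make util.o`, or `make` after touching util.h all do the
# right (and minimal) amount of work.
program: main.o util.o
	cc -o program main.o util.o

main.o: main.c util.h
	cc -c main.c

util.o: util.c util.h
	cc -c util.c

clean:
	rm -f program main.o util.o
```

The failure mode described above comes from rules that omit dependencies (say, `main.o` not listing `util.h`), at which point `make all` from scratch works but incremental builds silently go stale.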
Now take every other programming language. Thanks to the ubiquity of the C run time and the command shell, every programming language needs its own build system named
[a-z]+ake that is written in that language, and supplies a subset of make’s capabilities but makes it easier to do whatever it is needs to be done by that language’s tools.
When all you want is to use the software.
Yuval Harari, author of the fantastic book Sapiens (which I’ve started and still need to finish), was a recent guest on The James Altucher Show. Go listen, it’s a great interview.
One of my favourite parts was Yuval’s brief thoughts on meditation. He explained that he starts and finishes every work day with one hour’s meditation:
“(Meditation) gives me balance, peace, and calmness and the ability to find myself.”
“The idea of meditation is to forget about all the stories in your mind. Just observe reality as it is. What is actually happening right here, right now? You start with very simple things like observing the breath coming in and out of your nostrils or you observe the sensations in your body. This is reality.
For all of history, people have given more and more importance to imaginary stories and they’ve been losing the ability to tell the difference between fiction and reality. Meditation is one of the best ways to regain this ability and really tell the difference between what is real and what is a fiction in my mind.”
I’m a big fan of thought experiments. I like science but I’m too lazy to do real experiments. Why do something when you can think about doing it?
I’ve been observing the political manoeuvring around Brexit and 2nd referendums. I think people are saying things they don’t really believe in order to get an outcome they believe to be right and people are saying things which sound good, to hide the evil swirling beneath the surface.
I asked myself: Which is the greater wrong: doing a good thing for a bad reason or a bad thing for a good reason?
‘A good thing’ is highly subjective, depending on your personal values and consequent belief in what is fair. A comparison of ‘bad thing’s is probably even more fluid. I see it in terms of balance between good and harm to self and others. It’s complex.
‘Good’ and ‘bad’ reasons also depend on your personal targets and motivations along with another subjective moral evaluation of those.
An individual may see a good thing as a positive value and a bad thing as a negative value and believe that as long as the sum is positive, so is the whole package. People call this “pragmatism”. They also tell me it is easier to ask for forgiveness than permission. These people get things done and, generally, only hurt other people.
‘A reason’ sounds like dressing up something you feel you want in logic. Is that always reasonable?
We need to balance what we want and our chances of success against the risks and uncertainty of what we might lose or fail to achieve. To measure success objectively, we need to have specified some targets before we start.
Brexit didn’t have either a plan or targets. It appears to be driven by things that people don’t want. How will we know if it has succeeded or failed? We are told the strategy and tactics must be kept secret or the plan will fail and targets will be missed. If this was a project I was working on, I’d be reading the jobs pages every lunch time. I’ve stopped worrying about the thought experiment.
The first thing I did yesterday, on International Women’s Day 2017, was retweet a picture of Margaret Hamilton, allegedly the first person in the world to have the job title ‘Software Engineer’. The tweet claimed the pile of printout she was standing beside, as tall as her, was all the tweets asking “Why isn’t there an International Men’s Day?” (There is. It’s November 19th, the first day of snowflake season.) The listings were actually the source code which her team wrote to make the Apollo moon mission possible. She was the first virtual woman on the Moon.
I followed up with a link to a graph showing the disastrous decline of women working in software development since 1985, by way of an explanation of why equal opportunities aren’t yet a done deal. I immediately received a reply from a man, saying there had been plenty of advances in computer hardware and software since 1985, so perhaps that wasn’t a coincidence. This post is dedicated to him.
I believe that the decade 1975 – 1985, when the number of women in computing was still growing fast, was the most productive since the first, starting in the late 1830s, when Ada Lovelace made up precisely 50% of the computer software workforce worldwide. It also roughly coincides with my own time in the field: I first encountered computing in about 1974 and stopped writing software in about 1986.
1975 – 1985:
As I entered: Punched cards then a teletype, connected to a 24-bit ICL 1900-series mainframe via 300 Baud acoustic coupler and phone line. A trendy new teaching language called BASIC, complete with GOTOs.
As I left: Terminals containing a ‘microprocessor’, screen addressable via ANSI escape sequences or bit-mapped graphics terminals, connected to 32-bit super-minis, enabling ‘design’. I used a programming language-agnostic environment with a standard run-time library and a symbolic debugger. BBC Micros were in schools. The X windowing system was about to standardise graphics. Unix and ‘C’ were breaking out of the universities along with Free and Open culture, functional and declarative programming and AI. The danger of the limits of physics and the need for parallelism loomed out of the mist.
So, what was this remarkable progress in the 30 years from 1986 to 2016?
Parallel processing research provided Communicating Sequential Processes and the Inmos Transputer.
Declarative, non-functional languages that led to ‘expert systems’. Lower expectations got AI moving.
Functional languages got immutable data.
Scripting languages like Python & Ruby for Rails, leading to the death of BASIC in schools.
Wider access to the Internet.
The read-only Web.
The idea of social media.
Lean and agile thinking. The decline of the software project religion.
The GNU GPL and Linux.
Open, distributed platforms like git, free from service monopolies.
The Raspberry Pi and computer science in schools.
Only looked good:
The rise of PCs to under-cut Unix workstations and break the Data Processing department control. Microsoft took control instead.
Reduced Instruction Set Computers were invented, providing us with a free 30 year window to work out the problem of parallelism but meaning we didn’t bother.
By 1980, Alan Kay had invented Smalltalk and the Object Oriented paradigm of computing, allowing complex real-world objects to be simulated and everything else to be modelled as though it was a simulation of objects, even if you had to invent them. Smalltalk did no great harm but in 1983 Bjarne Stroustrup left the lab door open and C++ escaped into the wild. By 1985, objects had become uncontrollable. They were EVERYWHERE.
Software Engineering. Because writing software is exactly like building a house, despite the lack of gravity.
Microsoft re-invents DEC’s VMS and Sun’s Java as 32-bit Windows NT, .NET and C#, then destroys all the evidence.
The reality of social media.
The writeable Web.
Multi-core processors for speed (don’t panic, functions can save us.)
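The parenthetical “functions can save us” refers to the idea that pure functions over immutable data can be spread across cores without locks, because nothing is shared or mutated. A Python sketch using the standard library’s process pool:

```python
from multiprocessing import Pool


def square(n):
    # A pure function: no shared state, no side effects, so it can run
    # on any core, in any order, without locks or coordination.
    return n * n


if __name__ == "__main__":
    with Pool() as pool:
        # map distributes the work across worker processes and
        # reassembles the results in order.
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Contrast this with parallelising code that mutates shared objects, where correctness depends on locking discipline rather than on the shape of the functions themselves.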
Why did women stop seeing computing as a sensible career choice in 1985 when “mine is bigger than yours” PCs arrived and reconsider when everyone at school uses the same Raspberry Pi and multi-tasking is becoming important again? Probably that famous ‘female intuition’. They can see the world of computing needs real functioning humans again.
We’re thrilled to announce the launch of the new CDI World website, after they came to us with the idea of building a new website. They needed a safe and secure site, one which would protect their data – and that of their clients. While stickee already hosted CDI World’s old site, this new project enabled […]
SEO is important. I know it, you know it, your pet goldfish knows it. So, there’s no need to lecture you on the crucialness of SEO because you’ve probably heard it a thousand times. However, what you might not be aware of is the deceitful myths bouncing around the technosphere regarding SEO. If you believe […]
Office 365 email signature management for company-wide consistency is made possible with mail flow rules. There are a number of reasons why you might want to append an email signature to your emails. The foremost reason is making it easier for customers to contact you. An Office 365 signature looks professional and consistent, distinguishes your organisation, […]
Because (I like to think) I’m human, I make models of the world around me. Because I’m a computer scientist/a bit weird, I write them down or draw pictures of them. Since I got interested in why some intelligent people have different political views to me, a couple of years ago, I’ve been trying to model the values which underlie people’s belief systems, which I believe determine their political views.
My working model for the values of Left-Right politics (I’m a fluffy compromise near the middle of this scale but I have other scales, upon which I weigh myself a dangerous radical) has been that The Left believe in Equality and The Right in Selfishness. As a radical liberal, I obviously think both extremes are the preserve of drivelling idiots – compromise is all. The flies in my ointment have been the selfishness of the Far Left and the suicidal economic tendencies of working class nationalists in wanting to #Brexit. My model clearly had flaws.
This morning I was amusing myself with a #UKIP fan who countered being told by a woman that it was best to have O type blood (presumably because it is the universal donor) by saying it was best to be AB, so he could receive any blood (a universal recipient.) On the surface this seems to confirm the selfishness theory but I made an intuitive leap that he thought he was too special to lose, which was far from the conclusion I’d arrived at, during our discussion.
My new, modified theory is that the Left think ‘no-one should get special treatment’ and the Right think ‘My DNA is special. I deserve more’. This belief that “I am/am not special” has almost no correlation with the evidence, or even with class. I have no evidence of whether the characteristic is inherited or learned but Michael Gove and members of the BNP clearly decided that they were special and deserve to be treated better than other people. Tony Benn, on the other hand, argued himself out of believing that he had a God-given right to a place in the House of Lords. Please let me know why I’m wrong.
Cisco is warning of a flaw that creates conditions susceptible to a DoS attack in its NetFlow Generation Appliance. Source: Cloud Security, “Cisco Warns of High Severity Bug in NetFlow Appliance”
All the Birmingham-flavoured tech content on this page is supplied by local bloggers:
Want your blog's content featured here?
For information on submitting your blog for inclusion on this list, visit our blog submission page on Birmingham.IO.
All content, trademarks, artwork, and associated imagery are trademarks and/or copyright material of their respective owners. All rights reserved.