Last updated: October 14, 2019 01:22 PM (All times are UTC.)

October 11, 2019

Reading List 240 by Bruce Lawson (@brucel)

Hello, you kawaii kumquat! Here’s this week’s lovely list ‘o’ links for your reading pleasure. It’s been a while because I’ve been gallivanting around Japan, Romania and the Netherlands, so this is a bumper edition.

There won’t be a reading list next week as I’m going off the grid to read books and record music in a 400-year-old farmhouse in the countryside, far from WiFi and the bleeps and bloops of notifications. Until next time, hang loose and stay groovy.

October 09, 2019

October 04, 2019

October 03, 2019

As a software engineer, it’s easy to get work engineering software. Well, maybe not easy, but relatively so: that is the kind of work that comes along most. The kind of work that people are confident I can do. That they can’t do, so would like me to do for money.

It’s also usually the worst work available.

I don’t want to take your shopping list of features, give you a date and a cost, then make those features. Neither of us will be very happy, even if it goes well.

I want to get an understanding of your problem, and demonstrate how software can help in solving it. Maybe what we need to understand isn’t the problem you presented, but the worse problem that wasn’t on your mind. Or the opportunity that’s worth more than a solution to either problem.

Perhaps we ask a question, in solving your problem, to which the answer is that we don’t know, and now we have another problem.

You might not need me to build all of the features you thought of, just one of them. Perhaps that one works better if we don’t build it, but configure something that already exists. Or make it out of paper.

You understand your problem and its domain very well. I understand software very well. Let’s work together on combining that expertise, and both be happier in the process.

September 20, 2019

Recently I was looking at improving the performance of rendering a Space Syntax OpenMapping layer in MapServer. The openmapping_gb_v1 table is in a PostGIS database with an index on the geometry column. The original MapServer DATA configuration looked like this:

DATA "wkb_geometry from (select meridian_class_scale * 10 as line_width, * from openmapping …

September 19, 2019

[:ChangeLog
(:v0.2
"Recognise that 'resolving dependencies' and 'build' are different operations."
"Add conversion tools between project.clj & deps.edn")
(:v0.3
"Remove CLI tools described as a build tool comment.")]

WARNING
I still have much to learn about ClojureScript tooling but I thought I’d share what (I think) I’ve learned, as I have found it difficult to locate advice for beginners that is still current. This is very incomplete. It may stay that way or I may update it into a living document. I don’t actually have much advice to give and it’s only about the paths that have interested me.

Clojure development requires:

  • a text editor,

and optionally,

  • a REPL, for a dynamic coding environment
  • dependency and build tool(s).

The absolute minimum Clojure environment is a Java .jar file, containing the clojure.main/main entry point, which can be called with the name of your file.clj as a parameter, to read and run your code. I don’t think anyone does that, after they’ve run it once to check it flies.
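For the record, that bare-bones invocation looks something like this (assuming the release jar sits in the current directory and is named clojure.jar — adjust to taste):

```shell
# run a Clojure source file straight from the release jar
java -cp clojure.jar clojure.main hello.clj
```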

Based on 2 books, ‘Clojure for the Brave & True’ and ‘Living Clojure’, my chosen tools are emacs for editing, with CIDER connecting a REPL, and Leiningen as dependency & build tool. ‘lein repl’ can also start a REPL.
Boot is available as an alternative to Leiningen but I got the impression it might be a bit too ‘exciting’ for a Clojure noob like me, so I haven’t used it yet.
CIDER provides a client-server link between an editor (I’m learning emacs) and a REPL.

If you use Leiningen, it comes with a free set of assumptions about development directory structure and the expectation that you will create a file, project.clj, in the root directory of each ‘project’, containing a :dependencies vector. Then magic happens. If you change the dependencies of your project, the config fairies work out everything else that needs changing.
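A minimal project.clj sketch, for the curious (the project name and versions here are illustrative, not from any real project):

```clojure
;; project.clj — Leiningen reads this from the project root
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.10.0"]]
  :main my-app.core)
```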

Next, I wanted to start using ClojureScript (CLJS.) I assumed that the same set of tools would extend. I was wrong to assume.
Unfortunately, CLJS tooling is less standardised and doesn’t seem to have reached such a stable state.

In ‘Living Clojure’, Carin Meier suggests using cljsbuild. It uses the lein-cljsbuild plugin and the command:

lein cljsbuild auto

to start a process which automatically re-compiles whenever a change is saved to the cljs source file. If the generated JavaScript is open in a browser, then the change will be shown in the browser window. This is enough to get you going. It is my current state.
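For anyone following along, the plugin is configured via a :cljsbuild entry in project.clj. Something like this (the paths, names and versions are illustrative assumptions, not lifted from ‘Living Clojure’):

```clojure
;; project.clj with a minimal lein-cljsbuild configuration
(defproject hello-cljs "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.10.0"]
                 [org.clojure/clojurescript "1.10.520"]]
  :plugins [[lein-cljsbuild "1.1.7"]]
  ;; each build maps cljs source paths to a compiled JavaScript output file
  :cljsbuild {:builds [{:source-paths ["src"]
                        :compiler {:output-to "resources/public/js/main.js"
                                   :optimizations :whitespace}}]})
```

With that in place, ‘lein cljsbuild auto’ watches the source paths and recompiles on save.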

I’ve read that there are other tools such as Figwheel, now transitioning to ‘Figwheel Main’, which hot-load the compiled code into the browser as you change it.
There is a lein-figwheel as well as a lein-cljsbuild, which at least sounds like a drop-in replacement. I suspect it isn’t that simple.

There are several REPLs, though there seems to be some standardisation around nrepl.
It was part of the Clojure project but now has its own nrepl/nrepl repository on GitHub. It is used by Clojure, ‘lein repl’ and by CIDER.

There is something called Piggieback which adds CLJS support to nREPL. There is a CIDER Piggieback and an nREPL Piggieback. I have NO IDEA! (yet.)
shadow-cljs exists. Sorry, that’s all I have there too.

At this point in my confusion, a dependency issue killed my tool-chain.
I think one of the config fairies was off sick that day. The fix was a re-install of an emacs module. This forced me to explore possible reasons. I discovered the Clojure ‘Getting Started’ page had changed (or I’d never read it.)
https://clojure.org/guides/getting_started

There are also (now?) ‘Deps and the CLI Tools’ https://clojure.org/guides/deps_and_cli and https://clojure.org/reference/deps_and_cli

I think these are new and I believe they are intended to be the beginners’ entry point into Clojure development, before you dive into the more complex tools. There are CLI commands: ‘clojure’, and a wrapper that provides line-editing, ‘clj’,
and a file called ‘deps.edn’ which specifies the dependencies, much as project.clj’s :dependencies vector does for Leiningen but with a different syntax.
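A minimal deps.edn sketch (the version shown is illustrative):

```clojure
;; deps.edn — dependencies are a map of library to coordinate,
;; here resolved from Maven via :mvn/version
{:paths ["src"]
 :deps {org.clojure/clojure {:mvn/version "1.10.0"}}}
```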

I’m less clear if these are also intended as new tools for experts, to be used by ‘higher order’ tools like Leiningen and Figwheel, or whether they will be adopted by those tools.

[ On the day I wrote this, I had a tip from didibus on clojureverse.org that there are plugins for Leiningen and Boot to use an existing deps.edn,

so perhaps this is the coming future standard for specifying & resolving dependencies, while lein and boot continue to provide additional build capabilities. deps.edn refers to Maven. I discovered elsewhere that Maven references existed but were hidden away inside Leiningen. It looks like I need to learn a little about Apache Maven. I didn’t come to Clojure via Java but I can see the advantages to Java practitioners of using their standard build tool. I may need to drop down into Java one day, so I guess I may as well learn about Java-land now.

Also via: https://clojureverse.org/t/has-anyone-written-a-tool-to-spit-out-a-deps-edn-from-a-project-clj/2086, there is https://github.com/hagmonk/depify, which ‘goes the other way’, trying its best to convert a project.clj to a deps.edn. Hopefully that would be a ‘one-off’? ]

I chose the Clojure language for its simplicity. The tooling journey has been longer than I expected, so I hope this information cuts some corners for you.

[ Please let me know if I’m wrong about any of this or if there are better, current documents that I should read. ]

September 13, 2019

Reading List 239 by Bruce Lawson (@brucel)

Hello, you cheeky strawberry! Here’s this week’s lovely list ‘o’ links for your reading pleasure.

There won’t be a reading list for a few weeks as I’m writing this from a train to London, commencing a 3 week jaunt around conferences in Japan and Europe. Until next time, hang loose and stay groovy.

September 12, 2019

September 02, 2019

I use the free version of the excellent Mailchimp for WP plugin to allow visitors to this site to sign up for my Magical Brucie Mail and get my Reading List delivered to their inboxes.

When I did my regular accessibility check-up (a FastPass with the splendid Accessibility Insights for Web Chromium plugin by Microsoft) I noticed that the Mailchimp signup form fails two WCAG guidelines:

label: Form elements must have labels (1) WCAG 1.3.1, WCAG 3.3.2

This is because the out-of-the-box default form doesn’t associate its email input with a label:


<p>
	<label>Email address:</label>
	<input type="email" name="EMAIL"  required />	
</p>

I’ve raised an issue on GitHub. Update, 6 Sept: the change was turned down.

Luckily, the plugin allows you to customise the default form. So I’ve configured the plugin to associate the label and input by nesting the input inside the label. (This is more robust than using the IDref way because it’s not susceptible to Metadata partial copy-paste necrosis.) (I also killed the placeholder attribute because I think it’s worthless on a single-input form.)
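For comparison, the IDref way ties the two together with matching for/id attributes, like this (the id value here is illustrative, not Mailchimp’s actual markup):

```html
<p>
	<label for="signup-email">Email address:</label>
	<input type="email" name="EMAIL" id="signup-email" required />
</p>
```

If either attribute gets lost in a copy-paste, the association silently breaks, which is why I prefer nesting.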

You can do this by choosing “Mailchimp for WP” in your WordPress dashboard’s sidebar, choosing “Form” and then over-riding the default with this:


<p>
	<label>Email address: 
	<input type="email" name="EMAIL"  required />
	</label>
</p>

<p>
	<input type="submit" value="Sign up" />
</p>

And, yay!

August 30, 2019

Reading List 238 by Bruce Lawson (@brucel)

Bit of a plug: I’m co-curating and MCing JSCamp – a one-day JavaScript conference in Bucharest, Romania on 24th of September. It’s the conference I want to attend – not full of frameworks and shinies, but full of funny, thought-provoking talks about making the Web better. The speaker line-up is cosmic, maaaan. Bucharest is a lovely city, based on Paris, accommodation and food are cheap, and it’s very easy to get to from anywhere in Europe. Come along, or tell your friends! Or both! (And no, I’m not on a percentage!)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way.

August 28, 2019

I just had the misfortune to read the following:

“Would you accept a good PR for Hitler?” -> well, for me, if it’s good code, why not?

This was written by somebody deliberately invoking an extreme example in their argument that we ought to “leave politics at the door” when it comes to tech, but it’s a sentiment that I have seen repeated too many times over the past few days.

I can’t help but notice that pleas to leave politics at the door are almost invariably uttered by people like me: white, male, secure in our positions.

So yes, the person who wrote that was being deliberately extreme by using Hitler as an example, but let’s run with it.

Let’s be very generous and assume the best. Let’s say that the contributor in question, despite being a genocidal ghoul, is going to be on their best behaviour. No inciting violence in repository issues. No hateful rhetoric in code reviews. No fascist imagery in giant ASCII code comments.

Do you really not see how working alongside this person might make someone uncomfortable?

Put yourself in the shoes of someone Hitler doesn’t like very much. It’s a big list, so this step shouldn’t take a lot of imagination. Now imagine that you and Hitler are contributing to the same project. You check your e-mails and see their name. They’re reviewing your code. They’re talking to your friends. You have to trust them to be impartial and leave their attempts to harm you to outside of project hours.

Imagine that if you speak up about how horrible this feels, you get told to pipe down. Gosh, why do you have to make this so political?

Now imagine that the other people on the project are perfectly happy to work with them. He writes good code, so his presence doesn’t bother them. Boy, they sure are getting along. Just Hitler and other members of your community. Getting along like a house on fire.

The fact that he doesn’t view you as a person isn’t important because it’s not relevant to the project. Don’t bring it up. You’re making it political.

And that’s the best case, where your antagonist doesn’t use their influence in the project to harm you. The best case.

Must feel real good.

August 23, 2019

Reading List 237 by Bruce Lawson (@brucel)

Bit of a plug: I’m co-curating and MCing JSCamp – a one-day JavaScript conference in Bucharest, Romania on 24th of September. It’s the conference I want to attend – not full of frameworks and shinies, but full of funny, thought-provoking talks about making the Web better. The speaker line-up is cosmic, maaaan. Bucharest is a lovely city, based on Paris, accommodation and food are cheap, and it’s very easy to get to from anywhere in Europe. Come along, or tell your friends! Or both! (And no, I’m not on a percentage!)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way.

August 21, 2019

In the beginning, there was the green field. The lead developer, who may have been the only developer, agreed with the product owner (or “the other member of the company” as they were known) what they would build for the first two weeks. Then File->New Project… happened, and they smashed it out of the park.

The amorphous and capricious “market” liked what they had to offer, at least enough to win some seed funding. The team grew, and kept the same cadence: see what we need to do for the next ten business days, do it, celebrate that we did it.

As the company, its customers, and its market mature, things start to slow down. It’s imperceptible at first, because velocity stays constant. The CTO can’t help but think that they get a lot less out of a 13-point story than they used to, but that isn’t a discussion they’re allowed to have. If you convert points into time then you’re doing old waterfall thinking, and we’re an agile team.

Initially the dysfunction manifests in other ways. Developers complain that they don’t get time to refactor, because “the business” doesn’t understand the benefits of clean code. Eventually time is carved out to clean things up, whether in “hardening sprints” or in effort allocated to “engineering stories”. We are getting as much done, as long as you ignore that less of it is being done for the customers.

Stories become task-sliced. Yes, it’s just adding a button, but we need to estimate the adding a component task, the binding the action task, the extending the reducer task, the analytics and management intelligence task. Yes we are getting as much done, as long as you ignore that less of it has observable outcomes.

Rework increases too, as the easy way to fit a feature into the code isn’t the way that customers want to use it. Once again, “the business” is at fault for not being clear about what they need. Customers who were previously flagship wins are now talked about as regressive laggards who don’t share the vision. Stories must have clearer acceptance criteria, the definition of done must be more explicit: but obviously we aren’t talking about a specification document because we’re an agile team. Yes we’re getting as much done, as long as you ignore that a lot of what we got done this fortnight was what we said we’d done last fortnight.

Eventually forward progress becomes near zero. It becomes hard to add new features, indeed hard even to keep up with the competitors. It’s only two years ago that we were five years ahead of them. People start demoing new ideas in separate apps, because there’s no point dreaming about adding them to our flagship project. File->New Project… and start all over again.

What happened to this team? Or really, to these teams, as I’ve seen this story repeated over and over. They misread “responding to change over following a plan” as “we don’t need no stinking plan”.

Even if you don’t know exactly where you are going at any time, you have a good idea where you think you’re going. It might be spread around the company, which is why we need the experts around the table. Some examples of where to find this information:

  • The product owner has a backlog of requested features that have yet to be built.
  • The sales team have a CRM indicating which prospects are hottest, and what they need to offer to close those deals.
  • The marketing director has a roadmap slide they’re presenting at a conference next month.
  • The CTO has budget projections for the next financial year, including headcount changes and how they plan to reorganise the team to incorporate these changes.
  • The CEO knows where they want to position the company in the market over the next two years, and knows which competitors, regulatory changes, and customer behaviours threaten that position and what about them makes them a threat.
  • Countless spreadsheets, databases, and “business intelligence” dashboards across multiple people and departments.

No, we don’t know the future, but we do know which futures are likely and of those, which are desirable. Part of embracing change is to make those futures easier to cope with. The failure mode of many teams is to ignore all futures because we aren’t in any of them yet.
We should be ready for the future we expect, and both humble and adaptable enough to get ready for a different future when things change. Our software should represent our current knowledge of our problem and its solution, including knowledge about likely developments (hey, maybe there’s a reason they call us developers!). Don’t add the things you aren’t going to need, but don’t exclude the possibility of adding them out of spite for a future that may well come to pass.

August 19, 2019

One of the principles behind the manifesto for Agile software development says:

Business people and developers must work
together daily throughout the project.

I don’t like this language. It sets up the distinction between “engineering” and “the business”, which is the least helpful language I frequently encounter when working in companies that make software. I probably visibly cringe when I hear “the business doesn’t understand” or “the business wants” or similar phrases, which make it clear that there are two competing teams involved in producing the software.

Neither team will win. “We” (usually the developers, and some/most others who report to the technology office) are trying to get through our backlogs, produce working software, and pay down technical debt. However “the business” get in the way with ridiculous requirements like responding to change, satisfying customers, working within budget, or demonstrating features to prospects.

While I’ve long pushed back on software people using the phrase “the business” (usually just by asking “oh, which business do you work for, then?”) I’ve never really had a replacement. Now I try to say “experts around the table”, leaving out the information about what expertise is required. This is more inclusive (we’re all experts, albeit in different fields, working together on our common goal), and more applicable (in research software engineering, there often is no “the business”). Importantly, it’s also more fluid, our self-organising team can identify lack of expertise in some area and bring in another expert.

August 17, 2019

Most of what I know about “the economy” is outdated (Adam Smith, Karl Marx, John Maynard Keynes) or incorrect (the news) so I decided to read a textbook. Basic Economics, 5th Edition by Thomas Sowell is clear, modern, and generally an argument against economic regulation, particularly centralised planning, tariffs, and price control. I still have questions.

The premise of market economics is that a free market efficiently uses prices to allocate scarce resources that have alternative uses, resulting in improved standard of living. But when results are compared, they are given in terms of economic metrics, like unemployment, growth, or GDP/GNP. The implication is that more consuming is correlated with a better standard of living. Is that true? Are there non-economic measurements of standard of living, and do they correlate with the economic measurements?

Even if an economy does yield “a better standard of living”, shouldn’t the spread of living standards and the accessibility of high standards across the population be measured, to determine whether the market economy is benefiting all participants or emulating feudalism?

Does Dr. Sowell arrive at his office at 9am and depart at 5pm? The common 40-hour work week is a result of labour unions and legislation, not supply and demand economics. Should we not be free to set our own working hours? Related: is “unemployment” such a bad thing, do we really need everybody to work their forty hours? If it is a bad thing, why not reduce the working week and have the same work done by more people?

Sowell’s argument allows that some expenses, notably defence, are better paid for centrally and collectively than individually. We all get the same benefit from national defence, but even those who are willing to pay would receive less benefit from a decentralised, individually-funded defence. Presumably the same argument can be applied to roads, too, or space races. But where are the boundaries? Why centralised military, say, and not centralised electricity supply, healthcare, mains water, housing, internet service, or food supply? Is there a good “grain size” for such centralising influences (it can’t be “the nation”, because nations vary so much in size and in centralisation/federation) and if so, does it match the “grain size” for a market economy?

The argument against a centralised, planned economy is that there’s too much information required too readily for central planners to make good judgements. Most attempts at a planned economy preceded broad access to the internet and AI, two technologies largely developed through centralised government funding. For example, the attempt to build a planned economy in Chile got as far as constructing a nationwide Telex network before being interrupted by the CIA-funded Pinochet coup. Is this argument still valid?

Companies themselves are centralised, planned economies that allocate scarce resources through a top-down bureaucracy. How big does a company need to get before it is not the market, but the company’s bureaucracy, that is the successful system for allocating resources?

August 16, 2019

Reading List 236 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way.

August 09, 2019

Introduction A Moment of Madness is a project that I’ve been collaborating on with Katie Day of The Other Way Works for over 5 years now!!!  In fact you can even see some of the earlier blog posts on it here: In 2014 back when it was still called ‘Agent in a Box’ In 2016 […]
Introduction This is a loooong overdue post about a collaborative, strategy game I built last summer (2018), SCOOT3.  ‘Super Computer Operated Orchestrations of Time 3’ is a hybrid board game / videogame with Escape Game and Strategy elements designed to be played in teams of up to 10 and takes ~45 mins.  It is one […]

Reading List 235 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way!

August 06, 2019

The first supper by Daniel Hollands (@limeblast)

Toward the back of our garden, just before the newly built workshop, we have a beautiful pergola that is intertwined with a wisteria tree. This provides cover to a patio area that is prime real estate for a dining table, so a few weeks ago I took up the challenge of building one.

The plans I used for the build were created by Shanty2Chic, and are special because they’re designed to be built out of stud timber 2x4s using nothing but a mitre saw and pocket hole jig.

Mitre saw mishap

I’d had the date of the build scheduled in my diary for around three weeks, and made sure I had everything I needed in time. The date of the build was important as we had a garden party scheduled for the following weekend – so imagine my frustration when just a week before, the base on my mitre saw cracked.

Thankfully, the saw has a three year warranty on it, and Evolution were happy to pick it up for repair – and with credit where credit is due, they got it back to me in time for the build – even if there was a slight cock-up when shipping it back to me (two saws got swapped during packing, meaning my saw went to someone else, and I received theirs), which Evolution were also quick to rectify.

What Evolution didn’t do, however, was calibrate the saw before sending it back to me, something I didn’t work out until after I’d built the two trestles. I considered scrapping them and starting again after I’d calibrated the saw, but decided they were stable enough, even if a little wonky.

At least I know how to calibrate the mitre saw now, and have learned a valuable lesson about being square.

Pocket hole jig

Apparently, pocket hole jigs are a bit divisive in the woodworking community, and are often viewed as “cheating” or “not real woodworking” by the elite. Steve Ramsey has recently put out a video highlighting this nonsense for what it is, which I’m really happy about, as I’d hate for someone to be shamed out of using a perfectly suitable tool for a job based on the idiotic opinions of the elite minority.

If it’s stupid but it works, it isn’t stupid. (Murphy’s Law)

That said, other than briefly seeing one at The Building Block, and watching makers use them in YouTube videos, I’d never used one myself, which is why I asked my parents for one as a birthday present.

Kreg are the brand which seem most popular, and I very nearly asked for the K5 Master System, but after doing some research I decided to go for the Trend PH/JIG/AK. Partly because they’re a British company, but mostly because it’s made out of aluminium, rather than plastic like the Kreg jigs, which should make it more durable.

For obvious reasons, I can’t compare it with any other jigs, but I can say that the kit came with everything I needed, including a clamp, some extension bars to help hold longer pieces of wood, and a small sample of screws, all housed in a robust case. Because I’d need a lot more screws than it came supplied with, I also picked up the PH/SCW/PK1 Pocket Hole Screw Pack (one of the few things I didn’t buy from Amazon, as it’s listed for half the price at Trend Direct).

I found using the jig to be easy and it worked perfectly, even if drilling nine pieces of wood five times each became a little tedious. My only complaint is the square driver screws, which are apparently designed to avoid cam out, but cammed out a lot anyway. Maybe I was doing it wrong?

The build

Other than the calibration and squareness issues mentioned above, I think the build went well. I’m a lot more confident with my skills now than I was a year ago, although it’s obvious I’ve got a lot to learn.

Although the plans did have an accompanying video, it served as more of an overview and general build video, rather than what I was used to from The Weekend Woodworker (which features much more hands on instruction at each stage). But armed with the knowledge I’d gained in the past year, I felt able to step up to the challenge of reading the plans, and following the instructions myself.

I made some small variations to the plans for the top – specifically, I decided to use the full 2.4-metre length of the studs, rather than cutting them down to 1.9 metres as defined in the plans. This is because we had plenty of space under the pergola, and it would allow additional people to sit at the ends. I also decided to leave the breadboards off, as I think they’re purely decorative in this instance, and I decided it wasn’t worth the extra wood.

I painted it using Cuprinol Garden Shades; Silver Birch for the top, and Natural Stone for the base.

Initially I attempted to use the Cuprinol Spray & Brush unit that we’d picked up to paint our fence, but it didn’t work very well. I think this is because it’s designed to cover much larger surfaces with a lot more paint than I needed, so because I barely filled it with paint, it spluttered as air got into the pipe.

There’s a paint spray gun on sale in Aldi right now, which I think would have been much better suited to the task, but it’s a little bit more than I can afford right now.

Costs

All in all, the total cost was just under £200:

  • A hair under £100 for the lumber, which I got from Wickes. The plans called for 17 studs, so I ordered 20 (with the extra three acting as fuck-up insurance), and only ended up using 16 of them.
  • £85 for the chairs, which were 25% off at Homebase due to an end-of-season sale.
  • around £35 for the paint, screws and glue, etc.

This sounded like quite a lot to me at first, but after seeing that Homebase are selling a vaguely comparable table for £379 without the chairs, it doesn’t seem too bad after all.

Conclusion

I’m happy with how it turned out, and I think it looks great under the pergola. If I was to do it again, I’d make it slightly shorter than it is, or buy slightly taller chairs, but that’s a minor issue as it’s still perfectly usable – at the very least I had no complaints during the party. It seems to have impressed at least one person though, as I might have a commission to build one for someone else in the near future, which would be awesome.

July 30, 2019

I originally wrote this as an answer to a question on Quora but I’m increasingly concerned at the cost of higher education for young people from families that are not wealthy. I had parents who would have sacrificed anything for my education but I had clever friends who were not so fortunate. The system is bleeding talent into dead-end jobs. Below, I consider other models of training as I hope it might start a conversation in the technology community and the political infrastructure that trickles money down into it.

Through learning about ‘Agile’ software development, I became interested in related ‘Lean’ thinking. It borrows from Japanese cultural ideas and the way the martial arts are taught. I think the idea is that first you do, then you learn and finally you understand (as illustrated by the film ‘Karate Kid’.) That requires a ‘master’ or ‘Sensei’ to guide and react to what s/he sees about each individual’s current practice. It seems a good model for programming too. There may be times when doing is easier if you gain some understanding before you ‘do’ and advice and assistance with problem solving could be part of this. I’m not alone in thinking this way, as I see phrases like “kata” and “koans” appearing around software development.

I’ve also seen several analogies to woodworking craft which suggests that a master-apprentice relationship might be appropriate. There is even a ‘Software Craftsmanship’ movement. This could work as well in agile software development teams, as it did for weavers of mediaeval tapestries.

A female Scrum Master friend assures me that the word “master” is not gendered in either of these contexts. Of course, not all great individual crafts people make good teachers but teams with the best teachers would start to attract the best apprentices.

If any good programmers aren’t sure about spending their valuable developer’s time teaching, I recommend the “fable in novella form” Jonathan Livingston Seagull, written by Richard Bach, about a young seagull that wants to excel at flying.

Small software companies ‘have a lot on’ but how much would they need to be paid to take on an apprentice in their development teams, perhaps with weekly day-release to a local training organisation? I’d expect a sliding scale to employment as they became productive or were rejected back into the cold, hard world if they weren’t making the grade.

July 29, 2019

July Catchup by James Nutt (@jsrndoftime)

New Job

I started a new job. While my responsibilities and skills have changed a lot since I started my career in 2011, this is actually the first time I have moved company. Just shy of eight years in the same place. That’s basically a millennium in tech years. Long enough that I felt I was long overdue a change. A big change, with lots of big emotions attached.

While it had been on the cards for a fair while, the decision to pull the trigger on switching jobs ended up being something of an impulse. A friend prodded me about the opening on Slack, reckoning I might be a good fit, and the time between applying and signing the contract was really short. Short enough that it hadn’t really hit me until a good week or so afterwards what I had done.

Still, I’m massively enjoying the new job, the new team, the new toys, and the new tech.

New Rig

One of the many perks of this new job is that they’ve got me going on a fancy new MacBook Pro 2018. I’d never used a MacBook before, so the majority of my initial interactions with my new colleagues, who I am desperate to impress, were along the lines of “how do I change windows on this thing?” A stellar first impression. I definitely know how to perform basic computing tasks. Honestly.

First Conference

A Brighton Ruby branded keep cup

On the 5th July I took the day off work to go down to Brighton for my first programming conference, Brighton Ruby. Not much to say about it other than that it was a blast, I met some nice people, learned a few very cool things and hope to go back next year.

And there’s a street food market up the road from the Brighton Dome that does a mean jerk chicken.

New Bod

Just kidding. But I have started going to the gym again. You know, you reach a point where you wake up and your back already hurts and you just go “nah”.

New Gems

I’ve gotten to play with some great new Ruby gems lately that I think are worth sharing.

  • VCR - Record your test suite’s HTTP interactions and replay them during future test runs for fast, deterministic, accurate tests.
  • Shoulda Matchers - Simple one-liner tests for common Rails functionality.
  • Byebug - Byebug is a simple to use and feature rich debugger for Ruby.

Good Reads

July 26, 2019

There are only so many pens, bottle openers, pin badges, tote bags, water bottles, usb drives or beer mats that anyone needs. And that threshold has long since been met and surpassed. Time for something more interesting.

July 25, 2019

My first rails app by Graham Lee

I know, right? I first learned how to rails back when Rails 3 was new, but didn’t end up using it (the backend of the project I was working on was indeed written in Rails, but by other people). Then when I worked at Big Nerd Ranch I picked up bits and pieces of knowledge from the former Highgroove folks, but again didn’t use it. The last time I worked on a real web app for real people, it was in node.js (and that was only really vending a React SPA, so it was really in React). The time before that: WebObjects.

The context of this project is that I had a few days to ninja out an end-to-end concept of a web application that’s going to be taken on by other members of my team to flesh out, so it had to be quick to write and easy to understand. My thought was that Rails is stable and trusted enough that however I wrote the app, with roughly no experience, it would not diverge far from however anyone else with roughly no experience would write it, so there wouldn’t be too many surprises. That the testing story for Rails is solid, that websites in Rails are a well-understood problem.

Obviously I could’ve chosen any of a plethora of technologies and made my colleagues live with the choice, but that would potentially have sunk the project. Going overly hipster with BCHS, Seaside or Phoenix would have been enjoyable but left my team-mates with a much bigger challenge than “learn another C-like OOP language and the particular conventions of this three-tier framework”. Similarly, on the front end, I just wrote some raw JS that’s served by Rails’s asset pipeline, with no frameworks (though I did use Rails.ajax for async requests).

With a day and a half left, I’m done, and can land some bonus features to reduce the workload for my colleagues. Ruby is a joy to use, although it is starting to show some of the same warts that JS suffers from: compare the two ways to make a Ruby hash with the two ways to write JS functions. The inconsistency over brackets around message sends is annoying, too, but livable.

Weirdly, testing in Rails seems to only be good for testing Ruby, not JS/CoffeeScript/whatever you shove down the frontend. I ended up using the teaspoon gem to run JavaScript tests using Jasmine, but it felt weird having to set all that up myself when Rails goes out of its way to make tests for you in Ruby-land. Yes, Rails is in Ruby. But Rails is a web framework, and JS is a necessary evil on the web.

Most of my other problems came from the incompatibility of Ruby versions (I quickly gave up on rvm and used Docker, writing a small wrapper script to run the CD pipeline and give other devs commands like ‘build’, ‘test’, ‘run’, ‘stop’, ‘migrate’) and the changes in the Rails API between versions 3-5. A lot of content on blogs[*] and Stack Overflow doesn’t specify the version of Rails or Ruby it’s talking about, so the recommendations may not work the same way.

[*] I found a lot of Rails blogs that just reiterate examples and usage of API that’s already present in the rdoc. I don’t know whether this is SEO poisoning, or people not knowing that the official documentation exists, or there being lots of low-quality blogs.
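The wrapper-script approach described above might look something like this minimal sketch; the image name "myapp" and the specific Rails commands are assumptions for illustration, not the actual script:

```shell
#!/bin/sh
# dev.sh: a thin wrapper so other devs don't need to know the Docker invocations.
# "myapp" and the bin/rails commands are illustrative assumptions.
dev() {
  case "$1" in
    build)   docker build -t myapp . ;;
    test)    docker run --rm myapp bin/rails test ;;
    run)     docker run --rm -p 3000:3000 myapp ;;
    stop)    docker stop myapp ;;
    migrate) docker run --rm myapp bin/rails db:migrate ;;
    *)       echo "usage: dev {build|test|run|stop|migrate}"; return 1 ;;
  esac
}
```

A real script would end with `dev "$@"` so that it can be invoked as `./dev.sh build`, `./dev.sh test` and so on.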

But overall, Railsing was fun and got me quickly to my destination.

July 23, 2019

‘Looming Hell by Daniel Hollands (@limeblast)

I don’t remember what I was doing when I first stumbled upon The Interlace Project – a “practice-based research project that combines the traditional manufacturing techniques of spinning and weaving with emergent e-textile technologies” – and ordinarily I wouldn’t have given it another thought, but as I’d had some exposure to weaving courtesy of my friend Emma, I figured I’d investigate further.

Keeping true to their premise of “Open Source Weaving”, they offer two loom designs for download, along with tutorials on how to build them and instructions on how to use them.

The Frame loom is a simple yet efficient design which lets you laser cut all the components you need out of a sheet of 3mm MDF measuring no more than 15x20cm. By contrast, the Rigid Heddle Loom is a more complex affair requiring more, though still readily available, materials to build.

I sent the link to Emma, asking if it was something she’d be interested in, and to no one’s surprise she immediately responded that she’d love to have the Rigid Heddle loom. I countered with the offer of building the Frame loom instead.

Thanks to my membership of Cheltenham Hackspace I had access to a laser cutter, but even though I’ve used one before I’d forgotten most of what I’d previously learned. Thankfully, everyone I’ve met at the space has been really nice and helpful, and James, one of the directors, was happy to spend a couple of hours one Wednesday evening showing me how it worked.

The design gets loaded into the laser cutter software and modified to match the colours required for each of the three functions it’s capable of: red for vector cutting, blue for vector etching, and black for raster etching. Apparently vector etching isn’t very reliable, so it was recommended to avoid it if possible.

Unlike the last laser cutter I used, which was able to calculate the speed and intensity of the laser automatically based on the material settings you chose, this one needed you to set these values manually. Thankfully there was a chart of all the laser cutter compatible materials available and their relevant settings. There was also a chart of all the materials which must not be used in the laser cutter (did you know PVC emits chlorine gas when cut with a laser?)

I must admit, I pretty much just stood in awe as James configured everything on the computer, placed the sheet of MDF in the machine, aligned the laser head, and started the first of three runs. The cutting was done in three runs, working from the innermost components outward; otherwise an outer cut could cause the middle of the sheet to drop slightly, leaving the laser out of focus and possibly spoiling the cut.

All in all, the cutting process took about 16 minutes, and cost the princely sum of £3.60 (£2 for the 60x40cm sheet of MDF, the vast majority of which remained unused, and £1.60 for the laser time).

Much like the Inkle loom, I have no idea how this works, but Emma does, so I’ll send it to her shortly, and will post updates of her creations in the future.

July 19, 2019

Reading List 234 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way!

July 18, 2019

My Delicious Library collection just hit 1,000 books. That’s not so big, it’s only a fraction of the books I’ve read in my life. I only started cataloguing my books a few years ago.

What is alarming about that is that most of the books are in my house, and most are in physical form. I read a lot, and the majority of the time I’m reading something I own. The reason it’s worrying is that these books take up a lot of space, and cost a lot of money.

I’ve had an on-again, off-again relationship with ebooks. Of course they take up less space, and are more convenient when travelling. The problems with DRM and ownership mean that I tend to only use ebooks now for books from Project Gutenberg or the internet archive, and PDFs of scholarly papers.

And not even that second one, due to the lack of big enough readers. For a long time I owned and enjoyed a Kindle DX, with a screen big enough that a typical magazine page was legible without zooming in. Zooming in on a columnar page is horrific. It’s like watching a tennis match through a keyhole. But the Kindle DX broke, is no longer a thing, and has no competitors. I don’t enjoy reading on regular computer screens, so the option of using a multipurpose tablet is not a good one.

Ebooks also suffer from being out of sight and out of mind. I actually bought some bundle of UX/HCI/design books over a year ago, and have never read them. When I want to read, I look at my pile of unread books and my shelves. I don’t look in ~/Documents/ebooks.

I do listen to audiobooks when I commute, but only when I commute. It’d be nice to have some kind of multimodal reader, across a “printed” and “spoken” format. The Kindle text-to-speech was not that, when I tried it. Jeremy Northam does a much better job of reading out The Road to Wigan Pier than an automated speech synthesiser does.

The technique I’m trying at the moment involves heavy use of the library. I’m a member of both the local municipal library and a big university library. I subscribe to a literary review magazine, the London Review of Books. When an article in there intrigues me, I add the book to the reading list in the library app. When I get to it, I request the book.

That’s not necessarily earth-shattering news. Both public and subscription libraries have existed for centuries. What’s interesting is that for this dedicated reader and technology professional, the digital revolution has yet to usurp the library and its collection of bound books.

July 15, 2019

Love them or hate them, PDFs are a fact of life for many organisations. If you produce PDFs, you should make them accessible to people with disabilities. With Prince, it’s easy to produce accessible, tagged PDFs from semantic HTML, CSS and SVG.

It’s an enduring myth that PDF is an inaccessible format. In 2012, the PDF profile PDF/UA (for ‘Universal Accessibility’) was standardised. It’s the U.S. Library of Congress’ preferred format for page-oriented content and the International Standard for accessible PDF technology, ISO 14289.

Let’s look at how to make accessible PDFs with Prince. Even if you already have Prince installed, grab the latest build (think of it as a stable beta for the next version) and install it; it’s a free license for non-commercial use. Prince is available for Windows, Mac, Linux and FreeBSD desktops, and wrappers are available for Java, C#/.NET, ActiveX/COM, PHP, Ruby on Rails and Node/JavaScript for integrating Prince into websites and applications.

Here’s a trivial HTML file, which I’ve called prince1.html.

<!DOCTYPE html>
<html>
<meta charset=utf-8>
<title>My lovely PDF</title>
<style>
        h1 {color:red;}
        p {color:green;}
</style>
<h1>Lovely heading</h1>
<p>Marvellous paragraph!</p>
</html>

From the command line, type

$ prince prince1.html

Prince has produced prince1.pdf in the same folder. (There are many command line switches to choose the name of the output file, combine files into a single PDF etc., but that’s not relevant here. Windows fans can also use a GUI.)

Using Adobe Acrobat Pro, I can inspect the tag structure of the PDF produced:

Acrobat screenshot: no tags available

As you can see, Acrobat reports “No Tags available”. This is because it’s perfectly legitimate to make inaccessible PDFs – documents intended only for printing, for example. So let’s tell Prince to make a tagged PDF:

$ prince prince1.html --tagged-pdf

Inspecting this file in Acrobat shows the tag structure:

Acrobat screenshot showing tags

Now we can see that under the <Document> tag (PDF’s equivalent of a <body> element), we have an <H1> and a <P>. Yes, PDF tags often —but not always— have the same name as their HTML counterparts. As Adobe says

PDF tags are similar to tags used in HTML to make Web pages more accessible. The World Wide Web Consortium (W3C) did pioneering work with HTML tags to incorporate the document structure that was needed for accessibility as the HTML standard evolved.

However, the fact that the PDF now has structural tags doesn’t mean it’s accessible. Let’s try making a PDF with the PDF/UA profile:

$ prince prince1.html --pdf-profile="PDF/UA-1"

Prince aborts, giving the error “prince: error: PDF/UA-1 requires language specification”. This is because our HTML page is missing the lang attribute on the HTML element, which tells assistive technologies which language the text is written in. This is very important to screen reader users, for example; the pronunciation of the word “six” is very different in English and French.

Unfortunately, this is a very common error on the Web; WebAIM recently analysed the accessibility of the top 1,000,000 home pages and discovered that a whopping 97.8% of home pages had detectable accessibility failures. A missing language specification was the fifth most common error, affecting 33% of sites.

Screenshot from WebAIM showing the most common accessibility errors on the top million home pages. Image courtesy of webaim.org, © WebAIM, used by kind permission.

Let’s fix our web page by amending the HTML element to read <html lang=en>.

Now it princifies without errors. Inspecting it in Acrobat Pro, we see a new <Annot> tag has appeared. Right-clicking on it in the tag inspector reveals it to be the small Prince logo image (that all free licenses generate), with alternate text “This document was created with Prince, a great way of getting web content onto paper”:

Acrobat screenshot with annotation on the Prince logo added with free licenses

Generating the <Annot> with alternate text, and checking that the document’s language is specified, allows us to produce a fully-accessible PDF, which is why we generally advise using the --pdf-profile="PDF/UA-1" command line switch rather than --tagged-pdf.

Adobe maintains a list of Standard PDF tags, most of which can easily be mapped by Prince to HTML counterparts.

Customising Prince’s default mappings

Prince can’t always map HTML directly to PDF tags. This could be because there isn’t a direct counterpart in HTML, or it could be because the markup and its styling conflict.

Let’s look at the first scenario. HTML has a <main> element, which doesn’t have a one-to-one correspondence with a single PDF tag. On many sites, there is one article per document (a wikipedia entry, for example), and it’s wrapped by a <main> element, or some other element serving to wrap the main content.

Let’s look at the wikipedia article for stegosaurus, because it is the best dinosaur.

We can see from browser developer tools that this article’s content is wrapped with <div id="bodyContent">. We can tell Prince to map this to the PDF <Art> tag, defined as “Article element. A self-contained body of text considered to be a single narrative”, by adding a declaration in our stylesheet:

#bodyContent { prince-pdf-tag-type: Art; }

On another site, we might want to map the <main> element to <Art>. The same method applies:

main { prince-pdf-tag-type: Art; }

Different authors’ conventions over the years are one reason why Prince can’t necessarily map everything automatically (although, by default, HTML <article> gets mapped to <Art>).

Therefore, in this new build of Prince, much of the mapping of HTML elements to PDF tags has been moved out of Prince’s internal logic and into the default stylesheet html.css in the style sub-folder. This makes it clearer how Prince maps HTML elements to PDF tags, and allows the author to override or customise it if necessary.

Here is the relevant section of the default mappings:

article { prince-pdf-tag-type: Art }
section { prince-pdf-tag-type: Sect }
blockquote { prince-pdf-tag-type: BlockQuote }
h1 { prince-pdf-tag-type: H1 }
h2 { prince-pdf-tag-type: H2 }
h3 { prince-pdf-tag-type: H3 }
h4 { prince-pdf-tag-type: H4 }
h5 { prince-pdf-tag-type: H5 }
h6 { prince-pdf-tag-type: H6 }
ol { prince-pdf-tag-type: OL }
ul { prince-pdf-tag-type: UL }
li { prince-pdf-tag-type: LI }
dl { prince-pdf-tag-type: DL }
dl > div { prince-pdf-tag-type: DL-Div }
dt { prince-pdf-tag-type: DT }
dd { prince-pdf-tag-type: DD }
figure { prince-pdf-tag-type: Div } /* figure grouper */
figcaption { prince-pdf-tag-type: Caption }
p { prince-pdf-tag-type: P }
q { prince-pdf-tag-type: Quote }
code { prince-pdf-tag-type: Code }
img, input[type="image"] {
    prince-pdf-tag-type: Figure;
    prince-alt-text: attr(alt);
}
abbr, acronym {
    prince-expansion-text: attr(title);
}

There are also two new properties, prince-alt-text and prince-expansion-text, which can be overridden to support the relevant ARIA attributes.

Uncle Håkon shouting at me last month in Paris

Taking our lead from wikipedia again, we might want to produce a PDF table of contents from the ‘Contents’ box. Here is the Contents for the entry about otters (which are the best non-dinosaurs):

screenshot of wikipedia's in-page table of contents

The box is wrapped in an unordered list inside a <div id="toc">. To make this into a PDF Table of Contents (<TOC>), I add these lines to Prince’s html.css (because obviously I can’t touch the wikipedia source files):

#toc ul {prince-pdf-tag-type: TOC;} /*Table of Contents */
#toc li {prince-pdf-tag-type: TOCI;} /* TOC item */

This produces the following tag structure:

Acrobat screenshot showing PDF table of contents based on the wikipedia table of contents

In one of my personal sites, I use HTML <nav> as the wrapper for my internal navigation, so would use these declarations instead:

nav ul {prince-pdf-tag-type: TOC;}
nav li {prince-pdf-tag-type: TOCI;}

Only internal links are appropriate for a PDF Table of Contents, which is why Prince can’t automatically map <nav> to <TOC> but makes it easy for you to do so, either by editing html.css directly, or by pulling in a supplementary stylesheet.

Mapping when semantic and styling conflict

There are a number of tricky questions when it comes to tagging when markup and style conflict. For example, consider this markup which is used to “fake” a bulleted list visually:


<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<title>My lovely PDF</title>
<style>
div div {display:list-item;
    list-style-type: disc;
    list-style-position: inside;}
</style>
<div>

    <div>One</div>
    <div>Two</div>
    <div>Three</div>

</div>

Browsers render it something like this:

what looks like a bulleted list in a browser

But this merely looks like a bulleted list — it isn’t structurally anything other than three meaningless <div>s. If you need this to be tagged in the output PDF as a list (so a screen reader user can use a keyboard shortcut to jump from list to list, for example), you can use these lines of CSS:

body>div {prince-pdf-tag-type: UL;}
div div {prince-pdf-tag-type: LI;}

Prince creates custom OL-L and UL-L tags which are role-mapped to PDF’s list structure tag <L>. Prince also sets the ListNumbering attribute when it can infer it.

Mapping ARIA roles

Often, developers supplement their HTML with ARIA roles. This can be particularly useful when retrofitting legacy markup to be accessible, especially when that markup contains few semantic elements — the usual example is adding role=button to a set of nested <div>s that are styled to look like a button.

Prince does not do anything special with ARIA roles, partly because, as WebAIM reports,

they are often used to override correct HTML semantics and thus present incorrect information or interactions to screen reader users

But by supplementing Prince’s html.css, an author can map elements with specific ARIA roles to PDF tags. For example, if your webpage has many <div role="article"> elements, you can map these to PDF <Art> tags thus:

div[role="article"] {prince-pdf-tag-type: Art;}

Conclusion

As with HTML, the more structured and semantic the markup is, the better the output will be. But of course, Prince cannot verify that alternate text is an accurate description of the function of an image. Ultimately claiming that a document meets the PDF/UA-1 profile actually requires some human review, so Prince has to trust that the author has done their part in terms of making the input intelligible. Using Prince, it’s very easy to turn long documents —even whole books— into accessible and attractive PDFs.

July 12, 2019

July 09, 2019

I’ve just finished teaching a four-day course introducing software engineering for the first time. My plan is to refine the course (I’m teaching it again in October), and it will eventually become the basis for doctoral training programmes in research software engineering at Oxford, and part of a taught Masters. My department already has an M.Sc. in Software Engineering for commercial engineers (in fact I have that degree), and we want to do the same for software engineers in a research context.

Of course, I can also teach your team about software engineering!

Some challenges that came up:

  • I’m too comfortable with the command-line to get people past the initial unfamiliar discomfort. From that perspective, command-line tools are all unusably hard. I’ve learnt from various sources to try foo --help, man foo, and other incantations. Others haven’t.

  • git, in particular, is decidedly unfriendly. What I want to do is commit my changes. What I have to do is stage my changes, then commit my staged changes. As a result, teaching git use takes a significant chunk of the available time, and still leaves confusion.

  • you need to either tell people how to set their core.editor, or how to quit vim.

  • similarly, there’s a world of difference between python foo.py and python3 foo.py, and students aren’t going to interpret the sorts of errors you get if you choose the wrong one.

  • Introduce a tangent, and I run the risk of losing people to that tangent. I briefly mentioned UML while discussing diagrams of objects, as a particular syntax for those diagrams. In the subsequent lab, some people put significant time into making sure their diagrams were valid UML.

  • Finding the trade-off between presentation, tutorial, and self-directed exercise is difficult. I’m used to presentations and will happily talk on many topics, but even I get bored of listening to myself, and I’ve spent roughly half of this course speaking. It must be worse for the students. And there’s no substitute for practical experience, but that must be supported by guidance.

  • There are so many topics that I didn’t get to cover!

    • only having an hour for OOP is a sin
    • which means I didn’t even mention patterns or principles
    • similarly, other design techniques like functional programming got left off
    • principles like Agile Software Development, Software Craftsmanship, or Devops don’t get a mention
    • continuous integration and continuous delivery got left off. Even if they didn’t, the amount of work involved in going from “I have a Python script” to “I run my tests whenever I change my script, and update my PyPI package whenever they pass” is too damn high.
    • forget databases, web servers, browsers, mobile apps, desktop apps, IoT, or anything that isn’t a command line script or a jupyter notebook
    • and machine learning tools
    • and concurrency, processes and process improvement, risk management, security, team dynamics, user experience, accessibility…
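The git and editor friction described above can be condensed into a short shell session. This is a hedged sketch (the file name and identity are illustrative) showing the stage-then-commit two-step, the shortcut for already-tracked files, and the core.editor setting that spares students from having to learn how to quit vim:

```shell
# Create a throwaway repository to play in
cd "$(mktemp -d)"
git init -q demo
cd demo
git config user.email "student@example.com"  # identity required before committing
git config user.name "Student"
git config core.editor nano                  # commit messages open in nano, not vim

echo "print('hello')" > foo.py
git add foo.py                 # step 1: stage the change
git commit -q -m "Add foo"     # step 2: commit the staged change

# For files git already tracks, -a collapses both steps into one:
echo "print('hi')" >> foo.py
git commit -q -am "Update foo"
```

Using `git config --global core.editor nano` instead makes the setting apply to every repository on the machine, which is usually what a student wants.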

It’s only supposed to be a taster but I have to trade off introducing everything with showing the value present in anything. What this shows, as I found when I wrote APPropriate Behaviour, is that there’s a load that goes into being a programmer that is not programming.

July 05, 2019

Reading List 233 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way!

July 04, 2019

This conference season I’ve spoken at some events for non-frontenders, suggesting that people invest time in learning the semantics of HTML. After all, there are only 120(ish) elements; the average two-year-old knows 100 words, and by the time a child is three they will have a vocabulary of over 300 words.

A few people asked me the difference between <article> and <section>. My reply: don’t worry. Simply, don’t use <section>. Its only use is in the HTML Document Outline Algorithm, which isn’t implemented anywhere, and seemingly never will be. For the same reason, don’t worry about the <hgroup> element.

But do use <article>, and not just for blog posts/news stories. It’s not just for articles in the news sense, it’s for discrete self-contained things. Think “article of clothing”, not “magazine article”. So a list of videos should have each one (and its description) wrapped in an <article>. A list of products, similarly. Consider adding microdata from schema.org, as that will give you better search engine results and look better on Apple watches.
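A video list marked up that way might look like this sketch (titles and structure are illustrative, not from any particular site):

```html
<h2>Latest videos</h2>
<article>
  <h3>Introducing semantic HTML</h3>
  <p>A ten-minute tour of the elements to reach for first.</p>
</article>
<article>
  <h3>Styling a gallery with CSS</h3>
  <p>Laying out a video gallery without a framework.</p>
</article>
```

Each <article> is a discrete, self-contained thing: it would still make sense syndicated on its own, which is a decent test for whether the element fits.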

And, of course, do use <main>, <nav>, <header> and <footer>. It’s really useful for screen reader users – see my article The practical value of semantic HTML.

Happy marking up!

June 28, 2019

The Fruit of my Loom by Daniel Hollands (@limeblast)

Following on from my post about the loom I built for my friend Emma, I have an update to share on her progress using it, delivered via the medium of WhatsApp images and the comments she put next to them.

June 27, 2019

For reasons that will become clear, I can’t structure this article as a “falsehoods programmers believe” article, much as that would add to the effect.

There are plenty of such articles in the world, so turn to your favourite search engine, type in “falsehoods programmers believe”, and orient yourself to this concept. You’ll see plenty of articles that list statements that challenge assumptions about a particular problem domain. Some of them list counterexamples, and a subset of those give suggestions of ways to account for the counterexamples.

As the sort of programmer who writes falsehoods programmers believe articles, my belief is that interesting challenges to my beliefs will trigger some curiosity, and lead me to research the counterexamples and solutions. Or at least, to file away the fact that counterexamples exist until I need it, or am otherwise more motivated to learn about it.

But that motivation is not universal. The fact that I treat it as universal turns it into a falsehood I believe about readers of falsehoods articles. Complaints abound that falsehoods articles do not lead directly to fish on the plate. Some readers want a clear breakdown from “thing you might think is true but isn’t true” to “Javascript you can paste in your project to account for it not being true”. These people are not well-served by falsehoods articles.

June 26, 2019

Longer, fuller stacks by Graham Lee

Thinks to self: OK, this “full-stack” project is going to be fairly complex. I need:

  • a database. I don’t need it yet, I’ll defer that.
  • a thing that runs on the server, listens for HTTP requests from a browser, builds responses, and sends them to the browser.
  • a thing that runs on the browser, built out of bits assembled by the server thing, that the user can interact with.

What I actually got was:

  • a thing that runs on the server.
  • a thing that defines the environment for the server.
  • a thing that defines the environment on development machines so that you can run server-development tasks.
  • a thing that turns code that can’t run in a browser into code that can run in a browser.
  • a thing that turns code that can run in a browser into code that does run in real browsers.
  • a headless browser that lets me test browser code.
    • BTW, it doesn’t work with that server environment.
    • a thing that shows how Linux binaries are loaded, to work out how to fix the environment.
    • also BTW, it doesn’t run headless without setting some environment variable
    • a thing that is used for cross-platform native desktop apps, that I can configure to work headless.
  • a thing that builds the bits assembled by the server thing so that the test thing can see the code being tested.

And somehow, people argue that this is about reducing development costs.

June 24, 2019

Freelancing & Dogs

You may be a freelancer thinking about getting a dog. Well, YOU MUST, but with some considerations. Here is a post about Freelancing & Dogs.

Betsy our Cockapoo has been in the family for 3 years now and has added so much joy to my life, but when you work from home, owning a dog does come with its challenges.

Below is a list of random Pros and Cons about owning a dog while being a freelancer.

Pro: You’ll need to walk your dog, which means leaving your studio and getting lots of lovely fresh air and exercise. Turn the walk into a run and get your heart rate up; the dog will sleep forever afterwards too.

Con: Your dog will probably bark at anything or anyone that moves past your house, especially when you’re on an important call. Make sure you have your finger on the mute button.

Pro: When you’re having a tough day they will always be there to make you feel better; without fail your K9 friend will make you laugh or de-stress you with a cuddle or that much-needed attention we all crave.

Con: They want to play 24/7 and often trick you into playing. You’ll be busy, working on that masterpiece of design, and then out of nowhere they’ll pretend they need a wee. You stop what you’re doing, and then they show up at the back door with a tennis ball in their mouth. Cheeky things.

Pro: Clients love dogs. You’ll find if you get talking to a potential client that they have dogs of their own, and you’ll instantly have a common thing to bond over. Bonding always helps land that new client; before you know it you’ll be holding hands walking the dogs.

Con: You can’t just take an in-house project without planning where the dog will stay for the day. Larger dogs can stay at home for long periods of time, but smaller dogs will need loo breaks and walks. This comes at an expense too if you don’t have family nearby; dog walkers are typically £10 an hour.

Pro: They’re really good company, sometimes freelancing can be very lonely with days passing without any meaningful interactions with your clients. Especially if you work from home like me. A dog is a great daily companion.

Con: They want to eat all your food, I’m serious. If you have something nice they want it. If you drop something on the floor, they’ll eat it. If you have a desk snack, they expect some too.

I’m fairly confident nobody is reading this but feel free to tweet me with your own dog-freelance stories. #freelancedogs


June 21, 2019

What’s the worst that could happen?

The importance of releasing code early

As a developer, I think it’s great to be able to see the real-world impact of my work, so when I started at Talis I was very keen to get something into production as quickly as possible. Etsy have previously spoken about how they try to get new engineers to deploy to production on their first day. Within my first few days I had already conducted a bug investigation for a member of the product team, and shortly after made my first production release. Not only is releasing code in the first few days of a new job a great morale boost, but it also removes the fear associated with releasing code. After all, if a new starter can be trusted to release code in the first few days, how scary can it be?

In order to be able to enable new starters to release in their first few days, we have to provide appropriate tooling and resources to make this possible.

Ramp up

New developers at Talis are given a series of ramp up stories. These are generally small bugs or improvements that don’t require a great deal of code change, but do require the set up of a local environment, going through our code review process, using our chat ops to release to an on demand environment before finally releasing to production. The main purpose of these ramp up stories is to get new starters comfortable with releasing code to production. In fact, over the last year, we made 54 releases to our main product in just 117 working days. This doesn’t include the numerous releases we made to the microservices that support our core product.

Specifying the work

Every team at Talis works in 2 week sprints, with the work broken down into multiple stories. The aim is that at the completion of each story we are able to release the work done for that story. In its simplest form, a story consists of a description of the bug or feature along with a proposed solution or desired outcomes. In the fortnightly sprint planning meetings, the team will go through each upcoming story to provide estimates and talk through the proposed solutions or outcomes. Even in my very first planning meeting, I was encouraged to ask questions and make suggestions about the stories we were discussing. Developers can then pick up stories for the current sprint and have all the details needed to complete the story.

Story

The fun stuff

Now that we have a well-defined story with clear outcomes, we’re ready to get on with writing some code. Everyone is free to use the IDE of their choosing (as well as operating system), and we have pretty much every major IDE being used by someone in the office. All that matters is that you have a development environment that you are comfortable with. Pair Programming is commonplace within Talis and is encouraged. This is particularly useful whenever you start work on a project or area of the code you’ve not worked on before.

Peer review

Once I had self-reviewed my work and got a green build on our CI server, it was time to ask for a peer review. I feel like this first review is always daunting for developers - after all, it’s the first time your new colleagues will see your work and you want to make a good impression. Thankfully any fears I had were soon alleviated when I received an approval, and got asked to release it…

Don’t worry about it

At Talis releasing is not a big deal. After all, by the time we get to releasing our code has already been reviewed by another member of the team and automated testing has been performed on our CI server. The first step in our release is to deploy our code to an on-demand environment which mimics our production environment. Creating an on-demand environment is made simple with our chat ops:

<service name> create ondemand <code version>

On our staging environment, we will perform some automated testing using New Relic Synthetics. Once everyone is satisfied that everything is working as expected, then we can deploy to production. This time the process is slightly different and another developer will be required to approve the request (this approval means that a second developer is available should any problems occur during or after the release):

<service name> deploy <code version>

Once this has been approved by a second developer, your code will be deployed to production. At this point, we monitor our throughput and error rates for any anomalies, and if we detect any at this point we can check the logs for more detail and, if required, we can rollback to the previous version by deploying that version using the same command. All this means that even if there is an issue with a release, we can return our systems to a stable state.

Further Reading

Our Engineering Handbook has more details on our general approach to Software Engineering.

June 20, 2019

I am writing a blog post, in which I intend to convince you of my case. A coherent argument must be created, in which the benefits of my view are enumerated. Paragraphs are introduced to separate the different parts of the argument.

The scene was set in the first sentence, so readers know that the actor in the following sentences must be me. Repeating that information would be redundant. Indeed, it was clearly me who set that scene, so no need to mention me at the start of this paragraph. An article in which each sentence is about the author, and not the article’s subject, could be perceived as a sign of arrogance. This perception is obviously performed by the reader of the article, so there is no need to explicitly call that out.

The important features of the remaining sentences in the first paragraph are those relating to the structure of the article. These structural elements are objects upon which I act, so bringing them to the fore in my writing involves suppressing the subject, the actor in the text. I can do this by choosing to use the passive voice.

Unfortunately, grammar checkers throughout the world of computing give the impression that the passive voice is always bad. Millions of people are shown underlining, highlighting, and inline tips explaining that their writing is wrong. Programmers have leaked the abstraction that everything in their world is either 1 or 0 into a world where that does not make sense. Sentences are marked either active (1), meaning correct, or passive (0), meaning incorrect.

Let us apply that to other fields of creative endeavor. Vincent: a starry night is not that brightly colored. 0. You used too much paint on the canvas. 0. Stars are not that big. 0.

Emily: too many hyphens. 0. No need to capitalize “microscope”. 0. Sentence fragment. 0.

June 17, 2019

Last week, I was invited to address the annual conference of the UK Association for Accessible Formats. I found myself sitting next to a man with these two refreshable braille displays, so I asked him what the difference is.

Two similar refreshable braille displays, side by side

On the left is his old VarioUltra 20, which can connect to devices via USB and Bluetooth, and can take a 32GB SD card for offline use (reading a book, for example). It’s also a note-taker. He told me it cost around £2500. On the right is his new Orbit Reader 20, “the world’s most affordable Refreshable Braille Display” with similar functionality, which costs £500.

As he wasn’t deaf-blind, I asked why he uses such expensive equipment, when devices have built-in free screen readers. One of his reasons was, in retrospect, so blazingly obvious, and so human.

He likes to read his kids bedtime stories. With the braille display, he can read without a synthesised voice in his ear. Therefore, he could do all the characters’ voices himself to entertain his children.

My take-home from this: Of course free screen readers are an enormous boon, but each person has their own reasons for choosing their assistive technologies. Accessibility isn’t a technological problem to be solved. It’s an essential part of the human condition: we all have different needs and abilities.

June 12, 2019

June 11, 2019

I’ve just returned from a fantastic week in Copenhagen at the 2019 Ecsite Conference – Pushing Boundaries, hosted at The Experimentarium in Hellerup.  It was my 4th Ecsite, having contributed to previous Ecsite conferences in Graz, Porto and Geneva.  Here are some details from Ecsite 2017 in Porto where in 2 days we built an Audio […]

June 06, 2019

Since starting The Labrary late last year, I’ve been able to work with lots of different organisations and lots of different people. You too can hire The Labrary to make it easier and faster to create high-quality software that respects privacy and freedom, though not before January 2020 at the earliest.

In fact I’d already had a portfolio career before then, but a sequential one. A couple of years with this employer, a year with that, a phase as an indie, then back to another employer, and so on. At the moment I balance a 50% job with Labrary engagements.

The first thing to notice is that going part time starts with asking the employer. Whether it’s your current employer or an interviewer for a potential position, you need to start that conversation. When I first went from full-time to 80%, a few people said something like “I’d love to do that, but I doubt I’d be allowed”. I infer from this that they haven’t tried asking, which means it definitely isn’t about to happen.

My experience is that many employers didn’t even have the idea of part-time contracts in mind, so there’s no basis on which they can say yes. There isn’t really one for “no” either, except that it’s the status quo. Having a follow-up conversation to discuss their concerns both normalises the idea of part-time employees, and demonstrates that you’re working with them to find a satisfactory arrangement: a sign of a thoughtful employee who you want to keep around, even if only some of the time!

Job-swapping works for me because I like to see a lot of different contexts and form synthetic ideas across all of them. Working with different teams at the same time is really beneficial because I constantly get that sense of change and excitement. It’s Monday, so I’m not there any more, I’m here: what’s moved on in the last week?

It also makes it easier to deal with suboptimal working environments. I’m one of those people who likes being in an office and the social connections of talking to my team, and doesn’t get on well with working from home alone (particularly when separated from my colleagues by timezones and oceans). If I only have a week of that before I’m back in society, it’s bearable, so I can consider taking on engagements that otherwise wouldn’t work for me. I would expect that applies the other way around, for people who are natural hermits and would prefer not to be in shared work spaces.

However, have you ever experienced that feeling of dread when you come back from a week of holiday to discover that pile of unread emails, work-chat-app notifications, and meeting bookings you don’t know the context for? Imagine having that every week, and you know what job-hopping is like. I’m not great at time management anyway, and having to take extra care to ensure I know what project C is up to while I’m eyeballs-deep in project H work is difficult. This difficulty is compounded when clients restrict their work to their devices; a reasonable security requirement but one that has led to the point now where I have four different computers at home with different email accounts, VPN access, chat programs, etc.

Also, absent employee syndrome hits in two different ways. For some reason, the median lead time for setting up meetings seems to be a week. My guess is that this is because the timeslot you’re in now, while you’re all trying to set up the meeting, is definitely free. Anyway. Imagine I’m in now, and won’t be next week. There’s a good chance that the meeting goes ahead without me, because it’s best not to delay these things. Now imagine I’m not in now, but will be next week. There’s a good chance that the meeting goes ahead without me anyway, because nobody can see me when they book the meeting so don’t remember I might get involved.

That may seem like your idea of heaven: a guaranteed workaround to get out of all meetings :). But to me, the interesting software engineering happens in the discussion and it’s only the rote bits like coding that happen in isolation. So if I’m not in the room where the decisions are made, then I’m not really engineering the software.

Maybe there’s some other approach that ameliorates some of the downsides of this arrangement. But for me, so far, multiple workplaces is better than one, and helping many people by fulfilling the Labrary’s mission is better than helping a few.

Last week we had the pleasure of welcoming technology and game enthusiast, Alyssa, to our office. Here is what she wrote about her week of...

The post Alyssa’s Work Experience at Stickee appeared first on stickee.

We are delighted to announce that we have won the award for ‘VR Product of the Year’ at the prestigious National Technology Awards. The NTA...

The post stickee wins VR Product of the Year appeared first on stickee.

June 05, 2019

If you’ve used the Rails framework, you will probably recognise this:

class Comment < ApplicationRecord
  belongs_to :article
end

This snippet of code implies three things:

  1. We have a table of comments.
  2. We have a table of articles.
  3. Each comment is related to an article by some ID.

Rails users will take for granted that if they have an instance of the Comment class, they will be able to execute some_comment.article to obtain the article that the comment is related to.

This post will give you an extremely simplified look at how something like Rails’ ActiveRecord relations can be achieved. First, some groundwork.

Modules

Modules in Ruby can be used to extend the behaviour of a class, and there are three ways in which they can do this: include, prepend, and extend. The difference between the three? Where they fall in the method lookup chain.

class MyClass
  prepend PrependingModule
  include IncludingModule
  extend ExtendingModule
end

In the above example:

  • Methods from PrependingModule will be created as instance methods and override instance methods from MyClass.
  • Methods from IncludingModule will be created as instance methods but not override methods from MyClass.
  • Methods from ExtendingModule will be added as class methods on MyClass.
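These differences are visible directly in the method lookup chain, which Ruby exposes through `ancestors`. Here is a minimal, runnable sketch using the names above (the method bodies are mine, added for illustration):

```ruby
module PrependingModule
  def greet
    "prepended"
  end
end

module IncludingModule
  def greet
    "included"
  end
end

module ExtendingModule
  def shout
    "extended"
  end
end

class MyClass
  prepend PrependingModule
  include IncludingModule
  extend ExtendingModule

  def greet
    "from MyClass"
  end
end

# The prepended module sits *before* the class in the chain, so its
# #greet wins; the included module sits after the class, so it doesn't.
MyClass.ancestors.first(3) # => [PrependingModule, MyClass, IncludingModule]
MyClass.new.greet          # => "prepended"
MyClass.shout              # => "extended"
```

Note that ExtendingModule doesn’t appear in MyClass.ancestors at all: extend adds its methods to the class’s singleton class instead.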

We can do fun things with extend.

Executing Code During Interpretation Time

module Ownable
  def belongs_to(owner)
    puts "I belong to #{owner}!"
  end
end

class Item
  extend Ownable
  belongs_to :overlord
end

In the above code, we’re just defining a module and a class. No instance of the class is ever created. However, when we execute just this code in an IRB session, you will see “I belong to overlord!” as the output. Why? Because the code we write while defining a class is executed as that class definition is being interpreted.

What if we re-write our module to define a method using Ruby’s define_method?

module Ownable
  def belongs_to(owner)
    define_method(owner.to_sym) do
      puts self.object_id
    end
  end
end

Whatever we passed as the argument to belongs_to will become a method on instances of our Item class.

our_item = Item.new
our_item.overlord
#  => 70368441222580

Excellent. You may have heard this term before, but this is “metaprogramming”. Writing code that writes code. You just metaprogrammed.

Tying It Together

You might also notice that we’re getting closer to the behaviour that we would expect from Rails.

So let’s say we have our Item class, and we’re making a videogame, so we’re going to say that our item belongs to a player.

class Item
  extend Ownable
  belongs_to :player
end

Our Rails-like system could make some assumptions about this.

  1. There is a table in the database called players.
  2. There is a column in our items table called player_id.
  3. The player model is represented by the class Player.

Let’s return to our module and tweak it based on these assumptions.

module Ownable
  def belongs_to(owner)
    define_method(owner.to_sym) do
      # We need to get `Player` out of `:player`
      klass = owner.to_s.capitalize.constantize
      # We need to turn `:player` into `:player_id`
      foreign_key = "#{owner}_id".to_sym
      # We need to execute the actual query
      klass.find_by(id: self.send(foreign_key))
      # SELECT * FROM players WHERE id = :player_id LIMIT 1
    end
  end
end

class Item
  extend Ownable
  belongs_to :player
end

my_item = Item.first
my_item.player
# SELECT * FROM players WHERE id = 1 LIMIT 1
# => #<Player id: 12>

Neat.
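The whole mechanism also works outside Rails. In this hedged, self-contained sketch, `Object.const_get` stands in for ActiveSupport’s `constantize` and an in-memory registry stands in for the database; it demonstrates the technique, not how ActiveRecord is actually implemented:

```ruby
module Ownable
  def belongs_to(owner)
    define_method(owner) do
      # Plain-Ruby equivalent of `owner.to_s.capitalize.constantize`
      klass = Object.const_get(owner.to_s.capitalize)
      # Plain-Ruby equivalent of `klass.find_by(id: ...)`
      klass.find(send("#{owner}_id"))
    end
  end
end

class Player
  REGISTRY = {}
  attr_reader :id

  def initialize(id)
    @id = id
    REGISTRY[id] = self
  end

  def self.find(id)
    REGISTRY.fetch(id)
  end
end

class Item
  extend Ownable
  belongs_to :player

  attr_reader :player_id

  def initialize(player_id)
    @player_id = player_id
  end
end

overlord = Player.new(12)
item = Item.new(12)
item.player == overlord # => true
```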

June 03, 2019

Ideas and solutions for tech founders on a tight budget

When it comes to building your first product or website you’ll quickly learn how cost is a huge factor in working with the right people – but that shouldn’t stop you from launching your product or website.

Most of my projects start at around £3,000 because of how complex product design can be, the bare fact is that good design takes time and costs money. So what happens if you have a much lower budget?

Here are a few suggestions on how to proceed on a small budget.

Adjust scope

The first thing you can do is adjust your scope: you don’t have to launch a feature-rich product! You can start with a basic MVP that will take less time to design and build. (You can even skip the build and work with a prototype; investors don’t care if it’s built as long as they can see potential.)

Don’t do too much too soon when you can launch and become successful with far less. So try adjusting your scope.

Buy a pre-designed theme / template

Another great option is to buy a pre-designed theme or template. There are thousands of these online.

A theme will help you get a basic look and feel for your product, and while you will have to revisit the design in your business’s future, a theme is a great start and costs less than $100/£100.

https://twitter.com/8obbyanderson/status/1126391995709231105

Lots of options are available to you. For the web you can look at Wix and Squarespace, and most decent hosting packages include free web templates. WordPress has a wealth of excellent themes.

For app design you’ll have to be a little more technical by downloading a GUI kit (pre-designed app screens). You can download these for free (InVision have some amazing ones – look here) or you can pay for more premium UI kits over at Creative Market or UI8.

Another great resource is to give Dribbble.com a search for ‘GUI’ or ‘Free UI Kits’ – the community is very giving! 🙂

Find a cheaper / junior designer

Another great option is to search for a designer who’s new to the industry.

These will typically be college students or recent graduates. It’s easy to reach out to your local university and ask them to recommend someone for some work experience.

Another option is to head to sites like Fiverr, Peopleperhour and Upwork and search for low budget designers who have good reviews. Be careful, they could end up selling you a template or somebody else’s hard work. Be firm with your brief.

Learn/DIY

We’re really lucky to live in a world with so many excellent online free learning resources so why not try and learn it yourself to get started?

Figma is free and excellent, Sketch and Framer have free trials, and Adobe XD is worth looking at if you have a CC account. Download, install, and jump onto YouTube to follow someone like Pablo Stanley, who gives excellent tutorials.

Feeling like you can spend some cash on your learning? Try TeamTreehouse or Lynda.com, which have video courses that will walk you through the basics and get you designing in no time.

Find a tech business partner

When I launched my first startup I traded my design time for some developer time. Martin eventually became my co-founder and we managed to get Howler to a decent place before closing it last year.

Ask around, some designers/developers may have an opening and find your product interesting enough to give you some time. It’s worth going in with some investment leads or at minimum a business plan to hook their interest.

Stagger costs

If they don’t want to join the business, they may be open to staggering costs so you get the perfect product at an affordable monthly payment.

While this might not float with most, it could work nicely for professionals who take on monthly retainers.

Ask your network for help

Everyone knows someone that’s looking for work, so don’t be afraid to ask for help. I’m always recommending designers and developers to people who’ve contacted me.

https://twitter.com/danwht/status/1126442249972330497

So if your own network comes up empty, ask some designers on Twitter if they can recommend someone. Typically we’re willing to help, give it a try!

Keep looking!

I’m a firm believer in ‘you get what you pay for’, but that doesn’t mean there won’t be a designer within your budget; you just have to keep looking.

I did a poll and the results were very interesting, take a look.

I hope this helps, I really do. It breaks my heart turning away enthusiastic passionate tech startup founders because of budget.

Go make something amazing.

Follow me on Twitter & thanks to everyone who contributed to this blog.

Photo by Marc Schäfer on Unsplash


June 01, 2019

I own a workshop! by Daniel Hollands (@limeblast)

I moved into my current house around a year ago, and as I mentioned at the time, I was super excited by the prospect of owning a shed. Fast forward a year, and much like a Pokémon, the shed has evolved into a workshop, thanks to a very generous donation by my parents.

I plan on doing a full post about the workshop (including a video tour of my setup) in the near future, but for now, here are a series of photos taken at the end of each day over the course of a week, as the fine folks at Sheds R Us laid the groundwork and constructed it.

May 31, 2019

Reading List 232 by Bruce Lawson (@brucel)

Want my reading lists sent straight to your inbox? Sign up and Mr Mailchimp will send it your way!

But we are hackers and hackers have black terminals with green font colors ~ John Nunemaker

This is the second in a series of posts on useful bash aliases and shell customisations that developers here at Talis use for their own personal productivity. In this post I describe how I configured my shell to automatically change my terminal theme when I connect to a remote machine in any of our AWS accounts.


Background

As I’ve mentioned previously, at Talis, we run most of our infrastructure on AWS. This is spread over multiple accounts, which exist to separate our production infrastructure from development/staging infrastructure. Consequently we can find ourselves needing to SSH onto boxes across these various accounts. For me it is not uncommon to be connected to multiple machines across these accounts, and what I found myself needing was a way to quickly tell which of these were production boxes and which were servers in our development account.

Solution

All of my development work is done on a MacBook Pro running macOS. Several years ago I started using iTerm2 as my terminal emulator instead of the built-in terminal, which has always felt particularly limited. Given these constraints, the solution I came up with was to implement a wrapper around the ssh command that tells iTerm2 when to switch themes, so that we can use different colors for production environments vs development.

To work, it requires you to create three profiles in iTerm2; for the purposes of this, each of these profiles is essentially the theme you want to use. When creating a profile you can customise colors, fonts etc., but crucially for each of them you need to enter a value in the badge field. This tells iTerm2 what to set as the badge, which is displayed as a watermark on the terminal. In this case I wanted to use the host of the machine that I’ve connected to, which I set as current_ssh_host in my script; therefore the value for the badge field needs to be set to \(user.current_ssh_host).

iTerm profile

When you’ve created the profiles you can add the following to your ~/.aliases file to ensure that the ssh wrapper script knows which profiles to use for the three themes it requires.

export SSH_DEFAULT_THEME=Spacemacs
export SSH_DANGER_THEME=Production
export SSH_WARNING_THEME=Staging

Once this is done you can use the wrapper script. Do the following:

  • copy the contents of the script to /usr/local/bin/ssh (or anywhere as long as it’s on your PATH)
  • now when you issue an ssh command in the terminal the script captures the hostname of the machine that you are trying to connect to
  • it then uses awslookup to check to see which AWS account that host resides in.
  • in my case, if it’s in the production account it tells iTerm to switch to the SSH_DANGER_THEME, and if it’s in our development account it uses the SSH_WARNING_THEME.
  • the terminal will then switch to the corresponding theme.
  • when you exit your ssh session the wrapper resets the theme back to your default.
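I haven’t reproduced the full script here, but the core of such a wrapper is small. This is a hedged sketch rather than the actual script: the AWS account check is stubbed out (the real script shells out to awslookup), and the escape sequences used are iTerm2’s documented proprietary ones (SetProfile and SetUserVar):

```shell
#!/usr/bin/env bash
# Sketch of an ssh wrapper that switches iTerm2 profiles per AWS account.

set_iterm_profile() {
  # Tell iTerm2 to switch to the named profile
  printf '\033]1337;SetProfile=%s\007' "$1"
}

set_iterm_ssh_host() {
  # Populate the user.current_ssh_host variable shown in the badge
  printf '\033]1337;SetUserVar=current_ssh_host=%s\007' \
    "$(printf '%s' "$1" | base64)"
}

aws_account_for() {
  # Stub: the real script uses awslookup to find the owning AWS account
  case "$1" in
    *prod*) echo production ;;
    *dev*)  echo development ;;
  esac
}

ssh() {
  local host="${@: -1}"   # crude: assume the host is the last argument

  case "$(aws_account_for "$host")" in
    production)  set_iterm_profile "${SSH_DANGER_THEME:-Production}" ;;
    development) set_iterm_profile "${SSH_WARNING_THEME:-Staging}" ;;
  esac
  set_iterm_ssh_host "$host"

  command ssh "$@"
  local status=$?

  # Restore the default theme when the session ends
  set_iterm_profile "${SSH_DEFAULT_THEME:-Default}"
  return "$status"
}
```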

For example, when I ssh to a production server, my terminal automatically switches to this:

Danger Theme

And when I connect to a development server, it automatically changes to this:

Warning Theme

As soon as I exit the ssh session the terminal is restored to my default theme.

Whilst this is a very specific solution for macOS, you can achieve similar results on Linux. Enjoy!

May 30, 2019

The Logical Fallacy by Graham Lee

Nary a week goes by without seeing a post by a programmer, for programmers, on the subject of logical fallacies in arguments. This week’s, courtesy of hacker news, is not egregious, enlightening, or indeed different in any way from the usual torrent. It is merely the one that prompted me into writing this article. The most frequent, and most severe, logical fallacy I encounter among programmers is this one:

  • basing your argument on logic.

Now, obviously, for a fallacy to be recognised it needs to have a Latin name, so I’m going to call this one argumentum ex logica.

Argumentum ex logica is the fallacious reasoning that the best course of action for a group of people is the one that can be arrived at by logical deduction. No need to consider the emotions of the people involved, or the aesthetic properties of any potential solutions. Just treat your workplace like your high school debating club, pick (seemingly arbitrarily) some axioms, and batter your way through to your preferred conclusion.

If people disagree with you on (unreasonable) emotional grounds, just name their logical fallacies so you can overrule their arguments, like you’re in an episode of Ally McBeal and want their comments stricken from the record. If people manage to find a flaw in the logic of your argument, just pick a new axiom you’d never mentioned before and carry on.

The application of the argumentum ex logica fallacy is frequently accompanied by descriptions of the actions of “the brain”, that strange impish character that sits inside each of us and causes us to divert from the true path of Syrran of Vulcan. Post hoc ergo propter hoc, we are told, is an easy mistake to make because “the brain” sees successive events as related.

Here’s the weird thing. We all have a “the brain” inside us, as an important part of our being. By writing off “the brain” as a mistaken and impure lump of wet fat, programmers are saying that they are building their software not for humans. There must be some other kind of machine that functions on purely logical grounds, for whom their software is intended. It should not be.

May 28, 2019

I’m doing some changes to this WordPress site and wanted to get out of the loop of FTPing a new version of my CSS to the live server and refreshing the browser. Rather than clone the site and set up a dev server, I wanted to host it on my local machine so the cycle of changing and testing would be faster and I could work offline.

Nice people on Twitter recommended Local By Flywheel, which was easy to install and get going (no dependency rabbit hole) and which allows you to locally host multiple sites. It also has a really intuitive UI.

To clone my live site, I installed the BackUpWordPress plugin, told it to back up the MySQL database and the files (e.g. the theme, plugins etc.) and let it run. It exports a file that Local by Flywheel can easily ingest – simply drag and drop it onto Local’s start screen. (There’s a handy video that shows how to do it.)

For some reason, although I use the excellent Make Paths Relative plugin, the link to my main stylesheet uses the absolute path, so I edited my local header.php (in ⁨Users⁩ ▸ ⁨brucelawson⁩ ▸ ⁨Local Sites⁩ ▸ ⁨brucelawsoncouk1558709320complete201905241-clone⁩ ▸ ⁨app⁩ ▸ ⁨public⁩ ▸ ⁨wp-content⁩ ▸ ⁨themes⁩ ▸ ⁨HTML5⁩⁩ ) to point to the local copy of the CSS:

<link rel="stylesheet" href="http://brucelawson.local/wp-content/themes/HTML5/style.css" media="screen">

And that’s it – fire up Local, start the server, get coding!

If you’re having problems with the local wp-admin redirecting to your live site’s admin page, Flywheel engineers suggest:

  1. Go to the site in Local
  2. Go to Database » Adminer
  3. Go to the wp_XXXXX_options table (click the select button beside it in the sidebar)
  4. Make sure both the siteurl and home options are set to the appropriate local domain. If not, use the edit links to change them.

May 24, 2019

Reading List 231 by Bruce Lawson (@brucel)

Stickee Technology Ltd, Solihull, Birmingham, B90 4SB – (£18,000–£24,000) Overview We are looking for an admin account exec to join our existing digital team. This...

The post Admin Account Exec appeared first on stickee.

May 21, 2019

Domain-specific markup for fun and profit

It doesn’t come as a surprise to Dull Old Web Farts (DOWFs) like me to learn last month that Google gives a search boost to sites that use structured data (as well as rewarding sites for being performant and mobile-friendly). Google has brilliant heuristics for analysing the content of sites, but developers being explicit and marking up their content using subject-specific vocabularies means more robust results.

For the first time (to my knowledge), Google has published some numbers on how structured data affects business. The headlines:

  • Jobrapido’s overall organic traffic grew by 115%, and they have seen a 270% increase in new user registrations from organic traffic
  • After the launch of job posting structured data, Google organic traffic to ZipRecruiter job pages converted at a rate three times higher than organic traffic from other search engines. The Google organic conversion rate on job pages was also more than 4.5 times higher than it had been previously, and the bounce rate for Google visitors to job pages dropped by over 10%.
  • In the month following implementation, Eventbrite saw roughly a 100-percent increase in the typical year-over-year growth of traffic from Google Search
  • Traffic to all Rakuten Recipe pages from search engines soared 2.7 times, and the average session duration was now 1.5 times longer than before.

Impressive, indeed. So how do you do it? For this site, I chose a vocabulary from schema.org:

These vocabularies cover entities, relationships between entities and actions, and can easily be extended through a well-documented extension model. Over 10 million sites use Schema.org to markup their web pages and email messages. Many applications from Google, Microsoft, Pinterest, Yandex and others already use these vocabularies to power rich, extensible experiences.

Because this is a blog, I chose the BlogPosting schema, and I use the HTML5 microdata syntax. So each article is marked up like this:

<article itemscope itemtype="http://schema.org/BlogPosting">
  <header>
  <h2 itemprop="headline" id="post-11378">The HTML Treasure Hunt</h2>
  <time itemprop="dateCreated pubdate datePublished" 
    datetime="2019-05-20">Monday 20 May 2019</time>
  </header>
    ...
</article>

The values for the microdata attributes are specified in the schema vocabulary, except the pubdate value on itemprop, which isn’t from schema.org but is required by Apple for watchOS because, well, Apple likes to be different.

And that’s basically it. All of this, of course, is taken care of by one WordPress template, so it’s automatic.

Metadata partial copy-paste necrosis for misery and loss

One thing puzzles me, however: Google’s documentation says that Google Search supports structured data in any of three formats (JSON-LD, RDFa and microdata), but notes “Google recommends using JSON-LD for structured data whenever possible”.

However, no reason is given for preferring JSON-LD except “Google can read JSON-LD data when it is dynamically injected into the page’s contents, such as by JavaScript code or embedded widgets in your content management system”. I guess this could be an advantage, but one of the other “features” of JSON-LD is, in my opinion, a bug:

The markup is not interleaved with the user-visible text

I strongly feel that metadata that is separated from the user-visible data associated with it is highly susceptible to metadata partial copy-paste necrosis. User-visible text is also developer-visible text. When devs copy/paste that, it’s very easy to forget to copy any associated metadata that isn’t interleaved, leading to errors. (And Google will penalise errors: structured data will not show up in search results if “The structured data is not representative of the main content of the page, or is potentially misleading”.)
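To illustrate, here’s a sketch of how the BlogPosting data from my article markup might look as JSON-LD. The values are copied from the microdata example above; the block itself is my illustration, not markup this site actually uses. The same facts end up duplicated in a separate script block that nothing forces you to keep in sync with the visible headline and date:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "The HTML Treasure Hunt",
  "dateCreated": "2019-05-20",
  "datePublished": "2019-05-20"
}
</script>
```

Change the visible h2 or time element and this block silently goes stale, which is exactly the kind of error Google penalises.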

An example of metadata partial copy-paste necrosis can be seen in the commonly-recommended accessible form pattern:

<label for="my-input">Your name:</label>
<input id="my-input"/>
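The necrosis strikes when a developer copies the visible control into another page and the associated markup doesn’t come along. A hypothetical sketch of the pasted result:

```html
<!-- Pasted elsewhere: the <input> survived the copy, -->
<!-- but the <label> tied to it by for="my-input" did not. -->
<input id="my-input"/>
```

The control still renders, so nothing looks broken, but a screen reader user no longer hears what the field is for.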

As Thomas Caspars wrote

I’ve contacted chums in Google to ask why JSON-LD is preferred, but had no reply. (I may go as far as trying to “reach out” next time.)

Andrew wrote

I’m pretty sure Google prefers JSON-LD over microdata because it’s easier for them to steal (sorry, borrow) the data for their own use in that format. When I was working on a screen-scraping project a few years ago, I found that to be the case. Since then, I’ve come to believe that schema.org is really about making it easier for the big guys to profit from data collection instead of helping site owners improve their SEO. But I’m probably just being a conspiracy theorist.

Speculation and conspiracy theories aside, until there’s a clear reason why I should use JSON-LD over interleaved microdata, I’m keeping it as it is.

Google replies

Updated 23 May: Dan Brickley, a Google employee who is Lord of Schema.org, wrote this thread on Twitter:

May 20, 2019

The HTML Treasure Hunt by Bruce Lawson (@brucel)

Here are my slides for The HTML Treasure Hunt, my keynote at the International JavaScript Conference last week. They probably don’t make much sense on their own, unfortunately, as I use slides as pointers for me to ramble about the subject, but a video is coming soon, and I’ll post it here.

Update! Here’s the video! Starts at 18:08.

Given that one of my themes was “write less JS and more HTML”, feedback was great! Attendees gave me 4.8 out of 5 for “Quality of the presentation” (against a conference average of 4.0) and 4.9 for “Speaker’s knowledge of the subject” (against an average of 4.5). Comments included:

great talk! reminding of the basics we often forget.

amazing way to start the second day of this conference. inspiring to say the least. great job Bruce

very entertaining and great message. excellent speaker

Thanks, that was a talk a lot of us needed.

Remarkable presentation. Thought provoking, backed with statistics. Well presented.

Very experienced and inspiring speaker. I would really like to incorporate this new ideas (for me) in my code

I think there’s a room full of people going to re-learn HTML after that inspiring talk!

If you’d like me to talk at your event, give training at your organisation, or help CTO your next development project, get in touch!

May 16, 2019

Deprecating yarn by Graham Lee

In which I help Oxford University CS department with their threading issues.

We are looking for a content executive to join our existing digital marketing team. This is an excellent opportunity to join a dynamic and innovative...

The post Content Executive Role appeared first on stickee.

May 13, 2019

Niche-audience topic time: if you’re in Oxford Uni, I’m giving a one-day course on collaborative software engineering with git and GitHub (the ideas apply to GitLab, Bitbucket etc. too) on 4th June, 10-3 at the Maths Institute. Look out for information from the OxfordRSE group with a sign-up link!

May 10, 2019
