Yak Shaving and Woolly Jumpers

Photo by Glen MacLarty – A woolly jumper being knitted (…probably not from Yak’s wool)

Do all developers know the phrase “yak shaving”? Apparently not. I see it being introduced to a new generation of developers here.

What does it mean?

“yak shaving” …the alternative explanation.

You’re an engineer. You have a problem:  you’re feeling a bit cold.

  • You could turn up the heating, but you decide it’s better to find a jumper to put on.
  • You find a nylon sweater, but you want to do this properly. You need a woolly jumper.
  • You could go out and buy a woolly jumper, but if you’re going to go to the trouble of heading out to the shops, maybe you should really nail this problem. A really kick-ass solution would be to knit yourself a good thick woolly jumper.
  • You could use some wool you already have, but since you’ll be spending quite some time on this knitting project, you might as well find some really premium quality wool.
  • You could select some nice wool from a shop, but you’re a thorough sort of person. If it’s worth finding good wool, you decide it’s worth going to the source of the supply chain where the wool is produced.
  • You could travel to a sheep farm somewhere, but since you’re going to be travelling, you start researching different types of wool, and discover that the very best type of wool is from the Himalayan yak.
  • …and so it is that you end up on a hillside in the Himalayas, in the baking hot sun   …shaving a yak.

Origins

I like this explanation, but it is “the alternative explanation”, because I’ve no idea where it came from! I can’t find any reference to it on the internet. I think I was taught this by Andy Allan while I was working at CloudMade. A few years ago I introduced “yak shaving” to workmates at TransportAPI, to much hilarity.

For the real origins of the “yak shaving” phrase, all googling leads to Seth Godin‘s blog, which references Joi Ito’s blog, which references an O’Reilly book on productivity, and [oops, now that’s going a bit recursive], but no, there are plenty of pointers to the real origin, which I find a bit disappointing: a “Yak Shaving Day” segment of a Ren & Stimpy episode.

I suppose it’s disappointing because Ren & Stimpy is so unbearably bizarre I can’t get my head around it, but also I was hoping for the origins to have a better link with our tech community’s use of the “yak shaving” phrase. Oh well.

New job at OpenCorporates

A month ago I started a new job. This is a big deal for me because I’d been at TransportAPI for eight years. But time for a new job in 2018! I’ve just started at OpenCorporates.

OpenCorporates provides company listings data. It has a database of corporate entities, rather like Companies House in the UK, but bringing together this and similar datasets from countries all around the world.

Now “company listings data” may sound rather dull, but there are a couple of aspects of this which I think are quite exciting: open data and… fighting crime and injustice!

Open Data

At one stage the Companies House website was only open during working hours (yes, hilariously, the website’s clunky querying interface stopped working after 6pm!). Aside from making itself a target of general ridicule from tech people, Companies House also became a target of the emerging “open data” campaign at the time. About a decade ago the Guardian’s “Free Our Data” column was leading the march, campaigning for data releases. I also remember attending a “National Hack the Government” event hosted by the Guardian, with all sorts of civic-hacking hijinks: tech people producing better versions of government websites and, if necessary, taking data by force (brute-force scraping) to do so.

From these kinds of mischievous beginnings OpenCorporates was born. I remember bumping into Chris Taggart at various conferences and events around this time, while I was generally running around beating the OpenStreetMap drum. Later I ended up working with TransportAPI, and TransportAPI ended up sharing an office space with OpenCorporates at the Open Data Institute for a while.

Companies House went on to get their act together considerably (spurred on by those early civic hackers no doubt). Meanwhile OpenCorporates continues to campaign for open data releases from other governments around the world. My point is, OpenCorporates is very much historically and currently part of the open data movement. I regard any open data work as a kind of “social good”, but if you think about it, company listings are really a flagship example of the kind of dataset any government absolutely should be making public and easy to use.

Fighting crime and injustice!

This brings me onto the other aspect. A more dramatic kind of “social good”. OpenCorporates, by making company listings data more available, is fighting crime and injustice …if a little indirectly.

A “registered company” might be a high-street shop or some other real-world bricks-and-mortar thing, but often not. Registered companies are legal and financial constructs which get used and abused in all sorts of ways. Any episode of “Rogue Traders” gives examples of dodgy dealings, invariably revealing how bad people closed down a company and restarted under a new name to shake off the authorities. It’s clearly too easy to get away with. Company listings data provides a little bit of an antidote by helping with transparency and knowing who you are doing business with.

But that’s just small-time fraud. When I started at OpenCorporates they gave me a copy of the book “Treasure Islands: Tax Havens and the Men who Stole the World” by Nicholas Shaxson. It describes tax avoidance, tax evasion, money-laundering, and flows of criminal funds enabled by offshore “secrecy jurisdictions”. It builds up a picture of this as not just a side-show run by a dishonest minority, but a widespread, deeply embedded system of corrupting influence benefitting the rich and powerful at the expense of everyone else. The problem feels unsolvable as we realise that the corruption runs deep within all of our governments.

“Rich governments cannot be trusted to do the right thing on tax havens and transparency. Many will demand more transparency and international cooperation even as they work to frustrate both. They will call for reasoned debate as they engage in character assassination, secret deals and worse. They will talk the language of democracy and freedom the better to defend unaccountable, irresponsible power and privilege.”

It’s a pretty eye-opening and pretty horrifying book!

The book details some specific financial tricks and loopholes, but generally the idea for a global company (or crime syndicate, or just a wealthy individual) is to have your accountants and lawyers create a tangled web of different corporate entities registered in different jurisdictions, with maximum secrecy making it almost impossible for tax authorities or financial regulators to pin down your assets, profits, risks, and capital. There are many layers of financial trickery, but it’s clear that corporate registrations are the key vehicle for this kind of skullduggery.

As such, data on registered corporate entities is vital. It doesn’t solve all the problems of financial secrecy. Investigators will need to piece together a lot more about companies, their assets, the true owners and beneficiaries, and their links with other entities, in order to cut through the information void created deliberately to bamboozle them. But it’s a start. Company listings form a kind of information bedrock upon which other information and investigation can be built. Whether it’s an investigative journalist looking to crack open a high-profile case, or a tax authority, or just somebody looking to establish the trustworthiness of a company, OpenCorporates helps enable this important work. It puts the information out there more publicly and in more useful formats to help shed a little bit of light on this murky world of crime and tax dodging.

Maybe “fighting crime and injustice” is a superhero-style exaggeration. Folks like the Tax Justice Network are tackling this big problem more head-on, but OpenCorporates helps chip away at the problem with a more bottom-up, data-oriented approach.

The new job

So I’m getting my head into this new subject area, and joining OpenCorporates as a software engineer. It’s primarily Ruby on Rails development, but with lots of infrastructure and data processing challenges. Fun stuff! We’re based in the main pointy tower at Canary Wharf. We just had our office warming party in a new place 42 storeys up.

If you fancy joining me there, we are hiring back-end engineers, infrastructure engineers, and data analysts. Check out jobs.opencorporates.com!

Printable Calendar tool

Here’s a little thing I just found (I have lots of silly bits of code like this kicking around):

Printable Calendar

Generate a calendar in a strip layout, suitable for printing, with lots of space for writing notes next to each date.

I originally did this as a Visual Basic Word macro, then later as this PHP script.
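
The PHP script isn’t reproduced here, but the core idea is tiny. Here’s a rough sketch of it in JavaScript (not the original script; the date range and padding width are just examples): loop over the days and print one line per date, leaving the rest of the line blank for notes.

    // Sketch of the strip-calendar idea: one line per day,
    // with blank space left over for hand-written notes.
    const start = new Date(2018, 11, 1);   // 1 Dec 2018 (months are 0-based)
    const end = new Date(2019, 0, 6);      // 6 Jan 2019
    for (let d = new Date(start); d <= end; d.setDate(d.getDate() + 1)) {
      const weekday = d.toLocaleDateString('en-GB', { weekday: 'short' });
      const label = `${weekday} ${d.getDate()}/${d.getMonth() + 1}`;
      console.log(label.padEnd(12) + '| ');   // the empty column is the note space
    }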

I used to use this to do big calendar printouts to help with family logistics, usually at this time of year, covering the months around Christmas. That’s back when we four kids were all home for Christmas in Yorkshire, all with our own overlapping plans. Nowadays we might do a shared Google calendar instead, but I remember the scrawlings all over the calendar wall used to be quite fun.

A tech manifesto from 2007

I’ve just been tidying some old content on this website, which I’d written just over 10 years ago now. Back then I wrote a kind of tech manifesto, or at least a collection of various broad thoughts on tech and the IT industry I was working in at the time. I haven’t worked as an enterprise integration consultant since 2009, and some of it is out-of-date in various other ways. Some of it feels like it’s noticeably coming from my naive younger self.

  • I hate the word “geek”. I don’t really hate the word of course, and that’s not the point I was trying to make. I stand by the idea that we should always work to close the separation between “geeks” and “normal people”. The march of technological progress does this for us of course. If I think back to 2007, the internet was actually a lot less mainstream. Fewer people had broadband at home (including me!). Fewer people were required to use the internet or even computers as part of their work. It used to be that “geeks” were people who knew how to use computers and were super savvy with the internet. Nowadays that’s everyone. My two-year-old son is already getting the hang of it! Nowadays I see an interesting push to get more people from more diverse backgrounds to learn to code.

  • It’s a people thing was a piece arguing that clients should discuss high-level requirements rather than skipping ahead to designing a solution. This is an accepted anti-waterfall principle rolled into “agile” these days. Perhaps it goes without saying, …except it’s still a common mistake. I recall a few occasions since writing that, when I’ve worked on projects which jumped to discussing a technical solution before getting a high-level view of the problems we were solving.

    I also talked about user interface design. I think I had a bit of a bee in my bonnet about the project I was on at the time, but I do remember leaving that particular project with great satisfaction at having implemented some of the UI ideas despite initially having them shot down. Since then there have been a couple of times when I’ve found myself surprised by colleagues’ failure to see obvious UI improvements. It makes me wonder whether user interface design is a talent I’ve not really appreciated within myself. Maybe I should do more of that kind of work.

  • With IT project politics, I was talking in general terms, but really I was baring my soul about some frustrations with my consulting job. Some of the assignments with Green Hat Consulting involved parachuting into a pretty hostile environment. When I quit that industry and went to work in a fresher, funkier start-up I left the politics behind on the whole, but of course you never really escape that kind of thing completely. I guess the golden rule I still have to remind myself of would be: work with people you like (and if you don’t, leave, because life’s too short).

  • Maintainable Software. Maybe I had a fairly simplistic view of what makes good software back then, but I think I was just glossing over the details. Obviously there’s a whole universe of coding best practices which make code maintainable, beyond “comments and meaningful variable names”. In fact comments are bad …sometimes. I think keeping up with recommendations and knowing which wisdom to follow and which to discard may be the real skill. Being an “opinionated coder”, and taking pride in your craft. In any case, I’m sure I was correct in saying that most developers consider their own code to be good code. …and I still haven’t learned to drive!

    I was also ranting about documentation. Again I think this was a bee I had in my bonnet, related to a particular request to document a particular project at the time. But it remains a reasonable point that documentation can be seen as an afterthought; a project delivery box to tick. I quite like documentation: the interesting challenge of trying to distill the most important hand-over information for a project, without making something which is just too long for anyone to bother reading. There are also mechanisms to help ensure docs are kept up-to-date, e.g. keeping docs close to the code or as part of the code. I like that GitHub has established a nice convention of supplying a README.md file for each repo.

On the whole my “tech manifesto” of a decade ago wasn’t too bad, but a lot’s happened in 10 years, and some of these thoughts were starting to feel quite old. So I decided today that they belong in the blog archive.

But what would I write about if I were to pick some points to make about tech and the IT industry in 2017? (Not sure we even call it the “IT industry” so much these days.) I don’t think I would try. Clearly such things are destined to go out of date. A single page of thoughts also feels incomplete, but maybe I should add some more deep thinking to the blog category ‘technology’.

Open House rails developer

[Update: They’ve now hired a rails developer. Thanks for the responses! Still opportunities to help I’m sure. I will try to post an update on how to get involved]

I wanted to help promote this Ruby on Rails developer job at OpenCity which I think will be a really interesting thing for somebody, maybe a junior rails developer. Know anyone?

If you’re a Londoner you’ve probably heard of “Open House”, an event taking place each year where, for one day, you can take a look around lots of interesting buildings, for free, which are often closed to the public the rest of the time.

The Open House event is put together by a not-for-profit organisation called OpenCity, in their office near Aldgate. From there they organise this annual architectural bonanza, coordinating hundreds of people (volunteers and building owners) with lots of careful planning …and some IT challenges.

[Diagrams: the current set-up, a first-stage rails app, and a consolidated rails app]

Being an interesting and fun not-for-profit organisation, they’re the kind of place I’ve ended up volunteering a bit of my time with. Another volunteer and I have been planning a rebuild of a database system they’re using internally, which will later ripple up to some cool improvements on their public-facing website.

To me this looks like a juicy challenge and a fun organisation to be helping out, so I’m hoping we can find a suitable available developer who feels the same! I’m not available myself, except in my spare time. I plan to be dipping in on this project from time-to-time, so I would be working with this person a little bit.

The project planning is at a very early stage, but I’ve been drawing the above diagrams which show… (left-to-right)

  • Their current set-up
  • An initial milestone introducing a rails app for their buildings database
  • And a final situation with websites and database consolidated

Hopefully in the end we will have managed to consolidate things and four different websites (or four different user journeys/permission levels within the same website) will be served by a lovely new rails app!

OpenCity are very open to our ideas, and one thing we’ve suggested is to do this whole thing open source on GitHub, as a way of being open to contributions from any other developers who fancy helping this organisation. But this is a pretty big job which will need somebody on it full time. Hopefully quite an interesting challenge for somebody! Please pass this on to anyone who might be interested.

Leaflet Geolocation error: Only secure origins are allowed

I described some reasons to switch to HTTPS on my website. To be completely honest though, I didn’t finally get off my ass and do that for any of those good reasons. I did it because I was building a map thing which requested browser geolocation, and I noticed geolocation stopped working in Chrome.

I’ve seen this deprecation warning a few times:

“getCurrentPosition() and watchPosition() are deprecated on insecure origins. To use this feature, you should consider switching your application to a secure origin, such as HTTPS. See https://goo.gl/rStTGz for more details.”

But somehow I didn’t take it seriously. But yes: new versions of Chrome won’t do geolocation unless it’s an HTTPS site. See this for yourself with this very basic geolocation test page on w3schools. [Update: Originally this was http, and so didn’t work in Chrome. w3schools have since gone https.]

The JavaScript console still only shows it as a deprecation warning, not an error, but if your web application was relying on this… it broke.

(Update for Aug 2017) Firefox v55 is going with this lock-down too. It says “Geolocation error: User denied geolocation prompt” as a popup, and in the console “A Geolocation request can only be fulfilled in a secure context.”

Any sensible application should probably be watching out for failure cases with geolocation anyway (see the sketches below for handling errors), but even so I find it a bit surprising that old websites across the web using geolocation will simply be broken. There’s a bit more info on this Google developers page.
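
For what it’s worth, the failure case only takes a few extra lines to handle. A minimal sketch using the standard geolocation API (the console messages and timeout are just placeholders):

    // Minimal sketch: request a position, but cope with failure.
    // On an insecure (http) origin, newer Chrome versions go straight
    // to the error callback instead of prompting the user.
    if ('geolocation' in navigator) {
      navigator.geolocation.getCurrentPosition(
        function (position) {
          console.log('Position:', position.coords.latitude, position.coords.longitude);
        },
        function (error) {
          // error.code: 1 = PERMISSION_DENIED, 2 = POSITION_UNAVAILABLE, 3 = TIMEOUT
          console.warn('Geolocation failed (' + error.code + '): ' + error.message);
        },
        { timeout: 10000 }
      );
    } else {
      console.warn('No geolocation support in this browser at all');
    }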

If you use LeafletJS, there’s a map.locate method which presumably uses the same method internally (navigator.geolocation.getCurrentPosition), but Leaflet also detects the Chrome failure and pops up a different error message…

“Geolocation error: Only secure origins are allowed (see: https://goo.gl/Y0ZkNV)..”
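
So if you’re relying on map.locate, it’s worth listening for that failure as well as the happy path. A small sketch, assuming Leaflet is already loaded and the page has a div with id “map” (the coordinates and zoom are just placeholder values):

    // Sketch of wiring up Leaflet's locate with success and error handlers.
    // Assumes Leaflet (L) is loaded and the page has a <div id="map">.
    var map = L.map('map').setView([51.505, -0.09], 13);  // placeholder start view
    // (a real page would also add a tile layer here)

    map.on('locationfound', function (e) {
      L.marker(e.latlng).addTo(map);   // mark where the browser thinks we are
      map.setView(e.latlng, 16);
    });

    map.on('locationerror', function (e) {
      // On an insecure origin this fires with the "Only secure origins…" message
      alert('Geolocation error: ' + e.message);
    });

    map.locate();  // triggers the browser geolocation request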

If you use Chrome you can see this on my geolocate example (http) here:

http://harrywood.co.uk/maps/examples/leaflet/geolocate.view.html

…and *Trumpet noise* see it fixed with the newly available https URL:

https://harrywood.co.uk/maps/examples/leaflet/geolocate.view.html

HTTPS on this site

I made harrywood.co.uk run on HTTPS recently (optionally; it works on both http://harrywood.co.uk and https://harrywood.co.uk). It was quite easy to do, and free, using letsencrypt.


Why encrypt harrywood.co.uk?

On the face of it there’s not much point. This is mostly just a straightforward read-only website. Not much scope for bad people to snoop anything interesting. No passwords or credit cards or anything. General “tracking” doesn’t seem particularly problematic either. Who really cares if somebody can track the fact that you’ve been visiting these sweet, innocuous blog posts? Well…

Some types of commercial web tracking only tend to get creepy when they happen in bulk. The evil corporate advertising machine won’t learn much about you from knowing you read a blog post on harrywood.co.uk, but it might start to know you pretty well if it knows this and the previous thousand websites you visited. Encryption throws a spanner in the works for some types of tracking.

Government tracking by intelligence agencies is also thwarted by encryption (probably more so). They would also like to intercept your browsing traffic to get to know you with their big evil AI. Now sometimes I think it’s fair enough for governments to do a bit of targeted anti-terrorism snooping, but the trouble is it’s too easy for politicians to make that simple-minded argument. The flipside is a subtle future threat of eroded freedoms. That’s tricky, and in general I don’t trust politicians to weigh it up properly. We can use technical measures (encryption!) to help things move in a more freedom-preserving direction.

Tracking is a numbers game, done across many websites, and equally, encryption as a counter-measure is more effective if we encrypt many websites. If we start to be able to browse a significant proportion of the web over HTTPS, even right down to piddly little websites like this one, then we’ll be getting somewhere. As a result it’s becoming a recommendation, and slowly a sort of groundswell of expectation on webmasters to do this. It’s been slow getting a lazy webmaster like me to act, but …well, now’s the time for harrywood.co.uk. (Who knows? One day I may actually work on updating the content!)

Encryption also helps protect against password-snooping security issues. harrywood.co.uk has no user passwords, except… my own password for logging in to write blog posts. I’ve probably used this from public wifi access points in the past. Slapped wrists for me. But now I guess I can be a little more relaxed about that. Speaking of wifi, wifi JavaScript injection (attacks, or just crappy advertising) seems like a nasty problem. Are we safe using any wifi these days? Well, we’re a lot safer from this when browsing HTTPS sites.

An OpenStreetMap training course intro

A week ago I got together with Steve Chilton and Steven Feldman and gave an OpenStreetMap training course to a handful of enthusiastic young people who were about to head out to Ghana as volunteers with a charity called tzedek.

Steve Chilton & Harry Wood teaching OSM
Photo by Steven Feldman CC-BY NonCommercial

I’ve done similar things before, but nothing exactly termed a “training course”. It was pretty similar to the UCL Masters Student mapping party in Sept 2010. Back then I was asked to kick things off with an introduction, and had to stand up and make something up on the spot. This time I had some slides prepared.

Which slides? Well maybe I should’ve just used learnosm.org teaching resources for this. I took a look at them, but I decided I wanted to say a bit more in the intro sessions (perhaps wrongly actually). The learnosm.org slides are… [continue reading “An OpenStreetMap training course intro”]

Workshop on Using OpenStreetMap Data

 

I presented a workshop (or at least a live demo session) at the Society of Cartographers conference with the rather vague, open-ended title of “Using OpenStreetMap Data” – “A tour of the various options for downloading and otherwise accessing OpenStreetMap data from a geo-data user’s perspective. Harry Wood will explain how to delve into the raw data structures using tools on the website and elsewhere, how to explore the wiki-style editing history, how OpenStreetMap’s unique ‘tags’ approach works, and some ways of manipulating the map data.” At least that’s what I wanted it to be. It didn’t go entirely to plan (see apologies below).

I started by presenting some slides from my OpenTech OpenStreetMap developer ecosystem presentation, which highlights the central role of raw geodata and gradually builds up a picture culminating in this diagram (see above link for the full build-up and explanation).

I also re-used the slide explaining the different levels of OpenStreetMap use which developers and data-user organisations might consider.

Then it was on to the live demos, touring around various different topics and tools. I don’t think I actually timed it well enough to get through all these things in either of the two hour-long sessions, but the following were… [continue reading “Workshop on Using OpenStreetMap Data”]

Some new repton3 maps

Peter McElwee emailed me saying “Thanks for releasing Repton 3, have lots of fun but the wife isn’t so pleased. Anyway have made two levels of my own, would be great to have your comments”

So you can now download his ‘names.rls’ levelset file.

<sarcasm> Repton3 is a fast-moving sector of the games industry. The game was released in 1985 for the BBC Micro and Electron. I made my repton3 version for Windows in 1998, just 14 years later. Gamers the world over set to work proving their puzzle-solving prowess by conquering my two levelsets as quickly as they could, and sure enough in 2000 my mate Will succeeded, followed by two more people in 2006 (hot on his heels!). I also made a level editor, knowing that this would send gamers into a frenzy as they start designing levels and emailing me with files to share. Sure enough in January 2011 Peter sent me this names.rls file, and so I didn’t waste a second to publish this on my website 14 months later.

but seriously </sarcasm> it is actually quite exciting for me to receive some repton3 puzzles from somebody else. Thank you Peter, and massive apologies for failing to do anything with your email for the past 14 months! I sort of forgot about it because repton3 is languishing on my under-used Windows machine these days. But actually the real reason was that I wanted to play through your maps myself, and then reply saying they were too easy… but then I got a bit stuck! I have now played them, though. Good fun playing other people’s repton puzzles!