A plumber’s guide to Git

Git is a very common tool in modern development workflows. It’s incredibly powerful, and I use it all the time — I can’t remember the last time I used a version control tool that wasn’t Git — but it’s a bit of a black box. How does it actually work?

For a long time, I’ve only had a vague understanding of Git’s inner workings. I think it’s important to understand my tools, because it makes me more confident and effective, so I wanted to learn how Git works under the hood. To that end, I gave a workshop at PyCon UK 2017 about Git internals. Writing the workshop forced me to really understand what was going on.

The session wasn’t videoed, but I do have my notes and exercises. There were four sections, each focusing on a different Git concept. It was a fairly standard format: I did a bit of live demo to show the new ideas, then people would work through the exercises on their own laptop. I wandered around the room, helping people who were stuck, or answering questions, then we’d come together to discuss the exercise. Repeat. On the day, we took about 2 ½ hours to cover all the material.

If you’re trying to follow along at home, the Git book has a great section on the low-level commands of Git. I made heavy reference to this when I wrote the notes and exercises.
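For a flavour of what those low-level commands look like, here’s a tiny taster (assuming you’re inside a Git repository): git hash-object stores content in Git’s object database and prints its ID, and git cat-file reads it back.

```shell
# Store a blob in the object database, and get back its object ID
$ echo 'hello world' | git hash-object -w --stdin
3b18e512dba79e4c8300dd08aeb37f8e728b8dad

# Retrieve the contents of the blob using that ID
$ git cat-file -p 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
hello world
```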

If you’re interested, you can download the notes and exercises.

(There are a few amendments and corrections compared to the workshop, because we discovered several mistakes as we worked through it!)

Read more →

How I ask about gender

Another week, another disappointing survey that asks “What is your gender? Female/Male.”

This may be old news to people who read my blog, but if not: gender isn’t a binary. There are plenty of people who identify as non-binary or agender or have some other gender identity that doesn’t fit neatly into one of those two buckets. If you need to ask about gender (and really, do you need to know?), you should be looking beyond offering binary choices.

At a minimum, I think a survey should offer choices for folks who don’t fit the typical F/M binary, and folks who don’t want to tell you. In most cases, you don’t absolutely need to know gender, and you should allow people not to tell you.

This is my current favourite set of choices:

- Female
- Male
- Prefer to self-describe
- Prefer not to say

I find the phrase “prefer to self-describe” is less impersonal than “other”, which is often used for the third field. It’s also easier than trying to come up with a cover-all label for “not in female/male”. There’s a bit more work in normalising the free text responses, but I think it’s worth the effort.

I also like having an explicit “prefer not to say” choice, even if it’s not a required question on the survey. It’s good to be absolutely clear that this is an optional question.

This is far from the only way to ask this question — a Google search will turn up lots of advice for asking about gender, and lots of alternative wordings. Use mine, use somebody else’s, or make up your own — just please don’t fall back to “Female/Male”.

Using privilege to improve inclusion

When I go to tech conferences, I’m often drawn to the non-technical talks. Talks about diversity, or management, or culture. So when it came to make a proposal for this year’s PyCon UK, I wanted to see if I could write my own non-technical talk.

Talking about diversity and inclusion can be tricky. It’s easy to be well-intentioned, but end up saying something that’s harmful or offensive. But it’s an important topic — the tech industry has systemic problems with inclusion, and recent news shows us how far we still have to go. I chose it for both those reasons — in part because it’s an important topic, and in part to challenge myself by speaking about a topic I hadn’t tackled before.

This is a talk about privilege. It’s about how we, as people of privilege in the tech industry, can do more to build cultures that are genuinely inclusive.

I first gave this talk at PyCon UK 2017. You can read the slides and notes on this page, or download the slides as a PDF. The notes are a rough approximation of what I planned to say, written after the conference finished. My spoken and written voice are quite different, but it gets the general gist across.

If you’d prefer, you can watch the conference video on YouTube:

Read more →

Lightning talks

A constant highlight of PyCon UK is the lightning talks session. A lightning talk is a talk of up to five minutes, on any topic that might be of interest to the PyCon UK audience. There are usually ten talks in an hour-long session, with a bit of time for handover between speakers, and there are four sessions (one per day) during the conference. Videos of past sessions are on YouTube, including from just this Thursday!

Lightning talks are always fun because you get a wide variety of topics in a short space of time — already this year we’ve heard about mutation testing, dynamic tracing, and chocolate brownies! And it’s a great way for somebody who’s never spoken before to get up on stage. The audience is always friendly, five minutes is enough to say something interesting, and you’re talking about a topic you’re enthusiastic about.

In years gone by, you’d sign up for a lightning talk by writing your name on a flipchart: first-come, first-served. The simplicity was great, but it tipped in favour of people who knew the system — it gave you a head-start compared to a new attendee. And if you hemmed and hawed over whether you wanted to speak, all the slots would be filled up before you’d made a decision.

I’m a big fan of the way the talk selection has been balanced out this year. Thanks to the efforts of Owen, Tim and Vince, the conference now has a lottery system instead.

Read more →

PyCon UK 2017 resources

This post is a signpost to useful resources for my PyCon UK talks/workshops. I’ll update it as I post new resources/links.

Displaying tweets in Keynote

Every so often, I want to use a tweet in some slides I’m making (I have three in my PyCon UK slides for Friday). If I’m doing this, I want to make it clear that the text I’m using is a tweet, not just a generic quote. Tweets have quite a distinct visual style, and give a very clear way to find the original author.

Twitter gives you an “Embed Tweet” button for using on web pages, but I’m not sure if you can use this in Keynote or PowerPoint — and given it has to make a network call to display the tweet properly, do you want to rely on it in a presentation?

Screenshots are better, but still not ideal — you lose the text, so your presentation becomes less accessible. You can also get fuzzy text if you have to resize the tweet or took a small screenshot.

Far better to draw it using your app’s drawing tools as a static image, which is exactly what I do in Keynote. Then the text is directly embedded (more accessible), and text always looks nice and crisp. This is what the effect looks like, with a single tweet per slide (more than one gets distracting):

It’s on the small side for text on a slide, but I’ve found it to work well if deployed sparingly.

If you’d like to use these templates, I’ve uploaded the Keynote file that has both these slides, and templates for creating more. It will probably work in PowerPoint, although I don’t have a copy of PowerPoint to test with.


Using hooks for custom behaviour in requests

Recently I’ve been writing a lot of scripts with python-requests to interact with a new API. It starts off with a simple GET request:

resp = requests.get('http://example.com/api/v1/assets', params={...})

I want to make sure that the request succeeded before I carry on, so I throw an exception if I get an error response:

resp = requests.get('http://example.com/api/v1/assets', params={...})
resp.raise_for_status()

If I get an error, the server response may contain useful debugging information, so let’s log that as well (and actually, logging it might be generally useful):

resp = requests.get('http://example.com/api/v1/assets', params={...})
logger.debug('Received response %s', resp.text)

try:
    resp.raise_for_status()
except requests.HTTPError:
    logger.error('Received error %s', resp.text)
    raise

And depending on the API, I may want even more checks or logging. For example, some APIs always return an HTTP 200 OK, but embed the real response code in a JSON response. Or maybe I want to log the URL I requested.

If I’m making lots of calls to the same API, repeating this code gets quite tedious. Previously I would have wrapped requests.get in a helper function, but that relies on me remembering to use the wrapper.
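The helper-function approach looks something like this (a minimal sketch — the name api_get is mine, not from the original post):

```python
import requests


def api_get(url, **kwargs):
    """Hypothetical wrapper around requests.get that adds our checks."""
    resp = requests.get(url, **kwargs)
    resp.raise_for_status()
    return resp
```

This works, but every caller has to remember to use api_get instead of requests.get.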

It turns out there’s a better way — today I learnt that requests has a hook mechanism that allows you to provide functions that are called after every response. In this post, I’ll show you some simple examples of hooks that I’m already using to clean up my code.
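As a taste of what’s coming (a minimal sketch, not the full post — the function name check_for_errors is mine), a response hook is just a function that receives each response:

```python
import logging

import requests

logger = logging.getLogger(__name__)


def check_for_errors(resp, *args, **kwargs):
    """Response hook: log the body, and raise if we got an HTTP error."""
    logger.debug('Received response %s', resp.text)
    resp.raise_for_status()


# Every response that comes through this session passes through the hook
session = requests.Session()
session.hooks['response'].append(check_for_errors)
```

You can also pass hooks to an individual call, e.g. requests.get(url, hooks={'response': check_for_errors}).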

Read more →

Control Centre: one step forward, two steps back

I’m not much of an iOS power user, and these days, most new features go straight over my head. As such, there wasn’t much in iOS 11 to interest me, and it took me a while to get round to upgrading.

One thing I was looking forward to was the new Control Centre. The ability to customise controls could come in handy, and doing away with the separate pages seemed like an easy win. Plus, I think the new version just looks nicer.

Now I’ve been using it for several weeks, I’m more ambivalent. Customisation has been really useful — I’ve done away with the unused calculator shortcut, and brought in Low Power Mode, which I use all the time. Most of the buttons look good and are easy to hit, and I’m having much more success with the chunky brightness and volume sliders. But as it advances in one area, so it slips in another. I have two big problems with the new Control Centre.

Read more →

Four ways to underline text in LaTeX

Because I’m old-fashioned, I still write printed documents in LaTeX, and I still think hyperlinks should be underlined. In general, I’m glad that underlines as a form of emphasis have gone away (boldface or italics are much nicer) — but I have yet to be convinced to drop underlines on hyperlinks.

Sometimes I have to write printed documents that contain hyperlinks, which raises the question: how do you write underlines in LaTeX? Finding an underline I like has proven surprisingly hard — in this post, I’ll show you the different ways I’ve tried to underline text.

Using the \underline command

Without installing any packages, you can just use the \underline command. Here’s an example:

I visited \underline{Berlin} in \underline{Germany}.

and the rendered output:

The underline on “Berlin” is nice and tight — but notice how the underline on “Germany” is lower than “Berlin”. That’s to accommodate the descender on the “y”. (A descender is any part of a letter that extends below the baseline of the text. For example, “p”, “y” and “j” all have descenders, but “a”, “i” and “x” don’t.)

The inconsistency is what I don’t like about this approach. It’s fine for one-off underlines, but in a larger document, the inconsistency gets very obvious, and I don’t like how it looks.
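One common workaround (a sketch, using only the standard LaTeX kernel) is to \smash the text before underlining it. \smash hides the descender from \underline, so every underline sits at the same height — the trade-off is that the line may strike through the tail of the “y”:

```latex
I visited \underline{Berlin} in \underline{\smash{Germany}}.
```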

Read more →

Using pip-tools to manage my Python dependencies

At last year’s PyCon UK, one of my favourite talks was Aaron Bassett’s session on Python dependency management. He showed us a package called pip-tools, and I’ve been using it ever since.

pip-tools is used to manage your pip dependencies. It allows you to write a top-level summary of the packages you need, for example:

$ cat requirements.in
pytest >= 1.4
requests

Here I want a version of pytest that’s at least 1.4, and any version of requests.

Then I run pip-compile, which turns that into a full requirements.txt:

$ pip-compile
$ cat requirements.txt
certifi==2017.7.27.1      # via requests
chardet==3.0.4            # via requests
idna==2.6                 # via requests
py==1.4.34                # via pytest
urllib3==1.22             # via requests

I can install these dependencies with pip install -r requirements.txt.

The generated file is pinned: every package has a fixed version. This means that I get the same versions whenever I run pip install, even if newer versions have since been released. If you don’t pin your dependencies, your package manager may silently install a new version when it’s released – and that’s an easy way for bugs to sneak in.

Instead, check both files into version control, so you can see exactly when a dependency version was changed. This makes it easier to see if a version bump introduced a bug.

There are also comments to explain why you need a particular package: for example, I’m installing certifi because it’s required by requests.

I’ve been using pip-tools since Aaron’s recommendation, and it’s been really nice. It’s not had an earth-shattering impact on my workflow, but it shaves off a bunch of rough edges. If you do any work with Python, I recommend giving it a look.

For more about pip-tools itself, I recommend Better Package Management by Vincent Driessen, one of the pip-tools authors. This human-readable/pinned-package distinction is coming to vanilla pip in the form of Pipfile, but that was in its infancy last September. pip-tools has been stable for over two years.

Recently, I’ve been trying to push more of my tools inside Docker. Every tool I run in Docker is one less tool I have to install locally, so I can get up-and-running that much faster. Handily, there’s already a Docker image for running pip-tools.

You run it as follows:

$ docker run --volume /path/to/repo:/src --rm micktwomey/pip-tools

It looks for a requirements.in in /src, so we mount the repo in that directory — this gives the container the ability to read the file, and write a requirements.txt back into a file on the host system. I also add the --rm flag, which cleans up the container after it’s finished running.

If you already have Docker, this is a nice way to use pip-tools without installing it locally.

Alongside Docker, I’ve been defining more of my build processes in Makefiles. Having Docker commands is useful, but I don’t want to have to remember all the flags every time I use them. Writing a Makefile gives me shortcuts for common tasks.

This is the Make task I have for updating a requirements.txt:

requirements.txt: requirements.in
    docker run --volume $(CURDIR):/src --rm micktwomey/pip-tools
    touch requirements.txt

To use it, run make requirements.txt.

The first line specifies the Make target (requirements.txt), and tells Make that it depends on requirements.in. So when the Make task is invoked, it checks that the .in file exists, and then whether the .in file was updated more recently than .txt. If yes — the .txt file needs rebuilding. If no — we’re up-to-date, there’s nothing to do.

The second line runs the Docker command explained above, using the Make variable $(CURDIR) to get the current directory.

Finally, touch ensures that the last modified time of requirements.txt is always updated. pip-tools will only change the modification time if there are changes to the dependency pins — I change it manually so that make knows the task has run, and the “should I run this task” logic explained above doesn’t spin endlessly.

Once I have this Make task, I can invoke it from other tasks — for example, build tasks that install from requirements.txt — and so it gets run when required, but without an explicit action from me. It’s just another step that happens transparently when I run make build.
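For example (a hypothetical build target — the target name and install step are mine, not from the original Makefile), a build task can depend on requirements.txt, so the pins are regenerated whenever requirements.in has changed:

```make
build: requirements.txt
	pip install -r requirements.txt
	# ... rest of the build steps ...
```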

If you’d like to see an example of this in use, check out the Makefile changes in the same patch as this post.