(1) Henry Wellcome, photograph by Henry van der Weyde.
(2) Comparative heights of mountains, engraving by S. Hall, 1817.
(3) A fish market in India, gouache drawing 18–.
(4) A shrew mole, coloured engraving.
Let’s look at an example from my workplace. I work at Wellcome Collection, a museum and library about the history of human health and medicine, and one of the things we have is a large collection of digital images.
We’d love to use machine learning and computer vision tools to tag our images, to make them easier for people to find. Maybe an algorithm could tell us that these images show a man, mountains, a market, or a mole. But we have to be careful – machine learning is very good at replicating the biases in its training data.
There are plenty of stories about algorithms replicating the unconscious biases of the humans who trained them. A few years back, Google got in hot water for tagging images of black users as “gorillas”, and Microsoft have had similar issues with motion tracking in their games consoles. A more racially diverse team might have caught that before it shipped to customers.
Image by Pixel-mixer on Pixabay. Used under CC0.
Finally, let’s move out of the digital realm and look at a physical example.
Modern cars are extremely safe. They’re subject to rigorous crash testing and are packed with safety features – but repeated studies show that women are more likely than men to die in car accidents.
That’s because until fairly recently, crash tests only used male-bodied test dummies, modelled on a fiftieth-percentile American man, and that was the basis for safety features. Women – especially smaller women – differ from that body shape and size, so they experience the forces in a collision more severely. The car industry does now use a wider variety of crash test dummies, but it’ll be a long time before this inequality is worked out of our cars.
So what’s the message here?
Inclusion has to be part of our design process. It’s not something we can add later, not something we sprinkle on at the end; it has to be something we think about throughout our work. Throughout our design process.
It’s much harder (and more embarrassing) to fix something after the fact than to get it right from the early stages. We need to think about inclusion throughout: inclusion has to be part of our design process.
Hopefully I’ve convinced you that you need to think about inclusion all the time, so how do we do that?
Let’s go back to the idea of rules. We exclude people because we internalise rules that don’t accommodate them, that don’t include them. How do we spot those bad rules? Often we don’t even realise they’re there, so how can we possibly correct them?
We need to widen our worldview; go out and listen to people who have different experiences to us. We won’t know a rule is bad until we see a counterexample, so we want to make it easy to get counterexamples. I find Twitter useful for this: I try to follow people who are different to me, I read about their lives and their challenges, and that affects my view of the world. Twitter certainly isn’t the only way to do this – find any medium that lets you hear from people who don’t look like you – but it’s the one that works for me.
Closing slide.
I hope I’ve convinced you that inclusion needs to be part of the design process. It’s something we have to think about throughout, not just tack on at the end.
To be more inclusive, take my framing of unconscious bias: think about the patterns we don’t realise we’re spotting, and try to find ways to notice the unhelpful patterns you’ve internalised.