Examining the personal side of technology in Channel 4’s Humans

By now, we’re probably familiar with A.I. and realistic androids in cinema. I sometimes wonder whether there’s anything new to add to a theme that’s still years away from actually *happening*, especially when moments like Rutger Hauer’s “tears in the rain” speech at the end of Blade Runner are so iconic. Then Yasuhiro Yoshiura’s Time of Eve happened. And then, unusually for mainstream TV, Channel 4 gave us Humans.

Like Black Mirror (an approach Time of Eve also took, incidentally), Humans is ‘soft’ SF that concentrates on the social and emotional repercussions of technological advances while downplaying the nuts-and-bolts details. In my view it’s a wise decision: we’re nowhere near advanced enough to create real humanlike robots, so explaining how operating systems, battery life and other details might actually *work* wouldn’t have been time well spent when there are other interesting, and relevant, avenues to explore.

The writers and broadcaster also had to contend with creating something that casual “non-SF” audiences can enjoy, and the impressive ratings for the opening episodes suggest they’ve succeeded. Although the contemporary setting no doubt helped stop the budget ballooning with futuristic props and costumes, it also makes every event and situation more recognisable and easier to relate to because, with the obvious exception of the synth technology itself, we’re already there.

Humans follows the idea that good SF makes as many observations about the present day as it does about its own fictional worldview. In this case, the depiction of fictional “synths” examines existing social issues such as family secrets, medical care for the elderly and the unwell, the legal framework for human rights, and the career uncertainties that young people face in a changing world.

An important point is that the emerging technology doesn’t cause problems on its own. Rather, it brings existing problems closer to the surface and/or gives people an excuse for their own shortcomings. By which I mean: the ‘real’ people have problems already, and interacting with the synths simply makes those problems more obvious than they would otherwise have been. Which is not unlike the everyday occurrence of a “computer error” being blamed for a mistake a human made while using a computer.

The heart-rending story of the elderly scientist George (played to perfection by William Hurt), for instance, highlights the loneliness of solitary elderly people, and how society isn’t catering for their needs despite technological advances supposedly making things easier. Although his role in the development of the synth technology may prove pivotal to the series as a whole, it’s the emotional significance of his situation that really resonates with me: those of us with ageing relatives living alone in failing health will immediately recognise the pain he is going through, and how there’s no quick-and-easy fix for it.

If a domestic relationship between a lone ‘organic’ human and a ‘synthetic’ one can be problematic, what happens after the intrusion of artificial life into an entire family? From the point of view of Mr and Mrs Hawkins, the presence of something that can do the housework effortlessly and look after the kids is an appealing prospect in this hectic twenty-first century of ours. That is, unless the ‘something’ turns out to be a ‘someone’…

There are a lot of secrets and simmering issues that are coming to light as the series progresses, but even individual incidents give us plenty to think about. The awkwardly amusing moment when the middle child’s teenage hormones and curiosity get the better of him while alone with the family’s ‘female’ synth actually raises some interesting questions, as does the telling way in which some of them are treated by the people they serve.

For example: the safeguards against inappropriate physical contact between synths and their charges make sense from an ethical as well as a health and safety point of view, but after he begs ‘her’ not to tell his parents about his embarrassing little indiscretion, she agrees. Why? As well as protecting people from harm, synths are supposedly not allowed to lie. Is it a simple assumption that telling the parents would get him into trouble, and would therefore constitute putting him in harm’s way? Or perhaps a more ‘human’ gesture to save him from embarrassment?

On one level, it’s a rare moment of comic relief in which we may ask ourselves the rather flippant question, “how do Asimov’s Three Laws apply when a robot’s owner tries to cop a feel?”, but taken together these seemingly innocuous moments start to add up to something more complex. Not just in terms of the hidden pasts behind specific characters, but in how we might incorporate advancing technology into our daily lives…not necessarily for better or for worse, either. It just happens in the messy, nuanced and predictably unpredictable way that human beings often do things.

There are bound to be ongoing arguments that Humans is treading familiar ground, but I suspect that the writers were well aware of that fact before they even put pen to paper. The Turing Test and Asimov’s Three Laws have been examined in detail elsewhere already, so they took the logical step of looking back at the people and seeing how *we* react to *them* instead.

Although this is a rare example for terrestrial TV, fans of futuristic SF are probably used to being asked the question, “when should we treat these things as people?” But, to quote the late, great Sir Terry Pratchett, evil begins when we treat people as things. He also said that natural stupidity beats artificial intelligence every time, so perhaps the synths aren’t the ones we should be worried about.