Artificial Intelligence: Real Anxieties?

Is the film Ex Machina – and its depiction of Artificial Intelligence – a sign of a world soon to come?
by Richard McLachlan

The movie Ex Machina feels so current that there are powerful moments of recognition – despite the seemingly unlikely scenario of a walking, talking artificial intelligence (AI). Right now Google is enlisting its massive databases, drawing on the contents of every email and Internet search ever made, in the service of what has been called ‘the Manhattan Project of AI’. Blue Book, the movie’s version of Google, has done the same.

And of course the pivot point, said by many to lie at most a few decades away, is the passing of the Turing Test. The test is passed when a machine, in conversation, fools a human into believing she or he is communicating with another human rather than with a machine.
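To make the structure concrete: at heart, the Turing Test is a blind conversation protocol. The toy Python sketch below is a hypothetical illustration only – it comes from neither the film nor Turing’s 1950 paper, and its ‘machine’ is deliberately trivial – but it shows the shape of the test: a judge questions an unseen respondent, then guesses whether it was human.

    import random

    # A toy sketch of the Turing Test's structure: a judge converses with an
    # unseen respondent and must decide whether it is human or machine.
    # The 'machine' below is deliberately trivial - purely illustrative.

    def machine_reply(question: str) -> str:
        canned = ["Interesting - tell me more.",
                  "Why do you ask?",
                  "I had not thought of it that way."]
        return random.choice(canned)

    def run_test(rounds: int = 3) -> None:
        respondent_is_machine = random.choice([True, False])  # hidden from the judge
        for _ in range(rounds):
            question = input("Judge, ask a question: ")
            if respondent_is_machine:
                answer = machine_reply(question)
            else:
                answer = input("(Hidden human, reply): ")
            print("Respondent:", answer)
        verdict = input("Judge, was that a machine? (y/n): ").strip().lower()
        if respondent_is_machine and verdict == "n":
            print("The machine passed: the judge took it for a human.")
        else:
            print("No pass this time.")

    if __name__ == "__main__":
        run_test()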

Ava, an artificial intelligence with the morphology of a young woman in her twenties, is the subject of the movie’s advanced Turing Test. It is advanced in that no one is in any doubt ‘she’ is a robotic intelligence. The question is rather ‘how intelligent’. Caleb, a vulnerable young programmer from Blue Book, conducts the test. He won a competition at work and has been flown to an isolated high-tech fortress to spend a week with the company CEO.

Ava’s breasts and pelvis are clad in light, silvery mesh suggesting some yet-to-be-developed material with the form-fitting qualities of spandex chain mail. There is an erotic frisson here, with the illusion of flesh tightly constrained by technology. You can see the supporting structure, the cables and the glowing LEDs in the transparent legs and mid-section.

When she puts on a dress and stockings, these few signifiers of femininity, along with a seemingly innocent and lifelike ‘face’, are an invitation to project desire onto a collection of cables and synaptic activity. Sound like a date? I won’t spoil it by saying more.

Ray Kurzweil is a futurist, a prolific inventor and, according to Microsoft’s Bill Gates, “the best person I know at predicting the future of artificial intelligence.” Kurzweil predicts that 15 years from now, virtual reality will start to resemble the real thing. Further, by the end of the 2030s, it will be possible for aspects of the brain to become part of an accessible database.

Apparently “we’ll have millions, billions of blood-cell-sized computers in our bloodstream… keeping us healthy, augmenting our immune system, also going into our brain and putting our neo-cortex onto the cloud.”

Would that be the cloud to which the National Security Agency and its Five Eyes partners would, in future, have unrestricted access? One hundred percent real-seeming virtual reality is just beginning to sound like something I could cope with, if not exactly embrace as recreation. Yet having millions of nano-computers in my bloodstream, accessing my private thoughts and intentions, and then sharing them with people who might view them as a threat – well, that is something else altogether. Fortunately, a closer reading of what is actually likely suggests that an imminent upload of the neo-cortex is highly optimistic.

This idea of a neurophysiologic interface with computer systems and databases – ‘jacking in’ as William Gibson once put it – is but one component of an envisaged future onto whose escalator we have already stepped.

The concept of a ‘singularity’ whereby various advanced technologies converge – and demand radical reassessment of what it means to be human – has been articulated and popularized by Kurzweil. It is, he says, “… a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.”

This vision is very similar to what philosopher Nick Bostrom (who steers well clear of popular pronouncements on Kurzweil’s coming ‘singularity’) describes as an intelligence explosion. We will arrive, at some future point, at a ‘superintelligence’ capable of replicating and improving itself – a possibility he suggests we take seriously, and consider how we might retain control over the outcome.

Bostrom assesses five paths for their likely contribution to superintelligence: artificial intelligence created through advanced computer programming; direct copying of the human brain (through scanning and data processing); biological cognition (see CRISPR below); brain-computer interfaces (see ‘jacking in’ above); and the networking of human minds and machines. The first two could result in the creation of an artificial intelligence. The remaining three involve the enhancement of human capability, and thereby increase the likelihood of our being able to create an intelligent machine.

These radically enhanced human beings (cyborgs, or transhumans) could be beings whose genetic code has been altered, whose physiology is enhanced by nanotechnology or robotics, or all three. GNR – genetics, nanotechnology, robotics – is the acronym.


The development in 2012 of CRISPR-Cas9, a revolutionary technology derived from a bacterial immune system, allows for the ‘editing’ of genes – the identification and ‘replacement’ of DNA letters to disable or change a gene’s function – in order to treat rare genetic diseases in humans as well as in other creatures.
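Purely as an analogy – real Cas9 biochemistry is vastly more involved than string surgery – the ‘find and replace DNA letters’ idea can be pictured in a few lines of Python. The sequences below are invented for illustration; nothing here reflects an actual gene.

    # Toy analogy for CRISPR-style editing: locate a target sequence in a DNA
    # string (as a guide RNA directs Cas9 to its matching site), cut it out,
    # and splice in a replacement. All sequences are invented examples.

    def edit_gene(genome: str, target: str, replacement: str) -> str:
        site = genome.find(target)           # the 'guide' finds its matching site
        if site == -1:
            return genome                    # no matching site: nothing to edit
        return genome[:site] + replacement + genome[site + len(target):]

    faulty = "ATGGTACCGTTAGCCTAA"            # hypothetical gene with a 'faulty' stretch
    edited = edit_gene(faulty, target="CCGTTA", replacement="CCATTA")
    print(edited)                            # -> ATGGTACCATTAGCCTAA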

While not straightforward when applied to the genes determining human intelligence, ‘proofreading’ the genome to remove genetic obstructions to greater levels of intelligence is already being imagined. The possible results of this ‘germ-line engineering’ are described by Bostrom as “exceptional physical and mental constitution, with elevated functioning … in health, hardiness, and appearance.”

Researchers at MIT are already hoping to influence SHANK3, a gene involved in neuron communication which, when damaged in children, is known to cause autism. Merle Berger of Boston IVF predicts wide support for this sort of intervention, but says the prospect of technology going beyond it would cause ‘public uproar’: “Everyone would want the perfect child.”

And which future global elite will want to replicate its genes through scions whose intelligence is less than money can buy?


Kostas Kostarelos, professor of nanomedicine at University College London, is clear that human enhancement is a reality. He works with nanotechnology on ways to enhance vision, cognitive function, or the ability to move independently. He envisages new devices, built from nanomaterials, small enough to be introduced into the body by injection or tattooing. These first-generation devices will likely act as health monitors, able to connect wirelessly, whether to a digital display or to a doctor. He views improvements to sensory and cognitive function as the next generation.

I have seen young men running along the Hudson River wearing steel-spring feet like those Oscar Pistorius wore in his races – and they can really move. These technologies are not repairing damage; they are enhancing what may come to be perceived as our human limitations. As Andy Miah puts it: “The human enhancement market will reveal the truth about our biological conditions – we are all disabled. This is why human enhancements are here to stay and likely to become more popular.”

Projections into the future of this escalator ride are already reflected in movies and daily life. Tom Cruise in Minority Report has his iris scanned at the entrance to a department store. Personally tailored advertising greets him as he enters, prefiguring (back then) a world with which we are now more or less familiar.

It’s not hard to imagine the iPhone’s Siri developing seamlessly into Scarlett Johansson’s oh-so-intimate but innocently promiscuous AI in the movie Her – a machine that passed the Turing Test with flying colours. All that remained was for Joaquin Phoenix to project onto the machine both his desire for her and his anguish at her other lovers.

The suggestion that the Google search engine is “arguably, the greatest AI system that has yet been built” tells us we are already there – even if it has not yet achieved the level of general human intelligence. Our familiarity with optical character recognition, language translation, and ‘assistants’ like Siri affirms John McCarthy’s observation (he coined the term artificial intelligence): “as soon as it works, no one calls it AI anymore.”

The company DeepMind was recently bought by Google for $650m. This followed a demonstration of its software learning to play Atari computer games better than a human expert – and startling those present with its sophistication. The software uses a combination of networks of simulated neurons and reinforcement learning techniques developed in behavioural psychology.
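DeepMind’s Atari player is far beyond a few lines of code, but the behavioural-psychology idea at its core – nudging an estimate of an action’s value toward the reward that experience actually delivers – can be sketched. Below is a minimal tabular Q-learning fragment in Python; it is a simplification (DeepMind replaced the table with a deep network of simulated neurons), and the action names and parameters are illustrative only.

    import random
    from collections import defaultdict

    # Minimal tabular Q-learning: the reinforcement-learning core that, scaled
    # up with deep networks of simulated neurons, drove DeepMind's Atari play.

    ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration
    ACTIONS = ["left", "right", "fire"]      # illustrative action set

    Q = defaultdict(float)                   # Q[(state, action)] -> value estimate

    def choose_action(state):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def learn(state, action, reward, next_state):
        # Nudge the value estimate toward reward plus discounted future value -
        # better-than-expected outcomes reinforce the action, as a reward does
        # in behavioural psychology.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])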

Will this artificial ability to learn through experience – something we know we share with some other creatures – far outstrip our own, if only because of the sheer speed of processing? Nathan, the Blue Book CEO in Ex Machina, suggests that to the AIs of the future we will appear as primitive as our hominid ancestors look to us now.

Technological anxiety rather than AI world domination

Alex Garland, the director of Ex Machina, wanted to address the levels of anxiety ‘floating around’ about AI. He feels the anxiety is less about AIs taking over, or the world changing dramatically, than it is about our feelings of disempowerment – tech paranoia, he calls it.

As the technology becomes more sophisticated, apparently more arcane, and more ‘knowing’ about us and our personal history and preferences, the feeling of dissonance builds. Passing through international airports just recently, I noted plenty of evidence of big investment in automated security. ‘Intelligent’ machines read my passport, and took and ‘recognized’ my photograph before I could pass through an automated gateway. It was eerie and alienating.

This involuntary provision of personal information is now, thanks to Edward Snowden, associated with the knowledge that every single electronic transaction, phone call, or contact with the internet is being captured, stored, and made available for future reference should we become persons of interest.

Intelligence agencies with opaque agendas are fully supported by our elected governments to collaborate with the very tech companies to whom we give our personal information (whether drunk or sober). We can no longer not know this – so the anxiety is hardly surprising.

The Turing Test, one of a number of tests for machine intelligence, provides a nice metaphor in Ex Machina. Our only resource for building an intelligent machine, at least at the beginning, is our own intelligence – and the stories we have told ourselves so many times, we think they’re real.

The Test quickly moves beyond simple markers of intelligence in a ‘fembot’ to her clear ability to engage in a relationship – with an agenda. As it does, an apparently simple test becomes a mirror held up to our own narratives. The AI has to be humanoid. Who, Nathan asks in Ex Machina, is going to want to communicate with a grey box? It’s true – you need to put a wig on it. Caleb is heterosexual, so a ‘woman’ will give the interaction its necessary edge. Gender identification plays a huge part in this movie.

But then so do other cultural assumptions. There is a sense in which the entire movie is three people ‘testing’ each other and in doing so showing us ourselves. Opportunities for collaboration appear and are passed up or betrayed by one or other character pursuing their own interests.

The assumption of self-interest is not a universal test marker of human intelligence. It is both culturally specific and totally pervasive – “You can’t walk into a game store today and not be confronted by games that celebrate neo-liberalism in some way.”

Tech anxiety has a very real basis in fact. But the potential for creativity, cultural re-imagining, and resistance is its reassuring companion. As Jaron Lanier commented on last year’s purchase of Oculus VR by Facebook – “Whether (it) will yield more creativity or creepiness will be determined by whether the locus of control stays with individuals or drifts to big remote computers controlled by others.”

Fighting against Facebook, Apple, or Google and their complex offerings of convenience and creative delight, mixed with personal data capture and a disturbing connivance with government – well, it looks increasingly quixotic.

Instead, we could apply some form of Turing Test for genuine intelligence – to both ourselves and to our cultural assumptions. It might equip us better for what is coming. At this point, what else can we do?

Footnotes

1. Artificial (or machine) Intelligence was defined by John McCarthy, who coined the term in 1956, as “the science and engineering of making intelligent machines.” What exactly constitutes intelligence is not entirely clear, but the Turing Test, developed by Alan Turing in 1950, remains one among several agreed definitions of an intelligent machine.

The Turing Test is “a proposed test of a computer’s ability to think, requiring that the covert substitution of the computer for one of the participants in a keyboard and screen dialogue should be undetectable by the remaining human participant” (Collins English Dictionary, Complete & Unabridged 2012 Digital Edition).

2. Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.

3. Andy Miah, director of the Creative Futures Institute and professor of ethics and emerging technologies at the University of the West of Scotland.
