How scary is artificial intelligence really? Will it – can it – replace us? The Barbican Centre's exhibition considers the future
A gaggle of nursery tots are enjoying Universal Everything’s Future You, one of the installations at AI: More than Human, the Barbican’s new exhibition on artificial intelligence. One by one they stand in front of the screen and delight as the robotic figure on it mimics their movements. What changes will these AI natives see in their lifetimes, for better or for worse, in how AI shapes the way we live?
The exhibition defines AI as ‘the endeavour to understand and recreate human intelligence using machines’ and invites the visitor to consider a world where their intelligence is not the only one. Somewhat unexpectedly, this potentially most future-leaning of exhibitions looks back, as well as forwards, to set AI in its historical context. We learn about the ancient Japanese tradition of giving inanimate objects human characteristics such as a face and hands in order to better communicate with them, and the idea of Kami – divine, benign forces of nature who live in household objects and communicate with humans. More recently, the familiarity of non-human characters in manga and animation is suggested as a factor in Japanese culture’s progressive attitudes to technology. There’s also a major section on the Golem, a mythical figure from Jewish culture made from inanimate matter that magically comes to life, and a look at alchemy, in particular those alchemists, such as Paracelsus, interested in creating homunculi, miniature human-like beings. A trio of screens flashes up images of AI from popular culture – Metropolis, Flash Gordon, Star Wars, Blade Runner, RoboCop, Avatar, Ghost in the Shell, Frankenstein… seen together, they reveal our enduring fascination with the theme.
I particularly enjoyed this part of the exhibition, but I suspect it isn’t what most visitors attracted to an AI show would have come for. That comes next. Context is provided by an AI timeline which, starting from 1800, runs through developments in AI thinking, including pioneering early computing, before reaching the first golden age of AI in the US, from 1956 to 1973. This was eventually followed by a second golden age, from 1994 to 2017, which includes innovations such as Windows 95, Siri, Alexa and AlphaGo, the first computer program to defeat a professional human player at the strategy game of Go. Along the way we learn about the development of neural networks (artificial systems modelled on the brain and nervous system), machine learning (learning from experience without being explicitly programmed), and how the quality of the dataset, the raw material, influences an AI’s output. In this way, human prejudices can all too easily infiltrate AI. There’s an enlightening section on perception, in particular how what we see in our consciousness is a reconstruction based on expectations and prior beliefs, rather than a mirror image. To demonstrate this, Nexus Studios’ Learning to See installation lets visitors rearrange an arrangement of flannels and cables and watch as a neural network, trained on a specific dataset, translates them into very different images such as flames, flowers and clouds.
AI’s triumph in the game of Go touches on the unease that we humans understandably feel about our potential, growing obsolescence, and about how AI could infringe on our privacy. In journalism, for example, AI can be used to write functional, data-driven articles. ‘Do we still need people to deliver emotion in the written word, or could AI conceivably perform this role?’ the exhibition asks. We learn that the New York Times already uses automated tools to help moderate comments. The considerable potential benefits to healthcare, road safety and space exploration are also explored, as is the role of AI in architecture with the inclusion of Sony CSL’s Kreyon City, a sort of smart Lego for city planning. Developed in tandem with AI technology, it gives designers access to data for visualising energy consumption, pollution levels and numbers of inhabitants. The message is that the future is one of collaboration, not replacement.
We are shown how machines absorb human associations between words in Google PAIR’s project Waterfall of Meaning, and how the human-faced Alter 3 machine learns through interplay with the world around it. There’s plenty more to amaze, such as aibo, a robot puppy that develops a personality from its database of memories; robot fish; a dancing model of the Sydney Opera House; and Joy Buolamwini’s look at racist and sexist bias in facial-analysis software – all rather a lot to take in within a relatively small space. After the main exhibition, the visitor is encouraged to experience several other installations around the Centre. Following the challenging content of the main display, it was something of a relief to end up in What a Loving and Beautiful World, a beautifully soothing immersive digital installation created by teamLab, where butterflies and Chinese characters floating down the walls respond to the shadows that visitors cast. It’s a tranquil end to an exciting, but often thorny and difficult, subject.