As Artificial Intelligence (AI) is used in more BBC products and everything else online, we think it’s important to deliver AI-powered systems that are responsibly and ethically designed. We also want to ensure that everyone has the opportunity to understand more about how this influential technology works in the world. This is part of a series of posts on this topic.
We have noticed that news stories or press releases about AI are often illustrated with stock photos of shiny gendered robots, glowing blue brains or the Terminator. We don't think that these images actually represent the technologies of AI and ML that are in use and being developed. Indeed, we think these are unhelpful stereotypes; they set unrealistic expectations, hinder wider understanding of the technology and potentially sow fear. Ultimately this affects public understanding and critical discourse around this increasingly influential technology. We are working towards better, less clichéd, more accurate and more representative images and media for AI.
Try going to your search engine of choice and searching for images of AI. What do you get?
What are the issues?
The problems with stock images of AI have been discussed and analysed a number of times already, and there are some great articles and papers that describe the issues better than we can. The Is Seeing Believing? project asks how we can evolve the visual language of AI. The Real Scandal of AI also identifies issues with stock photos. The AI Myths project, amongst other topics, includes a feature on how shiny robots are often used to represent AI.
Going a bit deeper, this article explores how researchers have illustrated AI over the decades, this paper discusses how AI is often portrayed as white “in colour, ethnicity, or both” and this paper investigates the “AI Creation” meme that features a human hand and a machine hand nearly touching. Wider issues with the portrayal and perception of AI have also been frequently studied, for example by the Royal Society.
To help us think through the problem we developed a workshop format in which we discuss existing imagery and then think about, sketch and create some better alternatives. The workshop isn’t just about pictures though, it’s thinking through what we talk about when we talk about AI. We have run the workshop with BBC teams several times and earlier in the year we took it to the 2021 Mozilla Festival. We start our workshops by examining and discussing existing images that represent AI and ML.
In our sessions there is often disagreement on which images are helpful and unhelpful; it's not clear-cut. Some of the diagram-style images might be helpful, but only if you know a bit about the subject, and they're not visually striking or immediately recognisable. Similarly, funny images don't work unless you know enough to get the joke.
The style of the existing images is often influenced by science fiction, and there are many visual clichés of technology, such as 0s and 1s or circuit boards. The colour blue is predominant; although in this case it seems to be representing technology, blue can also be read as representing maleness.
The frequent representation of brains associates these images with human intelligence, although much AI and ML in use today is far removed from human intelligence. Robots occur frequently, but AI applications very often have nothing to do with robots or embodied systems. The robots are often white, or they’re sexualised female representations. We also often see “evil” robots from popular culture, like the Terminator.
The people we’ve workshopped with liked images that illustrate AI finding patterns, order or connections, or images that try to embed AI and ML into the reality and context in which they are used - like surveillance cameras or voice assistants.
What do we think about AI?
From reviewing the research literature and interviewing AI engineers and developers in the BBC, we have identified some common themes that we think are important in describing AI and ML, and that could help when thinking about imagery.
- AI is all based on maths, statistics and probabilities
- AI is about finding patterns and connections in data
- AI works at a very large scale, manipulating almost unimaginable amounts of data
- AI is often very complex and opaque, and it’s hard to explain how it works. Even experts and practitioners find it hard to understand exactly what’s going on inside these systems
- Most AI systems in use today only really know about one thing; this is “narrow” intelligence
- AI works quite differently from the human brain; in some ways it is an alien, non-human intelligence
- AI systems are artificial and constructed and coded by humans
- AI is a sociotechnical system; it is a combination of computers and humans creating, selecting and processing the data
- AI is quite invisible and often hidden
- AI is increasingly common, becoming pervasive, and affects almost all of us in many areas of life. It can be powerful when connected to systems of power, and it affects individuals, society and the world
We would like to see more images that realistically portray the technology and point towards its strengths, weaknesses, context and applications. Maybe they could...
- Represent a wider range of humans and human cultures than ‘caucasian businessperson’ or ‘humanoid robot’
- Represent the human, social and environmental impacts of AI systems
- Reflect the realistically messy, complex, repetitive and statistical nature of AI systems
- Accurately reflect the capabilities of the technology: AI systems are generally applied to specific tasks and do not have human-level intelligence
- Show realistic applications of AI
- Avoid monolithic or unknowable representations of AI systems
- Avoid using electronic representations of human brains, or robots
Towards better images
In creating new stock photos and imagery we need to consider what makes a good stock photo. Why do people use them and how? Is the image representing a particular part of the technology or is it trying to tell a wider story? What emotional response should the viewers have when looking at it? Does it help them understand the technology and is it an accurate representation?
Consider the visual style; a diagram, a cartoon or a photo each brings different attributes and will communicate ideas in different ways. Imagery is often used to draw attention so it may be important to create something that has impact and is recognisable. A lot of existing stock photos of AI may be misrepresentative and unhelpful, but they are distinctive and impactful and you know them when you see them.
To conclude our workshops we wanted people to start making their own images, or think about what those images might be. They didn't need to be "good" - we were more interested in seeing what people focused on and how they wanted to represent it.
Some of the themes we’ve seen develop from these creations include:
- Putting humans front and centre, and showing AI as a helper, a tool or something to be harnessed.
- Showing the human involvement in AI; in coding the systems or creating the training data.
- Positively reinforcing what AI can do, rather than showing the negative and dangerous aspects.
- Showing the input and outputs and how human knowledge is translated into data.
- Making the invisible visible.
- Showing AI getting things wrong.
Interesting metaphors used include sieves and filters (for data), friendly ghosts, training circus animals, social animals such as bees or ants with emergent behaviours, child-like learning, and the past predicting the future.
A new image representing datasets / creating order / digitisation
This is just a starting point and there is much more thinking to be done, sketches to be drawn, ideas to be harnessed, definitions agreed on and metaphors minted.
We have already kicked off a project with students at the London College of Communication, giving them this brief, and we’re excited about what they’ve created and how they thought about the problem. We are now working with We and AI to develop more ideas and spread the word. Ultimately we’re hoping to help create a library of better stock photos for AI; we’re starting to look for artists to commission and collaborators to work with. Please get in touch if you’re interested in working with us.
This work is part of a wider project that we're working on to try to make AI and ML more understandable to everyone. See previous posts on why we think this is important, different ways we can approach explaining, prototypes that demonstrate understandable AI and a video about AI aimed at young people.
This post is part of the Internet Research and Future Services section