Jake Elwes, "The Zizi Show", 2020. Montage of a deepfake trained on drag queen Lilly Snatchdragon. Courtesy of the Artist. © VG Bild-Kunst


Our questionnaire

Artificial Intelligence as Phantasm

AI can only recognise the patterns we have previously trained it to recognise

Inke Arns, Dr. phil., is a freelance curator and author. She has been artistic director of HMKV Hartware MedienKunstVerein, Dortmund, since 2005 and its director since 2017. In this interview, she talks about the exhibition "House of Mirrors" and what mirror mazes have to do with artificial intelligence. The exhibition is on view at HMKV from 9 April to 31 July 2022.


Inke Arns



Birthplace / Place of residence:

Duisdorf (today Bonn) / Dortmund


Curator, director of HMKV Hartware MedienKunstVerein


The art world is dealing intensively with the subject of AI at the moment. Why now?

Artificial intelligence has arrived in our everyday lives - just think of robot vacuum cleaners, facial recognition on mobile phones or autonomous vehicles. That is why many artists are engaging with AI as well. HMKV mounted its first exhibition on the subject as early as 2015, and at the time we were almost worried that we were too late. The current occasion is the research project "Training the Archive", which HMKV Hartware MedienKunstVerein has been carrying out together with the Ludwig Forum Aachen since 2020.

Critical analyses of AI have only gained momentum in recent years - here I am thinking in particular of Safiya Umoja Noble's Algorithms of Oppression (2018), Pattern Discrimination (2019, ed. by Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer and Hito Steyerl), the project Excavating AI (2019) by Kate Crawford and the artist Trevor Paglen, and Kate Crawford's Atlas of AI (2021).

Why is the scenography of the exhibition reminiscent of a huge mirror labyrinth?

We - Marie Lechner, Francis Hunger and I - prefer not to speak of "artificial intelligence" but of "pattern recognition". Because that's what it's all about. AI can only recognise the patterns we have previously trained it to recognise. Depending on the data with which these AIs are trained, their outputs are more or less biased, prejudiced or even racist. This is because the AI can only be as good (or bad) as the humans who train it.

If the material (e.g. pictures of faces) is already subject to strong selection (e.g. only faces of white people), the result delivered by the AI will be strongly biased. If you subsequently show the AI pictures of people with a different skin colour, it will not recognise that they are human beings, or it will classify them as "gorillas" or "criminals".
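The mechanism Arns describes - a system that can only answer with the patterns it was trained on - can be sketched with a deliberately minimal toy model. All feature values and labels below are invented for illustration; this is a one-dimensional nearest-neighbour sketch, not a real image classifier or dataset:

```python
# Toy illustration of dataset bias: a 1-nearest-neighbour "classifier"
# whose training data covers only a narrow slice of possible inputs.
# Feature values and labels are invented for illustration only.

def nearest_label(sample, training_data):
    """Return the label of the training example closest to `sample`."""
    return min(training_data, key=lambda ex: abs(ex[0] - sample))[1]

# The curators of this toy dataset only included "face" examples from a
# narrow feature range (0.1-0.3); everything else was labelled "object".
training_data = [
    (0.10, "face"), (0.20, "face"), (0.30, "face"),
    (0.90, "object"), (0.95, "object"),
]

# An input inside the training range is recognised as a face ...
print(nearest_label(0.25, training_data))  # → face
# ... but an input outside that narrow range is confidently mislabelled,
# because the model can only echo the selection made during training.
print(nearest_label(0.70, training_data))  # → object
```

The point of the sketch is that the second input is not "recognised badly" - it is mapped onto whatever the biased training set happens to contain, exactly as in the facial-recognition failures described above.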

There is a popular internet meme about this from June 2021.

As you can see, AI can be strongly biased. It only reflects what we have "taught" it in the training process. And this is where it gets tricky: AI training data sets are often incomplete or biased, and descriptions created by humans can prove to be unexpectedly problematic because of the biases inherent in them.

This is why we came up with the metaphor of the House of Mirrors. Most of us know mirror mazes from traditional funfairs: once you have entered the labyrinth of glass walls and distorting mirrors, it is difficult to find your way out again. And every reflection shows only your own image, your own input.

What are the seven thematic chapters of the exhibition?

You enter through the lobby, where two works by Lauren Huret and Sebastian Schmieg already greet you, and then pass through the following seven rooms:

ROOM 1 - Eine Traumlandschaft der Vollautomatisierung / A Dreamscape of Full Automation

ROOM 2 - Ceci n'est pas une pipe

ROOM 3 - A Chamber of Wonders with Slightly Violent Machines

ROOM 4 - The Hidden Chamber of Artificial Artificial Intelligence

ROOM 5 - Cabinet of scary laughter

ROOM 6 - First I scratched the mirror, then I smashed it

ROOM 7 - Exit Through the Gift Shop

Which aspects fascinate you in the selected artistic works?

Regarding the selection, I can say - also on behalf of my colleagues - that we were looking on the one hand for works that work well in the space, and on the other for works that understand AI not only as a technology but as a social constellation.

We are fascinated by the seriousness with which artists approach the topics and how they try to take apart and deconstruct the technology. How they make visible processes that normally remain invisible (such as the preparation of training data for AI). And how, beyond making it visible, they also intervene in an exemplary way and show us how we can regain agency (e.g. Jake Elwes' The Zizi Show - which is about "queering" training data - or Lauren Lee McCarthy's LAUREN, in which the artist takes on the role of a digital assistant in a smart home).

Why is the exhibition particularly dedicated to the problematic aspects of AI?

Because these aspects are addressed far too little. Most people - including, of course, the industry - think of AI as a pure problem-solving tool that makes objective decisions. We want to show that AI is anything but objective.

Automation has a strong impact because it affects all of our lives. AI is the next level of automation that we have to deal with. Charlie Chaplin did that for the 20th century in his classic film Modern Times. Now we live in the 21st century.

What kind of social approach to AI would you like to see?

It's about transparency, about opening the black box of artificial intelligence. We need to know what happens in this black box. We need to understand the mechanisms on the basis of which we are classified, categorised and evaluated. We achieve this not only by teaching more media literacy at school, but also by understanding pattern recognition and AI-driven decisions in our everyday lives, everywhere and at all times.