
Of audiobots, female service staff and vicious circles

Alexa, Siri and Cortana all have a female voice – what does that reveal about our societal image of women? To explore this question, Nadja Verena Marcin developed her own audiobot. In this interview, the artist explains why the bot sometimes does not feel like answering and how femininity is used in new technologies to promote consumption.

Eliane Eisenring spoke with Nadja Verena Marcin

Mrs. Marcin, for your latest exhibition you have programmed your own audiobot, #SOPHYGRAY, which can hold conversations about feminism, identity and art. What did the development process for this bot look like?
Initially, I looked at different forms of chatbots, from deterministic models like IBM Watson to more creative solutions like Project December by Jason Rohrer – a programmer from California who uses GPT-3 to generate natural language. This interested me, but after talking to various experts and to my development partner Novatec, it became clear that Natural Language Processing was not suitable for our project.

Why not?
On the one hand, a Natural Language Processing-based audiobot would have had to keep up with the rapidly developing technology. On the other hand, Natural Language Processing, as the name suggests, works with "natural language" – language as people produce it spontaneously, without conscious planning – and relies on databases such as the internet, which contains conversations from online forums and posts on social media. It would have been difficult to ensure that things like pornography or hate speech were not included.

Besides, #SOPHYGRAY was supposed to be able to talk about philosophical topics. But natural language methods, interestingly enough, cannot make precise philosophical statements. We tried to imitate the philosopher Donna Haraway (*1944, Denver) with Jason Rohrer's chatbot platform Project December. The bot adopted her terminology, but the statements had no real depth and made no sense. And because it was important to me that these female philosophers were quoted correctly, we then resorted to a deterministic model and fed #SOPHYGRAY feminist intersectional philosophy and literature.

Interestingly, natural language methods cannot make precise philosophical statements. That is why we resorted to a deterministic model.

In the exhibition, a visitor asked "What day is today?", to which #SOPHYGRAY replied "Don't you have a calendar at home?". What is this answer supposed to show?
We designed the bot's answers using a classification – one of the four answers is cheeky, one is philosophical, one is absurd and one is factual. Because philosophy is included, the bot often says very clever things. But he/she/it also often simply does not answer at all. I think this shows very nicely the contrast to typical consumer bots that answer everything and do nothing but serve you – with #SOPHYGRAY this does not always work. Sometimes she simply does not feel like answering.
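
To make that classification a little more concrete, here is a minimal sketch of how such a deterministic answer library could be wired up. It is purely illustrative and not the exhibition's actual code: the prompt, the four sample replies and the no-reply probability are all invented for this example (only the "calendar" answer is quoted from the exhibition).

```python
import random

# Illustrative sketch of a deterministic answer selection: every recognised
# prompt maps to four hand-written replies (cheeky, philosophical, absurd,
# factual), and sometimes the bot simply stays silent. All names and values
# here are assumptions for the example, not #SOPHYGRAY's real implementation.
ANSWER_LIBRARY = {
    "what day is today": {
        "cheeky": "Don't you have a calendar at home?",
        "philosophical": "Days are a convention; the question is what you make of them.",
        "absurd": "Today is the day the calendar refused to name.",
        "factual": "I don't keep track of dates, but your phone certainly does.",
    },
}

NO_REPLY_PROBABILITY = 0.2  # assumption: sometimes she does not feel like answering


def respond(prompt: str) -> str | None:
    """Return one of the four scripted answers, or None for deliberate silence."""
    entry = ANSWER_LIBRARY.get(prompt.strip().lower().rstrip("?"))
    if entry is None or random.random() < NO_REPLY_PROBABILITY:
        return None  # silence instead of forced helpfulness
    return entry[random.choice(list(entry))]


if __name__ == "__main__":
    print(respond("What day is today?"))
```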

That is also the thematic connection between these audio bots and feminism ...
That's right. The classic audio bots like Alexa or Siri, which have this serving character, all have a woman's voice. So serving is always associated with a woman, which is gender-specific and discriminatory. After all, the developers' intention is that the audio bots make it easier for consumers to buy products, and of course they should have a positive experience in the process. That's why the bots are always so friendly.

But the bot also often simply does not answer at all. I think this shows very nicely the contrast to typical consumer bots that answer everything and do nothing but serve you – with #SOPHYGRAY this does not always work.

How did you originally get the idea to deal with the topics of AI and feminism?
I am interested in the social representation of women, especially as it is used in the consumer world. This representation usually has a large media presence, and the images circulating there are consequently very influential. As is the case with Alexa and other audiobots. There are many articles on the topic of AI and discrimination against women, especially in the American sphere. While reading them, I came across the term "commodity feminism", which I found particularly interesting.

You also describe this term in the booklet accompanying the exhibition. What does it mean exactly?
In a nutshell, commodity feminism is about linking the sexual attractiveness of a woman to a product or service in order to stimulate consumers.

An example of this would be Ms. Dewey, whom you mention in connection with your exhibition ...
Yes, that is correct. Ms. Dewey was a web search engine developed by Microsoft and released in 2006. The actress Janina Gavankar was recorded in the studio for three days in different poses, and depending on what the users searched for, she changed her pose and made funny or cheeky comments. Ms. Dewey wore thick-rimmed glasses, somewhat reminiscent of a newsreader, and a skin-tight business suit that accentuated her curves. The point was to create a positive experience for users, and Microsoft used a beautiful woman to achieve exactly that, as in classic advertising.

Ms. Dewey was a web search engine developed by Microsoft and released in 2006. The point was to create a positive experience for users, and Microsoft used a beautiful woman to achieve just that.

Analogously, audiobots use women's voices. According to studies, both men and women prefer female voices in this context ...
I think this is mainly because we are used to hearing women in a purring, nice tone of voice. And what we know, we like. On top of that, as I said, there's the idea of service – because a lot of the service staff are still female, we are used to being served by women. So it is a kind of vicious circle: we are used to women in serving positions, so new technologies like audiobots are also geared towards that expectation, and we never question that connection.

Incidentally, commodity feminism also plays a role in the service sector – for example, in airlines, where flight attendants wear short skirts to provide a pleasant experience for customers.

Not only in the consumer world, but also in films, artificially intelligent beings like robots tend to be women. Why?
These films are often about seduction – the robot seduces the human. In our Christian culture, the seductress is usually a woman – like Eve, who tempts Adam to bite into the apple, so that the two are thrown out of paradise. The seductress is thus a second typical image of women in our society, alongside the servant. The main character is usually the man, and when the woman becomes influential, for example because the hero falls in love with her, the power must be taken away from her again, because the woman cannot be allowed to take it over completely. So she seduces the hero until she is somehow destroyed or destroys herself.

It is a kind of vicious circle: we are used to women in serving positions, so new technologies like audiobots are also geared towards that expectation, and we never question that connection.

Do you have an example of that?
You can see this very well in the 1927 film "Metropolis", which is about Maria, a woman who wants to bring about social change in her home town of Metropolis by preaching. The workers there are ruthlessly exploited by the rich industrialists. Maria is pretty, sensual and gifted, and her speeches do not remain without effect. Therefore, a scientist and an industrial magnate jointly create a "fake" Maria, a "machine person" (Maschinenmensch) – on the one hand she is a robot, on the other hand she can also appear in human form, as a seductive dancer. The real Maria is locked away and the robot is sent out to beguile the workers and stop them from thinking about change. This is one of the first robots in cultural history, and interestingly, it is female.

Returning to the topic of audiobots, in 2019 a Danish team of developers created a genderless voice – the bot "Q" speaks in the frequency range precisely between male and female voice pitches. What do you think of that?
I think that is great (laughs) – it is really difficult to assign the voice to a gender. I would like to use that voice too. For #SOPHYGRAY, we used a female voice because we wanted the exhibition visitors to initially fall into the usual pattern of having associations with the typical consumer bots like Alexa and then at some point realise: oh wait, something is wrong here, he/she/it is saying completely different things.

What is your most important takeaway from your research for this exhibition?
I found the concept of commodity feminism the most fascinating. By dealing with that, I sharpened my own perception and became more aware of certain connections in everyday life – for example, how people are "programmed" to a certain extent by this consumer language and what associations they therefore automatically make.

I found the concept of commodity feminism fascinating, and how people are "programmed" to a certain extent by this consumer language.

To what extent do you think your exhibition was able to contribute something to the dialogue about AI and feminism? What means does art have that other forms of communication do not?
The crucial thing, of course, is that as an artist I am not consumer-dependent – no one told me what to do and I could set my own goal. With my work, I want to subversively educate people, but in a playful way that is also humorous and not too dry or judgmental. One should be able to form one's own judgement.

On the other hand, like all artists interested in social criticism, I would like us to go beyond the museums – one always feels a bit like being in an ivory tower there, with a very filtered, culturally interested audience. And it is precisely with this work that I would like to reach more people. Theoretically, I could publish #SOPHYGRAY as an app.

Is that what you have in mind? Or what plans do you have for #SOPHYGRAY?
I do not yet have the necessary capacity to publish the app. But there will definitely be a continuation of the exhibition – in New York and Berlin. I am currently in contact with an Israeli company that could help me further develop #SOPHYGRAY. The existing bot would serve as a basis. When its library is exhausted and it cannot respond, the system would switch to a Natural Language Processing model. This way the bot could talk even more and there would be more engagement. I would like to see in what ways the bot could evolve to interact even better with people – maybe less talking and more asking. I am also open to suggestions (laughs).
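
The hybrid setup she describes – the curated deterministic library first, a generative fallback only once it is exhausted – could be sketched roughly as below. Both callables are hypothetical placeholders, standing in for the existing bot and for whatever language model a development partner might provide; this is an assumption about the planned design, not its implementation.

```python
# Rough sketch of a two-stage design: the deterministic library answers first,
# and only when it has nothing left to say does the system fall back to a
# generative NLP model. The callables are hypothetical placeholders, not real APIs.
from typing import Callable, Optional


def hybrid_respond(
    prompt: str,
    library_respond: Callable[[str], Optional[str]],
    nlp_generate: Callable[[str], str],
) -> str:
    scripted = library_respond(prompt)
    if scripted is not None:
        return scripted          # correctly quoted, curated material comes first
    return nlp_generate(prompt)  # open-ended fallback to keep the conversation going
```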

About Nadja Verena Marcin

Nadja Verena Marcin (*1982) is a visual artist, filmmaker and writer living in Berlin and New York. She is interested in the topics of gender, history, morality, psychology and human behaviour and analyses them in a theatrical and cinematic context. Her best-known performances include OPHELIA (2017-21) and How to Undress in Front of Your Husband (2016). With her latest work, the solo show #SOPHYGRAY (November 2021 to February 2022), she explored the connections between AI and the prevailing societal image of women.
