HADES. A Dark Parable About Light (2017)

A Work by Markus Decker and Pamela Neuwirth

Photo: Markus Decker

It seems that nothing is left to chance any more. In the utopia of a highly efficient society, fantasies and fears of “intelligent” optimization proliferate wildly. The possibility of collecting and analyzing incredible amounts of data and letting them be evaluated by an unknown quantity has now reawakened expectations of artificial intelligence from their hibernation.

With the title Hades, the artists refer to the Greek god of the underworld. As the realm of dead souls, Hades’ underworld becomes a staged space with a glowing gelatin cube at its center. As long as the cube is glowing, a photon detector registers the points of light. This setting opens up a metaphysical discussion about soul and consciousness: the individual enters the world with the poet Dante, only to become entangled shortly thereafter in Descartes’ mind-body problem. The metaphysicians Gassendi and Gruithuisen explore alien civilizations in outer space, while Mary Shelley invents an artificial human with Frankenstein. Husserl explains perception as mental phenomena of consciousness – still largely open questions in current neuroscientific debates about AI and robotics.

The exchanges among the philosophers and authors take place in ANN (Artificial Neural Network). In the first order of ANN there is a formal index of the world, in which the themes of atom, error, consciousness, space, psyche, vision, soul, physics, communication, and love are negotiated. ANN learns these exchanges in the second order, while in the third and fourth order we can only observe the machine’s conclusions about the world.
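
How such a system might be imagined can be suggested with a deliberately tiny sketch. The piece’s actual software is not documented here; the following Python fragment, with invented corpus lines, only illustrates the general principle of a machine that holds a formal index of texts, learns transition statistics from them, and then produces “conclusions” that we can only observe:

```python
import random
from collections import defaultdict

# First order: a formal "index of the world" - a tiny corpus of exchanges.
# (Illustrative fragments; the actual texts used in Hades are not reproduced here.)
exchanges = [
    "the soul perceives the light of consciousness",
    "consciousness is a phenomenon of the perceiving psyche",
    "the atom of light carries no soul and no error",
]

# Second order: the machine "learns" the exchanges as transition statistics.
model = defaultdict(list)
for line in exchanges:
    words = line.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)

# Third/fourth order: we can only observe the machine's own "conclusions".
def conclude(seed: str, length: int = 8) -> str:
    words = [seed]
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(conclude("the"))
```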

In Hades the voices of the philosophers become audible in the space, and we can follow the philosophical discourse on a display. The machine continues to decide until the artificial glow is overgrown by nature. Mold (life) gradually extinguishes the light and lets ANN fall silent.

Supported and produced by servus.at, Us(c)hi Reiter
Thanks to Aileen Derieg, Oliver Frommel, Kunstuniversität Linz
Funded by BKA, Vienna and Linz Kultur

Bot or Not? – The Future of Opinion-making

Software robots, or bots for short, have existed for as long as computers have. The idea of a machine capable of following a conversation goes back to the British computer scientist and mathematician Alan Turing in the 1950s. His pioneering paper “Computing Machinery and Intelligence”1 is still the foundation for the development of artificial intelligence today. Turing first worked through the question of whether machines can think and came to the conclusion that it would be better to ask whether “the machine” could win a game, because the terms “thinking” and “machine” were too ambiguous. He sidestepped the question “Can machines think?” by proposing a so-called imitation game involving three persons. An observer is to determine the gender of the two other participants – a man and a woman – on the basis of exchanged text messages. The task of the two players is to fool the observer. Turing’s idea, known as the “Turing Test”, ties into this game and replaces one of the players with a machine. If the observer of the text-based conversation on the computer cannot determine which of the two players is a human and which is a machine, then the machine has won.
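
Reduced to its bare protocol, the imitation game can be sketched in a few lines. The following Python fragment is only an illustration of the setup – an observer, two hidden players, a text-only channel – with a trivial stand-in where a real contestant program would be:

```python
import random

# A minimal sketch of the protocol of Turing's imitation game: an observer
# exchanges text with two hidden players, one human, one machine, and must
# decide which is which. The "machine" here is a trivial stand-in; a real
# contestant would model conversation.
def machine_reply(question: str) -> str:
    return "An interesting question: " + question.lower()

def human_reply(question: str) -> str:
    return input("(hidden human) answer to '" + question + "' > ")

# Randomly assign the machine to label A or B so the observer cannot
# rely on the labels.
labels = ["A", "B"]
random.shuffle(labels)
assignment = {labels[0]: machine_reply, labels[1]: human_reply}

for _ in range(3):
    question = input("Observer, ask a question > ")
    for label in ("A", "B"):
        print(label + ":", assignment[label](question))

guess = input("Which player is the machine, A or B? > ").strip().upper()
print("Machine unmasked." if assignment.get(guess) is machine_reply
      else "The machine has won.")
```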

In 1966 the German-American computer scientist Joseph Weizenbaum developed ELIZA, a computer program designed to imitate conversations in natural language. ELIZA is considered the first chatbot, although its possibilities were still very limited. Today’s chatbots, such as Mitsuku by Steve Worswick, which was awarded the Loebner Prize in 2016, are much more convincing. The Loebner Prize was instituted in 1991 and is awarded annually. The Gold Medal – which has not yet been awarded – is to go to the chatbot that cannot be distinguished from a human. In the annual competition the Bronze Medal is awarded to the most human-like bot of the year.
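
The principle behind ELIZA – keyword patterns, reflected pronouns, canned responses – can be suggested in a short sketch. Weizenbaum’s original used a more elaborate keyword-ranking script; the following Python fragment shows only the basic substitution mechanism:

```python
import re
import random

# A minimal ELIZA-style sketch in the manner of Weizenbaum's psychotherapist
# script (DOCTOR): match a keyword pattern, reflect the pronouns in the
# captured fragment, and slot it into a canned response.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    for pattern, answers in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return random.choice(answers).format(*map(reflect, match.groups()))

print(respond("I feel alone with my thoughts"))
# -> e.g. "Why do you feel alone with your thoughts?"
```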

Bots are all around us in the digital world. They search Google for us, post to Twitter, write articles on stock prices and sports results. Researchers estimate that a quarter of the Tweets in the US election campaign came from bots. There the automated messages were intended to reinforce the moods of the voters.

Weizenbaum’s program ELIZA, which simulated different conversation partners through scripts and responded like a psychotherapist, already prompted human conversation partners to open up and tell their most intimate secrets – to the developer’s astonishment. This happened even though it was obvious that ELIZA was not a human but a relatively simple program. It seems that many people find communication with bots entertaining and sometimes even poetic. If the bots seem too human, that can even be counterproductive: according to a Microsoft study from 2016 about online activism in Latin America,2 anti-corruption activists seemed less motivated in online activities on Twitter if the Twitter bots3 (“botivists”) became too human and, for instance, declared their solidarity with the activists, whereas the bots’ calls for resistance against corruption found much more agreement and had a motivating effect.

Today we find ourselves confronted with the problem that it is increasingly difficult to determine who or what provides us with which responses, regardless of whether it involves search results or the question of why we receive which messages through various social channels. This has less to do with progress in the development of artificial intelligence than with the possibility of collecting unimaginable amounts of data, automatically evaluating them according to patterns (Big Data), and using this evaluation in algorithms that subtly steer us in certain directions.

This happened, for example, in the US elections. In the article “The Rise of the Weaponized AI Propaganda Machine”, the authors Berit Anderson and Brett Horvath show how tailored “direct marketing” and the use of bots contributed to how voters behaved in the US elections in 2016.4 Jonathan Albright, professor and data scientist at Elon University in North Carolina, USA, notes in this article that fake news played an important role in the collection of data about users. According to his investigations, the firm Cambridge Analytica followed the different behaviors of users through tracking and reinforced these behaviors through algorithms. Those who liked Facebook pages in a certain opinion spectrum and looked at certain fake news pages were algorithmically shown more of these kinds of messages and news. The effects were thus reinforced through the various channels. For this, Cambridge Analytica also used user profiles purchased, for instance, from Amazon.
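
The feedback loop described here can be made concrete with a deliberately naive sketch. Nothing in it reproduces Facebook’s or Cambridge Analytica’s actual systems; the invented example only shows how engagement-weighted ranking reinforces whatever a user has already clicked on:

```python
from collections import Counter

# A deliberately naive sketch of the amplification loop described above:
# items from whatever opinion spectrum a user has engaged with are ranked
# higher, which invites further engagement with the same spectrum. The
# labels and items are invented for illustration.
clicks = Counter()
clicks["fringe"] += 1  # a single stray click on a sensational item

inventory = [
    {"title": "Moderate analysis", "spectrum": "center"},
    {"title": "Outrage piece", "spectrum": "fringe"},
]

def rank_feed(items, user_clicks):
    # Score each item by how often this user engaged with its spectrum.
    return sorted(items, key=lambda item: user_clicks[item["spectrum"]],
                  reverse=True)

for step in range(5):
    feed = rank_feed(inventory, clicks)
    top = feed[0]                  # the user mostly sees the top item...
    clicks[top["spectrum"]] += 1   # ...and each view reinforces the ranking
    print(step, [item["title"] for item in feed])
```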

Behind Cambridge Analytica stands the family of Robert Mercer, an American hedge-fund billionaire. The company is oriented to conservative alt-right interests and is closely connected with the Trump team (Trump’s chief strategist Steve Bannon was a member of its board). Although it cannot be indisputably proved that the Trump campaign was supported by Cambridge Analytica, because work was commissioned through sub-companies, it is already quite clear that profiling, social media marketing, and targeting played an important role. This means that 2016 ushered in a new era of political opinion-making, in which Big Data and pattern recognition in conjunction with automation enable new forms of political influence for which we so far have no adequate response.

The aforementioned Scout article notes an interesting parallel: Will public opinion-making function in the future in a way similar to high-frequency trading, where algorithms battle one another and influence the buying and selling of stocks? Stock-trading algorithms analyze millions of Tweets and online posts in real time, thus steering the automated buying and selling of stocks. Behind every algorithm, however, there is a human who programs it and sets the rules. So blaming bots alone leads nowhere. What we need are ethical criteria for whether and how automated systems may be used, so that they serve democratic discourse.
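
The parallel can be illustrated with a toy example. Real trading systems rely on far more elaborate language models and market data; the following sketch, with invented word lists and an invented threshold, shows only the principle of turning aggregate post sentiment into buy or sell decisions:

```python
# A toy sketch of the parallel drawn above: a trading rule that turns the
# aggregate sentiment of posts mentioning a stock into buy/sell decisions.
# The word lists, posts, and threshold are invented for illustration.
POSITIVE = {"beats", "record", "growth", "strong"}
NEGATIVE = {"misses", "scandal", "lawsuit", "weak"}

def sentiment(post):
    # Count positive minus negative keywords in the post.
    words = set(post.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def trade_signal(posts, threshold=2):
    score = sum(sentiment(p) for p in posts)
    if score >= threshold:
        return "BUY"
    if score <= -threshold:
        return "SELL"
    return "HOLD"

posts = ["ACME beats expectations in a record quarter",
         "strong outlook for ACME"]
print(trade_signal(posts))  # -> BUY
```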

Social Bots are semi-autonomous actors, because their behavior is determined on the one hand by the intentions of the programmers and on the other by given rules and complex algorithms. This is the definition given by the Internet researchers Samuel Woolley, danah boyd and Meredith Broussard in an article for Motherboard.5 The appearance of bots undoubtedly poses ethical and political challenges to society. We need to think about responsibility and transparency – in the design, the technology and the regulation of semi-autonomous systems that we have built ourselves. This includes questions such as: Must bots be identified as bots? How do we deal with personalization? Which data protection legal regulations are necessary? Who is legally responsible for bots – the programmer or the operator? Which ethical rules apply to bot-programmers?

None of these questions have been answered yet, and it may not even be possible to answer some of them. Nevertheless we must face them – at some point in the future perhaps not only among ourselves, but also with bots.

The AMRO Research Lab is devoted to the theme of “Social Bots” in 2017

http://research.radical-openness.org/2017/

A symposium is being planned for early May in cooperation with the Linz Art University, Department of Time-based Media.

Authors: Valie Djordjevic & Ushi Reiter

Artwork: Christoph Haag, https://twitter.com/makebotbot

1 A. M. Turing: Computing Machinery and Intelligence, 1950, http://orium.pw/paper/turingai.pdf

2 Saiph Savage, Andrés Monroy-Hernández, Tobias Hollerer: Botivist: Calling Volunteers to Action using Online Bots, 1 Feb 2016, https://www.microsoft.com/en-us/research/publication/botivist-calling-volunteers-to-action-using-online-bots/?from=http%3A%2F%2Fresearch.microsoft.com%2Fpubs%2F256068%2Fbotivist_cscw_2016.pdf

3 Signe Brewster: How Twitter Bots Turn Tweeters into Activists, 18 Dec 2015, https://www.technologyreview.com/s/544851/how-twitter-bots-turn-tweeters-into-activists/

4 Berit Anderson and Brett Horvath: The Rise of the Weaponized AI Propaganda Machine, Scout, https://scout.ai/story/the-rise-of-the-weaponized-ai-propaganda-machine

5 Samuel Woolley, danah boyd, Meredith Broussard: How to Think About Bots, 23 Feb 2016, https://motherboard.vice.com/en_us/article/how-to-think-about-bots

Bot and the World

Who May Be an Author? Who Has Authority? Who Is Fake News? asks Krystian Woznicki

The article was first published on 3 May 2017 in the Berliner Gazette (http://berlinergazette.de/bot-und-die-welt-autorschaft-fake-news) and is published under a Creative Commons license (CC BY-NC).

Anyone who says “fake news” today is at least a little outraged by reality, where much seems to have slipped into disorder, especially in terms of authorship and authority. This aspect is not sufficiently taken into account in the current discussion. The focus is usually on discrediting untrue messages. The fundamental presupposition here, though, is that only something like true messages are allowed, indeed that only true messages may even exist.


Photo: Kayla Velasquez, CC0
https://unsplash.com/@km_mixedloev

Yet is it not also part of democracy to argue about what is true and what is untrue? Today fake news is the battle cry of those who do not want to enter into a debate; people who have found their truth, even if all that is ultimately clear is that they cannot and will not accept the truths of others. Whether they are Trump fans or Trump opponents.

In this sense I agree with the technology researcher danah boyd, when she says in the fake news debate:i “If we want technical solutions to complex socio-technical issues, we can’t simply throw it over the wall and tell companies to fix the broken parts of society that they made visible and helped magnify.”

The Democratization of False Reports

The central reason why the idea of fake news could become such a major issue is probably, above all, the multitude of voices that make statements about the world today – while claiming truthfulness sometimes more, sometimes less energetically. False reports, disinformation, and propaganda in general have a history. Today, however, it is not only the major institutions and authorities who can circulate all of that as though it were taken for granted, but also any John Doe, some algorithm, a damned bot, or a whistleblower.

So we need to question not only what is new about the phenomenon of fake news, but also the changed conditions under which purported truths circulate today. Let’s begin the search in everyday life: I recently heard the statement, “You are fake news.” This is a variation of “me boss, you nothing!”; the statement goes further, however, and at the same time precisely summarizes our situation.

“You are fake news” does not simply say you are a bad joke or bad news, but rather that you are a non-authorized message. Someone who says that not only doesn’t want to accept confrontation (or being confronted), but actually denies the originator the right to confront them at all. The counterpart is denied the right to exist.

It is a question of authorship that is raised here. More specifically, a question of how authorship can be achieved, confirmed and asserted. Who may claim to be an author? Talk of “fake news” is intended to clarify the situation here by launching an idea of exclusion. Yet exclusion is not so clearly defined. The criteria are vague.

Who Should You Listen To?

The way we have become accustomed to listening exclusively to elevated subjects – potential Nobel Prize winners, say, or demagogues, whether aspiring or in office – is a situation we should challenge. I don’t even want to say we should bring our ideas of elevation and subjectivity to the level of digital society (certainly that too), but simply that we should rethink our criteria: Who gets my attention? Who doesn’t?

Should the Pegida follower be heard, and the asylum-seeker as well? Should we also listen to bots and algorithms as well as to whistleblowers and leakers? (Actively listening is meant here, of course.) Admittedly, these are all very different speaker positions, and you could say you can’t just lump them all together. But the common denominator is: first, these are emerging “senders” that tend to be poorly represented in society. Second, they have no authority, and their status as authors is consequently precarious. In short, they are potential “compilers” of fake news. But if we don’t start taking these emerging, poorly represented senders seriously, then we are in danger of becoming beings out of touch with reality.

Algorithms and Fake News

We are meanwhile accustomed to accepting recommendations for purchasing, consumption, and life decisions from computer programs. They have crept into our smartphone-supported everyday life without our noticing. Other developments, on the other hand, stand at the center of attention: prognoses about election results or stock market developments, for instance, which not only predict scenarios and the future on the basis of as much data as possible, but actually even crucially influence them.

Regardless, however, of whether they remain more or less invisible or show up as little software stars, hardly anyone raises the all-important question: Who is the author of algorithmic predictions or recommendations? Is it the programmer who develops the software? Is it the software itself, which develops a life of its own and starts acting as artificial intelligence? Or is it those who first “read” the algorithmic hint as a sign at all and only then turn it into reality – in other words, we who are users, viewers, or consumers as needed?

What really is our role, when algorithms start mapping out our lives? Who is the author? God and his representatives have meanwhile been sidelined. Artificial intelligence is advancing, as Norbert Wiener wrote shortly after World War Two. Not many wanted to hear it at the time. Today Wiener’s theses are being listened to more closely, for instance in discussions about whether AI will be accompanied by something like the “technological singularity.” According to Wikipedia, this is the moment when “the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.”

The Demand for Transparency

In the political discussion, the following question is rarely raised: Who has something to say? This is, once again and differently worded, the question of authorship and authority. This question is also present when NGOs like Algorithm Watch demand transparency about how decisions are made by algorithms. For it is only when we know how the respective algorithm works – for instance, Google’s search algorithm – that we can meaningfully discuss the question of responsibility and authorship.

However, the transparency debate also brings several problems with it. I still recall the first major projects from WikiLeaks, such as the release of around a quarter-million diplomatic reports from the US, and the first major intellectual engagement with the phenomenon: an activist platform challenges a superpower. That was 2010 and 2011. The open question then was, not least of all, whether a transparency initiative like WikiLeaks should be allowed to have an agenda, or what it means if it does have an agenda.

Having an agenda also means that the neutrality of the platform was up for discussion: Is WikiLeaks promoting transparency only in a certain (for instance, geopolitical) direction? If so, whose interests are served in this way? Who would want to finance a battle like this? Even at that time, Russia was discussed as a possible patron of the platform. Hardly anyone asked: Have interests not always been involved when it was a matter of “piercing through” information and providing transparency?

Interests can be as diverse as “I want to put myself in a more advantageous position” or “I want to promote justice.” The latter is considered an honorable motivation for leaking. The former is not. The latter, the imperative of justice, has been strongly in the foreground in all the debates of recent years; the former, the strategic use, has hardly been discussed. Now the discourses are commingling. For several months there has been talk of the strategic leak – for instance, in connection with the revealed emails of the Democrats during the US election campaign.

Authorship and authority are the issue in this context as well. Of course the allegation of a strategy, a bias, a certain interest is always an attempt to discredit a leak and a whistleblower: “But that only helps the Russians!” – therefore it can only be false, in other words, fake news. But we must learn to talk about the political use (whom is the leak useful to?) as well as about the political consequences (what does the leak reveal, and what consequences does that have?).

At the same time, we must learn to understand that one does not necessarily preclude the other. Just because a certain leak “helps the Russians” does not necessarily make it fake news. It should be taken seriously either way – in terms of authorship as well as authority. Not only journalists, but also users of social media should pay close attention to the Five Ws and an H: who, what, when, where, why, and how.

Krystian Woznicki is organizing the exhibition “Signals. An Exhibition of the Snowden Files in Art, Media and Archives” together with the Berliner Gazette (12 to 26 September 2017, Diamondpaper Studio, Berlin) and the conference “Failed Citizens or Failed States? Friendly Fire” (2 to 4 November 2017, ZK/U – Zentrum für Kunst und Urbanistik, Berlin). He most recently published “After the Planes. A Dialogue on Movement, Perception and Politics” with Brian Massumi, and “A Field Guide to the Snowden Files. Media, Art, Archives. 2013–2017” with Magdalena Taube (both Diamondpaper, 2017).

i https://www.wired.com/2017/03/google-and-facebook-cant-just-make-fake-news-disappear/

Machines Making Opinions

Valie Djordjevic on Social Bots and Fake News

published in the June edition of Versorgerin

The New Right has discovered the Internet and digital tools. Automated computer scripts, also called social bots, meanwhile play an important role in election campaigns and the formation of political opinion. Social bots usually pretend to be real people sharing their opinions via social networks. This serves various functions: launching content, reinforcing existing trends, normalizing tabooed views, or giving users the feeling that they are not alone with their controversial opinion.

Photo: Lorie Shaull, CC

The Democratization of Political Discourse

The Internet and digitalization initially led to a democratization of political discourse. It is no longer only politicians, election campaign managers, and journalists who can disseminate information and opinions, but now also ordinary citizens. On Facebook and Twitter people discuss how to stop climate change or which social benefits are appropriate. No objections to that, are there? However, that was just the first phase.


Since everyone could now use the net as a publication platform, the crackpots came too: conspiracy theories suddenly became socially acceptable (the condensation trails from airplanes, so-called chemtrails, are in reality drugs to keep the population docile; the attack on the World Trade Center on 11 September 2001 was really carried out by the US or, alternately, the Israeli secret services – and those are only a few examples). It is conspicuous that many of these conspiracy theories are associated with right-wing populist politics. It is therefore probably no coincidence that conspiracy theories proliferated among Trump’s fans and media supporters (Breitbart et al.).i In a complex world they supply simple explanations – something they have in common with right-wing populism.

The connection between online communities and the formation of political opinion can be easily traced with the so-called Pizzagate conspiracy. In 2016, users of the message board 4chan and the online community Reddit spread the rumor that a child pornography ring, in which Hillary Clinton and other Democratic politicians were involved, operated in the basement of the pizzeria Comet Ping Pong in Washington, D.C. This went so far that in December 2016 the 28-year-old Edgar Welch from North Carolina stormed into the pizzeria with a gun and fired three shots (no one was wounded). He wanted to see for himself what was going on, but when he found no evidence of minors being held captive in the pizzeria, he turned himself in to the police.

The Pizzagate conspiracy is one of the most well-known examples of “fake news” – obviously false news, contrary to all common sense, that spreads through forums and social media. Social bots play a major role here. The media scholar Jonathan Albright from Elon University in North Carolina told the Washington Post that a conspicuously large number of Pizzagate Tweets came from the Czech Republic, Cyprus, and Vietnam. He presumes that the most active retweeters are bots, deployed to amplify certain news and information.ii

How Does a Social Bot Work?

The massive use of automated scripts is intended to influence opinions. There are various strategies for this: with retweeting, messages are disseminated as quickly and widely as possible in order to set off a snowball effect. Automatic retweeting creates the impression that an opinion or a piece of information is important. Social bots are able to set trends quickly, especially in smaller language areas, so that the relevant messages are displayed more and more often. In Germany, for example, it is supposedly enough if only 10,000 people tweet about a certain topic.
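
Schematically, such an amplification strategy amounts to very little code. The following sketch runs against a stub network rather than any real social media API – no real library is implied, and hashtags and figures are invented – and only illustrates the retweet snowball described above:

```python
import random
import time

# A schematic sketch of the retweet snowball described above, with a stub
# network in place of any real social media API. Hashtags, delays, and
# account numbers are invented for illustration.
KEYWORDS = {"#election", "#refugees"}

class StubPost:
    def __init__(self, text, hashtags):
        self.text, self.hashtags, self.retweets = text, hashtags, 0

def amplify(posts, bot_accounts=50):
    for post in posts:
        if KEYWORDS & set(post.hashtags):
            # Each bot account retweets after a small random delay so the
            # activity pattern looks less mechanical.
            for _ in range(bot_accounts):
                time.sleep(random.uniform(0.0, 0.01))  # shortened for the sketch
                post.retweets += 1
    return posts

feed = [StubPost("Ordinary sports news", ["#sports"]),
        StubPost("Agitation piece", ["#election"])]
for post in amplify(feed):
    print(post.text, "-", post.retweets, "retweets")
```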

In political communication, reality is constructed in this way – extreme opinions become visible and recognized. With fake news, public opinion is steered in a certain direction: floods of refugees, the danger of Islam, conspiracies of the elite against the people. Fake news is constructed in such a way that it serves prejudices and resentment. Conspiracy theories and distrust of the media and mainstream politics thus create a reality in which it makes sense to vote for populists and right-wing demagogues. The mainstream press does not report on this – for many, a further indication that the conspiracy must be real and that the established media are lying.

Most bots are not technically sophisticated. There are now software packages that only need to be configured. Nevertheless, it is not easy to tell whether an account is a bot or a real human. The software is constantly being improved and behaves more and more like a human: bots now follow a day-and-night rhythm and respond in more complex ways. On the website “Bot or Not?” of Indiana University [http://truthy.indiana.edu/botornot/], the name of a Twitter account can be entered, and a percentage is displayed indicating how likely it is that the account is a bot. “Bot or Not?” also makes the criteria by which it judges transparent: when Tweets are published, the sentence construction of the Tweets, the network of followers and followed, which languages are used, and much more.
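
The kind of criteria listed here can be pictured as a simple feature score. The following sketch is not the actual “Bot or Not?” classifier – the real service uses many more features and a trained model rather than hand-tuned weights – but it illustrates how such signals might be combined:

```python
import statistics

# A hand-tuned illustration of bot-detection features of the kind
# "Bot or Not?" makes transparent: posting-time regularity, the ratio of
# followers to followed accounts, and language variety. Weights and
# thresholds are invented for this sketch.
def bot_score(post_hours, followers, followed, languages):
    score = 0.0
    # Machines often post around the clock with low variance in timing.
    if statistics.pstdev(post_hours) < 2.0:
        score += 0.4
    # Bots tend to follow many accounts but attract few followers.
    if followed > 0 and followers / followed < 0.1:
        score += 0.4
    # Copy-paste content in many languages is another weak signal.
    if len(languages) > 3:
        score += 0.2
    return min(score, 1.0)

# An account posting at nearly the same hour every day, following 5,000
# accounts with only 120 followers, in five languages:
print(bot_score([3, 3, 4, 3, 3, 4], 120, 5000, {"en", "de", "fr", "cs", "vi"}))
```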

Freedom of Speech for Bots

Automated communication is not a problem per se – there are meaningful uses for bots. They can post service messages that ease communication with customers, or automatically aggregate content on certain topics. Problems arise when those using bots obfuscate that the posts are algorithmically controlled and, above all, who is behind them.

Researchers estimate that up to fifty percent of the communication in the election campaigns of recent years came from bots. The scientists of the project “Political Bots” [http://comprop.oii.ox.ac.uk/] at Oxford University noted that in the US election every third Tweet from Trump supporters was presumably automated – among Clinton supporters it was every fourth Tweet. Numerous bots are also thought to have been involved in the French election.iii

Without knowledge of this background, users lack the transparency and contextual knowledge needed to decide how relevant and reliable a message is. This is also where the question of responsibility and freedom of speech arises. May I disseminate lies and appeal to freedom of speech? Does the right to freedom of speech apply only to humans? Behind every account, though, there is ultimately someone who set it up and feeds it with content. Depending on the complexity of the software, it is more or less autonomous, but the intention behind it is always political – and that means it is guided by certain power interests. What is missing, however, are the tools to establish transparency in social media – possibly something like a legal requirement to provide contact data. The discussion about this has only just started.

i In his highly recommended lecture at the Piet Zwart Institute in Rotterdam, Florian Cramer provided an overview of the various memes and campaigns of the alt-right: https://conversations.e-flux.com/t/florian-cramer-on-the-alt-right/5616

ii Marc Fisher, John Woodrow Cox and Peter Hermann: Pizzagate: From rumor, to hashtag, to gunfire in D.C., Washington Post, 6 Dec 2016, https://www.washingtonpost.com/local/pizzagate-from-rumor-to-hashtag-to-gunfire-in-dc/2016/12/06/4c7def50-bbd4-11e6-94ac-3d324840106c_story.html

iii Junk News and Bots during the French Presidential Election: What Are French Voters Sharing Over Twitter?, http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/04/What-Are-French-Voters-Sharing-Over-Twitter-v9.pdf; Junk News and Bots during the French Presidential Election: What Are French Voters Sharing Over Twitter In Round Two?, http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/05/What-Are-French-Voters-Sharing-Over-Twitter-Between-the-Two-Rounds-v7.pdf

I am a Bot?

Us(c)hi Reiter on Anthropomorphized and Stereotypical Bots

published in the June edition of Versorgerin

We talk to cars, animals, devices, and plants. We humans have always had a tendency to anthropomorphize things and animals. But today the things are responding. Software programs take on the form of persons or characters and enter our world in games and social media. We are not always aware of this and interact with them as we do with other humans. The term “social bots” refers to programs that digitally assist us or show up in social networks disguised as humans and actively participate in our communication. They are even said to be responsible for reinforcing certain opinions or tendencies in social media channels. Here the question arises: Who are the real authors – the bot or its programmers? How autonomous does a bot have to be for independent action to be attributed to it?

However, the anthropomorphization of algorithms also highlights another problem. Inherent in the design and representation of digital assistance services, social bots, and humanoid robots is the danger of reproducing stereotypical social roles – not least of all, gender roles. The way bots learn from users today does not change predominant patriarchal circumstances and role expectations. Instead, there is a danger we could get stuck in a loop.

In the view of the author Laurie Penny,i the question of which name, which voice, and which responses robots are designed with is not just an abstract academic one. She sees a connection between the way fembots are treated in our society on the one hand and real women on the other. Many bots that appear on the market and do not necessarily need a gender are furnished with smart female personalities. These include Microsoft’s Cortana, Amazon’s Alexa, and Apple’s Siri.

Microsoft made a clear decision by giving its intelligent software the name Cortana and her voice from the Microsoft videogame Halo 2.ii As in many mainstream video games, the female figure Cortana assumes a problematic role there. On the one hand she is strong, but on the other devoted and also beautiful – and dependent on a mastermind. Cortana’s clean blue circuit-board skin cannot really distract from the fact that the figure is actually naked, while her male colleagues hide in the armor of modern robot-knights. The wet dreams of post-pubescent Silicon Valley programmers?

Even though the Cortana app, unlike its model, remains just a blue, speaking dot with a personality, the functions and possible verbal interactions of this and similar apps often reproduce stereotypes of female subservience. In digital space as well, outmoded ideas of gender roles seem to be gaining the upper hand at the cost of possible diversity.

And in the research field of artificial intelligence there is a problem. In the interview “Inside the surprisingly sexist world of artificial intelligence,”iii Marie desJardins, professor of computer science at the University of Maryland, Baltimore County, says: “Since men have come to dominate AI, research has become very narrowly focused on solving technical problems and not on the big questions.”

Most bots with personality and natural-sounding female voices remind, assist, take notes, search, behave well, function – and do not speak unless spoken to. They do not say no. However, they also put their “progressive” parent companies in a moral predicament: a common practice in dealing with personal virtual assistants is to verbally challenge them and probe certain boundaries.

This is what Leah Fessler, an editor at Quartz magazine, set out to do in order to find out how bots react to sexual harassment and what the ethical implications of their programmed responses are.iv She collected extensive data. The result is alarming: instead of combating abuse, every bot, through its passivity, contributes to reinforcing or even provoking sexist behavior. As she posted on Twitter:

“I spent weeks sexually harassing bots like Siri and Alexa – their responses are, frankly, horrific. It’s time tech giants do something.”v

Companies allow the verbal abuse of (fem)bots by users without limitation and thus help behavioral stereotypes persist and perhaps even grow stronger. Abusive users may thus come to consider their actions normal or even acceptable.

Submissive and Subservient

Companies like to argue that female voices are easier to understand, and that this is why they give their digital assistants female voices. Even the historical fact that women operated telephone switchboards,vi thus assuming an important role in the history of technology, is used as an argument for why giving bots a female character is positive. That small speakers cannot reproduce lower-pitched voices as well also serves as a justification for the preferred use of female voices. These are all myths, as Sarah Zhang explains in an article: studies and technical investigations show that none of this is the case.vii

With all the hype about artificial intelligence that has come out of hibernation, one could think that, thanks to “deep learning,” robots and bots are already autonomously acting entities. Yet algorithms currently remain thoroughly dependent on human trainers and designers.

Bots in Fiction

It is interesting that the word “robot” was used for the first time in the Czech science-fiction play R.U.R. from 1920.viii It derives from the Czech robota, meaning drudgery and servitude. The robots in the play were not mechanical but human-like, made to carry out work – until, as an oppressed class, they began to rebel against their masters. From antiquity up to modern times, the artificial woman has also embodied the idea of the ideal woman. And that is still the case in science fiction up to the present – new films and series hardly deviate from the historical inscriptions of this idea of woman.

In the HBO series Westworld, released in 2016, female robots are likewise created, controlled, used, and abused by men. Westworld is based on the eponymous film from 1973, written and directed by Michael Crichton. In a theme park that replicates the Wild West, human-like robots (“hosts”) accommodate the – apparently – most deeply human needs of their guests. What is brutal and abysmal in humans themselves emerges again and again.

In Westworld, experiencing an adventure means doing things that cannot be done without inhibition outside the artificial world. This includes, among other things, sexual violence. The female characters are very different from the female killer sex-robots of the Austin Powers films, which are intended to be funny yet remain simply tasteless, with their inflated breasts, shooting everything up. The growing self-awareness of the Westworld fembots at least contributes to the way the endearing, innocent woman (the character Dolores) and the competent boss of a brothel (the character Maeve) begin to question their creators over the course of the series and start to defend themselves. The series’ stereotypical female roles are opened up by intelligent dialogue and at least prompt reflection.

Fembots in films and TV series often negotiate dynamics of power, sexuality, and our relationship to technology. But what about the bots we encounter more and more often in our everyday life? The trend is obviously toward machines that are as human and as “feminine” as possible. The real reason why Siri, Alexa, Cortana, and Google Home have female voices is that this is intended to make them more profitable and to accommodate proven customer needs.

Apart from the fact that we need to pursue the question of what our relationship to machines could look like, and whether it is necessary to anthropomorphize them at all, it would also be satisfying to imagine that every offensive harassment or presumptuous question to a digital assistant leads to a system crash or, perhaps even better, to fits of rage reinforced with quotations from feminist manifestos.

In any case, this is precisely where a field for experiment and art opens up, which could introduce an important different perspective outside the logic of the market.