Bot or Not? – The Future of Opinion-making

Software robots, or bots for short, have existed for as long as computers have. The idea of a bot capable of following a conversation goes back to the British computer scientist and mathematician Alan Turing in the 1950s. His pioneering paper “Computing Machinery and Intelligence”1 is still the foundation for the development of artificial intelligence today. Turing first worked through the question of whether machines can think and concluded that it would be better to ask whether “the machine” could win a game, because the terms “thinking” and “machine” were too ambiguous. He sidestepped the question “Can machines think?” by proposing a so-called imitation game involving three people. An observer has to determine the gender of the two participants – a man and a woman – on the basis of exchanged text messages, while the task of the two players is to fool the observer. Turing’s idea, known as the “Turing Test”, builds on this game and replaces one of the players with a machine. If the observer of the text-based conversation on the computer cannot determine which of the two players is a human and which is a machine, the machine has won.

In 1966 the German-American computer scientist Joseph Weizenbaum developed ELIZA, a computer program designed to imitate conversations in natural language. ELIZA is considered the first chatbot, although its capabilities were still very limited. Today’s chatbots, such as Steve Worswick’s Mitsuku, which won the Loebner Prize in 2016, are much more convincing. The Loebner Prize was instituted in 1991 and is awarded annually: the Gold Medal – which has not yet been awarded – is reserved for a chatbot that cannot be distinguished from a human, while the Bronze Medal goes to the most human-like bot of the year.

Bots are all around us in the digital world. They search Google for us, post to Twitter, and write articles on stock prices and sports results. Researchers estimate that a quarter of the Tweets in the US election campaign came from bots – automated messages intended to reinforce the moods of the voters.

Weizenbaum’s program ELIZA, which could simulate different conversation partners through scripts and responded like a psychotherapist, astonished its developer: human conversation partners opened up to it and told it their most intimate secrets, even though it was obvious that ELIZA was not a human but a relatively simple program. It seems that many people find communication with bots entertaining and sometimes even poetic. If bots seem too human, however, that can even be counterproductive: according to a Microsoft study from 2016 about online activism in Latin America,2 anti-corruption activists were less motivated in their online activities on Twitter when the Twitter bots3 (“botivists”) became too human, for instance by declaring their solidarity with the activists, whereas the bots’ direct calls for resistance against corruption met with much more agreement and had a motivating effect.
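How little machinery such openness actually required can be seen from ELIZA itself: the program rests on scripted keyword matching and canned reflections. The following is a minimal sketch in Python, with hypothetical rules rather than Weizenbaum’s original DOCTOR script:

```python
import random
import re

# Hypothetical, minimal ELIZA-style rules: each regex maps a keyword
# pattern in the user's input to canned, therapist-like reflections.
# Weizenbaum's original DOCTOR script was richer, but worked on the
# same principle of pattern matching and substitution.
RULES = [
    (re.compile(r"\bi need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]


def respond(utterance: str) -> str:
    """Return a reflection for the first matching rule, or a generic prompt."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)


print(respond("I am feeling lonely"))  # e.g. "How long have you been feeling lonely?"
```

A handful of such rules already produces the impression of a listener – which is exactly the effect that surprised Weizenbaum.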

Today we find ourselves confronted with the problem that it is increasingly difficult to determine who or what provides us with which responses – whether it is a matter of search results or of the question why we receive which messages through various social channels. This has less to do with progress in the development of artificial intelligence than with the possibility of collecting unimaginable amounts of data, automatically evaluating them for patterns (Big Data), and using this evaluation in algorithms that subtly steer us in certain directions.

This happened, for example, in the US elections. In the article “The Rise of the Weaponized AI Propaganda Machine”, the authors Berit Anderson and Brett Horvath show how tailored “direct marketing” and the use of bots influenced how voters behaved in the US elections in 2016.4 Jonathan Albright, professor and data scientist at Elon University in North Carolina, USA, notes in the article that fake news played an important role in the collection of data about users. According to his investigations, the firm Cambridge Analytica followed the different behaviors of users through user tracking and reinforced these behaviors through algorithms: those who liked Facebook pages within a certain opinion spectrum and looked at certain fake news pages were algorithmically shown more messages and news of that kind. The effects were thus reinforced across the various channels. For this, Cambridge Analytica also used user profiles purchased, for instance, from Amazon.
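To make this dynamic concrete, here is a deliberately simplified sketch of such an engagement-driven feedback loop, with invented functions and topic labels – it illustrates the principle described above, not Cambridge Analytica’s actual system:

```python
from collections import defaultdict

# Toy illustration of an engagement-driven feedback loop (hypothetical
# functions; not Cambridge Analytica's actual system): every click on a
# topic raises that topic's weight in the user's profile, and the feed
# is then re-ranked so that similar content appears first.
def update_profile(profile: defaultdict, clicked_topic: str, boost: float = 1.0) -> None:
    """Record an engagement by increasing the weight of the clicked topic."""
    profile[clicked_topic] += boost


def rank_feed(profile: defaultdict, candidate_posts: list) -> list:
    """Order candidate posts by the user's accumulated topic weights."""
    return sorted(candidate_posts, key=lambda post: profile[post["topic"]], reverse=True)


profile = defaultdict(float)
posts = [
    {"id": 1, "topic": "immigration"},
    {"id": 2, "topic": "sports"},
    {"id": 3, "topic": "immigration"},
]

update_profile(profile, "immigration")               # one click on an immigration story ...
print([p["id"] for p in rank_feed(profile, posts)])  # ... and similar posts move up: [1, 3, 2]
```

Each engagement shifts the profile, the shifted profile changes the ranking, and the changed ranking invites further engagement of the same kind – the reinforcement effect described above.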

Behind Cambridge Analytica stands the family of Robert Mercer, an American hedge-fund billionaire. The company is aligned with conservative alt-right interests and is closely connected with the Trump team (Trump’s chief strategist Steve Bannon sits on its board). Although it cannot be indisputably proved that the Trump campaign was supported by Cambridge Analytica, because the work was commissioned through sub-companies, it is already quite clear that profiling, social media marketing and targeting played an important role. In 2016, then, a new era of political opinion-making was ushered in, in which Big Data and pattern recognition, in conjunction with automation, enable new forms of political influence for which we so far have no adequate response.

The aforementioned Scout article notes an interesting parallel: will public opinion-making function in the future in a way similar to high-frequency trading, where algorithms battle one another and influence the buying and selling of stocks? Stock-trading algorithms analyze millions of Tweets and online posts in real time and thus steer the automated buying and selling of stocks. Behind every algorithm, however, there is a human who programs it and sets the rules. So blaming the bots alone leads nowhere. What we need are ethical criteria for whether and how automated systems may be used, so that they serve a democratic discourse.
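For illustration, the trading side of that parallel can be sketched in a few lines – the word lists and thresholds below are invented for this example, and real systems are far more sophisticated:

```python
# A deliberately simplified sketch of sentiment-driven trading signals,
# assuming a naive word-list scoring of posts; real systems use far more
# sophisticated models, data sources and risk controls.
POSITIVE = {"beat", "growth", "record", "strong"}
NEGATIVE = {"miss", "lawsuit", "recall", "weak"}


def sentiment(posts: list) -> float:
    """Average sentiment of a batch of posts: +1 per positive word, -1 per negative."""
    score = 0
    for text in posts:
        words = set(text.lower().split())
        score += len(words & POSITIVE) - len(words & NEGATIVE)
    return score / max(len(posts), 1)


def signal(posts: list, threshold: float = 0.5) -> str:
    """Map aggregated sentiment to a crude buy/hold/sell decision."""
    s = sentiment(posts)
    if s > threshold:
        return "buy"
    if s < -threshold:
        return "sell"
    return "hold"


print(signal(["Record growth this quarter", "Strong earnings beat expectations"]))  # "buy"
```

Even in this toy version a human chooses the word lists and the thresholds – which is precisely why blaming the bots alone leads nowhere.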

Social bots are semi-autonomous actors: their behavior is determined on the one hand by the intentions of their programmers and on the other by given rules and complex algorithms. This is the definition given by the internet researchers Samuel Woolley, danah boyd and Meredith Broussard in an article for Motherboard.5 The appearance of bots undoubtedly poses ethical and political challenges to society. We need to think about responsibility and transparency – in the design, the technology and the regulation of the semi-autonomous systems we have built ourselves. This includes questions such as: Must bots be identified as bots? How do we deal with personalization? Which data protection regulations are necessary? Who is legally responsible for a bot – the programmer or the operator? Which ethical rules apply to bot programmers?

None of these questions have been answered yet, and it may not even be possible to answer some of them. Nevertheless we must face them – at some point in the future perhaps not only among ourselves, but also with bots.

The AMRO Research Lab is devoted to the theme of “Social Bots” in 2017:

http://research.radical-openness.org/2017/

A symposium is being planned for early May in cooperation with the Linz Art University, Department of Time-based Media.

Authors: Valie Djordjevic & Ushi Reiter

Artwork: Christoph Haag, https://twitter.com/makebotbot

1 A. M. Turing: Computing Machinery and Intelligence, 1950, http://orium.pw/paper/turingai.pdf

2 Saiph Savage, Andrés Monroy-Hernández, Tobias Hollerer: Botivist: Calling Volunteers to Action using Online Bots, 1 Feb 2016, https://www.microsoft.com/en-us/research/publication/botivist-calling-volunteers-to-action-using-online-bots/?from=http%3A%2F%2Fresearch.microsoft.com%2Fpubs%2F256068%2Fbotivist_cscw_2016.pdf

3 Signe Brewster: How Twitter Bots Turn Tweeters into Activists, 18 Dec 2015, https://www.technologyreview.com/s/544851/how-twitter-bots-turn-tweeters-into-activists/

4 Berit Anderson and Brett Horvath: The Rise of the Weaponized AI Propaganda Machine, Scout, https://scout.ai/story/the-rise-of-the-weaponized-ai-propaganda-machine

5 Samuel Woolley, danah boyd, Meredith Broussard: How to Think About Bots, 23 Feb 2016, https://motherboard.vice.com/en_us/article/how-to-think-about-bots