Machines Making Opinions

Valie Djordjevic on Social Bots and Fake News

published in the June edition of Versorgerin

The New Right has discovered the Internet and digital tools. Automated computer scripts, known as social bots, now play an important role in election campaigns and in shaping political opinion. Social bots usually pretend to be real people sharing their opinions on social networks. This serves various functions: launching content, reinforcing existing trends, normalizing taboo views, or giving users the feeling that they are not alone with their controversial opinion.

Photo: Lorie Shaull, CC

The Democratization of Political Discourse

The Internet and digitalization initially led to a democratization of political discourse. It is no longer only politicians, election campaign managers, and journalists who can disseminate information and opinions, but now also ordinary citizens. On Facebook and Twitter people discuss how to stop climate change or which social benefits are appropriate. No objections to that, are there? However, that was just the first phase.


Since everyone could now use the net as a publication platform, the crackpots came too: conspiracy theories suddenly became socially acceptable (the condensation trails left by airplanes, so-called chemtrails, are really chemicals meant to keep the population docile; the attack on the World Trade Center on 11 September 2001 was actually carried out by the US or the Israeli secret service, depending on the version – and these are only a few examples). It is conspicuous that many of these conspiracy theories are tied to right-wing populist politics. It is therefore probably no coincidence that conspiracy theories proliferated among Trump's fans and supportive media (Breitbart et al.).i In a complex world they supply simple explanations – something they have in common with right-wing populism.

The connection between online communities and the formation of political opinion can easily be traced with the so-called Pizzagate conspiracy. In 2016, users of the message board 4chan and the online community Reddit spread the rumor that a child pornography ring was operating in the basement of the pizzeria Comet Ping Pong in Washington D.C., and that Hillary Clinton and other Democratic politicians were involved. This went so far that in December 2016 the 28-year-old Edgar Welch from North Carolina stormed the pizzeria with a gun and fired three shots (no one was injured). He wanted to see for himself what was going on, but when he found no evidence of minors being held captive in the pizzeria, he turned himself in to the police.

The Pizzagate conspiracy is one of the best-known examples of "fake news" – patently false reports that defy all common sense yet spread through forums and social media. Social bots play a major role here. The media scholar Jonathan Albright of Elon University in North Carolina told the Washington Post that a conspicuous number of Pizzagate tweets came from the Czech Republic, Cyprus, and Vietnam. He presumes that the most active retweeters are bots meant to amplify certain news and information.ii

How Does a Social Bot Work?

The massive use of automated scripts is intended to influence opinion. There are various strategies for this: through retweeting, messages are disseminated as quickly as possible to trigger a snowball effect. Automatic retweeting creates the impression that an opinion or piece of information is important. Social bots can set trends quickly, especially in smaller language areas, so that the relevant messages are displayed more prominently. In Germany, for example, it supposedly takes only about 10,000 people tweeting about a certain topic to make it trend.
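The amplification mechanism described above can be sketched in a few lines of code. The following is a purely illustrative toy model, not a real bot: the class names, keyword matching, and the figure of 100 bots are all invented for demonstration, and no actual social-media API is involved.

```python
# Toy sketch of a retweet-amplification botnet: every bot that "sees" a
# tweet matching the campaign's keywords retweets it, so one message is
# multiplied by the size of the botnet. All names here are hypothetical.

class AmplifierBot:
    def __init__(self, name):
        self.name = name
        self.retweets = []      # messages this bot has amplified

    def sees(self, tweet, keywords):
        # Retweet anything containing one of the campaign keywords.
        if any(k in tweet.lower() for k in keywords):
            self.retweets.append(tweet)
            return True
        return False

def run_campaign(bots, timeline, keywords):
    """Count how many amplified copies of each message the botnet produces."""
    counts = {}
    for tweet in timeline:
        for bot in bots:
            if bot.sees(tweet, keywords):
                counts[tweet] = counts.get(tweet, 0) + 1
    return counts

bots = [AmplifierBot(f"user{i}") for i in range(100)]
timeline = ["Breaking: scandal in the capital!", "Nice weather today"]
counts = run_campaign(bots, timeline, keywords=["scandal"])
print(counts)  # {'Breaking: scandal in the capital!': 100}
```

The point of the sketch is the asymmetry: one planted message and a hundred cheap scripts are enough to make a topic look a hundred times more popular than it is – which is exactly how a trend threshold can be gamed.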

In political communication, reality is constructed in this way – extreme opinions become visible and gain recognition. With fake news, public opinion is steered in a certain direction: floods of refugees, the danger of Islam, conspiracies of the elite against the people. Fake news is constructed to serve prejudice and resentment. Conspiracy theories and distrust of the media and of mainstream politics thus create a reality in which it makes sense to vote for populists and right-wing demagogues. The mainstream press does not report on this – for many, a further indication that the conspiracy must be real and that the established media are lying.

Most bots are not technically sophisticated. There are now software packages that only need to be configured. Nevertheless, it is not easy to tell whether an account is a bot or a real human. The software is constantly being improved and behaves more and more like a human: bots now follow a day-and-night rhythm and respond in more complex ways. On the website "Bot or Not?" of Indiana University, the name of a Twitter account can be entered and a percentage is displayed, indicating how likely it is that the account is a bot. "Bot or Not?" also makes the criteria it judges by transparent: when tweets are published, the sentence construction of the tweets, the network of followers and followed accounts, which languages are used, and much more.
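The kind of criteria "Bot or Not?" relies on can be illustrated with a simple scoring heuristic. This is not the tool's actual model – its real classifier uses many more features and machine learning – but a hypothetical sketch in its spirit: the features, thresholds, and weights below are invented for demonstration.

```python
# Toy bot-detection heuristic inspired by the criteria "Bot or Not?"
# makes transparent (posting rhythm, follower network). All thresholds
# and weights are illustrative assumptions, not the real model.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    night_fraction: float   # share of tweets posted between 2 and 5 a.m.
    followers: int
    following: int

def bot_score(acc: Account) -> float:
    """Return a 0..1 score; higher means more bot-like (toy heuristic)."""
    score = 0.0
    if acc.tweets_per_day > 50:          # humans rarely sustain this volume
        score += 0.4
    if acc.night_fraction > 0.3:         # round-the-clock posting pattern
        score += 0.3
    if acc.following > 0 and acc.followers / acc.following < 0.1:
        score += 0.3                     # follows many, followed back by few
    return min(score, 1.0)

human = Account(tweets_per_day=5, night_fraction=0.05,
                followers=300, following=280)
bot = Account(tweets_per_day=200, night_fraction=0.4,
              followers=12, following=900)
print(bot_score(human), bot_score(bot))  # 0.0 1.0
```

Real detectors are an arms race rather than a fixed rulebook: as soon as heuristics like these become known, bot software is adjusted to simulate sleep cycles and plausible follower ratios – which is why the percentage such tools display is an estimate, not a verdict.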

Freedom of Speech for Bots

Automated communication is not a problem in itself – there are meaningful uses for bots. They can post service messages that ease communication with customers, or automatically collect material on certain topics. Problems arise when those deploying bots conceal that the posts are algorithmically controlled and, above all, who is behind them.

Researchers estimate that up to fifty percent of communication in the election campaigns of recent years came from bots. The scientists of the project "Political Bots" at Oxford University noted that in the US election, every third tweet from Trump supporters was presumably automated – for Clinton, it was every fourth tweet. Numerous bots are also thought to have been involved in the French election.iii

Without knowledge of this background, users lack the transparency and contextual knowledge needed to decide how relevant and reliable a message is. This is also where questions of responsibility and freedom of speech arise. May I disseminate lies and invoke freedom of speech? Does the right to free speech apply only to humans? Behind every account, after all, there is ultimately someone who set it up and feeds it with content. Depending on the complexity of the software, it is more or less autonomous, but the intention behind it is always political – and that means it is guided by certain power interests. What is still missing, however, are tools to establish transparency in social media – perhaps something like a legal requirement to provide contact data. The discussion about this has only just begun.

i In his highly recommendable lecture at the Piet Zwart Institute in Rotterdam, Florian Cramer provided an overview of the various memes and campaigns of the Alt-Right:

ii Marc Fisher, John Woodrow Cox and Peter Hermann: "Pizzagate: From rumor, to hashtag, to gunfire in D.C.", Washington Post, 6 December 2016,

iii "Junk News and Bots during the French Presidential Election: What Are French Voters Sharing Over Twitter?"; "Junk News and Bots during the French Presidential Election: What Are French Voters Sharing Over Twitter In Round Two?"