Who May Be an Author? Who Has Authority? Who Is Fake News? Asks Krystian Woznicki
The article was first published on 3 May 2017 in the Berliner Gazette (http://berlinergazette.de/bot-und-die-welt-autorschaft-fake-news) and appears under a Creative Commons license (CC BY-NC).
Anyone who says “fake news” today is at least a little outraged by reality, where much seems to have slipped into disorder, especially in terms of authorship and authority. This aspect is not sufficiently taken into account in the current discussion. The focus is usually on discrediting untrue messages. The underlying presupposition, though, is that only true messages are permissible, indeed that only true messages may even exist.
[Photo: Kayla Velasquez, CC0, https://unsplash.com/@km_mixedloev]
Yet is it not also part of democracy to argue about what is true and what is untrue? Today fake news is the battle cry of those who do not want to enter into a debate; people who have found their truth, even if all that is ultimately clear is that they cannot and will not accept the truths of others, whether they are Trump fans or Trump opponents.
In this sense I agree with the technology researcher danah boyd when she says in the fake news debate [i]: “If we want technical solutions to complex socio-technical issues, we can’t simply throw it over the wall and tell companies to fix the broken parts of society that they made visible and helped magnify.”
The Democratization of False Reports
The central reason the idea of fake news could become such a major issue is probably the multitude of voices that make statements about the world today – sometimes more, sometimes less energetically claiming truthfulness. False reports, disinformation and propaganda in general have a history. Today, however, it is not only the major institutions and authorities who can circulate all of that as a matter of course, but also any John Doe, some algorithm, a damned bot, or a whistleblower.
So we need to question not only what is new about the phenomenon of fake news, but also the changed conditions under which purported truths circulate today. Let’s begin the search in everyday life: I recently heard the statement, “You are fake news.” This is a variation of “me boss, you nothing!”; yet the statement goes further and at the same time precisely sums up our situation.
“You are fake news” does not simply say you are a bad joke or bad news, but rather that you are a non-authorized message. Someone who says that not only doesn’t want to accept confrontation (or being confronted), but actually denies the originator the right to confront them at all. The counterpart is denied the right to exist.
It is a question of authorship that is raised here. More specifically, a question of how authorship can be achieved, confirmed and asserted. Who may claim to be an author? Talk of “fake news” is intended to clarify the situation here by launching an idea of exclusion. Yet exclusion is not so clearly defined. The criteria are vague.
Who Should You Listen To?
We have become accustomed to listening exclusively to elevated subjects, in other words potential Nobel Prize winners or demagogues, whether aspiring or in office – this is a situation we should challenge. I don’t even want to say we should bring our ideas of elevation and subjectivity to the level of digital society (certainly that too), but simply that we should rethink our criteria: Who gets my attention? Who doesn’t?
Should the Pegida follower be heard, or the asylum-seeker as well? Should we also listen to bots and algorithms as well as to whistleblowers and leakers? (Active listening is meant here, of course.) Admittedly, these are all very different speaker positions, and you could say they can’t simply be lumped together. But the common denominator is: first, these are emerging “senders” who tend to be poorly represented in society. Second, they have no authority, and their status as authors is consequently precarious. In short, they are potential “compilers” of fake news. But if we do not start taking these emerging, poorly represented senders seriously, we are in danger of becoming beings out of touch with reality.
Algorithms and Fake News
We have meanwhile become accustomed to accepting recommendations for purchases, consumption and life decisions from computer programs. They have crept into our smartphone-supported everyday life without our noticing. Other developments, on the other hand, stand at the center of attention: prognoses about election results or stock market movements, for instance, which not only predict scenarios and future outcomes on the basis of as much data as possible, but also crucially influence them.
Regardless of whether they remain more or less invisible or show up as little software stars, hardly anyone raises the all-important question: Who is the author of algorithmic predictions or recommendations? Is it the programmer who develops the software? Is it the software itself, which develops a life of its own and starts acting as artificial intelligence? Or is it those who first “read” the algorithmic hint as a sign at all and only then turn it into reality – in other words, we who are, as needed, users, viewers or consumers?
What is really our role when algorithms start mapping out our lives? Who is the author? God and his representatives have meanwhile been sidelined. Artificial intelligence is advancing, as Norbert Wiener wrote shortly after World War Two. Not many wanted to hear it at the time. Today Wiener’s theses are being listened to more closely, for instance in discussions about whether AI will be accompanied by something like the “technological singularity.” According to Wikipedia, this is the moment when “the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.”
The Demand for Transparency
In the political discussion, the following question is rarely raised: Who has something to say? This is, once again and in different words, the question of authorship and authority. The question is also present when NGOs like Algorithm Watch demand transparency about how decisions are made by algorithms. Only when we know how the respective algorithm works – for instance, Google’s search algorithm – can we start to discuss the question of responsibility and authorship in any meaningful way.
However, the transparency debate also brings several problems with it. I still recall the first major projects from WikiLeaks, such as the release of around a quarter-million diplomatic reports from the US, and the first major intellectual engagement with the phenomenon: an activist platform challenges a superpower. That was 2010 and 2011. The open question then was, not least of all, whether a transparency initiative like WikiLeaks should be allowed to have an agenda, or what it means if it does have an agenda.
Having an agenda also meant that the neutrality of the platform was up for discussion: Is WikiLeaks promoting transparency only in a certain (for instance, geopolitical) direction? If so, whose interests are served in this way? Who would want to finance a battle like this? Even at that time, Russia was being discussed as a possible patron of the platform. Hardly anyone asked: Have interests not always been involved when it was a matter of “piercing through” information and providing transparency?
Interests can be as diverse as “I want to put myself in a more advantageous position” or “I want to promote justice.” The latter is considered an honorable motivation for leaking; the former is not. The latter, the imperative of justice, has been strongly in the foreground in all debates of recent years; the former, the strategic use, has hardly been discussed. Now the discourses are commingling. For several months there has been talk of the strategic leak – for instance, the leaked emails of the Democrats during the US election campaign.
Authorship and authority are at issue in this context as well. Of course the allegation of a strategy, a bias, a certain interest is always an attempt to discredit a leak and a whistleblower: “But that only helps the Russians!”, and therefore it can only be false, in other words: fake news. But we must learn to talk about the political use (i.e., whom does the leak benefit?) as well as about the political consequences (i.e., what does the leak reveal and what consequences does that have?).
At the same time, we must learn to understand that the one does not necessarily preclude the other. Just because a certain leak “helps the Russians” does not necessarily make it fake news. It should be taken seriously either way – in terms of authorship as well as authority. Not only journalists but also users of social media should pay close attention to the Five Ws and an H: who, what, when, where, why, and how.
Krystian Woznicki is organizing the exhibition “Signals. An Exhibition of the Snowden Files in Art, Media and Archives” together with the Berliner Gazette (12 to 26 September 2017, Diamondpaper Studio, Berlin) and the conference “Failed Citizens or Failed States? Friendly Fire” (2 to 4 November 2017, ZK/U – Zentrum für Kunst und Urbanistik, Berlin). He most recently published “After the Planes. A Dialogue on Movement, Perception and Politics” with Brian Massumi, and “A Field Guide to the Snowden Files. Media, Art, Archives. 2013–2017” with Magdalena Taube (both Diamondpaper, 2017).
[i] https://www.wired.com/2017/03/google-and-facebook-cant-just-make-fake-news-disappear/